RE: SteemWorld ~ Weekly Support ~ #16

I've built a web service in PHP that answers the requests from the clients. The parsed data is stored in SQLite databases on an SSD. I use multiple databases so that I can re-parse only parts of the blockchain when needed (by copying the affected files to a backup path on my server, so that the clients can still retrieve the old data in the meantime; with SQLite it's also possible to join data from multiple DBs in one SQL statement).
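
For anyone curious what joining across database files looks like, here is a minimal PHP sketch using SQLite's ATTACH. The file paths, table names and columns are made up for illustration and are not the actual SteemWorld schema:

```php
<?php
// Open the "main" database and attach a second database file to the same connection.
// Paths, table names and columns below are placeholders, not SteemWorld's real schema.
$db = new SQLite3('/data/accounts.db');
$db->exec("ATTACH DATABASE '/data/transfers.db' AS tx");

// A single SQL statement can now join tables that live in different files.
$result = $db->query("
    SELECT a.name, t.amount, t.timestamp
    FROM   accounts AS a
    JOIN   tx.transfers AS t ON t.account_id = a.id
    ORDER  BY t.timestamp DESC
    LIMIT  10
");

while ($row = $result->fetchArray(SQLITE3_ASSOC)) {
    print_r($row);
}

$db->exec('DETACH DATABASE tx');
$db->close();
```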

I currently don't run my own node on that server; the new blocks are downloaded from other public instances with cURL (I will start my own node in the coming months) and parsed every minute (with the help of cron). The blocks are stored in SQLite databases (100,000 blocks per file), which, when full, are compressed with XZ, making them much smaller. To save even more SSD space I use my own algorithm for JSON compression.
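
As a rough illustration of that polling setup (not the actual SteemWorld code), the sketch below fetches a single block from a public API node with cURL and compresses a finished block file with XZ. The node URL, paths and function names are assumptions:

```php
<?php
// Hypothetical sketch: fetch one block from a public Steem API node via JSON-RPC.
// condenser_api.get_block is a standard endpoint; the node URL is just an example.
function fetch_block(int $blockNum): ?array {
    $payload = json_encode([
        'jsonrpc' => '2.0',
        'method'  => 'condenser_api.get_block',
        'params'  => [$blockNum],
        'id'      => 1,
    ]);

    $ch = curl_init('https://api.steemit.com');
    curl_setopt_array($ch, [
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => $payload,
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
        CURLOPT_TIMEOUT        => 10,
    ]);
    $response = curl_exec($ch);
    curl_close($ch);

    if ($response === false) {
        return null;
    }
    $decoded = json_decode($response, true);
    return $decoded['result'] ?? null;
}

// Once a 100,000-block file is full, compress it on disk; -9 trades CPU time for size.
function compress_full_block_file(string $path): void {
    exec('xz -9 ' . escapeshellarg($path));
}
```

A cron entry such as `* * * * * php /path/to/parse_blocks.php` (path is a placeholder) would then run the parser once per minute.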

If you want to access the account history in real time, you would need to run your own node with the account history plugin enabled and parse the virtual ops of the incoming blocks yourself (or use the built-in RocksDB solution). I currently would not recommend this; it may be better to use Hivemind for that, but I guess you will nevertheless be a similar distance (a few minutes) behind the current head block in that case.
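
If running your own node is not an option, a sketch like the following shows how recent account history (including virtual ops) can be pulled from a public node via the standard condenser_api.get_account_history call; the node URL and account name are only examples:

```php
<?php
// Rough sketch: request the most recent account-history entries from a public node.
// With start = -1 the node returns the latest operations; account and URL are examples.
function get_account_history(string $account, int $start = -1, int $limit = 100): array {
    $payload = json_encode([
        'jsonrpc' => '2.0',
        'method'  => 'condenser_api.get_account_history',
        'params'  => [$account, $start, $limit],
        'id'      => 1,
    ]);

    $ch = curl_init('https://api.steemit.com');
    curl_setopt_array($ch, [
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => $payload,
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
    ]);
    $response = curl_exec($ch);
    curl_close($ch);

    if ($response === false) {
        return [];
    }
    $decoded = json_decode($response, true);
    // Each entry is [index, {trx_id, block, op: [op_name, op_payload], ...}].
    return $decoded['result'] ?? [];
}
```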

@steemchiller that is an incredible engineering feat you just described. I think a lot of potential Steem developers would be interested in learning more about how you do it.

Can you still query a SQLite DB that has been compressed with XZ?

No, that's not possible. I've heard that there are compression plugins available for SQLite, but I've never tried them.

My parser automatically decompresses the files when needed without deleting the original archive files. After walking through 100,000 blocks, it deletes the current decompressed file and continues with decompressing the next archive file, and so on. Decompressing one file takes about 3-5 seconds, so a full re-parse doesn't take much additional time, and I don't need much extra disk space on my server.
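
A simplified version of that loop might look like this in PHP; `xz -dk` keeps the original archive while producing the decompressed file, and the directory, table layout and parse step are placeholders:

```php
<?php
// Sketch of the re-parse loop: decompress one archive at a time, parse its
// 100,000 blocks, then remove the decompressed copy before moving on.
$archiveDir = '/data/blocks';

foreach (glob($archiveDir . '/*.db.xz') as $archive) {
    // -d = decompress, -k = keep the compressed original on disk
    exec('xz -dk ' . escapeshellarg($archive));

    $dbFile = substr($archive, 0, -3); // strip the ".xz" suffix
    $db = new SQLite3($dbFile, SQLITE3_OPEN_READONLY);

    $result = $db->query('SELECT block_num, block_json FROM blocks ORDER BY block_num');
    while ($row = $result->fetchArray(SQLITE3_ASSOC)) {
        // parse_block() stands in for the actual re-parsing logic
        // parse_block($row['block_num'], $row['block_json']);
    }
    $db->close();

    // Remove the decompressed file so only one ~100,000-block DB exists at a time.
    unlink($dbFile);
}
```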

Understood, you're decompressing only for cron jobs or a full re-parse, but not for regular user requests. Thank you for the answer.