
RE: A Hacky Guide to Hive (part 1.5)


It seems to me that you have given a very good explanation of how the blockchain works. I didn't know that events were segmented every 3 seconds. Is there an explanation as to why that time was chosen and not another?

> While the blockchain contains all transfers to and from an account, the final balance needs to be calculated;
> the data needs to be compiled first, for quick access.

I find this very interesting, and I would like to read an explanation of how this process is calculated or done.

Thanks for the mention and I see that the images in your post look good.

Interesting work. I will be waiting to read your hacky guide, and I know that new witnesses like @daddydog will be interested in taking a look.

PS: Get well soon from your hand injury.


> It seems to me that you have given a very good explanation of how the blockchain works. I didn't know that events were segmented every 3 seconds. Is there an explanation as to why that time was chosen and not another?

Hive has a block time of 3 seconds.
Within those 3 seconds, all sorts of things must happen: validation of transactions, serialization, reaching consensus, and the final signing of the block. That takes time and communication around the globe.
As I understand it, 3 seconds is a limit imposed by the speed at which data can travel.
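You can check that interval yourself. Here is a minimal sketch (Python with requests against a public API node; the node URL and block number are arbitrary choices of mine) that fetches two consecutive blocks and compares their timestamps:

```python
# Sketch: verify Hive's 3-second block time against a public API node.
import json
from datetime import datetime

import requests

NODE = "https://api.hive.blog"  # any public Hive API node should work

def get_block(num):
    """Fetch one block via the condenser_api.get_block JSON-RPC call."""
    payload = {
        "jsonrpc": "2.0",
        "method": "condenser_api.get_block",
        "params": [num],
        "id": 1,
    }
    return requests.post(NODE, data=json.dumps(payload)).json()["result"]

a = get_block(80_000_000)  # an arbitrary example block
b = get_block(80_000_001)  # its direct successor

fmt = "%Y-%m-%dT%H:%M:%S"
delta = datetime.strptime(b["timestamp"], fmt) - datetime.strptime(a["timestamp"], fmt)
print(delta.total_seconds())  # 3.0 - one block every 3 seconds
```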

I find this very interesting, and I would like to read an explanation of how this process is calculated or done.

Uhm...
For your account's balance, you just have to go through all blocks and sum up all transfers to and from your account (and all posting rewards and orders on the market and other stuff). In the end, the calculation is a (simple) addition (and subtraction); the result is a sum.
With every further transfer, the sum changes.
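As a toy illustration of that naive approach, here is a sketch that scans a small block range and adds or subtracts every matching transfer. The account name and block range are placeholders, and it only looks at plain HIVE transfers, ignoring rewards, market orders and the rest:

```python
# Sketch: the naive way - walk blocks and sum transfers for one account.
import json

import requests

NODE = "https://api.hive.blog"
ACCOUNT = "alice"  # placeholder account name

def get_block(num):
    payload = {"jsonrpc": "2.0", "method": "condenser_api.get_block",
               "params": [num], "id": 1}
    return requests.post(NODE, data=json.dumps(payload)).json()["result"]

balance = 0.0  # HIVE only; a real balance also involves HBD, VESTS, ...
for num in range(80_000_000, 80_000_100):  # tiny range, for demonstration
    for tx in get_block(num)["transactions"]:
        for op_name, op in tx["operations"]:
            if op_name != "transfer" or not op["amount"].endswith("HIVE"):
                continue
            amount = float(op["amount"].split()[0])
            if op["to"] == ACCOUNT:
                balance += amount  # incoming transfer: add
            if op["from"] == ACCOUNT:
                balance -= amount  # outgoing transfer: subtract

print(f"{balance:.3f} HIVE (change over the scanned range only)")
```

A full replay would have to start at the account's creation, which is exactly the part that gets slower the older the account is.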

> For your account's balance, you just have to go through all blocks and sum up all transfers to and from your account (and all posting rewards and orders on the market and other stuff). In the end, the calculation is a (simple) addition (and subtraction); the result is a sum.
> With every further transfer, the sum changes.

So does this imply that the more user or blockchain data there is, the longer the response time will be? Or does that not affect the process? 😒

> So does this imply that the more user or blockchain data there is, the longer the response time will be? Or does that not affect the process? 😒

No, because there is an API endpoint (and an underlying database table) that provides quick access to that information.
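A minimal sketch of that fast path; condenser_api.get_accounts is one such endpoint, and the account name is a placeholder. One request returns the precompiled balances, no matter how old the account is:

```python
# Sketch: ask a node for the prepared balance instead of replaying blocks.
import json

import requests

NODE = "https://api.hive.blog"

payload = {"jsonrpc": "2.0", "method": "condenser_api.get_accounts",
           "params": [["alice"]], "id": 1}  # "alice" is a placeholder
account = requests.post(NODE, data=json.dumps(payload)).json()["result"][0]
print(account["balance"], "|", account["hbd_balance"])  # e.g. "1.234 HIVE | 0.567 HBD"
```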

If you wanted to find all the necessary information by going through the blocks, it wouldn't really matter how many transfers an account made - the time-consuming part would be going through all the millions of blocks since the account was created. So, mostly, it would take longer the older an account is...

But... That's the whole point of this post.
Please read again 😅

> the time-consuming part would be going through all the millions of blocks since the account was created.

If there are millions of transactions, does it take longer to find the result?

Suppose there is an account that was created a month ago and another account that was created 4 years ago.

Does the one created 4 years ago spend more time searching through all the millions of blocks, while the one created 1 month ago spends less time and is faster? o.O

> Does the one created 4 years ago spend more time searching through all the millions of blocks, while the one created 1 month ago spends less time and is faster? o.O

If this were the method, then yes, it would take longer for the older account.

But that's the job of the Hive daemon: when you ask a node for your account's balance, that information has already been prepared and compiled, and it lies in a database table, ready for quick access.
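The same holds for an account's history. A sketch (again with a placeholder account name): condenser_api.get_account_history pulls the latest operations from that prepared index in one call, instead of scanning millions of blocks:

```python
# Sketch: read the precompiled per-account operation index.
import json

import requests

NODE = "https://api.hive.blog"

payload = {"jsonrpc": "2.0", "method": "condenser_api.get_account_history",
           "params": ["alice", -1, 10], "id": 1}  # latest ~10 operations
for index, entry in requests.post(NODE, data=json.dumps(payload)).json()["result"]:
    op_name, op = entry["op"]
    print(index, op_name)  # e.g. transfer, vote, claim_reward_balance
```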

I am begging you: read the post please!

> The Hive daemon watches all events and creates databases in the background.

Is this database in the RAM of the local computer, i.e. the PC that is making the query?

If so, I suppose that if there is a lot of data, the PC will need a good amount of RAM available to access it (4 GB of RAM? 8 GB? 32 GB?).

> I am begging you: read the post please!

😀 I do; I'm asking to clear up the points I want to know more about.

> The Hive daemon watches all events and creates databases in the background.

The local computer, as the client, needs NO DATA.
You can use peakd or the ecency client or whatever to query Hive's nodes.

The Hive nodes need a lot of storage.
Whether that's all in RAM or on disk or wherever is your choice; that's mainly handled on the OS side of things...
The block_log alone is several hundred GB and growing...

> I am begging you: read the post please!

> 😀 I do; I'm asking to clear up the points I want to know more about.

Then please try not only to read, but to understand.
I included hyperlinks to further sources. That took extra time. Click on them and read! PLEASE!