For your account's balance, you just have to go through all blocks and sum up all transfers to and from your account (and all posting rewards and orders on the market and other stuff). In the end, the calculation is a (simple) addition (and subtraction), and the result is a sum.
With every further transfer, the sum changes.
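The idea above can be sketched in a few lines. This is a minimal illustration only: the transfer records here are hypothetical, and real Hive blocks contain many operation types with a different structure.

```python
# Sketch: computing a balance by scanning transfer operations.
# The record format is made up for illustration; it is NOT the
# actual Hive block/operation format.

def balance_from_transfers(account, transfers):
    """Sum all transfers to and from `account`."""
    balance = 0.0
    for t in transfers:
        if t["to"] == account:
            balance += t["amount"]   # incoming transfer
        if t["from"] == account:
            balance -= t["amount"]   # outgoing transfer
    return balance

history = [
    {"from": "alice", "to": "bob", "amount": 10.0},
    {"from": "bob", "to": "carol", "amount": 3.5},
    {"from": "carol", "to": "bob", "amount": 1.0},
]
print(balance_from_transfers("bob", history))  # 7.5
```

Every further transfer appended to the history changes the resulting sum, which is exactly why scanning from the first block gets slower as the chain grows.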
So does this imply that the more user or blockchain data there is, the longer the response time will be, or does that not affect the process?
No, because there is an API endpoint (and an underlying database table) that provides quick access to that information.
If you wanted to find all the necessary information by going through the blocks, it wouldn't really matter how many transfers an account made - the time-consuming part would be going through all the millions of blocks since the account was created. So, mostly, it would take longer the older an account is...
But... That's the whole point of this post.
Please read again!
If there are millions of transactions, does it take longer to find the result?
Suppose there is an account that was created a month ago and another account that was created 4 years ago.
Does the one created 4 years ago take more time to search through all those millions of blocks than the one created 1 month ago? o.O
It would take longer for the older account, if this were the method, yes.
But that's the job of the Hive Daemon and when you ask a node for your account's balance, that information has already been prepared and compiled and lies in a database table ready for quick access.
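In practice, a client just asks a node for the precomputed account data via JSON-RPC. Below is a sketch of such a request against a public Hive node; the account name is only an example, and the exact response fields depend on the node API version.

```python
import json

# Sketch: the JSON-RPC payload a client sends to a public Hive
# node (e.g. https://api.hive.blog) to fetch account data.
# The account name "some-account" is just a placeholder.

payload = {
    "jsonrpc": "2.0",
    "method": "condenser_api.get_accounts",
    "params": [["some-account"]],
    "id": 1,
}
print(json.dumps(payload))
```

Posting this payload over HTTP returns the account object with the balance already computed by the node, so the client never scans any blocks itself.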
I am begging you: read the post please!
Is this database in the RAM of the local computer, that is, the PC that is making the query?
If so, I suppose that if there is a lot of data, the PC will need a good amount of RAM available to access that data (4 GB of RAM? 8 GB? 32 GB?).
I do, I'm asking to clarify the points I want to know more about.
The local computer as the client needs NO DATA.
You can use the peakd or ecency client or whatever to query Hive's nodes.
The Hive nodes need a lot of storage.
Whether that's all in RAM or on disk or wherever is your choice. That's mainly handled on the OS side of things...
The block_log alone is several hundred GB and growing...
Then please try not only to read, but also to understand.
I included hyperlinks to further sources. That took extra time. Click on them and read! PLEASE!
Well, I understand it, but my idea is that those who come to read the post will read the comments and find the answers you have given me, so that you don't have to answer them again.
Oh, thanks for answering, and those who read will find some answers already explained.