
Maybe people don't have to understand it, as long as the people they follow understand it and can guide them.

Many people on here probably don't have the technical skills or the interest to understand all the under-the-hood stuff.

They don't have to understand the technical side. I don't have any coding knowledge whatsoever.

They really only need to spend some time understanding the digital platform business model or network theory. It isn't that complicated, at least the basics, if you avoid the math (the kind that has no numbers, only letters).

There's so much going on when you do anything on Hive. Your action has to reach 100+ nodes (let's say your action takes 1 KB of data; that's 100 KB just to share it with every node). Then it goes to every application streaming the chain.

Let's say there are only 500 of those running. That's another 500 KB of data being transferred (I know compression exists, but we'll ignore it for now). Your small action ends up producing a lot of data transfer immediately.
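To make the fan-out concrete, here's a rough back-of-the-envelope sketch in Python. The node and app counts are just the illustrative numbers from above, not measured Hive figures:

```python
# Rough sketch of the immediate fan-out for one broadcast action.
# All counts are illustrative assumptions from the discussion above;
# compression is ignored, as in the text.
ACTION_KB = 1         # size of the broadcast action
NODES = 100           # nodes the action has to reach
STREAMING_APPS = 500  # applications streaming the chain

node_traffic = ACTION_KB * NODES          # 100 KB across the nodes
app_traffic = ACTION_KB * STREAMING_APPS  # 500 KB to the streaming apps

print(f"node propagation: {node_traffic} KB")
print(f"app streaming:    {app_traffic} KB")
print(f"immediate total:  {node_traffic + app_traffic} KB")
```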

And immediate usage isn't the end of it. Any time a node syncs up, that data has to be transferred again, and the same goes for anyone who streams the chain in the future. It might be 1 KB now, but a year from now it may have caused GBs of usage.
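Here's a minimal sketch of how that multiplier keeps growing over time. The replay count is a pure guess; the real total depends entirely on how often the chain gets re-synced and replayed:

```python
# Lifetime transfer for one action: immediate copies plus every future
# replay of the chain. Every count here is a hypothetical input, not a
# measured Hive statistic.
def lifetime_transfer_kb(action_kb: float, immediate_copies: int,
                         future_replays: int) -> float:
    """Total KB moved for one action across its whole lifetime."""
    return action_kb * (immediate_copies + future_replays)

# 600 immediate copies (nodes + apps, from the example above), plus a
# guessed number of future full syncs and new streamers replaying history.
total = lifetime_transfer_kb(action_kb=1, immediate_copies=600,
                             future_replays=10_000)
print(f"{total:,.0f} KB (~{total / 1024:.1f} MB) moved for one 1 KB action")
```

The point isn't the exact number; it's that the multiplier never stops growing as long as the chain keeps being replayed.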

This doesn't even account for all the internal processing Leo specifically needs to do for your thread. It would be great to hear from @khaleelkazi exactly what happens when a thread is made.

There is a lot of data being shared. The total chain is still under 500 GB. However, there is talk of letting nodes operate on only parts of the database if they so choose.

500 GB is nothing these days. At my day job, we probably collect more metadata (data about data) than that daily. Huge drives are actually super cheap now; 4 TB enterprise drives can be bought for less than $400.
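As a quick sanity check on that, simple arithmetic with the figures above (the drive price and chain size come from this post, nothing measured):

```python
# Cost to store the whole chain on a 4 TB enterprise drive at ~$400.
CHAIN_GB = 500         # current total chain size, per the post
DRIVE_TB = 4
DRIVE_PRICE_USD = 400

usd_per_gb = DRIVE_PRICE_USD / (DRIVE_TB * 1024)
chain_cost = CHAIN_GB * usd_per_gb
print(f"${usd_per_gb:.3f}/GB -> whole chain costs ~${chain_cost:.2f} to store")
```

That works out to roughly $50 of disk for the entire chain, which is why the storage side really isn't the bottleneck.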