I believe you are the only one thinking in these terms here. But I guess this is all data LeoAi will utilize?
It pulls from the blockchain, so yes, anything on chain tied to the thread is captured.
So it deals with payouts, votes, comments, transfers, etc.
Anything that is done at the base layer is going to be read.
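For anyone curious what "reading the base layer" looks like in practice, here is a minimal sketch that lists the operations recorded in a single Hive block via the public JSON-RPC API. The node URL and block number are just illustrative placeholders; any public API node works.

```python
# Minimal sketch: list the base-layer operations (votes, comments,
# transfers, custom_json, ...) recorded in one Hive block.
import requests

NODE = "https://api.hive.blog"  # any public Hive API node

def ops_in_block(block_num: int):
    payload = {
        "jsonrpc": "2.0",
        "method": "condenser_api.get_block",
        "params": [block_num],
        "id": 1,
    }
    block = requests.post(NODE, json=payload).json()["result"]
    for tx in block["transactions"]:
        # condenser_api returns each operation as a [type, data] pair
        for op_type, op_data in tx["operations"]:
            yield op_type, op_data

# Block number chosen arbitrarily for illustration
for op_type, _ in ops_in_block(80_000_000):
    print(op_type)  # e.g. vote, comment, transfer, custom_json
```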
It will be easy to keep track of people misusing the place after that then, I guess.
There should be the ability to pull data in that manner based upon time. Each imprint on the blockchain is timestamped.
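A hedged sketch of that kind of time-based pull, leaning on the timestamp every on-chain entry carries. This uses condenser_api.get_account_history; the account name and cutoff date are placeholders, not anything from this thread.

```python
# Minimal sketch: pull an account's on-chain actions and keep only
# those after a cutoff, using the timestamp on each history entry.
import requests
from datetime import datetime

NODE = "https://api.hive.blog"

def history_since(account: str, since: datetime, limit: int = 1000):
    payload = {
        "jsonrpc": "2.0",
        "method": "condenser_api.get_account_history",
        "params": [account, -1, limit],  # -1 = start from most recent
        "id": 1,
    }
    entries = requests.post(NODE, json=payload).json()["result"]
    for _index, entry in entries:
        ts = datetime.strptime(entry["timestamp"], "%Y-%m-%dT%H:%M:%S")
        if ts >= since:
            yield ts, entry["op"]

# Placeholder account and date, purely for illustration
for ts, (op_type, _) in history_since("hiveio", datetime(2024, 1, 1)):
    print(ts, op_type)
```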
Let's hope we might stop spending so much from the DHF to find and downvote spam if we can utilize AI.
We won't. We excel at stupidity there.
But I am also thinking about it from an activity perspective.
People need to realize that a thread is a lot more than just a thread.
Maybe people don't have to understand it as long as the people they follow understand it and can guide them.
Many people on here probably don't have the technical skills nor the interest in understanding all the under-the-hood stuff.
They don't have to understand the technical side. I don't have any coding knowledge whatsoever.
They really only need to spend some time understanding the digital platform business model or network theory. It isn't that complicated, at least the basics, if you avoid the math (the kind that has no numbers and only letters).
There's so much going on when you do anything on Hive. Your action has to reach 100+ nodes (let's say your action takes 1 KB of data; that's 100 KB just to share it with every node). Then it goes to every application streaming the chain.
Let's say there are only 500 of those running. So that's another 500 KB of data being transferred (I know that compression exists, but for now we'll ignore it). Your small action ended up producing a lot of data transfer immediately.
But immediate usage is not the only time it's paid. Any time a new node syncs up, that data has to be transferred again, and again for anyone who streams the chain in the future. It might be 1 KB now, but in a year it might have caused GBs of usage.
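To make that fan-out concrete, here is a back-of-envelope sketch using the numbers from this thread; the yearly replay count is a made-up assumption, and compression is ignored as stated above.

```python
# Back-of-envelope for the fan-out described above. Node and app counts
# come from the thread; NEW_SYNCS_PER_YEAR is purely hypothetical.
ACTION_KB = 1              # size of the original action
NODES = 100                # nodes the action must reach
STREAMING_APPS = 500       # applications streaming the chain live
NEW_SYNCS_PER_YEAR = 1000  # hypothetical fresh nodes/streamers replaying the chain

immediate_kb = ACTION_KB * (NODES + STREAMING_APPS)  # 600 KB right away
replay_kb = ACTION_KB * NEW_SYNCS_PER_YEAR           # paid again on every future sync

print(f"immediate transfer: {immediate_kb} KB")
print(f"after a year: {immediate_kb + replay_kb} KB, and growing with every sync")
```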
This doesn't even account for all the internal processing that Leo specifically needs to do for your thread. It would be great to hear from @khaleelkazi about everything that happens when a thread is made.
There is a lot of data shared. We are still under 500 GB for the total chain. However, there is talk of offering nodes the ability to run only parts of the database if they so choose.
500 gigs is nothing these days. At my day job, we probably collect more than that in metadata (data about data) daily. It's actually super cheap to get drives with insane capacity; 4 TB enterprise drives can be bought for less than $400.
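Quick arithmetic on that claim, using the rough figures quoted above rather than current market prices:

```python
# $400 for 4 TB works out to about $0.10/GB, so today's ~500 GB chain
# fits on roughly $50 of disk. Figures are the thread's rough numbers.
drive_tb, drive_cost = 4, 400
cost_per_gb = drive_cost / (drive_tb * 1000)
chain_gb = 500
print(f"${cost_per_gb:.2f}/GB -> chain costs ~${cost_per_gb * chain_gb:.0f} to store")
```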