
Let's say there are only 500 of those running around. That's another 500 kB of data being transferred (I know compression exists, but we'll ignore that for now). Your small action immediately produced a fair amount of data transfer.
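
To make that concrete, here's a rough back-of-the-envelope sketch. The 1 kB payload and the 500-node count are just illustrative assumptions, not measured network figures:

```python
# Rough sketch of the immediate replication cost of one small write.
# Both numbers are illustrative assumptions, not real network stats.
TX_SIZE_KB = 1       # assumed size of the broadcast transaction
NODE_COUNT = 500     # assumed number of nodes that each receive a copy

immediate_kb = TX_SIZE_KB * NODE_COUNT
print(f"Immediate network transfer: ~{immediate_kb} kB")  # ~500 kB
```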

But the immediate broadcast is not the only cost. Any node that syncs up later has to download it again, and so does anyone who streams the chain in the future. It might be 1 kB now, but in a year that might have caused GBs of usage.
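
Here's the same idea over time, again as a sketch; the sync counts below are made up purely to show how a tiny write keeps getting re-downloaded, not a real estimate:

```python
# Hypothetical growth of one 1 kB write as the chain keeps being re-synced.
# INITIAL_NODES and SYNCS_PER_DAY are invented assumptions for illustration.
TX_SIZE_KB = 1
INITIAL_NODES = 500       # assumed initial broadcast, from the example above
SYNCS_PER_DAY = 50        # assumed new syncs / chain replays per day
DAYS = 365

cumulative_kb = TX_SIZE_KB * (INITIAL_NODES + SYNCS_PER_DAY * DAYS)
print(f"After one year: ~{cumulative_kb / 1024:.1f} MB for a single 1 kB write")
```

Whatever numbers you plug in, the point is that the cost never stops recurring, and multiplied across every thread being posted it adds up quickly.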

This doesn't even account for all the internal processing that Leo specifically needs to do for your thread. It would be great to hear from @khaleelkazi exactly what happens when a thread is made.

There is a lot of data shared. The total chain is still under 500 GB. However, there is talk of letting nodes run only the parts of the database they choose.

500 gigs is nothing these days. At my day job, we probably collect more than that in metadata (data about data) daily. Drives with huge capacities are actually super cheap; 4 TB enterprise drives can be bought for less than $400.