In my opinion, Hive could now be operated entirely off home-based nodes (except for the issue of decentralized hosting of images and video, but work is being done here).
Traffic can easily be distributed across multiple API nodes using "high availability" software such as haproxy, and this software can also detect when individual API nodes are failing and can't respond to traffic. This is free software that we use now to distribute the traffic of api.hive.blog across multiple internal nodes (we don't do this for performance reasons, but for redundancy, and to make it easier for us to do upgrades without causing an outage during the upgrade window).
Not every home-based API node would have the network bandwidth to run an haproxy node in this setup, but I'm certain that many internet offerings here in the US, and in most of the western world, can support a node with the necessary bandwidth. Anyone running such an haproxy node in this scenario would also need to make sure they have an "unlimited data plan" to avoid excessive fees from their ISP.
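A minimal sketch of the kind of setup described above (hostnames, IPs, and ports here are hypothetical; the health-check and failover behavior is standard haproxy functionality):

```
# haproxy.cfg sketch: spread API traffic across internal nodes for redundancy
frontend api_front
    bind *:8090
    default_backend api_nodes

backend api_nodes
    balance roundrobin
    # Periodically probe each node; one that fails its checks is marked
    # down and stops receiving traffic until it recovers.
    option httpchk GET /
    server node1 10.0.0.11:8080 check
    server node2 10.0.0.12:8080 check
    server node3 10.0.0.13:8080 check
```

With this in place, a node can be taken out of the backend, upgraded, and re-added without clients ever seeing an outage.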
Is there potential for Hive to be sharded, like Ethereum is moving toward? I wasn't sure if Hive already is or not. Thanks!
Hive can be sharded if necessary; it's a generalized database technique. It isn't sharded now, mainly because it's not needed at this point.
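For readers unfamiliar with the term: sharding just means partitioning data across nodes by some key. A minimal sketch of the general technique (all names here are illustrative, not anything from Hive's codebase):

```python
import hashlib

NUM_SHARDS = 4

def shard_for(account: str) -> int:
    """Deterministically map an account name to a shard index."""
    digest = hashlib.sha256(account.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

# Each shard holds only its own slice of the data; reads and writes
# for an account always go to the same shard.
shards = [dict() for _ in range(NUM_SHARDS)]

def put(account: str, key: str, value):
    shards[shard_for(account)][(account, key)] = value

def get(account: str, key: str):
    return shards[shard_for(account)].get((account, key))
```

The point is simply that no single machine has to hold or serve the whole dataset, which is why the technique can be bolted on later if load ever demands it.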
I have 1Gbit Ethernet, yet I have a 1TB data cap. From my observations, a witness node uses about 250-300GB of traffic/month; an API node uses far more than that.
I also live in an area where we get snow and occasional bad weather that can knock out the power for 1 minute to as long as 4 days.
More and more ISPs are implementing data caps. Comcast just announced they will be enforcing data caps for everyone; previously it was only in certain states.
While 99% of the time, my connection is fast and reliable, I avoid hosting anything off it due to these issues.
One of the issues I have been noticing lately is that not all API nodes are running the latest version or functioning correctly, but to HAProxy they return a healthy status and continue to serve traffic.
Comcast, my home provider, just recently instituted data caps this month, but you can "upgrade" your plan back to unlimited for $30/month.
I think it's $100 for me; I'd have to look. I don't think we have caps just yet, not until next month I think. I pay for 1Gbit internet and have the same cap as someone with 50Mbps.
No, that's not necessarily true, and it's not true of the way we use it. Haproxy allows you to define your own health check.
Do you have it set up to find issues like Anyx's node running an old version? That caused a lot of grief for a few days.
It's not set up that way now, because we use it to check the health of our own hivemind nodes, and we keep them at least functionally similar at all times (although we sometimes run different code on them simultaneously to see how they behave differently, especially when testing optimizations).
There are more advanced things that could be done, though, such as having an haproxy that is distributing its load cross-check the responses from different nodes to API calls that should return the same answer from every node. But I think that's overkill for now.
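The cross-checking idea could be sketched roughly like this (a majority-vote rule of my own illustration, not something haproxy does out of the box; node names and responses are made up):

```python
from collections import Counter

def cross_check(responses: dict[str, str]) -> tuple[str, list[str]]:
    """Given {node_name: response_body} for one API call that should be
    deterministic, return the majority answer and the disagreeing nodes."""
    majority, _count = Counter(responses.values()).most_common(1)[0]
    outliers = [node for node, body in responses.items() if body != majority]
    return majority, outliers

# Example: three nodes answer a call that should be identical everywhere.
answer, bad = cross_check({
    "node1": '{"head_block": 48210000}',
    "node2": '{"head_block": 48210000}',
    "node3": '{"head_block": 48209100}',  # stale or misbehaving node
})
# bad == ["node3"], so the balancer could stop sending it traffic.
```

This would catch the "old version but reports healthy" case above, since a stale node's answers would diverge from the majority even though its health endpoint still responds.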
But is this needed? IMO, because we pay witnesses, servers don't need to be super cheap. Sure, the chain can't die from that point.
I'm not that technical, but how many resources would some premade smart contracts consume on-chain?
Some examples:
Loan Hive to HBD
W-ETH
W-BTC
W-... for on-chain exchange (would be an awesome DEX, IMO)
I think some basic smart contracts on Hive would add more value in use cases and token value. Also for RCs.
Servers need to be cheap if you want resistance against infrastructure attacks by large infrastructure providers.
I agree with you.
If I remember right, Steemit pays around $250k a month for multiple nodes. It seems really crazy to me what is possible.
I think now is the time to add more applications to Hive, because over time computing power and internet connections keep getting cheaper too. So time should work in Hive's favor.