What I do miss are clear and precise instructions / documentation on how to set up and run a community node: hardware resources needed and recommended, software needed, the set-up procedure, and the day-to-day operations involved. Then the why's of running a node - the benefits, not only financial ones.

Perhaps all this is already written and published somewhere?

All this to distribute the load from your cluster to the community.

Next, looking into the future, one word comes to mind - sharding.

If it really is a requirement to have 128 GB of RAM to run a node, then that's way too much. Somehow the load should be lowered into a reasonable range. I am not telling you anything new, I guess.

OK, I realize that my suggestions are not very constructive. I can't propose an out-of-the-box solution. I can only voice my concern. I believe that you are working on a long-term solution to keep the platform functional and scalable.

Good luck!

I'm starting to understand the architecture more (I think), and am feeling more confident about the long-term future than I was, but here are some of my thoughts about what you could do:

  1. Add some differentiation to the RPC server:
    a) Allow it to be run so that it serves only the last n days' worth of vote, comment and transfer data. This is all I would need to run my services on it, and I expect many others could also manage with less historical data. A smaller index should reduce the RAM requirements significantly.
    b) Allow it to run in a 'ledger only' mode, where the index could be smaller (as it would exclude all other operations). This is almost all that exchanges would need, I think. (A rough sketch of what such options might look like follows this list.)
  2. Communicate better about how the infrastructure works and the hardware requirements for running each part. I'd love to see some diagrams of the systems architecture.
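
To make 1a and 1b concrete, here is a rough sketch of what such options might look like in a node's config file. All of these option names (history-window-days, history-whitelist-ops, ledger-only-mode) are hypothetical - they are not existing steemd settings - and the point is only that history depth and the set of indexed operation types could be expressed as simple configuration:

```
# Hypothetical config.ini sketch - none of these options exist in steemd today.

# 1a) Keep only a rolling window of recent history in the index.
# Votes, comments and transfers older than this window would be dropped
# (or left to dedicated archive nodes), shrinking the in-memory index.
history-window-days = 30

# Index only the operation types this particular service actually needs.
history-whitelist-ops = vote_operation comment_operation transfer_operation

# 1b) 'Ledger only' mode for exchanges: index transfers and balances only,
# excluding votes, comments and other social operations entirely.
ledger-only-mode = true
```

If a full node with a complete index really needs something like 128 GB of RAM, then even a coarse split along these lines could presumably bring a pruned or ledger-only node back into reach of commodity hardware.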

I realise this will need some work on steemd, but over time, these increasingly demanding running requirements are centralising the network unnecessarily, making it more vulnerable, stifling community innovation and leading to doubts about scalability.

Thanks for your time.