Details about the setup of my public node

in HiveDevs · 5 years ago (edited)

This post will be a go-to reference for myself, and a resource for anyone interested in setting up an api node. It's very technical, so if you don't belong to one of those two groups you can skip it.


Introduction

My current setup is close to the default hivemind setup provided in the example jussi config. After talks with @gtg, one of the nodes offers a few extra apis for performance reasons, as the fat node is very slow. There are surely suboptimal settings or even misconfigurations; optimization will be an ongoing process. See the end of the post for more info, and make sure to read and understand everything before you start. I will update this post with any important changes.


Hardware

First, we need hardware. The setup consists of 3 nodes, for which I selected the following specs:

  • hivemind
    32GB RAM
    2x240GB SSD RAID0
  • fat
    64GB RAM
    2x480GB SSD RAID0
  • accounthistory
    64GB RAM
    2x512GB NVMe RAID0 (64GB SWAP, see the sketch below)

All are set up with a clean install of Ubuntu 18.04.
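The 64GB of swap on the accounthistory node can come from a swap file; a minimal sketch, assuming you use a file rather than a dedicated partition:

# create, secure and activate a 64GB swap file, and keep it across reboots
sudo fallocate -l 64G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab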


Setup

Common steps

Set up each server so you can log in securely, and with a dedicated user. The user on all machines will be called "hive" throughout this guide. Individual needs may differ, so I won't go into details here and only provide the steps necessary to proceed.
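For the user part, a minimal sketch (the SSH hardening itself is up to your own policy):

# create the dedicated user and allow it to use sudo
sudo adduser hive
sudo usermod -aG sudo hive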

sudo apt-get update && sudo apt-get upgrade
sudo apt-get install -y screen git nginx certbot python-certbot-nginx

Step 1: make everything sync

hivemind node

install software

cd ~
sudo apt-get install -y python3 python3-pip postgresql postgresql-contrib docker.io
git clone https://gitlab.syncad.com/hive/hivemind.git
cd hivemind
sudo pip3 install -e .[test]

setup database

sudo su postgres
createdb hive

Create db user hive and grant access to database
createuser --interactive
psql
GRANT ALL PRIVILEGES ON DATABASE hive TO hive;
\q
exit
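If you prefer a non-interactive setup, this sketch should be equivalent; the password has to match the one you put into DATABASE_URL later ("pass" in this guide):

sudo -u postgres psql -c "CREATE ROLE hive LOGIN PASSWORD 'pass';"
sudo -u postgres psql -c "GRANT ALL PRIVILEGES ON DATABASE hive TO hive;"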

optimize postgres
sudo nano /etc/postgresql/10/main/postgresql.conf
Use https://pgtune.leopard.in.ua/ to find the optimal settings for your machine. I used the following:

# DB Version: 10
# OS Type: linux
# DB Type: web
# Total Memory (RAM): 32 GB
# Data Storage: ssd

max_connections = 200
shared_buffers = 8GB
effective_cache_size = 24GB
maintenance_work_mem = 2GB
checkpoint_completion_target = 0.7
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 1.1
effective_io_concurrency = 200
work_mem = 20971kB
min_wal_size = 1GB
max_wal_size = 4GB

sudo service postgresql restart

The irredeemables list is a blacklist containing mostly mass spammers. It's recommended to use it if you serve browser-based interfaces, because the amount of comments by these accounts creates a lot of traffic and is a burden on browsers. It's defined in /home/hive/hivemind/hive/conf.py under --muted-accounts-url. You can change it there, or add the environment variable MUTED_ACCOUNTS_URL in both scripts if you do not want to use the default. I offer an empty version if you don't want to filter the results.
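For example, to point hivemind at your own (or an empty) list, add an export to both scripts before the hive command; the URL here is just a placeholder:

export MUTED_ACCOUNTS_URL='https://example.com/my-muted-accounts.txt'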

Create sync script
nano sync.sh
Insert the following (the STEEMD_URL is temporary until your own fat node has synced; update it and restart in a few days to speed up the hivemind sync):

#!/bin/bash
export DATABASE_URL=postgresql://hive:pass@localhost:5432/hive
export STEEMD_URL='{"default": "https://fat.pharesim.me"}'
export HTTP_SERVER_PORT=28091
hive sync
chmod +x sync.sh
screen -S hivesync

./sync.sh
Use Ctrl-a d to detach screen, screen -r hivesync to reattach

The whole sync process takes about a week. Don't forget to change the STEEMD_URL when your fat node is finished. Unlike the steemd replays, you can interrupt this sync at any time and it picks up where you stopped.
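To check the progress, you can ask postgres for the highest indexed block and compare it against the current head block; a sketch assuming hivemind's default schema (hive_blocks table):

psql -h localhost -U hive -d hive -c 'SELECT MAX(num) FROM hive_blocks;'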

The sync is finished when you see single blocks coming in. Keep it running, and set up the server:

cp sync.sh hivemind.sh
nano hivemind.sh

Change sync at the end to server
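After the edit, hivemind.sh should look like this:

#!/bin/bash
export DATABASE_URL=postgresql://hive:pass@localhost:5432/hive
export STEEMD_URL='{"default": "https://fat.pharesim.me"}'
export HTTP_SERVER_PORT=28091
hive server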

screen -S hivemind

./hivemind.sh
Use Ctrl-a d to detach screen, screen -r hivemind to reattach

steemd nodes

Both the fat and the accounthistory node will run an instance of steemd; these are the steps to prepare them:

sudo apt-get install -y autoconf automake cmake g++ git libbz2-dev libsnappy-dev libssl-dev libtool make pkg-config python3-jinja2 libboost-chrono-dev libboost-context-dev libboost-coroutine-dev libboost-date-time-dev libboost-filesystem-dev libboost-iostreams-dev libboost-locale-dev libboost-program-options-dev libboost-serialization-dev libboost-signals-dev libboost-system-dev libboost-test-dev libboost-thread-dev doxygen libncurses5-dev libreadline-dev perl ntp
cd
git clone https://github.com/openhive-network/hive
cd hive
git checkout v0.23.0
git submodule update --init --recursive
mkdir build
cd build

The build options differ for the two nodes

fat

cmake -DCMAKE_BUILD_TYPE=Release -DLOW_MEMORY_NODE=OFF -DCLEAR_VOTES=OFF -DSKIP_BY_TX_ID=OFF -DBUILD_STEEM_TESTNET=OFF -DENABLE_MIRA=ON -DSTEEM_STATIC_BUILD=ON ..

accounthistory

cmake -DCMAKE_BUILD_TYPE=Release -DLOW_MEMORY_NODE=ON -DCLEAR_VOTES=ON -DSKIP_BY_TX_ID=OFF -DBUILD_STEEM_TESTNET=OFF -DENABLE_MIRA=OFF -DSTEEM_STATIC_BUILD=ON ..

Again on both:

make -j$(nproc) steemd
cd
mkdir bin
cp /home/hive/hive/build/programs/steemd/steemd bin/v0.23.0
mkdir .steemd
nano .steemd/config.ini

And again, the configs differ for the two nodes

fat

log-appender = {"appender":"stderr","stream":"std_error"} {"appender":"p2p","file":"logs/p2p/p2p.log"}
log-logger = {"name":"default","level":"info","appender":"stderr"} {"name":"p2p","level":"warn","appender":"p2p"}
backtrace = yes

plugin = webserver p2p json_rpc witness account_by_key reputation market_history
plugin = database_api account_by_key_api network_broadcast_api reputation_api
plugin = market_history_api condenser_api block_api rc_api

history-disable-pruning = 0
account-history-rocksdb-path = "blockchain/account-history-rocksdb-storage"
block-data-export-file = NONE
block-log-info-print-interval-seconds = 86400
block-log-info-print-irreversible = 1
block-log-info-print-file = ILOG
sps-remove-threshold = 200

shared-file-dir = "blockchain"
shared-file-size = 360G

shared-file-full-threshold = 0
shared-file-scale-rate = 0
follow-max-feed-size = 500
follow-start-feeds = 0
market-history-bucket-size = [15,60,300,3600,86400]
market-history-buckets-per-size = 5760

p2p-seed-node = anyx.io:2001 gtg.steem.house:2001 seed.jesta.us:2001

rc-skip-reject-not-enough-rc = 0
rc-compute-historical-rc = 0
statsd-batchsize = 1
tags-start-promoted = 0
tags-skip-startup-update = 0
transaction-status-block-depth = 64000
transaction-status-track-after-block = 0

webserver-http-endpoint = 0.0.0.0:28091
webserver-ws-endpoint = 0.0.0.0:28090
webserver-thread-pool-size = 32

enable-stale-production = 0
required-participation = 33
witness-skip-enforce-bandwidth = 1

Not sure about the shared-file-size here; it's RocksDB? Better safe than sorry...

accounthistory

log-appender = {"appender":"stderr","stream":"std_error"} {"appender":"p2p","file":"logs/p2p/p2p.log"}
log-logger = {"name":"default","level":"info","appender":"stderr"} {"name":"p2p","level":"warn","appender":"p2p"}
backtrace = yes

plugin = webserver p2p json_rpc witness
plugin = rc market_history account_history_rocksdb transaction_status account_by_key
plugin = database_api condenser_api market_history_api account_history_api transaction_status_api account_by_key_api
plugin = block_api network_broadcast_api rc_api

history-disable-pruning = 1
account-history-rocksdb-path = "blockchain/account-history-rocksdb-storage"
block-data-export-file = NONE
block-log-info-print-interval-seconds = 86400
block-log-info-print-irreversible = 1
block-log-info-print-file = ILOG
sps-remove-threshold = 200

shared-file-dir = "/run/hive"
shared-file-size = 120G

shared-file-full-threshold = 9500
shared-file-scale-rate = 1000
flush-state-interval = 0
follow-max-feed-size = 500
follow-start-feeds = 0
market-history-bucket-size = [15,60,300,3600,86400]
market-history-buckets-per-size = 5760

p2p-seed-node = anyx.io:2001 seed.jesta.us:2001

rc-skip-reject-not-enough-rc = 0
rc-compute-historical-rc = 0
statsd-batchsize = 1
tags-start-promoted = 0
tags-skip-startup-update = 0
transaction-status-block-depth = 64000
transaction-status-track-after-block = 42000000

webserver-http-endpoint = 0.0.0.0:28091
webserver-ws-endpoint = 0.0.0.0:28090
webserver-thread-pool-size = 32

The fat node also needs a database.cfg
nano .steemd/database.cfg

These settings are for 32GB of RAM. Adapt global.shared_cache.capacity, global.write_buffer_manager.write_buffer_size and global.object_count accordingly (see the scaling sketch below the config).

{
  "global": {
    "shared_cache": {
      "capacity": "21474836480"
    },
    "write_buffer_manager": {
      "write_buffer_size": "4294967296"
    },
    "object_count": 250000,
    "statistics": false
  },
  "base": {
    "optimize_level_style_compaction": true,
    "increase_parallelism": true,
    "block_based_table_options": {
      "block_size": 8192,
      "cache_index_and_filter_blocks": true,
      "bloom_filter_policy": {
        "bits_per_key": 10,
        "use_block_based_builder": false
      }
    }
  }
}
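The example values work out to roughly 5/8 (shared_cache.capacity) and 1/8 (write_buffer_size) of total RAM. As a rough heuristic, not an official formula, you can scale them like this:

# print suggested values for a machine with 64GB RAM
RAM_GB=64
echo "capacity:          $(( RAM_GB * 5 / 8 * 1024 * 1024 * 1024 ))"
echo "write_buffer_size: $(( RAM_GB / 8 * 1024 * 1024 * 1024 ))"
# object_count has no obvious linear rule; when in doubt keep the default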

As well as increased file limits (hive is the username; adapt the numbers to your machine again)
sudo nano /etc/security/limits.conf
Insert near the end of the file

hive soft nofile 262140
hive hard nofile 262140

sudo nano /etc/sysctl.conf
Insert near the end of the file

fs.file-max = 2097152

Run sudo sysctl -p to apply the sysctl change, and log in to the server again for the new file limits to take effect.

The accounthistory node requires a change to the size of /run
sudo mount -o remount,size=120G /run
and a directory /run/hive

sudo mkdir /run/hive
sudo chown hive:hive /run/hive
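Both the resized /run and the directory live in a tmpfs, so they are gone after a reboot. One commonly used way to make the size persistent is an fstab entry (untested here, verify with df -h after a reboot; /run/hive still has to be recreated):

echo 'tmpfs /run tmpfs defaults,size=120G 0 0' | sudo tee -a /etc/fstab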

Then continue on both servers.
Download block_log.index and block_log (create the blockchain directory first):

mkdir -p .steemd/blockchain
rsync -avh --progress --append rsync://files.privex.io/hive/block_log.index .steemd/blockchain/block_log.index
rsync -avh --progress --append rsync://files.privex.io/hive/block_log .steemd/blockchain/block_log

Go for a walk or have dinner.

Start up steemd and replay blockchain
screen -S hive

echo    75 | sudo tee /proc/sys/vm/dirty_background_ratio
echo  1000 | sudo tee /proc/sys/vm/dirty_expire_centisecs
echo    80 | sudo tee /proc/sys/vm/dirty_ratio
echo 30000 | sudo tee /proc/sys/vm/dirty_writeback_centisecs

~/bin/v0.23.0 --replay-blockchain
Use Ctrl-a d to detach from screen, and screen -r hive to reattach.
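The dirty_* values above favor replay speed over safety, and reset on reboot anyway. If you want the usual kernel defaults back after the replay without rebooting, this should do it (defaults may differ per distribution):

echo    10 | sudo tee /proc/sys/vm/dirty_background_ratio
echo  3000 | sudo tee /proc/sys/vm/dirty_expire_centisecs
echo    20 | sudo tee /proc/sys/vm/dirty_ratio
echo   500 | sudo tee /proc/sys/vm/dirty_writeback_centisecs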

The replay takes a bit more than 2 days on the fat node, less on accounthistory. Do not interrupt it, or you will have to start over. If you are syncing a hivemind node, don't forget to switch it to your fat node when that's finished.

Step 2: webserver + routing

all nodes

All requests will be proxied by nginx, so we need this on all machines. We will install SSL certificates, so all communication is encrypted and all nodes can be called individually.

sudo nano /etc/nginx/sites-enabled/hive
The config is the same for each node; only change the server_name (hivemind.you.tld, fat.you.tld or acchist.you.tld respectively)

upstream hivesrvs {
# Dirty Hack. Causes nginx to retry node
   server 127.0.0.1:28091;
   server 127.0.0.1:28091;
   server 127.0.0.1:28091;
   server 127.0.0.1:28091;
   keepalive 10;
}

server {
    server_name hivemind/fat/acchist.you.tld;
    root /var/www/html/;

    location ~ ^(/|/ws) {
        proxy_pass http://hivesrvs;
        proxy_set_header Connection "";
        include snippets/rpc.conf;
    }
}

Add the rpc.conf to each server
sudo nano /etc/nginx/snippets/rpc.conf
Insert

access_log off;

proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
proxy_connect_timeout 10;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";

keepalive_timeout 65;
keepalive_requests 100000;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
proxy_ssl_verify off;

Let certbot configure the domains for automatic redirect to https
sudo certbot --nginx
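To verify that a node answers through nginx and SSL, a quick smoke test (replace the domain with your own); a synced steemd will return the current chain properties:

curl -s https://fat.you.tld -d '{"jsonrpc":"2.0","method":"condenser_api.get_dynamic_global_properties","params":[],"id":1}'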

hivemind node

We need an additional file for nginx, for the general entry point

sudo cp /etc/nginx/sites-enabled/hive /etc/nginx/sites-enabled/api
sudo nano /etc/nginx/sites-enabled/api

Change both occurrences of hivesrvs to jussisrv, the ports from 28091 to 9000, and the server_name to api., or whatever you want your node to be accessible on.

jussi

cd ~
git clone https://gitlab.syncad.com/hive/jussi.git

Create build script
nano build.sh
and insert

#!/bin/bash
cd /home/hive/jussi
sudo docker build -t="$USER/jussi:$(git rev-parse --abbrev-ref HEAD)" .

chmod +x build.sh

Create run script
nano jussi.sh
and insert

#!/bin/bash
cd /home/hive/jussi
sudo docker run -itp 9000:8080 --log-opt max-size=50m "$USER/jussi:$(git rev-parse --abbrev-ref HEAD)"

chmod +x jussi.sh

screen -S jussi

cd ~/jussi
nano DEV_config.json

Currently, my config looks like this:

{
    "limits": { "accounts_blacklist": [ "accounttoblock" ] },
    "upstreams": [
      {
        "name": "steemd",
        "translate_to_appbase": true,
        "urls": [["steemd", "https://fat1.pharesim.me" ]],
        "ttls": [["steemd", 3]],
        "timeouts": [["steemd",3]]
      },
      {
        "name": "appbase",
        "urls": [
          ["appbase", "https://fat1.pharesim.me"],

          ["appbase.account_history_api", "https://acchist1.pharesim.me"],
          ["appbase.condenser_api.get_account_history", "https://acchist1.pharesim.me"],
          ["appbase.condenser_api.get_ops_in_block", "https://acchist1.pharesim.me"],
          ["appbase.condenser_api.get_transaction", "https://acchist1.pharesim.me"],

          ["appbase.condenser_api.get_followers", "https://hivemind.pharesim.me"],
          ["appbase.condenser_api.get_following", "https://hivemind.pharesim.me"],
          ["appbase.condenser_api.get_follow_count", "https://hivemind.pharesim.me"],
          ["appbase.condenser_api.get_discussions_by_trending", "https://hivemind.pharesim.me"],
          ["appbase.condenser_api.get_discussions_by_hot", "https://hivemind.pharesim.me"],
          ["appbase.condenser_api.get_discussions_by_promoted", "https://hivemind.pharesim.me"],
          ["appbase.condenser_api.get_discussions_by_created", "https://hivemind.pharesim.me"],
          ["appbase.condenser_api.get_discussions_by_blog", "https://hivemind.pharesim.me"],
          ["appbase.condenser_api.get_discussions_by_feed", "https://hivemind.pharesim.me"],
          ["appbase.condenser_api.get_discussions_by_comments", "https://hivemind.pharesim.me"],
          ["appbase.condenser_api.get_reblogged_by", "https://hivemind.pharesim.me"],
          ["appbase.condenser_api.get_replies_by_last_update", "https://hivemind.pharesim.me"],
          ["appbase.condenser_api.get_trending_tags", "https://hivemind.pharesim.me"],
          ["appbase.condenser_api.get_discussions_by_author_before_date", "https://hivemind.pharesim.me"],
          ["appbase.condenser_api.get_post_discussions_by_payout", "https://hivemind.pharesim.me"],
          ["appbase.condenser_api.get_comment_discussions_by_payout", "https://hivemind.pharesim.me"],
          ["appbase.condenser_api.get_blog", "http://localhost:28091"],
          ["appbase.condenser_api.get_blog_entries", "https://hivemind.pharesim.me"],
          ["appbase.condenser_api.get_account_votes", "https://hivemind.pharesim.me"],
          ["appbase.condenser_api.get_state", "https://hivemind.pharesim.me"],

          ["appbase.condenser_api.get_state.params=['witnesses']", "https://acchist1.pharesim.me"],
          ["appbase.condenser_api.get_state.params=['/witnesses']", "https://acchist1.pharesim.me"],
          ["appbase.condenser_api.get_state.params=['/~witnesses']", "https://acchist1.pharesim.me"],
          ["appbase.condenser_api.get_state.params=['~witnesses']", "https://acchist1.pharesim.me"],

          ["appbase.follow_api", "https://hivemind.pharesim.me"],
          ["appbase.tags_api", "https://hivemind.pharesim.me"],

          ["appbase.market_history_api", "https://acchist1.pharesim.me"],
          ["appbase.transaction_status_api", "https://acchist1.pharesim.me"],
          ["appbase.account_by_key_api", "https://acchist1.pharesim.me"],
          ["appbase.block_api", "https://acchist1.pharesim.me"],
          ["appbase.network_broadcast_api", "https://acchist1.pharesim.me"],
          ["appbase.rc_api", "https://acchist1.pharesim.me"]
        ],
        "ttls": [
          ["appbase", 3],
          ["appbase.login_api",-1],
          ["appbase.network_broadcast_api", -1],
          ["appbase.follow_api", 10],
          ["appbase.market_history_api", 1],
          ["appbase.condenser_api", 3],
          ["appbase.condenser_api.get_block", -2],
          ["appbase.condenser_api.get_block_header", -2],
          ["appbase.condenser_api.get_content", 1],
          ["appbase.condenser_api.get_state", 1],
          ["appbase.condenser_api.get_state.params=['/trending']", 30],
          ["appbase.condenser_api.get_state.params=['trending']", 30],
          ["appbase.condenser_api.get_state.params=['/hot']", 30],
          ["appbase.condenser_api.get_state.params=['/welcome']", 30],
          ["appbase.condenser_api.get_state.params=['/promoted']", 30],
          ["appbase.condenser_api.get_state.params=['/created']", 10],
          ["appbase.condenser_api.get_dynamic_global_properties", 3]
        ],
        "timeouts": [
          ["appbase", 3],
          ["appbase.network_broadcast_api",0],
          ["appbase.chain_api.push_block", 0],
          ["appbase.chain_api.push_transaction", 0],
          ["appbase.condenser_api.broadcast_block", 0],
          ["appbase.condenser_api.broadcast_transaction", 0],
          ["appbase.condenser_api.broadcast_transaction_synchronous", 0],
          ["appbase.condenser_api.get_account_history", 20],
          ["appbase.condenser_api.get_account_votes", 20],
          ["appbase.condenser_api.get_ops_in_block.params=[2889020,false]", 20],
          ["appbase.account_history_api.get_account_history", 20],
          ["appbase.account_history_api.get_ops_in_block.params={\"block_num\":2889020,\"only_virtual\":false}", 20]
        ]
      },
      {
        "name": "hive",
        "urls": [["hive", "http://localhost:28091"]],
        "ttls": [["hive", -1]],
        "timeouts": [["hive", 30]]
      },
      {
        "name": "bridge",
        "translate_to_appbase": false,
        "urls": [["bridge", "http://localhost:28091"]],
        "ttls": [["bridge", -1]],
        "timeouts": [["bridge", 30]]
      }
    ]
  }

cd
./build.sh
This takes a while. When finished:
./jussi.sh
Use Ctrl-a d to detach screen, screen -r jussi to reattach
If you update your config, run build.sh outside of the screen session, then attach to the screen and restart the run script. (There may be faster ways to update the docker container with a new config, but I'm new to this.)
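One possible shortcut, untested here: mount the config into the container instead of baking it into the image, so a container restart picks up changes without a rebuild. The in-container path is an assumption, check jussi's Dockerfile for where it expects DEV_config.json:

#!/bin/bash
cd /home/hive/jussi
# /app/DEV_config.json is an assumed path, adjust to the Dockerfile
sudo docker run -itp 9000:8080 --log-opt max-size=50m -v "$PWD/DEV_config.json:/app/DEV_config.json" "$USER/jussi:$(git rev-parse --abbrev-ref HEAD)"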

Something about steemd apis

As you might have realized, there is some duplication in the apis fat and accounthistory provide. That's because of what I mentioned above: fat is slow. I did not investigate which apis could be dropped from fat while keeping hivemind working, so I didn't change anything in that default configuration. I also just took the list of apis on accounthistory from @gtg without questioning it. There is unnecessary redundancy for sure; instructions may change in the future to improve on this. Your perfect setup may differ completely, depending on which apis you need to serve (most).

Finishing words

That's it. After everything is synced, you should have a working public node ready to serve requests! If this guide has helped you and/or you want to support my work on Hive infrastructure, education, onboarding and retention, please help me secure my witness spot!

Stats

Current output of df -h on the three servers (April 14):

hivemind

/dev/md2        407G  272G  115G  71% /

fat

/dev/md2        815G  369G  406G  48% /

accounthistory

tmpfs           120G   58G   63G  48% /run
/dev/md2        874G  531G  299G  65% /

I probably cannot adequately express my appreciation for folks undertaking to improve the decentralization of Hive, as you are.

I am not a dev, but have some questions regarding how your build affects censorship and resistance to it on Hive.

"bloom_filter_policy"

Is this a reference to @bloom?

I note you are incorporating @themarkymark's blacklist. Does this include the #irredeemables github list? I did not see a specific reference to that list, so I inquire. I have strongly advocated for far greater public awareness and involvement in that absolute censorship of affected accounts, and it is my hope that full nodes that do not simply mirror that API censorship mechanism will arise.

I would appreciate any comment you might feel appropriate regarding that apparently covertly exercised censorship mechanism, and specifically regarding its present application to @joe.public, who seems to have been placed on that list and completely censored on all front ends on Hive using extant full nodes, for no other reason than annoying Bernie and Marty.

I do not advocate being a troll. However, I am confident that the definition of a troll is highly subjective, and allowing such total censorship to be applied based on personal opinion is a grave threat to Hive's ability to secure free speech. I would rather deal with trolls and flags than potential censorship because powerful accounts don't like them, or maybe any of us.

Trolls are annoying. Censorship is existential. Steem today reveals what that slippery slope ensures Hive will become if public information and involvement in API mediated censorship is not established presently.

Thanks very much!

I would appreciate any comment you might feel appropriate regarding that apparently covertly exercised censorship mechanism, and specifically regarding its present application to @joe.public, who seems to have been placed on that list and completely censored on all front ends on Hive using extant full nodes

Not true. You must be mixing up Hive's blacklists and Steemit's censorship.
To see the difference, just compare:
https://hive.blog/@joe.public
https://steemit.com/@gtg

While Stinc under Yuchen seems to have gone after back catalogs, I received a comment from @joe.public today, which is now invisible, and my reply is also invisible. This is censorship at the API level, exactly as it is being undertaken by Stinc and the CCP on Steem; just on Hive it is only applied to active communications, and not catalogs.

"The users in the irredeemables list will have their comments and posts filtered out and their flags will not be considered in the logic that determines if comments or posts are hidden in most front end applications including Condenser (the software that powers steemit.com)."

The latest update to the #irredeemables list was last month, just before Hive forked off Steem. I have received no indication Hive treated this list any differently than Steem did, and absent intentional changes to how Hive works, this exact list should have the exact same effect on Hive it does on Steem.

Here you can see the full list as of that time, administered by @themarkymark and friends; @joe.public is listed as #622.

No reason given for his being on that list is compelling enough to mandate his total censorship on Hive. He annoyed @themarkymark and Bernie.

That's all it takes to generate a complete censorship of Hive users. That's damn little difference from what Sun Yuchen is doing to Steem.

Hive needs censorship resistance, not a band of merry censors keeping the narrative safe for consensus witnesses and ninjamine whales.

I do appreciate your comment. I am aware of the feathers I have ruffled, and expect your adamance regarding independent thought is particularly revealed by your comment here. I note you're an expert coder, and that nothing I have said in this comment is new information to you.

So, why do you note the minor difference in target of the exact same mechanism that is actively censoring folks on both Steem (wielded by Sun Yuchen today) and Hive (by @themarkymark and his band of merry silencers of dissent)? Seems a bit misleading.

Edit: I should say 'the exact same method' rather than 'mechanism', as you are not on the #irredeemables list, which shows that Yuchen is using a different list via the API method to censor folks.

Both the method and the mechanisms are very different now.

Anti-censorship features of Hive are on the consensus level.
True, things get more complicated nowadays when part of the API endpoint is a non-consensus, front-end oriented social layer such as hivemind.
Node and front-end operators can choose to ignore those features. I'm (same as @pharesim) running on defaults, inherited from Steem (now changed to be independent, for obvious reasons).
I see a lot of improvements needed for those mechanisms to improve transparency and decentralization.
In the long term I believe this will be replaced by some sophisticated mechanisms relying on a web of trust.

In the long term I believe this will be replaced by some sophisticated mechanisms relying on a web of trust.

yeah man we're supposed to have that DIY Decentralized Voice KYC-free p2p ID system that lets you verify up to 3 other people in a web where you submit face selfies every few weeks or days and you have VERIFICATION witnesses, at least that's what I see we could have (The stuff we all BELIEVED @dan was doing at Block One as $750 million dollar CTO lol but he ACTUALLY just created an idea and allowed US to create the plan spread across all our brains

just do what dan led us to believe he'd do for Voice, but on hive, where we need a validator node system where you can tie your owner keys to some system where your recovery account can restore your account after 1 year or 2 months or 6 months of inactivity and where you can get people to verify if you still have your account, selfie verification, you'd end up with validator node wars, but you'd try to create alliances and allow inter-blockchain communication, we could allow a Keybase.io system where you verify who you are by having a verification of like 7 different social media accounts. I dunno there's SO many ways to do this, i want to try them all


I just don't think dan will deliver on this, maybe, but Voice doesn't even let me make posts, just comments. and i get a 700 Voice daily reward, hey if that was $700 a day i'd be happy.

what we need is to finish this silly dream dan has, he is innovating less and less the larger and larger his company gets. But WE can be nimble, quick, and jump over the candlestick ;)

I am not a coder, and seem to have misunderstood, from your comment, that the API mechanism was being applied on both forks, with the difference that Steem was adding catalogs to what is not passed to front ends.

However, I am learning, and what I am learning today is that folks that are competent to work with APIs have undertaken what I consider to be vast improvements in how censorship is going to be implemented going forward.

I am grateful for your considerations, and for sharing them with me here. I hope trust can be unnecessary, just as I hope spam will stop. I'd much rather users themselves be enabled to choose what is censored from their feeds, as that best enables free speech and dissent in a world that seems hell bent on duplicating the Chinese model of total information control, where any information not mandatory is forbidden.

The centralized platforms are all headed in that direction. Since I, and many others I have read, are here because of censorship, enabling users to be the only censors of their feeds strikes me as the most potent recruiting campaign possible.

Seems to me it's just the right thing to do, as well.

Thanks!

First off, it's not marky and his friends. It's marky. And it's a difficult, unrewarding job I have the utmost respect for. Nobody else is doing it.
Someone like joe attacking him constantly on a personal level asked for it, and I don't judge. He has been removed now though.

As Gandalf said, the process and marky managing it was inherited from steem. Things will be handled differently moving forward, but it takes time.

The new filtering on Steemit works on another level. I don't even call that censorship though, as that describes an action by governments, not private persons. I don't see a duty for anyone to serve anything. It's uncensorable at the blockchain level, nothing more.

During the discussions I have had with @themarkymark I have learned much I did not know, from him and others, such as that there is no lack of spammers that have no personal conflict with Bernie. Indeed, @themarkymark was correct when he indicated I was largely uninformed on the matter.

I will mention my improved understanding to him directly when time allows.

I have not yet read @gtg's comment, but am working through my discussions and expect to, if it's a new comment.

While I note we are not entirely in agreement philosophically, I nonetheless deeply appreciate the action you have taken, and credit you for being the proximate cause of improvements to Hive that have resulted. For that I am deeply grateful, and also because it's almost 2 am here, I will refrain from walls of text that are just me being disagreeable.

Thank you very much.

Is this a reference to @bloom?

No

I note you are incorporating @themarkymark's blacklist

The irredeemables were taken from steemit without changes for now. As it was managed by them (with the help of marky), it wasn't something I could influence, so I never really cared.
I do now! This list definitely has to be discussed and a process established to not let a single person just add someone who annoys them. Thanks for pointing it out, I will check more opinions.

Please keep me in the loop as you manage the censorship issue.

I have advocated for requiring a vote on HPS proposals to include an account on #irredeemables, due to its severe and total censorship of affected accounts. I feel strongly that the same mechanism considered sufficient to elect witnesses should be the mechanism to essentially forever silence people.

Advocates of censoring the account can present evidence in support of the inclusion, the accused can present their own, and voters decide. I also recommend allowing the censored account periodic appeals via the same HPS method. If they receive enough votes, they should be allowed to get off the list.

We need a robust mechanism to handle such a terminal threat to users of Hive as total and complete API censorship of all ability to communicate with other folks here, and I can't think of one better atm.

Thanks!

In my view, blocking content from the interfaces should happen there. If an interface operator runs their own API node it makes sense for them to block it there directly. I do not operate an interface though, so I will probably set my node up with no or just a minimal irredeemables list. My hivemind is not fully synced yet and routed to anyx, so I have a few days to hear other opinions and make a decision then.

This comment is for issues, changes and such

condenser_api.get_account_history needs a higher timeout in jussi, added 20s

  • added get_reblogged_by to jussi, is served by hivemind
  • second nginx config file on hivemind (api+hivemind)
  • completed setup of hivemind
  • updated stats
  • condenser_api.get_transaction routed to acchist
  • raised timeout for account_history_api.get_account_history

Updated STEEMD_URL to new syntax

Added max log size to jussi's docker run command to stop the disk from filling up

Hey @pharesim, thank you for setting up a node for HIVE. I do have a few questions on the hardware side though.

1 - Why did you go with RAID0 when you have NVMe? It shouldn't really give a groundbreaking boost to the R/W speeds.
2 - Why 3 different servers/nodes rather than going with just one big server/node?

  1. For storage space ;)
  2. How you set that up in detail really depends on what you want. Postgres and rocksdb are heavy on disk i/o, and the accounthistory node will soon use the disk for swapping. Other setups work too; in fact this guide by @privex, with a completely different approach, was one of my resources for setting it up: https://hackmd.io/@KoCktFVzTnePd9BdXfC7og/HJxKBAhyv8

I love hearing about old school steemians now on hive! so refreshing! I remember pharesim from steeminvite, which was just a great time during steem's glory days. we can recapture them again with enough youtube videos @jerrybanfield style focused on hive and bee memes!

WOOO I read that SO fast! I resteemed and voted because I saw PHARESIM! Anything PHARESIM posts I will Support Blindly until someone stops me! WOOOOOo I love having a vague idea of who's valuable based on a superficial view of a blockchain reddit a few years ago WOOOOO

I'm so EXCITED for the good old days!!!


Thank you for all your efforts 😌 All the things you're doing for HIVE are sincerely appreciated by the community members.

Again, thanks for your service on all this!

I rEALLy want a dogecoin BTC eth tip bot and @discordtip COULD add Hive posts, it DOES have hive in discord, but imagine a COMMENT bot that could TIP u, posts and comments, and it SHOWS ON THE POST HOW MUCH u have MADE in TIPS, which formatted can just be like a special consensus memo... we all pick a special memo format for sending BTCP to someone on hive-engine and we can chill

I'm leaving this comment as a bookmark for myself.

God bless all the nerds that are running API NODES!!! The amount of RAM that it requires is astronomical. For me, the only difficult part is acquiring the hardware+bandwidth, setting it up is the easy part ...

Downvote solely for disagreement on rewards, not because I dislike your post.
I actually really appreciate that you are running a full node, and that you are documenting how you did it etc., I just don't believe that any Hive post in the current reward "environment" deserves a >$80 payout.

Great effort. Thanks for documenting your process.

Minor addendum:
STEEMD_URL syntax has changed; it is now:

export STEEMD_URL='{"default": "http://api.example.com:8091"}'

Thanks for the notice, updated