Optimists expected it sooner.
Pessimists expected it later.
Realists are smarter than that.
Yet another promotional video rendered for the brand new Hive fork 24. Bee Impressed ;-)
"and everything under the Sun is in tune but the Sun is eclipsed by the moon"
- "Eclipse", Pink Floyd
Nearly two months ago, the first version of the Eclipse code was released. It was tagged as v1.0.0.
You can read more about it in my previous witness update.
Last week, an official release candidate was announced.
https://gitlab.syncad.com/hive/hive/-/releases/v1.0.11
It’s meant for witnesses and developers to perform more extensive tests before things are set in stone. HF24 will occur on September 8, 2020, provided no major bugs are discovered. You can find more information here: Hive HF24 Information Mega Post.
I didn’t have enough time to post a more detailed Hive Pressure episode, because there are still a lot of things to do before the Hard Fork, so there’s no time left for "blogging".
"The Eclipse Paradox" courtesy of @therealwolf
So let’s keep things short and useful.
Public node
There’s a v1.0.11 instance configured for a "complete hived API", available at https://beta.openhive.network
curl -s --data '{"jsonrpc":"2.0", "method":"condenser_api.get_version", "params":[], "id":1}' https://beta.openhive.network | jq .
{
  "jsonrpc": "2.0",
  "result": {
    "blockchain_version": "1.24.0",
    "hive_revision": "1aa1b7a44b7402b882326e13ba59f6923336486a",
    "fc_revision": "1aa1b7a44b7402b882326e13ba59f6923336486a",
    "chain_id": "0000000000000000000000000000000000000000000000000000000000000000"
  },
  "id": 1
}
You can use that endpoint to test your current apps and libs and their ability to communicate with the new version. Please keep in mind that this is only a hived endpoint: no jussi, no hivemind.
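For example, you can ask for the current head block number (a minimal sketch using the standard database_api.get_dynamic_global_properties method; jq is used only for readability):
curl -s --data '{"jsonrpc":"2.0", "method":"database_api.get_dynamic_global_properties", "params":{}, "id":1}' https://beta.openhive.network | jq .result.head_block_number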
If you want a hands-on experience, here are some useful tips on how to tame the Eclipse.
Build Eclipse
Make sure you have all the prerequisites:
apt-get install -y \
autoconf \
automake \
autotools-dev \
build-essential \
cmake \
doxygen \
git \
libboost-all-dev \
libyajl-dev \
libreadline-dev \
libssl-dev \
libtool \
liblz4-tool \
ncurses-dev \
python3 \
python3-dev \
python3-jinja2 \
python3-pip \
libgflags-dev \
libsnappy-dev \
zlib1g-dev \
libbz2-dev \
liblz4-dev \
libzstd-dev
You can get that list from the Builder.DockerFile file.
Clone sources
git clone https://gitlab.syncad.com/hive/hive
Checkout the release candidate
cd hive && git checkout v1.0.11
git submodule update --init --recursive
Build it
mkdir -p ~/build-v1.0.11 && cd ~/build-v1.0.11
cmake -DCMAKE_BUILD_TYPE=Release \
-DLOW_MEMORY_NODE=ON \
-DCLEAR_VOTES=ON \
-DSKIP_BY_TX_ID=OFF \
-DBUILD_HIVE_TESTNET=OFF \
-DENABLE_MIRA=OFF \
-DHIVE_STATIC_BUILD=ON \
../hive
make -j4
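The -j4 runs four parallel compile jobs. If you have more cores and enough RAM (assume very roughly 2GB per job to stay safe), you can match the job count to your machine:
make -j$(nproc)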
Get the resulting hived and cli_wallet
They are at:
~/build-v1.0.11/programs/hived/hived
and
~/build-v1.0.11/programs/cli_wallet/cli_wallet
respectively.
You can check that you have the proper version by running:
~/build-v1.0.11/programs/hived/hived --version
The result should be exactly:
"version" : { "hive_blockchain_hard_fork" : "1.24.0", "hive_git_revision" : "1aa1b7a44b7402b882326e13ba59f6923336486a" }
Run it
Now you are ready to run it.
Not much has changed… well, OK, even the name of the binary has changed. It’s now hived, not steemd, for obvious reasons.
Any steem-related configuration options were changed to their hive equivalents.
Most of the basics you know from previous versions are here.
I’ll try to present some sample configurations for most common use cases.
Of course, they need to be adjusted to suit your specific needs.
Fat Node
Let's start with the configuration of the Fat Node, because it’s the slowest one: it replays for many days and is a real pain when it comes to maintenance.
Run it with:
true
(The true fact is that the Fat Node is gone.)
Yes, that’s a real command.
No, it doesn’t make sense to run it.
Yes, you can do that anyway, it won’t harm you.
Improvements made:
It’s fast, lightweight, and gone.
Hooray!
Simple Hive Node
Next, the most common node in our Hive universe.
With some small changes to configuration, it can act as a seed node, a witness node or a personal node for your private wallet communication. If we wanted to use the Bitcoin naming convention, we’d call it a full node, because it has everything you need to keep Hive running. (It just doesn’t talk much).
Example config for Seed Node / Witness Node
log-appender = {"appender":"stderr","stream":"std_error"}
log-logger = {"name":"default","level":"info","appender":"stderr"}
backtrace = yes
plugin = witness
shared-file-size = 20G
shared-file-full-threshold = 9500
shared-file-scale-rate = 1000
p2p-seed-node = api.openhive.network:2001
webserver-thread-pool-size = 32
flush-state-interval = 0
Whether this node is just a seed node or a witness node, the config.ini file for it is pretty much the same.
One difference is that the witness should add their witness and private-key entries, such as:
witness = "gtg"
private-key = 5Jw5msvr1JyKjpjGvQrYTXmAEGxPB6obZsY3uZ8WLyd6oD56CDt
(No, this is not my key. The purpose is to show you that the value for witness is in quotes and the value for private-key is without them. Why? Because of reasons.)
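If you need a fresh signing key pair, one way to get one is cli_wallet's suggest_brain_key command. A minimal sketch, assuming your node exposes a websocket endpoint on 127.0.0.1:8090:
~/build-v1.0.11/programs/cli_wallet/cli_wallet -s ws://127.0.0.1:8090
# inside the wallet prompt:
suggest_brain_key
# prints a brain key, a WIF private key (this goes into private-key) and the matching public key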
Another difference is that while the witness node is usually kept secret, a public seed node should be available to the outside world, so you might want to explicitly choose which interface / port it should bind to for p2p communication:
p2p-endpoint = 0.0.0.0:2001
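Once the node is running, you can verify that it actually listens on that port (a quick check with standard Linux tooling, assuming ss is installed):
ss -tlnp | grep 2001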
You need 280GB for blocks and 20GB for the state file. 16GB RAM should be enough. All that, of course, increases with time.
AH API node
We used to have a "full API node" (with every plugin possible). Since the Hivemind era, we’ve had an "AH API node" and a "Fat API node" to feed the Hivemind.
Now, we’ve managed to get rid of the Fat node, and feed the Hivemind from a single type of instance.
It’s usually called an AH Node, where AH stands for Account History. While it has many more plugins, account_history_rocksdb is the biggest, heaviest, and "most powerful" one, hence the name.
Its build configuration is the same as for the simplest seed node.
Its runtime configuration makes it what it is.
Example config for AH Node
log-appender = {"appender":"stderr","stream":"std_error"}
log-logger = {"name":"default","level":"info","appender":"stderr"}
backtrace = yes
plugin = webserver p2p json_rpc
plugin = database_api condenser_api
plugin = witness
plugin = rc
plugin = market_history
plugin = market_history_api
plugin = account_history_rocksdb
plugin = account_history_api
plugin = transaction_status
plugin = transaction_status_api
plugin = account_by_key
plugin = account_by_key_api
plugin = reputation
plugin = reputation_api
plugin = block_api network_broadcast_api rc_api
account-history-rocksdb-path = "blockchain/account-history-rocksdb-storage"
shared-file-size = 20G
shared-file-full-threshold = 9500
shared-file-scale-rate = 1000
flush-state-interval = 0
market-history-bucket-size = [15,60,300,3600,86400]
market-history-buckets-per-size = 5760
p2p-endpoint = 0.0.0.0:2001
p2p-seed-node = api.openhive.network:2001
transaction-status-block-depth = 64000
transaction-status-track-after-block = 46000000
webserver-http-endpoint = 0.0.0.0:8091
webserver-ws-endpoint = 0.0.0.0:8090
webserver-thread-pool-size = 256
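After the replay finishes, a quick smoke test of the account history API could look like this (a sketch: the port comes from the webserver-http-endpoint above, and the account name is just an example):
curl -s --data '{"jsonrpc":"2.0", "method":"account_history_api.get_account_history", "params":{"account":"gtg","start":-1,"limit":10}, "id":1}' http://127.0.0.1:8091 | jq .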
Aside from the account history, which is implemented by the account_history_rocksdb plugin (don’t use the old non-RocksDB one), there are other plugins and corresponding APIs included in the configuration to serve information about resource credits, internal market history, transaction statuses, reputation, etc.
Yes, shared-file-size can really be that small at the current block height, and so can the memory requirements.
Account History RocksDB storage currently takes about 400GB.
The blockchain itself takes 280GB.
I'd suggest at least 32GB or 64GB RAM depending on the workload, so the buffers/cache could keep the stress away from storage.
Exchange Node
A lot depends on internal procedures and specific needs of a given exchange.
(Internal market support? One tracked account or more?)
If you ran it before, you probably know exactly what you need.
One thing to pay attention to is that the RocksDB flavor of AH has to be used, i.e. the account_history_rocksdb plugin, and all related settings have to use their rocksdb variants.
Example config for the Exchange Node
log-appender = {"appender":"stderr","stream":"std_error"}
log-logger = {"name":"default","level":"info","appender":"stderr"}
backtrace = yes
plugin = webserver p2p json_rpc
plugin = database_api condenser_api
plugin = witness
plugin = rc
plugin = account_history_rocksdb
plugin = account_history_api
plugin = transaction_status
plugin = transaction_status_api
plugin = block_api network_broadcast_api rc_api
account-history-rocksdb-path = "blockchain/account-history-rocksdb-storage"
shared-file-size = 20G
shared-file-full-threshold = 9500
shared-file-scale-rate = 1000
flush-state-interval = 0
account-history-rocksdb-track-account-range = ["binance-hot","binance-hot"]
account-history-rocksdb-track-account-range = ["bittrex","bittrex"]
account-history-rocksdb-track-account-range = ["blocktrades","blocktrades"]
account-history-rocksdb-track-account-range = ["deepcrypto8","deepcrypto8"]
account-history-rocksdb-track-account-range = ["huobi-pro","huobi-pro"]
p2p-endpoint = 0.0.0.0:2001
p2p-seed-node = api.openhive.network:2001
transaction-status-block-depth = 64000
transaction-status-track-after-block = 46000000
webserver-http-endpoint = 0.0.0.0:8091
webserver-ws-endpoint = 0.0.0.0:8090
webserver-thread-pool-size = 32
It’s very similar to the AH API node setup, but instead of tracking 1.4 million accounts, we are using account-history-rocksdb-track-account-range to specify the account(s) used by the exchange.
Pay attention to "rocksdb" in the variable name and make sure you track only the account(s) you need. Usually it’s just one, such as bittrex.
Please note that each time you change the list of tracked accounts, you will have to start over with the replay.
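To check that a tracked account is actually being recorded, you can query its history through the condenser-style API (a sketch; the account name and port are just the values from the example config above):
curl -s --data '{"jsonrpc":"2.0", "method":"condenser_api.get_account_history", "params":["bittrex",-1,10], "id":1}' http://127.0.0.1:8091 | jq .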
Getting blocks
Nothing has changed here. Either use a block_log from your own source or get one from a public source such as:
https://gtg.openhive.network/get/blockchain/block_log
(It’s always up to date and takes roughly 280GB.)
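A download of that size is likely to be interrupted at some point, so use a client that can resume. For example, with wget:
wget -c https://gtg.openhive.network/get/blockchain/block_log
# -c continues a partially downloaded file instead of starting over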
Putting things together
By default, your tree should look like this:
.hived/
├── blockchain
│ └── block_log
└── config.ini
As you can see, we’ve moved from ~/.steemd to ~/.hived, so config.ini should be placed there. ~/.hived/blockchain should contain the block_log file.
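Putting that in place by hand could look like this (a sketch; it assumes your config.ini and the downloaded block_log sit in the current directory):
mkdir -p ~/.hived/blockchain
cp config.ini ~/.hived/
mv block_log ~/.hived/blockchain/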
Once you start a replay, a block_log.index file will be generated.
If you’ve enabled the account_history_rocksdb plugin, then you will also have a ~/.hived/blockchain/account-history-rocksdb-storage directory with the RocksDB data storage.
Replay
I’d recommend starting with a clean data dir as shown above, but you can use --force-replay.
Why force it?
Because of another cool feature that will try to resume replay, and in this case we want to avoid that and start from scratch.
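A typical invocation for a fresh replay would then be (a sketch; --data-dir is optional when you use the default ~/.hived):
~/build-v1.0.11/programs/hived/hived --data-dir ~/.hived --force-replay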
Replay times
Of course, this depends on your configuration, node type, your hardware and admin skills, but for a well-tuned environment they shouldn’t be more than:
- 8-12 hours for witnesses
- 12-36 hours for exchanges
- 18-72 hours for public API node operators.
No Hivemind yet
It still has some rough edges that have to be smoothed out before the release. It only needs an AH API Node to feed data from, and it should replay from scratch within 2-4 days or so.
Any questions?
Ask them here in the comments or on OpenHive.Chat, but please be patient.
I might respond with a delay.
Great post! Thank you for the write-up.
Side note, you might be getting an error that looks like this:
CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
It's because your cmake version is too old; the build system in the new hard fork requires a pretty up-to-date cmake. See https://askubuntu.com/questions/829310/how-to-upgrade-cmake-in-ubuntu for how to update, or even better, do your own research on how to update.
Thanks, I haven't checked that; I just assumed Ubuntu 18.04 LTS, where it works out of the box. With 16.04 and 20.04 it might be more of a hassle. In the near future (although after the HF) I will make sure it's compatible with Ubuntu 20.04 LTS.
Update: indeed, the cmake included in Ubuntu 16.04 LTS is too old; installing a recent version solves that. One can do that either by following your link or via https://apt.kitware.com/ (which seems to be the more consistent approach, since it uses the packaging system).
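For anyone hitting this, a quick way to see which cmake you have before building (the stock 16.04 version is the one reported as too old here):
cmake --version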
Are we still supporting 16.04?
I'm not ;-)
You can use docker of course.
But it also seems to be possible to build it natively after the usual tweaks (cmake and boost). (I just did that to confirm it's still possible.)
I was just checking. I am somehow not a fan of containers when it comes to running blockchains.
Lolz... those 2 comments... feels like an alien language to a non-dev. Hehe.
Seriously...thanks for all the work you are putting in to make Hive an attractive destination.
Stupid short question:
Why is a witness self-voting?
I assume that you are talking about voting for content.
Short answer is: because it's OK to self-vote.
It's unrelated to being a witness or not. Witnesses provide security and reliability to the network. When they vote, they are doing that with their "user's hat" on.
There used to be a switch that voted for your own content automatically after posting, but because of the early-vote penalty window it was removed, as it was sub-optimal for curation rewards.
Reddit auto-ups your own content and it really makes sense, also for Hive:
If you don't think your content deserves an upvote, why are you wasting blockchain resources on it?
Post what you would like to see.
Vote for what you would like to see voted on.
Well, it's always subjective, so other users can disagree with the reward, and eventually all engaged parties will reach a consensus about their up- and downvotes.
In my country they have a word for it: EHRE (honor).
Do you want to give me excuses or present a reasonable and serious external image to your voters?
Excuses? Why would I make any?
So you are claiming that voting for one's own content is not "honorable"?
Replayed a witness backup node (in tmpfs), it's using 16GB of RAM, very impressive compared to the current steemd eating 59GB! hived ftw :)
Nice vid ... the rest of the post was boring :)
I know, right?
Posting on Hive about Hive is so boring.
I'll try to make it up to you with even more videos.
(I need to update those created a long time ago for STEMsocial.)
There's also the #joinhive video initiative that you might find not boring ;-)
On a serious note, it looks like you have been busy, making a lot of things happen!
Thanks for that.
How the post read to me:
Awesome Vid (but too short imo)
Stuff i dont understand
Meme
More stuff i dont understand
😂🤣 Your posts and blocktrade's are like way too much for my little brain, but i always try and read them to see if i learn by osmosis.
Don't worry, it's not your brain :), just your profession; you wouldn't go and read a medical report meant for a surgeon and expect to understand it.
The same applies here, this is aimed at technical people (witnesses + node runners).
🤗 yes, yet still i like to read them like the masochist i am 🤣😂, i actually think it's always good to try and read them to learn, even if it's just a little, maybe not the code itself but just to understand a little how the whole ecosystem i spend most of my day on works.
I agree! A couple of weeks ago someone sent me a scientific report on current vaccine research and trials for covid-19. It was very difficult to understand, but the little things I was able to get out of it were still interesting.
We’re ready for Eclipse
Well, good for you, I'm still not ready ;-)
...but I'm working on it.
Although I don't understand much, I still read with interest. One question: is the Hive blockchain 280GB at the moment?
The block_log is indeed 280GB.
Thanks for the reply. That is pretty amazing and more manageable than I thought.
Thanks for the information, it seems very complete to me, therefore I do not have any questions.
Happy Sunday
Actually, it hasn't been long since I was introduced to this, so I don't understand all this blockchain configuration from Steem to Hive. I don't know if this can be done on a phone or only on laptops.
I like this post
Thank you so much, you did fantastic work!
That's why I got a downvote from you? ;-)
There are so many technical things! Yeah, I enjoyed the video; it would be much appreciated, @gtg, if you included a graphical image to summarize next time, Sir.
Video looks great! Thanks for sharing :)
Nice post with complete information
What's this about? I don't get it jajsakjshas, please summarize :'-(
So boring i understand nothing :D