Me again. Usual doing/writing balance issues.
The biggest minor release of them all
There have already been some posts by @blocktrades, but it's hard to overstate how much work was performed to deliver you the new MINOR version.
Versioning
Under the legacy chain, we used versions 0.x.y (up to 0.22.y), where x indicated a major release requiring a HardFork, and y referred to a minor release, usually involving bug fixes, improvements, or anything else that did not require consensus changes.
With version 0.23.0, we forked from the legacy chain and began our new journey, during which we achieved a technological breakthrough with the release of version 1.24.0, codenamed Eclipse.
Currently, we are in the HardFork 27 era, and we have been using version 1.27.4 for a year while working on a "few details" to improve our API's technological stack.
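The convention above can be sketched with a small helper (a hypothetical illustration of the versioning scheme; these function names are mine, not part of any Hive tooling):

```python
def parse_hive_version(version: str) -> dict:
    """Split a Hive version string like '1.27.4' into its parts.

    Under the convention described above:
    - a change in the middle number (major release) requires a HardFork,
    - a change in the last number (minor release) needs no consensus change.
    """
    era, major, minor = (int(part) for part in version.split("."))
    return {"era": era, "major": major, "minor": minor}

def requires_hardfork(old: str, new: str) -> bool:
    """True if upgrading old -> new implies a consensus change."""
    a, b = parse_hive_version(old), parse_hive_version(new)
    return (a["era"], a["major"]) != (b["era"], b["major"])

# 1.27.4 -> 1.27.5 is a minor release: no HardFork needed.
print(requires_hardfork("1.27.4", "1.27.5"))  # False
print(requires_hardfork("0.22.1", "0.23.0"))  # True
```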
Let me share some statistics (source: GitHub):
| Metric | Legacy Chain | Hive |
|---|---|---|
| Versions | 0.10.0 - 0.19.0 | 1.27.4 - 1.27.5 |
| Releases | 9 major | 1 minor |
| Commits | 1412 | 1153 |
| Changed files | 312 | 1543 |
| Additions | 32217 | 153683 |
| Deletions | 12192 | 50646 |
| Contributors | 15 | 18 |
And that's just core Hive; most of the improvements were actually made to the supporting software.
api.openhive.network upcoming upgrade
I’ve promised to be the last to upgrade, ensuring my node serves as a reliable fallback in case any unexpected issues arise from the recent updates.
So far, so good; I don’t know of any concerning issues that could/should stop me from upgrading.
If you do, speak now, as I'm about to shut down the old stack within a week.
If you're unsure whether your application is ready for the new stack, test it using another API endpoint, such as api.hive.blog, which is already on the new stack.
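One way to run such a test is to send the same JSON-RPC calls your application uses to both endpoints and compare the results. A minimal sketch (the endpoint and the `condenser_api.get_dynamic_global_properties` method are real; the helper functions are just an illustration):

```python
import json
from urllib import request

def jsonrpc_payload(method: str, params=None, request_id: int = 1) -> dict:
    """Build a standard JSON-RPC 2.0 request body."""
    return {
        "jsonrpc": "2.0",
        "method": method,
        "params": params if params is not None else [],
        "id": request_id,
    }

def call_api(endpoint: str, method: str, params=None) -> dict:
    """POST a JSON-RPC call to a Hive API endpoint and return the response."""
    body = json.dumps(jsonrpc_payload(method, params)).encode()
    req = request.Request(endpoint, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())

# Example (performs a network call):
# call_api("https://api.hive.blog",
#          "condenser_api.get_dynamic_global_properties")
```

Diffing the responses from api.hive.blog and your current endpoint for the methods your app relies on should surface incompatibilities early.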
The old way
It was based on hived with all API plugins enabled, including the most resource-hungry one, account history, plus hivemind, a Python application that syncs via hived's API and uses a PostgreSQL backend for database storage. Some API operators also ran separate broadcaster nodes (lightweight consensus nodes just for broadcast and/or get_block requests), haproxy to balance load among different instances, jussi for routing requests, redis for caching, nginx for SSL termination and rate limiting, etc.
So it's very easy and fast to set up if you are me and have all of that in muscle memory after deploying such environments hundreds of times; otherwise it can be a pain in the back(end).
Just for the record, the current old-style hived account history node requires 1.6 TB of storage, of which the block_log occupies 464 GB. The snapshot takes up 1.1 TB, which compresses down to 721 GB. The hivemind database occupies 700 GB, and the shared_memory.bin file needs roughly 22 GB.
All combined, that's roughly 2.3 TB.
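As a quick sanity check on the figures quoted above (nothing more than the arithmetic spelled out):

```python
# Storage figures quoted above, in GB (using 1 TB = 1000 GB).
hived_ah_node = 1600   # old-style account history node, incl. block_log
hivemind_db   = 700    # PostgreSQL storage for hivemind
snapshot      = 1100   # uncompressed snapshot
snapshot_zip  = 721    # compressed snapshot

total_tb = (hived_ah_node + hivemind_db) / 1000
print(f"hived AH node + hivemind: ~{total_tb} TB")
print(f"compressed snapshot is {snapshot_zip / snapshot:.0%} of the original")
```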
The new way
The new way of running API nodes is to let hived do what it does best - handle blockchain stuff - and let all the other work (storing, organizing, and presenting data) be handled by software that was made for it and has been doing it well for decades: PostgreSQL.
We no longer use the RocksDB account history plugin (well, it's still very good if you just need to track a few accounts, like exchanges do); instead, we use a HAF-based account history API server. Hivemind is also HAF-based, so it no longer bothers hived for data. This architecture allows for fancier specialized APIs, such as the balance tracker and the HAF-based block explorer.
Unfortunately, all of that makes the stack more complex, and setting it up can be a challenge. To solve that problem, @blocktrades came up with the haf_api_node one-stop repo, which lets you get just what you need and set it up with minimal effort and user input.
It still requires time and resources, but server resources, not human resources.
Out of the box, assuming a typical setup (with ZFS compression), the datadir takes roughly 2.6 TB.
Useful stuff: https://gtg.openhive.network/
Hivemind dump
After discontinuing v1.27.4, I will no longer update the hivemind dump for non-HAF setups.
The latest dump is hivemind-20240416-v1.26.3-3ba5511b.dump.tar.bz2. Despite the v1.26.3 in its name, it's compatible with the current hived version; hivemind 3ba5511b was the latest version to support non-HAF sync.
Because hivemind is no longer a separate entity in the new architecture, it is not possible to provide just a hivemind snapshot.
It's the biggest disadvantage of the new stack, but the other features outweigh this inconvenience.
Account History API Node snapshot
The latest snapshot for the Account History Node is ah-20240417-v1.27.4.tar.bz2.
Obviously, you can still run hived with the RocksDB account history plugin on 1.27.5.
Exchanges (a.k.a. General-purpose) node
The account history plugin is still recommended for exchanges due to its minimal overhead and because it doesn't require HAF (K.I.S.S.(tm)).
I will continue to serve general-purpose snapshots that include account history for a set of commonly used exchange accounts (and my own, because it’s cool).
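For reference, tracking just a handful of accounts with the RocksDB plugin can look roughly like this in config.ini (a hedged sketch; the account name is a placeholder, and you should check the plugin documentation for the exact option names on your hived version):

```
# Enable the RocksDB account history plugin and its API
plugin = account_history_rocksdb account_history_api

# Track only the accounts you care about (inclusive name ranges),
# e.g. a single exchange account:
account-history-rocksdb-track-account-range = ["myexchange","myexchange"]
```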
TODO
More posts about what's going on, mirrornet updates, and a fast node deploy cheat sheet.
Footnotes
Don't forget to ~~like and subscribe~~ upvote and follow. Or, what's more important, vote for your favorite witnesses (up to 30).
This comment has little to do with the topic of the post, but I love the flowers and their HIVE shape!
I'm so blind that I didn't even realize they were in that shape! I just scrolled by and thought "ohhh pretty flowers".
It tends to happen heheh
The sole purpose of that image is to amuse and offset the boredom of this post. ;-)
That comparison table is nothing short of incredible. The biggest flaw of HIVE I can see is that hardly anyone knows about it. Hopefully we will be able to onboard more developers to build more DApps that can funnel more users to the blockchain. Invennium and @vsc.network are what I'm most looking forward to. The great thing about the social media element we have is that it can work as an incentive to retain users after they try out one DApp, and it can even market other DApps to users on chain.
!PIZZA
!LUV
!CTP
gtg, vimukthi sent you LUV. 🙂 (1/1) tools | trade | connect | wiki | daily
Made with LUV by crrdlx.
Off topic: The cherry flower bunch creates a Hive icon illusion, which is really nice. I wonder how you found it... How are you, by the way? 😁
It's just a Hive logo that was fed as input to a QR-code-conditioned ControlNet (Monster Labs) in a Stable Diffusion tool, which was supposed to create an illusion with cherry blossoms...
I'm fine, thanks, I just got back from Hive workshops at @krolestwo :-)
Wow, this is cool... I hope you had a great time at KBK...
The comparison table at the beginning, from GitHub, between the legacy chain and Hive is impressive. Most people don't know what we have "under the hood".
Oh, actually I was more focused on showing how much work is being done now compared to back then.
If I wanted to compare what's changed under the hood since the fork between us and that old, obsolete Thief Sun's S***it, I would probably be downvoted for creating a low-effort post. So maybe a simple comment here is more suitable, and I will describe in detail what changed in core Steem code since the time they stole funds from users. Ready? Here you go:
And that's it.
Of course there are also changes in configuration (pretty much all they did in the last 2 years), and it looks like this:
```
-shared-file-size = 70G
+shared-file-size = 300G
```
Yes, they increased the amount of memory allocated for the shared_memory.bin file for their account history node setup from 70 GB to 300 GB. And that was what they were suggesting as the default config two years ago. To put that in perspective, we currently suggest 24 GB or so.
:-)
Oh my! I didn't realize they hadn't changed practically anything since the fork. At this rate and without any optimizations, they will soon run out of suitable storage hardware for their account history node. Not that I care, lol.
Even though I love witness updates, I can't say something else other than I absolutely love the featured image. The thing that you can notice Hive in pretty much anything out there is awesome, even if needs a bit of technology to make it possible sometimes!
...or maybe we are just mental and see Hive everywhere ;-)
Could be! This is my 7th year on the chain, so no doubt my life is very well linked with Hive :)
Oh god, we are not only mental, but also very old!
Very experienced!! Not old xD
Wait, what? I thought you were always running the latest develop, and now additionally on Ubuntu 24 😉

Yes ;-) I'm just not exposing it to the production endpoint lately, because there were enough experiments with cutting-edge/bleeding-edge technologies, so I needed a safe harbor and a reference endpoint. I'm happy that I can finally switch to the new stack on all my machines, because handling a mix of them required lots of extra resources for supporting infrastructure.
(No, I don't have Ubuntu 24.04 LTS yet, even on my test servers)
Thank you for the awesome update.
Keep pushing Hive (code) to do better.
!PIZZA
!LOL
!ALIVE
@gtg! You Are Alive so I just staked 0.1 $ALIVE to your account on behalf of @ cryptoyzzy. (2/10)
The tip has been paid for by the We Are Alive Tribe through the earnings on @alive.chat, feel free to swing by our daily chat any time you want, plus you can win Hive Power (2x 50 HP) and Alive Power (2x 500 AP) delegations (4 weeks), and Ecency Points (4x 50 EP), in our chat every day.
lolztoken.com
A duckumentary.
Credit: reddit
@gtg, I sent you an $LOLZ on behalf of cryptoyzzy
(2/8)
Delegate Hive Tokens to Farm $LOLZ and earn 110% Rewards. Learn more.
$PIZZA slices delivered:
cryptoyzzy tipped gtg
@vimukthi(1/5) tipped @gtg
Congratulations @gtg! You have completed the following achievement on the Hive blockchain and have been rewarded with new badge(s).
Your next payout target is 306000 HP.
The unit is Hive Power equivalent because post and comment rewards can be split into HP and HBD
You can view your badges on your board and compare yourself to others in the Ranking
If you no longer want to receive notifications, reply to this comment with the word
STOP