7th update of 2024: early "release notes" for Hived and HAF apps

in HiveDevs • 2 months ago (edited)


I've been super busy for a while, so I fell behind a bit on reporting progress (it's been about 4 months since my last report...). Originally I was planning an earlier release of everything, but in the end we decided to push back the releases so that we could add more features that we considered important to complete.

So much work has been done since my last report that covering all the details would be a bit overwhelming, so I decided to write this report in the form of early "release notes" for all the software, just to keep it compact.

I'm breaking the report up into two separate posts: this first one covers the backend services (e.g. hived, HAF, API servers, client libraries), and the next one, which I'll publish in the next few days, will cover work done on the user interfaces (denser, the block explorer UI, clive).

Hived: blockchain node software

New features and functional improvements

  1. HF28 features:

  2. New configuration options to prevent hived nodes from being flooded by transactions: rc-flood-level, rc-flood-surcharge, and max-mempool-size (the new default settings are probably fine for most hived nodes). RC costs of incoming transactions now start increasing when high activity (flooding) is detected, in order to reduce the number of pending transactions that can build up and consume RAM. Once a transaction is finally put into a block, its cost is calculated using the regular rules, so final RC costs are not affected by this change (see the config sketch at the end of this section).
    https://gitlab.syncad.com/hive/hive/-/merge_requests/1397
    https://gitlab.syncad.com/hive/hive/-/merge_requests/1390

  3. Better handling of shared_memory.bin allocation errors: https://gitlab.syncad.com/hive/hive/-/merge_requests/1372 . Hived will now produce an error if the configured shared_memory.bin size would exceed the free space on the filesystem holding shared memory. This can prevent runtime failures during memory allocation and helps catch misconfiguration of the shared memory file.

  4. Improved error messaging: https://gitlab.syncad.com/hive/hive/-/merge_requests/1359/ https://gitlab.syncad.com/hive/hive/-/merge_requests/1387

  5. Hived API supports keep-alive connections: https://gitlab.syncad.com/hive/hive/-/merge_requests/1363

  6. The generated config.ini now includes the default values of options as comments. This change makes it easier to detect which options have been customized, since such options will have an explicit (uncommented) setting in the file: https://gitlab.syncad.com/hive/hive/-/merge_requests/1361

  7. Eliminated configuration options and plugins previously marked as deprecated: https://gitlab.syncad.com/hive/hive/-/issues/649

  8. Support for a pruned block log. Hived configuration options now allow storing blocks in the local blockchain subdirectory as:

    • a single monolithic file (old legacy way),
    • split into 1M block parts, which allows you to relocate files across different filesystems and symlink them in the blockchain directory
    • pruned to store the latest N 1M block parts (including setting N=0 to avoid storing any blocks)

Block log pruning saves disk space, but it also reduces a node's ability to serve block_api requests via hived (although nowadays blocks are better served via HAfAH anyway) and to P2P-sync other nodes that need to catch up to the current head block.

Hived can automatically split an existing monolithic block_log when the config file is changed from monolithic to split storage.

The block-log-split option defaults to 9999, which enables split mode for block storage. So if you update an existing, already-synced hived to the new version, it will automatically split your block log at startup (this one-time process takes around half an hour); see the config sketch below.

The block_log_util tool has also been extended to support file splitting.
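
To make the new options more concrete, here's a sketch of the relevant config.ini entries. The option names come from the notes above, but the values and per-option comments are illustrative assumptions rather than the shipped defaults (a freshly generated config.ini shows the real defaults as comments, per item 6):

```ini
# RC flood protection (item 2) -- values and semantics are illustrative
rc-flood-level = 20          # pending-transaction level where surcharges begin (assumed)
rc-flood-surcharge = 10000   # extra RC charged while flooding is detected (assumed)
max-mempool-size = 100M      # cap on memory used by pending transactions (assumed)

# Block log storage (item 8)
block-log-split = 9999       # default: split the block log into 1M-block part files
# block-log-split = 2        # pruned: keep only the latest two 1M-block parts
# block-log-split = 0        # store no blocks at all
```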

Bugfixes

  1. Fixed a hived crash that could occur when a block_log file doesn't exist: https://gitlab.syncad.com/hive/hive/-/merge_requests/1367
  2. Fixed concurrency problems between witness and writer threads while evaluating blockchain operations: https://gitlab.syncad.com/hive/hive/-/merge_requests/1268
  3. Fixes related to condenser_api and to producing the legacy JSON form of operations: https://gitlab.syncad.com/hive/hive/-/merge_requests/1381

HAF: framework for creating new Hive APIs and apps

New features and functional improvements

  1. New API calls for HAF apps to allow for more flexible and less error-prone coding of an app's operation processing loop. One of the biggest benefits is that it is easier to make sure a HAF app will always be in a consistent state if it gets interrupted and then restarted (see the sketch after this list): https://gitlab.syncad.com/hive/haf/-/merge_requests/504
  2. Removed the block_num, operation type, and timestamp columns to reduce SQL database storage size: https://gitlab.syncad.com/hive/haf/-/merge_requests/492 https://gitlab.syncad.com/hive/haf/-/merge_requests/496
  3. HAF maintenance actions are now performed by the pg_cron tool: https://gitlab.syncad.com/hive/haf/-/merge_requests/531
  4. The HAF database now uses C collation settings to speed up index searches and creation: https://gitlab.syncad.com/hive/haf/-/merge_requests/514
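
To give a flavor of the new app loop from item 1, here is a minimal sketch of what an app's processing loop can look like. Treat the specific names (hive.blocks_range, hive.app_next_iteration, and the my_app_process_blocks helper) as illustrative assumptions; the merge request linked above documents the actual API:

```sql
-- Minimal sketch of a HAF app main loop using the new-style API.
-- NOTE: hive.app_next_iteration and my_app_process_blocks are
-- illustrative names; consult the MR above for the exact signatures.
CREATE OR REPLACE PROCEDURE my_app_main()
LANGUAGE plpgsql
AS $$
DECLARE
  _blocks hive.blocks_range;  -- block range handed to us this iteration
BEGIN
  LOOP
    -- Ask HAF for the next block range to process. HAF manages the
    -- surrounding transactions, so if the app is interrupted here it
    -- restarts in a consistent state.
    CALL hive.app_next_iteration(ARRAY['my_app'], _blocks);
    IF _blocks IS NULL THEN
      CONTINUE;  -- no new blocks yet; poll again
    END IF;
    -- Apply the app's own processing over the returned range.
    PERFORM my_app_process_blocks(_blocks.first_block, _blocks.last_block);
  END LOOP;
END;
$$;
```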

Bugfixes

  1. Fixed a database integrity problem during HAF node restart when reversible data had been collected: https://gitlab.syncad.com/hive/haf/-/merge_requests/532
  2. Corrected HAF node upgrade issues: https://gitlab.syncad.com/hive/haf/-/merge_requests/529
  3. Fixed errors in the auto-detach procedure for dead apps: https://gitlab.syncad.com/hive/haf/-/merge_requests/525
  4. Fixed bugs leading to multiple-application deadlocks when attaching a context: https://gitlab.syncad.com/hive/haf/-/merge_requests/518
  5. Fixed a deadlock during HAF data dumping: https://gitlab.syncad.com/hive/haf/-/merge_requests/505 https://gitlab.syncad.com/hive/haf/-/merge_requests/511

Hivemind: social media API

New features and functional improvements

  1. Switched to PostgREST as the HTTP server, replacing the previous Python-based implementation, to improve performance and pave the way for a REST-based hivemind API.
  2. Hivemind APIs now additionally return the mute reason: https://gitlab.syncad.com/hive/hivemind/-/merge_requests/626
  3. The bridge.get_discussion API returns information about the pinned post: https://gitlab.syncad.com/hive/hivemind/-/merge_requests/715
  4. bridge.get_account_posts supports the observer parameter (see the example call after this list): https://gitlab.syncad.com/hive/hivemind/-/merge_requests/716
  5. Community support improvements: https://gitlab.syncad.com/hive/hivemind/-/merge_requests/711
  6. Sync performance optimizations: https://gitlab.syncad.com/hive/hivemind/-/merge_requests/706 https://gitlab.syncad.com/hive/hivemind/-/merge_requests/718 https://gitlab.syncad.com/hive/hivemind/-/merge_requests/746
  7. Hivemind now uses the new Reputation Tracker HAF application to calculate reputation data in a way that matches hived's reputation_plugin algorithm.
  8. Reduced Hivemind docker image size: https://gitlab.syncad.com/hive/hivemind/-/merge_requests/726
  9. Hivemind sync processing switched to use the new HAF application main loop: https://gitlab.syncad.com/hive/hivemind/-/merge_requests/701
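
As a concrete illustration of items 3 and 4, here's a minimal sketch of calling the bridge API over the current JSON-RPC interface. The node URL and the author/permlink/observer values are placeholders:

```typescript
// Sketch: fetch a discussion via hivemind's bridge API over JSON-RPC.
// The endpoint URL and all parameter values below are placeholders.
const response = await fetch('https://api.hive.blog', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    jsonrpc: '2.0',
    method: 'bridge.get_discussion',
    params: { author: 'some-author', permlink: 'some-permlink', observer: 'some-observer' },
    id: 1,
  }),
});
const { result } = await response.json();
// result maps "author/permlink" keys to the posts in the discussion tree
console.log(`discussion contains ${Object.keys(result).length} posts`);
```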

HAfAH: account history API

  1. Support for a new REST API, including generated Swagger documentation. The REST API docs for HAfAH are here: https://api.syncad.com/?urls.primaryName=HAfAH

Balance tracker API: tracks token balance histories for accounts

New features and functional improvements

  1. Support for a new REST API including generated Swagger documentation. The REST API docs for balance tracker are here: https://api.syncad.com/?urls.primaryName=Balance+Tracker
  2. Support for balance history APIs: https://gitlab.syncad.com/hive/balance_tracker/-/merge_requests/105
  3. HAF's new app loop used for the operation processing code: https://gitlab.syncad.com/hive/balance_tracker/-/merge_requests/100
  4. Allow custom schema installation to support separate deployments of balance tracker in the same database (e.g. when balance_tracker is referenced by multiple parent applications or uses some specific configuration).

Reputation tracker: API for fetching account reputation

Created a new HAF application that calculates reputation values matching those of the hived reputation plugin.

This application is used by Hivemind and HAF Block Explorer to provide reputation data needed for their APIs.

The REST API docs for the reputation tracker are here: https://api.syncad.com/?urls.primaryName=Reputation+Tracker

HAF Block Explorer

New features and functional improvements

  1. Support for a new REST API including generated Swagger documentation. The REST API docs for the block explorer are here: https://api.syncad.com/?urls.primaryName=HAF+Block+Explorer
  2. API calls accepting block-range parameters can take either a block number or a timestamp as the block range constraints: https://gitlab.syncad.com/hive/haf_block_explorer/-/merge_requests/214
  3. Witness API improvements:
  4. App sync uses new HAF main loop scheme: https://gitlab.syncad.com/hive/haf_block_explorer/-/merge_requests/196
  5. Reputation data is now read from the reputation_tracker app's schema. Previously, reputation calculations were done within the block explorer's sync thread, so syncing the block explorer is now about 2x faster than earlier versions (but we also track many more types of balances now, so overall sync speed is about the same as the initial release from last September): https://gitlab.syncad.com/hive/haf_block_explorer/-/merge_requests/177

WAX API library for Hive apps

Initial release candidate of our Hive integration library for the TypeScript (Node.js/web browser) environment. Wax allows developers to easily create and process Hive blockchain transactions and operations. It is also easy to add support for new custom API call definitions (e.g. API calls for custom HAF apps), which can then be called as regular functions.

On the implementation side, the Wax library directly shares C++ code from the Hive protocol to always match the blockchain's behavior. Wax is an object-oriented library and is coded to allow for "intellisense"-style support from IDEs such as Visual Studio Code, which greatly simplifies writing client code with the library.

We are also developing a Python version of Wax. The Python version is already being used as we develop Clive, a new wallet for Hive, but it is not yet ready for official public integrations.
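
Here's a rough sketch of what client code looks like. A caveat: the package name and method names below (createHiveChain, createTransaction, pushOperation, toApi) are written from memory of the release candidate and may differ, so check the official docs before relying on them:

```typescript
// Sketch of building a Hive transaction with Wax.
// ASSUMPTION: the package name and API surface may differ in the final release.
import { createHiveChain } from '@hiveio/wax';

const chain = await createHiveChain();

// createTransaction fills in TaPoS reference block data and expiration
const tx = await chain.createTransaction();

// push a simple vote operation (plain-object form; placeholder values)
tx.pushOperation({
  vote: {
    voter: 'some-voter',
    author: 'some-author',
    permlink: 'some-permlink',
    weight: 10000, // 100% upvote
  },
});

// inspect the API-ready JSON form before signing and broadcasting
console.log(tx.toApi());
```

Because the transaction is built by the same C++ protocol code that hived uses, it is validated exactly the way the blockchain itself would validate it.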

Beekeeper: lightweight process for securely storing cryptographic keys

  1. New API endpoints: has_wallet and the ability to create temporary (non-persisted) wallets (see the sketch after this list): https://gitlab.syncad.com/hive/hive/-/merge_requests/1355
  2. New API endpoint to allow import of multiple keys: https://gitlab.syncad.com/hive/hive/-/merge_requests/1326
  3. Preliminary support for python wrappers to enable integrating Beekeeper tool into a python workflow: https://gitlab.syncad.com/hive/helpy/-/merge_requests/55
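
As an illustration of item 1, a client might query a running beekeeper instance over its JSON-RPC interface roughly like this. Only the has_wallet endpoint name comes from the notes above; the port, method namespace, and parameter names are assumptions:

```typescript
// Sketch: ask beekeeper whether a wallet exists.
// ASSUMPTIONS: the port, the "beekeeper_api" namespace, and the params
// (session token, wallet_name) are illustrative, not verified API details.
const response = await fetch('http://127.0.0.1:6666', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    jsonrpc: '2.0',
    method: 'beekeeper_api.has_wallet',
    params: { token: 'session-token-from-create_session', wallet_name: 'my-wallet' },
    id: 1,
  }),
});
console.log(await response.json()); // e.g. { ..., "result": { "exists": true } }
```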

Bugfixes

  1. Fixed a bug in wallet synchronization between multiple open network sessions: https://gitlab.syncad.com/hive/hive/-/merge_requests/1344
  2. Improved session timeout handling: https://gitlab.syncad.com/hive/hive/-/merge_requests/1342
  3. Shutdown fixes: https://gitlab.syncad.com/hive/hive/-/merge_requests/1311

API Node Benchmarking

We're still making improvements to the backend services, but we have begun initial tests of "full API node" syncing. Here are some results from tests I ran on one of our faster servers (AMD 7950 with 2x 4TB T700 NVMe drives):

  • HAF replay time: 15 hours
  • Simultaneous replay of hafbe (30 hours), reptracker (35.6 hours), and hivemind (61.8 hours).
  • So the total time to get a fast node up with all services is 15 (HAF) + 61.8 (hivemind, the longest of the three services) = 76.8 hours = 3 days 4.8 hours

I don't expect much improvement in sync times before the release, but these numbers look pretty good, especially since the blockchain grows by about 1 million blocks per month and these times are still close to those from last year's release despite all the blocks that have been added since then.

What's next?

We're still adding features and working on documentation, but we have started doing real-world deployment testing, and the next step will be production testing with real-world traffic. At this point I'm shooting for a December release: I think we have too many useful features that should be released as soon as possible, so we shouldn't delay the releases to add many more.


3 days 4.8 hours
I don't expect much improvement in sync times before the release, but these numbers look pretty good, especially since the blockchain grows by about 1 million blocks per month and these times are still close to those from last year's release despite all the blocks that have been added since then.

Oh, not only that, they are better than the times we had 4 years ago, when we had less than half of the blocks we have now.
It's funny to look at old posts to compare what has changed.
Here's my 4-year-old post, "Hive Pressure 2: How to Answer Hive Questions?"

TL;DR: "Roughly you need 4 days and 9 hours to have it synced to the latest head block."

Of course neither my old server nor even my current one can beat yours, but assuming it's 33% slower, I think it could still keep up with the 4d9h benchmark :-)

Well, it is of course a night and day comparison if we look at code from 4 years ago.

For some reason (I can't remember why), I took a look at the state of the Steemit code a couple of days ago to see how long it will be before it collapses under its own weight.

It turns out the only "fixes" that have been made all limit the functionality of the code so that their servers don't just die.

As one example: despite having about 1/10th of our transaction volume, their servers apparently began having problems serving up account histories, and this was loading down the API servers and resulting in locking issues in the hived nodes that were serving up account history info.

What was the solution? Speed it up or offload the work? Nope! Instead, they now only allow fetching the last 7 days' worth of operations for an account :-) I guess you're supposed to go to a block explorer to see the rest...

That's a hilariously bad solution, but absolutely 100% on par with what Sun would mandate, so I'm not at all surprised.

Even giving Sun credit for this is too much credit: as far as I can tell, it was done by some independent dev who can see the upcoming wall that Steem is driving toward and is trying to slow down the car.

I had gathered previously that the RC cost of transactions would go up if things got too busy. Does this happen much? I assume it protects us from certain kinds of attack.

There is some code like that, but it is a bit different from the change we just made. That code charges extra for particular resources when it detects they are being used a lot, and in that case the actual final RC cost goes up; it is not just a temporary cost. But none of those increases protected well against this particular form of attack.

We developed this solution after flooding a testnet with as many transactions as we could possibly throw at it, all while leaving the block size at its current level (64K), which results in the worst-case memory usage for pending transactions (because the flood constantly generates more transactions than can fit in the blocks, so they build up). Such an attack isn't particularly easy to mount (we had to write special versions of hived that could even manage it), but long term it is important to have this kind of resilience. Part of this attack also only becomes possible because we are increasing the time that pending transactions can stay in memory (which will be very useful for transactions signed by multiple parties, since it takes some time for each party to pass around and sign the transaction).

You are doing well and I hope you have planned unique features for the hive platform. All the best!

Wow, thanks so much for the detailed report! You can see that there has been a lot of progress since the last update.

I always say that Hive is booming. I'm proud to have been part of Hive since its birth. Great updates; every new feature brings us closer to real success.

This is a great, detailed report. For me it's a manual that pushes me to research and learn the many technical terms I don't know; besides being a report, it's a guide to understand and study.


I've been checked out from Hive for a while, but I wanted to ask: if I want to develop a HAF dapp on Hive, will it take care of micro-forks or will I need to handle that? I have a web 2.0 game that I want to tokenize.

HAF was designed to automatically handle micro-forks. It actually has two modes: 1) in the default mode, it tracks all changes to your app's tables up to the last irreversible block, so it can revert those changes in case of a fork, and 2) irreversible mode, where the blockchain data is only made available to your app after a block becomes irreversible. Originally I planned for a lot of apps to use the default mode, but with OBI, using the second mode works pretty well too (and it is slightly faster).

Amazing. I don't see anything about HAF in the Hive developer portal; how can I get started?

First, I suggest getting familiar with how the HAF API node stack works. Follow the readme instructions here: https://gitlab.syncad.com/hive/haf_api_node

Have a nice day my friend 😘
