Originally I planned to release in December, but just before the holidays kicked into full gear, I found an issue with the HAF upgrade process during my testing. The HAF upgrade process enables a node to update its software without having to do a full replay (a full replay takes several days, whereas an upgrade typically completes in minutes or less).
While this problem wouldn't have caused any immediate issue for API node operators, since they have to do a full replay for this release anyway, it would have forced them into yet more full replays if we found any further problems after the release.
So based on this, I decided to delay the official release of the HAF API node suite until after the holidays when all our devs were back and we had a fix for the upgrade issue (which we now have).
So we’ll be releasing it officially this week. It’s available now as 1.27.7rc16 for anyone who wants to get a jump on replaying it.
Also, here’s a quick summary of some of what the BlockTrades team has been working on since my last report. As usual, it’s not a complete list. Red items are links in GitLab to the actual work done.
Hived: blockchain node software
Mostly minor changes here recently.
Upcoming work will focus on an overhaul of the transaction signing system. Originally I had vague hopes we might be able to complete those changes in February, but realistically they’re more likely to land somewhere in the March/April time frame.
These changes will be included as part of the hardfork and they are also tightly related to the support for “Lite Accounts” using a HAF API (my plan is to offer a similar signing feature set across the two systems to keep things simple). So the Lite Account API will probably be rolled out on a similar time frame.
HAF: framework for creating new Hive APIs and apps
Most of the work done recently was performance optimizations made as a result of benchmark testing.
- Add index status tracking and new functions for HAF index management. This allows apps to add extra indexes to HAF tables and also tracks which apps need those indexes. If an app registers the requested index before HAF finishes a replay, the index will be built non-concurrently along with the other HAF indexes. If an app requests it later, the index will be built concurrently to avoid disrupting the operation of other HAF apps (see the first sketch after this list).
- Add functions for apps to request/wait for full vacuums and allow for concurrent index creation during live sync. During testing we found that some heavily modified app tables grew substantially during an app’s replay. By periodically performing full vacuums on these tables, the tables shrink back to a reasonable size and the apps replay much faster (around 2x faster in the case of reputation tracker, for example, if I recall correctly). Smaller tables also mean that API calls referencing these tables are faster. This functionality is primarily useful for tables that receive a lot of UPDATE queries, as it clears out dead tuples (see the VACUUM FULL sketch after this list).
- Analyze tables after index creation. Instead of running 'analyze' in a separate transaction after all indexes are created during a HAF replay, each table is now analyzed in the same transaction that creates its indexes. This way, apps are told 'HAF is ready' only once HAF has finished analyzing the tables and is therefore configured for maximal performance (see the final sketch after this list).
- Speedup processing associated with keyauth state provider
- Bug fixes and improvements to the HAF upgrade procedure. This is the fix mentioned at the top of this post.
- Repair a threading bug involving schemas
- Implement state change tracking for HAF apps for performance reporting
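To make the index-registration behavior more concrete, here is a plain-SQL sketch of the two build paths. The table, column, and index names are illustrative only; this is standard PostgreSQL, not HAF's actual registration API.

```sql
-- During a replay, no other apps are reading yet, so a plain CREATE INDEX
-- is used: it blocks concurrent writes to the table but builds faster.
CREATE INDEX ops_by_type_idx ON hive.operations (op_type_id);

-- On a live node, CONCURRENTLY is used instead: the build is slower and
-- cannot run inside a transaction block, but it does not block the reads
-- and writes of other running HAF apps.
CREATE INDEX CONCURRENTLY ops_by_type_idx ON hive.operations (op_type_id);
```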
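The full-vacuum support relies on standard PostgreSQL behavior, sketched below; the table name is borrowed from the reputation tracker discussed later, with its schema qualification omitted.

```sql
-- UPDATE-heavy tables accumulate dead tuples: a normal VACUUM only marks
-- their space as reusable, so the file on disk stays large. VACUUM FULL
-- rewrites the table into a compact new file (taking an exclusive lock
-- while it runs), so later scans and index lookups touch fewer pages.
VACUUM FULL account_reputations;

-- Checking how much dead space tables are carrying:
SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```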
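And a minimal sketch of the analyze-with-indexes change, again with illustrative names. Unlike VACUUM, ANALYZE may run inside a transaction block, so the planner statistics are guaranteed to be in place before the commit that signals readiness.

```sql
BEGIN;
CREATE INDEX posts_author_idx ON hive.posts (author_id);  -- illustrative index
ANALYZE hive.posts;  -- refresh planner statistics in the same transaction
COMMIT;
-- Only after this commit are apps told 'HAF is ready', so queries never
-- run against an indexed but not-yet-analyzed table.
```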
Hivemind: social media API
A huge amount of work was done to hivemind as part of our final optimization work prior to release. Below represents only a small portion of this work:
- Full vacuum hive.posts after massive sync for better performance
- Speedup vacuuming after replay by only vacuuming hivemind tables and doing it in concurrent threads
- Speedup json checking for API calls
- Speedup list_pop_communities
- Speed up queries by removing function call from filtering process
- Add author_id to posts parent_id_id_idx to allow get_account_posts_by_replies to do an index-only scan (see the sketch after this list)
- Reduce storage by removing list_comments API and associated table/indexes
- Set reasonable API return limits to reduce unnecessary server loading and prevent API amplification attacks
- Cleaned up and optimized many API calls (various merge requests)
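As a sketch of that index-only-scan change: once author_id is part of the index, a query that filters on parent_id and reads only (id, author_id) can be answered from the index alone, without touching the table's heap pages at all. The table definition below is an assumption for illustration, not hivemind's actual schema.

```sql
-- Extend the index with author_id as a trailing key column:
CREATE INDEX parent_id_id_idx ON hive_posts (parent_id, id, author_id);

-- A query shaped like get_account_posts_by_replies should then show
-- 'Index Only Scan using parent_id_id_idx' in its plan:
EXPLAIN SELECT id, author_id FROM hive_posts WHERE parent_id = 12345;
```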
HAfAH: account history API
- Automate publishing hafah images to remote registry when new images get tagged
- Grants to hafah_user need to be made at the end of installation to properly grant roles to backend functions; also fix rewriter rules
- Include case when signature is null in get_transaction
Balance tracker API: tracks token balance histories for accounts
- Push the postgrest-rewriter to docker registries for tagged releases
- Vacuum full account_balance_history table every 10m during massive sync
- Fix healthchecker issue
- Add application_name parameter to the postgres connection string to make it easier to see what app is doing what in pgAdmin and friends
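For reference, application_name is a standard PostgreSQL connection parameter; the URI below is illustrative rather than the actual balance tracker connection string.

```sql
-- Connection URI with the parameter set (illustrative):
--   postgresql://btracker_owner@haf-server/haf_block_log?application_name=balance_tracker

-- Each session then carries its label, making it easy to see which app is
-- running which query:
SELECT application_name, state, left(query, 60) AS query
FROM pg_stat_activity
WHERE application_name <> '';
```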
Reputation tracker: API for fetching account reputation
- Do a vacuum full every ten minutes during massive sync to keep the size of the account_reputations table reasonable
- Register HAF indexes using new index registry API
- Fix healthchecker issue
- Allow the docker health check for the block processor to report health
HAF Block Explorer
- Vacuum full some of the balance tracker tables
- Register HAF indexes
- Rewrite and speedup block searching
- Replacing UNION with UNION ALL in the block processing function results in significantly improved query performance (see the sketch after this list)
- Remove cache from last synced block
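The UNION vs. UNION ALL difference mentioned above follows directly from how PostgreSQL executes the two; the table and column names below are illustrative, not the block explorer's actual schema.

```sql
-- UNION must deduplicate the combined result, which typically costs a sort
-- or hash over all rows. When the branches cannot produce duplicates (e.g.
-- two disjoint block-number ranges), UNION ALL returns the same rows while
-- skipping that extra step entirely.
SELECT num, created_at FROM blocks WHERE num BETWEEN 1000 AND 1999
UNION ALL
SELECT num, created_at FROM blocks WHERE num BETWEEN 5000 AND 5999;
```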
WAX API library for Hive apps
I expect we’ll officially release the Typescript version of wax in the next couple of weeks. Below is a sample of recently completed work:
- Added support for account authority update operation
- Improvements to key leakage detection
- Working on the Python version of wax: it can now create and sign transactions, but currently only the offline version is supported.
- Continuing work on a generic UI component for the health-checker.
HAF API Node (Scripts for deploying and managing an API node)
- Tagged rc16 release candidate for all the software components for a HAF API node
- Increased cache time for some API calls based on production performance analysis
- Force nginx to parse and store the request body in a form that can be logged by performance analysis tools
- Allow hivemind's block processor 30s to shut down cleanly before killing it to prevent potential problems when interrupted during massive sync
- create_zfs_dataset now creates an empty snapshot, and snapshot_zfs_datasets checks that shared_memory.bin is OK
- Separate the hivemind installer from the block-processing container so that the postgrest server can start as soon as the installer finishes
- Switch to PostgREST implementation of Hivemind API server
- Fix balance tracker so that it can be run standalone again
- Create a tool to simplify using specific versions of all apps (e.g. the current develop versions)
In related work, we also created a handy new tool for managing submodule dependencies across HAF and HAF apps: https://gitlab.syncad.com/hive/update_submodules
What's next?
We’ve essentially finished production testing for the rc16 HAF API node software, except for the final test of replacing it as our production software on api.hive.blog. We expect to complete that operation in the next few days (by Monday at the very latest).
We updated the release candidate testing server (https://api.syncad.com) to rc16 today, and I recommend all apps do final testing against this API node ASAP, prior to the switchover on https://api.hive.blog itself, to reduce the chance of a late discovery of compatibility issues.
Once the software is officially released and running on api.hive.blog, I’ll make another post to provide some performance metrics for the new release and other information of interest.
This is great news! It's exciting to hear that the HAF API node software is finally being released. I'm sure this will be a huge benefit to the Hive community. I'm particularly interested in the improvements made to the HAF upgrade process; this will make it much easier for node operators to stay up-to-date with the latest software. I'm also impressed with the work that has been done on Hivemind: the optimizations you've made will make it a much more efficient and reliable social media platform. I'm looking forward to seeing how the new Lite Accounts API will work. I think this will be a great way to make Hive more accessible to new users.
Thanks for sharing this update. I'm sure it will be of interest to many in the Hive community.
One of my wishes for the Pinoy Hive community is to have a Pinoy Hive witness with a bare-metal implementation (and not a VPS) located here in the Philippines.
One that will support all Filipino Hive users and participate in local Hive events and onboarding.
I am more than willing to reduce my witness votes for many others and focus on the “Pinoy Hive Witness”!
@guruvaj, that is a terrific initiative! A dedicated Pinoy Hive witness with bare-metal implementation in the Philippines would be an important contribution to the community. Supporting Filipino Hive users and participating in local events is critical for growth and engagement. Your willingness to direct your witness votes toward this initiative displays a strong commitment to the local Hive community. I wish you the best of success in getting this done!
DBuzz used to carry that flag, but since funds became scarce, it has been closed for a few years now.
But we need true-blue Pinoy Hive devs and techpreneurs to do this for the community.
This is a critical topic, and I agree. Historically, lack of finance has hampered projects such as this. It is critical to find devoted Filipino Hive developers and techpreneurs to create and maintain this witness. It is a large endeavor, but the potential benefits for the Filipino Hive community are tremendous. I am hopeful that a solution may be found to meet the financing and development needs.
This is my last response, sir, because my RC is under threat again. 😂 Good morning and God bless you more, sir. ❤️
No worries, focus more on reading and scripting your next awesome blog.
Rest your acct for the next 24 hrs.
Can't wait for the software to be released 😁
Wow, releasing it this week is still good. I really appreciate the effort you put in to make things work. Please keep up the good work and your good plans.
Thanks for the update!
@blocktrades Sounds good! I'll definitely be sure to check out the rc16 testing server and run some final tests before the switchover. I'm excited to see the performance metrics for the new release. I'm sure it will be a significant improvement.
How would we even be able to release the software, without your rigorous testing?
That's a great question! Our rigorous testing is what allows us to confidently release high-quality software that meets our users' needs and expectations. It's an investment in the reliability and stability of the product.
Wow, this is great; honestly, you guys are really putting in a lot of work.
May God continue to bless you guys
Thanks for keeping us updated and all the work on here. 💪🏻⚡
Sweet! With this full release I can finally have a look at wrangling it to run on my setup. I'd need to get it to not complain about anything ZFS-related, as I'd prefer to just run it on a single VM on my machine, and having multiple layers of RAID and ZFS probably won't help performance all that much.
Hopefully won't be too difficult.
Normally we recommend it be run on a bare-metal system if possible, since everything is encapsulated in Docker containers. But IIRC you're using Proxmox, and it should not be a problem to run everything in a VM anyway.
Yes, bare metal is ideal, but a Proxmox VM should be perfectly fine. Thanks for clarifying!
It's great that the technology keeps advancing for the benefit of Hive.
You did a huge work! Thank you for all you do!
Thanks for sharing the updates. It sounds exciting.
Could you please stop downvoting all my posts on day 7, @blocktrades?
I have been blogging here every day for 7 years now. … Thanks.
So excited to read the news! An awesome way to kick off the new year after a loss in the family on the 1st of January. At least my 2nd family (the chain) is doing better!