@timcliff's Witness Hardfork Approval Standards v0.1

in #witness-category

I am really happy with the outcome of hardfork 20. That is obviously a crazy thing to say after all the chaos that occurred, but let me explain.

Witnesses' Role in Hardfork Approvals

Prior to HF20, there was very little focus put on the witnesses' responsibility to test and review code before accepting a hardfork. There was also no effective way for witnesses to advocate or push for change without standing in the way of the progress that the development team was making.

After the events of HF20, the conversation around witnesses' role in the approval of hardforks has dramatically shifted. This is important, because the next big hardfork (SMTs) is going to be 10x as complicated as HF20, with a lot more things that can go wrong.

Also, the stakes are going to be much higher.

It is great that HF20 brought attention to many of the issues that were there, so we can have a conversation about how to fix them.

SMTs

SMTs are going to be getting a lot of attention from people outside the community, and if the launch of SMTs is not successful, a lot of people are going to notice. Developers may end up taking their business elsewhere (to another blockchain), and they definitely won't be telling their friends to use Steem.

If SMTs are successful, though, they have the opportunity to draw in a lot of new businesses that are interested in building on Steem. These businesses will have the potential to bring a lot of capital and new users to Steem.

We want the launch of SMTs to be a huge success, and in order for that to happen, changes are needed.

Hardfork Approval Standards v0.1

This post is following the lead of the many other witnesses who have started the conversation about what changes are needed in order to prevent another incident like we had with hardfork 20.

Below is a draft of the standards that I plan to use for deciding whether to approve or deny a hardfork going forward. While no set of standards will 100% prevent another issue from ever happening, I believe that by following these standards we can significantly reduce our risk of another failure by increasing our chances of uncovering issues before they reach the mainnet.

Keep in mind, these standards are an initial draft. I am looking for feedback. I plan to use the feedback I receive to create my "version 1.0" standards that I will actually follow for approving the next hardfork.

Hardfork Proposal

The development team should create a detailed proposal describing the changes they are planning to make. The proposal should clearly explain what they are planning to change, and the reasoning behind it.

The stakeholders should discuss the proposal and express their views for or against it. Note: All users who hold SP (even a small amount) are considered stakeholders.

As a witness, I will evaluate the proposal and feedback from the stakeholders. I will publicly express my views for or against the proposal.

Note: It is important to make the distinction that my support for the proposal does not mean that I will ultimately accept the hardfork. There are many other factors that may ultimately lead to my rejection of the hardfork before the final vote is taken.

If I am fundamentally opposed to the changes being proposed, I will make it very clear that I will not be accepting the hardfork and provide my reasons why. I will do this as early as possible, so that:

  1. Developers have the opportunity to adjust their proposal if needed.
  2. Developers do not spend time coding something if there is not consensus on its approval.
  3. Stakeholders can adjust their witness votes if they disagree with my position.

No Surprise Changes

If additional functional changes are to be included in the hardfork which were not part of the original proposal, or if changes that were part of the original proposal are significantly changed or removed, the development team should communicate the changes as a proposal amendment. The amendment should follow the same review process as the original proposal.

Development Tracking

There should be an issue in GitHub for every change that is being made. The issue should have an appropriate user story that explains what is changing and why the change is being made. All pull requests should be linked to an issue. Witnesses should be keeping up with the development as it progresses in GitHub.

The development team should also be able to provide the witnesses a list of all the issues in the hardfork, so that witnesses can do an appropriate comprehensive review of all the changes included in the hardfork.

Questions and Concerns Addressed

Witnesses and stakeholders should have the opportunity to ask questions about specific changes via their issues in GitHub. The development team should respond to all reasonable questions and concerns that are documented in the GitHub issues.

Automated Tests

The Steem blockchain code already has extensive automated testing in place (which the development team maintains) that verifies updates work the way they are intended and that new changes don't break existing functionality.

These automated tests should continue to be updated to account for the new cases that need to be verified as functionality is changed. Witnesses and stakeholders should open issues if there are test cases that are not accounted for by the automated tests when new changes are checked in.

Test Environment

It is critical that the community be provided with a test environment to sufficiently verify all of the changes before they go live in the mainnet. There are many aspects to this, which I will outline below. It also may be necessary to run multiple testnets in parallel in order to provide sufficient coverage of all the different scenarios in the allotted time.

MVP Verification

Testers must be able to verify the "minimum viable product" (MVP) functionality of all changes included in the hardfork on the testnet.

Testnet Tool Infrastructure

This is one of the most crucial aspects of being able to properly test on the testnet. We need our infrastructure of tools on the testnet to match what is there on the mainnet.

  • This includes a condenser (steemit.com) instance, a block explorer (i.e. testnet.steemd.com), cli_wallet capabilities, and SteemConnect (i.e. testnet.steemconnect.com).
  • There should also be instructions provided on how to use all of the developer libraries (Steem Python, Steem-JS, Beem, etc.) to connect and interface with the testnet.
  • If there are new blockchain API methods that are exposed, tools and/or libraries to access and use those methods should be provided.
  • Ideally all third-party websites and tools (SteemPeak, Steem Monsters, Vessel, Voting Bots, SteemAuto, etc.) should create a testnet or sandbox version of their products that users can use to experiment with on the testnet.
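To make the mainnet/testnet switch concrete for library users, here is a minimal sketch in Python. The testnet URL below is a placeholder of my own, not an official endpoint; the real endpoints would be announced by the development team along with the testnet itself.

```python
# Hypothetical endpoints: only the mainnet URL is real, the testnet
# URL is a placeholder standing in for whatever the dev team announces.
MAINNET_NODES = ["https://api.steemit.com"]
TESTNET_NODES = ["https://testnet.example.com"]  # placeholder

def nodes_for(network):
    """Return the list of RPC endpoints for the requested network."""
    if network == "testnet":
        return TESTNET_NODES
    return MAINNET_NODES

# With a library such as beem, pointing a client at the testnet is then:
#   from beem import Steem
#   stm = Steem(node=nodes_for("testnet"))
```

The point of the instructions requested above is that this one-line switch should be documented for every supported library (Steem Python, Steem-JS, Beem, etc.).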

Test Plan

In the steem.chat witness channel, @ned asked how the hardfork changes should be verified.

My proposal for this is that the development team should be responsible for providing the community with a test plan. This test plan should be a documented series of tests that they performed prior to the release of code in order to verify the proper functioning of every change. The test plan should be executable by community testers on the testnet.

Auditors / testers should be able to verify the proper functionality of the hardfork by executing the test steps in the test plan on the testnet.

Community members should also be expected to perform additional tests if they think of use cases that were not covered by the documented test steps.

Pre-Fork Parallel Operation Test

There is a period of time when the new hardfork code is installed on some nodes, but not all, and the hardfork time has not yet occurred. Testing this scenario should be included in the test plan in order to ensure that the "parallel operations" period does not cause any unexpected issues, including unplanned forks.

Simulate "Real World" Conditions

As much as possible, the testnet should simulate the real world conditions that will take place on the mainnet. If there are real world conditions that cannot be accurately replicated on the testnet in order to fully verify the proper operation of a change before it goes live, the limitations should be properly communicated to the community ahead of time, along with the risks involved.

Sufficient Time to Test

The testnet should run for a sufficient amount of time in order for testers to be able to verify all of the changes. For changes that require time for the test to play out (such as verifying a new post receives proper payout seven days after it is created) an instance of the testnet must run uninterrupted for long enough to verify the end-to-end functionality of the change.

If patches are released that invalidate the results of previous tests, additional time should be given in order to re-verify the functionality that needs to be re-tested.

Testers have a responsibility to use the time that is given to test optimally. (i.e. do not wait until the night before the hardfork to start testing and complain that there is not enough time to test.)

Witness Participation in Testnet

Witnesses should be expected to participate in the testnet by running a block producing node. This will allow them to verify the fork occurs as expected on their node, as well as verify any witness-specific functionality included in the fork. Along with this, witnesses should be expected to supply a price feed and submit reasonable witness parameters to facilitate testing. Stakeholders should vote in the witnesses on the testnet who are running testnet witness nodes.

Stable Build Running for 14 Days

The testnet should have a stable build running for at least 14 days prior to the hardfork with no critical issues found. If a critical issue is found that requires a patch to be deployed, the 14 day countdown should be restarted.
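The 14-day rule is easy to state precisely: the countdown restarts at every critical patch, so the earliest acceptable fork date is 14 days after the most recent reset. A small sketch (function and variable names are mine, not from any official tooling):

```python
from datetime import date, timedelta

STABILITY_WINDOW = timedelta(days=14)

def earliest_fork_date(stable_build_date, critical_patch_dates):
    """Earliest date the hardfork could go ahead: 14 days after the most
    recent event that reset the countdown (the initial stable build, or
    any patch deployed to fix a critical issue)."""
    last_reset = max([stable_build_date, *critical_patch_dates])
    return last_reset + STABILITY_WINDOW

# A critical patch five days into the window pushes the date back:
print(earliest_fork_date(date(2018, 11, 1), [date(2018, 11, 6)]))  # 2018-11-20
```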

Documentation Updates

It should be expected that the documentation for the Steem blockchain be kept up to date along with any changes that are made.

This includes:

  • New API methods are documented in developer portal.
  • Changes to existing API methods are documented in developer portal.
  • New steemd parameters are documented and explained.
  • All changes to the config.ini file are documented and explained.
  • Any updates to the build process are explained in the release notes.
  • Issues are created to update the whitepaper, bluepaper, and steemit.com FAQ as needed.
  • Recipes are provided in the developer portal for common expected tasks (example: how to calculate the RC cost of a transaction).
  • Complicated new functionality with lots of moving parts (such as the new RC system) should have a wiki article or Steem blog post which explains how it works (example: RC Bandwidth System).

Community Expectations

The Steemit dev team has an important/crucial role to play in making this a success, but the witnesses and stakeholders in the community have a large responsibility in this as well. There are several things that WE should take responsibility for, including:

  • EVERYONE should be expected to test on the testnet. Testing is a team effort. The more people we have trying various things in the test environment, the more scenarios we will cover, and the more issues we will find.
  • Stakeholders should be expected to create issues in the appropriate GitHub repository if they find something wrong. If creating an issue is not something they are comfortable with or capable of doing, then they should reach out to a witness or other community member who can create one on their behalf. If an issue is critical, the severity should be properly communicated in the GitHub issue.
  • Witnesses need to let the development team know loudly, clearly, and as early as possible if something is a showstopper. In other words if there is an issue that needs to be fixed in order for a witness to approve the hardfork, they need to make it very clear that they are not voting for the hardfork until the issue is fixed. Both Steemit and the witnesses should have a clear picture of all the issues that exist based on whatever has been reported in GitHub (see above).

Final Veto Authority

Ultimately witnesses have final veto authority on all hardforks. Even if all of their documented standards are met, if they uncover something that they feel presents a threat to the network/stakeholders which was not covered by their standards, they can always deny or delay the hardfork by voting 'no'.

Witness voting standards are not meant to inhibit this ability, but witnesses should do their best to make it clear ahead of time (via their standards) under what grounds they will reject a HF.

IMO, communication is the most important thing. If a witness is going to vote 'no' on a hardfork, they have a responsibility to make their position known as early as possible, so that the development team and stakeholders can adjust accordingly with the least amount of wasted resources.

Evolving Standards

It is expected that these standards will continue to evolve over time. While the list I have above is not going to prevent every possible issue, it is a big step forward from where we currently are. Most importantly, they are standards that I believe are possible to meet by the time we have the next hardfork.

Feedback Requested

Whether the stakeholders are on board with supporting witnesses' standards, and whether the development team is willing to adapt to the standards witnesses present, are two critical components to making this a success. Hopefully we can find the right balance in order to get the right parties on board.

Please provide your feedback in the comments below.


Hello @timcliff Sir
Will Steemit ever run out of rewards?

The short answer is that the blockchain is programmed to always continue dispersing some new coins for rewards. More information on this can be found in the whitepaper. There are a lot of other factors to consider though. Things are not guaranteed to last forever.

Thanks for your answer.
I also think that Steemit will make itself into one of the big social media platforms.

My proposal for this is that the development team should be responsible for providing the community with a test plan. This test plan should be a documented series of tests that they performed prior to the release of code in order to verify the proper functioning of every change. The test plan should be executable by community testers on the testnet.

I think one way to go about this is to look at unittests, like those found here:

https://github.com/steemit/steem/tree/master/tests

Then, do a code review of a unittest, similar to this:

https://steemit.com/steem/@inertia/code-review-clear-null-account

Once there's a clear description of what the c++ tests do, there's a path to community testing. It's not always clear-cut because the unittests can do things like summon funds into existence or simulate the passage of time. That's where more tools will be required.

IMO there is a role for both automated tests and 'human' tests. The number of people who are qualified to understand and properly audit the automated tests is very limited. If we rely on people being able to do this in order to properly test, we will be significantly underutilizing the resources we have (i.e. the community members who may not understand C++, but are willing to spend time playing around in the testnet).

Agreed, automated tests can only account for a certain number of cases. Unpredictable humans will always find ways to use a system beyond what the developers intended.
Your proposal above is good.

Agreed on all points Tim, good work. I only have to add at this point that I think bridging the gap between community, witnesses, and developers, plus also providing management tools and oversight, might best be achieved by using a tool like http://www.testquality.com that integrates into GitHub for free.

I think there is a need for two distinct types of test cases. One focused on the technical aspect of the changes and one for non-technical individuals.

Some witnesses are not tech-savvy enough to successfully run highly technical tests, and asking them to run those tests would only waste the time of those who can.

But I'm sure they can run non-technical tests. Whether those tests should be the same as the ones for the vast majority of stakeholders, or include some more restricted areas they can access from their positions as witnesses, I don't have enough information to know what's better.

But I think there should be differentiated test cases, based on the level of tech expertise and access to certain tools and server setups, and certainly some for the large userbase who understand their role as stakeholders.

@timcliff ... the value you bring to Steem is immeasurable. You lead by way of a great example, and I know many people look up to you for the research you do and the summaries you give.

Thanks very much, from me, and a lot of other people. :)

Great job, @timcliff. (I tried to write something more than "nice post!" ;-) ) I can't find any significant disagreements here so I could basically do a Ctrl+C, Ctrl+V to get a decent draft of approval standards. ;-)

Testing is a team effort.

I would like to point out, that witnesses are ... just witnesses.

Yes, witnesses play a very important role here. However, we are nothing without those who vote for us.
Feedback from stakeholders is very important. Yes. Before the Hardfork.
Also, websites/tools providers should timely upgrade software to support upcoming changes and let us and developers know about possible problems.
I still can see tons of old style requests for database_api hitting my API endpoints. Guys, seriously?

So I think your suggestions are very much in the right direction. Do you think some of what you suggest can be turned into a procedure? For example:

  1. A dev team (Steemit, Inc.'s or any other one) presents a suggested change to the protocol (just conceptual, no code).
  2. The witnesses give feedback and include an Approve/Disapprove/Need Further Info position in their feedback.
  3. The dev team adjusts the proposed change until such time that there is a 17/20 consensus of the top 20 witnesses.
  4. The dev team starts coding and creates a pull request when done.
  5. The witnesses test the pull request and each one gives a green or red light (i.e. "This change passes all my testing" or "This change does NOT pass all my testing").
  6. Iterate until there are only green lights from the witnesses.
  7. Schedule the hardfork on a testnet, run it, and wait to see that all tests pass (including things like 7-day payouts).
  8. Schedule and run the hardfork on the mainnet.
  9. Define what to do if things break on the mainnet.

So a more formal and standardized/repeatable process that would increase the likelihood of the Steem protocol being continuously updated in a smooth and stable way.

All this could conceivably be done in Github, using its project management capabilities. Ideally, someone would create a tool that is specific to Steem and can be used for planning, scheduling, testing and running of hardforks on this blockchain. The tool would incorporate the above procedure or something akin to it. But the current process - using a social media platform (Steemit.com in this case) to coordinate feedback gathering, communicate suggested changes, and handle other important project management aspects - does not seem appropriate at all.

Over time, the procedure for doing hardforks would obviously change and be improved upon. Ideally there would be improvements to it after each hardfork (i.e. we'll get better at doing hardforks after each hardfork). But if the procedure is documented, then people can do pull requests to suggest improvements and talk specifically about the steps of the procedure. The witnesses would be the ones who accept or reject suggested changes to the procedure for doing hardforks, or to the software tool that is used to plan, schedule and run the hardforks.

Curious to hear what you think.

It is unlikely that it will turn into that formal of a proceeding. It will most likely end up needing to be witnesses putting pressure where/when needed if their standards are not being met.

It seems to me that it will turn into whatever we turn it into. If you see a more standardized procedure for doing hardforks as something desirable, do you have any suggestions for a person like me who has extensive agile project management and organizational skills as to how I can contribute to making it happen?

It seems to me that it will turn into whatever we turn it into.

It is not that easy, because of the "we" part. As a stakeholder such as yourself, you can make suggestions, but there is no way for you to get people to follow them. Even as a witness, I can do the same, and I can be a little bit more forceful with my suggestions by withholding my vote from a hardfork, but even that isn't necessarily going to be enough to force action. The dev team still may not do what I suggest, and the stakeholders may vote me out and replace me with a witness who has different standards.

do you have any suggestions for a person like me who has extensive agile project management and organizational skills as to how I can contribute to making it happen?

You can reach out to @andrarchy if you are interested in helping in an official capacity.

Thanks. Following your suggestion, I reached out to @andrarchy and he suggested that I make a full post with my proposal. So I did: https://steemit.com/steem/@borislavzlatanov/steem-s-governance-towards-a-continuous-improvement-system

I am interested to hear if this would make for a more stable governance process from a witness point of view, with clearly distributed roles.

It's well thought out. To a large extent, we already are doing something basically along those lines. IMO it is a little bit too formal though. Getting a lot of the parties mentioned in the post to do things a certain way is a little bit like herding cats.

Thank you for reading it. Is what you're already doing happening on Slack? Because what I see on Steemit is more so people pushing for their own point of view rather than looking at metrics and designing an experiment to determine how well their idea would work.

Yeah, it can be as formal as needed. Getting a lot of the parties to participate in a given process can happen if they see benefit from using the process. They have to have confidence that it's a process we're all agreeing to use, and we're all agreeing to adjust it to suit our collective needs, and we'll use data to determine what works how well, rather than the one with the most power/influence making the decision. If people have confidence that this is indeed the case, they will participate if they are asked to and it is shown to them very clearly how to participate.

Witness Participation in Testnet

This is a much needed aspect. Apparently "Who are the testnet witnesses?" (#22) was discussed way back in 2017, and I am really glad to see that we are finally getting this done for real.

Couple of doubts:

  1. How will the price feeds work?
  2. What happens if 17 of 21 witnesses disagree on a particular fork? (BFT scenario.)
  1. Witnesses should produce a price feed on the testnet. I will update the post for that.
  2. If 17/21 witnesses are not voting in favor of a hardfork at the time the hardfork is scheduled, it will not go into effect and the "pre-hardfork" version will continue as the current version. Technically if 17/21 witnesses vote in favor of a HF after the scheduled time, it will still occur, so "delaying" a hardfork is technically possible.
  1. Ok. will look out for your post.
  2. What happens if the 17 are no longer the same elected witnesses after they disagreed on a certain HF? I.e., at time n, 17 disagreed on an HF, but 3 seconds later a few of the 17 get pushed down to position 21 or lower. What would be the next step?

If at any time after the scheduled hardfork time 17/21 (or more) of the witnesses in the 21 block round are voting in favor of the hardfork, it will occur.
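The activation rule described above can be captured in a few lines. This is an illustrative sketch of the rule, not the actual consensus code:

```python
REQUIRED_APPROVALS = 17  # supermajority of the 21 witnesses in a block round

def hardfork_activates(round_witnesses, approving_witnesses):
    """True once at least 17 of the 21 witnesses scheduled in the current
    block round are running (and thereby signalling for) the new version.
    Per the rule above, this can become true at any point at or after the
    scheduled hardfork time, so a 'delayed' activation is possible."""
    votes = sum(1 for w in round_witnesses if w in approving_witnesses)
    return votes >= REQUIRED_APPROVALS
```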

This seems thorough. I was wondering about the tracking of tests and results with so many people testing. Sounds like each test will be a GitHub issue and testers can comment with their results/concerns under each one? Or how is the info to be organized? (I've never used GitHub, so I'm not clearly able to visualize how the data will get organized. I'm more accustomed to spreadsheets.)

Posted using Partiko iOS

Tools for CI will auto-magically run the tests and publish the results in many cases. Only for the manual scenarios will we have to go through the GitHub issue process. When we get bug reports/issues, we can always prioritise and fix them. In a recent disaster rescue scenario, around 1900 volunteers collaborated over GitHub on a project, and even with, say, 1000 people testing, submitting PRs, and fixing issues, the management team, which was hardly 6 people, was able to handle the workload. The moral of the story is: anything is possible with communities. There could be chaos in the beginning, but it will evolve into an orderly place very quickly.

Thanks for the clarification and example. Sounds like the manual scenarios aren't so plentiful that a hierarchical reporting structure is needed.

I think so. After I posted this comment, I went and marked a PR as one needing review and testing ( https://github.com/IEEEKeralaSection/rescuekerala/pull/1003 ), so once things stabilize we will not have trouble handling the project.

Sounds like the manual scenarios aren't so plentiful

In the RC changes and SMT changes there could be manual scenarios. Once we stabilize, like the state we are in now, we will not need a very elaborate hierarchical structure. Numerous projects, from GNU's GCC, Bash, and Emacs to the Linux kernel, have all proved that the community approach works. But yes, in most of the above cases, the community had a "benevolent dictator" or a well-defined code of conduct.

Hopefully the next big step for the Steem blockchain will be in the right direction and will make it more mainstream :)

There should also be instructions provided on how to use all of the developer libraries (Steem Python, Steem-JS, Beam, etc.) to connect and interface with the testnet.

I think it's beem python library, not beam.

Yes, you are right. Thank you! (updated)

Why not increase the top 20 witnesses to, say, 100, so that it's less centralized? Why only 20 nodes (+1 in rotation from the remaining)? This would make the project more democratized and usher in more innovation.

There are trade-offs. One of the things is the number of witnesses that need to validate a block before it is irreversible. More "top" witnesses would slow this down. Another is the number of witnesses that need to come to consensus in order to adopt a hardfork. More isn't necessarily bad, but it isn't necessarily better either. It is a fairly lengthy discussion to have, and likely isn't going to go anywhere due to the amount of work involved to make the change, and the other development items that are higher priority with more tangible benefit.

That's pretty much the same reason why you elect x members of parliament.

No, you really can't model it like that. Steem covers the whole world, right? So if you consider the whole population, how many representatives are there? A very huge number. I think the algorithm should be tweaked as the Steem blockchain scales. I think it's a compromise to achieve the speed needed for B2C applications, but it wouldn't be good to dismiss a better system that might work somewhere else.

No, the idea is the same. For the sake of effectiveness you are not asking the whole population to vote for or against when you need to make a decision. You choose delegates. The difference is that here, if your witness doesn't do the job well, you can "fire" him within 3 seconds.

If the SMT rollout will be ten times more complicated, with the potential of being ten times the kerfuffle the last HF was, then we had better get our stuff in order and test the daylights out of that fork. I would hate to see our hopes dashed after waiting so long for SMTs to put this blockchain on the map.

Thanks a lot for doing your work. I'm @clixmoney and this is my second account. I just voted for you as a witness. I'm building this community to make more collaborations on the Steem blockchain. We are now doing interviews and collaboration videos, and building it step by step. I will always vote for the same witnesses with both accounts. Thanks for any support; we use all the earnings to power up and upvote our members.

Thanks :)

If you plan to always vote for the same witnesses from both accounts, it may be easiest for you to just "proxy" your witness voting from account B to account A. You can do this on the steemit.com witness voting page. That way whatever votes you make with the one account will automatically be done by the other.

Yeah, some followers told me that they did so with my main account, and they are voting for whoever I vote for. But how do I do that? Is there a post or a video about it?

Just go here: https://steemit.com/~witnesses and scroll down to the bottom. You just need to enter the name of the account that you want to do the witness votes for you into the "proxy" section.
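Under the hood, that page broadcasts a single `account_witness_proxy` operation. A sketch of its payload follows; the account names are examples, and the helper function is mine, not part of any library:

```python
def witness_proxy_op(account, proxy):
    """Build the payload of an account_witness_proxy operation. After it
    is broadcast, `proxy`'s witness votes count for `account`. Setting
    proxy to the empty string clears the proxy again."""
    return ["account_witness_proxy", {"account": account, "proxy": proxy}]

# Account B proxies its witness voting to account A:
op = witness_proxy_op("account-b", "account-a")
```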

It is a very good article.
I will translate this article and forward it to the Korean community members.
Can I do that?

Gomdory voted $0.013 on @ayogom's precious comment, rescuing $0.011 for you. So far, Gomdory has voted $11.725 across 778 votes and rescued $10.737. @gomdory gomdory~

I really like what I am hearing in this post. Good stuff. I am going to reread it a couple of times to get a firm hold on it. I am a technophobe lol

Thanks for posting it @timcliff :)

This is a great plan of action. Thanks for sharing and for all of the hard work that went into it @timcliff.

Hi @timcliff!

Your post was upvoted by @steem-ua, new Steem dApp, using UserAuthority for algorithmic post curation!
Your UA account score is currently 8.525 which ranks you at #5 across all Steem accounts.
Your rank has not changed in the last three days.

In our last Algorithmic Curation Round, consisting of 285 contributions, your post is ranked at #1. Congratulations!

Evaluation of your UA score:
  • Your follower network is great!
  • The readers appreciate your great work!
  • Great user engagement! You rock!

Feel free to join our @steem-ua Discord server

Sorry I missed this when it was first published, but these are some great standards to follow. I really hope that as many of these as possible are incorporated into any proposed changes. From a non-technical point of view, I would say the most essential are:

  • Test Environment
  • Simulate "Real World" Conditions
  • Sufficient Time to Test

There have been some proposals flying around to "fix" the current system. One of these is free or discounted downvotes. As a proponent of engaging content being a key to Steem's success, I hope this change is never implemented (as I think it would be the final nail in the content coffin). If it is ever seriously considered, I am confident the standards you proposed above would show that, in "Real World" conditions, downvotes would be used as selfishly as upvotes have been, and would create an incredibly negative and anti-social social network.