That's exactly how it works in Hive. Witnesses outside of the top 20 are chosen for the 21st schedule position with a frequency proportional to the power of their backing. I described the mechanism here when we had to fix a bug that had been introduced into it.
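The selection described above can be sketched as a weighted random draw over the witnesses outside the top 20. This is a minimal illustration, not Hive's actual consensus code; the function and field names are hypothetical.

```python
import random

def schedule_round(witnesses):
    """Build one 21-slot schedule: top 20 by backing stake plus
    one backup slot drawn from the rest, with probability
    proportional to backing stake.

    `witnesses` maps witness name -> approving stake.
    Hypothetical sketch, not Hive's real scheduling code.
    """
    # Sort by backing stake; the top 20 get a slot every round.
    ranked = sorted(witnesses.items(), key=lambda kv: kv[1], reverse=True)
    top20, rest = ranked[:20], ranked[20:]
    # The 21st slot goes to a backup witness, weighted by stake.
    names = [name for name, _ in rest]
    weights = [stake for _, stake in rest]
    backup = random.choices(names, weights=weights, k=1)[0]
    return [name for name, _ in top20] + [backup]
```

Over many rounds, a backup witness with twice the backing of another appears in the 21st slot roughly twice as often.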
Very interesting read. Ok, this is good; I was not aware of this 21st place. But 21 is not enough in my opinion, it should be much more. As I said, I prefer this design over a purely proportional one. It would be good to propose a similar approach in Koinos. There is no need to remove the proof-of-burn mechanism; we just have to limit the effective VHP. A miner can accumulate any amount of VHP, but during the difficulty computation their VHP is capped at some value that depends on the total supply. This reproduces the same strategy used in Hive, but I would like to increase the number of nodes. Let's say the VHP cap is 1/100 of the total supply, so the top 100 would have the same influence, and adjust it from time to time depending on the number of miners in the network. @andrarchy
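The effect of capping effective VHP at 1/100 of the total supply can be illustrated with a small sketch. The function name and structure are hypothetical, not Koinos' actual difficulty logic; it only shows how each miner's share of block production would be computed from capped balances.

```python
def production_shares(balances, total_supply, cap_fraction=0.01):
    """Compute each miner's share of block production when their
    VHP is capped at `cap_fraction` of total supply.

    Hypothetical sketch of the proposal: accumulating VHP beyond
    the cap gives no additional influence on difficulty.
    """
    cap = total_supply * cap_fraction
    effective = {miner: min(vhp, cap) for miner, vhp in balances.items()}
    total_effective = sum(effective.values())
    return {miner: vhp / total_effective for miner, vhp in effective.items()}
```

With a total supply of 1000 and a 1% cap, a miner holding 50 VHP and two miners holding 10 VHP each end up with equal shares, since the first miner's balance is capped at 10.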
When dealing with consensus code, one can assume someone unrelated actually looked through the change. Moreover, the code is under the scrutiny of people who invested a lot of effort (and most likely also money) in the project. That is not true for contracts, which anyone can build at any time.
The same logic applies to contracts. If someone is investing a lot of money or effort in a project, they will scrutinize it exhaustively, whether it is a blockchain or a smart contract. There is no difference, because there is money involved.
... the nodes are going to run the binaries uploaded to the chain, so all you can do is check whether the uploaded version is the same as what you managed to build.
Correct. This is how to verify with 100% certainty that the source code matches the contract on the chain, and in that case there is no doubt about the transparency of the dApp developer. Apart from that, the verifier also has to check whether a different contract existed at this address in the past, in order to confirm that the data in storage was not manipulated. Then it's fine.
In the case of contracts, even if you have the sources and can build them, the build has to be reproducible (exactly the same binary output for each compilation).
For instance, in fogata (the mining pool) we provide a Docker image to compile the contract. The image always uses the same compiler, so it produces the same binary, and everyone can verify that it matches the binary on the chain. In the future, new services could emerge to simplify this process, similar to the source-code verification used on Etherscan.
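Once the build is reproducible, the verification step itself is just a byte-for-byte (or hash) comparison between the locally built artifact and the bytecode stored on chain. A minimal sketch, assuming the on-chain bytecode has already been fetched (e.g. from an RPC node; that part is left out, and the function name is hypothetical):

```python
import hashlib

def matches_onchain(local_wasm_path, onchain_bytecode):
    """Return True if the locally reproduced build is byte-identical
    to the bytecode deployed on chain.

    Hypothetical helper: fetching `onchain_bytecode` from a node is
    out of scope here. Hashes are compared so the result can also be
    published for others to check.
    """
    with open(local_wasm_path, "rb") as f:
        local_bytes = f.read()
    local_hash = hashlib.sha256(local_bytes).hexdigest()
    onchain_hash = hashlib.sha256(onchain_bytecode).hexdigest()
    return local_hash == onchain_hash
```

Any difference in compiler version or build flags changes the hash, which is exactly why the pinned Docker image matters.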