I actually agree with you. In the long term (for the benefit of the species) knowledge must be released to all. So I'm not a person who believes in radical or absolute secrecy. What I promote is privacy, and the distinction is that privacy is about access control: ownership of certain information and the protection of those ownership rights.
So it could be set up, for example, that all secrets expire after 66 years, since most of the people alive today will not be alive 66 years from now. Classified information already works this way: it is classified for a period of time and eventually declassified.
This is also true for patents, copyright, and so on. The restrictions exist temporarily in order to provide a competitive advantage to the inventor. If we can guarantee cryptographically that the lock on an invention will expire after x years, then in my opinion that is far better than radical transparency.
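To make the "cryptographically guaranteed expiry" idea concrete, here is a toy sketch of one known way to do it, the Rivest-Shamir-Wagner time-lock puzzle: the creator locks a secret cheaply, while anyone opening it is forced through a long chain of sequential squarings that takes real wall-clock time no matter how much parallel hardware they own. Everything here (the tiny primes, the XOR pad, the value of t) is an illustrative assumption; an actual 66-year lock would need cryptographic-strength parameters and a proper cipher.

```python
# Toy Rivest-Shamir-Wagner time-lock puzzle (illustrative parameters only).

p, q = 104729, 1299709           # toy primes; real use needs ~1024-bit primes
n = p * q
phi = (p - 1) * (q - 1)

t = 1_000_000                    # squarings to open; calibrate t to the delay you want
a = 2                            # base of the puzzle
secret = 123456789

# Creator's shortcut: knowing phi(n), reduce the exponent first.
e = pow(2, t, phi)
key = pow(a, e, n)               # a^(2^t) mod n, computed almost instantly
locked = secret ^ key            # XOR pad stands in for a real cipher

# Opener's path: without p and q, t sequential squarings are unavoidable.
v = a
for _ in range(t):
    v = (v * v) % n
assert v == key                  # the slow path recovers the same key
print(locked ^ v)                # 123456789, revealed only after the delay
```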
Why? Because we can still have freedom, maintain ownership rights, and reward invention; in short, we get the benefits of access control and strategic restriction. At the same time, as a species, we would be guaranteed the long-term benefits of knowledge release.
The issue with radical transparency is that most people only look at the upsides while ignoring the downsides. Why support any radical absolutism? Absolute secrecy is stupid, but so is absolute transparency. Being locked into either limits freedom, in different ways, in my opinion. Locked into radical transparency, we lose individuality in favor of conformity and lose the ability to compete. Locked into radical secrecy, we get corruption: anyone and everyone could be taking a bribe, betraying you to the other side, spying on you, or playing the mole.
What we seem to want is transparency where it aids in producing security, and privacy where it aids in producing security. The absolutes don't seem to be about security at all; they are more about politics.
Suppose a technology exists, or can be developed, which can scan anyone's brain. This remote brain scanner could be used to detect any lie. At the push of a button the remote fMRI-like scanner is activated, and now when we interact we can know who is honest and who is dishonest, without anyone having the ability to lie. This is the kind of transparency we want, correct?
But then how do we implement something like that? If it's done in a stupid way, the brain scanner won't just be used to detect lies; it will be used to detect everything, because who needs privacy at all? Suddenly it's no longer about security but about something else entirely. Suddenly people are asking "does this person think the way I do?" and politics may even enter into that.
So while the perfect lie detector improves security by detecting all deception, letting us enter contracts knowing each side has honest intent, there is still the problem that this technology will be abused by people with other agendas. The same way a brain scanner can be used to promote security, it can also be used to rob people of individuality, take away human rights (dehumanize people), and much worse.
In a blockchain context, if we had the ability to know who we can and can't trust, it would solve the security problems. I don't have to know, nor would I want to know, every aspect of every thought in a person's head to answer my query. This is why what goes on in their head should be private, with the sole exceptions being:
- Is this person honest?
- Can I trust this person?
Everything else that goes on in their head is none of my business. The problem is that the more I'm exposed to unnecessary information, the more likely I am to judge them, not want to be their friend, or change my behavior toward them, even if they are honest, are telling the truth, and aren't lying.
How do we manage this problem? Well, the idea I have is to simply let people have their privacy and let artificial intelligence be the filter. You ask the intelligent agent "can I trust Alice?" and it answers, knowing Alice better than she knows herself. And Alice can't lie, because remember, in this hypothetical scenario the brain scanner knows all her secrets.
The point is, as long as my query is answered with a yes or a no, I neither need nor care about the details. In fact, my access to the details should be restricted by the AI. The data is computed on while it remains encrypted, so no one actually gets to see it. No one gets to access her secrets, because human eyes never need to see them for the query to be answered yes or no.
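As a conceptual sketch of that boundary, consider an oracle that holds the sensitive data internally and exposes nothing but the verdict. The TrustOracle name, the honesty_score field, and the threshold are all hypothetical, and a real system would enforce this boundary with encrypted computation or secure hardware rather than a Python class; the sketch only shows the shape of the interface.

```python
# Conceptual sketch: raw profiles never leave the oracle; callers get
# only a yes/no answer. All names and fields here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class TrustOracle:
    _profiles: dict = field(default_factory=dict)  # private; never exported

    def ingest(self, name: str, honesty_score: float) -> None:
        """Store the sensitive data inside the oracle's boundary."""
        self._profiles[name] = honesty_score

    def can_trust(self, name: str, threshold: float = 0.8) -> bool:
        """Answer the query with a yes or no; the details stay inside."""
        return self._profiles.get(name, 0.0) >= threshold

oracle = TrustOracle()
oracle.ingest("Alice", honesty_score=0.93)  # Alice's secrets stay internal
print(oracle.can_trust("Alice"))            # True -- all the caller learns
```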
The problem is that humans have a desire to see people's secrets even when there is no security need to see them.
Conclusion
- Transparency can be implemented in such a way that we keep all of its security benefits.
- Human access to all secrets is not necessary to receive the benefits.
- As long as data can be analyzed by machine learning, and the output of that analysis directed only toward the entities, institutions, and individuals who require it to make a security-related decision, we get the benefits (security) of transparency without the risks.
Your data is therefore allowed to stay private, yet your reputation score can still be calculated. You can share all your secrets with the Internet without any fear, and you can maintain your individuality. You can also share a score which shows whether or not you can be trusted, and in what ways. You can even verify that you're not lying, or identify anyone who tries to lie to you.
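A rough sketch of what "a score, and in what ways" could look like follows. The categories, records, and scoring rule are invented for illustration; the only point is that the published artifact is the coarse per-category score, never the raw history behind it.

```python
# Hypothetical sketch: reduce a private interaction history to coarse
# per-category trust scores; only the scores are ever shared.

def reputation_scores(private_records: list[dict]) -> dict[str, float]:
    """Aggregate private records into per-category promise-keeping ratios."""
    outcomes: dict[str, list[int]] = {}
    for r in private_records:  # raw records never leave this scope
        outcomes.setdefault(r["category"], []).append(1 if r["kept_promise"] else 0)
    # Only the rounded aggregate per category is returned.
    return {c: round(sum(v) / len(v), 2) for c, v in outcomes.items()}

# The full history is sensitive; only the printed scores would be shared.
history = [
    {"category": "payments", "kept_promise": True},
    {"category": "payments", "kept_promise": True},
    {"category": "payments", "kept_promise": False},
    {"category": "deliveries", "kept_promise": True},
]
print(reputation_scores(history))  # {'payments': 0.67, 'deliveries': 1.0}
```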
So what is missing? What more would you want? Do you really intend to study every piece of data about every person you meet in a radically transparent environment? Do we see that happening now? Do we have more trust now that things are more transparent? More human rights? I don't see the benefits of radical transparency playing out. I do think that if we focus on managing the risks and expanding the benefits, we can implement things in the best way, which in my opinion isn't going to be any kind of absolute.
Blockchain can't even do a reputation system, because it's too transparent for that to work. Blockchain can't tell me who I can trust. Blockchain can't tell me who is or isn't lying to me. Blockchain can't tell me who I should or shouldn't be friends with. It can't answer any of my queries about anybody. It can't compute using machine learning or help me make better decisions, social or otherwise.
So the problem, and I say it all the time, is that transparency without wisdom can be just as harmful as privacy without wisdom. When you have transparency and don't know what to do with it, you make poor use of the information you glean from it (this is what we see with blockchain). When you have privacy and, again, don't know how to make the best use of it, you abuse it (we see this happen as well).
In the practical world, what does radical transparency do to make my life better, easier, or safer, or to help me find new friends and new people to cooperate with? It isn't even doing that. It divides people by highlighting their differences but doesn't bring anyone together. It helps people identify what they don't like and who they don't want to be friends with, but it does nothing to help people identify who they can trust. So if radical transparency works, I'd like to see people live that lifestyle and prove it works.
And the same goes for radical privacy. If you live in such a way that no one knows anything about you, and you don't use any social media or share anything, I'd like to see how you make any friends or get along in society. I'd like to see how you build up a reputation and achieve cooperation as a completely anonymous, unknown entity. Most important, for either of these lifestyles, is whether or not the people living in these ways are happy.