A system that can reliably categorize what is real, what is false, and what is misleading is an aspirational goal we may never fully achieve. A statement alone gives us no reliable way to know whether it is true. Everything in the information cycle is subject to that uncertainty, and it applies to text, images, and sound alike. We can't know something is true just by reading it, and likewise we can't tell whether a video is authentic just by watching it. Any piece of evidence, whether text, photo, or video, can no longer be taken at face value.
Historians and chroniclers say it can take a long time, perhaps even a century, to gather, compare, and weigh all the facts surrounding an event. Events typically fade from memory before we can determine how true they are; confirming a report that is less than a day old is harder still.
What chance do we have of stopping rumors, falsehoods, deceit, misinformation, hoaxes, fakes, superstitions, and disinformation from sweeping through a society?
Trust must be established before facts can be weighed. In difficult cases, trust can help where truth cannot. We may not be able to invent technology that reveals absolute truth, but we can build technology that people can rely on.
Suppose there is a system where the origin of all data is permanently embedded in the content itself. (Blockchain could be one way to implement this.) Every assertion would then have an identifiable originating source. Every statement would carry, in effect, a subject, verb, object, and an attestor at the end: someone who affirms its veracity. When a claim is quoted or transmitted, the chain of provenance and the sequence of sources travels with it. This interconnected web of sources lets people prefer some sources over others.
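To make the idea concrete, here is a minimal sketch of such a provenance chain as a data structure. The names (`Assertion`, `attestor`, `quoted_from`) and the example claim are illustrative assumptions, not any existing system's API; a real implementation would anchor each link in something tamper-evident like a blockchain.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Assertion:
    """A statement together with who affirms it and what it quotes."""
    content: str                               # the claim itself
    attestor: str                              # who affirms its veracity
    quoted_from: Optional["Assertion"] = None  # previous link in the chain

def provenance(assertion: Assertion) -> list[str]:
    """Walk the chain of quotations back to the originating source."""
    chain = []
    node = assertion
    while node is not None:
        chain.append(node.attestor)
        node = node.quoted_from
    return chain  # first entry is the latest relay, last is the origin

# Example: a claim quoted twice before reaching us
origin = Assertion("The dam overflowed at 6 a.m.", attestor="local-observer")
report = Assertion(origin.content, attestor="reporter-A", quoted_from=origin)
repost = Assertion(report.content, attestor="blogger-B", quoted_from=report)
print(provenance(repost))  # ['blogger-B', 'reporter-A', 'local-observer']
```

Making the records immutable (`frozen=True`) mirrors the essay's requirement that provenance be permanently embedded rather than editable after the fact.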
We can then imagine a system where it is easy to sort among these linked sources (from original explanations to afterthoughts) and weight them by their online reputation. This source-filtering mechanism would not eliminate scams or false claims, but it would make them harder to spread.
If we wanted to be strict, we could say that no claims more than four hops removed from an originating source would be considered. For instance: I cite a reporter's statement, which itself cites source A. We could also require each of those hops to carry an exceptionally high reliability score, and disregard anything that has traveled through sources we have judged unreliable. Trust scores can be defined in various ways, and more than one method may exist; we also have to trust the scoring procedure and the agent doing the accounting. Much as we subscribe to a newspaper, there could be many competing agencies or filters that assess trust. How quickly and openly a source corrects its mistakes is another indicator of reliability: sources that handle changes and errors well tend to earn more trust than those that don't. The scoring procedure thus becomes another layer, one more source that must itself be trusted.
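The strict rule described above can be sketched as a simple filter. The reputation table, the four-hop limit, and the 0.8 threshold are all assumed values for illustration; in practice the scores would come from one of the trust agencies the essay imagines.

```python
# Hypothetical trust-filter sketch: a claim is accepted only if its
# provenance chain is at most MAX_HOPS long and every hop's source
# meets a minimum reliability score.
MAX_HOPS = 4
MIN_SCORE = 0.8

# Assumed reputation table; unknown sources default to a score of 0.
reliability = {
    "blogger-B": 0.9,
    "reporter-A": 0.95,
    "local-observer": 0.85,
    "anon-forum": 0.2,
}

def accept(chain: list[str]) -> bool:
    """Accept a claim whose provenance chain is short and uniformly reliable."""
    if len(chain) > MAX_HOPS:
        return False
    return all(reliability.get(src, 0.0) >= MIN_SCORE for src in chain)

print(accept(["blogger-B", "reporter-A", "local-observer"]))  # True
print(accept(["blogger-B", "anon-forum", "local-observer"]))  # False: one unreliable hop
```

Requiring every hop to pass, rather than averaging, reflects the essay's point that a single unreliable relay taints the whole chain.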
Scientific research, though slow, is still in the business of settling disputes by identifying and addressing competing claims and counterclaims. A trust-based method would be faster, better suited to news cycles, and less reliant on consensus. Today, different tribes of people rely on different sources, producing the media polarization we see in many regions. Some consumer media hide where their information comes from, and their audiences are fine with that. If those groups are unconcerned with a new trust system, the current information polarization may not change. Still, having at least one competing system with immutable provenance of sources would undoubtedly enhance trust and confidence in fast-moving information. Picture a Wikipedia that cites not just published sources but chain-sourced ones. Even if some people choose to ignore it, that would undoubtedly increase its credibility.
With the advent of AI and deepfakes, the need for embedded chain-sourcing is amplified a millionfold. Just as with text, it is impossible to tell the veracity of a picture, audio clip, or video by inspecting it. A well-trained AI can realistically render any subject, from history to pure fiction. No one can tell at a glance whether a piece of news is true. We are thus left to rely on the source itself to disclose its nature. If a source does not reveal the nature of its content, or misrepresents it, the system should penalize it with a devalued track record that accompanies its products. Reputations in this system are persistent; they change slowly in either direction, though points can be lost faster than they are gained. A source with a history of misrepresented products has a long way to go before it can redeem itself by producing accurate ones. Note that some sources, for example a Hollywood special-effects company that creates impressive deepfakes, can earn an excellent reputation precisely because their work is labeled as fiction. That label is embedded in the product, so anyone can verify it online. As a result, they are seen as a reliable source, even though they produce deepfakes.
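The asymmetric reputation rule, where points are lost faster than they are gained, can be sketched in a few lines. The specific increments (a gain of 0.01 per accurately labeled product, a loss of 0.05 per mislabeled one) are assumptions chosen only to show the five-to-one asymmetry.

```python
# Sketch of the asymmetric reputation rule: scores fall faster than
# they rise, so a source with mislabeled products recovers slowly.
GAIN = 0.01   # assumed increment for an accurately labeled product
LOSS = 0.05   # assumed penalty for a mislabeled one (5x the gain)

def update(score: float, accurate: bool) -> float:
    """Nudge a reputation score up or down, clamped to [0, 1]."""
    score += GAIN if accurate else -LOSS
    return max(0.0, min(1.0, score))

score = 0.5
score = update(score, accurate=False)   # one mislabeled product
for _ in range(5):                      # five accurate ones just to recover
    score = update(score, accurate=True)
print(round(score, 2))  # 0.5: five good acts to undo one bad one
```

Under these assumed rates, a single misstep costs as much reputation as five honest releases earn, which is one way to make "redemption takes a long way" precise.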
If you're a media consumer, your goal should be trust, not truth.
Posted Using InLeo Alpha