This is amazingly possible, based on what I currently understand.
AI models have created historically inaccurate depictions of famous people when fed faulty data, and if they self-check against that same data, they will record it in their database as correct and factual when it is really fake news.
This fallacy is also demonstrated by AI adding non-edible and potentially harmful ingredients to foods, because those ingredients produce a desirable consistency, without regard for the harmful effects of adding concrete or other inorganic substances to food. A self-assembling AI database would absorb these errors and bury them inside complex instruction sets, which would probably be adopted without sufficient checking by lazy humans.