But then, whose ethics will it be trained on? Ethics can vary from one culture to another.
Great point! Ethics is not as straightforward as logic, and its nuances can vary even within the same culture, based on different currents of thought.
I've noticed that AI models try to keep their answers on neutral ground for topics that could be considered controversial, even when asked a direct question; only if you insist do they provide a more targeted answer. It's probably ingrained in their training to be as non-confrontational as possible, to shield the firms behind these models from potential lawsuits.
Right. I think that's the safer route for AI models: tread neutral ground by default, with only slight deviation toward either side of the spectrum.
Now, if an AI model goes rogue, because of a technical issue or the like, it will be interesting to observe what it produces from the same data it was trained on :)
I saw Sam Altman in an interview, and he said that an uncensored ChatGPT (he didn't use that word) is pretty difficult to work with.
That makes a lot of sense. It's more like a black box: its method of operation can't really be understood.