RE: Deep Fakes: Major Concern or Inconsequential Anecdote?

in #writing · 7 years ago

This reminds me a lot of Adobe's scary audio-editing application, which can take a small sample of someone's speech and essentially let you make that person say anything in their own voice: Photoshop for audio. There's an article about it on Ars Technica. Adobe debuted it in 2016 and has been quiet about it ever since, but combine that app with clever video editing and machine learning, and we are entering an era where we legitimately can't, or at least shouldn't, trust everything we see or hear.

I believe techniques are being developed to detect this kind of thing, but detection isn't the real problem. The problem is stopping it, as we have seen with the scarily targeted, machine-generated content on YouTube aimed at kids: it gets created faster than it can be taken down.