The growing problem of deepfakery
Yes, deepfakes are creepy. But if Black Mirror has taught us anything, it’s that humans are the real problem. Deepfakes, AI-generated video or audio files of unnerving realism, are “eerily dystopian.” But an online manipulation expert says in this New York Times video that the collective hysteria they generate is just as problematic (watch, runtime: 03:38). More worrying than the misleading content itself, she says, is the “liar’s dividend”: unscrupulous people can leverage widespread skepticism for their own benefit. When anything can be fake, the guilty can easily dismiss the truth as fake too.
AI can also be combined with human intelligence to slow the spread of fake news (watch, runtime: 03:09). That’s the model used by Leeds-based startup Crisp Thinking, which multinationals pay to scan social media for harmful comments that could damage their brands. Crisp Thinking can’t stop fake news, but it can flag it at an early stage and advise companies on damage control, which is sometimes the best you can hope for.