
Longer title for this question: To what extent does misinformation/disinformation (or the rise of deepfakes) pose a problem? And to what extent is it tractable?

  1. Are there good analyses of the scope of this problem? If not, does anyone want to do a shallow exploration?
  2. Are there promising interventions (e.g., certificates of some kind; one possible version is sketched below) that could be effective (in the important sense)?
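
To gesture at what "certificates" might mean here: one version is cryptographic provenance, where a camera or publishing tool signs media at capture time so anyone can later check that it wasn't altered (roughly the spirit of content-provenance efforts like C2PA). A minimal sketch of the verification logic, with all names illustrative and Ed25519 from the Python cryptography library standing in for whatever scheme would actually be used:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical: a signing key embedded in a camera at manufacture time.
camera_key = Ed25519PrivateKey.generate()
camera_pub = camera_key.public_key()

def sign_capture(image_bytes: bytes) -> bytes:
    """The camera signs each capture; the signature travels with the file."""
    return camera_key.sign(image_bytes)

def verify_capture(image_bytes: bytes, signature: bytes) -> bool:
    """Anyone with the camera's public key can check the file is unmodified."""
    try:
        camera_pub.verify(signature, image_bytes)
        return True
    except InvalidSignature:
        return False

original = b"raw sensor data"
sig = sign_capture(original)
print(verify_capture(original, sig))   # True: provenance intact
print(verify_capture(b"edited", sig))  # False: any edit breaks the check
```

(The hard parts are presumably key management and deciding which edits should preserve provenance, not the cryptography itself.)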

Context and possibly relevant links: 

I’m posting this because I’m genuinely curious, and feel like I lack a lot of context on this. I haven't done any relevant research myself.


4 Answers

This isn't a particularly deep or informed take, but my perspective on it is that the "misinformation problem" is similar to what Scott called the cowpox of doubt:

What annoys me about the people who harp on moon-hoaxing and homeopathy – without any interest in the rest of medicine or space history – is that it seems like an attempt to Other irrationality.

It’s saying “Look, over here! It’s irrational people, believing things that we can instantly dismiss as dumb. Things we feel no temptation, not one bit, to believe. It must be that they are defective and we are rational.”

But to me, the rationality movement is about Self-ing irrationality.

It is about realizing that you, yes you, might be wrong about the things that you’re most certain of, and nothing can save you except maybe extreme epistemic paranoia.

10 years ago, it was popular to hate on moon-hoaxing and homeopathy, now it's popular to hate on "misinformation". Fixating on obviously-wrong beliefs is probably counterproductive to forming correct beliefs on important and hard questions.

You mean people hate on others who fall for misinformation? I haven't noticed that so far. My impression of the misinformation discourse is ~ "Yeah, this shit is scary, today it might still be mostly easy to avoid, but we'll soon drown in an ocean of AI-generated misinformation!"

Which also doesn't seem right. I expect this to be in large part a technical problem that will mostly get solved, because it is (and will likely remain) such a prominent issue in the coming years, one affecting many of the most profitable tech firms.

Excerpt from Deepfakes: A Grounded Threat Assessment - Center for Security and Emerging Technology (I haven't read the whole paper):

This paper examines the technical literature on deepfakes to assess the threat they pose. It draws two conclusions. First, the malicious use of crudely generated deepfakes will become easier with time as the technology commodifies. Yet the current state of deepfake detection suggests that these fakes can be kept largely at bay. 

Second, tailored deepfakes produced by technically sophisticated actors will represent the greater threat over time. Even moderately resourced campaigns can access the requisite ingredients for generating a custom deepfake. However, factors such as the need to avoid attribution, the time needed to train an ML model, and the availability of data will constrain how sophisticated actors use tailored deepfakes in practice.

Based on this assessment, the paper makes four recommendations:

  • Build a Deepfake “Zoo”: Identifying deepfakes relies on rapid access to examples of synthetic media that can be used to improve detection algorithms. Platforms, researchers, and companies should invest in the creation of a deepfake “zoo” that aggregates and makes freely available datasets of synthetic media as they appear online.
  • Encourage Better Capabilities Tracking: The technical literature around ML provides critical insight into how disinformation actors will likely use deepfakes in their operations, and the limitations they might face in doing so. However, inconsistent documentation practices among researchers hinder this analysis. Research communities, funding organizations, and academic publishers should work toward developing common standards for reporting progress in generative models.
  • Commodify Detection: Broadly distributing detection technology can inhibit the effectiveness of deepfakes. Government agencies and philanthropic organizations should distribute grants to help translate research findings in deepfake detection into user-friendly apps for analyzing media. Regular training sessions for journalists and other professionals likely to be targeted by these types of techniques may also limit the extent to which members of the public are duped.
  • Proliferate Radioactive Data: Recent research has shown that datasets can be made “radioactive.” ML systems trained on this kind of data generate synthetic media that can be easily identified. Stakeholders should actively encourage the “radioactive” marking of public datasets likely to train deep generative models. This would significantly lower the costs of detection for deepfakes generated by commodified tools. It would also force more sophisticated disinformation actors to source their own datasets to avoid detection.
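
To make the "radioactive data" idea a bit more concrete: as I understand it, the real technique plants a mark in a network's feature space and detects it statistically in models trained on the data; the toy sketch below works directly in pixel space, purely to illustrate the mark-then-detect loop, so treat every detail as illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# A fixed "radioactive" mark: a faint noise pattern mixed into every
# image of a public dataset before release.
MARK = rng.standard_normal((64, 64))
MARK /= np.linalg.norm(MARK)

def mark(images: np.ndarray, strength: float = 0.1) -> np.ndarray:
    """Add a near-imperceptible copy of the mark to each released image."""
    return images + strength * MARK

def mean_mark_score(images: np.ndarray) -> float:
    """Average correlation with the mark across many samples; a value
    far from 0 suggests the samples trace back to the marked dataset."""
    scores = np.tensordot(images, MARK, axes=([1, 2], [0, 1]))
    return float(scores.mean())

clean = rng.standard_normal((1000, 64, 64))
print(round(mean_mark_score(clean), 3))        # ~0.0
print(round(mean_mark_score(mark(clean)), 3))  # ~0.1: the mark shows through
```

The detection side is cheap (essentially a dot product), which is presumably why the paper argues this would "significantly lower the costs of detection" for deepfakes generated by commodified tools.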

Is it tractable?

  1. One might argue that the amount of misinformation in the world is decreasing, not increasing; maybe we're just much more aware of it now, which would be a good thing.
  2. LessWrong and the EA Forum are making progress on this, no? This is one of my top ideas for how tech can help our causes.
  3. Wikipedia also helps a lot, I think. There might be other such ideas (because of inadequate equilibria), so if we find them, it might be a worthy use of EA founders+funds: a relatively easy way to provide a ton of value to society in a way that is hard (or maybe impossible) to monetize.

Regarding deepfakes:


Scott Alexander wrote about it in his review of Human Compatible: https://slatestarcodex.com/2020/01/30/book-review-human-compatible/


This part stuck with me:

Also, it’s hard to see why forging videos should be so much worse than forging images through Photoshop, forging documents through whatever document-forgers do, or forging text through lying. Brookings explains that deepfakes might cause nuclear war because someone might forge a video of the President ordering a nuclear strike and then commanders might believe it. But it’s unclear why this is so much more plausible than someone writing a memo saying “Please launch a nuclear strike, sincerely, the President” and commanders believing that. Other papers have highlighted the danger of creating a fake sex tape with a politician in order to discredit them, but you can already convincingly Photoshop an explicit photo of your least favorite politician, and everyone will just laugh at you.

3 Comments

One speculative, semi-vague, and perhaps hedgehoggy point that I've often come back to when thinking about this:

I think it's quite possible that many people hold beliefs/assumptions about democracies which cause them to grossly (albeit perhaps not ultimately) underestimate the threat of mis- and disinformation. In conversations and research presentations I've listened to, I've frequently heard people frame the issue of audiences believing misinformation/disinformation as those audiences making some mistake or irrational choice. This certainly makes sense for conspiracy theories that tell you to do personally harmful things, like refusing all vaccines or foolishly investing all of your money in some bubble. However, I feel that people in these conversations/presentations occasionally confuse epistemic rationality (i.e., wanting to have accurate beliefs) with instrumental rationality (i.e., wanting to do, including believe, whatever maximizes one's own interests): sometimes having inaccurate beliefs is more personally beneficial than having accurate beliefs, especially for social or psychological reasons.

This stands out most strongly when it comes to democracies and voting: unlike your personal medical and financial choices, your voting behavior has effectively no "ostensible personal impact" (i.e., on who gets elected and subsequently what policies are put into place which affect you). Given this, lines of reasoning such as "voters are incentivized to have accurate beliefs because if they believe crazy things they're more likely to support policies that harm themselves" are flawed.

In reality, rather than framing the question by simply asking "why do voters have these irrational beliefs / why are they making these mistakes", I think it's important to also ask "Why would we even expect these voters to have accurate beliefs in the first place?"

Ultimately, I have more nuanced views on the potential health and future of democracy, but I think that disinformation/misinformation strikes at one of the core weak points of democracy: [setting aside the non-democratic features of democracies (e.g., non- or semi-democratic institutions within government)] democracies manage to function largely because voters are 1) delusional about the impact of their voting choices, and/or 2) motivated by psychological and social reasons, including norms like "I shouldn't believe crazy things", to make somewhat reasonable decisions. Mis- and disinformation, however, seem to undermine these norms.

There is a lot of thought in this post and a lot of dense context provided in the links.


Overall, "misinformation" seems like an extremely broad area, and I find it difficult to situate and absorb the information presented in the links.

The OP has put a lot of content about deepfakes. This seems important, but it's unclear whether this is the subject she is most interested in, and it's unclear how it relates to "misinformation" overall.

I wish I had more knowledge about what misinformation is and how we should think about it, or about its opposite, "Truth". For example, in the ongoing invasion of Ukraine, Ukrainian-aligned content has dominated Western social media. This content isn't entirely truthful, yet it probably serves the principles of justice and freedom in a way that most people like.


Maybe a way to get more replies and engagement would be for the OP to provide a few paragraphs on what they are most interested in (maybe it is deepfakes, or maybe something else) or to share their views and concerns.

Also potentially relevant: a skeptical talk on "media literacy" I enjoyed skimming: https://points.datasociety.net/you-think-you-want-media-literacy-do-you-7cad6af18ec2
