AI-generated infodemic challenges 🤖


 

2023 brings us new deepfakes and other AI-generated challenges to infodemic management.

 

Infodemic Management News Flash

Thursday 12 January 2023 | Issue #48

 
 

 Feature art

Anyone can produce art, or even images that look like real photographs, at the click of a button using AI tools like DALL-E 2. Generative AI is a powerful technology with implications for art, design, business, government and society. But adoption is still in its early days, and we are still missing the guardrails for putting this technology to responsible use in our society. One implication of AI-generated content is that it can automate the creation of misinformation that is visually and narratively compelling.

This complicates efforts to promote content verification and information literacy when the source of a claim cannot easily be identified, or when the underlying information sources a generative AI tool draws on are not transparently published or vetted for accuracy.

For a light overview, see this World Economic Forum article.

Source: DALL-E 2, output for the prompt "person chasing butterflies that are shaped like a warning triangle with a butterfly net, digital art"

 
 

Opportunities for action

Call for Expressions of Interest: AI4PEP funding to deepen the understanding of how responsible AI solutions can improve public health preparedness and response

AI4PEP is seeking expressions of interest from researchers who want to design AI solutions for clinical medicine in Africa, Asia, Latin America and the Caribbean, and the Middle East and North Africa. The specific aims of AI4PEP are to deepen understanding of how to use responsible AI solutions to improve public health preparedness and response to emerging and re-emerging infectious disease outbreaks; to establish and support a multi-regional network across the Global South to address gaps in knowledge, capacities, and solutions; and to inform national, regional, and global policies and practices on the use of AI to improve health equity and strengthen public health systems. Researchers and institutions from the target regions are invited to apply for funding. This includes infodemic managers!

To read the full EOI and apply, click here. The deadline for submission is 15 January 2023.

Call for papers: Inaugural African Social Marketing Association conference (AfSMAC 2023)

AfSMAC 2023 will bring together over one hundred academics, practitioners, and social policymakers from 24 to 26 April 2023 in Johannesburg, South Africa, to consider how social marketing, social enterprise, innovation, and behaviour change practice more broadly can help solve big social, health, economic, security and environmental challenges. AfSMAC is looking for submissions for 20-, 30- and 45-minute sessions and posters across nine tracks, one of which focuses on mis- and disinformation. This is a great opportunity for infodemic managers to showcase their work and share ideas!

To read the call for papers and submit, click here. The deadline is 27 January 2023.

AfSMAC 2023 is organized by the newly formed African Social Marketing Association (AfSMA), a regional branch of the long-established International Social Marketing Association (iSMA), together with partners at FUSE.

 
 

Multimedia

Interactive toolkit on behavior change strategies

The Make It website presents an interactive toolkit showcasing 15 behavior change strategies that can help you anticipate problems, identify opportunity points, and design solutions based on what people are actually likely to do. The toolkit has been tested in several real-life projects and with over 2,000 learners (including university students, entrepreneurs, and CEOs), and has gone through numerous iterations. Make It aims to make behavioral science and game thinking more accessible and actionable, to enrich your life and help you design better experiences, communications, and interventions.

Download the poster and explore the 15 strategies at https://www.makeit.tools/

 
 

Job opportunities

  • Concordia University Research Assistant, Misinformation on Social Media in the Context of Public Health (Canada)  
  • Internews Network Humanitarian Information Manager, Rooted in Trust Sudan (Remote)
  • Internews Network Content Creator, Rooted in Trust Yemen (Remote)
  • Johns Hopkins University Research Program Coordinator (USA)
  • OECD Mis- and Disinformation Junior Policy Analyst (France)
  • Pew Research Center Research Director, News and Information (United States)
  • Syracuse University Postdoc - DARPA SemaFor Research Program in Misinformation and Disinformation (United States)
  • UNICEF Health Specialist (Social & Behavior Change) (Nigeria)
  • UNICEF Social & Behavior Change Specialist (Somalia)
  • University of Groningen 3 Postdocs ethics/law/economics/psychology of misinformation and stereotypes (Netherlands)
  • University of South Carolina UNESCO Chair in Data, Media and Society (Tenure-Track) (United States)
  • University of Southern California Media Fellow (United States)
  • WHO Consultant – Facilitate the implementation of research project on message framing informed by behavioural insights (Remote)
  • WHO Internship – Epidemic and Pandemic Preparedness and Prevention/EPP (Switzerland)

     

 
 

Upcoming Events

WHO and Story Collider: Public health practitioners recount their experience of the COVID-19 infodemic: Third edition 

The infodemic has affected health professionals personally and professionally and has changed the way health systems responded to the COVID-19 pandemic. On 26 January 2023, Infodemic Managers Diana Rubio, Julián Sánchez Viamonte, and Jessica Mariana Lorenzo Coronado will share their personal experiences of managing the infodemic during the COVID-19 pandemic. The event will be in Spanish, is free, and is brought to you in partnership with The Story Collider to promote science and health through more effective storytelling.

To find out more and register for the event, click here.

UNESCO Global Conference: Internet for Trust - Regulating Digital Platforms for Information as a Public Good

In February, UNESCO is convening a conference on “Internet for Trust - Regulating Digital Platforms for Information as a Public Good”. At the conference, participants will discuss draft guidance for the co-regulation of digital platforms to support freedom of expression and the availability of accurate and reliable information in the public sphere. The guidance is open for public comment.

To learn more about the event and register, click here. The deadline to register is 17 February 2023.

To read the guidance document and make your voice heard, click here. The deadline for public comment is 20 January 2023.

 
 

Did you miss previous events?

NASEM Committee on Understanding and Addressing Misinformation about Science Meeting #1

The National Academies of Sciences, Engineering, and Medicine (NASEM) Committee on Understanding and Addressing Misinformation about Science held a virtual information-gathering session on 14 December 2022. Watch the session replay to see panelists Cary Funk (Pew Research Center), Alice Marwick (University of North Carolina at Chapel Hill), Robert O’Connor (National Science Foundation), Tina Purnat (World Health Organization), Dietram Scheufele (University of Wisconsin-Madison), and Claire Wardle (Brown University) discuss the nature and scope of science misinformation and its impacts, solutions to limit its spread, and guidance on interventions, policies, and research to reduce the harms caused by misinformation. WHO’s Tina Purnat spoke specifically about the information environment and health. You can watch the replay and future information-gathering sessions of the committee on the NASEM website.

 
 

What we're reading

 

Combating Misinformation as a Core Function of Public Health

January 2023

 

PhD Thesis: The Long-Term Effectiveness of Inoculation Against Misinformation: An Integrated Theory of Memory, Threat, and Motivation

November 2022

 

The impact of information sources on COVID-19 vaccine hesitancy and resistance in sub-Saharan Africa

January 2023

 

A global analysis of COVID-19 intra-action reviews: reflecting on, adjusting and improving country emergency preparedness and response during a pandemic

2022

 

The Future of Infodemic Surveillance as Public Health Surveillance

December 2022

 

The Role of Context in Vaccine Stance Prediction for Twitter Users 

December 2022

 

Research Methods in Deliberative Democracy

2022

 

Psychosocial Perceptions as Significant Impact Modifiers: A Mixed Method Research Among Hospitalized Covid-19 Patients in A Tertiary Care Hospital in Coimbatore District, Tamil Nadu

December 2022

 

Rethinking Library and Information Services amidst Virulent Covid-19 Global Pandemic

December 2022

 

Everything causes cancer? Beliefs and attitudes towards cancer prevention among anti-vaxxers, flat earthers, and reptilian conspiracists: online cross sectional survey

December 2022

 

The Impact of COVID-19 Pandemics on the Development of Health Risk Communication: Challenges and Opportunities

December 2022

 

Beyond The Pandemic: Human Resource Management Insights for Navigating Through Global Crises

December 2022

 

A threat-based hate model: How symbolic and realistic threats underlie hate and aggression

November 2022

 
 

Fun with numbers

> 80%

… is the share of the training data for the AI-powered content generator GPT-3 that comes from the open web. This can include concerning content, such as outbound links from Reddit, which can harbor low-quality content and misinformation.

Generative AI tools are only as good as the data they are trained on and later used on. If the training dataset includes low-quality content or stigmatizing language, perpetuates stereotypes, incorrectly parses scientific studies, or otherwise presents opinion as fact, the output of the generative AI tool will be tainted. It may provide responses that seem factual even when they are incorrect, or accidentally perpetuate misinformation across the many permutations of AI-generated responses.

Source: https://www.niemanlab.org/2022/12/generative-ai-brings-wrongness-at-scale/ 

 

About the News Flash

An infodemic is an overabundance of information—some accurate, some not—that spreads alongside a disease outbreak. Infodemics are nothing new, but in the digital age, they spread in real time and create a breeding ground for uncertainty. Uncertainty fuels skepticism and distrust, which is a perfect environment for fear, anxiety, finger-pointing, stigma, violent aggression and dismissal of proven public health measures. To manage an infodemic, we need to understand what contributes to it. So that’s why we’re sending you these updates. In each issue of the WHO’s Infodemic Management News Flash we’ll share the latest work happening at the global level, as well as highlight some of the challenges and solutions with infodemics in local contexts. We’ll also provide you with a few takeaways to help you be an effective infodemic manager in your daily life.

If you want to read previous issues of the News Flash, visit this webpage.

If you have a tip on infodemic management or an idea for a future News Flash, email us at infodemicmanagement@who.int. Thanks for joining us on this journey.

Didn't receive this directly in your inbox? Subscribe here.
 
 

Infodemic Management

Health Emergencies Programme

World Health Organization

Our email address is infodemicmanagement@who.int

Preferences  |  Unsubscribe