
Type “AI images” into your search engine and you will notice a pattern. Go on, give it a go!

The result is striking, and it’s the same on photo libraries and content platforms. In fact, the lack of variety and the inaccuracy are almost inescapable. The predominance of sci-fi-inspired and anthropomorphised images, and the scarcity of readily accessible alternative images or ideas, make it hard to communicate accurately about AI.

This matters because without wider public comprehension of AI technologies, their applications and their governance, many people are left in the dark about important changes that affect their lives.

These AI images also add to the public mistrust of AI, a growing problem for innovation in a field that is sometimes seen as biased, opaque and extractive.

Finally, we think that images like these do not encourage the diversity of people needed to enter the AI workforce and address the AI talent gap.

The issues explained

The dominant images reinforce dangerous misconceptions and, at best, limit public understanding of how AI systems are currently used and how they work, as well as their potential and implications. They do this in a number of ways, which have been identified and discussed in many research papers and articles. For example:

In this paper, Alberto Romele discusses the blind spot that the AI ethics community has regarding stock imagery of artificial intelligence.

The AI Myths project talks about how shiny humanoid robots are often misleadingly used to represent AI.

Philipp Schmitt at Noema magazine explores how researchers have illustrated AI over the decades.

In "The Whiteness of AI" Steven Cave and Kanta Dihal discuss how AI is often portrayed as white “in colour, ethnicity, or both”.

Joanna Bryson’s paper explains the "Moral, Legal, and Economic Hazard of Anthropomorphizing Robots and AI".

In "The AI Creation Meme" Beth Singler investigates the imagery that features a human hand and a machine hand reaching out to one another.

The "Is Seeing Believing?" project from Culture A asks how we can evolve the visual language of AI.

@notmyrobots has a longer reading list of literature on the issues of depicting AI as robots, and the AI narratives project from the Royal Society and the Leverhulme Centre for the Future of Intelligence examines wider issues with the portrayal and perception of AI.

The Better Images of AI: A Guide for Users and Creators by Dr Kanta Dihal and Tania Duarte summarises the results of new and existing research undertaken as part of this project.

This research all shows that the images that are commonly used today often misrepresent the technology, reinforce harmful stereotypes and spread misleading cultural tropes.

Towards better images

We need images that more realistically portray the technology and the people behind it and point towards its strengths, weaknesses, context and applications. For example, images which:

Represent a wider range of humans and human cultures than ‘Caucasian businessperson’
Represent the human, social and environmental impacts of AI systems
Reflect the realistically messy, complex, repetitive and statistical nature of AI systems
Accurately reflect the capabilities of the technology: it is generally applied to specific tasks, does not have human-level intelligence and does not have emotions
Show realistic applications of AI now, not in some unspecified science-fiction future
Don't show physical robotic hardware where there is none
Avoid monolithic or unknowable representations of AI systems
Don't show electronic representations of human brains
Constitute a wider variety of ways to depict different types, uses, sentiments, and implications of AI

We also have a blog where we and others write about the issues and problems of images of AI and showcase adjacent projects which add to the discourse and understanding.

What we’re doing

We aim to create a new repository of better images of AI that anyone can use, starting with a collection of inspirational images. We will continue to gather images for this repository and commission original work. We welcome image submissions and funding to commission and brief more artists. The first stage of this project is designed to explore what these new images might look like, and to invite people from different creative, technical and other backgrounds to work together to develop better images.

Most images of AI we see come from stock image libraries, which become self-referential rather than engaging with the proliferating applications of AI. In showcasing some alternative approaches, and making these available, we hope to inspire users, creators and commissioners of stock images to think more about what they are communicating and how this can be represented more authentically, inclusively and creatively.

In creating new imagery we need to consider what makes a good stock image. Why do people use them and how? Is the image representing a particular part of the technology or is it trying to tell a wider story? What emotional response should the audience have when looking at it? Does it help people understand the technology and is it an accurate representation?

We are aware that changing the images we use to represent AI is not enough to address all the harmful cultural assumptions and power asymmetries embedded in AI, nor will it directly undo AI harms. However, we are hopeful that this effort can contribute to changing things for the better.

Get involved

We are currently inviting participation from organisations and individuals in the following areas:

  • Join the project team
  • Sponsor artist commissions
  • Evaluate new images
  • Submit existing images
  • Sponsor competitions
  • Help develop this website
  • Publicise the images and site
  • Become a Partner or Supporter organisation
  • Fund further work

If you want to get involved in any of these ways, or you just want to know more, then we’d love to hear from you. Please get in touch.