The Art of Science Photography in the Age of AI
As a science photographer for over 30 years, Felice Frankel has witnessed the evolution of the tools used to convey research findings visually. Her recent opinion piece in the journal Nature highlighted the growing use of generative artificial intelligence (GenAI) to create images and its implications for communicating research. In an interview, Frankel discussed the challenges and concerns surrounding the use of AI-generated images in science.
The Line Between Acceptable and Unacceptable Manipulation
Q: You’ve mentioned that as soon as a photo is taken, the image can be considered "manipulated." How do you draw the line between acceptable and unacceptable manipulation?
A: In the broadest sense, the decisions made on how to frame and structure the content of an image, along with the tools used to create it, are already a manipulation of reality. We need to remember that the image is merely a representation of the thing, and not the thing itself. The critical issue is not to manipulate the data, and in the case of most images, the data is the structure.
Ensuring Ethical Communication of Research
Q: What can researchers do to ensure their research is communicated correctly and ethically?
A: With the advent of AI, I see three main issues concerning visual representation: the difference between illustration and documentation, the ethics around digital manipulation, and a continuing need for researchers to be trained in visual communication. For years, I have been trying to develop a visual literacy program for science and engineering researchers. We need to require students to learn how to critically look at a published graph or image and decide if there is something weird going on with it. We need to discuss the ethics of "nudging" an image to look a certain predetermined way.
The Future of Science Communication
Q: Generative AI is not going away. What do you see as the future for communicating science visually?
A: For the Nature article, I decided that a powerful way to question the use of AI in generating images was by example. I used one of the diffusion models to create an image using the following prompt: "Create a photo of Moungi Bawendi’s nano crystals in vials against a black background, fluorescing at different wavelengths, depending on their size, when excited with UV light." The results of my AI experimentation were often cartoon-like images that could hardly pass as reality, let alone documentation, but there will come a time when they will be. In conversations with colleagues in the research and computer-science communities, all agree that we should have clear standards on what is and is not allowed. Most importantly, a GenAI visual should never be allowed as documentation.
Conclusion
As AI-generated images become more prevalent, it is crucial for researchers to understand the importance of transparency and ethics in visual communication. By requiring clear labeling, prompt disclosure, and image provenance, we can ensure that AI-generated visuals are used responsibly and effectively.
FAQs
- How do you define acceptable and unacceptable manipulation in image creation?
In the broadest sense, the decisions made on how to frame and structure the content of an image, along with the tools used to create it, are already a manipulation of reality.
- What can researchers do to ensure their research is communicated correctly and ethically?
Researchers should require students to learn how to critically examine a published graph or image and decide whether something about it is amiss. They should also discuss the ethics of "nudging" an image to look a certain predetermined way.
- What is the future of science communication with the rise of AI?
AI-generated visuals will be useful for illustration, but we should require clear standards on what is and is not allowed. A GenAI visual should never be allowed as documentation.