By Brooke Belisle, author of Depth Effects: Dimensionality from Camera to Computation
Recent headlines announce the end of photography, as AI changes what counts as a photograph, or makes it impossible to judge what does. The New York Times has published multiple interactive articles prompting readers to test whether they can “believe their eyes” by distinguishing photographs from AI-generated images. The results suggest that most readers are so bad at this task that our performance skews worse than random: we are good at getting it wrong. The implications of this uncertainty, or overconfidence, extend beyond problems of fakery and tricks of photorealism to raise broader questions about visual mediation in our moment. As AI changes how images are understood to capture and convey whatever they depict, our everyday ways of seeing and knowing through images seem to be in crisis.
What does it mean if I am more likely to identify an AI-generated image of a human face, rendered entirely from patterns of data about other images, as a “real person,” and more likely to label a photograph of an actual person a “fake”? This is not just about AI but about the aesthetic and cultural logics that condition what a “person” looks like and how personhood is pictured in photographs. My recent book, Depth Effects: Dimensionality from Camera to Computation, offers a long view of questions like these, which have become all the more important with the rise of AI.
Over the past decade, mainstream processes of visual representation have been profoundly transformed by computational techniques: ways of making images that rely not only on digital technology but also on the statistical and predictive operations of machine learning and neural processing. These techniques engage images as spatialized patterns of information; they analyze, extract, and predict relationships between pixels in coordinate arrays. The spatial strategies of computational imaging disrupt long-standing conventions of lens-based mediation, the ways that photographic and cinematic images have seemed to capture and represent the visible world. Yet even as new techniques unsettle these norms, they may not so much break from the history of lens-based media as offer different ways to understand it.
Depth Effects proposes an alternative throughline from very early experiments with photography to contemporary problems in computational imaging. It looks back to often-overlooked practices from when photography was a new medium, before it was defined through modernist and medium-specific ideas about the singular, indexical imprint of an instant. It shows how recent advances in object recognition, depth mapping, and photogrammetry point back, unexpectedly, to nineteenth-century strategies of stereoscopic photography and photosculpture, to the first spatial conventions for photographing people, and to the earliest efforts to use cameras for making maps. It reconsiders how dimensional relationships between and within images have seemed to render the voluminous shape of things, the hidden depths of subjectivity, and the objective coherence of the world itself.
Depth Effects will not teach you how to ace tests for detecting AI-generated images. Instead, it will show you why current rhetoric about “real” and “fake” photographs misses the broader questions we need to consider as visual technologies and aesthetic norms of visual mediation change. It will help you understand, for example, how almost every image you see or create on a smartphone today is already inflected by AI, and how that could matter given the ways these images mediate how you view, interpret, and interact with the world around you. It will challenge you to think about what is at stake in the ways that flat, bounded images or discrete arrays of pixels capture, convey, and even constitute the dimensions of our lived experience and everyday lives.