Q&A with Benjamin Snyder, author of "Spy Plane"
In 2020, the Baltimore Police Department had an aerial surveillance plane that could supposedly photograph and track every person in public view. Spy Plane reveals what happened with this controversial policing experiment. Drawing from incredible access and direct observations inside the for-profit tech startup that ran the program for Baltimore detectives, sociologist Benjamin H. Snyder recounts real criminal cases as they were worked by police using this untested tool.
Deploying aircraft with powerful cameras built by a small company called Persistent Surveillance Systems, the spy plane program promised to help police “solve otherwise unsolvable crimes” by tracking the whereabouts of suspects in violent crime cases. Created for the battlefields of Iraq, the technology had never been deployed on so large a scale in a U.S. city. This riveting book gives an unprecedented look inside the shadowy world of for-profit law enforcement technology experiments, explaining why police and community leaders place so much faith in unproven technology to fix the problem of urban violence but continually come up short.
Benjamin H. Snyder is Associate Professor of Sociology at Williams College. He is the author of The Disrupted Workplace: Time and the Moral Order of Flexible Capitalism.
What motivated you to write Spy Plane?
I have a longstanding interest in how people give almost magical qualities to technology, especially the idea that they can “solve” social problems. In 2017, I heard an episode of the podcast Radiolab, which covered the story of the spy plane’s first trial run in Baltimore in 2016. It made a lot of claims that sounded far-fetched to me. My wife’s family is also from Baltimore, so I care about the city deeply. I thought there might be a more complex story to the plane, and I was well positioned to investigate, so I started poking around.
How did you get access to this data and shadowy world of law enforcement technology? What did this insider perspective enable you to see that others haven’t?
I was able to get access because of the unusual openness of Ross McNutt, one of the inventors of spy plane technology and the CEO of Persistent Surveillance Systems. In 2017, I asked him if I could study his company, and he said yes. It was basically as simple as that.
Why did he give me access? McNutt is what technology scholars call a “techno-solutionist.” He was confident that, if the public could see how the technology works, they would embrace aerial surveillance as a relatively cheap, efficient “fix” to crime in a city that is otherwise ill-equipped to address root causes. So, he wanted me to provide “total transparency,” as he called it. He let me shadow his analysts, even giving me key-card access to the company’s data terminals where I could look at the raw footage from the plane. I was also able to form relationships with detectives, who would come in and out of the operations center to get help on their cases.
This access allowed me to see beyond the hype of the spy plane. In the media, the program was often portrayed as either a cutting-edge piece of military hardware (an all-seeing eye in the sky) or the embodiment of Orwellian dystopia (Big Brother is watching). Seeing how the tool was deployed in a live situation revealed something much more complex, though no less concerning. A lot of this complexity would have been impossible to see if I had relied only on the public-facing parts of the program, such as interim reports from police and auditors or PSS’s marketing materials.
While the book centers on this specific case in Baltimore, what larger story does it tell us about surveillance, technology, and law enforcement in the U.S.?
The main thing I found is that the spy plane was glitchy, unreliable, and unleashed new harms on the public. For example, it turned out that it was relatively easy for spy plane analysts to accidentally track the wrong person from a homicide scene and not realize it, a mistake known as a “false positive.” At least once, this nearly led to the arrest of a completely innocent person. Fortunately, the mistake was caught in time, but the fact that it could even occur was not disclosed to the public before the program launched. Importantly, the risk of false positives was concentrated in already economically precarious, majority-Black neighborhoods, because those neighborhoods had been identified as good “test sites” for the experiment. This is really common for law enforcement technologies. If you dig into it, you’ll find that things like facial recognition cameras, gunshot detection systems, police bodycams, and even CCTV have a long history of being deployed first in Black neighborhoods as a field test, often without oversight. These tests often unleash unforeseen harms that the general public rarely hears about.
Why don’t these issues get more attention? Public debate about these somewhat more mundane, though certainly harmful, risks is crowded out by what I call the “boomer-doomer hype cycle.” That’s when boomers (or boosters) hype up a technology as a breakthrough silver bullet for stopping crime. Then the doomers clap back that the technology is a massive threat to humanity, which is building toward a dystopian future in which the state can “watch us all.” We see this with so-called “AI” right now.
What the case of the spy plane taught me is that both ideas are forms of hype. When they feed on each other, they make it difficult to have a clearheaded debate about basic things: How does the technology actually work in a live deployment? Does anyone know if it works, or is this an experiment? If it hasn’t been tested already, do we know what kinds of glitches and malfunctions are possible? Could it unleash new risks that are hard to foresee? Who will most likely be hurt by these? Who gets to say when to pull the plug, if problems arise? These are questions about deployment, not some far-off utopian or dystopian future. If communities were better prepared with this practical language, they might be able to head off some of the immediate dangers before the technologies ever develop into something as dystopian as Big Brother.
What was one surprising moment, insight, or story from the process of writing the book?
A few months into my fieldwork inside the operations center, I realized I could potentially be subpoenaed in a criminal or civil trial. My fieldnotes contained some pretty sensitive information about the messy reality of spy plane investigations, which different parties in the court system might find useful to their side. However, I had promised confidentiality to all the respondents in my study (except McNutt). If my fieldnotes were given to a judge, it would be a breach of ethics. It would also likely sour future relationships between startups and ethnographers. I went to the sociological literature on the subpoenaing of fieldnotes and found out that others had been in this same position. Those who refused to comply had even done jail time! I wished I had thought about that more before going in. In the end, I was never asked for my notes, thank goodness. In the methodological appendix of the book, I give future researchers some tips and tricks for approaching this kind of situation with more care than I did.
What is one key message you hope readers take away from the book?
We’re living in a time when international conflicts (in Gaza, Ukraine, Kashmir, etc.) are being used as test sites for developing sometimes ghastly new surveillance technologies. These tools, many made by for-profit startups, will most likely make their way into U.S. police forces someday, just like the spy plane made its way from Iraq to Baltimore. We in the U.S. need to anticipate the next iteration of the war-to-police pipeline by caring about and resisting these experiments abroad now. We also need to create strategies for resistance at home before these experiments continue.