It’s time yet again to talk about AI and photography. If you don’t want to, I sympathise – I’m not sure I want to either. But this thing is happening, and it is happening fast.
In just the past month or so, two significant AI stories have shaken up the world of photography. Back in April, there was the well-publicised controversy at the Sony World Photography Awards, when German artist Boris Eldagsen refused to accept his award in the creative open category, on the grounds that he had created the work using AI.
Just a couple of weeks later, in early May, Amnesty International was roundly criticised when it was revealed that it had used artificially generated images to depict police brutality in Colombia. While the organisation claimed it had done this to protect the identities of protestors in order to prevent state retaliation, critics pointed out that the move further blurred the lines between truth and fiction at a time when it can already be hard to tell the difference. And it’s hard to square any reasons Amnesty gave for its use of the image with the simple fact that using photojournalistic work from the ground in Colombia would have cost money, whereas using the AI image did not.
We are absolutely going to see more stories like this, probably very soon. Like it or not (I don’t like it), AI is already profoundly affecting the world of photography. It’s time to think about how to respond.
Photojournalism and AI
As publishers continue to experiment with using AI imagery in the manner that Amnesty International did, the question hanging over everyone’s heads is what exactly this means for the future of photography. The answers aren’t yet all that clear cut.
“The big question is whether it can fully replace reality,” says Alex McBride, a freelance photographer currently working in Ukraine. “The key thing about photojournalism, which makes it feel a little bit impenetrable by AI, is that it can't be doctored – it's a huge no-no in the field. Anyone caught even making minor alterations to images ends up being ostracised.”
Indeed, it feels a safe enough bet, for now, that AI will never replace the core purpose of photojournalism. When important global events take place, people will always want to see a real image of what has happened. But photographers don’t just make their money from photos taken yesterday – there are also residuals as images are used and reused in the exact capacity that Amnesty International used its AI-generated image. If that income source disappears, it becomes harder for someone to support themselves as a photojournalist – leading to fewer photojournalists.
“I guess it depends on how willing publications are to have photorealistic illustrations versus the real thing,” Alex says. “It could put people like me out of work if they want to avoid taking risks and paying up for imagery.”
Would readers notice if stock imagery were gradually replaced with AI-generated illustration? It’s hard to say for sure, given that the waters are getting muddier every day. Alex points me towards an Instagram user called @ai.s.a.m, who uses AI to create documentary-style scenes, and in all honesty, the resulting images are really cool. We both enjoy scrolling through them; though we do notice that @ai.s.a.m is fond of tagging their creations with #documentaryphotography, despite them by definition fulfilling neither criterion.
Would someone idly scrolling through the #documentaryphotography tag notice that this image is not a real photograph? Would you? How about if it were printed in a magazine, or a book?
“They are not.”
I attended the press view of the Sony World Photography Awards shortly before the ceremony. I walked among the printed photos, on the lookout for potential feature ideas. I remember passing Boris Eldagsen’s image, and the main thought I had was that it wasn’t really my thing. Then I moved on.
I’m not going to pretend that had I inspected the image closer, I would have spotted its true nature and blown the whole thing wide open. For one, Eldagsen claims he told the SWPA that the image was AI-generated immediately upon receipt of the award. In the organisation’s version of events, it had established the image was “co-created” using AI and was looking forward to further discussion with Eldagsen on the topic. He disputes this, but regardless, it seems the organisers were at least somewhat aware of the image’s true nature before Eldagsen turned down his award.
For two, the fact that the image was printed and hung on the walls of Somerset House automatically gave it a certain cachet in my naive punter’s brain. I don’t recall seeing any information indicating the image had been made in whole or in part using AI. I just assumed an image hung on these walls, in this context, would be a real photograph, not a computer-generated one, the same way you assume that if a sandwich has been placed on the deli counter, it has probably been made using real meat, cheese and bread, as opposed to cardboard and sawdust.
Such assumptions have been relatively safe to make throughout the lifespan of photography, but clearly are no longer. Eldagsen said he applied for the award with his AI image as a “cheeky monkey”, to find out whether the competitions are ready for what’s coming. “They are not,” he concluded.
A lot of people are clearly unnerved about the pace at which AI is moving. A recent open letter from the Future of Life Institute exhorted the top AI labs to pause development of systems more powerful than GPT-4, for at least six months. Let us take a breath here, was the general sentiment.
“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones?” the letter asked.
Among the signatories was Craig Peters, CEO of Getty Images, as well as multiple photographers working in various disciplines. While it’s hard to argue with the general sentiments in the letter, it’s even harder to imagine it having the slightest effect on any of the people working on the next generation of AIs.
After all, as the letter acknowledges, it has become a race. Why should I stop when my competitors won’t, developers will ask, and they’ll have a point. What’s more, given how fast open source AI models are proliferating, there isn’t really any central body to meaningfully enact such a pause. Right now, anyone can experiment with developing AI tools if they want to. And they do want to.
Say it again: this is happening. So it’s time to start thinking about what to do about it.
It was heartening to see the swift backlash in response to the Amnesty International AI image. Since pressure on the AI development communities is clearly not going to work, it should instead be applied to the publishers. An open letter that I do think is worth adding a name to is this one from author and activist Molly Crabapple and the Center for Artistic Inquiry and Reporting, calling on publishers to cease using AI-generated images.
It’s possible that artists and photographers may end up needing to take a collective stand, and refuse to work with publishers who continue to use AI generation tools as a means to avoid paying for their artwork. This may also mean legal action from those whose work the AI tools are collaging, chopping up and remixing – and indeed, this has started to happen.
For the competitions, the organisers should be employing people who have expertise in AI to help identify generated images, ideally before such images get hung on the walls of Somerset House. That seems like something of a no-brainer.
However, there could be other steps to take, too. For instance, photography awards could introduce categories specifically for images that have been generated in part or in whole by AI – fencing them off to their own section, with its own judging process.
In some places this is already happening. The Gomma Photography Grant, an annual monetary award supporting emerging photographers in various genres, recently announced the creation of an “experimental playground” for AI imagery, called Gommalabs.
“We will give space to artists to work, play and publish AI generated images and hybrids,” the organisation said. “As a side line of Gomma Books, Gommalabs will be specialised in the publication of ground-breaking, avant-garde, experimental books; while Gomma Books will continue to publish film and digital photography, a craft that takes time and skill to master and that is and always will be impossible to be replaced.”
And in case you didn’t pick up on the implication there, Gomma proceeded to spell it out: “Please also note that Gomma Photography Grant will not accept, nor award AI generated images.”
This feels like an effective way to at least try to contain the AI stuff, and there’s no reason that competitions like Landscape Photographer of the Year or, yes, the Sony World Photography Awards could not implement something similar.
I’m sure some of you will have recoiled at that. So I want to make it clear – I don’t like this either. Putting AI categories in photography awards is not something that I think the world particularly needs. It does not spark joy in my soul. But, again: this is happening.
And I admit this is a deeply imperfect solution. After all, bad actors will still try to sneak their AI images into the “real” categories – you know they will. Plus, there are some tricky grey areas to be sorted through. Would an image that had been tuned up with one of Photoshop’s AI-powered editing tools, or captured using AI subject-detect autofocus, have to be sorted into the AI category? This would mean it would sit alongside something made up from whole cloth by Midjourney. Doesn’t seem particularly fair to either image, does it?
So, what now?
I’m really not proclaiming to have solved the AI-in-photo-competitions problem, King Solomon-style, just as I’m not claiming to have saved the photojournalism industry from death by a thousand generative cuts. But I hope that by suggesting potential practical solutions, people in the photography industry who are smarter than me will start thinking about this. And I suppose “smarter people than me should be thinking about this” is basically the main reason that anyone writes articles about anything.
But, you might be saying, maybe this is just another tech-bro fad, like NFTs and Mark Zuckerberg’s metaverse. Maybe it’s all going to go away in a few months, be put back in its box and chalked up as another of Silicon Valley’s bad ideas. Maybe we just don’t have to worry about it, and soon we’ll all go back to talking about leading lines and depth of field again.
And that might well happen. But my question is – does it feel that way to you? Because it doesn’t to me. Not this time.