In October, Christie’s became the first auction house to sell a piece of AI art.
With AI becoming incorporated into more aspects of our daily lives, from writing to driving, it’s only natural that artists would also start to experiment with artificial intelligence. In fact, Christie’s just sold its first piece of AI art — a blurred face titled Portrait of Edmond Belamy — for $432,500, 45 times its original estimate.
The piece sold at Christie’s is part of a new wave of AI art created via machine learning. Paris-based artists Hugo Caselles-Dupré, Pierre Fautrel and Gauthier Vernier fed thousands of portraits into an algorithm, “teaching” it the aesthetics of past examples of portraiture. The algorithm then created Portrait of Edmond Belamy. The painting is “not the product of a human mind,” the auction house noted in its preview. “It was created by artificial intelligence, an algorithm defined by [an] algebraic formula.”
If artificial intelligence is used to create images, can the final product really be thought of as art? Should there be a threshold of influence over the final product that an artist needs to wield? As the director of the Art & AI Lab at Rutgers University, I’ve been wrestling with these questions — specifically, the point at which the artist should cede credit to the machine.
Machines Enroll in Art Class
Over the last 50 years, several artists have written computer programs to generate art — what I call “algorithmic art.” It requires the artist to write detailed code with an actual visual outcome in mind. One of the earliest practitioners of this form was Harold Cohen, who wrote the program AARON to produce drawings that followed a set of rules he had created. But the AI art that has emerged over the past couple of years incorporates machine-learning technology. Artists create algorithms not to follow a set of rules, but to “learn” a specific aesthetic by analyzing thousands of images. The algorithm then tries to generate new images that adhere to the aesthetic it has learned.
To begin, the artist chooses a collection of images to feed the algorithm, a step I call “pre-curation.” For the purpose of this example, let’s say the artist chooses traditional portraits from the past 500 years. Most of the AI artworks that have emerged over the past few years have used a class of algorithms called generative adversarial networks. First introduced by computer scientist Ian Goodfellow in 2014, these algorithms are called “adversarial” because they have two sides: One generates random images; the other judges those images against the input and deems which best align with it.
So the portraits from the past 500 years are fed into a generative AI algorithm that tries to imitate these inputs. The algorithm then comes back with a range of output images, and the artist must sift through them and select those he or she wishes to use, a step I call “post-curation.” So there is an element of creativity: The artist is very involved in pre- and post-curation. The artist might also tweak the algorithm as needed to generate the desired outputs.
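To make this two-sided setup concrete, here is a minimal sketch in Python using PyTorch. It is purely illustrative and not the code behind Portrait of Edmond Belamy: the image size, network shapes, training loop and file names are all assumptions. One network (the generator) turns random noise into candidate images, the other (the discriminator) judges them against the pre-curated portraits, and the artist later sifts through sampled outputs in the post-curation step.

```python
import torch
import torch.nn as nn

LATENT_DIM = 100          # size of the random noise vector the generator starts from
IMG_PIXELS = 64 * 64 * 3  # a flattened 64x64 RGB "portrait" (an assumed size)

# The generator: turns random noise into an image-shaped tensor.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 512), nn.ReLU(),
    nn.Linear(512, 1024), nn.ReLU(),
    nn.Linear(1024, IMG_PIXELS), nn.Tanh(),
)

# The discriminator: judges how closely an image resembles the curated portraits.
discriminator = nn.Sequential(
    nn.Linear(IMG_PIXELS, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_portraits: torch.Tensor) -> None:
    """One adversarial round on a batch of pre-curated portraits, flattened to IMG_PIXELS."""
    batch = real_portraits.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. Teach the discriminator to tell curated portraits from generated ones.
    fakes = generator(torch.randn(batch, LATENT_DIM)).detach()
    d_loss = bce(discriminator(real_portraits), real_labels) + \
             bce(discriminator(fakes), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2. Teach the generator to produce images the discriminator accepts as "real".
    g_loss = bce(discriminator(generator(torch.randn(batch, LATENT_DIM))), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# After training, the artist samples many candidates and keeps only a few:
# the "post-curation" step described above. Counts and file names are illustrative.
with torch.no_grad():
    candidates = generator(torch.randn(200, LATENT_DIM)).view(200, 3, 64, 64)
torch.save(candidates, "candidates_for_review.pt")  # the artist reviews these offline
```

The point of the sketch is the division of labor: the algorithm generates, the two networks push against each other, and the artist decides what goes in and what comes out.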
Serendipity or Malfunction?
The generative algorithm can produce images that surprise even the artist presiding over the process. For example, a generative adversarial network being fed portraits could end up producing a series of deformed faces. What should we make of this? Psychologist Daniel E. Berlyne studied the psychology of aesthetics for several decades. He found that novelty, surprise, complexity, ambiguity and eccentricity tend to be the most powerful stimuli in works of art. The generated portraits from the generative adversarial network — with all of the deformed faces — are certainly novel, surprising and bizarre.
They also evoke British figurative painter Francis Bacon’s famous deformed portraits, such as Three Studies for a Portrait of Henrietta Moraes. But there’s something missing in the deformed, machine-made faces: intent. While it was Bacon’s intent to make his faces deformed, the deformed faces we see in the example of AI art aren’t necessarily the goal of the artist or the machine. What we are looking at are instances in which the machine has failed to properly imitate a human face and has instead spit out some surprising deformities. Yet this is exactly the sort of image that Christie’s auctioned.
Does this outcome really indicate a lack of intent? I would argue that the intent lies in the process, even if it doesn’t appear in the final image. For example, to create The Fall of the House of Usher, artist Anna Ridler took stills from a 1929 film version of the Edgar Allan Poe short story “The Fall of the House of Usher.” She made ink drawings from the still frames and fed them into a generative model, which produced a series of new images that she then arranged into a short film.
Another example is Mario Klingemann’s The Butcher’s Son, a nude portrait that was generated by feeding the algorithm images of stick figures and images of pornography. I use these two examples to show how artists can really play with these AI tools in any number of ways. While the final images might have surprised the artists, they didn’t come out of nowhere: There was a process behind them, and there was certainly an element of intent.
A Form of Conceptual Art
Nonetheless, many are skeptical of AI art. Pulitzer Prize-winning art critic Jerry Saltz has said he finds the art produced by AI artists boring and dull, including The Butcher’s Son. Perhaps the skeptics are correct in some cases. In the deformed portraits, for example, you could argue that the resulting images aren’t all that interesting. They’re really just imitations — with a twist — of pre-curated inputs.
But it’s not just about the final image. It’s about the creative process — one that involves an artist and a machine collaborating to explore new visual forms in revolutionary ways. For this reason, I have no doubt that this is conceptual art, a form that dates back to the 1960s, in which the idea behind the work and the process behind it matter more than the outcome. As for The Butcher’s Son, one of the pieces Saltz derided as boring? It recently won the Lumen Prize, a prize dedicated to art created with technology. As much as some critics might decry the trend, it seems that AI art is here to stay.
[This article was originally published by The Conversation.]
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.