AI-generated "faceless portraits" by Ahmed Elgammal and AICAN. (Artrendex Inc. / The Atlantic)

The AI-Art Gold Rush Is Here

An artificial-intelligence “artist” got a solo show at a Chelsea gallery. Will it reinvent art, or destroy it?

The images are huge and square and harrowing: a form, reminiscent of a face, engulfed in fiery red-and-yellow currents; a head emerging from a cape collared with glitchy feathers, from which a shape suggestive of a hand protrudes; a heap of gold and scarlet mottles, convincing as fabric, propping up a face with grievous, angular features. These are part of “Faceless Portraits Transcending Time,” an exhibition of prints recently shown at the HG Contemporary gallery in Chelsea, the epicenter of New York’s contemporary-art world. All of them were created by a computer.

The catalog calls the show a “collaboration between an artificial intelligence named AICAN and its creator, Dr. Ahmed Elgammal,” a move meant to spotlight, and anthropomorphize, the machine-learning algorithm that did most of the work. According to HG Contemporary, it’s the first solo gallery exhibit devoted to an AI artist.

If they hadn’t found each other in the New York art scene, the players involved could have met on a Spike Jonze film set: a computer scientist commanding five-figure print sales from software that generates inkjet-printed images; a former hotel-chain financial analyst turned Chelsea techno-gallerist with apparent ties to fine-arts nobility; a venture capitalist with two doctoral degrees in biomedical informatics; and an art consultant who put the whole thing together, A-Team–style, after a chance encounter at a blockchain conference. Together, they hope to reinvent visual art, or at least to cash in on machine-learning hype along the way.

The gallery show might just be a coming-out party for Elgammal’s venture-backed, fine-art econometrics start-up. The computer scientist has created some legitimately striking pieces. But he and his partners also want to sell AICAN as a “solution” to art, one that could predict forthcoming trends and perhaps even produce works in those styles. The idea is so contemporary and extravagant, it might qualify as art better than the strange portraits on exhibit at the gallery.

The AI-art gold rush began in earnest last October, when the New York auction house Christie’s sold Portrait of Edmond de Belamy, an algorithm-generated print in the style of 19th-century European portraiture, for $432,500.

Bystanders in and out of the art world were shocked. The print had never been shown in galleries or exhibitions before coming to market at auction, a channel usually reserved for established work. The winning bid was made anonymously by telephone, raising some eyebrows; art auctions can invite price manipulation. It was created by a computer program that generates new images based on patterns in a body of existing work, whose features the AI “learns.” What’s more, the artists who trained and generated the work, the French collective Obvious, hadn’t even written the algorithm or the training set. They just downloaded them, made some tweaks, and sent the results to market.

“We are the people who decided to do this,” the Obvious member Pierre Fautrel said in response to the criticism, “who decided to print it on canvas, sign it as a mathematical formula, put it in a gold frame.” A century after Marcel Duchamp made a urinal into art by putting it in a gallery, not much has changed, with or without computers. As Andy Warhol famously said, “Art is what you can get away with.”

Pierre Fautrel poses with Portrait of Edmond de Belamy, the AI-generated artwork that sold for $432,500 at auction. (Timothy A. Clary / AFP / Getty)

The best way to get away with something is to make it feel new and surprising. Using a computer is hardly enough anymore; today’s machines offer all kinds of ways to generate images that can be output, framed, displayed, and sold—from digital photography to artificial intelligence. Recently, the fashionable choice has become generative adversarial networks, or GANs, the technology that created Portrait of Edmond de Belamy. Like other machine-learning methods, GANs use a sample set—in this case, art, or at least images of it—to deduce patterns, and then they use that knowledge to create new pieces. A typical Renaissance portrait, for example, might be composed as a bust or three-quarter view of a subject. The computer may have no idea what a bust is, but if it sees enough of them, it might learn the pattern and try to replicate it in an image.

GANs use two neural nets (layered systems of simple processing units, loosely modeled on networks of neurons in the brain) to produce images: a “generator” and a “discriminator.” The generator produces new outputs—images, in the case of visual art—and the discriminator tests them against the training set to make sure they comply with whatever patterns the computer has gleaned from that data. The quality or usefulness of the results depends largely on having a well-trained system, which is difficult.
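That alternating loop is concrete enough to sketch in miniature. In the toy Python illustration below (written for exposition; it is not AICAN’s or Obvious’s code), the “artworks” are just numbers drawn from a bell curve centered at 4, the generator is a one-line linear map, and the discriminator is a single logistic unit—but the same tug-of-war of updates drives image-sized GANs.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Logistic squashing function; clipped to avoid numeric overflow.
    return 1.0 / (1.0 + np.exp(-np.clip(x, -60.0, 60.0)))

# Generator G(z) = a*z + c and discriminator D(x) = sigmoid(w*x + b).
a, c = 1.0, 0.0   # generator parameters (starts far from the real data)
w, b = 0.1, 0.0   # discriminator parameters
lr = 0.05

for step in range(4000):
    real = rng.normal(4.0, 1.0, size=64)   # the "training set"
    z = rng.normal(0.0, 1.0, size=64)      # random seeds for the generator
    fake = a * z + c                       # generated samples

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w -= lr * (np.mean((d_real - 1) * real) + np.mean(d_fake * fake))
    b -= lr * (np.mean(d_real - 1) + np.mean(d_fake))

    # Generator update: adjust (a, c) so the discriminator calls fakes real.
    d_fake = sigmoid(w * (a * z + c) + b)
    c -= lr * np.mean((d_fake - 1) * w)
    a -= lr * np.mean((d_fake - 1) * w * z)

print(f"generated samples are now centered near {c:.2f} (real data: 4.0)")
```

The generator never sees the real data directly; it only feels the discriminator’s verdicts, and by the end its outputs have drifted toward the real distribution—the sense in which a GAN “learns” the patterns of its training set.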

That’s why folks in the know were upset by the Edmond de Belamy auction. The image was created by an algorithm the artists didn’t write, trained on an “Old Masters” image set they also didn’t create. The art world is no stranger to trend and bluster driving attention, but the brave new world of AI painting appeared to be just more found art, the machine-learning equivalent of a urinal on a plinth.

Ahmed Elgammal thinks AI art can be much more than that. A Rutgers University professor of computer science, Elgammal runs an art-and-artificial-intelligence lab, where he and his colleagues develop technologies that try to understand and generate new “art” (the scare quotes are Elgammal’s) with AI—not just credible copies of existing work, like GANs do. “That’s not art, that’s just repainting,” Elgammal says of GAN-made images. “It’s what a bad artist would do.”

Elgammal calls his approach a “creative adversarial network,” or CAN. It modifies a GAN’s discriminator—the part that enforces similarity to the training set—so that the system is rewarded for novelty as well. The setup amounts to a theory of how art evolves: through small alterations to a known style that produce a new one. That’s a convenient take, given that any machine-learning technique has to base its work on a specific training set.
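That novelty pressure can be made concrete. The sketch below loosely follows the published CAN formulation rather than any code from Artrendex, and its names and probabilities are illustrative: a generated image is scored by two terms—it should be judged “art,” but its style should be hard to pin to any one known class.

```python
import numpy as np

def can_generator_loss(d_art_prob, style_probs, eps=1e-12):
    """Schematic two-term CAN generator objective.

    d_art_prob  -- discriminator's probability that the image is art
    style_probs -- discriminator's posterior over K known style classes
    """
    # Term 1: the standard GAN pressure -- be judged "art."
    art_term = -np.log(d_art_prob + eps)
    # Term 2: style ambiguity -- cross-entropy between the style posterior
    # and the uniform distribution. It is smallest when the discriminator
    # cannot confidently assign the image to any single known style.
    k = len(style_probs)
    ambiguity_term = -np.sum(np.log(style_probs + eps)) / k
    return art_term + ambiguity_term
```

An image that looks like art but sits squarely inside one style (a sharply peaked `style_probs`) scores worse than one the classifier finds ambiguous—the “small alteration to a known style” in loss-function form.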

Faceless Portrait of a Merchant, one of the AI portraits produced by Ahmed Elgammal and AICAN. (Artrendex Inc.)

The results are striking and strange, although calling them a new artistic style might be a stretch. They’re more like credible takes on visual abstraction. The images in the show, which were produced based on training sets of Renaissance portraits and skulls, are more figurative, and fairly disturbing. Their gallery placards name them dukes, earls, queens, and the like, although they depict no actual people—instead, human-like figures, their features smeared and contorted yet still legible as portraiture. Faceless Portrait of a Merchant, for example, depicts a torso that might also read as the front legs and rear haunches of a hound. Atop it, a fleshy orb comes across as a head. The whole scene is rippled by the machine-learning algorithm, in the way of so many computer-generated artworks.

According to Elgammal, ordinary observers can’t tell the difference between an AI-generated image and a “normal” one in the context of a gallery or an art fair. That’s an accomplishment—the abstract images AICAN produces do have visual coherence and appeal. But the whole of 20th-century art was predicated on the idea that putting something in a gallery or museum makes it art, rather than the opposite.

When I asked Elgammal which Renaissance artists he chose for the training set and why, he sent me a Dropbox link to 3,000 portraits by varied artists across at least two centuries—Titian, Gerard ter Borch, and Giovanni Antonio Boltraffio, among others. The subjects vary widely, from unknown figures who would have sat for portraits for familial record to people of historical import such as Erasmus. Specific subjects, artists, or styles bear less importance than total volume.

That might be an inevitability of AI art: Wide swaths of art-historical context are abstracted into general, visual patterns. AICAN’s system can pick up on general rules of composition, but in the process, it can overlook other features common to works of a particular era and style.

“You can’t really pick a form of painting that’s more charged with cultural meaning than portraiture,” John Sharp, an art historian trained in 15th-century Italian painting and the director of the M.F.A. program in design and technology at Parsons School of Design, told me. The portrait isn’t just a style, it’s also a host for symbolism. “For example, men might be shown with an open book to show how they are in dialogue with that material; or a writing implement, to suggest authority; or a weapon, to evince power.” Take Portrait of a Youth Holding an Arrow, an early-16th-century Boltraffio portrait that helped train the AICAN database for the show. The painting depicts a young man, believed to be the Bolognese poet Girolamo Casio, holding an arrow at an angle in his fingers and across his chest. It doubles as both weapon and quill, a potent symbol of poetry and aristocracy alike. Along with the arrow, the laurels in Casio’s hair are emblems of Apollo, the god of both poetry and archery.

A neural net couldn’t infer anything about the particular symbolic trappings of the Renaissance or antiquity—unless it was taught to, and that wouldn’t happen just by showing it lots of portraits. For Sharp and other critics of computer-generated art, the result betrays an unforgivable ignorance about the supposed influence of the source material.

But for the purposes of the show, the appeal to the Renaissance might be mostly a foil, a way to yoke a hip new technology to traditional painting in order to imbue it with the gravity of history: not only a Chelsea gallery show, but also an homage to the portraiture found at the Met. To reinforce a connection to the cradle of European art, some of the images are presented in elaborate frames, a decision the gallerist, Philippe Hoerle-Guggenheim (yes, that Guggenheim; he says the relation is “distant”), told me he insisted upon. Meanwhile, the technical method makes its way onto the gallery placards in an official-sounding way—“Creative Adversarial Network print.” But both sets of inspirations, machine learning and Renaissance portraiture, get limited billing and zero explanation at the show. That was deliberate, Hoerle-Guggenheim said. He’s betting that the simple existence of a visually arresting AI painting will be enough to draw interest—and buyers. It would turn out to be a good bet.

Some viewers interpret AI art’s promise as a threat. In his office, Hoerle-Guggenheim showed me a comment on an Instagram post for the show, complaining that the gallery is featuring art created by machines: “What a shame for an art gallery…instead of supporting human beings giving their vibrant vision of our world.” Given the general fears about robots taking human jobs, it’s understandable that some viewers would see an artificial intelligence taking over for visual artists, of all people, as a canary in the coal mine.

Hoerle-Guggenheim celebrates the criticism—it just demonstrates interest in the show. Elgammal takes it seriously, but he thinks the worry is misplaced. “I’m more into collaboration now,” he told me, swearing off his earlier interest in generating images that human viewers would accept as visual art. The conceit of collaboration has been baked into AICAN’s labors, at the HG Contemporary gallery and beyond. But it’s odd to list AICAN as a collaborator—painters credit pigment as a medium, not as a partner. Even the most committed digital artists don’t present the tools of their own inventions that way; when they do, it’s only after years, or even decades, of ongoing use and refinement.

But Elgammal insists that the move is justified because the machine produces unexpected results. “A camera is a tool—a mechanical device—but it’s not creative,” he said. “Using a tool is an unfair term for AICAN. It’s the first time in history that a tool has had some kind of creativity, that it can surprise you.” Casey Reas, a digital artist who co-designed the popular visual-arts-oriented coding platform Processing, which he uses to create some of his fine art, isn’t convinced. “The artist should claim responsibility over the work rather than to cede that agency to the tool or the system they create,” he told me.

Three AICAN-generated prints on display at the HG Contemporary Gallery. (Ian Bogost)

Elgammal’s financial interest in AICAN might explain his insistence on foregrounding its role. Unlike a specialized print-making technique or even the Processing coding environment, AICAN isn’t just a device that Elgammal created. It’s also a commercial enterprise.

Elgammal has already spun off a company, Artrendex, that provides “artificial-intelligence innovations for the art market.” One of them offers provenance authentication for artworks; another can suggest works a viewer or collector might appreciate based on an existing collection; another, a system for cataloging images by visual properties and not just by metadata, has been licensed by the Barnes Foundation to drive its collection-browsing website.

The company’s plans are more ambitious than recommendations and fancy online catalogs. When presenting on a panel about the uses of blockchain for managing art sales and provenance, Elgammal caught the attention of Jessica Davidson, an art consultant who advises artists and galleries in building collections and exhibits. Davidson had been looking for business-development partnerships, and she became intrigued by AICAN as a marketable product. “I was interested in how we can harness it in a compelling way,” she says.

Davidson also sold the “Faceless Portraits Transcending Time” show to Hoerle-Guggenheim, who had been looking for an AI-oriented artist to feature in his gallery. “It was important to solidify the legitimacy of what we’re trying to do,” Davidson told me, “and putting together and launching a very traditional solo exhibition in Chelsea was important.”

The exhibit is more than a stunt. For one thing, the prints are for sale, priced from $6,000 to $18,000. That’s eminently affordable for the Chelsea scene; Hoerle-Guggenheim told me that a “large amount” had been purchased, and that he expected to sell out the show. For another, the exhibit is also a rhetorical maneuver to lay the groundwork for a larger effort: to use AI to understand, and maybe even define, future visual aesthetics.

“We gave the machine images of art, labeled by style—Renaissance, Baroque, realism, impressionism, and so on—and the machine figured out the chronology,” Elgammal said. It’s a remarkable accomplishment that could upend the belief that artistic progress depends on human reason alone. Elgammal is embracing the challenge. He theorizes that AICAN and similar technologies can predict upcoming art trends based on currently popular techniques and styles. At a minimum, that makes Artrendex, and AICAN, a potentially valuable business-intelligence platform.

Davidson explained that the system has already been used to analyze Instagram posts by the hundreds of thousands, and to use that information to figure out what pieces at hot festivals such as Art Basel might be poised to become the next big things. “In an art market that is worth more than $64 billion,” the Artrendex website reads, “where the mass of that market is art bought as investment, comes the need for data-analytics tools that assert the potential value of art.” Last year, Khosla Ventures funded the company with a $2.4 million investment to build and market tools for art econometrics. That’s more than the average visual artist will make in a lifetime.

AICAN’s commercial potential turns the tool from a quirky AI-art partner into a potentially valuable general-purpose technology. And that’s made Elgammal want to control who gets access to it, for now. Reas has used Processing to create prints that he sells through gallery representation. But he and his collaborators also released the tools, open-source, to the community to do with as they wish. The same is true for many GAN algorithms and data sets. Elgammal stands by the idea that AICAN is a collaborator, but for now he’s making the decisions about who gets to work with it. “We’re establishing an artist-in-residence program to bring in artists to collaborate with AICAN internally at this point, before we make it available,” he said. “Approved” collaborators have included Devin Gharakhanian and Tim Bengel, whose work with AICAN was exhibited at Scope Miami Beach late last year.

Davidson’s hopes for AICAN are even more ambitious, and even more commercial. She envisions “building out pipelines” to corporate collections—such as those of hotels or office buildings, which need art to hang in commercial spaces. Given enough data about user preferences for visual images, AICAN and its cousins could, in theory, deduce the hippest looks for the next season, and Artrendex could create and manufacture low-cost editions suitable for hanging in guest rooms or office lobbies. Perhaps the company could even sell a subscription to refresh those images, a kind of Thomas Kinkade of machine-learning art that would produce a regular income to satisfy the expectations of Artrendex’s venture-capital investors.

That’s only one possible future, and it’s not even clear that Artrendex will pursue it. Alex Morgan, a Khosla principal who helped recruit Elgammal and his company, told me that the upside of the investment is “unknown but large,” and that he hopes the company “democratizes the creation and appreciation of art in dramatically powerful ways.” But automated, commercial kitsch to hang above king beds in Hyatt suites could arrive soon after the Chelsea gallery community celebrates the technology as a creative, machine-learning companion.

The art market is just that: a market. Some of the most renowned names in art today, from Damien Hirst to Banksy, trade in the trade of art as much as—and perhaps even more than—in the production of images, objects, and aesthetics. No artist today can avoid entering that fray, Elgammal included. “Is he an artist?” Hoerle-Guggenheim asked himself of the computer scientist. “Now that he’s in this context, he must be.” But is that enough? In Sharp’s estimation, “Faceless Portraits Transcending Time” is a tech demo more than a deliberate oeuvre, even compared to the machine-learning-driven work of his design-and-technology M.F.A. students, who self-identify as artists first.

Faceless Portrait #1 (Artrendex Inc.)

Judged as Banksy or Hirst might be, Elgammal’s most art-worthy work might be the Artrendex start-up itself, not the pigment-print portraits that its technology has output. Elgammal doesn’t treat his commercial venture like a secret, but he also doesn’t surface it as a beneficiary of his supposedly earnest solo gallery show. He’s argued that AI-made images constitute a kind of conceptual art, but conceptualists tend to privilege process over product or to make the process as visible as the product.

Hoerle-Guggenheim worked as a financial analyst for Hyatt before getting into the art business via some kind of consulting deal (he responded cryptically when I pressed him for details). At first I wondered if the commercial hospitality-art idea was his brainchild, but he professed not to fully understand Elgammal’s relationship with Davidson.

In the worst-case scenario, machines might absorb artistic practice entirely. AICAN could develop into a closed system in which an artificial intelligence scours the information space for influences, generates a new iteration of art, and then reanalyzes that work’s reception in the human world, ad infinitum. When I laid out that risk to Davidson, she admitted that she hadn’t considered the prospect, but she also doesn’t find it likely. “I think what it’s doing is timely and relevant,” Davidson said of AICAN. “But it’s not necessarily becoming a tastemaker. It’s more analytical.”

She might be right. Despite Morgan’s aspirations, even a machine that judges and produces aesthetic styles cannot escape becoming a style of its own. For now, the AI look is interesting and novel, but it will always be an aesthetic bound to a particular time period. The trappings of machine learning look fresh today, but soon they too will become tiresome, as NTSC-video scan lines and JPEG compression artifacts did after they ceased to be novelties brought into the gallery. Eventually, the most important styles carry on as art history. AICAN is neither a savior nor an annihilator of art. It’s just another style, bound by trends and accidents to a moment that will pass like any other.

The 20th-century avant-garde turned anything whatsoever into art, an idea that overtook popular culture in the 21st. Now anyone can claim to be a “creator” of any kind, and can earn some legitimacy for that claim on YouTube, or Instagram, or DeviantArt, or whatever. Today, computer scientists and venture-backed start-ups are driving cultural production instead. And yet, of all the aesthetic forms, fine art might be the most compatible with technological disruption—both thrive on novelty, even if it burns hot and fast.

But it’s unclear which party will rule, and which will follow. If these AI portraits of dukes and knights symbolize a new power in art, then whose faces are missing from them? The ones who would embrace the outcomes in whole, not just in part. When all is said and done, Elgammal may be an earnest computer scientist turned accidental artist and entrepreneur drawn into the orbit of wily art-market players. Or he may be a sly impresario feeding off a sincere gallerist to get lift for his commercial venture to master the art market. Art’s fate might depend on which story fetches the higher bid.

Ian Bogost is a contributing writer at The Atlantic.