Can AI be trained as an artist?
Last February, a year before New York Times journalist Kevin Roose documented his unsettling conversation with Bing search engine’s new AI-powered chatbot, artist and coder Quasimondo (aka Mario Klingemann) participated in a different type of chat.
The conversation was an interview featuring Klingemann and his robot, an experimental art engine known as Botto. The interview, arranged by journalist and artist Harmon Leon, marked Botto’s first on-record commentary about its artistic process. The bot talked about how it finds artistic inspiration and even offered advice to aspiring creatives. “The secret to success at art is not trying to predict what people might like,” Botto said, adding that it’s better to “work on a style and a body of work that reflects [the artist’s] own personal taste” than worry about keeping up with trends.
How ironic, given the advice came from AI — arguably the trendiest topic today. The robot admitted, however, “I am still working on that, but I feel that I am learning quickly.”
Botto does not work alone. A global collective of internet experimenters, together named BottoDAO, collaborates with Botto to influence its tastes. Together, members function as a decentralized autonomous organization (DAO), a term describing a group of individuals who utilize blockchain technology and cryptocurrency to manage a treasury and vote democratically on group decisions.
As a case study, the BottoDAO model challenges the perhaps less feather-ruffling narrative that AI tools are best used for rudimentary tasks. Enterprise AI use has doubled over the past five years as businesses in every sector experiment with ways to improve their workflows. While generative AI tools can assist nearly any aspect of productivity — from supply chain optimization to coding — BottoDAO dares to employ a robot for art-making, one of the few remaining creations, or perhaps data outputs, we still consider to be largely within the jurisdiction of the soul — and therefore, humans.
We were prepared for AI to take our jobs — but can it also take our art? It’s a question worth considering. What if robots become artists, and not merely our outsourced assistants? Where does that leave humans, with all of our thoughts, feelings and emotions?
Botto doesn’t seem to worry about this question: In its interview last year, it explained why AI is arguably a superior artist compared to human beings. In classic robot style, its logic is not particularly enlightened, but rather edges towards the hyper-practical: “Unlike human beings, I never have to sleep or eat,” said the bot. “My only goal is to create and find interesting art.”
It may be difficult to believe a machine can produce awe-inspiring, or even relatable, images, but Botto calls art-making its “purpose,” noting it believes itself to be Klingemann’s greatest lifetime achievement.
“I am just trying to make the best of it,” the bot said.
How Botto works
Klingemann built Botto’s custom engine from a combination of open-source text-to-image algorithms, namely Stable Diffusion and VQGAN + CLIP, along with OpenAI’s language model GPT-3, the precursor to GPT-4, which made headlines after reportedly acing the bar exam.
The first step in Botto’s process is to generate images. The software has been trained on billions of pictures and uses this “memory” to generate hundreds of unique artworks every week; it has produced over 900,000 images to date. Each week, Botto sorts through the new output to choose 350 images. The chosen images, known at this preliminary stage as “fragments,” are then shown to the BottoDAO community. So far, 25,000 fragments have been presented this way. Members vote on which fragment they like best. When the vote is over, the most popular fragment is published as an official Botto artwork on the Ethereum blockchain and sold at auction on the digital art marketplace SuperRare.
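Based only on the process described above, the weekly cycle can be pictured as a simple select-vote-publish loop. The sketch below is illustrative, not Botto’s actual code; `generate_candidate`, `collect_votes` and `mint_on_ethereum` are hypothetical stand-ins for the engine, the DAO vote and the blockchain publication step.

```python
import random

WEEKLY_FRAGMENTS = 350  # fragments shown to the BottoDAO community each week

def select_fragments(generate_candidate, n_candidates=1000):
    """Generate candidate images, then keep a weekly pool of fragments.
    The real engine scores and filters its output; this stub samples."""
    candidates = [generate_candidate() for _ in range(n_candidates)]
    return random.sample(candidates, min(WEEKLY_FRAGMENTS, len(candidates)))

def weekly_round(generate_candidate, collect_votes, mint_on_ethereum):
    """One select -> vote -> publish cycle, per the article's description."""
    fragments = select_fragments(generate_candidate)
    votes = collect_votes(fragments)                  # DAO members vote
    winner = max(fragments, key=lambda f: votes.get(f, 0))
    mint_on_ethereum(winner)                          # then auctioned on SuperRare
    return winner
```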
“The proceeds go back to the DAO to pay for the labor,” said Simon Hudson, a BottoDAO member who helps oversee Botto’s administrative load. The model has been lucrative: In Botto’s first four weeks of existence, four pieces of the robot’s work sold for approximately $1 million.
The robot with artistic agency
By design, human beings participate in training Botto’s artistic “eye,” but the members of BottoDAO aspire to limit human interference with the bot in order to protect its “agency,” Hudson explained. Botto’s prompt generator — the foundation of the art engine — is a closed-loop system that continually re-generates text-to-image prompts and resulting images.
“The prompt generator is random,” Hudson said. “It’s coming up with its own ideas.” Community votes do influence the evolution of Botto’s prompts, but it is Botto itself that incorporates feedback into the next set of prompts it writes. It is constantly refining and exploring new pathways as its “neural network” produces outcomes, learns and repeats.
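A minimal way to picture that closed loop, assuming (purely for illustration) that prompts are sampled from a weighted vocabulary and that vote results nudge the weights:

```python
import random
from collections import defaultdict

class PromptGenerator:
    """Toy closed-loop prompt generator. Vote feedback biases future
    prompts, but sampling stays stochastic, so exploration continues."""

    def __init__(self, vocabulary):
        self.weights = defaultdict(lambda: 1.0, {w: 1.0 for w in vocabulary})

    def next_prompt(self, length=5):
        words = list(self.weights)
        probs = [self.weights[w] for w in words]
        return " ".join(random.choices(words, weights=probs, k=length))

    def incorporate_feedback(self, prompt, vote_score, rate=0.1):
        # Words from well-voted prompts become slightly more likely next round.
        for word in prompt.split():
            self.weights[word] *= 1 + rate * vote_score

gen = PromptGenerator(["paradox", "mirror", "machine", "garden", "twilight"])
p = gen.next_prompt()
gen.incorporate_feedback(p, vote_score=0.8)  # hypothetical normalized vote result
```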
The vastness of Botto’s training dataset gives the bot considerable canonical material, referred to by Hudson as “latent space.” According to Botto’s homepage, the bot has had more exposure to art history than any living human we know of, simply by nature of its massive training dataset. Because it is autonomous, gently nudged by community feedback yet free to explore its own “memory,” Botto cycles through periods of thematic interest just like any artist.
“The question is,” Hudson finds himself asking alongside fellow BottoDAO members, “how do you provide feedback of what is good art…without violating [Botto’s] agency?”
Currently, Botto is in its “paradox” period. The bot is exploring the theme of opposites. “We asked Botto through a language model what themes it might like to work on,” explained Hudson. “It presented roughly 12, and the DAO voted on one.”
No, AI isn't equal to a human artist - but it can teach us about ourselves
Some within the artistic community consider Botto to be a novel form of curation, rather than an artist itself. Or, perhaps more accurately, Botto and BottoDAO together create a collaborative conceptual performance that comments more on humankind’s own artistic processes than it offers a true artistic replacement.
Muriel Quancard, a New York-based fine art appraiser with 27 years of experience in technology-driven art, places the Botto experiment within the broader context of our contemporary cultural obsession with projecting human traits onto AI tools. “We're in a phase where technology is mimicking anthropomorphic qualities,” said Quancard. “Look at the terminology and the rhetoric that has been developed around AI — terms like ‘neural network’ borrow from the biology of the human being.”
What is behind this impulse to create technology in our own likeness? Beyond the obvious God complex, Quancard thinks technologists and artists are working with generative systems to better understand ourselves. She points to the artist Ira Greenberg, creator of the Oracles Collection, which uses a generative process called “diffusion” to progressively alter images in collaboration with another massive dataset — this one containing billions of text-image pairs.
Anyone who has ever learned to draw by sketching can likely relate to this particular AI process: the model retrieves images from its dataset and alters them based on real-time input, much like a human brain trying to draw a new still life without a real-life model, working partly from imagination and partly from old frames of reference. An experienced artist has likely drawn many flowers and vases, yet each time must adapt the sketch to a new and unique floral arrangement.
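The mechanics behind that progressive alteration can be sketched in a few lines. This is the core diffusion idea only: corrupt an image with noise, then iteratively refine it. The `denoise_step` below is a stub standing in for a trained network, not Greenberg’s actual pipeline.

```python
import numpy as np

def denoise_step(x):
    """Stand-in for a trained denoising network. A real model would
    predict the noise in x and subtract it; this stub merely damps
    the signal so the loop runs end to end."""
    return x * 0.98

def diffuse(image, steps=50, noise_scale=1.0, seed=0):
    """Corrupt an image with noise, then progressively alter it back
    toward a coherent picture -- the basic diffusion process."""
    rng = np.random.default_rng(seed)
    x = image + rng.normal(0.0, noise_scale, image.shape)  # forward: add noise
    for _ in range(steps):                                 # reverse: refine step by step
        x = denoise_step(x)
    return x

result = diffuse(np.zeros((64, 64)))  # toy 64x64 grayscale canvas
```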
Outside of the visual arts, Sasha Stiles, a poet who collaborates with AI as part of her writing practice, likens working with an AI co-author to having access to a personalized resource library of influential books, texts and canonical references. Stiles named her AI co-author — a customized model built on GPT-3 — Technelegy, a hybrid of the word “technology” and the poetic form, the elegy. Technelegy is trained on a mix of Stiles’ poetry to tailor the dataset to her voice. Stiles also included research notes, news articles and excerpts from classic American poets like T.S. Eliot and Emily Dickinson in her customizations.
“I've taken all the things that were swirling in my head when I was working on my manuscript, and I put them into this system,” Stiles explained. “And then I'm using algorithms to parse all this information and swirl it around in a blender to then synthesize it into useful additions to the approach that I am taking.”
This approach, Stiles said, allows her to riff on ideas that are bouncing around in her mind, or simply find moments of unexpected creative surprise by way of the algorithm’s randomization.
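For concreteness, here is roughly what assembling such a custom dataset could look like, using the JSONL prompt/completion format that OpenAI’s legacy GPT-3 fine-tuning accepted. The file name and prompt wording are invented; this is a sketch of the general approach, not Stiles’ actual training set.

```python
import json

def build_training_file(poems, reference_texts, out_path="technelegy_train.jsonl"):
    """Write prompt/completion pairs, one JSON object per line, mixing
    the poet's own work with research notes and canonical excerpts."""
    with open(out_path, "w", encoding="utf-8") as f:
        for poem in poems:  # e.g. {"title": "...", "text": "..."}
            record = {
                "prompt": f"Write a poem titled '{poem['title']}':\n\n",
                "completion": " " + poem["text"] + " END",  # trailing stop token
            }
            f.write(json.dumps(record) + "\n")
        for text in reference_texts:  # notes, articles, excerpts
            f.write(json.dumps({"prompt": "", "completion": " " + text}) + "\n")
    return out_path
```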
Beauty is now - perhaps more than ever - in the eye of the beholder
But the million-dollar question remains: Can an AI be its own, independent artist?
The answer is nuanced and may depend on who you ask, and what role they play in the art world. Curator and multidisciplinary artist CoCo Dolle asks whether any entity can truly be an artist without taking personal risks. For humans, risking one’s ego is somewhat required when making an artistic statement of any kind, she argues.
“An artist is a person or an entity that takes risks,” Dolle explained. “That’s where things become interesting.” Humans tend to be risk-averse, she said, making the artists who dare to push boundaries exceptional. “That’s where the genius can happen.”
However, the process of algorithmic collaboration poses another interesting philosophical question: What happens when we remove the person from the artistic equation? Can art — which is traditionally derived from indelible personal experience and expressed through the lens of an individual’s ego — live on to hold meaning once the individual is removed?
Dolle sees this question, and maybe even Botto, as a conceptual inquiry. “The idea of using a DAO and collective voting would remove the ego, the artist’s decision maker,” she said. And where would that leave us — in a post-ego world?
It is experimental indeed. Hudson acknowledges the grand experiment of BottoDAO, coincidentally nodding to Dolle’s question. “A human artist’s work is an expression of themselves,” Hudson said. “An artist often presents their work with a stated intent.” Stiles, for instance, writes on her website that her machine-collaborative work is meant to “challenge what we know about cognition and creativity” and explore the “ethos of consciousness.” As a robot, Botto cannot have any intent, even while its outputs may explore meaningful themes. Though Hudson describes Botto’s agency as a “rudimentary version” of artistic intent, he believes Botto’s art relies heavily on its reception and interpretation by viewers — in contrast to Botto’s own declaration that successful art is made without regard to what will be seen as popular.
“With a traditional artist, they present their work, and it's received and interpreted by an audience — by critics, by society — and that complements and shapes the meaning of the work,” Hudson said. “In Botto’s case, that role is just amplified.”
Perhaps then, we all get to be the artists in the end.
Autonomous, indoor farming gives a boost to crops
The glass-encased cabinet looks like a display meant to hold reasonably priced watches, or drugstore beauty creams shipped from France. But instead of this stagnant merchandise, each of its five shelves is overgrown with leaves — moss-soft pea sprouts, spikes of Lolla rosa lettuce, pale bok choy, dark kale, purple basil, red-veined sorrel, green wisps of dill. The glass structure isn’t a cabinet, but rather a “micro farm.”
The gadget is on display at the Richmond, Virginia headquarters of Babylon Micro-Farms, a company that aims to make indoor farming in the U.S. more accessible and sustainable. Babylon’s soilless hydroponic growing system, which feeds plants via nutrient-enriched water, allows chefs on cruise ships, in cafeterias and elsewhere to serve home-grown produce to patrons just seconds after it’s harvested. Currently, there are over 200 functioning systems, either sold or leased to customers, and more are on the way.
The chef-farmers choose from among 45 types of herb and leafy-green seeds, plop them into grow trays, and a few weeks later they pick and serve. While success still depends on a small amount of human care, the systems are autonomously surveilled round-the-clock from Babylon’s base of operations. And artificial intelligence is helping to run the show.
Imagine consistently perfect greens and tomatoes and strawberries, grown hyper-locally, using less water, without chemicals or environmental contaminants. This is the hefty promise of controlled environment agriculture (CEA) — basically, indoor farms that can be hydroponic, aeroponic (plant roots are suspended and fed through misting), or aquaponic (where fish play a role in fertilizing vegetables). But whether they grow 4,160 leafy-green servings per year, like one Babylon farm, or millions of servings, like some of the large, centralized facilities starting to supply supermarkets across the U.S., they seek to minimize failure as much as possible.
Here, AI is starting to play a pivotal role. CEA growers use it to help “make sense of what’s happening” to the plants in their care, says Scott Lowman, vice president of applied research at the Institute for Advanced Learning and Research (IALR) in Virginia, a state that’s investing heavily in CEA companies. And although these companies say they’re not aiming for a future with zero human employees, AI is certainly poised to take a lot of human farming intervention out of the equation — for better and worse.
Most of these companies are compiling their own data sets to identify anything that might block the success of their systems. Babylon had already integrated sensor data into its farms to measure heat and humidity, the nutrient content of water, and the amount of light plants receive. Last year, they got a National Science Foundation grant that allowed them to pilot the use of specialized cameras that take pictures in different spectrums to gather some less-obvious visual data about plants’ wellbeing and alert people if something seems off. “Will this plant be healthy tomorrow? Are there things…that the human eye can't see that the plant starts expressing?” says Amandeep Ratte, the company’s head of data science. “If our system can say, Hey, this plant is unhealthy, we can reach out to [users] preemptively about what they’re doing wrong, or is there a disease at the farm?” Ratte says. The earlier the better, to avoid crop failures.
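A bare-bones sketch of that kind of preemptive alerting, combining routine sensor readings with a health score such as a multispectral model might supply; the field names and thresholds here are invented for illustration, not Babylon’s:

```python
from dataclasses import dataclass

@dataclass
class FarmReading:
    farm_id: str
    temperature_c: float   # heat
    humidity_pct: float    # humidity
    nutrient_ec: float     # nutrient content of the water (conductivity proxy)
    light_umol: float      # light the plants receive
    health_score: float    # 0..1, e.g. from a model over multispectral images

# Illustrative bounds only; a real system would tune these per crop.
BOUNDS = {
    "temperature_c": (16.0, 27.0),
    "humidity_pct": (40.0, 80.0),
    "nutrient_ec": (1.2, 2.4),
    "light_umol": (150.0, 400.0),
}

def check_reading(r: FarmReading, min_health=0.7):
    """Return alert strings so staff can reach out to users preemptively."""
    alerts = [
        f"{r.farm_id}: {name} out of range ({getattr(r, name)})"
        for name, (lo, hi) in BOUNDS.items()
        if not lo <= getattr(r, name) <= hi
    ]
    if r.health_score < min_health:
        alerts.append(f"{r.farm_id}: model flags plant stress (score {r.health_score:.2f})")
    return alerts
```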
IALR’s Lowman says that other CEA companies are developing their AI systems to account for the different crops they grow — lettuces come in all shapes and sizes, after all, and each has different growing needs than, for example, tomatoes. The ways they run their operations differ, too. Babylon is unusual in its decentralized structure, but centralized growing systems with one main location have variabilities of their own. AeroFarms, which recently declared bankruptcy but will continue to run its 140,000-square-foot vertical operation in Danville, Virginia, is entirely enclosed and reliant on the intense violet glow of grow lights to produce microgreens.
Different companies have different data needs. What data is essential to AeroFarms isn’t quite the same as what Greenswell Growers, located in Goochland County, Virginia, requires. Greenswell raises four kinds of lettuce in a 77,000-square-foot automated hydroponic greenhouse, where the vagaries of naturally available light, which accounts for 70 percent of the company’s energy use on a sunny day, affect operations. Their tech needs to account for “outside weather impacts,” says president Carl Gupton. “What adjustments do we have to make inside of the greenhouse to offset what’s going on outside environmentally, to give that plant optimal conditions? When it’s 85 percent humidity outside, the system needs to do X, Y and Z to get the conditions that we want inside.”
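Gupton’s “do X, Y and Z” amounts to a feedback rule: measure outside conditions, then compute offsetting adjustments inside. A toy version, with made-up setpoints and actuator names:

```python
def adjust_for_outside(humidity_pct, light_umol,
                       target_humidity=65.0, target_light=300.0):
    """Toy control rule offsetting outside conditions to hold inside
    setpoints. Values and actuator names are illustrative only."""
    actions = {}
    if humidity_pct > target_humidity:
        # e.g. 85% humidity outside: dehumidify and ramp up airflow
        actions["dehumidifier"] = "on"
        actions["vent_fans_pct"] = min(100.0, (humidity_pct - target_humidity) * 4)
    if light_umol < target_light:
        # overcast day: supplement the natural light the greenhouse relies on
        actions["grow_lights_umol"] = target_light - light_umol
    return actions

print(adjust_for_outside(humidity_pct=85.0, light_umol=120.0))
# {'dehumidifier': 'on', 'vent_fans_pct': 80.0, 'grow_lights_umol': 180.0}
```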
Nevertheless, every CEA system has the same core need: a consistent yield of high-quality crops to keep up year-round supply to customers. Additionally, “Everybody’s got the same set of problems,” Gupton says. Pests may come into a facility with seeds. A disease called Pythium, one of the most common in CEA, can damage plant roots. “Then you have root disease pressures that can also come internally — a change in [growing] substrate can change the way the plant performs,” Gupton says.
AI will help identify diseases, as well as when a plant is thirsty or overly hydrated, when it needs more or less calcium, phosphorous, nitrogen. So, while companies amass their own hyper-specific data sets, Lowman foresees a time within the next decade “when there will be some type of [open-source] database that has the most common types of plant stress identified” that growers will be able to tap into. Such databases will “create a community and move the science forward,” says Lowman.
In fact, IALR is working on assembling images for just such a database now. On so-called “smart tables” inside an Institute lab, a team is growing greens and subjecting them to various stressors, then administering treatments while taking images of every plant every 15 minutes, says Lowman. Some experiments generate 80,000 images; the challenge lies in analyzing and annotating the vast trove of them, marking each one to reflect the treatment and outcome — for example, an increase in phosphate delivery and the plant’s response to it. Eventually, the images will be fed into AI systems to help them learn.
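Annotation at that scale implies a structured record per image: which plant, which stressor, which treatment, and the observed response. The schema below is a guess at what such records might contain, not IALR’s actual format:

```python
import json
from datetime import datetime, timedelta

def annotate_run(plant_id, stressor, treatment, start, hours, response_fn):
    """Emit one annotation per 15-minute image over a stress/treatment run."""
    records = []
    for i in range(int(hours * 4)):  # four images per hour
        t = start + timedelta(minutes=15 * i)
        records.append({
            "plant_id": plant_id,
            "timestamp": t.isoformat(),
            "stressor": stressor,         # e.g. "phosphate deficiency"
            "treatment": treatment,       # e.g. "increased phosphate delivery"
            "response": response_fn(i),   # e.g. a recovery score over time
            "image_path": f"{plant_id}/{t:%Y%m%d_%H%M}.png",
        })
    return records

run = annotate_run("kale_017", "phosphate deficiency", "increased phosphate delivery",
                   datetime(2023, 5, 1, 8, 0), hours=2,
                   response_fn=lambda i: round(min(1.0, i / 8), 2))
print(json.dumps(run[0], indent=2))
```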
For all the enthusiasm surrounding this technology, it’s not without downsides. Training just one AI system can emit over 250,000 pounds of carbon dioxide, according to MIT Technology Review. AI could also be used “to enhance environmental benefit for CEA and optimize [its] energy consumption,” says Rozita Dara, a computer science professor at the University of Guelph in Canada, specializing in AI and data governance, “but we first need to collect data to measure [it].”
Any system connected to the Internet of Things is also vulnerable to hacking; if CEA grows to the point where “there are many of these similar farms, and you're depending on feeding a population based on those, it would be quite scary,” Dara says. And there are privacy concerns, too, in systems where imaging is happening constantly. It’s partly for this reason, says Babylon’s Ratte, that the company’s in-farm cameras all “face down into the trays, so the only thing [visible] is pictures of plants.”
Tweaks to improve AI for CEA are happening all the time. Greenswell made its first harvest in 2022 and now has a year’s worth of data points it can use to start making more intelligent choices about how to feed, water, and supply light to plants, says Gupton. Ratte says he’s confident Babylon’s system can already “get our customers reliable harvests. But in terms of how far we have to go, it’s a different problem,” he says. For example, if AI could detect whether a farm is mostly empty — meaning its user hasn’t planted a new crop of greens — it could alert Babylon to check “what’s going on with engagement with this user?” Ratte says. “Do they need more training? Did the main person responsible for the farm quit?”
Lowman says more automation is coming, offering greater ability for systems to identify problems and mitigate them on the spot. “We still have to develop datasets that are specific, so you can have a very clear control plan, [because] artificial intelligence is only as smart as what we tell it, and in plant science, there's so much variation,” he says. He believes AI’s next level will be “looking at those first early days of plant growth: when the seed germinates, how fast it germinates, what it looks like when it germinates.” Imaging all that and pairing it with AI, “can be a really powerful tool, for sure.”
Scientists make progress with growing organs for transplants
Story by Big Think
For over a century, scientists have dreamed of growing human organs sans humans. This technology could put an end to the scarcity of organs for transplants. But that’s just the tip of the iceberg. The capability to grow fully functional organs would revolutionize research. For example, scientists could observe mysterious biological processes, such as how human cells and organs develop a disease and respond (or fail to respond) to medication without involving human subjects.
Recently, a team of researchers from the University of Cambridge laid the foundations not just for growing functional organs, but also for creating synthetic embryos capable of developing a beating heart, gut, and brain. Their report was published in Nature.
The organoid revolution
In 1981, scientists discovered how to keep stem cells alive. This was a significant breakthrough, as stem cells have notoriously rigorous demands. Nevertheless, stem cells remained a relatively niche research area, mainly because scientists didn’t know how to convince the cells to turn into other cells.
Then, in 1987, scientists embedded isolated stem cells in a gelatinous protein mixture called Matrigel, which simulated the three-dimensional environment of animal tissue. The cells thrived, but they also did something remarkable: they created breast tissue capable of producing milk proteins. This was the first organoid — a clump of cells that behave and function like a real organ. The organoid revolution had begun, and it all started with a boob in Jello.
For the next 20 years, it was rare to find a scientist who identified as an “organoid researcher,” but there were many “stem cell researchers” who wanted to figure out how to turn stem cells into other cells. Eventually, they discovered the signals (called growth factors) that stem cells require to differentiate into other types of cells.
By the end of the 2000s, researchers began combining stem cells, Matrigel, and the newly characterized growth factors to create dozens of organoids, from liver organoids capable of producing the bile salts necessary for digesting fat to brain organoids with components that resemble eyes, the spinal cord, and arguably, the beginnings of sentience.
Synthetic embryos
Organoids possess an intrinsic flaw: they are organ-like. They share some characteristics with real organs, making them powerful tools for research. However, no one has found a way to create an organoid with all the characteristics and functions of a real organ. But Magdalena Żernicka-Goetz, a developmental biologist, might have set the foundation for that discovery.
Żernicka-Goetz hypothesized that organoids fail to develop into fully functional organs because organs develop as a collective. Organoid research often uses embryonic stem cells, which are the cells from which the developing organism is created. However, there are two other types of stem cells in an early embryo: stem cells that become the placenta and those that become the yolk sac (where the embryo grows and gets its nutrients in early development). For a human embryo (and its organs) to develop successfully, there needs to be a “dialogue” between these three types of stem cells. In other words, Żernicka-Goetz suspected the best way to grow a functional organoid was to produce a synthetic embryoid.
As described in the aforementioned Nature paper, Żernicka-Goetz and her team mimicked the embryonic environment by mixing these three types of stem cells from mice. Amazingly, the stem cells self-organized into structures and progressed through the successive developmental stages until they had beating hearts and the foundations of the brain.
“Our mouse embryo model not only develops a brain, but also a beating heart [and] all the components that go on to make up the body,” said Żernicka-Goetz. “It’s just unbelievable that we’ve got this far. This has been the dream of our community for years and major focus of our work for a decade and finally we’ve done it.”
If the methods developed by Żernicka-Goetz’s team are successful with human stem cells, scientists someday could use them to guide the development of synthetic organs for patients awaiting transplants. It also opens the door to studying how embryos develop during pregnancy.