Massive benefits of AI come with environmental and human costs. Can AI itself be part of the solution?
The recent explosion of generative artificial intelligence tools like ChatGPT and DALL-E has enabled anyone with internet access to harness AI’s power for enhanced productivity, creativity, and problem-solving. With their ever-improving capabilities and expanding user base, these tools have proved useful across disciplines, from the creative to the scientific.
But beneath the technological wonders of human-like conversation and creative expression lies a dirty secret—an alarming environmental and human cost. AI has an immense carbon footprint. Systems like ChatGPT take months to train in high-powered data centers, which demand huge amounts of electricity, much of which is still generated with fossil fuels, as well as water for cooling. “One of the reasons why OpenAI needs investments [to the tune of] $10 billion from Microsoft is because they need to pay for all of that computation,” says Kentaro Toyama, a computer scientist at the University of Michigan. There’s also an ecological toll from mining the rare minerals required for hardware and infrastructure. This environmental exploitation pollutes land, triggers natural disasters and causes large-scale human displacement. Finally, for the data labeling needed to train and correct AI algorithms, the Big Data industry employs cheap and exploitative labor, often from the Global South.
Generative AI tools are based on large language models (LLMs), the most well-known being various versions of GPT. LLMs can perform natural language processing tasks, including translating, summarizing and answering questions. They use artificial neural networks, an approach known as deep learning, a branch of machine learning. Inspired by the human brain, neural networks are made of millions of artificial neurons. “The basic principles of neural networks were known even in the 1950s and 1960s,” Toyama says, “but it’s only now, with the tremendous amount of compute power that we have, as well as huge amounts of data, that it’s become possible to train generative AI models.”
In recent months, much attention has gone to the transformative benefits of these technologies. But it’s important to consider that these remarkable advances may come at a price.
AI’s carbon footprint
In its latest annual report, 2023 Landscape: Confronting Tech Power, the AI Now Institute, an independent policy research entity focusing on the concentration of power in the tech industry, says: “The constant push for scale in artificial intelligence has led Big Tech firms to develop hugely energy-intensive computational models that optimize for ‘accuracy’—through increasingly large datasets and computationally intensive model training—over more efficient and sustainable alternatives.”
Though there aren’t any official figures about the power consumption or emissions from data centers, experts estimate that they use one percent of global electricity—more than entire countries. In 2019, Emma Strubell, then a graduate researcher at the University of Massachusetts Amherst, estimated that training a single LLM resulted in over 280,000 kg of CO2 emissions—the equivalent of driving almost 1.2 million km in a gas-powered car. A couple of years later, David Patterson, a computer scientist from the University of California Berkeley, and colleagues estimated GPT-3’s carbon footprint at over 550,000 kg of CO2. In 2022, the tech company Hugging Face estimated the carbon footprint of its own language model, BLOOM, at 25,000 kg of CO2 emissions. (BLOOM’s footprint is lower because Hugging Face uses renewable energy, but it doubled when other life-cycle processes like hardware manufacturing and use were added.)
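The driving equivalence in Strubell’s estimate can be sanity-checked with back-of-the-envelope arithmetic. The per-kilometer emission factor below (about 0.23 kg of CO2 per km for a typical gas-powered car) is an assumed figure for illustration, not one taken from the article:

```python
# Back-of-the-envelope check of the training-emissions comparison above.
# The per-km emission factor is an assumed typical value for a gasoline car,
# not a figure from the article.

TRAINING_EMISSIONS_KG = 280_000     # Strubell's estimate for training one LLM
CAR_EMISSIONS_KG_PER_KM = 0.23      # assumed average for a gas-powered car

equivalent_km = TRAINING_EMISSIONS_KG / CAR_EMISSIONS_KG_PER_KM
print(f"Equivalent driving distance: {equivalent_km / 1e6:.2f} million km")
```

With these assumptions the result comes out to roughly 1.2 million km, consistent with the comparison cited above.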
Luckily, despite the growing size and number of data centers, their energy demands and emissions have not grown proportionately—thanks to renewable energy sources and energy-efficient hardware.
But emissions don’t tell the full story.
AI’s hidden human cost
“If historical colonialism annexed territories, their resources, and the bodies that worked on them, data colonialism’s power grab is both simpler and deeper: the capture and control of human life itself through appropriating the data that can be extracted from it for profit.” So write Nick Couldry and Ulises Mejias, authors of the book The Costs of Connection.
Technologies we use daily inexorably gather our data. “Human experience, potentially every layer and aspect of it, is becoming the target of profitable extraction,” Couldry and Mejias say. This feeds data capitalism, the economic model built on the extraction and commodification of data. While we are being dispossessed of our data, Big Tech commodifies it for their own benefit. This results in the consolidation of power structures that reinforce existing race, gender, class and other inequalities.
“The political economy around tech and tech companies, and the development in advances in AI contribute to massive displacement and pollution, and significantly changes the built environment,” says technologist and activist Yeshi Milner, who founded Data For Black Lives (D4BL) to create measurable change in Black people’s lives using data. The energy requirements, hardware manufacture and the cheap human labor behind AI systems disproportionately affect marginalized communities.
AI’s recent explosive growth spiked the demand for manual, behind-the-scenes tasks, creating an industry described by Mary Gray and Siddharth Suri as “ghost work” in their book. This invisible human workforce behind the “magic” of AI is overworked and underpaid, and very often based in the Global South. For example, workers in Kenya who made less than $2 an hour were behind the mechanism that trained ChatGPT to properly talk about violence, hate speech and sexual abuse. And, according to an article in Analytics India Magazine, in some cases these workers may not have been paid at all, a case of wage theft. An exposé by the Washington Post describes “digital sweatshops” in the Philippines, where thousands of workers experience low wages, payment delays and wage theft at the hands of Remotasks, a platform owned by Scale AI, a $7 billion American startup. Rights groups and labor researchers have flagged Scale AI as one company that flouts basic labor standards for workers abroad.
It is possible to draw a parallel with chattel slavery—the most significant economic event that continues to shape the modern world—to see the business structures that allow for the massive exploitation of people, Milner says. Back then, people got chocolate, sugar, cotton; today, they get generative AI tools. “What’s invisible through distance—because [tech companies] also control what we see—is the massive exploitation,” Milner says.
“At Data for Black Lives, we are less concerned with whether AI will become human…[W]e’re more concerned with the growing power of AI to decide who’s human and who’s not,” Milner says. As a decision-making force, AI becomes a “justifying factor for policies, practices, rules that not just reinforce, but are currently turning the clock back generations on people’s civil and human rights.”
Nuria Oliver, a computer scientist, and co-founder and vice-president of the European Laboratory of Learning and Intelligent Systems (ELLIS), says that instead of focusing on the hypothetical existential risks of today’s AI, we should talk about its real, tangible risks.
“Because AI is a transverse discipline that you can apply to any field [from education, journalism, medicine, to transportation and energy], it has a transformative power…and an exponential impact,” she says.
AI’s accountability
“At the core of what we were arguing about data capitalism [is] a call to action to abolish Big Data,” says Milner. “Not to abolish data itself, but the power structures that concentrate [its] power in the hands of very few actors.”
A comprehensive AI Act, currently being negotiated in the European Parliament, aims to rein in Big Tech. It plans to introduce a rating of AI tools based on the harm they cause to humans, while remaining as technology-neutral as possible. The act sets standards for safe, transparent, traceable, non-discriminatory, and environmentally friendly AI systems, overseen by people rather than automation. The regulations also call for transparency about the content used to train generative AIs, particularly copyrighted data, and for disclosure when content is AI-generated. “This European regulation is setting the example for other regions and countries in the world,” Oliver says. But, she adds, such transparency is hard to achieve.
Google, for example, recently updated its privacy policy to say that anything on the public internet will be used as training data. “Obviously, technology companies have to respond to their economic interests, so their decisions are not necessarily going to be the best for society and for the environment,” Oliver says. “And that’s why we need strong research institutions and civil society institutions to push for actions.” ELLIS also advocates for data centers to be built in locations where the energy can be produced sustainably.
Ironically, AI plays an important role in mitigating its own harms—by plowing through mountains of data about weather changes, extreme weather events and human displacement. “The only way to make sense of this data is using machine learning methods,” Oliver says.
Milner believes that the best way to expose AI-caused systemic inequalities is through people's stories. “In these last five years, so much of our work [at D4BL] has been creating new datasets, new data tools, bringing the data to life. To show the harms but also to continue to reclaim it as a tool for social change and for political change.” This change, she adds, will depend on whose hands it is in.
The livestock trucks arrived all night. One after the other they backed up to the wood chute leading to a dusty corral and loosed their cargo — 580 head of cattle by the time the last truck pulled away at 3pm the next afternoon. Dan Probert, astride his horse, guided the cows to paddocks of pristine grassland stretching alongside the snow-peaked Wallowa Mountains. They’d spend the summer here grazing bunchgrass and clovers and biscuitroot. The scuffle of their hooves and nibbles of their teeth would mimic the elk, antelope and bison that are thought to have historically roamed this portion of northeastern Oregon’s Zumwalt Prairie, helping grasses grow and restoring health to the soil.
The cows weren’t Probert’s, although the fifth-generation rancher and one other member of the Carman Ranch Direct grass-fed beef collective also raise their own herds here for part of every year. But in spring, when the prairie is in bloom, Probert receives cattle from several other ranchers. As the grasses wither in October, the cows move on to graze fertile pastures throughout the Columbia Basin, which stretches across several Pacific Northwest states; some overwinter on a vegetable farm in central Washington, feeding on corn leaves and pea vines left behind after harvest.
Sharing land and other resources among farmers isn’t new. But research shows it may be increasingly relevant in a time of climatic upheaval, potentially influencing “farmers to adopt environmentally friendly practices and agricultural innovation,” according to a 2021 paper in the Journal of Economic Surveys. Farmers might share knowledge about reducing pesticide use, says Heather Frambach, a supply chain consultant who works with farmers in California and elsewhere. As a group they may better qualify for grants to monitor soil and water quality.
Most research around such practices applies to cooperatives, whose owner-members equally share governance and profits. But a collective like Carman Ranch’s — spearheaded by fourth-generation rancher Cory Carman, who purchases beef from eight other ranchers to sell under one “regeneratively” certified brand — shows when producers band together, they can achieve eco-benefits that would be elusive if they worked alone.
Carman knows from experience. Taking over her family's land in 2003, she started selling grass-fed beef “because I really wanted to figure out how to not participate in the feedlot world, to have a healthier product. I didn't know how we were going to survive,” she says. Part of her land sits on a degraded portion of Zumwalt Prairie replete with invasive grasses; working to restore it, she thought, “What good does it do to kill myself trying to make this ranch more functional? If you want to make a difference, change has to be more than single entrepreneurs on single pieces of land. It has to happen at a community level.” The seeds of her collective were sown.
Raising 100 percent grass-fed beef requires land that’s got something for cows to graze in every season — which most collective members can’t access individually. So, they move cattle around their various parcels. It’s practical, but it also restores nutrient flows “to the way they used to move, from lowlands and canyons during the winter to higher-up places as the weather gets hot,” Carman says. Meaning, vitamins and minerals in soil pass into plants through their roots, then into cattle as they graze, then back around as the cows walk around pooping.
Cory Carman sells grass-fed beef, which requires land that’s got something for cows to graze in every season. (Courtesy Cory Carman)
Each collective member has individual ecological goals: Carman brought in pigs to root out invasive grasses and help natives flourish. Probert also heads a more conventional grain-finished beef collective with 100 members, and their combined 6.5 million ranchland acres were eligible for a grant supporting climate-friendly practices, which compels them to improve soil and water health and biodiversity and make their product “as environmentally friendly as possible,” Probert says. The Washington veg farmer reduced tilling and pesticide use thanks to the ecoservices of visiting cows. Similarly, a conventional hay farmer near Carman has reduced his reliance on fertilizer by letting cattle graze the cover crops he plants on 80 acres.
Additionally, the collective must meet the regenerative standards promised on their label — another way in which they work together to achieve ecological goals. Says David LeZaks, formerly a senior fellow at finance-focused ecology nonprofit Croatan Institute, it’s hard for individual farmers to access monetary assistance. “But it's easier to get financing flowing when you increase the scale with cooperatives or collectives,” he says. “This supports producers in ways that can lead to better outcomes on the landscape.”
For example, collective models can help farmers minimize waste by using more of an animal, something our frugal ancestors excelled at. Small-scale beef producers normally throw out hides; Thousand Hills’ 50 regenerative beef producers together have enough to sell to Timberland to make carbon-neutral leather. In another example, working collectively supported more diverse farms: Meadowlark Community Mill in Wisconsin went from working with one wheat grower to sourcing from several organic wheat growers marketing flour under one premium brand.
Another example shows how these collaborations can foster greater equity, among other benefits: The Federation of Southern Cooperatives has a mission to support Black farmers as they build community health. It owns several hundred forest acres in Alabama, where it teaches members to steward their own forest land and use it to grow food — one member coop raises goats to graze forest debris and produce milk. Adding the combined acres of member forest land to the Federation’s, the group qualified for a federal conservation grant that will keep this resource available for food production, and community environmental and mental health benefits. “That's the value-add of the collective land-owner structure,” says Dãnia Davy, director of land retention and advocacy.
New, smaller scale farmers might gain the most from collective and cooperative models, says Jordan Treakle, national program coordinator of the National Family Farm Coalition (NFFC). Many of them enter farming specifically to raise healthy food in healthy ways — with organic production, or livestock for soil fertility. With land, equipment and labor prohibitively expensive, farming collectively allows shared costs and risk that buy farmers the time necessary to “build soil fertility and become competitive” in the marketplace, Treakle says. Just keeping them in business is an eco-win; when small farms fail, they tend to get sold for development or absorbed into less-diversified operations, so the effects of their success can “reverberate through the entire local economy.”
Frambach, the supply chain consultant, has been experimenting with what she calls “collaborative crop planning,” where she helps farmers strategize what they’ll plant as a group. “A lot of them grow based on what they hear their neighbor is going to do, and that causes really poor outcomes,” she says. “Nobody replanted cauliflower after the [atmospheric rivers in California] this year and now there's a huge shortage of cauliflower.” A group plan can avoid the under-planting that causes farmers to lose out on revenue.
It helps avoid overplanted crops, too, which small farmers might have to plow under or compost. Larger farmers, conversely, can sell surplus produce into the upcycling market — to Matriark Foods, for example, which turns it into value-add products like pasta sauce for companies like Sysco that supply institutional kitchens at colleges and hospitals. Frambach and Anna Hammond, Matriark’s CEO, want to collectivize smaller farmers so that they can sell to the likes of Matriark and “not lose an incredible amount of income,” Hammond says.
Ultimately, farming is fraught with challenges and even collectivizing doesn’t guarantee that farms will stay in business. But with agriculture accounting for almost 30 percent of greenhouse gas emissions globally, there's an “urgent” need to shift farming practices to more environmentally sustainable models, as well as a “demand in the marketplace for it,” says NFFC’s Treakle. “The growth of cooperative and collective farming can be a huge, huge boon for the ecological integrity of the system.”
Story by Big Think
We live in strange times, when the technology we depend on the most is also that which we fear the most. We celebrate cutting-edge achievements even as we recoil in fear at how they could be used to hurt us. From genetic engineering and AI to nuclear technology and nanobots, the list of awe-inspiring, fast-developing technologies is long.
However, this fear of the machine is not as new as it may seem. Technology has a longstanding alliance with power and the state. The dark side of human history can be told as a series of wars whose victors are often those with the most advanced technology. (There are exceptions, of course.) Science, and its technological offspring, follows the money.
This fear of the machine seems to be misplaced. The machine has no intent: only its maker does. The fear of the machine is, in essence, the fear we have of each other — of what we are capable of doing to one another.
How AI changes things
Sure, you would reply, but AI changes everything. With artificial intelligence, the machine itself will develop some sort of autonomy, however ill-defined. It will have a will of its own. And this will, if it reflects anything that seems human, will not be benevolent. With AI, the claim goes, the machine will somehow know what it must do to get rid of us. It will threaten us as a species.
Well, this fear is also not new. Mary Shelley wrote Frankenstein in 1818 to warn us of what science could do if it served the wrong calling. In the case of her novel, Dr. Frankenstein’s call was to win the battle against death — to reverse the course of nature. Granted, any cure of an illness interferes with the normal workings of nature, yet we are justly proud of having developed cures for our ailments, prolonging life and increasing its quality. Science can achieve nothing more noble. What messes things up is when the pursuit of good is confused with that of power. In this distorted scale, the more powerful the better. The ultimate goal is to be as powerful as gods — masters of time, of life and death.
Back to AI, there is no doubt the technology will help us tremendously. We will have better medical diagnostics, better traffic control, better bridge designs, and better pedagogical animations to teach in the classroom and virtually. But we will also have better winnings in the stock market, better war strategies, and better soldiers and remote ways of killing. This grants real power to those who control the best technologies. It increases the take of the winners of wars — those fought with weapons, and those fought with money.
A story as old as civilization
The question is how to move forward. This is where things get interesting and complicated. We hear over and over again that there is an urgent need for safeguards, for controls and legislation to deal with the AI revolution. Great. But if these machines are essentially functioning in a semi-black box of self-teaching neural nets, how exactly are we going to make safeguards that are sure to remain effective? How are we to ensure that the AI, with its unlimited ability to gather data, will not come up with new ways to bypass our safeguards, the same way that people break into safes?
The second question is that of global control. As I wrote before, overseeing new technology is complex. Should countries create a World Mind Organization that controls the technologies that develop AI? If so, how do we organize this planet-wide governing board? Who should be a part of its governing structure? What mechanisms will ensure that governments and private companies do not secretly break the rules, especially when to do so would put the most advanced weapons in the hands of the rule breakers? They will need those, after all, if other actors break the rules as well.
As before, the countries with the best scientists and engineers will have a great advantage. A new international détente will emerge in the mold of the nuclear détente of the Cold War. Again, we will fear destructive technology falling into the wrong hands. This can happen easily. AI machines will not need to be built at an industrial scale, as nuclear capabilities were, and AI-based terrorism will be a force to reckon with.
So here we are, afraid of our own technology all over again.
What is missing from this picture? It continues to illustrate the same destructive pattern of greed and power that has defined so much of our civilization. The failure it shows is moral, and only we can change it. We define civilization by the accumulation of wealth, and this worldview is killing us. The project of civilization we invented has become self-cannibalizing. As long as we do not see this, and we keep on following the same route we have trodden for the past 10,000 years, it will be very hard to legislate the technology to come and to ensure such legislation is followed. Unless, of course, AI helps us become better humans, perhaps by teaching us how stupid we have been for so long. This sounds far-fetched, given who this AI will be serving. But one can always hope.