Massive benefits of AI come with environmental and human costs. Can AI itself be part of the solution?
The recent explosion of generative artificial intelligence tools like ChatGPT and DALL-E has enabled anyone with internet access to harness AI’s power for enhanced productivity, creativity, and problem-solving. With their ever-improving capabilities and expanding user base, these tools have proved useful across disciplines, from the creative to the scientific.
But beneath the technological wonders of human-like conversation and creative expression lies a dirty secret—an alarming environmental and human cost. AI has an immense carbon footprint. Systems like ChatGPT take months to train in high-powered data centers, which demand huge amounts of electricity, much of which is still generated with fossil fuels, as well as water for cooling. “One of the reasons why OpenAI needs investments [to the tune of] $10 billion from Microsoft is because they need to pay for all of that computation,” says Kentaro Toyama, a computer scientist at the University of Michigan. There’s also an ecological toll from mining the rare minerals required for hardware and infrastructure. This environmental exploitation pollutes land, triggers natural disasters, and causes large-scale human displacement. Finally, for the data labeling needed to train and correct AI algorithms, the Big Data industry employs cheap and exploitative labor, often from the Global South.
Generative AI tools are based on large language models (LLMs), the most well-known being various versions of GPT. LLMs can perform natural language processing tasks, including translating, summarizing and answering questions. They rely on artificial neural networks, the technology underlying deep learning, a form of machine learning. Inspired by the human brain, neural networks are made of millions of artificial neurons. “The basic principles of neural networks were known even in the 1950s and 1960s,” Toyama says, “but it’s only now, with the tremendous amount of compute power that we have, as well as huge amounts of data, that it’s become possible to train generative AI models.”
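The “artificial neuron” idea Toyama describes can be sketched in a few lines: each neuron computes a weighted sum of its inputs and squashes the result through a simple nonlinearity. A minimal illustration in Python (the weights and inputs here are arbitrary values chosen for demonstration, not from any real model):

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of its inputs,
    squashed by a sigmoid activation into the range (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Three inputs with arbitrary illustrative weights.
output = neuron([0.5, 0.1, 0.9], [0.4, -0.6, 0.2], bias=0.1)
print(round(output, 3))  # prints 0.603
```

A model like GPT chains millions of such units into layers and learns the weights from data; the “tremendous compute power” Toyama mentions goes into adjusting those weights over and over during training.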
In recent months, much attention has gone to the transformative benefits of these technologies. But it’s important to consider that these remarkable advances may come at a price.
AI’s carbon footprint
In their latest annual report, 2023 Landscape: Confronting Tech Power, the AI Now Institute, an independent policy research entity focusing on the concentration of power in the tech industry, says: “The constant push for scale in artificial intelligence has led Big Tech firms to develop hugely energy-intensive computational models that optimize for ‘accuracy’—through increasingly large datasets and computationally intensive model training—over more efficient and sustainable alternatives.”
Though there aren’t any official figures about the power consumption or emissions from data centers, experts estimate that they use one percent of global electricity—more than entire countries. In 2019, Emma Strubell, then a graduate researcher at the University of Massachusetts Amherst, estimated that training a single LLM resulted in over 280,000 kg of CO2 emissions—the equivalent of driving almost 1.2 million km in a gas-powered car. A couple of years later, David Patterson, a computer scientist from the University of California, Berkeley, and colleagues estimated GPT-3’s carbon footprint at over 550,000 kg of CO2. In 2022, the tech company Hugging Face estimated the carbon footprint of its own language model, BLOOM, at 25,000 kg of CO2 emissions. (BLOOM’s footprint is lower because Hugging Face uses renewable energy, but it doubled when other life-cycle processes like hardware manufacturing and use were added.)
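Strubell’s driving comparison is easy to sanity-check with back-of-envelope arithmetic. Assuming a typical gasoline car emits roughly 0.23 kg of CO2 per kilometer (an assumed emissions factor, not a figure from the article):

```python
# Back-of-envelope check of the training-emissions comparison above.
# Assumption (not from the article): a typical gasoline car emits
# roughly 0.23 kg of CO2 per kilometer driven.
KG_CO2_PER_KM = 0.23

training_emissions_kg = 280_000  # Strubell's 2019 estimate for one LLM
equivalent_km = training_emissions_kg / KG_CO2_PER_KM
print(f"{equivalent_km:,.0f} km")  # on the order of 1.2 million km
```

The result lands right around the 1.2 million km figure quoted above, so the comparison holds under a standard per-kilometer emissions assumption.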
Luckily, despite the growing size and number of data centers, their energy demands and emissions have not risen proportionately—thanks to renewable energy sources and energy-efficient hardware.
But emissions don’t tell the full story.
AI’s hidden human cost
“If historical colonialism annexed territories, their resources, and the bodies that worked on them, data colonialism’s power grab is both simpler and deeper: the capture and control of human life itself through appropriating the data that can be extracted from it for profit.” So write Nick Couldry and Ulises Mejias, authors of the book The Costs of Connection.
Technologies we use daily inexorably gather our data. “Human experience, potentially every layer and aspect of it, is becoming the target of profitable extraction,” Couldry and Mejias say. This feeds data capitalism, the economic model built on the extraction and commodification of data. While we are being dispossessed of our data, Big Tech commodifies it for their own benefit. This results in the consolidation of power structures that reinforce existing race, gender, class and other inequalities.
“The political economy around tech and tech companies, and the development in advances in AI, contribute to massive displacement and pollution, and significantly change the built environment,” says technologist and activist Yeshi Milner, who founded Data For Black Lives (D4BL) to create measurable change in Black people’s lives using data. The energy requirements, hardware manufacture and the cheap human labor behind AI systems disproportionately affect marginalized communities.
AI’s recent explosive growth spiked the demand for manual, behind-the-scenes tasks, creating an industry described by Mary Gray and Siddharth Suri as “ghost work” in their book. This invisible human workforce that lies behind the “magic” of AI is overworked and underpaid, and very often based in the Global South. For example, workers in Kenya who made less than $2 an hour were behind the mechanism that trained ChatGPT to properly talk about violence, hate speech and sexual abuse. And, according to an article in Analytics India Magazine, in some cases these workers may not have been paid at all, amounting to wage theft. An exposé by the Washington Post describes “digital sweatshops” in the Philippines, where thousands of workers experience low wages, delays in payment, and wage theft by Remotasks, a platform owned by Scale AI, a $7 billion American startup. Rights groups and labor researchers have flagged Scale AI as one company that flouts basic labor standards for workers abroad.
It is possible to draw a parallel with chattel slavery—the most significant economic event that continues to shape the modern world—to see the business structures that allow for the massive exploitation of people, Milner says. Back then, people got chocolate, sugar, cotton; today, they get generative AI tools. “What’s invisible through distance—because [tech companies] also control what we see—is the massive exploitation,” Milner says.
“At Data for Black Lives, we are less concerned with whether AI will become human…[W]e’re more concerned with the growing power of AI to decide who’s human and who’s not,” Milner says. As a decision-making force, AI becomes a “justifying factor for policies, practices, rules that not just reinforce, but are currently turning the clock back generations on people’s civil and human rights.”
Nuria Oliver, a computer scientist, and co-founder and vice-president of the European Laboratory of Learning and Intelligent Systems (ELLIS), says that instead of focusing on the hypothetical existential risks of today’s AI, we should talk about its real, tangible risks.
“Because AI is a transverse discipline that you can apply to any field [from education, journalism, medicine, to transportation and energy], it has a transformative power…and an exponential impact,” she says.
AI’s accountability
“At the core of what we were arguing about data capitalism [is] a call to action to abolish Big Data,” says Milner. “Not to abolish data itself, but the power structures that concentrate [its] power in the hands of very few actors.”
A comprehensive AI Act currently being negotiated in the European Parliament aims to rein in Big Tech. It plans to introduce a rating of AI tools based on the harm they cause to humans, while remaining as technology-neutral as possible. It sets standards for safe, transparent, traceable, non-discriminatory, and environmentally friendly AI systems, overseen by people rather than automation. The regulations also call for transparency about the content used to train generative AIs, particularly copyrighted data, and for disclosure when content is AI-generated. “This European regulation is setting the example for other regions and countries in the world,” Oliver says. But, she adds, such transparencies are hard to achieve.
Google, for example, recently updated its privacy policy to say that anything on the public internet will be used as training data. “Obviously, technology companies have to respond to their economic interests, so their decisions are not necessarily going to be the best for society and for the environment,” Oliver says. “And that’s why we need strong research institutions and civil society institutions to push for actions.” ELLIS also advocates for data centers to be built in locations where the energy can be produced sustainably.
Ironically, AI plays an important role in mitigating its own harms—by plowing through mountains of data about weather changes, extreme weather events and human displacement. “The only way to make sense of this data is using machine learning methods,” Oliver says.
Milner believes that the best way to expose AI-caused systemic inequalities is through people's stories. “In these last five years, so much of our work [at D4BL] has been creating new datasets, new data tools, bringing the data to life. To show the harms but also to continue to reclaim it as a tool for social change and for political change.” This change, she adds, will depend on whose hands it is in.
How Leqembi became the biggest news in Alzheimer’s disease in 40 years, and what comes next
A few months ago, Betsy Groves traveled less than a mile from her home in Cambridge, Mass., to give a talk to a bunch of scientists. The scientists, who worked for the pharmaceutical companies Biogen and Eisai, wanted to know how she lived her life, how she thought about her future, and what it was like when a doctor’s appointment in 2021 gave her the worst possible news. Groves, 73, has Alzheimer’s disease. She caught it early, through a lumbar puncture that showed evidence of amyloid, an Alzheimer’s hallmark, in her cerebrospinal fluid. As a way of dealing with her diagnosis, she joined the Alzheimer’s Association’s National Early-Stage Advisory Board, which helped her shift into seeing her diagnosis as something she could use to help others.
After her talk, Groves stayed for lunch with the scientists, who were eager to put a face to their work. Biogen and Eisai were about to release the first drug to successfully combat Alzheimer’s in 40 years of experimental disaster. Their drug, which is known by the scientific name lecanemab and the marketing name Leqembi, was granted accelerated approval by the U.S. Food and Drug Administration last Friday, Jan. 6, after a study in 1,800 people showed that it reduced cognitive decline by 27 percent over 18 months.
It is no exaggeration to say that this result is a huge deal. The field of Alzheimer’s drug development has been absolutely littered with failures. Almost everything researchers have tried has tanked in clinical trials. “Most of the things that we've done have proven not to be effective, and it's not because we haven’t been taking a ton of shots at goal,” says Anton Porsteinsson, director of the University of Rochester Alzheimer's Disease Care, Research, and Education Program, who worked on the lecanemab trial. “I think it's fair to say you don't survive in this field unless you're an eternal optimist.”
As far back as 1984, a cure looked like it was within reach: Scientists discovered that the sticky plaques that develop in the brains of those who have Alzheimer’s are made up of a protein fragment called beta-amyloid. Buildup of beta-amyloid seemed to be sufficient to disrupt communication between, and eventually kill, memory cells. If that was true, then the cure should be straightforward: Stop the buildup of beta-amyloid; stop the Alzheimer’s disease.
It wasn’t so simple. Over the next 38 years, hundreds of drugs designed either to interfere with the production of abnormal amyloid or to clear it from the brain flamed out in trials. It got so bad that neuroscience drug divisions at major pharmaceutical companies (AstraZeneca, Pfizer, Bristol-Myers, GSK, Amgen) closed one by one, leaving the field to smaller, scrappier companies, like Cambridge-based Biogen and Tokyo-based Eisai. Some scientists began to dismiss the amyloid hypothesis altogether: If this protein fragment was so important to the disease, why didn’t ridding the brain of it do anything for patients? There was another abnormal protein that showed up in the brains of Alzheimer’s patients, called tau. Some researchers defected to the tau camp, or came to believe the proteins caused damage in combination.
The situation came to a head in 2021, when the FDA granted provisional approval to a drug called aducanumab, marketed as Aduhelm, against the advice of its own advisory council. The approval was based on proof that Aduhelm reduced beta-amyloid in the brain, even though one research trial showed it had no effect on people’s symptoms or daily life. Aduhelm could also cause serious side effects, like brain swelling and amyloid-related imaging abnormalities (known as ARIA, these are basically micro-bleeds that appear on MRI scans). Without a clear benefit to memory loss that would make these risks worth it, Medicare refused to pay for Aduhelm among the general population. Two congressional committees launched an investigation into the drug’s approval, citing corporate greed, lapses in protocol, and an unjustifiably high price. (Aduhelm was also produced by the pharmaceutical company Biogen.)
So far, Leqembi is like Aduhelm in that it has been given accelerated approval only for its ability to remove amyloid from the brain. Both are monoclonal antibodies that direct the immune system to attack and clear dysfunctional beta-amyloid. The difference is that, while that’s all Aduhelm was ever shown to do, Leqembi’s makers have already asked the FDA to give it full approval – a decision that would increase the likelihood that Medicare will cover it – based on data that show it also improves Alzheimer’s sufferers’ lives. Leqembi targets a different type of amyloid, a soluble version called “protofibrils,” and that appears to change the effect. “It can give individuals and their families three, six months longer to be participating in daily life and living independently,” says Claire Sexton, PhD, senior director of scientific programs & outreach for the Alzheimer's Association. “These types of changes matter for individuals and for their families.”
To be clear, Leqembi is not the cure Alzheimer’s researchers hope for. It does not halt or reverse the disease, and people do not get better. While the drug is the first to show clear signs of a clinical benefit, the scientific establishment is split on how much of a difference Leqembi will make in the real world. It has “a rather small effect,” wrote NIH Alzheimer’s researcher Madhav Thambisetty, MD, PhD, in an email to Leaps.org. “It is unclear how meaningful this difference will be to patients, and it is unlikely that this level of difference will be obvious to a patient (or their caregivers).” Another issue is cost: Leqembi will become available to patients later this month, but Eisai is setting the price at $26,500 per year, meaning that very few patients will be able to afford it unless Medicare chooses to reimburse them for it.
The same side effects that plagued Aduhelm are common in Leqembi treatment as well. In many patients, amyloid doesn’t just accumulate around neurons, it also forms deposits in the walls of blood vessels. Blood vessels that are shot through with amyloid are more brittle. If you infuse a drug that targets amyloid, brittle blood vessels in the brain can develop leakage that results in swelling or bleeds. Most of these come with no symptoms, and are only seen during testing, which is why they are called “imaging abnormalities.” But in situations where patients have multiple diseases or are prescribed incompatible drugs, they can be serious enough to cause death. The three deaths reported from Leqembi treatment (so far) are enough to make Thambisetty wonder “how well the drug may be tolerated in real world clinical practice where patients are likely to be sicker and have multiple other medical conditions in contrast to carefully selected patients in clinical trials.”
Still, there are reasons to be excited. A successful Alzheimer’s drug can pave the way for combination studies, in which patients try a known effective drug alongside newer, more experimental ones; or preventative studies, which take place years before symptoms occur. It also represents enormous strides in researchers’ understanding of the disease. For example, drug dosages have increased massively—in some cases quadrupling—from the early days of Alzheimer’s research. And patient selection for studies has changed drastically as well. Doctors now know that you’ve got to catch the disease early, through PET-scans or CSF tests for amyloid, if you want any chance of changing its course.
Porsteinsson believes that earlier detection of Alzheimer’s disease will be the next great advance in treatment, a more important step forward than Leqembi’s approval. His lab already uses blood tests for different types of amyloid, for different types of tau, and for measures of neuroinflammation, neural damage, and synaptic health, and commercially available versions from companies like C2N, Quest, and Fujirebio are likely to hit the market in the next couple of years. “[They are] going to transform the diagnosis of Alzheimer's disease,” Porsteinsson says. “If someone is experiencing memory problems, their physicians will be able to order a blood test that will tell us if this is the result of changes in your brain due to Alzheimer's disease. It will ultimately make it much easier to identify people at a very early stage of the disease, where they are most likely to benefit from treatment.”
Learn more about new blood tests to detect Alzheimer's
Early detection can help patients for more philosophical reasons as well. Betsy Groves credits finding her Alzheimer’s early with giving her the space to understand and process the changes that were happening to her before they got so bad that she couldn’t. She has been able to update her legal documents and, through her role on the Advisory Board, help the Alzheimer’s Association with developing its programs and support services for people in the early stages of the disease. She still drives, and because she and her husband love to travel, they are hoping to get out of grey, rainy Cambridge and off to Texas or Arizona this spring.
Because her Alzheimer’s disease involves amyloid deposits (a “substantial portion” do not, says Claire Sexton, which is an additional complication for research), and has not yet reached an advanced stage, Groves may be a good candidate to try Leqembi. She says she’d welcome the opportunity to take it. If she can get access, Groves hopes the drug will give her more days to be fully functioning with her husband, daughters, and three grandchildren. Mostly, she avoids thinking about what the latter stages of Alzheimer’s might be like, but she knows the time will come when it will be her reality. “So whatever lecanemab can do to extend my more productive ways of engaging with relationships in the world,” she says. “I'll take that in a minute.”
How to have a good life, based on the world's longest study of happiness
What makes for a good life? Such a simple question, yet we don't have great answers. Most of us try to figure it out as we go along, and many end up feeling like they never got to the bottom of it.
Shouldn't something so important be approached with more scientific rigor? In 1938, Harvard researchers began a study to fill this gap. Since then, they’ve followed hundreds of people over the course of their lives, hoping to identify which factors are key to long-term satisfaction.
Eighty-five years later, the Harvard Study of Adult Development is still going. And today, its directors, the psychiatrists Bob Waldinger and Marc Schulz, have published a book that pulls together the study’s most important findings. It’s called The Good Life: Lessons from the World’s Longest Scientific Study of Happiness.
In this podcast episode, I talked with Dr. Waldinger about life lessons that we can mine from the Harvard study and his new book.
Listen on Apple | Listen on Spotify | Listen on Stitcher | Listen on Amazon | Listen on Google
More background on the study
Back in the 1930s, the research began with 724 people. Some were first-year Harvard students paying full tuition, others were freshmen who needed financial help, and the rest were 14-year-old boys from inner city Boston – white males only. Fortunately, the study team realized the error of their ways and expanded their sample to include the wives and daughters of the first participants. And Waldinger’s book focuses on the Harvard study findings that can be corroborated by evidence from additional research on the lives of people of different races and other minorities.
The study now includes over 1,300 relatives of the original participants, spanning three generations. Every two years, the participants have sent the researchers a filled-out questionnaire, reporting how their lives are going. At five-year intervals, the research team takes a peek at their health records and, every 15 years, the psychologists meet their subjects in person to check out their appearance and behavior.
But they don’t stop there. No, the researchers factor in multiple blood samples, DNA, images from body scans, and even the donated brains of 25 participants.
Robert Waldinger, director of the Harvard Study of Adult Development. (Photo: Katherine Taylor)
Dr. Waldinger is Clinical Professor of Psychiatry at Harvard Medical School, in addition to being Director of the Harvard Study of Adult Development. He got his M.D. from Harvard Medical School and has published numerous scientific papers. He’s a practicing psychiatrist and psychoanalyst, he teaches Harvard medical students, and, since that is clearly not enough to keep him busy, he’s also a Zen priest.
His book is a must-read if you’re looking for scientific evidence on how to design your life for more satisfaction, so that someday you can look back on it without regret. In this episode, Dr. Waldinger breaks down many of the clichés about the good life, making his advice real and tangible. We also get into what he calls “side-by-side” relationships, personality traits for the good life, and the downsides of being too strict about work-life balance.
Show links
- Bob Waldinger
- Waldinger's book, The Good Life: Lessons from the World's Longest Scientific Study of Happiness
- The Harvard Study of Adult Development
- Waldinger's Ted Talk
- Gallup report finding that people with good friends at work have higher engagement with their jobs
- The link between relationships and well-being
- Those with social connections live longer