Massive benefits of AI come with environmental and human costs. Can AI itself be part of the solution?
The recent explosion of generative artificial intelligence tools like ChatGPT and DALL-E has enabled anyone with internet access to harness AI’s power for enhanced productivity, creativity, and problem-solving. With their ever-improving capabilities and expanding user base, these tools have proved useful across disciplines, from the creative to the scientific.
But beneath the technological wonders of human-like conversation and creative expression lies a dirty secret—an alarming environmental and human cost. AI has an immense carbon footprint. Systems like ChatGPT take months to train in high-powered data centers, which demand huge amounts of electricity, much of it still generated from fossil fuels, as well as water for cooling. “One of the reasons why OpenAI needs investments [to the tune of] $10 billion from Microsoft is because they need to pay for all of that computation,” says Kentaro Toyama, a computer scientist at the University of Michigan. There’s also an ecological toll from mining the rare minerals required for hardware and infrastructure. This environmental exploitation pollutes land, triggers natural disasters and causes large-scale human displacement. Finally, the Big Data industry relies on cheap and often exploitative labor, frequently from the Global South, for the data labeling needed to train and correct AI algorithms.
Generative AI tools are based on large language models (LLMs), the most well-known being the various versions of GPT. LLMs can perform natural language processing tasks, including translating, summarizing and answering questions. They are built on artificial neural networks and trained through deep learning, a form of machine learning. Inspired by the human brain, neural networks are made of millions of artificial neurons. “The basic principles of neural networks were known even in the 1950s and 1960s,” Toyama says, “but it’s only now, with the tremendous amount of compute power that we have, as well as huge amounts of data, that it’s become possible to train generative AI models.”
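To make that decades-old principle concrete, here is a minimal sketch of a single artificial neuron in Python. All numbers and names are made up for illustration; real LLMs chain billions of such units across many layers, which is exactly why training them demands so much compute.

```python
import numpy as np

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs passed through
    a simple nonlinearity (here, ReLU)."""
    return max(0.0, float(np.dot(inputs, weights) + bias))

x = np.array([0.2, -0.5, 0.9])   # input signals, e.g. numeric features of text
w = np.array([0.7, -0.1, 0.3])   # weights, adjusted during training
b = 0.05                         # bias term, also learned

print(neuron(x, w, b))           # ~0.51: one unit's output among billions
```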
In recent months, much attention has gone to the transformative benefits of these technologies. But it’s important to consider that these remarkable advances may come at a price.
AI’s carbon footprint
In their latest annual report, 2023 Landscape: Confronting Tech Power, the AI Now Institute, an independent policy research entity focusing on the concentration of power in the tech industry, says: “The constant push for scale in artificial intelligence has led Big Tech firms to develop hugely energy-intensive computational models that optimize for ‘accuracy’—through increasingly large datasets and computationally intensive model training—over more efficient and sustainable alternatives.”
Though there aren’t any official figures about the power consumption or emissions from data centers, experts estimate that they use one percent of global electricity—more than entire countries. In 2019, Emma Strubell, then a graduate researcher at the University of Massachusetts Amherst, estimated that training a single LLM resulted in over 280,000 kg of CO2 emissions—the equivalent of driving almost 1.2 million km in a gas-powered car. A couple of years later, David Patterson, a computer scientist from the University of California Berkeley, and colleagues estimated GPT-3’s carbon footprint at over 550,000 kg of CO2. In 2022, the tech company Hugging Face estimated the carbon footprint of its own language model, BLOOM, at 25,000 kg of CO2 emissions. (BLOOM’s footprint is lower because Hugging Face uses renewable energy, but it doubled when other life-cycle processes like hardware manufacturing and use were added.)
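The driving comparison is easy to sanity-check. The sketch below assumes a typical gasoline car emits roughly 0.23 kg of CO2 per kilometer; that factor is a round-number assumption for illustration, not the study’s own conversion.

```python
# Back-of-the-envelope check of the driving equivalence cited above.
training_emissions_kg = 280_000   # Strubell's 2019 estimate for one LLM
car_kg_co2_per_km = 0.23          # assumed emission factor for a gas car

print(f"{training_emissions_kg / car_kg_co2_per_km:,.0f} km")  # ~1,217,391 km
```

At that rate, the training emissions do indeed work out to almost 1.2 million km of driving.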
Luckily, despite the growing size and number of data centers, their energy demands and emissions have not grown proportionately—thanks to renewable energy sources and energy-efficient hardware.
But emissions don’t tell the full story.
AI’s hidden human cost
“If historical colonialism annexed territories, their resources, and the bodies that worked on them, data colonialism’s power grab is both simpler and deeper: the capture and control of human life itself through appropriating the data that can be extracted from it for profit.” So write Nick Couldry and Ulises Mejias, authors of the book The Costs of Connection.
Technologies we use daily inexorably gather our data. “Human experience, potentially every layer and aspect of it, is becoming the target of profitable extraction,” Couldry and Mejias say. This feeds data capitalism, the economic model built on the extraction and commodification of data. While we are being dispossessed of our data, Big Tech commodifies it for its own benefit. The result is a consolidation of power structures that reinforces existing race, gender, class and other inequalities.
“The political economy around tech and tech companies, and the developments in AI, contribute to massive displacement and pollution, and significantly change the built environment,” says technologist and activist Yeshi Milner, who founded Data For Black Lives (D4BL) to create measurable change in Black people’s lives using data. The energy requirements, hardware manufacture and the cheap human labor behind AI systems disproportionately affect marginalized communities.
AI’s recent explosive growth spiked the demand for manual, behind-the-scenes tasks, creating an industry described by Mary Gray and Siddharth Suri as “ghost work” in their book of the same name. This invisible human workforce behind the “magic” of AI is overworked and underpaid, and very often based in the Global South. For example, workers in Kenya who made less than $2 an hour were behind the mechanism that trained ChatGPT to properly talk about violence, hate speech and sexual abuse. And, according to an article in Analytics India Magazine, in some cases these workers may not have been paid at all, a case of wage theft. An exposé by the Washington Post describes “digital sweatshops” in the Philippines, where thousands of workers experience low wages, delays in payment, and wage theft by Remotasks, a platform owned by Scale AI, a $7 billion American startup. Rights groups and labor researchers have flagged Scale AI as one company that flouts basic labor standards for workers abroad.
It is possible to draw a parallel with chattel slavery—the most significant economic event that continues to shape the modern world—to see the business structures that allow for the massive exploitation of people, Milner says. Back then, people got chocolate, sugar, cotton; today, they get generative AI tools. “What’s invisible through distance—because [tech companies] also control what we see—is the massive exploitation,” Milner says.
“At Data for Black Lives, we are less concerned with whether AI will become human…[W]e’re more concerned with the growing power of AI to decide who’s human and who’s not,” Milner says. As a decision-making force, AI becomes a “justifying factor for policies, practices, rules that not just reinforce, but are currently turning the clock back generations on people’s civil and human rights.”
Nuria Oliver, a computer scientist, and co-founder and vice-president of the European Laboratory of Learning and Intelligent Systems (ELLIS), says that instead of focusing on the hypothetical existential risks of today’s AI, we should talk about its real, tangible risks.
“Because AI is a transverse discipline that you can apply to any field [from education, journalism, medicine, to transportation and energy], it has a transformative power…and an exponential impact,” she says.
AI’s accountability
“At the core of what we were arguing about data capitalism [is] a call to action to abolish Big Data,” says Milner. “Not to abolish data itself, but the power structures that concentrate [its] power in the hands of very few actors.”
A comprehensive AI Act, currently being negotiated in the European Parliament, aims to rein in Big Tech. It plans to introduce a rating of AI tools based on the harms they cause to humans, while remaining as technology-neutral as possible. The act sets standards for safe, transparent, traceable, non-discriminatory, and environmentally friendly AI systems overseen by people, not automation. The regulations also call for transparency about the content used to train generative AIs, particularly copyrighted data, and for disclosure when content is AI-generated. “This European regulation is setting the example for other regions and countries in the world,” Oliver says. But, she adds, such transparency is hard to achieve.
Google, for example, recently updated its privacy policy to say that anything on the public internet will be used as training data. “Obviously, technology companies have to respond to their economic interests, so their decisions are not necessarily going to be the best for society and for the environment,” Oliver says. “And that’s why we need strong research institutions and civil society institutions to push for actions.” ELLIS also advocates for data centers to be built in locations where the energy can be produced sustainably.
Ironically, AI plays an important role in mitigating its own harms—by plowing through mountains of data about weather changes, extreme weather events and human displacement. “The only way to make sense of this data is using machine learning methods,” Oliver says.
Milner believes that the best way to expose AI-caused systemic inequalities is through people’s stories. “In these last five years, so much of our work [at D4BL] has been creating new datasets, new data tools, bringing the data to life. To show the harms but also to continue to reclaim it as a tool for social change and for political change.” This change, she adds, will depend on whose hands it is in.
Each afternoon, kids walk through my neighborhood on their way back home from school, and almost all of them are walking alone, staring down at their phones. It's a troubling sight. This daily parade of zombie children just can’t bode well for the future.
That’s one reason I felt like Gaia Bernstein’s new book was talking directly to me. A law professor at Seton Hall, Gaia makes a strong argument that people are so addicted to tech at this point that we need big, system-level changes to social media platforms and other addictive technologies, instead of just blaming individuals and expecting them to fix these issues.
Gaia’s book is called Unwired: Gaining Control Over Addictive Technologies. It’s fascinating and I had a chance to talk with her about it for today’s podcast. At its heart, our conversation is really about how and whether we can maintain control over our thoughts and actions, even when some powerful forces are pushing in the other direction.
Listen on Apple | Listen on Spotify | Listen on Stitcher | Listen on Amazon | Listen on Google
We discuss the idea that, in certain situations, maybe it's not reasonable to expect that we’ll be able to enjoy personal freedom and autonomy. We also talk about how to be a good parent when it sometimes seems like our kids prefer to be raised by their iPads; so-called educational video games that actually don’t have anything to do with education; the root causes of tech addictions for people of all ages; and what kinds of changes we should be supporting.
Gaia is Seton Hall’s Technology, Privacy and Policy Professor of Law, as well as Co-Director of the Institute for Privacy Protection, which she founded, and Co-Director of the Gibbons Institute of Law, Science and Technology. She created and spearheaded the Institute’s nationally recognized Outreach Program, which educated parents and students about technology overuse and privacy.
Professor Bernstein's scholarship has been published in leading law reviews, including those of Vanderbilt, Boston College, Boston University, and U.C. Davis. Her work has been selected for the Stanford-Yale Junior Faculty Forum and has received extensive media coverage. Gaia joined Seton Hall's faculty in 2004. Before that, she was a fellow at the Engelberg Center on Innovation Law & Policy and at the Information Law Institute of the New York University School of Law. She holds a J.S.D. from the New York University School of Law, an LL.M. from Harvard Law School, and a J.D. from Boston University.
Gaia’s work on this topic is groundbreaking. I hope you’ll listen to the conversation and then consider pre-ordering her new book, which comes out on March 28.
Time to visit your TikTok doc? The good and bad of doctors on social media
Rakhi Patel has carved a hobby out of reviewing pizza — her favorite food — on Instagram. In a nod to her preferred topping, she calls herself thepepperoniqueen. Photos and videos show her savoring slices from scores of pizzerias. In some of them, she’s wearing scrubs — her attire as an inpatient neurology physician associate at Tufts Medical Center in Boston.
“Depending on how you dress your pizza, it can be more nutritious,” said Patel, who suggests a thin crust, sugarless tomato sauce and vegetables galore as healthier alternatives. “There are no boundaries for a health care professional to enjoy pizza.”
Beyond that, “pizza fuels my mental health and makes me happy, especially when loaded with pepperoni,” she said. “If I’m going to be a pizza connoisseur, then I also need to take care of my physical health by ensuring that I get at least three days of exercise per week and eat nutritiously when I’m not eating pizza.”
She’s among an increasing number of health care professionals, including doctors and nurses, who maintain an active persona on social media, according to bioethics researchers. They share their hobbies and interests with people inside and outside the world of medicine, helping patients and the public become acquainted with the humans behind the scrubs or white coats. Other health care experts limit their posts to medical topics, while some opt for a combination of personal and professional commentaries. Depending on the posts, ethical issues may come into play.
“Health care professionals are quite prevalent on social media,” said Mercer Gary, a postdoctoral researcher at The Hastings Center, an independent bioethics research institute in Garrison, New York. “They’ve been posting on #medTwitter for many years, mainly to communicate with one another, but, of course, anyone can see the threads. Most recently, doctors and nurses have become a presence on TikTok.”
On social media, many health care providers perceive themselves to be “humanizing” their profession by coming across as more approachable — “reminding patients that providers are people and workers, as well as repositories of medical expertise,” Gary said. As a result, she noted that patients who are often intimidated by clinicians may feel comfortable enough to overcome barriers to scheduling health care appointments. The use of TikTok in particular may help doctors and nurses connect with younger followers.
While enduring three years of pandemic conditions, many health care professionals have struggled with burnout, exhaustion and moral distress. “Much health care provider content on social media seeks to expose the difficulties of the work,” Gary added. “TikTok and Instagram reels have shown health care providers crying after losing a patient or exhausted after a night shift in the emergency department.”
A study conducted in Beijing and published last year found that TikTok is the world’s most rapidly growing video application, amassing 1.6 billion users in 2021. “More and more patients are searching for information on genitourinary cancers via TikTok,” the study’s authors wrote in Frontiers in Oncology, referring to cancers of the urinary tracts and male reproductive organs. Among the 61 sample videos examined by the researchers, health care practitioners contributed the content in 29, or 47 percent. Yet 22 of the videos, or 36 percent, contained misinformation, mostly due to outdated information.
More than half of the videos offered good content on disease symptoms and examinations. The authors concluded that “most videos on genitourinary cancers on TikTok are of poor to medium quality and reliability. However, videos posted by media agencies enjoyed great public attention and interaction. Medical practitioners could improve the video quality by cooperating with media agencies and avoiding unexplained terminologies.”
When health care providers post on social media, they must bear in mind that they have legal and ethical duties to their patients, profession and society, said Elizabeth Levy, founder and director of Physicians for Justice in Irvine, Calif., a nonprofit network of volunteer physicians partnering with public interest lawyers to address the social determinants of health.
“Providers are also responsible for understanding the mechanics of their posts,” such as who can see these messages and how long they stay up, Levy said. As a starting point for figuring out what’s acceptable, providers could look at the social media guidelines put out by their professional associations. Even beyond that, though, they must exercise prudent judgment. “As social media continues to evolve, providers will also need to stay updated with the changing risks and benefits of participation.”
Patients often research their providers online, so finding them on social media can help inform patients about providers’ values and approaches to care, said M. Sara Rosenthal, a professor and founding director of the program for bioethics and chair of the hospital ethics committee at the University of Kentucky College of Medicine.
Health care providers’ posts on social media also could promote patient education. They can advance informed consent and help patients navigate the risks and benefits of various treatments or preventive options. However, providers could violate ethical principles if they espouse “harmful, risky or questionable therapies or medical advice that is contrary to clinical practice guidelines or accepted standards of care,” Rosenthal said.
Inappropriate self-disclosure also can affect a provider’s reputation, said Kelly Michelson, a professor of pediatrics and director of the Center for Bioethics and Medical Humanities at Northwestern University’s Feinberg School of Medicine. A clinician’s obligations to professionalism extend beyond those moments when they are directly taking care of their patients, she said. “Many experts recommend against clinicians ‘friending’ patients or the families on social media because it blurs the patient-clinician boundary.”
Meanwhile, clinicians need to adhere closely to confidentiality. In sharing a patient’s case online for educational purposes, safeguarding identity becomes paramount. Removing names and changing minor details is insufficient, Michelson said.
“The patient-clinician relationship is sacred, and it can only be effective if patients have 100 percent confidence that all that happens with their clinician is kept in the strictest of confidence,” she said, adding that health care providers also should avoid obtaining information about their patients from social media because it can lead to bias and risk jeopardizing objectivity.
Academic clinicians can use social media as a recruitment tool to expand the pool of research participants for their studies, Michelson said. Because the majority of clinical research is conducted at academic medical centers, large segments of the population are excluded. “This affects the quality of the data and knowledge we gain from research,” she said.
Don S. Dizon, a professor of medicine and surgery at the Warren Alpert Medical School of Brown University in Providence, Rhode Island, uses LinkedIn and Doximity, as well as Twitter, Instagram, TikTok, Facebook, and most recently, YouTube and Post. He’s on Twitter nearly every day, where he interacts with the oncology community and his medical colleagues.
Also, he said, “I really like Instagram. It’s where you will see a hybrid of who I am professionally and personally. I’ve become comfortable sharing both up to a limit, but where else can I combine my appreciation of clothes with my professional life?” On that site, he can be seen sporting shirts with polka dots or stripes and an occasional bow tie. He also posts photos of his cats.
Dizon started using TikTok several years ago, telling medical stories in short-form videos. He may talk about an inspirational patient, his views on end-of-life care and death, or memories of people who have passed. But he is careful not to divulge any details that would identify anyone.
Recently, some people have become his patients after viewing his content on social media or on the internet in general, even though he clearly states that it isn’t a forum for medical advice. “In both situations, they are so much more relaxed when we meet, because it’s as if they have a sense of who I am as a person,” Dizon said. “I think that has helped so much in talking through a cancer diagnosis and a treatment plan, and yes, even discussions about prognosis.”
He also posts about equity and diversity. “I have found myself more likely to repost or react to issues that are inherently political, including racism, homophobia, transphobia and lack-of-access issues, because medicine is not isolated from society, and I truly believe that medicine is a social justice issue,” said Dizon, who is vice chair of diversity, equity, inclusion and professional integrity at the SWOG Cancer Research Network.
Through it all, Dizon likes “to break through the notion of doctor as infallible and all-knowing, the doctor as deity,” he said. “Humanizing what I do, especially in oncology, is something that challenges me on social media, and I appreciate the opportunities to do it on TikTok.”