Massive benefits of AI come with environmental and human costs. Can AI itself be part of the solution?
The recent explosion of generative artificial intelligence tools like ChatGPT and DALL-E has enabled anyone with internet access to harness AI’s power for enhanced productivity, creativity, and problem-solving. With their ever-improving capabilities and expanding user base, these tools have proved useful across disciplines, from the creative to the scientific.
But beneath the technological wonders of human-like conversation and creative expression lies a dirty secret—an alarming environmental and human cost. AI has an immense carbon footprint. Systems like ChatGPT take months to train in high-powered data centers, which demand huge amounts of electricity, much of which is still generated with fossil fuels, as well as water for cooling. “One of the reasons why OpenAI needs investments [to the tune of] $10 billion from Microsoft is because they need to pay for all of that computation,” says Kentaro Toyama, a computer scientist at the University of Michigan. There’s also an ecological toll from mining the rare minerals required for hardware and infrastructure. This environmental exploitation pollutes land, triggers natural disasters and causes large-scale human displacement. Finally, for the data labeling needed to train and correct AI algorithms, the Big Data industry relies on cheap and exploitative labor, often from the Global South.
Generative AI tools are based on large language models (LLMs), the most well-known being the various versions of GPT. LLMs can perform natural language processing tasks, including translating, summarizing and answering questions. They rely on artificial neural networks, a technique known as deep learning, itself a form of machine learning. Inspired by the human brain, neural networks are made of millions of artificial neurons. “The basic principles of neural networks were known even in the 1950s and 1960s,” Toyama says, “but it’s only now, with the tremendous amount of compute power that we have, as well as huge amounts of data, that it’s become possible to train generative AI models.”
In recent months, much attention has gone to the transformative benefits of these technologies. But it’s important to consider that these remarkable advances may come at a price.
AI’s carbon footprint
In its latest annual report, 2023 Landscape: Confronting Tech Power, the AI Now Institute, an independent policy research organization focusing on the concentration of power in the tech industry, writes: “The constant push for scale in artificial intelligence has led Big Tech firms to develop hugely energy-intensive computational models that optimize for ‘accuracy’—through increasingly large datasets and computationally intensive model training—over more efficient and sustainable alternatives.”
Though there aren’t any official figures about the power consumption or emissions from data centers, experts estimate that they use one percent of global electricity—more than entire countries. In 2019, Emma Strubell, then a graduate researcher at the University of Massachusetts Amherst, estimated that training a single LLM resulted in over 280,000 kg of CO2 emissions—the equivalent of driving almost 1.2 million km in a gas-powered car. A couple of years later, David Patterson, a computer scientist at the University of California Berkeley, and colleagues estimated GPT-3’s carbon footprint at over 550,000 kg of CO2. In 2022, the tech company Hugging Face estimated the carbon footprint of its own language model, BLOOM, at 25,000 kg of CO2 emissions. (BLOOM’s footprint is lower because Hugging Face uses renewable energy, but it doubled when other life-cycle processes like hardware manufacturing and use were added.)
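For readers who want to sanity-check the driving comparison, here is a minimal back-of-the-envelope sketch. The per-kilometer emission factor for a gas-powered car is an assumed typical value, not a number taken from the studies cited above.

```python
# Back-of-the-envelope check of the driving equivalence cited above.
# The per-km factor is an assumed typical value for a gas-powered car,
# NOT a figure from the Strubell study.
TRAINING_EMISSIONS_KG = 280_000       # estimated CO2 from training one LLM (2019 estimate)
CAR_EMISSIONS_KG_PER_KM = 0.23        # assumed average for a gas-powered passenger car

equivalent_km = TRAINING_EMISSIONS_KG / CAR_EMISSIONS_KG_PER_KM
print(f"Roughly {equivalent_km:,.0f} km of driving")  # ~1,217,000 km, i.e. almost 1.2 million
```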
Luckily, despite the growing size and number of data centers, their energy demands and emissions have not grown proportionately—thanks to renewable energy sources and more energy-efficient hardware.
But emissions don’t tell the full story.
AI’s hidden human cost
“If historical colonialism annexed territories, their resources, and the bodies that worked on them, data colonialism’s power grab is both simpler and deeper: the capture and control of human life itself through appropriating the data that can be extracted from it for profit.” So write Nick Couldry and Ulises Mejias, authors of the book The Costs of Connection.
Technologies we use daily inexorably gather our data. “Human experience, potentially every layer and aspect of it, is becoming the target of profitable extraction,” Couldry and Mejias write. This feeds data capitalism, the economic model built on the extraction and commodification of data. While we are dispossessed of our data, Big Tech commodifies it for its own benefit. The result is a consolidation of power structures that reinforce existing race, gender, class and other inequalities.
“The political economy around tech and tech companies, and the development in advances in AI contribute to massive displacement and pollution, and significantly changes the built environment,” says technologist and activist Yeshi Milner, who founded Data For Black Lives (D4BL) to create measurable change in Black people’s lives using data. The energy requirements, hardware manufacture and the cheap human labor behind AI systems disproportionately affect marginalized communities.
AI’s recent explosive growth spiked the demand for manual, behind-the-scenes tasks, creating an industry that Mary Gray and Siddharth Suri describe as “ghost work” in their book. This invisible human workforce behind the “magic” of AI is overworked and underpaid, and very often based in the Global South. For example, workers in Kenya who made less than $2 an hour were behind the mechanism that trained ChatGPT to properly talk about violence, hate speech and sexual abuse. And, according to an article in Analytics India Magazine, in some cases these workers may not have been paid at all, a case of wage theft. An exposé by the Washington Post describes “digital sweatshops” in the Philippines, where thousands of workers experience low wages, delays in payment, and wage theft by Remotasks, a platform owned by Scale AI, a $7 billion American startup. Rights groups and labor researchers have flagged Scale AI as one company that flouts basic labor standards for workers abroad.
It is possible to draw a parallel with chattel slavery—the most significant economic event that continues to shape the modern world—to see the business structures that allow for the massive exploitation of people, Milner says. Back then, people got chocolate, sugar, cotton; today, they get generative AI tools. “What’s invisible through distance—because [tech companies] also control what we see—is the massive exploitation,” Milner says.
“At Data for Black Lives, we are less concerned with whether AI will become human…[W]e’re more concerned with the growing power of AI to decide who’s human and who’s not,” Milner says. As a decision-making force, AI becomes a “justifying factor for policies, practices, rules that not just reinforce, but are currently turning the clock back generations on people’s civil and human rights.”
Nuria Oliver, a computer scientist, and co-founder and vice-president of the European Laboratory of Learning and Intelligent Systems (ELLIS), says that instead of focusing on the hypothetical existential risks of today’s AI, we should talk about its real, tangible risks.
“Because AI is a transverse discipline that you can apply to any field [from education, journalism, medicine, to transportation and energy], it has a transformative power…and an exponential impact,” she says.
AI’s accountability
“At the core of what we were arguing about data capitalism [is] a call to action to abolish Big Data,” says Milner. “Not to abolish data itself, but the power structures that concentrate [its] power in the hands of very few actors.”
A comprehensive AI Act, currently being negotiated in the European Parliament, aims to rein in Big Tech. It plans to introduce a rating of AI tools based on the harms they cause to humans, while remaining as technology-neutral as possible. The act sets standards for safe, transparent, traceable, non-discriminatory, and environmentally friendly AI systems, overseen by people rather than automation. The regulations also call for transparency about the content used to train generative AIs, particularly copyrighted data, and for disclosure when content is AI-generated. “This European regulation is setting the example for other regions and countries in the world,” Oliver says. But, she adds, such transparency is hard to achieve.
Google, for example, recently updated its privacy policy to say that anything on the public internet will be used as training data. “Obviously, technology companies have to respond to their economic interests, so their decisions are not necessarily going to be the best for society and for the environment,” Oliver says. “And that’s why we need strong research institutions and civil society institutions to push for actions.” ELLIS also advocates for data centers to be built in locations where the energy can be produced sustainably.
Ironically, AI plays an important role in mitigating its own harms—by plowing through mountains of data about weather changes, extreme weather events and human displacement. “The only way to make sense of this data is using machine learning methods,” Oliver says.
Milner believes that the best way to expose AI-caused systemic inequalities is through people's stories. “In these last five years, so much of our work [at D4BL] has been creating new datasets, new data tools, bringing the data to life. To show the harms but also to continue to reclaim it as a tool for social change and for political change.” This change, she adds, will depend on whose hands it is in.
New device can diagnose concussions using AI
For a long time after Mary Smith hit her head, she was not able to function. Test after test came back normal, so her doctors ruled out a concussion, but she knew something was wrong. Finally, when she took a test with the novel EyeBOX device, recently approved by the FDA, she learned she had indeed been dealing with the aftermath of a concussion.
“I felt like even my husband and doctors thought I was faking it or crazy,” recalls Smith, who preferred not to disclose her real name. “When I took the EyeBOX test it showed that my eyes were not moving together and my BOX score was abnormal.” To her diagnosticians, scientists at the Minneapolis-based company Oculogica who developed the EyeBOX, these markers were concussion signs. “I cried knowing that finally someone could figure out what was wrong with me and help me get better,” she says.
Concussion affects around 42 million people worldwide. While it’s increasingly common in the news because of sports injuries, anything that causes damage to the head, from a fall to a car accident, can result in a concussion. The sudden blow or jolt can disrupt the normal way the brain works. In the immediate aftermath, people may suffer from headaches, lose consciousness and experience dizziness, confusion and vomiting. Some recover but others have side effects that can last for years, particularly affecting memory and concentration.
There is no simple standard-of-care test to confirm or rule out a concussion. Nor do concussions appear on MRI or CT scans. Instead, medical professionals use more indirect approaches that test for symptoms of concussion, such as assessments of patients’ learning and memory skills, ability to concentrate, and problem solving. They also look at balance and coordination. Most tests take the form of questionnaires or symptom checklists. Consequently, they have limitations, can be biased, and may miss a concussion or produce a false positive. Some people suspected of having a concussion may ordinarily have difficulty with literacy and problem-solving tests because of language challenges or education levels.
Another problem with current tests is that patients, particularly soldiers who want to return to combat and athletes who would like to keep competing, may try to hide their symptoms to avoid being diagnosed with a brain injury. Trauma physicians who work with concussion patients need a tool that is more objective and consistent.
“The importance of having an objective measurement tool for the diagnosis of concussion is of great importance,” says Douglas Powell, associate professor of biomechanics at the University of Memphis, with research interests in sports injury and concussion. “While there are a number of promising systems or metrics, we have yet to develop a system that is portable, accessible and objective for use on the sideline and in the clinic. The EyeBOX may be able to address these issues, though time will be the ultimate test of performance.”
The EyeBOX as a window inside the brain
Using eye movements to diagnose a concussion has emerged as a promising technique since around 2010. Oculogica combined eye tracking with AI to develop the EyeBOX, an unbiased, objective diagnostic tool.
“What’s so great about this type of assessment is it doesn’t rely on the patient's education level, willingness to follow instructions or cooperation,” says Uzma Samadani, a neurosurgeon and brain injury researcher at the University of Minnesota, who founded Oculogica. “You can’t game this. It assesses functions that are prompted by your brain.”
In 2010, Samadani was working on a clinical trial to improve the outcome of brain injuries. The team needed some way to measure if seriously brain injured patients were improving. One thing patients could do was watch TV. So Samadani designed and patented an AI-based algorithm that tracks the relationship between eye movement and concussion.
The EyeBOX test requires patients to watch movie or music clips for 220 seconds. An eye-tracking camera records subconscious eye movements, capturing eye positions 500 times per second as patients watch the video. In all, it collects over 100,000 data points. The device then uses AI to assess whether there are any disruptions to the normal way the eyes move.
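Those figures square with simple arithmetic: 500 samples per second over a 220-second clip works out to 110,000 eye-position samples. A minimal sketch of that calculation, using only the numbers in the article:

```python
# Rough data-volume arithmetic for one EyeBOX session, using the figures above.
SAMPLE_RATE_HZ = 500     # eye positions recorded per second
CLIP_DURATION_S = 220    # length of the movie/music clips, in seconds

samples = SAMPLE_RATE_HZ * CLIP_DURATION_S
print(samples)           # 110000 -- consistent with "over 100,000 data points"
```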
Cranial nerves are responsible for transmitting information between the brain and the body. Many are involved in eye movement. Pressure caused by a concussion can affect how these nerves work. So tracking how the eyes move can indicate if there’s anything wrong with the cranial nerves and where the problem lies.
If someone is healthy, their eyes should be able to focus on an object and follow movement, and both eyes should be coordinated with each other. The EyeBOX can detect abnormalities. For example, if a patient’s eyes are coordinated but not moving as they should, that indicates issues in the central brain stem, while only one eye moving abnormally suggests that a particular nerve section is affected.
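The description above implies a simple rule of thumb for interpreting the tracking data. The sketch below is a hypothetical illustration of that logic only; it is not Oculogica’s actual algorithm, and the function and its parameters are invented for illustration.

```python
# Hypothetical illustration of the interpretive logic described above;
# NOT Oculogica's actual algorithm, just the article's rule of thumb.
def interpret_eye_movements(coordinated: bool, movement_normal: bool) -> str:
    """Map coarse eye-movement observations to the article's interpretations."""
    if movement_normal:
        return "no abnormality detected"
    if coordinated:
        return "coordinated but abnormal movement: possible central brain stem issue"
    return "one eye moving abnormally: a particular cranial nerve section may be affected"

print(interpret_eye_movements(coordinated=True, movement_normal=False))
```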
Uzma Samadani with the EyeBOX device. (Courtesy Oculogica)
“The EyeBOX is a monitor for cranial nerves,” says Samadani. “Essentially it’s a form of digital neurological exam.” Several other eye-tracking techniques already exist, but they rely on subjective self-reported symptoms. Many also require a baseline, a measure of how patients reacted when they were healthy, which often isn’t available.
VOMS (Vestibular Ocular Motor Screening) is one of the most accurate diagnostic tests used in clinics in combination with other tests, but it is subjective. It involves a therapist getting patients to move their head or eyes as they focus on or follow a particular object. Patients then report their symptoms.
The King-Devick test measures how fast patients can read numbers and compares it to a baseline. Since it is mainly used for athletes, the initial test is completed before the season starts. But participants can manipulate it. It also cannot be used in emergency rooms because the majority of patients wouldn’t have prior baseline tests.
Unlike these tests, the EyeBOX doesn’t use a baseline and is objective because it doesn’t rely on patients’ answers. “It shows great promise,” says Thomas Wilcockson, a senior lecturer in psychology at Loughborough University and an expert in using eye-tracking techniques in neurological disorders. “Baseline testing of eye movements is not always possible. Alternative measures of concussion currently in development, including work with VR headsets, seem to currently require it. Therefore the EyeBOX may have an advantage.”
A technology that’s still evolving
In its most recent clinical trial, Oculogica used the EyeBOX to test 46 patients who had a concussion and 236 patients who did not. The sensitivity of the EyeBOX, or the probability of it correctly identifying a patient’s concussion, was 80.4 percent. Meanwhile, the test accurately ruled out a concussion in 66.1 percent of cases; this is known as its specificity.
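Put in screening terms, sensitivity is the share of true concussions the test flags, and specificity is the share of non-concussions it correctly clears. A minimal sketch follows; the raw true-positive and true-negative counts are back-calculated from the published percentages purely for illustration.

```python
# Sensitivity and specificity from the trial counts reported above.
# The raw counts are back-calculated from the published percentages
# (46 concussed, 236 non-concussed) purely for illustration.
true_positives, concussed = 37, 46          # concussions correctly flagged
true_negatives, non_concussed = 156, 236    # non-concussions correctly cleared

sensitivity = true_positives / concussed        # ~0.804 -> 80.4 percent
specificity = true_negatives / non_concussed    # ~0.661 -> 66.1 percent
print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}")
```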
While the team is working on improving the numbers, experts who treat concussion patients find the device promising. “I strongly support their use of eye tracking for diagnostic decision making,” says Douglas Powell. “But for diagnostic tests, we would prefer at least one of the sensitivity or specificity values to be greater than 90 percent.” Powell compares the EyeBOX with the Buffalo Concussion Treadmill Test, which has sensitivity and specificity values of 73 and 78 percent, respectively. The VOMS has also shown greater accuracy than the EyeBOX, at least for now. Still, the EyeBOX is competitive with the best diagnostic testing available for concussion, and Powell hopes that its detection prowess will improve. “I anticipate that the algorithms being used by Oculogica will be under continuous revision and expect the results will improve within the next several years.”
Powell thinks the EyeBOX could be an important complement to other concussion assessments.
“The Oculogica product is a viable diagnostic tool that supports clinical decision making. However, concussion is an injury that can present with a wide array of symptoms, and the use of technology such as the Oculogica should always be a supplement to patient interaction.”
Ioannis Mavroudis, a consultant neurologist at Leeds Teaching Hospital, agrees that the EyeBOX has promise, but cautions that concussions are too complex to rely on the device alone. For example, not all concussions affect how eyes move. “I believe that it can definitely help, however not all concussions show changes in eye movements. I believe that if this could be combined with a cognitive assessment the results would be impressive.”
The Oculogica team submitted their clinical data for FDA approval and received it in 2018. Now, they’re working to bring the test to the commercial market and using the device clinically to help diagnose concussions for clients. They also want to look at other areas of brain health in the next few years. Samadani believes that the EyeBOX could possibly be used to detect diseases like multiple sclerosis or other neurological conditions. “It’s a completely new way of figuring out what someone’s neurological exam is and we’re only beginning to realize the potential,” says Samadani.
One of Samadani’s biggest aspirations is to help reduce inequalities in healthcare stemming from skin color and other factors like money or language barriers. From that perspective, the EyeBOX’s greatest potential could be in emergency rooms. It can help diagnose concussions in addition to the questionnaires, assessments and symptom checklists currently used in emergency departments. Unlike these more subjective tests, the EyeBOX can produce an objective analysis of brain injury through AI when patients are admitted and assessed, regardless of their socioeconomic status, education, or language abilities. Studies suggest that there are racial disparities in how patients with brain injuries are treated, such as how quickly they’re assessed and given a treatment plan.
“The color of your skin can have a huge impact in how quickly you are triaged and managed for brain injury,” says Samadani. “As a result of that, people of color have significantly worse outcomes after traumatic brain injury than people who are white. The EyeBOX has the potential to reduce inequalities,” she explains.
“If you had a digital neurological tool that you could screen and triage patients on admission to the emergency department you would potentially be able to make sure that everybody got the same standard of care,” says Samadani. “My goal is to change the way brain injury is diagnosed and defined.”
Catching colds may help protect kids from Covid
A common cold virus causes the immune system to produce T cells that also provide protection against SARS-CoV-2, according to new research. The study, published last month in PNAS, shows that this effect is most pronounced in young children. The finding may help explain why most young people who have been exposed to the cold-causing coronavirus have not developed serious cases of COVID-19.
One curiosity stood out in the early days of the COVID-19 pandemic: why were so few kids getting sick? Generally, young children and the elderly are the most vulnerable to disease outbreaks, particularly viral infections, either because their immune systems are not fully developed or because they are starting to fail.
But solid information on the new infection was so scarce that many public health officials acted on the precautionary principle, assumed a worst-case scenario, and applied the broadest, most restrictive policies to all people to try to contain the coronavirus SARS-CoV-2.
One early thought was that lockdowns worked and kids (ages 6 months to 17 years) simply were not being exposed to the virus. So it was a shock when data started to come in showing that well over half of them carried antibodies to the virus, indicating exposure without getting sick. That trend grew over time and the latest tracking data from the CDC shows that 96.3 percent of kids in the U.S. now carry those antibodies.
But that couldn't be the whole story, because antibody protection fades, sometimes as early as a month after exposure and usually within a year. Additionally, SARS-CoV-2 has been spewing out waves of variants that were more resistant to antibodies generated by their predecessors. The resistance was so significant that, over time, the FDA withdrew its emergency use authorization for a handful of monoclonal antibodies previously approved to treat the infection because they no longer worked.
Antibodies got most of the attention early on because they are part of the first line response of the immune system. Antibodies can bind to viruses and neutralize them, preventing infection. They are relatively quick and easy to measure and even manufacture, but as SARS-CoV-2 showed us, often viruses can quickly evolve to become more resistant to them. Some scientists are exploring whether the reactions of T cells could serve as a more useful measure of immune protection.
Kids, colds and T cells
T cells are the part of the immune system that deals with cells once they have become infected. But working with T cells is much more difficult, takes longer, and is more expensive than working with antibodies. So studies often lag behind on this part of the immune system.
A group of researchers led by Annika Karlsson at the Karolinska Institute in Sweden focuses on T cells that target virus-infected cells and, unsurprisingly, found that they can play a role in SARS-CoV-2 infection. Other labs have shown that vaccination and natural exposure to the virus generate different patterns of T cell responses.
The Swedes also looked at another member of the coronavirus family, OC43, which circulates widely and is one of several causes of the common cold. The molecular structure of OC43 is similar to its more deadly cousin SARS-CoV-2. Sometimes a T cell response to one virus can produce a cross-reactive response to a similar protein structure in another virus, meaning that T cells will identify and respond to the two viruses in much the same way. Karlsson looked to see if T cells for OC43 from a wide age range of patients were cross-reactive to SARS-CoV-2.
And that is what they found, as reported in the PNAS study last month: there was cross-reactive activity, but it depended on a person’s age. A subset of a certain type of T cells, called mCD4+, that recognized various protein parts of the cold-causing virus OC43 expressed on the surface of an infected cell also recognized those same protein parts from SARS-CoV-2. The T cell response was lower than that generated by natural exposure to SARS-CoV-2, but it was functional and thus could help limit the severity of COVID-19.
“The cross-reactivity peaked at age six when more than half the people tested have a cross-reactive immune response,” says Karlsson, though their sample is too small to say whether this finding applies more broadly across the population. The vast majority of children as young as two years had OC43-specific mCD4+ T cell responses. In adulthood, the functionality of both the OC43-specific and the cross-reactive T cells wanes significantly, especially with advanced age.
“Considering that the mortality rate in children is the lowest from ages five to nine, and higher in younger children, our results imply that cross-reactive mCD4+ T cells may have a role in the control of SARS-CoV-2 infection in children,” the authors wrote in their paper.
“One of the most politicized aspects of our pandemic response was not accepting that children are so much less at risk for severe disease with COVID-19,” because usually young children are among the most vulnerable to pathogens, says Monica Gandhi, professor of medicine at the University of California San Francisco and author of the book, Endemic: A Post-Pandemic Playbook, to be released by the Mayo Clinic Press this summer. The immune response of kids to SARS-CoV-2 stood our expectations on their head. “We just haven't seen this before, so knowing the mechanism of protection is really important.”
Why the T cell immune response can fade with age is largely unknown. With some viruses, such as measles, a single vaccination or infection generates life-long protection. But respiratory tract infections like SARS-CoV-2 cause a localized infection, specific to certain organs, and that response tends to be shorter lived than the response to systemic infections that affect the entire body. Karlsson suspects the elderly might be exposed to these localized types of viruses less often. Also, frequent continued exposure to a virus that results in reactivation of the memory T cell pool might eventually result in “a kind of immunosenescence or immune exhaustion that is associated with aging,” Karlsson says. This fading protection is why older people need to be repeatedly vaccinated against SARS-CoV-2.
Policy implications
Following the numbers on COVID-19 infections and severity over the last three years has shown us that healthy young people without risk factors are not likely to develop serious disease. This latest study points to a mechanism that helps explain why. But the inertia of existing policies remains. How should we adjust policy recommendations based on what we know today?
The World Health Organization (WHO) updated their COVID-19 vaccination guidance on March 28. It calls for a focus on vaccinating and boosting those at risk for developing serious disease. The guidance basically shrugged its shoulders when it came to healthy children and young adults receiving vaccinations and boosters against COVID-19. It said the priority should be to administer the “traditional essential vaccines for children,” such as those that protect against measles, rubella, and mumps.
“As an immunologist and a mother, I think that catching a cold or two when you are a kid and otherwise healthy is not that bad for you. Children have a much lower risk of becoming severely ill with SARS-CoV-2,” says Karlsson. She has followed public health guidance in Sweden, which means that her young children have not been vaccinated, but being older, she has received the vaccine and boosters. Gandhi and her children have been vaccinated, but they do not plan on additional boosters.
The WHO got it right in “concentrating on what matters,” which is getting traditional childhood immunizations back on track after their dramatic decline over the last three years, says Gandhi. Nor is there a need for masking in schools, according to a study from the Catalonia region of Spain. It found “no difference in masking and spread in schools,” particularly since tracking data indicate that nearly all young people have been exposed to SARS-CoV-2.
Both researchers lament that public discussion has overemphasized the quickly fading antibody part of the immune response to SARS-CoV-2 compared with the more durable T cell component. They say developing an efficient measure of T cell response for doctors to use in the clinic would help to monitor immunity in people at risk for severe cases of COVID-19 compared with the current method of toting up potential risk factors.