“Deep Fake” Video Technology Is Advancing Faster Than Our Policies Can Keep Up
This article is part of the magazine, "The Future of Science In America: The Election Issue," co-published by LeapsMag, the Aspen Institute Science & Society Program, and GOOD.
Alethea.ai sports a grid of faces smiling, blinking and looking about. Some are beautiful, some are oddly familiar, but all share one thing in common—they are fake.
Alethea creates "synthetic media"— including digital faces customers can license saying anything they choose with any voice they choose. Companies can hire these photorealistic avatars to appear in explainer videos, advertisements, multimedia projects or any other applications they might dream up without running auditions or paying talent agents or actor fees. Licenses begin at a mere $99. Companies may also license digital avatars of real celebrities or hire mashups created from real celebrities including "Don Exotic" (a mashup of Donald Trump and Joe Exotic) or "Baby Obama" (a large-eared toddler that looks remarkably similar to a former U.S. President).
In the midst of the COVID pandemic, the appeal is understandable. Rather than flying to a remote location to film a beer commercial, an actor can simply license their avatar to do the work for them. The question is where and when this technology will cross the line from legitimately licensed and authorized synthetic media to deep fakes: synthetic videos designed to deceive the public for financial and political gain.
Deep fakes are not new. From written quotes that are manipulated and taken out of context to audio quotes that are spliced together to mean something other than originally intended, misrepresentation has been around for centuries. What is new is the technology that allows this sort of seamless and sophisticated deception to be brought to the world of video.
"At one point, video content was considered more reliable, and had a higher threshold of trust," said Alethea CEO and co-founder, Arif Khan. "We think video is harder to fake and we aren't yet as sensitive to detecting those fakes. But the technology is definitely there."
"In the future, each of us will only trust about 15 people and that's it," said Phil Lelyveld, who serves as Immersive Media Program Lead at the Entertainment Technology Center at the University of Southern California. "It's already very difficult to tell true footage from fake. In the future, I expect this will only become more difficult."
As the 2020 U.S. presidential election nears, the potential moral and ethical implications of this technology are startling. A number of cases of truth tampering have recently been widely publicized. On August 5, President Donald Trump's campaign released an ad featuring several photos of Joe Biden that were altered to make it seem as if he were hiding all alone in his basement. In one photo, at least ten people who had been sitting with Biden in the original shot were cut out. In other photos, images of Biden at a nature preserve and praying in church were altered to make it appear he was in that same basement. Recently, videos of Speaker of the House Nancy Pelosi were slowed to roughly 75 percent of their original speed to make her speech sound slurred.
During a campaign event in Florida on September 15 of this year, former Vice President Joe Biden was introduced by Puerto Rican singer-songwriter Luis Fonsi. After he was introduced, Biden paid tribute to the singer-songwriter by holding up his cell phone and playing the hit song "Despacito". Shortly afterward, a doctored version of this video appeared on the self-described parody site the United Spot, replacing "Despacito" with N.W.A.'s "F—- Tha Police". By September 16, Donald Trump had retweeted the video twice: first with the line "What is this all about" and then with the line "China is drooling. They can't believe this!" Twitter was quick to mark the video in these tweets as manipulated media.
Twitter had previously addressed several of Donald Trump's tweets, flagging a video shared in June as manipulated media and removing altogether a video shared by Trump in July that showed a group promoting hydroxychloroquine as an effective cure for COVID-19. Many of these manipulated videos are ultimately flagged or taken down, but not before they are seen and shared by millions of online viewers.
These faked videos were exposed rather quickly, as they could be compared with the original, publicly available source material. But what happens when there is no original source material? How do we know what's true in a world where original videos created with avatars of celebrities and politicians can be manipulated to say virtually anything?
"This type of fake media is a profound threat to our democracy," said Reid Blackman, the CEO of VIRTUE--an ethics consultancy for AI leaders. "Democracy depends on well-informed citizens. When citizens can't or won't discern between real and fake news, the implications are huge."
In light of the importance of reliable information in the political system, there is a clear and present need to verify that the images and news we consume are authentic. So how can anyone ever know that the content they are viewing is real?
"This will not be a simple technological solution," said Blackman. "There is no 'truth' button to push to verify authenticity. There's plenty of blame and condemnation to go around. Purveyors of information have a responsibility to vet the reliability of their sources. And consumers also have a responsibility to vet their sources."
Yet the process of verifying sources has never been more challenging. More and more citizens are choosing to live in a "media bubble"—gathering and sharing news only from and with people who share their political leanings and opinions. At one time, United States broadcasters were bound by the Fairness Doctrine—requiring them to present controversial issues important to the public in a way that the FCC deemed honest, equitable and balanced. The repeal of this doctrine in 1987 paved the way for new forms of cable news channels such as Fox News and MSNBC that appealed to viewers with a particular point of view. The Internet has only exacerbated these tendencies. Social media algorithms are designed to keep people clicking within their comfort zones by presenting members with only the thoughts and opinions they want to hear.
"I sometimes laugh when I hear people tell me they can back a particular opinion they hold with research," said Blackman. "Having conducted a fair bit of true scientific research, I am aware that clicking on one article on the Internet hardly qualifies. But a surprising number of people believe that finding any source online that states the fact they choose to believe is the same as proving it true."
Back to the fundamental challenge: How do we as a society root out what's false online? Lelyveld suggests that it will begin by verifying things that are known to be true rather than trying to call out everything that is fake. "The EU called me in to talk about how to deal with fake news coming out of Russia," said Lelyveld. "I told them Hollywood has spent 100 years developing special effects technology to make things that are wholly fictional indistinguishable from the truth. I told them that you'll never chase down every source of fake news. You're better off focusing on what can be proved true."
Arif Khan agrees. "There are probably 100 accounts attributed to Elon Musk on Twitter, but only one has the blue checkmark," said Khan. "That means Twitter has verified that an account of public interest is real. That's what we're trying to do with our platform. Allow celebrities to verify that specific videos were licensed and authorized directly by them."
Alethea will use another key technology called blockchain to mark all authentic authorized videos with celebrity avatars. Blockchain uses a distributed ledger technology to make sure that no undetected changes have been made to the content. Think of the difference between editing a document in a traditional word processing program and editing in a distributed online editing system like Google Docs. In a traditional word processing program, you can edit and copy a document without revealing any changes. In a shared editing system like Google Docs, every person who shares the document can see a record of every edit, addition and copy made of any portion of the document. In a similar way, blockchain helps Alethea ensure that approved videos have not been copied or altered inappropriately.
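To make the idea concrete, here is a minimal Python sketch of how a hash-chained ledger can flag tampering, assuming a simple in-memory list stands in for the distributed ledger; the record fields and function names are hypothetical illustrations, not Alethea's actual implementation.

```python
import hashlib
import json

def sha256(data: bytes) -> str:
    """Hex digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

def append_record(ledger: list, video_bytes: bytes, note: str) -> None:
    """Append a ledger entry that chains to the previous entry's hash."""
    prev_hash = ledger[-1]["entry_hash"] if ledger else "0" * 64
    entry = {
        "prev_hash": prev_hash,
        "video_hash": sha256(video_bytes),  # fingerprint of the approved video
        "note": note,
    }
    entry["entry_hash"] = sha256(json.dumps(entry, sort_keys=True).encode())
    ledger.append(entry)

def verify(ledger: list, video_bytes: bytes) -> bool:
    """Check the chain is intact and the video matches a recorded fingerprint."""
    prev = "0" * 64
    for entry in ledger:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        recomputed = sha256(json.dumps(body, sort_keys=True).encode())
        if entry["prev_hash"] != prev or entry["entry_hash"] != recomputed:
            return False  # the ledger itself was edited after the fact
        prev = entry["entry_hash"]
    return any(e["video_hash"] == sha256(video_bytes) for e in ledger)

# Usage: register an authorized clip, then check a doctored copy against it.
ledger = []
append_record(ledger, b"authorized avatar clip", "licensed by the celebrity")
print(verify(ledger, b"authorized avatar clip"))  # True
print(verify(ledger, b"doctored avatar clip"))    # False
```

In a real blockchain the ledger is replicated across many parties, which is what makes quiet after-the-fact edits impractical; the sketch only shows the hashing logic that detects them.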
While AI companies like Alethea are moving to ensure that avatars based on real individuals aren't misrepresented, the situation becomes a bit murkier when it comes to representing groups, races, creeds, and other forms of identity. Alethea is rightly proud that its completely artificial avatars visually represent a variety of ages, races and sexes. However, companies could conceivably license an avatar to represent a marginalized group without actually hiring a person within that group to decide what the avatar will do or say.
"I don't know if I would call this tokenism, as that is difficult to identify without understanding the hiring company's intent," said Blackman. "Where this becomes deeply troubling is when avatars are used to represent a marginalized group without clearly pointing out the actor is an avatar. It's one thing for an African American woman avatar to say, 'I like ice cream.' It's entirely different thing for an African American woman avatar to say she supports a particular political candidate. In the second case, the avatar is being used as social proof that real people of a certain type back a certain political idea. And there the deception is far more problematic."
"It always comes down to unintended consequences of technology," said Lelyveld. "Technology is neutral—it's only the implementation that has the power to be good or bad. Without a thoughtful approach to the cultural, moral and political implications of technology, it often drifts towards the bad. We need to make a conscious decision as we release new technology to ensure it moves towards the good."
When presented with the idea that his avatars might be used to misrepresent marginalized groups, Khan was thoughtful. "Yes, I can see that is an unintended consequence of our technology. We would like to encourage people to license the avatars of real people, who would have final approval over what their avatars say or do. As to what people do with our completely artificial avatars, we will have to consider that moving forward."
Lelyveld frankly sees the ability for advertisers to create avatars that are our assistants or even our friends as a greater moral concern. "Once our digital assistant or avatar becomes an integral part of our life—even a friend as it were, what's to stop marketers from having those digital friends make suggestions about what drink we buy, which shirt we wear or even which candidate we elect? The possibilities for bad actors to reach us through our digital circle is mind-boggling."
Ultimately, Blackman suggests, we as a society will need to make decisions about what matters to us. "We will need to build policies and write laws—tackling the biggest problems like political deep fakes first. And then we have to figure out how to make the penalties stiff enough to matter. Fining a multibillion-dollar company a few million for a major offense isn't likely to move the needle. The punishment will need to fit the crime."
Until then, media consumers will need to do their own due diligence—to do the difficult work of uncovering the often messy and deeply uncomfortable news that's the truth.
Send in the Robots: A Look into the Future of Firefighting
In April 2019, Paris stood still. Flames engulfed the beloved Notre Dame Cathedral as the world watched, horrified. The worst looked inevitable when firefighters were forced to retreat from the out-of-control fire.
But the Paris Fire Brigade had an ace up their sleeve: Colossus, a firefighting robot. The seemingly indestructible tank-like machine ripped through the blaze with its motorized water cannon. It was able to put out flames in places that would have been deadly for firefighters.
Firefighting is entering a new era, driven by necessity. Conventional methods of managing fires have been no match for the fiercer, more expansive fires being triggered by climate change, urban sprawl, and susceptible wooded areas.
Robots have been a game-changer. Inspired by Paris, the Los Angeles Fire Department (LAFD) was the first in the U.S. to deploy a firefighting robot in 2021, the Thermite Robotics System 3 – RS3, for short.
RS3 is a 3,500-pound turbine on a crawler—the size of a Smart car—with a 36.8 horsepower engine that can go for 20 hours without refueling. It can plow through hazardous terrain, move cars from its path, and pull an 8,000-pound object from a fire.
All that while spurting 2,500 gallons of water per minute with a rear exhaust fan clearing the smoke. At a recent trade show, RS3 was billed as equivalent to 10 firefighters. The Los Angeles Times referred to it as “a droid on steroids.”
[Image: Robots such as the Thermite RS3 can plow through hazardous terrain and pull an 8,000-pound object from a fire. Credit: Los Angeles Fire Department]
The advantage of the robot is obvious. Operated remotely from a distance, it greatly reduces an emergency responder’s exposure to danger, says Wade White, assistant chief of the LAFD. The robot can be sent into airplane fires, nuclear reactors, hazardous areas with carcinogens (think East Palestine, Ohio), or buildings where a roof collapse is imminent.
Advances for firefighters are taking many other forms as well. Fibers have been developed that make the firefighter's coat lighter and more protective against carcinogens. New wearable devices track firefighters' biometrics in real time so commanders can monitor their heat stress and exertion levels. A sensor patch is in development that takes readings every four seconds to detect dangerous gases such as methane and carbon dioxide. A sonic fire extinguisher is being explored that uses low-frequency sound waves to push oxygen away from the fuel source, suppressing flames without unhealthy chemical compounds.
The demand for this technology is only increasing, especially with the recent rise in wildfires. In 2021, fires were responsible for 3,800 civilian deaths and 14,700 civilian injuries in the U.S. Last year, 68,988 wildfires burned 7.6 million acres. Whether the next generation of firefighting can address these new challenges could depend on special cameras, robots of the aerial variety, AI and smart systems.
Fighting fire with cameras
Another key innovation for firefighters is a thermal imaging camera (TIC) that improves visibility through smoke. “At a fire, you might not see your hand in front of your face,” says White. “Using the TIC screen, you can find the door to get out safely or see a victim in the corner.” Since these cameras were introduced in the 1990s, the price has come down enough (from $10,000 or more to about $700) that every LAFD firefighter on duty has been carrying one since 2019, says White.
TICs are about the size of a cell phone. The camera can sense movement and body heat so it is ideal as a search tool for people trapped in buildings. If a firefighter has not moved in 30 seconds, the motion detector picks that up, too, and broadcasts a distress signal and directional information to others.
To enable firefighters to operate the camera hands-free, the newest TICs can attach inside a helmet. The firefighter sees the images inside their mask.
TICs also can be mounted on drones to get a bird’s-eye, 360 degree view of a disaster or scout for hot spots through the smoke. In addition, the camera can take photos to aid arson investigations or help determine the cause of a fire.
More help from above
Firefighters prefer the term “unmanned aerial systems” (UAS) to drones to differentiate them from military use.
A UAS carrying a camera can provide aerial scene monitoring and topography maps to help fire captains deploy resources more efficiently. At night, floodlights from the drone can illuminate the landscape for firefighters. They can drop off payloads of blankets, parachutes, life preservers or radio devices for stranded people to communicate, too. And like the robot, the UAS reduces risks for ground crews and helicopter pilots by limiting their contact with toxic fumes, hazardous chemicals, and explosive materials.
“The nice thing about drones is that they perform multiple missions at once,” says Sean Triplett, team lead of fire and aviation management, tools and technology at the Forest Service.
The UAS is especially helpful during wildfires because it can track fires, get ahead of wind currents and warn firefighters of wind shifts in real time. The U.S. Forest Service also uses long endurance, solar-powered drones that can fly for up to 30 days at a time to detect early signs of wildfire. Wildfires are no longer seasonal in California – they are a year-long threat, notes Thanh Nguyen, fire captain at the Orange County Fire Authority.
In March, Nguyen’s crew deployed a drone to scope out a huge landslide following torrential rains in San Clemente, CA. Emergency responders used photos and videos from the drone to survey the evacuated area, enabling them to stay clear of ground on the hillside that was still sliding.
Improvements in drone batteries are enabling them to fly for longer with heavier payloads. Experts predict we’ll see swarms of drones dropping water and fire retardant on burning buildings and forests in the near future.
AI to the rescue
The biggest peril for a firefighter is often what they don’t see coming. Flashovers are a leading cause of firefighter deaths, for example. They occur when flammable materials in an enclosed area ignite almost instantaneously. Or dangerous backdrafts can happen when a firefighter opens a window or door; the air rushing in can ignite a fire without warning.
The Fire Fighting Technology Group at the National Institute of Standards and Technology (NIST) is developing tools and systems to predict these potentially lethal events with computer models and artificial intelligence.
Partnering with other institutions, NIST researchers developed the Flashover Prediction Neural Network (FlashNet) after looking at common house layouts and running sets of scenarios through a machine-learning model. In the lab, FlashNet was able to predict a flashover 30 seconds before it happened with 92.1% success. When ready for release, the technology will be bundled with sensors that are already installed in buildings, says Anthony Putorti, leader of the NIST group.
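FlashNet's internals aren't described here, but the general approach of training a classifier on sensor time-series to flag an imminent flashover can be sketched roughly as follows. The synthetic data, feature layout, and threshold below are assumptions for illustration only, not NIST's model.

```python
# Toy illustration of the general approach, not NIST's FlashNet:
# classify whether a flashover is imminent from a short window of
# (synthetic) heat-detector temperature readings in several rooms.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_scenarios, n_sensors, window = 500, 4, 10  # hypothetical sizes

# Synthetic training data: rapidly rising temperatures -> label 1 (flashover soon).
slopes = rng.uniform(0, 30, size=(n_scenarios, n_sensors))            # deg C per reading
X = (slopes[:, :, None] * np.arange(window)).reshape(n_scenarios, -1)  # stacked sensor windows
X += rng.normal(0, 5, size=X.shape)                                    # sensor noise
y = (slopes.mean(axis=1) > 15).astype(int)                             # assumed labeling rule

model = LogisticRegression(max_iter=1000).fit(X, y)

# At run time, a new window of readings yields a probability that a
# flashover is imminent, which could trigger a warning to the crew.
new_window = 25 * np.tile(np.arange(window), n_sensors) + rng.normal(0, 5, size=(1, n_sensors * window))
print("flashover risk:", model.predict_proba(new_window)[0, 1])
```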
The NIST team also examined data from hundreds of backdrafts as a basis for a machine-learning model to predict them. In testing chambers the model predicted them correctly 70.8% of the time; accuracy increased to 82.4% when measures of backdrafts were taken in more positions at different heights in the chambers. Developers are working on how to integrate the AI into a small handheld device that can probe the air of a room through cracks around a door or through a created opening, Putorti says. This way, the air can be analyzed with the device to alert firefighters of any significant backdraft risk.
Early wildfire detection technologies based on AI are in the works, too. The Forest Service predicts the acreage burned each year during wildfires will more than triple in the next 80 years. By gathering information on historic fires, weather patterns, and topography, says White, AI can help firefighters manage wildfires before they grow out of control and create effective evacuation plans based on population data and fire patterns.
The future is connectivity
We are in our infancy with “smart firefighting,” says Casey Grant, executive director emeritus of the Fire Protection Research Foundation. Grant foresees a new era of cyber-physical systems for firefighters—a massive integration of wireless networks, advanced sensors, 3D simulations, and cloud services. To enhance teamwork, the system will connect all branches of emergency responders—fire, emergency medical services, law enforcement.
FirstNet (First Responder Network Authority) now provides a nationwide high-speed broadband network with 5G capabilities for first responders through a terrestrial cell network. Battling wildfires, however, the Forest Service needed an alternative because they don’t always have access to a power source. In 2022, they contracted with Aerostar for a high altitude balloon (60,000 feet up) that can extend cell phone power and LTE. “It puts a bubble of connectivity over the fire to hook in the internet,” Triplett explains.
[Image: A high altitude balloon, 60,000 feet high, can extend cell phone power and LTE, putting a "bubble" of internet connectivity over fires. Credit: USDA Forest Service]
Advances in harvesting, processing and delivering data will improve safety and decision-making for firefighters, Grant sums up. Smart systems may eventually calculate fire flow paths and make recommendations about the best ways to navigate specific fire conditions. NIST’s plan to combine FlashNet with sensors is one example.
The biggest challenge is developing firefighting technology that can work across multiple channels—federal, state, local and tribal systems as well as for fire, police and other emergency services— in any location, says Triplett. “When there’s a wildfire, there are no political boundaries,” he says. “All hands are on deck.”
New device can diagnose concussions using AI
For a long time after Mary Smith hit her head, she was not able to function. Test after test came back normal, so her doctors ruled out a concussion, but she knew something was wrong. Finally, when she took a test with a novel EyeBOX device, recently approved by the FDA, she learned she had indeed been dealing with the aftermath of a concussion.
“I felt like even my husband and doctors thought I was faking it or crazy,” recalls Smith, who preferred not to disclose her real name. “When I took the EyeBOX test it showed that my eyes were not moving together and my BOX score was abnormal.” To her diagnosticians, scientists at the Minneapolis-based company Oculogica who developed the EyeBOX, these markers were concussion signs. “I cried knowing that finally someone could figure out what was wrong with me and help me get better,” she says.
Concussion affects around 42 million people worldwide. While it’s increasingly common in the news because of sports injuries, anything that causes damage to the head, from a fall to a car accident, can result in a concussion. The sudden blow or jolt can disrupt the normal way the brain works. In the immediate aftermath, people may suffer from headaches, lose consciousness and experience dizziness, confusion and vomiting. Some recover but others have side effects that can last for years, particularly affecting memory and concentration.
There is no simple standard-of-care test to confirm a concussion or rule it out. Nor do concussions appear on MRI or CT scans. Instead, medical professionals use more indirect approaches that test for symptoms of concussion, such as assessments of patients' learning and memory skills, ability to concentrate and problem solving. They also look at balance and coordination. Most tests are in the form of questionnaires or symptom checklists. Consequently, they have limitations, can be biased and may miss a concussion or produce a false positive. Some people suspected of having a concussion may ordinarily have difficulties with literacy and problem-solving tests because of language challenges or education levels.
Another problem with current tests is that patients, particularly soldiers who want to return to combat and athletes who would like to keep competing, could try to hide their symptoms to avoid being diagnosed with a brain injury. Trauma physicians who work with concussion patients need a tool that is more objective and consistent.
“The importance of having an objective measurement tool for the diagnosis of concussion is of great importance,” says Douglas Powell, associate professor of biomechanics at the University of Memphis, with research interests in sports injury and concussion. “While there are a number of promising systems or metrics, we have yet to develop a system that is portable, accessible and objective for use on the sideline and in the clinic. The EyeBOX may be able to address these issues, though time will be the ultimate test of performance.”
The EyeBOX as a window inside the brain
Using eye movements to diagnose a concussion has emerged as a promising technique since around 2010. Oculogica combined eye tracking with AI to develop the EyeBOX, an unbiased, objective diagnostic tool.
“What’s so great about this type of assessment is it doesn’t rely on the patient's education level, willingness to follow instructions or cooperation,” says Uzma Samadani, a neurosurgeon and brain injury researcher at the University of Minnesota, who founded Oculogica. “You can’t game this. It assesses functions that are prompted by your brain.”
In 2010, Samadani was working on a clinical trial to improve the outcome of brain injuries. The team needed some way to measure if seriously brain injured patients were improving. One thing patients could do was watch TV. So Samadani designed and patented an AI-based algorithm that tracks the relationship between eye movement and concussion.
The EyeBOX test requires patients to watch movie or music clips for 220 seconds. An eye tracking camera records subconscious eye movements, capturing eye positions 500 times per second as patients watch the video. It collects over 100,000 data points. The device then uses AI to assess whether there are any disruptions from the normal way the eyes move.
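As a rough illustration of the scale of that data, and of the kind of feature such a system might compute, here is a short Python sketch; the "disconjugacy" metric and synthetic traces are assumptions for illustration, not Oculogica's patented algorithm.

```python
# Illustration only: the sampling arithmetic described above, plus one toy
# feature measuring how far the two eyes' positions diverge.
import numpy as np

SAMPLE_RATE_HZ = 500   # eye positions recorded 500 times per second
DURATION_S = 220       # patients watch clips for 220 seconds
n_samples = SAMPLE_RATE_HZ * DURATION_S
print(n_samples)       # 110000 samples per eye, i.e. over 100,000 data points

# Synthetic traces: left and right eye horizontal positions over time.
rng = np.random.default_rng(1)
left = np.sin(np.linspace(0, 40 * np.pi, n_samples)) + rng.normal(0, 0.02, n_samples)
right_healthy = left + rng.normal(0, 0.02, n_samples)           # eyes move together
right_impaired = 0.7 * left + rng.normal(0, 0.02, n_samples)    # one eye lags behind

def disconjugacy(l: np.ndarray, r: np.ndarray) -> float:
    """Mean absolute difference between the two eyes' positions."""
    return float(np.mean(np.abs(l - r)))

print(disconjugacy(left, right_healthy))    # small: coordinated eyes
print(disconjugacy(left, right_impaired))   # larger: possible cranial nerve issue
```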
Cranial nerves are responsible for transmitting information between the brain and the body. Many are involved in eye movement. Pressure caused by a concussion can affect how these nerves work. So tracking how the eyes move can indicate if there’s anything wrong with the cranial nerves and where the problem lies.
If someone is healthy, their eyes should be able to focus on an object and follow movement, and both eyes should be coordinated with each other. The EyeBOX can detect abnormalities. For example, if a patient's eyes are coordinated but they are not moving as they should, that indicates issues in the central brain stem, while only one eye moving abnormally suggests that a particular nerve section is affected.
[Image: Uzma Samadani with the EyeBOX device. Credit: Oculogica]
“The EyeBOX is a monitor for cranial nerves,” says Samadani. “Essentially it’s a form of digital neurological exam.” Several other eye-tracking techniques already exist, but they rely on subjective self-reported symptoms. Many also require a baseline, a measure of how patients reacted when they were healthy, which often isn’t available.
VOMS (Vestibular Ocular Motor Screen) is one of the most accurate diagnostic tests used in clinics in combination with other tests, but it is subjective. It involves a therapist getting patients to move their head or eyes as they focus or follow a particular object. Patients then report their symptoms.
The King-Devick test measures how fast patients can read numbers and compares it to a baseline. Since it is mainly used for athletes, the initial test is completed before the season starts. But participants can manipulate it. It also cannot be used in emergency rooms because the majority of patients wouldn’t have prior baseline tests.
Unlike these tests, the EyeBOX doesn’t use a baseline and is objective because it doesn’t rely on patients’ answers. “It shows great promise,” says Thomas Wilcockson, a senior lecturer in psychology at Loughborough University and an expert in using eye-tracking techniques in neurological disorders. “Baseline testing of eye movements is not always possible. Alternative measures of concussion currently in development, including work with VR headsets, seem to currently require it. Therefore the EyeBOX may have an advantage.”
A technology that’s still evolving
In its latest clinical trial, Oculogica used the EyeBOX to test 46 patients who had a concussion and 236 patients who did not. The sensitivity of the EyeBOX, or the probability of it correctly identifying a patient's concussion, was 80.4 percent. Meanwhile, the test accurately ruled out a concussion in 66.1 percent of cases. This is known as its specificity score.
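For readers unfamiliar with the terms, sensitivity and specificity can be checked back-of-the-envelope from the reported figures; the counts below are approximations, since the trial's exact confusion matrix isn't given in the article.

```python
# Approximate reconstruction of the trial's counts from the reported rates.
concussed, not_concussed = 46, 236
sensitivity, specificity = 0.804, 0.661

true_positives = round(sensitivity * concussed)       # ~37 concussions correctly flagged
true_negatives = round(specificity * not_concussed)   # ~156 healthy patients correctly cleared
false_negatives = concussed - true_positives          # ~9 concussions missed
false_positives = not_concussed - true_negatives      # ~80 healthy patients flagged

print("sensitivity =", true_positives / concussed)      # TP / (TP + FN) ~ 0.804
print("specificity =", true_negatives / not_concussed)  # TN / (TN + FP) ~ 0.661
```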
While the team is working on improving the numbers, experts who treat concussion patients find the device promising. “I strongly support their use of eye tracking for diagnostic decision making,” says Douglas Powell. “But for diagnostic tests, we would prefer at least one of the sensitivity or specificity values to be greater than 90 percent.” Powell compares the EyeBOX with the Buffalo Concussion Treadmill Test, which has sensitivity and specificity values of 73 and 78 percent, respectively. The VOMS also has shown greater accuracy than the EyeBOX, at least for now. Still, the EyeBOX is competitive with the best diagnostic testing available for concussion, and Powell hopes that its detection prowess will improve. “I anticipate that the algorithms being used by Oculogica will be under continuous revision and expect the results will improve within the next several years.”
Powell thinks the EyeBOX could be an important complement to other concussion assessments.
“The Oculogica product is a viable diagnostic tool that supports clinical decision making. However, concussion is an injury that can present with a wide array of symptoms, and the use of technology such as the Oculogica should always be a supplement to patient interaction.”
Ioannis Mavroudis, a consultant neurologist at Leeds Teaching Hospital, agrees that the EyeBOX has promise, but cautions that concussions are too complex to rely on the device alone. For example, not all concussions affect how eyes move. “I believe that it can definitely help, however not all concussions show changes in eye movements. I believe that if this could be combined with a cognitive assessment the results would be impressive.”
The Oculogica team submitted their clinical data for FDA approval and received it in 2018. Now, they’re working to bring the test to the commercial market and using the device clinically to help diagnose concussions for clients. They also want to look at other areas of brain health in the next few years. Samadani believes that the EyeBOX could possibly be used to detect diseases like multiple sclerosis or other neurological conditions. “It’s a completely new way of figuring out what someone’s neurological exam is and we’re only beginning to realize the potential,” says Samadani.
One of Samadani’s biggest aspirations is to help reduce inequalities in healthcare related to skin color and other factors like money or language barriers. From that perspective, the EyeBOX’s greatest potential could be in emergency rooms. It can help diagnose concussions in addition to the questionnaires, assessments and symptom checklists currently used in emergency departments. Unlike these more subjective tests, the EyeBOX can produce an objective analysis of brain injury through AI when patients are admitted and assessed, unrelated to their socioeconomic status, education, or language abilities. Studies suggest that there are racial disparities in how patients with brain injuries are treated, such as how quickly they're assessed and given a treatment plan.
“The color of your skin can have a huge impact in how quickly you are triaged and managed for brain injury,” says Samadani. “As a result of that, people of color have significantly worse outcomes after traumatic brain injury than people who are white. The EyeBOX has the potential to reduce inequalities,” she explains.
“If you had a digital neurological tool that you could screen and triage patients on admission to the emergency department you would potentially be able to make sure that everybody got the same standard of care,” says Samadani. “My goal is to change the way brain injury is diagnosed and defined.”