Lab-grown meat will soon be sold in the U.S., but who will buy it?
Last November, the U.S. Food and Drug Administration disclosed that chicken from a California firm called UPSIDE Foods raised no safety concerns – and with that dry pronouncement, the agency upended how humans have obtained animal protein for thousands of generations.
“The FDA is ready to work with additional firms developing cultured animal cell food and production processes to ensure their food is safe and lawful,” the agency said in a statement at the time.
Assuming UPSIDE obtains clearances from the U.S. Department of Agriculture, its chicken – grown entirely in a laboratory without harming a single bird – could be sold in supermarkets in the coming months.
“Ultimately, we want our products to be available everywhere meat is sold, including retail and food service channels,” a company spokesperson said. The upscale French restaurant Atelier Crenn in San Francisco will have UPSIDE chicken on its menu once it is approved, she added.
Known as lab-grown or cultured meat, a product such as UPSIDE’s is created using stem cells and other tissue obtained from a chicken, cow or other livestock. Those cells are then multiplied in a nutrient-dense environment, usually in conjunction with a “scaffold” of plant-based materials or gelatin to give them a familiar form, such as a chicken breast or a ribeye steak. A Dutch company called Mosa Meat claims it can produce 80,000 hamburgers derived from a cluster of tissue the size of a sesame seed.
That’s a far cry from when it took months of work to create the first lab-grown hamburger a decade ago. That minuscule patty – which did not contain any fat and was literally plucked from a Petri dish to go into a frying pan – cost about $325,000 to produce.
Just a decade later, an Israeli company called Future Meat said it can produce lab-grown meat for about $1.70 per pound. It plans to open a production facility in the U.S. sometime in 2023 and distribute its products under the brand name “Believer.”
Production costs have sunk so low that researchers at Carnegie Mellon University in Pittsburgh expect to produce lab-grown Wagyu steak sometime in early 2024, to showcase the viability of growing high-end cuts of beef cheaply. The Carnegie Mellon team is producing its Wagyu with a consumer 3-D printer bought secondhand on eBay and modified to print the highly marbled flesh using a method developed at the university. The device cost $200 – about the same as a pound of Wagyu in the U.S. The initiative’s modest five-figure budget was successfully crowdfunded last year.
“The big cost is going to be the cells (which are being extracted from a cow somewhere in Pennsylvania), but otherwise printing doesn’t add much to the process,” said Rosalyn Abbott, a Carnegie Mellon assistant professor of bioengineering who is co-leader on the project. “But it adds value, unlike doing this with ground meat.”
Lab-Grown Meat’s Promise
Proponents of lab-grown meat say it will reduce the need for traditional animal agriculture, a leading contributor to deforestation, water shortages, waterways contaminated by animal waste, and climate change.
An Oxford University study from 2011 concluded that lab-grown meat could generate greenhouse gas emissions 96 percent lower than those of traditionally raised livestock. Moreover, proponents claim that animal suffering would decline dramatically, since livestock would no longer need to be warehoused and slaughtered. A recently opened 26-story high-rise in China dedicated to raising and slaughtering pigs illustrates the animals’ current plight in stark terms.
Scientists may even learn how to tweak lab-grown meat to make it more nutritious. Natural red meat is high in saturated fat and, eaten too often, can contribute to chronic disease. In lab-grown versions, the saturated fat could be swapped for healthier omega-3 fatty acids.
But critics say the doubts about lab-grown meat and the possibility it could merge “Brave New World” with “The Jungle” and “Soylent Green” have not been appropriately explored.
A Slippery Slope?
Some academics who have studied the moral and ethical issues surrounding lab-grown meat believe it faces a tough path to consumer acceptance. And even if it does win acceptance, many ethical questions will remain.
“People might be interested” in lab-grown meat, perhaps as a curiosity, said Carlos Alvaro, an associate professor of philosophy at the New York City College of Technology, part of the City University of New York. But the allure of traditionally sourced meat has been baked – or perhaps grilled – into people’s minds for so long that they may not want to make the switch. Plant-based meat offers a recent example of the uphill battle involved in changing old food habits: Beyond Meat’s stock price fell nearly 80 percent in 2022.
"There are many studies showing that people don’t really care about the environment (to that extent)," Alvaro said. "So I don’t know how you would convince people to do this because of the environment.”
“From my research, I understand that the taste (of lab-grown meat) is not quite there,” Alvaro said, noting that the amino acids, sugars and other nutrients required to grow cultivated meat do not mimic what livestock are fed. He also observed that the multiplication of cells as part of the process “really mimic cancer cells” in the way they grow, another off-putting thought for would-be consumers of the product.
Alvaro is also convinced the public will not buy into any argument that lab-grown meat is more environmentally friendly.
“If people care about the environment, they either try and consume considerably less meat and other animal products, or they go vegan or vegetarian,” he said. “But there are many studies showing that people don’t really care about the environment (to that extent). So I don’t know how you would convince people to do this because of the environment.”
Ben Bramble, a professor at Australian National University who previously held posts at Princeton and Trinity College in Ireland, takes a slightly different tack. He noted that “if lab-grown meat becomes cheaper, healthier, or tastier than regular meat, there will be a large market for it. If it becomes all of these things, it will dominate the market.”
However, Bramble has misgivings about that outcome. He believes a smooth transition from traditionally sourced meat to a lab-grown version would let humans gloss over the decades of animal cruelty perpetrated by large-scale agriculture without fully reckoning with, and learning from, that injustice.
“My fear is that if we all switch over to lab-grown meat because it has become cheaper, healthier, or tastier than regular meat, we might never come to realize what we have done, and the terrible things we are capable of,” he said. “This would be a catastrophe.”
Bramble’s writings about cultured meat also raise serious moral conundrums. If, for example, animal meat can be cultivated without killing animals, why not create products from human protein?
Actually, that’s already happened.
It occurred in 2019, when Orkan Telhan, a professor of fine arts at the University of Pennsylvania, collaborated with two scientists to create an art exhibit at the Philadelphia Museum of Art on the future of foodstuffs.
Although the exhibit included bioengineered bread and genetically modified salmon, it was an installation called “Ouroboros Steak” that drew the most attention. It consisted of pieces of human flesh grown in a lab from cultivated cells and expired blood products obtained from online sources.
The exhibit was presented as four tiny morsels of red meat – shaped in patterns suggesting an ouroboros, a dragon eating its own tail. They were placed in tiny individual saucers atop a larger plate and placemat with a calico pattern, suggesting an item to order in a diner. The artwork drew international headlines – as well as condemnation for Telhan’s vision.
Telhan’s artwork is intended to critique the overarching assumption that lab-grown meat will eventually replace more traditional production methods, as well as the lack of transparency surrounding many processed foodstuffs. “They think that this problem (from industrial-scale agriculture) is going to be solved by this new technology,” Telhan said. “I am critical (of) that perspective.”
Unlike Bramble, Telhan is not against lab-grown meat, so long as its producers are transparent about the sourcing of materials and its cultivation. But he believes that large-scale agricultural meat production – which dates back centuries – is not going to be replaced so quickly.
“We see this again and again with different industries, like algae-based fuels. A lot of companies were excited about this, and promoted it,” Telhan said. “And years later, we know these fuels work. But displacing the oil industry means building infrastructure at scale, which takes billions of dollars, and nobody has the patience or money to do it.”
Alvaro concurred, adding that the environmental case is already weakened because a large swath of consumers is unconcerned about environmental degradation.
“They’re going to have to sell this big, and to do that, they have to convince people to eat this product instead of regular meat,” Alvaro said.
Hidden Tweaks?
Moreover, if lab-grown meat does capture a significant market share, Telhan suggested, companies may alter the product – genetically modifying it to become more profitable, for example – and never notify consumers. That is a particular concern in the U.S., where regulations on such modifications are vastly more relaxed than in the European Union.
“I think that they have really good objectives, and they aspire to good objectives,” Telhan said. “But the system itself doesn't really allow for that much transparency.”
No matter what the future holds, sometime next year Carnegie Mellon is expected to hold a press conference announcing that it has produced a cut of the world’s most expensive beef with the help of a modified piece of consumer electronics. It will likely take place around the same time UPSIDE chicken becomes available in supermarkets and restaurants, pending the USDA’s approvals.
Abbott, the Carnegie Mellon professor, suggested the future event will be both informative and celebratory.
“I think Carnegie Mellon would have someone potentially cook it for us,” she said. “Like have a really good chef in New York City do it.”
Send in the Robots: A Look into the Future of Firefighting
In April 2019, Paris stood still. Flames engulfed the beloved Notre Dame Cathedral as the world watched, horrified. The worst looked inevitable when firefighters were forced to retreat from the out-of-control fire.
But the Paris Fire Brigade had an ace up its sleeve: Colossus, a firefighting robot. The seemingly indestructible tank-like machine rolled into the blaze, attacking the flames with its motorized water cannon. It was able to put out fires in places that would have been deadly for firefighters.
Firefighting is entering a new era, driven by necessity. Conventional methods of managing fires have been no match for the fiercer, more expansive fires triggered by climate change, urban sprawl, and vulnerable wooded areas.
Robots have been a game-changer. Inspired by Paris, the Los Angeles Fire Department (LAFD) was the first in the U.S. to deploy a firefighting robot in 2021, the Thermite Robotics System 3 – RS3, for short.
RS3 is a 3,500-pound turbine on a crawler—the size of a Smart car—with a 36.8-horsepower engine that can run for 20 hours without refueling. It can plow through hazardous terrain, push cars out of its path, and pull an 8,000-pound object from a fire.
All that while pumping 2,500 gallons of water per minute as a rear exhaust fan clears the smoke. At a recent trade show, RS3 was billed as the equivalent of 10 firefighters. The Los Angeles Times called it “a droid on steroids.”
The advantage of the robot is obvious. Operated remotely from a distance, it greatly reduces an emergency responder’s exposure to danger, says Wade White, assistant chief of the LAFD. The robot can be sent into airplane fires, nuclear reactors, hazardous areas with carcinogens (think East Palestine, Ohio), or buildings where a roof collapse is imminent.
Advances for firefighters are taking many other forms as well. New fibers make firefighters’ coats lighter and more protective against carcinogens. Wearable devices track firefighters’ biometrics in real time so commanders can monitor their heat stress and exertion levels. A sensor patch in development takes readings every four seconds to detect dangerous gases such as methane and carbon dioxide. And a sonic fire extinguisher being explored uses low-frequency sound waves to separate oxygen from the fuel, suppressing flames without unhealthy chemical compounds.
The demand for this technology is only increasing, especially with the recent rise in wildfires. In 2021, fires were responsible for 3,800 civilian deaths and 14,700 civilian injuries in the U.S. Last year, 68,988 wildfires burned 7.6 million acres. Whether the next generation of firefighting can meet these challenges could depend on special cameras, aerial robots, AI and smart systems.
Fighting fire with cameras
Another key innovation for firefighters is a thermal imaging camera (TIC) that improves visibility through smoke. “At a fire, you might not see your hand in front of your face,” says White. “Using the TIC screen, you can find the door to get out safely or see a victim in the corner.” Since these cameras were introduced in the 1990s, the price has come down enough (from $10,000 or more to about $700) that every LAFD firefighter on duty has been carrying one since 2019, says White.
TICs are about the size of a cell phone. The camera can sense movement and body heat so it is ideal as a search tool for people trapped in buildings. If a firefighter has not moved in 30 seconds, the motion detector picks that up, too, and broadcasts a distress signal and directional information to others.
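At its core, the firefighter-down feature is a motion watchdog with a 30-second timeout. Here is a minimal Python sketch of that logic; the class, function names and the broadcast mechanism are hypothetical, since the actual TIC firmware is proprietary.

```python
import time

MOTION_TIMEOUT_S = 30  # the article's threshold: 30 seconds without movement


class MotionWatchdog:
    """Toy model of a TIC's firefighter-down alarm (hypothetical API)."""

    def __init__(self):
        self.last_motion = time.monotonic()

    def report_motion(self):
        # Called by the motion sensor whenever movement is detected.
        self.last_motion = time.monotonic()

    def check(self, heading_deg: float):
        # Polled periodically; fires a distress broadcast once the timeout elapses.
        if time.monotonic() - self.last_motion >= MOTION_TIMEOUT_S:
            broadcast_distress(heading_deg)


def broadcast_distress(heading_deg: float):
    # Stand-in for the radio broadcast of a distress signal plus direction.
    print(f"MAYDAY: firefighter motionless for 30+ s, heading {heading_deg:.0f} deg")
```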
To enable firefighters to operate the camera hands-free, the newest TICs can attach inside a helmet. The firefighter sees the images inside their mask.
TICs also can be mounted on drones to get a bird’s-eye, 360-degree view of a disaster or scout for hot spots through the smoke. In addition, the camera can take photos to aid arson investigations or help determine the cause of a fire.
More help from above
Firefighters prefer the term “unmanned aerial systems” (UAS) to drones to differentiate them from military use.
A UAS carrying a camera can provide aerial scene monitoring and topography maps to help fire captains deploy resources more efficiently. At night, floodlights from the drone can illuminate the landscape for firefighters. Drones can also drop payloads of blankets, parachutes, life preservers or radios so stranded people can communicate. And like the ground robot, the UAS reduces risks for ground crews and helicopter pilots by limiting their contact with toxic fumes, hazardous chemicals, and explosive materials.
“The nice thing about drones is that they perform multiple missions at once,” says Sean Triplett, team lead of fire and aviation management, tools and technology at the Forest Service.
The UAS is especially helpful during wildfires because it can track fires, get ahead of wind currents and warn firefighters of wind shifts in real time. The U.S. Forest Service also uses long-endurance, solar-powered drones that can fly for up to 30 days at a time to detect early signs of wildfire. Wildfires are no longer seasonal in California – they are a year-round threat, notes Thanh Nguyen, fire captain at the Orange County Fire Authority.
In March, Nguyen’s crew deployed a drone to scope out a huge landslide following torrential rains in San Clemente, CA. Emergency responders used photos and videos from the drone to survey the evacuated area, enabling them to stay clear of ground on the hillside that was still sliding.
Improvements in drone batteries are enabling them to fly for longer with heavier payloads. Experts predict we’ll see swarms of drones dropping water and fire retardant on burning buildings and forests in the near future.
AI to the rescue
The biggest peril for a firefighter is often what they don’t see coming. Flashovers, for example, are a leading cause of firefighter deaths. They occur when flammable materials in an enclosed area ignite almost instantaneously. Dangerous backdrafts can also happen when a firefighter opens a window or door; the air rushing in can ignite a fire without warning.
The Fire Fighting Technology Group at the National Institute of Standards and Technology (NIST) is developing tools and systems to predict these potentially lethal events with computer models and artificial intelligence.
Partnering with other institutions, NIST researchers developed the Flashover Prediction Neural Network (FlashNet) after looking at common house layouts and running sets of scenarios through a machine-learning model. In the lab, FlashNet was able to predict a flashover 30 seconds before it happened with 92.1% success. When ready for release, the technology will be bundled with sensors that are already installed in buildings, says Anthony Putorti, leader of the NIST group.
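NIST has not published FlashNet’s code, so the following is only a toy stand-in: a small scikit-learn neural network trained on synthetic heat-detector traces, in which rooms heading toward flashover heat up much faster than rooms that are not. The heating rates, noise levels and trace length are invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)


def synth_trace(flashover: bool, n_readings: int = 20) -> np.ndarray:
    """Synthetic heat-detector trace in deg C (invented rates, for illustration)."""
    rate = 25.0 if flashover else 4.0  # pre-flashover rooms heat up much faster
    t = np.arange(n_readings)
    return 20 + rate * t + rng.normal(0, 15, n_readings)


# Label 1 = trace that ends in flashover, 0 = trace that does not.
X = np.array([synth_trace(True) for _ in range(200)] +
             [synth_trace(False) for _ in range(200)])
y = np.array([1] * 200 + [0] * 200)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)

# Score an unseen trace: a probability that the room is approaching flashover.
print("flashover risk:", clf.predict_proba(synth_trace(True).reshape(1, -1))[0, 1])
```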
The NIST team also examined data from hundreds of backdrafts as the basis for a machine-learning model to predict them. In test chambers, the model predicted backdrafts correctly 70.8% of the time; accuracy rose to 82.4% when measurements were taken at more positions and different heights in the chambers. Developers are working on integrating the AI into a small handheld device that can probe the air of a room through cracks around a door or through a created opening, Putorti says. This way, the air can be analyzed on the spot to alert firefighters to any significant backdraft risk.
Early wildfire detection technologies based on AI are in the works, too. The Forest Service predicts the acreage burned each year during wildfires will more than triple in the next 80 years. By gathering information on historic fires, weather patterns, and topography, says White, AI can help firefighters manage wildfires before they grow out of control and create effective evacuation plans based on population data and fire patterns.
The future is connectivity
We are in our infancy with “smart firefighting,” says Casey Grant, executive director emeritus of the Fire Protection Research Foundation. Grant foresees a new era of cyber-physical systems for firefighters—a massive integration of wireless networks, advanced sensors, 3D simulations, and cloud services. To enhance teamwork, the system will connect all branches of emergency responders—fire, emergency medical services, law enforcement.
FirstNet (First Responder Network Authority) now provides a nationwide high-speed broadband network with 5G capabilities for first responders through a terrestrial cell network. For battling wildfires, however, the Forest Service needed an alternative, because crews don’t always have access to a power source. In 2022, the agency contracted with Aerostar for a high-altitude balloon (60,000 feet up) that can extend cell phone coverage and LTE. “It puts a bubble of connectivity over the fire to hook in the internet,” Triplett explains.
Advances in harvesting, processing and delivering data will improve safety and decision-making for firefighters, Grant sums up. Smart systems may eventually calculate fire flow paths and make recommendations about the best ways to navigate specific fire conditions. NIST’s plan to combine FlashNet with sensors is one example.
The biggest challenge is developing firefighting technology that can work across multiple channels—federal, state, local and tribal systems as well as for fire, police and other emergency services— in any location, says Triplett. “When there’s a wildfire, there are no political boundaries,” he says. “All hands are on deck.”
New device can diagnose concussions using AI
For a long time after Mary Smith hit her head, she could not function. Test after test came back normal, so her doctors ruled out a concussion, but she knew something was wrong. Finally, when she took a test with the novel EyeBOX device, recently approved by the FDA, she learned she had indeed been dealing with the aftermath of a concussion.
“I felt like even my husband and doctors thought I was faking it or crazy,” recalls Smith, who preferred not to disclose her real name. “When I took the EyeBOX test it showed that my eyes were not moving together and my BOX score was abnormal.” To her diagnosticians, scientists at the Minneapolis-based company Oculogica who developed the EyeBOX, these markers were concussion signs. “I cried knowing that finally someone could figure out what was wrong with me and help me get better,” she says.
Concussion affects around 42 million people worldwide. While it’s increasingly common in the news because of sports injuries, anything that causes damage to the head, from a fall to a car accident, can result in a concussion. The sudden blow or jolt can disrupt the normal way the brain works. In the immediate aftermath, people may suffer from headaches, lose consciousness and experience dizziness, confusion and vomiting. Some recover but others have side effects that can last for years, particularly affecting memory and concentration.
There is no simple standard-of-care test to confirm or rule out a concussion, and concussions do not show up on MRI or CT scans. Instead, medical professionals use more indirect approaches that test for symptoms of concussion, such as assessments of patients’ learning and memory skills, ability to concentrate, and problem-solving, along with balance and coordination. Most tests take the form of questionnaires or symptom checklists. Consequently, they have limitations, can be biased, and may miss a concussion or produce a false positive. Some people suspected of having a concussion may ordinarily have difficulty with literacy- and problem-solving tests because of language barriers or education levels.
Another problem with current tests is that patients, particularly soldiers who want to return to combat and athletes who would like to keep competing, may try to hide their symptoms to avoid being diagnosed with a brain injury. Trauma physicians who work with concussion patients need a tool that is more objective and consistent.
“The importance of having an objective measurement tool for the diagnosis of concussion is of great importance,” says Douglas Powell, associate professor of biomechanics at the University of Memphis, with research interests in sports injury and concussion. “While there are a number of promising systems or metrics, we have yet to develop a system that is portable, accessible and objective for use on the sideline and in the clinic. The EyeBOX may be able to address these issues, though time will be the ultimate test of performance.”
The EyeBOX as a window inside the brain
Using eye movements to diagnose a concussion has emerged as a promising technique since around 2010. Oculogica combined eye tracking with AI in the EyeBOX, aiming for an unbiased, objective diagnostic tool.
“What’s so great about this type of assessment is it doesn’t rely on the patient's education level, willingness to follow instructions or cooperation,” says Uzma Samadani, a neurosurgeon and brain injury researcher at the University of Minnesota, who founded Oculogica. “You can’t game this. It assesses functions that are prompted by your brain.”
In 2010, Samadani was working on a clinical trial to improve the outcomes of brain injuries. The team needed some way to measure whether seriously brain-injured patients were improving. One thing patients could do was watch TV. So Samadani designed and patented an AI-based algorithm that tracks the relationship between eye movement and concussion.
The EyeBOX test requires patients to watch movie or music clips for 220 seconds. An eye-tracking camera records subconscious eye movements, logging eye positions 500 times per second as patients watch the video and collecting over 100,000 data points. The device then uses AI to assess whether there are any disruptions from the normal way the eyes move.
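Those figures are internally consistent, as a quick back-of-the-envelope check shows:

```python
# Sanity check on the figures quoted above.
clip_length_s = 220     # patients watch clips for 220 seconds
sample_rate_hz = 500    # eye positions recorded 500 times per second

print(clip_length_s * sample_rate_hz)  # 110000, i.e. "over 100,000 data points"
```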
Cranial nerves are responsible for transmitting information between the brain and the body. Many are involved in eye movement. Pressure caused by a concussion can affect how these nerves work. So tracking how the eyes move can indicate if there’s anything wrong with the cranial nerves and where the problem lies.
If someone is healthy, their eyes should be able to focus on an object and follow movement, and both eyes should be coordinated with each other. The EyeBOX can detect abnormalities. For example, if a patient’s eyes are coordinated but not moving as they should, that indicates an issue in the central brain stem, while only one eye moving abnormally suggests that a particular nerve section is affected.
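Read as pseudocode, that localization reasoning amounts to a simple decision rule. The sketch below is an illustrative paraphrase of the two cases just described, not Oculogica’s actual algorithm; the function and flag names are hypothetical.

```python
def localize_abnormality(eyes_coordinated: bool, movement_normal: bool) -> str:
    """Toy paraphrase of the localization logic described above."""
    if movement_normal:
        return "no abnormality detected"
    if eyes_coordinated:
        # Both eyes move together, but not as they should: central brain stem.
        return "possible central brain stem issue"
    # Only one eye moves abnormally: a particular cranial nerve section.
    return "possible focal cranial nerve issue"


print(localize_abnormality(eyes_coordinated=True, movement_normal=False))
```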
“The EyeBOX is a monitor for cranial nerves,” says Samadani. “Essentially it’s a form of digital neurological exam.” Several other eye-tracking techniques already exist, but they rely on subjective self-reported symptoms. Many also require a baseline, a measure of how patients reacted when they were healthy, which often isn’t available.
VOMS (Vestibular Ocular Motor Screen) is one of the most accurate diagnostic tests used in clinics in combination with other tests, but it is subjective. It involves a therapist having patients move their head or eyes as they focus on or follow a particular object. Patients then report their symptoms.
The King-Devick test measures how fast patients can read numbers and compares it to a baseline. Since it is mainly used for athletes, the initial test is completed before the season starts. But participants can manipulate it. It also cannot be used in emergency rooms because the majority of patients wouldn’t have prior baseline tests.
Unlike these tests, the EyeBOX doesn’t use a baseline and is objective because it doesn’t rely on patients’ answers. “It shows great promise,” says Thomas Wilcockson, a senior lecturer in psychology at Loughborough University and an expert in using eye-tracking techniques in neurological disorders. “Baseline testing of eye movements is not always possible. Alternative measures of concussion currently in development, including work with VR headsets, seem to currently require it. Therefore the EyeBOX may have an advantage.”
A technology that’s still evolving
In their most recent clinical trial, Oculogica used the EyeBOX to test 46 patients who had a concussion and 236 patients who did not. The sensitivity of the EyeBOX, or the probability of it correctly identifying a patient’s concussion, was 80.4 percent. Meanwhile, the test accurately ruled out a concussion in 66.1 percent of cases, a measure known as its specificity.
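For readers unfamiliar with the two terms, sensitivity and specificity are simple ratios over the trial’s outcomes. This back-of-the-envelope check translates the reported percentages into (rounded) patient counts:

```python
# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
concussed, healthy = 46, 236             # group sizes in the Oculogica trial

true_pos = round(0.804 * concussed)      # ~37 concussions correctly flagged
true_neg = round(0.661 * healthy)        # ~156 healthy patients correctly cleared

print(f"sensitivity: {true_pos / concussed:.1%}")  # 80.4%
print(f"specificity: {true_neg / healthy:.1%}")    # 66.1%
```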
While the team works on improving those numbers, experts who treat concussion patients find the device promising. “I strongly support their use of eye tracking for diagnostic decision making,” says Powell. “But for diagnostic tests, we would prefer at least one of the sensitivity or specificity values to be greater than 90 percent.” Powell compares the EyeBOX with the Buffalo Concussion Treadmill Test, which has sensitivity and specificity values of 73 and 78 percent, respectively. The VOMS has also shown greater accuracy than the EyeBOX, at least for now. Still, the EyeBOX is competitive with the best diagnostic testing available for concussion, and Powell hopes its detection prowess will improve: “I anticipate that the algorithms being used by Oculogica will be under continuous revision and expect the results will improve within the next several years.”
Powell thinks the EyeBOX could be an important complement to other concussion assessments.
“The Oculogica product is a viable diagnostic tool that supports clinical decision making. However, concussion is an injury that can present with a wide array of symptoms, and the use of technology such as the Oculogica should always be a supplement to patient interaction.”
Ioannis Mavroudis, a consultant neurologist at Leeds Teaching Hospital, agrees that the EyeBOX has promise, but cautions that concussions are too complex to rely on the device alone. For example, not all concussions affect how eyes move. “I believe that it can definitely help, however not all concussions show changes in eye movements. I believe that if this could be combined with a cognitive assessment the results would be impressive.”
The Oculogica team submitted their clinical data for FDA approval and received it in 2018. Now, they’re working to bring the test to the commercial market and using the device clinically to help diagnose concussions for clients. They also want to look at other areas of brain health in the next few years. Samadani believes that the EyeBOX could possibly be used to detect diseases like multiple sclerosis or other neurological conditions. “It’s a completely new way of figuring out what someone’s neurological exam is and we’re only beginning to realize the potential,” says Samadani.
One of Samadani’s biggest aspirations is to help reduce inequalities in healthcare tied to skin color and other factors like money or language barriers. From that perspective, the EyeBOX’s greatest potential could be in emergency rooms. It can help diagnose concussions alongside the questionnaires, assessments and symptom checklists currently used in emergency departments. Unlike those more subjective tests, the EyeBOX can produce an objective, AI-based analysis of brain injury when patients are admitted and assessed, regardless of their socioeconomic status, education or language abilities. Studies suggest that there are racial disparities in how patients with brain injuries are treated, such as how quickly they are assessed and given a treatment plan.
“The color of your skin can have a huge impact in how quickly you are triaged and managed for brain injury,” says Samadani. “As a result of that, people of color have significantly worse outcomes after traumatic brain injury than people who are white. The EyeBOX has the potential to reduce inequalities,” she explains.
“If you had a digital neurological tool that you could screen and triage patients on admission to the emergency department you would potentially be able to make sure that everybody got the same standard of care,” says Samadani. “My goal is to change the way brain injury is diagnosed and defined.”