“Deep Fake” Video Technology Is Advancing Faster Than Our Policies Can Keep Up
This article is part of the magazine, "The Future of Science In America: The Election Issue," co-published by LeapsMag, the Aspen Institute Science & Society Program, and GOOD.
Alethea.ai sports a grid of faces smiling, blinking and looking about. Some are beautiful, some are oddly familiar, but all share one thing in common—they are fake.
Alethea creates "synthetic media"— including digital faces customers can license saying anything they choose with any voice they choose. Companies can hire these photorealistic avatars to appear in explainer videos, advertisements, multimedia projects or any other applications they might dream up without running auditions or paying talent agents or actor fees. Licenses begin at a mere $99. Companies may also license digital avatars of real celebrities or hire mashups created from real celebrities including "Don Exotic" (a mashup of Donald Trump and Joe Exotic) or "Baby Obama" (a large-eared toddler that looks remarkably similar to a former U.S. President).
In the midst of the COVID pandemic, the appeal is understandable. Rather than flying to a remote location to film a beer commercial, an actor can simply license their avatar to do the work for them. The question is where and when this tech will cross the line between legitimately licensed, authorized synthetic media and deep fakes: synthetic videos designed to deceive the public for financial or political gain.
Deep fakes are not new. From written quotes that are manipulated and taken out of context to audio quotes that are spliced together to mean something other than originally intended, misrepresentation has been around for centuries. What is new is the technology that allows this sort of seamless and sophisticated deception to be brought to the world of video.
"At one point, video content was considered more reliable, and had a higher threshold of trust," said Alethea CEO and co-founder, Arif Khan. "We think video is harder to fake and we aren't yet as sensitive to detecting those fakes. But the technology is definitely there."
"In the future, each of us will only trust about 15 people and that's it," said Phil Lelyveld, who serves as Immersive Media Program Lead at the Entertainment Technology Center at the University of Southern California. "It's already very difficult to tell true footage from fake. In the future, I expect this will only become more difficult."
As the 2020 U.S. Presidential Election nears, the potential moral and ethical implications of this technology are startling. A number of cases of truth tampering have recently been widely publicized. On August 5, President Donald Trump's campaign released an ad featuring several photos of Joe Biden that were altered to make it seem like he was hiding all alone in his basement. In one photo, at least ten people who had been sitting with Biden in the original shot were cut out. In others, images of Biden at a nature preserve and praying in church were recomposited to make it appear he was in that same basement. Recently, several videos of Speaker of the House Nancy Pelosi were slowed to about 75 percent of their original speed to make her speech sound slurred.
During a campaign event in Florida on September 15 of this year, former Vice President Joe Biden was introduced by Puerto Rican singer-songwriter Luis Fonsi. After he was introduced, Biden paid tribute to the singer-songwriter by holding up his cell phone and playing the hit song "Despacito". Shortly afterward, a doctored version of this video appeared on the self-described parody site The United Spot, replacing "Despacito" with N.W.A.'s "F—- Tha Police". By September 16, Donald Trump had retweeted the video twice—first with the line "What is this all about" and second with the line "China is drooling. They can't believe this!" Twitter was quick to mark the video in these tweets as manipulated media.
Twitter had previously addressed several of Donald Trump's tweets—flagging a video shared in June as manipulated media and removing altogether a video shared by Trump in July showing a group promoting hydroxychloroquine as an effective cure for COVID-19. Many of these manipulated videos are ultimately flagged or taken down, but not before they are seen and shared by millions of online viewers.
These faked videos were exposed rather quickly, as they could be compared with the original, publicly available source material. But what happens when there is no original source material? How do we know what's true in a world where original videos created with avatars of celebrities and politicians can be manipulated to say virtually anything?
"This type of fake media is a profound threat to our democracy," said Reid Blackman, the CEO of VIRTUE, an ethics consultancy for AI leaders. "Democracy depends on well-informed citizens. When citizens can't or won't discern between real and fake news, the implications are huge."
In light of the importance of reliable information in the political system, there's a clear and present need to verify that the images and news we consume are authentic. So how can anyone ever know that the content they are viewing is real?
"This will not be a simple technological solution," said Blackman. "There is no 'truth' button to push to verify authenticity. There's plenty of blame and condemnation to go around. Purveyors of information have a responsibility to vet the reliability of their sources. And consumers also have a responsibility to vet their sources."
Yet the process of verifying sources has never been more challenging. More and more citizens are choosing to live in a "media bubble"—gathering and sharing news only from and with people who share their political leanings and opinions. At one time, United States broadcasters were bound by the Fairness Doctrine—requiring them to present controversial issues important to the public in a way that the FCC deemed honest, equitable and balanced. The repeal of this doctrine in 1987 paved the way for new forms of cable news channels such as Fox News and MSNBC that appealed to viewers with a particular point of view. The Internet has only exacerbated these tendencies. Social media algorithms are designed to keep people clicking within their comfort zones by presenting members with only the thoughts and opinions they want to hear.
"I sometimes laugh when I hear people tell me they can back a particular opinion they hold with research," said Blackman. "Having conducted a fair bit of true scientific research, I am aware that clicking on one article on the Internet hardly qualifies. But a surprising number of people believe that finding any source online that states the fact they choose to believe is the same as proving it true."
Back to the fundamental challenge: How do we as a society root out what's false online? Lelyveld suggests that it will begin by verifying things that are known to be true rather than trying to call out everything that is fake. "The EU called me in to talk about how to deal with fake news coming out of Russia," said Lelyveld. "I told them Hollywood has spent 100 years developing special effects technology to make things that are wholly fictional indistinguishable from the truth. I told them that you'll never chase down every source of fake news. You're better off focusing on what can be proved true."
Arif Khan agrees. "There are probably 100 accounts attributed to Elon Musk on Twitter, but only one has the blue checkmark," said Khan. "That means Twitter has verified that an account of public interest is real. That's what we're trying to do with our platform. Allow celebrities to verify that specific videos were licensed and authorized directly by them."
Alethea will use another key technology called blockchain to mark all authentic authorized videos with celebrity avatars. Blockchain uses a distributed ledger technology to make sure that no undetected changes have been made to the content. Think of the difference between editing a document in a traditional word processing program and editing in a distributed online editing system like Google Docs. In a traditional word processing program, you can edit and copy a document without revealing any changes. In a shared editing system like Google Docs, every person who shares the document can see a record of every edit, addition and copy made of any portion of the document. In a similar way, blockchain helps Alethea ensure that approved videos have not been copied or altered inappropriately.
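The tamper-evidence idea behind such a ledger can be sketched in a few lines of Python. This is a deliberately minimal illustration of hash chaining, not Alethea's actual blockchain implementation: each entry stores a hash of the previous entry, so altering any earlier record invalidates the chain from that point on.

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    # Hash a canonical (sorted-key) JSON form of the record.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append(ledger: list, content: str) -> None:
    # Each new entry links back to the hash of the previous entry.
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {"content": content, "prev": prev}
    entry["hash"] = record_hash({"content": content, "prev": prev})
    ledger.append(entry)

def verify(ledger: list) -> bool:
    # Recompute every hash; any edit to content or order breaks the chain.
    prev = "0" * 64
    for entry in ledger:
        expected = record_hash({"content": entry["content"], "prev": entry["prev"]})
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

ledger = []
append(ledger, "video-v1: licensed avatar clip, approved")
append(ledger, "video-v2: authorized re-edit, approved")
assert verify(ledger)           # untouched chain checks out
ledger[0]["content"] = "video-v1: doctored clip"
assert not verify(ledger)       # any alteration is detected
```

A real blockchain distributes copies of this ledger across many parties, so no single participant can quietly rewrite the history; the hash chain is what makes any rewrite detectable.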
While AI companies like Alethea are moving to ensure that avatars based on real individuals aren't misused, the situation becomes a bit murkier when it comes to representing groups, races, creeds, and other forms of identity. Alethea is rightly proud that its completely artificial avatars visually represent a variety of ages, races and sexes. However, companies could conceivably license an avatar to represent a marginalized group without actually hiring a person from that group to decide what the avatar will do or say.
"I don't know if I would call this tokenism, as that is difficult to identify without understanding the hiring company's intent," said Blackman. "Where this becomes deeply troubling is when avatars are used to represent a marginalized group without clearly pointing out the actor is an avatar. It's one thing for an African American woman avatar to say, 'I like ice cream.' It's an entirely different thing for an African American woman avatar to say she supports a particular political candidate. In the second case, the avatar is being used as social proof that real people of a certain type back a certain political idea. And there the deception is far more problematic."
"It always comes down to unintended consequences of technology," said Lelyveld. "Technology is neutral—it's only the implementation that has the power to be good or bad. Without a thoughtful approach to the cultural, moral and political implications of technology, it often drifts towards the bad. We need to make a conscious decision as we release new technology to ensure it moves towards the good."
When presented with the idea that his avatars might be used to misrepresent marginalized groups, Khan was thoughtful. "Yes, I can see that is an unintended consequence of our technology. We would like to encourage people to license the avatars of real people, who would have final approval over what their avatars say or do. As to what people do with our completely artificial avatars, we will have to consider that moving forward."
Lelyveld frankly sees the ability for advertisers to create avatars that are our assistants or even our friends as a greater moral concern. "Once our digital assistant or avatar becomes an integral part of our life—even a friend as it were, what's to stop marketers from having those digital friends make suggestions about what drink we buy, which shirt we wear or even which candidate we elect? The possibilities for bad actors to reach us through our digital circle is mind-boggling."
Ultimately, Blackman suggests, we as a society will need to make decisions about what matters to us. "We will need to build policies and write laws—tackling the biggest problems like political deep fakes first. And then we have to figure out how to make the penalties stiff enough to matter. Fining a multibillion-dollar company a few million for a major offense isn't likely to move the needle. The punishment will need to fit the crime."
Until then, media consumers will need to do their own due diligence—to do the difficult work of uncovering the often messy and deeply uncomfortable news that's the truth.
[Editor's Note: To read other articles in this special magazine issue, visit the beautifully designed e-reader version.]
In the 1966 movie "Fantastic Voyage," actress Raquel Welch and her submarine were shrunk to the size of a cell in order to eliminate a blood clot in a scientist's brain. Now, 55 years later, the scenario is becoming closer to reality.
California-based startup Bionaut Labs has developed a nanobot about the size of a grain of rice that's designed to transport medication to the exact location in the body where it's needed. If you think about it, the conventional way to deliver medicine makes little sense: A painkiller affects the entire body instead of just the arm that's hurting, and chemotherapy is flushed through all the veins instead of precisely targeting the tumor.
"Chemotherapy is delivered systemically," Bionaut-founder and CEO Michael Shpigelmacher says. "Often only a small percentage arrives at the location where it is actually needed."
But what if it was possible to send a tiny robot through the body to attack a tumor or deliver a drug at exactly the right location?
Several startups and academic institutes worldwide are working to develop such a solution but Bionaut Labs seems the furthest along in advancing its invention. "You can think of the Bionaut as a tiny screw that moves through the veins as if steered by an invisible screwdriver until it arrives at the tumor," Shpigelmacher explains. Via Zoom, he shares the screen of an X-ray machine in his Culver City lab to demonstrate how the half-transparent, yellowish device winds its way along the spine in the body. The nanobot contains a tiny but powerful magnet. The "invisible screwdriver" is an external magnetic field that rotates that magnet inside the device and gets it to move and change directions.
The current model has a diameter of less than a millimeter. Shpigelmacher's engineers could build the miniature vehicle even smaller, but the current size has the advantage of being big enough to see with the naked eye. It can also deliver more medicine than a tinier version could. In the Zoom demonstration, the microrobot is injected into the spine, not unlike an epidural, and pulled along the spine by an outside magnet until the Bionaut reaches the brainstem. Depending on which organ it needs to reach, it could be inserted elsewhere, for instance through a catheter.
Imagine moving a screw through a steak with a magnet — that's essentially how the device works. But of course, the Bionaut is considerably different from an ordinary screw: "At the right location, we give a magnetic signal, and it unloads its medicine package," Shpigelmacher says.
To start, Bionaut Labs wants to use its device to treat Parkinson's disease and brain stem gliomas, a type of cancer that largely affects children and teenagers. About 300 to 400 young people a year are diagnosed with this type of tumor. Radiation and brain surgery risk damaging sensitive brain tissue, and chemotherapy often doesn't work. Most children with these tumors live less than 18 months. A nanobot delivering targeted chemotherapy could be a gamechanger. "These patients really don't have any other hope," Shpigelmacher says.
Of course, the main challenge of developing such a device is guaranteeing that it's safe. Because tissue is so sensitive, any mistake could have disastrous results. In recent years, Bionaut has tested its technology in dozens of healthy sheep and pigs with no major adverse effects. Sheep make a good stand-in for humans because their brains and spines are similar to ours.
"As the Bionaut moves through brain tissue, it creates a transient track that heals within a few weeks," Shpigelmacher says. The company is hoping to be the first to test a nanobot in humans. In December 2022, it announced that a recent round of funding drew $43.2 million, for a total of $63.2 million, enabling more research and, if all goes smoothly, human clinical trials by early next year.
Once the technique has been perfected, further applications could include addressing other kinds of brain disorders that are considered incurable now, such as Alzheimer's or Huntington's disease. "Microrobots could serve as a bridgehead, opening the gateway to the brain and facilitating precise access to deep brain structures – either to deliver medication, take cell samples or stimulate specific brain regions," Shpigelmacher says.
Robot-assisted hybrid surgery with artificial intelligence is already used in state-of-the-art surgery centers, and many medical experts believe that nanorobotics will be the instrument of the future. In 2016, three scientists were awarded the Nobel Prize in Chemistry for their development of "the world's smallest machines," nano "elevators" and minuscule motors. Since then, experiments have progressed to the point where practical devices are moving closer to real-world use.
Bionaut's technology was initially developed by a research team led by Peer Fischer, head of the independent Micro, Nano, and Molecular Systems Lab at the Max Planck Institute for Intelligent Systems in Stuttgart, Germany. Fischer is considered a pioneer in the research of nanosystems, which he began at Harvard University more than a decade ago. He and his team are advising Bionaut Labs and have licensed their technology to the company.
"The hope is that we can develop a vehicle to transport medication deep into the body," says Max Planck scientist Tian Qiu, who leads the cooperation with Bionaut Labs. He agrees with Shpigelmacher that the Bionaut's size is perfect for transporting medication loads and is researching potential applications for even smaller nanorobots, especially in the eye, where the tissue is extremely sensitive. "Nanorobots can sneak through very fine tissue without causing damage."
In "Fantastic Voyage," Raquel Welch's adventures inside the body of a dissident scientist let her swim through his veins into his brain, but her shrunken miniature submarine is attacked by antibodies; she has to flee through the nerves into the scientist's eye, where she escapes to freedom on a teardrop. In reality, the exit in the lab is much more mundane: the Bionaut simply leaves the body through the same port where it entered. But apart from the dramatization, "Fantastic Voyage" was almost prophetic, or, as Shpigelmacher says, "Science fiction becomes science reality."
This article was first published by Leaps.org on April 12, 2021.
How the Human Brain Project Built a Mind of its Own
In 2009, neuroscientist Henry Markram gave an ambitious TED talk. “Our mission is to build a detailed, realistic computer model of the human brain,” he said, naming three reasons for this unmatched feat of engineering. One was because understanding the human brain was essential to get along in society. Another was because experimenting on animal brains could only get scientists so far in understanding the human ones. Third, medicines for mental disorders weren’t good enough. “There are two billion people on the planet that are affected by mental disorders, and the drugs that are used today are largely empirical,” Markram said. “I think that we can come up with very concrete solutions on how to treat disorders.”
Markram's arguments were very persuasive. In 2013, the European Commission launched the Human Brain Project, or HBP, as part of its Future and Emerging Technologies program. Viewed as Europe’s chance to win the “brain race” with the U.S., China, Japan, and other countries, the project received about a billion euros in funding with the goal of simulating the entire human brain on a supercomputer, or in silico, by 2023.
Now, after 10 years of dedicated neuroscience research, the HBP is coming to an end. As its many critics warned, it did not manage to build an entire human brain in silico. Instead, it achieved a multifaceted array of different goals, some of them unexpected.
Scholars have found that the project did help advance neuroscience more than some detractors initially expected, specifically in the area of brain simulations and virtual models. Using an interdisciplinary approach of combining technology, such as AI and digital simulations, with neuroscience, the HBP worked to gain a deeper understanding of the human brain’s complicated structure and functions, which in some cases led to novel treatments for brain disorders. Lastly, through online platforms, the HBP spearheaded a previously unmatched level of global neuroscience collaborations.
Simulating a human brain stirs up controversy
Right from the start, the project was plagued with controversy and condemnation. One of its prominent critics was Yves Fregnac, a professor in cognitive science at the Polytechnic Institute of Paris and research director at the French National Centre for Scientific Research. Fregnac argued in numerous articles that the HBP was overfunded based on proposals with unrealistic goals. “This new way of over-selling scientific targets, deeply aligned with what modern society expects from mega-sciences in the broad sense (big investment, big return), has been observed on several occasions in different scientific sub-fields,” he wrote in one of his articles, “before invading the field of brain sciences and neuromarketing.”
Responding to such critiques, the HBP worked to restructure the effort in its early days with new leadership, organization, and goals that were more flexible and attainable. “The HBP got a more versatile, pluralistic approach,” said Viktor Jirsa, a professor at Aix-Marseille University and one of the HBP lead scientists. He believes that these changes fixed at least some of HBP’s issues. “The project has been on a very productive and scientifically fruitful course since then.”
After restructuring, the HBP became a European hub on brain research, with hundreds of scientists joining its growing network. The HBP created projects focused on various brain topics, from consciousness to neurodegenerative diseases. HBP scientists worked on complex subjects, such as mapping out the brain, combining neuroscience and robotics, and experimenting with neuromorphic computing, a computational technique inspired by the human brain structure and function—to name just a few.
Simulations advance knowledge and treatment options
In 2013, bringing neuroscience into a digital age seemed far-fetched, but research within the HBP has made it achievable. The virtual maps and simulations that various HBP teams create from brain imaging data make it easier for neuroscientists to understand brain development and function. The teams publish these models on the HBP’s EBRAINS online platform—one of the first to offer access to such data to neuroscientists worldwide via an open-source online site. “This digital infrastructure is backed by high-performance computers, with large datasets and various computational tools,” said Lucy Xiaolu Wang, an assistant professor in the Resource Economics Department at the University of Massachusetts Amherst, who studies the economics of the HBP. That means it can be used in place of many different types of human experimentation.
Jirsa’s team is one of many within the project that works on virtual brain models and brain simulations. Compiling patient data, Jirsa and his team can create digital simulations of different brain activities—and repeat these experiments many times, which isn’t often possible in surgeries on real brains. “A human brain model can simulate an experiment a million times for many different conditions,” Jirsa explained, “but the actual human experiment can be performed only once or a few times.” Using simulations also saves scientists and doctors time and money when looking at ways to diagnose and treat patients with brain disorders.
Simulations can help scientists get a full picture that otherwise is unattainable. “Another benefit is data completion,” added Jirsa, “in which incomplete data can be complemented by the model. In clinical settings, we can often measure only certain brain areas, but when linked to the brain model, we can enlarge the range of accessible brain regions and make better diagnostic predictions.”
With time, Jirsa’s team was able to move into patient-specific simulations. “We advanced from generic brain models to the ability to use a specific patient’s brain data, from measurements like MRI and others, to create individualized predictive models and simulations,” Jirsa explained. He and his team are working on this personalization technique to treat patients with epilepsy. According to the World Health Organization, about 50 million people worldwide suffer from epilepsy, a disorder that causes recurring seizures. While some epilepsy causes are known, others remain an enigma, and many are hard to treat. For some patients whose epilepsy doesn’t respond to medications, removing the part of the brain where seizures occur may be the only option. Understanding where in a patient’s brain seizures arise can give scientists a better idea of how to treat them and whether to use surgery versus medications.
“We apply such personalized models…to precisely identify where in a patient’s brain seizures emerge,” Jirsa explained. “This guides individual surgery decisions for patients for which surgery is the only treatment option.” He credits the HBP for the opportunity to develop this novel approach. “The personalization of our epilepsy models was only made possible by the Human Brain Project, in which all the necessary tools have been developed. Without the HBP, the technology would not be in clinical trials today.”
Personalized simulations can significantly advance treatments, predict the outcome of specific medical procedures and optimize them before actually treating patients. Jirsa is watching this happen firsthand in his ongoing research. “Our technology for creating personalized brain models is now used in a large clinical trial for epilepsy, funded by the French state, where we collaborate with clinicians in hospitals,” he explained. “We have also founded a spinoff company called VB Tech (Virtual Brain Technologies) to commercialize our personalized brain model technology and make it available to all patients.”
Other experts believe it’s too soon to tell whether brain simulations could change epilepsy treatments. “The life cycle of developing treatments applicable to patients often runs over a decade,” Wang stated. “It is still too early to draw a clear link between HBP’s various project areas with patient care.” However, she admits that some studies built on the HBP-collected knowledge are already showing promise. “Researchers have used neuroscientific atlases and computational tools to develop activity-specific stimulation programs that enabled paraplegic patients to move again in a small-size clinical trial,” Wang said. Another intriguing study looked at simulations of Alzheimer’s in the brain to understand how it evolves over time.
Some challenges remain hard to overcome even with computer simulations. “The major challenge has always been the parameter explosion, which means that many different model parameters can lead to the same result,” Jirsa explained. An example of this parameter explosion could be two different types of neurodegenerative conditions, such as Parkinson’s and Huntington’s diseases. Both afflict the same area of the brain, the basal ganglia, which can affect movement, but are caused by two different underlying mechanisms. “We face the same situation in the living brain, in which a large range of diverse mechanisms can produce the same behavior,” Jirsa said. The simulations still have to overcome the same challenge.
A network not unlike the brain’s own
Though the HBP will be closing this year, its legacy continues in various studies, spin-off companies, and its online platform, EBRAINS. “The HBP is one of the earliest brain initiatives in the world, and the 10-year long-term goal has united many researchers to collaborate on brain sciences with advanced computational tools,” Wang said. “Beyond the many research articles and projects collaborated on during the HBP, the online neuroscience research infrastructure EBRAINS will be left as a legacy even after the project ends.”
Those who worked within the HBP see the end of this project as the next step in neuroscience research. “Neuroscience has come closer to very meaningful applications through the systematic link with new digital technologies and collaborative work,” Jirsa stated. “In that way, the project really had a pioneering role.” It also created a level of interconnectedness within the neuroscience research community that never existed before—a network not unlike the brain’s own. “Interconnectedness is an important advance and prerequisite for progress,” Jirsa said. “The neuroscience community has in the past been rather fragmented and this has dramatically changed in recent years thanks to the Human Brain Project.”
According to its website, by 2023 HBP’s network counted over 500 scientists from 123 institutions in 16 different countries, creating one of the largest multi-national research groups in the world. Even though the project hasn’t produced the in-silico brain as Markram envisioned it, the HBP created a communal mind with immense potential. “It has challenged us to think beyond the boundaries of our own laboratories,” Jirsa said, “and enabled us to go much further together than we could have ever conceived going by ourselves.”