Your Digital Avatar May One Day Get Sick Before You Do
Artificial intelligence is everywhere, just not in the way you think it is.
"There's the perception of AI in the glossy magazines," says Anders Kofod-Petersen, a professor of Artificial Intelligence at the Norwegian University of Science and Technology. "That's the sci-fi version. It resembles the small guy in the movie AI. It might be benevolent or it might be evil, but it's generally intelligent and conscious."
"And this is, of course, as far from the truth as you can possibly get."
What Exactly Is Artificial Intelligence, Anyway?
Let's start with how you got to this piece. You likely came to it through social media. Your Facebook account, Twitter feed, or perhaps a Google search. AI influences all of those things, with machine learning helping to run the algorithms that decide what you see, when, and where. AI isn't the little humanoid figure; it's the system that controls the figure.
"AI is being confused with robotics," Eleonore Pauwels, Director of the Anticipatory Intelligence Lab with the Science and Technology Innovation Program at the Wilson Center, says. "What AI is right now is a data optimization system, a very powerful data optimization system."
The revolution in recent years hasn't come from the methods scientists and other researchers use. The general ideas and philosophies have been around since the late 1960s. Instead, the big change has been the dramatic increase in computing power, which has made neural networks practical at scale. These networks, loosely modeled on the human brain, are layers of interconnected nodes that have the ability to "learn." An AI, for example, can be taught to spot a picture of a cat by looking at hundreds of thousands of pictures that have been labeled "cat" and "learning" what a cat looks like. Or an AI can beat a human at Go, an achievement that just five years ago Kofod-Petersen thought wouldn't be accomplished for decades.
"It's very difficult to argue that something is intelligent if it can't learn, and these algorithms are getting pretty good at learning stuff. What they are not good at is learning how to learn."
Medicine is the field where this expertise in perception tasks might have the most influence. It's already having an impact: iPhones use AI to help detect cancer, Apple Watches alert wearers to heart problems, AI spots tuberculosis and the spread of breast cancer with higher accuracy than human doctors, and more. Every few months, another study demonstrates more possibility. (The New Yorker published an article about medicine and AI last year, so you know it's a serious topic.)
But this is only the beginning. "I personally think genomics and precision medicine is where AI is going to be the biggest game-changer," Pauwels says. "It's going to completely change how we think about health, our genomes, and how we think about our relationship between our genotype and phenotype."
The Fundamental Problem That Must Be Solved
To get there, however, researchers will need to make another breakthrough, and there's debate about how long that will take. Kofod-Petersen explains: "If we want to move from this narrow intelligence to this broader intelligence, that's a very difficult problem. It basically boils down to that we haven't got a clue about what intelligence actually is. We don't know what intelligence means in a biological sense. We think we might recognize it but we're not completely sure. There isn't a working definition. We kind of agree with the biologists that learning is an aspect of it. It's very difficult to argue that something is intelligent if it can't learn, and these algorithms are getting pretty good at learning stuff. What they are not good at is learning how to learn. They can learn specific tasks but we haven't approached how to teach them to learn to learn."
In other words, current AI is very, very good at identifying that a picture of a cat is, in fact, a cat – and getting better at doing so at an incredibly rapid pace – but the system only knows what a "cat" is because that's the label a programmer attached to furry things with whiskers and two pointy ears. If the programmer instead decided to label the training images as "dogs," the AI wouldn't say "no, that's a cat." Instead, it would simply call a furry thing with whiskers and two pointy ears a dog. AI systems lack the kind of inference humans perform effortlessly, almost without thinking.
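To make that concrete, here is a minimal, purely illustrative sketch in Python. It assumes scikit-learn and made-up features ("whiskers," "pointy ears," "barks") – none of it comes from the systems described above – but it shows how a supervised classifier faithfully reproduces whatever label text it was trained on.

```python
# A toy supervised-learning sketch (assumes scikit-learn is installed;
# the features and labels are invented purely for illustration).
from sklearn.neighbors import KNeighborsClassifier

# Each row describes one animal: [has_whiskers, pointy_ears, barks]
training_features = [
    [1, 1, 0],  # a cat-like animal
    [1, 1, 0],
    [0, 0, 1],  # a dog-like animal
    [0, 0, 1],
]

# The model learns whatever label text the programmer supplies.
labels_as_written = ["cat", "cat", "dog", "dog"]
labels_swapped = ["dog", "dog", "cat", "cat"]  # deliberately mislabeled

model_a = KNeighborsClassifier(n_neighbors=1).fit(training_features, labels_as_written)
model_b = KNeighborsClassifier(n_neighbors=1).fit(training_features, labels_swapped)

furry_thing = [[1, 1, 0]]  # whiskers and pointy ears
print(model_a.predict(furry_thing))  # ['cat']
print(model_b.predict(furry_thing))  # ['dog'] -- same animal, different name
```

The second model never objects that the labels are "wrong"; it has no concept of cats or dogs beyond the strings it was handed.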
Pauwels believes that the next step is for AI to transition from supervised to unsupervised learning. The latter means that the AI isn't answering questions that a programmer asks it ("Is this a cat?"). Instead, it's almost as if it's looking at the data it has, coming up with its own questions and hypotheses, and answering them or putting them to the test. Combining this ability with the staggering processing power of modern computing systems could result in game-changing discoveries.
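For contrast, here is a minimal sketch of unsupervised learning, again assuming scikit-learn and entirely invented numbers: no one gives the algorithm labels or a question; it simply finds groupings in the data, and deciding what those groupings mean is still left to humans.

```python
# A toy unsupervised-learning sketch (assumes NumPy and scikit-learn;
# the "measurements" below are randomly generated, not real data).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two invented "populations" of measurements, e.g. patients' lab values.
population_a = rng.normal(loc=0.0, scale=0.5, size=(50, 10))
population_b = rng.normal(loc=3.0, scale=0.5, size=(50, 10))
data = np.vstack([population_a, population_b])

# No labels and no question from a programmer: the algorithm groups the
# rows on its own. Interpreting the groups is still up to a human.
cluster_ids = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)
print(cluster_ids[:5], cluster_ids[-5:])  # two distinct, unnamed clusters
```

Real unsupervised systems are far more sophisticated than this clustering toy, but the principle – structure discovered without labels – is the same.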
One company in China plans to develop a way to create a digital avatar of an individual person, then simulate that person's health and medical information into the future. In the not-too-distant future, a doctor could run diagnostics on the digital avatar, watch which medical conditions present themselves – cancer or a heart condition or anything, really – and help the real-life person prevent those conditions from developing, or treat them before they become life-threatening.
That, obviously, would be an incredibly powerful technology, and it's just one of the many possibilities that unsupervised AI presents. It's also terrifying in its potential for misuse. Even the term "unsupervised AI" brings to mind a dystopian landscape where AI takes over and enslaves humanity. (Pick your favorite movie. There are dozens.) This is a real concern, something for developers, programmers, and scientists to consider as they build the systems of the future.
The Ethical Problem That Deserves More Attention
But the more immediate concern about AI is much more mundane. We tend to think of AI as an unbiased system. That's incorrect. Algorithms, after all, are designed by a person or a team, and those people carry explicit or implicit biases. Intentionally, or more likely not, they introduce those biases into the data and code that form the basis of the AI. Current systems – facial recognition is a well-documented example – have shown bias against people of color, and Facebook's attempts to rectify bias in its own systems have fallen short. These are two small examples of a larger, potentially systemic problem.
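One simple way such bias gets surfaced is an audit that compares a model's outcomes across demographic groups and flags large gaps. The sketch below is hypothetical: the "model decisions" and groups are invented, not drawn from any real system.

```python
# A toy bias-audit sketch (assumes NumPy; decisions and groups are
# synthetic, deliberately skewed to show what an auditor would flag).
import numpy as np

rng = np.random.default_rng(1)
groups = rng.choice(["group_1", "group_2"], size=1000)

# Hypothetical model decisions, skewed against group_2 on purpose.
decisions = np.where(
    groups == "group_1",
    rng.random(1000) < 0.50,  # ~50% approval rate
    rng.random(1000) < 0.30,  # ~30% approval rate
)

for g in ("group_1", "group_2"):
    rate = decisions[groups == g].mean()
    print(f"{g}: approval rate = {rate:.2f}")
# A persistent gap like this is one red flag auditors look for
# (a rough check of "demographic parity").
```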
It's vital and necessary for the people developing AI today to be aware of these issues. And, yes, to avoid sending us to the brink of a James Cameron movie. But AI is too powerful a tool to ignore. Today, it's identifying cats and on the verge of detecting cancer. In not too many tomorrows, it will be at the forefront of medical innovation. If we are careful, aware, and smart, it will help simulate results, create designer drugs, and revolutionize individualized medicine. "AI is the only way to get there," Pauwels says.
A newly discovered brain cell may lead to better treatments for cognitive disorders
Swiss researchers have discovered a third type of brain cell that appears to be a hybrid of the two other primary types — and it could lead to new treatments for many brain disorders.
The challenge: Most of the cells in the brain are either neurons or glial cells. While neurons use electrical and chemical signals to send messages to one another across small gaps called synapses, glial cells exist to support and protect neurons.
Astrocytes are a type of glial cell found near synapses. This close proximity to the place where brain signals are sent and received has led researchers to suspect that astrocytes might play an active role in the transmission of information inside the brain — a.k.a. “neurotransmission” — but until now, no one had been able to prove the theory.
A new brain cell: Researchers at the Wyss Center for Bio and Neuroengineering and the University of Lausanne believe they’ve definitively proven that some astrocytes do actively participate in neurotransmission, making them a sort of hybrid of neurons and glial cells.
According to the researchers, this third type of brain cell, which they call a “glutamatergic astrocyte,” could offer a way to treat Alzheimer’s, Parkinson’s, and other disorders of the nervous system.
“Its discovery opens up immense research prospects,” said study co-director Andrea Volterra.
The study: Neurotransmission starts with a neuron releasing a chemical called a neurotransmitter, so the first thing the researchers did in their study was look at whether astrocytes can release the main neurotransmitter used by neurons: glutamate.
By analyzing astrocytes taken from the brains of mice, they discovered that certain astrocytes in the brain’s hippocampus did include the “molecular machinery” needed to secrete glutamate. They found evidence of the same machinery when they looked at datasets of human glial cells.
Finally, to demonstrate that these hybrid cells are actually playing a role in brain signaling, the researchers suppressed their ability to secrete glutamate in the brains of mice. This caused the rodents to experience memory problems.
But why? The researchers aren’t sure why the brain needs glutamatergic astrocytes when it already has neurons, but Volterra suspects the hybrid brain cells may help with the distribution of signals — a single astrocyte can be in contact with thousands of synapses.
“Often, we have neuronal information that needs to spread to larger ensembles, and neurons are not very good for the coordination of this,” researcher Ludovic Telley told New Scientist.
Looking ahead: More research is needed to see how the new brain cell functions in people, but the discovery that it plays a role in memory in mice suggests it might be a worthwhile target for Alzheimer’s disease treatments.
The researchers also found evidence during their study that the cell might play a role in brain circuits linked to seizures and voluntary movements, meaning it’s also a new lead in the hunt for better epilepsy and Parkinson’s treatments.
“Our next studies will explore the potential protective role of this type of cell against memory impairment in Alzheimer’s disease, as well as its role in other regions and pathologies than those explored here,” said Volterra.
Researchers claimed they built a breakthrough superconductor. Social media shot it down almost instantly.
Harsh Mathur was a graduate physics student at Yale University in late 1989 when faculty announced they had failed to replicate claims made by scientists at the University of Utah and the University of Southampton in England.
Such work is routine. Replicating or attempting to replicate the contraptions, calculations and conclusions crafted by colleagues is foundational to the scientific method. But in this instance, Yale’s findings were reported globally.
“I had a ringside view, and it was crazy,” recalls Mathur, now a professor of physics at Case Western Reserve University in Ohio.
Yale’s findings drew so much attention because initial experiments by Stanley Pons of Utah and Martin Fleischmann of Southampton led to a startling claim: They were able to fuse atoms at room temperature – a scientific El Dorado known as “cold fusion.”
Nuclear fusion powers the stars in the universe. However, star cores must be at least 23.4 million degrees Fahrenheit and under extraordinary pressure to achieve fusion. Pons and Fleischmann claimed they had created an almost limitless source of power achievable at room temperature.
But about six months after they made their startling announcement, the pair’s findings were discredited by researchers at Yale and the California Institute of Technology. It was one of the first instances of a major scientific debunking covered by mass media.
Some scholars say the media attention for cold fusion stemmed partly from a dazzling announcement made three years prior, in 1986: Scientists had created the first “high-temperature” superconductor – a material that can transmit electrical current with little or no resistance – at far warmer temperatures than any superconductor before it. It drew global headlines – and whetted the public’s appetite for announcements of scientific breakthroughs that could cause economic transformations.
But like fusion, superconductivity can only be achieved in mostly impractical circumstances: It works either at temperatures no higher than about negative 100 degrees Fahrenheit, or under pressures of around 150,000 pounds per square inch. Superconductivity that functions in something closer to a normal environment would cut energy costs dramatically while opening vast possibilities for computing, space travel and other applications.
In July, a group of South Korean scientists posted preprints claiming they had created a modified lead-apatite crystalline substance called LK-99 that could achieve superconductivity at slightly above room temperature and at ambient pressure. The group partners with the Quantum Energy Research Centre, a privately held enterprise in Seoul, and their claims drew global headlines.
Their work was also debunked. But in the age of the internet and social media, the process was compressed from half a year into days. And it did not require researchers at world-class universities.
One of the most compelling critiques came from Derrick VanGennep. Although he works in finance, he holds a Ph.D. in physics and held a postdoctoral position at Harvard. The South Korean researchers had posted a video of a nugget of LK-99 in what they claimed was the throes of the Meissner effect – an expulsion of the substance’s magnetic field that would cause it to levitate above a magnet. Unless Hollywood magic is involved, only superconducting material can hover in this manner.
That claim made VanGennep skeptical, particularly since LK-99’s levitation appeared unenthusiastic at best. In fact, a corner of the material still adhered to the magnet near its center. He thought the video demonstrated ferromagnetism – the way two ordinary magnets repel one another – rather than superconductivity. He mixed powdered graphite with super glue, stuck iron filings to its surface and mimicked the behavior of LK-99 in his own video, which he posted alongside the researchers’ video.
VanGennep believes the boldness of the South Korean claim was what led him and others in the scientific community to question it so quickly.
“The swift replication attempts stemmed from the combination of the extreme claim, the fact that the synthesis for this material is very straightforward and fast, and the amount of attention that this story was getting on social media,” he says.
But practicing scientists were suspicious of the data as well. Michael Norman, director of the Argonne Quantum Institute at the Argonne National Laboratory just outside of Chicago, had doubts immediately.
“It wasn’t a very polished paper,” Norman says of the Korean scientists’ work. That opinion was reinforced, he adds, when it turned out the paper had been posted online by one of the researchers prior to seeking publication in a peer-reviewed journal. Although Norman and Mathur say that is routine with scientific research these days, Norman notes it was posted by one of the junior researchers over the doubts of two more senior scientists on the project.
Norman also raises doubts about the data reported. Among other issues, he observes that the samples created by the South Korean researchers contained traces of copper sulfide that could inadvertently amplify findings of conductivity.
The lack of the Meissner effect also caught Mathur’s attention. “Ferromagnets tend to be unstable when they levitate,” he says, adding that the video “just made me feel unconvinced. And it made me feel like they hadn't made a very good case for themselves.”
Will this saga hurt or even affect the careers of the South Korean researchers? Possibly not, if the previous fusion example is any indication. Despite being debunked, cold fusion claimants Pons and Fleischmann didn’t disappear. They moved their research to automaker Toyota’s IMRA laboratory in France, which along with the Japanese government spent tens of millions of dollars on their work before finally pulling the plug in 1998.
Fusion has since been created in laboratories, but because labs cannot reproduce the density of a star’s core, they need excruciatingly high temperatures – about 160 million degrees Fahrenheit – to achieve it. A recently released Government Accountability Office report concludes practical fusion likely remains at least decades away.
However, like Pons and Fleischmann, the South Korean researchers are not going anywhere. They claim that LK-99’s Meissner effect is being obscured by the fact that the substance is both ferromagnetic and diamagnetic. They have filed for a patent in their country. But for now, those claims remain chimerical.
In the meantime, the consensus as to when a room-temperature superconductor will be achieved is mixed. VanGennep – who studied the issue during his graduate and postgraduate work – puts the chance of creating such a superconductor by 2050 at perhaps 50-50. Mathur believes it could happen sooner, but adds that research on the topic has been going on for nearly a century and has seen many plateaus.
“There's always this possibility that there's going to be something out there that we're going to discover unexpectedly,” Norman notes. The only certainty in this age of social media is that it will be put through the rigors of replication instantly.