Scientists Want to Make Robots with Genomes That Help Grow Their Minds
Lina Zeldovich has written about science, medicine and technology for Popular Science, Smithsonian, National Geographic, Scientific American, Reader’s Digest, the New York Times and other major national and international publications. A Columbia J-School alumna, she has won several awards for her stories, including the ASJA Crisis Coverage Award for Covid reporting, and has been a contributing editor at Nautilus Magazine. In 2021, Zeldovich released her first book, The Other Dark Matter, published by the University of Chicago Press, about the science and business of turning waste into wealth and health. You can find her on http://linazeldovich.com/ and @linazeldovich.
Not long ago, scientists at Columbia University’s Creative Machines Lab set up a robotic arm inside a circle of five streaming video cameras and let the robot watch itself move, turn and twist. For about three hours the robot did exactly that: it looked at itself this way and that, like a toddler exploring itself in a room full of mirrors. By the time the robot stopped, its internal neural network had finished learning the relationship between the robot’s motor actions and the volume it occupied in its environment. In other words, the robot had built a spatial self-awareness, just as humans do. “We trained its deep neural network to understand how it moved in space,” says Boyuan Chen, one of the scientists who worked on it.
For decades robots have been doing helpful tasks that are too hard, too dangerous, or physically impossible for humans to carry out themselves. Robots remain superior to humans at complex calculations, following rules to the letter and repeating the same steps flawlessly. But even the biggest successes for human-robot collaboration, in the manufacturing and automotive industries, still require separating the two for safety reasons. Hardwired for a limited set of tasks, industrial robots don't have the intelligence to know where their robo-parts are in space, how fast they’re moving and when they can endanger a human.
Over the past decade or so, humans have begun to expect more from robots. Engineers have been building smarter versions that can avoid obstacles, follow voice commands, respond to human speech and make simple decisions. Some of them proved invaluable in many natural and man-made disasters like earthquakes, forest fires, nuclear accidents and chemical spills. These disaster recovery robots helped clean up dangerous chemicals, looked for survivors in crumbled buildings, and ventured into radioactive areas to assess damage.
Now roboticists are going a step further, training their creations to do even better: understand their own image in space and interact with humans like humans do. Today, there are already robot-teachers like KeeKo, robot-pets like Moffin, robot-babysitters like iPal, and robotic companions for the elderly like Pepper.
But even these reasonably intelligent creations still have huge limitations, some scientists think. “There are niche applications for the current generations of robots,” says professor Anthony Zador at Cold Spring Harbor Laboratory—but they are not “generalists” who can do varied tasks all on their own, as they mostly lack the abilities to improvise, make decisions based on a multitude of facts or emotions, and adjust to rapidly changing circumstances. “We don’t have general purpose robots that can interact with the world. We’re ages away from that.”
Robotic spatial self-awareness, the achievement of the Columbia team, is an important step toward creating more intelligent machines. Hod Lipson, the professor of mechanical engineering who runs the Columbia lab, says that future robots will need this ability to assist humans better. Knowing how you look and where your parts are in space decreases the need for human oversight. It also helps a robot detect and compensate for damage, keep up with its own wear-and-tear, and realize when something is wrong with it or its parts. “We want our robots to learn and continue to grow their minds and bodies on their own,” Chen says. That’s what Zador wants too—and on a much grander level. “I want a robot who can drive my car, take my dog for a walk and have a conversation with me.”
Columbia scientists have trained a robot to become aware of its own "body," so it can map the right path to touch a ball without running into an obstacle, in this case a square.
Jane Nisselson and Yinuo Qin/ Columbia Engineering
Today’s technological advances are making some of these leaps of progress possible. One of them is so-called deep learning, a method that trains artificial intelligence systems to learn and use information much the way humans do. A machine learning approach built on neural network architectures with multiple layers of processing units, deep learning has been used to successfully teach machines to recognize images, understand speech and even write text.
One of these language-savvy machine learning systems, BERT, trained by Google, can finish sentences. Another one, called GPT-3 and designed by the San Francisco-based company OpenAI, can write little stories. Yet both of them still make mistakes in their linguistic exercises that even a child wouldn’t make. According to a paper published by Stanford’s Center for Research on Foundation Models, BERT seems to not understand the word “not.” When asked to fill in the word after “A robin is a __” it correctly answers “bird.” But try inserting the word “not” into that sentence (“A robin is not a __”) and BERT still completes it the same way. Similarly, in one of its stories, GPT-3 wrote that if you mix a spoonful of grape juice into your cranberry juice and drink the concoction, you die. It seems that robots, and artificial intelligence systems in general, are still missing some rudimentary facts of life that humans and animals grasp naturally and effortlessly.
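The negation failure is easy to reproduce in miniature. The toy predictor below is not BERT, just a hypothetical word-co-occurrence model with made-up counts; because it fills the blank from content words alone and discards function words like “not,” both prompts get the same answer:

```python
# Toy fill-in-the-blank predictor that ignores function words,
# reproducing the reported blindness to "not". The co-occurrence
# counts are invented for illustration; this is not a real language model.
from collections import Counter

# Pretend co-occurrence statistics: content word -> completion counts
COOCCURRENCE = {
    "robin": Counter({"bird": 90, "robot": 2, "fish": 1}),
}

STOP_WORDS = {"a", "an", "the", "is", "not", "___"}

def complete(prompt: str) -> str:
    """Fill the blank using the most frequent completion for the
    prompt's content words, ignoring function words entirely."""
    content = [w for w in prompt.lower().split() if w not in STOP_WORDS]
    counts = Counter()
    for word in content:
        counts += COOCCURRENCE.get(word, Counter())
    return counts.most_common(1)[0][0]

print(complete("A robin is a ___"))      # bird
print(complete("A robin is not a ___"))  # bird -- "not" changed nothing
```

A statistical model that treats “not” as noise can never distinguish the two sentences, which is the gap the Stanford researchers highlight.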
It's not exactly the robots’ fault. Compared to humans, and to all the other organisms that have been around for millions of years, robots are very new. They are missing out on eons of evolutionary data-building. Animals and humans are born with the ability to do certain things because those abilities are pre-wired in them. Flies know how to fly, fish know how to swim, cats know how to meow, and babies know how to cry. Yet flies don’t really learn to fly, fish don’t learn to swim, cats don’t learn to meow, and babies don’t learn to cry—they are born able to execute such behaviors because they’re preprogrammed to do so. All that happens thanks to the millions of years of evolution wired into their respective genomes, which give rise to the brain’s neural networks responsible for these behaviors. Robots are the newbies, missing out on that trove of information, Zador argues.
A neuroscience professor who studies how brain circuitry generates various behaviors, Zador has a different approach to developing the robotic mind. Until their creators figure out a way to imbue the bots with that information, robots will remain quite limited in their abilities. Each model will only be able to do certain things it was programmed to do, but it will never go above and beyond its original code. So Zador argues that we have to start giving robots a genome.
How does one do that? Zador has an idea. We can’t really equip machines with real biological nucleotide-based genes, but we can mimic the neuronal blueprint those genes create. Genomes lay out rules for brain development. Specifically, the genome encodes blueprints for wiring up our nervous system—the details of which neurons are connected, the strength of those connections and other specs that will later hold the information learned throughout life. “Our genomes serve as blueprints for building our nervous system and these blueprints give rise to a human brain, which contains about 100 billion neurons,” Zador says.
If you think about what a genome is, he explains, it is essentially a very compact and compressed form of information storage. Conceptually, genomes are similar to CliffsNotes and other study guides. When students read these short summaries, they learn roughly what happens in a book without actually reading that book. And that’s how we should be designing the next generation of robots if we ever want them to act like humans, Zador says. “We should give them a set of behavioral CliffsNotes, which they can then unwrap into brain-like structures.” Robots that have such brain-like structures will acquire a set of basic rules to generate basic behaviors and use them to learn more complex ones.
Currently Zador is in the process of developing algorithms that function like simple rules that generate such behaviors. “My algorithms would write these CliffsNotes, outlining how to solve a particular problem,” he explains. “And then, the neural networks will use these CliffsNotes to figure out which ones are useful and use them in their behaviors.” That’s how all living beings operate. They use the pre-programmed info from their genetics to adapt to their changing environments and learn what’s necessary to survive and thrive in these settings.
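One way to picture the compression at the heart of this idea is to generate a large wiring diagram from a much smaller “genome,” rather than storing every connection directly. The sketch below is a conceptual toy, not Zador’s actual algorithm; all the names and numbers are illustrative:

```python
# Conceptual sketch of a "genomic bottleneck": a large wiring diagram
# is not stored synapse by synapse but generated from a much smaller
# genome. Illustrative toy code, not Zador's actual algorithm.
import random

N_NEURONS = 100   # size of the "brain" to wire up
GENE_DIM = 4      # genes per neuron -- the compressed description

random.seed(0)

# The genome: a short vector of "gene expression" values per neuron.
genome = [[random.uniform(-1, 1) for _ in range(GENE_DIM)]
          for _ in range(N_NEURONS)]

def connection_strength(i: int, j: int) -> float:
    """Unpack the genome into a synapse: neurons whose gene profiles
    align get strong connections; mismatched profiles get weak or
    negative ones."""
    return sum(a * b for a, b in zip(genome[i], genome[j]))

# The full wiring diagram has N*N entries, but the genome that
# generates it holds only N * GENE_DIM numbers.
full_synapses = N_NEURONS * N_NEURONS   # 10,000 weights
genome_size = N_NEURONS * GENE_DIM      # 400 numbers
print(f"compression: {full_synapses} weights from {genome_size} genes")
```

The point of the toy is the ratio: a short “CliffsNotes” description unpacks into a network far larger than itself, which the organism (or robot) then refines through learning.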
For example, a robot’s neural network could draw from CliffsNotes with “genetic” instructions for how to be aware of its own body or learn to adjust its movements. And other, different sets of CliffsNotes may imbue it with the basics of physical safety or the fundamentals of speech.
At the moment, Zador is working on algorithms that try to mimic the neuronal blueprints of very simple organisms—such as the nematode worm C. elegans, which has only 302 neurons and about 7,000 synapses, compared with the billions of neurons we have. That’s how evolution worked, too—expanding brains from simple creatures to more complex ones, all the way to Homo sapiens. But if it took millions of years to arrive at modern humans, how long would it take scientists to forge a robot with human intelligence? That’s a billion-dollar question. Yet Zador is optimistic. “My hypothesis is that if you can build simple organisms that can interact with the world, then the higher level functions will not be nearly as challenging as they currently are.”
The Friday Five covers five stories in research that you may have missed this week. There are plenty of controversies and troubling ethical issues in science – and we get into many of them in our online magazine – but this news roundup focuses on new scientific theories and progress to give you a therapeutic dose of inspiration headed into the weekend.
This episode includes an interview with Dr. Helen Keyes, Head of the School of Psychology and Sports Science at Anglia Ruskin University.
- Attending sports events is linked to greater life satisfaction
- Identifying specific brain tumors in under 90 seconds with AI
- LSD - minus hallucinations - raises hopes for mental health
- New research on the benefits of cold showers
- Inspire awe in your kids and reap the benefits
As a graduate student in observational astronomy at the University of Arizona during the 1970s, Diane Turnshek remembers the starry skies above the Kitt Peak National Observatory on the Tucson outskirts. Back then, she could observe faint objects like nebulae, galaxies, and star clusters on most nights.
When Turnshek moved to Pittsburgh in 1981, she found it almost impossible to see a clear night sky because the city’s countless lights created a bright dome of light called skyglow. Over the next two decades, Turnshek almost forgot what a dark sky looked like. She witnessed pristine dark skies in their full glory again during a visit to the Mars Desert Research Station in Utah in the early 2000s.
“I was shocked at how beautiful the dark skies were in the West. That is when I realized that most parts of the world have lost access to starry skies because of light pollution,” says Turnshek, an astronomer and lecturer at Carnegie Mellon University. In 2015, she became a dark sky advocate.
Light pollution is defined as the excessive or wasteful use of artificial light.
Light-emitting diodes (LEDs), which became commercially available in 2002 and rapidly gained popularity in offices, schools, and hospitals when their price dropped six years later, inadvertently fueled the surge in light pollution. As traditional light sources like halogen, fluorescent, mercury, and sodium vapor lamps have been phased out or banned, LEDs became the main source of lighting globally in 2019. Switching to LEDs has been lauded as a win-win decision: not only are they cheap, but they also consume a fraction of the electricity of their traditional counterparts.
But as cheap LED installations became omnipresent, they increased light pollution. “People have been installing LEDs thinking they are making a positive change for the environment. But LEDs are a lot brighter than traditional light sources,” explains Ashley Wilson, director of conservation at the International Dark-Sky Association (IDA). “Despite being energy-efficient, they are increasing our energy consumption. No one expected this kind of backlash from switching to LEDs.”
Currently, more than 80 percent of the world lives under light-polluted skies. In the U.S. and Europe, that figure is above 99 percent.
According to the IDA, $3 billion worth of electricity is lost to skyglow every year in the U.S. alone — thanks to unnecessary and poorly designed outdoor lighting installations. Worse, the resulting light pollution has insidious impacts on humans and wildlife — in more ways than one.
Disrupting the brain’s clock
Light pollution impacts the circadian rhythms of all living beings—the natural internal process that regulates the sleep–wake cycle. Humans and other mammals have neurons in their retina called intrinsically photosensitive retinal ganglion cells (ipRGCs). These cells collect information about the visual world and directly influence the brain’s biological clock in the hypothalamus.
The ipRGCs are particularly sensitive to the blue light that LEDs emit at high levels, resulting in suppression of melatonin, a hormone that helps us sleep. A 2020 JAMA Psychiatry study detailed how teenagers who lived in areas with bright outdoor lighting at night went to bed late and slept less, which made them more prone to mood disorders and anxiety.
“Many people are skeptical when they are told something as ubiquitous as lights could have such profound impacts on public health,” says Gena Glickman, director of the Chronobiology, Light and Sleep Lab at Uniformed Services University. “But when the clock in our brains gets exposed to blue light at nighttime, it could result in a lot of negative consequences like impaired cognitive function and neuro-endocrine disturbances.”
In the last 12 years, several studies have indicated that exposure to light pollution is associated with obesity and diabetes in humans and animals alike. While researchers are still trying to understand the exact underlying mechanisms, they have found that even one night of too much light exposure can negatively affect the metabolic system. Studies have also linked light pollution to a higher risk of hormone-sensitive cancers like breast and prostate cancer. A 2017 study found that female nurses exposed to light pollution had a 14 percent higher risk of breast cancer. The World Health Organization (WHO) has identified long-term night shiftwork as a probable cause of cancer.
“We ignore our biological need for a natural light and dark cycle. Our patterns of light exposure have consequently become different from what nature intended,” explains Glickman.
Circadian lighting systems, designed to match individuals’ circadian rhythms, might help. The Lighting Research Center at Rensselaer Polytechnic Institute developed LED light systems that mimic the natural daily fluctuations of light required for better sleep. In the morning the lights shine brightly, as does the sun. After sunset, the system dims, once again mimicking nature, which boosts melatonin production. It can even be programmed to increase blue light indoors when clouds block sunlight’s path through windows. Studies have shown that such systems might help reduce sleep fragmentation and cognitive decline. People who spend most of their day indoors can benefit from such circadian mimics.
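The logic of such a system can be sketched as a simple schedule mapping the hour of day to brightness and color temperature (higher kelvin means bluer light). The hours and values below are illustrative assumptions, not the Lighting Research Center’s actual settings:

```python
# Hypothetical circadian lighting schedule: bright, blue-rich light
# during the day; dim, warm light after sunset. All hours and values
# are illustrative, not a real product's settings.

def circadian_setting(hour: int) -> tuple[int, int]:
    """Return (brightness %, color temperature in kelvin) for an
    hour of the day (0-23)."""
    if 7 <= hour < 19:        # daytime: mimic bright sunlight
        return 100, 6500      # full brightness, blue-rich
    if 19 <= hour < 22:       # evening: dim and warm to spare melatonin
        return 40, 3000
    return 10, 2200           # night: minimal, very warm light

for hour in (9, 20, 23):
    brightness, kelvin = circadian_setting(hour)
    print(f"{hour:02d}:00 -> {brightness}% at {kelvin} K")
```

A real installation would also ramp gradually between these states and, as the article notes, could boost indoor blue light when sensors detect overcast skies.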
When Diane Turnshek moved to Pittsburgh, she found it almost impossible to see a clear night sky because the city’s countless lights created a bright dome of light called skyglow.
Diane Turnshek
Leading to better LEDs
Light pollution disrupts the travels of millions of migratory birds that begin their long-distance journeys after sunset but end up entrapped within the sky glow of cities, becoming disoriented. A 2017 study in Nature found that nocturnal pollinators like bees, moths, fireflies and bats visit 62 percent fewer plants in areas with artificial lights compared to dark areas.
“On an evolutionary timescale, LEDs have triggered huge changes in the Earth’s environment within a relative blink of an eye,” says Wilson, the director of IDA. “Plants and animals cannot adapt so fast. They have to fight to survive with their existing traits and abilities.”
But not all types of LEDs are inherently bad: it all comes down to how much blue light they emit. During the day, the sun emits blue light waves. By sunset, red and orange light waves become predominant, stimulating melatonin production. LEDs’ artificial blue light, when shining at night, disrupts that. Yet blue-heavy LEDs are made and sold in far greater numbers than warmer ones.
“Communities install blue color temperature LEDs rather than redder color temperature LEDs because more of the blue ones are made; they are the status quo on the market,” says Michelle Wooten, an assistant professor of astronomy at the University of Alabama at Birmingham.
While astronomers and the IDA have been educating LED manufacturers about these nuances, policymakers struggle to keep up with the growing industry. But there are things they can do—such as requiring LEDs to include dimmers. “Most LED installations can be dimmed down. We need to make the dimmable drivers a mandatory requirement while selling LED lighting,” says Nancy Clanton, a lighting engineer, designer, and dark sky advocate.
Some lighting companies have been developing more sophisticated LED lights that help support melatonin production. Lighting engineers at Crossroads LLC and Nichia Corporation have been working on creating LEDs that produce more light in the red range. “We live in a wonderful age of technology that has given us these new LED designs which cut out blue wavelengths entirely for dark-sky friendly lighting purposes,” says Wooten.
Dimming the lights to see better
The IDA and advocates like Turnshek propose that communities turn off unnecessary outdoor lights. According to the Department of Energy, 99 percent of artificial outdoor light produced is wasted; human eyes never use it to navigate their surroundings.
In recent years, major cities like Chicago, Austin, and Philadelphia adopted the “Lights Out” initiative, encouraging communities to turn off unnecessary lights during birds’ peak migration seasons for 10 days at a time. “This poses an important question: if people can live without some lights for 10 days, why can’t they keep them turned off all year round?” says Wilson.
Most communities globally believe that keeping bright outdoor lights on all night increases security and prevents crime. But in her studies of street lights’ brightness levels in different parts of the US — from Alaska to California to Washington — Clanton found that people felt safe and could see clearly even at low or dim lighting levels.
Clanton and colleagues installed LEDs in a Seattle suburb that provided only 25 percent of lighting levels compared to what they used previously. The residents reported far better visibility because the new LEDs did not produce glare. “Visual contrast matters a lot more than lighting levels,” Clanton says. Additionally, motion sensor LEDs for outdoor lighting can go a long way in reducing light pollution.
Flipping a switch to preserve starry nights
Clanton has helped draft laws to reduce light pollution in at least 17 U.S. states. However, poor awareness of light pollution has led to inadequate enforcement of these laws. And getting the thousands of counties and municipalities within any state to comply with these regulations is a Herculean task, Turnshek points out.
Fountain Hills, a small town near Phoenix, Arizona, has rid itself of light pollution since 2018, thanks to the community's efforts to preserve dark skies.
Until LEDs became mainstream, Fountain Hills enjoyed starry skies despite its proximity to Phoenix. A mountain surrounding the town blocks most of the skyglow from the city.
“Light pollution became an issue in Fountain Hills over the years because we were not taking new LED technologies into account. Our town’s lighting code was antiquated and out-of-date,” says Vicky Derksen, a resident who is also a part of the Fountain Hills Dark Sky Association founded in 2017. “To preserve dark skies, we had to work with the entire town to update the local lighting code and convince residents to follow responsible outdoor lighting practices.”
Derksen and her team first tackled light pollution in the town center which has a faux fountain in the middle of a lake. “The iconic centerpiece, from which Fountain Hills got its name, had the wrong types of lighting fixtures, which created a lot of glare,” adds Derksen. They then replaced several other municipal lighting fixtures with dark-sky-friendly LEDs.
The results were awe-inspiring. For the first time in years, residents could see the Milky Way in crystal-clear detail. Stargazing activities made a strong comeback across the town. But keeping light pollution low requires constant work.
Derksen and other residents regularly measure artificial light levels in Fountain Hills. Currently, the only major source of light pollution is from extremely bright, illuminated signs which local businesses had installed in different parts of the town. While Derksen says it is an uphill battle to educate local businesses about light pollution, Fountain Hills residents are determined to protect their dark skies.
“When a river gets polluted, it can take several years before clean-up efforts see any tangible results,” says Derksen. “But the effects are immediate when you work toward reducing light pollution. All it requires is flipping a switch.”