Scientists Want to Make Robots with Genomes that Help Grow their Minds

Giving robots self-awareness as they move through space, and perhaps even gene-like methods for storing rules of behavior, could be important steps toward creating more intelligent machines.

Recently, scientists at Columbia University’s Creative Machines Lab set up a robotic arm inside a circle of five streaming video cameras and let the robot watch itself move, turn and twist. For about three hours the robot did exactly that, studying itself this way and that like a toddler exploring itself in a room full of mirrors. By the time the robot stopped, its internal neural network had learned the relationship between the robot’s motor actions and the volume it occupied in its environment. In other words, the robot had built a spatial self-awareness, much as humans do. “We trained its deep neural network to understand how it moved in space,” says Boyuan Chen, one of the scientists who worked on it.
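The Columbia system learned from hours of video and a deep network; the toy sketch below shows only the underlying idea, not the lab's actual method. A simulated two-link arm "babbles" with random motor commands, and a tiny neural network learns where those commands put its hand in space. The link lengths, network size, and learning rate are all invented here for illustration.

```python
# Toy self-model sketch (not the Columbia lab's actual system): a small
# network learns the mapping from a simulated arm's joint angles to the
# position its hand occupies in space.
import numpy as np

rng = np.random.default_rng(0)
L1, L2 = 1.0, 0.8  # link lengths, arbitrary units (assumed for illustration)

def forward_kinematics(angles):
    """Ground-truth hand position for joint angles (the robot's 'body')."""
    t1, t2 = angles[:, 0], angles[:, 1]
    x = L1 * np.cos(t1) + L2 * np.cos(t1 + t2)
    y = L1 * np.sin(t1) + L2 * np.sin(t1 + t2)
    return np.stack([x, y], axis=1)

# "Watching itself move": random motor babbling yields (action, position) pairs.
angles = rng.uniform(-np.pi, np.pi, size=(2000, 2))
positions = forward_kinematics(angles)

# One-hidden-layer network trained by plain gradient descent on squared error.
W1 = rng.normal(0, 0.5, (2, 64)); b1 = np.zeros(64)
W2 = rng.normal(0, 0.5, (64, 2)); b2 = np.zeros(2)
lr = 0.05

losses = []
for step in range(5000):
    h = np.tanh(angles @ W1 + b1)       # hidden activations
    pred = h @ W2 + b2                  # predicted hand positions
    err = pred - positions
    losses.append(float(np.mean(err ** 2)))
    # Backpropagate the squared-error gradient through the hidden layer.
    g2 = err / len(angles)
    g1 = (g2 @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ g2;      b2 -= lr * g2.sum(0)
    W1 -= lr * angles.T @ g1; b1 -= lr * g1.sum(0)

print(f"self-model error: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

In the real experiment the "sensors" were video cameras rather than exact coordinates, but the principle is the same: the network's prediction error shrinks as the robot builds an internal model of where its own body is.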

For decades, robots have been doing helpful tasks that are too hard, too dangerous, or physically impossible for humans to carry out themselves. Robots far surpass humans at complex calculations, following rules to the letter and repeating the same steps flawlessly. But even the biggest successes for human-robot collaboration, those in the manufacturing and automotive industries, still require separating the two for safety reasons. Hardwired for a limited set of tasks, industrial robots don't have the intelligence to know where their robo-parts are in space, how fast they're moving, or when they might endanger a human.

Over the past decade or so, humans have begun to expect more from robots. Engineers have been building smarter versions that can avoid obstacles, follow voice commands, respond to human speech and make simple decisions. Some of them have proved invaluable in natural and man-made disasters like earthquakes, forest fires, nuclear accidents and chemical spills. These disaster-recovery robots have helped clean up dangerous chemicals, searched for survivors in collapsed buildings, and ventured into radioactive areas to assess damage.

Keep Reading
Lina Zeldovich

Lina Zeldovich has written about science, medicine and technology for Popular Science, Smithsonian, National Geographic, Scientific American, Reader’s Digest, the New York Times and other major national and international publications. A Columbia J-School alumna, she has won several awards for her stories, including the ASJA Crisis Coverage Award for Covid reporting, and has been a contributing editor at Nautilus Magazine. In 2021, Zeldovich released her first book, The Other Dark Matter, published by the University of Chicago Press, about the science and business of turning waste into wealth and health. You can find her on http://linazeldovich.com/ and @linazeldovich.

Don’t fear AI, fear power-hungry humans

Story by Big Think

We live in strange times, when the technology we depend on the most is also that which we fear the most. We celebrate cutting-edge achievements even as we recoil in fear at how they could be used to hurt us. From genetic engineering and AI to nuclear technology and nanobots, the list of awe-inspiring, fast-developing technologies is long.

However, this fear of the machine is not as new as it may seem. Technology has a longstanding alliance with power and the state. The dark side of human history can be told as a series of wars whose victors are often those with the most advanced technology. (There are exceptions, of course.) Science, and its technological offspring, follows the money.

This fear of the machine seems to be misplaced. The machine has no intent: only its maker does. The fear of the machine is, in essence, the fear we have of each other — of what we are capable of doing to one another.

Keep Reading
Marcelo Gleiser
Marcelo Gleiser is a professor of natural philosophy, physics, and astronomy at Dartmouth College. He is a Fellow of the American Physical Society, a recipient of the Presidential Faculty Fellows Award from the White House and NSF, and was awarded the 2019 Templeton Prize. Gleiser has authored five books and is the co-founder of 13.8, where he writes about science and culture with physicist Adam Frank.

Interview with Jamie Metzl: We need a global OS upgrade

Jamie Metzl, author of Hacking Darwin, shares his views with Leaps.org on the future of genetics, tech, healthcare and more.

Jamie Metzl

In this Q&A, leading technology and healthcare futurist Jamie Metzl discusses a range of topics and trend lines that will unfold over the next several decades: whether a version of Moore's Law applies to genetic technologies, the ethics of genetic engineering, the dangers of gene hacking, the end of sex, and much more.

Metzl is a member of the WHO expert advisory committee on human genome editing and the bestselling author of Hacking Darwin.

The conversation was lightly edited by Leaps.org for style and length.

Keep Reading
Matt Fuchs
Matt Fuchs is the host of the Making Sense of Science podcast and served previously as the editor-in-chief of Leaps.org. He is a contributing writer for the Washington Post, and his articles have also appeared in the New York Times, WIRED, Nautilus Magazine, Fortune Magazine and TIME Magazine. Follow him @fuchswriter.