Your Digital Avatar May One Day Get Sick Before You Do
Artificial intelligence is everywhere, just not in the way you think it is.
"There's the perception of AI in the glossy magazines," says Anders Kofod-Petersen, a professor of Artificial Intelligence at the Norwegian University of Science and Technology. "That's the sci-fi version. It resembles the small guy in the movie AI. It might be benevolent or it might be evil, but it's generally intelligent and conscious."
"And this is, of course, as far from the truth as you can possibly get."
What Exactly Is Artificial Intelligence, Anyway?
Let's start with how you got to this piece. You likely came to it online: your Facebook account, your Twitter feed, or perhaps a Google search. AI influences all of those things, with machine learning helping to run the algorithms that decide what you see, when, and where. AI isn't the little humanoid figure; it's the system that controls the figure.
"AI is being confused with robotics," Eleonore Pauwels, Director of the Anticipatory Intelligence Lab with the Science and Technology Innovation Program at the Wilson Center, says. "What AI is right now is a data optimization system, a very powerful data optimization system."
The revolution in recent years hasn't come from the methods scientists and other researchers use. The general ideas and philosophies have been around since the late 1960s. Instead, the big change has been the dramatic increase in computing power, which has made large neural networks practical. These networks, loosely modeled on the human brain, are layers of interconnected nodes that have the ability to "learn." An AI, for example, can be taught to spot a picture of a cat by looking at hundreds of thousands of pictures that have been labeled "cat" and "learning" what a cat looks like. Or an AI can beat a human at Go, an achievement that just five years ago Kofod-Petersen thought wouldn't come for decades.
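The learning loop described above can be sketched in miniature. Here is a toy perceptron, one of the oldest neural-network building blocks, taught to separate two labeled classes. The "whiskers" and "pointy ears" features, and all the numbers, are invented for illustration; real systems learn from hundreds of thousands of images, not six hand-made vectors.

```python
# A minimal sketch of supervised learning: a single perceptron learns
# weights from labeled examples, loosely analogous to "cat" vs. "not cat".

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights from (feature vector, label) pairs, labels in {0, 1}."""
    n = len(samples[0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            activation = sum(w * xi for w, xi in zip(weights, x)) + bias
            prediction = 1 if activation > 0 else 0
            error = y - prediction          # nonzero only on mistakes
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, x):
    """Classify a new feature vector with the learned weights."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

# Hypothetical features: [has_whiskers, ear_pointiness]
cats     = [[1.0, 0.9], [1.0, 0.8], [0.9, 1.0]]   # labeled "cat" (1)
not_cats = [[0.0, 0.1], [0.1, 0.0], [0.0, 0.2]]   # labeled "not cat" (0)
w, b = train_perceptron(cats + not_cats, [1, 1, 1, 0, 0, 0])
print(predict(w, b, [0.95, 0.85]))  # classify a new whiskered, pointy-eared example
```

Note the point Kofod-Petersen makes: the model only ever learns the mapping it is given. Relabel the training data and the same code happily learns the opposite answer.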
"It's very difficult to argue that something is intelligent if it can't learn, and these algorithms are getting pretty good at learning stuff. What they are not good at is learning how to learn."
Medicine is the field where this expertise in perception tasks might have the most influence. It's already having an impact: smartphone apps use AI to flag possible skin cancers, the Apple Watch can alert the wearer to heart rhythm problems, AI spots tuberculosis and the spread of breast cancer with higher accuracy than human doctors, and more. Every few months, another study demonstrates new possibilities. (The New Yorker published an article about medicine and AI last year, so you know it's a serious topic.)
But this is only the beginning. "I personally think genomics and precision medicine is where AI is going to be the biggest game-changer," Pauwels says. "It's going to completely change how we think about health, our genomes, and how we think about our relationship between our genotype and phenotype."
The Fundamental Problem That Must Be Solved
To get there, however, researchers will need to make another breakthrough, and there's debate about how long that will take. Kofod-Petersen explains: "If we want to move from this narrow intelligence to this broader intelligence, that's a very difficult problem. It basically boils down to that we haven't got a clue about what intelligence actually is. We don't know what intelligence means in a biological sense. We think we might recognize it but we're not completely sure. There isn't a working definition. We kind of agree with the biologists that learning is an aspect of it. It's very difficult to argue that something is intelligent if it can't learn, and these algorithms are getting pretty good at learning stuff. What they are not good at is learning how to learn. They can learn specific tasks but we haven't approached how to teach them to learn to learn."
In other words, current AI is very, very good at identifying that a picture of a cat is, in fact, a cat – and getting better at doing so at an incredibly rapid pace – but the system only knows what a "cat" is because that's what a programmer told it a furry thing with whiskers and two pointy ears is called. If the programmer instead decided to label the training images as "dogs," the AI wouldn't say "no, that's a cat." Instead, it would simply call a furry thing with whiskers and two pointy ears a dog. AI systems lack the kind of inference that humans make effortlessly, almost without thinking.
Pauwels believes that the next step is for AI to transition from supervised to unsupervised learning. The latter means that the AI isn't answering questions that a programmer asks it ("Is this a cat?"). Instead, it's almost like it's looking at the data it has, coming up with its own questions and hypotheses, and answering them or putting them to the test. Combining this ability with the frankly insane processing power of modern computer systems could result in game-changing discoveries.
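The supervised/unsupervised distinction Pauwels draws can be illustrated with a classic unsupervised algorithm, k-means clustering, which groups unlabeled points without ever being told what the groups mean. The data, cluster count, and function names below are illustrative only, not anything from Pauwels's lab.

```python
# A minimal sketch of unsupervised learning: k-means clustering discovers
# structure in unlabeled data -- no programmer-supplied answers involved.

def kmeans(points, k=2, iters=10):
    """Partition 2-D points into k clusters; returns the cluster centers."""
    centers = points[:k]                      # naive init: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest current center
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # move each center to the mean of its assigned points
        centers = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers

# Two obvious blobs -- but the algorithm is never told which is which.
data = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0),   # blob near the origin
        (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]   # blob near (5, 5)
centers = kmeans(data)
```

The algorithm finds the two blobs on its own; a human (or another system) still has to decide what those groupings signify, which is why the leap to machines that pose their own hypotheses is the hard part.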
In the not-too-distant future, a doctor could run diagnostics on a digital avatar, watching which medical conditions present themselves before the person gets sick in real life.
One company in China plans to develop a way to create a digital avatar of an individual person, then simulate that person's health and medical information into the future. In the not-too-distant future, a doctor could run diagnostics on that digital avatar, watching which medical conditions present themselves – cancer, a heart condition, or anything, really – and help the real-life patient prevent those conditions from developing or treat them before they become life-threatening.
That, obviously, would be an incredibly powerful technology, and it's just one of the many possibilities that unsupervised AI presents. It's also terrifying in the potential for misuse. Even the term "unsupervised AI" brings to mind a dystopian landscape where AI takes over and enslaves humanity. (Pick your favorite movie. There are dozens.) This is a concern, something for developers, programmers, and scientists to consider as they build the systems of the future.
The Ethical Problem That Deserves More Attention
But the more immediate concern about AI is much more mundane. We think of AI as an unbiased system. That's incorrect. Algorithms, after all, are designed by a person or a team, and those people have explicit or implicit biases. Intentionally, or more likely not, they introduce these biases into the very code that forms the basis for the AI. Facial recognition systems, for example, have shown higher error rates for people of color, and Facebook tried to rectify bias in its own systems and failed. These are two small examples of a larger, potentially systemic problem.
It's vital for the people developing AI today to be aware of these issues. And, yes, to avoid sending us to the brink of a James Cameron movie. But AI is too powerful a tool to ignore. Today, it's identifying cats and on the verge of detecting cancer. In not too many tomorrows, it will be at the forefront of medical innovation. If we are careful, aware, and smart, it will help simulate results, create designer drugs, and revolutionize individualized medicine. "AI is the only way to get there," Pauwels says.
The livestock trucks arrived all night. One after the other they backed up to the wood chute leading to a dusty corral and loosed their cargo — 580 head of cattle by the time the last truck pulled away at 3pm the next afternoon. Dan Probert, astride his horse, guided the cows to paddocks of pristine grassland stretching alongside the snow-peaked Wallowa Mountains. They’d spend the summer here grazing bunchgrass and clovers and biscuitroot. The scuffle of their hooves and nibbles of their teeth would mimic the elk, antelope and bison that are thought to have historically roamed this portion of northeastern Oregon’s Zumwalt Prairie, helping grasses grow and restoring health to the soil.
The cows weren’t Probert’s, although the fifth-generation rancher and one other member of the Carman Ranch Direct grass-fed beef collective also raise their own herds here for part of every year. But in spring, when the prairie is in bloom, Probert receives cattle from several other ranchers. As the grasses wither in October, the cows move on to graze fertile pastures throughout the Columbia Basin, which stretches across several Pacific Northwest states; some overwinter on a vegetable farm in central Washington, feeding on corn leaves and pea vines left behind after harvest.
Sharing land and other resources among farmers isn’t new. But research shows it may be increasingly relevant in a time of climatic upheaval, potentially influencing “farmers to adopt environmentally friendly practices and agricultural innovation,” according to a 2021 paper in the Journal of Economic Surveys. Farmers might share knowledge about reducing pesticide use, says Heather Frambach, a supply chain consultant who works with farmers in California and elsewhere. As a group they may better qualify for grants to monitor soil and water quality.
Most research around such practices applies to cooperatives, whose owner-members equally share governance and profits. But a collective like Carman Ranch’s — spearheaded by fourth-generation rancher Cory Carman, who purchases beef from eight other ranchers to sell under one “regeneratively” certified brand — shows that when producers band together, they can achieve eco-benefits that would be elusive if they worked alone.
Vitamins and minerals in soil pass into plants through their roots, then into cattle as they graze, then back around as the cows walk around pooping.
Carman knows from experience. Taking over her family's land in 2003, she started selling grass-fed beef “because I really wanted to figure out how to not participate in the feedlot world, to have a healthier product. I didn't know how we were going to survive,” she says. Part of her land sits on a degraded portion of Zumwalt Prairie replete with invasive grasses; working to restore it, she thought, “What good does it do to kill myself trying to make this ranch more functional? If you want to make a difference, change has to be more than single entrepreneurs on single pieces of land. It has to happen at a community level.” The seeds of her collective were sown.
Raising 100 percent grass-fed beef requires land that’s got something for cows to graze in every season — which most collective members can’t access individually. So, they move cattle around their various parcels. It’s practical, but it also restores nutrient flows “to the way they used to move, from lowlands and canyons during the winter to higher-up places as the weather gets hot,” Carman says. Meaning, vitamins and minerals in soil pass into plants through their roots, then into cattle as they graze, then back around as the cows walk around pooping.
Cory Carman sells grass-fed beef, which requires land that’s got something for cows to graze in every season.
Courtesy Cory Carman
Each collective member has individual ecological goals: Carman brought in pigs to root out invasive grasses and help natives flourish. Probert also heads a more conventional grain-finished beef collective with 100 members, and their combined 6.5 million ranchland acres were eligible for a grant supporting climate-friendly practices, which compels them to improve soil and water health and biodiversity and make their product “as environmentally friendly as possible,” Probert says. The Washington veg farmer reduced tilling and pesticide use thanks to the ecoservices of visiting cows. Similarly, a conventional hay farmer near Carman has reduced his reliance on fertilizer by letting cattle graze the cover crops he plants on 80 acres.
Additionally, the collective must meet the regenerative standards promised on their label — another way in which they work together to achieve ecological goals. Says David LeZaks, formerly a senior fellow at finance-focused ecology nonprofit Croatan Institute, it’s hard for individual farmers to access monetary assistance. “But it's easier to get financing flowing when you increase the scale with cooperatives or collectives,” he says. “This supports producers in ways that can lead to better outcomes on the landscape.”
New, smaller scale farmers might gain the most from collective and cooperative models.
Collective models can help these farmers minimize waste by using more of an animal, something our frugal ancestors excelled at. Small-scale beef producers normally throw out hides; Thousand Hills’ 50 regenerative beef producers together have enough to sell to Timberland to make carbon-neutral leather. In another case, working collectively supported more diverse farms: Meadowlark Community Mill in Wisconsin went from working with one wheat grower to sourcing from several organic wheat growers marketing flour under one premium brand.
Another example shows how these collaborations can foster greater equity, among other benefits: The Federation of Southern Cooperatives has a mission to support Black farmers as they build community health. It owns several hundred forest acres in Alabama, where it teaches members to steward their own forest land and use it to grow food — one member coop raises goats to graze forest debris and produce milk. Adding the combined acres of member forest land to the Federation’s, the group qualified for a federal conservation grant that will keep this resource available for food production, and community environmental and mental health benefits. “That's the value-add of the collective land-owner structure,” says Dãnia Davy, the Federation’s director of land retention and advocacy.
New, smaller scale farmers might gain the most from collective and cooperative models, says Jordan Treakle, national program coordinator of the National Family Farm Coalition (NFFC). Many of them enter farming specifically to raise healthy food in healthy ways — with organic production, or livestock for soil fertility. With land, equipment and labor prohibitively expensive, farming collectively allows shared costs and risk that buy farmers the time necessary to “build soil fertility and become competitive” in the marketplace, Treakle says. Just keeping them in business is an eco-win; when small farms fail, they tend to get sold for development or absorbed into less-diversified operations, so the effects of their success can “reverberate through the entire local economy.”
Frambach, the supply chain consultant, has been experimenting with what she calls “collaborative crop planning,” where she helps farmers strategize what they’ll plant as a group. “A lot of them grow based on what they hear their neighbor is going to do, and that causes really poor outcomes,” she says. “Nobody replanted cauliflower after the [atmospheric rivers in California] this year and now there's a huge shortage of cauliflower.” A group plan can avoid the under-planting that causes farmers to lose out on revenue.
It helps avoid overplanted crops, too, which small farmers might have to plow under or compost. Larger farmers, conversely, can sell surplus produce into the upcycling market — to Matriark Foods, for example, which turns it into value-add products like pasta sauce for companies like Sysco that supply institutional kitchens at colleges and hospitals. Frambach and Anna Hammond, Matriark’s CEO, want to collectivize smaller farmers so that they can sell to the likes of Matriark and “not lose an incredible amount of income,” Hammond says.
Ultimately, farming is fraught with challenges and even collectivizing doesn’t guarantee that farms will stay in business. But with agriculture accounting for almost 30 percent of greenhouse gas emissions globally, there's an “urgent” need to shift farming practices to more environmentally sustainable models, as well as a “demand in the marketplace for it,” says NFFC’s Treakle. “The growth of cooperative and collective farming can be a huge, huge boon for the ecological integrity of the system.”
Story by Big Think
We live in strange times, when the technology we depend on the most is also that which we fear the most. We celebrate cutting-edge achievements even as we recoil in fear at how they could be used to hurt us. From genetic engineering and AI to nuclear technology and nanobots, the list of awe-inspiring, fast-developing technologies is long.
However, this fear of the machine is not as new as it may seem. Technology has a longstanding alliance with power and the state. The dark side of human history can be told as a series of wars whose victors are often those with the most advanced technology. (There are exceptions, of course.) Science, and its technological offspring, follows the money.
This fear of the machine seems to be misplaced. The machine has no intent: only its maker does. The fear of the machine is, in essence, the fear we have of each other — of what we are capable of doing to one another.
How AI changes things
Sure, you would reply, but AI changes everything. With artificial intelligence, the machine itself will develop some sort of autonomy, however ill-defined. It will have a will of its own. And this will, if it reflects anything that seems human, will not be benevolent. With AI, the claim goes, the machine will somehow know what it must do to get rid of us. It will threaten us as a species.
Well, this fear is also not new. Mary Shelley wrote Frankenstein in 1818 to warn us of what science could do if it served the wrong calling. In the case of her novel, Dr. Frankenstein’s calling was to win the battle against death — to reverse the course of nature. Granted, any cure of an illness interferes with the normal workings of nature, yet we are justly proud of having developed cures for our ailments, prolonging life and increasing its quality. Science can achieve nothing more noble. What messes things up is when the pursuit of good is confused with that of power. In this distorted scale, the more powerful the better. The ultimate goal is to be as powerful as gods — masters of time, of life and death.
Should countries create a World Mind Organization that controls the technologies that develop AI?
Back to AI, there is no doubt the technology will help us tremendously. We will have better medical diagnostics, better traffic control, better bridge designs, and better pedagogical animations to teach in the classroom and virtually. But we will also have better winnings in the stock market, better war strategies, and better soldiers and remote ways of killing. This grants real power to those who control the best technologies. It increases the take of the winners of wars — those fought with weapons, and those fought with money.
A story as old as civilization
The question is how to move forward. This is where things get interesting and complicated. We hear over and over again that there is an urgent need for safeguards, for controls and legislation to deal with the AI revolution. Great. But if these machines are essentially functioning in a semi-black box of self-teaching neural nets, how exactly are we going to make safeguards that are sure to remain effective? How are we to ensure that the AI, with its unlimited ability to gather data, will not come up with new ways to bypass our safeguards, the same way that people break into safes?
The second question is that of global control. As I wrote before, overseeing new technology is complex. Should countries create a World Mind Organization that controls the technologies that develop AI? If so, how do we organize this planet-wide governing board? Who should be a part of its governing structure? What mechanisms will ensure that governments and private companies do not secretly break the rules, especially when to do so would put the most advanced weapons in the hands of the rule breakers? They will need those, after all, if other actors break the rules as well.
As before, the countries with the best scientists and engineers will have a great advantage. A new international détente will emerge in the mold of the nuclear détente of the Cold War. Again, we will fear destructive technology falling into the wrong hands. This can happen easily. AI machines will not need to be built at an industrial scale, as nuclear capabilities were, and AI-based terrorism will be a force to reckon with.
So here we are, afraid of our own technology all over again.
What is missing from this picture? Nothing has changed: it illustrates the same destructive pattern of greed and power that has defined so much of our civilization. The failure it shows is moral, and only we can change it. We define civilization by the accumulation of wealth, and this worldview is killing us. The project of civilization we invented has become self-cannibalizing. As long as we do not see this, and we keep on following the same route we have trodden for the past 10,000 years, it will be very hard to legislate the technology to come and to ensure such legislation is followed. Unless, of course, AI helps us become better humans, perhaps by teaching us how stupid we have been for so long. This sounds far-fetched, given who this AI will be serving. But one can always hope.