The New Prospective Parenthood: When Does More Info Become Too Much?
Peggy Clark was 12 weeks pregnant when she went in for a nuchal translucency (NT) scan to see whether her unborn son had Down syndrome. The sonographic scan measures how much fluid has accumulated at the back of the baby's neck: the more fluid, the higher the likelihood of an abnormality. The technician said the baby was in such an odd position, the test couldn't be done. Clark, whose name has been changed to protect her privacy, was told to come back in a week and a half to see if the baby had moved.
"With the growing sophistication of prenatal tests, it seems that the more questions are answered, the more new ones arise."
"It was like the baby was saying, 'I don't want you to know,'" she recently recalled.
When they went back, they found the baby had a thickened neck. It's just one factor in identifying Down's, but it's a strong indication. At that point, she was 13 weeks and four days pregnant. She went to the doctor the next day for a blood test. It took another two weeks for the results, which again came back positive, though there was still a 0.3% margin of error. Clark said she knew she wanted to terminate the pregnancy if the baby had Down's, but she didn't want the guilt of knowing there was a small chance the tests were wrong. By then, she was too late for chorionic villus sampling (CVS), in which chorionic villus cells are removed from the placenta and sequenced. And she was too early for an amniocentesis, which isn't done until between 14 and 20 weeks of pregnancy. So, she says, she had to sit and wait, calling those few weeks "brutal."
By the time they did the amnio, she was already nearly 18 weeks pregnant and was getting really big. When that test also came back positive, she made the anguished decision to end the pregnancy.
Now, three years after Clark's painful experience, a newer form of prenatal testing routinely gives would-be parents more information much earlier on, especially women who are over 35. As early as nine weeks into a pregnancy, women can have a simple blood test to determine if there are abnormalities in chromosome 21, which indicates Down syndrome, as well as in chromosomes 13 and 18. Using next-generation sequencing technologies, the test isolates and examines cell-free fetal DNA circulating in the mother's blood, which eliminates the risks of sampling fluid or tissue directly from the amniotic sac or placenta.
"Finding out your baby has Down syndrome at 11 or 12 weeks is much easier for parents to make any decision they may want to make, as opposed to 16 or 17 weeks," said Dr. Leena Nathan, an obstetrician-gynecologist in UCLA's healthcare system. "People are much more willing or able to perhaps make a decision to terminate the pregnancy."
But with the growing sophistication of prenatal tests, it seems that the more questions are answered, the more new ones arise: questions that previous generations never had to face. And as genomic sequencing improves its predictive accuracy at the earliest stages of life, the challenges only stand to increase. Imagine, for example, learning your child's lifetime risk of breast cancer when you are ten weeks pregnant. Would you terminate if you knew she had a 70 percent risk? What about 40 percent? Lots of hard questions. Few easy answers. Once the cost of whole-genome sequencing drops low enough, probably within the next five to ten years according to experts, such comprehensive testing may become the new standard of care. Welcome to the future of prospective parenthood.
"In one way, it's a blessing to have this information. On the other hand, it's very difficult to deal with."
How Did We Get Here?
Prenatal testing is not new. In 1979, amniocentesis was used to detect whether certain inherited diseases had been passed on to the fetus. Through the 1980s, parents could be tested to see if they carried diseases like Tay-Sachs, sickle cell anemia, cystic fibrosis and Duchenne muscular dystrophy. By the early 1990s, doctors could test for even more genetic diseases, and the CVS test was beginning to become available.
A few years later, a technique called preimplantation genetic diagnosis (PGD) emerged, in which embryos created in a lab with sperm and harvested eggs are allowed to grow for several days; cells are then removed and tested to see if any carry genetic diseases. Those that aren't affected can be transferred to the mother. Once in vitro fertilization (IVF) took off, so did genetic testing. Labs test the embryonic cells and return results to the IVF facilities within 24 hours so that embryo selection can occur. Because these tests are done so early, parents don't even have to decide whether to terminate a pregnancy. Embryos with issues often aren't used at all.
"It was a very expensive endeavor but exciting to see our ability to avoid disorders, especially for families that don't want to terminate a pregnancy," said Sara Katsanis, an expert in genetic testing who teaches at Duke University. "In one way, it's a blessing to have this information (about genetic disorders). On the other hand, it's very difficult to deal with. To make that decision about whether to terminate a pregnancy is very hard."
Just Because We Can, Does It Mean We Should?
Parents in the future may not only find out whether their child has a genetic disease but may also be able to fix the problem through a highly controversial process called gene editing. But just because we can, does it mean we should? So far, genes have been edited in other species, but to date the procedure has been used on human embryos only in research, never for reproduction.
"There's a lot of bioethics debate and convening of groups to try to figure out where genetic manipulation is going to be useful and necessary, and where it is going to need some restrictions," said Katsanis. She notes that it's very useful in areas like cancer research, so one wouldn't want to over-regulate it.
There are already some criteria as to which genes can be manipulated and which should be left alone, said Evan Snyder, professor and director of the Center for Stem Cells and Regenerative Medicine at Sanford Children's Health Research Center in La Jolla, Calif. He noted that genes don't stand in isolation. That is, if you modify one that causes disease, will it disrupt others? There may be unintended consequences, he added.
"As the technical dilemmas get fixed, some of the ethical dilemmas get fixed. But others arise. It's kind of like ethical whack-a-mole."
But gene editing of embryos may take years to become an acceptable practice, if it ever does, so a more pressing issue concerns the rationale behind embryo selection during IVF. Prospective parents can end up with anywhere from zero to thirty embryos from the procedure and must choose only one (rarely two) to implant. Since embryos are already routinely tested for certain diseases, and selected or discarded based on that information, is it ethical—and legal—to make selections based on particular traits, too? So far, parents can select for gender, but no other traits. Whether trait selection becomes routine is a matter of time and business opportunity, Katsanis said. For now, the old-fashioned way of making a baby, combined with the luck of the draw, seems to be the method the marketplace prefers. But that could change.
"You can easily see a family deciding not to implant a lethal gene for Tay-Sachs or Duchene or Cystic fibrosis. It becomes more ethically challenging when you make a decision to implant girls and not any of the boys," said Snyder. "And then as we get better and better, we can start assigning genes to certain skills and this starts to become science fiction."
Once a pregnancy occurs, prospective parents of all stripes will face decisions about whether to keep the fetus based on the information that increasingly robust prenatal testing will provide. What influences their decision is the crux of another ethical knot, said Snyder. A clear-cut rationale would be if the baby is anencephalic, meaning it has no brain. A harder one might be, "It's a girl, and I wanted a boy," or "The child will only be 5'2" tall in adulthood."
"Those are the extremes, but the ultimate question is: At what point is it a legitimate response to say, I don't want to keep this baby?'" he said. Of course, people's responses will vary, so the bigger conundrum for society is: Where should a line be drawn—if at all? Should a woman who is within the legal scope of termination (up to around 24 weeks, though it varies by state) be allowed to terminate her pregnancy for any reason whatsoever? Or must she have a so-called "legitimate" rationale?
"As the technical dilemmas get fixed, some of the ethical dilemmas get fixed. But others arise. It's kind of like ethical whack-a-mole," Snyder said.
One of the newer moles to emerge: if one can fix a damaged gene, for how long should it stay fixed? In one child? In the family's whole line, going forward? If the editing is done in the embryo right after egg and sperm have united and before the cells begin dividing and specializing—when there are, say, just two or four cells—it will likely reach the cells of the child's reproductive system, and thus all of that child's progeny going forward.
"This notion of changing things forever is a major debate," Snyder said. "It literally gets into metaphysics. On the one hand, you could say, well, wouldn't it be great to get rid of Cystic fibrosis forever? What bad could come of getting rid of a mutant gene forever? But we're not smart enough to know what other things the gene might be doing, and how disrupting one thing could affect this network."
As with any tool, there are risks and benefits, said Michael Kalichman, director of the Research Ethics Program at the University of California San Diego. While we can envision diverse benefits from a better understanding of human biology and medicine, it is clear that our species can also misuse these tools – from stigmatizing children with certain genetic traits as "less than," as in dystopian sci-fi movies like Gattaca, to judging parents for ensuring their child carries, or doesn't carry, a particular trait.
"The best chance to ensure that the benefits of this technology will outweigh the risks," Kalichman said, "is for all stakeholders to engage in thoughtful conversations, strive for understanding of diverse viewpoints, and then develop strategies and policies to protect against those uses that are considered to be problematic."
Autonomous, indoor farming gives a boost to crops
The glass-encased cabinet looks like a display meant to hold reasonably priced watches, or drugstore beauty creams shipped from France. But instead of this stagnant merchandise, each of its five shelves is overgrown with leaves — moss-soft pea sprouts, spikes of Lolla rosa lettuces, pale bok choy, dark kale, purple basil or red-veined sorrel or green wisps of dill. The glass structure isn’t a cabinet, but rather a “micro farm.”
The gadget is on display at the Richmond, Virginia headquarters of Babylon Micro-Farms, a company that aims to make indoor farming in the U.S. more accessible and sustainable. Babylon’s soilless hydroponic growing system, which feeds plants via nutrient-enriched water, allows chefs on cruise ships, cafeterias and elsewhere to provide home-grown produce to patrons, just seconds after it’s harvested. Currently, there are over 200 functioning systems, either sold or leased to customers, and more of them are on the way.
The chef-farmers choose from among 45 types of herb and leafy-greens seeds, plop them into grow trays, and a few weeks later they pick and serve. While success is predicated on at least a small amount of human care, the systems are surveilled round-the-clock from Babylon's base of operations. And artificial intelligence is helping to run the show.
Babylon piloted the use of specialized cameras that take pictures in different spectrums to gather some less-obvious visual data about plants’ wellbeing and alert people if something seems off.
Imagine consistently perfect greens and tomatoes and strawberries, grown hyper-locally, using less water, without chemicals or environmental contaminants. This is the hefty promise of controlled environment agriculture (CEA) — basically, indoor farms that can be hydroponic, aeroponic (plant roots are suspended and fed through misting), or aquaponic (where fish play a role in fertilizing vegetables). But whether they grow 4,160 leafy-green servings per year, like one Babylon farm, or millions of servings, like some of the large, centralized facilities starting to supply supermarkets across the U.S., they seek to minimize failure as much as possible.
Babylon’s soilless hydroponic growing system
Courtesy Babylon Micro-Farms
Here, AI is starting to play a pivotal role. CEA growers use it to help “make sense of what’s happening” to the plants in their care, says Scott Lowman, vice president of applied research at the Institute for Advanced Learning and Research (IALR) in Virginia, a state that’s investing heavily in CEA companies. And although these companies say they’re not aiming for a future with zero human employees, AI is certainly poised to take a lot of human farming intervention out of the equation — for better and worse.
Most of these companies are compiling their own data sets to identify anything that might block the success of their systems. Babylon had already integrated sensor data into its farms to measure heat and humidity, the nutrient content of water, and the amount of light plants receive. Last year, they got a National Science Foundation grant that allowed them to pilot the use of specialized cameras that take pictures in different spectrums to gather some less-obvious visual data about plants’ wellbeing and alert people if something seems off. “Will this plant be healthy tomorrow? Are there things…that the human eye can't see that the plant starts expressing?” says Amandeep Ratte, the company’s head of data science. “If our system can say, Hey, this plant is unhealthy, we can reach out to [users] preemptively about what they’re doing wrong, or is there a disease at the farm?” Ratte says. The earlier the better, to avoid crop failures.
Natural light accounts for 70 percent of Greenswell Growers’ energy use on a sunny day.
Courtesy Greenswell Growers
IALR’s Lowman says that other CEA companies are developing their AI systems to account for the different crops they grow — lettuces come in all shapes and sizes, after all, and each has different growing needs than, for example, tomatoes. The ways they run their operations differ, too. Babylon is unusual in its decentralized structure, but centralized growing systems with one main location have variabilities as well. AeroFarms, which recently declared bankruptcy but will continue to run its 140,000-square-foot vertical operation in Danville, Virginia, is entirely enclosed and reliant on the intense violet glow of grow lights to produce microgreens.
Different companies have different data needs. What data is essential to AeroFarms isn’t quite the same as for Greenswell Growers, located in Goochland County, Virginia. Greenswell raises four kinds of lettuce in a 77,000-square-foot automated hydroponic greenhouse, so the vagaries of naturally available light, which accounts for 70 percent of the company’s energy use on a sunny day, affect its operations. Its tech needs to account for “outside weather impacts,” says president Carl Gupton. “What adjustments do we have to make inside of the greenhouse to offset what's going on outside environmentally, to give that plant optimal conditions? When it's 85 percent humidity outside, the system needs to do X, Y and Z to get the conditions that we want inside.”
AI will help identify diseases, as well as when a plant is thirsty or overly hydrated, when it needs more or less calcium, phosphorus, nitrogen.
Nevertheless, every CEA system has the same core need — a consistent yield of high-quality crops to keep up year-round supply to customers. Additionally, “Everybody’s got the same set of problems,” Gupton says. Pests may come into a facility with seeds. Pythium, one of the most common pathogens in CEA, can damage plant roots. “Then you have root disease pressures that can also come internally — a change in [growing] substrate can change the way the plant performs,” Gupton says.
AI will help identify diseases, as well as when a plant is thirsty or overly hydrated, or when it needs more or less calcium, phosphorus or nitrogen. So, while companies amass their own hyper-specific data sets, Lowman foresees a time within the next decade “when there will be some type of [open-source] database that has the most common types of plant stress identified” that growers will be able to tap into. Such databases will “create a community and move the science forward,” says Lowman.
In fact, IALR is working on assembling images for just such a database now. On so-called “smart tables” inside an Institute lab, a team is growing greens and subjecting them to various stressors, then administering treatments while taking images of every plant every 15 minutes, says Lowman. Some experiments generate 80,000 images; the challenge lies in analyzing and annotating the vast trove of them, marking each one to reflect the outcome — for example, an increase in phosphate delivery and the plant’s response to it. Eventually, the images will be fed into AI systems to help them learn.
For all the enthusiasm surrounding this technology, it’s not without downsides. Training just one AI system can emit over 250,000 pounds of carbon dioxide, according to MIT Technology Review. AI could also be used “to enhance environmental benefit for CEA and optimize [its] energy consumption,” says Rozita Dara, a computer science professor at the University of Guelph in Canada, specializing in AI and data governance, “but we first need to collect data to measure [it].”
The chef-farmers can choose from 45 types of herb and leafy-greens seeds.
Courtesy Babylon Micro-Farms
Any system connected to the Internet of Things is also vulnerable to hacking; if CEA grows to the point where “there are many of these similar farms, and you're depending on feeding a population based on those, it would be quite scary,” Dara says. And there are privacy concerns, too, in systems where imaging is happening constantly. It’s partly for this reason, says Babylon’s Ratte, that the company’s in-farm cameras all “face down into the trays, so the only thing [visible] is pictures of plants.”
Tweaks to improve AI for CEA are happening all the time. Greenswell made its first harvest in 2022 and now has a year’s worth of data points it can use to start making more intelligent choices about how to feed, water, and supply light to plants, says Gupton. Ratte says he’s confident Babylon’s system can already “get our customers reliable harvests. But in terms of how far we have to go, it's a different problem,” he says. For example, if AI could detect that a farm is mostly empty — meaning the farm’s user hasn’t planted a new crop of greens — it could alert Babylon to check on “what's going on with engagement with this user?” Ratte says. “Do they need more training? Did the main person responsible for the farm quit?”
Lowman says more automation is coming, offering greater ability for systems to identify problems and mitigate them on the spot. “We still have to develop datasets that are specific, so you can have a very clear control plan, [because] artificial intelligence is only as smart as what we tell it, and in plant science, there's so much variation,” he says. He believes AI’s next level will be “looking at those first early days of plant growth: when the seed germinates, how fast it germinates, what it looks like when it germinates.” Imaging all that and pairing it with AI, “can be a really powerful tool, for sure.”
Scientists make progress with growing organs for transplants
Story by Big Think
For over a century, scientists have dreamed of growing human organs sans humans. This technology could put an end to the scarcity of organs for transplants. But that’s just the tip of the iceberg. The capability to grow fully functional organs would revolutionize research. For example, scientists could observe mysterious biological processes, such as how human cells and organs develop a disease and respond (or fail to respond) to medication without involving human subjects.
Recently, a team of researchers from the University of Cambridge has laid the foundations not just for growing functional organs but functional synthetic embryos capable of developing a beating heart, gut, and brain. Their report was published in Nature.
The organoid revolution
In 1981, scientists discovered how to keep stem cells alive. This was a significant breakthrough, as stem cells have notoriously rigorous demands. Nevertheless, stem cells remained a relatively niche research area, mainly because scientists didn’t know how to convince the cells to turn into other cells.
Then, in 1987, scientists embedded isolated stem cells in a gelatinous protein mixture called Matrigel, which simulated the three-dimensional environment of animal tissue. The cells thrived, but they also did something remarkable: they created breast tissue capable of producing milk proteins. This was the first organoid — a clump of cells that behave and function like a real organ. The organoid revolution had begun, and it all started with a boob in Jello.
For the next 20 years, it was rare to find a scientist who identified as an “organoid researcher,” but there were many “stem cell researchers” who wanted to figure out how to turn stem cells into other cells. Eventually, they discovered the signals (called growth factors) that stem cells require to differentiate into other types of cells.
For a human embryo (and its organs) to develop successfully, there needs to be a “dialogue” between these three types of stem cells.
By the end of the 2000s, researchers began combining stem cells, Matrigel, and the newly characterized growth factors to create dozens of organoids, from liver organoids capable of producing the bile salts necessary for digesting fat to brain organoids with components that resemble eyes, the spinal cord, and arguably, the beginnings of sentience.
Synthetic embryos
Organoids possess an intrinsic flaw: they are organ-like. They share some characteristics with real organs, making them powerful tools for research. However, no one has found a way to create an organoid with all the characteristics and functions of a real organ. But Magdalena Żernicka-Goetz, a developmental biologist, might have set the foundation for that discovery.
Żernicka-Goetz hypothesized that organoids fail to develop into fully functional organs because organs develop as a collective. Organoid research often uses embryonic stem cells, which are the cells from which the developing organism is created. However, there are two other types of stem cells in an early embryo: stem cells that become the placenta and those that become the yolk sac (where the embryo grows and gets its nutrients in early development). For a human embryo (and its organs) to develop successfully, there needs to be a “dialogue” between these three types of stem cells. In other words, Żernicka-Goetz suspected the best way to grow a functional organoid was to produce a synthetic embryoid.
As described in the aforementioned Nature paper, Żernicka-Goetz and her team mimicked the embryonic environment by mixing these three types of stem cells from mice. Amazingly, the stem cells self-organized into structures and progressed through the successive developmental stages until they had beating hearts and the foundations of the brain.
“Our mouse embryo model not only develops a brain, but also a beating heart [and] all the components that go on to make up the body,” said Żernicka-Goetz. “It’s just unbelievable that we’ve got this far. This has been the dream of our community for years and major focus of our work for a decade and finally we’ve done it.”
If the methods developed by Żernicka-Goetz’s team are successful with human stem cells, scientists someday could use them to guide the development of synthetic organs for patients awaiting transplants. It also opens the door to studying how embryos develop during pregnancy.