I'll never forget the experience of having a child in the neonatal intensive care unit (NICU).
It was another layer of uncertainty that filtered into my experience of being a first-time parent. There was so much I didn't know, and the wires attached to my son's small body for the first week of his life were a reminder of that.
I wanted to be the best mother possible. I deeply desired to bring my son home to start our lives. More than anything, I longed for a wireless baby whom I could hold and love freely without limitations.
The wires suggested my baby was fragile, and that left me feeling severely unprepared, anxious, and depressed.
In recent years, research has documented the ways that NICU experiences take a toll on parents' mental health. But thankfully, medical technology is rapidly being developed to help reduce the emotional fallout of the NICU. Now more than ever, we're working to remove the barriers between new parents and their infants. The latest example is the first ever wireless monitoring system that was recently developed by a team at Northwestern University.
After listening to the needs of parents and medical staff, Debra Weese-Mayer, M.D., a professor of pediatric autonomic medicine at Feinberg School of Medicine, along with a team of materials scientists, engineers, dermatologists and pediatricians, set out to develop this potentially life-changing technology. Weese-Mayer believes wireless monitoring will have a significant impact for people on all sides of the NICU experience.
"With elimination of the cumbersome wires," she says, "the parents will find their infant more approachable/less intimidating and have improved access to their long-awaited but delivered-too-early infant, allowing them to begin skin-to-skin contact and holding with reduced concern for dislodging wires."
So how does the new system work?
Very thin, skin-like patches made of silicone rubber are placed on the surface of the skin to monitor vitals like heart rate, respiration rate, and body temperature. One patch is placed on the chest or back, and the other on the foot.
These patches are gentler on the skin than previously used adhesives, reducing the cuts and infections associated with past methods. An antenna, often placed under the mattress, continuously delivers power to the patches.
The data collected from the patches stream from the body to a tablet or computer.
New wireless sensor technology is being studied to replace wired monitoring in NICUs in the coming years.
(Northwestern University)
Weese-Mayer hopes that wireless systems will be standard soon, but first they must undergo more thorough testing. "I would hope that in the next five years, wireless monitoring will be the standard in NICUs, but there are many essential validation steps before this technology will be embraced nationally," she says.
Until the new systems are ready, parents will be left struggling with the obstacles that wired monitoring presents.
Physical intimacy, for example, appears to have pain-reducing qualities -- something that is particularly important for babies who are battling serious illness. But wires make those cuddles more challenging.
There's also been minimal discussion about how wired monitoring can be particularly limiting for parents with disabilities, parents who use mobility aids, or even those recovering from C-sections.
"When he was first born and I was recovering from my c-section, I couldn't deal with keeping the wires untangled while trying to sit down without hurting myself," says Rhiannon Giles, a writer from North Carolina, who delivered her son at just over 31 weeks after suffering from severe preeclampsia.
"The wires were awful," she remembers. "They fell off constantly when I shifted positions or he kicked a leg, which meant the monitors would alarm. It felt like an intrusion into the quiet little world I was trying to mentally create for us."
Over the last few years, researchers have begun to dive deeper into the literal and metaphorical challenges of wired monitoring.
For many parents, the wires prompt anxiety that worsens an already tense and vulnerable time.
"Seeing my five-pound babies covered in wires from head to toe rendered me completely overwhelmed," recalls Caila Smith, a mom of five from Indiana, whose NICU experience began when her twins were born pre-term. "The nurses seemed to handle them perfectly, but I was scared to touch them while they appeared so medically frail."
During the nine days it took for both twins to come home, the limited access she had to her babies started to impact her mental health. "If we would've had wireless sensors and monitors, it would've given us a much greater sense of freedom and confidence when snuggling our newborns," Smith says.
Besides enabling more natural interactions, wireless monitoring would make basic caregiving tasks much easier, like putting on a onesie.
"One thing I noticed is that many preemie outfits are made with zippers," points out Giles, "which just don't work well when your baby has wires coming off of them, head to toe."
Wired systems can pose issues for medical staff as well as parents.
"The main concern regarding wired systems is that they restrict access to the baby and often get tangled with other equipment, like IV lines," says Lamia Soghier, Medical Director of the Neonatal Intensive Care Unit at Children's National in Washington, D.C., who was also a NICU parent herself. "The nurses have to untangle the wires, which takes time, before handing the baby to the family."
I'll never forget the first time I got to hold my son without wires. It was the first time that motherhood felt manageable, and I couldn't stop myself from crying. Suddenly, anything felt possible and all the limitations from that first week of life seemed to fade away. The rise of wire-free monitoring will make some of the stressors that accompany NICU stays a thing of the past.
New Tech Can Predict Breast Cancer Years in Advance
Every two minutes, a woman is diagnosed with breast cancer. The question is, can those at high risk be identified early enough to survive?
The current standard practice in medicine is not exactly precise. It relies on age, family history of cancer, and breast density, among other factors, to determine risk. But these factors do not always tell the whole story, leaving many women to slip through the cracks. In addition, a racial gap persists in breast cancer treatment and survival. African-American women are 42 percent more likely to die from the disease despite relatively equal rates of diagnosis.
But now those grim statistics could be changing. A team of researchers from MIT's Computer Science and Artificial Intelligence Laboratory has developed a deep learning model that more accurately predicts a patient's breast cancer risk than established clinical guidelines do – and it has predicted risk equally well in both white and black women for the first time.
The Lowdown
Study results published in Radiology described how the AI software read mammogram images from more than 60,000 patients at Massachusetts General Hospital to identify subtle differences in breast tissue that pointed to potential risk factors, even in their earliest stages. The team accessed the patients' actual diagnoses and determined that the AI model was able to correctly place 31 percent of all cancer patients in the highest-risk category of developing breast cancer within five years of the examination, compared to just 18 percent for existing models.
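The comparison in that result amounts to asking: of the patients who actually developed cancer within five years, what fraction had the model placed in its highest-risk category? A minimal sketch of that metric follows; the scores, labels, and cutoff fraction are invented for illustration and are not the study's data or code.

```python
# Sketch: "share of true cancer cases placed in the highest-risk bucket".
# All numbers below are made up for illustration.

def top_bucket_recall(scores, got_cancer, top_fraction=0.1):
    """Fraction of eventual cancer cases whose predicted risk score
    fell within the top `top_fraction` of all scored patients."""
    n_top = max(1, int(len(scores) * top_fraction))
    # Rank patient indices from highest to lowest predicted risk.
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    top_set = set(ranked[:n_top])
    cases = [i for i, c in enumerate(got_cancer) if c]
    hits = sum(1 for i in cases if i in top_set)
    return hits / len(cases)

scores     = [0.9, 0.2, 0.8, 0.1, 0.4, 0.7, 0.3, 0.6, 0.05, 0.5]
got_cancer = [1,   0,   0,   0,   1,   0,   0,   0,   0,    0]

# One of the two eventual cancer cases lands in the top 20% of scores.
print(top_bucket_recall(scores, got_cancer, top_fraction=0.2))  # → 0.5
```

A higher value means the model is concentrating the patients who will actually get sick into the group flagged for closest follow-up, which is exactly the 31-percent-versus-18-percent comparison reported above.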
"Each image has hundreds of thousands of pixels identifying something that may not necessarily be detected by the human eye," said MIT professor Regina Barzilay, one of the study's lead authors. "We all have limited visual capacities so it seems some machines trained on hundreds of thousands of images with a known outcome can capture correlations the human eye might not notice."
Barzilay, a breast cancer survivor herself, had abnormal tissue patterns on mammograms in 2012 and 2013, but wasn't diagnosed until after a 2014 image reading, illustrating the limitations of human processing alone.
MIT professor Regina Barzilay, a lead author on the new study and a breast cancer survivor herself.
(Courtesy MIT)
Next up: The MIT team is looking at training the model to detect other cancers and health risks. Barzilay recalls how a cardiologist told her during a conference that women with heart diseases had a different pattern of calcification on their mammograms, demonstrating how already existing images can be used to extract other pieces of information about a person's health status.
Integration of the AI model in standard care could help doctors better tailor screening and prevention programs based on actual instead of perceived risk. Patients who might register as higher risk by current guidelines could be identified as lower risk, helping resolve conflicting opinions about how early and how often women should receive mammograms.
Open Questions: While the results were promising, it's unknown how well the model will work on a larger scale, as the study drew all of its mammograms from a single institution, Massachusetts General Hospital. Some risk factor information was also unavailable for certain patients during the study, leaving researchers unable to fully compare the AI model's performance to that of the traditional standard.
One incentive for wider implementation and study, however, is that no new hardware is required to use the AI model. With other institutions now showing interest, this software could lead to earlier routine detection and treatment of breast cancer — resulting in more lives saved.
Dadbot, Wifebot, Friendbot: The Future of Memorializing Avatars
In 2016, when my family found out that my father was dying from cancer, I did something that at the time felt completely obvious: I started building a chatbot replica of him.
I was not under any delusion that the Dadbot, as I soon began calling it, would be a true avatar of him. From my research about the voice computing revolution—Siri, Alexa, the Google Assistant—I knew that fully humanlike AIs, like you see in the movies, were a long way from technological reality. Replicating my dad in any real sense was never the goal, anyway; that notion gave me the creeps.
Instead, I simply wanted to create an interactive way to share key parts of his life story: facts about his ancestors in Greece. Memories from growing up. Stories about his hobbies, family life, and career. And I wanted the Dadbot, which sent text messages and audio clips over Facebook Messenger, to remind me of his personality—warm, erudite, and funny. So I programmed it to use his distinctive phrasings; to tell a few of his signature jokes and sing his favorite songs.
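The authoring itself happened in PullString, but the underlying pattern of a bot like this is simple: match something in what the user says, then answer with pre-written lines in the subject's own voice. Here is a minimal sketch of that pattern; the keywords and responses are invented for illustration and are not my father's actual words or PullString's API.

```python
import random

# Illustrative rule-based memorial chatbot: keyword triggers mapped to
# pre-written responses (all invented here for the sake of the sketch).
RULES = [
    (("greece", "ancestors"), ["Our family came from a small village in Greece..."]),
    (("joke",), ["Ah, you want the classic? Stop me if you've heard it..."]),
    (("song", "sing"), ["You know I can't resist that one. *clears throat*"]),
]
FALLBACK = "Tell me more."

def reply(message):
    """Return the first matching canned response, or a fallback line."""
    text = message.lower()
    for keywords, responses in RULES:
        if any(k in text for k in keywords):
            return random.choice(responses)
    return FALLBACK

print(reply("Tell me about our ancestors"))
```

A real authoring platform adds conversation state, follow-up prompts, and media attachments on top of this, but the core remains hand-written content retrieved by matching, not generated speech, which is why a bot like this can share stories without pretending to think.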
While creating the Dadbot, a laborious undertaking that sprawled into 2017, I fixated on two things. The first was getting the programming right, which I did using a conversational agent authoring platform called PullString. The second, far more wrenching concern was my father's health. Failing to improve after chemotherapy and immunotherapy, and steadily losing energy, weight, and the animating sparkle of life, he died on February 9.
John Vlahos at a family reunion in the summer of 2016, a few months after his cancer diagnosis.
(Courtesy James Vlahos)
After a magazine article that I wrote about the Dadbot came out in the summer of 2017, messages poured in from readers. While most people simply expressed sympathy, some conveyed a more urgent message: They wanted their own memorializing chatbots. One man implored me to make a bot for him; he had been diagnosed with cancer and wanted his six-month-old daughter to have a way to remember him. A technology entrepreneur needed advice on replicating what I did for her father, who had stage IV cancer. And a teacher in India asked me to engineer a conversational replica of her son, who had recently been struck and killed by a bus.
Journalists from around the world also got in touch for interviews, and they inevitably came around to the same question. Will virtual immortality, they asked, ever become a business?
The prospect of this happening had never crossed my mind. I was consumed by my father's struggle and my own grief. But the notion has since become head-slappingly obvious. I am not the only person to confront the loss of a loved one; the experience is universal. And I am not alone in craving a way to keep memories alive. Of course people like the ones who wrote me will get Dadbots, Mombots, and Childbots of their own. If a moonlighting writer like me can create a minimum viable product, then a company employing actual computer scientists could do much more.
But this prospect raises unanswered and unsettling questions. For businesses, profit, and not some deeply personal mission, will be the motivation. This shift will raise issues that I didn't have to confront. To make money, a virtual immortality company could follow the lucrative but controversial business model that has worked so well for Google and Facebook. To wit, a company could provide the memorializing chatbot for free and then find ways to monetize the attention and data of whoever communicated with it. Given the copious amount of personal information flowing back and forth in conversations with replica bots, this would be a data gold mine for the company—and a massive privacy risk for users.
Alternately, a company could charge for memorializing avatars, perhaps with an annual subscription fee. This would put the business in a powerful position. Imagine the fee getting hiked each year. A customer like me would find himself facing a terrible decision—grit my teeth and keep paying, or be forced to pull the plug on the best, closest reminder of a loved one that I have. The same person would effectively wind up dying twice.
Another way that a beloved digital avatar could die is if the company that creates it ceases to exist. This is no mere academic concern for me: Earlier this year, PullString was swallowed up by Apple. I'm still able to access the Dadbot on my own computer, fortunately, but the acquisition means that other friends and family members can no longer chat with him remotely.
Startups like PullString, of course, are characterized by impermanence; they tend to get snapped up by bigger companies or run out of venture capital and fold. But even if big players like, say, Facebook or Google get into the virtual immortality game, we can't count on them existing even a few decades from now, which means that the avatars enabled by their technology would die, too.
The permanence problem is the biggest hurdle faced by the fledgling enterprise of virtual immortality. So some entrepreneurs are attempting to enable avatars whose existence isn't reliant upon any one company or set of computer servers. "By leveraging the power of blockchain and decentralized software to replicate information, we help users create avatars that live on forever," says Alex Roy, the founder and CEO of the startup Everlife.ai. But until this type of solution exists, give props to conventional technology for preserving memories: printed photos and words on paper can last for centuries.
The fidelity of avatars—just how lifelike they are—also raises serious concerns. Before I started creating the Dadbot, I worried that the tech might be just good enough to remind my family of the man it emulated, but so far off from my real father that it gave us all the creeps. But because the Dadbot was a simple chatbot and not some all-knowing AI, and because the interface was a messaging app, there was no danger of him encroaching on the reality of my actual dad.
But virtual immortality as commercial product will doubtless become more sophisticated. Avatars will have brains built by teams of computer scientists employing the latest techniques in conversational AI. The replicas will not just text but also speak, using synthetic voices that emulate the ones of the people being memorialized. They may even come to life as animated clones on computer screens or in 3D with the help of virtual reality headsets.
These are all lines that I don't personally want to cross; replicating my dad was never the goal. I also never aspired to have some synthetic version of him that continued to exist in the present, capable of acquiring knowledge about the world or my life and of reacting to it in real time.
Instead, what fascinates me is how technology can help to preserve the past—genuine facts and memories from people's lives—and their actual voices so that their stories can be shared interactively after they have gone. I'm working on ideas for doing this via voice computing platforms like Alexa and Assistant, and while I don't have all of the answers yet, I'm excited to figure out what might be possible.
[Adapted from Talk to Me: How Voice Computing Will Transform the Way We Live, Work, and Think (Houghton Mifflin Harcourt, March 26, 2019).]