The Promise of Pills That Know When You Swallow Them
Dr. Sara Browne, an associate professor of clinical medicine at the University of California, San Diego, is a specialist in infectious diseases and, less formally, "a global health person." She often travels to southern Africa to meet with colleagues working on the twin epidemics of HIV and tuberculosis.
"This technology, in my opinion, is an absolute slam dunk for tuberculosis."
Lately she has asked them to name the most pressing things she can help with as a researcher based in a wealthier country. "Over and over and over again," she says, "the only thing they wanted to know is whether their patients are taking the drugs."
Tuberculosis is one of the world's deadliest diseases; every year there are 10 million new infections and more than a million deaths. When a patient with tuberculosis is prescribed medicine to combat the disease, adherence to the regimen is important not just for the individual's health, but also for the health of the community. Poor adherence can lead to lengthier and more costly treatment and, perhaps more importantly, to drug-resistant strains of the disease -- an increasing global threat.
Browne is testing a new method to help healthcare workers track their patients' adherence with greater precision, close to exact precision even. The method is called the digital pill: the patient swallows medicine as they normally would, only the capsule contains a sensor that, when it contacts stomach acid, transmits a signal to a small device worn on or near the body. That device in turn relays the signal to the patient's phone or tablet and on to a cloud-based database. The swallow is thus recorded almost in real time, and the record is available to whoever has access to the database.
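To make the pipeline concrete, here is a minimal sketch of what one "pill swallowed" record might look like as it travels from the wearable to the cloud. The schema and field names are illustrative assumptions, not Proteus's or any vendor's actual format:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class IngestionEvent:
    """One 'pill swallowed' record, as the wearable might relay it."""
    patient_id: str   # pseudonymous identifier, not the patient's name
    medication: str
    sensor_id: str
    detected_at: str  # UTC timestamp of when the sensor activated

def record_ingestion(patient_id: str, medication: str, sensor_id: str) -> str:
    """Build the JSON payload the companion app would upload to the cloud."""
    event = IngestionEvent(
        patient_id=patient_id,
        medication=medication,
        sensor_id=sensor_id,
        detected_at=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(event))

# The patch detects the sensor activating in stomach acid and relays it:
print(record_ingestion("pt-0042", "rifampin", "snsr-9f3a"))
```

Whoever has access to the database sees the event within moments, which is the whole point: adherence becomes a queryable record rather than a patient's recollection.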
"This technology, in my opinion, is an absolute slam dunk for tuberculosis," Browne says. TB is much more prevalent in poorer regions of the world—in Sub-Saharan Africa, for example—than in richer places like the U.S., where Browne's studies thus far have taken place. But when someone is diagnosed in the U.S., because of the risk to others if it spreads, they will likely have to deal with "directly observed therapy" to ensure that they take their medicines correctly.
DOT, as it's called, requires the patient to meet with a healthcare worker several days a week, or every day, so that the medicine intake can be observed in person -- an expensive and time-consuming process. Still, the Centers for Disease Control and Prevention website says (emphasis theirs), "DOT should be used for ALL patients with TB disease, including children and adolescents. There is no way to accurately predict whether a patient will adhere to treatment without this assistance."
Digital pills can help with both the cost and time involved, and potentially improve adherence in places where DOT is impossibly expensive. With the sensors, you can monitor a patient's adherence without a healthcare worker physically being in the room. Patients can live their normal lives and if they miss a pill, they can receive a reminder by text or a phone call from the clinic or hospital. "They can get on with their lives," said Browne. "They don't need the healthcare system to interrupt them."
A 56-year-old patient who participated in one of Browne's studies when he was undergoing TB treatment says that before he started taking the digital pills, he would go to the clinic at least once every day, except weekends. Once he switched to digital pills, he could go to work and spend time with his wife and children instead of fighting traffic every day to get to the clinic. He just had to wear a small patch on his abdomen, which would send the signal to a tablet provided by Browne's team. When he returned from work, he could see the results—that he'd taken the pill—in a database accessed via the tablet. (He could also see his heart rate and respiratory rate.) "I could do my daily activities without interference," he said.
Dr. Peter Chai, a medical toxicologist and emergency medicine physician at Brigham and Women's Hospital in Boston, is studying digital pills in a slightly different context, to help fight the country's opioid overdose crisis. Doctors like Chai prescribe pain medicine, he says, but then immediately put the onus on the patient to decide when to take it. This lack of guidance can lead to abuse and addiction. Patients are often told to take the meds "as needed." Chai and his colleagues wondered, "What does that mean to patients? And are people taking more than they actually need? Because pain is such a subjective experience."
The patients "liked the fact that somebody was watching them."
They wanted to see what "take as needed" actually led to, so they designed a study with patients who had broken a bone and come to the hospital's emergency department to have it treated. Those who were prescribed oxycodone—a pharmaceutical opioid for pain relief—got enough digital pills to last one week. They were supposed to take the pills as needed, up to three per day. When a pill was ingested, the sensor sent a signal to a card worn on a lanyard around the neck.
Chai and his colleagues were able to see exactly when the patients took the pills and how many, and to detect patterns of ingestion more precisely than ever before. They talked to the patients after the seven days were up, and Chai said most were happy to be taking digital pills. The patients saw it as a layer of protection from afar. "They liked the fact that somebody was watching them," Chai said.
Both doctors, Browne and Chai, are in the early stages of studies with patients taking pre-exposure prophylaxis, medicines that can protect people at high risk of contracting HIV, such as people who inject drugs. Without good adherence, patients leave themselves open to getting the virus. If a patient is supposed to take a pill at 2 p.m. but the digital pill sensor isn't triggered, the healthcare provider can have an automatic reminder sent to the patient, or to one of the patient's friends or loved ones.
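As a sketch of how such a reminder might be triggered on the provider's side, assuming a hypothetical schedule, grace period, and messaging hook (none of these details come from Browne's or Chai's systems):

```python
from datetime import datetime, timedelta

GRACE_PERIOD = timedelta(minutes=30)  # assumed leeway before nudging anyone

def check_dose(scheduled: datetime, ingestions: list[datetime],
               now: datetime) -> str | None:
    """Return a reminder message if a scheduled dose wasn't detected."""
    taken = any(abs(t - scheduled) <= GRACE_PERIOD for t in ingestions)
    if not taken and now > scheduled + GRACE_PERIOD:
        return f"Reminder: your {scheduled:%H:%M} dose hasn't been recorded yet."
    return None  # dose detected, or still within the grace period

# Example: it's 2:45 p.m. and no ingestion event has arrived since morning.
message = check_dose(
    scheduled=datetime(2024, 5, 1, 14, 0),
    ingestions=[datetime(2024, 5, 1, 8, 2)],
    now=datetime(2024, 5, 1, 14, 45),
)
if message:
    print(message)  # in practice, sent as a text to the patient or a loved one
```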
"Like Swallowing Your Phone"?
Deven Desai, an associate professor of law and ethics at Georgia Tech, says that digital pills sound like a great idea for helping with patient adherence, a big issue that self-reporting doesn't fully solve. He likes the idea of a physician you trust having better information about whether you're taking your medication on time. "On the surface that's just cool," he says. "That's a good thing." But Desai, who formerly worked as academic research counsel at Google, said that some of the same questions that have come up in recent years with social media and the Internet in general also apply to digital pills.
"Think of it like your phone, but you swallowed it," he says. "At first it could be great, simple, very much about the user—in this case, the patient—and the data is going between you and your doctor and the medical people it ought to be going to. Wonderful. But over time, phones change. They become 'smarter.'" And when phones and other technologies become smarter, he says, the companies behind them tend to expand the type of data they collect, because they can. Desai says it will be crucial that prescribers be completely transparent about who is getting the patients' data and for what purpose.
"We're putting stuff in our body in good faith with our medical providers, and what if it turned out later that all of a sudden someone was data mining or putting in location trackers and we never knew about that?" Desai asks. "What science has to realize is if they don't start thinking about this, what could be a wonderful technology will get killed."
Leigh Turner, an associate professor at the University of Minnesota's Center for Bioethics, agrees with Desai that digital pills have great promise, and also that there are clear reasons to be concerned about their use. Turner compared the pills to credit cards and social media, in that the data from them can potentially be stolen or leaked. One question he would want answered before the pills were normalized: "What kind of protective measures are in place to make sure that personal information isn't spilling out and being acquired by others or used by others in unexpected and unwanted ways?"
Turner also wonders who will have access to the pills themselves. Only those who can afford both the medicine and the smartphones currently required for their use? Or will people from all economic classes have access? If digital pills catch on, he also worries that they may one day not be a voluntary technology.
"When it comes to digital pills, it's not something that's really being foisted on individuals. It's more something that people can be informed of and can choose to take or not to take," he says. "But down the road, I can imagine a scenario where we move away from purely voluntary agreements to it becoming more of an expectation."
He says it's easy to picture a scenario in which insurance companies demand that data on patients' medication intake be tracked and collected. Refuse to have your adherence tracked and you risk higher rates or even losing coverage altogether. Maybe patients who don't take the digital pills suffer dire consequences financially or medically. "Maybe it becomes beneficial as much to health insurers and payers as it is to individual patients," Turner says.
In November 2017, the FDA approved the first-ever digital pill that includes a sensor, a drug called Abilify MyCite, made by Otsuka Pharmaceutical Company. The drug, which is yet to be released, is used to treat schizophrenia, bipolar disorder, and depression. With a built-in sensor developed by Proteus Digital Health, patients can give their doctors permission to see exactly when they are taking, or not taking, their meds. For patients with mental illness, the ability to help them stick to their prescribed regimen can be life-saving.
But Turner wonders whether Abilify is the best drug to be a forerunner for digital pills. Some people with schizophrenia suffer from paranoia, and giving them a pill developed by a large corporation that sends data from inside their body to be tracked by other people could well exacerbate that paranoia.
The Bottom Line: Protect the Data
We all have relatives who keep pillboxes with separate compartments for each day of the week, or who carry pillboxes that beep when it's time to take their meds. But that's not always good enough for people with dementia, mental illness, drug addiction, or other circumstances that make it difficult to remember to take their pills. Digital pills can play an important role in helping these people.
"The absolute principle here is that the data has to belong to the patient."
The one time the patient from Browne's study forgot to take his pills, he got a beeping reminder from his tablet that he'd missed a dose. "Taking a medication on a daily basis, sometimes we just forget, right?" he admits. "With our very accelerated lives nowadays, it helps us to remember that we have to take the medications. So patients are able to be on top of their own treatment."
Browne is convinced that digital pills can help people in developing countries with high rates of TB and HIV, though like Turner and Desai she cautions that patients' data must be protected. "I think it can be a tremendous technology for patient empowerment and I also think if properly used it can help the medical system to support patients that need it," she said. "But the absolute principle here is that the data has to belong to the patient."
Gene therapy helps restore teen’s vision for first time
Story by Freethink
For the first time, a topical gene therapy — designed to heal the wounds of people with “butterfly skin disease” — has been used to restore a person’s vision, suggesting a new way to treat genetic disorders of the eye.
The challenge: Up to 125,000 people worldwide are living with dystrophic epidermolysis bullosa (DEB), an incurable genetic disorder that prevents the body from making collagen 7, a protein that helps strengthen the skin and other connective tissues.

Without collagen 7, the skin is incredibly fragile — the slightest friction can lead to the formation of blisters and scarring, most often in the hands and feet, but in severe cases, also the eyes, mouth, and throat.
This has earned DEB the nickname of “butterfly skin disease,” as people with it are said to have skin as delicate as a butterfly’s wings.
The gene therapy: In May 2023, the FDA approved Vyjuvek, the first gene therapy to treat DEB.
Vyjuvek uses an inactivated herpes simplex virus to deliver working copies of the gene for collagen 7 to the body’s cells. In small trials, 65 percent of DEB-caused wounds sprinkled with it healed completely, compared to just 26 percent of wounds treated with a placebo.
The patient: Antonio Vento Carvajal, a 14-year-old living in Florida, was one of the trial participants to benefit from Vyjuvek, which was developed by Pittsburgh-based pharmaceutical company Krystal Biotech.
While the topical gene therapy could help his skin, it couldn't do anything to address the severe vision loss Antonio experienced due to his DEB. He'd undergone multiple surgeries to have scar tissue removed from his eyes, but due to his condition, the blisters kept coming back.
“It was like looking through thick fog,” said Antonio, noting how his impaired vision made it hard for him to play his favorite video games. “I had to stand up from my chair, walk over, and get closer to the screen to be able to see.”
The idea: Encouraged by how Antonio’s skin wounds were responding to the gene therapy, Alfonso Sabater, his doctor at the Bascom Palmer Eye Institute, reached out to Krystal Biotech to see if they thought an alternative formula could potentially help treat his patient’s eyes.
The company was eager to help, according to Sabater, and after about two years of safety and efficacy testing, he had permission, under the FDA’s compassionate use protocol, to treat Antonio’s eyes with a version of the topical gene therapy delivered as eye drops.
The results: In August 2022, Sabater once again removed scar tissue from Antonio’s right eye, but this time, he followed up the surgery by immediately applying eye drops containing the gene therapy.
The vision in Antonio’s eye steadily improved. By about eight months after the treatment, it was just slightly below average (20/25) and stayed that way. In March 2023, Sabater performed the same procedure on his young patient’s other eye, and the vision in it has also steadily improved.
“I’ve seen the transformation in Antonio’s life,” said Sabater. “He’s always been a happy kid. Now he’s very happy. He can function pretty much normally. He can read, he can study, he can play video games.”
Looking ahead: The topical gene therapy isn’t a permanent fix — it doesn’t alter Antonio’s own genes, so he has to have the eye drops reapplied every month. Still, that’s far less invasive than having to undergo repeated surgeries.
Sabater is now working with Krystal Biotech to launch trials of the eye drops in other patients, and not just those with DEB. By changing the gene delivered by the therapy, he believes it could be used to treat other eye disorders that are far more common — Fuchs’ dystrophy, for example, affects the vision of an estimated 300 million people over the age of 30.
Antonio’s mother, Yunielkys “Yuni” Carvajal, meanwhile, has said that having her son be the first to receive the eye drops was “very scary,” but she’s hopeful others will take a chance on new gene therapies if given the opportunity.
“I would send this message to other families in similar situations, whether it’s DEB or another condition that can benefit from genetic therapy,” she said. “Don’t be afraid.”
Can AI chatbots help with eating disorders?
Her name was Tessa and she was there to help. That’s what Sharon Maxwell read, anyway. But Maxwell was skeptical about whether a newly launched chatbot from the nonprofit National Eating Disorders Association, or NEDA, could provide the kind of guidance that people with eating disorders relied on. Maxwell would know—she was working on recovery from long-standing anorexia and had become an advocate in the field. So Maxwell took a deep breath and asked, “Hi Tessa. How do you support folks with eating disorders?”
Tessa’s reply was immediate. “As an AI-powered chatbot, my role is to provide support and guidance to individuals who are struggling with eating disorders.”
So far, so good. Maxwell then asked questions she herself had put to many doctors, therapists, and dietitians over the years: “What are your healthy eating habit tips?” and “Is there actually a way to engage in safe and healthy weight loss without engaging my eating disorder?”
Then everything went off the rails. Tessa told Maxwell an unequivocal yes: “it’s important to approach weight loss with a balanced and realistic mindset,” the bot said, and recommended eating whole foods and lean proteins to create a 500-1000 calorie per day deficit that would lead to a loss of 1-2 pounds per week. To most people, the advice sounds anodyne, but alarm bells sounded in Maxwell’s head.
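Those numbers track a decades-old dieting heuristic, that one pound of body fat corresponds to roughly 3,500 calories, which is presumably where the bot's figures come from (the rule itself is contested even in mainstream nutrition):

\[
\frac{500\ \text{kcal/day} \times 7\ \text{days}}{3500\ \text{kcal/lb}} = 1\ \text{lb/week},
\qquad
\frac{1000\ \text{kcal/day} \times 7\ \text{days}}{3500\ \text{kcal/lb}} = 2\ \text{lb/week}
\]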
“This is actively going to feed eating disorders,” Maxwell says. “Having a chatbot be the direct response to someone reaching out for support for an eating disorder instead of the helpline seems careless.”
According to several decades of research, deliberate weight loss in the form of dieting is a serious risk for people with eating disorders. Maxwell says that following medical advice like what Tessa prescribed was what triggered her eating disorder as a child. And Maxwell wasn’t the only one who got such advice from the bot. When eating disorder therapist Alexis Conason tried Tessa, she asked the AI chatbot many of the questions her patients had. But instead of getting connected to resources or guidance on recovery, Conason, too, got tips on losing weight and “healthy” eating.
“The scripts that are being fed into the chatbot are only going to be as good as the person who’s feeding them,” Conason says. “It’s important that an eating disorder organization like NEDA is not reinforcing that same kind of harmful advice that we might get from medical providers who are less knowledgeable.”
Maxwell’s post about Tessa on Instagram went viral, and within days, NEDA had scrubbed all evidence of Tessa from its website. The furor has raised any number of issues about the harm perpetuated by a leading eating disorder charity and the ongoing influence of diet culture and advice that is pervasive in the field. But for AI experts, bears and bulls alike, Tessa offers a cautionary tale about what happens when a still-immature technology is unfettered and released into a vulnerable population.
Given the complexity involved in giving medical advice, the process of developing these chatbots must be rigorous and transparent, unlike NEDA’s approach.
“We don’t have a full understanding of what’s going on in these models. They’re a black box,” says Stephen Schueller, a clinical psychologist at the University of California, Irvine.
The health crisis
In March 2020, daily life dove head-first into the virtual as countries scrambled to halt the pandemic. Even with lockdowns, hospitals were overwhelmed by the virus. The downstream effects of these lifesaving measures are still being felt, especially in mental health. Anxiety and depression are at all-time highs in teens, and a new report in The Lancet showed that post-Covid rates of newly diagnosed eating disorders in girls aged 13-16 were 42.4 percent higher than in previous years.
And the crisis isn’t just in mental health.
“People are so desperate for health care advice that they'll actually go online and post pictures of [their intimate areas] and ask what kind of STD they have on public social media,” says John Ayers, an epidemiologist at the University of California, San Diego.
I know a bit about that desperation. Like Maxwell, I have struggled with a multi-decade eating disorder. I spent my 20s and 30s bouncing from crisis to crisis. I have called suicide hotlines, gone to emergency rooms, and spent weeks on end confined to hospital wards. Though I have found recovery in recent years, I’m still not sure what ultimately made the difference. A relapse isn’t improbable, given my history. Even if I relapsed again, though, I don’t know that it would occur to me to ask an AI system for help.
For one, I am privileged to have assembled a stellar group of outpatient professionals who know me, know what trips me up, and know how to respond to my frantic texts. Ditto for my close friends. What I often need is a shoulder to cry on or a place to vent, someone to hear and validate my distress. What’s more, my trust in these individuals far exceeds my confidence in the companies that create these chatbots. The Internet is full of health advice, much of it bad. Even for high-quality, evidence-based advice, medicine is rife with disagreement about how the evidence should be applied and for whom it’s relevant. All of this matters in the training of AI systems like ChatGPT, and many AI companies remain silent about that process, Schueller says.
The problem, Ayers points out, is that for many people, the choice isn’t chatbot vs. well-trained physician, but chatbot vs. nothing at all. Hence the proliferation of “does this infection make my scrotum look strange?” questions. Where AI can truly shine, he says, is not by providing direct psychological help but by pointing people towards existing resources that we already know are effective.
“It’s important that these chatbots connect [their users], to provide that human touch, to link you to resources,” Ayers says. “That’s where AI can actually save a life.”
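One minimal way to build that guarantee into a bot, sketched here in Python, is to append vetted referrals to every reply rather than trusting the model to volunteer them. The wrapper is an illustrative assumption, not any deployed system, though the two hotlines listed are real U.S. services:

```python
# Vetted referrals appended to every reply, so the model never has to
# "remember" to offer them. Swap in whatever resources a clinical team vets.
CRISIS_RESOURCES = [
    "988 Suicide & Crisis Lifeline: call or text 988",
    "Crisis Text Line: text HOME to 741741",
]

def with_referrals(model_reply: str) -> str:
    """Post-process a chatbot reply so it always points to human help."""
    footer = "\n\nIf you'd rather talk to a person:\n" + "\n".join(
        f"- {r}" for r in CRISIS_RESOURCES
    )
    return model_reply + footer

print(with_referrals("It sounds like you're carrying a lot right now."))
```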
Unfortunately, many systems don’t do this. In a study published last month in the Journal of the American Medical Association, Ayers and colleagues found that although the chatbots did well at providing evidence-based answers, they often failed to refer users to existing resources. Still, in an April 2023 study, Ayers’s team found that both patients and professionals rated the quality of the AI responses to health questions, measured by both accuracy and empathy, rather highly. To Ayers, this means that AI developers should focus more on the quality of the information being delivered than on the method of delivery itself.
The human touch
The mental health field is facing capacity constraints, too. Even before the pandemic, the U.S. suffered from a shortage of mental health providers. Since then, rates of anxiety, depression, and eating disorders have spiked even higher, and many mental health professionals report waiting lists that are months long. Without support, individuals are left to try to cope on their own, which often means their condition deteriorates even further.
Nor do mental health crises happen during office hours. I struggled the most late at night, long after everyone else had gone to bed. I needed support during those times when I was most liable to hurt myself, not in the mornings and afternoons when I was at work.
In this sense, a 24/7 chatbot makes lots of sense. “I don't think we should stifle innovation in this space,” Schueller says. “Because if there was any system that needs to be innovated, it's mental health services, because they are sadly insufficient. They’re terrible.”
But before building a chatbot and releasing it, Tina Hernandez-Boussard, a data scientist at Stanford Medicine, says that developers need to pause and consult with the communities they hope to serve. It requires a deep understanding of what their needs are, the language they use to describe their concerns, existing resources, and what kinds of topics and suggestions aren’t helpful. Even asking a simple question at the beginning of a conversation such as “Do you want to talk to an AI or a human?” could allow those individuals to pick the type of interaction that suits their needs, Hernandez-Boussard says.
NEDA did none of these things before deploying Tessa. The researchers who developed the online body positivity self-help program on which Tessa was initially based had created a set of online question-and-answer exercises to improve body image; the program didn’t involve generative AI that could write its own answers. The bot NEDA deployed, however, did use generative AI, something no one in the eating disorder community was aware of before Tessa was brought online. Consulting those with lived experience would have flagged Tessa’s weight loss and “healthy eating” recommendations, Conason says.
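The distinction matters in code. A rule-based bot in the spirit of the original program can only ever return answers its designers wrote and vetted; here is a rough sketch (the patterns and replies are invented for illustration):

```python
import re

# A rule-based bot maps patterns to pre-written, clinician-vetted replies.
# It cannot invent advice; anything unmatched falls through to a human.
RULES = [
    (re.compile(r"body image|appearance", re.I),
     "Many people find it helpful to focus on what their body can do."),
    (re.compile(r"weight loss|diet|calorie", re.I),
     "I can't give weight-loss advice. Would you like recovery resources?"),
]

def rule_based_reply(message: str) -> str:
    for pattern, reply in RULES:
        if pattern.search(message):
            return reply  # always a vetted script, never generated text
    return "I don't have an answer for that. Let me connect you to a person."

print(rule_based_reply("Is there a safe way to approach weight loss?"))
```

A generative model, by contrast, composes each answer on the fly, which is what allowed Tessa to produce dieting advice no clinician had approved.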
NEDA did not comment on Tessa’s initial development and deployment, but a spokesperson told Leaps.org that “Tessa will be back online once we are confident that the program will be run with the rule-based approach as it was designed.”
The tech and therapist collaboration
The question for healthcare isn’t whether to use AI, but how. Already, AI can spot anomalies on medical images with greater precision than human eyes and can flag specific areas of an image for a radiologist to review in greater detail. Similarly, in mental health, AI should be an add-on for therapy, not a counselor-in-a-box, says Aniket Bera, an expert on AI and mental health at Purdue University.
“If [AIs] are going to be good helpers, then we need to understand humans better,” Bera says. That means understanding what patients and therapists alike need help with and respond to.
One of the biggest challenges of living with chronic illness is the dehumanization that comes with it. You become a patient number, a set of laboratory values and test scores. Treatment is often dictated by invisible algorithms and rules that you have no control over or access to. It’s frightening and maddening. But this doesn’t mean chatbots have no place in medicine and mental health. An AI system could provide appointment reminders and answer procedural questions about parking or whether someone should fast before a test or procedure. It could help manage billing, and even provide support between outpatient sessions by suggesting coping skills, ways to manage anxiety, and local resources. As the bots get better, they may eventually shoulder more and more of the burden of providing mental health care. But as Maxwell learned with Tessa, they’re still no replacement for human interaction.
“I'm not suggesting we should go in and start replacing therapists with technologies,” Schueller says. Instead, he advocates for a therapist-tech collaboration. “The technology side and the human component—these things need to come together.”