As More People Crowdfund Medical Bills, Beware of Dubious Campaigns
Nearly a decade ago, Jamie Anderson hit his highest weight ever: 618 pounds. Depression drove him to eat and eat. He tried all kinds of diets, losing and regaining weight again and again. Then, four years ago, a friend nudged him to join a gym, and with a trainer's guidance, he embarked on a life-altering path.
"The big catalyst for all of this is, I was diagnosed as a diabetic," says Anderson, a 46-year-old sales associate in the auto care department at Walmart. Within three years, he was down to 276 pounds but left with excess skin, which sagged from his belly to his mid-thighs.
Plastic surgery would cost $4,000 more than the sum his health insurance approved. That's when Anderson, who lives in Cabot, Arkansas, a suburb outside of Little Rock, turned to online crowdfunding to raise money. In a few months last year, current and former co-workers and friends of friends came up with that amount, covering the remaining expenses for the tummy tuck and overnight hospital stay.
The crowdfunding site that he used, CoFund Health, aimed to give his donors some peace of mind about where their money was going. Unlike GoFundMe and other platforms that don't restrict how donations are spent, Anderson's funds were loaded on a debit card that only worked at health care providers, so the donors "were assured that it was for medical bills only," he says.
CoFund Health was started in January 2019 in response to concerns about the legitimacy of many medical crowdfunding campaigns. As crowdfunding for health-related expenses has gained more traction on social media sites, with countless campaigns seeking to subsidize the high costs of care, it has given rise to some questionable transactions and legitimate ethical concerns.
Common examples of alleged fraud have involved misusing the donations for nonmedical purposes, feigning or embellishing the story of one's own unfortunate plight or that of another person, or impersonating someone else with an illness. Ethicists become particularly alarmed when medical crowdfunding appeals are for scientifically unfounded and potentially harmful interventions.
About 20 percent of American adults reported giving to a crowdfunding campaign for medical bills or treatments, according to a survey by AmeriSpeak Spotlight on Health from NORC, formerly called the National Opinion Research Center, a non-partisan research institution at the University of Chicago. The self-funded poll, conducted in November 2019, included 1,020 interviews with a representative sample of U.S. households. Researchers cited a 2019 City University of New York-Harvard study, which noted that medical bills are the most common basis for declaring personal bankruptcy.
Some experts contend that crowdfunding platforms should serve as gatekeepers in prohibiting campaigns for unproven treatments. Facing a dire diagnosis, individuals may go out on a limb to try anything and everything to prolong and improve the quality of their lives.
They may enroll in well-designed clinical trials, or they could fall prey "to snake oil being sold by people out there just making a buck," says Jeremy Snyder, a health sciences professor at Simon Fraser University in British Columbia, Canada, and the lead author of a December 2019 article in The Hastings Center Report about crowdfunding for dubious treatments.
For instance, crowdfunding campaigns have sought donations for homeopathic healing for cancer, unapproved stem cell therapy for central nervous system injury, and extended antibiotic use for chronic Lyme disease, according to an October 2018 report in the Journal of the American Medical Association.
Ford Vox, the lead author and an Atlanta-based physician specializing in brain injury, maintains that a repository should exist to monitor the outcomes of experimental treatments. "At the very least, there ought to be some tracking of what happens to the people the funds are being raised for," he says. "It would be great for an independent organization to do so."
The Federal Trade Commission, the national consumer watchdog, cautions online that "it might be impossible for you to know if the cause is real and if the money actually gets to the intended recipient." Another caveat: Donors can't deduct contributions to individuals on tax returns.
"Even if it appears like a good cause, consumers should still do some research before donating to a crowdfunding campaign," says Malini Mithal, associate director of financial practices at the FTC. "Don't assume all medical treatments are tested and safe."
Before making any donation, it would be wise to check whether a crowdfunding site offers some sort of guarantee if a campaign ends up being fraudulent, says Kristin Judge, chief executive and founder of the Cybercrime Support Network, a Michigan-based nonprofit that serves victims before, during, and after an incident. Donors should also find out how the campaign organizer is related to the intended recipient and note whether any direct family members and friends have given funds and left supportive comments.
Donating to vetted charities offers more assurance than crowdfunding that the money will be channeled toward helping someone in need, says Daniel Billingsley, vice president of external affairs for the Oklahoma Center of Nonprofits. "Otherwise, you could be putting money into all sorts of scams." There is "zero accountability," he says, for the crowdfunding site or the recipient to provide proof that the dollars were indeed funneled into health-related expenses.
Even if donors may have limited recourse against scammers, the "platforms have an ethical obligation to protect the people using their site from fraud," says Bryanna Moore, a postdoctoral fellow at Baylor College of Medicine's Center for Medical Ethics and Health Policy. "It's easy to take advantage of people who want to be charitable."
There are "different layers of deception" on a broad spectrum of fraud, ranging from "outright lying for a self-serving reason" to publicizing an imaginary illness to collect money genuinely needed for basic living expenses. With medical campaigns being a top category among crowdfunding appeals, it's "a lot of money that's exchanging hands," Moore says.
The advent of crowdfunding "reveals and, in some ways, reinforces a health care system that is totally broken," says Jessica Pierce, a faculty affiliate in the Center for Bioethics and Humanities at the University of Colorado Anschutz Medical Campus in Denver. "The fact that people have to scrounge for money to get life-saving treatment is unethical."
Crowdfunding also highlights socioeconomic and racial disparities by giving an unfair advantage to those who are social-media savvy and capable of crafting a compelling narrative that attracts donors. Privacy issues enter into the picture as well, because telling that narrative entails revealing personal details, Pierce says, particularly when it comes to children, "who may not be able to consent at a really informed level."
CoFund Health, the crowdfunding site on which Anderson raised the money for his plastic surgery, offers to help people write their campaigns and copy edit for proper language, says Matthew Martin, co-founder and chief executive officer. Like other crowdfunding sites, it retains a few percent of the donations for each campaign. Martin is the husband of Anderson's acquaintance from high school.
So far, the site, which is based in Raleigh, North Carolina, has hosted about 600 crowdfunding campaigns, some completed and some still in progress. Campaigns have raised as little as $300 to cover immediate dental expenses and as much as $12,000 for cancer treatments, Martin says, but most have set a goal between $5,000 and $10,000.
The services could be cosmetic—for example, a breast enhancement or reduction, laser procedures for the eyes or skin, and chiropractic care. A number of campaigns have sought funding for transgender surgeries, which many insurers consider optional, he says.
In July 2019, a second site was hatched out of pet owners' requests for assistance with their dogs' and cats' medical expenses. Money raised on CoFund My Pet can only be used at veterinary clinics. Martin says the debit card would be declined at other merchants, just as its CoFund Health counterpart for humans will be rejected at places other than health care facilities, dental and vision providers, and pharmacies.
Whether someone's campaign is based on fact or fiction remains for prospective donors to decide. If a donor were to regret a transaction, the site would reach out to the campaign's owner but ultimately couldn't force a refund, Martin explains, because "it's hard to chase down fraud without having access to people's health records."
In some crowdfunding campaigns, the individual needs some or all the donated resources to pay for travel and lodging at faraway destinations to receive care, says Snyder, the health sciences professor and crowdfunding report author. He suggests people only give to recipients they know personally.
"That may change the calculus a little bit," tipping the decision in favor of donating, he says. As long as the treatment isn't harmful, the funds are a small gesture of support. "There's some value in that for preserving hope or just showing them that you care."
One day in the recent past, scientists at Columbia University’s Creative Machines Lab set up a robotic arm inside a circle of five streaming video cameras and let the robot watch itself move, turn and twist. For about three hours the robot did exactly that—it looked at itself this way and that, like toddlers exploring themselves in a room full of mirrors. By the time the robot stopped, its internal neural network had finished learning the relationship between the robot’s motor actions and the volume it occupied in its environment. In other words, the robot had built a spatial self-awareness, just as humans do. “We trained its deep neural network to understand how it moved in space,” says Boyuan Chen, one of the scientists who worked on it.
For decades robots have been doing helpful tasks that are too hard, too dangerous, or physically impossible for humans to carry out themselves. Robots are ultimately superior to humans in complex calculations, following rules to a tee and repeating the same steps perfectly. But even the biggest successes for human-robot collaborations—those in manufacturing and automotive industries—still require separating the two for safety reasons. Hardwired for a limited set of tasks, industrial robots don't have the intelligence to know where their robo-parts are in space, how fast they’re moving and when they can endanger a human.
Over the past decade or so, humans have begun to expect more from robots. Engineers have been building smarter versions that can avoid obstacles, follow voice commands, respond to human speech and make simple decisions. Some of them proved invaluable in many natural and man-made disasters like earthquakes, forest fires, nuclear accidents and chemical spills. These disaster recovery robots helped clean up dangerous chemicals, looked for survivors in crumbled buildings, and ventured into radioactive areas to assess damage.
Now roboticists are going a step further, training their creations to do even better: understand their own image in space and interact with humans like humans do. Today, there are already robot-teachers like KeeKo, robot-pets like Moffin, robot-babysitters like iPal, and robotic companions for the elderly like Pepper.
But even these reasonably intelligent creations still have huge limitations, some scientists think. “There are niche applications for the current generations of robots,” says professor Anthony Zador at Cold Spring Harbor Laboratory—but they are not “generalists” who can do varied tasks all on their own, as they mostly lack the abilities to improvise, make decisions based on a multitude of facts or emotions, and adjust to rapidly changing circumstances. “We don’t have general purpose robots that can interact with the world. We’re ages away from that.”
Robotic spatial self-awareness – the achievement by the team at Columbia – is an important step toward creating more intelligent machines. Hod Lipson, professor of mechanical engineering who runs the Columbia lab, says that future robots will need this ability to assist humans better. Knowing how you look and where your parts are in space decreases the need for human oversight. It also helps the robot to detect and compensate for damage and keep up with its own wear and tear. And it allows robots to realize when something is wrong with them or their parts. “We want our robots to learn and continue to grow their minds and bodies on their own,” Chen says. That’s what Zador wants too—and on a much grander level. “I want a robot who can drive my car, take my dog for a walk and have a conversation with me.”
Columbia scientists have trained a robot to become aware of its own "body," so it can map the right path to touch a ball without running into an obstacle, in this case a square.
Jane Nisselson and Yinuo Qin/ Columbia Engineering
Today’s technological advances are making some of these leaps of progress possible. One of them is so-called Deep Learning—a method that trains artificial intelligence systems to learn and use information much as humans do. Described as a machine learning method based on neural network architectures with multiple layers of processing units, Deep Learning has been used to successfully teach machines to recognize images, understand speech and even write text.
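The "multiple layers" idea can be made concrete with a toy example. The sketch below is a hypothetical illustration, not any of the systems described in this article: a tiny two-layer network trained by backpropagation to learn XOR, a function that no single layer of processing units can represent on its own.

```python
import numpy as np

# Toy two-layer network learning XOR -- a minimal sketch of "multiple
# layers of processing units," not a production Deep Learning system.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # inputs
y = np.array([[0.], [1.], [1.], [0.]])                  # XOR targets

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)   # hidden layer
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)   # output layer

for _ in range(10000):
    h = np.tanh(X @ W1 + b1)                     # hidden activations
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))       # sigmoid output
    grad_out = (out - y) / len(X)                # cross-entropy gradient
    grad_h = grad_out @ W2.T * (1 - h ** 2)      # backpropagate one layer
    W2 -= 0.5 * (h.T @ grad_out); b2 -= 0.5 * grad_out.sum(0)
    W1 -= 0.5 * (X.T @ grad_h);  b1 -= 0.5 * grad_h.sum(0)

print((out > 0.5).ravel())  # predictions after training
```

A single-layer network cannot separate XOR's classes; the hidden layer is what buys the extra expressive power—the same principle, scaled up enormously, behind image and speech recognition.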
Trained by Google, one of these machine learning language geniuses, BERT, can finish sentences. Another, called GPT-3, designed by the San Francisco-based company OpenAI, can write little stories. Yet both of them still make mistakes in their linguistic exercises that even a child wouldn’t. According to a paper published by Stanford’s Center for Research on Foundation Models, BERT seems to not understand the word “not.” When asked to fill in the word after “A robin is a __” it correctly answers “bird.” But try inserting the word “not” into that sentence (“A robin is not a __”) and BERT still completes it the same way. Similarly, in one of its stories, GPT-3 wrote that if you mix a spoonful of grape juice into your cranberry juice and drink the concoction, you die. It seems that robots, and artificial intelligence systems in general, are still missing some rudimentary facts of life that humans and animals grasp naturally and effortlessly.
It's not exactly the robots’ fault. Compared to humans, and all other organisms that have been around for thousands or millions of years, robots are very new. They are missing out on eons of evolutionary data-building. Animals and humans are born with the ability to do certain things because those abilities are pre-wired in them. Flies know how to fly, fish know how to swim, cats know how to meow, and babies know how to cry. Yet flies don’t really learn to fly, fish don’t learn to swim, cats don’t learn to meow, and babies don’t learn to cry—they are born able to execute such behaviors because they’re preprogrammed to do so. All of that happens thanks to millions of years of evolution wired into their respective genomes, which give rise to the brain’s neural networks responsible for these behaviors. Robots are the newbies, missing out on that trove of information, Zador argues.
A neuroscience professor who studies how brain circuitry generates various behaviors, Zador has a different approach to developing the robotic mind. Until their creators figure out a way to imbue robots with that evolutionary information, he says, each model will remain quite limited in its abilities: it will only be able to do the things it was programmed to do and will never go above and beyond its original code. So, Zador argues, we have to start giving robots a genome.
How does one do that? Zador has an idea. We can’t really equip machines with real biological nucleotide-based genes, but we can mimic the neuronal blueprint those genes create. Genomes lay out rules for brain development. Specifically, the genome encodes blueprints for wiring up our nervous system—the details of which neurons are connected, the strength of those connections and other specs that will later hold the information learned throughout life. “Our genomes serve as blueprints for building our nervous system and these blueprints give rise to a human brain, which contains about 100 billion neurons,” Zador says.
If you think about what a genome is, he explains, it is essentially a very compact and compressed form of information storage. Conceptually, genomes are similar to CliffsNotes and other study guides. When students read these short summaries, they know about what happened in a book, without actually reading that book. And that’s how we should be designing the next generation of robots if we ever want them to act like humans, Zador says. “We should give them a set of behavioral CliffsNotes, which they can then unwrap into brain-like structures.” Robots that have such brain-like structures will acquire a set of basic rules to generate basic behaviors and use them to learn more complex ones.
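The compression idea can be sketched in code. The hypothetical Python example below—an illustrative assumption, not Zador's actual algorithms—stores a "genome" of just five numbers that deterministically "develops" into a full wiring matrix, the way a compact set of CliffsNotes unwraps into a much larger structure.

```python
import numpy as np

def expand_genome(genome):
    """Deterministically 'develop' network wiring from a tiny genome.

    The genome stores only a seed and a few wiring rules -- not the
    weights themselves. The same genome always develops into the
    same wiring, just as genes reliably build the same brain circuits.
    """
    rng = np.random.default_rng(genome["seed"])
    n_in, n_out = genome["n_in"], genome["n_out"]
    # Rule: each output unit connects to a fixed fraction of inputs,
    # with connection strengths drawn at a scale the genome specifies.
    mask = rng.random((n_in, n_out)) < genome["density"]
    weights = rng.normal(0, genome["scale"], (n_in, n_out)) * mask
    return weights

# Five numbers of "genome" develop into 5,000 weight entries.
genome = {"seed": 7, "n_in": 100, "n_out": 50,
          "density": 0.1, "scale": 0.5}
wiring = expand_genome(genome)
print(wiring.shape)
```

The developed network would then serve only as a starting point—basic behaviors baked in, with learning refining the connections afterward, mirroring how animals are born pre-wired and then adapt.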
Currently Zador is in the process of developing algorithms that function like simple rules that generate such behaviors. “My algorithms would write these CliffsNotes, outlining how to solve a particular problem,” he explains. “And then, the neural networks will use these CliffsNotes to figure out which ones are useful and use them in their behaviors.” That’s how all living beings operate. They use the pre-programmed info from their genetics to adapt to their changing environments and learn what’s necessary to survive and thrive in these settings.
For example, a robot’s neural network could draw from CliffsNotes with “genetic” instructions for how to be aware of its own body or learn to adjust its movements. And other, different sets of CliffsNotes may imbue it with the basics of physical safety or the fundamentals of speech.
At the moment, Zador is working on algorithms that try to mimic neuronal blueprints for very simple organisms—such as the roundworm C. elegans, which has only 302 neurons and about 7,000 synapses, compared with the billions of neurons we have. That’s how evolution worked, too—expanding brains from simple creatures to more complex ones to Homo sapiens. But if it took millions of years to arrive at modern humans, how long would it take scientists to forge a robot with human intelligence? That’s a billion-dollar question. Yet Zador is optimistic. “My hypothesis is that if you can build simple organisms that can interact with the world, then the higher level functions will not be nearly as challenging as they currently are.”
Lina Zeldovich has written about science, medicine and technology for Popular Science, Smithsonian, National Geographic, Scientific American, Reader’s Digest, the New York Times and other major national and international publications. A Columbia J-School alumna, she has won several awards for her stories, including the ASJA Crisis Coverage Award for Covid reporting, and has been a contributing editor at Nautilus Magazine. In 2021, Zeldovich released her first book, The Other Dark Matter, published by the University of Chicago Press, about the science and business of turning waste into wealth and health. You can find her on http://linazeldovich.com/ and @linazeldovich.