Artificial Intelligence is getting better than humans at detecting breast cancer
Since the early 2000s, AI systems have eliminated more than 1.7 million jobs, and that number will only increase as AI improves. Some research estimates that by 2025, AI will eliminate more than 85 million jobs.
But for all the talk about job security, AI is also proving to be a powerful tool in healthcare—specifically, cancer detection. One recently published study has shown that, remarkably, artificial intelligence was able to detect 20 percent more cancers in imaging scans than radiologists alone.
Published in The Lancet Oncology, the study analyzed the scans of 80,000 Swedish women with a moderate hereditary risk of breast cancer who had undergone a mammogram between April 2021 and July 2022. Half of these scans were read by AI and then by a radiologist, who double-checked the findings. The other half were read by two radiologists without the help of AI. (Currently, the standard of care across Europe is to have two radiologists analyze a scan before diagnosing a patient with breast cancer.)
The study showed that the AI-assisted group detected cancer in 6 out of every 1,000 scans, while the radiologists alone detected cancer in 5 per 1,000. In other words, AI found 20 percent more cancers than the highly trained radiologists.
[Image: MIT's AI system, MIRAI, looks for patterns in a patient's mammograms to detect breast cancer earlier than ever before. Credit: news.mit.edu]
But even though the AI was better able to pinpoint cancer on an image, it doesn’t mean radiologists will soon be out of a job. Dr. Laura Heacock, a breast radiologist at NYU, said in an interview with CNN that radiologists do much more than simply screen mammograms, and that even well-trained technology can make errors. “These tools work best when paired with highly-trained radiologists who make the final call on your mammogram. Think of it as a tool like a stethoscope for a cardiologist.”
AI is still an emerging technology, but more and more doctors are using it to detect different cancers. For example, researchers at MIT have developed a program called MIRAI, which looks at patterns in patient mammograms across a series of scans and uses an algorithm to model a patient's risk of developing breast cancer over time. The program was "trained" with more than 200,000 breast imaging scans from Massachusetts General Hospital and has been tested on over 100,000 women in different hospitals across the world. According to MIT, MIRAI "has been shown to be more accurate in predicting the risk for developing breast cancer in the short term (over a 3-year period) compared to traditional tools." It has also been able to detect breast cancer up to five years before a patient receives a diagnosis.
The challenge for cancer-detecting AI tools now is not just accuracy. These tools must also perform consistently well across different ages, races, and breast density profiles, particularly given the increased risks that different women face. For example, Black women are 42 percent more likely than white women to die from breast cancer, despite having nearly the same rates of breast cancer as white women. Recently, an FDA-approved AI device for screening breast cancer came under fire for wrongly detecting cancer in Black patients significantly more often than in white patients.
As AI technology improves, radiologists will be able to accurately screen a more diverse set of patients at larger volumes than ever before, potentially saving more lives.
Podcast: The Friday Five weekly roundup in health research
The Friday Five covers five stories in health research that you may have missed this week. There are plenty of controversies and troubling ethical issues in science – and we get into many of them in our online magazine – but this news roundup focuses on scientific creativity and progress to give you a therapeutic dose of inspiration headed into the weekend.
Covered in this week's Friday Five:
- A new blood test for cancer
- Patches of bacteria can use your sweat to power electronic devices
- Researchers revive organs of dead pigs
- Phone app detects cancer-causing chemicals in foods
- Stem cells generate "synthetic placentas" in mice
Plus, an honorable mention for early research involving vitamin K and Alzheimer's
Since the recent reversal of Roe v. Wade — the landmark decision establishing a constitutional right to abortion — the vulnerabilities of reproductive health data and various other information stored on digital devices or shared through the Web have risen to the forefront.
Menstrual period tracking apps are an example of how technologies that collect information from users could be weaponized against abortion seekers. The apps, which help tens of millions of users in the U.S. predict when they’re ovulating, may provide evidence that leads to criminal prosecution in states with abortion bans, says Anton T. Dahbura, executive director of the Johns Hopkins University Information Security Institute. In states where abortion is outlawed, “it’s probably best to not use a period tracker,” he says.
Following the Dobbs v. Jackson ruling in late June that overturned Roe, even women who suffered a miscarriage could be suspected of having an abortion in some cases. While using these apps in anonymous mode may appear more secure, “data is notoriously difficult to perfectly anonymize,” Dahbura says. “Whether the data are stored on the user’s device or in the cloud, there are ways to connect that data to the user.”
Completely concealing one’s tracks in cyberspace poses enormous challenges. “Digital forensics can take advantage of technology such as GPS apps, security cameras, license plate trackers, credit card transactions and bank records to reconstruct a person’s activities,” Dahbura says. “Abortion service providers are also in a world of risk for similar reasons.”
Practicing “good cyber hygiene” is essential. That’s particularly true in states where private citizens may be rewarded for reporting on women they suspect of having an abortion, such as Texas, which passed a so-called bounty hunter law last fall. To help guard against hacking, Dahbura suggests using strong passwords and two-factor authentication when possible while remaining on alert for phishing scams on email or texts.
Another option for safeguarding privacy is to avoid such apps entirely, but that choice will depend on an individual’s analysis of the risks and benefits, says Leah Fowler, research assistant professor at the University of Houston Law Center, Health Law & Policy Institute.
“These apps are popular because people find them helpful and convenient, so I hesitate to tell anyone to get rid of something they like without more concrete evidence of its nefarious uses,” she says. “I also hate the idea that asking anyone capable of becoming pregnant to opt out of all or part of the digital economy could ever be a viable solution. That’s an enormous policy failure. We have to do better than that.”
Instead, Fowler recommends that concerned consumers read the terms of service and privacy policies of the apps they’re using. If some of the terms are unclear, she suggests emailing customer service with questions until the answers are satisfactory. It’s also wise for consumers to research products that meet their specific needs and find out whether other women have raised concerns about specific apps. Users interested in more privacy may want to switch to an app that stores data locally, meaning the data stays on the user’s device, or one that does not use third-party tracking, so the app-maker is the only company with access to it, she says.
Period tracking apps can be useful for those on fertility journeys, making it easier to store information digitally than on paper charts. But users may want to factor in whether they live in a state with an anti-abortion stance and run the risk of legal issues due to a potential data breach, says Carmel Shachar, executive director of the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School.
Consumers’ risks extend beyond period tracking apps in the post-Roe v. Wade era. “Anything that creates digital breadcrumbs to your reproductive choices and conduct could raise concerns — for example, googling ‘abortion providers near me’ or texting your best friend that you are pregnant but do not want to be,” Shachar says. Women also could incriminate themselves by bringing their phones, which may record geolocation data, to the clinic with them.
The potential universe of abortion-relevant data can include information from a variety of fitness and other biometric trackers, text and social media chat records, call details, purchase histories and medical insurance records, says Rebecca Wexler, faculty co-director of the Berkeley Center for Law & Technology. “These data sources can reveal a pregnant person’s decision to seek or obtain an abortion, as well as reveal a healthcare provider’s provision of abortion services and anyone else’s provision of abortion assistance,” she says.
In some situations, people or companies could inadvertently expose themselves to risk by posting on social media offers of places to stay for abortion seekers traveling from states with bans; they could be liable for aiding and abetting abortion. At this point, it’s unclear whether states that ban abortion will try to prosecute residents who seek abortions in other states without bans.
Another possibility is that a woman seeking an abortion will be prosecuted based not only on her phone’s data, but also on the data that law enforcement finds on someone else’s device or a shared computer. As a result, “people in one household may find themselves at odds with each other,” says K Royal, faculty fellow at the Center for Law, Science, and Innovation at Arizona State University’s Sandra Day O'Connor College of Law. “This is a very delicate situation.”
Individuals and corporate executives should research their options before leaving a digital footprint. “Guard your privacy carefully, whether you are seeking help or you are seeking to help someone,” Royal says. While she has come across recommendations from other experts who suggest carrying a second phone that is harder to link to a person’s identity for certain online activities, “it’s not practical on a general basis.”
The privacy of this health data isn’t fully protected by the law because period trackers, texting services and other apps are not healthcare providers — and as a result, there’s no prohibition on sharing the information with a third party under the Health Insurance Portability and Accountability Act of 1996, says Florencia Marotta-Wurgler, a professor who specializes in online consumer contracts and data privacy at the NYU School of Law.
“So, as long as there is valid consent, then it’s fair game unless you say that it violates the reasonable expectations of consumers,” she says. “But this is pretty uncharted territory at the moment.”
As states implement laws granting anyone the power to report suspected or known pregnancies to law enforcement, anti-choice activists are purchasing reproductive health data from companies that make period apps, says Rebecca Herold, chief executive officer of Privacy & Security Brainiacs in Des Moines, Iowa, and a member of the Emerging Trends Working Group at ISACA, an association focused on information technology governance. They could also buy data on search histories and make it available in places like Texas for “bounty hunters” to find out which women have searched for information about abortions.
Some groups are creating their own apps described as providing general medical information on subjects such as pregnancy health. But they are “ultimately intended to ‘catch’ women” — to identify those who are probably pregnant and dissuade them from having an abortion, to launch harassment campaigns against them, or to report them to law enforcement, anti-choice groups and others in states where such prenatal medical care procedures are now restricted or prohibited, Herold says.
In addition to privacy concerns, the reversal of Roe v. Wade raises censorship issues. Facebook and Instagram have started to remove or flag content, particularly as it relates to providing the abortion pill, says Michael Kleinman, director of the Silicon Valley Initiative at Amnesty International USA, a global organization that promotes human rights.
Facebook and Instagram have rules that forbid private citizens from buying, selling or giving away pharmaceuticals, including the abortion pill, according to a social media post by a communications director for Meta, which owns both platforms. In the same post, though, the Meta official noted that the company’s enforcement of this rule has been “incorrect” in some cases.
“It’s terrifying to think that arbitrary decisions by these platforms can dramatically limit the ability of people to access critical reproductive rights information,” Kleinman says. However, he adds, “as it currently stands, the platforms make unilateral decisions about what reproductive rights information they allow and what information they take down.”