The Troubling Reason I Obsessively Researched My Pregnancy
At the end of my second trimester of pregnancy, I answered a call from an unknown number.
"I know your due date is approaching," said a stranger at the other end of the line, completely freaking me out. She identified herself as being from Natera, a company that my doctor had used for genetic testing I had consented to months ago.
"Excuse me?" I said.
"Have you considered cord-blood banking?" she said.
"No, I'm not doing that," I said. I had read enough about cord-blood banking, the process of saving stem cell-containing blood from your baby's umbilical cord, to understand that my family, like the vast majority of families, would almost certainly never derive any medical benefit from it. Of course, in the societally sanctioned spending spree that accompanies new parenthood, plenty of companies are happy to charge hundreds if not thousands of dollars, plus annual storage fees, to collect and manage your cord blood.
"Why not? Have you considered all the bene—"
"I'm not doing it and I don't want to explain my decision," I said before hanging up. I would later learn I had neglected to check a minuscule box on my testing consent forms at the doctor's office to opt out of solicitations. Still, I was angry that I was being telemarketed unnecessary and costly medical services by someone who had been trained to immediately call my judgment into question. I was annoyed that my doctor's office would allow such intrusions at all. When I asked my OB about it at my next visit, she told me there was no way Natera could have gotten my information from them. Apparently even she didn't realize what was on those forms.
The incident with Natera did nothing to bolster my trust in the medical establishment during my pregnancy. I was hardly alone: almost every mom I knew had expressed a similar sentiment.
"I don't trust doctors," read the text from a loved one when I told her I would probably get an epidural after my doctor recommended one because, she said, it can help relax the pelvic muscles during labor. But this friend, a highly educated woman who had done her research and had two unmedicated births, believed firmly otherwise. "Look it up," she said. Thus commenced more of the furious Googling I had found myself doing multiple times a day since deciding I wanted to become pregnant.
To be pregnant is to exist on a never-ending receiving line of advice, whether we want it or not. Information comes at us from Google's never-out-of-reach search bar, from friends and family eager to use our pregnancies as an excuse to recall their own, and from the doctor's office, where the wisdom of medical professionals neatly commingles with brochures and free samples from myriad companies that would really, really like our business as new moms. Separating the "good" advice from the rest is a Herculean task that many pregnant women manage only with vigorous fact-finding missions of their own.
Doing my research during pregnancy felt like a defense against the scary unknowns, overabundance of opinions, and disturbing marketing schemes that come with entering parenthood. The medical community in America is poorly equipped to help women navigate the enormous emotional and societal pressures that come with birth and transitioning to motherhood. Too much of what pregnant women experience at the doctor has to do with dated ideas about our care, mandated by tradition or a fear of being sued rather than medical necessity. These practices, like weigh-ins at every appointment or medically unnecessary C-sections (which are estimated to account, horrifically, for almost 50 percent of all C-sections performed in the U.S.), only heighten anxiety.
Meanwhile, things that might alleviate stress – like having thorough discussions about the kinds of interventions we might be asked to accept at the hospital during labor and delivery – are left to outside educators and doulas that insurance plans typically don't cover. The net effect isn't better health outcomes for mom and baby, but rather a normalized sense of distrust many American women feel toward their OBGYNs, and the burden of going to every appointment and the delivery room on the defensive. Instead of being wed to dated medical practices and tangled in America's new motherhood industrial complex, shouldn't our doctors, of all people, be our biggest advocates?
As soon as I found out I was pregnant, I devoured Expecting Better, by Emily Oster, an economist who embarked on her own fact-finding mission during her first pregnancy, predicated on the belief that the advice OBGYNs have been giving pregnant women for decades is out of date and unnecessarily restrictive. The book includes controversial stances, like that having small amounts of alcohol while pregnant is OK. (More recent research has called this view into question.) Oster writes that for the vast majority of pregnant women, it's perfectly fine to lie on your back, do sit-ups, and eat Brie — all things I was relieved to learn I wouldn't have to give up for nine months, despite the traditional advice, which my doctor also gave to me.
Oster recommends hiring a doula, based both on research and personal experience. It's a worthwhile investment for those who can afford it: according to one study, 20.4 percent of laboring women with doulas had C-sections compared with 34.2 percent of women without them. A doula can do many things for a pregnant client, including helping her write a birth plan, massaging her back in labor, and cheering her on, which is especially useful for women who plan to labor without pain medication. Use of doulas is on the rise; according to DONA International, the world's largest and oldest doula association, the number of doulas who have been certified to date is over 12,000, up from 2,000 in 2002.
But the most significant role a doula plays is that of patient advocate in the hospital. This is a profound commentary on the way the medical establishment handles childbirth, a medical event that 86 percent of women aged 40 to 44 had gone through as of 2016. Recognizing the maternal mortality crisis in the U.S., where women are far more likely to die as a result of childbirth than anywhere else in the developed world and black women are three times more likely to die in childbirth than white women, a few states now allow Medicaid to cover doulas. Can you imagine feeling the need to hire an independent non-medical care provider to help you run interference with your doctors and nurses for something like an appendectomy?
I wouldn't have been aware of all the imminent interventions during my labor if my doula hadn't told me about them. Things happen fast in the hospital and doctors and nurses may rush patients to consent before proceeding with things like breaking their water or hooking them up to an IV of Pitocin. Only because my husband and I had spent six hours in birth class — a suggestion by my doula — did I realize that I was empowered to say "no" to such procedures.
Of course, we all feel immense pressure to become good parents, and questioning conventional medical wisdom is a natural response to that pressure. "Looking around at the world and saying, who am I as a parent? What is important to me? Who are the wise people? What do I think wisdom is? What is a good decision? If you're a certain type of introspective person, if you're really asking those questions, that's going to include like taking a second look at things that doctors, for example, say," says Koyuki Smith, a doula and birth educator.
Expecting more trustworthy advice to come from my doctor than books or Google or even a doula hardly seems unreasonable. Yet my doctor's office seemed more concerned with checking off a list of boxes rather than providing me with personalized care that might have relieved my understandable anxiety about my first birth. When I still hadn't gone into labor around the time of my due date, my doctor encouraged me to be induced because my baby appeared to be large. I declined but scheduled an induction to "hold my spot" around the 42-week mark.
When I asked what medication would be used for an induction if I had one and she said Cytotec, I told her I had read that drug could cause serious complications, but she dismissed my concerns after I told her they stemmed from a book I read on natural childbirth. The FDA's page on Cytotec isn't exactly reassuring.
The nurse who took me in triage after I went into labor a week past my due date practically scolded me for waiting to go into labor naturally instead of opting for induction sooner. While I was struggling to speak through labor pains, my doula told her to get off my case about it. I hadn't even become a mom and I was already doing so many things "wrong." Because I had done my own reading, I felt confident that my choices weren't harming my baby or me.
Becoming a mom would be less daunting if the medical community found a way to help women navigate the pressures of motherhood instead of adding to them. "Our culture at large doesn't support women enough in the complicated emotions that are a part of this process," said Alexandra Sacks, a reproductive psychiatrist and author of What No One Tells You: A Guide to Your Emotions From Pregnancy to Motherhood. "I hope that every practitioner who works with women around reproductive health prioritizes her emotions around her experience."
For many of us, that will mean doctors who help us understand the pros and cons of conventional advice, don't use their offices as marketing channels, and don't pressure women into medically unnecessary inductions. Moms should also receive more attention after delivery, both in the hospital and after they get home; a single, quick postpartum visit at six weeks is not an adequate way to care for women recovering from the trauma of childbirth, nor is it an adequate way to ensure women are emotionally supported during the transition. Several people interrogated me about my mental health at the hospital and my doctor's office just before and after the birth, but if I had been concerned about postpartum depression, I can't imagine having felt comfortable enough in those moments to confide in strangers filling out obligatory worksheets.
It also means figuring out how to talk to patients who are prone to Googling their pregnancies with gusto every single day. It would be impossible for many women to shun independent research during pregnancy altogether. But it would also be nice if our doctors didn't add to our impulse to do it.
Many leaders at top companies are trying to get workers to return to the office. They say remote and hybrid work are bad for their employees’ mental well-being and lead to a sense of social isolation, meaninglessness, and lack of work-life boundaries, so we should all just go back to office-centric work.
One example is Google, where the company’s leadership is defending its requirement of mostly in-office work for all staff as necessary to protect social capital, meaning people’s connections to and trust in one another. That’s despite a survey of over 1,000 Google employees showing that two-thirds feel unhappy about being forced to work in the office three days per week. In internal meetings and public letters, many have threatened to leave, and some are already quitting to go to other companies with more flexible options.
Last month, GM rolled out a policy similar to Google’s, but had to backtrack because of intense employee opposition. The same is happening in some places outside of the U.S. For instance, three-fifths of all Chinese employers are refusing to offer permanent remote work options, according to a survey this year from The Paper.
For their claims that remote work hurts well-being, some of these office-centric traditionalists cite a number of prominent articles. For example, Arthur Brooks claimed in an essay that “aggravation from commuting is no match for the misery of loneliness, which can lead to depression, substance abuse, sedentary behavior, and relationship damage, among other ills.” An article in Forbes reported that over two-thirds of employees who work from home at least part of the time had trouble getting away from work at the end of the day. And Fast Company has a piece about how remote work can “exacerbate existing mental health issues” like depression and anxiety.
For his part, author Malcolm Gladwell has also championed a swift return to the office, saying there is a “core psychological truth, which is we want you to have a feeling of belonging and to feel necessary…I know it’s a hassle to come into the office, but if you’re just sitting in your pajamas in your bedroom, is that the work life you want to live?”
These arguments may sound logical to some, but they fly in the face of research and my own experience as a behavioral scientist and as a consultant to Fortune 500 companies. In these roles, I have seen the pitfalls of in-person work, which can be just as problematic, if not more so. Remote work is not without its own challenges, but I have helped 21 companies implement a series of simple steps to address them.
Research finds that remote work is actually better for you
The trouble with the articles described above – and claims by traditionalist business leaders and gurus – stems from a sneaky misdirection. They decry the negative impact of remote and hybrid work on wellbeing. Yet they gloss over the damage to wellbeing caused by the alternative, namely office-centric work.
It’s like comparing remote and hybrid work to a state of leisure. Sure, people would feel less isolated if they could hang out and have a beer with their friends instead of working. They could take care of their existing mental health issues if they could visit a therapist. But that’s not in the cards. What’s in the cards is office-centric work. That means the frustration of a long commute, sitting at a desk in an often uncomfortable and oppressive open office for at least eight hours, eating a sad desk lunch and unhealthy snacks at often insanely expensive prices and, as your reward for making it through this series of insults, more frustration commuting back home.
So what happens when we compare apples to apples? That’s when we need to hear it straight from the horse’s mouth: namely, from surveys of employees themselves, who experienced in-office work before the pandemic and hybrid and remote work after COVID struck.
Consider a 2022 survey by Cisco of 28,000 full-time employees around the globe. Nearly 80 percent of respondents say that remote and hybrid work improved their overall well-being: that applies to 83 percent of Millennials, 82 percent of Gen Z, 76 percent of Gen X, and 66 percent of Baby Boomers. The vast majority of respondents felt that working remotely improved their work-life balance.
Much of that improvement stemmed from saving time due to not needing to commute and having a more flexible schedule: 90 percent saved 4 to 8 hours or more per week. What did they do with that extra time? The top choice for almost half was spending more time with family, friends and pets, which certainly helped address the problem of isolation from the workplace. Indeed, three-quarters of them report that working from home improved their family relationships, and 51 percent strengthened their friendships. Twenty percent used the freed up hours for self-care.
Of the small number who reported that their work-life balance had not improved or had even worsened, the number one reason given was difficulty disconnecting from work. Overall, though, 82 percent of respondents report that working from anywhere has made them happier, and over half say that remote work decreased their stress levels.
Other surveys back up Cisco’s findings. For example, a 2022 Future Forum survey compared knowledge workers who worked full-time in the office, in a hybrid modality, and fully remote. It found that full-time in-office workers felt the least satisfied with work-life balance, hybrid workers were in the middle, and fully remote workers felt most satisfied. The same distribution applied to questions about stress and anxiety. A mental health website called Tracking Happiness found in a 2022 survey of over 12,000 workers that fully remote employees report a happiness level about 20 percent greater than office-centric ones. Another survey by CNBC in June found that fully remote workers are more often very satisfied with their jobs than workers who are fully in-person.
Academic peer-reviewed research provides further support. Consider a 2022 study published in the International Journal of Environmental Research and Public Health of bank workers who worked on the same tasks of advising customers either remotely or in-person. It found that fully remote workers experienced higher meaningfulness, self-actualization, happiness, and commitment than in-person workers. Another study, published by the National Bureau of Economic Research, reported that hybrid workers, compared to office-centric ones, experienced higher satisfaction with work and had 35 percent more job retention.
What about the supposed burnout crisis associated with remote work? Indeed, burnout is a concern. A survey by Deloitte finds that 77 percent of workers experienced burnout at their current job. Gallup came up with a slightly lower number of 67 percent in its survey. But guess what? Both of those surveys are from 2018, long before the era of widespread remote work.
By contrast, in a Gallup survey in late 2021, 58 percent of respondents reported less burnout. An April 2021 McKinsey survey found burnout in 54 percent of Americans and 49 percent globally. A September 2021 survey by The Hartford reported 61 percent burnout. Arguably, the increase in full or part-time remote opportunities during the pandemic helped to address feelings of burnout, rather than increasing them. Indeed, that finding aligns with the earlier surveys and peer-reviewed research suggesting remote and hybrid work improves wellbeing.
Remote work isn’t perfect – here’s how to fix its shortcomings
Still, burnout is a real problem for hybrid and remote workers, as it is for in-office workers. Employers need to offer mental health benefits with online options to help employees address these challenges, regardless of where they’re working.
Moreover, while they’re better overall for wellbeing, remote and hybrid work arrangements do have specific disadvantages around work-life separation. To address these issues, I advise the clients I’ve helped transition to hybrid and remote work to establish norms and policies focused on clear expectations and boundaries.
Some people expect their Slack or Microsoft Teams messages to be answered within an hour, while others check Slack once a day. Some believe email requires a response within three hours, and others feel three days is fine. As a result of such uncertainty and lack of clarity about what’s appropriate, too many people feel uncomfortable disconnecting and not replying to messages or doing work tasks after hours. That might stem from a fear of not meeting their boss’s expectations or not wanting to let their colleagues down.
To solve this problem, companies need to establish and incentivize clear expectations and boundaries. They should develop policies and norms around response times for different channels of communication. They also need to clarify work-life boundaries – for example, the frequency and types of unusual circumstances that will require employees to work outside of regular hours.
Moreover, for working at home and collaborating with others, there’s sometimes an unhealthy expectation that once you start your workday in your home office chair, you’ll work continuously while sitting there (except for your lunch break). That’s not how things work in the office, where physical and mental breaks are built in throughout the day. You take 5-10 minutes to walk from one meeting to another, or you go to pick up your copies from the printer and chat with a coworker on the way.
Those and similar physical and mental breaks, research shows, decrease burnout, improve productivity, and reduce mistakes. That’s why companies should strongly encourage employees to take at least a 10-minute break every hour during remote work. At least half of those breaks should involve physical activity, such as stretching or walking around, to counteract the dangerous effects of prolonged sitting. The rest should be restorative mental activities: meditation, a brief nap, a walk outdoors, or whatever else helps you recharge.
To facilitate such breaks, my client organizations such as the University of Southern California’s Information Sciences Institute shortened hour-long meetings to 50 minutes and half-hour meetings to 25 minutes, to give everyone – both in-person and remote workers – a mental and physical break and transition time.
Very few people will be reluctant to have shorter meetings. After that works out, move to other aspects of setting boundaries and expectations. Doing so will require helping team members get on the same page and reduce conflicts and tensions. By setting clear expectations, you’ll address the biggest challenge for wellbeing for remote and hybrid work: establishing clear work-life boundaries.
In May 2022, Californian biotech Ultima Genomics announced that its UG 100 platform was capable of sequencing an entire human genome for just $100, a landmark moment in the history of the field. The announcement was particularly remarkable because few had previously heard of the company, a relative unknown in an industry long dominated by global giant Illumina, which controls about 80 percent of the world’s sequencing market.
Ultima’s secret was to completely revamp many technical aspects of the way Illumina has traditionally deciphered DNA. The process usually involves first splitting the double-helix DNA structure into single strands, then breaking these strands into short fragments, which are laid out on a glass surface called a flow cell. When this flow cell is loaded into the sequencing machine, color-coded tags are attached to each individual base letter. A laser scans the bases one by one while a camera simultaneously records the color associated with each, a process repeated until every fragment has been sequenced.
Instead, Ultima has found a series of shortcuts to slash the cost and boost efficiency. “Ultima Genomics has developed a fundamentally new sequencing architecture designed to scale beyond conventional approaches,” says Josh Lauer, Ultima’s chief commercial officer.
This ‘new architecture’ is a series of subtle but highly impactful tweaks to the sequencing process ranging from replacing the costly flow cell with a silicon wafer which is both cheaper and allows more DNA to be read at once, to utilizing machine learning to convert optical data into usable information.
To put the $100 genome in perspective, back in 2012 the cost of sequencing a single genome was around $10,000, a price tag that dropped to $1,000 a few years later. Before Ultima’s announcement, the cost of sequencing an individual genome was around $600.
While Ultima’s new machine is not widely available yet, Illumina’s response has been rapid. Last month the company unveiled the NovaSeq X series, which it describes as its fastest most cost-efficient sequencing platform yet, capable of sequencing genomes at $200, with further price cuts likely to follow.
But what will the rapidly tumbling cost of sequencing actually mean for medicine? “Well to start with, obviously it’s going to mean more people getting their genome sequenced,” says Michael Snyder, professor of genetics at Stanford University. “It'll be a lot more accessible to people.”
At the moment, sequencing is mainly limited to certain cancer patients, where it is used to inform treatment options, and to individuals with undiagnosed illnesses. In the past, initiatives such as SeqFirst have attempted to further widen access to genome sequencing, based on a growing body of research illustrating the technology's potential benefits in healthcare. Several studies have found that nearly 12 percent of healthy people who have their genomes sequenced discover they have a variant pointing to a heightened risk of developing a disease that can be monitored, treated or prevented.
“While whole genome sequencing is not yet widely used in the U.S., it has started to come into pediatric critical care settings such as newborn intensive care units,” says Professor Michael Bamshad, who heads the genetic medicine division in the University of Washington’s pediatrics department. “It is also being used more often in outpatient clinical genetics services, particularly when conventional testing fails to identify explanatory variants.”
But the cost of sequencing itself is only one part of the price tag. The subsequent clinical interpretation and genetic counselling services often come to several thousand dollars, a cost which insurers are not always willing to pay.
As a result, while Bamshad and others hope that the arrival of the $100 genome will create new opportunities to use genetic testing in innovative ways, the most immediate benefits are likely to come in the realm of research.
Bigger Data
There are numerous ways in which cheaper sequencing is likely to advance scientific research, for example the ability to collect data on much larger patient groups. This will be a major boon to scientists working on complex heterogeneous diseases such as schizophrenia or depression where there are many genes involved which all exert subtle effects, as well as substantial variance across the patient population. Bigger studies could help scientists identify subgroups of patients where the disease appears to be driven by similar gene variants, who can then be more precisely targeted with specific drugs.
David Curtis, a genetics professor at University College London, says that scientists studying these illnesses have previously been forced to rely on genome-wide association studies which are limited because they only identify common gene variants. “We might see a significant increase in the number of large association studies using sequence data,” he says. “It would be far preferable to use this because it provides information about rare, potentially functional variants.”
Cheaper sequencing will also aid researchers working on diseases which have traditionally been underfunded. Bamshad cites cystic fibrosis, a condition which affects around 40,000 children and adults in the U.S., as one particularly pertinent example.
“Funds for gene discovery for rare diseases are very limited,” he says. “We’re one of three sites that did whole genome sequencing on 5,500 people with cystic fibrosis, but our statistical power is limited. A $100 genome would make it much more feasible to sequence everyone in the U.S. with cystic fibrosis and make it more likely that we discover novel risk factors and pathways influencing clinical outcomes.”
For progressive diseases that are more common, like cancer and type 2 diabetes, as well as neurodegenerative conditions like multiple sclerosis and ALS, geneticists will be able to go even further and afford to sequence individual tumor cells or neurons at different time points. This will enable them to analyze how individual DNA modifications, like methylation, change as the disease develops.
In the case of cancer, this could help scientists understand how tumors evolve to evade treatments. Within a clinical setting, the ability to sequence not just one but many different cells across a patient’s tumor could point to the combination of treatments offering the best chance of eradicating the entire cancer.
“What happens at the moment with a solid tumor is you treat with one drug, and maybe 80 percent of that tumor is susceptible to that drug,” says Neil Ward, vice president and general manager in the EMEA region for genomics company PacBio. “But the other 20 percent of the tumor has already got mutations that make it resistant, which is probably why a lot of modern therapies extend life for sadly only a matter of months rather than curing, because they treat a big percentage of the tumor, but not the whole thing. So going forwards, I think that we will see genomics play a huge role in cancer treatments, through using multiple modalities to treat someone's cancer.”
If insurers can figure out the economics, Snyder even foresees a future where at a certain age, all of us can qualify for annual sequencing of our blood cells to search for early signs of cancer or the potential onset of other diseases like type 2 diabetes.
“There are companies already working on looking for cancer signatures in methylated DNA,” he says. “If it was determined that you had early stage cancer, pre-symptomatically, that could then be validated with targeted MRI, followed by surgery or chemotherapy. It makes a big difference catching cancer early. If there were signs of type 2 diabetes, you could start taking steps to mitigate your glucose rise, and possibly prevent it or at least delay the onset.”
This would already revolutionize the way we seek to prevent a whole range of illnesses, but others feel that the $100 genome could also usher in even more powerful and controversial preventative medicine schemes.
Newborn screening
In the eyes of Kári Stefánsson, the Icelandic neurologist who has been a visionary behind so many advances in the field of human genetics over the last 25 years, the falling cost of sequencing means it will become feasible to sequence the genome of every baby born.
“We have recently done an analysis of genomes in Iceland and the UK Biobank, and in 4 percent of people you find mutations that lead to serious disease, that can be prevented or dealt with,” says Stefansson, CEO of deCODE genetics, a subsidiary of the pharmaceutical company Amgen. “This could transform our healthcare systems.”
As well as identifying newborns with rare diseases, this kind of genomic information could be used to compute a person’s risk score for developing chronic illnesses later in life. If for example, they have a higher than average risk of colon or breast cancer, they could be pre-emptively scheduled for annual colonoscopies or mammograms as soon as they hit adulthood.
To a limited extent, this is already happening. In the UK, Genomics England has launched the Newborn Genomes Programme, which plans to undertake whole-genome sequencing of up to 200,000 newborn babies, with the aim of enabling the early identification of rare genetic diseases.
However, some scientists feel that it is tricky to justify sequencing the genomes of apparently healthy babies, given the data privacy issues involved. They point out that we still know too little about the links which can be drawn between genetic information at birth, and risk of chronic illness later in life.
“I think there are very difficult ethical issues involved in sequencing children if there are no clear and immediate clinical benefits,” says Curtis. “They cannot consent to this process. I have not had my own genome sequenced and I would not have wanted my parents to have agreed to this. I don’t see that sequencing children for the sake of some vague, ill-defined benefits could ever be justifiable.”
Curtis points out that there are inherent risks in this data being available: it may fall into the hands of insurance companies, and it could even be used by governments for surveillance purposes.
“Genetic sequence data is very useful indeed for forensic purposes. Its full potential has yet to be realized but identifying rare variants could provide a quick and easy way to find relatives of a perpetrator,” he says. “If large numbers of people had been sequenced in a healthcare system then it could be difficult for a future government to resist the temptation to use this as a resource to investigate serious crimes.”
While more widely available sequencing will present difficult ethical and moral challenges, it will also offer many benefits for society as a whole. Cheaper sequencing will help boost the diversity of genomic datasets, which have traditionally been skewed towards individuals of white European descent, meaning that much of the actionable medical information to come out of these studies is not relevant to people of other ethnicities.
Ward predicts that in the coming years, the growing amount of genetic information will ultimately change the outcomes for many with rare, previously incurable illnesses.
“If you're the parent of a child that has a suspected rare genetic disease, their genome will get sequenced, and while sadly that doesn’t always lead to treatments, it’s building up a knowledge base so companies can spring up and target that niche of a disease,” he says. “As a result there’s a whole tidal wave of new therapies that are going to come to market over the next five years, as the genetic tools we have mature and evolve.”