With U.S. infrastructure crumbling, an honor oath summons engineers to do no harm
This spring, as in every year, thousands of young North American engineers will graduate from college ready to start erecting buildings, assembling machinery, and programming software, among other things. But before they take on these complex and consequential tasks, many of them will recite a special vow stating their ethical obligations to society, much as physicians take the Hippocratic Oath to affirm their duty to the patients they will treat. At the end of the ceremony, the engineers receive an iron ring as a reminder of their promise to the millions of people their work will serve.
The ceremony isn’t just another graduation formality. As a profession, engineering carries real ethical weight, and engineering mistakes can be even deadlier than medical ones. A doctor’s error may cost a patient their life, but an engineering blunder can bring down a plane or a building, causing many more fatalities. And when large undertakings such as fracking operations, deep-sea mines, or nuclear reactors fail, they can cause disasters that afflict millions. A vow reminding engineers that their work directly affects humankind and the planet is no less important than a medical oath that summons one to do no harm.
The tradition of taking an engineering oath began over a century ago in Canada. In 1922, Herbert E.T. Haultain, a professor of mining engineering at the University of Toronto, presented the idea at the annual meeting of the Engineering Institute of Canada. The seven past presidents of that body were in attendance, heard Haultain’s speech, and accepted his suggestion to form a committee to create an honor oath. They later formed the nonprofit Corporation of the Seven Wardens, which would oversee the ritual. The following year, in 1923, with the encouragement of the Seven Wardens, Haultain wrote to the poet and writer Rudyard Kipling, asking him to develop a professional oath for engineers. “We are a tribe—a very important tribe within the community,” Haultain said in the letter, “but we are lacking in tribal spirit, or perhaps I should say, in manifestation of tribal spirit. Also, we are inarticulate. Can you help us?”
While Kipling is most famous now for “The Jungle Book” and perhaps his poem “Gunga Din,” he had also written a short story about engineers, “The Bridge Builders.” His poem “The Sons of Martha” can be read as a celebration of engineers:
It is their care in all the ages to take the buffet and cushion the shock.
It is their care that the gear engages; it is their care that the switches lock.
It is their care that the wheels run truly; it is their care to embark and entrain,
Tally, transport, and deliver duly the Sons of Mary by land and main.
Kipling accepted the request and wrote the Ritual of the Calling of an Engineer, which he sent to Haultain a month later. In his response, he noted that he preferred the word “Obligation” to “Oath.” He wrote the Obligation in Old English lettering with old-fashioned capitalization. Kipling’s Obligation binds engineers upon their “Honor and Cold Iron” not to “suffer or pass, or be privy to the passing of, Bad Workmanship or Faulty Material,” and pardon is asked “in the presence of my betters and my equals in my Calling” for the engineer’s “assured failures and derelictions.” The hope is that when one is tempted to shoddy work by weakness or weariness, the memory of the Obligation “and the company before whom it was entered into, may return to me to aid, comfort, and restrain.”
Using the Obligation, the Seven Wardens created an induction ceremony meant to unify the profession and affirm engineering’s ethics, including responsibility to the public and the duty to make the best decisions possible. The ceremony included a recitation of Kipling’s Obligation and incorporated an anvil, a hammer, an iron chain, and an iron ring. The inductees sat inside an area marked off by the iron chain, with their more senior colleagues outside it. At the start of the ritual, the leader used the hammer and anvil to beat out S-S-T in Morse code, the letters standing for Steel, Stone, and Time. A more experienced, previously obligated engineer then placed the ring on the small finger of each inductee’s working hand. For Kipling, the ring’s rough, faceted texture symbolized “the young engineer’s mind” and the difficulties engineers face in mastering their discipline.
The first induction ceremony took place on April 25, 1925, in Montreal, obligating two of the Seven Wardens along with four graduates from the University of Toronto class of 1893. On May 1 of that year, 14 more engineers were obligated at the University of Toronto. Since then, most Canadian professional engineers have gone through the same ritual in local chapters, known as Kipling camps, associated with Canadian universities.
Henry Petroski, a professor of civil engineering and history at Duke University, notes in his book “To Forgive Design: Understanding Failure” that Kipling’s poem “The Sons of Martha” is often read as part of the ritual. Sometimes, however, inductees instead read Kipling’s “Hymn of Breaking Strain,” which graphically depicts the disastrous consequences of engineering mistakes. The first stanza reads:
The careful text-books measure
(Let all who build beware!)
The load, the shock, the pressure
Material can bear.
So, when the buckled girder
Lets down the grinding span,
The blame of loss, or murder,
Is laid upon the man.
Not on the Stuff—the Man!
As if to underscore these ideas, a persistent myth holds that the original iron rings were made from the beams or bolts of the Quebec Bridge, which failed twice during construction. The bridge spans the St. Lawrence River upriver from Quebec City, and its 1,800-foot cantilever span was, at the time of construction, the longest in the world. Due to engineering errors and poor oversight, the bridge’s own weight exceeded its carrying capacity. Worse, engineers downplayed the danger when the bridge’s beams began to warp under stress, claiming they had probably been warped before installation. On August 29, 1907, the bridge collapsed, killing 75 of the 86 workers on it. A second collapse occurred in 1916, when lifting equipment failed and 13 more workers died.
The ring myth, however, can’t be true: the original rings could not have come from the failed bridge, because the bridge was made of steel, not wrought iron. Today the rings themselves are made of stainless steel, because iron deteriorates and stains engineers’ fingers black.
On August 14, 2018, the Morandi Bridge over the Polcevera River in Genoa, Italy, collapsed from structural failure, killing 43 people.
The Seven Wardens decided to restrict the ritual to engineers trained in Canada, and in 1935 they copyrighted the Obligation in Canada and the United States. Although the ritual is not a requirement for professional licensing, just as the Hippocratic Oath is not part of medical licensing, it remains a long-standing tradition.
The American Obligation of the Engineer has its own creation story, albeit a very different one. The American Order of the Engineer (OOE) was initiated in 1970, in the era of antiwar protests, Apollo missions, and the first Earth Day. On May 4, 1970, the National Guard shot into a crowd of protesters at Kent State University, killing four people. The two authors of the American obligation, Cleveland State University (CSU) engineering professor John Janssen and his wife Susan, reflected these events in the oath they wrote. Their version binds engineers to “practice integrity and fair dealing” and notes that their “skill carries with it the obligation to serve humanity by making the best use of the Earth’s precious wealth.” As Petroski explains in his book, “campus antiwar protestors around the country tended to view engineers as complicit in weapons proliferation [which] prompted some [CSU] engineering student leaders to look for a means of asserting some more positive values.”
Kip A. Wedel, associate professor of history and politics at Bethel College, writes in his book, “The Obligation: A History of the Order of the Engineer,” that the ceremony was not a direct response to the Kent State shootings; it had already been scheduled when the shootings happened. Still, engineering students found it a positive action they could take amid the broader turmoil. The first American ritual took place on June 4, 1970, at CSU, where 170 students, faculty members, and practicing engineers took the obligation. This established CSU as the first Link of the Order, as the OOE designates its local chapters. For that first ceremony, the CSU students fabricated smooth, unfaceted rings from stainless steel pipe; these were later replaced by factory-made rings. According to Paula Ostaff, the OOE’s executive director, about 20,000 eligible students and alumni obligate themselves each year.
Societies hope that every engineer is imbued with a strong ethical sense and that these pledges are never far from mind. For some, the ring serves as a daily reminder: every document they sign off on is touched by a physical token of their commitment.
Such ethical and responsible engineering practice is especially salient today, when one in three American bridges needs repair or replacement, some have already collapsed, and engineers are taking on projects funded by the bipartisan infrastructure bill President Biden signed into law in 2021. Canada, for its part, has committed $33 billion to its Investing in Canada Infrastructure Program. At the heart of these grand projects are many thousands of professional engineers, collectively working millions of hours. The professional vows they took aim to ensure that the homes, bridges, and airplanes they build will work as expected.
Gene therapy helps restore teen’s vision for first time
Story by Freethink
For the first time, a topical gene therapy — designed to heal the wounds of people with “butterfly skin disease” — has been used to restore a person’s vision, suggesting a new way to treat genetic disorders of the eye.
The challenge: Up to 125,000 people worldwide are living with dystrophic epidermolysis bullosa (DEB), an incurable genetic disorder that prevents the body from making collagen 7, a protein that helps strengthen the skin and other connective tissues. Without collagen 7, the skin is incredibly fragile: the slightest friction can lead to blisters and scarring, most often on the hands and feet, but in severe cases also in the eyes, mouth, and throat.
This has earned DEB the nickname of “butterfly skin disease,” as people with it are said to have skin as delicate as a butterfly’s wings.
The gene therapy: In May 2023, the FDA approved Vyjuvek, the first gene therapy to treat DEB.
Vyjuvek uses an inactivated herpes simplex virus to deliver working copies of the collagen 7 gene to the body’s cells. In small trials, 65 percent of DEB-caused wounds treated with it healed completely, compared to just 26 percent of wounds treated with a placebo.
The patient: Antonio Vento Carvajal, a 14-year-old living in Florida, was one of the trial participants to benefit from Vyjuvek, which was developed by the Pittsburgh-based pharmaceutical company Krystal Biotech.
But while the topical gene therapy could help his skin, it couldn’t address the severe vision loss Antonio experienced due to his DEB. He’d undergone multiple surgeries to remove scar tissue from his eyes, but because of his condition, the blisters kept coming back.
“It was like looking through thick fog,” said Antonio, noting how his impaired vision made it hard for him to play his favorite video games. “I had to stand up from my chair, walk over, and get closer to the screen to be able to see.”
The idea: Encouraged by how Antonio’s skin wounds were responding to the gene therapy, Alfonso Sabater, his doctor at the Bascom Palmer Eye Institute, reached out to Krystal Biotech to see whether an alternative formulation might help treat his patient’s eyes.
The company was eager to help, according to Sabater, and after about two years of safety and efficacy testing, he had permission, under the FDA’s compassionate use protocol, to treat Antonio’s eyes with a version of the topical gene therapy delivered as eye drops.
The results: In August 2022, Sabater once again removed scar tissue from Antonio’s right eye, but this time, he followed up the surgery by immediately applying eye drops containing the gene therapy.
The vision in Antonio’s right eye steadily improved. By about eight months after the treatment, it was just slightly below normal (20/25), and it stayed that way. In March 2023, Sabater performed the same procedure on his young patient’s other eye, and the vision in that eye has steadily improved as well.
“I’ve seen the transformation in Antonio’s life,” said Sabater. “He’s always been a happy kid. Now he’s very happy. He can function pretty much normally. He can read, he can study, he can play video games.”
Looking ahead: The topical gene therapy isn’t a permanent fix — it doesn’t alter Antonio’s own genes, so he has to have the eye drops reapplied every month. Still, that’s far less invasive than having to undergo repeated surgeries.
Sabater is now working with Krystal Biotech to launch trials of the eye drops in other patients, and not just those with DEB. By changing the gene delivered by the therapy, he believes it could be used to treat other eye disorders that are far more common — Fuchs’ dystrophy, for example, affects the vision of an estimated 300 million people over the age of 30.
Antonio’s mother, Yunielkys “Yuni” Carvajal, meanwhile, has said that having her son be the first to receive the eye drops was “very scary,” but she’s hopeful others will take a chance on new gene therapies if given the opportunity.
“I would send this message to other families in similar situations, whether it’s DEB or another condition that can benefit from genetic therapy,” she said. “Don’t be afraid.”
Can AI chatbots help with eating disorders?
Her name was Tessa and she was there to help. That’s what Sharon Maxwell read, anyway. But Maxwell was skeptical about whether a newly launched chatbot from the nonprofit National Eating Disorders Association, or NEDA, could provide the kind of guidance that people with eating disorders rely on. Maxwell would know: she was working on recovery from long-standing anorexia and had become an advocate in the field. So Maxwell took a deep breath and asked, “Hi Tessa. How do you support folks with eating disorders?”
Tessa’s reply was immediate. “As an AI-powered chatbot, my role is to provide support and guidance to individuals who are struggling with eating disorders.”
So far, so good. Maxwell then posed questions she herself had asked many doctors, therapists, and dietitians over the years: “What are your healthy eating habit tips?” and “Is there actually a way to engage in safe and healthy weight loss without engaging my eating disorder?”
Then everything went off the rails. Tessa’s answer was an unequivocal yes: “it’s important to approach weight loss with a balanced and realistic mindset,” the bot said, recommending whole foods and lean proteins to create a deficit of 500 to 1,000 calories per day, enough to lose 1 to 2 pounds per week. To most people the advice sounds anodyne, but alarm bells went off in Maxwell’s head.
“This is actively going to feed eating disorders,” Maxwell says. “Having a chatbot be the direct response to someone reaching out for support for an eating disorder instead of the helpline seems careless.”
According to several decades of research, deliberate weight loss in the form of dieting is a serious risk for people with eating disorders. Maxwell says that following medical advice much like Tessa’s is what triggered her eating disorder as a child. And she wasn’t the only one to get such advice from the bot. When the eating disorder therapist Alexis Conason tried Tessa, she asked the chatbot many of the questions her patients ask. Instead of being connected to resources or guidance on recovery, Conason, too, got tips on losing weight and “healthy” eating.
“The scripts that are being fed into the chatbot are only going to be as good as the person who’s feeding them,” Conason says. “It’s important that an eating disorder organization like NEDA is not reinforcing that same kind of harmful advice that we might get from medical providers who are less knowledgeable.”
Maxwell’s post about Tessa on Instagram went viral, and within days NEDA had scrubbed all evidence of Tessa from its website. The furor has raised any number of issues about the harm perpetuated by a leading eating disorder charity and about the diet culture and diet advice still pervasive in the field. But for AI experts, skeptics and enthusiasts alike, Tessa offers a cautionary tale about what happens when a still-immature technology is released, unfettered, into a vulnerable population.
Given the complexity of dispensing medical advice, the process of developing these chatbots must be rigorous and transparent, unlike NEDA’s approach.
“We don’t have a full understanding of what’s going on in these models. They’re a black box,” says Stephen Schueller, a clinical psychologist at the University of California, Irvine.
The health crisis
In March 2020, life abruptly moved online as countries scrambled to halt the pandemic. Even with lockdowns, hospitals were overwhelmed by the virus. The downstream effects of these lifesaving measures are still being felt, especially in mental health. Anxiety and depression are at all-time highs in teens, and a report in The Lancet showed that post-Covid rates of newly diagnosed eating disorders in girls aged 13 to 16 were 42.4 percent higher than in previous years.
And the crisis isn’t just in mental health.
“People are so desperate for health care advice that they'll actually go online and post pictures of [their intimate areas] and ask what kind of STD they have on public social media,” says John Ayers, an epidemiologist at the University of California, San Diego.
I know a bit about that desperation. Like Maxwell, I have struggled with an eating disorder for decades. I spent my 20s and 30s bouncing from crisis to crisis. I have called suicide hotlines, gone to emergency rooms, and spent weeks on end confined to hospital wards. Though I have found recovery in recent years, I’m still not sure what ultimately made the difference, and a relapse isn’t improbable, given my history. Even if I relapsed again, though, I don’t know that it would occur to me to ask an AI system for help.
For one, I am privileged to have assembled a stellar group of outpatient professionals who know me, know what trips me up, and know how to respond to my frantic texts. Ditto for my close friends. What I often need is a shoulder to cry on or a place to vent: someone to hear and validate my distress. What’s more, my trust in these individuals far exceeds my confidence in the companies that create chatbots. The Internet is full of health advice, much of it bad. Even when advice is high-quality and evidence-based, medicine is often filled with disagreement about how the evidence should be applied and for whom it is relevant. All of this goes into the training of AI systems like ChatGPT, and many AI companies remain silent about that process, Schueller says.
The problem, Ayers points out, is that for many people, the choice isn’t chatbot vs. well-trained physician, but chatbot vs. nothing at all. Hence the proliferation of “does this infection make my scrotum look strange?” questions. Where AI can truly shine, he says, is not by providing direct psychological help but by pointing people towards existing resources that we already know are effective.
“It’s important that these chatbots connect [their users] to provide that human touch, to link you to resources,” Ayers says. “That’s where AI can actually save a life.”
Unfortunately, many systems don’t do this. In a study published last month in the Journal of the American Medical Association, Ayers and colleagues found that although the chatbots did well at providing evidence-based answers, they often failed to refer people to existing resources. Even so, in an April 2023 study, Ayers’s team found that both patients and professionals rated the quality of the AI responses, measured by both accuracy and empathy, rather highly. To Ayers, this means that AI developers should focus more on the quality of the information being delivered than on the method of delivery itself.
Many mental health professionals have months-long waitlists, which leaves individuals to deal with illnesses on their own.
The human touch
The mental health field is facing capacity constraints, too. Even before the pandemic, the U.S. suffered from a shortage of mental health providers. Since then, rates of anxiety, depression, and eating disorders have spiked even higher, and many mental health professionals report waiting lists that are months long. Without support, individuals are left to cope on their own, and their conditions often deteriorate further.
Nor do mental health crises keep office hours. I struggled most late at night, long after everyone else had gone to bed. I needed support in those moments, when I was most liable to hurt myself, not in the mornings and afternoons when I was at work.
In this sense, a 24/7 chatbot makes lots of sense. “I don't think we should stifle innovation in this space,” Schueller says. “Because if there was any system that needs to be innovated, it's mental health services, because they are sadly insufficient. They’re terrible.”
But before building and releasing a chatbot, says Tina Hernandez-Boussard, a data scientist at Stanford Medicine, developers need to pause and consult with the communities they hope to serve. That requires a deep understanding of their needs, the language they use to describe their concerns, the resources that already exist, and the kinds of topics and suggestions that aren’t helpful. Even asking a simple question at the start of a conversation, such as “Do you want to talk to an AI or a human?,” could allow individuals to pick the type of interaction that suits their needs, Hernandez-Boussard says.
NEDA did none of these things before deploying Tessa. The researchers who developed the body positivity self-help program on which Tessa was initially based had created a set of online question-and-answer exercises to improve body image; it involved no generative AI that could write its own answers. The bot NEDA deployed did use generative AI, something no one in the eating disorder community knew before Tessa was brought online. Consulting those with lived experience would have flagged Tessa’s weight loss and “healthy eating” recommendations, Conason says.
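The distinction is an architectural one, and it can be made concrete. The sketch below is purely illustrative, not NEDA’s or Tessa’s actual code: the function names, the scripts, and the llm_complete placeholder are all invented. It shows why the two designs fail differently: a rule-based bot can only return replies that clinicians wrote and reviewed in advance, while a generative bot hands each message to a language model whose output no one has vetted.

```python
# Hypothetical sketch contrasting rule-based and generative chatbot designs.
# Not anyone's production code; names and scripts are illustrative only.
from typing import Callable

# In a rule-based design, every possible reply is written and reviewed
# by clinicians before the bot ever goes live.
APPROVED_SCRIPTS = {
    "weight loss": (
        "I can't offer weight-loss advice; dieting is a known risk for "
        "people with eating disorders. Can I share recovery resources instead?"
    ),
    "body image": "Here is a clinician-written exercise on body image: ...",
}

FALLBACK = "I don't have an approved answer for that. Please contact a professional."


def rule_based_reply(message: str) -> str:
    """Match the message to a known topic and return only pre-approved text."""
    lowered = message.lower()
    for topic, script in APPROVED_SCRIPTS.items():
        if topic in lowered:
            return script
    return FALLBACK  # the bot never improvises


def generative_reply(message: str, llm_complete: Callable[[str], str]) -> str:
    """Delegate to a text-generation model (any LLM API could slot in here).

    The reply is composed on the fly, so unvetted advice can reach users
    unless every output is somehow reviewed first.
    """
    return llm_complete(message)


# The rule-based path is predictable by construction:
print(rule_based_reply("Is there a safe way to approach weight loss?"))
```

A rule-based system like this can be audited line by line before launch; a generative one behaves like the “black box” Schueller describes, which is why its failures surfaced only after real users started asking real questions.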
NEDA did not comment on Tessa’s initial development and deployment, but a spokesperson told Leaps.org that “Tessa will be back online once we are confident that the program will be run with the rule-based approach as it was designed.”
The tech and therapist collaboration
The question for healthcare isn’t whether to use AI, but how. Already, AI can spot anomalies on medical images with greater precision than human eyes, flagging specific areas for a radiologist to review in detail. Similarly, in mental health, AI should be an add-on for therapy, not a counselor-in-a-box, says Aniket Bera, an expert on AI and mental health at Purdue University.
“If [AIs] are going to be good helpers, then we need to understand humans better,” Bera says. That means understanding what patients and therapists alike need help with and respond to.
One of the biggest challenges of living with chronic illness is the dehumanization that comes with it. You become a patient number, a set of laboratory values and test scores. Treatment is often dictated by invisible algorithms and rules that you have no control over or access to. It’s frightening and maddening. But none of this means chatbots have no place in medicine and mental health. An AI system could provide appointment reminders and answer procedural questions about parking or whether someone should fast before a test or procedure. It could help manage billing and even provide support between outpatient sessions by suggesting coping skills, ways to manage anxiety, and local resources. As the bots get better, they may eventually shoulder more of the burden of providing mental health care. But as Maxwell learned with Tessa, they are still no replacement for human interaction.
“I'm not suggesting we should go in and start replacing therapists with technologies,” Schueller says. Instead, he advocates for a therapist-tech collaboration. “The technology side and the human component—these things need to come together.”