Many leaders at top companies are trying to get workers to return to the office. They claim remote and hybrid work harm employees’ mental well-being, leading to social isolation, a sense of meaninglessness, and a loss of work-life boundaries, and that we should therefore all go back to office-centric work.
One example is Google, where the company’s leadership is defending its requirement of mostly in-office work for all staff as necessary to protect social capital, meaning people’s connections to and trust in one another. That’s despite a survey of over 1,000 Google employees showing that two-thirds feel unhappy about being forced to work in the office three days per week. In internal meetings and public letters, many have threatened to leave, and some are already quitting to go to other companies with more flexible options.
Last month, GM rolled out a policy similar to Google’s, but had to backtrack because of intense employee opposition. The same is happening in some places outside of the U.S. For instance, three-fifths of all Chinese employers are refusing to offer permanent remote work options, according to a survey this year from The Paper.
For their claims that remote work hurts well-being, some of these office-centric traditionalists cite a number of prominent articles. For example, Arthur Brooks claimed in an essay that “aggravation from commuting is no match for the misery of loneliness, which can lead to depression, substance abuse, sedentary behavior, and relationship damage, among other ills.” An article in Forbes reported that over two-thirds of employees who work from home at least part of the time had trouble getting away from work at the end of the day. And Fast Company has a piece about how remote work can “exacerbate existing mental health issues” like depression and anxiety.
For his part, author Malcolm Gladwell has also championed a swift return to the office, saying there is a “core psychological truth, which is we want you to have a feeling of belonging and to feel necessary…I know it’s a hassle to come into the office, but if you’re just sitting in your pajamas in your bedroom, is that the work life you want to live?”
These arguments may sound logical to some, but they fly in the face of research and my own experience as a behavioral scientist and as a consultant to Fortune 500 companies. In these roles, I have seen the pitfalls of in-person work, which can be just as problematic, if not more so. Remote work is not without its own challenges, but I have helped 21 companies implement a series of simple steps to address them.
Research finds that remote work is actually better for you
The trouble with the articles described above, and with the claims of traditionalist business leaders and gurus, stems from a sneaky misdirection. They decry the negative impact of remote and hybrid work on well-being, yet they gloss over the damage to well-being caused by the alternative: office-centric work.
It’s like comparing remote and hybrid work to a state of leisure. Sure, people would feel less isolated if they could hang out and have a beer with their friends instead of working. They could take care of their existing mental health issues if they could visit a therapist. But that’s not in the cards. What’s in the cards is office-centric work. That means the frustration of a long commute, at least eight hours at a desk in an often uncomfortable and oppressive open office, a sad desk lunch and unhealthy snacks, sometimes at an insane cost, and, for making it through this series of insults, the reward of more frustration on the commute back home.
So what happens when we compare apples to apples? That’s when we need to hear from the horse’s mouth: namely, surveys of employees themselves, who experienced both in-office work before the pandemic, and hybrid and remote work after COVID struck.
Consider a 2022 survey by Cisco of 28,000 full-time employees around the globe. Nearly 80 percent of respondents say that remote and hybrid work improved their overall well-being: that applies to 83 percent of Millennials, 82 percent of Gen Z, 76 percent of Gen X, and 66 percent of Baby Boomers. The vast majority of respondents felt that working remotely improved their work-life balance.
Much of that improvement stemmed from saving time due to not needing to commute and having a more flexible schedule: 90 percent saved 4 to 8 hours or more per week. What did they do with that extra time? The top choice for almost half was spending more time with family, friends and pets, which certainly helped address the problem of isolation from the workplace. Indeed, three-quarters of them report that working from home improved their family relationships, and 51 percent strengthened their friendships. Twenty percent used the freed up hours for self-care.
Of the small number who report that their work-life balance has not improved or has even worsened, the number one reason given is the difficulty of disconnecting from work. Still, 82 percent of all respondents report that working from anywhere has made them happier, and over half say that remote work decreased their stress levels.
Other surveys back up Cisco’s findings. For example, a 2022 Future Forum survey compared knowledge workers who worked full-time in the office, in a hybrid modality, and fully remote. It found that full-time in-office workers felt the least satisfied with work-life balance, hybrid workers were in the middle, and fully remote workers felt most satisfied. The same distribution applied to questions about stress and anxiety. A mental health website called Tracking Happiness found in a 2022 survey of over 12,000 workers that fully remote employees report a happiness level about 20 percent greater than office-centric ones. Another survey by CNBC in June found that fully remote workers are more often very satisfied with their jobs than workers who are fully in-person.
Academic peer-reviewed research provides further support. Consider a 2022 study published in the International Journal of Environmental Research and Public Health of bank workers who worked on the same tasks of advising customers either remotely or in-person. It found that fully remote workers experienced higher meaningfulness, self-actualization, happiness, and commitment than in-person workers. Another study, published by the National Bureau of Economic Research, reported that hybrid workers, compared to office-centric ones, experienced higher satisfaction with work and had 35 percent more job retention.
What about the supposed burnout crisis associated with remote work? Burnout is indeed a concern. A survey by Deloitte found that 77 percent of workers had experienced burnout at their current job. Gallup came up with a somewhat lower number, 67 percent, in its survey. But guess what? Both of those surveys are from 2018, long before the era of widespread remote work.
By contrast, in a Gallup survey in late 2021, 58 percent of respondents reported less burnout. An April 2021 McKinsey survey found burnout in 54 percent of Americans and 49 percent of workers globally. A September 2021 survey by The Hartford reported 61 percent burnout. Arguably, the increase in full- or part-time remote opportunities during the pandemic helped ease feelings of burnout rather than increasing them. That finding aligns with the earlier surveys and peer-reviewed research suggesting that remote and hybrid work improve well-being.
Remote work isn’t perfect – here’s how to fix its shortcomings
Still, burnout is a real problem for hybrid and remote workers, as it is for in-office workers. Employers need to offer mental health benefits with online options to help employees address these challenges, regardless of where they’re working.
Moreover, while they’re better overall for well-being, remote and hybrid work arrangements do have specific disadvantages around work-life separation. To address them, I advise the clients I’ve helped transition to hybrid and remote work to establish norms and policies centered on clear expectations and boundaries.
Some people expect their Slack or Microsoft Teams messages to be answered within an hour, while others check Slack once a day. Some believe email requires a response within three hours; others feel three days is fine. Because of this uncertainty about what’s appropriate, too many people feel unable to disconnect, replying to messages and doing work tasks after hours out of fear of not meeting their boss’s expectations or of letting their colleagues down.
To solve this problem, companies need to establish and incentivize clear expectations and boundaries. They should develop policies and norms around response times for different channels of communication. They also need to clarify work-life boundaries – for example, the frequency and types of unusual circumstances that will require employees to work outside of regular hours.
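Such norms can even be written down in executable form. The sketch below is purely illustrative (the channel names and response windows are hypothetical, not drawn from any client’s actual policy), but it shows how a team might encode per-channel response-time expectations so that deadlines are unambiguous:

```python
from datetime import datetime, timedelta

# Hypothetical per-channel response-time norms a team might agree on.
RESPONSE_WINDOWS = {
    "slack": timedelta(hours=4),
    "email": timedelta(hours=24),
    "urgent_page": timedelta(minutes=15),
}

def reply_due(sent_at: datetime, channel: str) -> datetime:
    """Return the agreed deadline for replying to a message on `channel`."""
    if channel not in RESPONSE_WINDOWS:
        raise ValueError(f"No norm defined for channel: {channel}")
    return sent_at + RESPONSE_WINDOWS[channel]

sent = datetime(2022, 9, 1, 9, 0)
print(reply_due(sent, "email"))  # 2022-09-02 09:00:00
```

With expectations written down like this, nobody has to guess whether an after-hours Slack message demands an immediate answer.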
Moreover, when working at home and collaborating with others, there’s sometimes an unhealthy expectation that once you start your workday in your home office chair, you’ll work continuously while sitting there (except for your lunch break). That’s not how things work in the office, where physical and mental breaks are built in throughout the day. You’d take 5-10 minutes to walk from one meeting to another, or go to fetch your copies from the printer and chat with a coworker on the way.
Those and similar physical and mental breaks, research shows, decrease burnout, improve productivity, and reduce mistakes. That’s why companies should strongly encourage employees to take at least a 10-minute break every hour during remote work. At least half of those breaks should involve physical activity, such as stretching or walking around, to counteract the dangerous effects of prolonged sitting. The rest should go to restorative mental activities, such as meditation, brief naps, or walking outdoors.
To facilitate such breaks, my client organizations such as the University of Southern California’s Information Sciences Institute shortened hour-long meetings to 50 minutes and half-hour meetings to 25 minutes, to give everyone – both in-person and remote workers – a mental and physical break and transition time.
Very few people will object to shorter meetings. Once that works, move on to other aspects of setting boundaries and expectations. Doing so means helping team members get on the same page, reducing conflict and tension. By setting clear expectations, you’ll address the biggest well-being challenge of remote and hybrid work: establishing clear work-life boundaries.
A Stomach Implant Saved Me. When Your Organs Fail, You Could Become a Cyborg, Too
Beware, cyborgs walk among us. They’re mostly indistinguishable from regular humans and are infiltrating every nook and cranny of society. For full disclosure, I’m one myself. No, we’re not deadly intergalactic conquerors like the Borg race of Star Trek fame, just ordinary people living better with chronic conditions thanks to medical implants.
In recent years there has been an explosion of developments in implantable devices that merge multiple technologies into gadgets that work in concert with human physiology for the treatment of serious diseases. Pacemakers for the heart are the best-known implants, as well as other cardiac devices like LVADs (left-ventricular assist devices) and implanted defibrillators. Next-generation devices address an array of organ failures, and many are intended as permanent. The driving need behind this technology: a critical, persistent shortage of implantable biological organs.
The demand for transplantable organs dwarfs their availability. There are currently over 100,000 people on the transplant waiting list in the U.S., compared to 40,000 transplants completed in 2021. But even this doesn’t reflect the number of people in dire straits who don’t qualify for a transplant because of things like frailty, smoking status and their low odds of surviving the surgery.
My journey to becoming a cyborg came about because of a lifelong medical condition called gastroparesis, characterized by pathologically low motility of the digestive system. Ever since my teens, I’ve had chronic problems with severe nausea. Flareups can be totally incapacitating and last anywhere from hours to months, interspersed with periods of relief. The cycle is totally unpredictable, and for decades my condition went undiagnosed or misdiagnosed by doctors who were not even aware that it existed. Over the years I was given whatever fashionable but inappropriate diagnosis was current at the time, and, not infrequently, a label of hypochondria.
One of the biggest turning points in my life came when a surgeon at the George Washington University Hospital, Dr. Frederick Brody, ordered a gastric emptying test that revealed gastroparesis. This was in 2009, and an implantable device, called a gastric pacer, had been approved by the FDA for compassionate use, meaning that no other treatments were available. The small device is like a pacemaker that’s implanted beneath the skin of the abdomen and is attached to the stomach through electrodes that carry electrical pulses that stimulate the stomach, making it contract as it’s supposed to.
Dr. Brody implanted the electrical wires and the device, and, once my stomach started to respond to the pulses, I got the most significant nausea relief I’d had in decades of futile treatments. It sounds cliché to say that my debt to Dr. Brody is immeasurable, but the pacer has given me more years of relative normalcy than I previously could have dreamed of.
I should emphasize that the pacer is not a cure. I still take a lot of medicine and have to maintain a soft, primarily vegetarian diet, and the condition has progressed with age. I have ups and downs, and can still have periods of severe illness, but there’s no doubt I would be far worse off without the electrical stimulation provided by the pacer.
Living with the gastric pacer is easy. In fact, most of the time, I don’t even know it’s there. It entails periodic visits with a surgeon who can adjust the strength of the electrical pulses using a wireless device, so when symptoms are worse, he or she can amp up the juice. If the pulses are too strong, they can cause annoying contractions in the abdominal muscles, but this is easily fixed with a simple wireless adjustment. The battery runs down after a few years, and when this happens the whole device has to be replaced in what is considered minor surgery.
Such devices could fill gaps in treating other organ failures. By far most of the people on transplant waiting lists are waiting for kidneys. Despite the fact that live donations are possible, there’s still a dire shortage of organs. A bright spot on the horizon is The Kidney Project, a program spearheaded by bioengineer Shuvo Roy at the University of California, San Francisco, which is developing a fully implantable artificial kidney. The device combines living cells with artificial materials and relies not on a battery, but on the patient’s own blood pressure to keep it functioning.
Several years into this project, a prototype of the kidney, about the size of a smart phone, has been successfully tested in pigs. The device seems to provide many of the functions of a biological kidney (unlike dialysis, which replaces only one main function) and reliably produces urine. One of its most critical components is a special artificial membrane, called a hemofilter, that filters out toxins and waste products from the blood without leaking important molecules like albumin. Since it allows for total mobility, the artificial kidney will provide patients with a higher quality of life than those on dialysis, and is in some important ways, even better than a biological transplant.
The beauty of the device is that, even though it contains kidney cells sourced, as of now, from cadavers or pigs, the cells are treated so that they can’t be rejected and the device doesn’t require the highly problematic immunosuppressant drugs a biological organ requires. “Anti-rejection drugs,” says Roy, “make you susceptible to all kinds of infections and damage the transplanted organ, causing steady deterioration. Eventually they kill the kidney. A biological transplant has about a 10-year limit,” after which the kidney fails and the body rejects it.
Eventually, says Roy, the cells used in the artificial kidney will be sourced from the patient himself, the ultimate genetic match. The patient’s adult stem cells can be used to produce some or all of the 25 to 30 specialized cells of a biological kidney that provide all the functions of a natural organ. People formerly on dialysis could drastically improve their functionality and quality of life without being tethered to a machine for hours at a time, three days a week.
As exciting as this project is, it suffers from a common theme in early biomedical research—keeping a steady stream of funding that will move the project from the lab, into human clinical trials and eventually to the bedside. “It’s the issue,” says Roy. “Potential investors want to see more data indicating that it works, but you need funding to create data. It’s a Catch-22 that puts you in a kind of no-man’s land of funding.” The constant pursuit of funding introduces a variable that makes it hard to predict when the kidney will make it to market, despite the enormous need for such a technology.
Another critical variable is whether and when insurance companies will decide to cover transplants of the artificial kidney, so that it becomes affordable for the average person. But Roy thinks that this hurdle, too, will be crossed. Insurance companies stand to save a great deal of money compared to what they ordinarily spend on transplant patients: the cost of yearly maintenance will be a fraction of the tens of thousands of dollars spent on immunosuppressant drugs and the attendant complications of a biological transplant.
One thing the multidisciplinary team of researchers behind The Kidney Project is still trying to establish is how long the artificial kidney will last once implanted in the body. Animal trials so far have examined how the kidney works over 30 days, and will soon extend to 90 days. Later studies will reach much further, but first the kidneys have to be implanted into people who can be followed over many years. Unlike the gastric pacer and other implants, though, there will be no need for periodic surgeries to replace a depleted battery, and the stark improvement in quality of life compared to dialysis adds a special dimension to the value of however long the kidney lasts.
Another life-saving implant could address a major scourge of the modern world—heart disease. Despite significant advances in recent decades, including the cardiac implants mentioned above, cardiovascular disease still causes one in three deaths across the world. One of the most promising developments in recent years is the Total Artificial Heart, a pneumatically driven device that can be used in patients with biventricular heart failure, affecting both sides of the heart, when a biological organ is not available.
The TAH is implanted in the chest cavity and has two tubes that snake down the body, come out through the abdomen and attach to a 13.5-pound external driver that the patient carries around in a backpack. It was first developed as a bridge to transplant, a temporary alternative while the patient waited for a biological heart to replace it. However, SynCardia Systems, LLC, the Tucson-based company that makes it, is now investigating whether the heart can be used on a long-term basis.
There’s good reason to think that this will be the case. I spoke with Daniel Teo, one of the board members of SynCardia, who said that so far, one patient lived with the TAH for six years and nine months, before he died of other causes. Another patient, still alive, has lived with the device for over five years and another one has lived with it for over four years. About 2,000 of these transplants have been done in patients waiting for biological hearts so far, and most have lived mobile, even active lives. One TAH recipient hiked for 600 miles, and another ran the 4.2-mile Pat Tillman Run, both while on the artificial heart. This is a far cry from their activities before surgery, while living with advanced heart failure.
Randy Shepard, a recipient of the Total Artificial Heart, teaches archery to his son.
If removing and replacing one’s biological heart with a synthetic device sounds scary, it is. But then, so is replacing one’s heart with a biological one. “The TAH is very emotionally loaded for most people,” says Teo. “People sometimes hold back because of philosophical, existential questions and other nonmedical reasons.” He also cites cultural reasons why some people may hesitate to accept an artificial heart, noting that some religions could frown upon it, just as they forbid other medical interventions.
The first TAHs that were approved were 70 cubic centimeters in size and fit into the chest cavities of men and larger women, but there’s now a smaller, 50 cc size meant for women and adolescents. The FDA first cleared the 70 cc heart as a bridge to transplant in 2004, and the 50 cc model received approval in 2014. SynCardia’s focus now is on seeking FDA approval to use the heart on a long-term basis. There are other improvements in the works.
One area being refined is the external driver that houses the pneumatic mechanism for moving blood through a patient’s body. The two tubes connecting the driver to the heart require openings in the skin that can get infected, and carrying the backpack is less than ideal. The driver also makes an audible sound that some people find disturbing. The next-generation TAH will be quieter and will involve wearing a smaller, lighter device on a belt rather than carrying a backpack. SynCardia is also working toward a fully implantable heart that wouldn’t require any external components and would contain an energy source that can be recharged wirelessly.
Teo says the jury is out as to whether artificial hearts will ever obviate the need for biological organs, but the world’s number one killer isn’t going away any time soon. “The heart is one of the strongest organs,” he says, “but it’s not made to last forever. If you live long enough, the heart will eventually fail, and heart failure leads to the failure of other organs like the kidney, the lungs and the liver.” As long as this remains the case and as long as the current direction of research continues, artificial organs are likely to play an ever larger part of our everyday lives.
Oh, wait. Maybe we cyborgs will take over the world after all.
A New Web Could be Coming. Will It Improve Human Health?
The Web has provided numerous benefits over the years, but users have also experienced problems related to privacy, cybersecurity, income inequality, and addiction that negatively impact their quality of life. In important ways, the Web has yet to meet its potential to support human health.
Now, engineers are in the process of developing a new version of the Web, called Web3, which would seek to address the Web’s current shortcomings through a mix of new technologies.
It could also create new problems. Technological revolutions, including new versions of the Web, come with trade-offs. While many activists focus on the negative aspects of Web3 technologies, they overlook some of the potential benefits to health and the environment that aren’t as easily quantifiable, such as less stressful lives, fewer hours required for work, and a higher standard of living. What emerging technologies are in the mix to define the new era of the digital age, and how will they contribute to our overall health and well-being?
In order to answer these questions, I have identified three major trends that may help define the future landscape of Web3. These include more powerful machine intelligence that could drive improvements in healthcare, decentralized banking systems that allow consumers to bypass middlemen, and self-driving cars with potential to reduce pollution. However, it is the successes of the enabling technologies that support these goals—improvements in AI, blockchain and smart contracts, and fog computing—that will ultimately define Web3.
Machine Intelligence and Diagnosing Diseases
While the internet is the physical network equipment and computers that keep the world connected, the Web is one of the services that run on the internet. In 1989, British scientist Tim Berners-Lee invented the World Wide Web and, when Web1 went live in 1991, it consisted of pages of text connected by hyperlinks. It remained that way until 2004 with the introduction of Web2, which provided social media websites and let users generate content in addition to consuming it passively.
For the most part, Web2 is what we still have today but, from the beginning, Berners-Lee, now an MIT professor, envisaged a much more sophisticated version of the Web. Known as the Semantic Web, it would not only store data, but actually know what it means. The goal is to make all information on the Internet “machine-readable,” so it can be easily processed by computers, like an Excel sheet full of numbers as opposed to human language. We are now in the early stages of the Semantic Web, which incorporates his vision. For example, there is already a cloud of datasets that links thousands of servers without any form of centralized control. However, due to the costs and technological hurdles related to converting human language into something that computers can understand, the Semantic Web remains an ongoing project.
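The gap between human language and machine-readable data is easy to see in miniature. The Semantic Web’s core data model represents facts as subject-predicate-object triples; the toy example below (the facts are hypothetical, and this is plain Python rather than an actual Semantic Web serialization such as RDF) shows why structured triples are trivially queryable where prose is not:

```python
# Subject-predicate-object triples: the core data model of the Semantic Web.
triples = [
    ("aspirin", "treats", "headache"),
    ("aspirin", "is_a", "drug"),
    ("headache", "is_a", "symptom"),
]

def query(triples, subject=None, predicate=None, obj=None):
    """Return every triple matching the fields that were specified."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

print(query(triples, predicate="treats"))  # [('aspirin', 'treats', 'headache')]
```

A sentence like “aspirin treats headaches” carries the same fact, but no program can answer “what treats headaches?” from it without first extracting the structure, which is exactly the costly conversion step the Semantic Web project faces.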
Currently, AI can perform only certain tasks, but it can already make healthcare business practices more efficient by leveraging deep learning to analyze supply-chain data. DeepMind, the company whose AI defeated world champions at the game of Go, has also made huge advances in predicting protein folding and misfolding, the latter of which is responsible for some diseases. For now, though, AI is not very useful for diagnosing and treating many complex diseases. This is because deep learning is probabilistic, not causal: it can capture correlation, but not cause and effect.
Like the Web, though, AI is evolving, and the limitations of deep learning could be overcome in the foreseeable future. A number of government programs and private initiatives are dedicated to better understanding human brain complexity and equipping machines with reasoning, common sense, and the ability to understand cause and effect. The Semantic Web could expand the impact of these new cognitive skills by feeding data to AI in more readily accessible formats. This will make machines better at solving hard problems such as diagnosing and treating complex diseases, which involve genetic, lifestyle, and environmental factors. These powerful AIs in the realm of healthcare could become an enduring and important feature of Web3.
Blockchain, Smart Contracts and Income Inequality
The Web2 version of the digital age certainly altered our lifestyles, both positively and negatively. This is predominantly because of the business model used by companies such as Meta (formerly Facebook) and Google. By providing useful products like search engines, these companies have lured consumers into giving away their personal data for free, and they use this information to detect buying patterns in order to sell advertising. The digital economy has made high-tech companies billions of dollars while many users became underemployed or jobless.
In recent years, a similar model has been emerging in the realm of genetics. Personalized genomics companies charge a relatively small fee to analyze a fraction of our genes and provide probabilities of having specific medical conditions. While an individual’s data is of little value on its own, cumulative data is helpful for deep learning, so these companies can sell anonymized DNA data to pharmaceutical companies for millions of dollars.
As these companies improve their ability to collect even more data about our genetic vulnerabilities, the technologies of Web3 could protect consumers from giving it away for free. An emerging technology called blockchain can provide a Web-based ledger of transactions with checks and balances that ensure its records cannot be faked or altered. It has yet to reach mass adoption, but the computer scientist Jaron Lanier has proposed storing our genomes and electronic health records in a blockchain, using electronic smart contracts between individuals and the pharmaceutical and healthcare industries. Micropayments could then be made to individuals for their data, using cryptocurrency.
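The tamper-evidence that makes such a ledger trustworthy comes from chaining: each block stores a hash of its own record together with the previous block’s hash, so altering any earlier entry invalidates every hash after it. A minimal sketch of that core idea (real health-data ledgers are vastly more elaborate, and the records here are invented):

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    chain, prev = [], "0" * 64  # genesis hash
    for record in records:
        h = block_hash(record, prev)
        chain.append({"record": record, "prev": prev, "hash": h})
        prev = h
    return chain

def verify(chain) -> bool:
    """Recompute every hash; any edited record breaks the chain."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block_hash(block["record"], prev) != block["hash"]:
            return False
        prev = block["hash"]
    return True

ledger = build_chain([{"donor": "anon-1", "payment": 0.05},
                      {"donor": "anon-2", "payment": 0.05}])
print(verify(ledger))                  # True
ledger[0]["record"]["payment"] = 500   # tamper with an early entry
print(verify(ledger))                  # False
```

Production blockchains add distributed consensus on top of this, so no single party can quietly rewrite the chain, which is what the “checks and balances” above refer to.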
These individual payments could become more lucrative in the coming years, especially as researchers learn how to fully interpret and apply a person’s genetic data. In this way, blockchain could help reduce income inequality, which currently drives health problems and other challenges for many. A number of start-ups are already using this business model, which secures data and eliminates middlemen who create no value, while compensating individuals who contribute their health data and protecting their privacy.
Autonomous Vehicles, Fog Computing and Pollution
A number of trends indicate that modernizing the transportation industry would address a myriad of problems with public health, productivity and the environment. Autonomous vehicles (AVs) could help usher in this new era of transportation, and these AVs would need to be supported by Web3 technologies.
Road traffic accidents kill roughly 1.3 million people worldwide each year, according to the World Health Organization, and are the leading cause of death among people aged five to 29. Some estimates suggest that replacing human drivers with AVs could eliminate as many as a million global fatalities annually. Shared AVs would help to reduce traffic congestion that wastes time and fuel, and electric vehicles would help minimize greenhouse gases.
To reap these benefits, societies will need an infrastructure that enables self-driving cars to communicate with each other. Most data processing in computers uses the von Neumann architecture, in which the data memory and the processor sit in different places; today, that often means cloud computing. When an AV’s cameras and sensors generate data to detect objects on the road, processors need to analyze that data and make real-time decisions about acceleration, braking, and steering. Cloud computing, however, is susceptible to latency.
One solution to latency is moving processing and data storage closer to where it is needed to improve response times. Edge computing, for example, places the processor at the site where the data is generated. Most new human-driven vehicles contain anywhere from 30 to 100 electronic control units (ECUs) that process data and control electrical systems in vehicles. These embedded systems, typically in the dashboard, control different applications such as airbags, steering, brakes, etc. ECUs process data generated by cameras and sensors in AVs and make crucial decisions on how they operate.
Self-driving cars can benefit by communicating with each other for navigation in the same way that bacteria and animals use swarm intelligence for tasks involving groups. Researchers are currently investigating fog computing which utilizes servers along highways for faster and more reliable navigation and for communicating data analytics among driverless cars.
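The underlying trade-off is simple: the farther away the processor, the longer the round trip. The toy sketch below, with entirely made-up latency numbers, illustrates how a vehicle might choose among onboard (edge), roadside (fog), and cloud processing under a real-time budget:

```python
# Hypothetical round-trip times, in milliseconds, from a moving car to
# candidate processing sites (all numbers here are invented).
latencies_ms = {
    "onboard_ecu": 1,        # edge: processor inside the vehicle
    "roadside_fog_node": 8,  # fog: server along the highway
    "regional_cloud": 95,    # cloud: distant data center
}

def best_site(latencies, budget_ms):
    """Pick the lowest-latency site that still meets a real-time budget."""
    within = {site: ms for site, ms in latencies.items() if ms <= budget_ms}
    return min(within, key=within.get) if within else None

# A braking decision with a 10 ms budget must stay at the edge or in the fog;
# the 95 ms cloud round trip is simply too slow.
print(best_site(latencies_ms, budget_ms=10))  # onboard_ecu
```

In this picture, fog nodes matter for tasks too heavy for the car’s own ECUs but too latency-sensitive for the cloud, such as sharing analytics among nearby vehicles.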
The Future Landscape of Web3 is Uncertain
The future of Web3 has many possibilities. However, there is no guarantee that blockchain, smart contracts, and fog computing will achieve public acceptance and market saturation or prevail over other technologies or the status quo of Web2. It is also uncertain if or when the breakthroughs in AI will occur that could eradicate complex diseases through Web3.
An example of this uncertainty is the metaverse, which combines blockchain with virtual reality. Currently, the metaverse is used primarily for gaming and recreation while its infrastructure is further developed. Researchers are interested in the long-term mental health effects of virtual reality, both positive and negative. Using avatars, or virtual representations of humans, users have greater control over their environment and chosen identities in the metaverse, but it is unclear what negative mental health effects will emerge. As far as regulation goes, the metaverse is still in its Wild West stage, and bullying, or worse, will likely take place. And according to Meta’s Zuckerberg, there will come a point where virtual worlds like the metaverse become so immersive that we won’t want to leave them.
The metaverse relies on virtual reality technology that was developed many years ago, and adoption has been slower than some experts predicted. But most emerging technologies, including others related to Web3, follow a similar, nonlinear pattern of development, which analysts such as Gartner have represented graphically as an S-curve. To develop a technology forecast for Web3, you can follow progress along the curve from proof of concept toward a particular goal. Through a series of successes and failures, entrepreneurs will keep improving their products until each emerging technology either fails or achieves mainstream adoption.
What mix of emerging technologies ultimately defines Web3 will likely be determined by the benefits they provide to society—including whether and how they improve health—how they stimulate the digital economy, and how they address the significant shortcomings of Web2.