On Jan. 1, 1983, the U.S. Department of Defense’s Advanced Research Projects Agency deployed the Transmission Control Protocol/Internet Protocol (TCP/IP), which allowed distant computers to communicate easily with each other. Universities and other research institutions quickly adopted TCP/IP, and the “Internet” was born.
Early users found the experience breathtaking. Despite glitchy connections, text-only interfaces, slow data speeds and many other dehumanizing factors, online communities, friendships and romances sprang up. Interactions could be intimate, liberating and intense.
The combination of anonymity and the ability to connect with like-minded people anywhere in the world was a boon for queer people, activists, cultural diasporas, and members of other scattered or discriminated-against groups.
Unfortunately, it was just as freeing for fascists, terrorists, predators and other perpetrators of violence and bigotry.
“The idea of bringing people together has been a trope that boosters of every electronic media technology of the last 200 years have used. Somebody says, ‘This technology is going to bring the world together. It’s going to produce world peace. It’s going to create understanding,’” says Luke Stark, a historian of computer-mediated social interactions in Western’s Faculty of Information & Media Studies. “And it just doesn’t. For instance, one of the ways digital media technologies bring people together is through fostering extremely strong ‘negative affinity bonds’ against other groups.”
Forty years later, technology and society have both evolved nearly beyond recognition. Yet the same conflicts linger. Can the billions of devices now connected to a planet-wide mesh of cable and satellite signals knit people together into a stronger social fabric? Or are technology-mediated social interactions inherently divisive and dangerous?
People going online for the first time tend to be optimistic they will feel less alone, not more.
“If you look at the way young people are adopting digital technologies and social media apps, they’re still energetic and hopeful,” says Kaitlynn Mendes, associate professor of sociology and Canada Research Chair in Inequality and Gender. But unalloyed excitement doesn’t last long. “Often they’re like, ‘Yeah, I joined Instagram or Snapchat and I thought it was going to be amazing. And then all of a sudden, I started getting dick pics or weird messages.’ They often seem genuinely surprised that the Internet isn’t a safe, open, welcoming space.”
Some users tune out the worst aspects of their online experience, while others instinctively fade into the background, lurking online rather than risking attention.
Withdrawing, though, comes with its own risks.
“Simply scrolling through social media will not help reduce loneliness. People who use their cellular telephones for the purpose of actually communicating with others are less lonely,” says Julie Aitken Schermer, BA’92, professor of psychology and management and organizational studies. She researches how personality, intelligence and other factors influence technology-related loneliness.
“The problem with online situations is that an individual can easily avoid interacting with others. True, an individual can stay in a corner of a room at an in-person gathering, but it is easier to hide online.”
With seemingly every online interaction carrying the risk of unwanted attention, it’s not easy to build the trust and safety needed for meaningful connections.
Mendes is currently working on a book for parents about preparing kids for digital life. She says risk mitigation should start long before a child’s first day online and continue long after.
“Think of digital technologies as you would a car. It’s an incredibly powerful device. But you have to recognize as soon as you introduce this technology, you’re introducing risk. Not necessarily harm, but risk,” she says. “You would never just hand over your car keys to your kids. We spend years preparing them before they’re fully autonomous and driving on their own.”
And while cars keep introducing new safety features, navigating the information superhighway seems to come with continual new threats.
Mendes sees the challenges as daunting and complex, but not intractable.
“Right now my focus is on talking to young people. What support are they getting and what kind of support do they want? We don’t even have that basic level of information. We need to involve them and make sure their lived experience is reflected in the curriculum, in conversations with parents or teachers and with legislators and policy makers. We start from there and build up.”
While systemic changes such as structural improvements, regulation and moderation come slowly, individuals and communities still try to make the most of these flawed online spaces.
“The systems and the structures are set up in a way that it makes it difficult for anyone to just have an unfiltered, positive experience,” Mendes says. Some people use anonymized “alt” accounts or private messaging to shield themselves from personal attacks. Some tune out or laugh off the onslaught of bots and creeps. Others harness the technology to create more positive connections.
“In terms of loneliness, I actually think many of these experiences can bring people together. You can find lots of groups like Bye Felipe where girls and women can share and call out hostile men on dating sites. People still look for—and find—community despite negative online experiences. I have seen both the good and the bad—and I am not yet ready to give up.”
Then there is artificial intelligence, better known as AI. Once something only computer scientists understood and science fiction writers imagined, AI’s capabilities and possibilities are now both fascinating and anxiety-inducing.
This tsunami-like transformation is why Western appointed Mark Daley as its first chief AI officer in October 2023.
“Comparing the onset of AI to the internet or even the steam engine is legitimate, but I think it’s even bigger. It’s more like the discovery of fire,” says Daley, a Western professor and alum (BSc’99, PhD’03) whose position is also the first of its kind at an academic institution in North America.
The debut of OpenAI’s ChatGPT chatbot in November 2022 was a game changer on a number of fronts, but how will AI impact the ways in which we connect with one another?
“Sooner than anyone thinks, we’re going to be dealing with AI entities that are functionally indistinguishable from humans. Even with the current immature state of this technology, people are forming serious attachments to chatbots,” says Daley, who is also an AI researcher and respected leader in neural computation, a branch of AI in which computers are taught to process information in ways modeled on the human brain.
He looks at the development of human-AI relationships as a two-sided coin.
“Sure, AIs may be friendly, fun to interact with, and compliant, but what if they subtly bias the human towards the views of the AI’s creators? That is a slippery slope. On the other hand, there is an epidemic of loneliness in much of the world. So, what if these technologies can offer compassion, support and joy to those who would not otherwise have it?”
This technology is here, it’s progressing at warp speed, and it’s changing our lives. So how can society ensure these changes are as positive as possible?
“We—universities, governments, businesses and users—need to continue to step back, consider, judge, decide what is useful and good and restrict when we need to. There is a lot of AI fear and doom generated in the media, but I’m an optimist. This is an important moment in history and we—all of us—have an opportunity to help push toward making good decisions for humanity when it comes to AI and all digital technology. We have a responsibility to constantly explore how we use it to do good and how we can keep it from causing harm.”