Interview

Mark Daley

The Future of AI

When ChatGPT was launched in 2022, "artificial intelligence" suddenly jumped from what most of us knew as a weird Hollywood sci-fi thing to a freaky new reality.

As a mathematician and computer scientist, Mark Daley was always interested in theories of neural computation. During a sabbatical in 2012, he went back to school to complete a master’s degree in neuroscience to better understand the biological aspects of intelligence and learning.

He then moved into academic research administration, working as vice-president, research at the Canadian Institute for Advanced Research (CIFAR). That group oversees the Pan-Canadian Artificial Intelligence Strategy, and also runs a research program called Learning in Machines & Brains, which, at the time, was co-directed by Yann LeCun, chief AI scientist at Meta, and Yoshua Bengio of the University of Montreal.

“What a privilege it was to be a fly on that wall,” Daley says.

A Western professor since 2004 (and a double Western degree holder: BSc'99, PhD'03), he was appointed chief digital information officer in September 2022. When the university took the historic step of creating an AI-focused role within its senior executive team, Daley was the obvious choice. In October 2023, he began a five-year stint as chief AI officer, believed to be the first such position at any North American university. Daley spoke with Patchen Barss and Keri Ferguson about how quickly things are moving.


In your conversations with faculty, students and staff, how do you find people are reacting to these transformative tools?

Well, let me say this first. Even though I’ve been involved in neural computation research for most of my career, I didn’t imagine AI getting to the point it is at right now within my lifetime, or I thought it maybe would happen after I retired. We are way further ahead than I thought we would be. So I’m sympathetic to people who aren’t researchers in this area who feel this came out of nowhere. There is a lot of fear and doom generated in the media, but I’m an optimist. I do feel a sense of obligation, because this is an important moment in history and we—all of us—have an opportunity to help push toward making good decisions for humanity.

In terms of my interactions with people, I’ve seen a huge range of responses. Some people are so excited. “Finally, I can talk to my computer!” Others find it all a bit creepy, especially with something as sophisticated as GPT. People say it’s kind of gross that the machine’s pretending to be a human. Some people have a deeper, existential response: “What does it mean that this tool can simulate a human that well? Maybe I’m less special than I thought I was.” These are good questions we as humanity are going to be grappling with in a very serious way.

What is your read on “The Terminator Problem”—the idea that we might not only lose control of AI, but that it might someday control us?

Right now, the idea of killer robots taking over is speculative. We should take it seriously and have those conversations. But the existential threats are not yet real or manifest; they are only possibilities.

I’m more interested in immediate realities. Recently, [AI research laboratory and Google subsidiary] DeepMind announced they had discovered a new class of antibiotics that could save tens of thousands of lives.

At the same time, we have deepfakes that are good enough to disrupt elections. A fake Joe Biden robocall told New Hampshire voters not to vote in the primary. People who are accustomed to accepting videos, photos and audio as truthful have to learn to treat them with the same skepticism they would apply to written text.

These are real, immediate, non-speculative public goods and harms.

How do you see these goods and harms shaping Western’s approach to AI?

There's no standardized playbook for generative AI like ChatGPT, which can create original writing, visual images and other content. How best to use it varies hugely across disciplines. We can't say, "Here are the five things Western's going to do in AI." We want a huge bottom-up component, starting with individual experimentation. People need to be empowered and given time and space to experiment with these tools and see how they affect their day-to-day work. A music history professor will use them differently than their colleagues in research finance, for example.

Our students are going to be living and competing in a world where everyone knows how to use these tools to maximize their personal impact. They have this force multiplier now that might allow them to do 10 times what they could before. Our students want to learn how to use these tools intelligently and ethically. We’re already seeing curriculum change at the level of the individual instructors to reflect these new opportunities for students.

Can you elaborate on the idea of AI as a force multiplier?

Some companies are doing major layoffs, thinking they’re going to replace humans with AI. I think that’s a mistake. That day might come, but this is not that day.

Other companies are going to keep their human cohort and have them collaborate with AI in everything they do. Humans still do all the things they excel at, and now they’re augmented by technology. AI reduces or eliminates intellectual drudgery, automating repetitive correspondence and bureaucratic requirements, and frees people to focus on other challenges.

A lot of people are surprised how creative AI can be as a brainstorming partner. It knows things you don’t. You get into a brainstorming session with it, pull on a couple of threads and realize there’s a whole extra field of study out there you didn’t even know existed.

I don’t see AI replacing people. I see AI augmenting what people do. In the history of human technological innovation, every time we’ve invented a new technology, it has ended up creating more jobs than it took away.

There are fundamental issues of trust, and what AI is good at and not good at is still being explored. There's still a role for humans in exercising judgement and oversight. As a university, we need to be part of the broader societal discussion about what that means and how we adapt.

Aside from threats, and aside from creating a “force multiplier” for existing roles, what do you imagine we can do with AI that is truly new?

There are already multiple instances of research papers primarily written by generative AI. It is accelerating research.

On the learning side, we have a personalized tutor that’s good and getting better. It doesn’t replace professors or TAs, but augments them by being available 24/7, and by having infinite time and patience. A student can ask it the same question 20 times or ask it to try teaching another way and it will do it. This transforms how we deliver education.

We’ve talked about the groundbreaking nature of your role. Do you think this position is a sign of things to come at other universities?

Creating this role shows real vision on the part of Western's leadership, and we're starting to see it emerge as a trend in higher education. Similar roles already exist in industry. But I'm curious about the permanency of these positions. Do you need a chief AI officer in perpetuity? I don't think you do. Eventually AI is going to be everywhere. Right now, a big organization like Western needs someone to help coordinate that transformation and guide the adoption process. But ultimately, adoption is happening on the front lines. Once the transformation is done, I can go back to teaching and research full time.

But right now, we need to be taking a leadership role in AI. This technology is going to transform society in ways other technologies haven’t. It’s being compared to the internet and the steam engine. Those are legitimate comparisons, but I think this is even bigger. I think this is more like the discovery of fire.

How have you used AI for your own brainstorming?

When I'm writing a grant, I will think of someone famous who I absolutely do not want to review the grant, because I know they would rip it to shreds. Who am I most scared of reading my grant? And I'll tell GPT, "You are [philosopher and cognitive scientist] Daniel Dennett. Please review my grant in the voice of Dan Dennett." And it'll work.

It's still my ideas, but it is so useful for finding connections I'd have missed, and for improving and refining my work.
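
(For readers who want to try the persona-review trick Daley describes, it amounts to a single system prompt. Below is a minimal sketch using the OpenAI Python SDK; the model name, file name and persona wording are illustrative assumptions, not Daley's exact setup.)

```python
# Minimal sketch of a "scariest reviewer" persona prompt.
# Assumes the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY in the environment; model and file are placeholders.
from openai import OpenAI

client = OpenAI()

# Hypothetical draft file -- substitute your own text.
with open("grant_draft.txt") as f:
    grant_draft = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; any capable chat model works
    messages=[
        {
            "role": "system",
            "content": (
                "You are the philosopher and cognitive scientist "
                "Daniel Dennett. Review the following grant proposal "
                "in his voice: rigorous, skeptical and direct."
            ),
        },
        {"role": "user", "content": grant_draft},
    ],
)

print(response.choices[0].message.content)
```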

Do you think these large language models know what they're talking about?

I think they don’t think exactly the same way we do. But they do something close enough to cognition that I’m happy to call it artificial cognition. That’s what’s so exciting about them. They are trained on existing documents and existing inputs, and they synthesize them into something new. They’re more than just a big lookup table.

What has been your first priority as you’ve settled into this new role at Western?

My first priority has been to listen. I need to understand what’s happening with AI at Western, and where it’s going. It’s such a broad and deep field, and there’s so much exciting work happening. I need to talk to people, listen to what they’re excited about, and what their concerns are. Then we’ll work together to shape our strategy.

Was this rapid advancement of AI technology something you would have expected?

No, I don’t think anyone could have predicted it. The pace of advancement in AI has been breathtaking. It’s been driven by huge investments from industry and government, and by rapid advances in computing power. These advances have opened up so many exciting possibilities, but they also bring challenges, like how to ensure that AI is developed and used responsibly.

AI is trained on the internet, which has its own biases. How do we manage and mitigate that?

Bias in AI is a significant concern, and it’s something we need to address. One way to mitigate bias is to ensure the teams developing AI systems are diverse and inclusive, so that different perspectives are represented. We also need to be transparent about how AI systems are trained and evaluated, so that biases can be identified and corrected.

What about misinformation?

Misinformation is another important challenge. AI can be used to create and spread misinformation at scale, which can have serious consequences for individuals and society. We need to develop tools and techniques to detect and combat misinformation, and to promote media literacy and critical thinking skills so that people can evaluate information critically.

How is Western positioned to take on and contribute to these important discussions about AI?

Western has a strong tradition of research and teaching in AI, and we have world-class researchers in a wide range of AI-related fields. We also have strong connections with industry and government, which will be important for addressing the ethical, legal, and social implications of AI. I’m excited to work with our faculty, students, and partners to help shape the future of AI at Western and beyond.

How are you working with professors?

My approach is to collaborate closely with them and help them identify opportunities to integrate AI into their pedagogy and curriculum. This could involve developing new courses or modules on AI, incorporating AI tools and techniques into existing courses, or providing training and support for faculty who want to use AI in their teaching. Ultimately, our goal is to ensure all Western students have the opportunity to learn about AI and its applications, regardless of their field of study.

You seem invigorated about the possibilities of AI and taking on this role.

I am invigorated! AI has the potential to revolutionize almost every aspect of our lives, from healthcare and education to transportation and entertainment. It’s an incredibly exciting time to be working in this field, and I feel privileged to have the opportunity to help shape its future. I’m looking forward to working with our faculty, students, and partners to unlock the full potential of AI and ensure that it benefits everyone.


Interview has been edited for length and clarity.