Deepa Kundur is the Chair of the Edward S. Rogers Sr. Department of Electrical & Computer Engineering at the University of Toronto.
Working in cybersecurity is like playing a game of chess. On one side, we have bad actors—criminal groups, disgruntled employees or internet trolls—trying to hack into power grids, electric car networks and election databases. On the other, groups like mine build computing-based mechanisms to protect critical systems from cyberattacks. The war over cybersecurity is as old as technology itself, but artificial intelligence is changing the game at an eye-popping rate.
Historically, it was harder to hack into systems like power grids, because they were not connected to widespread communication infrastructure and had their own unique operating systems, sensors and communication channels. To launch a cyberattack on a power grid, malware would penetrate the system through its information-technology side, reside there unnoticed for years, then send data about the operational system back to attackers at a remote command-and-control centre, who would then launch a targeted attack.
For example, in March of 2022, in the early days of the Russian invasion of Ukraine, Russian hackers tried to bring down Ukraine’s power grid, which could have caused a blackout for two million people. It turns out they had also broken into the grid in 2015 and 2016. On a smaller scale, cyberattacks on electric-vehicle chargers have emerged in the United States; such chargers could serve as entry points into entire electrical grids and be used to orchestrate blackouts.
These large cyberattacks take investment and resources, so they are still relatively rare. AI is making the attack process much stealthier. AI-powered malware can hack into a critical-infrastructure computer system, explore and learn about it independently and launch its attack on its own, with no remote operator directing the reconnaissance. This technology is making hacking much quicker and easier, and it will lead to far more assaults on our computer systems.
In the near future, nation states could create AI-based malware and deploy it through a supply-chain attack, installing it in an industrial device at the manufacturing site. From there, the malware would duplicate itself and independently explore the industrial control system, finding existing vulnerabilities at a rapid pace. It could then identify new vulnerabilities before system operators even know they exist. Even if the malware is discovered and removed, duplicate copies might remain in the system and begin the search for vulnerabilities again in a never-ending cycle. Part of what protected systems in the past was that a human operator would patch them, rendering malware obsolete. However, AI agents continually learn, grow, adapt and become more intelligent. We must build systems to be resilient, not only to outside attacks but to internal ones as well.
Research groups like mine are hard at work making our critical infrastructure attack-resistant and resilient. Thankfully, the good guys have home-court advantage, because we’re the ones who created the systems; hackers have the burden of learning how to break into them. Better still, we can also leverage AI. We use anomaly-detection systems that rely on deep learning to spot suspicious activity within a system, such as increased traffic in a communication channel. We also develop honeypots—simulated online systems resembling vital control centres, like an entrance to a power grid—designed to attract potential attackers and teach us how they operate. The emerging landscape is AI versus AI, with both sides constantly trying to outsmart each other.
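To give a sense of how anomaly detection works, here is a minimal sketch in Python. It uses a simple statistical test rather than the deep-learning models such systems actually deploy, but the core idea is the same: learn a baseline of normal channel traffic, then flag readings that deviate sharply from it. All traffic figures, names and thresholds here are hypothetical.

# Illustrative sketch only: a statistical stand-in for the deep-learning
# detectors described above. All traffic figures and thresholds are hypothetical.
from statistics import mean, stdev

def flag_traffic_spikes(baseline, live_readings, threshold=3.0):
    """Return live readings that deviate sharply from the learned baseline."""
    mu = mean(baseline)              # typical traffic level under normal operation
    sigma = stdev(baseline) or 1e-9  # typical variation (guard against a flat baseline)
    anomalies = []
    for minute, packets in enumerate(live_readings):
        z_score = (packets - mu) / sigma  # how many "normal variations" away is this reading?
        if abs(z_score) > threshold:
            anomalies.append((minute, packets, round(z_score, 1)))
    return anomalies

# Traffic observed during normal operation, in packets per minute.
baseline = [98, 102, 97, 105, 99, 101, 103, 96, 100, 104]
# Live measurements, including the kind of sudden surge an intrusion can cause.
live = [101, 99, 350, 97, 102]
print(flag_traffic_spikes(baseline, live))  # flags the spike to 350 at minute 2

A real deployment would replace the statistical test with a model trained on months of operational data, but the design choice is the same: define "normal" from observation, and treat sharp departures from it as signals worth investigating.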
Cyberattacks will affect everyone, and widespread literacy around them is crucial. Any business with a computer system will need professionals who understand its critical infrastructure and can learn to detect attacks. In the past, a power engineer needed to worry about the reliability of power delivery; now they have to be aware of potential cybersecurity issues as well. Schools must begin to teach children not only how to be physically safe but also how to be safe in the online world. For starters, people should be wary of any message that asks for information. Multi-factor authentication might be a pain, but it’s worth it. Be careful what you click on, and be intentional about the websites you visit.
In recent months, ChatGPT has demonstrated how quickly an AI can learn and improve. Applying that learning curve to hacking creates some scary scenarios for the very near future. Luckily for us, an AI cannot teach itself to launch a cyberattack as quickly as it learned to write an essay, because there are far fewer data points about critical-infrastructure operations for it to train on. Still, we must take cybersecurity seriously, or risk these hacks becoming commonplace and wreaking havoc we cannot yet fully predict.
We reached out to Canada’s top AI thinkers in fields like ethics, health and computer science and asked them to predict where AI will take us in the coming years, for better or worse. The results may sound like science fiction—but they’re coming at you sooner than you think. To stay ahead of it all, read the other essays that make up our AI cover story, published in the November 2023 issue of Maclean’s.