Technology & Innovation

Artificial Intelligence & Cybersecurity: Balancing Innovation, Execution and Risk

September 09, 2021

Global

Michael Paterra

Senior Manager, Policy & Insights

Michael Paterra is a senior manager on the Policy & Insights team at Economist Impact. Michael leads research programs for foundations, governments, nonprofits and corporates seeking evidence-based analysis to inform policy recommendations and strategy development. He specializes in the intersection of security, health, migration and the environment. At Economist Impact he leads research on a number of benchmarking indexes, including the Global Health Security Index, a 195-country study on epidemic and pandemic preparedness. Michael previously specialized in global labor market research and international labor statistics at The Conference Board. He holds a master's degree in International Political Economy and Development from Fordham University and a bachelor's degree in economics and political science from the University of Delaware.

The COVID-19 pandemic has accelerated digital transformation across industries, delivering efficiency gains but also exposing organizational networks to new risks as technology adoption rises and more employees work remotely. As a result, there has been a rapid uptick in the number of cyberattacks, ranging from mundane efforts to gather valuable business and personal information to highly sophisticated attacks on critical infrastructure. At the same time, the rise of artificial intelligence (AI) across industries presents both an opportunity and a challenge for organizations as they look to leverage the technology to improve their cyber defenses. If adopted and monitored properly, AI can serve as a key competitive differentiator in the success of cybersecurity programs.
This report explores perceptions around the intersection of AI and cybersecurity. It finds that organizations are aware of the opportunities in this regard but also of the potential negative consequences of being overly reliant on AI to protect themselves. The key findings are:
  • AI can enhance cybersecurity. It does this primarily by automating threat detection, processing large volumes of data and identifying anomalies around the clock, even as human expertise continues to play an important role. A hybrid approach may provide the best of both worlds; however, control of an organization's AI cybersecurity systems should be restricted to a few highly trusted people.
     
  • AI can introduce cybersecurity weaknesses. Despite its many benefits, AI is not a silver bullet: organizational governance and policies continue to play a key role in strengthening cybersecurity. This is partly because of a nascent but potentially growing threat landscape in which malicious actors use AI to penetrate weak systems or exploit the complexities of cybersecurity systems that rely on AI.
     
  • Regulatory compliance comes to the forefront. Data privacy and transparency are no longer mere buzzwords: companies must comply with extensive regulations and build trust among customers, regulators and the public. Compliance can be particularly challenging for US-based companies, given varying rules across states and the need to adopt international practices when operating in regions such as Europe.
     
  • There is hope that an international consensus on AI principles will also lead to global cybersecurity agreements. The lack of common norms and principles related to cybersecurity has long been a sticking point for global agreements. AI may change that, as the G20 has adopted shared principles on the use of the technology, a nascent effort that may pave the way for further agreements.
