
The double-edged sword of AI in cybersecurity – Highlights from the ENISA AI Cybersecurity Conference

The ENISA AI Cybersecurity Conference took place on 7th June, and here are the key highlights:

AI is a double-edged sword in cybersecurity: it enables better and more advanced prevention of cyberattacks, but at the same time it enables cybercriminals to advance their attack methods and vectors.

Generative AI by Huub Janssen (Rijksinspectie Digitale Infrastructuur)

  • AI solutions derived from LLM foundation models are evolving at an extremely fast pace. Likewise, new AI-enabled threats are emerging at a similar pace, and traditional security controls, measures, and solutions are not able to keep up with them. A paradigm shift in approach is required to deal with AI-enabled threats.

Cyber Security of AI: Technical Challenges & Opportunities by Dr. Henrik Junklewitz (DG Joint Research Centre, European Commission)

  • Current technologies are overwhelmed and ineffective at preventing the misuse of AI for nefarious activities; securing the AI development lifecycle is possible, but extremely complex.

AI and Cyber Security Research & Innovation in Europe by Dr. Gianluca Misuraca (AI4Gov and Inspiring Futures)

  • A comprehensive overview of the state of cybersecurity at the EU level is lacking, making it difficult to identify gaps and shortcomings, although ENISA is leading the way towards this goal. AI adds to the complexity, and the race is on to stay ahead of the game.
  • The EU faces a shortage of 300K–500K cybersecurity professionals, and the gap is even more dire when combined AI and cybersecurity skills are included.
  • There is a clear need to extend cyber security capabilities with AI and to strengthen the association between them.

AI Cybersecurity Trends: Opportunities and Threats for Research and Innovation by Dr. Rafael Popper (Futures Diamond and Finland Futures Research Centre of the University of Turku)

  • AI presents immense opportunities as well as risks for cyber security.
  • Human expertise and creativity, coupled with collaboration and interaction, remain vital to the core of addressing both opportunities and threats with AI-driven cyber security.

Artificial Intelligence Act by Tatjana Evas and Antoine-Alexandre Andre (European Commission)

  • The EU Artificial Intelligence Act adopts a risk-based and horizontal approach to regulating artificial intelligence and defines requirements on foundation models such as large language models and generative AI.
  • It was endorsed by sub-committees of lawmakers in the EU Parliament in May 2023, with a final vote by the whole Parliament expected (at the time of this write-up) during the mid-June session.

Trustworthiness of AI by Prof. Nineta Polemi (University of Piraeus)

  • AI system threats can be classified into 3 main categories:
    • Technical (e.g. loss of accuracy, reliability, robustness)
    • Socio-technical (e.g. loss of explainability, bias, security, transparency)
    • Guiding Principles (e.g. loss of accountability, reliability, traceability)
  • The challenge is to accurately identify, measure, and report on the socio-technical and guiding-principles threats, as there are currently no widely accepted common standards and models.
  • Further collaboration is needed with social scientists, behavioural scientists, and other experts beyond legal experts to determine the basis of the risks that need to be managed to create trustworthy AI.

Security of AI and ML by Prof. Isabel Praça (Instituto Superior de Engenharia do Porto (ISEP))

  • The accuracy of AI can be improved through clear metrics.
  • MITRE ATLAS™ (Adversarial Threat Landscape for Artificial-Intelligence Systems) is a knowledge base of adversary tactics, techniques, and case studies for machine learning (ML) systems.
  • NIST has developed the AI Risk Management Framework (AI RMF) to better manage risks to individuals, organizations, and society associated with artificial intelligence.

Telecommunications Industry Perspective on AI Properties by Ewelina Szczekocka (Orange Innovation Poland)

  • All aspects of AI, including IT, legal, business, human, and social, need to be considered.
  • Benchmarking is very important for telecommunications.
  • AI is present in the telecommunications network supply chain processes, from network optimization to predictive maintenance of sites, devices, etc., to customer service enhancements.

Cybersecurity Certification of AI by Dr. Xavier Valero Gonzalez (Dekra)

  • Cybersecurity certification would improve the trust of users and consumers in AI technology.
  • Prerequisite certification components include methodologies that can: 1) audit AI processes, assess their vulnerabilities, and apply corrective measures; 2) measure the robustness of machine learning models against various attack vectors; 3) enable risk assessment and mitigation steps; 4) establish AI life cycle management systems.
  • Proper AI cybersecurity certification schemes are currently lacking, yet they are much needed for high-risk AI systems.

Enabling Digital Trust for AI by Dr. Jesus Luna Garcia (Robert Bosch GmbH)

Several new ENISA publications were announced to wrap up the event.


