There’s a common saying in cybersecurity: “It’s not a matter of if you’ll be hacked, but when.” For some time, this seemed to apply to AI systems as well.
Fast forward to today: since 2022 there has been no surge in attacks on AI, let alone any nation-state attacks. Several reasons seem to explain this. With AI moving at breakneck speed, it is as much of a challenge for attackers to keep up as it is for the rest of us. Furthermore, low enterprise adoption reduces the incentive for sophisticated attacks on AI infrastructure.
So will AI security take off? And if so, when?
We discussed the topic with CISOs, security and AI practitioners, six months after we first covered it (here). The conclusion? The jury is still out.
The Threats in AI Security
In the discussion, we bucketed threats into three primary categories of attack:
Manual Prompt Attacks: Low-hanging fruit for attackers. Simply sending a high volume of diverse prompts already constitutes a Denial of Wallet attack (i.e. a surge in inference costs). The dynamic nature of such attacks, often generated algorithmically, creates a continuous "cat and mouse" game. This supports the boom in AI red-teaming/pen-testing startups, with new ones every month.
Operational Attacks: Complex but systemic. These look to compromise the operational aspects of AI models, drawing parallels to conventional software supply chain attacks. Frameworks such as Databricks DASF 2.0, Google's SAIF or MITRE ATLAS try to map the lay of the land. As pre-trained LLMs become better and more easily consumable via APIs, the opportunity for these attacks remains limited for now.
Agentic Warfare: Exponentially impactful. This involves deploying AI agents with specific objectives, utilizing models as their core intelligence and equipping them with tools to execute tasks. The extensive control these agents can exert over software systems amplifies their potential impact, including the risk of large-scale disruptions. This extends far beyond the 'AI security' realm.
While prompt attacks are seen as the immediate threat for now, some practitioners see long-term benefits in consolidating additional protections on a single platform rather than adopting separate point solutions for AI security. When will the remaining security layers become essential add-ons?
The Challenge of AI Quality
Hallucinations, bias and toxicity have more to do with quality than with security. This is a grey area, so tighter layers of testing seem to be the way to address it:
Scenario-Level Granular Testing: Evaluate diverse scenarios to test edge cases.
Statistically Balanced Test Cases: Develop test cases that accurately represent various real-world situations, preventing skewed performance metrics and ensuring the AI system's robustness.
Thorough Regression Testing: Regularly test AI systems after updates or modifications.
End-to-End Pipeline Assurance: Validate each component of the AI pipeline, from data ingestion to output generation.
Transparent and Standardized Quality Standards: Establish clear quality benchmarks to foster alignment among developers, stakeholders and end users.
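The testing layers above can be sketched as a minimal regression harness. Everything here is a hypothetical stand-in: `fake_model` replaces a real LLM endpoint so the suite runs deterministically, and the scenario set and blocklist are toy examples of scenario-level cases and a toxicity check:

```python
REGRESSION_SCENARIOS = [
    # (prompt, substring that must appear in the answer)
    ("What is the capital of France?", "Paris"),
    ("Translate 'hello' to Spanish", "hola"),
]

BLOCKLIST = {"darn"}  # toy stand-in for a real toxicity filter


def fake_model(prompt: str) -> str:
    """Deterministic stub so the suite runs without a live model."""
    canned = {
        "What is the capital of France?": "The capital of France is Paris.",
        "Translate 'hello' to Spanish": "'hello' in Spanish is 'hola'.",
    }
    return canned.get(prompt, "I don't know.")


def run_regression(model) -> list[str]:
    """Run every scenario; return failure messages (empty list means pass)."""
    failures = []
    for prompt, expected in REGRESSION_SCENARIOS:
        answer = model(prompt)
        if expected.lower() not in answer.lower():
            failures.append(f"scenario failed: {prompt!r} -> {answer!r}")
        if any(word in answer.lower() for word in BLOCKLIST):
            failures.append(f"toxicity check failed: {prompt!r}")
    return failures
```

Running `run_regression` after every model or prompt update is the "thorough regression testing" step in miniature; the same loop structure extends naturally to statistically balanced scenario sets.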
Quality is, however, much more enterprise- and use-case-specific. The ultimate “quality metric” is the ROI of the business case. This is good news for vertical AI vendors and system integrators delivering custom use cases, but generally not for horizontal AI quality/observability/explainability vendors (and there are already many out there). Will there be an opportunity for new vendors in AI quality?
Conclusion – we’re just beginning
Accelerated AI adoption will kick-start the ‘cat-and-mouse’ flywheel of improved quality/security in a market that currently looks very nascent. At 33N we’re strategic partners and long-term believers in the sector and always open to connect with disruptive thinkers, practitioners and founders in the space – don’t hesitate to reach out at info@33n.vc!
33N Company Updates 🚀
DataGalaxy
Recognized in the 2025 Gartner Magic Quadrant for Data & Analytics Governance Platforms 👉 Read more
Recognized as a Data Governance leader on G2 👉 Read more
Acquired YOOI, a small French operation, to enhance product capabilities in managing Data & AI use cases and products
Exein
Secures partnership with chipmaker MediaTek 👉 Read more
Recognized among Europe’s Top 250 Startups by Sifted 👉 Read more
Panorays
Releases 2025 CISO Survey on Third-Party Cyber Risk Priorities 👉 Read more
Managing third-party cybersecurity risks listed as a Top Trend in Cybersecurity by Gartner 👉 Read more
StrikeReady
Yasir Khalid, StrikeReady’s CEO and Founder, earned a spot as one of Senior Executive’s Top Cybersecurity Founders to watch in 2025 by Forbes 👉 Read more
Present at Black Hat MEA 👉 Read more
Upcoming Events for 33N 🤝
Websummit Qatar, 23-26 Feb, Qatar — Carlos & Carlos
MWC/4YFN, 3-6 Mar, Barcelona – Margarida & Pedro
Embedded World, 11-13 Mar, Nuremberg – Carlos A.
NVIDIA GTC, 17-21 Mar, San Jose – Guy & Pedro
Cybertech, 24-26 Mar, Tel-Aviv – Eli
CMIP/Google, 27 Mar, Warsaw – Carlos & Carlos
FIC Europe, 1-3 Apr, Lille – Gonçalo & Christophe
Kubecon + CloudNativeCon, 1-4 Apr, London – Pedro
RSA, 28 Apr-1 May, San Francisco – 33N team
Team updates at 33N 👨‍💻👩‍💻🧑‍💻
Please join us in welcoming 33N’s new members:
Guy Horowitz as Venture partner (in action below hosting an event for 33N)
Beatriz Gonçalves and Lourenço Teodoro as Analysts (already active at the office below)
Cássio Sampaio as Advisory Board member, ex-CPO at Okta/Auth0