AI in IT & CYBERSECURITY
- Tom Foale
Cutting Through the Hype, Facing the Risks, Seizing the Opportunities
Executive Summary
Artificial Intelligence (AI) is rapidly reshaping the technology and cybersecurity landscape. For IT leaders, the challenge is no longer whether to adopt AI, but how to do so without falling victim to its risks, its overblown promises, or the adversaries who will exploit it.
This paper examines three critical dimensions of AI’s impact on IT and security:
Risks: the real and emerging threats AI poses to security, privacy, and workforce capability.
Hype: inflated claims and misconceptions that can mislead strategy and investment.
Opportunities: practical, high-value applications that deliver measurable benefits today.
The conclusion is clear: AI is neither the saviour nor the destroyer of cybersecurity — it is a force multiplier, and its value depends entirely on how it is deployed.
1. Introduction
AI adoption in IT and cybersecurity is accelerating at a pace few could have predicted. Large language models (LLMs), predictive analytics, and autonomous detection systems have moved from experimental labs into production systems in just a few years.
This pace is not without consequence. The same tools that help defenders can be adapted by attackers — and in some cases, used more effectively by them. The AI revolution is already producing:
Cyberattacks that are faster, more convincing, and harder to detect.
Security tools that can triage, analyse, and respond to threats with unprecedented speed.
A flood of exaggerated claims from vendors and commentators.
The stakes are high. Organisations must learn to navigate the fine line between innovation and risk management.
2. The Risks of AI in Cybersecurity
2.1 Weaponisation of AI
AI allows attackers to industrialise cybercrime. Deepfake voice and video are already being used to impersonate executives, trick finance teams, and manipulate public opinion. In one case, an employee was conned into transferring £20 million after receiving a video call from a “CEO” — who was, in reality, a synthetic deepfake.
Hyper-personalised phishing campaigns — generated at scale using AI — now mimic writing styles, local cultural references, and even personal quirks gleaned from social media, bypassing many traditional detection measures.
2.2 Technical Vulnerabilities in AI
AI systems themselves introduce new attack surfaces:
Hallucinations — fabricated facts and confident misinformation can mislead analysts, lawyers, or decision-makers.
Cache poisoning — adversaries can manipulate the inputs AI systems rely on, compromising recommendations or security verdicts.
Model inversion — attackers can reconstruct private data from trained models, a risk in both cloud-hosted and local deployments.
Data leakage — users may upload confidential material to an online AI service to help with their work. If that material contains personally identifiable information (PII) and the organisation has no data processing agreement in place with the AI vendor, this can breach data privacy regulations (a minimal redaction sketch follows this list).
Agentic AI — unsecured AI agents may hold hidden keys to your enterprise data, and unauthenticated agents are an invisible security risk.
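To make the data-leakage point concrete, the sketch below shows one way to redact obvious PII before any text leaves the organisation for an external AI service. It is illustrative only: the regular expressions, placeholder labels, and the redact() helper are assumptions for this example, not a complete data loss prevention control.

```python
import re

# Hypothetical pre-processing step: strip obvious PII before any text is sent
# to an external AI service. Patterns are illustrative only and will not catch
# every form of personal data.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{8,}\d"),
    "NI_NUMBER": re.compile(r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b"),
}

def redact(text: str) -> str:
    """Replace matched PII with a labelled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    sample = "Contact Jane Doe on jane.doe@example.com or +44 7700 900123."
    print(redact(sample))  # PII placeholders appear instead of raw values
```

In practice a step like this sits alongside, not instead of, a data processing agreement and proper data loss prevention tooling.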
2.3 The Scale & Speed Problem
With AI, reconnaissance by threat actors that once took days can be done in minutes. Automated vulnerability scanning, exploit generation, and malware obfuscation are now within reach of low-skill threat actors — effectively lowering the barrier to entry for cybercrime.
2.4 Erosion of Workforce Capability
AI is also changing the cybersecurity labour pipeline. As junior analysts and entry-level coders rely on AI to handle basic tasks, the organic development of critical skills may stall. In the long term, this could produce a shortage of senior professionals with the depth of experience needed to tackle novel threats.
3. The Hype Cycle
3.1 The AGI Mirage
The idea of Artificial General Intelligence (AGI) — a machine that can think like a human across any domain — is a persistent talking point. Some commentators, like Nobel laureate Roger Penrose, argue that human awareness is fundamentally non-computable, putting true AGI beyond reach.
For most businesses, AGI is irrelevant to operational security decisions for the foreseeable future. It’s a distraction from actionable, present-day risks and benefits.
3.2 The Over-Promise of “Vibe Coding”
Vibe coding, where developers prompt an AI to generate working software and accept the output with minimal scrutiny, can be suitable for quick prototyping or demos. However, it often produces code that is poorly structured, difficult to understand, and prone to errors, especially in production environments. The core issue is that developers using this method don't deeply understand the code they are shipping, relying on AI to produce functional results without proper review or debugging.
3.3 Over-Hyped Use Cases
Some applications receive more publicity than they deserve:
AI legal analysis: Despite impressive demos, AI “lawyers” have failed professional bar exams unless allowed to use open-book or search-based aids — and even then, they hallucinate precedents.
Professional liability: Organisations consult external professionals partly for the protection that professional liability insurance provides, and partly to shield executives from the consequences of poor strategic decisions. AI cannot be sued for its errors.
Fully autonomous SOCs: The promise of an AI-driven Security Operations Centre that needs no human oversight remains marketing fiction. AI still requires skilled analysts to interpret, validate, and act on alerts.
3.4 Vendor & Media Amplification
In an arms race to secure funding and market share, some vendors make claims that border on science fiction. Media outlets amplify these narratives, creating unrealistic expectations. Organisations that take these at face value risk over-investing in immature technologies or neglecting foundational security practices.
4. The Opportunities Created by AI
4.1 Operational Efficiency
AI can automate repetitive tasks, from processing log files to generating standard compliance reports. For overworked IT teams, this is transformative — freeing skilled staff to focus on complex, value-adding work.
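As a simple illustration of the kind of repetitive task that can be automated, the sketch below condenses repeated failed logins from an auth log into a short digest an analyst can review. The log path, log format, and threshold are assumptions for illustration, not a specific product integration.

```python
from collections import Counter
import re

# Illustrative only: summarise repeated failed logins so an analyst reviews a
# short digest rather than thousands of raw log lines.
FAILED_LOGIN = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def summarise(log_path: str, threshold: int = 5) -> list[str]:
    counts = Counter()
    with open(log_path, encoding="utf-8", errors="ignore") as fh:
        for line in fh:
            match = FAILED_LOGIN.search(line)
            if match:
                user, source_ip = match.groups()
                counts[(user, source_ip)] += 1
    return [
        f"{n} failed logins for '{user}' from {ip}"
        for (user, ip), n in counts.most_common()
        if n >= threshold
    ]

if __name__ == "__main__":
    for finding in summarise("/var/log/auth.log"):
        print(finding)
```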
4.2 Enhanced Threat Detection
AI-powered platforms such as Deep Instinct use deep learning to identify and neutralise malware before it executes, within 20 milliseconds, including never-before-seen threats. This pre-execution approach is particularly effective against zero-day ransomware strains that traditional signature-based and heuristic systems miss.
4.3 Coding Acceleration
For experienced developers, AI accelerates development cycles. It can handle boilerplate code, generate unit tests and documentation, and suggest optimisations — enabling teams to deliver robust, secure, well-documented code faster.
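As a hedged illustration of what that acceleration looks like, the sketch below shows the kind of parametrised unit tests an AI assistant can draft for a small utility function. Both the function and the test cases are invented for this example; a developer would still review and extend them.

```python
# Hypothetical illustration: AI-drafted unit tests for a small utility,
# which a developer then reviews, corrects, and extends.
import pytest

def normalise_hostname(name: str) -> str:
    """Lower-case a hostname and strip surrounding whitespace and a trailing dot."""
    return name.strip().lower().rstrip(".")

@pytest.mark.parametrize(
    "raw, expected",
    [
        ("Server01.Example.COM.", "server01.example.com"),
        ("  web-01  ", "web-01"),
        ("db.internal", "db.internal"),
    ],
)
def test_normalise_hostname(raw, expected):
    assert normalise_hostname(raw) == expected
```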
4.4 Augmenting Incident Response
AI-driven analysis tools can scan petabytes of network traffic in minutes, flag anomalies, and present findings in natural language. This reduces the time between detection and containment — often the difference between a minor incident and a breach.
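A minimal sketch of the underlying idea follows, using invented connection counts and a simple statistical threshold rather than any particular vendor's model: flag hosts whose behaviour deviates sharply from the fleet baseline so an analyst can investigate.

```python
import statistics

# Minimal sketch of anomaly flagging: compare each host's outbound connection
# count against the fleet baseline and flag statistical outliers for review.
# The data below is invented for illustration.
connections_per_host = {
    "host-01": 120, "host-02": 135, "host-03": 110, "host-04": 128,
    "host-05": 140, "host-06": 115, "host-07": 125,
    "host-08": 2450,  # unusually chatty host
}

values = list(connections_per_host.values())
mean = statistics.mean(values)
stdev = statistics.stdev(values)

for host, count in connections_per_host.items():
    z_score = (count - mean) / stdev
    if abs(z_score) > 2:  # simple threshold; real systems use richer models
        print(f"ANOMALY: {host} made {count} connections (z={z_score:.1f})")
```

Production tooling replaces the z-score with richer behavioural models, but the workflow of detect, explain, and hand to a human remains the same.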
5. Balancing Innovation and Caution
Adopting AI in cybersecurity is not about blind enthusiasm or fear-driven avoidance — it’s about governance and measured integration.
Best practice includes:
Following frameworks such as the NIST AI Risk Management Framework or aligning with the EU AI Act for compliance and transparency.
Implementing explainable AI so human operators can understand — and challenge — its decisions (see the sketch after this list).
Providing staff training to avoid skill erosion and ensure AI output is critically evaluated.
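The sketch below illustrates one simple form of explainability, assuming a hypothetical alert-triage model: after training, the model's feature importances are reported so analysts can see, and question, what it is actually weighting. The feature names and training data are invented, and real deployments would add richer techniques such as per-decision explanations.

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical alert-triage model: feature names and training rows are
# invented for illustration only.
feature_names = ["failed_logins", "bytes_out_mb", "new_process_count", "off_hours"]
X = [
    [0, 12, 3, 0],
    [1, 15, 2, 0],
    [45, 900, 20, 1],   # looks like credential abuse plus exfiltration
    [2, 10, 4, 0],
    [60, 1200, 25, 1],
    [0, 8, 1, 0],
]
y = [0, 0, 1, 0, 1, 0]  # 1 = escalate to an analyst

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Report global feature importances so a human can see, and challenge,
# what the model is actually weighting.
for name, importance in sorted(
    zip(feature_names, model.feature_importances_), key=lambda p: -p[1]
):
    print(f"{name}: {importance:.2f}")
```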
6. Conclusion
AI will not replace cybersecurity teams. But cybersecurity teams using AI will replace those who don’t.
The next three years will define whether organisations treat AI as a fad, a threat, or a force multiplier. The winners will be those who:
Pilot AI tools in controlled environments.
Regularly assess the risks and ROI.
Maintain human oversight at every critical decision point.
The hype will fade. The risks will evolve. The opportunities will compound.
The time to start — with eyes open — is now.
Contact us at enquiries@klaatuitsecurity.com for more information on AI security.
7. Examples of AI Risks
1. According to IBM’s latest “Cost of a Data Breach” report, 20% of the 600 organizations it surveyed had suffered a breach “due to security incidents involving shadow AI”. https://www.ibm.com/reports/data-breach
2. Fortune Magazine: “Finance companies fight back against AI deepfakes, with over 70% of new enrolment attempts to some firms being fake.” https://fortune.com/asia/2025/08/13/ant-international-ai-deepfakes-cybersecurity/
3. Fortune Magazine: “1 in 343 job applicants is now a fake from North Korea, this security company says” https://fortune.com/2025/07/02/pindrop-ceo-vijay-balasubramaniyan-fake-job-applicants-north-korea/
4. Fortune Magazine: “MIT report: 95% of generative AI pilots at companies are failing”. https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/