Cyber Security using AI (is it OKAY?)


Source: Northern Technologies Group
AI-powered cybersecurity refers to using machine learning and generative models to detect, analyze, and respond to threats in real time, including zero-day malware, phishing, insider threats, deepfakes, and automated ransomware. Today, organizations face both AI-empowered attacks and AI-empowered defenses, which raises the stakes and reshapes the cyber arms race.
The relevance is urgent: 93% of security leaders expect daily AI-driven attacks in 2025, and the global cost of cybercrime is projected to reach US $10.5 trillion annually by 2025. Yet defenders are closing the gap: AI can now identify breaches in seconds rather than days.
In 2022, the UK Ministry of Defence (MoD) suffered a significant data breach affecting the Afghan Relocations and Assistance Policy (ARAP) scheme: a spreadsheet containing personal details of thousands of Afghan interpreters and local staff was mistakenly shared outside secure channels. The incident became a turning point that pushed the MoD to seek technological solutions that could mitigate human error and secure documents beyond traditional policy-based controls. Since the breach, the MoD has used AI for contextual labeling and locking of sensitive documents even when they leave the network, reducing human error and safeguarding operations.
In August 2025, ESET researchers uncovered a proof-of-concept malware prototype they called PromptLock, the first known AI-driven ransomware to use an LLM to create attack scripts. The malware was written in Go (Golang) and used an open-weight LLM (gpt-oss:20b) via the Ollama API to generate Lua scripts on the fly. These Lua scripts handled filesystem enumeration, file exfiltration, and local data encryption.
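ESET has not published PromptLock's source, but the mechanism it describes is easy to picture. Below is a minimal, deliberately benign Go sketch of the pattern: a program asks a locally served open-weight model, via Ollama's documented /api/generate endpoint (default port 11434), to emit a Lua script at runtime. The model name matches the article; the prompt is a harmless stand-in (a directory listing) rather than anything resembling ransomware behavior.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// generateRequest mirrors the fields Ollama's /api/generate endpoint expects.
type generateRequest struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
	Stream bool   `json:"stream"`
}

// generateResponse captures the single-shot (non-streaming) reply.
type generateResponse struct {
	Response string `json:"response"`
}

func main() {
	// Benign stand-in prompt: ask the model for a Lua script that merely
	// lists file names, to illustrate on-the-fly script generation.
	req := generateRequest{
		Model:  "gpt-oss:20b",
		Prompt: "Write a Lua script that prints the names of files in the current directory.",
		Stream: false,
	}
	body, _ := json.Marshal(req)

	// Ollama serves its REST API on localhost:11434 by default.
	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out generateResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}

	// The generated Lua source arrives as plain text; PromptLock-style tooling
	// would hand a script like this to an embedded Lua interpreter.
	fmt.Println(out.Response)
}
```

The notable design point is that no attack logic ships with the binary at all; the scripts exist only at runtime, which is what makes this technique hard for signature-based tools to catch.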
Later confirmations showed that PromptLock was not an active cybercriminal tool but rather a proof-of-concept experiment. Meanwhile, “vibe-hacking” attacks use persuasive, psychologically tuned messaging to extort organizations.


Source: LinkedIn
Big Sleep, an AI agent developed by Google DeepMind and Project Zero, autonomously finds and neutralizes critical vulnerabilities, including zero-days, before attackers can exploit them.
FACADE detects insider threats by watching internal user behavior for anomalies. It uses contrastive, self-supervised learning and does not rely on historical attack data.
Officially, the announcements are part of Google’s summer security updates, released ahead of Black Hat and DEF CON. Some media framed the reveal as happening “at DEF CON 2025,” and the deeper technical disclosures and demos are indeed being shared at those events.
Traditionally, SOCs rely on human analysts to triage alerts (determine which ones are real and urgent), gather context, escalate, and respond. But with exploding alert volumes, growing complexity, and a scarcity of trained analysts, this model faces serious bottlenecks.
AI agents are now being introduced into SOC workflows to take over much of the routine, high-volume work: validating alerts, enriching context, weeding out false positives, and even triggering responses for lower-risk events. This frees human analysts to focus on complex threats, threat hunting, strategy, and oversight.
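To make that division of labor concrete, here is a minimal Go sketch of a triage stage. All names (Alert, Verdict, aiTriage) are hypothetical, and the hard-coded severity heuristic stands in for what would in practice be a call out to an AI agent or LLM service.

```go
package main

import "fmt"

// Alert is a simplified SOC alert; real alerts carry far more context.
type Alert struct {
	ID       string
	Source   string // e.g. "EDR", "IDS", "email-gateway"
	Severity int    // 1 (low) .. 5 (critical)
	Summary  string
}

// Verdict is what the triage stage hands back to the rest of the pipeline.
type Verdict struct {
	AlertID   string
	FalsePos  bool
	Escalate  bool
	Rationale string
}

// aiTriage stands in for a call to an AI agent or LLM service; a trivial
// severity heuristic keeps the sketch self-contained and runnable.
func aiTriage(a Alert) Verdict {
	switch {
	case a.Severity <= 1:
		// Low-risk events are auto-closed with an audit trail,
		// consuming no analyst time.
		return Verdict{AlertID: a.ID, FalsePos: true, Rationale: "matches benign baseline; auto-closed"}
	case a.Severity >= 4:
		// High-severity events always reach a human, with context attached.
		return Verdict{AlertID: a.ID, Escalate: true, Rationale: "high severity; enriched and escalated"}
	default:
		return Verdict{AlertID: a.ID, Rationale: "queued for batch review"}
	}
}

func main() {
	alerts := []Alert{
		{ID: "a-001", Source: "email-gateway", Severity: 1, Summary: "bulk newsletter flagged as phishing"},
		{ID: "a-002", Source: "EDR", Severity: 5, Summary: "ransomware-like file encryption burst"},
	}
	for _, a := range alerts {
		v := aiTriage(a)
		fmt.Printf("%s -> escalate=%v falsePositive=%v (%s)\n",
			v.AlertID, v.Escalate, v.FalsePos, v.Rationale)
	}
}
```

The design point is that routine, low-risk alerts never consume analyst time, while anything severe is escalated to a human with its rationale attached.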
Many vendors and security teams report that introducing AI into alert triage and incident handling cuts mean time to respond and investigate by 50-70% or more.
The Company That Fought the Invisible Enemy (Darktrace). In London, a global firm spotted strange behavior: sensitive files were being opened late at night. Was it just a loyal employee working overtime, or was someone stealing data?
Darktrace’s AI compared this behavior against the employee’s usual “pattern of life,” flagged the activity as unusual, froze the account, and alerted the team. The breach attempt was stopped before any data left the building.
Unlike customary monitoring tools that look for known threats, the AI continuously learns normal behavior. Anything outside that baseline raises an alarm at once, even when the deviation is subtle.
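Darktrace’s actual models are proprietary, so the following is only a toy illustration of the baseline idea: a Go sketch (all names hypothetical) that learns a running mean and standard deviation for one behavioral signal, using Welford’s online algorithm, and raises an alarm when a new observation lands more than k standard deviations from the learned pattern of life.

```go
package main

import (
	"fmt"
	"math"
)

// Baseline keeps a running mean and variance of one behavioral metric
// (here: files opened per hour) using Welford's online algorithm.
type Baseline struct {
	n    int
	mean float64
	m2   float64
}

func (b *Baseline) Update(x float64) {
	b.n++
	d := x - b.mean
	b.mean += d / float64(b.n)
	b.m2 += d * (x - b.mean)
}

func (b *Baseline) StdDev() float64 {
	if b.n < 2 {
		return 0
	}
	return math.Sqrt(b.m2 / float64(b.n-1))
}

// Anomalous flags any observation more than k standard deviations from
// the learned baseline, however subtle the absolute change.
func (b *Baseline) Anomalous(x, k float64) bool {
	sd := b.StdDev()
	if sd == 0 {
		return false
	}
	return math.Abs(x-b.mean) > k*sd
}

func main() {
	var b Baseline
	// A week of ordinary daytime activity: roughly 5 file opens per hour.
	for _, x := range []float64{4, 5, 6, 5, 4, 6, 5, 5, 4, 6} {
		b.Update(x)
	}
	// A late-night burst of 40 opens stands far outside the baseline.
	fmt.Println("alarm:", b.Anomalous(40, 3)) // alarm: true
}
```

Production systems model many signals at once (login times, data volumes, peer-group behavior) rather than a single metric, but the alarm logic is the same: deviation from a learned baseline, not a match against known threats.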
References:
ESET discovers PromptLock, the first AI-powered ransomware — ESET’s official announcement of the discovery of PromptLock, malware that uses an AI model to generate scripts dynamically. (eset.com)
First known AI-powered ransomware uncovered by ESET Research — WeLiveSecurity analysis of the technical details of PromptLock as a prototype. (welivesecurity.com)
AI-Powered Ransomware Has Arrived With 'PromptLock' — Dark Reading coverage of what the PromptLock discovery means for cybersecurity. (Dark Reading)
Generative AI and Security Operations Center Productivity: Evidence from Live Operations — preprint showing that generative AI adoption is associated with a roughly 30% reduction in mean incident resolution time. (arXiv)
Generative AI in Live Operations: Evidence of Productivity Gains in Cybersecurity and Endpoint Management — follow-up paper extending the evidence of AI-driven gains in security operations performance. (arXiv)
Anomaly-Based Threat Hunting: Darktrace's Approach in Action — Darktrace blog post explaining their anomaly-based threat detection approach. (Darktrace)
How Does Darktrace Detect Threats? — Darktrace page detailing how their AI works from patterns of normal vs. abnormal behavior rather than signatures. (Darktrace)
Darktrace Case Study | Network Critical — case study on integrating Darktrace with network visibility to detect anomalies in high-performance networks. (Network Critical)
*Disclaimer: This article was drafted with the assistance of AI technology and then critically reviewed and edited by a human author for accuracy, clarity, and tone.
