Using Generative AI Models to Support Cybersecurity Analysts

March 8, 2025

Artificial intelligence is reshaping the cybersecurity landscape, and one of the most promising developments is the use of generative AI models to assist security analysts. In our latest research, titled Using Generative AI Models to Support Cybersecurity Analysts, we explored how large language models (LLMs) can be integrated into cybersecurity workflows to enhance threat detection, vulnerability assessment, and log analysis.

Why Generative AI for Cybersecurity?

Cybersecurity analysts deal with an overwhelming amount of data—from event logs and security settings to vulnerability reports and malware analysis. While traditional tools help collect and process this data, human analysts still spend hours manually identifying threats and assessing risks.

Generative AI can change this by:

  • Automating security log analysis to detect anomalies faster
  • Identifying vulnerabilities in applications more efficiently
  • Enhancing penetration testing by analyzing potential exploits
  • Providing contextual insights using established security frameworks like OWASP Mobile Top 10 and MITRE ATT&CK
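To make the first point concrete, a log-analysis workflow typically wraps raw log lines in a structured prompt before handing them to a model. The sketch below is illustrative only: the function name `build_anomaly_prompt` and the prompt wording are our own, not part of any specific tool or of the paper.

```python
def build_anomaly_prompt(log_lines):
    """Wrap raw log lines in a prompt asking an LLM to flag anomalies.

    Numbering the lines lets the model reference specific entries in
    its answer, which makes its output easier for a human to verify.
    """
    header = (
        "You are a security analyst. Review the numbered log lines below "
        "and list any that look anomalous, with a one-line reason each.\n\n"
    )
    body = "\n".join(f"{i}: {line}" for i, line in enumerate(log_lines, 1))
    return header + body
```

The returned string would then be sent to whichever LLM API the team uses; keeping prompt construction separate from the model call makes the step easy to unit-test and to swap across providers.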

Our Research: Two Key Case Studies

Our study focused on two real-world applications where LLMs can assist cybersecurity experts:

1. Detecting Vulnerabilities in Android Applications

We developed a system that combines LLMs with security tools like MobSF (Mobile Security Framework) and Semgrep to analyze Android applications for vulnerabilities. Instead of relying solely on static analysis tools, we leveraged AI-driven contextual analysis to:

  • Scan application source code
  • Identify security flaws using pre-defined vulnerability rules
  • Classify issues based on the OWASP Mobile Top 10 framework
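A minimal sketch of the glue between the static analyzer and the LLM, assuming Semgrep is run with `--json` (which emits a top-level `results` array whose entries carry a rule id, file path, line number, and message): each finding becomes one classification prompt. The function name and prompt wording are our own illustration, not the paper's implementation.

```python
import json


def findings_to_prompts(semgrep_json):
    """Turn a Semgrep JSON report into one classification prompt per finding."""
    report = json.loads(semgrep_json)
    prompts = []
    for finding in report.get("results", []):
        prompts.append(
            "Classify the following static-analysis finding under the most "
            "fitting OWASP Mobile Top 10 category, with a brief justification.\n\n"
            f"Rule: {finding['check_id']}\n"
            f"Location: {finding['path']}:{finding['start']['line']}\n"
            f"Message: {finding['extra']['message']}\n"
        )
    return prompts
```

Prompting per finding, rather than dumping the whole report at once, keeps each request small and makes it straightforward to trace a model's classification back to the exact rule and source location it was shown.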

💡 Key finding: LLMs improved the speed and accuracy of vulnerability detection, but they also introduced false positives, requiring human verification.

2. Security Log Analysis for Incident Detection

We tested how LLMs could assist in analyzing network security logs from tools like Suricata (IDS/IPS) and Sysmon (Windows monitoring tool). Our approach aimed to correlate security alerts with real threats, filtering out false positives and mapping incidents to the MITRE ATT&CK framework.
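To sketch what this correlation step can look like in practice: Suricata's EVE output is newline-delimited JSON, and records with `event_type` set to `"alert"` carry the triggering signature under `alert.signature`. The helpers below condense those records into a summary and build an ATT&CK-mapping prompt; the function names and prompt text are our own illustration, not the paper's code.

```python
import json
from collections import Counter


def summarize_alerts(eve_lines):
    """Count Suricata EVE alert records by signature.

    Non-alert records (flows, DNS, stats, ...) are skipped, since only
    alerts are relevant to the ATT&CK mapping step.
    """
    counts = Counter()
    for line in eve_lines:
        event = json.loads(line)
        if event.get("event_type") == "alert":
            counts[event["alert"]["signature"]] += 1
    return counts


def attack_mapping_prompt(counts):
    """Ask an LLM to map alert signatures onto MITRE ATT&CK techniques."""
    summary = "\n".join(f"{n}x {sig}" for sig, n in counts.most_common())
    return (
        "Map each Suricata alert below to the most likely MITRE ATT&CK "
        "technique, and flag any that look like false positives:\n" + summary
    )
```

Aggregating before prompting keeps repeated alerts from inflating the request, and the counts themselves give the model a signal about which signatures dominate the traffic.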

💡 Key finding: AI-assisted analysis significantly reduced the time required to identify and classify threats, but models struggled with highly obfuscated or complex attacks.

Limitations and Challenges

While our research demonstrated great potential, using LLMs in cybersecurity isn’t without risks:

  • False positives: AI models can misinterpret logs or code, leading to unnecessary alerts.
  • Cost considerations: Processing large datasets with LLMs can be expensive.
  • Security concerns: Using AI-powered analysis raises questions about data privacy and adversarial attacks against LLMs.

The Future of AI in Cybersecurity

Despite these challenges, AI-assisted cybersecurity is rapidly evolving. Future advancements in fine-tuned LLMs, custom security AI models, and real-time adaptive learning could revolutionize how we detect, analyze, and respond to threats.

If you’re interested in the full details of our research, you can find the paper here:

Research Paper URL

As AI becomes a critical tool for cybersecurity professionals, the balance between automation and human expertise will be key to ensuring both efficiency and accuracy in defending against cyber threats.

🔐 Stay secure, stay informed, and embrace AI in cybersecurity!

Protect Your Business with Professional Cybersecurity Solutions

Book Now