
Episode 555: Security With Tim Nash - AI and Security


Show Summary

Episode 555 of the SDM Show, hosted by Rob Cairns, features a discussion with Tim Nash about AI’s impact on security. They explore how AI is revolutionizing phishing attacks with more convincing text and graphics, making them harder to detect. The conversation also covers how AI is being used to accelerate the exploitation of newly disclosed vulnerabilities, and raises concerns about the quality and security of AI-generated code, particularly in the WordPress context. They conclude by emphasizing that while AI is a useful tool, it also presents new challenges that both users and security professionals must be aware of.

Show Notes

Opening Banter:

  • Tim, joining from the UK, wishes for the British summer weather he experienced a few weeks prior, contrasting it with the current “gray and miserable” Canadian weather.
  • Rob mentions Toronto is experiencing its coldest May in 20 years, with a forecast of 29°C for the following week.

Main Discussion: The Impact of AI on Security

  1. AI Revolutionizing Phishing:
    1. Tim states AI has “revolutionized the phishing industry.”
    2. Bad actors use AI to write more convincing emails, making them harder to detect.
    3. Rob concurs, noting improvements in both text and graphics in phishing emails, citing AI tools like ChatGPT, Grok, and Gemini that can generate photos.
  2. AI and Vulnerability Exploitation:
    1. Tim believes AI is used less for discovering new vulnerabilities and more for shortening the time to exploitation once a vulnerability is announced.
    2. AI helps create proof-of-concept code and weaponize it faster.
    3. There’s a rise in failed attacks, possibly due to AI-generated code being deployed without thorough testing.
  3. Security of AI-Generated Code (e.g., WordPress Plugins):
    1. Rob questions the security of AI-generated plugins unless reviewed line by line.
    2. Tim notes an 80% increase in plugin submissions to the WordPress repository, attributed to AI.
    3. AI code isn’t secure by default; it needs explicit instructions.
    4. Tim, a code reviewer, can often identify AI-written code and even the specific model used due to their “quirks.”
    5. Prompting AI to “assume you are a senior developer with years of experience” can surprisingly improve code security and legibility.
  4. AI Models and Code Security:
    1. The quality of AI output depends heavily on the prompt (“garbage in, garbage out”).
    2. Some models (e.g., Anthropic’s Claude 3.5 and 3.7 Sonnet) are stricter in following prompts.
    3. Reasoning models (e.g., DeepSeek, newer OpenAI models) might deviate from prompts, which can be problematic for code generation.
  5. DeepSeek and Data Privacy:
    1. Rob raises concerns about DeepSeek being Chinese-developed.
    2. Tim points out that using the cloud version means data likely goes to China, just as data sent to US-hosted models (like Anthropic’s Claude, which is heavily backed by Amazon) stays in the US.
    3. A key difference: DeepSeek’s models can be run locally (open source), potentially making them more trustworthy if self-hosted.
    4. Critical Warning: Avoid inputting sensitive data (like bank account details) into any public cloud AI model.
    5. Tim uses a local LLM (via Ollama) for private/sensitive tasks.
  6. AI and Automated Bot Attacks:
    1. Tim believes AI is likely not heavily used in large-scale, basic bot attacks (e.g., password guessing on WordPress sites) because AI is slow and resource-intensive compared to simpler, faster scripts.
    2. AI might be used in the initial creation of attack tools or for log analysis by attackers, but not for real-time “agentic” AI attacks.
  7. AI Assisting Security Researchers:
    1. Tim uses AI for:
      1. Basic administrative tasks.
      2. Analyzing large log files for patterns (though existing tools are often better).
      3. Generating proof-of-concept code for client reports (either simplifying his code or creating it based on a described vulnerability).
    2. It’s more for peripheral tasks than core research.
  8. AI and the Future of Jobs:
    1. Both agree AI is a tool, not a complete replacement for humans.
    2. Experienced professionals can use AI to become more productive.
    3. Concern: A potential decrease in junior developer roles as senior developers leverage AI. This could lead to an “experience gap,” as security fundamentals are often poorly taught in formal education.
    4. Junior talent may miss out on crucial mentorship and agency experience.
  9. AI and Voice Validation:
    1. Strong Warning from Rob & Tim: DO NOT use voice validation for sensitive accounts (banks, etc.).
    2. Voices are wave patterns that can be recorded and replicated by AI.
    3. Voice is not a secure biometric (affected by illness, accents, etc.). Tim notes how voice systems often struggle with non-standard (e.g., his British) accents.
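The log-pattern analysis Tim mentions can be sketched in a few lines of standard-library Python. The log format, sample lines, and IP addresses below are hypothetical, and (as Tim notes) dedicated tools are usually better for this job:

```python
import re
from collections import Counter

# Hypothetical auth-log lines; real formats vary by server.
log_lines = [
    "2025-05-12 09:14:02 FAILED login for admin from 203.0.113.7",
    "2025-05-12 09:14:05 FAILED login for admin from 203.0.113.7",
    "2025-05-12 09:14:09 FAILED login for editor from 198.51.100.23",
    "2025-05-12 09:15:01 OK login for rob from 192.0.2.10",
    "2025-05-12 09:15:44 FAILED login for admin from 203.0.113.7",
]

failed_ip = re.compile(r"FAILED login for \S+ from (\S+)")

# Count failed attempts per source IP to surface brute-force patterns.
attempts = Counter(
    m.group(1) for line in log_lines if (m := failed_ip.search(line))
)

for ip, count in attempts.most_common():
    print(f"{ip}: {count} failed attempts")
```

Successful logins are ignored; only repeated failures from the same source bubble to the top of the counter.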

Key Takeaways & Advice:

  • AI is a powerful tool for both good and bad actors in cybersecurity.
  • Be increasingly vigilant about sophisticated phishing attacks.
  • Develop critical thinking to identify AI-driven content.
  • Authentic human communication (your unique “voice” in writing) may become a key differentiator.
  • Never put sensitive personal or company data into public AI models.
  • Consider using local LLMs for tasks involving sensitive information.
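For the local-LLM route, Ollama exposes a small HTTP API on the machine it runs on. The sketch below assumes Ollama is installed and serving on its default port (11434); the model name `llama3` is just an example, and the helper names are our own:

```python
import json
from urllib import request

def build_payload(prompt: str, model: str = "llama3") -> dict:
    # Ollama's /api/generate endpoint takes a model name, the prompt,
    # and a streaming flag; stream=False returns one JSON response.
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    # The request goes to localhost only -- the prompt never leaves
    # this machine, which is the point for sensitive data.
    data = json.dumps(build_payload(prompt, model)).encode()
    req = request.Request(
        "http://localhost:11434/api/generate",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because the endpoint is localhost, this pattern is suitable for the private/sensitive tasks Tim reserves for his local setup.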

Tim Nash’s Projects & Contact:

Closing:

Rob thanks Tim for the insightful discussion and hopes for better weather for him.
