AI and Cybersecurity: Smarter Tech, Bigger Risks

Categories: Best Practices Cyber Security Awareness Month Information Security

Introduction

As artificial intelligence becomes more prevalent in our lives and workflows, it’s not just a neutral tool that can help us be more productive – it’s also an evolving battlefield of attack vs defense. For National Cybersecurity Awareness Month, the question isn’t just “how do we use AI safely?” but also “how might cybercriminals weaponize AI against us?”

Recent trends suggest that AI-based attacks are not hypothetical future threats — they are happening now. From highly convincing phishing attempts and deepfake videos to prompt-injections and AI-assisted reconnaissance, cybercriminals are experimenting and scaling up their attacks. A 2025 survey found that 74% of IT leaders say they have definitely experienced an AI-related breach, and 98% believe they’ve likely had one already — yet only 32% are using dedicated defenses for AI systems.

As attackers leverage AI for speed, scale, and evasion, defenders must react not just with tools, but with awareness. In this article, we’ll unpack how AI is reshaping the cyber threat landscape and share strategies to stay ahead.

While AI can amplify cyber threats, it’s also one of the most powerful tools available to defend against them. Modern cybersecurity teams are already using machine learning and generative AI to detect, predict, and respond to threats faster than humans ever could. The promise of AI in security comes down to one word: scale.


The Promise

Faster Threat Detection

AI-driven analytics can process billions of network events per second, identifying patterns that would be invisible to human analysts. In 2024, Microsoft reported that its AI threat detection systems analyze over 65 trillion signals daily, helping it spot emerging attack trends and stop intrusions in near real-time.

Predictive Defense

Unlike legacy security systems that only react to attacks, AI models can predict potential vulnerabilities by studying known exploits, code changes, and behavior signatures.

Automated Response and Recovery

AI doesn’t just detect threats — it can act on them. Through automated incident response systems, AI can quarantine infected devices, block malicious IP addresses, or revoke compromised credentials in seconds, reducing down time and limiting damage.

Enhanced Human Decision-Making

AI isn’t replacing IT Security experts — it’s augmenting them. By handling repetitive monitoring and triage tasks, AI allows analysts to focus on complex investigations, strategy, and policy decisions. It effectively acts as a “force multiplier” for human defenders.

What does it mean?

In the cybersecurity world, AI will enable security experts to focus on more complex tasks, strategies, and policy decisions, while working in the background to spot and stop potential threats in a proactive way, instead of only relying on reacting to an attack when it may be too late.


The Risks

For all its defensive promise, AI also introduces new and serious risks. Every advancement in automation, scale, and speed can be turned around and used by attackers. As organizations adopt AI for security, they also need to account for how AI systems themselves — and the data they rely on — can become targets.

AI-Powered Threats

Attackers are using AI to make their operations more effective. Generative AI can craft convincing phishing emails, fake personas, or malicious code at scale. In 2024, a deepfake voice scam in Hong Kong reportedly tricked a finance employee into transferring $25 million to criminals posing as the CFO — a stark example of AI-enabled fraud.

Prompt Injection and Model Manipulation

Generative AI systems can be tricked into performing unintended actions through prompt injection — malicious inputs designed to override safety rules or extract sensitive information. Attackers can embed these instructions in text, code, or even web content that the AI processes automatically.

Poisoned or Biased Training Data

AI models trained on compromised data can produce skewed or dangerous results. Attackers may inject false or misleading data into training pipelines, leading to data poisoning attacks that degrade the accuracy of threat detection systems or hide malicious activity. In cybersecurity contexts, poisoned datasets could cause an AI system to “ignore” specific attack patterns altogether.

Over-Reliance and False Confidence

AI tools can create a false sense of security. If organizations treat AI outputs as infallible, they risk missing sophisticated threats or misjudging incidents. Automation without oversight can also lead to cascading failures — like automated blocking systems taking down critical infrastructure due to a misclassification.


Security Best Practices for AI Use

Even if you’re not in IT, you interact with AI systems daily — chatbots, writing tools, automated assistants, and analytics dashboards. The same security mindset that applies to email and social media now applies to AI, too.

Think Before You Share

  • Just like when browsing the web or posting to social media, never enter confidential, personal, or proprietary information into AI tools unless your organization has explicitly approved them.
  • Treat AI chatbots like public websites — anything you type could be stored, reviewed, or reused to train models.
  • If in doubt, ask your IT or compliance team before sharing sensitive data.

Verify Before You Trust

  • AI can sound confident but still be wrong. Always double-check facts, links, or recommendations generated by AI before taking action.
  • Watch out for AI-assisted scams — phishing emails or messages may now use perfect grammar, realistic names, and context-aware details.

Use Official Tools Only

  • Stick to university-approved AI tools and accounts. “Shadow AI” — using unapproved apps like free AI chatbots or extensions — can expose sensitive data. Copilot is available with the university’s Office 365 subscription.
  • If you need an AI tool for your workflow, request approval instead of finding your own workaround.

Watch for Deepfakes and Impersonations

  • Be cautious with unexpected calls, videos, or messages that seem slightly off — even if they use real voices or names.
  • Always verify through a second channel (like a known phone number or internal messaging) before acting on instructions involving money, credentials, or private info.

Keep Credentials Safe

  • Never paste passwords, API keys, or tokens into AI prompts or shared documents.
  • Use password managers and multi-factor authentication for all accounts connected to AI tools.

Report Anything Suspicious

  • If an AI tool behaves oddly — such as revealing unexpected data, asking for private info, or giving risky instructions — report it to IT right away.
  • Early reporting helps prevent larger security incidents.

Stay Informed

  • Keep up with short internal security briefings or awareness newsletters. AI evolves quickly — knowing the latest scams and university guidelines keeps you ahead.

Conclusion

If you wouldn’t email it to a stranger, don’t feed it to an AI.
If it seems too real, verify it twice.
If something feels off, report it.

If you have any questions or require more information about the security involved with AI, please contact your Computer Support Specialist.

https://hiddenlayer.com/innovation-hub/top-5-ai-threat-vectors-in-2025

https://www.microsoft.com/en-us/security/security-insider/threat-landscape/microsoft-digital-defense-report-2024

https://www.ft.com/content/b977e8d4-664c-4ae4-8a8e-eb93bdf785ea?utm_source=chatgpt.com

Leave a Reply

Your email address will not be published. Required fields are marked *