The Convergence of AI and Cybersecurity

Categories: Cyber Security Awareness Month, Security


Are Artificial Intelligence (AI) and related technologies ultimately going to help us or harm us? It’s a question being debated across news cycles and behind closed doors on Capitol Hill as AI technologies continue to be developed and deployed at an unprecedented rate, with little legislation, control, or governance. And the market is booming: Goldman Sachs forecasts global AI investment to reach around $200 billion by 2025, and Forbes expects an annual market growth rate of 37.3% over the next seven years.

You may already be familiar with some of the more popular forms of AI technology covered in the news, such as ChatGPT, a chatbot known for its ability to write software code, pass the bar exam, and hold convincingly human-like conversations. There are also malign uses for this technology, such as enabling someone with relatively little effort to create deepfakes: convincing but entirely counterfeit images or videos in which a person is ‘spliced’ into preexisting footage.

There are also AI technologies in use that may be less apparent, like the voice assistant on your smartphone or an online customer service ‘assistant’ that can process refunds for a purchase. As of late 2023, one online database of AI technology lists 8,700 AI products trained for 2,300 specific tasks, directly impacting or fulfilling work across 4,800 jobs performed by humans.

In the realm of cybersecurity, AI provides benefits to both security teams and cybercriminals. In a classic game of “cat and mouse,” hackers use AI to build better password-guessing tools and to generate authentic-looking ploys that mimic, for example, a known sender’s email message, with customized wording, fewer grammatical issues, and fewer tell-tale indications of a phishing expedition. News headlines already show attackers actively using AI for prompt hacking and malicious software creation. Looking ahead, cybercriminals will use AI-based tools to develop new and more intelligent malware, then test their product’s success against AI defenses.

At the same time, “Cyber AI” will enable security teams to anticipate cybersecurity threats and incidents and respond faster to compromises. For example, an AI security bot could recognize a phishing attempt and stop a message before it reaches a person’s inbox. Organizations could use AI to prepare for AI-driven cybercrime through accelerated threat detection, moving security efforts from a constantly reactive posture to a proactive one.

AI offers human agents the ability to collaborate to detect and prevent compromises, limiting an organization’s exposure. Machine learning and natural language processing, both elements of AI, will help security teams distinguish actual threats from “noise.” Collaborative human-machine interaction can enable better pattern recognition, threat prediction, and adaptability, generating faster and more accurate responses.
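To make the idea of separating threats from “noise” concrete, here is a deliberately simplified, hypothetical sketch in Python. Real “Cyber AI” products use trained machine learning models over rich features, not a fixed keyword list; the patterns and weights below are invented for illustration only.

```python
import re

# Hypothetical phishing indicators with made-up weights. A production
# system would learn features from labeled data rather than hard-code them.
PHISHING_PATTERNS = [
    (r"\burgent(ly)?\b", 2),                      # pressure tactics
    (r"verify your (account|password)", 3),       # credential-harvesting bait
    (r"click (here|the link) (now|immediately)", 3),
    (r"https?://\d{1,3}(\.\d{1,3}){3}", 4),       # links to raw IP addresses
]

def phishing_score(message: str) -> int:
    """Return a crude risk score; higher means more suspicious."""
    text = message.lower()
    return sum(weight for pattern, weight in PHISHING_PATTERNS
               if re.search(pattern, text))

def is_suspicious(message: str, threshold: int = 4) -> bool:
    """Flag a message whose cumulative indicator score crosses a threshold."""
    return phishing_score(message) >= threshold
```

Even this toy version shows the core tradeoff a security team tunes: a lower threshold catches more phishing attempts but generates more false alarms on legitimate mail.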

But it’s important to remember that this market is still early in its lifecycle; the examples above are only the beginnings of AI integration. Large investments in AI technology are occurring across every major sector, from healthcare and transportation to finance and education, each bringing its own unique potential benefits and opportunities for misuse.

As policymakers scramble to build a framework for broadly governing AI integration, it is much less clear on a day-to-day scale what AI systems will be capable of going forward, how they will be implemented, and what outcomes they will lead to. While it’s important to stay informed about the ongoing evolution of AI and cybersecurity, there are some steps you can take to protect yourself and benefit from AI technologies:

  • Be selective about using products with AI capabilities. Look for providers who are transparent about how they develop and audit their AI systems. Avoid any tools or services that make overbroad claims about AI.
  • Consider any potential privacy risks ahead of time. Read the “Privacy Policy” documentation and understand how your information is used by any AI systems you interact with before using an AI product or service.
  • Practice good cyber hygiene. Use strong, unique passwords and enable multi-factor authentication (MFA) for your online accounts and services when possible.
  • Watch out for phishing attempts and verify the identity of any party requesting personal data or information.
  • Advocate for responsible and thoughtful governance of AI development and use with state and local representatives. As consumers and citizens, we have the ability to influence the policies and regulations being developed to address the potential benefits and risks of AI use.
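As a quick illustration of the “strong, unique passwords” advice above: in practice a password manager handles this for you, but Python’s standard-library secrets module (which is designed for cryptographically secure randomness) can generate one. The length and character set below are illustrative choices, not a formal policy.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password from letters, digits, and punctuation.

    Uses secrets.choice rather than random.choice because the secrets
    module draws from a cryptographically secure source.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

Generating a fresh password per account, and storing each in a password manager, is what makes them “unique” as well as strong.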
