AI agents drafted into cybersecurity defense forces of companies


The rise of generative AI and large language models has drastically shifted the cybersecurity landscape, empowering attackers with easy-to-use tools that can create realistic video and voice deepfakes, personalized phishing campaigns, and malware and malicious code. 

That has opened the door for AI on the defense as well. As agentic AI becomes more deeply embedded in the enterprise in areas like finance and legal, cybersecurity AI agents are on the rise, too, becoming a key asset for detection, analysis, and alerts. 

“It’s a massive challenge to detect, contain, investigate and respond across larger companies,” said Brian Murphy, CEO of cybersecurity technology company ReliaQuest. “AI is allowing us to remove a lot of that noise, that tier one or tier two work, that work that’s often not at all relevant to something that could be threatening to an organization,” Murphy said. 

The pitch for agentic AI has often been to put a tool in the hands of human workers that automates menial or time-consuming tasks, freeing them to do more important work.

In a message shared with Amazon employees in June, CEO Andy Jassy said “We have strong conviction that AI agents will change how we all work and live,” adding that he sees a future with “billions of these agents, across every company and in every imaginable field,” helping workers “focus less on rote work and more on thinking strategically” while also making “our jobs even more exciting and fun than they are today.” 

Murphy shares a similar view across cybersecurity, where he sees an industry of workers inundated with work they likely shouldn't be spending time on, driving burnout and exacerbating an existing shortage of available talent.

He’s also seen the way AI is being wielded to attack companies. “Those phishing emails, they used to look almost laughable with the misspellings and the fonts wrong,” he said. “AI can take the average bad actor and make them better, and so the trick is if you’re on the defensive side, they have to use AI because of the reality of what AI can do.” 

ReliaQuest recently released what it calls GreyMatter Agentic Teammates, autonomous, role-based AI agents designed to take on tasks that detection engineers or threat intelligence researchers would otherwise handle on a security operations team.

“Think of it as this persona that teams up with a human, and the human is prompting that agentic AI, so the human knows what to do,” Murphy said, adding that it’s like having a “teammate that takes that incident response analyst and multiplies their capability.” 

Murphy gave an example that is a frequent occurrence for any security team at a global company: international executive travel. Every time a laptop or cell phone connects to a network in, say, China, the security operations team is alerted and has to verify that the executive is abroad and using the device securely for each day of that trip. With an agentic AI teammate, that security person could automate that task, or even set up a series of similar processes for board meetings, off-sites, or other large team gatherings.
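Murphy did not describe how GreyMatter implements that check, but the underlying logic is straightforward to sketch. The Python snippet below is a hypothetical illustration only: the TravelRecord, LoginAlert, and triage_travel_alert names are invented for the example, and a real agent would pull itineraries and device status from travel and device-management systems rather than hard-coded data.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical schemas for illustration; real SOC platforms expose
# their own alert and itinerary formats.
@dataclass
class TravelRecord:
    user: str
    country: str        # ISO country code, e.g. "CN"
    start: date
    end: date

@dataclass
class LoginAlert:
    user: str
    country: str
    day: date
    device_managed: bool  # device enrolled in corporate device management

def triage_travel_alert(alert: LoginAlert, itinerary: list[TravelRecord]) -> str:
    """Return 'close' if a foreign login matches approved travel on a
    managed device; otherwise 'escalate' to a human analyst."""
    for trip in itinerary:
        if (trip.user == alert.user
                and trip.country == alert.country
                and trip.start <= alert.day <= trip.end
                and alert.device_managed):
            return "close"   # expected travel: suppress the alert, log the match
    return "escalate"        # no matching trip: route to the on-call analyst

# Example: an executive's approved week-long trip
itinerary = [TravelRecord("exec01", "CN", date(2025, 6, 1), date(2025, 6, 7))]
alert = LoginAlert("exec01", "CN", date(2025, 6, 3), device_managed=True)
print(triage_travel_alert(alert, itinerary))  # -> "close"
```

The point of the sketch is that the decision itself is routine; what the agentic layer adds is running it continuously against live alerts instead of leaving the lookup to an analyst.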

“There’s hundreds of things like that,” he said. 

Justin Dellaportas, chief information and security officer at communications technology company Syniverse, said that while AI agents have been able to automate some basic cybersecurity tasks, like combing through logs, they're also starting to automate actions: quarantining flagged emails and removing them from inboxes, or restricting access by a compromised account across its various logins.
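Dellaportas did not name the tooling behind those actions. As a rough illustration of what "quarantine and remove" and "restrict access" might look like when wired to an email gateway and an identity provider, the sketch below uses stub clients that only log what a real integration would do; every class and method name here is invented for the example.

```python
# Illustrative stubs: the article names no specific email gateway or
# identity provider, so these stand-ins just print the actions a real
# integration would perform.

class StubMailClient:
    def quarantine(self, message_id: str) -> None:
        print(f"quarantine message {message_id}")

    def remove_message(self, mailbox: str, message_id: str) -> None:
        print(f"remove {message_id} from {mailbox}")

class StubIdentityClient:
    def revoke_sessions(self, user: str) -> None:
        print(f"revoke all sessions for {user}")

    def disable_account(self, user: str) -> None:
        print(f"disable account {user}")

def contain_phishing(mail: StubMailClient, message_id: str, mailboxes: list[str]) -> None:
    """Quarantine a flagged message and claw back delivered copies."""
    mail.quarantine(message_id)
    for mailbox in mailboxes:
        mail.remove_message(mailbox, message_id)

def contain_compromised_account(idp: StubIdentityClient, user: str) -> None:
    """Cut a compromised account off across its logins."""
    idp.revoke_sessions(user)   # invalidate existing tokens and sessions
    idp.disable_account(user)   # block new sign-ins until the account is reset

contain_phishing(StubMailClient(), "msg-123", ["alice@example.com", "bob@example.com"])
contain_compromised_account(StubIdentityClient(), "alice")
```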

“[AI] is being used by criminals to efficiently find vulnerabilities and exploits into organizations at scale, and all of that is resulting in them having a higher success rate, getting initial access sooner and moving laterally into an organization quicker than we’ve seen,” he said. “Cyber defenders really need to lean into this technology now more than ever to stay ahead of this evolving threat landscape and the pace of cyber criminals.”

Dellaportas said that while every company has a unique risk profile and tolerance when it comes to deploying different types of cybersecurity tools, he views the adoption of agentic AI in cybersecurity as stages of a “crawl, walk, run methodology.”

“You roll this out, and it’s going to reason and then take action, but then it’s got to iterate through the actions that it’s previously taken,” he said. “I come back to a kind of trust but verify, and then as we get confidence in its effectiveness, we’ll move on to different problems.”

What AI bots mean for cybersecurity workers

While Dellaportas said AI agents can take over some tasks from human cybersecurity professionals in the future, he…


