In today’s rapidly evolving digital landscape, the threat of cyber incidents looms large. Organizations are constantly battling sophisticated attacks, making an efficient and effective incident response process harder to achieve and more critical than ever.
But what if new and emerging technology could not only help organizations react to these incidents but also proactively strengthen their defenses?
This was the central theme of a recent webinar hosted by UnitedLex, “Smarter, Faster, Stronger: How AI and ML are Powering Modern Cyber Incident Response,” which delved into how artificial intelligence and other technological advancements are transforming the way organizations approach incident response. The webinar, moderated by Kimberly Manibusan, VP, Cyber Incident Response Services, UnitedLex, featured Paul Greene, AIGP, CIPP/US, CIPP/E, CIPM, FIP, Partner, Harter Secrest & Emery LLP; Doug Kaminski, Chief Revenue Officer, Infinnium; and Violet Sullivan, CIPP/US, CIPM, AVP, Cyber Solutions Team Leader, Crum & Forster.
The webinar highlighted opportunities for leveraging AI alongside a common pain point: the often lengthy and complex process of understanding the scope of a cybersecurity incident.
The power of data mining and AI in incident response
In the event of a cyber incident, as Greene pointed out, a significant challenge lies in the time it takes to “figure out what data is at issue and whom you need to notify.” This is often followed by the disheartening realization that the notification list, after significant effort, is incorrect, forcing teams to “start at zero again.”
Sullivan emphasized that “data mining is one of those things that you don’t usually just start out of nowhere.” It’s often a necessary step when counsel requests a deeper understanding of the data involved in an incident. The technical team, while crucial, may not have the capacity to manually review every file and discern its meaning.
The good news, as shared by the experts, is that AI and data mining are emerging as powerful tools to address these very challenges. Greene highlighted that using AI for data mining and generating notification lists is a “sweet spot” where it can be highly effective, and he emphasized his comfort with AI’s ability to handle complex data analysis.
As Greene explained, “the deep machine learning models that engage in pattern recognition that help us sort through data that is too complex to sort through as a human being is exactly where we should be using AI.”
AI can sift through vast amounts of data, identify sensitive information like personally identifiable information (PII), and help organizations understand the extent of the breach more quickly and accurately.
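To make that idea concrete, the sketch below shows the kind of pattern-based scan that this sort of tooling builds on. It is a minimal, hypothetical illustration only; the directory name and regexes are assumptions, and real data-mining platforms layer trained machine learning models, validation logic, and human review on top of simple patterns like these.

```python
import re
from pathlib import Path

# Illustrative patterns only; production tooling uses trained models,
# checksum validation, and context scoring rather than bare regexes.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "us_phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scan_file(path: Path) -> dict[str, int]:
    """Count candidate PII hits per category in one text file."""
    text = path.read_text(errors="ignore")
    return {name: len(pattern.findall(text)) for name, pattern in PII_PATTERNS.items()}

def scan_corpus(root: str) -> dict[str, dict[str, int]]:
    """Walk a directory of exported documents and flag files with likely PII."""
    findings = {}
    for path in Path(root).rglob("*.txt"):
        hits = scan_file(path)
        if any(hits.values()):
            findings[str(path)] = hits
    return findings

if __name__ == "__main__":
    # "./incident_export" is a hypothetical folder of files pulled from affected systems.
    for file, hits in scan_corpus("./incident_export").items():
        print(file, hits)
```

Even a toy scan like this hints at why automation matters: the same pass over tens of thousands of exported files would take a manual review team far longer and with far more variability.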
Proactive measures: the foundation for solid Information Governance
The panel of experts also stressed that effective incident response isn’t just about reacting to an incident; it’s about building a strong foundation beforehand. Greene emphasized that the problems encountered during incident response often stem from a “failure to do all the good proactive work” in the months and years leading up to an incident. This includes having a robust asset inventory and understanding where sensitive data resides within the organization.
As Greene pointed out, for organizations that haven’t engaged in thorough proactive data mapping, the data mining exercise itself can be an “eye opener.” It can help clients realize the true extent and location of their data, prompting them to re-evaluate their risk assessment and allocate resources more effectively. “Failure to do that in the front end in any reasonable fashion adds to confusion. Delays cost on the back end in relation to data mining,” Greene stated. This proactive approach, Kaminski added, allows organizations to say, “These are proactive measures we have taken, and so it sets us up much better.”
Greene offered a forward-looking perspective on the potential of agentic AI to provide much-needed “personal privacy assistance” for individuals within organizations. Recognizing the human tendency to make mistakes and the practical need to save data in various locations for accessibility, Greene acknowledged, “We all make mistakes when we click on things or we’ll save that spreadsheet in four different places, because we need to get it locally.”
Greene envisions a future where AI-enabled assistance can proactively remind users to address outdated or unnecessary files, suggesting that such prompting could “do a great job in moving the ball forward on reducing risk” in the context of a proactive information governance program.
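Greene’s “four different places” example hints at one simple mechanism such an assistant could use: flagging duplicate and stale copies of files so users can act on them. The snippet below is a hypothetical sketch under that assumption (the directory, file type, and retention threshold are invented for illustration), not a description of any panelist’s product.

```python
import hashlib
import time
from collections import defaultdict
from pathlib import Path

STALE_AFTER_DAYS = 365  # hypothetical threshold for "outdated" files

def file_digest(path: Path) -> str:
    """Content hash so identical copies match regardless of name or folder."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def find_cleanup_candidates(root: str):
    """Return duplicate groups and stale files a privacy assistant might surface."""
    duplicates = defaultdict(list)
    stale = []
    cutoff = time.time() - STALE_AFTER_DAYS * 86400
    for path in Path(root).rglob("*.xlsx"):
        duplicates[file_digest(path)].append(path)
        if path.stat().st_mtime < cutoff:
            stale.append(path)
    # Keep only content hashes found in more than one location.
    duplicate_groups = [paths for paths in duplicates.values() if len(paths) > 1]
    return duplicate_groups, stale
```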
The role of AI, LLMs, and humans in fine-tuning notifications
The panelists also touched upon the challenges and considerations associated with implementing AI in incident response. While acknowledging the immense benefits, they stressed the need for a cautious and strategic approach.
A crucial aspect of incident response is timely and accurate notification. Manibusan highlighted the key role of technology in “fine-tuning these notifications” and “really homing in and finding that data.”
Sullivan provided a real-world perspective, noting the complexities of cyber incidents like those involving PowerSchool and MOVEit. She highlighted the challenges in determining the scope of data mining and the varying obligations of the breached vendor and their clients, underscoring the need for experts to navigate these complexities because the information “was in all shapes and forms, in different places.”
Kaminski underscored that achieving accuracy within tight timelines often “tracks back to the tools,” asserting that with “purpose-built” solutions trained on extensive “data breach data,” organizations can indeed “get what you’re looking for, and we’ve proven it for many years.” Kaminski stressed the importance of rigorous testing, urging teams to “make sure you don’t have false negatives as well as false positives.”
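Kaminski’s caution about false negatives and false positives maps directly onto standard classifier evaluation. A minimal sketch, assuming a hand-labeled validation sample of documents, might look like this:

```python
def evaluate_detector(predictions: list[bool], labels: list[bool]) -> dict[str, float]:
    """Compare detector output against a hand-labeled validation sample.

    False negatives (missed PII) risk under-notification; false positives
    inflate review cost and pad the notification list.
    """
    tp = sum(p and l for p, l in zip(predictions, labels))
    fp = sum(p and not l for p, l in zip(predictions, labels))
    fn = sum(not p and l for p, l in zip(predictions, labels))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {"precision": precision, "recall": recall,
            "false_positives": fp, "false_negatives": fn}

# Example: True means the document actually contains (or is flagged for) PII.
predictions = [True, True, False, True, False]
ground_truth = [True, False, False, True, True]
print(evaluate_detector(predictions, ground_truth))
```

The two error types are not symmetric in this context: a false negative can mean a missed notification obligation, while a false positive mainly adds review cost, which is why testing against a labeled sample before relying on a tool matters.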
Addressing concerns and the future
All the panelists acknowledged that concerns always exist when discussing the use of AI.
Kaminski highlighted the promising advancements in agentic AI compared to more common “zero-shot generative AI” models like ChatGPT, acknowledging that even in its current state, it’s “definitely generally better than just human review of every document” due to the inherent high error rates in manual processes. However, Kaminski echoed Greene’s point, emphasizing the established reliability of “tried and true machine learning algorithms” that have been specifically trained to identify PII.
The experts also raised important concerns about the potential risks associated with generative AI, questioning, “Is it being used to train other models? Is it being exposed? Is customer data being exposed in this generative AI process?” These questions, Kaminski noted, are not as prevalent with traditional algorithms.
Lessons learned and the path forward
The webinar concluded with a vital point: the importance of learning from every incident. Greene shared that a “lessons learned scenario” is a standard practice after an incident, providing an opportunity to identify weaknesses in the incident response plan and data management practices. He stressed that understanding the universe of protected data is almost always a key area for improvement identified in these post-incident reviews.
This program painted a clear picture of how AI and technology are transforming incident response. By leveraging data mining capabilities, streamlining notification processes, and strengthening governance and compliance, organizations can move towards a more proactive and resilient approach to cybersecurity.
Watch the recorded webinar here.
Learn about 2025 trends in incident response.
To learn more about UnitedLex Cyber Incident Response services, let’s talk.