The role of AI in DFIR is something I’ve been noodling over for some time, even before my wife first asked me how AI would impact what I do. I guess I started thinking about it when I first saw signs of folks musing over how “good” AI would be for cybersecurity, without any real clarity or specificity as to how that would work.
I recently received a more pointed question regarding the use of AI in DFIR, asking if it could be used to develop investigative plans, or to identify both direct and circumstantial evidence of a compromise.
As I started thinking about the first part of the question, I asked myself, “…how would you create such a thing?”, but then I switched to “why?” and sort of stopped there. Why would you need an AI to develop investigative plans? Is it because analysts aren’t creating them? If that’s the case, then is this really a problem set for which “AI” is a solution?
About a dozen years ago, I was working at a company where the guy in charge of the IR consulting team mandated that analysts would create investigative plans. I remember this specifically because the announcement came out on my wife’s birthday. Several months later, the staff deemed the mandate a resounding success, yet no one could point to a single investigative plan. Even a full six months after the announcement, the mandate was still considered a success, and still no one could produce one.
My point is, if your goal is to create investigative plans and you’re looking to AI to “fill the gap” because analysts aren’t doing it, then it’s possible that this isn’t a problem for which
[…]
Content was cut in order to protect the source. Please visit the source for the rest of the article.
This article has been indexed from Windows Incident Response