Many organizations are looking for trusted advisors, and this applies to our beloved domain of cyber/information security. If you look at LinkedIn, many consultants present themselves as trusted advisors to CISOs or their teams.
This perhaps implies that nobody wants to hire an untrusted advisor. But if you think about it, modern LLM-powered chatbots and other GenAI applications are essentially untrusted advisors (RAG and fine-tuning notwithstanding).
Let’s think about the use cases where an untrusted security advisor is quite effective and the risks are minimized.
To start, naturally intelligent humans remind us that any output of an LLM-powered application needs to be reviewed by a human with domain knowledge. While this advice has been spouted many times, with good reason, there are unfortunately signs that people are not paying attention. Here I will try to identify patterns, anti-patterns, and some dependencies for success with untrusted advisors, in security and the SOC specifically.
First, tasks involving ideation, creating new ideas and refining them, fit this pattern very well. One of the inspirations for this blog was my eternal favorite read about LLMs from years ago, “ChatGPT as muse, not oracle.” If you need a TL;DR: an untrusted cybersecurity advisor can be used for the majority of muse use cases (give me ideas and inspiration! test my ideas!) and only for a limited set of oracle use cases (give me precise answers! tell me what to do!).
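To make the muse pattern concrete, here is a minimal sketch of how a SOC team might ask an LLM for detection ideas while keeping a human expert as the mandatory reviewer. It assumes the OpenAI Python SDK; the model name, prompts, and function names are illustrative, not a recommendation of any particular vendor or workflow.

```python
# A minimal sketch of the "muse, not oracle" pattern: ask an LLM for
# candidate detection ideas, then force a human review step before
# anything is used. Model name, prompts, and helpers are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def brainstorm_detections(technique: str, constraints: str) -> str:
    """Ask for *ideas* (muse), not authoritative answers (oracle)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You are a brainstorming aid for a SOC. "
                        "Offer candidate ideas, not final answers."},
            {"role": "user",
             "content": f"List 5 candidate ways to detect {technique} "
                        f"given these constraints: {constraints}."},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    ideas = brainstorm_detections(
        technique="credential stuffing against a public login page",
        constraints="only web server logs and a SIEM are available",
    )
    # The untrusted advisor stops here: a domain expert reviews, discards,
    # and refines these drafts before anything reaches production detections.
    print("DRAFT IDEAS (require expert review):\n", ideas)
```

The point of the sketch is the boundary it draws: the model only ever produces draft ideas, and nothing it outputs becomes detection content without an expert in the loop.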
So let’s create new ideas. How would you approach securing something? What are some ideas for architecting under X and Y constraints? What are some ideas for implementing controls given infrastructure constraints? What are some ways to detect Z? All of these produce useful ideas that experts can turn into something great. Ultimately, they shorten time to value and they also crea
[…]