Since the proliferation of large language models (LLMs) such as OpenAI’s GPT-4, Meta’s Llama 2, and Google’s PaLM 2, we have seen an explosion of generative AI applications in almost every industry, cybersecurity included. However, for a majority of LLM applications, privacy and data residency are major concerns that limit the applicability of these technologies. In the worst cases, employees are unknowingly sending personally identifiable information (PII) to services like ChatGPT, outside of their organization’s controls, without understanding the associated security risks.