According to a new Venafi survey, developers in almost all (83%) organisations use AI to generate code, a practice that security leaders fear could lead to a major security incident.
In a report published earlier this month, the machine identity management company shared results indicating that AI-generated code is widening the gap between programming and security teams.
The report, Organisations Struggle to Secure AI-Generated and Open Source Code, highlighted that while 72% of security leaders believe they have little choice but to allow developers to use AI in order to remain competitive, virtually all (92%) are concerned about its use.
Because AI, particularly generative AI technology, is advancing so quickly, 66% of security leaders believe they will be unable to keep up. An even larger share (78%) believe that AI-generated code will lead to a security reckoning for their organisation, and 59% lose sleep over the security implications of AI.
[…]
Content was cut in order to protect the source. Please visit the source for the rest of the article.
This article has been indexed from CySecurity News – Latest Information Security and Hacking Incidents