Let’s talk about CrowdStrike’s quality assurance failures! Thanks to Help Net Security for publishing my opinion piece. Take a look for a more in-depth explanation of how the bad update made it to over 8 million devices and caused widespread global outages.
CrowdStrike has released preliminary details of how their bad update made it to client systems and caused the BSODs. The report shows they have a complex product release architecture, which in this case failed. Improvements need to be made, and I am concerned that their plans for incremental changes to a flawed quality assurance architecture won’t produce the desired long-term outcomes.
CrowdStrike has a good reputation and strong leadership, as demonstrated by CEO George Kurtz, who quickly came out to take responsibility and rally his team to help their customers. Many companies, including security companies, have not been transparent or timely when their products caused problems; in fact, it seems more common to initially deny, downplay, or blame others. So what George did is truly commendable. It is a testament to CrowdStrike’s work ethic.
However, this is a major outage, and CrowdStrike needs to revisit their preliminary improvement plans to account for a flawed strategy that allows dangerous code to reach endpoints — something that should never be allowed to happen.
The world is watching, and the lessons learned will likely help improve operating practices across the industry.
Give the article a read and let me know your thoughts and concerns!
[…]