Securing work-at-home apps

In today’s post, I answer the following question:

Our customers’ employees are now using our corporate application while working from home. They are concerned about security and about protecting their trade secrets. What security feature can we add for these customers?

The tl;dr answer is this: don’t add gimmicky features, but instead, take this opportunity to do security things you should already be doing, starting with a “vulnerability disclosure program” or “vuln program”.

Gimmicks

First of all, I’d like to discourage you from adding security gimmicks to your product. You are no more likely to come up with an exciting new security feature on your own than you are to come up with a miracle cure for COVID. Your sales and marketing people may get excited about the feature, and they may get the customer excited about it too, but the excitement won’t last.

Eventually, the customer’s IT and cybersecurity teams will be brought in. They’ll quickly identify your gimmick as snake oil, and you’ll have made an enemy of them. They are already involved in securing the server side, the work-at-home desktop, the VPN, and all the other network essentials. You don’t want them as your enemy, you want them as your friend. You don’t want to send your salesperson into the maw of a technical meeting at the customer’s site trying to defend the gimmick.

You want to take the opposite approach: do something that the decision maker on the customer side won’t necessarily understand, but which their IT/cybersecurity people will get excited about. You want them in the background as your champion rather than as your opposition.

Vulnerability disclosure program

To accomplish the goal described above, the thing you want is known as a vulnerability disclosure program. If there’s one thing the entire cybersecurity industry agrees on (other than hating the term cybersecurity, preferring “infosec” instead), it’s that you need a vulnerability disclosure program. Everything else you might want to do to add security to your product comes after you have this thing.

Your product has security bugs, known as vulnerabilities. This is true of everyone, no matter how good you are. Apple, Microsoft, and Google employ the brightest minds in cybersecurity and they have vulnerabilities. Every month you update their products with the latest fixes for these vulnerabilities. I just bought a new MacBook Air and it’s already telling me I need to update the operating system to fix the bugs found after it shipped.

These bug reports come mostly from outsiders. These companies have internal people searching for such bugs, as well as consultants, and they do a good job of quietly fixing what they find. But this goes only so far. Outsiders have a wider range of skills and perspectives than any company could ever hope to employ itself, so they find things that the companies miss.

These outsiders are often not customers.

This has been a chronic problem throughout the history of computers. Somebody calls up your support line and tells you there’s an obvious bug that hackers can easily exploit. The customer support representative then ignores them because the caller isn’t a customer. From the support desk’s point of view, it’s foolish to waste time on changes to a product that no customer is asking for.

But then the bug leaks out to the public, hackers widely exploit it and damage customers, and those angry customers demand to know why you did nothing to fix the bug despite having been notified about it.

The problem here is that nobody has the job of responding to such reports. The reason your company dropped the ball was that nobody was assigned to pick it up. All a vulnerability disclosure program means is that at least one person within the company has the responsibility of dealing with them.

How to set up a vulnerability disclosure program

The process is pretty simple.

First of all, assign somebody to be responsible for it. This could be somebody in engineering, in project management, or in tech support. There is management work involved (opening tickets, tracking them, and closing them), but at some point a technical person also needs to get involved to analyze the problem.

Second, figure out how the public contacts the company. The standard way is to set up two email addresses, “security@example.com” and “secure@example.com” (pointing to the same inbox). These are the standard addresses that most cybersecurity researchers will attempt to use when reporting a vulnerability to a company. They should point to a mailbox checked by the person assigned in the first step above. A web form for submitting information can also be used. In any case, googling “vulnerability disclosure [your company name]” should yield a webpage describing how to submit vulnerability information — just like it does for Apple, Google, and Microsoft. (Go ahead, google them, see what they do, and follow their lead.)
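
To make that concrete, here is a minimal sketch of the web-form option. It’s my own illustration, not a prescription: it assumes a Python/Flask stack, a local mail server, and the example.com addresses used above, and it simply forwards each submission to the mailbox checked by the person assigned in the first step.

    # Hypothetical intake endpoint for vulnerability reports (illustrative sketch).
    # Assumes Flask is installed and a mail server is listening on localhost.
    from email.message import EmailMessage
    import smtplib

    from flask import Flask, request

    app = Flask(__name__)
    SECURITY_MAILBOX = "security@example.com"   # same inbox as secure@example.com

    @app.route("/security/report", methods=["POST"])
    def receive_report():
        # Collect the minimum a triager needs: contact info and a description.
        reporter = request.form.get("contact", "anonymous")
        details = request.form.get("details", "")

        msg = EmailMessage()
        msg["Subject"] = "Vulnerability report received via web form"
        msg["From"] = "vuln-intake@example.com"
        msg["To"] = SECURITY_MAILBOX
        msg.set_content(f"Reporter: {reporter}\n\n{details}")

        # Hand the report to the person responsible for the disclosure program.
        with smtplib.SMTP("localhost") as smtp:
            smtp.send_message(msg)

        return "Thank you. Your report has been forwarded to our security team.", 200

    if __name__ == "__main__":
        app.run()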

Tech support needs to be trained that “vulnerability” is a magic word: when somebody calls in with a “vulnerability”, it doesn’t go through the normal customer support process (which starts with “are you a customer?”), but instead gets shunted over to the vulnerability disclosure process.

How to run a vuln disclosure program

Once you’ve done the steps above, let your program evolve with the experience you get from receiving such vulnerability reports. You’ll figure it out as you go along.

But let me describe some of the problems you are going to have along the way.

For specialty companies with high-value products and a small customer base, you’ll have the problem that almost nobody uses this channel. Lack of exercise leads to flab, and thus you’ll have a broken process when a real problem arrives.

You’ll get spam at this address. This is why, even though “security@example.com” is the standard address, many companies prefer web forms instead, to reduce the noise. The danger is that whoever has the responsibility of checking the email inbox will get so accustomed to ignoring spam that they’ll ignore legitimate emails. Spam filters help.

Of the notifications that are legitimately about security vulnerabilities, most will be nonsense. Vulnerability hunting is a fairly standard activity in the industry, both by security professionals and hackers. There are lots of tools to find common problems — tools that any idiot can use.

Which means idiots will use these tools and, not understanding the results, will claim to have found a vulnerability and waste your time telling you about it.

At the same time, there are lots of non-native English speakers, and native speakers who are just really nerdy, who won’t express themselves well. They will find real bugs, but you won’t be able to tell, because their communication is so bad.

Thus, you get these reports, most of which are trash, but a few of which are gems, and you won’t be able to easily tell the difference. It’ll take work on your part, querying the vuln reporter for more information.

Most vuln reporters are emotional and immature. They are usually convinced that your company is evil and stupid. And they are right, after a fashion. When it’s a real issue, it’s probably something within their narrow expertise that your own engineers don’t quite understand. Their motivation isn’t necessarily to help your engineers understand the problem, but to watch you fumble the ball, proving their own superiority and your company’s weakness. Just because they have glaring personality disorders doesn’t mean they aren’t right.

Then there is the issue of extortion. A lot of vuln reports will arrive as extortion threats: pay up, or the person will make the vuln public or give it to hackers to cause havoc. Many of these threats will demand a “bounty”. At this point, we should talk about “vuln bounty programs”.

Vulnerability bounty programs

Once you’ve had a vulnerability disclosure program for some years and have all the kinks worked out, you may want to consider a “bounty” program. Instead of simply responding to such reports, you may want to actively encourage people to find such bugs. It’s a standard feature of big tech companies like Google, Microsoft, and Apple. Even the U.S. Department of Defense has a vuln bounty program.

This is not a marketing effort. Sometimes companies offer bounties claiming their product is so secure that hackers can’t possibly find a bug, offering (say) $100,000 to any hacker who thinks they can. This is garbage. All products have security vulnerabilities. Such bounties are full of fine print ensuring that any vulns hackers find won’t match the terms, and thus won’t get the payout. It’s just another gimmick that’ll get your product labeled snake oil.

Real vuln bounty programs pay out. Google offers $100,000 for certain kinds of bugs and has paid that sum many times. They have one of the best reputations for security in the industry not because they are so good that hackers can’t find vulns, but because they are so responsive to the vulns that are found. They’ve probably paid out more in bounties than any other company, and yet they are viewed as among the most secure. You’d think that paying for so many bugs would make people think they were less secure, but the industry views it in the opposite light.

You, however, don’t want a bounty program yet. The best companies have vuln bounty programs, but you shouldn’t. At least, not until you’ve gotten the simpler vuln disclosure program running first. A bounty program will increase the problems I describe above tenfold; unless you’ve had experience dealing with the normal level of trouble, you’ll get overwhelmed.

Bounties are related to the extortion problem described above. Even if all you have is a mere disclosure program without bounties, people will still ask for bounties. These legitimate requests for money may sound like extortion.

The seven stages of denial

When doctors tell patients of a serious illness, the patients go through seven stages of grief: disbelief, denial, bargaining, guilt, anger, depression, and acceptance/hope.

When real vulns appear in your program, you’ll go through those same stages. Your techies will find ways of denying that a vuln is real.

This is the opposite of the problem I describe above. You’ll get a lot of trash reports that aren’t real bugs, but some bugs are real, and yet your engineers will still claim they aren’t.

I’ve dealt with this problem for decades, helping companies with reports that I believe are blindingly obvious and real, but which their engineers claim are only “theoretical”.

Take “SQL injection” as a good example. This is the most common bug in web apps (and REST applications). How it works is well understood — yet in my experience, most engineers believe it can’t happen to them. Thus, it persists in applications. One reason engineers will deny it’s a bug is that they’ll convince themselves that nobody could practically reverse engineer the details out of their product in order to get an exploit to work.

In reality, such reverse engineering is easy. I can either use simple reverse engineering tools on the product’s binary code, or I can intercept live requests to REST APIs from the running program. Security consultants and hackers are extremely experienced at this. In customer engagements, I’ve found SQL injection vulnerabilities within five minutes of looking at a product. I’ll have spent much longer trying to get the product installed on my computer, and I won’t yet have a clue what the product actually does, but I’ll already have found a vuln.
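
For readers who haven’t seen the bug itself, here is a minimal sketch (my own illustration, using Python’s built-in sqlite3 and made-up table names). The vulnerable version builds the query by string concatenation, so attacker-supplied input changes the meaning of the SQL; the fixed version uses a parameterized query.

    # Illustrative sketch of SQL injection and its fix; names are invented for the demo.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
    db.execute("INSERT INTO users VALUES ('alice', 0)")

    def find_user_vulnerable(name: str):
        # BAD: user input is concatenated directly into the SQL statement.
        # Input like  ' OR '1'='1  changes the meaning of the query.
        query = "SELECT name, is_admin FROM users WHERE name = '" + name + "'"
        return db.execute(query).fetchall()

    def find_user_safe(name: str):
        # GOOD: a parameterized query keeps data separate from code.
        return db.execute(
            "SELECT name, is_admin FROM users WHERE name = ?", (name,)
        ).fetchall()

    print(find_user_vulnerable("' OR '1'='1"))  # returns every row in the table
    print(find_user_safe("' OR '1'='1"))        # returns nothing, as it should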

Server-side vulns are also easier to find than you expect. Unlike with the client application, we can assume that hackers won’t have access to the server code, but they can still find vulnerabilities. A good example is “blind SQL injection”, where at first glance the bug appears impossible to exploit.
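
Here is a rough sketch of why “blind” doesn’t mean unexploitable. It’s a self-contained toy, my own illustration rather than anything from a real engagement: the attacker never sees query output, only a yes/no difference in the page’s behavior, yet recovers a secret one character at a time by asking the right questions.

    # Toy demonstration of boolean-based blind SQL injection (illustrative only).
    import sqlite3
    import string

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (name TEXT, password TEXT)")
    db.execute("INSERT INTO users VALUES ('admin', 's3cret')")

    def login_page(username: str) -> bool:
        # Vulnerable server-side code: user input concatenated into SQL.
        # The attacker only learns True ("welcome back") or False ("no such user").
        query = "SELECT 1 FROM users WHERE name = '" + username + "'"
        try:
            return db.execute(query).fetchone() is not None
        except sqlite3.Error:
            return False

    def probe(condition: str) -> bool:
        # The attacker's yes/no oracle: inject a condition and watch the behavior.
        return login_page("admin' AND " + condition + " --")

    # Recover the admin password one character at a time.
    recovered = ""
    while True:
        for ch in string.printable:
            if ch in "'%_\\":      # skip characters that would complicate the demo
                continue
            if probe(f"password LIKE '{recovered + ch}%'"):
                recovered += ch
                break
        else:
            break                  # no character extended the match, so we're done
    print("recovered password:", recovered)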

Even the biggest/best companies struggle with this “denial”. You are going to make this mistake repeatedly and ignore bug reports that eventually bite you. It’s just a fact of life.

This denial is related to the extortion problem I described above. Take as an example a case where your engineers are in the “denial” phase, claiming that the reported vuln can’t practically be exploited by hackers. The person who reported the bug then offers to write a “proof-of-concept” (PoC) to prove that it can be exploited, but says it would take several days of effort, and demands compensation before going through that effort. Demanding money for work they’ve already done is illegitimate, especially when threats are involved. Asking for money before doing future work is legitimate: people may rightly be unwilling to work without pay. (There’s a sizable “no free bugs” movement in the community of people refusing to work for free.)

Full disclosure and Kerckhoffs’s Principle

Your marketing people will make claims about the security of your product. Customers will ask for more details. Your marketing people will respond saying they can’t reveal the details, because that’s sensitive information that would help hackers get around the security.

This is the wrong answer. Only insecure products need to hide the details. Secure products publish the details. Some publish the entire source code, others publish enough details that everyone, even malicious hackers, can find ways around the security features — if such ways exist. A good example is the detailed documents Apple publishes about the security of its iPhones.

This idea goes back to the 1880s and is known as Kerckhoffs’s Principle in cryptography. It asserts that encryption algorithms should be public instead of secret, and that the only secret should be the password/key. Keeping the algorithm secret prevents your friends from pointing out its obvious flaws, but does little to stop the enemy from reverse engineering those flaws.

My grandfather was a cryptographer in WWII. He told a story about how the Germans were using an algorithmic “one time pad”. Only, the “pad” wasn’t “one time” as the Germans thought; it repeated. Through brilliant guesswork and reverse engineering, the Allies were able to discover that it repeated, and thus were able to completely break the encryption.
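
The arithmetic behind that break is worth seeing, because it carries the same lesson as Kerckhoffs’s Principle: the strength must live entirely in the key, and reusing the key destroys it. A tiny sketch (my own illustration, not my grandfather’s actual cipher): XOR two ciphertexts that share a pad and the pad cancels out completely, leaving the attacker to work on the plaintexts alone.

    # Illustrative sketch: why a reused "one time" pad is fatal.
    import os

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    pad = os.urandom(16)            # meant to be used exactly once
    msg1 = b"ATTACK AT DAWN.."
    msg2 = b"RETREAT AT DUSK."

    ct1 = xor(msg1, pad)            # first message: fine
    ct2 = xor(msg2, pad)            # pad reused: this is the mistake

    leak = xor(ct1, ct2)            # the pad cancels out entirely
    assert leak == xor(msg1, msg2)  # attacker now faces plaintext-vs-plaintext only
    print(leak)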

The same is true of your product. You can make it secure enough that even if hackers know everything about it, they still can’t bypass its security. If your product isn’t that secure, then hiding the details won’t help you much, as hackers are very good at reverse engineering; I’ve tried to describe above how unexpectedly good they are. All hiding the details does is prevent your friends and customers from discovering those flaws first.

Ideally, this would mean publishing source code. In practice, commercial products won’t do this for obvious reasons. But they can still publish enough information for customers to see what’s going on. The more transparent you are about cybersecurity, the more customers will trust you are doing the right thing — and the more vulnerability disclosure reports you’ll get from people discovering you’ve done the wrong thing, so you can fix it.

This transparency continues after a bug has been found. It means communicating to your customers that such a bug happened, what its full danger is, how to mitigate it without a patch, and how to apply the software fix.

[…]

