I didn’t get to see the discussion between Justine Bone and Chris Wysopal about the former’s approach to monetizing vulnerabilities. If you’re not familiar with the approach, or the “Muddy Waters” episode, take a minute to brush up; I’ll wait…

OK, so if you’re in one computer security sub-community, the first words out of your mouth are probably something along the lines of: “what a bunch of money-grubbing parasites.” If you knew anyone associated with this event, you’ve probably stopped talking to them. You’d certainly start talking shit about them. This is supposed to be about security, not profiteering.

If you’re in a different sub-community you’re probably thinking something along the lines of, “what a bunch of money-grubbing parasites,” only for different reasons. You’re not naive enough to think that a giant company will drop everything to fix the buffer overflow you discovered last week. Even if they did, because the flaw is a couple of lines in a couple of million lines of code, a fix isn’t necessarily imminent. Publicity linked to responsible disclosure is a more passive way of telling the world, “We are open for business.” It’s about security, but it’s also about paying the mortgage.

If you’re in yet another sub-community you’re probably wondering why you didn’t think of it yourself, and are fingering your Rolodex to find a firm to team up with. Not because mortgages or yachts don’t pay for themselves, but because you realize that the only way to get some companies to give a shit is to hit them where it hurts: in the wallet.

The impact that vulnerability disclosure, in any of its flavors, is having on computer security is not zero, but it’s not registering on a scale that matters. Bug bounty programs are all the rage, and they have great utility, but it will take time before the global pwns-per-minute rate changes in any meaningful fashion.

Arguing about the utility of your preferred disclosure policy misses the most significant point about vulnerabilities: the people who created them don’t care unless it costs them money. For publicly traded companies, pwnage does impact the stock price, but for maybe a fiscal quarter. Just about every company that’s suffered an epic breach sees its stock price at or above its pre-breach level within a year. Shorting a company’s stock before dropping the mic on one vulnerability is a novelty; it becomes a material event if you can do it fiscal quarter after fiscal quarter.

We can go round and round about what’s going to drive improvements in computer security writ large, but when you boil it down it’s really about one or both of two things: money and bodies. This particular approach to monetizing vulnerabilities tackles both.

We will begin to see significant improvements in computer security when a sufficient number of people die in a sufficiently short period of time due to computer security issues. At a minimum we’ll see legislative action, which will be designed to drive improvements. Do you know how many people had to die before seatbelts in cars became mandatory? You don’t want to know.

When the cost of making insecure devices exceeds the profits they generate, we’ll see improvements. At a minimum we’ll see bug bounty programs, which are one piece of the puzzle of making actually secure, or at least reasonably secure, devices. Do you know how hard it is to write secure code? You don’t want to know.
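To make that last point concrete, here’s a minimal, hypothetical C sketch — none of this code is from any actual device or from the episode above — of the kind of bug that hides in plain sight. The vulnerable version looks harmless and compiles cleanly; the fix is two lines.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical firmware helper: copies an ID string into a fixed-size
 * buffer. strcpy() performs no bounds check, so any input longer than
 * 15 characters (plus the NUL terminator) silently overwrites adjacent
 * stack memory -- a textbook buffer overflow. */
void store_id(const char *input)
{
    char id[16];
    strcpy(id, input);          /* BUG: no length check */
    printf("stored id: %s\n", id);
}

/* The corrected version is barely longer, which is exactly the point:
 * the difference between vulnerable and safe is a couple of lines
 * buried in a couple of million. */
void store_id_safe(const char *input)
{
    char id[16];
    strncpy(id, input, sizeof(id) - 1);  /* bounded copy */
    id[sizeof(id) - 1] = '\0';           /* always NUL-terminate */
    printf("stored id: %s\n", id);
}

int main(void)
{
    store_id_safe("ID-0042");
    return 0;
}
```

Spotting that two-line difference after the fact, across a real codebase, is the expensive part — which is why the economics, not the engineering, are what this whole argument turns on.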

If you’re someone with a vulnerable medical device implanted in you, you’re probably thinking something along the lines of, “who the **** do you think you are, telling people how to kill me?” Yeah, there is that. But as has been pointed out in numerous interviews, who is more wrong: the person who points out the vulnerability (without a PoC) or the company that knowingly lets people walk around with potentially fatally flawed devices in their bodies? Maybe two wrongs don’t make a right, but as is so often the case in security, you have to settle for the least terrible option.
