C.R.E.A.M. IoT Edition

I didn’t get to see the discussion between Justine Bone and Chris Wysopal about the former’s approach to monetizing vulnerabilities. If you’re not familiar with the approach, or the “Muddy Waters” episode, take a minute to brush up, I’ll wait….

OK, so if you’re in one computer security sub-community, the first words out of your mouth are probably something along the lines of: “what a bunch of money-grubbing parasites.” If you know anyone associated with this event, you’ve probably stopped talking to them. You’d certainly start talking shit about them. This is supposed to be about security, not profiteering.

If you’re in a different sub-community you’re probably thinking something along the lines of, “what a bunch of money-grubbing parasites,” only for different reasons. You’re not naive enough to think that a giant company will drop everything to fix the buffer overflow you discovered last week. Even if they did, because it’s a couple of lines in a couple of million lines of code, a fix isn’t necessarily imminent. Publicity linked to responsible disclosure is a more passive way of telling the world “we are open for business”: it’s about security, but it’s also about paying the mortgage.

If you’re in yet another sub-community you’re probably wondering why you didn’t think of it yourself, and are fingering your Rolodex to find a firm to team up with. Not because mortgages or yachts don’t pay for themselves, but because you realize that the only way to get some companies to give a shit is to hit them where it hurts: in the wallet.

The impact that vulnerability disclosure, in any of its flavors, is having on computer security is not zero, but it’s not registering on a scale that matters. Bug bounty programs are all the rage, and they have great utility, but it will take time before the global pwns/minute ratio changes in any meaningful fashion.

Arguing about the utility of your preferred disclosure policy misses the most significant point about vulnerabilities: the people who created them don’t care unless it costs them money. For publicly traded companies, pwnage does impact the stock price: for maybe a fiscal quarter. Just about every company that’s suffered an epic breach sees its stock price at or above pre-breach levels just a year later. Shorting a company’s stock before dropping the mic on one vulnerability is a novelty: it’s a material event if you can do it fiscal quarter after fiscal quarter.

We can go round and round about what’s going to drive improvements in computer security writ large, but when you boil it down it’s really about one or both of two things: money and bodies. This particular approach to monetizing vulnerabilities tackles both.

We will begin to see significant improvements in computer security when a sufficient number of people die in a sufficiently short period of time due to computer security issues. At a minimum we’ll see legislative action, which will be designed to drive improvements. Do you know how many people had to die before seatbelts in cars became mandatory? You don’t want to know.

When the cost of making insecure devices exceeds the profits they generate, we’ll see improvements. At a minimum we’ll see bug bounty programs, which are one piece of the puzzle of making actually (or at least reasonably) secure devices. Do you know how hard it is to write secure code? You don’t want to know.

If you’re someone with a vulnerable medical device implanted in them you’re probably thinking something along the lines of, “who the **** do you think you are, telling people how to kill me?” Yeah, there is that. But as has been pointed out in numerous interviews, who is more wrong: the person who points out the vulnerability (without PoC) or the company that knowingly lets people walk around with potentially fatally flawed devices in their bodies? Maybe two wrongs don’t make a right, but as is so often the case in security, you have to choose the least terrible option.

The Wolf is Here

For decades we’ve heard that iCalamity is right around the corner. For decades we’ve largely ignored pleas to address computer security issues while they are relatively cheap and easy to fix, before they become too large and complicated to address at all. We have been living a fairy tale life, and absent bold action and an emphasis on resiliency, it only gets grim(m)er going forward.

Reasonably affordable personal computers became a thing when I was in high school. I fiddled around a bit, but I didn’t know that computer security was a thing until I was on active duty and the Morris Worm was all over the news. Between the last time Snap! charted and today, we have covered a lot of ground from a general purpose IT perspective. We’ve gone from HTML and CGI to the cloud. From a security perspective, however, we’re still largely relying on firewalls, anti-virus, and SSL.

Why the disparate pace of progress? People demand that their technology be functional, not secure. Like so many areas of our lives, we worry about the here and now, not the what-might-be. We only worry about risks once a sufficiently horrific scenario occurs, or, if one is not enough, once enough of them occur in a sufficiently short period of time.

Of course today we don’t just have to worry about securing PCs. By now it is fairly common knowledge that your car is full of computers, as is increasingly your house. Some people wear computers, and some of us are walking around with computers inside of us. Critical infrastructure is lousy with computers, and this week we learned that those shepherd boys crying ‘wolf’ all those years weren’t playing us for fools, they were just too early.

The fragility of our standard of living is no longer the musings of Cassandras. The proof of concept was thankfully demonstrated far, far away, but the reality is we’re not really any safer just because ‘merica. Keeping the lights on, hearts beating, and the water flowing is a far more complex endeavor than you find in the commodity IT world. It is entirely possible that in some situations there is no ‘fix’ to certain problems, which means given various inter-dependencies we will always find ourselves with a Damoclean sword over our heads.

Mixed mythologies notwithstanding, the key to success writ large is insight and resiliency. How aware you are of what you have, how it works, and how to get along without it will be critical to surviving both accidents and attacks. I would like to think that the market will demand both functional and secure technology, and that manufacturers will respond accordingly, but 50 years of playing kick the can tells me that’s not likely. The analog to security in industrial environments is safety, and that’s one area power plants, hospitals, and the like have down far better than their peers in the general purpose computing world. We might not be able to secure the future, but with luck we should be able to survive it.