Cybersecurity Through the Lens of Rock Climbing

I’ve been to a lot of kids’ sporting events in the last decade plus. They have their moments, but I think I speak for all parents who are not living vicariously through their child’s prowess on the field of play when I say there are a few dozen places you’d rather be than sitting on a cooler of orange slices and water bottles on a Saturday morning.

But since we’re fond of making sports a metaphor for so many other things in life — or is it the other way around? — I thought I’d point out a few lessons that rock climbing (yes, they have competitions) teaches us about security.

Everything is harder than it looks. When my son started rock climbing he was all about using his arms, with predictable results. It wasn’t until he realized the importance of using all four limbs that he really started to have success. There is no shortage of recommendations or guidance or frameworks that one can use to help secure an enterprise, but if it were as easy as installing anti-virus, telling the CEO there are bad guys out there, and checking boxes on a list, my SF86 wouldn’t be in Beijing.

There is a significant difference between practice and real life. Climbing gyms have all sorts of different configurations on their walls, but they cannot always replicate what you’ll find in the wild. Sometimes there isn’t a convenient hand- or foot-hold to get you over the top. Sometimes you hit a dead end and have to find another way around. In security maybe that’s a corporate policy (or raison d’être). Maybe it’s a regulation or even a physical constraint. Regardless, you need to be prepared to take a long, winding route to your goal, or accept that what needs doing is one crag too far.

You need strength in your core and at the extremities. Having a strong grip is great, but without a high level of strength and mobility in your abdomen, shoulders, and hips, you will find it very hard to get up and out of tight spots. Better security requires a range of talents, tools, and methods. You’ve got to work on them all, and in a coordinated fashion with the rest of the organization, to succeed.

Energy drains quickly. A given bouldering problem may be both vertical and horizontal. The distance traversed may not be long, but crawling on all fours, upside-down, is not a party. Trying to achieve security goals can be equally challenging and exhausting. You’re always the person who says ‘no.’ You’re always fighting for resources and respect. You’re always the scapegoat. At some point everyone asks, “why bother?”

No one gets through the hard stuff the first time. Everyone who makes a high V-rated route look easy does so only because they fell on their backsides more often than they reached the top. They make it look easy because they know what doesn’t work. Senior practitioners, successful CISOs, they all failed a lot before they won.

Breaches Forever!

The computer security industry is not stopping breaches. Not for lack of trying, but if you’re familiar with the myth of Sisyphus, such efforts are the definition of pointless. If this sounds strange coming from a computer security person, it shouldn’t. I’m not here to blow smoke up your fourth point of contact; I’m here to point out that the impetus for progress is not going to come from anything a bunch of nerds conjure up.

The arguments that spring up whenever there is an epic breach are predictable and can be broken down into two major themes:

    1. Everyone in the victim company is an idiot. If they just employed people like me and my friends, this never would have happened.
    2. Securing data on an enterprise scale is hard. The idea that there is one or a hundred things that could have been done to prevent this disaster dismisses the complexity of what’s involved in protecting an “enterprise” and not “my basement lab.”

Now, the argument over whether or not the C-levels of Equifax were equipped — intellectually or materially — for the job has been made, but the result doesn’t matter. Day to day, the dynamic in corporations around the world is the same. The world’s greatest CISO still has to fight for budget, human resources, technical equipment and software, and so on. The CFO still has to balance budgets and attempt (futile as it may be in security) to assess whether the CISO’s requests produce a sufficient ROI. The CEO really only cares about making his numbers in a fashion that keeps him out of jail.

There is no requirement for a secure enterprise. There is a requirement to have an enterprise that is secure enough to maintain compliance with applicable laws and that enables effective business operations.

Did Equifax do wrong? From what we can tell via publicly available information they did things, to varying degrees of effectiveness, and with questionable timing. They could have done a better job, but Equifax is just like every corporation in that security is something they have to comply with; profit is why they get up in the morning.

Breaches, regardless of their size or the sensitivity of the data involved, have become so commonplace that they are no longer automatically considered problematic. A breach alone is no longer justification for a lawsuit. Increasingly you have to show actual damages to have standing. Credit card number compromised? The bank makes you whole and happily issues you a new card. Medical data compromised? Insurance fraud is readily solved by a rate increase you hardly notice. Intimate details of your life lost to a foreign adversary? Well, I guess the Forbidden City really is forbidden to you at this point.

And life goes on.

Breaches are a part of our way of life. By and large they do not impact our lives enough (or enough lives) to merit the kind of attention they get. As a friend recently pointed out, we are now living in a “post-authentication” world: so much data about us has been lost/stolen that anyone can be anyone else for a length of time. There is no point in trying to keep your personal information personal because it’s all effectively public, and has been for some time. Many times over.

The idea that this breach, or any breach hereafter, is going to be ‘the one’ that mobilizes the populace to a degree that they’re willing to do what is necessary to achieve political/legal change is wishful thinking. An angry mob, to the extent that anyone outside of the usual privacy/security community is going to get off their couch, is no substitute for the well-funded and organized industry lobbying effort.

I’m not saying it’s right, I’m saying that’s how it’s always played out, and there is no indication history is not going to repeat itself.

Cyber Diplomacy Will Not Save You

The idea that the promises of diplomats and statesmen will render cyberspace a safe place is a fantasy you can ill afford to entertain if you want to remain a going concern.

Many positive things have been said about the recent memorandum of understanding between China and the US, in particular the section dealing with cyber security. Just as much derision has been heaped upon it. From the perspective of the diplomats the agreement is a win because it gives us ammunition to use in the future. When another data breach or attack takes place and is attributed to China they can say “You are breaking your promise and what follows is on you.”

From the perspective of the nay-sayers the point is simple: because you cannot verify the actions – or inaction – of your adversary, they will always have deniability. Yes, you can shave most of these problems with Occam’s razor, but when you are talking about taking legal action that may deny someone their liberty, or in an extreme case strategic action, you kind of want to base your decision on something more than ‘it stands to reason.’

Talk is cheap. Actions speak louder than words. Clichés that could not be more apt when it comes to the issue of computer and information security. The US indicting five PLA officers for cyber-crimes is motion; actually arresting an American woman in China is action. One of the six aforementioned people knows what a prison cell looks like. Guess which country is showing it’s hard on (alleged) bad actors?

I’m like most people in that I would be happy if diplomacy led to concrete action, but until the online world is actually sunshine and lollipops it is important for everyone to remember that on a practical level, all this hand-shaking means nothing. You are still primarily responsible for your own cyber defense and no one is going to make you whole if you fail. Memorandum, treaty, or pinky-swear, attacks – state-sponsored/sanctioned or not – are not going to stop. IP theft isn’t going away. Data breaches will continue apace. We have no way of stopping bad things from happening online short of a global re-engineering effort that remakes the Internet, and everything that rides on it, into something securable and surveillable.

That is never going to happen.

If what happened last week reminds you of another famous event in ironic diplomatic history, you’re not far off. Until people die in sufficient numbers due to a cyber-attack, do not expect radical or even incremental change because the foreseeable future of online security is still death-by-a-thousand-cuts . . . something I would point out the Chinese invented.

Functionality > Security

It was reported recently that a security researcher found several exploitable vulnerabilities in a FireEye product. ‘I tried to work with them,’ he said, but was apparently rebuffed/ignored, so here you go: an 0-day. There are at least three sides to every vulnerability disclosure story so I don’t particularly care about who said what when. What we all should be concerned about is the law that applies to all software, regardless of what it does for a living. That law?

Functionality trumps security.

Every. Time.

People don’t think twice when a random commodity software product is found to have some horrendous vulnerability that makes it look like its code was produced by a band of monkeys that was rejected from the Shakespeare project, but when code belonging to something meant to keep your enterprise safe is found to have holes, that’s news.

It shouldn’t be.

I’ve been involved with enough security software projects to know that even the most security-minded people want their stuff to work first; then they lock things down. I don’t know that there is such a thing as a secure developer; there are just developers with varying levels of concern about security and different ideas on when that concern should be addressed. That any security product has holes in it should not be a surprise; what’s a surprise is that disclosures like this are not more common.

In fact, I would not be surprised if the last portion of the year saw an increase in the number of flaws in security products being revealed publicly, with a corresponding increase in the level of hype. Much of that hype will be justified because – to draw on a popular security analogy – if someone sells you a brick wall, you expect it to be able to withstand a certain level of physical damage; you do not expect to find out that key bricks are actually made of papier-mâché.

Does that make the security company that sold you the software negligent? Well, does it work as advertised? Yes? Then the answer is probably ‘no.’ Remember: security products are not silver bullets. EVERYTHING you use has holes in it and you need to prepare and respond accordingly. You don’t terminate your workforce because people are demonstrably the weakest link when it comes to security; you manage the problem and associated risk. The same should be true for ALL the software you run, regardless of what it does for a living.

I know enough legacy-Mandiant people to know that they go to work every day trying to do the right thing, and this latest development is just another example of how thankless computer security is (regardless of who you work for). As with the philanderer who didn’t use Ashley Madison pointing and laughing at the guy who did, the hypocrisy factor is going to go through the roof. My suggestion: save your self-righteousness and channel that energy into tightening your own work and helping tighten up the work of others. Demonstrate that you’re about security, not being famous.

No Accountability, No Peace (of Mind)?

Thanks to the ever-vigilant Richard Bejtlich for pointing out Jeremiah Grossman’s slides on the idea of INFOSEC security guarantees. Reading them reminded me of a saying, the exact wording of which escapes me now, but it runs something along the lines of ‘some analogies are useful’ and others…not so much.

Jeremiah does a good job explaining how guarantees can be a discriminator and how certain issues surrounding guarantees can be addressed, but there are a few factors that I think make this an untenable prospect:

  • Boots are not Computer Systems. A great American outdoor gear company has no problem issuing a 100% guarantee on their outdoor clothing because they have intimate knowledge and granular control over every aspect of a given garment; you cannot say the same for any sufficiently large or complex piece of software. As the CSO of Oracle recently pointed out, big software companies try to write secure code and they check for and patch vulnerabilities when they find them; but as pretty much the rest of the Internet pointed out in response: that’s not enough. CIO Alice knows her enterprise is running MS Windows, but neither Alice nor anyone who works for her knows the Windows kernel like Bob, the guy breaking into Alice’s company, does.
  • Money Over Everything. You know another reason why the great American outdoor gear company doesn’t mind issuing a 100% guarantee on their products? Margins. One boot out of 10,000 goes bad? “Oh my, how ever will we afford this? Oh, right, those boots cost me $10 to make and $10 to ship and market…and retail for $200 a pair.” I don’t know any developers or security practitioners who are poor, but I also don’t know any whose money is so long they could survive more than one claim against their labors.
  • Compliance. How does victim Big Co. prove they’re compliant with the terms of the guarantee? Yes, we are awash in data these days, but do you have someone on staff who can effortlessly and instantly call that data up? What if your findings are disputed? Yes, if you can conduct an effective forensic investigation you might be able to pinpoint a failure…but who covers the cost of the investigation? What if, in trying to claim that $100,000 guarantee payout you have to spend $500,000 over six months?
  • Fine print. A guarantee isn’t really useful to a customer if it is so heavily lawyered-up that it would be pointless to file a claim. An example Richard points out in his post: if someone manages to overcome a defense via a sufficiently novel approach, the vendor isn’t liable for that because it is not a ‘failure’ on their part. Yet a sufficiently resourceful and motivated attacker isn’t going to break a window or kick in a door – where he knows the alarm system sensors are – he’s going to take a Sawzall to a wall and walk through the studs.

Competent practitioners can and should take pride in and stand by their work, but there are far more factors involved in “securing” a thing than can be identified, calculated, and accounted for such that a guarantee would be both meaningful and valuable to both parties. Let’s be frank: nothing is coded to be secure; it is coded to be functional. Functionality and utility are what people are willing to pay for; security is what they are forced to pay for. Not the same thing.

The (Dis)illusion of Control

Conventional wisdom is telling us that “assumption of breach” is the new normal. Some otherwise well-respected names in computer security would have you believe that the appropriate response to such conditions is to increase the cost to the attackers. If you’re too expensive to breach – so the logic goes – the bad guys will go looking for someone else. Maybe someday, when everyone makes hacking too expensive, it will stop.

Maybe I will play power forward for the Celtics.

There are two major problems with “drive up attacker cost” logic. The first is that you have almost no control over how expensive it is to hack your organization. You have no meaningful, granular control over:

  • The hardware you use
  • The operating system you use
  • The applications you use
  • The protocols used by all of the above
  • …and the communications infrastructure all of the above uses to exchange bytes with customers, vendors, etc., etc., etc.

Any one of the aforementioned items, or more than one of them interacting with each other, is rife with vulnerabilities that will be exploited for fun and profit. For those who are in it for the profit, this is their job. They are good at it to the tune of billions of dollars a year worldwide.

The second problem is that “driving up attacker cost” is a misnomer. What advocates of this particular approach are really saying is: “spend more money” on the same things that failed to keep you secure in the first place.

2012 is not the year corporate (or governmental) enterprises wake up and start to take security seriously. Most corporate victims of cyber crime recently surveyed couldn’t be bothered to do simple things that would have prevented an attack (even more this year than last year). But suddenly they’re going to go from willful ignorance to becoming highly astute with regard to cyber threats now that we’re going to stop pretending there is anyone out there who isn’t or hasn’t been owned? More likely such thinking will have the opposite effect: why fight when I can punt?

Nor are enterprises going to change the way they do business, or otherwise introduce new complexities for the sake of improving security. There is a reason why so many businesses keep feeding and sheltering a cash cow even when it’s becoming increasingly clear that milk production is dropping rapidly: security is an expense that does not directly translate into profitability.

There is only one thing you do control, and that is how quickly and effectively you respond to breaches of security. If you’re going to spend time and money on security, stop spending it on things that don’t work (well) and start focusing on things that could actually make a difference:

  • Improve your awareness of what happens on your hosts: that’s where the bad stuff happens.
  • Improve your ability to capture the minimum-meaningful network traffic: for every additional needle full-packet capture provides, it also supplies a thousand pieces of hay.
  • Reduce your attack surface by exposing as little of yourself to external research as possible: they can’t eat your fruit if you’ve trimmed all the low-hanging branches (a minimal sketch follows this list).
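
On that last point, a tiny, self-contained illustration can go a long way. The sketch below is a hypothetical Python example (the target address and port list are placeholder assumptions, not anything from this post): it checks which of a handful of common TCP ports a host you own actually answers on, so you can compare what you think you expose with what an outsider would find.

```python
#!/usr/bin/env python3
"""Minimal sketch: compare what you think you expose to what actually answers.

The target host and the port list are placeholders chosen for illustration;
point this only at systems you own and are permitted to probe.
"""
import socket

TARGET = "203.0.113.10"  # placeholder address (TEST-NET-3); substitute your own host
COMMON_PORTS = [21, 22, 23, 25, 80, 110, 143, 443, 445, 3389, 8080]
TIMEOUT_SECONDS = 1.0


def port_is_open(host: str, port: int) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(TIMEOUT_SECONDS)
        return sock.connect_ex((host, port)) == 0  # 0 means the connect succeeded


if __name__ == "__main__":
    exposed = [port for port in COMMON_PORTS if port_is_open(TARGET, port)]
    print(f"{TARGET} answers on: {exposed if exposed else 'nothing in the checked list'}")
    # Anything in `exposed` that is not on your approved-services list is
    # low-hanging fruit worth pruning.
```

This is no substitute for a real external look at your footprint, but the gap between the approved-services list and what actually answers is exactly the low-hanging fruit the bullet above is talking about.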

The goal here is not to make it expensive to get hacked, it’s to make it so cheap to respond that you don’t particularly care if you get hacked. That’s basically the position most businesses have today, so why not align your approach to security accordingly?

Fight the Power!

It’s funny how popular you can get these days by screaming “political interference.” Contrast the experiences of two different intelligence officers: one spent two years in a junior position at one agency and was perpetually oppressed by partisans that ran the show; the other (yours truly) spent nearly 20 years at different agencies and grades and has no idea what the political dispositions of either his colleagues or his superiors were. One has a book deal and the other, oddly enough, does not.


David Axe lays the smack down

From West Point to the Pentagon
Leavenworth and back down

Seven years after the launch of Wikipedia – the user-edited online encyclopedia that brought the “open source” concept to the masses – the U.S. Army is still playing catch-up. The Army’s idea of harnessing the ‘net is to launch isolated websites, put generals in charge and lock everything behind passwords, while banning popular open-source civilian websites.

[…]

Galvin advises patience. “Our leaders are getting comfortable working in that [collaborative] environment,” he says. And that means Army wikis aren’t far off. But even if they arrived tomorrow, they’d still be seven years late.

“75 percent is junk . . .”

Clay Osborne, vice president of human resources and diversity at Bausch & Lomb, based in Rochester, N.Y., said the findings matched what his own company has discovered. Programs that work, he said, focus on the business advantages that come with diversity of thought (emphasis mine), and that requires having people with diverse backgrounds.

Perhaps more important, when you consider that the leadership of the intelligence community has placed such a fantastic emphasis on diversity:

. . .training is likely to be effective only in the context of an organization genuinely interested in cultural and structural change.