“Cyber MAD” is a Bad Idea. Really Bad.

I don’t know how many times I have to say this, but nothing screams “legacy future” like trying to shoe-horn Cold War thinking into “cyber.” This latest attempt doesn’t disappoint (or maybe it does, depending on how you look at it) because it completely misses two key points:

  1. Cyberspace is not meat-space;
  2. Digital weapons are nothing like atomic ones.

Yes, like the nuclear arms race, it is in fact more expensive to defend yourself than it is to attack someone. Generally speaking. It’s OK to paint with a broad brush on this point because so many entities online are so woefully inadequate when it comes to defense that we forget there are actually some who are quite hard and expensive to attack. Any serious colored-hat who is being honest will tell you that they deal with more than their fair share of unknowns and ‘unknown unknowns’ when going after any given target.

But unlike malicious actions in cyberspace, there is no parsing nuclear war. You’re nuked, or you’re not. Cyber-espionage, cyber-crime, cyber-attack…all indistinguishable in any technically meaningful way. Each has a different intent, which we are left to speculate about after the fact. In the other scenario, no one is around to speculate about why a battalion of Reds turned their keys and pushed their buttons.

Attacker identity is indeed important whether you’re viewing a potential conflict through nuclear or digital lenses, but you know what excuse doesn’t work in the nuclear scenario? “It wasn’t me.”

Um, IR burn says it was…

There is no such equivalent in cyberspace. You can get close – real close – given sufficient data and time, but there will be no Colin Powell-at-the-UN-moment in response to a cyber threat because “it wasn’t me” is a perfectly acceptable excuse.

But we have data.

You can fabricate data.

You know what you can’t fabricate? Fallout.

All of this, ALL OF THIS, is completely pointless because if some adversary had both the will and the wherewithal to attack and destroy our critical infrastructure and national security/defense capabilities – and just ours – via cyber means…what are we meant to strike back with? How are those who happen to be left unscathed supposed to determine who struck first? I was not a Missileer, but I’m fairly certain you can’t conduct granular digital attribution from the bottom of an ICBM silo.

What is the point of worrying about destruction anyway? Who wants that? The criminals? No, there is too much money to be made keeping systems up and careless people online. The spies? No, there is too much data to harvest and destruction might actually make collection hard. Crazy-bent-on-global-domination types? This is where I invoke the “Movie Plot Threat” clause. If the scenario you need to make your theory work in cyberspace is indistinguishable from a James Bond script, you can’t be taken seriously.

MAD for cyberspace is a bad idea because it’s completely academic and does nothing to advance the cause of safety or security online (the countdown to someone calling me “anti-intellectual” for pointing out this imperial nudity starts in 5, 4, 3….). MAD, cyber deterrence, all this old think is completely useless in any practical sense. You know why MAD and all those related ideas worked in the 60s? Because they dealt with the world and the problem in front of them as they were, not as they wished them to be.

I wholeheartedly agree that we need to do more and do more differently in order to make cyberspace a safer and more secure environment. I don’t know anyone who argues otherwise. I’m even willing to bet there is a period of history that would provide a meaningful analog to the problems we face today, but the Cold War isn’t it.

Malware Analysis: The Danger of Connecting the Dots

The findings of a lot of malware analysis are not in fact “analysis;” they’re a collection of data points linked together by assumptions whose validity and credibility have not been evaluated. This lack of analytic methodology could prove exceedingly problematic for those charged with making decisions about cyber security. If you cannot trust your analysis, how are you supposed to make sound cyber security decisions?

Question: If I give you a malware binary to reverse engineer, what do you see? Think about your answer for a minute and then read on. We’ll revisit this shortly.

It is accepted as conventional wisdom that Stuxnet is related to Duqu, which is in turn related to Flame. All of this malware has been described as “sophisticated” and “advanced,” so much so that it must be the work of a nation-state (such work presumably requiring large amounts of time, lots of skilled people, and code written for purposes beyond simply siphoning off other people’s cash). The claim that the US government is behind Stuxnet has consequently led people to assume that all related code is US sponsored, funded, or otherwise backed.

Except for the claim of authorship, all of the aforementioned data points come from people who reverse engineer malware binaries. These are technically smart people who practice an arcane and difficult art, but what credibility does that give them beyond their domain? In our quest for answers do we give too much weight to the conclusions of those with discrete technical expertise and fail to approach the problem with sufficient depth and objectivity?

Let’s take each of these claims in turn.

Are there similarities if not outright sharing of code in Stuxnet, Duqu and Flame? Yes. Does that mean the same people wrote them all? Do you believe there is a global marketplace where malware is created and sold? Do you believe the people who operate in that marketplace collaborate? Do you believe that the principle of “code reuse” is alive and well? If you answered “yes” to any of these questions then a single source of “advanced” malware cannot be your only valid conclusion.

Is the code in Stuxnet, etc. “sophisticated?” Define “sophisticated” in the context of malware. Forget about malware and try to define “sophisticated” in the context of software, period. Is Excel more sophisticated than Photoshop? When words have no hard and widely-accepted definitions, they can mean whatever you want them to mean, which means they have no meaning at all.

Can only a nation-state produce such code? How many government-funded software projects are you aware of that work as advertised? You can probably count them on one hand and have fingers left over. But now, somehow, when it comes to malware, suddenly we’re to believe that the government has gotten its shit together?

“But Mike, these are, like, weapons. Super secret stuff. The government is really good at that.”

Really? Have you ever heard of the Osprey? Or the F-35? Or the Crusader? Or the JTRS? Or Land Warrior? Groundbreaker? Trailblazer? Virtual Case File?

I’m not trying to trivialize the issues associated with large and complex technology projects; my point is that a government program to build malware would be subject to the same issues and consequently no better – and quite possibly worse – than any non-governmental effort to do the same thing. Cyber crime statistics – inflated though they may be – tell us that governments are not the only entities that can and do fund malware development.

“But Mike, the government contracts out most of its technology work. Why couldn’t they contract out the building of digital weapons?”

They very well could, but then what does that tell us? It tells us that if you want to build the best malware you have to go to the open market (read: people who may not care who they’re working for, as long as their money is good).

As far as the US government “admitting” that they were behind Stuxnet: they did no such thing. A reporter, an author of a book, says that a government official told him that the US was behind Stuxnet. Neither the President of the United States, nor the Secretary of Defense, nor the Directors of the CIA or NSA got up in front of a camera and said, “That’s us!” which is what an admission would be. Let me reiterate: a guy who has a political agenda told a guy who wants to sell books that the US was behind Stuxnet.

It’s easy to believe the US is behind Stuxnet, as much as it is to believe Israel is behind it. You know who else doesn’t want countries that don’t have nuclear weapons to get them? Almost every country in the world, including those that currently have nuclear weapons. You know who else might not want Iran – a majority Shia country – to have an atomic bomb? Roughly 30 Sunni countries for starters, most of which could afford to go onto the previously mentioned open market and pay for malware development. What? You hadn’t thought about the non-proliferation treaty or that Sunni-Shia thing? Yeah, neither has anyone working for Kaspersky, Symantec, F-Secure, etc., etc.

Back to the question I asked earlier: What do you see when you reverse engineer a binary?

Answer: Exactly what the author wants you to see.

  • I want you to see words in a language that would throw suspicion on someone else.
  • I want you to see that my code was compiled in a particular foreign language (even though I only read and/or write in a totally different language).
  • I want you to see certain comments or coding styles that are the same or similar to someone else’s (because I reuse other people’s code).
  • I want you to see data about compilation date/time, PDB file path, etc., which could lead you to draw erroneous conclusions that have no bearing on malware behavior or capability.
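To make that last point concrete, here is a toy sketch of how cheap those artifacts are to fake. It fabricates a minimal, synthetic PE-style header in memory (not a real binary, and not a real parser) and then overwrites the COFF TimeDateStamp, the four-byte field that reversing tools report as the “compiled on” date. The offsets follow the published PE/COFF layout; everything else is invented for demonstration.

```python
import struct
import time

# Fabricate just enough of a PE-style header to show where the
# "compile date" lives. Real binaries have far more structure.
PE_OFFSET = 0x80
header = bytearray(PE_OFFSET + 24)
header[0:2] = b"MZ"                                # DOS magic
struct.pack_into("<I", header, 0x3C, PE_OFFSET)    # e_lfanew -> PE header
header[PE_OFFSET:PE_OFFSET + 4] = b"PE\x00\x00"    # PE signature
struct.pack_into("<I", header, PE_OFFSET + 8,      # COFF TimeDateStamp:
                 int(time.time()))                 # the "real" build time

def read_timestamp(buf: bytes) -> int:
    # The field an analyst would report as the compile date.
    pe = struct.unpack_from("<I", buf, 0x3C)[0]
    return struct.unpack_from("<I", buf, pe + 8)[0]

def forge_timestamp(buf: bytearray, when: int) -> None:
    # Four bytes, with no signature or checksum to stop you.
    pe = struct.unpack_from("<I", buf, 0x3C)[0]
    struct.pack_into("<I", buf, pe + 8, when)

# Backdate the "compile time" to an arbitrary epoch value.
forge_timestamp(header, 712224000)
```

The same goes for embedded strings, PDB paths, and language resources: they are bytes under the author’s control, not forensic evidence.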

Contrary to post-9/11-conventional wisdom, good analysis is not dot-connecting. That’s part of the process, but it’s not the whole or only process. Good analysis has methodology behind it, as well as a fair dose of experience or exposure to other disciplines that comes into play. Most of all, whenever possible, there are multiple, verifiable, meaningful data points to help back up your assertions. Let me give you an example.

I used to work with a guy we’ll call “Luke.” Luke was a firm believer in the value of a given type of data. He thought it was infallible. So strong were Luke’s convictions about the findings he produced using only this particular type of data that he would draw conclusions about the world that flew in the face of what the rest of us like to call “reality.” If Luke’s assertions were true, WW III would have been triggered, but as many, many other sources of data were able to point out, Luke was wrong.

There was a reason why Luke was the oldest junior analyst in the whole department.

Luke, like a lot of people, fell victim to a number of problems, fallacies and mental traps that crop up when you attempt to draw conclusions from data. This is not an exhaustive list, but it illustrates what I mean.

Focus Isn’t All That. There is a misconception that narrow and intense focus leads to better conclusions. The opposite tends to be true: the more you focus on a specific problem, the less likely you are to think clearly and objectively. Because you just “know” certain things are true, you feel comfortable taking shortcuts to reach your conclusion, which in turn simply drives you further away from the truth.

I’ve Seen This Before. We give too much credence to patterns. When you see the same or very similar events taking place, or the same tactics used, your natural reaction is to assume that what is happening now is what happened in the past. You discount other options because it’s “history repeating itself.”

The Shoehorn Effect. We don’t like questions that don’t have answers. Everything has to have an explanation, regardless of whether or not the explanation is actually true. When you cannot come up with an explanation that makes sense to you, you will fit the answer to match the question.

Predisposition. We allow our biases to drive us to seek out data that supports our conclusions and discount data that refutes it.

Emotion. You cannot discount the emotional element involved in drawing conclusions, especially if your reputation is riding on the result. Emotions about a given decision can run so high that they overcome your ability to think clearly. Rationalism goes out the window when your gut (or your greed) over-rides your brain.

How can we overcome the aforementioned flaws? There are a range of methodologies analysts use to improve objectivity and criticality. These are by no means exhaustive, but they give you an idea of the kind of effort that goes into serious analytic efforts.

Weighted Ranking. It may not seem obvious to you, but when presented with two or more choices, you choose X over Y based on the merits of X, Y (and/or Z). Ranking is instinctual and therefore often unconscious. The problem with most informal efforts at ranking is that it’s one-dimensional.

“Why do you like the TV show Homicide and not Dragnet?”

“Well, I like cop shows but I don’t like black-and-white shows.”

“OK, you realize those are two different things you’re comparing?”

A proper ranking means you’re comparing one thing against another using the same criteria. Using our example you could compare TV shows based on genre, sub-genre, country of origin, actors, etc., rank them according to preference in each category, and then tally the results. Do this with TV shows – or any problem – and you’ll see that your initial, instinctive results will be quite different than those of your weighted rankings.
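The process above can be sketched in a few lines. The shows, criteria, weights, and scores below are all invented for illustration:

```python
# Compare every item against the same weighted criteria, then tally.
criteria_weights = {"genre": 3, "writing": 2, "era": 1}

# Preference scores per show, per criterion (1 = dislike, 5 = love).
scores = {
    "Homicide": {"genre": 5, "writing": 5, "era": 4},
    "Dragnet":  {"genre": 5, "writing": 3, "era": 1},
}

def weighted_total(show: str) -> int:
    # Multiply each score by how much that criterion matters, then sum.
    return sum(criteria_weights[c] * s for c, s in scores[show].items())

# Highest weighted total wins: Homicide scores 29 to Dragnet's 22 here,
# even though a one-dimensional "I like cop shows" vote ties them.
ranking = sorted(scores, key=weighted_total, reverse=True)
```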

Hypothesis Testing. You assert the truth of your hypothesis through supporting evidence, but you are always working with incomplete or questionable data, so you can never prove a hypothesis true; we accept it to be true until evidence surfaces that suggests it is false (see the bias note above). Information becomes evidence when it is linked to a hypothesis, and evidence is valid once we’ve subjected it to questioning: where did the information come from? How plausible is it? How reliable is it?

Devil’s Advocacy. Taking a contrary or opposing position from what is the accepted answer helps overcome biases and one-dimensional thinking. Devil’s advocacy seeks out new evidence to refute “what everybody knows,” including evidence that was disregarded by those who take the prevailing point of view.
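Hypothesis testing and devil’s advocacy are often combined in an analysis-of-competing-hypotheses matrix: instead of tallying support for a favorite theory, you count the evidence each hypothesis cannot explain and keep the one with the fewest contradictions. A toy sketch, with the hypotheses and consistency judgments invented purely for illustration:

```python
INCONSISTENT = -1  # evidence the hypothesis cannot explain
CONSISTENT = 0     # evidence the hypothesis can live with

# Rows are hypotheses, columns are pieces of evidence (all made up).
matrix = {
    "single nation-state author": {
        "code shared across malware families": CONSISTENT,
        "artifacts pointing at two different native languages": INCONSISTENT,
    },
    "open-market contractors": {
        "code shared across malware families": CONSISTENT,
        "artifacts pointing at two different native languages": CONSISTENT,
    },
}

def inconsistencies(hypothesis: str) -> int:
    # Score by disconfirmation, not by piling up "supporting" dots.
    return sum(1 for v in matrix[hypothesis].values() if v == INCONSISTENT)

# The surviving hypothesis is the one with the fewest contradictions.
survivor = min(matrix, key=inconsistencies)
```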

This leads me to another point I alluded to earlier and that isn’t addressed in media coverage of malware analysis: what qualifications does your average reverse engineer have when it comes to drawing conclusions about geo-political-security issues? You don’t call a plumber to fix your fuse box. You don’t ask a diplomat about the latest developments in no-till farming. Why in the world would you take at face value what a reverse engineer says about anything except very specific, technical findings? I’m not saying people are not entitled to their opinions, but credibility counts if those opinions are going to have value.

So where are we?

  • There are no set or even widely accepted definitions related to malware (e.g. what is “sophisticated” or “advanced”).
  • There is no widely understood or accepted baseline of what sort of technical, intellectual or actual capital is required to build malware.
  • Data you get out of code, through reverse engineering or from source, is not guaranteed to be accurate when it comes to issues of authorship or origin.
  • Malware analysts do not apply any analytic methodology in an attempt to confirm or refute their single-source findings.
  • Efforts to link data found in code to larger issues of geo-political importance are at best superficial.

Why is all of this important? Computer security issues are becoming an increasingly important factor in our lives. Not that everyone appreciates it, but look at where we have been and where we are headed. Just under 20 years ago few people in the US, much less the world, were online; now more people in the world get online via their phones than on a traditional computer. Cars use computers to drive themselves, and biological implants are controlled via Bluetooth. Neither of these developments has meaningful security features built in, but no one would ever be interested in hacking insulin pumps or pacemakers, right?

Taking computer security threats seriously starts by putting serious thought and effort behind our research and conclusions. The government does not provide information like this to the public, so we rely on vendors and security companies (whose primary interest is profit) to do it for us. When that “analysis,” which is far from rigorous, is delivered to decision-makers who are used to dealing with conclusions developed through a much more robust methodology, their decisions can have far-reaching negative consequences.

Sometimes a quick-and-dirty analysis is right, and as long as you’re OK with the fact that that is all most malware analysis is, fine. But if you’re planning on making serious decisions about the threat you face from cyberspace, you should take the time and effort to ensure that your analysis looks beyond what IDA shows and considers more diverse and far-reaching factors.

You Were Promised Neither Security Nor Privacy

If you remember hearing the song Istanbul (Not Constantinople) on the radio the first time around, then you remember all the predictions about what life in the 21st century was supposed to be like. Of particular note was the prediction that we would use flying cars and jet packs to get around, among other awesome technological advances.

Recently someone made the comment online (for the life of me I can’t find it now) that goes something like this: If you are the children of the people who were promised jet packs you should not be disappointed because you were not promised these things, you were promised life as depicted in Snow Crash or True Names.

Generation X for the win!

The amateur interpretation of leaked NSA documents has sparked this debate about how governments – the U.S. in particular – are undermining if not destroying the security and privacy of the ‘Net. We need no less than a “Magna Carta” to protect us, which would be a great idea if we were actually being oppressed to such a degree that our liberties were being infringed upon by a despot and his arbitrary whims. For those not keeping track: the internet is not a person, nor is it run by DIRNSA.

I don’t claim to have been there at the beginning, but in the early-to-mid 90s my first exposure to the internet was…stereotypical (I am no candidate for sainthood). I knew what it took to protect global computer networks because that was my day job for the government; accessing the ‘Net (or BBSes) at home was basically the wild west. There was no Sheriff or fire department in case things got dangerous or you got robbed. Everyone knew this, no one was complaining, and no one expected anything more.

What would become the commercial internet went from warez and naughty ASCII images to house hunting, banking, news, and keeping up with your family and friends. Now it made sense to have some kind of security mechanisms in place because, just like in meat-space, there are some things you want people to know and other things you do not. But the police didn’t do that for you, you entrusted that to the people who were offering up the service in cyberspace, again, just like you do in the real world.

But did those companies really have an incentive to secure your information or maintain your privacy? Not in any meaningful way. For one, security is expensive and customers pay for functionality, not security. It actually makes more business sense to do the minimum necessary for security, because on the off chance that there is a breach, you can make up any losses on the backs of your customers (discreetly, of course).

Secondly, your data couldn’t be too secure because there was value in knowing who you are, what you liked, what you did, and who you talked to. The money you paid for your software license was just one revenue stream; a company could make even more money using and/or selling your information and online habits. Such practices manifest themselves in things like spam email and targeted ads on web sites; the people who were promised jet packs know it by another name: junk mail.

Let’s be clear: the only people who have really cared about network security are the military; everyone else is in this to make a buck (flowery, feel-good, kumbaya language notwithstanding). Commercial concerns operating online care about your privacy until it impacts their money.

Is weakening the security of a privately owned software product a crime? No. It makes crypto nerds really, really angry, but it’s not illegal. Imitating a popular social networking site to gain access to systems owned by terrorists is what an intelligence agency operating online should do (they don’t actually take over THE Facebook site, for everyone with a reading comprehension problem). Co-opting botnets? We ought to be applauding a move like that, not lambasting it.

There is something to the idea that introducing weaknesses into programs and algorithms puts more people than just terrorists and criminals at risk, but in order for that to be a realistic concern you would have to have some kind of evidence that the security mechanisms available in products today are an adequate defense against malicious attack, and they’re not. What passes for “security” in most code is laughable. Have none of the people raising this concern heard of Pwn2Own? Or that there is a global market for 0-days, and the US government is only one of many, many customers?

People who are lamenting the actions of intelligence agencies talk like the internet is this free natural resource that belongs to all and come hold my hand and sing the Coca Cola song… I’m sure the Verizons of the world would be surprised to hear that. Free WiFi at the coffee shop? It’s only free to you because the store is paying for it (or not, because you didn’t notice the $.05 across the board price increase on coffee and muffins when the router was installed).

Talking about the ‘Net as a human right doesn’t make it so. Just like claiming to be a whistle blower doesn’t make you one, or claiming something is unconstitutional when the nine people specifically put in place to determine such things haven’t ruled on the issue. You can still live your life without using TCP/IP or HTTP, you just don’t want to.

Ascribing nefarious intent to government action – in particular the NSA as depicted in Enemy of the State – displays a level of ignorance about how government – in particular intelligence agencies – actually works. The public health analogy is useful in some regards, but it breaks down when you start talking about how government actions online are akin to putting civilians at risk in the real world. Our government’s number one responsibility is keeping you safe; that it has the capability to inflict harm on massive numbers of people does not mean it will use it, and it most certainly does not mean it’ll use it on YOU. To think otherwise is simply movie-plot-thinking (he said, with a hint of irony).

Between Preppers and FEMA Trailers

Today, for want of a budget, the Federal government is shutting down. If the nation suffered a massive cyber attack today, what would happen? If you think the government is going to defend you against a cyber attack or help you in the aftermath of a digital catastrophe – budget or no budget – think again. The government cannot save you, and you can no more count on timely assistance in the online world than you can in the physical one in the aftermath of a disaster. Help might come eventually, but your ability to fight off hostiles or weather a digital storm depends largely on what you can do for yourself.

The vast majority of the time, natural or man-made disasters are things that happen to someone else. People who live in disaster- or storm-prone areas know that at any given moment they may have to make do with what they have on hand; consequently they prepare to deal with the worst-case scenario for a reasonable amount of time. The reason you don’t see people in the mountain-west or north-east in FEMA trailers after massive snow or ice storms is a culture of resilience and self-reliance.

How does this translate into the digital world? Don’t efforts like the Comprehensive National Cybersecurity Initiative, and all the attention foreign state-sponsored industrial espionage has gotten recently, belie the idea that the government isn’t ready, willing and able to take action in the face of a digital crisis?

Federal agencies are no better at protecting themselves from digital attack than anyone else. The same tricks that lead to a breach at a bank work against a government employee. Despite spending tens of billions of tax dollars on cyber security we continue to hear about how successful attackers are and that attacks are growing and threatening our economy and way of life. The increasing amount of connectivity in industrial control systems puts us at even greater risk of a disaster because very few people know how to secure a power plant or oil refinery.

It’s not that the government does not want to make the Internet a safer and more secure place; it is simply ill-equipped to do so. Industrial-age practices, bureaucracy, a sloth-like pace, its love affair with lobbyists, and its inability to retain senior leaders with security chops mean “cyber” will always be the most talked-about also-ran issue in government. You know what issue has shut down the federal government this week? It isn’t “cyber.”

Protect you against threats? What leverage do we really have against a country like China? Cold War approaches won’t work. For one, you’re probably reading this on something made in China; your dad never owned a Soviet-made anything. We cannot implement “digital arms control” or a deterrence regime because there is no meaningful analog between nuclear weapons and digital ones. Trying to retrofit new problems into old constructs is how Cold Warriors maintain relevance; it’s just not terribly useful in the real world.

So what are we to do? Historically speaking, when the law could not keep up with human expansion into unknown territory, people were expected to defend themselves and uphold the rudiments of good social behavior. If someone threatened you on your remote homestead, you needed to be prepared to defend yourself until the Marshal arrived. This is not a call to vigilantism, nor that you should become some kind of iPrepper, but a reflection of the fact that the person most responsible for your safety and security online is you. As my former colleague Marc Sachs recently put it:

“If you’re worried about it, do something about it. Take security on yourselves, and don’t trust anybody else to do it.”

What do you or your business need to survive in the short- and long-term if you’re hacked? Invest time and money accordingly. If computer security is terra incognita then hire a guide to get you to where you want to go and teach you what you need to know to survive once you’re there. Unless you want to suffer through the digital equivalent of life in a FEMA trailer, you need to take some responsibility to improve your resilience and ensure your viability.

Stop Pretending You Care (about the NSA)

You’ve read the stories, heard the interviews, and downloaded the docs and you’re shocked, SHOCKED to find that one of the world’s most powerful intelligence agencies has migrated from collecting digital tons of data from radio waves and telephone cables to the Internet. You’re OUTRAGED at the supposed violation of your privacy by these un-elected bureaucrats who get their jollies listening to your sweet nothings.

Except you’re not.

Not really.

Are you really concerned about your privacy? Let’s find out:

  1. Do you only ever pay for things with cash (and you don’t have a credit or debit card)?
  2. Do you have no fixed address?
  3. Do you get around town or strange places with a map and compass?
  4. Do you only make phone calls using burner phones (trashed after one use) or public phones (never the same one twice)?
  5. Do you always go outside wearing a hoodie (up) and either Groucho Marx glasses or a Guy Fawkes mask?
  6. Do you wrap all online communications in encryption, pass them through TOR, use an alias, and only type (with latex gloves on) on strangers’ computers when they leave the coffee-shop table to use the bathroom?
  7. Do you have any kind of social media presence?
  8. Are you reading this over the shoulder of someone else?

The answer key, if you’re serious about not having “big brother” of any sort up in your biznaz is: Y, Y, Y, Y, Y, Y, N, Y. Obviously not a comprehensive list of things you should do to stay off anyone’s radar, but anything less and all your efforts are for naught.

People complain about their movements being tracked and their behaviors being examined, but then they post selfies to 1,000 “friends” and “check in” at bars and activate all sorts of GPS-enabled features while they shop using their store club card so they can save $.25 on albacore tuna. The NSA doesn’t care about your daily routine: the grocery store, the electronics store, and the companies that make consumer products all care very, very much. Remember this story? Of course you don’t, because that’s just marketing; the NSA is the one “spying” on you.

Did you sign up for the “do not call” list? Did you breathe a sigh of relief and, as a reward to yourself, order a pizza? Guess what? You just put yourself back on data brokers’ and marketing companies’ “please call me” lists. What? You didn’t read the fine print of the law (or the fine print on any of the EULAs of the services or software you use)? You thought you had an expectation of privacy?! Doom on you.

Let’s be honest about what the vast majority of people mean when they say they care about their privacy:

I don’t want people looking at me while I’m in the process of carrying out a bodily function, carnal antics, or enjoying a guilty pleasure.

Back in the day, privacy was easy: you shut the door and drew the blinds.

But today, even though you might shut the door, your phone can transmit sounds, the camera in your laptop can transmit pictures, and your set-top-box is telling someone what you’re watching (and, depending on what the content is, inferring what you’re doing while you watch). You think you’re being careful, if not downright discreet, but you’re not. Even trained professionals screw up, and it only takes one mistake for everything you thought you kept under wraps to blow up.

If you really want privacy in the world we live in today you need to accept a great deal of inconvenience. If you’re not down with that, or simply can’t do it for whatever reason, then you need to accept that almost nothing in your life is a secret unless it’s done alone in your basement, with the lights off and all your electronics locked in a Faraday cage upstairs.

Don’t trust the googles or any US-based ISP for your email and data anymore? Planning to relocate your digital life overseas? Hey, you know where the NSA doesn’t need a warrant to do its business and they can assume you’re not a citizen? Overseas.

People are now talking about “re-engineering the Internet” to make it NSA-proof…sure, good luck getting everyone who would need to chop on that to give you a thumbs up. Oh, also, everyone who makes stuff that connects to the Internet. Oh, also, everyone who uses the Internet who now has to buy new stuff because their old stuff won’t work with the New Improved Internet(tm). Employ encryption and air-gap multiple systems? Great advice for hard-core nerds and the paranoid, but not so much for 99.99999% of the rest of the users of the ‘Net.

/* Note to crypto-nerds: We get it; you’re good at math. But if you really cared about security you’d make en/de-cryption as push-button simple to install and use as anything in an App store, otherwise you’re just ensuring the average person runs around online naked. */

Now, what you SHOULD be doing instead of railing against over-reaches (real or imagined…because the total number of commentators on the “NSA scandal” who actually know what they’re talking about can be counted on one hand with digits left over) is what every citizen has a right to do, but rarely does: vote.

The greatest power in this country is not financial, it’s political. Intelligence reforms only came about in the 70s because the sunshine reflecting off of abuses and overreaches could not be ignored by those charged with overseeing intelligence activities. So if you assume the worst of what has been reported about the NSA in the press (again, no one leaking this material, and almost no one reporting or commenting on it, actually did SIGINT for a living…credibility is important here) then why have you not called your Congressman or Senator? If you’re from CA, WV, OR, MD, CO, VA, NM, ME, GA, NC, ID, IN, FL, MI, TX, NY, NJ, MN, NV, KS, IL, RI, AZ, CT, AL or OK you’ve got a direct line to those who are supposed to ride herd on the abusers.

Planning on voting next year? Planning on voting for an incumbent? Then you’re not really doing the minimum you can to bring about change. No one cares about your sign-waving or online protest. Remember those Occupy people? Remember all the reforms to the financial system they brought about?

Yeah….

No one will listen to you? Do what Google, Facebook, AT&T, Verizon and everyone else you’re angry at does: form a lobby, raise money, and buttonhole those who can actually make something happen. You need to play the game to win.

I’m not defending bad behavior. I used to live and breathe Ft. Meade, but I’ve come dangerously close to being “lost” thanks to the ham-handedness of how they’ve handled things. But let’s not pretend that we – all of us – are lifting a finger to do anything meaningful about it. You’re walking around your house naked with the drapes open and are surprised when people gather on the sidewalk – including the police who show up to see why a crowd is forming – to take in the view. Yes, that’s how you roll in your castle, but don’t pretend you care about keeping it personal.

Explaining Computer Security Through the Lens of Boston

Events surrounding the attack at the Boston Marathon, and the subsequent manhunt, are on-going as this is being drafted. Details may change, but the conclusions should not.

This is by no means an effort to equate terrorism and its horrible aftermath to an intrusion or data breach (which is trivial by comparison), merely an attempt to use current events in the physical world – which people tend to understand more readily – to help make sense of computer security – a complicated and multi-faceted problem few understand well.

  1. You are vulnerable to attack at any time. From an attacker’s perspective the Boston Marathon is a great opportunity (lots of people close together), but a rare one (only happens once a year). Your business on-line however, is an opportunity that presents itself 24/7. You can no more protect your enterprise against attack than the marathon could have been run inside of a giant blast-proof Habitrail. Anyone who tells you different is asking you to buy the digital equivalent of a Habitrail.
  2. It doesn’t take much to cause damage. In cyberspace everyone is atwitter about “advanced” threats, but most of the techniques that cause problems online are not advanced. Why would you expose your best weapons when simple ones will do? In the physical world there is the complicating factor of getting engineered weapons into places that are not war zones, but like the improvised explosives used in Boston, digital weapons are easy to obtain or, if you’re clever enough, build yourself.
  3. Don’t hold out hope for closure. Unless what happens to you online is worthy of a multi-jurisdictional – even international – law enforcement effort, forget about trying to find someone to pay for what happened to you. If they’re careful, the people who attack you will never be caught. Crimes in the real world have evidence that can be analyzed; digital attacks might leave evidence behind, but you can’t always count on that. As I put fingers to keyboard one suspect behind the Boston bombing is dead and the other the subject of a massive manhunt, but that wouldn’t have happened if the suspects had not made some kind of mistake(s). Robbing 7-11s, shooting cops and throwing explosives from a moving vehicle are not the marks of professionals. Who gets convicted of computer crimes? The greedy and the careless.

The response to the bombings in Boston reflects an exposure – directly or indirectly – to 10+ years of war. If this had happened in 2001 there probably would have been more fatalities. That’s a lesson system owners (who are perpetually under digital fire) should take to heart: pay attention to what works – rapid response mechanisms, democratizing capabilities, resilience – and invest your precious security dollars accordingly.

How Many Holes in a Guhor Stick?

I’ve never used Palantir. I’ve never used DCGS-A. When I started as an Analyst you (no-shit) used pencil and paper (and a thing called a guhor stick…but that’s a lewd joke for another day). The kerfuffle over Palantir vs. DCGS-A reminds me of the days when computers started making in-roads in analysis shops, and I hope everyone involved can remember some of those lessons learned.

Now my working world in those early days wasn’t entirely computer-free, but back then computers were where you stored data and recorded activity and typed up reports, you didn’t “link” things together and you certainly didn’t draw, graph or do anything anyone coming up in the business today would recognize as computer-oriented.

If there was a quantum leap in the utility computers gave to analysis it was this application called Analyst Notebook. Analyst Notebook would take in the data you had already entered into some other system (assuming you could get it out of said system), and kick out diagrams and pictures that let you make quick sense of who was talking to whom, what happened when, and identify connections or anomalies you may have missed staring into a green screen at row after row, column after column of letters and numbers.
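For anyone who never stared at that green screen, here is a minimal stdlib sketch of the leap Analyst Notebook represented: flat records in, connections out. The call records and names below are invented for illustration, and this is plain Python, not anything any of these products actually run.

```python
# Turn flat "who called whom" records into a graph, then surface the
# indirect connections an analyst might miss scanning rows and columns.
from collections import defaultdict, deque

# Invented call records: (caller, callee) pairs, as they might sit in a database.
records = [("alpha", "bravo"), ("bravo", "charlie"),
           ("charlie", "delta"), ("alpha", "echo")]

# Build an undirected adjacency list: contact goes both ways.
graph = defaultdict(set)
for a, b in records:
    graph[a].add(b)
    graph[b].add(a)


def path(start, goal):
    """BFS: the shortest chain of contacts linking two entities, or None."""
    queue, seen = deque([[start]]), {start}
    while queue:
        chain = queue.popleft()
        if chain[-1] == goal:
            return chain
        for nxt in graph[chain[-1]] - seen:
            seen.add(nxt)
            queue.append(chain + [nxt])
    return None


print(path("alpha", "delta"))  # the chain of contacts linking the two
```

Twenty lines of code is obviously not a product, but it is the core trick: once the data is a graph instead of a table, “who connects these two people?” becomes a query instead of an afternoon.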

That’s the key here: Analyst Notebook, Palantir, etc. are analysts’ tools, they are not analysis tools. Is that a distinction without a difference? I’m not aware of any software application that will think on your behalf. I’m not aware of anyone in the military or IC who would trust answers produced entirely by an algorithm and without human interpretation or enhancement. If you could computerize analysis you wouldn’t have a headcount problem in the IC. Analyst Notebook, Palantir, DCGS-A . . . they’re all tools, and if you’ve been working with hand tools all your life and suddenly someone hands you a Skil saw, of course you’re going to think the Skil saw was sent from heaven.

Now, is the government notorious for producing bloated, expensive, minimally functional software that everyone hates to use (when it works at all)? We don’t have time to go into all the examples, but the answer is ‘yes.’ If I offer you tool A OR tool B when you’ve been using tool C, which are you going to choose? Does that make your other choice crap? Of course not.

It sounds to me like if there is an 800 lb gorilla in the room it’s usability, and if there is one thing that commercial apps excel at it’s the user experience. Think about the Google interface, and then think about a data retrieval system fielded in the 70s, and you tell me what your average analyst would rather use…

If the ultimate requirement is capability, then the answer is simple: hold a shoot-out and may the best app win. Pretty-but-sub-capable isn’t going to cut it; functional-but-frustrating isn’t either. If DCGS-A is all that, they should be big enough to learn from what Palantir does well; If Palantir is really about saving lives and national defense, they ought to be big enough to implement what GIs need most. Competition raises everyone’s game, but this isn’t about .com vs .gov, it’s about lives.

The (Dis)illusion of Control

Conventional wisdom is telling us that “assumption of breach” is the new normal. Some otherwise well-respected names in computer security would have you believe that the appropriate response to such conditions is to increase the cost to the attackers. If you’re too expensive to breach – so the logic goes – the bad guys will go looking for someone else. Maybe someday, when everyone makes hacking too expensive, it will stop.

Maybe I will play power forward for the Celtics.

There are two major problems with “drive up attacker cost” logic. The first is that you have almost no control over how expensive it is to hack your organization. You have no meaningful, granular control over:

  • The hardware you use
  • The operating system you use
  • The applications you use
  • The protocols used by all of the above
  • …and the communications infrastructure all of the above uses to exchange bytes with customers, vendors, etc., etc., etc.

Any one of the aforementioned items, or more than one of them interacting with each other, is rife with vulnerabilities that will be exploited for fun and profit. For those who are in it for the profit, this is their job. They are good at it to the tune of billions of dollars a year worldwide.

The second problem is that “driving up attacker cost” is a misnomer. What advocates of this particular approach are really saying is: “spend more money” on the same things that failed to keep you secure in the first place.

2012 is not the year corporate (or governmental) enterprises wake up and start to take security seriously. Most corporate victims of cyber crime recently surveyed couldn’t be bothered to do simple things that would have prevented an attack (even more this year than last year). Are they suddenly going to go from willful ignorance to being highly astute about cyber threats now that we’ve stopped pretending there is anyone out there who isn’t, or hasn’t been, owned? More likely such thinking will have the opposite effect: why fight when I can punt?

Neither are enterprises going to change the way they do business, or otherwise introduce new complexities for the sake of improving security. There is a reason why so many businesses keep feeding and sheltering a cash cow, even when it’s becoming increasingly clear that milk production is dropping rapidly: security is an expense that does not directly translate into profitability.

There is only one thing you do control, and that is how quickly and effectively you respond to breaches of security. If you’re going to spend time and money on security, stop spending it on things that don’t work (well) and start focusing on things that could actually make a difference:

  • Improve your awareness of what happens on your hosts: that’s where the bad stuff happens.
  • Improve your ability to capture the minimum-meaningful network traffic: for every additional needle full-packet capture provides, it also supplies a thousand pieces of hay.
  • Reduce your attack surface by exposing as little of yourself to external research as possible: they can’t eat your fruit if you’ve trimmed all the low-hanging branches.
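As a deliberately minimal illustration of the first bullet, host awareness can start as small as a file-integrity baseline: hash what’s on disk today, compare tomorrow, and alert on drift. The function names and output format here are invented for illustration, not any particular product’s.

```python
# Minimal file-integrity baseline: the cheapest possible form of
# "know what happens on your hosts."
import hashlib
import os


def snapshot(root: str) -> dict:
    """Map every file under root to its SHA-256 digest."""
    state = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                state[path] = hashlib.sha256(f.read()).hexdigest()
    return state


def drift(baseline: dict, current: dict) -> dict:
    """Files added, removed, or changed since the baseline was taken."""
    return {
        "added":   sorted(current.keys() - baseline.keys()),
        "removed": sorted(baseline.keys() - current.keys()),
        "changed": sorted(p for p in baseline.keys() & current.keys()
                          if baseline[p] != current[p]),
    }
```

Run `snapshot()` once, store the result, run it again on a schedule, and `drift()` tells you what moved. It won’t catch everything – nothing does – but it’s the kind of cheap, response-oriented visibility the list above is arguing for.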

The goal here is not to make it expensive to get hacked, it’s to make it so cheap to respond that you don’t particularly care if you get hacked. That’s basically the position most businesses have today, so why not align your approach to security accordingly?

Business Does Not Care About Your Chinese Cyber Problem

If you have spent more than ten minutes tracking cyber security issues in this country you know that if there is a Snidely Whiplash in this business it’s the Chinese. If it’s not the government it’s “patriotic hackers,” or some variation on those themes. The argument over “APT” rages on (is it a ‘who?’ Is it a ‘what?’) and while not clearly labeled “Chinese” we now have “adversaries” to worry about.

Setting aside issues related to the veracity of such claims, let me just state unequivocally: No one cares.

If you are a regular reader you know me and my background (if you don’t, here is a snapshot), so you know that I know the scope and scale of the problem and that I’m not talking about this issue in a state-on-state context. My problem is that too many people are trying to extend that context into areas for which it is ill-suited. In doing so they are not actually improving security. They may in fact be perpetuating the problem.

Rarely do you talk to someone at the C-level – someone who has profits and Wall Street and the Board on his mind – who gives a shit about who his adversary is or what their motivations are. The occasional former military officer-turned-executive will have a flash of patriotic fervor, but then the General Counsel steps up and the flag gets furled. In the end the course of action they all approve is designed to make the pain go away: get the evil out of the network, get the hosts back online, and get everyone back to work. I haven’t talked to every executive about this issue, so your mileage may vary, but one only need read up on the hack-and-decline of Nortel to understand what the most common reaction to “someone is intentionally focused on stealing our ideas” is in the C-suites of American corporations.

This is not a new problem. Ironically, you have probably never heard of d’Entrecolles. American industrial might wasn’t a home-grown effort: we did the same thing to our cousins across the pond. Nortel is only a recent example of a worst-case industrial espionage scenario playing out. Ever heard of Ellery Systems? Of course you haven’t.

IP theft is not a trivial issue, but any number of things can happen to a given piece of IP once it is stolen. The new owners may not be able to make full or even nominally effective use of the information; the purpose or product they apply the IP to has little or nothing to do with what the IP’s creators are using it for; the market the new owner is targeting isn’t open to or pursued by the US; or in the normal course of events, what made the IP valuable at the point of compromise might change making it useless or undesirable by the time its new owners bring it to market.

Companies that suffer the fate of Ellery and Nortel are notable because they are rare. Despite the fact that billions in IP is being siphoned off through the ‘Net, there is not a corresponding number of bankruptcies. That’s not a defense; merely a fat, juicy data point supporting the argument that if the fate of the company is not in imminent danger, no one is going to care that maybe, some day, when certain conditions are met, last week’s intrusion was the first domino to fall.

If you are honestly interested in abating the flow of IP out of this country, your most effective course of action should be to argue in a context that business will not only understand but be willing to execute. Arguing Us vs. Them to people who are not in the actual warfighting business is a losing proposition. The days of industry re-orienting and throwing their weight behind a “war” effort are gone (unless you are selling to PMCs). “More security” generally comes at the expense of productivity, and that is a non-starter. Security done in a fashion that adds value – or at the very least does not seriously impede the ability to make money – has the potential to be a winner.

I say ‘has the potential’ because to be honest you can’t count on business decision-makers caring about security no matter how compelling your argument. Top marks if you remember the security company @Stake. Bonus points if you remember that they used to put out a magazine called Secure Business Quarterly that tried to argue the whole security-enabling-business thing. Did you notice I said “remember” and “used to?”

We have to resign ourselves to the very real possibility that there will never be an event so massive, so revealing, that security will be a peer to other factors in a business decision. While that’s great for job security, it also says a lot about what society values in the information age.

We Are Our Own Worst Enemy

My latest op-ed in SC Magazine:

It is tough being in cybersecurity. Defense is a cost center, and it’s hard to find meaningful metrics to demonstrate success. Interest in security is also cyclical: Major breaches stir action, but as time passes, interest and resources wane, though the threat is still there. Yet the biggest problem with cybersecurity is ourselves. Before we can succeed, all of us must agree to change.

Read the whole thing.