R.E.S.P.E.C.T.

They’re not cybersecurity experts, but they did stay at a Holiday Inn Express last night.

Because we have no common body of knowledge from which to learn from prior art, you can predict, like the seasons, when another cohort of professionals from other disciplines will attempt to tell us what is good for us, despite not understanding the fundamentals of information technology or how the Internet works. Whether the issue is deterrence, arms control, or any other pressing global security problem of the past 100 years, they are free with their analogies and advice because their approaches are perfectly suited to deal with threats that have been realized hourly, every day, worldwide, for the past several decades.

That’s a joke.

To be fair, their ability to fit so much tripe, trope, hyperbole, and nonsense into a single opening sentence, much less an entire op-ed, is a skill that has to be acknowledged. Of all the topics I have a passing familiarity with, it would probably take me months to replicate such a feat. Of course, I would be soundly pummeled about the head and shoulders for doing so, because people who actually know what they are talking about would not hesitate to point out what a bumbling neophyte I was.

Yet for some reason, when it comes to things-cyber, people from other fields feel like they can just mansplain their way into the discussion and expect their ideas to be treated with respect because of their credentials in an entirely different discipline. This is false authority syndrome at its worst, and for some reason we just sit there and take it.

Maybe it’s a status thing? Thinking heavy thoughts about this space is a relatively new endeavor compared to the broader study of things martial or international. We want to be effective, but we haven’t figured out how to do so without simultaneously being walked on like a hallway runner.  

Maybe it’s a nerd thing? Legions of subject matter experts could point out the shortcomings of these ill-conceived ideas, but they are too busy actually working problems to care what people who only say polite things politely to other polite people at polite gatherings (which, strangely, rarely if ever include technical experts) are saying.

Whatever the reasons may be, it is high time we started pushing back.

The resistance should start by asking every expert in political science, international relations, public policy, or other disciplines to explain exactly how their methodologies will work against cyberspace problems. Not hand-wavy answers extrapolated from the most tenuous similarities: with precision. How exactly are regimes that work for bugs, gas, or isotopes going to work for code? Go ahead, we’ll wait.

Likewise, we need to force digital dilettantes to acknowledge the corrupting power of code and its impact on behavior. It took an age for all the countries in the UN working group on cybersecurity to agree that countries should behave themselves online – a declaration that has been, is being, and will continue to be violated continuously by the intelligence and security organs of those same signatories. Agreeing to truly meaningful norms means agreeing to give up one of the most powerful intelligence capabilities and “weapons” platforms ever invented. No nation will agree to that with a straight face, and no nation that signs such an agreement believes any of the others will adhere to its terms. Acknowledgement of this reality delineates those serious about contributing from those advocating busy work.

Every piece of work from a professional from another field that does not have a co-author who is an expert in this field should be looked at with a jaundiced eye, and the editorial judgement of the outlet viewed askance. Electricians don’t get published in JAMA – unless of course they’re working with an MD writing about how they built a better medical mousetrap. Disseminating uncritically the work of “eminents” and “formers” gives credibility to ideas that have long since been dissected, discredited, or otherwise shown to be unworkable. Save those precious column inches for the credible and novel.

Lest frustration get the better of me, let me make it clear that the world would be a worse place without the dedication and diligence of prior generations of national and international security professionals. But just as security policy required a whole new set of expertise and thinking on August 6th, 1945, so too are we more likely to find solutions to the problems we face from people who understand the technology and the issues in context.

The Global Ungoverned Area

There are places on this planet where good, civilized people simply do not voluntarily go, or willingly stay. What elected governments do in safer and more developed parts of the world is carried out in these areas by despots and militias, often at terrible cost to those who have nowhere else to go and no means to get there if they did.

Life online is not unlike life in these ungoverned areas: anyone with the skill and the will is a potential warlord governing their own illicit enterprise, basking in the spoils garnered from the misery of a mass of unfortunates. Who is to stop them? A relative handful of government entities, each with competing agendas and varying levels of knowledge, skills, and resources, none of whom can move fast enough, far enough, or with enough vigor to respond in kind.

Reaping the whirlwind of apathy

Outside of government, computer security is rarely something anyone asks for except in certain edge cases. Security is a burden, a cost center. Consumers want functionality. Functionality always trumps security – so much so that most people do not seem to care if security fails. People want an effective solution to their problem. If it also happens not to leak personal or financial data like a sieve, great; if it does, that is rarely a deal-breaker.

At the start of the PC age we couldn’t wait to put a computer on every desk. With the advent of the World Wide Web, we rushed headlong into putting anything and everything online. Today you can go online to play the most trivial game or to fulfill your basic needs for food, shelter, and clothing, all at the push of a button. The downside to cyber-ing everything without adequate consideration for security? Epic security failures of all sorts.

Now we stand at the dawn of the age of the Internet of Things. Computers have gone from desktops to laptops to handhelds to wearables and now implantables. And once again, while we can’t wait to employ the technology, we can’t be bothered to secure it.

How things are done

What is our response? Laws and treaties, or at least proposals for same, that decant old approaches into new digital bottles. We decided drugs and poverty were bad, so we declared “war” on them, with dismal results. This sort of thinking is how we get the Wassenaar Arrangement applied to cybersecurity: because that’s what people who mean well and are trained in “how things are done” do. But there are a couple of problems with treating cyberspace like 17th-century Europe:

  • Even when most people agree on most things, it only takes one issue to bring the whole thing crashing down.
  • The most well-intentioned efforts to deter bad behavior are useless if you cannot enforce the rules, and given the rate at which we incarcerate bad guys it is clear we cannot enforce the rules in any meaningful way at a scale that matters.
  • While all the diplomats of all the governments of the world may agree to follow certain rules, the world’s intelligence organs will continue to use all the tools at their disposal to accomplish their missions, and that includes cyber ones.

This is not to say that such efforts are entirely useless (if you happen to arrest someone you want to have a lot of books to throw at them), just that the level of effort put forth is disproportionate to the impact that it will have on life online. Who is invited to these sorts of discussions? Governments. Who causes the most trouble online? Non-state actors.

Roads less traveled

I am not entirely dismissive of political-diplomatic efforts to improve the security and safety of cyberspace, merely unenthusiastic. Just because “that’s how things are done” doesn’t mean that’s what’s going to get us where we need to be. What it shows is inflexible thinking, and an unwillingness to accept reality. If we’re going to expend time and energy on efforts to civilize cyberspace, let’s do things that might actually work in our lifetimes.

  • Practical diplomacy. We’re never going to get every nation on the same page – not even for something as heinous as child pornography. This means bilateral agreements. Yes, it is more work to both close and manage such agreements, but it beats hoping for some “universal” agreement on norms that will never come.
  • Soft(er) power. No one wants another 9/11, but what we put in place to reduce that risk isn’t exactly popular either. The private enterprises that supply us with the Internet – and computer technology in general – will fight regulation, but they will respond to economic incentives.
  • The human factor. It’s rare to see trash along a highway median, and our rivers don’t catch fire anymore. Why? In large part because of the “Crying Indian” ad. A concerted effort to change public opinion can in fact change behavior (and let’s face it: people are the root of the problem).

Every week a new breach, a new “wake-up call,” yet there is simply not sufficient demand for a safer and more secure cyberspace. The impact of malicious activity online is greater than zero, but not catastrophic, which makes pursuing grandiose solutions a waste of cycles that could be put to better use achieving incremental gains (see ‘boil the ocean’).

Once we started selling pet food and porn online, it stopped being the “information superhighway” and became a demolition derby track. The sooner we recognize it for what it is the sooner we can start to come up with ideas and courses of action more likely to be effective.

/* Originally posted at Modern Warfare blog at CSO Online */

The Wolf is Here

For decades we’ve heard that iCalamity is right around the corner. For decades we’ve largely ignored pleas to try and address computer security issues when they are relatively cheap and easy, before they got too large and complicated to do at all. We have been living a fairy tale life, and absent bold action and an emphasis on resiliency, it only gets grim(m)er going forward.

Reasonably affordable personal computers became a thing when I was in high school. I fiddled around a bit, but I didn’t know that computer security was a thing until I was on active duty and the Morris Worm was all over the news. Between the last time Snap! charted and today, we have covered a lot of ground from a general-purpose IT perspective. We’ve gone from HTML and CGI to the cloud. From a security perspective, however, we’re still largely relying on firewalls, anti-virus, and SSL.

Why the disparate pace of progress? People demand that their technology be functional, not secure. Like so many areas of our lives, we worry about the here and now, not the what-might-be. We don’t worry about risks until a sufficiently horrific scenario occurs, or, if one is not enough, until enough of them occur in a sufficiently short period of time.

Of course today we don’t just have to worry about securing PCs. By now it is fairly common knowledge that your car is full of computers, as, increasingly, is your house. Some people wear computers, and some of us are walking around with computers inside of us. Critical infrastructure is lousy with computers, and this week we learned that those shepherd boys crying ‘wolf’ all those years weren’t playing us for fools; they were just too early.

The fragility of our standard of living is no longer the musings of Cassandras. The proof of concept was thankfully demonstrated far, far away, but the reality is we’re not really any safer just because ‘merica. Keeping the lights on, hearts beating, and the water flowing is a far more complex endeavor than you find in the commodity IT world. It is entirely possible that in some situations there is no ‘fix’ to certain problems, which means given various inter-dependencies we will always find ourselves with a Damoclean sword over our heads.

Mixed mythologies notwithstanding, the key to success writ large is insight and resiliency. How aware you are of what you have, how it works, and how to get along without it will be critical to surviving both accidents and attacks. I would like to think that the market will demand both functional and secure technology, and that manufacturers will respond accordingly, but 50 years of playing kick-the-can tells me that’s not likely. The analog to security in industrial environments is safety, and that’s one area power plants, hospitals, and the like have down far better than their peers in the general-purpose computing world. We might not be able to secure the future, but with luck we should be able to survive it.

Intelligence Agencies Are Not Here to Defend Your Enterprise

If there is a potentially dangerous side-effect to the discovery of a set of 0-days allegedly belonging to the NSA it is the dissemination of the idea, and credulous belief of same, that intelligence agencies should place the security of the Internet – and commercial concerns that use it – above their actual missions. It displays an all-too familiar ignorance of why intelligence agencies exist and how they operate. Before you get back to rending your hair and gnashing your teeth, let’s keep a few things in mind.

  1. Intelligence agencies exist to gather information, analyze it, and deliver their findings to policymakers so that they can make decisions about how to deal with threats to the nation. Period. You can, and agencies often do, dress this up and expand on it in order to motivate the workforce, or more likely grab more money and authority, but when it comes down to it, stealing and making sense of other people’s information is the job. Doing code reviews and QA for Cisco is not the mission.
  2. The one element in the intelligence community that was charged with supporting defense is no more. I didn’t like it then, and it seems pretty damn foolish now, but there you are, all in the name of “agility.” NSA’s IAD had the potential to do the things that all the security and privacy pundits imagine should be done for the private sector, but their job was still keeping Uncle Sam secure, not Wal-Mart.
  3. The VEP (Vulnerabilities Equities Process) is an exercise in optics. “Of course we’ll cooperate with your vulnerability release program,” says every inter-agency representative. “As long as it doesn’t interfere with our mission,” they whisper up their sleeve. Remember in every spy movie you ever saw, how the spooks briefed Congress on all the things, but not really? That.
  4. 0-days are only 0-days as far as you know. What one can make another can undo – and so can someone else. The idea that someone, somewhere, working for someone else’s intelligence agency might not also be doing vulnerability research, uncovering exploitable conditions in popular networking products, and using same in the furtherance of their national security goals is a special kind of hubris.
  5. Cyber security simply is not the issue we think it is. That we do any of this cyber stuff is only (largely) to support more traditional instruments and exercises of national power. Cyber doesn’t kill. Airstrikes kill. Snipers kill. Mortars kill. Policymakers are still far and away concerned with things that go ‘boom,’ not bytes. In case you haven’t been paying attention for the past 15 years, we’ve had actual, shooting wars to deal with, not cyber war.

I have spent most of my career being a defender (in and out of several different intelligence agencies). I understand the frustration, but blaming intelligence agencies for doing their job is not helpful. If you like living in the land of the free, it’s important to note that rules that would preclude the NSA from doing what it does merely handicap us; no one we consider a threat is going to stop looking for and exploiting holes. The SVR or MSS do not care about your amicus brief. The Internet is an important part of our world, and we should all be concerned about its operational well-being, but the way to reduce the chance that someone can crack your computer code is to write better code, and test it faster than the spooks can.

The Airborne Shuffle in Cyberspace

I did my fair share supporting and helping develop its predecessor, but I have no special insights into what is going on at CYBERCOM today. I am loath to criticize when I don’t know all the details; still, I see reports like this and scratch my head and wonder: why is anyone surprised?

Focus. If you have to wake up early to do an hour of PT, get diverted afterwards to pee in a cup, finally get to work and develop a good head of steam, only to leave early to go to the arms room and spend an hour cleaning a rifle, you’re not going to develop a world-class capability in any meaningful time-frame. Not in this domain. Not to mention the fact that after about two years whatever talent you’ve managed to develop rotates out and you have to start all over again.

Speed. If you have to call a meeting to call a meeting, and the actual meeting can’t take place for two weeks because everyone who needs to be there is involved in some variation of the distractions noted above, or TDY, you have no chance. It also doesn’t help that when you manage to have the meeting you are forced to delay decisions because of some minutia. You’re not just behind the power curve, you’re running in the opposite direction.

Agility. If your business model is to train generalists and buy your technology…over the course of several years…you are going to have a hard time going up against people with deep expertise who can create their own capabilities in days. Do we need a reminder of how effective sub-peer adversaries can be against cutting-edge military technology? You know what the people attacking SWIFT or major defense contractors aren’t doing? Standing up a PMO.

The procurement and use of tanks or aircraft carriers is limited to the military in meat-space, but in cyberspace anyone can develop or acquire weapons and project power. Globally. If you’re not taking this into consideration you’re basically the 18th Pomeranians. Absent radical changes no government hierarchy is going to out-perform or out-maneuver such adversaries, but it may be possible to close the gaps to some degree.

Focus. You should not lower standards for general purpose military skills, but in a CONUS, office environment you can exercise more control over how that training is performed and scheduled. Every Marine a rifleman, I get it, but shooting wars are relatively rare; the digital conflict has been engaged for decades (and if your cyber troops are hearing shots fired in anger, you’ve probably already lost).

Speed. Hackers don’t hold meetings, they open chat sessions. Their communication with their peers and partners is more or less constant. If you’re used to calling a formation to deliver your messages orally, you’re going to have to get used to not doing that. Uncomfortable with being glued to a screen – desktop or handheld? You’re probably ill-suited to operate in this domain.

Agility. You are never going to replicate ‘silicon valley’ in the DOD without completely disrupting DOD culture. The latter is a zero-defect environment, whereas the former considers failures to be a necessary part of producing excellence. You cannot hold company-level command for 15 years because it’s the job you’re best suited to; you can be one of the world’s best reverse engineers for as long as you want to be. What is “normal” should mean nothing inside an outfit like CYBERCOM.

Additional factors to consider…

Homestead. If you get assigned to CYBERCOM, you’re there for at least 10 years. That’s about 20 dog years from the perspective of the domain and the related technology, and that experience will be invaluable if you are serious about effective performance on the battlefield.

Lower Rank/Greater Impact. Cyberspace is where the ‘strategic corporal’ is going to play an out-sized role. At any given moment the commander – once their intent is made clear – is the least important person in the room.

Bias for Action. In meat-space if you pull the trigger you cannot call back the bullet. If your aim is true your target dies. In cyberspace your bullets don’t have to be fatal. The effect need only be temporary. We can and should be doing far more than we apparently are, because I guarantee our adversaries are.

“Cyber MAD” is a Bad Idea. Really Bad.

I don’t know how many times I have to say this, but nothing screams “legacy future” like trying to shoe-horn cold-war thinking into “cyber.” This latest attempt doesn’t disappoint (or maybe it does, depending on how you look at it) because it completely misses two key points:

  1. Cyberspace is not meat-space;
  2. Digital weapons are nothing like atomic ones.

Yes, like the nuclear arms race, it is in fact more expensive to defend yourself than it is to attack someone. Generally speaking. It’s OK to paint with a broad brush on this point because so many entities online are so woefully inadequate when it comes to defense that we forget there are actually some who are quite hard and expensive to attack. Any serious colored-hat who is being honest will tell you that they deal with more than their fair share of unknowns and ‘unknown unknowns’ when going after any given target.

But unlike malicious actions in cyberspace, there is no parsing nuclear war. You’re nuked, or you’re not. Cyber-espionage, cyber-crime, cyber-attack…all indistinguishable in all technically meaningful ways. Each has a different intent, which we are left to speculate about after-the-fact. In the other scenario, no one is around to speculate why a battalion of Reds turned their keys and pushed their buttons.

Attacker identity is indeed important whether you’re viewing a potential conflict through nuclear or digital lenses, but you know what excuse doesn’t work in the nuclear scenario? “It wasn’t me.”

Um, IR burn says it was…

There is no such equivalent in cyberspace. You can get close – real close – given sufficient data and time, but there will be no Colin Powell-at-the-UN-moment in response to a cyber threat because “it wasn’t me” is a perfectly acceptable excuse.

But we have data.

You can fabricate data.

You know what you can’t fabricate? Fallout.

All of this, ALL OF THIS, is completely pointless because if some adversary had both the will and the wherewithal to attack and destroy our – and just our – critical infrastructure and national security/defense capabilities via cyber means…what are we meant to strike back with? How are those who happen to be left unscathed supposed to determine who struck first? I was not a Missileer, but I’m fairly certain you can’t conduct granular digital attribution from the bottom of an ICBM silo.

What is the point of worrying about destruction anyway? Who wants that? The criminals? No, there is too much money to be made keeping systems up and careless people online. The spies? No, there is too much data to harvest and destruction might actually make collection hard. Crazy-bent-on-global-domination types? This is where I invoke the “Movie Plot Threat” clause. If the scenario you need to make your theory work in cyberspace is indistinguishable from a James Bond script, you can’t be taken seriously.

MAD for cyberspace is a bad idea because it’s completely academic and does nothing to advance the cause of safety or security online (the countdown to someone calling me “anti-intellectual” for pointing out this imperial nudity starts in 5, 4, 3….). MAD, cyber deterrence, all this old think is completely useless in any practical sense. You know why MAD and all those related ideas worked in the 60s? Because they dealt with the world and the problem in front of them as it was, not how they wished it to be.

I wholeheartedly agree that we need to do more and do more differently in order to make cyberspace a safer and more secure environment. I don’t know anyone who argues otherwise. I’m even willing to bet there is a period of history that would provide a meaningful analog to the problems we face today, but the Cold War isn’t it.

Malware Analysis: The Danger of Connecting the Dots

The findings of a lot of malware analysis are not in fact “analysis;” they’re a collection of data points linked together by assumptions whose validity and credibility have not been evaluated. This lack of analytic methodology could prove exceedingly problematic for those charged with making decisions about cyber security. If you cannot trust your analysis, how are you supposed to make sound cyber security decisions?

Question: If I give you a malware binary to reverse engineer, what do you see? Think about your answer for a minute and then read on. We’ll revisit this shortly.

It is accepted as conventional wisdom that Stuxnet is related to Duqu, which is in turn related to Flame. All of this malware has been described as “sophisticated” and “advanced,” so much so that it must be the work of a nation-state (such work presumably requiring large amounts of time, lots of skilled people, and code written for purposes beyond simply siphoning off other people’s cash). The claim that the US government is behind Stuxnet has consequently led people to assume that all related code is US sponsored, funded, or otherwise backed.

Except for the claim of authorship, all of the aforementioned data points come from people who reverse engineer malware binaries. These are technically smart people who practice an arcane and difficult art, but what credibility does that give them beyond their domain? In our quest for answers do we give too much weight to the conclusions of those with discrete technical expertise and fail to approach the problem with sufficient depth and objectivity?

Let’s take each of these claims in turn.

Are there similarities if not outright sharing of code in Stuxnet, Duqu and Flame? Yes. Does that mean the same people wrote them all? Do you believe there is a global marketplace where malware is created and sold? Do you believe the people who operate in that marketplace collaborate? Do you believe that the principle of “code reuse” is alive and well? If you answered “yes” to any of these questions then a single source of “advanced” malware cannot be your only valid conclusion.

Is the code in Stuxnet, etc. “sophisticated?” Define “sophisticated” in the context of malware. Forget about malware and try to define “sophisticated” in the context of software, period. Is Excel more sophisticated than Photoshop? When words have no hard and widely-accepted definitions, they can mean whatever you want them to mean, which means they have no meaning at all.

Can only a nation-state produce such code? How many government-funded software projects are you aware of that work as advertised? You can probably count them on one hand and have fingers left over. But now, somehow, when it comes to malware, suddenly we’re to believe that the government has gotten its shit together?

“But Mike, these are, like, weapons. Super secret stuff. The government is really good at that.”

Really? Have you ever heard of the Osprey? Or the F-35? Or the Crusader? Or the JTRS? Or Land Warrior? Groundbreaker? Trailblazer? Virtual Case File?

I’m not trying to trivialize the issues associated with large and complex technology projects; my point is that a government program to build malware would be subject to the same issues and consequently be no better – and quite possibly worse – than any non-governmental effort to do the same thing. Cyber crime statistics – inflated though they may be – tell us that governments are not the only entities that can and do fund malware development.

“But Mike, the government contracts out most of its technology work. Why couldn’t they contract out the building of digital weapons?”

They very well could, but then what does that tell us? It tells us that if you wanted to build the best malware you have to go on the open market (read: people who may not care who they’re working for, as long as their money is good).

As far as the US government “admitting” that they were behind Stuxnet: they did no such thing. A reporter, an author of a book, says that a government official told him that the US was behind Stuxnet. Neither the President of the United States, nor the Secretary of Defense, nor the Directors of the CIA or NSA got up in front of a camera and said, “That’s us!” which is what an admission would be. Let me reiterate: a guy who has a political agenda told a guy who wants to sell books that the US was behind Stuxnet.

It’s easy to believe the US is behind Stuxnet, as much as it is to believe Israel is behind it. You know who else doesn’t want countries that don’t have nuclear weapons to get them? Almost every country in the world, including those countries that currently have nuclear weapons. You know who else might not want Iran – a majority-Shia country – to have an atomic bomb? Roughly 30 Sunni countries for starters, most of which could afford to go onto the previously mentioned open market and pay for malware development. What? You hadn’t thought about the non-proliferation treaty or that Sunni-Shia thing? Yeah, neither has anyone working for Kaspersky, Symantec, F-Secure, etc., etc.

Back to the question I asked earlier: What do you see when you reverse engineer a binary?

Answer: Exactly what the author wants you to see.

  • I want you to see words in a language that would throw suspicion on someone else.
  • I want you to see that my code was compiled in a particular foreign language (even though I only read and/or write in a totally different language).
  • I want you to see certain comments or coding styles that are the same or similar to someone else’s (because I reuse other people’s code).
  • I want you to see data about compilation date/time, PDB file path, etc., which could lead you to draw erroneous conclusions but has no bearing on malware behavior or capability (see the sketch below).
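
To illustrate that last point, here is a minimal Python sketch (my own illustration; “sample.exe” is a hypothetical file name) showing that the PE “compile time” so often cited in attribution write-ups is just four bytes in the COFF header, written by whatever toolchain – or person – produced the binary:

    # Minimal sketch: read the TimeDateStamp from a Windows PE file's COFF header.
    # Nothing stops an author from setting these four bytes to any value at all.
    import struct
    import datetime

    def pe_timestamp(path):
        """Return the claimed build time of a PE file as a UTC datetime."""
        with open(path, "rb") as f:
            data = f.read(4096)                                # headers sit in the first few KB
        pe_offset = struct.unpack_from("<I", data, 0x3C)[0]    # e_lfanew: offset of "PE\0\0"
        # Signature (4 bytes) + Machine (2) + NumberOfSections (2) = +8 to TimeDateStamp.
        ts = struct.unpack_from("<I", data, pe_offset + 8)[0]
        return datetime.datetime.utcfromtimestamp(ts)

    # A timestamp that lines up with someone's business hours proves only that
    # someone wrote those four bytes, not who wrote them or where.
    print(pe_timestamp("sample.exe"))  # hypothetical file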

Contrary to post-9/11-conventional wisdom, good analysis is not dot-connecting. That’s part of the process, but it’s not the whole or only process. Good analysis has methodology behind it, as well as a fair dose of experience or exposure to other disciplines that comes into play. Most of all, whenever possible, there are multiple, verifiable, meaningful data points to help back up your assertions. Let me give you an example.

I used to work with a guy we’ll call “Luke.” Luke was a firm believer in the value of a given type of data. He thought it was infallible. So strong were Luke’s convictions about the findings he produced using only this particular type of data that he would draw conclusions about the world that flew in the face of what the rest of us like to call “reality.” If Luke’s assertions were true, WW III would have been triggered, but as many, many other sources of data were able to point out, Luke was wrong.

There was a reason why Luke was the oldest junior analyst in the whole department.

Luke, like a lot of people, fell victim to a number of problems, fallacies, and mental traps when attempting to draw conclusions from data. This is not an exhaustive list, but it is illustrative of what I mean.

Focus Isn’t All That. There is a misconception that narrow and intense focus leads to better conclusions. The opposite tends to be true: the more you focus on a specific problem, the less likely you are to think clearly and objectively. Because you just “know” certain things are true, you feel comfortable taking shortcuts to reach your conclusion, which in turn simply drives you further away from the truth.

I’ve Seen This Before. We give too much credence to patterns. When you see the same or very similar events taking place, or the same tactics used, your natural reaction is to assume that what is happening now is what happened in the past. You discount other options because it’s “history repeating itself.”

The Shoehorn Effect. We don’t like questions that don’t have answers. Everything has to have an explanation, regardless of whether or not the explanation is actually true. When you cannot come up with an explanation that makes sense to you, you will fit the answer to match the question.

Predisposition. We allow our biases to drive us to seek out data that supports our conclusions and discount data that refutes it.

Emotion. You cannot discount the emotional element involved in drawing conclusions, especially if your reputation is riding on the result. Emotions about a given decision can run so high that they overcome your ability to think clearly. Rationalism goes out the window when your gut (or your greed) over-rides your brain.

How can we overcome the aforementioned flaws? There are a range of methodologies analysts use to improve objectivity and criticality. These are by no means exhaustive, but they give you an idea of the kind of effort that goes into serious analytic efforts.

Weighted Ranking. It may not seem obvious to you, but when presented with two or more choices, you choose X over Y based on the merits of X, Y (and/or Z). Ranking is instinctual and therefore often unconscious. The problem with most informal efforts at ranking is that they are one-dimensional.

“Why do you like the TV show Homicide and not Dragnet?”

“Well, I like cop shows but I don’t like black-and-white shows.”

“OK, you realize those are two different things you’re comparing?”

A proper ranking means you’re comparing one thing against another using the same criteria. Using our example you could compare TV shows based on genre, sub-genre, country of origin, actors, etc., rank them according to preference in each category, and then tally the results. Do this with TV shows – or any problem – and you’ll see that your initial, instinctive results will be quite different than those of your weighted rankings.
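
To make the idea concrete, here is a minimal sketch of a weighted ranking in Python; the criteria and weights are invented purely for illustration and are not part of the original example:

    # Weighted-ranking sketch: score each show per criterion (1-5), weight the
    # criteria, and rank by the tally instead of by gut feel.
    shows = {
        "Homicide": {"genre": 5, "writing": 4, "era": 3},
        "Dragnet":  {"genre": 4, "writing": 2, "era": 1},
    }
    weights = {"genre": 0.5, "writing": 0.3, "era": 0.2}

    def score(ratings):
        # Multiply each criterion's score by its weight and sum the results.
        return sum(weights[c] * r for c, r in ratings.items())

    for show in sorted(shows, key=lambda s: score(shows[s]), reverse=True):
        print(show, round(score(shows[show]), 2))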

Hypothesis Testing. You assert the truth of your hypothesis through supporting evidence, but you are always working with incomplete or questionable data, so you can never prove a hypothesis true; we accept it as true until evidence surfaces that suggests it is false (see the bias note above). Information becomes evidence when it is linked to a hypothesis, and evidence is valid once we’ve subjected it to questioning: where did the information come from? How plausible is it? How reliable is it?
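
A minimal sketch of that process, with hypotheses, evidence items, and reliability weights invented for illustration: a piece of evidence only moves the needle on a hypothesis after you have asked where it came from and how much you trust it.

    # Hypothesis-testing sketch: discount each evidence item by its reliability.
    evidence = [
        # (description, reliability 0-1, {hypothesis: consistency -1, 0, or +1})
        ("shared code with earlier sample", 0.9,
         {"same authors": +1, "open-market code reuse": +1}),
        ("compile times cluster in one time zone", 0.3,
         {"same authors": +1, "open-market code reuse": 0}),
    ]

    def support(hypothesis):
        # Sum consistency scores, weighted by how reliable each item is.
        return sum(rel * fit.get(hypothesis, 0) for _, rel, fit in evidence)

    for h in ("same authors", "open-market code reuse"):
        print(h, round(support(h), 2))

Note that the first, most reliable item is consistent with both hypotheses, which is exactly why it cannot, on its own, settle the question of authorship.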

Devil’s Advocacy. Taking a contrary or opposing position from what is the accepted answer helps overcome biases and one-dimensional thinking. Devil’s advocacy seeks out new evidence to refute “what everybody knows,” including evidence that was disregarded by those who take the prevailing point of view.

This leads me to another point I alluded to earlier and that isn’t addressed in media coverage of malware analysis: what qualifications does your average reverse engineer have when it comes to drawing conclusions about geo-political-security issues? You don’t call a plumber to fix your fuse box. You don’t ask a diplomat about the latest developments in no-till farming. Why in the world would you take at face value what a reverse engineer says about anything except very specific, technical findings? I’m not saying people are not entitled to their opinions, but credibility counts if those opinions are going to have value.

So where are we?

  • There are no set or even widely accepted definitions related to malware  (e.g. what is “sophisticated” or “advanced”).
  • There is no widely understood or accepted baseline of what sort of technical, intellectual, or actual capital is required to build malware.
  • Data you get out of code, through reverse engineering or from source, is not guaranteed to be accurate when it comes to issues of authorship or origin.
  • Malware analysts do not apply any analytic methodology in an attempt to confirm or refute their single-source findings.
  • Efforts to link data found in code to larger issues of geo-political importance are at best superficial.

Why is all of this important? Computer security issues are becoming an increasingly important factor in our lives. Not everyone appreciates it, but look at where we have been and where we are headed. Just under 20 years ago few people in the US, much less the world, were online; now more people in the world get online via their phones than via a traditional computer. Cars use computers to drive themselves, and biological implants are controlled via Bluetooth. Neither of these new developments has meaningful security features built in, but no one would ever be interested in hacking insulin pumps or pacemakers, right?

Taking computer security threats seriously starts by putting serious thought and effort behind our research and conclusions. The government does not provide information like this to the public, so we rely on vendors and security companies (whose primary interest is profit) to do it for us. When that “analysis,” which is far from rigorous, is delivered to decision-makers who are used to dealing with conclusions developed through a much more robust methodology, their decisions can have far-reaching negative consequences.

Sometimes a quick-and-dirty analysis is right, and as long as you’re OK with the fact that that is all most malware analysis is, fine. But if you’re planning on making serious decisions about the threat you face from cyberspace, you should really take the time and effort to ensure that your analysis looks beyond what IDA shows and considers more diverse and far-reaching factors.

Between Preppers and FEMA Trailers

Today, for want of a budget, the Federal government is shutting down. If the nation suffered a massive cyber attack today, what would happen? If you think the government is going to defend you against a cyber attack or help you in the aftermath of a digital catastrophe – budget or no budget – think again. The government cannot save you, and you can no more count on timely assistance in the online world than you can in the physical one in the aftermath of a disaster. Help might come eventually, but your ability to fight off hostiles or weather a digital storm depends largely on what you can do for yourself.

The vast majority of the time, natural or man-made disasters are things that happen to someone else. People who live in disaster- or storm-prone areas know that at any given moment they may have to make do with what they have on hand; consequently, they prepare to deal with the worst-case scenario for a reasonable amount of time. The reason you don’t see people in the mountain west or northeast in FEMA trailers after massive snow or ice storms is a culture of resilience and self-reliance.

How does this translate into the digital world? Don’t efforts like the Comprehensive National Cybersecurity Initiative, and all the attention foreign state-sponsored industrial espionage has gotten recently, belie the idea that the government isn’t ready, willing, and able to take action in the face of a digital crisis?

Federal agencies are no better at protecting themselves from digital attack than anyone else. The same tricks that lead to a breach at a bank work against a government employee. Despite spending tens of billions of tax dollars on cyber security we continue to hear about how successful attackers are and that attacks are growing and threatening our economy and way of life. The increasing amount of connectivity in industrial control systems puts us at even greater risk of a disaster because very few people know how to secure a power plant or oil refinery.

It’s not that the government does not want to make the Internet safer and more secure; it is simply ill-equipped to do so. Industrial-age practices, bureaucracy, a sloth-like pace, its love affair with lobbyists, and its inability to retain senior leaders with security chops mean “cyber” will always be the most talked-about also-ran issue in government. You know what issue has shut down the federal government this week? It isn’t “cyber.”

Protect you against threats? What leverage do we really have against a country like China? Cold War approaches won’t work. For one, you’re probably reading this on something made in China; your dad never owned a Soviet-made anything. We cannot implement “digital arms control” or a deterrence regime because there is no meaningful analog between nuclear weapons and digital ones. Trying to retrofit new problems into old constructs is how Cold Warriors maintain relevance; it’s just not terribly useful in the real world.

So what are we to do? Historically speaking, when the law could not keep up with human expansion into unknown territory, people were expected to defend themselves and uphold the rudiments of good social behavior. If someone threatened you on your remote homestead, you needed to be prepared to defend yourself until the Marshal arrived. This is not a call to vigilantism, nor that you should become some kind of iPrepper, but a reflection of the fact that the person most responsible for your safety and security online is you. As my former colleague Marc Sachs recently put it:

“If you’re worried about it, do something about it. Take security on yourselves, and don’t trust anybody else to do it.”

What do you or your business need to survive in the short- and long-term if you’re hacked? Invest time and money accordingly. If computer security is terra incognita then hire a guide to get you to where you want to go and teach you what you need to know to survive once you’re there. Unless you want to suffer through the digital equivalent of life in a FEMA trailer, you need to take some responsibility to improve your resilience and ensure your viability.

Stop Pretending You Care (about the NSA)

You’ve read the stories, heard the interviews, and downloaded the docs and you’re shocked, SHOCKED to find that one of the world’s most powerful intelligence agencies has migrated from collecting digital tons of data from radio waves and telephone cables to the Internet. You’re OUTRAGED at the supposed violation of your privacy by these un-elected bureaucrats who get their jollies listening to your sweet nothings.

Except you’re not.

Not really.

Are you really concerned about your privacy? Let’s find out:

  1. Do you only ever pay for things with cash (and you don’t have a credit or debit card)?
  2. Do you have no fixed address?
  3. Do you get around town or strange places with a map and compass?
  4. Do you only make phone calls using burner phones (trashed after one use) or public phones (never the same one twice)?
  5. Do you always go outside wearing a hoodie (up) and either Groucho Marx glasses or a Guy Fawkes mask?
  6. Do you wrap all online communications in encryption, pass them through TOR, use an alias, and only type with latex gloves on strangers’ computers when they leave the coffee shop table to use the bathroom?
  7. Do you have any kind of social media presence?
  8. Are you reading this over the shoulder of someone else?

The answer key, if you’re serious about not having “big brother” of any sort up in your biznaz is: Y, Y, Y, Y, Y, Y, N, Y. Obviously not a comprehensive list of things you should do to stay off anyone’s radar, but anything less and all your efforts are for naught.

People complain about their movements being tracked and their behaviors being examined; but then they post selfies to 1,000 “friends” and “check in” at bars and activate all sorts of GPS-enabled features while they shop using their store club card so they can save $.25 on albacore tuna. The NSA doesn’t care about your daily routine: the grocery store, electronics store, and companies that make consumer products all care very, very much. Remember this story? Of course you don’t because that’s just marketing, the NSA is “spying” on you.

Did you sign up for the “do not call” list? Did you breathe a sigh of relief and, as a reward to yourself, order a pizza? Guess what? You just put yourself back on data brokers’ and marketing companies’ “please call me” lists. What? You didn’t read the fine print of the law (or the fine print on any of the EULAs of the services or software you use)? You thought you had an expectation of privacy?! Doom on you.

Let’s be honest about what the vast majority of people mean when they say they care about their privacy:

I don’t want people looking at me while I’m in the process of carrying out a bodily function, carnal antics, or enjoying a guilty pleasure.

Back in the day, privacy was easy: you shut the door and drew the blinds.

But today, even though you might shut the door, your phone can transmit sounds, the camera in your laptop can transmit pictures, and your set-top box is telling someone what you’re watching (and, depending on what the content is, can infer what you’re doing while you watch). You think you’re being careful, if not downright discreet, but you’re not. Even trained professionals screw up, and it only takes one mistake for everything you thought you kept under wraps to blow up.

If you really want privacy in the world we live in today you need to accept a great deal of inconvenience. If you’re not down with that, or simply can’t do it for whatever reason, then you need to accept that almost nothing in your life is a secret unless it’s done alone in your basement, with the lights off and all your electronics locked in a Faraday cage upstairs.

Don’t trust the googles or any US-based ISP for your email and data anymore? Planning to relocate your digital life overseas? Hey, you know where the NSA doesn’t need a warrant to do its business and they can assume you’re not a citizen? Overseas.

People are now talking about “re-engineering the Internet” to make it NSA-proof…sure, good luck getting everyone who would need to chop on that to give you a thumbs up. Oh, also, everyone who makes stuff that connects to the Internet. Oh, also, everyone who uses the Internet who now has to buy new stuff because their old stuff won’t work with the New Improved Internet(tm). Employ encryption and air-gap multiple systems? Great advice for hard-core nerds and the paranoid, but not so much for 99.99999% of the rest of the users of the ‘Net.

/* Note to crypto-nerds: We get it; you’re good at math. But if you really cared about security you’d make en/de-cryption as push-button simple to install and use as anything in an App store, otherwise you’re just ensuring the average person runs around online naked. */
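
In the spirit of that note, here is a minimal sketch of how little code “push-button” symmetric encryption can require, using the third-party cryptography package (an illustration of the point, not an endorsement of any particular tool; key management is conveniently left out):

    # Symmetric encryption in a handful of lines (pip install cryptography).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()              # generate once, store somewhere safe
    box = Fernet(key)

    token = box.encrypt(b"sweet nothings")   # ciphertext, safe to send
    print(box.decrypt(token))                # b'sweet nothings'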

Now, what you SHOULD be doing instead of railing against over-reaches (real or imagined…because the total number of commentators on the “NSA scandal” who actually know what they’re talking about can be counted on one hand with digits left over) is what every citizen has a right to do, but rarely does: vote.

The greatest power in this country is not financial, it’s political. Intelligence reforms only came about in the 70s because the sunshine reflecting off of abuses and overreaches could not be ignored by those who are charged with overseeing intelligence activities. So if you assume the worst of what has been reported about the NSA in the press (again, no one leaking this material, and almost no one reporting or commenting on it, actually did SIGINT for a living…credibility is important here), then why have you not called your Congressman or Senator? If you’re from CA, WV, OR, MD, CO, VA, NM, ME, GA, NC, ID, IN, FL, MI, TX, NY, NJ, MN, NV, KS, IL, RI, AZ, CT, AL or OK you’ve got a direct line to those who are supposed to ride herd on the abusers.

Planning on voting next year? Planning on voting for an incumbent? Then you’re not really doing the minimum you can to bring about change. No one cares about your sign-waving or online protest. Remember those Occupy people? Remember all the reforms to the financial system they brought about?

Yeah….

No one will listen to you? Do what Google, Facebook, AT&T, Verizon, and everyone else you’re angry at does: form a lobby, raise money, and buttonhole those who can actually make something happen. You need to play the game to win.

I’m not defending bad behavior. I used to live and breathe Ft. Meade, but I’ve come dangerously close to being “lost” thanks to the ham-handedness of how they’ve handled things. But let’s not pretend that we – all of us – are lifting a finger to do anything meaningful about it. You’re walking around your house naked with the drapes open and are surprised when people gather on the sidewalk – including the police, who show up to see why a crowd is forming – to take in the view. Yes, that’s how you roll in your castle, but don’t pretend you care about keeping it personal.

Dust off Khrushchev while we’re at it

Kissinger’s call for detente would make a lot more sense if the analog to “cyber” was the cold war, MAD, etc.

It is not.

I have a lot of respect for the former SECSTATE, but to be mildly uncharitable, he doesn’t really have a lot to add to this discussion. None of his cold war ilk do. “Cyber” is pretty much the closest thing to a perfect weapon anyone has seen in history (you can claim “it wasn’t me!” and no one can prove definitively otherwise in a meaningful time frame). Proposed solutions that ignore or give short shrift to this basic fact are a colossal waste of time, which is all cold war retreads have at this point. No one who can use “cyber” as a meaningful weapon for intelligence or combat activities is going to surrender one byte of capability. No security regime that has been proposed stands up to a modicum of scrutiny once the most basic, practical issues are raised. We need to hear proposals that have at least one foot rooted in reality, because the threat is here and now; ideas whose success depends on a world that doesn’t currently exist and is unlikely to (did I mention no one in their right mind would give up capability? I did, good) are consuming cycles we could be using to come up with something practical.