“Cyber MAD” is a Bad Idea. Really Bad.

I don’t know how many times I have to say this, but nothing screams “legacy future” like trying to shoe-horn cold-war thinking into “cyber.” This latest attempt doesn’t disappoint (or maybe it does, depending on how you look at it) because it completely misses two key points:

  1. Cyberspace is not meat-space;
  2. Digital weapons are nothing like atomic ones.

Yes, like the nuclear arms race, it is in fact more expensive to defend yourself than it is to attack someone, generally speaking. It’s OK to paint with a broad brush on this point because so many entities online are so woefully inadequate when it comes to defense that we forget there are actually some that are quite hard and expensive to attack. Any serious colored-hat who is being honest will tell you that they deal with more than their fair share of unknowns and ‘unknown unknowns’ when going after any given target.

But unlike malicious actions in cyberspace, there is no parsing nuclear war. You’re nuked, or you’re not. Cyber-espionage, cyber-crime, cyber-attack…all indistinguishable in any technically meaningful way. Each has a different intent, which we are left to speculate about after the fact. In the other scenario, no one is around to speculate why a battalion of Reds turned their keys and pushed their buttons.

Attacker identity is indeed important whether you’re viewing a potential conflict through nuclear or digital lenses, but you know what excuse doesn’t work in the nuclear scenario? “It wasn’t me.”

Um, IR burn says it was…

There is no such equivalent in cyberspace. You can get close – real close – given sufficient data and time, but there will be no Colin Powell-at-the-UN-moment in response to a cyber threat because “it wasn’t me” is a perfectly acceptable excuse.

But we have data.

You can fabricate data.

You know what you can’t fabricate? Fallout.

All of this, ALL OF THIS, is completely pointless because if some adversary had both the will and the wherewithal to attack and destroy our and just our critical infrastructure and national security/defense capabilities via cyber means…what are we meant to strike back with? How are those who happen to be left unscathed supposed to determine who struck first? I was not a Missileer, but I’m fairly certain you can’t conduct granular digital attribution from the bottom of an ICBM silo.

What is the point of worrying about destruction anyway? Who wants that? The criminals? No, there is too much money to be made keeping systems up and careless people online. The spies? No, there is too much data to harvest and destruction might actually make collection hard. Crazy-bent-on-global-domination types? This is where I invoke the “Movie Plot Threat” clause. If the scenario you need to make your theory work in cyberspace is indistinguishable from a James Bond script, you can’t be taken seriously.

MAD for cyberspace is a bad idea because it’s completely academic and does nothing to advance the cause of safety or security online (the countdown to someone calling me “anti-intellectual” for pointing out this imperial nudity starts in 5, 4, 3….). MAD, cyber deterrence, all this old think is completely useless in any practical sense. You know why MAD and all those related ideas worked in the 60s? Because they dealt with the world and the problem in front of them as it was, not how they wished it to be.

I wholeheartedly agree that we need to do more and do more differently in order to make cyberspace a safer and more secure environment. I don’t know anyone who argues otherwise. I’m even willing to bet there is a period of history that would provide a meaningful analog to the problems we face today, but the Cold War isn’t it.

The Lessons of PFC Manning

Make no mistake: PFC Manning made some very bad decisions and he should pay a very heavy price. Taking a step back, however, one can see that in his betrayal he has done something of a public service for the security and operational communities in the military, government, and commercial worlds.

Lesson number one is that your current computer security regime is probably a waste of time and effort. Even in what should have been an extremely secure environment, computer security was something approaching a joke. If Manning’s M.O. is confirmed, there was a complete security breakdown. Military necessity has always trumped certain non-combat-related protocols during wartime, but being able to run roughshod through Top Secret networks and rip classified material to cracked music CDs beggars belief. No amount of briefings, posters, forms and extra-duties will remedy this problem.

Next: you can’t ensure the confidentiality or integrity of anything on SIPRnet or JWICS (private sector entities who find themselves with a similar insider threat issue, insert your own network here). There are intelligence community agencies that don’t like to use SIPRnet, the military’s secret-level network, because they think it isn’t nearly as secure as it should be. PFC Manning has demonstrated that neither is the military’s top secret-level network. The intelligence posted to JWICS by any DOD-intelligence activity (which is most of the intelligence community) has been at risk for who knows how long. If one misguided, low-level troop can do what he is alleged to have done, I don’t even want to think about what a determined adversary – or an agent-in-place – could have been doing all this time.

Finally, more certifications and billions of dollars worth of grand strategies will not improve security. Ten CNCIs would not have stopped this, only a fundamental change in culture – both operational and security – would have worked. To the best of my knowledge, money doesn’t fund the widespread dissemination of good security ideas; it just buys more of the same boxes, software and bodies to reinforce the same dysfunctional security models.

If we are truly serious about improving computer security, if we don’t want $17 Billion in CNCI money to go completely to waste, if we are finally tired of shooting our own feet while trekking towards security nirvana, we need to pay attention to reality and design our security solutions accordingly.

If your approach to security impedes a unit’s (company, agency, etc.) ability to operate effectively, you’re doing it wrong. Security that presumes a condition or series of conditions that do not exist in the real world – much less combat environments – will fail. The people who need to get things done will intentionally cause it to fail . . . in order to get things done. This is not an original thought, but it is one that needs to be revisited in military, government, and business circles. Good security is not perfect; it is good enough for what you need to do, the environment you are operating in, and the duration of your decision-making cycle.

Presume your adversaries know everything you do at this point: react accordingly. Things are still fairly speculative, but when the damage assessment is done I’m fairly sure most sane people involved will walk away thinking there is no way to verify the confidentiality or integrity of any piece of information on SIPRnet or JWICS. That makes this a perfect time to implement a living intelligence solution. Maintaining the static production model gives our adversaries the advantage, because what was a mystery is now history and their Pentagon-ology skills have just gotten a huge boost. An environment of living intelligence also makes spy/leak hunting a lot easier by allowing a more granular view of who accessed what, when.
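As an illustration of the kind of granular monitoring a living-intelligence environment enables, here is a minimal sketch in Python. The log format, the sample users, and the threshold are all invented for illustration; it simply flags anyone whose document pulls dwarf the median user’s:

```python
from collections import Counter
from statistics import median

# Hypothetical access-log records: (user, document_id) pairs.
LOG = (
    [("alice", f"doc{i}") for i in range(20)]
    + [("bob", f"doc{i}") for i in range(18)]
    + [("mallory", f"doc{i}") for i in range(400)]  # wide, indiscriminate pulls
)

def flag_outliers(log, factor=10):
    """Flag users whose pull count exceeds `factor` times the median user's."""
    pulls = Counter(user for user, _ in log)
    typical = median(pulls.values())
    return sorted(u for u, c in pulls.items() if c > factor * typical)
```

A real system would also key on classification level, need-to-know, and time of access; the point is only that centralized, living delivery makes such queries trivial to run.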

Clinging to outmoded security models and approaches is only going to end up endangering soldiers and national security because no one will adhere to them when they are needed most. Stop focusing on moats and walls because the enemy is already inside the wire (literally and figuratively). Most arguments against change – radical or incremental – don’t carry a lot of weight because they presume that what was done to date made us secure. What was done to date made us more insecure than ever; doing more of the same won’t bring improvement.

My greatest concern is that when he is in prison and the final chapter on the story of his actions is written, our “solution” will be more strongly-worded policy, more stringent procedures, more paperwork . . . all of which will promptly be ignored the next time the operational need demands it. We’ll carry on – business as usual – thinking that now we’re safe and secure in our own digital cloister, when in fact we’re simply doing more of the same things that got us in trouble in the first place. The tragedy here is not that we were undone by a shit-bird GI who didn’t have his head screwed on straight, it’s that we will ignore what he is teaching us.

Incongruence Defined

How apropos that on the heels of the publication of one of the better ideas out there on how to improve the quality and accessibility of intelligence to consumers, we hear reports that the leadership of the IC wants to roll back the intelligence sharing clock.

Let’s be clear: blaming Private Manning and Wikileaks for a chilling effect on sharing is a red herring. Everyone in this business knew that the second capabilities like Intelink went live, the power of one individual to grant unauthorized sources access to wide swaths of classified material they would previously never have seen went through the roof. Such systems made Security and Counterintelligence pros shudder, because seeking and exploiting access to classified information above or beyond what you are authorized to access is a classic indicator of potential espionage.

There is also something ironic about senior individuals who talk to people who are not authorized to receive classified information (e.g. reporters) off the record and on background, railing against people who talk to reporters off the record and on background. What they are really saying is: “We don’t like it when someone exposes information that runs counter to the controlled message we are trying to get out.” If you’re not down with exposing classified information to ANYONE not authorized to receive it, you shouldn’t be down with exposing classified information to EVERYONE not authorized to receive it. Anything else is simple hypocrisy.

Of course nothing in this town is so straightforward, and not considering the gray area is considered woefully naïve, so let’s break it down into what I hope are reasonably easy to understand and acceptable chunks:

Leaks of any sort would be a lot less dangerous if we reformed the classification system. A system that didn’t overclassify or needlessly classify information could concentrate all-too-scarce security resources on protecting what truly needed to be kept from unauthorized personnel.

Intelligence, sadly, is a political football. All sides of a given argument will play with it to make their point. If you, the people and party in power, are not going to stop using it thusly, then stop making boogie men out of your political opponents who do. Lay down some ground rules about what is judicious and what is foolish – punish egregious infractions – and may the best team win.

Private Manning is not a political opponent or foreign threat. He never should have been in the Army, much less in the job he had, so making his actions seem more than what they are – the misguided, ill-conceived actions of a child – and using them as an excuse to suspend (or worse, walk back) sharing initiatives is so disingenuous it would be laughable if it were not so serious. You’re going to make it harder for consumers to get what they need because of some OD green s***-bird?! WTF?!

If you’re concerned about the pressures technology is and will place on security, and the implications that has on protecting classified information, then make a serious effort to understand how to leverage both the ability to deliver and the ability to monitor digital information and its consumption. Cutting zillion dollar contracts that end up late, short, or just plain fail isn’t the answer; taking the time to find people who actually understand the issues and the technology is. The alternative is to keep pretending that you can kick the can down the road, thereby becoming increasingly irrelevant to both warfighters and policymakers. The effective death of the IC won’t be caused by insufficient or incorrect information; it’ll be caused by the cumbersome hoops one will have to jump through to reach its products, compared with other data sources of sufficient quality and accuracy to get the job done.

Malware Analysis: The Danger of Connecting the Dots

The findings of malware analysis are not in fact “analysis;” they’re a collection of data points linked together by assumptions whose validity and credibility have not been evaluated. This lack of analytic methodology could prove exceedingly problematic for those charged with making decisions about cyber security. If you cannot trust your analysis, how are you supposed to make sound cyber security decisions?

Question: If I give you a malware binary to reverse engineer, what do you see? Think about your answer for a minute and then read on. We’ll revisit this shortly.

It is accepted as conventional wisdom that Stuxnet is related to Duqu, which is in turn related to Flame. All of this malware has been described as “sophisticated” and “advanced,” so much so that it must be the work of a nation-state (such work presumably requiring large amounts of time, lots of skilled people, and code written for purposes beyond simply siphoning off other people’s cash). The claim that the US government is behind Stuxnet has consequently led people to assume that all related code is US sponsored, funded, or otherwise backed.

Except for the claim of authorship, all of the aforementioned data points come from people who reverse engineer malware binaries. These are technically smart people who practice an arcane and difficult art, but what credibility does that give them beyond their domain? In our quest for answers do we give too much weight to the conclusions of those with discrete technical expertise and fail to approach the problem with sufficient depth and objectivity?

Let’s take each of these claims in turn.

Are there similarities if not outright sharing of code in Stuxnet, Duqu and Flame? Yes. Does that mean the same people wrote them all? Do you believe there is a global marketplace where malware is created and sold? Do you believe the people who operate in that marketplace collaborate? Do you believe that the principle of “code reuse” is alive and well? If you answered “yes” to any of these questions then a single source of “advanced” malware cannot be your only valid conclusion.
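The code-reuse point is easy to demonstrate. The toy similarity measure below (byte n-gram Jaccard overlap, a crude stand-in for whatever feature extraction a real vendor uses, not any actual product’s method) scores two samples that paste in the same shared routine as closely related, regardless of who compiled them:

```python
def ngrams(data: bytes, n: int = 4) -> set:
    """All n-byte windows of a blob; a crude stand-in for real feature extraction."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def similarity(a: bytes, b: bytes, n: int = 4) -> float:
    """Jaccard overlap of n-gram sets: 0.0 = nothing shared, 1.0 = identical."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    return len(ga & gb) / len(ga | gb) if (ga | gb) else 0.0

# Two 'authors' who both reuse the same routine look related...
shared_routine = bytes(range(200))                 # stand-in for reused code
sample_a = shared_routine + b"author-one payload"
sample_b = shared_routine + b"author-two payload"
# ...while genuinely independent code does not.
unrelated = bytes(255 - x for x in range(200))
```

High overlap tells you code was shared; it says nothing about whether the sharing happened inside one team or across a marketplace.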

Is the code in Stuxnet, etc. “sophisticated?” Define “sophisticated” in the context of malware. Forget about malware and try to define “sophisticated” in the context of software, period. Is Excel more sophisticated than Photoshop? When words have no hard and widely-accepted definitions, they can mean whatever you want them to mean, which means they have no meaning at all.

Can only a nation-state produce such code? How many government-funded software projects are you aware of that work as advertised? You can probably count on one hand and have fingers left over. But now, somehow, when it comes to malware, suddenly we’re to believe that the government has gotten its shit together?

“But Mike, these are, like, weapons. Super secret stuff. The government is really good at that.”

Really? Have you ever heard of the Osprey? Or the F-35? Or the Crusader? Or the JTRS? Or Land Warrior? Groundbreaker? Trailblazer? Virtual Case File?

I’m not trying to trivialize the issues associated with large and complex technology projects; my point is that a government program to build malware would be subject to the same issues and consequently no better – and quite possibly worse – than any non-governmental effort to do the same thing. Cyber crime statistics – inflated though they may be – tell us that governments are not the only entities that can and do fund malware development.

“But Mike, the government contracts out most of its technology work. Why couldn’t they contract out the building of digital weapons?”

They very well could, but then what does that tell us? It tells us that if you wanted to build the best malware you have to go on the open market (read: people who may not care who they’re working for, as long as their money is good).

As far as the US government “admitting” that they were behind Stuxnet: they did no such thing. A reporter, an author of a book, says that a government official told him that the US was behind Stuxnet. Neither the President of the United States, nor the Secretary of Defense, nor the Directors of the CIA or NSA got up in front of a camera and said, “That’s us!” which is what an admission would be. Let me reiterate: a guy who has a political agenda told a guy who wants to sell books that the US was behind Stuxnet.

It’s easy to believe the US is behind Stuxnet, as much as it is to believe Israel is behind it. You know who else doesn’t want countries that don’t have nuclear weapons to get them? Almost every country in the world, including those countries that currently have nuclear weapons. You know who else might not want Iran – a majority Shia country – to have an atomic bomb? Roughly 30 Sunni countries for starters, most of which could afford to go onto the previously mentioned open market and pay for malware development. What? You hadn’t thought about the non-proliferation treaty or that Sunni-Shia thing? Yeah, neither has anyone working for Kaspersky, Symantec, F-Secure, etc., etc.

Back to the question I asked earlier: What do you see when you reverse engineer a binary?

Answer: Exactly what the author wants you to see.

  • I want you to see words in a language that would throw suspicion on someone else.
  • I want you to see that my code was compiled on a system set to a particular foreign language (even though I only read and/or write in a totally different language).
  • I want you to see certain comments or coding styles that are the same or similar to someone else’s (because I reuse other people’s code).
  • I want you to see data about compilation date/time, PDB file path, etc., which could lead you to draw erroneous conclusions that have no bearing on malware behavior or capability.
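Compilation metadata is a good example of how cheap these artifacts are to plant. The sketch below parses the COFF TimeDateStamp straight out of a PE header using only the stdlib `struct` module (not a full parser like pefile), then rewrites it, showing that the “compiled at” dot is a single four-byte edit:

```python
import struct

def read_pe_timestamp(data: bytes) -> int:
    """Return the COFF TimeDateStamp of a PE image held in memory."""
    if data[:2] != b"MZ":
        raise ValueError("not a PE image")
    # e_lfanew at offset 0x3C points to the 'PE\0\0' signature
    pe_off = struct.unpack_from("<I", data, 0x3C)[0]
    if data[pe_off:pe_off + 4] != b"PE\x00\x00":
        raise ValueError("bad PE signature")
    # COFF header follows the signature; TimeDateStamp sits 4 bytes in,
    # after the Machine and NumberOfSections fields
    return struct.unpack_from("<I", data, pe_off + 8)[0]

def forge_pe_timestamp(data: bytes, new_ts: int) -> bytes:
    """Return a copy of the image with the timestamp overwritten."""
    pe_off = struct.unpack_from("<I", data, 0x3C)[0]
    patched = bytearray(data)
    struct.pack_into("<I", patched, pe_off + 8, new_ts)
    return bytes(patched)
```

Anyone with a hex editor can plant a timestamp pointing at whatever working hours or time zone they want an analyst to find.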

Contrary to post-9/11-conventional wisdom, good analysis is not dot-connecting. That’s part of the process, but it’s not the whole or only process. Good analysis has methodology behind it, as well as a fair dose of experience or exposure to other disciplines that comes into play. Most of all, whenever possible, there are multiple, verifiable, meaningful data points to help back up your assertions. Let me give you an example.

I used to work with a guy we’ll call “Luke.” Luke was a firm believer in the value of a given type of data. He thought it was infallible. So strong were Luke’s convictions about the findings he produced using only this particular type of data that he would draw conclusions about the world that flew in the face of what the rest of us like to call “reality.” If Luke’s assertions were true, WW III would have been triggered, but as many, many other sources of data were able to point out, Luke was wrong.

There was a reason why Luke was the oldest junior analyst in the whole department.

Luke, like a lot of people, fell victim to a number of problems, fallacies and mental traps that snare those who attempt to draw conclusions from data. This is not an exhaustive list, but it is illustrative of what I mean.

Focus Isn’t All That. There is a misconception that narrow and intense focus leads to better conclusions. The opposite tends to be true: the more you focus on a specific problem, the less likely you are to think clearly and objectively. Because you just “know” certain things are true, you feel comfortable taking shortcuts to reach your conclusion, which in turn simply drives you further away from the truth.

I’ve Seen This Before. We give too much credence to patterns. When you see the same or very similar events taking place or tactics used, your natural reaction is to assume that what is happening now is what happened in the past. You discount other options because it’s “history repeating itself.”

The Shoehorn Effect. We don’t like questions that don’t have answers. Everything has to have an explanation, regardless of whether or not the explanation is actually true. When you cannot come up with an explanation that makes sense to you, you will fit the answer to match the question.

Predisposition. We allow our biases to drive us to seek out data that supports our conclusions and discount data that refutes it.

Emotion. You cannot discount the emotional element involved in drawing conclusions, especially if your reputation is riding on the result. Emotions about a given decision can run so high that they overcome your ability to think clearly. Rationalism goes out the window when your gut (or your greed) over-rides your brain.

How can we overcome the aforementioned flaws? There are a range of methodologies analysts use to improve objectivity and criticality. These are by no means exhaustive, but they give you an idea of the kind of effort that goes into serious analytic efforts.

Weighted Ranking. It may not seem obvious to you, but when presented with two or more choices, you choose X over Y based on the merits of X, Y (and/or Z). Ranking is instinctual and therefore often unconscious. The problem with most informal efforts at ranking is that it’s one-dimensional.

“Why do you like the TV show Homicide and not Dragnet?”

“Well, I like cop shows but I don’t like black-and-white shows.”

“OK, you realize those are two different things you’re comparing?”

A proper ranking means you’re comparing one thing against another using the same criteria. Using our example you could compare TV shows based on genre, sub-genre, country of origin, actors, etc., rank them according to preference in each category, and then tally the results. Do this with TV shows – or any problem – and you’ll see that your initial, instinctive results will be quite different than those of your weighted rankings.
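The TV-show example can be sketched directly. The criteria, weights, and 1-10 scores below are invented purely for illustration:

```python
# Hypothetical weights: how much each criterion matters to you.
WEIGHTS = {"genre": 3, "cast": 2, "era": 1}

# Hypothetical per-show scores on each criterion (1-10).
SCORES = {
    "Homicide": {"genre": 9, "cast": 9, "era": 8},
    "Dragnet":  {"genre": 8, "cast": 6, "era": 3},
}

def weighted_rank(scores, weights):
    """Tally weight * score across identical criteria, then sort best-first."""
    totals = {
        show: sum(weights[c] * vals[c] for c in weights)
        for show, vals in scores.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
```

With these made-up numbers Homicide tallies 53 to Dragnet’s 39; the point is that every option gets scored against the same criteria before you compare.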

Hypothesis Testing. You assert the truth of your hypothesis through supporting evidence, but you are always working with incomplete or questionable data, so you can never prove a hypothesis true; we accept it to be true until evidence surfaces that suggests it to be false (see the bias note above). Information becomes evidence when it is linked to a hypothesis, and evidence is valid once we’ve subjected it to questioning: where did the information come from? How plausible is it? How reliable is it?
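One lightweight way to apply this is an analysis-of-competing-hypotheses tally: score every piece of evidence against every hypothesis, then keep the hypothesis with the fewest inconsistencies rather than the most confirmations. The evidence items and hypotheses below are invented for illustration:

```python
# -1 = evidence inconsistent with the hypothesis, 0 = neutral, +1 = consistent.
MATRIX = {
    "code shared with older samples":     {"single team": +1, "open malware market": +1},
    "compile artifacts span time zones":  {"single team": -1, "open malware market": +1},
    "strings in two unrelated languages": {"single team": -1, "open malware market": 0},
}

def least_inconsistent(matrix):
    """ACH-style: the surviving hypothesis is the one with the fewest -1 cells."""
    misses = {}
    for scores in matrix.values():
        for hypothesis, s in scores.items():
            misses[hypothesis] = misses.get(hypothesis, 0) + (s == -1)
    return min(misses, key=misses.get)
```

Note that a -1 counts for far more than a +1: consistent evidence may fit many stories, but inconsistent evidence actively eliminates one.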

Devil’s Advocacy. Taking a contrary or opposing position from what is the accepted answer helps overcome biases and one-dimensional thinking. Devil’s advocacy seeks out new evidence to refute “what everybody knows,” including evidence that was disregarded by those who take the prevailing point of view.

This leads me to another point I alluded to earlier and that isn’t addressed in media coverage of malware analysis: what qualifications does your average reverse engineer have when it comes to drawing conclusions about geo-political-security issues? You don’t call a plumber to fix your fuse box. You don’t ask a diplomat about the latest developments in no-till farming. Why in the world would you take at face value what a reverse engineer says about anything except very specific, technical findings? I’m not saying people are not entitled to their opinions, but credibility counts if those opinions are going to have value.

So where are we?

  • There are no set or even widely accepted definitions related to malware  (e.g. what is “sophisticated” or “advanced”).
  • There is no widely understood or accepted baseline of what sort of technical, intellectual or actual capital is required to build malware.
  • Data you get out of code, through reverse engineering or from source, is not guaranteed to be accurate when it comes to issues of authorship or origin.
  • Malware analysts do not apply any analytic methodology in an attempt to confirm or refute their single-source findings.
  • Efforts to link data found in code to larger issues of geo-political importance are at best superficial.

Why is all of this important? Computer security issues are becoming an increasingly important factor in our lives. Not that everyone appreciates it, but look at where we have been and where we are headed. Just under 20 years ago few people in the US, much less the world, were online; now more people in the world get online via their phones than on a traditional computer. Cars use computers to drive themselves, and biological implants are controlled via Bluetooth. Neither of these new developments has meaningful security features built in, but no one would ever be interested in hacking insulin pumps or pacemakers, right?

Taking computer security threats seriously starts by putting serious thought and effort behind our research and conclusions. The government does not provide information like this to the public, so we rely on vendors and security companies (whose primary interest is profit) to do it for us. When that “analysis,” which is far from rigorous, is delivered to decision-makers who are used to dealing with conclusions that have been developed through a much more robust methodology, their decisions can have far-reaching negative consequences.

Sometimes a quick-and-dirty analysis is right, and as long as you’re OK with the fact that that is all most malware analysis is, fine. But if you’re planning on making serious decisions about the threat you face from cyberspace, you should really take the time and effort to ensure that your analysis has looked beyond what IDA shows and considered more diverse and far-reaching factors.


We’re Not Breaking Up Anything

A leading Senate critic of online surveillance wants the government to stop widespread spying on phone calls, texts and emails, saying the “digital dragnet” doesn’t make the country safer, and only hurts the U.S. economy.

What data is there to support such notions? That jobs have been lost in any significant numbers? That revenues for any of the associated enterprises are down dramatically based solely on recent revelations? Are there any metrics behind such claims besides the volume and length of press releases from privacy organizations/activists and NSA-haters?

I’m guessing the answer is “no.”

Tech executives and industry experts warned those revelations would hurt Silicon Valley companies by making consumers and business customers fearful that U.S. companies can’t protect sensitive data from government prying.

As executives from TJMaxx, Target, Home Depot, JP Morgan, Heartland Payment Systems, etc., etc. will testify, U.S. companies can’t protect sensitive data from anyone. I smell herring.

Some analysts estimated last year that U.S. tech companies could lose tens of billions of dollars in sales, particularly after European firms began marketing themselves as being more secure than U.S. competitors – or less vulnerable to legal demands from the U.S. government.

So “estimations” …from last year… not actual data…from today.

What’s the backup plan?

“The simplest outcome is that we’re going to end up breaking the Internet,” Schmidt said. “Because what’s going to happen is, governments will do bad laws of one kind or another, and they are eventually going to say, ‘We want our own Internet in our country because we want it to work our way, right? And we don’t want these NSA and other people in it.'”

The first rule of SIGINT Club is: going overseas is a help, not a hindrance, to collection.

The second rule of SIGINT Club is: if one man can build it, another man can break it.

Years ago, when asked by think tanks and futurists how I thought things were going to play out I thought Balkanization was the future too. But once I realized that people really didn’t care about security or privacy, I jumped from anger straight to acceptance. We’re not re-engineering the Internet to make it more secure or private. We’re not splitting it up. Ever heard of the steam roller called Internet of Things? Something you should all be aware of: it’s riding on the Internet. No one is disrupting this gravy train for the sake of security. I’m a security guy. Saying this is upsetting to me, but there is no meaningful indication that we’ve learned anything or are prepared to do anything different.

Cybersecurity month history lesson #1,283

Once again, it’s “cybersecurity awareness month” and once again we are reminded that there is nothing new under the sun:

The huge cyberattack on JPMorgan Chase that touched more than 83 million households and businesses was one of the most serious computer intrusions into an American corporation. But it could have been much worse.

Actually, if you go on to read the rest of it you know it probably was worse; there is simply no way for them to know otherwise. They say all the right words, and by “right” I mean technically correct but actually unverifiable.

The breadth of the attacks — and the lack of clarity about whether it was an effort to steal from accounts or to demonstrate that the hackers could penetrate even the best-protected American financial institutions — has left Washington intelligence officials and policy makers far more concerned than they have let on publicly. Some American officials speculate that the breach was intended to send a message to Wall Street and the United States about the vulnerability of the digital network of one of the world’s most important banking institutions.

Lesson number one when it comes to trying to assess the motivation of hackers: try not to hurt yourself. It’s easy to say ‘well, banks are where the money is’ (probably a good bet, more on that in a second) and it seems very deep when you try to link events like this to larger geo-political issues, but in the end we’re all guessing. Until you get someone in the dock, and can verify his story, it is all speculation…and a great big waste of time. No one tending to a patient with a sucking chest wound stops to ask “why” before treating the injury, lest they lose the patient.

Still, the recent attacks on the financial firms raise the possibility that the banks may not be up to the job of defending themselves. The attacks will also stoke questions about regulations governing when companies must inform regulators and their customers about a breach.

“It was a huge surprise that they were able to compromise a huge bank like JPMorgan,” said Al Pascual, a security analyst with Javelin Strategy and Research. “It scared the pants off many people.”

There is no such thing as ‘too big to hack.’ Massive enterprises get hacked all the time. If there is a difference between any of them it is most likely the size of the security budget. If you’re going to be surprised at something, be surprised at the scale of the thing and the time-line. This was always going to happen, and it will happen again; what needs to change is making sure it is identified and resolved in a much shorter time-frame.

As to the ‘why,’ like I said before, you can spend all day spinning your wheels about that. People look sideways at the fact that accounts were not siphoned dry. As I speculated earlier, things that are as good as money are often better than actual money. What’s better: stealing from someone once or stealing from them forever? This is what makes hacks against USIS really scary – and something people looking at this latest case should keep in mind.


Indictments In-schmightments

Indictments against Chinese officials for hacking into U.S. companies are a typical government move of confusing motion with action. What’s the point of indictments if the targets will never see the inside of a prison cell because they’ll never be tried because they’ll never be extradited?

“Well we have this paper and held a press conference, so…TIGER BLOOD!”

Indictments are a completely impractical move, designed to show some level of resolve, but likely to cost both the government and U.S. private industry more than has been anticipated. I do not doubt that someone has attempted to calculate just how expensive and painful the retaliation may be, but if we have learned anything in the last few years it is that such estimates are inevitably low-balled because we underestimate our adversaries and how pervasive technology has become.

It is acknowledged that the Chinese are widely and deeply embedded into computer systems in the U.S. For every intrusion we know about there are others that are unknown to us. We can warn and mitigate against damage or destruction in the case of the former, but we have no idea how painful if not crippling the latter may be. To paraphrase Mike Tyson: Everyone thinks they know how things will go down until they get punched in the face.

China is a very large market for U.S. technology (legitimately obtained). What happens when their government decides to not stroke checks to U.S. companies anymore? At what point do U.S. tech giants and the Chamber of Commerce start lobbying our government to stop being such hard-asses? China is not a monolith, but like any sufficiently large entity, once its momentum shifts, the impact is not trivial.

China is a serious perpetrator in this domain, but it is not the only one. Once again: we’re only focusing on China ref cyberspace because we’re focused on (possibly) fighting China in meat-space (someday). Notice that we’re not having this conversation in French.

China is going to react to these events, and it is going to go badly for us in a public way. What would have been a better play?

Start Swinging. If the government is standing squarely behind the idea that this sort of action should stop, it should stop talking and start fighting. We know how to fight secret wars and proxy wars; it’s what all the political re-treads trying to make a name in “cyber” did back in the day when our adversary was another country with a red flag. Put that legacy future thinking to good use for a change and figure out how to inflict pain without actually delivering knock-out punches (remember, in cyberspace you can deny everything).

Change the Game. The U.S. is one of the few countries that doesn’t use its national security capabilities to the benefit of private industry. It’s PRIVATE industry and they’re on their own, though we’ve been trying to make sure they compete on a level playing field. The idea that we’re going to bring about some kind of international norm in this regard is a pipe dream, so stop smoking: get government out of the fair play business and let companies compete internationally on par with their competitors.

“But Mike, that like, leads to bribes and stuff.”

That’s an ugly word, but actions that “facilitate” deals are pretty much how most of the rest of the world works. We can maintain this white-hat sense of dignity and continue to lose, or we can stop playing that game and come up with one that we can win.

Horse Head in Bed. If you have enough information to indict someone you have enough information to influence them without a big public scene. In the Godfather Don Corleone didn’t send a bunch of muscle to the Woltz studios to get Johnny Fontane his movie role, he did this instead. Wang Dong isn’t a rich international jet-setter, but he has a house or flat, a bank account, and a myriad of other things that can be touched. Is that going to change Chinese policy? No. Is putting a horse head in the beds of everyone in Unit 61398 going to influence policy? It might give them pause, which is more than is happening now because they think they can’t be touched.

We can influence Chinese behavior in any number of ways, but in over two decades of being involved in these issues I have yet to come across an administration that was prepared to go to blows over hacking. Hacking is what the government gets concerned about because there isn’t a shooting war going on. We have brought a knife to a…fight where our opponent could pull out any number of weapons more powerful than a knife. We’re not prepared for this.

You Were Promised Neither Security Nor Privacy

If you remember hearing the song Istanbul (Not Constantinople) on the radio the first time around, then you remember all the predictions about what life in the 21st century was supposed to be like. Of particular note was the prediction that we would use flying cars and jet packs to get around, among other awesome technological advances.

Recently someone made the comment online (for the life of me I can’t find it now) that goes something like this: If you are the children of the people who were promised jet packs you should not be disappointed because you were not promised these things, you were promised life as depicted in Snow Crash or True Names.

Generation X for the win!

The amateur interpretation of leaked NSA documents has sparked this debate about how governments – the U.S. in particular – are undermining if not destroying the security and privacy of the ‘Net. We need no less than a “Magna Carta” to protect us, which would be a great idea if we were actually being oppressed to such a degree that our liberties were being infringed upon by a despot and his arbitrary whims. For those not keeping track: the internet is not a person, nor is it run by DIRNSA.

I don’t claim to have been there at the beginning, but in the early-to-mid 90s my first exposure to the internet was…stereotypical (I am no candidate for sainthood). I knew what it took to protect global computer networks because that was my day job for the government; accessing the ‘Net (or BBSes) at home was basically the wild west. There was no sheriff or fire department if things got dangerous or you got robbed. Everyone knew this, no one was complaining, and no one expected anything more.

What would become the commercial internet went from warez and naughty ASCII images to house hunting, banking, news, and keeping up with your family and friends. Now it made sense to have some kind of security mechanisms in place because, just like in meat-space, there are some things you want people to know and other things you do not. But the police didn’t do that for you, you entrusted that to the people who were offering up the service in cyberspace, again, just like you do in the real world.

But did those companies really have an incentive to secure your information or maintain your privacy? Not in any meaningful way. For one, security is expensive and customers pay for functionality, not security. It actually makes more business sense to do the minimum necessary for security because on the off chance that there is a breach, you can make up any losses on the backs of your customers (discreetly, of course).

Secondly, your data couldn’t be too secure because there was value in knowing who you are, what you liked, what you did, and who you talked to. The money you paid for your software license was just one revenue stream; a company could make even more money using and/or selling your information and online habits. Such practices manifest themselves in things like spam email and targeted ads on web sites; the people who were promised jet packs know it by another name: junk mail.

Let’s be clear: the only people who have really cared about network security are the military; everyone else is in this to make a buck (flowery, feel-good, kumbaya language notwithstanding). Commercial concerns operating online care about your privacy until it impacts their money.

Is weakening the security of a privately owned software product a crime? No. It makes crypto nerds really, really angry, but it’s not illegal. Imitating a popular social networking site to gain access to systems owned by terrorists is what an intelligence agency operating online should do (they don’t actually take over THE Facebook site, for everyone with a reading comprehension problem). Co-opting botnets? We ought to be applauding a move like that, not lambasting them.

There is something to the idea that introducing weaknesses into programs and algorithms puts more people than just terrorists and criminals at risk, but in order for that to be a realistic concern you would have to have some kind of evidence that the security mechanisms available in products today are an adequate defense against malicious attack, and they’re not. What passes for “security” in most code is laughable. Have none of the people raising this concern heard of Pwn2Own? Or that there is a global market for 0-days, and the US government is only one of many, many customers?

People who are lamenting the actions of intelligence agencies talk like the internet is this free natural resource that belongs to all and come hold my hand and sing the Coca Cola song… I’m sure the Verizons of the world would be surprised to hear that. Free WiFi at the coffee shop? It’s only free to you because the store is paying for it (or not, because you didn’t notice the $.05 across the board price increase on coffee and muffins when the router was installed).

Talking about the ‘Net as a human right doesn’t make it so. Just like claiming to be a whistleblower doesn’t make you one, or claiming something is unconstitutional when the nine people specifically put in place to determine such things haven’t ruled on the issue. You can still live your life without using TCP/IP or HTTP, you just don’t want to.

Ascribing nefarious intent to government action – in particular the NSA as depicted in Enemy of the State – displays a level of ignorance about how government – in particular intelligence agencies – actually works. The public health analog is useful in some regards, but it breaks down when you start talking about how government actions online are akin to putting civilians at risk in the real world. Our government’s number one responsibility is keeping you safe; that it has the capability to inflict harm on massive numbers of people does not mean they will use it, and it most certainly does not mean they’ll use it on YOU. To think otherwise is simply movie-plot-thinking (he said, with a hint of irony).

Surveillance Protests: Get Serious or Go Home

In the US of A, if you don’t like the fact that your government may have collected data about your phone calls and emails, you can do something about it without fear of being thrown in a Gulag. Unfortunately, the actions being proposed by those who take offense at this kind of thing aren’t the kind of “something” that is going to make a difference.

Just as a reminder:  The Executive Branch (where the NSA sits) carries out national policy; the Legislative Branch funds the Executive Branch agencies that carry out national policy; the Judicial Branch makes sure the other two branches aren’t breaking the law.

None of the aforementioned organizations care about your petition, or your march, or your online protest.

If you want to bring about political change you need to get out the vote. If you want to get out the vote you need to spend money. A lot of it. As I’ve stated before: your average citizen cares more about just about anything than they do things-cyber. With apologies to Benjamin Franklin, the only thing that is sure to get people’s attention is sex and taxes…”cyber” is an also-ran politically.

Don’t like the NSA maybe capturing your meta-data? Gather up enough friends, pool your money, and hire a lobbyist. Just so you know: A couple dozen mega defense contractors that make billions of dollars a year supporting the NSA and its sister organizations are your competition.

I’m not saying it is right, I’m not saying it is fair, I’m just saying that’s the way it is. If you want to win the game you have to play; anything less is a waste of time.



Sam and His (not so) Crazy Ramblings

If you haven’t already done so, start here.

Go ahead, I’ll wait.

Sam and I don’t go way back, but he’s easily the most intellectual and yet accessible thinker on these sorts of issues, especially as they interact with other disciplines. While he can’t draw from decades of experience behind closed doors, you’d never know it based on his grasp of the issues.

Having said that, there are some things that only a grizzled old veteran of the intelligence wars – actual and bureaucratic – can shed light on, hence the following response…

1) NSA will be half the size it is today.

Why I think he’s wrong.

It takes a LOT to reduce the size of a federal agency; even more so an intelligence agency. I’ve been in the IC through fat times and lean, cold war, hot wars, peace dividend and war on terror and I’ve never seen an agency shrink in any significant way. It might not grow as fast as expected, it might shrink somewhat through natural attrition, but to say “half the size” is basically nonsense from a historical perspective.

Where I think he might be on to something.

The NSA is really two outfits in one: an intelligence agency and a security agency. They can complement each other but they don’t have to be under the same roof. In fact pulling the security agency out of NSA, making it a separate entity, and retooling it into an agency that supports security at both the national and individual level would go a long way in both winning back public trust, as well as actually making it harder for malicious outsiders to hurt us.

2) NSA becomes a contractor free agency.

Why I think he’s wrong.

Go into any intelligence agency today and you have four categories of people: managers, a thin slice of very senior subject matter experts, a lot of very junior people trying to be experts, and, sandwiched in between, a layer of mid-careerists who, when they’re not jockeying for the senior SME slot once the geezer in it dies, act as project managers or COTRs for various efforts that are carried out by contractors. The IC can’t function without contractors because Congress won’t allow the IC to hire more employees. They won’t allow them to hire more employees, but at the same time they won’t stand for a reduction in the number of missions that need to be executed. The only solution to that problem is contractors.

The IC also cannot hire enough technical experts in enough subjects to keep pace with the demands of their missions. The whole point of contractors is to bring them on to address new or advanced issue X, and then leave (or reduce their presence) once things are in hand. What we have instead are perpetual one-base-plus-four-option-year contracts. Serving as a federal employee for 30 years, retiring, and then coming back as a contractor to work on the same mission for another decade or more isn’t unusual; it’s standard practice. The same number of missions and the same changes in technology mean contractors are here to stay.

Where I think he’s on to something.

Contracts need to be short(er)-term efforts that are focused on hard technical problems, with the goal of getting things to the point where more generalist feds can take over. The size of contracts needs to be reduced. Hundreds of millions of dollars doesn’t buy more success, it just buys more butts in seats.

3) Elements of NSA working toward national infrastructure security are split off.

No argument.

4) NSA and CyberCom split

The sooner the better.

5) NSA has to invest in privacy preserving security as penance

See #1 above.

6) Individuals may find themselves under congressional investigation

Why I think he’s wrong.

NSA abuses, real or imagined, intentional or unintentional, are a fringe issue. People in the crypto and privacy sub-culture care, some people in computer and information security care, people who have no idea how SIGINT works but are happy to have yet another reason to hate the gov’t care…but the vast majority of everyone else doesn’t. Outside of New York, Washington DC, and a few other major cities, I challenge you to walk out into the street and find someone who has heard of this issue in any more than a passing sense. Then find someone so mad about it they’re going to take political action. Taxes, social security, health care: that’s what the majority of people in this country care about. NSA Internet surveillance of the ’10s is not NSA (and CIA and FBI) surveillance of people in the ’70s.

Where I think he’s on to something.

If intelligence agencies are good at one thing, it’s burying bodies. Is anyone going to find themselves in front of Church Committee 2.0? No. Are the people who were leaning the furthest in the foxhole on efforts that were exposed going to find themselves asked to quietly find their way out the door? Absolutely. This is how it works: the seniors thank and then shepherd those who pushed the envelope to the side, and those who take their place know exactly where the line is drawn and stay weeeellll behind it. They communicate that to the generations that are coming up, and that buys us a few decades of sailing on a more even keel…

…until the next catastrophic surprise…