The Global Ungoverned Area

There are places on this planet where good, civilized people simply do not voluntarily go, or willingly stay. What elected governments do in safer and more developed parts of the world is carried out in these areas by despots and militias, often at terrible cost to those who have nowhere else to go and no means to leave if they did.

Life online is not unlike life in these ungoverned areas: anyone with the skill and the will is a potential warlord governing their own illicit enterprise, basking in the spoils garnered from the misery of a mass of unfortunates. Who is to stop them? A relative handful of government entities, each with competing agendas, varying levels of knowledge, skills, and resources, none of whom can move fast enough, far enough, or with enough vigor to respond in kind.

Reaping the whirlwind of apathy

Outside of government, computer security is rarely something anyone asks for, except in certain edge cases. Security is a burden, a cost center. Consumers want functionality, and functionality always trumps security, so much so that most people do not seem to care when security fails. People want an effective solution to their problem. If it also happens not to leak personal or financial data like a sieve, great, but if it does, that is rarely a deal-breaker.

At the start of the PC age we couldn’t wait to put a computer on every desk. With the advent of the World Wide Web, we rushed headlong into putting anything and everything online. Today you can play the most trivial game or fulfill your basic needs of food, shelter, and clothing online, all at the push of a button. The downside to cyber-ing everything without adequate consideration of security? Epic security failures of all sorts.

Now we stand at the dawn of the age of the Internet of Things. Computers have gone from desktops to laptops to handhelds to wearables and now implantables. Again we can’t wait to employ the technology, and again we can’t be bothered to secure it.

How things are done

What is our response? Laws and treaties, or at least proposals for same, that decant old approaches into new digital bottles. We decided drugs and poverty were bad, so we declared “war” on them, with dismal results. This sort of thinking is how we get the Wassenaar Arrangement applied to cybersecurity: because that’s what people who mean well and are trained in “how things are done” do. But there are a couple of problems with treating cyberspace like 17th-century Europe:

  • Even when most people agree on most things, it only takes one issue to bring the whole thing crashing down.
  • The most well-intentioned efforts to deter bad behavior are useless if you cannot enforce the rules, and given the rate at which we incarcerate bad guys, it is clear we cannot enforce them in any meaningful way at a scale that matters.
  • While all the diplomats of all the governments of the world may agree to follow certain rules, the world’s intelligence organs will continue to use all the tools at their disposal to accomplish their missions, and that includes cyber ones.

This is not to say that such efforts are entirely useless (if you happen to arrest someone you want to have a lot of books to throw at them), just that the level of effort put forth is disproportionate to the impact that it will have on life online. Who is invited to these sorts of discussions? Governments. Who causes the most trouble online? Non-state actors.

Roads less traveled

I am not entirely dismissive of political-diplomatic efforts to improve the security and safety of cyberspace, merely unenthusiastic. Just because “that’s how things are done” doesn’t mean that’s what’s going to get us where we need to be. What it shows is inflexible thinking, and an unwillingness to accept reality. If we’re going to expend time and energy on efforts to civilize cyberspace, let’s do things that might actually work in our lifetimes.

  • Practical diplomacy. We’re never going to get every nation on the same page, not even for something as heinous as child porn. This means bilateral agreements. Yes, it is more work to close and manage such agreements, but it beats hoping for some “universal” agreement on norms that will never come.
  • Soft(er) power. No one wants another 9/11, but what we put in place to reduce that risk isn’t a model to emulate. The private enterprises that supply us with the Internet – and computer technology in general – will fight regulation, but they will respond to economic incentives.
  • The human factor. It’s rare to see trash along a highway median, and our rivers no longer catch fire. Why? In large part because of the “crying Indian” ad campaign. A concerted effort to change public opinion can in fact change behavior (and let’s face it: people are the root of the problem).

Every week a new breach, a new “wake-up call,” yet there is simply not sufficient demand for a safer and more secure cyberspace. The impact of malicious activity online is greater than zero, but not catastrophic, which makes pursuing grandiose solutions a waste of cycles that could be put to better use achieving incremental gains (see ‘boil the ocean’).

Once we started selling pet food and porn online, it stopped being the “information superhighway” and became a demolition derby track. The sooner we recognize it for what it is the sooner we can start to come up with ideas and courses of action more likely to be effective.

/* Originally posted at Modern Warfare blog at CSO Online */

Cyber War: The Fastest Way to Improve Cybersecurity?

For all the benefits IT in general and the Internet specifically have given us, they have also introduced significant risks to our well-being and way of life. Yet cybersecurity is still not a priority for a majority of people and organizations. No amount of warnings about the risks associated with poor cybersecurity has driven significant change. Neither have real-world incidents that get worse every year.

The lack of security in technology is largely a question of economics: people want functional things, not secure things, so that’s what manufacturers and coders produce. We express shock after weaknesses are exposed, and then forget what happened when the next shiny thing comes along. Security problems become particularly disconcerting when we start talking about the Internet of Things, whose devices are not just conveniences; they can be essential to one’s well-being.

To be clear: war is a terrible thing. But war is also the mother of considerable ad hoc innovation and inventions that have a wide impact long after the shooting stops. War forces us to make those hard decisions we kept putting off because we were so busy “crushing” and “disrupting” everything. It forces us to re-evaluate what we consider important, like a reliable AND secure grid, like a pacemaker that works AND cannot be trivially hacked. Some of the positive things we might expect to get out of a cyberwar include:

  • A true understanding of how much we rely on IT in general and the Internet specifically. You don’t know what you’ve got till it’s gone, so the song says, and that’s certainly true of IT. You know IT impacts a great deal of your life, but almost no one understands how far it all goes. The last 20 years have basically been us plugging computers into networks and crossing our fingers. Risk? We have no idea.
  • A meaningful appreciation for the importance of security. Today, insecurity is an inconvenience. It is not entirely victimless, but increasingly it does not automatically make one a victim. It is a fine, a temporary dip in share price. In war, insecurity means death.
  • The importance of resilience. We are making dumb things ‘smart’ at an unprecedented rate. Left in the dust is the knowledge required to operate sans high technology in the wake of an attack. If you’re pushing 50 or older, you remember how to operate without ATMs, GrubHub, and GPS. Everyone else is literally going to be broke, hungry, and lost in the woods.
  • The creation of practical, effective, scalable solutions. Need to arm a resistance force quickly and cheaply? No problem. Need enough troops to fight in two theaters at opposite ends of the globe? No problem. Need ships tomorrow to get those men and materiel to the fight? No problem. When it has to be done, you find a way.
  • The creation of new opportunities for growth. When you’re tending your victory garden after a 12 hour shift in the ammo plant, or picking up bricks from what used to be your home in Dresden, it’s hard to imagine a world of prosperity. But after war comes a post-war boom. No one asked for the PC, cell phone, or iPod, yet all have impacted our lives and the economy in significant ways. There is no reason to think that the same thing won’t happen again, we just have a hard time conceiving it at this point in time.

In a cyberwar there will be casualties. Perhaps not directly, as you see in a bombing campaign, but in the impacts associated with a technologically advanced nation suddenly thrown back into the industrial (or worse) age (think Puerto Rico post-Hurricane Maria). The pain will be felt most severely in the cohorts that pose the greatest risk to internal stability. If you’re used to standing in line for everything, the inability to use IT is not that big a deal. If you’re the nouveau riche of a kleptocracy – or a member of a massive new middle class – and suddenly you’re back with the proles, you’re not going to be happy, and you’re going to question the legitimacy of whomever purports to be in charge, yet can’t keep the lights on or supply potable water.

Change as driven by conflict is a provocative thought experiment, and certainly a worst-case scenario. The most likely situation is the status quo: breaches, fraud, denial, and disruption. If we reassess our relationship with cybersecurity it will certainly be via tragedy, but not necessarily war. Given how we responded to security failings 16 years ago, however, it is unclear whether those changes will be effective, much less ideal.

/* Originally published in CSOonline – Modern Warfare blog */

Intelligence Agencies Are Not Here to Defend Your Enterprise

If there is a potentially dangerous side-effect to the discovery of a set of 0-days allegedly belonging to the NSA it is the dissemination of the idea, and credulous belief of same, that intelligence agencies should place the security of the Internet – and commercial concerns that use it – above their actual missions. It displays an all-too familiar ignorance of why intelligence agencies exist and how they operate. Before you get back to rending your hair and gnashing your teeth, let’s keep a few things in mind.

  1. Intelligence agencies exist to gather information, analyze it, and deliver their findings to policymakers so that they can make decisions about how to deal with threats to the nation. Period. You can, and agencies often do, dress this up and expand on it in order to motivate the workforce, or more likely grab more money and authority, but when it comes down to it, stealing and making sense of other people’s information is the job. Doing code reviews and QA for Cisco is not the mission.
  2. The one element in the intelligence community that was charged with supporting defense is no more. I didn’t like it then, and it seems pretty damn foolish now, but there you are, all in the name of “agility.” NSA’s IAD had the potential to do the things that all the security and privacy pundits imagine should be done for the private sector, but their job was still keeping Uncle Sam secure, not Wal-Mart.
  3. The VEP is an exercise in optics. “Of course we’ll cooperate with your vulnerability release program,” says every inter-agency representative. “As long as it doesn’t interfere with our mission,” they whisper up their sleeve. Remember in every spy movie you ever saw, how the spooks briefed Congress on all the things, but not really? That.
  4. 0-days are only 0-days as far as you know. What one can make another can undo – and so can someone else. The idea that someone, somewhere, working for someone else’s intelligence agency might not also be doing vulnerability research, uncovering exploitable conditions in popular networking products, and using same in the furtherance of their national security goals is a special kind of hubris.
  5. Cyber security simply is not the issue we think it is. That we do any of this cyber stuff is only (largely) to support more traditional instruments and exercises of national power. Cyber doesn’t kill. Airstrikes kill. Snipers kill. Mortars kill. Policymakers are still far and away concerned with things that go ‘boom,’ not bytes. In case you haven’t been paying attention for the past 15 years, we’ve had actual, shooting wars to deal with, not cyber war.

I have spent most of my career being a defender (in and out of several different intelligence agencies). I understand the frustration, but blaming intelligence agencies for doing their job is not helpful. If you like living in the land of the free, it’s important to note that rules that would preclude the NSA from doing what it does would merely handicap us; no one we consider a threat is going to stop looking for and exploiting holes. The SVR or MSS do not care about your amicus brief. The Internet is an important part of our world, and we should all be concerned about its operational well-being, but the way to reduce the chance that someone can crack your computer code is to write better code, and test it faster than the spooks can.

The Airborne Shuffle in Cyberspace

I did my fair share supporting and helping develop its predecessor, but I have no special insights into what is going on at CYBERCOM today. I am loath to criticize when I don’t know all the details, but still I see reports like this and scratch my head and wonder: why is anyone surprised?

Focus. If you have to wake up early to do an hour of PT, get diverted afterwards to pee in a cup, finally get to work and develop a good head of steam, only to leave early to go to the arms room and spend an hour cleaning a rifle, you’re not going to develop a world-class capability in any meaningful time-frame. Not in this domain. Not to mention the fact that after about two years whatever talent you’ve managed to develop rotates out and you have to start all over again.

Speed. If you have to call a meeting to call a meeting, and the actual meeting can’t take place for two weeks because everyone who needs to be there is involved in some variation of the distractions noted above, or TDY, you have no chance. It also doesn’t help that when you manage to have the meeting you are forced to delay decisions because of some minutia. You’re not just behind the power curve, you’re running in the opposite direction.

Agility. If your business model is to train generalists and buy your technology…over the course of several years…you are going to have a hard time going up against people with deep expertise who can create their own capabilities in days. Do we need a reminder of how effective sub-peer adversaries can be against cutting-edge military technology? You know what the people attacking SWIFT or major defense contractors aren’t doing? Standing up a PMO.

The procurement and use of tanks or aircraft carriers is limited to the military in meat-space, but in cyberspace anyone can develop or acquire weapons and project power. Globally. If you’re not taking this into consideration you’re basically the 18th Pomeranians. Absent radical changes no government hierarchy is going to out-perform or out-maneuver such adversaries, but it may be possible to close the gaps to some degree.

Focus. You should not lower standards for general purpose military skills, but in a CONUS, office environment you can exercise more control over how that training is performed and scheduled. Every Marine a rifleman, I get it, but shooting wars are relatively rare; the digital conflict has been engaged for decades (and if your cyber troops are hearing shots fired in anger, you’ve probably already lost).

Speed. Hackers don’t hold meetings, they open chat sessions. Their communication with their peers and partners is more or less constant. If you’re used to calling a formation to deliver your messages orally, you’re going to have to get used to not doing that. Uncomfortable with being glued to a screen – desktop or handheld? You’re probably ill-suited to operate in this domain.

Agility. You are never going to replicate ‘silicon valley’ in the DOD without completely disrupting DOD culture. The latter is a zero-defect environment, whereas the former considers failures to be a necessary part of producing excellence. You cannot hold company-level command for 15 years because it’s the job you’re best suited to; you can be one of the world’s best reverse engineers for as long as you want to be. What is “normal” should mean nothing inside an outfit like CYBERCOM.

Additional factors to consider…

Homestead. If you get assigned to CYBERCOM you’re there for at least 10 years. That’s about 20 dog years from the perspective of the domain and related technology experience, and it will be invaluable if you are serious about effective performance on the battlefield.

Lower Rank/Greater Impact. Cyberspace is where the ‘strategic corporal’ is going to play an out-sized role. At any given moment the commander – once their intent is made clear – is the least important person in the room.

Bias for Action. In meat-space if you pull the trigger you cannot call back the bullet. If your aim is true your target dies. In cyberspace your bullets don’t have to be fatal. The effect need only be temporary. We can and should be doing far more than we apparently are, because I guarantee our adversaries are.

Malware Analysis: The Danger of Connecting the Dots

The findings of a lot of malware analysis are not in fact “analysis;” they’re a collection of data points linked together by assumptions whose validity and credibility have not been evaluated. This lack of analytic methodology could prove exceedingly problematic for those charged with making decisions about cyber security. If you cannot trust your analysis, how are you supposed to make sound cyber security decisions?

Question: If I give you a malware binary to reverse engineer, what do you see? Think about your answer for a minute and then read on. We’ll revisit this shortly.

It is accepted as conventional wisdom that Stuxnet is related to Duqu, which is in turn related to Flame. All of these malware have been described as “sophisticated” and “advanced,” so much so that they must be the work of a nation-state (such work presumably requiring large amounts of time, lots of skilled people, and code written for purposes beyond simply siphoning off other people’s cash). The claim that the US government is behind Stuxnet has consequently led people to assume that all related code is US sponsored, funded, or otherwise backed.

Except for the claim of authorship, all of the aforementioned data points come from people who reverse engineer malware binaries. These are technically smart people who practice an arcane and difficult art, but what credibility does that give them beyond their domain? In our quest for answers do we give too much weight to the conclusions of those with discrete technical expertise and fail to approach the problem with sufficient depth and objectivity?

Let’s take each of these claims in turn.

Are there similarities if not outright sharing of code in Stuxnet, Duqu and Flame? Yes. Does that mean the same people wrote them all? Do you believe there is a global marketplace where malware is created and sold? Do you believe the people who operate in that marketplace collaborate? Do you believe that the principle of “code reuse” is alive and well? If you answered “yes” to any of these questions then a single source of “advanced” malware cannot be your only valid conclusion.

Is the code in Stuxnet, etc. “sophisticated?” Define “sophisticated” in the context of malware. Forget about malware and try to define “sophisticated” in the context of software, period. Is Excel more sophisticated than Photoshop? When words have no hard and widely-accepted definitions, they can mean whatever you want them to mean, which means they have no meaning at all.

Can only a nation-state produce such code? How many government-funded software projects are you aware of that work as advertised? You can probably count on one hand and have fingers left over. But now, somehow, when it comes to malware, suddenly we’re to believe that the government has gotten its shit together?

“But Mike, these are, like, weapons. Super secret stuff. The government is really good at that.”

Really? Have you ever heard of the Osprey? Or the F-35? Or the Crusader? Or the JTRS? Or Land Warrior? Groundbreaker? Trailblazer? Virtual Case File?

I’m not trying to trivialize the issues associated with large and complex technology projects; my point is that a government program to build malware would be subject to the same issues and consequently be no better – and quite possibly worse – than any non-governmental effort to do the same thing. Cyber crime statistics – inflated though they may be – tell us that governments are not the only entities that can and do fund malware development.

“But Mike, the government contracts out most of its technology work. Why couldn’t they contract out the building of digital weapons?”

They very well could, but then what does that tell us? It tells us that if you wanted to build the best malware you have to go on the open market (read: people who may not care who they’re working for, as long as their money is good).

As far as the US government “admitting” that they were behind Stuxnet: they did no such thing. A reporter, an author of a book, says that a government official told him that the US was behind Stuxnet. Neither the President of the United States, nor the Secretary of Defense, nor the Directors of the CIA or NSA got up in front of a camera and said, “That’s us!” which is what an admission would be. Let me reiterate: a guy who has a political agenda told a guy who wants to sell books that the US was behind Stuxnet.

It’s easy to believe the US is behind Stuxnet, as much as it is to believe Israel is behind it. You know who else doesn’t want countries that don’t have nuclear weapons to get them? Almost every country in the world, including those that currently have nuclear weapons. You know who else might not want Iran – a majority-Shia country – to have an atomic bomb? Roughly 30 Sunni countries for starters, most of which could afford to go onto the previously mentioned open market and pay for malware development. What? You hadn’t thought about the non-proliferation treaty or that Sunni–Shia thing? Yeah, neither has anyone working for Kaspersky, Symantec, F-Secure, etc., etc.

Back to the question I asked earlier: What do you see when you reverse engineer a binary?

Answer: Exactly what the author wants you to see.

  • I want you to see words in a language that would throw suspicion on someone else.
  • I want you to see that my code was compiled in a particular foreign language (even though I only read and/or write in a totally different language).
  • I want you to see certain comments or coding styles that are the same or similar to someone else’s (because I reuse other people’s code).
  • I want you to see data about compilation date/time, PDB file path, etc., which could lead you to draw erroneous conclusions but which have no bearing on malware behavior or capability.
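To make concrete how cheap that metadata is to forge: the “compile time” in a Windows PE binary is just four attacker-controlled bytes in the COFF header. A minimal sketch (operating on a synthetic header built in-code, not a real executable) of how trivially it can be read and rewritten:

```python
import struct

def read_timestamp(pe: bytes) -> int:
    """Read the COFF TimeDateStamp from a PE image."""
    # e_lfanew (offset of the "PE\0\0" signature) lives at offset 0x3C
    pe_off = struct.unpack_from("<I", pe, 0x3C)[0]
    assert pe[pe_off:pe_off + 4] == b"PE\x00\x00"
    # COFF header: Machine(2) + NumberOfSections(2), then TimeDateStamp(4)
    return struct.unpack_from("<I", pe, pe_off + 8)[0]

def forge_timestamp(pe: bytes, when: int) -> bytes:
    """Return a copy of the image claiming an arbitrary 'compile time'."""
    pe_off = struct.unpack_from("<I", pe, 0x3C)[0]
    patched = bytearray(pe)
    struct.pack_into("<I", patched, pe_off + 8, when)
    return bytes(patched)

# Synthetic, minimal stand-in for a PE header (not a runnable executable).
header = bytearray(0x60)
header[0:2] = b"MZ"
struct.pack_into("<I", header, 0x3C, 0x40)        # e_lfanew -> 0x40
header[0x40:0x44] = b"PE\x00\x00"
struct.pack_into("<I", header, 0x48, 1339459200)  # original "compile time"

forged = forge_timestamp(bytes(header), 0)        # now claims a 1970 build date
```

Any conclusion built on that field (work hours, time zone, campaign timeline) rests on a value the author could set to anything before shipping.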

Contrary to post-9/11-conventional wisdom, good analysis is not dot-connecting. That’s part of the process, but it’s not the whole or only process. Good analysis has methodology behind it, as well as a fair dose of experience or exposure to other disciplines that comes into play. Most of all, whenever possible, there are multiple, verifiable, meaningful data points to help back up your assertions. Let me give you an example.

I used to work with a guy we’ll call “Luke.” Luke was a firm believer in the value of a given type of data. He thought it was infallible. So strong were Luke’s convictions about the findings he produced using only this particular type of data that he would draw conclusions about the world that flew in the face of what the rest of us like to call “reality.” If Luke’s assertions were true, WW III would have been triggered, but as many, many other sources of data were able to point out, Luke was wrong.

There was a reason why Luke was the oldest junior analyst in the whole department.

Luke, like a lot of people, fell victim to a number of problems, fallacies, and mental traps that arise when we attempt to draw conclusions from data. This is not an exhaustive list, but it is illustrative of what I mean.

Focus Isn’t All That. There is a misconception that narrow and intense focus leads to better conclusions. The opposite tends to be true: the more you focus on a specific problem, the less likely you are to think clearly and objectively. Because you just “know” certain things are true, you feel comfortable taking shortcuts to reach your conclusion, which in turn simply drives you further away from the truth.

I’ve Seen This Before. We give too much credence to patterns. When you see the same or very similar events taking place or tactics used, your natural reaction is to assume that what is happening now is what happened in the past. You discount other options because it’s “history repeating itself.”

The Shoehorn Effect. We don’t like questions that don’t have answers. Everything has to have an explanation, regardless of whether or not the explanation is actually true. When you cannot come up with an explanation that makes sense to you, you will fit the answer to match the question.

Predisposition. We allow our biases to drive us to seek out data that supports our conclusions and discount data that refutes it.

Emotion. You cannot discount the emotional element involved in drawing conclusions, especially if your reputation is riding on the result. Emotions about a given decision can run so high that they overcome your ability to think clearly. Rationalism goes out the window when your gut (or your greed) overrides your brain.

How can we overcome the aforementioned flaws? There are a range of methodologies analysts use to improve objectivity and criticality. These are by no means exhaustive, but they give you an idea of the kind of effort that goes into serious analytic efforts.

Weighted Ranking. It may not seem obvious to you, but when presented with two or more choices, you choose X over Y based on the merits of X, Y (and/or Z). Ranking is instinctual and therefore often unconscious. The problem with most informal efforts at ranking is that they are one-dimensional.

“Why do you like the TV show Homicide and not Dragnet?”

“Well, I like cop shows but I don’t like black-and-white shows.”

“OK, you realize those are two different things you’re comparing?”

A proper ranking means you’re comparing one thing against another using the same criteria. Using our example, you could compare TV shows based on genre, sub-genre, country of origin, actors, etc., rank them according to preference in each category, and then tally the results. Do this with TV shows – or any problem – and you’ll see that your initial, instinctive results can be quite different from your weighted rankings.
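The tally is simple enough to sketch in a few lines. This is a toy illustration of the technique, not a real analytic tool, and the criteria, weights, and 1–5 scores below are all invented:

```python
def weighted_rank(scores, weights):
    """Rank items by the weighted sum of their per-criterion scores."""
    totals = {
        item: sum(weights[c] * s for c, s in crit.items())
        for item, crit in scores.items()
    }
    # Highest weighted total first
    return sorted(totals, key=totals.get, reverse=True)

# Hypothetical scores for the TV-show example: same criteria for every item.
weights = {"genre": 3, "era": 1, "cast": 2}
scores = {
    "Homicide": {"genre": 5, "era": 4, "cast": 4},   # 27 weighted points
    "Dragnet":  {"genre": 5, "era": 1, "cast": 3},   # 22 weighted points
}
ranking = weighted_rank(scores, weights)
```

The point of the exercise is the discipline: every item is scored on every criterion, so you cannot smuggle in an apples-to-oranges comparison the way the informal "cop shows vs. black-and-white shows" answer does.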

Hypothesis Testing. You assert the truth of your hypothesis through supporting evidence, but you are always working with incomplete or questionable data, so you can never prove a hypothesis true; we accept it as true until evidence surfaces that suggests it is false (see the bias note above). Information becomes evidence when it is linked to a hypothesis, and evidence is valid once we’ve subjected it to questioning: Where did the information come from? How plausible is it? How reliable is it?
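One way to make that discipline concrete is an analysis-of-competing-hypotheses-style tally: each piece of evidence is weighted by the reliability of its source, and hypotheses are judged by how little evidence contradicts them rather than how much supports them. A toy sketch, with the hypotheses, reliability values, and consistency scores all invented for illustration:

```python
def least_inconsistent(hypotheses, evidence):
    """ACH-style scoring: tally weighted inconsistency per hypothesis.
    The hypothesis with the LEAST evidence against it survives (for now)."""
    against = {h: 0.0 for h in hypotheses}
    for item in evidence:
        for h in hypotheses:
            if item["consistency"][h] < 0:            # this item contradicts h
                against[h] += item["reliability"]     # weighted by source quality
    return min(against, key=against.get)

# Invented example: competing explanations for a malware sample's origin.
hypotheses = ["state-sponsored", "criminal group"]
evidence = [
    # +1 = consistent with hypothesis, -1 = inconsistent
    {"reliability": 0.9, "consistency": {"state-sponsored": +1, "criminal group": -1}},
    {"reliability": 0.3, "consistency": {"state-sponsored": -1, "criminal group": +1}},
]
survivor = least_inconsistent(hypotheses, evidence)
```

Note what the method forces: low-reliability evidence (a forgeable compile timestamp, say) cannot kill a hypothesis on its own, and the surviving hypothesis is still only "not yet refuted," never proven.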

Devil’s Advocacy. Taking a contrary or opposing position from what is the accepted answer helps overcome biases and one-dimensional thinking. Devil’s advocacy seeks out new evidence to refute “what everybody knows,” including evidence that was disregarded by those who take the prevailing point of view.

This leads me to another point I alluded to earlier and that isn’t addressed in media coverage of malware analysis: what qualifications does your average reverse engineer have when it comes to drawing conclusions about geo-political-security issues? You don’t call a plumber to fix your fuse box. You don’t ask a diplomat about the latest developments in no-till farming. Why in the world would you take at face value what a reverse engineer says about anything except very specific, technical findings? I’m not saying people are not entitled to their opinions, but credibility counts if those opinions are going to have value.

So where are we?

  • There are no set or even widely accepted definitions related to malware (e.g., what is “sophisticated” or “advanced”).
  • There is no widely understood or accepted baseline of what sort of technical, intellectual, or actual capital is required to build malware.
  • Data you get out of code, through reverse engineering or from source, is not guaranteed to be accurate when it comes to issues of authorship or origin.
  • Malware analysts do not apply any analytic methodology in an attempt to confirm or refute their single-source findings.
  • Efforts to link data found in code to larger issues of geo-political importance are at best superficial.

Why is all of this important? Computer security issues are becoming an increasingly important factor in our lives. Not that everyone appreciates it, but look at where we have been and where we are headed. Just under 20 years ago few people in the US, much less the world, were online; now more people in the world get online via their phones than on a traditional computer. Cars use computers to drive themselves, and biological implants are controlled via Bluetooth. Neither of these new developments has meaningful security features built in, but no one would ever be interested in hacking insulin pumps or pacemakers, right?

Taking computer security threats seriously starts by putting serious thought and effort behind our research and conclusions. The government does not provide information like this to the public, so we rely on vendors and security companies (whose primary interest is profit) to do it for us. When that “analysis,” which is far from rigorous, is delivered to decision-makers who are used to dealing with conclusions developed through a much more robust methodology, their decisions can have far-reaching negative consequences.

Sometimes a quick-and-dirty analysis is right, and as long as you’re OK with the fact that that is all most malware analysis is, fine. But if you’re planning on making serious decisions about the threat you face from cyberspace, you should really take the time and effort to ensure that your analysis looks beyond what IDA shows and considers more diverse and far-reaching factors.

You Were Promised Neither Security Nor Privacy

If you remember hearing the song Istanbul (Not Constantinople) on the radio the first time around, then you remember all the predictions about what life in the 21st century was supposed to be like. Of particular note was the prediction that we would use flying cars and jet packs to get around, among other awesome technological advances.

Recently someone made the comment online (for the life of me I can’t find it now) that goes something like this: If you are the children of the people who were promised jet packs you should not be disappointed because you were not promised these things, you were promised life as depicted in Snow Crash or True Names.

Generation X for the win!

The amateur interpretation of leaked NSA documents has sparked this debate about how governments – the U.S. in particular – are undermining if not destroying the security and privacy of the ‘Net. We need no less than a “Magna Carta” to protect us, which would be a great idea if we were actually being oppressed to such a degree that our liberties were being infringed upon by a despot and his arbitrary whims. For those not keeping track: the Internet is not a person, nor is it run by DIRNSA.

I don’t claim to have been there at the beginning, but in the early-to-mid ’90s my first exposure to the internet was…stereotypical (I am no candidate for sainthood). I knew what it took to protect global computer networks because that was my day job for the government; accessing the ‘Net (or BBSes) at home was basically the wild west. There was no sheriff or fire department in case things got dangerous or you got robbed. Everyone knew this, no one was complaining, and no one expected anything more.

What would become the commercial internet went from warez and naughty ASCII images to house hunting, banking, news, and keeping up with your family and friends. Now it made sense to have some kind of security mechanisms in place because, just like in meat-space, there are some things you want people to know and other things you do not. But the police didn’t do that for you, you entrusted that to the people who were offering up the service in cyberspace, again, just like you do in the real world.

But did those companies really have an incentive to secure your information or maintain your privacy? Not in any meaningful way. For one, security is expensive, and customers pay for functionality, not security. It actually makes more business sense to do the minimum necessary for security, because on the off chance that there is a breach you can make up any losses on the backs of your customers (discreetly, of course).

Secondly, your data couldn’t be too secure because there was value in knowing who you are, what you liked, what you did, and who you talked to. The money you paid for your software license was just one revenue stream; a company could make even more money using and/or selling your information and online habits. Such practices manifest themselves in things like spam email and targeted ads on web sites; the people who were promised jet packs know it by another name: junk mail.

Let’s be clear: the only people who have really cared about network security are the military; everyone else is in this to make a buck (flowery, feel-good, kumbaya language notwithstanding). Commercial concerns operating online care about your privacy until it impacts their money.

Is weakening the security of a privately owned software product a crime? No. It makes crypto nerds really, really angry, but it’s not illegal. Imitating a popular social networking site to gain access to systems owned by terrorists is what an intelligence agency operating online should do (they don’t actually take over THE Facebook site, for everyone with a reading comprehension problem). Co-opting botnets? We ought to be applauding a move like that, not lambasting it.

There is something to the idea that introducing weaknesses into programs and algorithms puts more people than just terrorists and criminals at risk, but in order for that to be a realistic concern you would have to have some kind of evidence that the security mechanisms available in products today are an adequate defense against malicious attack, and they’re not. What passes for “security” in most code is laughable. Have none of the people raising this concern heard of Pwn2Own? Or that there is a global market for 0-days, and the US government is only one of many, many customers?

People who are lamenting the actions of intelligence agencies talk like the internet is this free natural resource that belongs to all and come hold my hand and sing the Coca Cola song… I’m sure the Verizons of the world would be surprised to hear that. Free WiFi at the coffee shop? It’s only free to you because the store is paying for it (or not, because you didn’t notice the $.05 across the board price increase on coffee and muffins when the router was installed).

Talking about the ‘Net as a human right doesn’t make it so, just like claiming to be a whistleblower doesn’t make you one, or claiming something is unconstitutional when the nine people specifically put in place to determine such things haven’t ruled on the issue. You can still live your life without using TCP/IP or HTTP; you just don’t want to.

Ascribing nefarious intent to government action – in particular the NSA as depicted in Enemy of the State – displays a level of ignorance about how government – in particular intelligence agencies – actually works. The public health analog is useful in some regards, but it breaks down when you start talking about how government actions online are akin to putting civilians at risk in the real world. Our government’s number one responsibility is keeping you safe; that it has the capability to inflict harm on massive numbers of people does not mean it will use it, and it most certainly does not mean it will use it on YOU. To think otherwise is simply movie-plot-thinking (he said, with a hint of irony).

Stop Pretending You Care (about the NSA)

You’ve read the stories, heard the interviews, and downloaded the docs and you’re shocked, SHOCKED to find that one of the world’s most powerful intelligence agencies has migrated from collecting digital tons of data from radio waves and telephone cables to the Internet. You’re OUTRAGED at the supposed violation of your privacy by these un-elected bureaucrats who get their jollies listening to your sweet nothings.

Except you’re not.

Not really.

Are you really concerned about your privacy? Let’s find out:

  1. Do you only ever pay for things with cash (and you don’t have a credit or debit card)?
  2. Do you have no fixed address?
  3. Do you get around town or strange places with a map and compass?
  4. Do you only make phone calls using burner phones (trashed after one use) or public phones (never the same one twice)?
  5. Do you always go outside wearing a hoodie (up) and either Groucho Marx glasses or a Guy Fawkes mask?
  6. Do you wrap all online communications in encryption, pass them through TOR, use an alias and only type with latex gloves on stranger’s computers when they leave the coffee table to use the bathroom?
  7. Do you have any kind of social media presence?
  8. Are you reading this over the shoulder of someone else?

The answer key, if you’re serious about not having “big brother” of any sort up in your biznaz is: Y, Y, Y, Y, Y, Y, N, Y. Obviously not a comprehensive list of things you should do to stay off anyone’s radar, but anything less and all your efforts are for naught.

People complain about their movements being tracked and their behaviors being examined; but then they post selfies to 1,000 “friends” and “check in” at bars and activate all sorts of GPS-enabled features while they shop using their store club card so they can save $.25 on albacore tuna. The NSA doesn’t care about your daily routine: the grocery store, electronics store, and companies that make consumer products all care very, very much. Remember this story? Of course you don’t because that’s just marketing, the NSA is “spying” on you.

Did you sign up for the “do not call” list? Did you breathe a sigh of relief and, as a reward to yourself, order a pizza? Guess what? You just put yourself back on data brokers’ and marketing companies’ “please call me” lists. What? You didn’t read the fine print of the law (or the fine print on any of the EULAs of the services or software you use)? You thought you had an expectation of privacy?! Doom on you.

Let’s be honest about what the vast majority of people mean when they say they care about their privacy:

I don’t want people looking at me while I’m in the process of carrying out a bodily function, carnal antics, or enjoying a guilty pleasure.

Back in the day, privacy was easy: you shut the door and drew the blinds.

But today, even though you might shut the door, your phone can transmit sounds, the camera in your laptop can transmit pictures, and your set-top box is telling someone what you’re watching (and, depending on what the content is, can infer what you’re doing while you watch). You think you’re being careful, if not downright discreet, but you’re not. Even trained professionals screw up, and it only takes one mistake for everything you thought you kept under wraps to blow up.

If you really want privacy in the world we live in today you need to accept a great deal of inconvenience. If you’re not down with that, or simply can’t do it for whatever reason, then you need to accept that almost nothing in your life is a secret unless it’s done alone in your basement, with the lights off and all your electronics locked in a Faraday cage upstairs.

Don’t trust the googles or any US-based ISP for your email and data anymore? Planning to relocate your digital life overseas? Hey, you know where the NSA doesn’t need a warrant to do its business and can assume you’re not a citizen? Overseas.

People are now talking about “re-engineering the Internet” to make it NSA-proof…sure, good luck getting everyone who would need to chop on that to give you a thumbs up. Oh, also, everyone who makes stuff that connects to the Internet. Oh, also, everyone who uses the Internet who now has to buy new stuff because their old stuff won’t work with the New Improved Internet(tm). Employ encryption and air-gap multiple systems? Great advice for hard-core nerds and the paranoid, but not so much for 99.99999% of the rest of the users of the ‘Net.

/* Note to crypto-nerds: We get it; you’re good at math. But if you really cared about security you’d make en/de-cryption as push-button simple to install and use as anything in an App store, otherwise you’re just ensuring the average person runs around online naked. */

Now, what you SHOULD be doing instead of railing against over-reaches (real or imagined…because the total number of commentators on the “NSA scandal” who actually know what they’re talking about can be counted on one hand with digits left over) is what every citizen has a right to do, but rarely does: vote.

The greatest power in this country is not financial, it’s political. Intelligence reforms only came about in the ’70s because the sunshine on abuses and overreaches could not be ignored by those charged with overseeing intelligence activities. So if you assume the worst of what has been reported about the NSA in the press (again, almost no one leaking this material, reporting on it, or commenting on it actually did SIGINT for a living…credibility is important here), then why have you not called your Congressman or Senator? If you’re from CA, WV, OR, MD, CO, VA, NM, ME, GA, NC, ID, IN, FL, MI, TX, NY, NJ, MN, NV, KS, IL, RI, AZ, CT, AL or OK you’ve got a direct line to those who are supposed to ride herd on the abusers.

Planning on voting next year? Planning on voting for an incumbent? Then you’re not really doing the minimum you can to bring about change. No one cares about your sign-waving or online protest. Remember those Occupy people? Remember all the reforms to the financial system they brought about?

Yeah….

No one will listen to you? Do what Google, Facebook, AT&T, Verizon, and everyone else you’re angry at does: form a lobby, raise money, and buttonhole those who can actually make something happen. You need to play the game to win.

I’m not defending bad behavior. I used to live and breathe Ft. Meade, but I’ve come dangerously close to being “lost” thanks to the ham-handedness of how they’ve handled things. But let’s not pretend that we – all of us – are lifting a finger to do anything meaningful about it. You’re walking around your house naked with the drapes open and are surprised when people gather on the sidewalk – including the police, who show up to see why a crowd is forming – to take in the view. Yes, that’s how you roll in your castle, but don’t pretend you care about keeping it personal.

How Many Holes in a Gohor Stick?

I’ve never used Palantir. I’ve never used DCGS-A. When I started as an Analyst you (no-shit) used pencil and paper (and a thing called a gohor stick…but that’s a lewd joke for another day). The kerfuffle over Palantir vs. DCGS-A reminds me of the days when computers started making in-roads in analysis shops, and I hope everyone involved can remember some of those lessons learned.

Now my working world in those early days wasn’t entirely computer-free, but back then computers were where you stored data and recorded activity and typed up reports, you didn’t “link” things together and you certainly didn’t draw, graph or do anything anyone coming up in the business today would recognize as computer-oriented.

If there was a quantum leap in the utility computers gave to analysis it was this application called Analyst Notebook. Analyst Notebook would take in the data you had already entered into some other system (assuming you could get it out of said system), and kick out diagrams and pictures that let you make quick sense of who was talking to whom, what happened when, and identify connections or anomalies you may have missed staring into a green screen at row after row, column after column of letters and numbers.
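The kind of link chart Analyst Notebook produced can be sketched in a few lines of code. The example below is a hypothetical illustration, not any vendor’s API: the record format, names, and helper functions are invented for the sketch. It collapses raw call records into weighted, undirected links and answers the basic “who was talking to whom” question an analyst would otherwise dig out of rows of green-screen data.

```python
from collections import defaultdict

# Hypothetical call records: (caller, callee, timestamp).
# All names and times here are made up for illustration.
records = [
    ("alpha", "bravo",   "2024-01-01T09:00"),
    ("alpha", "charlie", "2024-01-01T09:05"),
    ("bravo", "charlie", "2024-01-02T14:30"),
    ("alpha", "bravo",   "2024-01-03T11:15"),
    ("delta", "echo",    "2024-01-03T16:00"),
]

def build_link_chart(records):
    """Collapse raw records into weighted, undirected links.

    A frozenset key makes the link direction-agnostic: a call from
    alpha to bravo and one from bravo to alpha land on the same edge.
    """
    links = defaultdict(int)
    for caller, callee, _timestamp in records:
        links[frozenset((caller, callee))] += 1
    return links

def neighbors(links, node):
    """Everyone directly connected to a given node."""
    connected = set()
    for pair in links:
        if node in pair:
            connected |= pair - {node}
    return connected

links = build_link_chart(records)
# alpha<->bravo appears twice, so that edge carries weight 2;
# delta/echo form an isolated pair, an anomaly worth a second look.
```

The point of the sketch is the one the text makes: the software aggregates and displays connections, but deciding whether the isolated delta/echo pair means anything is still the analyst’s job.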

That’s the key here: Analyst Notebook, Palantir, etc. are Analyst’s tools, they are not analysis tools. Is that a distinction without a difference? I’m not aware of any software application that will think on your behalf. I’m not aware of anyone in the military or IC who would trust answers produced entirely by an algorithm and without human interpretation or enhancement. If you could computerize analysis you wouldn’t have a headcount problem in the IC. Analyst Notebook, Palantir, DCGS-A . . . they’re all tools, and if you’ve been working with hand tools all your life and suddenly someone hands you a Skil saw, of course you’re going to think the Skil saw was sent from heaven.

Now, is the government notorious for producing bloated, expensive, minimally functional software that everyone hates to use (when it works at all)? We don’t have time to go into all the examples, but the answer is ‘yes.’ If I offer you tool A OR tool B when you’ve been using tool C, which are you going to choose? Does that make your other choice crap? Of course not.

It sounds to me like if there is an 800-lb gorilla in the room it’s usability, and if there is one thing that commercial apps excel at it’s the user experience. Think about the Google interface, then think about a data retrieval system fielded in the ’70s, and you tell me which your average analyst would rather use…

If the ultimate requirement is capability, then the answer is simple: hold a shoot-out and may the best app win. Pretty-but-sub-capable isn’t going to cut it; functional-but-frustrating isn’t either. If DCGS-A is all that, they should be big enough to learn from what Palantir does well; If Palantir is really about saving lives and national defense, they ought to be big enough to implement what GIs need most. Competition raises everyone’s game, but this isn’t about .com vs .gov, it’s about lives.

Dust off Khrushchev while we’re at it

Kissinger’s call for detente would make a lot more sense if the analog to “cyber” was the cold war, MAD, etc.

It is not.

I have a lot of respect for the former SECSTATE, but to be mildly uncharitable, he doesn’t really have a lot to add to this discussion. None of his cold war ilk do. “Cyber” is pretty much the closest thing to a perfect weapon anyone has seen in history (you can claim “it wasn’t me!” and no one can prove definitively otherwise in a meaningful time frame). Proposed solutions that ignore or give short shrift to this basic fact are a colossal waste of time, which is all cold war retreads have at this point. No one who can use “cyber” as a meaningful weapon for intelligence or combat activities is going to surrender one byte of capability. No security regime that has been proposed stands up to a modicum of scrutiny once the most basic, practical issues are raised. We need to hear proposals that have at least one foot rooted in reality because the threat is here and now; ideas whose success depends on a world that doesn’t currently exist and is unlikely to (did I mention no one in their right mind would give up capability? I did, good) are consuming cycles we could be using to come up with something practical.

fighting the long war with the jr. varsity

Let me preempt the inevitable brickbats by saying I never met a new/recent hire that wasn’t better educated than I was at that age (and probably more inquisitive to boot):

The Department of Defense will face a worldwide civilian manning challenge in the near future, because roughly 22 percent of its work force will reach retirement age within two years, a senior Defense Department official said Monday.

This follows on the heels of an earlier report:
