Don’t Quit Your Day Job (Yet)

My business partners and I took the leap from “employees” to “founders” four years ago. It has been a challenge, most often in areas we least expected to be worried about when we started, but we are rapidly approaching our fifth year in business and things could not be better. Well, they _could_ but we’re quibbling over some very first-world problems here.

I was a soldier. Then I was a fed. In two decades working for ‘the man’ I’ve seen pretty much all there is to see as far as bureaucracies go. The government shutdown is really just the capstone of the D.C. ***-hattery that your average fed – even in the vaunted halls of the intelligence community – deals with on a daily basis.

But standing at the vanguard of national security without getting paid, or just not being allowed to stand post at all, is a demoralizing affair. I don’t know if there is a ‘stages of layoff’ model, but a lot of people I used to work with have reached the “ship-jumping” stage, which I describe as a Howard Beale moment, only for employment. A lot of these folks have come to me looking for advice and I’m afraid I don’t give them the answer they want to hear, to wit: don’t do it.

The responses are varied, from the curious to the angry, but those that bother to hear me out come away with an even greater appreciation for what it means to strike out on your own.

  1. You don’t know anybody. Most of my former colleagues only know other people like them. Well, laid-off people don’t offer other laid-off people jobs. Laid-off people are lamenting how illiquid they actually are. If you’re going to hang out your own shingle you need to know people who know what you can do and are willing to pay for it. Now. You don’t have a long runway down which to coast while you develop new business: that mortgage isn’t going to pay for itself.
  2. You don’t know how to work. I don’t mean you don’t know what you’re doing; I mean you’re wholly unprepared for how “civilians” – for lack of a better term – do business. Even my former colleagues who are contractors run face-first into culture shock when dealing with purely commercial concerns.
  3. Most of your day is consumed with non-work. The early days of a business are 99% preparing to do business and 1% the actual “work” you want to do. Can’t afford a secretary? Well, guess who is going to answer all the phone calls. Don’t have a pricing department? Guess who is learning on the job. Haven’t been paid in three months? Guess who gets to play Accounts Receivable technician.
  4. You get paid when the company gets paid. When you’re working for yourself you’re not drawing a reliable paycheck. Net 30 is what the invoice says but not everyone is going to honor that. If you don’t know anybody (see 1 above) then you surely don’t know anybody who is willing to pay you up front.
  5. Nobody cares what you are famous for. You’re a big-wig behind the barbed wire and locked doors? That’s nice: what can you do for me now? Everyone in the Community knows you as the go-to person on issue X? That’s cute: What have you done that you can talk about that I care about? Today, right now, hundreds of thousands of people can claim a security clearance and X years of experience…you stand out how, exactly?

This is not me telling people to suck it up and keep toiling away for Uncle Sam; this is me telling people that if they are really set on not being put into this situation again, then they need to use this time to prepare for when the time is right. If you’re going to empty your 401(k) at least do it when you’ve maximized your chances of success.

  1. Get to know people. Don’t violate sound OPSEC practices, but don’t not-interact with people just because you’re spooky and they are not. Conferences are not about keynotes, they’re about the connections you make in between talks and at lunch and at the bar. Those vendors who call you incessantly about stuff you’re not going to buy? Let them buy you a <$25 cup of coffee, explain why you’re not buying, and let them see you as a person, not a number. Being a decent person who does not waste their time isn’t going to go unnoticed later on.
  2. Make sure people know what you can do. Again, this isn’t terribly easy for people who keep secrets for a living, but it’s doable. Write an article, contribute to a journal, and if you have the time, do a little something on the side. The hoops you jump through for publication approval and outside employment now will pay off later.
  3.  Start paying yourself now. Odds are no one will pay you up front, so sock away enough money to keep you and yours afloat during the initial dry spell; cut expenses you can live without (and won’t be enjoying anyway since you’ll be working your *** off) to build more financial runway. As long as you are in start-up mode, cash is king, so prepare yourself accordingly.
  4. Lay the groundwork now. Set up your LLC or Corporation. Set up your company bank account. Get a lawyer. Draw up contracts, NDAs, teaming agreements, etc. Get a virtual PBX or Google Voice number, record your greetings and set up your forwarding rules. Get a company computer and load all the tools onto it that you’ll need to do business. Join all the frequent flyer and hotel clubs you can. Start your first day of independence ready to go, not on logistics.
  5. Kiss everyone goodbye. Days are only 24 hours long, but somehow you’ll be logging 36 hours of work. Accept that ‘work life balance’ is a dream you’re not going to experience for a while. The pain you experience now builds the foundation for a sequester-proof, shut-down-resistant enterprise you captain, not some politician who gets paid regardless of how much others suffer.

Between Preppers and FEMA Trailers

Today, for want of a budget, the Federal government is shutting down. If the nation suffered a massive cyber attack today, what would happen? If you think the government is going to defend you against a cyber attack or help you in the aftermath of a digital catastrophe – budget or no budget – think again. The government cannot save you, and you can no more count on timely assistance in the online world than you can in the physical one in the aftermath of a disaster. Help might come eventually, but your ability to fight off hostiles or weather a digital storm depends largely on what you can do for yourself.

The vast majority of the time, natural or man-made disasters are things that happen to someone else. People who live in disaster- or storm-prone areas know that at any given moment they may have to make do with what they have on hand; consequently, they prepare to deal with the worst-case scenario for a reasonable amount of time. The reason you don’t see people in the mountain west or northeast in FEMA trailers after massive snow or ice storms is a culture of resilience and self-reliance.

How does this translate into the digital world? Don’t efforts like the Comprehensive National Cybersecurity Initiative, and all the attention foreign state-sponsored industrial espionage has gotten recently, belie the idea that the government isn’t ready, willing and able to take action in the face of a digital crisis?

Federal agencies are no better at protecting themselves from digital attack than anyone else. The same tricks that lead to a breach at a bank work against a government employee. Despite spending tens of billions of tax dollars on cyber security we continue to hear about how successful attackers are and that attacks are growing and threatening our economy and way of life. The increasing amount of connectivity in industrial control systems puts us at even greater risk of a disaster because very few people know how to secure a power plant or oil refinery.

It’s not that the government does not want to make the Internet a safer and more secure place; it is simply ill-equipped to do so. Industrial-age practices, bureaucracy, a sloth-like pace, its love affair with lobbyists, and its inability to retain senior leaders with security chops mean “cyber” will always be the most talked-about also-ran issue in government. You know what issue has shut down the federal government this week? It isn’t “cyber.”

Protect you against threats? What leverage do we really have against a country like China? Cold War approaches won’t work. For one, you’re probably reading this on something made in China; your dad never owned a Soviet-made anything. We cannot implement “digital arms control” or a deterrence regime because there is no meaningful analog between nuclear weapons and digital ones. Trying to retrofit new problems into old constructs is how Cold Warriors maintain relevance; it’s just not terribly useful in the real world.

So what are we to do? Historically speaking, when the law could not keep up with human expansion into unknown territory, people were expected to defend themselves and uphold the rudiments of good social behavior. If someone threatened you on your remote homestead, you needed to be prepared to defend yourself until the Marshal arrived. This is not a call to vigilantism, nor a suggestion that you become some kind of iPrepper, but a reflection of the fact that the person most responsible for your safety and security online is you. As my former colleague Marc Sachs recently put it:

“If you’re worried about it, do something about it. Take security on yourselves, and don’t trust anybody else to do it.”

What do you or your business need to survive in the short- and long-term if you’re hacked? Invest time and money accordingly. If computer security is terra incognita then hire a guide to get you to where you want to go and teach you what you need to know to survive once you’re there. Unless you want to suffer through the digital equivalent of life in a FEMA trailer, you need to take some responsibility to improve your resilience and ensure your viability.

Stop Pretending You Care (about the NSA)

You’ve read the stories, heard the interviews, and downloaded the docs and you’re shocked, SHOCKED to find that one of the world’s most powerful intelligence agencies has migrated from collecting digital tons of data from radio waves and telephone cables to the Internet. You’re OUTRAGED at the supposed violation of your privacy by these un-elected bureaucrats who get their jollies listening to your sweet nothings.

Except you’re not.

Not really.

Are you really concerned about your privacy? Let’s find out:

  1. Do you only ever pay for things with cash (and you don’t have a credit or debit card)?
  2. Do you have no fixed address?
  3. Do you get around town or strange places with a map and compass?
  4. Do you only make phone calls using burner phones (trashed after one use) or public phones (never the same one twice)?
  5. Do you always go outside wearing a hoodie (up) and either Groucho Marx glasses or a Guy Fawkes mask?
  6. Do you wrap all online communications in encryption, pass them through TOR, use an alias and only type with latex gloves on strangers’ computers when they leave the coffee shop table to use the bathroom?
  7. Do you have any kind of social media presence?
  8. Are you reading this over the shoulder of someone else?

The answer key, if you’re serious about not having “big brother” of any sort up in your biznaz is: Y, Y, Y, Y, Y, Y, N, Y. Obviously not a comprehensive list of things you should do to stay off anyone’s radar, but anything less and all your efforts are for naught.

People complain about their movements being tracked and their behaviors being examined, but then they post selfies to 1,000 “friends,” “check in” at bars, and activate all sorts of GPS-enabled features while they shop using their store club card so they can save $.25 on albacore tuna. The NSA doesn’t care about your daily routine: the grocery store, the electronics store, and the companies that make consumer products all care very, very much. Remember this story? Of course you don’t, because that’s just marketing; the NSA is “spying” on you.

Did you sign up for the “do not call” list? Did you breathe a sigh of relief and, as a reward to yourself, order a pizza? Guess what? You just put yourself back on data brokers’ and marketing companies’ “please call me” lists. What? You didn’t read the fine print of the law (or the fine print on any of the EULAs of the services or software you use)? You thought you had an expectation of privacy?! Doom on you.

Let’s be honest about what the vast majority of people mean when they say they care about their privacy:

I don’t want people looking at me while I’m in the process of carrying out a bodily function, carnal antics, or enjoying a guilty pleasure.

Back in the day, privacy was easy: you shut the door and drew the blinds.

But today, even though you might shut the door, your phone can transmit sounds, the camera in your laptop can transmit pictures, and your set-top box is telling someone what you’re watching (and depending on what the content is, can infer what you’re doing while you watch). You think you’re being careful, if not downright discreet, but you’re not. Even trained professionals screw up, and it only takes one mistake for everything you thought you kept under wraps to blow up.

If you really want privacy in the world we live in today you need to accept a great deal of inconvenience. If you’re not down with that, or simply can’t do it for whatever reason, then you need to accept that almost nothing in your life is a secret unless it’s done alone in your basement, with the lights off and all your electronics locked in a Faraday cage upstairs.

Don’t trust the googles or any US-based ISP for your email and data anymore? Planning to relocate your digital life overseas? Hey, you know where the NSA doesn’t need a warrant to do its business and they can assume you’re not a citizen? Overseas.

People are now talking about “re-engineering the Internet” to make it NSA-proof…sure, good luck getting everyone who would need to chop on that to give you a thumbs up. Oh, also, everyone who makes stuff that connects to the Internet. Oh, also, everyone who uses the Internet who now has to buy new stuff because their old stuff won’t work with the New Improved Internet(tm). Employ encryption and air-gap multiple systems? Great advice for hard-core nerds and the paranoid, but not so much for 99.99999% of the rest of the users of the ‘Net.

/* Note to crypto-nerds: We get it; you’re good at math. But if you really cared about security you’d make en/de-cryption as push-button simple to install and use as anything in an App store, otherwise you’re just ensuring the average person runs around online naked. */

Now, what you SHOULD be doing instead of railing against over-reaches (real or imagined…because the total number of commentators on the “NSA scandal” who actually know what they’re talking about can be counted on one hand with digits left over) is what every citizen has a right to do, but rarely does: vote.

The greatest power in this country is not financial, it’s political. Intelligence reforms only came about in the 70s because the sunshine reflecting off of abuses and overreaches could not be ignored by those who are charged with overseeing intelligence activities. So if you assume the worst of what has been reported about the NSA in the press (again, no one leaking this material, and almost no one reporting or commenting on it, actually did SIGINT for a living…credibility is important here), then why have you not called your Congressman or Senator? If you’re from CA, WV, OR, MD, CO, VA, NM, ME, GA, NC, ID, IN, FL, MI, TX, NY, NJ, MN, NV, KS, IL, RI, AZ, CT, AL or OK you’ve got a direct line to those who are supposed to ride herd on the abusers.

Planning on voting next year? Planning on voting for an incumbent? Then you’re not really doing the minimum you can to bring about change. No one cares about your sign-waving or online protest. Remember those Occupy people? Remember all the reforms to the financial system they brought about?

Yeah….

No one will listen to you? Do what Google, Facebook, AT&T, Verizon and everyone else you’re angry at does: form a lobby, raise money, and buttonhole those who can actually make something happen. You need to play the game to win.

I’m not defending bad behavior. I used to live and breathe Ft. Meade, but I’ve come dangerously close to being “lost” thanks to the ham-handedness of how they’ve handled things. But let’s not pretend that we – all of us – are lifting a finger to do anything meaningful about it. You’re walking around your house naked with the drapes open and are surprised when people gather on the sidewalk – including the police, who show up to see why a crowd is forming – to take in the view. Yes, that’s how you roll in your castle, but don’t pretend you care about keeping it personal.

Prepare for the Pendulum Swing

I’m not going to belabor the tale of woe of those trying to deal with Edward Snowden’s theft right now. For a moment I want to opine on some of the secondary and tangential issues that I predict are going to make life in the IC more difficult because of his actions:

  1. Polygraphs. If it is true that he only took the job with BAH to gain access to specific data in order to reveal it, IC polygraph units are going to have to cancel leave through 2025. Moving from one agency to another? Get ready to get hooked up to the box (again). In a sys admin job? Pucker up. That old timer you used to get who realized that people were people and they had lives? He’s going to be replaced by a legion of whippersnappers who will all be gunning to catch the next leaker. Good people will be deep-sixed and those who survive will wonder if it’s worth the ***-pain.
  2. Investigations. When you can’t pick up on obvious problem-children, and when the bottom-line is more important than doing a good job, the bureaucracy will retrench and do what it does best: drop into low gear and distrust outsiders. There are only so many government investigators, and it’s not like there are fewer missions. Coverage will slip, tasks won’t get done, the risk of surprise (you know, what we’re supposed to try and avoid) goes up.
  3. Visits. Even in the information age some things are best discussed in person. Remember how your “community” badge would kinda-sorta get you into wherever you needed to go? Good luck with that for the foreseeable future. That three hour block of time you used to allocate to go to a meeting across town? You might as well write off the whole day.
  4. Two-Man Rule. Great theory; it will suck in practice. Remember when you used to be able to call the help desk and your boy Chuck would reset your password over the phone? Yeah, not any more. Something that took minutes will take hours; something that used to take hours will take days; things that took days will take weeks. The information enterprise of the information age will work about as quickly and efficiently as a pre-assembly-line car factory.
  5. Sharing. Yes, the mechanisms will still exist, but no one actually will (officially). No one will say so out loud, but in a series of staff calls of decreasing seniority the word will get out: don’t post or share anything good or the least bit sensitive online. Stovepipes will be reinforced and what good was done over the past decade+ to break down barriers will get washed away. Sharing will go underground, which will simply make detecting leaks harder.

This story is far from over, but if you’ve been in this business for any length of time you know how wildly the pendulum swings when something bad happens. Nothing actually improves, everything just gets more difficult. This was less of a big deal during the industrial age, but that age has passed.

 

 

Compare and Contrast

I love how, on a mailing list I belong to that is full of Ph.D.s and J.D.s, when I call for practical approaches to real-world problems I’m called “anti-intellectual,” and in other forums, when I allude to someone’s level of formal education – or lack thereof – I’m called “elitist.” What’s the old saying? If you’re pissing both sides off equally you must be doing something right.

The latest example?

I recently brought up the fact that neither Bradley Manning nor Edward Snowden is Daniel Ellsberg. I didn’t come out and say ‘they aren’t fit to hold his jock’; I was pointing out that when you compare who they were and what they did, Dr. Ellsberg is a whole different class of actor. Let’s get on the ‘tubes and let me show you what I mean:

Daniel Ellsberg

Education: Harvard undergraduate (on scholarship); Cambridge (Wilson Fellowship); Harvard (again) for graduate school and eventually his Ph.D.

Employment: USMC officer (honorable); RAND Corporation; the Department of State and the Department of Defense (he didn’t work “in the Pentagon” he worked for the Secretary of Defense).

Access: With regards to the “Pentagon Papers” he operated at the highest level and knew the full contents of the report.

 

Bradley Manning

Education: High School; One semester of Community College (dropped out)

Employment: Software developer (for four months); Pizza parlor; US Army Intelligence Analyst

Access: A variety of classified military, intelligence and diplomatic systems accessible in theater.

 

Edward Snowden

Education: Dropped out of high school; earned GED; briefly attended Community College.

Employment: US Army (never got out of training status); contract security guard; IT engineer at the CIA and NSA

(Reported) Access: Discrete systems supporting HUMINT and SIGINT operations.

 

Snowden wasn’t an intelligence operator or analyst; he was an IT guy who supported intelligence operators and analysts. Sports agents know a lot about sports, but no one confuses them for players. Manning had access to a lot of data, but he was a junior analyst who (if the Army still works like it did when I was in) was focused on a particular problem set, not the Middle East theater writ large. If you worked with either one of these guys you wouldn’t care what they thought about anything work-related beyond the very narrow slice where they had demonstrable expertise, but because you know nothing about intelligence work and they happened to have a clearance, you think they’re all that and a bag of crisps.

I’m not saying Snowden and Manning aren’t smart. I’m not saying they’re not earnest in their beliefs. I’m saying if I’m going to accept the judgment of an individual about issues of national if not international import, the guy who did nothing but flex the muscles in his 18-pound brain and had full view of the entire problem has a lot more credibility.

If that makes me elitist, well, I’ll be over here sipping cognac if you want to slap me across the face with a velvet glove.

Explaining Computer Security Through the Lens of Boston

Events surrounding the attack at the Boston Marathon, and the subsequent manhunt, are on-going as this is being drafted. Details may change, but the conclusions should not.

This is by no means an effort to equate terrorism and its horrible aftermath to an intrusion or data breach (which is trivial by comparison), merely an attempt to use current events in the physical world – which people tend to understand more readily – to help make sense of computer security – a complicated and multi-faceted problem few understand well.

  1. You are vulnerable to attack at any time. From an attacker’s perspective the Boston Marathon is a great opportunity (lots of people close together), but a rare one (it only happens once a year). Your business online, however, is an opportunity that presents itself 24/7. You can no more protect your enterprise against attack than the marathon could have been run inside a giant blast-proof Habitrail. Anyone who tells you different is asking you to buy the digital equivalent of a Habitrail.
  2. It doesn’t take much to cause damage. In cyberspace everyone is atwitter about “advanced” threats, but most of the techniques that cause problems online are not advanced. Why would you expose your best weapons when simple ones will do? In the physical world there is the complicating factor of getting engineered weapons to places that are not war zones, but like the improvised explosives used in Boston, digital weapons are easy to obtain or, if you’re clever enough, build yourself.
  3. Don’t hold out hope for closure. Unless what happens to you online is worthy of a multi-jurisdictional – even international – law enforcement effort, forget about trying to find someone to pay for what happened to you. If they’re careful, the people who attack you will never be caught. Crimes in the real world have evidence that can be analyzed; digital attacks might leave evidence behind, but you can’t always count on that. As I put fingers to keyboard one suspect behind the Boston bombing is dead and the other the subject of a massive manhunt, but that wouldn’t have happened if the suspects had not made some kind of mistake(s). Robbing 7-11s, shooting cops and throwing explosives from a moving vehicle are not the marks of professionals. Who gets convicted of computer crimes? The greedy and the careless.

The response to the bombings in Boston reflects an exposure – direct or indirect – to 10+ years of war. If this had happened in 2001 there probably would have been more fatalities. That’s a lesson system owners (who are perpetually under digital fire) should take to heart: pay attention to what works – rapid response mechanisms, democratizing capabilities, resilience – and invest your precious security dollars accordingly.

On “cyber intelligence”

Intelligence.

From what I can tell it’s the new hotness in cybersecurity.

From what I can tell it’s also not being done very well. The end result of course being that “intelligence” is treated as a fad or gimmick, which would be a terrible mistake for the cybersecurity community to make.

Let’s lay down a few givens before we go any further. For starters, “intelligence” is like “APT”: if you’re not using the proper definition, you’re just playing marketing tricks. Boiled down to its essence, it works like this:

  • No matter how good the source, a discrete piece of “data” or data “feed” is not intelligence.
  • Intelligence is not a mashup of disparate data points; that’s “information.”
  • Intelligence is information that is put into context and enhanced with expert (human) input that provides the intelligence consumer with insight.

No application, device or appliance is capable of providing you with intelligence. Such mechanisms may provide you with enhanced information, but without the human element it’s still just information. If machines could produce intelligence, a whole lot of people in this business would be unemployed.

Your organizational decision-maker(s) are your intelligence “consumers.” Every consumer wants something different from their intelligence product, which is where the human element comes into play. The intelligence requirements of the C-level are of little utility to the responder on scene, and vice versa. Devices and feeds in and of themselves cannot support either requirement. Any purveyor of “intelligence” that does not have a human between data and consumer is not offering intelligence. If you are not paying for someone to apply their little gray cells to your or their data, you’re paying a premium for something you could probably get for free.

Intelligence is not fool-proof. Intelligence tells you something you don’t already know, but because you cannot know everything, there are no guarantees. Intelligence providers who claim to be flawless, or nearly so, are not producing content of value, because only the most generic and heavily caveated output can be made to seem right 100% of the time. You don’t need to pay extra for people to tell you “maybe” and “possibly.”

I’m just touching the surface here, and if anyone wants me to riff longer I will, but I just wanted to make sure something was out there standing athwart the “cyber intelligence” hype train shouting “stop!”

What’s the Alternative?

The Director of the National Security Agency argues that the NSA should be in charge of computer security in this country. The NSA has long been home to some of the best subject matter experts in computer technology and cryptography, so this would seem to make a lot of sense.

But the NSA is an intelligence agency, and free people in a democratic society don’t like the idea of an intelligence agency – built to listen in on the conversations of “others” overseas – turning its extremely powerful data collection apparatus on them. The same or at least a similar argument is made whenever the topic of a domestic intelligence agency is brought up and the FBI argues that they should do the job: People don’t like the idea of those who can arrest you also having the authority to snoop on you. Dig hard and long enough into anyone’s life and you’re bound to find them committing a “crime,” and when you’re rewarded by the number of arrests you make and convictions you win, well, the recipe for abuse becomes obvious.

The hyperbole surrounding computer security that has been bantered about over the past few years aside, it’s clear that the more pervasive computers (in all their forms) become in our lives, the more of a problem insecure systems pose. But if access to, and the use of, such technology is increasingly viewed as a “right,” then some mechanism for defending that right is in order. If that defending entity isn’t the NSA, what is the alternative?

The Department of Homeland Security is often touted as the place where domestic computer security (if that’s even a thing) should be addressed, but I know of no one who would entrust such a mission to an organization that is famous for its dysfunction, and there is enough of that in computer security already. Remember, this is the agency that changes out “cyber czars” more frequently than Liz Taylor changed husbands (am I dating myself?).

Before we completely discard the idea of NSA involvement, it may be useful to point out that the NSA is actually two large organizations under the same umbrella: an intelligence collection and analysis organization, and an information security organization. The former is the part that listens in on people’s conversations; the latter is the part that is in charge of wrapping math around our own conversations. There is an obvious symbiosis there, but what if you spun the INFOSEC organization out of big-NSA and let it focus on cyber security for all of us? Removed from Ft. Meade, ideally out of the Washington DC area altogether, it could be the center of expertise both the government and the private sector need, and one they would trust because it would be about “security,” not “intelligence.”

There is also an argument to be made that there isn’t a compelling need to do anything new from a governmental perspective. Leaving industry to its own devices seems like a bad idea, but cases where poor computer security led to the outright downfall of a company are notable because they’re so rare. The fact of the matter is that companies that get hacked and lose intellectual property suffer no long-term financial penalty, and since that’s what Wall Street grades C-level executives on, where is the incentive to change? It’s worth noting that the loudest voices lamenting the cost of IP theft all have a vested interest in more security, not higher profits.

This begs the question: is “economic prosperity” truly a national security issue? If that were the case the Chinese would have started chopping off French heads once they learned d’Entrecolles had stolen the method for making ‘china;’ the British would have hunted down and shot Slater and his ilk. Protecting IP and R&D that supports defense is a stronger argument, but traditionally our government isn’t in the business of making sure private enterprises can turn a profit (let’s not get side-tracked talking about farm subsidies). This is not the case in other countries, but since when is the US, France? If we became France (in this regard) at some point while we weren’t looking, then it’s time to make that policy known so that we can all act accordingly.

At this point, if forced to do something, I’d say we shift our resources as noted above. I’d rather have a solution that wasn’t a big-government one, but I can’t come up with one at this point. Anyone have any other, original ideas that don’t involve more spooks in the wire?


Don’t Believe the Hype

I want you to read this tweet:

 

Two things:

1. The government is constantly whinging on about how we need more sharing. The private sector elements who actually get involved in sharing regimes constantly complain about how “sharing” with the government is a one-way street. Who are you going to give a sympathetic ear to the next time someone utters the words “public-private partnership?” How much more annoying is it that places like DHS want to borrow private-sector expertise but don’t want to pay for it?

2. What makes this lop-sided relationship really annoying is that the private sector “attack surface” is several metric-*** tons larger than the government one. Who is it that needs more and better intel about cyber threats, exactly?


Malware Analysis: The Danger of Connecting the Dots

The findings of malware analysis are not in fact “analysis;” they’re a collection of data points linked together by assumptions whose validity and credibility have not been evaluated. This lack of analytic methodology could prove exceedingly problematic for those charged with making decisions about cyber security. If you cannot trust your analysis, how are you supposed to make sound cyber security decisions?

Question: If I give you a malware binary to reverse engineer, what do you see? Think about your answer for a minute and then read on. We’ll revisit this shortly.

It is accepted as conventional wisdom that Stuxnet is related to Duqu, which is in turn related to Flame. All of this malware has been described as “sophisticated” and “advanced,” so much so that it must be the work of a nation-state (such work presumably requiring large amounts of time and lots of skilled people, with the code written for purposes beyond simply siphoning off other people’s cash). The claim that the US government is behind Stuxnet has consequently led people to assume that all related code is US sponsored, funded, or otherwise backed.

Except for the claim of authorship, all of the aforementioned data points come from people who reverse engineer malware binaries. These are technically smart people who practice an arcane and difficult art, but what credibility does that give them beyond their domain? In our quest for answers do we give too much weight to the conclusions of those with discrete technical expertise and fail to approach the problem with sufficient depth and objectivity?

Let’s take each of these claims in turn.

Are there similarities if not outright sharing of code in Stuxnet, Duqu and Flame? Yes. Does that mean the same people wrote them all? Do you believe there is a global marketplace where malware is created and sold? Do you believe the people who operate in that marketplace collaborate? Do you believe that the principle of “code reuse” is alive and well? If you answered “yes” to any of these questions then a single source of “advanced” malware cannot be your only valid conclusion.

Is the code in Stuxnet, etc. “sophisticated?” Define sophisticated in the context of malware. Forget about malware and try to define “sophisticated” in the context of software, period. Is Excel more sophisticated than Photoshop? When words have no hard and widely-accepted definitions, they can mean whatever you want them to mean, which means they have no meaning at all.

Can only a nation-state produce such code? How many government-funded software projects are you aware of that work as advertised? You can probably count on one hand and have fingers left over. But now, somehow, when it comes to malware, suddenly we’re to believe that the government has gotten its s*** together?

“But Mike, these are, like, weapons. Super secret stuff. The government is really good at that.”

Really? Have you ever heard of the Osprey? Or the F-35? Or the Crusader? Or the JTRS? Or Land Warrior? Groundbreaker? Trailblazer? Virtual Case File?

I’m not trying to trivialize the issues associated with large and complex technology projects; my point is that a government program to build malware would be subject to the same issues and consequently be no better – and quite possibly worse – than any non-governmental effort to do the same thing. Cyber crime statistics – inflated though they may be – tell us that governments are not the only entities that can and do fund malware development.

“But Mike, the government contracts out most of its technology work. Why couldn’t they contract out the building of digital weapons?”

They very well could, but then what does that tell us? It tells us that if you wanted to build the best malware you have to go on the open market (read: people who may not care who they’re working for, as long as their money is good).

As far as the US government “admitting” that they were behind Stuxnet: they did no such thing. A reporter, an author of a book, says that a government official told him that the US was behind Stuxnet. Neither the President of the United States, nor the Secretary of Defense, nor the Directors of the CIA or NSA got up in front of a camera and said, “That’s us!” which is what an admission would be. Let me reiterate: a guy who has a political agenda told a guy who wants to sell books that the US was behind Stuxnet.

It’s easy to believe the US is behind Stuxnet, as easy as it is to believe Israel is behind it. You know who else doesn’t want countries that don’t already have nuclear weapons to get them? Almost every country in the world, including those that currently have nuclear weapons. You know who else might not want Iran – a majority-Shia country – to have an atomic bomb? Roughly 30 Sunni countries for starters, most of which could afford to go onto the previously mentioned open market and pay for malware development. What? You hadn’t thought about the non-proliferation treaty or that Sunni-Shia thing? Yeah, neither has anyone working for Kaspersky, Symantec, F-Secure, etc., etc.

Back to the question I asked earlier: What do you see when you reverse engineer a binary?

Answer: Exactly what the author wants you to see.

  • I want you to see words in a language that would throw suspicion on someone else.
  • I want you to see that my code was compiled in a particular foreign language (even though I only read and/or write in a totally different language).
  • I want you to see certain comments or coding styles that are the same or similar to someone else’s (because I reuse other people’s code).
  • I want you to see data about compilation date/time, PDB file path, etc., which could lead you to draw erroneous conclusions that have no bearing on malware behavior or capability (a sketch of just how cheap it is to plant such artifacts follows this list).
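
To make that last point concrete, here is a minimal sketch, in Python, of how easily one of those “telling” artifacts, the PE compile timestamp, can be read and then rewritten to say anything the author wants. The filename and the chosen date are illustrative, not drawn from any real sample.

```python
import struct

def read_compile_timestamp(pe_bytes: bytes) -> int:
    """Return the COFF TimeDateStamp ('compile time') from a PE image."""
    # Offset of the "PE\0\0" signature is stored at 0x3C in the DOS header.
    (e_lfanew,) = struct.unpack_from("<I", pe_bytes, 0x3C)
    if pe_bytes[e_lfanew:e_lfanew + 4] != b"PE\x00\x00":
        raise ValueError("not a PE file")
    # TimeDateStamp sits 8 bytes past the signature (after Machine and NumberOfSections).
    (timestamp,) = struct.unpack_from("<I", pe_bytes, e_lfanew + 8)
    return timestamp

def forge_compile_timestamp(pe_bytes: bytes, new_ts: int) -> bytes:
    """Return a copy of the PE image claiming whatever 'compile time' the author likes."""
    data = bytearray(pe_bytes)
    (e_lfanew,) = struct.unpack_from("<I", data, 0x3C)
    struct.pack_into("<I", data, e_lfanew + 8, new_ts)
    return bytes(data)

if __name__ == "__main__":
    with open("sample.exe", "rb") as f:   # hypothetical binary under analysis
        original = f.read()
    print("claimed compile time:", read_compile_timestamp(original))
    # Re-stamp the binary so it appears to have been built in mid-2010.
    forged = forge_compile_timestamp(original, 1276586400)
    print("forged compile time: ", read_compile_timestamp(forged))
```

The same goes for embedded strings, PDB paths and language resources: anything the analyst can read, the author could have planted.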

Contrary to post-9/11-conventional wisdom, good analysis is not dot-connecting. That’s part of the process, but it’s not the whole or only process. Good analysis has methodology behind it, as well as a fair dose of experience or exposure to other disciplines that comes into play. Most of all, whenever possible, there are multiple, verifiable, meaningful data points to help back up your assertions. Let me give you an example.

I used to work with a guy we’ll call “Luke.” Luke was a firm believer in the value of a given type of data. He thought it was infallible. So strong were Luke’s convictions about the findings he produced using only this particular type of data that he would draw conclusions about the world that flew in the face of what the rest of us like to call “reality.” If Luke’s assertions were true, World War III would have been triggered, but as many, many other sources of data were able to point out, Luke was wrong.

There was a reason why Luke was the oldest junior analyst in the whole department.

There are a number of problems, fallacies and mental traps that people tend to suffer when they attempt to draw conclusions from data. This is not an exhaustive list, but illustrative of what I mean.

Focus Isn’t All That. There is a misconception that narrow and intense focus leads to better conclusions. In fact the opposite tends to be true: the more you focus on a specific problem, the less likely you are to think clearly and objectively. Because you just “know” certain things are true, you feel comfortable taking shortcuts to reach your conclusion, which in turn simply drives you further away from the truth.

I’ve Seen This Before. We give too much credence to patterns. When you see the same or very similar events taking place, or the same tactics used, your natural reaction is to assume that what is happening now is what happened in the past. You discount other options because it’s “history repeating itself.”

The Shoehorn Effect. We don’t like questions that don’t have answers. Everything has to have an explanation, regardless of whether or not the explanation is actually true. When you cannot come up with an explanation that makes sense to you, you will fit the answer to match the question.

Predisposition. We allow our biases to drive us to seek out data that supports our conclusions and discount data that refutes it.

Emotion. You cannot discount the emotional element involved in drawing conclusions, especially if your reputation is riding on the result. Emotions about a given decision can run so high that they overcome your ability to think clearly. Rationalism goes out the window when your gut (or your greed) over-rides your brain.

How can we overcome the aforementioned flaws? There are a range of methodologies analysts use to improve objectivity and criticality. These are by no means exhaustive, but they give you an idea of the kind of effort that goes into serious analytic efforts.

Weighted Ranking. It may not seem obvious to you, but when presented with two or more choices, you choose X over Y based on the merits of X, Y (and/or Z). Ranking is instinctual and therefore often unconscious. The problem with most informal efforts at ranking is that they’re one-dimensional.

“Why do you like the TV show Homicide and not Dragnet?”

“Well, I like cop shows but I don’t like black-and-white shows.”

“OK, you realize those are two different things you’re comparing?”

A proper ranking means you’re comparing one thing against another using the same criteria. Using our example you could compare TV shows based on genre, sub-genre, country of origin, actors, etc., rank them according to preference in each category, and then tally the results. Do this with TV shows – or any problem – and you’ll see that your initial, instinctive results will be quite different than those of your weighted rankings.
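
As a toy illustration of that tallying step, here is a minimal sketch in Python; the shows, criteria, weights and scores are all invented for the example.

```python
# Weighted ranking sketch: score each option against the same criteria, then tally.
CRITERIA_WEIGHTS = {"genre": 0.4, "writing": 0.3, "acting": 0.2, "era": 0.1}

# Score each show 1-10 against every criterion (made-up numbers).
SCORES = {
    "Homicide": {"genre": 9, "writing": 8, "acting": 9, "era": 7},
    "Dragnet":  {"genre": 8, "writing": 6, "acting": 5, "era": 3},
}

def weighted_rank(options, weights):
    """Return options sorted best-first by their weighted totals."""
    totals = {name: sum(weights[c] * vals[c] for c in weights)
              for name, vals in options.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for name, total in weighted_rank(SCORES, CRITERIA_WEIGHTS):
    print(f"{name}: {total:.1f}")
```

Run the same exercise with your instinctive pick hidden and the ordering often changes, which is the whole point of making the criteria explicit.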

Hypothesis Testing. You assert the truth of your hypothesis through supporting evidence, but you are always working with incomplete or questionable data, so you can never prove a hypothesis true; we accept it as true until evidence surfaces that suggests it is false (see the bias note above). Information becomes evidence when it is linked to a hypothesis, and evidence is valid once we’ve subjected it to questioning: Where did the information come from? How plausible is it? How reliable is it? What is motivating the source (agenda)?
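
In that spirit, here is a minimal sketch of scoring evidence against competing hypotheses, loosely in the style of analysis of competing hypotheses. The hypotheses, evidence items, reliability weights and consistency judgments are all invented; the real analytic work is in making those judgments, not in the arithmetic.

```python
# Evidence-vs-hypothesis scoring sketch. All inputs below are invented for illustration.
HYPOTHESES = ["H1: single nation-state author", "H2: open-market code reuse"]

# (description, source reliability 0-1, {hypothesis: +1 consistent / -1 inconsistent / 0 neutral})
EVIDENCE = [
    ("code shared with earlier malware",        0.9, {"H1: single nation-state author":  0,
                                                      "H2: open-market code reuse":     +1}),
    ("compile timestamps cluster in one zone",  0.3, {"H1: single nation-state author": +1,
                                                      "H2: open-market code reuse":      0}),
    ("targets one specific industrial process", 0.8, {"H1: single nation-state author": +1,
                                                      "H2: open-market code reuse":     -1}),
]

def score(hypothesis):
    """Reliable inconsistent evidence hurts a hypothesis more than consistent evidence helps it."""
    total = 0.0
    for _description, reliability, judgments in EVIDENCE:
        j = judgments[hypothesis]
        total += reliability * (j if j >= 0 else 2 * j)
    return total

for h in sorted(HYPOTHESES, key=score, reverse=True):
    print(f"{h}: {score(h):+.2f}")
```

Note the low reliability weight on the timestamp item: as the earlier sketch showed, that artifact is entirely author-controlled.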

Devil’s Advocacy. Taking a contrary or opposing position from what is the accepted answer helps overcome biases and one-dimensional thinking. Devil’s advocacy seeks out new evidence to refute “what everybody knows,” including evidence that was disregarded by those who take the prevailing point of view.

This leads me to another point I alluded to earlier and that isn’t addressed in media coverage of malware analysis: what qualifications does your average reverse engineer have when it comes to drawing conclusions about geo-political-security issues? You don’t call a plumber to fix your fuse box. You don’t ask a diplomat about the latest developments in no-till farming. Why in the world would you take at face value what a reverse engineer says about anything except very specific, technical findings? I’m not saying people are not entitled to their opinions, but credibility counts if those opinions are going to have value.

So where are we?

  • There are no set or even widely accepted definitions related to malware (e.g. what is “sophisticated” or “advanced”).
  • There is no widely understood or accepted baseline of what sort of technical, intellectual or actual capital is required to build malware.
  • Data you get out of code, through reverse engineering or from source, is not guaranteed to be accurate when it comes to issues of authorship or origin.
  • Malware analysts do not apply any analytic methodology in an attempt to confirm or refute their single-source findings.
  • Efforts to link data found in code to larger issues of geo-political importance are at best superficial.

Why is all of this important? Computer security issues are becoming an increasingly important factor in our lives. Not that everyone appreciates it, but look at where we have been and where we are headed. Just under 20 years ago few people in the US, much less the world, were online; now more people in the world get online via their phones than on a traditional computer. Cars use computers to drive themselves, and biological implants are controlled via Bluetooth. Neither of these new developments has meaningful security features built in, but no one would ever be interested in hacking insulin pumps or pacemakers, right?

Taking computer security threats seriously starts by putting serious thought and effort behind our research and conclusions. The government does not provide information like this to the public, so we rely on vendors and security companies (whose primary interest is profit) to do it for us. When that “analysis,” which is far from rigorous, is delivered to decision-makers who are used to dealing with conclusions developed through a much more robust methodology, their decisions can have far-reaching negative consequences.

Sometimes a quick-and-dirty analysis is right, and as long as you’re OK with the fact that that is all that most malware analysis is, OK. But if you are planning on making serious decisions about the threat you face from cyberspace, you should really take the time and effort to ensure that your analysis has looked beyond what IDA shows and considered more diverse and far-reaching factors.