Good Cyber Security is Not Glamorous

One of the most common reasons organizations push back on cyber security spending is the lack of a “return on investment.” All that fancy, shiny cyber-y stuff costs a lot of money without providing a clear benefit commensurate with the expenditure. Firewalls are expensive. IDS/IPS are expensive. SIEMs are expensive. Talent to run it all (if you can even find it) is expensive.

Yet for all that expense the end result may still be a breach that costs millions of dollars, and the source of that breach is almost assuredly something that makes all that expense seem like a waste, not an investment. Advancing cyber security starts with promulgating the message that, like most things in life, success is about the grind.

The Importance of Blocking and Tackling

A good, sound security capability can in fact be very pedestrian. Take some time to look at the SANS Top 25 (formerly 10) lists going back several years. Do the same for the OWASP Top 10. Look closely and you’ll notice that while the names may change, the basic problems do not. Buffer overflows and cross-site scripting are not “advanced” or “sophisticated,” but they work. All year, every year.
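
To make that concrete, here is a minimal sketch of the kind of flaw that keeps cross-site scripting on those lists year after year; everything here (the Flask app, route, and parameter names) is invented for illustration:

```python
# A deliberately vulnerable route and its fix. The app, route, and
# parameter names are hypothetical; requires Flask.
from flask import Flask, request
from markupsafe import escape

app = Flask(__name__)

@app.route("/search")
def search_vulnerable():
    # VULNERABLE: user input is echoed into the page unescaped, so
    # /search?q=<script>alert(1)</script> runs in the victim's browser.
    q = request.args.get("q", "")
    return f"<h1>Results for {q}</h1>"

@app.route("/search-safe")
def search_safe():
    # FIX: escape untrusted input before embedding it in HTML.
    q = request.args.get("q", "")
    return f"<h1>Results for {escape(q)}</h1>"
```

Nothing about that bug is sophisticated: a developer echoed input without escaping it. That is precisely why it keeps working.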

Addressing the most common security problems facing any enterprise does not require floor-to-ceiling displays showing maps of the world and stoplight charts and data flows from country to country. It doesn’t require a lot of software or hardware or subscriptions or licenses or feeds. The biggest problems are the most common ones that don’t necessarily require advanced skills or technology to resolve. You can harden your enterprise against the most likely and most dangerous problems without ever talking to a salesperson or worrying about how much you’re going to have to pay that guy with all the letters after his name.

Are You Ready For Some Football?

It wouldn’t be fall without a football analogy, so here is the first one of the season: if you knew who Odell Beckham Jr. was before Lena Dunham did, you know where I’m going with this. If you didn’t, go to YouTube and enter his name. I’ll wait…

Amazing plays are not the result of practicing acrobatics in full pads. Wide receivers don’t take contortionist classes. Training for football season at any level is about fundamentals. Everyone doing the same drills, or variations on a theme, that they’ve done since they first put on a helmet. Why? Because the bulk of success on the field is attributable to fundamentals. Blocking and tackling. Plays that make the highlight reels are the result of individual athleticism, instincts, and drive, but no receiver gets into position to make the highlight reel without mastering the basics first.

A team of journeymen who are well versed in the basics alone may not make the playoffs, much less the Super Bowl, but that’s not the point; you want to avoid being beaten by the second string of the local community college. If you want to know how well buying expensive “solutions” to your problems works, I invite you to check out the drama that has been the Washington Redskins since 1999.

It’s About Perspective

You can’t read an article on cybersecurity and not see the words “advanced” or “sophisticated” either in the text or the half-dozen ads around the story. Security companies cannot move product or get customers to renew subscriptions without promoting some level of fear, uncertainty and doubt. No product salesperson will bring up the fact that procuring the next-generation whatever they are selling is almost assuredly buying a castle that will be installed on a foundation of sand (to be fair: it’s not their job to revamp your security program).

This is not to say you ignore the truly advanced or dangerous, but you need to put it all into perspective. You don’t buy an alarm system for your house and then leave your doors and windows open. You don’t spend more money on the car with the highest safety ratings and then roll out without wearing your seat belt. You don’t buy your kids bicycle helmets and then set them loose on the freeway. You do all the things that keep you and yours safe, because to ignore the basics undermines the advanced. The same holds true in cyber security, and the sooner we put on our Carhartts and spend more sweat equity than we do cash, the sooner we are likely to see real improvements.

Intelligence Agencies Are Not Here to Defend Your Enterprise

If there is a potentially dangerous side-effect to the discovery of a set of 0-days allegedly belonging to the NSA, it is the dissemination of the idea, and the credulous belief of same, that intelligence agencies should place the security of the Internet – and the commercial concerns that use it – above their actual missions. It displays an all-too-familiar ignorance of why intelligence agencies exist and how they operate. Before you get back to rending your hair and gnashing your teeth, let’s keep a few things in mind.

  1. Intelligence agencies exist to gather information, analyze it, and deliver their findings to policymakers so that they can make decisions about how to deal with threats to the nation. Period. You can, and agencies often do, dress this up and expand on it in order to motivate the workforce, or more likely grab more money and authority, but when it comes down to it, stealing and making sense of other people’s information is the job. Doing code reviews and QA for Cisco is not the mission.
  2. The one element in the intelligence community that was charged with supporting defense is no more. I didn’t like it then, and it seems pretty damn foolish now, but there you are, all in the name of “agility.” NSA’s Information Assurance Directorate (IAD) had the potential to do the things that all the security and privacy pundits imagine should be done for the private sector, but their job was still keeping Uncle Sam secure, not Wal-Mart.
  3. The VEP (Vulnerabilities Equities Process) is an exercise in optics. “Of course we’ll cooperate with your vulnerability release program,” says every inter-agency representative. “As long as it doesn’t interfere with our mission,” they whisper up their sleeve. Remember in every spy movie you ever saw, how the spooks briefed Congress on all the things, but not really? That.
  4. 0-days are only 0-days as far as you know. What one can make another can undo – and so can someone else. The idea that someone, somewhere, working for someone else’s intelligence agency might not also be doing vulnerability research, uncovering exploitable conditions in popular networking products, and using same in the furtherance of their national security goals is a special kind of hubris.
  5. Cyber security simply is not the issue we think it is. That we do any of this cyber stuff is only (largely) to support more traditional instruments and exercises of national power. Cyber doesn’t kill. Airstrikes kill. Snipers kill. Mortars kill. Policymakers are still far and away concerned with things that go ‘boom,’ not bytes. In case you haven’t been paying attention for the past 15 years, we’ve had actual, shooting wars to deal with, not cyber war.

I have spent most of my career being a defender (in and out of several different intelligence agencies). I understand the frustration, but blaming intelligence agencies for doing their job is not helpful. If you like living in the land of the free, it’s important to note that rules that would preclude the NSA from doing what it does merely handicap us; no one we consider a threat is going to stop looking for and exploiting holes. The SVR or MSS do not care about your amicus brief. The Internet is an important part of our world, and we should all be concerned about its operational well-being, but the way to reduce the chance that someone can crack your computer code is to write better code, and test it faster than the spooks can.

The Airborne Shuffle in Cyberspace

I did my fair share supporting and helping develop CYBERCOM’s predecessor, but I have no special insight into what is going on there today. I am loath to criticize when I don’t know all the details, but still I see reports like this and scratch my head and wonder: why is anyone surprised?

Focus. If you have to wake up early to do an hour of PT, get diverted afterwards to pee in a cup, finally get to work and develop a good head of steam, only to leave early to go to the arms room and spend an hour cleaning a rifle, you’re not going to develop a world-class capability in any meaningful time-frame. Not in this domain. Not to mention the fact that after about two years whatever talent you’ve managed to develop rotates out and you have to start all over again.

Speed. If you have to call a meeting to call a meeting, and the actual meeting can’t take place for two weeks because everyone who needs to be there is involved in some variation of the distractions noted above, or TDY, you have no chance. It also doesn’t help that when you manage to have the meeting you are forced to delay decisions because of some minutia. You’re not just behind the power curve, you’re running in the opposite direction.

Agility. If your business model is to train generalists and buy your technology…over the course of several years…you are going to have a hard time going up against people with deep expertise who can create their own capabilities in days. Do we need a reminder of how effective sub-peer adversaries can be against cutting-edge military technology? You know what the people attacking SWIFT or major defense contractors aren’t doing? Standing up a PMO.

The procurement and use of tanks or aircraft carriers is limited to the military in meat-space, but in cyberspace anyone can develop or acquire weapons and project power. Globally. If you’re not taking this into consideration you’re basically the 18th Pomeranians. Absent radical changes no government hierarchy is going to out-perform or out-maneuver such adversaries, but it may be possible to close the gaps to some degree.

Focus. You should not lower standards for general purpose military skills, but in a CONUS office environment you can exercise more control over how that training is performed and scheduled. Every Marine a rifleman, I get it, but shooting wars are relatively rare; the digital conflict has been engaged for decades (and if your cyber troops are hearing shots fired in anger, you’ve probably already lost).

Speed. Hackers don’t hold meetings, they open chat sessions. Their communication with their peers and partners is more or less constant. If you’re used to calling a formation to deliver your messages orally, you’re going to have to get used to not doing that. Uncomfortable with being glued to a screen – desktop or handheld? You’re probably ill-suited to operate in this domain.

Agility. You are never going to replicate “Silicon Valley” in the DOD without completely disrupting DOD culture. The latter is a zero-defect environment, whereas the former considers failures to be a necessary part of producing excellence. You cannot hold company-level command for 15 years because it’s the job you’re best suited to; you can be one of the world’s best reverse engineers for as long as you want to be. What is “normal” should mean nothing inside an outfit like CYBERCOM.

Additional factors to consider…

Homestead. If you get assigned to CYBERCOM you’re there for at least 10 years. That’s about 20 dog years from the perspective of the domain and related technology, and that depth of experience will be invaluable if you are serious about effective performance on the battlefield.

Lower Rank/Greater Impact. Cyberspace is where the ‘strategic corporal’ is going to play an out-sized role. At any given moment the commander – once their intent is made clear – is the least important person in the room.

Bias for Action. In meat-space if you pull the trigger you cannot call back the bullet. If your aim is true your target dies. In cyberspace your bullets don’t have to be fatal. The effect need only be temporary. We can and should be doing far more than we apparently are, because I guarantee our adversaries are.

How Do You Get Good at Incident Response?

The Verizon Data Breach Report has been saying it for years. The Forrester/Veracode report Planning for Failure reiterates the same points. It is only a matter of time before your company is breached. Odds are you won’t know about the breach for months, someone other than your security team is going to tell you about it, and the response to the breach is going to be expensive, disruptive, time-consuming and…less than optimal.

If you’ve been breached before, or if you’re an enterprise of any size, it’s not like you don’t have an incident response plan, but as Mike Tyson famously said: “Everyone has a plan till they get hit in the mouth.” When is the last time you tested that plan? Is your plan 500 pages in a 3” three-ring dust-covered binder sitting on a shelf in the SOC? That’s not a plan, that’s praying.

Your ability to respond to breaches needs to be put into practice by sparring against partners who are peers or near-peers to the kinds of threat actors you face on a daily basis. How do you do that? By testing with realism:

Over long(er) terms. Someone who wants what you have is not going to stop after a few days or even a few weeks. Adversaries whose efforts will accelerate by years because of stolen intellectual property don’t mind waiting months; adversaries who strategize over centuries don’t mind waiting years.

Goal-oriented. Serious threat actors attack you for a reason: they are going to get paid for your data. Efforts that don’t help them accomplish their goals are time and resources wasted. The vulnerability-of-the-month may do nothing to advance their agenda; they’re going to find a way in that no one on your staff even knows exists.

In the context of your environment. The best security training in the world is still contrived. Even the most sophisticated training lab is nothing like the systems your security team has to work with every day.

Contrast the above to your average pen-test, which is short, “noisy,” and limited in scope. Pen-tests need to be done, but recognize that for the most part pen-testing has become commoditized and increasingly vendors are competing on speed and price. Is that how you’re going to identify and assess potential risks? Lowest bidder?

If we’re breached I’ll call in outside experts.

As well you should, but what are you going to do while you wait for them to show up?

Even if you have a dedicated security team in your company, odds are that team is trained to “man the battlements,” so to speak. They’re looking for known indicators of activity along known vectors; they’re not trained to fight off an enemy who has come in through a hole of their own making. It doesn’t make sense to keep a staff of IR specialists on the team; that’s an expensive prospect for even the most security-conscious organization. But it does make sense to train your people in basic techniques, just enough to prevent wholesale pillaging. More importantly, they need to practice those techniques so that they can do them on a moment’s notice, under fire.

Your enterprise is not a castle. There is no wall you can build that will be high enough or thick enough to repel all attackers. If your definition of defensive success is “keep bad guys out” you are setting yourself and your people up for failure. The true measure of defensive success is the speed at which you detect, eject and mitigate the actions of your attackers. If you don’t have a corresponding plan to do that yourself – or to hold out long enough for the cavalry to come – and that plan is not regularly and realistically tested, you’re planning for victimhood.

Cyber Security Through the Lens of Theranos

[This is not me piling on to the woes of Theranos or its CEO. It’s not. Well, it is to the degree that you can’t draw analogies without pointing out some embarrassing truths, but let’s be honest: we have all, like Fox Mulder, wanted to believe in something fantastical, despite all signs to the contrary.]

Credibility Matters. Any product, any service, any methodology that promises the world – or something akin to it – should be viewed with a jaundiced eye. If the driving force behind said promise is effectively a random stranger, even more so. Cyber security has been studied to death. The idea that one person has uncovered something no one else in the field has figured out is so unlikely you almost have to assume they’re full of ****.  I worked on something that was thought to be novel. Turns out it wasn’t, which means we were on to something, but it could be argued that better or at least faster minds than ours were already on the case.

Enablers Are Evil. When the unit of measure is “billions” all sorts of yahoos will come out of the woodwork. Most of them are there because you’re measuring things in billions, not because what you’re doing is actually worth billions. In the case of Theranos they’re worth nothing and have been for a long time. In the security space it is rare to find a company whose valuation is not by and large aspirational. Those doing the assessing really have no idea if those solutions will stand the test of time. And by “time” I mean “the point at which customers realize they’ve been had.”

The Importance of Being Honest. People are putting their trust in you; you owe it to them to be honest and forthright. When over 90% of “your” work has nothing to do with what you’ve sold people on, that’s what most people would call fraud. You exacerbate the problem with half-measures and stalling tactics, so not only are you a liar, you’re sleazy as well. How is that helping the cause exactly? Are you in this business to have an impact or are you just here for the paycheck and what passes for fame? It’s OK, we’re all only human, just be up front about it.

I have to imagine that in the beginning everyone starts out with the best of intentions, but given the nature of the work and the potential impact it can have, we need to hold ourselves to higher standards. If we’re not checking ourselves we’re setting ourselves up for a situation where checks will be imposed upon us by people who know very nearly nothing of what it takes to succeed, much less advance security.

Cyber Diplomacy Will Not Save You

The idea that the promises of diplomats and statesmen will render cyberspace a safe place is a fantasy you can ill afford to entertain if you want to remain a going concern.

Many positive things have been said about the recent memorandum of understanding between China and the US, in particular the section dealing with cyber security. Just as much derision has been heaped upon it. From the perspective of the diplomats the agreement is a win because it gives us ammunition to use in the future. When another data breach or attack takes place and is attributed to China they can say “You are breaking your promise and what follows is on you.”

From the perspective of the nay-sayers the point is simple: because you cannot verify the actions – or inaction – of your adversary, they will always have deniability. Yes, you can shave most of these problems with Occam’s razor, but when you are talking about taking legal action that may deny someone their liberty, or in an extreme case strategic action, you kind of want to base your decision on something more than ‘it stands to reason.’

Talk is cheap. Actions speak louder than words. Clichés that could not be more apt when it comes to the issue of computer and information security. The US indicting five PLA officers for cyber-crimes is motion; China actually arresting an American woman is action. One of the six aforementioned people knows what a prison cell looks like. Guess which country is showing it’s hard on (alleged) bad actors?

I’m like most people in that I would be happy if diplomacy led to concrete action, but until the online world is actually sunshine and lollipops it is important for everyone to remember that on a practical level, all this hand-shaking means nothing. You are still primarily responsible for your own cyber defense and no one is going to make you whole if you fail. Memorandum, treaty, or pinky-swear, attacks – state-sponsored/sanctioned or not – are not going to stop. IP theft isn’t going away. Data breaches will continue apace. We have no way of stopping bad things from happening online short of a global re-engineering effort that remakes the Internet and everything that rides on it securable and surveil-able.

That is never going to happen.

If what happened last week reminds you of another famous event in ironic diplomatic history, you’re not far off. Until people die in sufficient numbers due to a cyber-attack, do not expect radical or even incremental change, because the foreseeable future of online security is still death-by-a-thousand-cuts…something I would point out the Chinese invented.

Functionality > Security

It was reported recently that a security researcher found several exploitable vulnerabilities in a FireEye product. ‘I tried to work with them,’ he said, but was apparently rebuffed/ignored, so here you go: an 0-day. There are at least three sides to every vulnerability disclosure story so I don’t particularly care about who said what when. What we all should be concerned about is the law that applies to all software, regardless of what it does for a living. That law?

Functionality trumps security.

Every. Time.

People don’t think twice when a random commodity software product is found to have some horrendous vulnerability that makes it look like its code was produced by a band of monkeys rejected from the Shakespeare project, but when code belonging to something meant to keep your enterprise safe is found to have holes, that’s news.

It shouldn’t be.

I’ve been involved with enough security software projects to know that even the most security-minded people want their stuff to work first, then they lock things down. I don’t know that there is such a thing as a secure developer; there are just developers with varying levels of concern about security and different ideas on when that concern should be addressed. That any security product has holes in it should not be a surprise; what’s a surprise is that disclosures like this are not more common.

In fact, I would not be surprised if the last portion of the year saw an increase in the number of flaws in security products being revealed publicly, with a corresponding increase in the level of hype. Much of that hype will be justified because – to draw on a popular security analogy – if someone sells you a brick wall, you expect it to be able to withstand a certain level of physical damage; you do not expect to find out that key bricks are actually made of papier mâché.

Does that make the security company who sold you the software negligent? Well, does it work as advertised? Yes? Then the answer is probably ‘no.’ Remember: security products are not silver bullets: EVERYTHING you use has holes in it and you need to prepare and respond accordingly. You don’t terminate your workforce because people are demonstrably the weakest link when it comes to security, you manage the problem and associated risk. The same should be true for ALL the software you run, regardless of what it does for a living.

I know enough legacy-Mandiant people to know that they go to work every day trying to do the right thing and this latest development is just another example of how thankless computer security is (regardless of who you work for). Like the philanderer who didn’t use Ashley Madison pointing and laughing at the guy who did, the hypocrisy factor is going to go through the roof. My suggestion: save your self-righteousness and channel that energy into tightening your own work and helping tighten up the work of others. Demonstrate that you’re about security, not being famous.

No Accountability No Peace (of Mind)?

Thanks to the ever-vigilant Richard Bejtlich for pointing out Jeremiah Grossman’s slides on the idea of INFOSEC security guarantees. Reading them reminded me of a saying, the exact wording of which escapes me now, but it is something along the lines of ‘some analogies are useful’ and others…not so much.

Jeremiah does a good job explaining how guarantees can be a discriminator and how certain issues surrounding guarantees can be addressed, but there are a few factors that I think make this an untenable prospect:

  • Boots are not Computer Systems. A great American outdoor gear company has no problem issuing a 100% guarantee on their outdoor clothing because they have intimate knowledge and granular control over every aspect of a given garment; you cannot say the same for any sufficiently large or complex piece of software. As the CSO of Oracle recently pointed out, big software companies try to write secure code and they check for and patch vulnerabilities when they find them; but as pretty much the rest of the Internet pointed out in response: that’s not enough. CIO Alice knows her enterprise is running MS Windows, but neither Alice nor anyone who works for her knows the Windows kernel like Bob, the guy breaking into Alice’s company, does.
  • Money Over Everything. You know another reason why the great American outdoor gear company doesn’t mind issuing a 100% guarantee on their products? Margins. 1 boot out of 10,000 goes bad? “Oh my, how ever will we afford this? Oh, right, those boots cost me $10 to make and $10 to ship and market…and retail for $200 a pair.” I don’t know any developers or security practitioners who are poor, but I also don’t know any whose money is so long they could survive more than one claim against their labors.
  • Compliance. How does victim Big Co. prove they’re compliant with the terms of the guarantee? Yes, we are awash in data these days, but do you have someone on staff who can effortlessly and instantly call that data up? What if your findings are disputed? Yes, if you can conduct an effective forensic investigation you might be able to pinpoint a failure…but who covers the cost of the investigation? What if, in trying to claim that $100,000 guarantee payout you have to spend $500,000 over six months?
  • Fine print. A guarantee isn’t really useful to a customer if it is so heavily lawyered-up that it would be useless to file a claim. An example Richard points out in his post: if someone manages to overcome a defense via a sufficiently novel approach, the vendor isn’t liable because it is not a ‘failure’ on their part. Yet a sufficiently resourceful and motivated attacker isn’t going to break a window or kick in a door – where he knows the alarm system sensors are – he’s going to take a Sawzall to a wall and walk through the studs.

Competent practitioners can and should take pride in and stand by their work, but there are far too many factors involved in “securing” a thing for them all to be identified, calculated and accounted for in a way that makes a guarantee both meaningful and valuable to both parties. Let’s be frank: nothing is coded to be secure; it is coded to be functional. Functionality and utility are what people are willing to pay for; security is what they are forced to pay for. Not the same thing.

“Cyber MAD” is a Bad Idea. Really Bad.

I don’t know how many times I have to say this, but nothing screams “legacy future” like trying to shoe-horn cold-war thinking into “cyber.” This latest attempt doesn’t disappoint (or maybe it does, depending on how you look at it) because it completely misses two key points:

  1. Cyberspace is not meat-space;
  2. Digital weapons are nothing like atomic ones.

Yes, like the nuclear arms race, it is in fact more expensive to defend yourself than it is to attack someone. Generally speaking. It’s OK to paint with a broad brush on this point because so many entities online are so woefully inadequate when it comes to defense that we forget there are actually some who are quite hard and expensive to attack. Any serious colored-hat who is being honest will tell you that they deal with more than their fair share of unknowns and ‘unknown unknowns’ when going after any given target.

But unlike malicious actions in cyberspace, there is no parsing nuclear war. You’re nuked, or you’re not. Cyber-espionage, cyber-crime, cyber-attack…all indistinguishable in all technically meaningful ways. Each has a different intent, which we are left to speculate about after-the-fact. In the other scenario, no one is around to speculate why a battalion of Reds turned their keys and pushed their buttons.

Attacker identity is indeed important whether you’re viewing a potential conflict through nuclear or digital lenses, but you know what excuse doesn’t work in the nuclear scenario? “It wasn’t me.”

Um, IR burn says it was…

There is no such equivalent in cyberspace. You can get close – real close – given sufficient data and time, but there will be no Colin Powell-at-the-UN-moment in response to a cyber threat because “it wasn’t me” is a perfectly acceptable excuse.

But we have data.

You can fabricate data.

You know what you can’t fabricate? Fallout.

All of this, ALL OF THIS, is completely pointless because if some adversary had both the will and the wherewithal to attack and destroy our – and just our – critical infrastructure and national security/defense capabilities via cyber means…what are we meant to strike back with? How are those who happen to be left unscathed supposed to determine who struck first? I was not a Missileer, but I’m fairly certain you can’t conduct granular digital attribution from the bottom of an ICBM silo.

What is the point of worrying about destruction anyway? Who wants that? The criminals? No, there is too much money to be made keeping systems up and careless people online. The spies? No, there is too much data to harvest and destruction might actually make collection hard. Crazy-bent-on-global-domination types? This is where I invoke the “Movie Plot Threat” clause. If the scenario you need to make your theory work in cyberspace is indistinguishable from a James Bond script, you can’t be taken seriously.

MAD for cyberspace is a bad idea because it’s completely academic and does nothing to advance the cause of safety or security online (the countdown to someone calling me “anti-intellectual” for pointing out this imperial nudity starts in 5, 4, 3…). MAD, cyber deterrence, all this old think is completely useless in any practical sense. You know why MAD and all those related ideas worked in the 60s? Because they dealt with the world and the problem in front of them as it was, not how they wished it to be.

I wholeheartedly agree that we need to do more and do more differently in order to make cyberspace a safer and more secure environment. I don’t know anyone who argues otherwise. I’m even willing to bet there is a period of history that would provide a meaningful analog to the problems we face today, but the Cold War isn’t it.

Malware Analysis: The Danger of Connecting the Dots

The findings of a lot of malware analysis are not in fact “analysis;” they’re a collection of data points linked together by assumptions whose validity and credibility have not been evaluated. This lack of analytic methodology could prove exceedingly problematic for those charged with making decisions about cyber security. If you cannot trust your analysis, how are you supposed to make sound cyber security decisions?

Question: If I give you a malware binary to reverse engineer, what do you see? Think about your answer for a minute and then read on. We’ll revisit this shortly.

It is accepted as conventional wisdom that Stuxnet is related to Duqu, which is in turn related to Flame. All of these have been described as “sophisticated” and “advanced,” so much so that they must be the work of a nation-state (such work presumably requiring large amounts of time, lots of skilled people, and code written for purposes beyond simply siphoning off other people’s cash). The claim that the US government is behind Stuxnet has consequently led people to assume that all related code is US sponsored, funded, or otherwise backed.

Except for the claim of authorship, all of the aforementioned data points come from people who reverse engineer malware binaries. These are technically smart people who practice an arcane and difficult art, but what credibility does that give them beyond their domain? In our quest for answers do we give too much weight to the conclusions of those with discrete technical expertise and fail to approach the problem with sufficient depth and objectivity?

Let’s take each of these claims in turn.

Are there similarities if not outright sharing of code in Stuxnet, Duqu and Flame? Yes. Does that mean the same people wrote them all? Do you believe there is a global marketplace where malware is created and sold? Do you believe the people who operate in that marketplace collaborate? Do you believe that the principle of “code reuse” is alive and well? If you answered “yes” to any of these questions then a single source of “advanced” malware cannot be your only valid conclusion.

Is the code in Stuxnet, etc. “sophisticated?” Define “sophisticated” in the context of malware. Forget about malware and try to define “sophisticated” in the context of software, period. Is Excel more sophisticated than Photoshop? When words have no hard and widely-accepted definitions, they can mean whatever you want them to mean, which means they have no meaning at all.

Can only a nation-state produce such code? How many government-funded software projects are you aware of that work as advertised? You can probably count on one hand and have fingers left over. But now, somehow, when it comes to malware, suddenly we’re to believe that the government has gotten its shit together?

“But Mike, these are, like, weapons. Super secret stuff. The government is really good at that.”

Really? Have you ever heard of the Osprey? Or the F-35? Or the Crusader? Or the JTRS? Or Land Warrior? Groundbreaker? Trailblazer? Virtual Case File?

I’m not trying to trivialize the issues associated with large and complex technology projects, my point is that a government program to build malware would be subject to the same issues and consequently no better – and quite possibly worse – than any non-governmental effort to do the same thing. Cyber crime statistics – inflated though they may be – tell us that governments are not the only entities that can and do fund malware development.

“But Mike, the government contracts out most of its technology work. Why couldn’t they contract out the building of digital weapons?”

They very well could, but then what does that tell us? It tells us that if you wanted to build the best malware you have to go on the open market (read: people who may not care who they’re working for, as long as their money is good).

As far as the US government “admitting” that they were behind Stuxnet: they did no such thing. A reporter, an author of a book, says that a government official told him that the US was behind Stuxnet. Neither the President of the United States, nor the Secretary of Defense, nor the Directors of the CIA or NSA got up in front of a camera and said, “That’s us!” which is what an admission would be. Let me reiterate: a guy who has a political agenda told a guy who wants to sell books that the US was behind Stuxnet.

It’s easy to believe the US is behind Stuxnet, as much as it is to believe Israel is behind it. You know who else doesn’t want countries that don’t have nuclear weapons to get them? Almost every country in the world, including those that currently have nuclear weapons. You know who else might not want Iran – a majority Shia country – to have an atomic bomb? Roughly 30 Sunni countries for starters, most of which could afford to go onto the previously mentioned open market and pay for malware development. What? You hadn’t thought about the non-proliferation treaty or that Sunni-Shia thing? Yeah, neither has anyone working for Kaspersky, Symantec, F-Secure, etc., etc.

Back to the question I asked earlier: What do you see when you reverse engineer a binary?

Answer: Exactly what the author wants you to see.

  • I want you to see words in a language that would throw suspicion on someone else.
  • I want you to see that my code was compiled in a particular foreign language (even though I only read and/or write in a totally different language).
  • I want you to see certain comments or coding styles that are the same or similar to someone else’s (because I reuse other people’s code).
  • I want you to see data about compilation date/time, PDB file path, etc., which could lead you to draw erroneous conclusions but which have no bearing on malware behavior or capability (see the sketch just below).
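
To see how thin this “evidence” is, here is a minimal sketch, assuming the third-party pefile library and an invented sample path, of how the compile timestamp everyone cites is read, and how trivially an author can plant whatever value they like:

```python
# Read, then plant, the PE compile timestamp. Assumes the third-party
# pefile library (pip install pefile) and a hypothetical sample path.
import datetime
import pefile

pe = pefile.PE("sample.bin")  # hypothetical sample

# The "compile time" is just a 32-bit field the build tool writes.
ts = pe.FILE_HEADER.TimeDateStamp
print("Claimed compile time:",
      datetime.datetime.fromtimestamp(ts, tz=datetime.timezone.utc))

# Planting a false flag is a one-liner: set any value you like (here,
# an arbitrary 1992 date) and write out a copy.
pe.FILE_HEADER.TimeDateStamp = 0x2A425E19
pe.write("forged.bin")
```

Every one of these artifacts is under the author’s control, which is exactly why none of them proves authorship on its own.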

Contrary to post-9/11 conventional wisdom, good analysis is not dot-connecting. That’s part of the process, but it’s not the whole or only process. Good analysis has methodology behind it, as well as a fair dose of experience or exposure to other disciplines. Most of all, whenever possible, there are multiple, verifiable, meaningful data points to help back up your assertions. Let me give you an example.

I used to work with a guy we’ll call “Luke.” Luke was a firm believer in the value of a given type of data. He thought it was infallible. So strong were Luke’s convictions about the findings he produced using only this particular type of data that he would draw conclusions about the world that flew in the face of what the rest of us like to call “reality.” If Luke’s assertions were true, WW III would have been triggered, but as many, many other sources of data were able to point out, Luke was wrong.

There was a reason why Luke was the oldest junior analyst in the whole department.

Luke, like a lot of people, fell victim to a number of problems, fallacies and mental traps when he attempted to draw conclusions from data. This is not an exhaustive list, but it is illustrative of what I mean.

Focus Isn’t All That. There is a misconception that narrow and intense focus leads to better conclusions. The opposite tends to be true: the more you focus on a specific problem, the less likely you are to think clearly and objectively. Because you just “know” certain things are true, you feel comfortable taking shortcuts to reach your conclusion, which in turn simply drives you further away from the truth.

I’ve Seen This Before. We give too much credence to patterns. When you see the same or very similar events taking place, or the same tactics used, your natural reaction is to assume that what is happening now is what happened in the past. You discount other options because it’s “history repeating itself.”

The Shoehorn Effect. We don’t like questions that don’t have answers. Everything has to have an explanation, regardless of whether or not the explanation is actually true. When you cannot come up with an explanation that makes sense to you, you will fit the answer to match the question.

Predisposition. We allow our biases to drive us to seek out data that supports our conclusions and to discount data that refutes them.

Emotion. You cannot discount the emotional element involved in drawing conclusions, especially if your reputation is riding on the result. Emotions about a given decision can run so high that they overcome your ability to think clearly. Rationalism goes out the window when your gut (or your greed) overrides your brain.

How can we overcome the aforementioned flaws? There are a range of methodologies analysts use to improve objectivity and criticality. These are by no means exhaustive, but they give you an idea of the kind of effort that goes into serious analytic efforts.

Weighted Ranking. It may not seem obvious to you, but when presented with two or more choices, you choose X over Y based on the merits of X, Y (and/or Z). Ranking is instinctual and therefore often unconscious. The problem with most informal efforts at ranking is that they’re one-dimensional.

“Why do you like the TV show Homicide and not Dragnet?”

“Well, I like cop shows but I don’t like black-and-white shows.”

“OK, you realize those are two different things you’re comparing?”

A proper ranking means you’re comparing one thing against another using the same criteria. Using our example you could compare TV shows based on genre, sub-genre, country of origin, actors, etc., rank them according to preference in each category, and then tally the results. Do this with TV shows – or any problem – and you’ll see that your initial, instinctive results will be quite different than those of your weighted rankings.
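
A minimal sketch of what that tallying might look like, using the TV-show example above (the criteria, weights, and scores are all invented for illustration):

```python
# Weighted ranking over the TV-show example. Criteria, weights, and
# per-criterion scores (1-5) are invented for illustration.
shows = {
    "Homicide": {"genre": 5, "era": 4, "cast": 4},
    "Dragnet":  {"genre": 5, "era": 1, "cast": 3},
}

# How much each criterion matters to you; weights sum to 1.0.
weights = {"genre": 0.5, "era": 0.3, "cast": 0.2}

def weighted_score(scores):
    # Combine per-criterion scores into one comparable number.
    return sum(weights[c] * s for c, s in scores.items())

# Rank by the weighted total rather than by gut feel.
for name in sorted(shows, key=lambda n: weighted_score(shows[n]), reverse=True):
    print(f"{name}: {weighted_score(shows[name]):.2f}")
```

The point is not the arithmetic; it is that every option gets judged against the same criteria, with your priorities made explicit instead of left to instinct.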

Hypothesis Testing. You assert the truth of your hypothesis through supporting evidence, but you are always working with incomplete or questionable data, so you can never prove a hypothesis true; we accept it as true until evidence surfaces that suggests it is false (see the bias note above). Information becomes evidence when it is linked to a hypothesis, and evidence is valid once we’ve subjected it to questioning: where did the information come from? How plausible is it? How reliable is it?
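
One way to make that discipline concrete is a sketch loosely modeled on the Analysis of Competing Hypotheses: rather than tallying support, you count the evidence each hypothesis cannot explain. The hypotheses and evidence below are invented for illustration:

```python
# Each piece of evidence is scored against each hypothesis as
# (C)onsistent, (I)nconsistent, or (N)eutral; invented data throughout.
hypotheses = ["US-built", "Contractor-built", "Third-party code reuse"]

evidence = [
    ("Shared code with Duqu",
     {"US-built": "C", "Contractor-built": "C", "Third-party code reuse": "C"}),
    ("English build-path strings",
     {"US-built": "C", "Contractor-built": "N", "Third-party code reuse": "N"}),
    ("Components seen for sale on the open market",
     {"US-built": "I", "Contractor-built": "C", "Third-party code reuse": "C"}),
]

# Evidence consistent with many hypotheses discriminates between none
# of them; inconsistencies are what actually narrow the field.
for h in hypotheses:
    misses = sum(1 for _, scores in evidence if scores[h] == "I")
    print(f"{h}: {misses} inconsistent item(s)")
```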

Devil’s Advocacy. Taking a contrary or opposing position from what is the accepted answer helps overcome biases and one-dimensional thinking. Devil’s advocacy seeks out new evidence to refute “what everybody knows,” including evidence that was disregarded by those who take the prevailing point of view.

This leads me to another point I alluded to earlier and that isn’t addressed in media coverage of malware analysis: what qualifications does your average reverse engineer have when it comes to drawing conclusions about geo-political-security issues? You don’t call a plumber to fix your fuse box. You don’t ask a diplomat about the latest developments in no-till farming. Why in the world would you take at face value what a reverse engineer says about anything except very specific, technical findings? I’m not saying people are not entitled to their opinions, but credibility counts if those opinions are going to have value.

So where are we?

  • There are no set or even widely accepted definitions related to malware (e.g. what is “sophisticated” or “advanced”).
  • There is no widely understood or accepted baseline of what sort of technical, intellectual, or actual capital is required to build malware.
  • Data you get out of code, through reverse engineering or from source, is not guaranteed to be accurate when it comes to issues of authorship or origin.
  • Malware analysts do not apply any analytic methodology in an attempt to confirm or refute their single-source findings.
  • Efforts to link data found in code to larger issues of geo-political importance are at best superficial.

Why is all of this important? Computer security issues are becoming an increasingly important factor in our lives. Not that everyone appreciates it, but look at where we have been and where we are headed. Just under 20 years ago few people in the US, much less the world, were online; now more people in the world get online via their phones than via a traditional computer. Cars use computers to drive themselves, and biological implants are controlled via Bluetooth. Neither of these new developments has meaningful security features built in, but no one would ever be interested in hacking insulin pumps or pacemakers, right?

Taking computer security threats seriously starts by putting serious thought and effort behind our research and conclusions. The government does not provide information like this to the public, so we rely on vendors and security companies (whose primary interest is profit) to do it for us. When that “analysis,” which is far from rigorous, is delivered to decision-makers who are used to dealing with conclusions developed through a much more robust methodology, their decisions can have far-reaching negative consequences.

Sometimes a quick-and-dirty analysis is right, and as long as you’re OK with the fact that that is all most malware analysis is, fine. But if you’re planning on making serious decisions about the threat you face from cyberspace, you should really take the time and effort to ensure that your analysis looks beyond what IDA shows and considers more diverse and far-reaching factors.