The Lessons of PFC Manning

Make no mistake: PFC Manning made some very bad decisions and he should pay a very heavy price. Taking a step back, however, one can see that in his betrayal he has done something of a public service for the security and operational communities across the military, government, and commercial worlds.

Lesson number one is that your current computer security regime is probably a waste of time and effort. Even in what should have been an extremely secure environment, computer security was something approaching a joke. If Manning’s M.O. is confirmed, there was a complete security breakdown. Military necessity has always trumped certain non-combat-related protocols during wartime, but being able to run roughshod through Top Secret networks and rip classified material to cracked music CDs beggars belief. No amount of briefings, posters, forms, and extra duties will remedy this problem.

Next: you can’t ensure the confidentiality or integrity of anything on SIPRnet or JWICS (private-sector entities facing a similar insider threat: insert your own network here). There are intelligence community agencies that don’t like to use SIPRnet, the military’s secret-level network, because they think it isn’t nearly as secure as it should be. PFC Manning has demonstrated that the military’s top secret-level network is no more trustworthy. The intelligence posted to JWICS by any DOD intelligence activity (which is most of the intelligence community) has been at risk for who knows how long. If one misguided, low-level troop can do what he is alleged to have done, I don’t even want to think about what a determined adversary – or an agent-in-place – could have been doing all this time.

Finally, more certifications and billions of dollars’ worth of grand strategies will not improve security. Ten CNCIs would not have stopped this; only a fundamental change in culture – both operational and security – would have worked. To the best of my knowledge, money doesn’t buy the widespread dissemination of good security ideas; it just buys more of the same boxes, software, and bodies to reinforce the same dysfunctional security models.

If we are truly serious about improving computer security, if we don’t want $17 billion in CNCI money to go completely to waste, if we are finally tired of shooting our own feet while trekking toward security nirvana, we need to pay attention to reality and design our security solutions accordingly.

If your approach to security impedes a unit’s (company’s, agency’s, etc.) ability to operate effectively, you’re doing it wrong. Security that presumes a condition or series of conditions that do not exist in the real world – much less in combat environments – will fail. The people who need to get things done will intentionally cause it to fail . . . in order to get things done. This is not an original thought, but it is one that needs to be revisited in military, government, and business circles alike. Good security is not perfect; it is good enough for what you need to do, the environment you are operating in, and the duration of your decision-making cycle.

Presume your adversaries now know everything you do, and react accordingly. Much is still speculative, but when the damage assessment is done I’m fairly sure most sane people involved will walk away thinking there is no way to verify the confidentiality or integrity of any piece of information on SIPRnet or JWICS. That makes this a perfect time to implement a living intelligence solution. Maintaining the static production model gives our adversaries the advantage, because what was a mystery is now history and their Pentagon-ology skills have just gotten a huge boost. An environment of living intelligence also makes spy/leak hunting a lot easier by allowing a more granular view of who accessed what, when.
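To make that last point concrete, here is a minimal sketch of record-level read auditing in Python, assuming a simple in-memory store. The names (IntelStore, AccessEvent) are invented for illustration and don’t correspond to any real SIPRnet or JWICS mechanism:

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    # Hypothetical sketch only: an intelligence store that records every
    # read at the individual-record level, so "who accessed what, when"
    # is a query, not a forensic reconstruction.

    @dataclass
    class AccessEvent:
        user: str          # authenticated identity of the reader
        record_id: str     # the specific report or record touched
        timestamp: datetime

    @dataclass
    class IntelStore:
        records: dict = field(default_factory=dict)
        audit_log: list = field(default_factory=list)

        def read(self, user: str, record_id: str):
            # Log the read before returning content; one user pulling
            # thousands of unrelated records becomes visible in the trail.
            self.audit_log.append(
                AccessEvent(user, record_id, datetime.now(timezone.utc))
            )
            return self.records.get(record_id)

        def reads_by_user(self, user: str) -> list:
            return [e for e in self.audit_log if e.user == user]

The design point is simply that the trail is written on every read rather than stitched together after the fact; leak hunting becomes a query over the log instead of an archaeology project.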

Clinging to outmoded security models and approaches will only endanger soldiers and national security, because no one will adhere to them when they are needed most. Stop focusing on moats and walls; the enemy is already inside the wire (literally and figuratively). Most arguments against change – radical or incremental – don’t carry a lot of weight because they presume that what was done to date made us secure. What was done to date made us more insecure than ever; doing more of the same won’t bring improvement.

My greatest concern is that when he is in prison and the final chapter of this story is written, our “solution” will be more strongly worded policy, more stringent procedures, more paperwork . . . all of which will promptly be ignored the next time operational need demands it. We’ll carry on – business as usual – thinking that now we’re safe and secure in our own digital cloister, when in fact we’re simply doing more of the same things that got us in trouble in the first place. The tragedy here is not that we were undone by a shit-bird GI who didn’t have his head screwed on straight; it’s that we will ignore what he is teaching us.

On “cyber intelligence”

Intelligence.

From what I can tell it’s the new hotness in cybersecurity.

From what I can tell it’s also not being done very well. The end result, of course, is that “intelligence” gets treated as a fad or gimmick, which would be a terrible mistake for the cybersecurity community to make.

Let’s lay down a few givens before we go any further. For starters, “intelligence” is like “APT”: if you’re not using the proper definition, you’re just playing marketing tricks. Boiled down to its essence, it works like this:

  • No matter how good the source, a discrete piece of “data” or a data “feed” is not intelligence.
  • Intelligence is not a mashup of disparate data points; that’s “information.”
  • Intelligence is information that is put into context and enhanced with expert (human) input that provides the intelligence consumer with insight (sketched in code below).
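A minimal sketch may help here, assuming nothing beyond the definitions above; the types and sample values are invented for illustration, not drawn from any real product:

    from dataclasses import dataclass

    # Hypothetical illustration of the data -> information -> intelligence
    # hierarchy described above; all names and values are made up.

    @dataclass
    class Datum:
        """A discrete observable from a feed, e.g. an IP seen in a log."""
        value: str
        source: str

    @dataclass
    class Information:
        """Disparate data points correlated into a single picture."""
        data: list       # the underlying Datum objects
        summary: str     # a machine-made mashup; still not intelligence

    @dataclass
    class Intelligence:
        """Information plus expert human context, aimed at a consumer."""
        information: Information
        analyst_assessment: str  # the human judgment that supplies insight
        consumer: str            # e.g. "CISO" or "on-scene responder"

    feed = [Datum("203.0.113.7", "netflow"), Datum("203.0.113.7", "proxy logs")]
    info = Information(feed, "Same address observed across two sensor feeds")
    intel = Intelligence(
        information=info,
        analyst_assessment="Likely one actor probing both segments; "
                           "expect lateral movement attempts",
        consumer="on-scene responder",
    )

No appliance fills in analyst_assessment; that field is where the human lives, and it is the only thing separating the last type from the middle one.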

No application, device or appliance is capable of providing you with intelligence. Such mechanisms may provide you with enhanced information, but without the human element it’s still just information. If machines could produce intelligence, a whole lot of people in this business would be unemployed.

Your organizational decision-maker(s) are your intelligence “consumers.” Every consumer wants something different from their intelligence product, which is where the human element comes into play. Intelligence tailored to C-level requirements is of little utility to the responder on scene, and vice versa. Devices and feeds in and of themselves cannot support either requirement. Any purveyor of “intelligence” that does not put a human between data and consumer is not offering intelligence. If you are not paying for someone to apply their little gray cells to your or their data, you’re paying a premium for something you could probably get for free.

Intelligence is not foolproof. Intelligence tells you something you don’t already know, but because you cannot know everything, there are no guarantees. Intelligence providers who claim to be flawless, or nearly so, are not producing content of value, because only the most generic and heavily caveated output can be made to seem right 100% of the time. You don’t need to pay extra for people to tell you “maybe” and “possibly.”

I’m just scratching the surface here, and if anyone wants me to riff longer I will, but I wanted to make sure something was out there standing athwart the “cyber intelligence” hype train shouting “stop!”