Sam and His (not so) Crazy Ramblings

If you haven’t already done so, start here.

Go ahead, I’ll wait.

Sam and I don’t go way back, but he’s easily the most intellectual and yet accessible thinker on these sorts of issues, especially as they interact with other disciplines. While he can’t draw from decades of experience behind closed doors, you’d never know it based on his grasp of the issues.

Having said that, there are some things that only a grizzled old veteran of the intelligence wars – actual and bureaucratic – can shed light on, hence the following response…

1) NSA will be half the size it is today.

Why I think he’s wrong.

It takes a LOT to reduce the size of a federal agency, and even more to shrink an intelligence agency. I’ve been in the IC through fat times and lean, cold war, hot wars, the peace dividend, and the war on terror, and I’ve never seen an agency shrink in any significant way. It might not grow as fast as expected, it might shrink somewhat through natural attrition, but to say “half the size” is basically nonsense from a historical perspective.

Where I think he might be on to something.

The NSA is really two outfits in one: an intelligence agency and a security agency. They can complement each other, but they don’t have to be under the same roof. In fact, pulling the security agency out of NSA, making it a separate entity, and retooling it into an agency that supports security at both the national and individual level would go a long way toward winning back public trust, as well as actually making it harder for malicious outsiders to hurt us.

2) NSA becomes a contractor-free agency.

Why I think he’s wrong.

Go into any intelligence agency today and you’ll find four categories of people: managers, a thin slice of very senior subject matter experts, a lot of very junior people trying to become experts, and, sandwiched in between, a layer of mid-careerists who, when they’re not jockeying for the senior SME slot once the geezer in it dies, are acting as project managers or COTRs for various efforts that are carried out by contractors. The IC can’t function without contractors because Congress won’t allow the IC to hire more employees, but at the same time it won’t stand for a reduction in the number of missions that need to be executed. The only solution to that problem is contractors.

The IC also cannot hire enough technical experts in enough subjects to keep pace with the demands of its missions. The whole point of contractors is to bring them on to address new or advanced issue X, and then have them leave (or reduce their presence) once things are in hand. What we have instead are perpetual one-base-year-plus-four-option-year contracts. Serving as a federal employee for 30 years, retiring, and then coming back as a contractor to work on the same mission for another decade or more isn’t unusual; it’s standard practice. Same number of missions, same changes in technology, means contractors are here to stay.

Where I think he’s on to something.

Contracts need to be short(er)-term efforts focused on hard technical problems, with the goal of getting things to the point where more generalist feds can take over. The size of contracts also needs to be reduced: hundreds of millions of dollars doesn’t buy more success, it just buys more butts in seats.

3) Elements of NSA working toward national infrastructure security are split off.

No argument.

4) NSA and CyberCom split

The sooner the better.

5) NSA has to invest in privacy-preserving security as penance

See #1 above.

6) Individuals may find themselves under congressional investigation

Why I think he’s wrong.

NSA abuses, real or imagined, intentional or unintentional, are a fringe issue. People in the crypto and privacy sub-culture care, some people in computer and information security care, and people who have no idea how SIGINT works but are happy to have yet another reason to hate the gov’t care…but the vast majority of everyone else doesn’t. Outside of New York, Washington DC, and a few other major cities, I challenge you to walk out into the street and find someone who has heard of this issue in any more than a passing sense. Then find someone so mad about it they’re going to take political action. Taxes, social security, health care: that’s what the majority of people in this country care about. NSA Internet surveillance of the ’10s is not NSA (and CIA and FBI) surveillance of people in the ’70s.

Where I think he’s on to something.

If intelligence agencies are good at one thing it’s burying bodies. Is anyone going to find themselves in front of Church Committee 2.0? No. Are the people who were leaning furthest forward in the foxhole on efforts that were exposed going to find themselves asked to quietly find their way out the door? Absolutely. This is how it works: the seniors thank and then shepherd those who pushed the envelope to the side; those who take their place know exactly where the line is drawn and stay weeeellll behind it. They communicate that to the generations coming up, and that buys us a few decades of sailing on a more even keel…

…until the next catastrophic surprise…

Prepare for the Pendulum Swing

I’m not going to belabor the tale of woe facing those dealing with Edward Snowden’s theft right now. For a moment I want to opine on some of the secondary and tangential issues that I predict are going to make life in the IC more difficult because of his actions:

  1. Polygraphs. If it is true that he only took the job with BAH to gain access to specific data in order to reveal it, IC polygraph units are going to have to cancel leave through 2025. Moving from one agency to another? Get ready to get hooked up to the box (again). In a sys admin job? Pucker up. That old-timer you used to get, who realized that people were people and they had lives? He’s going to be replaced by a legion of whippersnappers all gunning to catch the next leaker. Good people will be deep-sixed and those who survive will wonder if it’s worth the ***-pain.
  2. Investigations. When you can’t pick up on obvious problem children, and when the bottom line is more important than doing a good job, the bureaucracy will retrench and do what it does best: drop into low gear and distrust outsiders. There are only so many government investigators, and it’s not like there are fewer missions. Coverage will slip, tasks won’t get done, and the risk of surprise (you know, what we’re supposed to try and avoid) goes up.
  3. Visits. Even in the information age some things are best discussed in person. Remember how your “community” badge would kinda-sorta get you into wherever you needed to go? Good luck with that for the foreseeable future. That three-hour block of time you used to allocate to go to a meeting across town? You might as well write off the whole day.
  4. Two-Man Rule. Great theory; it will suck in practice. Remember when you used to be able to call the help desk and your boy Chuck would reset your password over the phone? Yeah, not any more. Something that took minutes will take hours; something that used to take hours will take days; things that took days will take weeks. The information enterprise of the information age will work about as quickly and efficiently as a pre-assembly-line car factory.
  5. Sharing. Yes, the mechanisms will still exist, but no one will actually do it (officially). No one will say so out loud, but in a series of staff calls of decreasing seniority the word will get out: don’t post or share anything good or the least bit sensitive online. Stovepipes will be reinforced and what good was done over the past decade+ to break down barriers will get washed away. Sharing will go underground, which will simply make detecting leaks harder.

This story is far from over, but if you’ve been in this business for any length of time you know how wildly the pendulum swings when something bad happens. Nothing actually improves, everything just gets more difficult. This was less of a big deal during the industrial age, but that age has passed.


Compare and Contrast

I love how, on a mailing list I belong to that is full of Ph.D.s and J.D.s, when I call for practical approaches to real-world problems I’m called “anti-intellectual” and in other forums when I allude to someone’s level of formal education – or lack thereof – I’m called “elitist.” What’s the old saying? If you’re pissing both sides off equally you must be doing something right.

The latest example?

I recently brought up the fact that neither Bradley Manning nor Edward Snowden was Daniel Ellsberg. I didn’t come out and say ‘they weren’t fit to hold his jock’; I was pointing out that when you compare who they were and what they did, Dr. Ellsberg is a whole different class of actor. Let’s get on the ‘tubes and let me show you what I mean:

Daniel Ellsberg

Education: Harvard undergraduate (on scholarship); Cambridge (Wilson Fellowship); Harvard (again) for graduate school and eventually his Ph.D.

Employment: USMC officer (honorable); RAND Corporation; the Department of State and the Department of Defense (he didn’t work “in the Pentagon,” he worked for the Secretary of Defense).

Access: With regards to the “Pentagon Papers” he operated at the highest level and knew the full contents of the report.


Bradley Manning

Education: High School; One semester of Community College (dropped out)

Employment: Software developer (for four months); Pizza parlor; US Army Intelligence Analyst

Access: A variety of classified military, intelligence and diplomatic systems accessible in theater.


Edward Snowden

Education: Dropped out of high school; earned GED; briefly attended Community College.

Employment: US Army (never got out of training status); contract security guard; IT engineer at the CIA and NSA

(Reported) Access: Discrete systems supporting HUMINT and SIGINT operations.


Snowden wasn’t an intelligence operator or analyst; he was an IT guy who supported intelligence operators and analysts. Sports agents know a lot about sports, but no one confuses them for players. Manning had access to a lot of data, but he was a junior analyst who (if the Army still works like it worked when I was in) was focused on a particular problem set, not the Middle East theater writ large. If you worked with either one of these guys you wouldn’t care what they thought about anything work-related beyond the very narrow slice where they had demonstrable expertise, but because you know nothing about intelligence work and they happened to have a clearance you think they’re all that and a bag of crisps.

I’m not saying Snowden and Manning aren’t smart. I’m not saying they’re not earnest in their beliefs. I’m saying if I’m going to accept the judgment of an individual about issues of national if not international import, the guy who did nothing but flex the muscles in his 18-pound brain and had full view of the entire problem has a lot more credibility.

If that makes me elitist, well, I’ll be over here sipping cognac if you want to slap me across the face with a velvet glove.

On “cyber intelligence”

Intelligence.

From what I can tell it’s the new hotness in cybersecurity.

From what I can tell it’s also not being done very well. The end result of course being that “intelligence” is treated as a fad or gimmick, which would be a terrible mistake for the cybersecurity community to make.

Let’s lay down a few givens before we go any further. For starters, “intelligence” is like “APT”: if you’re not using the proper definition, you’re just playing marketing tricks. Boiled down to its essence it works like this:

  • No matter how good the source, a discrete piece of “data” or a data “feed” is not intelligence.
  • Intelligence is not a mashup of disparate data points; that’s “information.”
  • Intelligence is information that is put into context and enhanced with expert (human) input that provides the intelligence consumer with insight.
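To make that taxonomy concrete, here is a toy model, nothing more than illustration; every name, field, and data point in it is invented. Aggregation gets you information; only the human-supplied context turns it into intelligence.

```python
# A toy model of the data -> information -> intelligence chain above.
# Everything here is invented for illustration.
from dataclasses import dataclass

@dataclass
class Information:
    data_points: list  # disparate data points, mashed up

@dataclass
class Intelligence:
    information: Information
    analyst_context: str  # the expert, human-added insight
    consumer: str         # tailored to a specific decision-maker

feed = ["IP 203.0.113.5 scanned the perimeter",
        "same IP observed in a commodity phishing kit"]

intel = Intelligence(
    information=Information(data_points=feed),
    analyst_context="Consistent with a low-skill actor reusing a leaked "
                    "kit; no grounds to invoke a targeted campaign.",
    consumer="CISO",
)
```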

No application, device or appliance is capable of providing you with intelligence. Such mechanisms may provide you with enhanced information, but without the human element it’s still just information. If machines could produce intelligence, a whole lot of people in this business would be unemployed.

Your organizational decision-maker(s) are your intelligence “consumers.” Every consumer wants something different from their intelligence product, which is where the human element comes into play. The intelligence requirements of the C-level are of little utility to the responder on scene, and vice versa. Devices and feeds in and of themselves cannot support either requirement. Any purveyor of “intelligence” that does not have a human between data and consumer is not offering intelligence. If you are not paying for someone to apply their little gray cells to your or their data, you’re paying a premium for something you could probably get for free.

Intelligence is not fool-proof. Intelligence tells you something you don’t already know, but because you cannot know everything, there are no guarantees. Intelligence providers who claim to be flawless, or nearly so, are not producing content of value, because only the most generic and heavily caveated output can be made to seem right 100% of the time. You don’t need to pay extra for people to tell you “maybe” and “possibly.”

I’m just touching the surface here, and if anyone wants me to riff longer I will, but I just wanted to make sure something was out there standing athwart the “cyber intelligence” hype train shouting “stop!”

Don’t Believe the Hype

I want you to read this tweet:

[embedded tweet]

Two things:

1. The government is constantly whinging on about how we need more sharing. The private sector elements who actually get involved in sharing regimes constantly complain about how “sharing” with the government is a one-way street. Who are you going to give a sympathetic ear to the next time someone utters the words “public-private partnership?” How much more annoying is it that places like DHS want to borrow private-sector expertise but don’t want to pay for it?

2. What makes this lop-sided relationship really annoying is that the private sector “attack surface” is several metric-*** tons larger than the government one. Who is it that needs more and better intel about cyber threats, exactly?


Malware Analysis: The Danger of Connecting the Dots

The findings of malware analysis are not in fact “analysis;” they’re a collection of data points linked together by assumptions whose validity and credibility have not been evaluated. This lack of analytic methodology could prove exceedingly problematic for those charged with making decisions about cyber security. If you cannot trust your analysis, how are you supposed to make sound cyber security decisions?

Question: If I give you a malware binary to reverse engineer, what do you see? Think about your answer for a minute and then read on. We’ll revisit this shortly.

It is accepted as conventional wisdom that Stuxnet is related to Duqu, which is in turn related to Flame. All three have been described as “sophisticated” and “advanced,” so much so that they must be the work of a nation-state (such work presumably requiring large amounts of time, lots of skilled people, and code written for purposes beyond simply siphoning off other people’s cash). The claim that the US government is behind Stuxnet has consequently led people to assume that all related code is US sponsored, funded, or otherwise backed.

Except for the claim of authorship, all of the aforementioned data points come from people who reverse engineer malware binaries. These are technically smart people who practice an arcane and difficult art, but what credibility does that give them beyond their domain? In our quest for answers do we give too much weight to the conclusions of those with discrete technical expertise and fail to approach the problem with sufficient depth and objectivity?

Let’s take each of these claims in turn.

Are there similarities if not outright sharing of code in Stuxnet, Duqu and Flame? Yes. Does that mean the same people wrote them all? Do you believe there is a global marketplace where malware is created and sold? Do you believe the people who operate in that marketplace collaborate? Do you believe that the principle of “code reuse” is alive and well? If you answered “yes” to any of these questions then a single source of “advanced” malware cannot be your only valid conclusion.
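“Similarity,” at least, is a measurable claim, even if it can’t settle authorship. A minimal sketch of how overlap between two samples is commonly quantified, assuming the python-ssdeep fuzzy-hashing bindings are installed; the file names are hypothetical:

```python
# A minimal sketch: fuzzy-hash two binaries and score their similarity.
# Requires the python-ssdeep bindings; file names are hypothetical.
import ssdeep

h1 = ssdeep.hash_from_file("sample_a.bin")
h2 = ssdeep.hash_from_file("sample_b.bin")

# 0 = no similarity, 100 = near-identical. A high score says "shared
# bytes," not "shared author": code reuse produces the same signal.
print(ssdeep.compare(h1, h2))
```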

Is the code in Stuxnet, etc. “sophisticated?” Define sophisticated in the context of malware. Forget about malware and try to define “sophisticated” in the context of software, period. Is Excel more sophisticated than Photoshop? When words have no hard and widely-accepted definitions, they can mean whatever you want them to mean, which means they have no meaning at all.

Can only a nation-state produce such code? How many government-funded software projects are you aware of that work as advertised? You can probably count on one hand and have fingers left over. But now, somehow, when it comes to malware, suddenly we’re to believe that the government has gotten its s*** together?

“But Mike, these are, like, weapons. Super secret stuff. The government is really good at that.”

Really? Have you ever heard of the Osprey? Or the F-35? Or the Crusader? Or the JTRS? Or Land Warrior? Groundbreaker? Trailblazer? Virtual Case File?

I’m not trying to trivialize the issues associated with large and complex technology projects; my point is that a government program to build malware would be subject to the same issues and consequently no better – and quite possibly worse – than any non-governmental effort to do the same thing. Cyber crime statistics – inflated though they may be – tell us that governments are not the only entities that can and do fund malware development.

“But Mike, the government contracts out most of its technology work. Why couldn’t they contract out the building of digital weapons?”

They very well could, but then what does that tell us? It tells us that if you wanted to build the best malware you have to go on the open market (read: people who may not care who they’re working for, as long as their money is good).

As far as the US government “admitting” that they were behind Stuxnet: they did no such thing. A reporter, an author of a book, says that a government official told him that the US was behind Stuxnet. Neither the President of the United States, nor the Secretary of Defense, nor the Directors of the CIA or NSA got up in front of a camera and said, “That’s us!” which is what an admission would be. Let me reiterate: a guy who has a political agenda told a guy who wants to sell books that the US was behind Stuxnet.

It’s easy to believe the US is behind Stuxnet, as much as it is to believe Israel is behind it. You know who else doesn’t want countries that don’t have nuclear weapons to get them? Almost every country in the world, including those countries that currently have nuclear weapons. You know who else might not want Iran – a majority Shia country – to have an atomic bomb? Roughly 30 Sunni countries for starters, most of which could afford to go onto the previously mentioned open market and pay for malware development. What? You hadn’t thought about the non-proliferation treaty or that Sunni-Shia thing? Yeah, neither has anyone working for Kaspersky, Symantec, F-Secure, etc., etc.

Back to the question I asked earlier: What do you see when you reverse engineer a binary?

Answer: Exactly what the author wants you to see.

  • I want you to see words in a language that would throw suspicion on someone else.
  • I want you to see that my code was compiled in a particular foreign language (even though I only read and/or write in a totally different language).
  • I want you to see certain comments or coding styles that are the same or similar to someone else’s (because I reuse other people’s code).
  • I want you to see data about compilation date/time, PDB file path, etc., which can lead you to draw erroneous conclusions but has no bearing on malware behavior or capability (a sketch of how trivially such metadata is set follows).
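None of that header metadata is trustworthy, because none of it is load-bearing. A minimal sketch of reading, and then rewriting, a PE compile timestamp using the pefile library; the file names are hypothetical:

```python
# A minimal sketch: the "compile time" analysts cite is just a 32-bit
# field in the PE header. Requires pefile; file names are hypothetical.
import datetime
import pefile

pe = pefile.PE("sample.exe")

ts = pe.FILE_HEADER.TimeDateStamp
print("Claimed compile time:",
      datetime.datetime.fromtimestamp(ts, datetime.timezone.utc))

# The author can set this field to anything before shipping the binary.
pe.FILE_HEADER.TimeDateStamp = int(datetime.datetime(1984, 1, 1).timestamp())
pe.write("sample_backdated.exe")
```

The same goes for PDB paths and embedded strings: they cost the author nothing to fake and cost the analyst everything to trust.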

Contrary to post-9/11 conventional wisdom, good analysis is not dot-connecting. That’s part of the process, but it’s not the whole or only process. Good analysis has methodology behind it, as well as a fair dose of experience or exposure to other disciplines. Most of all, whenever possible, there are multiple, verifiable, meaningful data points to help back up your assertions. Let me give you an example.

I used to work with a guy we’ll call “Luke.” Luke was a firm believer in the value of a given type of data. He thought it was infallible. So strong were Luke’s convictions about the findings he produced using only this particular type of data that he would draw conclusions about the world that flew in the face of what the rest of us like to call “reality.” If Luke’s assertions were true, World War III would have been triggered, but as many, many other sources of data were able to point out, Luke was wrong.

There was a reason why Luke was the oldest junior analyst in the whole department.

There are a number of problems, fallacies and mental traps that people tend to suffer when they attempt to draw conclusions from data. This is not an exhaustive list, but illustrative of what I mean.

Focus Isn’t All That. There is a misconception that narrow and intense focus leads to better conclusions. In fact the opposite tends to be true: the more you focus on a specific problem, the less likely you are to think clearly and objectively. Because you just “know” certain things are true, you feel comfortable taking shortcuts to reach your conclusion, which in turn simply drives you further away from the truth.

I’ve Seen This Before. We give too much credence to patterns. When you see the same or very similar events taking place, or the same tactics used, your natural reaction is to assume that what is happening now is what happened in the past. You discount other options because it’s “history repeating itself.”

The Shoehorn Effect. We don’t like questions that don’t have answers. Everything has to have an explanation, regardless of whether or not the explanation is actually true. When you cannot come up with an explanation that makes sense to you, you will fit the answer to match the question.

Predisposition. We allow our biases to drive us to seek out data that supports our conclusions and discount data that refutes it.

Emotion. You cannot discount the emotional element involved in drawing conclusions, especially if your reputation is riding on the result. Emotions about a given decision can run so high that they overcome your ability to think clearly. Rationalism goes out the window when your gut (or your greed) overrides your brain.

How can we overcome the aforementioned flaws? There are a range of methodologies analysts use to improve objectivity and criticality. These are by no means exhaustive, but they give you an idea of the kind of effort that goes into serious analytic efforts.

Weighted Ranking. It may not seem obvious to you, but when presented with two or more choices, you choose X over Y based on the merits of X, Y (and/or Z). Ranking is instinctual and therefore often unconscious. The problem with most informal efforts at ranking is that they’re one-dimensional.

“Why do you like the TV show Homicide and not Dragnet?”

“Well, I like cop shows but I don’t like black-and-white shows.”

“OK, you realize those are two different things you’re comparing?”

A proper ranking means you’re comparing one thing against another using the same criteria. Using our example, you could compare TV shows based on genre, sub-genre, country of origin, actors, etc., rank them according to preference in each category, and then tally the results. Do this with TV shows – or any problem – and you’ll see that your initial, instinctive results will be quite different from those of your weighted rankings.
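A back-of-the-envelope version of that tally, in code; the shows, criteria, weights, and scores are all invented for illustration:

```python
# A minimal sketch of weighted ranking: score every option against the
# same criteria, weight the criteria, and tally. All values invented.
criteria_weights = {"genre": 3, "era": 2, "acting": 2, "writing": 3}

scores = {  # each show rated 1-5 per criterion
    "Homicide": {"genre": 5, "era": 4, "acting": 4, "writing": 5},
    "Dragnet":  {"genre": 5, "era": 1, "acting": 3, "writing": 3},
}

for show, marks in scores.items():
    total = sum(w * marks[c] for c, w in criteria_weights.items())
    print(show, total)
```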

Hypothesis Testing. You assert the truth of your hypothesis through supporting evidence, but you are always working with incomplete or questionable data, so you can never prove a hypothesis true; we accept it to be true until evidence surfaces that suggests it is false (see the bias note above). Information becomes evidence when it is linked to a hypothesis, and evidence is valid once we’ve subjected it to questioning: Where did the information come from? How plausible is it? How reliable is it? What is motivating the source (agenda)?
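Applied to the malware question, a crude sketch of that discipline might look like the following; this is loosely in the spirit of analysis of competing hypotheses, and every rating is invented:

```python
# A crude sketch in the spirit of analysis of competing hypotheses:
# rate each piece of evidence against each hypothesis and look for the
# least-contradicted hypothesis. All ratings invented for illustration.
evidence = {
    # evidence item: {hypothesis: -1 contradicts, 0 neutral, +1 consistent}
    "code shared with older malware": {"nation-state": 1,  "criminal group": 1},
    "no monetization mechanism":      {"nation-state": 1,  "criminal group": -1},
    "foreign-language strings":       {"nation-state": 0,  "criminal group": 0},
}

totals = {}
for ratings in evidence.values():
    for hypothesis, score in ratings.items():
        totals[hypothesis] = totals.get(hypothesis, 0) + score

# The point is to hunt for disconfirming evidence, not to "prove" a favorite.
print(totals)
```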

Devil’s Advocacy. Taking a contrary or opposing position from what is the accepted answer helps overcome biases and one-dimensional thinking. Devil’s advocacy seeks out new evidence to refute “what everybody knows,” including evidence that was disregarded by those who take the prevailing point of view.

This leads me to another point I alluded to earlier and that isn’t addressed in media coverage of malware analysis: what qualifications does your average reverse engineer have when it comes to drawing conclusions about geo-political-security issues? You don’t call a plumber to fix your fuse box. You don’t ask a diplomat about the latest developments in no-till farming. Why in the world would you take at face value what a reverse engineer says about anything except very specific, technical findings? I’m not saying people are not entitled to their opinions, but credibility counts if those opinions are going to have value.

So where are we?

  • There are no set or even widely accepted definitions related to malware (e.g., what counts as “sophisticated” or “advanced”).
  • There is no widely understood or accepted baseline of the technical, intellectual, or actual capital required to build malware.
  • Data you get out of code, through reverse engineering or from source, is not guaranteed to be accurate when it comes to issues of authorship or origin.
  • Malware analysts do not apply any analytic methodology in an attempt to confirm or refute their single-source findings.
  • Efforts to link data found in code to larger issues of geo-political importance are at best superficial.

Why is all of this important? Computer security issues are becoming an increasingly important factor in our lives. Not that everyone appreciates it, but look at where we have been and where we are headed. Just under 20 years ago few people in the US, much less the world, were online; now more people in the world get online via their phones than via a traditional computer. Cars use computers to drive themselves, and biological implants are controlled via Bluetooth. Neither of these new developments has meaningful security features built in, but no one would ever be interested in hacking insulin pumps or pacemakers, right?

Taking computer security threats seriously starts by putting serious thought and effort behind our research and conclusions. The government does not provide information like this to the public, so we rely on vendors and security companies (whose primary interest is profit) to do it for us. When that “analysis,” which is far from rigorous, is delivered to decision-makers who are used to dealing with conclusions developed through a much more robust methodology, their decisions can have far-reaching negative consequences.

Sometimes a quick-and-dirty analysis is right, and as long as you’re OK with the fact that that’s all most malware analysis is, fine. But if you are planning on making serious decisions about the threat you face from cyberspace, you should take the time and effort to ensure that your analysis looks beyond what IDA shows and considers more diverse and far-reaching factors.


How Many Holes in a Gohor Stick?

I’ve never used Palantir. I’ve never used DCGS-A. When I started as an analyst you (no-shit) used pencil and paper (and a thing called a gohor stick…but that’s a lewd joke for another day). The kerfuffle over Palantir vs. DCGS-A reminds me of the days when computers started making in-roads in analysis shops, and I hope everyone involved can remember some of those lessons learned.

Now, my working world in those early days wasn’t entirely computer-free, but back then computers were where you stored data, recorded activity, and typed up reports; you didn’t “link” things together and you certainly didn’t draw, graph, or do anything anyone coming up in the business today would recognize as computer-oriented.

If there was a quantum leap in the utility computers gave to analysis it was this application called Analyst Notebook. Analyst Notebook would take in the data you had already entered into some other system (assuming you could get it out of said system), and kick out diagrams and pictures that let you make quick sense of who was talking to whom, what happened when, and identify connections or anomalies you may have missed staring into a green screen at row after row, column after column of letters and numbers.
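The underlying mechanics haven’t changed much since then. A minimal sketch of that kind of link analysis using the networkx library; the call records and numbers are invented:

```python
# A minimal sketch of link analysis: build a graph from call records
# and ask who connects to whom. Records invented; requires networkx.
import networkx as nx

call_records = [
    ("555-0101", "555-0202"),
    ("555-0202", "555-0303"),
    ("555-0101", "555-0404"),
    ("555-0404", "555-0303"),
]

g = nx.Graph()
g.add_edges_from(call_records)

# The busiest node, and the chain linking two subjects: exactly what an
# analyst once teased out of green-screen rows and columns by hand.
print(sorted(g.degree, key=lambda kv: kv[1], reverse=True))
print(nx.shortest_path(g, "555-0101", "555-0303"))
```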

That’s the key here: Analyst Notebook, Palantir, etc. are analysts’ tools; they are not analysis tools. Is that a distinction without a difference? I’m not aware of any software application that will think on your behalf. I’m not aware of anyone in the military or IC who would trust answers produced entirely by an algorithm, without human interpretation or enhancement. If you could computerize analysis you wouldn’t have a headcount problem in the IC. Analyst Notebook, Palantir, DCGS-A . . . they’re all tools, and if you’ve been working with hand tools all your life and suddenly someone hands you a Skil saw, of course you’re going to think the Skil saw was sent from heaven.

Now, is the government notorious for producing bloated, expensive, minimally functional software that everyone hates to use (when it works at all)? We don’t have time to go into all the examples, but the answer is ‘yes.’ If I offer you tool A OR tool B when you’ve been using tool C, which are you going to choose? Does that make your other choice crap? Of course not.

It sounds to me like the 800 lb gorilla in the room is usability, and if there is one thing commercial apps excel at it’s the user experience. Think about the Google interface, then think about a data retrieval system fielded in the ’70s, and you tell me which your average analyst would rather use…

If the ultimate requirement is capability, then the answer is simple: hold a shoot-out and may the best app win. Pretty-but-sub-capable isn’t going to cut it; functional-but-frustrating isn’t either. If DCGS-A is all that, they should be big enough to learn from what Palantir does well; if Palantir is really about saving lives and national defense, they ought to be big enough to implement what GIs need most. Competition raises everyone’s game, but this isn’t about .com vs .gov, it’s about lives.

Suspect or Sloppy?

Privacy mavens are all atwitter at the news this morning:

A Justice Department investigation has found pervasive errors in the FBI’s use of its power to secretly demand telephone, e-mail and financial records in national security cases, officials with access to the report said yesterday.

The inspector general’s audit found 22 possible breaches of internal FBI and Justice Department regulations — some of which were potential violations of law — in a sampling of 293 “national security letters.” The letters were used by the FBI to obtain the personal records of U.S. residents or visitors between 2003 and 2005. The FBI identified 26 potential violations in other cases.

Set aside for a moment the old-fashioned notion of privacy so many keep fantasizing about, and the fact that law enforcement and intelligence need some fast and easy way to gather common personal information because bad people live, work, and operate among us: what does the article really tell us?

The pervasiveness and diversity of the errors suggest that there is a serious training deficiency at the FBI. Even without NSLs, FBI agents have always handled a lot more personal information than, say, NSA officers, but at NSA the rules about dealing with such information are beaten into your head and heart from day one, and violations are dealt with swiftly and harshly. If these cases were part of a deliberate campaign to abuse NSLs there would be more focus and the errors would be more consistent.

It is also important to note that this revelation was self-exposed by the IG, not the result of a leak or a lawsuit. A systematic and organized effort would have had a much more substantial defense/spin machine at work, or enough sense not to get caught at all.

Need more convincing? One of the more revealing points is brought up later in the article:

Fine’s audit, which was limited to 77 case files in four FBI field offices, found that those offices did not even generate accurate counts of the national security letters they issued, omitting about one in five letters from the reports they sent to headquarters in Washington. Those inaccurate numbers, in turn, were used as the basis for required reports to Congress.

Remember, this is an agency that is legendary among bureaucracies for the depth and breadth of its paperwork. You can build a whole career at the FBI doing nothing but “i” dotting and “t” crossing. In that sort of environment, the fact that each office didn’t keep an accurate log of each letter issued suggests confusion or chaos, not conspiracy.

This is a problem about unclear policy and shoddy procedure, not organized and systemic mischief against the people. The Bureau would do well to learn some lessons from their brothers in Anne Arundel County (could have leveraged Mo B. when you had her) or they’re likely to start slipping towards the danger zone that the Nebraska Avenue kids are in (which – mark my words – is a fiasco waiting to happen).

“reputation system”

From the Enterprise Resilience Management Blog:

Anyone who believes he knows of information relating to these proposed patents will be able to post this online and solicit comments from others. But this will suddenly make available reams of information, which could be from suspect sources, and so the program includes a ‘reputation system’ for ranking the material and evaluating the expertise of those submitting it.

“reputation system” – how the wiki-fied, blogosphered IC can sort the wheat from the chaff and cast off the last vestiges of the old way of doing things.
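The mechanics don’t have to be exotic, either. A toy sketch of reputation-weighted ranking; every name, reputation score, and vote here is invented:

```python
# A toy sketch of reputation-weighted ranking: a submission's score is
# the sum of its votes, each weighted by the voter's reputation.
reputation = {"alice": 0.9, "bob": 0.4, "mallory": 0.1}

votes = {  # submission: list of (voter, +1/-1)
    "prior-art doc A": [("alice", +1), ("mallory", -1)],
    "prior-art doc B": [("bob", +1), ("mallory", +1)],
}

ranked = sorted(
    votes,
    key=lambda s: sum(reputation[v] * d for v, d in votes[s]),
    reverse=True,
)
print(ranked)  # doc A outranks doc B: one trusted vote beats two weak ones
```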

Now, to find out the status of that reform book draft . . .

All aboard the cluetrain express

This is classic:


Wiki technology advocates within the intelligence community, known as intellipedians, were circulating among their colleagues promoting the use of the collaborative social software to create intelligence products, the official said.

The general response among the intelligence technologists the intellipedians approached was “It’s great! Can you build one for us?,” according to the official. That question indicated that the technologists had not grasped the intellipedians’ premise that wiki information sharing should permeate the community, the official said.

You know your agency’s head geek got his degree from a state-funded diploma mill when he stops you after the second slide of a briefing and says, “What’s this XML you’re talking about?” This was five years ago and apparently little has changed.

As with any sufficiently radical effort (and believe me, this is practically magic to some on the inside) there is a marked difference between the public face and the reality in the cube. Are people using it? Sure. Is it pervasive? Not a chance. Is it widely and solely the way business is done? Dream on. Getting a foot in the door is one thing; closing the sale is another issue entirely.

Have fun storming the castle . . .