The Wolf Approaches

In the government, the use of the term “grave” means something very specific. That meaning should be obvious to you, but on the off chance that you haven’t had your first cup of coffee yet: it means that messing up whatever the issue is could cost someone (or more than one person) their life.

The attack against the water system in Oldsmar, Florida was a potentially grave situation. A water system is not a trivial technology enterprise and as such it has numerous checks – including a human in the loop – to make sure malicious activity or honest mistakes don’t end lives. But the fact that an outsider was able to get such access in the first place makes it clear that there exists a disconnect between what such systems are supposed to be, and what they are.

We give the Sheriff of Pinellas County a pass on the use of the term “wake-up call” because he has not spent a large portion of his life in the belly of the cybersecurity beast. A wake-up call only happens once; how we respond indicates how serious we are about taking action:

  • In 2015, DHS Secretary Jeh Johnson called the OPM breach a “wake-up call.”
  • In 2012, General Alexander, Director of the National Security Agency, called the hacker attack on Saudi ARAMCO a “wake-up call.”
  • In 2010, Michael M. DuBose, chief of the Justice Department’s Computer Crime and Intellectual Property Section, called successful breaches such as Aurora “a wake-up call.”
  • In 2008, Deputy Secretary of Defense William Lynn called the BUCKSHOT YANKEE incident “an important wake-up call.”
  • In 2003, Mike Rothery, Director of Critical Infrastructure Policy in the Attorney-General’s Department of the (Australian) Federal Government, called a hack into a wastewater treatment plant “a wake-up call.”
  • In 2000, Attorney General Janet Reno called a series of denial-of-service attacks against various companies a “wake-up call.”
  • In 1998, Deputy Secretary of Defense John Hamre called the SOLAR SUNRISE incident “a wake-up call.”
  • In 1989, IT executive Thomas Nolle wrote in Computer Week that poor LAN security was a “wake-up call.”

The details of this particular case are probably never going to see sufficient sunlight, which only adds to the ignorance on these matters at all levels, and sustains the fragility we so desperately need to overcome. This is particularly important when you consider how our relationship with technology is only getting more intimate.

These are issues that are decades old, yet if you want to have some idea of what it will take to spur action, keep in mind that we intentionally poisoned 100,000 people via a water system for five years and no one is in jail (yet). The idea that the people rooting around in such systems have the ability to cause such effects but don’t because they appreciate the moral, ethical, and legal implications of the matter is increasingly wishful thinking.

We in security have long been accused of “crying ‘wolf’” and for a long time those critics were right. We knew bad things could happen because Sturgeon’s Law has been in full effect in IT for ages, but it has taken this long for matters to go from merely serious to grave. Like everyone who puts off addressing a potentially fatal issue until the symptoms cannot be ignored anymore, our ability to survive what comes next is an open question.

Cyber Stars

/* Warning: Extensive over-use of the word “cyber” ahead. */

 

The other day my old friend and colleague Bob Gourley Tweeted:

Random thought: There are 24 four-star flag officers in the U.S. military. Every 4 star I have ever met is really smart. But only one of those 24 has real cyber war experience, and he is retiring soon. How do we change that for the better?

My friendly, snarky-a** response at the time was:

First: Get a time machine

The services have had “cyber” components for several years now, and US Cyber Command has been active since 2009. But the earliest a military officer could have been exposed to what we would now recognize as the cyber mission is roughly the turn of the century. For the sake of discussion let’s say this was their first assignment out of training. The average amount of time officers spend at various ranks breaks down something like this:

Rank / Time in Service

2nd Lieutenant / 1 year
1st Lieutenant / 1.5 years
Captain / 4 years
Major / 10 years
Lieutenant Colonel / 16 years
Colonel / 22 years

So if our notional lieutenant started her career in cyber in ’99, attended all the right schools, got sufficient command time, and punched all her staff assignment tickets, she might be a G2 (chief intelligence officer) or battalion commander today. If she was a “rock star” she may have received several “below the zone” promotions (getting advanced ahead of her peers) and might even be looking at colonel in the very near future.
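The career math above can be sketched as a quick back-of-the-envelope calculation. This is purely illustrative: it reads the table’s Captain-and-above figures as approximate cumulative time in service at pin-on (the table doesn’t say this explicitly, and real timelines vary by service and by officer), and the function name and thresholds are mine, not regulation.

```python
# Rough sketch of the promotion timeline in the table above. Thresholds
# are read as approximate cumulative years of service at pin-on; the
# 1.5-year 1st Lieutenant figure reads as time in grade, so it is folded
# into the Captain threshold. Not an official promotion schedule.

RANK_AT_YEARS = [
    (22, "Colonel"),
    (16, "Lieutenant Colonel"),
    (10, "Major"),
    (4, "Captain"),
    (1, "1st Lieutenant"),
    (0, "2nd Lieutenant"),
]

def likely_rank(commissioned: int, year: int) -> str:
    """Estimate the rank held in `year` by an officer commissioned in `commissioned`."""
    served = year - commissioned
    for threshold, rank in RANK_AT_YEARS:
        if served >= threshold:
            return rank
    return "Cadet"

# Our notional lieutenant, commissioned into the cyber mission in 1999:
print(likely_rank(1999, 2019))  # → Lieutenant Colonel
```

Twenty years in, she is a lieutenant colonel with colonel on the horizon, which matches the narrative: a full career consumed before the first cyber-native cohort even approaches flag rank.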

But…

Time in service doesn’t mean time spent doing the job. The first 4-6 years of an officer’s career are spent learning the ropes. It is probably when they’re the most technically oriented. Once they get a company-level command their life is basically paperwork (and shaking their head ruefully at the shenanigans of the junior enlisted in their charge).

After company command come staff jobs (more paperwork) and higher civilian and military education. Lieutenant colonel is an officer’s next opportunity at command, and where they’re exposed in depth to sub-disciplines and how to make all those moving parts work as a coherent whole. Then more staff time until colonel, and with luck brigade command.

In 20 years Colonel Duty Bound is a very well-rounded officer, but she has spent less than half of that time actively working the mission.

“But Mike, there were more senior officers who were working the mission back then. The pipeline of experienced cyber officers isn’t so grim.”

True, but you know who I never heard of back then? Paul Nakasone. You know who I did know? Dusty Rhodes (not the other one). “Who?” you ask. Exactly. Then-Captain Jay Healey could have been a colonel by now. Then-Lt. Commander Bill Peyton a rear admiral. Then-Major Marc Sachs a lieutenant general. My man Bob Gourley could have been an admiral running US Fleet Cyber Command by now, but you know what the Navy decided not to do for one of the pioneering officers in the cyber field? Make him a Captain. We’re not lacking in talent, we’re lacking in talent management.

We have been training, equipping, and staffing for the cyber mission – in fits and starts – for over two decades, and yet the cyber career field is still a newborn. To put things into perspective, the Army Air Corps went from biplanes to the B-29 Superfortress and nascent jet fighters in the roughly 20 years between its formation and the end of WWII. Moore’s Law indeed.

The various service schoolhouses can turn out 1,000 cyber lieutenants and ensigns a year, but there are still only a handful of flag officer billets for service-level and national-level command in the field. To be successful as warfighters in the information age, we have to ensure that “cyber” is an element within every career field. As odd as this sounds, we can’t treat technology, the use thereof, and the associated risks and threats to same, as something special. Everyone has to know something about it. Everyone has to be responsible for it to some degree. Every commander at every level in every career field needs to know what cyber can do for them (and if they’re not careful what it can do to them and their ability to execute the mission).

Success is a constellation, not a supernova.

The Global Ungoverned Area

There are places on this planet where good, civilized people simply do not voluntarily go, or willingly stay. What elected governments do in safer and more developed parts of the world is carried out in these areas by despots and militias, often at terrible cost to those who have nowhere else to go and no means to leave if they did.

Life online is not unlike life in these ungoverned areas: anyone with the skill and the will is a potential warlord governing their own illicit enterprise, basking in the spoils garnered from the misery of a mass of unfortunates. Who is to stop them? A relative handful of government entities, each with competing agendas, varying levels of knowledge, skills, and resources, none of whom can move fast enough, far enough, or with enough vigor to respond in-kind.

Reaping the whirlwind of apathy

Outside of the government, computer security is rarely something anyone asks for except in certain edge cases. Security is a burden, a cost center. Consumers want functionality, and functionality always trumps security – so much so that most people do not seem to care if security fails. People want an effective solution to their problem. If it happens to also not leak personal or financial data like a sieve, great, but insecurity is rarely a deal-breaker.

At the start of the PC age we couldn’t wait to put a computer on every desk. With the advent of the World Wide Web, we rushed headlong into putting anything and everything online. Today you can go online to play the most trivial game or fulfill your basic needs of food, shelter, and clothing, all at the push of a button. The downside to cyber-ing everything without adequate consideration for security? Epic security failures of all sorts.

Now we stand at the dawn of the age of the Internet of Things. Computers have gone from desktops to laptops to handhelds to wearables and now implantables. And just as before, while we can’t wait to employ the technology, we can’t be bothered to secure it.

How things are done

What is our response? Laws and treaties, or at least proposals for same, that decant old approaches into new digital bottles. We decided drugs and poverty were bad, so we declared “war” on them, with dismal results. This sort of thinking is how we get the Wassenaar Arrangement applied to cybersecurity: because that’s what people who mean well and are trained in “how things are done” do. But there are a couple of problems with treating cyberspace like 17th century Europe:

  • Even when most people agree on most things, it only takes one issue to bring the whole thing crashing down.
  • The most well-intentioned efforts to deter bad behavior are useless if you cannot enforce the rules, and given the rate at which we incarcerate bad guys it is clear we cannot enforce the rules in any meaningful way at a scale that matters.
  • While all the diplomats of all the governments of the world may agree to follow certain rules, the world’s intelligence organs will continue to use all the tools at their disposal to accomplish their missions, and that includes cyber ones.

This is not to say that such efforts are entirely useless (if you happen to arrest someone you want to have a lot of books to throw at them), just that the level of effort put forth is disproportionate to the impact that it will have on life online. Who is invited to these sorts of discussions? Governments. Who causes the most trouble online? Non-state actors.

Roads less traveled

I am not entirely dismissive of political-diplomatic efforts to improve the security and safety of cyberspace, merely unenthusiastic. Just because “that’s how things are done” doesn’t mean that’s what’s going to get us where we need to be. What it shows is inflexible thinking, and an unwillingness to accept reality. If we’re going to expend time and energy on efforts to civilize cyberspace, let’s do things that might actually work in our lifetimes.

  • Practical diplomacy. We’re never going to get every nation on the same page. Not even for something as heinous as child porn. This means bilateral agreements. Yes, it is more work to both close and manage such agreements, but it beats hoping for some “universal” agreement on norms that will never come.
  • Soft(er) power. No one wants another 9/11, but what we put in place to reduce that risk isn’t the model to follow here. The private enterprises that supply us with the Internet – and computer technology in general – will fight regulation, but they will respond to economic incentives.
  • The human factor. It’s rare to see trash along a highway median, and our rivers don’t catch fire anymore. Why? In large part because of the “crying Indian” ad campaign. A concerted effort to change public opinion can in fact change behavior (and let’s face it: people are the root of the problem).

Every week a new breach, a new “wake-up call,” yet there is simply not sufficient demand for a safer and more secure cyberspace. The impact of malicious activity online is greater than zero, but not catastrophic, which makes pursuing grandiose solutions a waste of cycles that could be put to better use achieving incremental gains (see ‘boil the ocean’).

Once we started selling pet food and porn online, it stopped being the “information superhighway” and became a demolition derby track. The sooner we recognize it for what it is the sooner we can start to come up with ideas and courses of action more likely to be effective.

/* Originally posted at Modern Warfare blog at CSO Online */

Cyber War: The Fastest Way to Improve Cybersecurity?

For all the benefits IT in general and the Internet specifically have given us, they have also introduced significant risks to our well-being and way of life. Yet cybersecurity is still not a priority for a majority of people and organizations. No amount of warnings about the risks associated with poor cybersecurity has helped drive significant change. Neither have real-world incidents that get worse and worse every year.

The lack of security in technology is largely a question of economics: people want functional things, not secure things, so that’s what manufacturers and coders produce. We express shock after weaknesses are exposed, and then forget what happened when the next shiny thing comes along. Security problems become particularly disconcerting when we start talking about the Internet of Things, whose devices are not just conveniences; they can be essential to one’s well-being.

To be clear: war is a terrible thing. But war is also the mother of considerable ad hoc innovation and inventions that have a wide impact long after the shooting stops. War forces us to make the hard decisions we kept putting off because we were so busy “crushing” and “disrupting” everything. It forces us to re-evaluate what we consider important: a reliable AND secure grid, a pacemaker that works AND cannot be trivially hacked. Some of the positive things we might expect to get out of a cyberwar include:

  • A true understanding of how much we rely on IT in general and the Internet specifically. You don’t know what you’ve got till it’s gone, as the song says, and that’s certainly true of IT. You know IT impacts a great deal of your life, but almost no one understands how far it all goes. The last 20 years have basically been us plugging computers into networks and crossing our fingers. Risk? We have no idea.
  • A meaningful appreciation for the importance of security. Today, insecurity is an inconvenience. It is not entirely victimless, but increasingly it does not automatically make one a victim. It is a fine, a temporary dip in share price. In war, insecurity means death.
  • The importance of resilience. We are making dumb things ‘smart’ at an unprecedented rate. Left in the dust is the knowledge required to operate sans high technology in the wake of an attack. If you’re pushing 50 or older, you remember how to operate without ATMs, GrubHub, and GPS. Everyone else is literally going to be broke, hungry, and lost in the woods.
  • The creation of practical, effective, scalable solutions. Need to arm a resistance force quickly and cheaply? No problem. Need enough troops to fight in two theaters at opposite ends of the globe? No problem. Need ships tomorrow to get those men and materiel to the fight? No problem. When it has to be done, you find a way.
  • The creation of new opportunities for growth. When you’re tending your victory garden after a 12-hour shift in the ammo plant, or picking up bricks from what used to be your home in Dresden, it’s hard to imagine a world of prosperity. But after war comes a post-war boom. No one asked for the PC, cell phone, or iPod, yet all have impacted our lives and the economy in significant ways. There is no reason to think that the same thing won’t happen again; we just have a hard time conceiving it at this point in time.

In a cyberwar there will be casualties. Perhaps not directly, as you see in a bombing campaign, but via the impacts associated with a technologically advanced nation suddenly thrown back into the industrial (or worse) age (think Puerto Rico post-Hurricane Maria). The pain will be felt most severely in the cohorts that pose the greatest risk to internal stability. If you’re used to standing in line for everything, the inability to use IT is not that big a deal. If you’re the nouveau riche of a kleptocracy – or a member of a massive new middle class – and suddenly you’re back with the proles, you’re not going to be happy, and you’re going to question the legitimacy of whomever purports to be in charge yet can’t keep the lights on or supply potable water.

Change as driven by conflict is a provocative thought experiment, and certainly a worst-case scenario. The most likely situation is the status quo: breaches, fraud, denial, and disruption. If we reassess our relationship with cybersecurity it will certainly be via tragedy, but not necessarily war. Given how we responded to security failings 16 years ago, however, it is unclear whether those changes would be effective, much less ideal.

/* Originally published in CSOonline – Modern Warfare blog */

What Cybersecurity and a Trip to the Dentist Have in Common

It was that time of year again. The day I lie and promise to be good the rest of the year: dental check-up day. During this most recent visit I was struck by how much people treat the security of their computers and accounts the same way they treat their oral health.

You know what you’re supposed to do, but you don’t do it. “How often do you floss?” the dentist asks us, knowing full well that we’re lying through our bloody gums. If we flossed regularly we wouldn’t have bloody gums. When it comes to security we know we’re supposed to do all sorts of things, like create strong passwords and never re-use them, or lock our screens when we leave our desks, or use two-factor authentication on everything we can. When do we do these things? When a bunch of passwords get stolen and cracked, or when a phish leads to a data breach; the equivalent of flossing like a maniac the night before your annual check-up.
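The “flossing” above (strong, unique passwords) is one of the few habits you can actually mechanize. Below is a minimal, hypothetical sketch of a reuse check against known-compromised passwords; the `KNOWN_BREACHED` set is a stand-in I’ve invented for illustration, and a real deployment would query a service such as Have I Been Pwned rather than keep a local list.

```python
import hashlib

# Minimal sketch of a "never re-use a breached password" check.
# KNOWN_BREACHED stands in for a real corpus of compromised password
# hashes; here it holds SHA-1 digests of a few notoriously common
# passwords purely for demonstration.

KNOWN_BREACHED = {
    hashlib.sha1(b"password").hexdigest(),
    hashlib.sha1(b"123456").hexdigest(),
    hashlib.sha1(b"letmein").hexdigest(),
}

def is_breached(password: str) -> bool:
    """Return True if the password's SHA-1 digest appears in the breached set."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest()
    return digest in KNOWN_BREACHED

print(is_breached("letmein"))                     # → True
print(is_breached("correct horse battery staple"))  # → False
```

A check like this running at password-change time is the security equivalent of flossing every day instead of the night before the check-up: cheap, boring, and far more effective than the heroics after a breach.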

You have tools, but you don’t use them well. Mechanical toothbrushes, water flossers, even the metal tools the hygienist uses to scrape away plaque, are all readily available. When do you use them? You brush in the morning for sure and usually at night. We already know you don’t floss. You bought the Waterpik but it makes such a mess you only use it after corn on the cob or brisket. Likewise, you may run anti-virus software but you’re not diligent about updating it. You delay installing patches because it is inconvenient. You allow Flash and pop-ups and cookies and all sorts of things that could cause problems because who wants to use the web like it’s 1995?

Solutions are rarely permanent. Fillings replace the gap left when a cavity is removed, but eventually fillings can develop cracks. Crowns can come loose. That new IDS or firewall or end-point solution, where there was none, is a significant improvement in your security posture, but there are ways to bypass or undermine every security mechanism, at which point you’re back in the hands of expensive professionals (to fix the problem and/or clean up the mess) and looking at another pricy – and temporary – investment.

You have to get your hands dirty to do the job right. Understanding just what a sorry state your oral health is in means letting someone put their hands in your mouth. They’re spraying water and it’s splashing on your face. They’re getting your blood on their fingers. Bits of gunk are flying around. Sometimes they have to put you under because what’s necessary would make you scream. There is no such thing as a quick fix to security problems either. You have to attack the problem at the root, and that means blood, sweat, and tears.

These issues don’t exist in a vacuum. Dental health impacts more than just your mouth, and illnesses that impact other parts of your body can impact oral health. Bad or poor security can have a negative impact on your organization in myriad ways, and if your organization doesn’t place a priority on security you’re not going to get the best security capabilities or resources. In both cases you have to view the situation holistically. Just because you have a pretty smile, doesn’t mean you don’t have problems.

 

No One is Too Small to Attack

If you’ve been a security practitioner for any length of time, you have probably heard this from a client at least once:

We’re too small/unimportant to be a target of hackers.

If you’ve been doing this for any length of time you also know this is the point in the conversation where you smile politely, get up, and excuse yourself while they go back to their business and you go on to your next meeting. Anyone who has it in their head that they don’t have a red laser dot on their forehead is not going to be convinced by your war stories or ream of counter-examples.

They will learn the hard way.

The thing you want to tell these folks is that anyone online is a target because everyone online has something of value. Most folks who think they’re not targets think that way because they don’t deal in obviously valuable information. Data breaches at banks, government agencies, or credit bureaus make headlines because your name, along with your birth date, social security number, bank account, and so on, is monetizable.

If you move or make commodity widgets, your efficiency and up-time are what you consider valuable. The design of the widget is not special; yours is one of a hundred factories worldwide that make widgets. What these folks don’t realize is that just having a computer online is a valuable resource to someone. That’s one more processor that a bad guy didn’t have before. It’s one more hard drive they can store illicit material on. One more system they can hop through or use to target another victim. You may not be a target, but you could be an accessory.

It’s also important to note that while you may not be the intended victim of someone else’s attack, the fact that you were involved means downtime, the expense of cleaning systems, and most of the other issues that the actual victim has to deal with. Yes, on a smaller scale, but it’s not zero, which is the sum you came up with when you decided you weren’t a target.

The widget makers of the world are right to look with a jaundiced eye at calls to spend a lot on security, or to procure a lot of fancy boxes and software. When solutions are designed by people who cut their teeth fighting nation-state adversaries and “advanced” threats, there aren’t a lot of options for people who need the basics.

Success in cybersecurity at every level means paying attention to business needs, and acceptable risks, not just external threats. The best advice is holistic in nature, not a pitch that plays to your professional strengths. That you know how to wield a hammer is not an excuse for only paying attention to exposed nails.

Most of the time, the best security recommendations are the cheap and unglamorous ones. No, it’s not pretty or fun, but it’s what you owe your clients if you’re really about security.

C.R.E.A.M. IoT Edition

I didn’t get to see the discussion between Justine Bone and Chris Wysopal about the former’s approach to monetizing vulnerabilities. If you’re not familiar with the approach, or the “Muddy Waters” episode, take a minute to brush up, I’ll wait….

OK, so if you’re in one computer security sub-community the first words out of your mouth are probably something along the lines of: “what a bunch of money-grubbing parasites.” If you knew anyone associated with this event you’ve probably stopped talking to them. You’d certainly start talking shit about them. This is supposed to be about security, not profiteering.

If you’re in a different sub-community you’re probably thinking something along the lines of, “what a bunch of money-grubbing parasites,” only for different reasons. You’re not naive enough to think that a giant company will drop everything to fix the buffer overflow you discovered last week. Even if they did, because it’s a couple of lines in a couple of million lines of code, a fix isn’t necessarily imminent. Publicity linked to responsible disclosure is a more passive way of telling the world: “We are open for business” because it’s about security, but it’s also about paying the mortgage.

If you’re in yet another sub-community you’re probably wondering why you didn’t think of it yourself, and are thumbing through your Rolodex to find a firm to team up with. Not because mortgages or yachts don’t pay for themselves, but because you realize that the only way to get some companies to give a shit is to hit them where it hurts: in the wallet.

The impact that vulnerability disclosure, in any of its flavors, is having on computer security is not zero, but it’s not registering on a scale that matters. Bug bounty programs are all the rage, and they have great utility, but it will take time before the global pwns/minute ratio changes in any meaningful fashion.

Arguing about the utility of your preferred disclosure policy misses the most significant point about vulnerabilities: the people who created them don’t care unless it costs them money. For publicly traded companies, pwnage does impact the stock price: for maybe a fiscal quarter. Just about every company that’s suffered an epic breach sees their stock price at or higher than it was pre-breach just a year later. Shorting a company’s stock before dropping the mic on one vulnerability is a novelty: it’s a material event if you can do it fiscal quarter after fiscal quarter.

We can go round and round about what’s going to drive improvements in computer security writ large, but when you boil it down it’s really only about one or two things: money and bodies. This particular approach to monetizing vulnerabilities tackles both.

We will begin to see significant improvements in computer security when a sufficient number of people die in a sufficiently short period of time due to computer security issues. At a minimum we’ll see legislative action, which will be designed to drive improvements. Do you know how many people had to die before seatbelts in cars became mandatory? You don’t want to know.

When the cost of making insecure devices exceeds the profits they generate, we’ll see improvements. At a minimum we’ll see bug bounty programs, which are one piece of the puzzle of making actually – or at least reasonably – secure devices. Do you know how hard it is to write secure code? You don’t want to know.

If you’re someone with a vulnerable medical device implanted in them you’re probably thinking something along the lines of, “who the **** do you think you are, telling people how to kill me?” Yeah, there is that. But as has been pointed out in numerous interviews, who is more wrong: the person who points out the vulnerability (without PoC) or the company that knowingly lets people walk around with potentially fatally flawed devices in their bodies? Maybe two wrongs don’t make a right, but as is so often the case in security, you have to choose the least terrible option.

The Wolf is Here

For decades we’ve heard that iCalamity is right around the corner. For decades we’ve largely ignored pleas to address computer security issues while they were relatively cheap and easy to fix, before they grew too large and complicated to fix at all. We have been living a fairy tale life, and absent bold action and an emphasis on resiliency, it only gets grim(m)er going forward.

Reasonably affordable personal computers became a thing when I was in high school. I fiddled around a bit, but I didn’t know that computer security was a thing until I was on active duty and the Morris Worm was all over the news. Between the last time Snap! charted and today, we have covered a lot of ground from a general-purpose IT perspective. We’ve gone from HTML and CGI to the cloud. From a security perspective, however, we’re still largely relying on firewalls, anti-virus, and SSL.

Why the disparate pace of progress? People demand that their technology be functional, not secure. Like so many areas of our lives, we worry about the here and now, not the what-might-be. We only worry about risks until a sufficiently horrific scenario occurs, or if one is not enough, until enough of them occur in a sufficiently short period of time.

Of course today we don’t just have to worry about securing PCs. By now it is fairly common knowledge that your car is full of computers, as is increasingly your house. Some people wear computers, and some of us are walking around with computers inside of us. Critical infrastructure is lousy with computers, and this week we learned that those shepherd boys crying ‘wolf’ all those years weren’t playing us for fools, they were just too early.

The fragility of our standard of living is no longer the musings of Cassandras. The proof of concept was thankfully demonstrated far, far away, but the reality is we’re not really any safer just because ‘merica. Keeping the lights on, hearts beating, and the water flowing is a far more complex endeavor than you find in the commodity IT world. It is entirely possible that in some situations there is no ‘fix’ to certain problems, which means given various inter-dependencies we will always find ourselves with a Damoclean sword over our heads.

Mixed mythologies notwithstanding, the key to success writ large is insight and resiliency. Awareness of what you have, how it works, and how to get along without it will be critical to surviving both accidents and attacks. I would like to think that the market will demand both functional and secure technology, and that manufacturers will respond accordingly, but 50 years of playing kick-the-can tells me that’s not likely. The analog to security in industrial environments is safety, and that’s one area power plants, hospitals, and the like have down far better than their peers in the general-purpose computing world. We might not be able to secure the future, but with luck we should be able to survive it.

Cyber Security Through the Lens of an Election

Inauguration day has come and gone, giving us some time to reflect on both the previous election process as well as what lies ahead for the next four years. There are a number of parallels between running for office and running a cyber security operation, and a few lessons learned from the former can help those involved in the latter.

It’s a Campaign, Not a Day Hike

Depending on the office you’re running for, your campaign might start years before the winner takes the oath of office. Likewise, it is likely to take years to reach the ideal end-state for the IT enterprise you’re responsible for protecting. To further complicate things, technology in general and security threats specifically will change over time, which means the probability you’ll see the end of the race is very close to zero. Not running is not an option, so pace yourself.

You Need a Team

Every chief executive needs a team to get things done. In government, it’s called a “cabinet” and in business the “C-suite.” Regardless of the nomenclature, the purpose is the same: they are the people who specialize in certain things who help you formulate and execute policy. If you’re lucky you’ll get a team that buys into your vision, trusts you implicitly, and has the resources necessary to get the job done. More than likely you’re going to have something more akin to a Team of Rivals, but not ones you got to pick.

(All Kinds of) Experience Matters

There is no one-size-fits-all career path that leads to the White House. People who get into cyber security have a wide range of backgrounds. Yet in both fields people love to poke at perceived shortcomings of those who aspire to (or end up in) top positions. We pick on Michael Daniel or Rudy Giuliani for their lack of technical acumen, forgetting that George Washington never went to high school and his first job was blue collar. Being able to cast a vision, manage people under stress, manage limited resources, and inspire confidence: none of those things requires a given type or level of education, and all of them can be developed in a variety of ways.

Everyone is a Constituent

If you’re in security, everyone is “your people.” You don’t have a party, you don’t have a faction, you have to make everyone happy. At the very least you have to keep everyone from revolting. Everyone has a different agenda, different needs, different outlooks. You will make enemies, and different people will be your friend or foe depending on the situation. Success depends on keeping all those factors in balance so that you can move the center forward.

It’s a great parlor game to try to figure out what the next four years are going to be like on the political front, but the fact of the matter is we have no real idea how things are going to go. In that sense politics is a lot like cyber security: you prepare for the worst, you assume every day is going to be rocky, but sometimes you get pleasantly surprised.

Hail to the Chief! All of them.

Save Yourself – Delete Your Data

You probably don’t remember, but in the spring of 2015 I wrote:

What if ransomware is only the beginning? What about exposé-ware? I’ve copied your files. Pay me a minimal amount of money in a given time-frame or I’ll publish your data online for everyone to see. Live in a community that frowns upon certain types of behavior? Pay me or I’ll make sure the pitchfork brigade is at your door.

This week we learn:

Instead of simply encoding files so that users can’t access them, some blackmailers armed with a new kind of malware called doxware are threatening to leak potentially sensitive files to the public if a ransom isn’t paid, says Chris Ensey, COO of Dunbar Security Solutions.

My response now is the same as it was before:

In an era when remedying computer security failures is cheaper than calling in computer security experts, we need to collectively get on board with some new ways of doing things.

For starters, we need to work at scale. Botnet takedowns are one example. I’m proud to have been associated with a few, and I’m not going to pretend every effort like this goes off without a hitch, but we need to do more at or near the same scale as the bad guys, and often. That’s really the only way we have any hope of raising attacker costs: when they’re fighting people in the same weight class with similar skills on a regular basis.

We also need to accept that the future has to be more about restoration than conviction. Most corporate victims of computer crime don’t want to prosecute; they just want to get back to work. Tactics, techniques, procedures and tools need to reflect that reality. If you’re law enforcement you don’t have a lot of leeway in that regard, but everyone else: are you really doing right by your customers if you are adhering to a law enforcement-centric approach simply because that’s how you were taught?

Finally, we need to retire more problems. You’ve heard the phrase: “if you’re so smart how come you’re not rich?” My variation is: “if you’re such an expert how come you haven’t solved anything?” Now, not every computer security problem can be solved, but there are problems that can be minimized if not trivialized. That would require regularly growing and then slaughtering cash cows. Business majors who run massive security companies don’t like that idea, but it is not like we’re going to run out of problems. So as long as there are new opportunities to slay digital dragons, you have to ask yourself: am I in this to get rich, or am I in this to make the ‘Net a safer place? Kudos if you can honestly do both.

…and I would add one more thing: If you don’t need data, get rid of it. I remember when storage was expensive and you had to be judicious about what you saved, but storage these days is practically free, which has led people to think that there are no consequences for control-s’ing their way to retention nirvana. The supposed value of “big data” doesn’t help. When you get down to it though, you can’t be held ransom – or extorted – over something you don’t have.
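
The “get rid of it” advice above is the kind of thing worth automating rather than leaving to good intentions. A minimal sketch of a retention policy, assuming a simple age-based rule (the function name, directory, and 365-day threshold are my illustration, not anything the author prescribes):

```python
import time
from pathlib import Path

def purge_stale_files(root: str, max_age_days: int) -> list[str]:
    """Delete files under `root` not modified within `max_age_days`.

    Returns the paths that were removed, so the run can be logged
    and audited before anyone trusts it in production.
    """
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for path in Path(root).rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()
            removed.append(str(path))
    return removed

# Example: enforce a one-year retention window on an archive directory.
# purge_stale_files("/srv/archive", max_age_days=365)
```

Real retention policies are messier than a modification-time check – legal holds, backups, and records requirements all intrude – but even a crude sweep like this shrinks what an extortionist can copy in the first place.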