Ransomware: The Present We Deserve?

The scourge of ransomware is the inevitable result of decades of schizophrenia about our relationship with information technology and security. Treating this problem like all the ones that came before it, in the same fashion we always have, will only prolong our suffering. Clarity, creativity, and will are required if we are to have any hope of a future where ransomware is an annoyance and not a plague.

Ransomware is not new, and neither is using alternative payment systems to fund criminal activity. Cryptocurrency has certainly made ransomware much more prolific, and thanks to persistent access to computing technology and ubiquitous connectivity, its impact has been much more significant than in the past.

Recently a task force was formed to help come up with ideas on how to address the ransomware problem. The release of their findings coincided with a ransomware infection that caught the attention of – if not all – certainly half of the country. You could almost be forgiven for thinking that collectively we – government, industry, and citizens – are finally ready to mobilize against a cybersecurity problem.

Don’t hold your breath.

Über Alles

The problem with getting too worked up about most proposed solutions to security problems, especially ransomware, is that they inevitably default to “things we know how to do,” not things that might actually work. Most efforts ignore some important issues that neither security nerds nor policy wonks seem to want to factor into their calculus.

We place a much higher value on functionality than we do on security. When the average Jane started getting online without services like AOL or CompuServe, all that (Web) activity was unencrypted. It was not long (1995) before SSL came along, yet it took until 2021 for ‘a majority’ of traffic to be encrypted. That is not just a long time in ‘Internet time’; it is a long time, period. Meanwhile, outside of the security realm, in that same timeframe we went from CGI and Perl scripts to ASP, SOAP, REST, etc.

Cash rules everything around us. The ransomware infection at the Colonial Pipeline company is on everyone’s mind, but Colonial Pipeline’s pipeline system was not hacked; the system that allowed them to bill customers was. Most hacks of economic note can trace their origins to the attitude that ransoms and DFIR expenses are a cost of doing business, and not necessarily something to be avoided at all costs.

We would rather talk about “security” even if security is not the best approach to the problem. A $162B industry will not be ignored. Ask 100 different security product or service vendors what the best defense against ransomware is and none of them will bring up a sound, validated, and secure backup scheme (and related recovery procedures). Why? Because that is a system administrator’s job. Security companies would literally rather blow an advantage over the adversary to make a buck. We would rather whinge on about the “talent shortage” than admit that one of the world’s biggest security problems can be effectively addressed by the “mere” IT guy.[1]
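The unglamorous backup-validation work described above can be sketched in a few lines. This is a minimal illustration, not a production scheme: the directory layout is hypothetical, and a real setup would also need offline or immutable copies and periodic restore drills, since ransomware that can reach your backups can encrypt them too.

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Stream a file through SHA-256 so large files need not fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(source_dir: Path, backup_dir: Path) -> list[str]:
    """Return relative paths whose backup copy is missing or differs from the source."""
    problems = []
    for src in sorted(source_dir.rglob("*")):
        if not src.is_file():
            continue
        rel = src.relative_to(source_dir)
        dst = backup_dir / rel
        if not dst.is_file() or sha256(src) != sha256(dst):
            problems.append(str(rel))
    return problems
```

Run on a schedule, a non-empty return value is the system administrator’s early warning that the recovery plan is fiction.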

We avoid conflicts at all costs. Public-private partnerships, information-sharing schemes: with very few exceptions, everything you might consider doing in the name of cybersecurity is entirely voluntary. Organizations like NIST establish standards, but unless you work for the government you are not obliged to follow them (and even then…). Any mention of requiring commercial concerns to adhere to standards, policy, or practices brings out the lobbyists and arguments about how industry can regulate itself, which I am sure makes for a very compelling tale if you are running for re-election.

We cannot think clearly about the problem. Ransomware is a criminal activity. Full stop. It is not “cyberwar” and it is not “terrorism.” As a tactic it could be used in support of both war and terror, but getting paid has nothing to do with the violent promotion of political ideology.[2] Such talk really only serves as a defense for the use of currently foreign-facing intelligence capabilities against domestic targets. This is a scenario only a totalitarian could love, because once the government starts doing something, it never stops. The Internet would become a panopticon that you are forced to pay for (at least the apps that spy on you now are “free”).

We argue morality when we should be practicing humanity. Without a doubt paying a ransom rewards criminals and supports further criminal (and likely worse) activity. But understand what people are saying when they say, “don’t pay the ransom.” What they’re saying is that denying a criminal – who will almost certainly never see the inside of a prison cell much less a courtroom – a payday is more important than the businesses that will go under, the jobs that will be lost, and the families that will be impacted because they cannot recover. If you are of a certain age this sort of thinking will seem familiar. A solution built on a graveyard of bankruptcies and broken lives is not a solution to be proud of.

Accept the Things We Cannot Change…

Now that we have recognized the reality of our situation, it is time to think about how to move the ball forward in a fashion that fits within that reality. I am under no illusion that these are perfect solutions, or even very palatable ones (they are certainly not comprehensive), but they are also not dependent upon the world becoming a spherical cow of uniform size and density.

Leverage market forces; explain risk and affirmative risk acceptance. The desire for more functionality will always win out over security. So make it clear to users exactly what the risks are with regard to a given product or service. Not legalese in 4-point type on a click-through no one reads; up-front and in plain English. Zero-trust? Right now most people are dealing with zero-knowledge. With time the right balance of functionality and security will out, as people decide what level of risk they are comfortable with online, just like everything else in their life.[3]

Make a market when there is none (or it is weak). Banks take security seriously because they would not be banks very long if they could not ensure an acceptable level of security for the float in the cell in the spreadsheet on the disk in the data center that represents our life savings. Every patient trusts that their doctor is not going to tell the world about their issues, but what their doctor’s computer has to say is another story. The market to support security in banks is massive; the market to support security in small medical practices is almost non-existent. Is that a cybersecurity-ACA for underserved markets? Maybe.

Enough Security, More Resilience. As long as there are jobs that require opening email and clicking on attachments, no amount of yelling about the evil that lurks in email (how most ransomware gets in) is going to change things. You can try to avoid taking punches, which means you will only lose tired, or you can build up the ability to take punches, which extends how long you can fight, and may even determine whether you become a target at all.[4]

Hate the players not the technology. No one calls for the abolishment of fiat currency because it is the spendable medium of choice for criminals. No one calls for the shuttering of financial institutions – who are supposedly professionals who know better – when they get caught doing shady deals and dealing with shady characters. Cryptocurrency is not the villain here. We should demand and enforce better behavior (know your customer, etc.), we should not be attacking technology.[5]

Think Safety. Security regulations are all about what you cannot do, and enforcement of such rules brings out the baton-wielding pseudo-cop in every security practitioner. Safety regulations are often about what you cannot do as well, but they’re also about how you can do things in a fashion that won’t leave your workforce missing limbs or lying in a hospital bed. It turns out we know how to reduce risk “without adverse effects to employment, sales, credit ratings, or firm survival”; we just don’t apply that model to cyberspace.

The Future We Want

For five days in December of 1952 the city of London was hit by the “Great Smog.” Officials estimate that as many as 10,000 people died and 100,000 were made ill as a direct result of this confluence of weather and coal smoke. The Clean Air Act of 1956 was the result.

In 1969 the Cuyahoga River in Cleveland, Ohio caught fire, the last of a series of environmental disasters that led to the creation of the Environmental Protection Agency.

In 2010 the Deepwater Horizon oil spill was the largest environmental disaster in U.S. history and “the biggest public health crisis from a chemical poisoning in the history of the country.”  It resulted in the passage of the RESTORE Act, funded by $20B in fines paid by BP.

We have an opportunity to do decisive and meaningful things that reduce the risks associated with our increasing dependence upon information technology before the world burns and people die. This, of course, requires a level of will at individual, corporate, and governmental levels that heretofore has been absent when it comes to cybersecurity issues. We can improve the likelihood that our collective intestinal fortitude will rise to the occasion by addressing these issues in ways that are likely to work because they are rooted in reality, even if they seem conciliatory, or the approach is unfamiliar to us. To create a future where ransomware doesn’t exist, or is merely an annoyance, the usual way and pace of doing business will not suffice.

[1] In the interest of full disclosure, the author was formerly merely an IT guy.

[2] We want it to be more than just crime because we paid all this money for things like Cyber Command and CISA and feel like we need to get our money’s worth in a publicly attributable fashion.

[3] Ready, willing, and able to accept that we’ve already achieved that state, we just haven’t made it official yet.

[4] Again, unsexy IT stuff, not fancy blinky box security stuff for which you can charge a premium.

[5] What is the difference between malware and an app? Intent.

The Best Defense?

We have all heard the mantra:

“Attackers only have to be right once; defenders have to be right every time.”

It is, of course, complete nonsense, and something only people who do not understand how compromising computers actually works would say. The more accurate statement is:

“Attackers have to be right every time, and in series.”

To draw a simple analogy, imagine you are about to enter a house that is not yours. You can see the outside of the house and the door into the house, but inside the house it is pitch black. You’ve been inside the same style of house before so you have a rough idea of where the walls and doors and stairways are, but you don’t know what furniture is where and what modifications to the actual house might have been made. To get from the foyer to a bedroom with a treasure chest in the closet involves you taking tiny steps and moving your hands around in front of you in the hopes that you don’t trip on a shoe or knock over a lamp — something that would let the owner of the house know you were there.

Extending the analogy a bit, you worry that the homeowners are not dead asleep. You worry that they have motion sensors installed around the house. That there are infra-red cameras watching you stumble around. That they own night vision goggles and are handy with a shotgun. These are all things that would bring your hunt for treasure to a quick halt and all things that could be deployed against you without your knowledge.
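The “right every time, and in series” point can be put in toy-model terms. Assuming, purely for illustration (real detection is neither independent nor uniform), that each step of an intrusion is caught with some fixed probability, an attacker who must complete many steps undetected sees their odds collapse quickly:

```python
# Toy model of serial attacker success. Assumes each step is detected
# independently with the same probability -- a deliberate simplification.

def attacker_success(p_detect_per_step: float, steps: int) -> float:
    """Probability of completing every step without tripping any alarm."""
    return (1.0 - p_detect_per_step) ** steps

# Even a modest 20% per-step chance of detection compounds fast:
for steps in (1, 5, 10, 20):
    print(f"{steps:2d} steps -> {attacker_success(0.20, steps):.4f}")
```

Every lamp the intruder might knock over raises the per-step detection probability, which is the homeowner’s structural advantage in this framing.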

In the never-ending debate over the relative advantages of offense over defense, or vice versa, the trend recently has been to promote defender advantages. Attackers aren’t all that because this is your house! Nobody ****s with you in your house! 

The problem of course is that you might live in the metaphorical house, but you also might have no earthly idea how to use it to your advantage. If you’ve owned several houses in different parts of the country over the course of several decades, this is all old hat, but the new homeowner (so to speak) lacks a great deal of your knowledge and experience. 

Take recent events in Texas as an example. People who live in the northern part of the country know exactly what to do when the temperatures drop below a certain level in order to prevent their water pipes from freezing. If you’ve only ever lived in Texas and never experienced epic cold relative to the region, you’re going to have a bad day. The temperatures may be back to normal, but the cost, suffering, and inconvenience associated with those rare days lingers.

There is a growing chorus of defenders who are piling on the suffering and woe of other defenders (sometimes “defenders”) because the latter are not taking advantage of the benefits ‘home ownership’ affords. This is a special kind of arrogance considering:

  • You have no idea how complicated someone else’s network is.
  • You have no idea how skilled or knowledgeable another defender is.
  • You have no idea what resources that defender has (or doesn’t).
  • You have no idea what competing priorities that person has to contend with.

If you’re only responsible for defending yourself, or your home lab, or a relatively simple IT enterprise, or deal in research and theory, your opinion about what someone else woulda/coulda/shoulda done isn’t particularly useful. It is, in fact, divisive and detrimental. 

At some point in your career you were the person who was under the gun. Who was at a loss. Who didn’t know how they were going to get themselves out of the fix they were in. At that point in time you would have given anything for someone to extend you a hand and help shoulder the burden. There are times when the best thing one can do for cyber defense has nothing to do with technology and everything to do with empathy.

Or keep throwing drowning men bricks and see where that takes the community.

From Solar Sunrise to Solar Winds: The Questionable Value of Two Decades of Cybersecurity Advice

While the Ware Report of 1970 codified the foundations of the computer security discipline, it was the President’s Commission on Critical Infrastructure Protection report of 1997 that expanded those requirements into recommendations for both discrete entities as well as the nascent communities that were growing in and around the Internet. Subsequent events that were the result of ignoring that advice in turn led to the creation of more reports, assessments, and studies that reiterate what was said before. If everyone agrees on what we should do, why do we seem incapable of doing it? Alternately, if we are doing what we have been told to do, and have not reduced the risks we face, are we asking people to do the wrong things?

A Brief History of Cybersecurity Advice

Efforts to secure, or in the vernacular of the time “audit” computers, existed before the Ware Report,[1] but it was the report that codified principles and practices that served as the building blocks of what would become the multi-billion-dollar cybersecurity industry. Books like Computer Capers[2] in the 1970s and The Cuckoo’s Egg[3] in the 1980s show just how slowly the field progressed in both the commercial and governmental spheres, and how varied and disconnected cyber defense and crime-fighting efforts were at the time.

If the “wake-up call” associated with both the sovereign-state and non-state-actor threats was not ringing with the events of The Cuckoo’s Egg, the pounding on the door by hotel security was Eligible Receiver 97 (ER97): a no-notice interoperability exercise that had both physical and cyberspace components to it. With regards to the latter, National Security Agency red teams used common hacker techniques and tools freely available online to successfully compromise dozens of military and civilian infrastructure systems.[4]

No sooner did the dust settle from ER97 than the events of Solar Sunrise kicked off: a series of compromises of Department of Defense systems that everyone was sure was being perpetrated by Iraq, right up until it was proven to be the work of three teenagers. Solar Sunrise reiterated the point that not only was the ability to apply force online possible with disturbing ease, but that the set of potential threat actors we needed to be concerned about was larger than originally thought, and our ability to deal with them was inadequate.[5]

In the same timeframe the aforementioned events were taking place, a set of recommendations for dealing with such problems was promulgated through the President’s Commission on Critical Infrastructure Protection (PCCIP),[6] which called out the need for:

  • Better policies
  • Public-Private partnerships
  • Information sharing
  • Central coordination and control
  • New or improved organizations and mechanisms to deal with cyber threats and vulnerabilities
  • The need to adapt to this new environment and to be agile enough to respond to emerging threats
  • Legal reforms
  • Improved training, awareness, and education
  • More research and development

These same recommendations were made in the National Plan for Information Systems Protection in 2000,[7] the National Strategy to Secure Cyberspace in 2003,[8] the National Infrastructure Protection Plan in 2006,[9] the Securing Cyberspace for the 44th Presidency report of 2008,[10] and the Comprehensive National Cybersecurity Initiative, also in 2008.[11] For those keeping score, that is 11 years of the same advice, as hacks against commercial and governmental systems kept growing.

It is not until 2010, with the publication of the National Security Strategy (NSS),[12] that some new advice is proffered. The NSS was not exclusively focused on cybersecurity issues, but it continued to recommend partnerships and sharing, better capabilities to deal with threats, training and awareness, as well as R&D. It also highlighted the need for capacity building, as well as the establishment and promotion of “norms.” The DOD Strategy for Operating in Cyberspace (2011), the DOD Cyber Strategy (2018), the National Cyber Strategy (2018), the DHS Cybersecurity Strategy (2018), and the much-hyped Cyberspace Solarium Commission report (2020) all offer a mix of both old and new advice.[13] That’s another 10 years of telling people what ought to be done, while attacks continued apace and their negative impact grew (see Appendix A for details).

Meanwhile Back at the Server Farm

What impact has all this good advice had on the state of cybersecurity? Well, at the federal level the answer is a mixed bag. We have a history of going through “cyber czars”[14] and other senior executives responsible for cybersecurity like most people change underwear.[15] Efforts like Einstein[16] are highly touted, but their effectiveness is often called into question.[17] Every military service has to have its own “cyber command” – not counting the actual Cyber Command – and all sorts of efforts are underway to try to reinforce the ranks with information-age skills[18] in very much industrial-age institutions, with predictable effect.[19] It is hard to think about events like successful attacks against government systems during Allied Force,[20] the efforts of “patriotic hackers” after the EP-3 force-down,[21] the accidental bombing of the Chinese embassy in Belgrade,[22] the scope and scale of damage associated with the OPM hack,[23] and the loss of offensive tools from not only the CIA[24] but also the NSA[25] and not wonder about the value of this evergreen advice.

At the state, local, and tribal level the situation is far worse. They have all the same functions of government to execute as their counterparts at the federal level, but none of the budget or human resources. Municipally focused ransomware attacks of the past few years are illustrative of the problem and how difficult it is to address.[26]

In the commercial sector the situation is not much better. The government has an obligation to look after the well-being of its citizens; private enterprise is driven by a profit motive and the interests of a tiny sub-set of the citizenry: shareholders. Time and time again we see ‘risk acceptance’ as the reason for failing to adhere to sound security practice, and why not? The amount of money that can be made before the inevitable compromise far exceeds the amount required to clean up the mess and compensate the victims. No one is in the cybersecurity business, not even cybersecurity companies; they are just in business.

What Might be Wrong?

Twenty-one years of asking people to do the “same old” and expecting a different result calls into question the sanity of those giving the advice, and the advice itself. The author lacks the medical qualifications to assess anyone’s mental health, but one can examine the advice given and formulate some reasonable theories to consider.

This may be the wrong advice. No one who has worked in this field for any length of time has much good to say about public-private partnerships, information sharing schemes, or the state of security awareness training. Big “R” research that has practical implications is rare, while little “r” research as presented in most conferences is lost in a wilderness of wheel-reinvention and stunt hacking.

The right advice, not always the right audiences. The number of organizations that can actually derive benefit from following such advice is quite small, though they themselves tend to be quite large. The security poverty line is a real thing,[27] and maybe expecting the largest segment of the economy (small and medium sized businesses) to carry on like they are JPMorgan Chase with its half-billion-dollar security budget is a bridge too far.[28]

Good advice, bad implementation. We are free with advice but parsimonious when it comes to things that would lead to adherence. With a few exceptions, everything is voluntary. We suggest; we do not mandate. We cajole; we do not require. We encourage, but we do not incent. Everyone is hesitant to use a stick, but we make no effort to offer carrots. Outside of the military and certain government circles, cybersecurity is something people are obliged to have, not anything they want. We appeal to people’s sense of patriotism or talk of “doing the right thing,” but the NSA is not here to save your private enterprise, and advice from people on high horses is hard to swallow.[29]

What Might Make Things Better?

If those in both policy and technology circles can agree that the recommended advice is sound, then we should be examining how we might do things differently, and how we can demonstrate success.

If it matters measure it. At a high level, asking for “better” does not make sense if you do not define what “better” means. Not hand-wavy abstractions, but hard metrics that can be measured, communicated, and evaluated.

The most important efforts must be mandatory. No one does anything voluntary for long. Such efforts start well, and everyone participating means well, but they quickly become number 11 on the top-10 list of things to do. Particularly with regard to government and critical infrastructure providers, no one should be able to lobby their way out of their responsibilities, which leads us to…

Align everyone’s incentives. I am not aware of any meaningful metrics on the value of being a member of an ISAC, ISAO, or joining InfraGard (and the author has been on both sides of these relationships). In the political sphere telling someone to do something without providing resources has a name: unfunded mandate. Better security does not pay for itself. There are any number of incentives that might be offered that would drive compliance, but incentives are almost never an agenda item in panel or policy discussions.

Limited liability and full accountability. Those who provide data to help assess threats and gauge risk must be provided sufficient protection against adverse legal action (short of negligence or incompetence).[30] The lack of such data in sufficient volume makes it hard to understand the scope of the problems we face.[31] Likewise, we have to stop pretending that code, in the right context, is any different than concrete, steel, or silicon. You do not pick random people off the street to build a suspension bridge or pacemaker. This is not a call for a licensing scheme nor protectionism, but adherence to standards and imposing costs on those who willingly fail to do so.

More R&D only makes sense if you know the state of the art. There is no dedicated repository of cybersecurity knowledge that researchers at the academic, corporate, or independent levels can access to understand what prior art exists in any given security discipline. We cannot hope to level-up the science portion of the art-and-science that is cybersecurity without adhering to more scientific practices, of which a repository is a cornerstone.

Recognize the limitations of political approaches. No nation is giving up the advantages that operating in cyberspace affords them in a military or intelligence context. “Norms” are a double-edged sword; if you expect others to adhere to them, you are obliged to do the same. Now re-read the first sentence. What we may want to accomplish politically and what the Internet as-designed will allow are two different things. Aspiring Achesons and Kennans that improve their understanding of the technology that underpins cyberspace will develop approaches more likely to produce positive, achievable results.

The Next 10 Years

If history is any indication, we are a few short months away from the release of another set of policy recommendations that will encompass most of the ideas put forth previously. It will almost certainly contain nothing novel, but it will be received with a great deal of sound and fury, repeated annually, signifying nothing.

Forward progress in cybersecurity is entirely dependent upon the will of political leadership. Understandably, blood, not bytes, takes precedence in governmental affairs, but our willingness to be so casual about something we claim to be a priority suggests that cybersecurity is not the issue we in cybersecurity think it is. That is a fair point: stealing credit card numbers, social security numbers, medical files, even taking over one’s entire identity does not equate to death.

But the fact of the matter is that, by and large, we only learn from death. Nothing is really a problem until the body count is high enough, at which point it becomes a national imperative. One need only remember their last trip to the airport to realize that this is not hyperbole. Cybersecurity is one field where we have a rare opportunity to bring about meaningful change before we have to hold a memorial service for those we lost.

Better security is a three-legged stool: You need to identify the problem, you need to devise a solution, and you need to measure the effectiveness of that solution. What has impact stays, what does not goes back to the drawing board. For 21 years we have been reinforcing two of those legs and wondering why we are still falling over. Repeating the same mantra while continuing to plug random boxes into a global network is the cyberspace equivalent of “thoughts and prayers.”

This is not a call to declare a war on cyber insecurity, if for no other reason than the wars on drugs and poverty have not exactly produced ideal results. It is a declaration that if something is worth doing then we should do it properly or reprioritize accordingly. To the extent that cybersecurity practitioners have been crying “wolf” for the past few decades, mea culpa, but it is worth remembering that eventually the wolf shows up.

[1] https://en.wikipedia.org/wiki/Ware_report

[2] https://www.amazon.com/Computer-Capers-Thomas-Whiteside/dp/0451617533

[3] https://en.wikipedia.org/wiki/The_Cuckoo%27s_Egg_(book)

[4] https://en.wikipedia.org/wiki/Eligible_Receiver_97

[5] https://www.globalsecurity.org/military/ops/solar-sunrise.htm

[6] https://www.hsdl.org/?abstract&did=487492

[7] https://www.hsdl.org/?abstract&did=341

[8] https://us-cert.cisa.gov/sites/default/files/publications/cyberspace_strategy.pdf

[9] https://www.cisa.gov/national-infrastructure-protection-plan

[10] https://www.csis.org/analysis/securing-cyberspace-44th-presidency

[11] https://obamawhitehouse.archives.gov/issues/foreign-policy/cybersecurity/national-initiative

[12] https://obamawhitehouse.archives.gov/sites/default/files/docs/2015_national_security_strategy_2.pdf

[13] https://www.solarium.gov/report

[14] https://www.nbcnews.com/id/wbna6151309

[15] https://www.cnn.com/2020/11/17/politics/chris-krebs-fired-by-trump/index.html

[16] https://www.cisa.gov/einstein

[17] https://www.business2community.com/cybersecurity/dhs-einstein-fail-01462281

[18] https://www.goarmy.com/army-cyber/cyber-direct-commissioning-program.html

[19] https://thehill.com/opinion/cybersecurity/391426-pentagon-faces-array-of-challenges-in-retaining-cybersecurity-personnel

[20] https://www.researchgate.net/publication/228605067_The_Cyberspace_Dimension_in_Armed_Conflict_Approaching_a_Complex_Issue_with_Assistance_of_the_Morphological_Method

[21] https://www.wired.com/2001/04/a-chinese-call-to-hack-u-s/

[22] https://www.wired.com/1999/09/china-fought-bombs-with-spam/

[23] https://en.wikipedia.org/wiki/Office_of_Personnel_Management_data_breach

[24] https://www.washingtonpost.com/national-security/elite-cia-unit-that-developed-hacking-tools-failed-to-secure-its-own-systems-allowing-massive-leak-an-internal-report-found/2020/06/15/502e3456-ae9d-11ea-8f56-63f38c990077_story.html

[25] https://en.wikipedia.org/wiki/The_Shadow_Brokers

[26] https://www.nytimes.com/2020/02/09/technology/ransomware-attacks.html

[27] https://www.uscybersecurity.net/csmag/the-cybersecurity-poverty-line/

[28] https://www.forbes.com/sites/stevemorgan/2016/01/30/why-j-p-morgan-chase-co-is-spending-a-half-billion-dollars-on-cybersecurity/?sh=2e4f84062599

[29] https://www.tripwire.com/state-of-security/featured/fbi-dont-pay-ransomware/

[30] https://www.lexology.com/library/detail.aspx?g=91d7fce9-4f04-4376-b4ae-b80b141f9291

[31] We are, effectively, at the mercy of private security companies who choose to publish reports on the findings they extract from the cases they are called upon to support. While informative, such reports capture the details of a fraction of a percentage of the total number of cases worldwide.

The Wolf Approaches

In the government, the use of the term “grave” means something very specific. That meaning should be obvious to you, but on the off chance that you haven’t had your first cup of coffee yet, it means that whatever the issue is, messing it up could cost someone (or more than one person) their life.

The attack against the water system in Oldsmar, Florida was a potentially grave situation. A water system is not a trivial technology enterprise and as such it has numerous checks – including a human in the loop – to make sure malicious activity or honest mistakes don’t end lives. But the fact that an outsider was able to get such access in the first place makes it clear that there exists a disconnect between what such systems are supposed to be, and what they are.

We give the Sheriff of Pinellas County a pass on the use of the term “wake-up call” because he has not spent a large portion of his life in the belly of the cybersecurity beast. A wake-up call only happens once; how we respond indicates how serious we are about taking action:

  • In 2015 DHS Secretary Jeh Johnson called the OPM breach a “wake-up call.”
  • In 2012 General Alexander, Director of the National Security Agency, called the hacker attack on Saudi ARAMCO a “wake-up call.”
  • In 2010 Michael M. DuBose, chief of the Justice Department’s Computer Crime and Intellectual Property Section, called successful breaches such as Aurora “a wake-up call.”
  • In 2008 Deputy Secretary of Defense William Lynn called the BUCKSHOT YANKEE incident “an important wake-up call.”
  • In 2003 Mike Rothery, Director of Critical Infrastructure Policy in the Attorney-General’s Department of the (Australian) Federal Government, called a hack into a wastewater treatment plant “a wake-up call.”
  • In 2000 Attorney General Janet Reno called a series of denial-of-service attacks against various companies a “wake-up call.”
  • In 1998 Deputy Secretary of Defense John Hamre called the SOLAR SUNRISE incident “a wake-up call.”
  • In 1989 IT executive Thomas Nolle wrote in Computer Week that poor LAN security was a “wake-up call.”

The details of this particular case are probably never going to see sufficient sunlight, which only adds to the ignorance on these matters at all levels and sustains the fragility we so desperately need to overcome. This is particularly important when you consider how our relationship with technology is only getting more intimate.

These are issues that are decades old, yet if you want to have some idea of what it will take to spur action, keep in mind that we intentionally poisoned 100,000 people via a water system for five years and no one is in jail (yet). The idea that the people rooting around in such systems have the ability to cause such effects but don’t because they appreciate the moral, ethical, and legal implications of the matter is increasingly wishful thinking.

We in security have long been accused of “crying ‘wolf’” and for a long time those critics were right. We knew bad things could happen because Sturgeon’s Law has been in full effect in IT for ages, but it has taken this long for matters to go from merely serious to grave. Like everyone who puts off addressing a potentially fatal issue until the symptoms cannot be ignored anymore, our ability to survive what comes next is an open question.

The Global Ungoverned Area

There are places on this planet where good, civilized people simply do not voluntarily go, or willingly stay. What elected governments do in safer and more developed parts of the world is carried out in these areas by despots and militias, often at terrible cost to those who have nowhere else to go and no means to leave if they did.

Life online is not unlike life in these ungoverned areas: anyone with the skill and the will is a potential warlord governing their own illicit enterprise, basking in the spoils garnered from the misery of a mass of unfortunates. Who is to stop them? A relative handful of government entities, each with competing agendas, varying levels of knowledge, skills, and resources, none of whom can move fast enough, far enough, or with enough vigor to respond in-kind.

Reaping the whirlwind of apathy

Outside of the government, computer security is rarely something anyone asks for except in certain edge cases. Security is a burden, a cost center. Consumers want functionality, and functionality always trumps security – so much so that most people do not seem to care if security fails. People want an effective solution to their problem. If it also happens not to leak personal or financial data like a sieve, great, but leakage is not a deal-breaker either.

At the start of the PC age we couldn’t wait to put a computer on every desk. With the advent of the World Wide Web, we rushed headlong into putting anything and everything online. Today you can play the most trivial game online or fulfill your basic needs for food, shelter, and clothing, all at the push of a button. The downside to cyber-ing everything without adequate consideration of security? Epic security failures of all sorts.

Now we stand at the dawn of the age of the Internet of Things. Computers have gone from desktops to laptops to handhelds to wearables and now implantables. And once again, while we can’t wait to employ the technology, we can’t be bothered to secure it.

How things are done

What is our response? Laws and treaties, or at least proposals for same, that decant old approaches into new digital bottles. We decided drugs and poverty were bad, so we declared “war” on them, with dismal results. This sort of thinking is how we get the Wassenaar Arrangement applied to cybersecurity: because that’s what people who mean well and are trained in “how things are done” do. But there are a couple of problems with treating cyberspace like 17th century Europe:

  • Even when most people agree on most things, it only takes one issue to bring the whole thing crashing down.
  • The most well-intentioned efforts to deter bad behavior are useless if you cannot enforce the rules, and given the rate at which we incarcerate bad guys it is clear we cannot enforce the rules in any meaningful way at a scale that matters.
  • While all the diplomats of all the governments of the world may agree to follow certain rules, the world’s intelligence organs will continue to use all the tools at their disposal to accomplish their missions, and that includes cyber ones.

This is not to say that such efforts are entirely useless (if you happen to arrest someone you want to have a lot of books to throw at them), just that the level of effort put forth is disproportionate to the impact that it will have on life online. Who is invited to these sorts of discussions? Governments. Who causes the most trouble online? Non-state actors.

Roads less traveled

I am not entirely dismissive of political-diplomatic efforts to improve the security and safety of cyberspace, merely unenthusiastic. Just because “that’s how things are done” doesn’t mean that’s what’s going to get us where we need to be. What it shows is inflexible thinking, and an unwillingness to accept reality. If we’re going to expend time and energy on efforts to civilize cyberspace, let’s do things that might actually work in our lifetimes.

  • Practical diplomacy. We’re never going to get every nation on the same page. Not even for something as heinous as child porn. This means bilateral agreements. Yes, it is more work to both close and manage such agreements, but it beats hoping for some “universal” agreement on norms that will never come.
  • Soft(er) power. No one wants another 9/11, but what we put in place to reduce that risk is not a model worth replicating online. The private enterprises that supply us with the Internet – and computer technology in general – will fight regulation, but they will respond to economic incentives.
  • The human factor. It’s rare to see trash along a highway median, and our rivers no longer catch fire. Why? In large part because of the “Crying Indian” ad campaign. A concerted effort to change public opinion can in fact change behavior (and let’s face it: people are the root of the problem).

Every week a new breach, a new “wake-up call,” yet there is simply not sufficient demand for a safer and more secure cyberspace. The impact of malicious activity online is greater than zero, but not catastrophic, which makes pursuing grandiose solutions a waste of cycles that could be put to better use achieving incremental gains (see ‘boil the ocean’).

Once we started selling pet food and porn online, it stopped being the “information superhighway” and became a demolition derby track. The sooner we recognize it for what it is the sooner we can start to come up with ideas and courses of action more likely to be effective.

/* Originally posted at Modern Warfare blog at CSO Online */

Cyber War: The Fastest Way to Improve Cybersecurity?

For all the benefits IT in general and the Internet specifically have given us, they have also introduced significant risks to our well-being and way of life. Yet cybersecurity is still not a priority for a majority of people and organizations. No amount of warning about the risks associated with poor cybersecurity has driven significant change. Neither have real-world incidents that get worse and worse every year.

The lack of security in technology is largely a question of economics: people want functional things, not secure things, so that’s what manufacturers and coders produce. We express shock after weaknesses are exposed, and then forget what happened when the next shiny thing comes along. Security problems become particularly disconcerting when we start talking about the Internet of Things, whose devices are not just conveniences; they can be essential to one’s well-being.

To be clear: war is a terrible thing. But war is also the mother of considerable ad hoc innovation and inventions that have a wide impact long after the shooting stops. War forces us to make the hard decisions we kept putting off because we were so busy “crushing” and “disrupting” everything. It forces us to re-evaluate what we consider important: a reliable AND secure grid, a pacemaker that works AND cannot be trivially hacked. Some of the positive things we might expect to get out of a cyberwar include:

  • A true understanding of how much we rely on IT in general and the Internet specifically. You don’t know what you’ve got till it’s gone, or so the song says, and that’s certainly true of IT. You know IT impacts a great deal of your life, but almost no one understands how far it all goes. The last 20 years have basically been us plugging computers into networks and crossing our fingers. Risk? We have no idea.
  • A meaningful appreciation for the importance of security. Today, insecurity is an inconvenience. It is not entirely victimless, but increasingly it does not automatically make one a victim. It is a fine, a temporary dip in share price. In war, insecurity means death.
  • The importance of resilience. We are making dumb things ‘smart’ at an unprecedented rate. Left in the dust is the knowledge required to operate sans high technology in the wake of an attack. If you’re pushing 50 or older, you remember how to operate without ATMs, GrubHub, and GPS. Everyone else is literally going to be broke, hungry, and lost in the woods.
  • The creation of practical, effective, scalable solutions. Need to arm a resistance force quickly and cheaply? No problem. Need enough troops to fight in two theaters at opposite ends of the globe? No problem. Need ships tomorrow to get those men and materiel to the fight? No problem. When it has to be done, you find a way.
  • The creation of new opportunities for growth. When you’re tending your victory garden after a 12-hour shift in the ammo plant, or picking up bricks from what used to be your home in Dresden, it’s hard to imagine a world of prosperity. But after war comes a post-war boom. No one asked for the PC, cell phone, or iPod, yet all have impacted our lives and the economy in significant ways. There is no reason to think the same thing won’t happen again; we just have a hard time conceiving it at this point in time.

In a cyberwar there will be casualties. Perhaps not directly, as in a bombing campaign, but through the impacts associated with a technologically advanced nation suddenly being thrown back into the industrial (or worse) age (think Puerto Rico post-Hurricane Maria). The pain will be felt most severely in the cohorts that pose the greatest risk to internal stability. If you’re used to standing in line for everything, the inability to use IT is not that big a deal. If you’re the nouveau riche of a kleptocracy – or a member of a massive new middle class – and suddenly you’re back with the proles, you’re not going to be happy, and you’re going to question the legitimacy of whoever purports to be in charge yet can’t keep the lights on or supply potable water.

Change as driven by conflict is a provocative thought experiment, and certainly a worst-case scenario. The most likely situation is the status quo: breaches, fraud, denial, and disruption. If we reassess our relationship with cybersecurity it will certainly be via tragedy, but not necessarily war. Given how we responded to security failings 16 years ago, however, it is unclear whether those changes will be effective, much less ideal.

/* Originally published in CSOonline – Modern Warfare blog */

What Cybersecurity and a Trip to the Dentist Have in Common

It was that time of year again: the day I lie and promise to be good the rest of the year – dental check-up day. During this most recent visit I was struck by how much people treat the security of their computers and accounts the same way they treat their oral health.

You know what you’re supposed to do, but you don’t do it. “How often do you floss?” the dentist asks us, knowing full well that we’re lying through our bloody gums. If we flossed regularly we wouldn’t have bloody gums. When it comes to security we know we’re supposed to do all sorts of things, like create strong passwords and never re-use them, or lock our screens when we leave our desks, or use two-factor authentication on everything we can. When do we do these things? When a bunch of passwords get stolen and cracked, or when a phish leads to a data breach; the equivalent of flossing like a maniac the night before your annual check-up.
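The strong, unique passwords described above are trivial to produce programmatically. Here is a minimal sketch using Python’s standard `secrets` module; the length and character set are arbitrary illustrative choices, not a standard:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Return a random password drawn from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return ''.join(secrets.choice(alphabet) for _ in range(length))

# One unique password per account -- never re-used across sites.
vault = {site: generate_password() for site in ("bank", "email", "shopping")}
```

A password manager does exactly this (plus encrypted storage); the point is that “strong and unique” costs a few lines of code, not willpower.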

You have tools, but you don’t use them well. Mechanical toothbrushes, water flossers, even the metal tools the hygienist uses to scrape away plaque, are all readily available. When do you use them? You brush in the morning for sure and usually at night. We already know you don’t floss. You bought the Waterpik but it makes such a mess you only use it after corn on the cob or brisket. Likewise, you may run anti-virus software but you’re not diligent about updating it. You delay installing patches because it is inconvenient. You allow Flash and pop-ups and cookies and all sorts of things that could cause problems because who wants to use the web like it’s 1995?

Solutions are rarely permanent. Fillings replace the gap left when a cavity is removed, but eventually fillings can develop cracks. Crowns can come loose. That new IDS or firewall or end-point solution, where there was none, is a significant improvement in your security posture, but there are ways to bypass or undermine every security mechanism, at which point you’re back in the hands of expensive professionals (to fix the problem and/or clean up the mess) and looking at another pricy – and temporary – investment.

You have to get your hands dirty to do the job right. Understanding just what a sorry state your oral health is in means letting someone put their hands in your mouth. They’re spraying water and it’s splashing on your face. They’re getting your blood on their fingers. Bits of gunk are flying around. Sometimes they have to put you under because what’s necessary would make you scream. There is no such thing as a quick fix to security problems either. You have to attack the problem at the root, and that means blood, sweat, and tears.

These issues don’t exist in a vacuum. Dental health impacts more than just your mouth, and illnesses that impact other parts of your body can impact oral health. Poor security can have a negative impact on your organization in myriad ways, and if your organization doesn’t place a priority on security you’re not going to get the best security capabilities or resources. In both cases you have to view the situation holistically. Just because you have a pretty smile doesn’t mean you don’t have problems.


The Equifax Breach is Not Special

The hue and cry over the Equifax hack has subsided to a dull roar. We’ve passed the stage of ‘initial reports,’ which are usually wrong, and are firmly in armchair cybersecurity pundit mode. ‘What did Equifax executives know and when did they know it?’ inquiring minds want to know, among other things of varying relevance. All of this is de rigueur for massive breaches, along with a few other things…

First, there is more to the breach than meets the eye. This means some things won’t be as bad as initially thought, some things will be horribly worse. Today’s villains will end up looking like martyrs and everyone who seems competent will be remembered as buffoons…or maybe not. It doesn’t matter. What matters is that everyone could have done everything right and they’re still just gears in a corporate machine working off of imperfect information, under impossible deadlines, without enough funding, and without the right human resources. You know: the same problems we all have.

The leadership team of Equifax is no better or worse than that of any other company, in behavior, capabilities, and actions alike. Much has been made of the academic qualifications of the firm’s CISO, but it’s much ado about nothing. Equifax isn’t her first job in security, and her previous positions were not at outfits that were slack about security. Let’s also remember that Equifax is not in the security business, so their primary concern was never going to be security.

Equifax will still be in business a year from now. Pick a major breach at a publicly traded company. Go back as far as you like. How many of those companies are still in business? How many have stock prices the same as or better than just before the breach? I’ll save you some time: none that I can find have gone bankrupt, and their stock prices are doing just fine, thankyouverymuch. If things hold true to form, Equifax will suffer no long-term impact. I’m so confident about this I’m actually buying Equifax stock.

This will not be the breach event that brings about change or reform. Remember the Target breach? Home Depot? TJ Maxx? OPM? Remember how those were the breaches that were supposed to change everything? Remember how breaches stopped, executives went to jail and paid stiff fines, and everything was right with the world? This breach is no different, and there is nothing to indicate the result will be different.

Finally, nobody cares. Not enough anyway, and not for long. Security people care because of myriad reasons. Individuals care because they’re afraid of being impersonated or defrauded. Lawmakers care because their constituents care and because being outraged on behalf of the little people makes for good passive campaigning. But let me tell you what is going to happen:

  • Some other security drama is going to pop up in a couple of weeks and all the angry nerds will channel their anger in that direction, because nothing improves security quite like snarky hot takes on social media.
  • Individual citizens are going to realize that most if not all of what was lost in this breach has been lost a dozen times before. Even if this is the time they get ripped off, banks and retailers will make them whole.
  • Lawmakers will move on to the next crisis du jour because constituents have stopped pestering them about Equifax, and the data broker/credit rating industry lobbyists will have spent a sufficient amount of money on donations, scotch, cigars, and steaks to convince the honorable gentleman from the back 40 that the industry can regulate and take care of itself.

The Equifax breach is not special. It’s just like every other breach that preceded it, and it is almost assuredly going to be another data point that supports the template for the one that follows it. Security is not the issue we think it is, and it will never be until the consequences are high enough.

No One is Too Small to Attack

If you’ve been a security practitioner for any length of time, you have probably heard this from a client at least once:

We’re too small/unimportant to be a target of hackers.

If you’ve been doing this for any length of time you also know this is the point in the conversation where you smile politely, get up, and excuse yourself while they go back to their business and you go on to your next meeting. Anyone who has it in their head that they don’t have a red laser dot on their forehead is not going to be convinced by your war stories or ream of counter-examples.

They will learn the hard way.

The thing you want to tell these folks is that anyone online is a target because everyone online has something of value. The reason most folks who think they’re not targets think the way they do is because they don’t deal in valuable information. Data breaches at banks, government agencies, or credit bureaus make headlines because your name, along with your birth date, social security number, bank account, and so on are monetizable.

If you move or make commodity widgets, your efficiency and uptime are what you consider valuable. The design of the widget is not special; you’re one of a hundred factories worldwide that make widgets. What these folks don’t realize is that just having a computer online is a valuable resource to someone. That’s one more processor a bad guy didn’t have before. It’s one more hard drive they can store illicit material on. One more system they can hop through or use to target another victim. You may not be a target, but you could be an accessory.

It’s also important to note that while you may not be the intended victim of someone else’s attack, being involved still means downtime, the expense of cleaning systems, and most of the other issues the actual victim has to deal with. Yes, on a smaller scale, but it’s not zero, which is the sum you came up with when you decided you weren’t a target.

The widget makers of the world are right to look with a jaundiced eye at calls to spend a lot on security, or to procure a lot of fancy boxes and software. When solutions are designed by people who cut their teeth fighting nation-state adversaries and “advanced” threats, there aren’t a lot of options for people who need the basics.

Success in cybersecurity at every level means paying attention to business needs, and acceptable risks, not just external threats. The best advice is holistic in nature, not a pitch that plays to your professional strengths. That you know how to wield a hammer is not an excuse for only paying attention to exposed nails.

Most of the time, the best security recommendations are the cheap and unglamorous ones. No, it’s not pretty or fun, but it’s what you owe your clients if you’re really about security.

C.R.E.A.M. IoT Edition

I didn’t get to see the discussion between Justine Bone and Chris Wysopal about the former’s approach to monetizing vulnerabilities. If you’re not familiar with the approach, or the “Muddy Waters” episode, take a minute to brush up, I’ll wait….

OK, so if you’re in one computer security sub-community, the first words out of your mouth are probably something along the lines of: “what a bunch of money-grubbing parasites.” If you knew anyone associated with this event, you’ve probably stopped talking to them. You’d certainly start talking shit about them. This is supposed to be about security, not profiteering.

If you’re in a different sub-community you’re probably thinking something along the lines of, “what a bunch of money-grubbing parasites,” only for different reasons. You’re not naive enough to think that a giant company will drop everything to fix the buffer overflow you discovered last week. Even if they did, because it’s a couple of lines in a couple of million lines of code, a fix isn’t necessarily imminent. Publicity linked to responsible disclosure is a more passive way of telling the world: “We are open for business” because it’s about security, but it’s also about paying the mortgage.

If you’re in yet another sub-community you’re probably wondering why you didn’t think of it yourself, and are fingering your Rolodex to find a firm to team up with. Not because mortgages or yachts don’t pay for themselves, but because you realize that the only way to get some companies to give a shit is to hit them where it hurts: in the wallet.

The impact that vulnerability disclosure, in any of its flavors, is having on computer security is not zero, but it’s not registering on a scale that matters. Bug bounty programs are all the rage, and they have great utility, but it will take time before the global pwns/minute ratio changes in any meaningful fashion.

Arguing about the utility of your preferred disclosure policy misses the most significant point about vulnerabilities: the people who created them don’t care unless it costs them money. For publicly traded companies, pwnage does impact the stock price – for maybe a fiscal quarter. Just about every company that’s suffered an epic breach sees its stock price at or above pre-breach levels just a year later. Shorting a company’s stock before dropping the mic on one vulnerability is a novelty; it’s a material event if you can do it fiscal quarter after fiscal quarter.

We can go round and round about what’s going to drive improvements in computer security writ large, but when you boil it down it’s really only about one or two things: money and bodies. This particular approach to monetizing vulnerabilities tackles both.

We will begin to see significant improvements in computer security when a sufficient number of people die in a sufficiently short period of time due to computer security issues. At a minimum we’ll see legislative action, which will be designed to drive improvements. Do you know how many people had to die before seatbelts in cars became mandatory? You don’t want to know.

When the cost of making insecure devices exceeds the profits they generate, we’ll see improvements. At a minimum we’ll see bug bounty programs, which are one piece of the puzzle of making actually, or at least reasonably, secure devices. Do you know how hard it is to write secure code? You don’t want to know.
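That cost-versus-profit argument can be made concrete with a toy expected-value calculation. Every number below is invented purely for illustration; the point is the comparison, not the figures:

```python
def expected_breach_cost(probability: float, cost: float) -> float:
    """Expected loss from a breach: probability times cost."""
    return probability * cost

# Hypothetical figures for a device maker (illustration only).
profit = 10_000_000           # revenue from shipping fast with no security work
security_program = 2_000_000  # cost of building the devices securely
breach_probability = 0.25     # chance of a costly breach if shipped insecure
breach_cost = 4_000_000       # fines, recalls, cleanup, lost sales

insecure_outcome = profit - expected_breach_cost(breach_probability, breach_cost)
secure_outcome = profit - security_program

# With these numbers insecurity still wins (9M vs 8M); raise the breach
# cost or probability (liability, regulation) and the calculus flips.
```

As long as `insecure_outcome` beats `secure_outcome`, shipping insecure devices remains the economically rational choice, which is exactly why liability and regulation are the levers that change behavior.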

If you’re someone with a vulnerable medical device implanted in them you’re probably thinking something along the lines of, “who the **** do you think you are, telling people how to kill me?” Yeah, there is that. But as has been pointed out in numerous interviews, who is more wrong: the person who points out the vulnerability (without PoC) or the company that knowingly lets people walk around with potentially fatally flawed devices in their bodies? Maybe two wrongs don’t make a right, but as is so often the case in security, you have to choose between the least terrible option.