Cybersecurity: A Human Problem
The latest Cyberattack to hit the news is a worm called Gauss, a relative of Stuxnet that targeted certain Lebanese banks. Kaspersky Lab, a Russian security firm, discovered the attack. In their blog post, the Kaspersky researchers note that “after looking at Stuxnet…, we can say with a high degree of certainty that Gauss comes from the same ‘factory’ or ‘factories.’ All these attack toolkits represent the high end of nation-state sponsored cyber-espionage and cyberwar operations, pretty much defining the meaning of ‘sophisticated malware.’” They go on to say that “this is actually the first time we’ve observed a nation-state cyber-espionage campaign with a banking Trojan component.”
And this comes after a New York Times article exposed Stuxnet as a joint US-Israeli covert operation targeting the Iranian nuclear industry, authorized by President George W. Bush and reauthorized by President Obama. The article further suggests that Iran is now mounting its own Cyberwar initiative, a result the Obama administration understood and feared.
It seems that Cyberwar is no longer science fiction. It’s a reality, and we’re in the midst of one.
Whether or not you think the Stuxnet and Gauss worms were a good idea, this article is not the place to debate moral or ethical questions. Rather, we’re here to help you understand the reality of the situation. And like it or not, we have a Cyberwar on our hands, and as with other wars, technology defines and constrains the rules of engagement. Yesterday we may have spoken of tanks or guns; today we speak of viruses and worms. But as with traditional machines of war, the human element is every bit as important as the technology, if not more so.
Problem between Keyboard and Chair
Here are some examples of what we’re talking about. When the press heard about the Gauss exploit, they asked the obvious question: who would benefit from attacking Lebanese banks? The obvious answer: anyone interested in the secret financial dealings of Hezbollah, the terrorist organization based in Lebanon. In other words, Israel and the US. The appearance of Gauss led many people across the world to come to a similar conclusion.
Assume for a moment, however, that someone wanted to make Israel and the US look bad, say, the Iranians. Could the Iranians have come up with Gauss in order to gain political advantage over Israel and the US? Unlikely, perhaps, but possible. How would we know? After all, if the US and/or Israel were behind Gauss, they could have hidden their motivation simply by expanding the target to banks outside Lebanon. So perhaps the fact that Gauss had such a narrow target suggests that someone was trying to frame the US and Israel?
Here’s another twist: Kaspersky Lab was founded by Eugene Kaspersky, a Russian cryptography expert who learned his trade from the KGB’s cryptography school. Presumably he has substantial ties with Russia’s current secret police as well. Perhaps the Kaspersky report on Gauss was either fictitious or somehow skewed, a dastardly Russian plot of some sort? We have no reason to believe so, but again, how would we know for sure?
Sounds like a Tom Clancy spy novel, and for good reason: subterfuge has been a part of warfare (and in particular, espionage) since the Stone Age. But the problem is, the more we focus on the technology, the less we focus on the human aspects of the Cybersecurity problem. And that lack of focus both makes us more vulnerable and prevents us from mounting effective attacks of our own.
Agile Architecture and Cyberwarfare
In a recent ZapFlash we recommended a “best defense is a good offense” approach: preventing future attacks with agile, self-innovating software. But even the most cutting-edge code is only part of the story, because it still doesn’t address the human in the system. Targeting people is nothing new in the world of Cyberattacks, of course. Social engineering is becoming increasingly sophisticated as hackers plumb the weaknesses of our all-too-human personalities. Not a day goes by without a phishing attack arriving in our inboxes, not to mention how easy it is to talk people into giving up their passwords. But while social engineering works on individuals, Cyberwar is presumably between countries. How, then, might we go about what we might call political engineering: the analogue to social engineering, only taking place on the global stage? And how do we protect ourselves against such attacks?
The answer to both questions is to focus on how technology-centric actions influence human beliefs and behavior. Creating and releasing a sophisticated computer virus may achieve a technical end, the direct result of the software itself. But it will likely achieve a variety of human ends as well: it might arouse suspicion, cause people to shift their priorities or spend money, or make someone angry enough to retaliate. Furthermore, these human ends may be more significant and desirable than the actual impact of the software itself.
ZapThink considers this focus on human issues as well as technology to be an aspect of Agile Architecture. We’ve spoken for years about the role governance plays in Agile Architectures like SOA, because governance is a best-practice-driven approach for bringing human behavior in line with the goals of the organization. The big win for SOA governance, for example, was leveraging SOA for better organizational governance, rather than simply governing the SOA initiative. The essential question, therefore, is what architectural practices apply to the human side of the Cybersecurity equation.
Our Cybersecurity example is analogous to SOA governance, although it turns governance inside out: we’re no longer trying to influence human behavior inside our organization, but rather across the world as a whole, or at least some large part of it. But the lesson is the same: technology influences human behavior, and that human behavior may be more important than the behavior of the technology itself. Protecting ourselves from such attacks likewise places us in the greater context of the political sphere.
Playing Defense
Education is the key to protecting yourself and your organization from human-targeted Cyberattacks. Take, for example, a phishing attack. You receive an email that looks like it’s from your bank, telling you that, say, a large withdrawal was just made from your account. If you don’t realize it’s a phishing email, you might click the login link in the email to check your account and see what the problem is. The link takes you to a page that looks just like your bank’s login page, but if you attempt to log in, you’re only handing your credentials to the hackers.
There are automated anti-phishing technologies out there, of course, but hackers are always looking for ways around them, so you can’t rely upon them. Instead, you must proactively influence the behavior of your employees by educating them on how to recognize phishing attacks, and how to avoid falling victim even when they don’t recognize one. Still not foolproof, but it may be the best you can do.
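To see what one of those automated technologies might look like under the hood, here is a minimal sketch, in Python, of a common heuristic: flagging links whose visible text claims one domain while the underlying href points somewhere else, exactly the trick the phishing email above plays. The LinkMismatchDetector class, the extract_host helper, and the sample email body are illustrative inventions for this sketch, not any particular product’s API, and a real filter would layer many signals beyond this one.

import re
from html.parser import HTMLParser
from urllib.parse import urlparse

# Matches text that looks like a URL or a bare domain, e.g. "www.mybank.com".
DOMAIN_RE = re.compile(r"^(?:https?://)?(?:[\w-]+\.)+[a-z]{2,}", re.IGNORECASE)

def extract_host(text):
    """Return a hostname if the text looks like a URL or domain, else None."""
    if not DOMAIN_RE.match(text):
        return None
    if "//" not in text:
        text = "//" + text  # let urlparse treat the text as a network location
    return urlparse(text).hostname

class LinkMismatchDetector(HTMLParser):
    """Flags <a> tags whose visible text names a different host than the href."""

    def __init__(self):
        super().__init__()
        self._href = None      # href of the currently open <a> tag, if any
        self._text = []        # visible text accumulated inside that tag
        self.suspicious = []   # (shown_text, actual_href) pairs that mismatch

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href:
            shown = "".join(self._text).strip()
            shown_host = extract_host(shown)
            actual_host = urlparse(self._href).hostname
            if shown_host and actual_host and shown_host != actual_host:
                self.suspicious.append((shown, self._href))
            self._href = None

# The email displays the bank's domain, but the link points at the attacker.
email_body = '<p>Verify now: <a href="http://attacker.example/login">www.mybank.com</a></p>'
detector = LinkMismatchDetector()
detector.feed(email_body)
for shown, actual in detector.suspicious:
    print(f"Suspicious link: text says {shown!r} but points to {actual!r}")

The check itself is simple: compare what’s displayed with what’s real. That is also the automated version of exactly the skepticism you’re trying to teach your people, which is why the hackers’ workarounds target the human, not the filter.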
Protecting against political Cyberattacks would follow the same pattern, but would be far harder to implement, as educating an entire populace is a much taller order than educating your employees. Instead, the most effective course of action may once again be a good offense: you can use the same techniques as your opponent to influence beliefs and behavior.
Let’s use the hacker group Anonymous as an example. Any member of this loose association of hackers can propose an action—from taking down the MasterCard Web site to finding the location of a fast food worker who stepped on the lettuce, to name two real examples—and any member can vote to take that action. There’s no central control or consistent strategy. Now, let’s say you worked for a government Cyberwar department, and you were responsible for creating a Gauss-like worm with a narrow target, only you didn’t want anyone suspecting it was your country who created it. Could you make it look like Anonymous created it? Even the members of Anonymous might not realize their group wasn’t actually responsible.
The ZapThink Take
Your sphere of concern might not involve international espionage, but there are important lessons here for every architect. All too often, techies get techie tunnel vision, thinking that technology problems have technology solutions, and furthermore, that the only interesting (or important) problems are technology problems. Architects, however, must also consider the human in the equation, whether you’re fooling the Iranians, making sure interface specifications are properly followed, or doing anything in between.
This principle is never truer than when you’re protecting against Cyberattacks. No password scheme will prevent people from writing their passwords on Post-It notes and sticking them to their computers. No firewall will prevent every phishing attack or stop people from visiting malware-infected sites. Education is one technique, but there’s more to governance than education. And whatever you do, always cast a skeptical eye toward any conclusions people draw from news about Cyberattacks. The technology is never the whole story.
If your job, however, is mounting Cyberattacks, understanding the human in the equation is a critically important tool, and often a far less expensive and time-consuming one than a purely technical attack. As any good poker player will tell you, the secret to winning isn’t having good hands; it’s knowing how to bluff, and even more importantly, knowing how to tell when the other guy is bluffing.
Image credit: Anonymous9000