It’s the nature of wars that once you’re in one, you tend to know it.
And yet the cyberwar sneaked up on us. As the American economy, everyday life, infrastructure, and national security have grown ever more digitally dependent, so have our vulnerability to identity theft and loss of privacy, and the threats to corporate, government, and military computer networks. “Cyber 9/11 has happened over the last 10 years, but it’s happened slowly so we don’t see it,” Amit Yoran, former cyber-security chief for the Department of Homeland Security, has said.
Attacks come “every day, at every moment,” says Xinming (Simon) Ou, an expert in computer security at Kansas State University. But he adds, “We don’t have very good technology and tools.”
In universities across the country, software engineers like Ou are searching for ways to defeat an often mysterious and faceless foe, one who thrives in an online world designed for easy collaboration and access to information. Intriguingly, because of the varied ways the Web cuts across human life, the tactics aren’t limited to algorithms and computer code. Traditional coding and software analysis remain essential, but engineers are also delving into psychology, sociology, economics, law, and public policy, among other fields. In order to fix cyberspace, they need to know not only how information technology works and how cybercrooks exploit its weaknesses but also how people interact with it, what financial incentives drive it, and what legal frameworks might help strengthen defenses.
The challenge is growing. Complaints to the federal Internet Crime Complaint Center about scams, identity theft, credit card fraud, and other crimes topped 330,000 last year, and more than 146,000 were referred to law enforcement agencies. Cyberattacks vary in scale. The average phishing scam that turns up in your inbox, seeking to steal your identity, represents a small volley, an affront to digital privacy. In sufficient volume, those volleys do serious damage. Albert Gonzalez of Miami led what the Justice Department called “the largest hacking and identity theft ring ever prosecuted by the U.S. government” before his sentencing last spring. Among his techniques: “wardriving” – driving around with a laptop looking for unsecured wireless networks of retailers, from whom he would steal customers’ credit and debit card numbers.
Attacks on corporations and other organizations represent something larger in scope: In a survey of 443 U.S. businesses, government agencies, and other organizations, fully 64 percent reported experiencing malware, or malicious software, infection. New malicious code signatures have been proliferating at an exponential rate, with 2.9 million measured in 2009 by Symantec, the computer security firm. Such attacks are facilitated by the availability of cyberattack toolkits that give even unskilled hackers access to sophisticated techniques. The attacks spread viruses that slow or damage networks, or that invade and stay, capturing and transmitting trade secrets or other sensitive information. In a prominent example, Google suffered a major attack in December of 2009, which the company later called a “highly sophisticated and targeted attack on our corporate infrastructure originating from China,” aimed at “accessing the Gmail accounts of Chinese human rights activists.”
Increasingly, the lines between cyberwar and conventional war and espionage are becoming blurred. Many suspect that the massive cyberattacks that disrupted Internet service in Estonia in 2007 and disabled numerous websites in Georgia in 2008 were ordered by the Kremlin, although they have only been traced to Russian cybercriminals. In 2008, the Pentagon was startled to detect an intrusion into the depths of its classified networks. “It’s now clear that this cyberthreat is one of the most serious economic and national security challenges we face as a nation,” President Obama said in May of 2009; a year later, the armed forces created a new unit called the United States Cyber Command, recognizing a new domain of warfare. “Information technology enables almost everything the U.S. military does,” Deputy Secretary of Defense William Lynn wrote in Foreign Affairs. An adversary can score a major blow at low cost. “A dozen determined computer programmers can, if they find a vulnerability to exploit, threaten the United States’ global logistics network, steal its operational plans, blind its intelligence capabilities, or hinder its ability to deliver weapons on target.” Hackers and foreign governments, Lynn adds, are increasingly able to penetrate civilian infrastructure networks, posing a threat to power grids, transportation, and financial systems.
Ou, an assistant professor of computing and information sciences, casts the challenge in a metaphor: In a sturdily constructed house, there are two points of entry you need to secure: the door, and the window. But the Internet wasn’t built like a brick house, with security foremost in mind. It was built to share information openly, and any security features have been ad hoc add-ons.
“It’s like a very soft house,” says Ou, and it’s up to software engineers to make sure the growing packs of cyberwolves out there don’t blow the walls in.
In this “soft house,” intrusions are often detected – if at all – after the fact. One common technique is “traffic analysis.” If your network typically experiences a certain amount of traffic — if a certain amount of information traveling across it is the norm — then anomalies in the traffic patterns should be viewed with suspicion. “We develop systems that automate the process of analyzing large amounts of traffic, in order to develop a model to detect traffic indicative of a security breach,” explains Salvatore Stolfo, a professor of computer science at Columbia University’s school of engineering. The alerts a bank sends a customer after an uncharacteristically large transaction follow essentially the same principle.
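The principle Stolfo describes can be sketched in a few lines. This is a minimal, illustrative model rather than his actual system: the traffic figures, the three-sigma threshold, and the function name are all assumptions made for the example.

```python
# Minimal sketch of volume-based traffic anomaly detection: learn the
# normal level from a baseline window, then flag intervals whose volume
# sits more than a few standard deviations away from it.
import statistics

def find_anomalies(baseline, observed, z_threshold=3.0):
    """Return indices of observed intervals whose traffic volume is
    anomalously far from the baseline mean."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [i for i, volume in enumerate(observed)
            if abs(volume - mean) / stdev > z_threshold]

# Baseline: bytes per minute under normal conditions (invented figures).
baseline = [980, 1010, 995, 1020, 1005, 990, 1015, 1000]
# Observed: interval 3 spikes, the kind of pattern a breach might leave.
observed = [1002, 997, 1011, 9500, 1003]
print(find_anomalies(baseline, observed))  # -> [3]
```

Real intrusion-detection systems build far richer models of protocols, ports, and payloads, but the bank-alert analogy works the same way: establish a norm, then flag departures from it.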
Riccardo Bettati, director of the Texas A&M University Center for Information Assurance and Security, uses a story he says is “probably an urban legend” to illustrate one way cyberintruders work: Journalists used to gauge the state of alert at the Pentagon by counting the number of pizza trucks parked outside — figuring that more pizza trucks meant more workers doing overtime, which in turn signaled a crisis was brewing. “Similar games can be played in networks,” he says – by attackers just as easily as by defenders. Bettati points to a 2001 study by three University of California, Berkeley computer scientists. The team examined a security protocol called SSH, or “Secure Shell,” which offers an encrypted channel intended to stymie would-be eavesdroppers. Though SSH masked which keystrokes were struck by someone using the network, an intruder could still note the exact moment a key was struck. “If someone listens, they literally hear you type,” says Bettati.
Why does that matter, when the information is encrypted? The Berkeley team demonstrated that information about when you type something can be translated into information about what you type. The delay between two letter keys struck with alternating hands (a-l, for instance) differs from the delay between two letter keys struck with the same finger (f-f), which in turn differs from a letter key followed by a number key (a-3), and so on. The Berkeley researchers determined that a hacker eavesdropping on the timing of keystrokes could infer a great deal and substantially reduce the guesswork in obtaining a user’s password.
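A toy example shows why those timing differences are exploitable. The latency figures below are invented for illustration; the Berkeley researchers worked from measured timing data and far more sophisticated statistical models.

```python
# Different key pairs have characteristically different inter-keystroke
# latencies, so an observed delay narrows the set of candidate digraphs.
# Hypothetical mean latencies in milliseconds for three categories:
CATEGORY_LATENCY = {
    "alternating-hands": 120,   # e.g. 'a' then 'l'
    "same-finger":       210,   # e.g. 'f' then 'f'
    "letter-to-number":  260,   # e.g. 'a' then '3'
}

def likely_categories(delay_ms, tolerance=40):
    """Return the digraph categories whose typical latency falls within
    `tolerance` ms of the observed inter-keystroke delay."""
    return [cat for cat, mean in CATEGORY_LATENCY.items()
            if abs(delay_ms - mean) <= tolerance]

# The eavesdropper never sees which keys were struck, yet each observed
# delay rules out most categories, shrinking the password search space.
print(likely_categories(130))  # -> ['alternating-hands']
print(likely_categories(230))  # -> ['same-finger', 'letter-to-number']
```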
So-called timing attacks like the one the Berkeley team studied are often called “out of the box” attacks — outside the realm of normally expected cyberattacks. But Bettati argues that software engineers need to be on the lookout for them. “They tend to have very small boxes,” muses Bettati of some of his colleagues in the security field. “It’s relatively easy to be outside the box when designing an attack.”
Indeed, cyberdefenders must struggle to keep up with a clever adversary. “The way I tend to see it is, if we discover a vulnerability, it’s actually very likely the bad guys already know about it,” says Iliano Cervesato, a computer security expert at Carnegie Mellon’s Qatar campus.
Vulnerability often lies in how people interact with computer systems. In a 2008 paper called “You’ve Been Warned,” Carnegie Mellon’s Jason Hong and two colleagues examined why so few people heeded warnings from their Internet Explorer browsers that their system was in need of updating. They identified four possible levels of failure: “The first level was, do they even see the sign?” he says. The second: Do they understand the warning? The third: Do they believe the warning? (Many become inured to such things, the researchers found.) And fourth, even if the first three questions were answered in the affirmative, were users sufficiently motivated to bother updating?
Another problem is all too familiar: IT specialists warn us not to reuse passwords and to compose complex ones with symbols and numbers, making them difficult to guess. Yet few of us comply; we prefer passwords that are easy to remember. Researchers are exploring alternative schemes, such as passwords built from pictures instead of letters, and biometric software, in hopes of improving both security and convenience. But to Hong, an associate professor at Carnegie Mellon’s School of Computer Science, passwords are pieces of a larger challenge inside organizations, where “motivations aren’t aligned properly.” Cybersecurity “needs to be looked at on the holistic systems level rather than the individual and component level,” says Hong. “That’s what makes it really hard: there’s no good science in terms of understanding overall systems these days.”
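Back-of-the-envelope arithmetic shows what complexity buys. A password chosen uniformly at random from an alphabet of N characters has N to the power L possibilities at length L, or L·log₂(N) bits of entropy. The comparison below assumes truly random passwords; the dictionary attacks that exploit human-chosen ones are far more effective than brute force.

```python
# Entropy of a random password: length * log2(alphabet size).
import math

def entropy_bits(alphabet_size, length):
    """Bits of entropy for a password chosen uniformly at random."""
    return length * math.log2(alphabet_size)

# 8 lowercase letters vs. 8 characters from the 94 printable ASCII set.
print(round(entropy_bits(26, 8), 1))  # -> 37.6
print(round(entropy_bits(94, 8), 1))  # -> 52.4
```

Each added bit doubles the brute-force work, which is why a few extra symbols, or a few extra characters of length, matter more than they look.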
Compromise of an individual password can assume large dimensions. “It’s already happened—that’s the scary part,” says Hong. Fraudulent efforts to gain sensitive information from low-level targets are called “phishing”; last February, in Germany and the Czech Republic, basic phishing attacks on carbon traders netted a group of cyberbandits 3 million euros. Aiming higher, hackers engage in “whaling”—going after the passwords of a CEO, for instance. In 2009, a hacker claimed to have gained access to various PayPal, Amazon, and email accounts belonging to high-level employees at Twitter, including founder Evan Williams, who confirmed some of the breaches.
Hackers are often able to infect others’ computers at a distance, through malicious code that brings the computers under the hacker’s control. These compromised computers form a so-called botnet, something like an army of zombie slaves that can later be marshaled in a coordinated attack on a major target, often through “denial of service” attacks that flood servers with so much traffic that they crash.
Hong’s start-up company, Wombat, teaches corporations and government groups how to avoid phishing — or whaling — attacks. He thinks it’s important to integrate such education into school curricula, including at the elementary level, and soon. “National cyberdefense relies on everybody doing the right thing,” he says.
But the right thing can be undermined by the economics of the software industry, argues Kansas State’s Ou. In many other industries, manufacturers risk a costly recall if they don’t ensure a product’s quality. With software, the emphasis is not on how to make the best product possible but “how to get in the market as fast as possible,” says Ou. If significant security flaws turn up later, the vendor can issue a cheap and simple patch for download. “The cost for the vendor to improve the quality of the software — that delay to market — will be significant,” says Ou. “You will lose money if you overstretch on quality assurance.”
Imagine a system engineered with robust firewalls, multiple lines of defense, and every weakness or possible point of entry perfectly fortified. You would still face what Columbia’s Stolfo calls “the ultimate cybersecurity problem”: the untrustworthy insider. “Any user with access to valuable assets can act maliciously,” he says. “It’s probably the most expensive problem of all.”
Software engineers can help. Stolfo has broken it down into two research questions. “How do you detect when an authorized individual goes bad?” he asks. “And more interestingly, how do you predict when an insider goes bad?”
There are a host of ways to approach the problem — not all of which appeal to advocates for personal privacy. Many inside jobs are attempted by people under great stress — financial, personal — or people who may have alcohol or drug problems. An employer with access to such information about its employees might be able to spot a bad apple as it begins to rot.
Somewhat less controversial would be monitoring an employee’s behavior at work. Software embedded in a PC or laptop might note that while a certain user typically sends five or six documents across the ether, one day he sent a thousand. “People who do bad things, there is some indicator that they’re different,” says Stolfo — and that indicator often appears in the digital record. Stolfo’s lab has come up with “behavioral sensors.” A sensitive financial document might be embedded with a “beacon” that emits a silent signal when it has been opened by an unauthorized user.
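Stolfo’s example of a user who suddenly sends a thousand documents suggests the simplest possible behavioral sensor: compare today’s activity with that user’s own history. The threshold, figures, and function name below are illustrative assumptions, not Stolfo’s actual method.

```python
# Minimal per-user behavioral sensor: flag a day whose document-send
# count far exceeds the user's historical norm.
import statistics

def flag_user(history, today, multiplier=10):
    """Flag when today's count exceeds `multiplier` times the user's
    historical median (e.g. 1,000 sends against a norm of 5 or 6)."""
    return today > multiplier * statistics.median(history)

history = [5, 6, 5, 4, 6, 5]     # documents sent per day, typically
print(flag_user(history, 6))     # -> False: an ordinary day
print(flag_user(history, 1000))  # -> True: worth a closer look
```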
Such preventive work, Stolfo says, might have thwarted rogue FBI agent Robert Hanssen, who sold secrets to the Russians over decades in what is considered an intelligence disaster. An investigation after the fact turned up a bit of unusual digital behavior: The FBI kept a highly sensitive database pertaining to suspected spies. Hanssen repeatedly queried it for his own name.
Of course, not all cybersecurity concerns can be attributed to crooks, spies, or disgruntled employees. Personal information is eagerly sought by corporations, which use it in marketing, or organizations that use it in research. When this occurs, individuals ought to be informed and compensated, says Carnegie Mellon professor Latanya Sweeney, who conducts research at the intersection of computer science and public policy. “Suppose you could see all the places your data went,” she envisions. Just in the realm of medical data, one of her areas of focus, data can go to “pharmaceutical companies, management companies, analytics companies…disease management houses, equipment monitoring houses.” Patients should know this, Sweeney urges. Then, individuals should be compensated for the risk to privacy that results from information about them being moved from one place to another.
“There might emerge a social networking site where you still use the site for free, but if a company wants to use your data, they compensate you,” she suggests.
Wide ranging as cybersecurity research currently is, Eugene Spafford worries that it’s insufficient. A professor of computer science at Purdue, he edits the Computers & Security journal, and has served as a senior adviser or consultant to leading industrial firms and key government agencies. “Industry is very focused on near-term developments they can take to market,” he notes. Academics “want to be looking out a little bit farther toward the horizon,” he adds, “or even over the horizon.” Paying for that has necessarily fallen to the government, which is coming up short, he says: “We’re really falling behind. It’s not a pretty picture.”
David Zax is a freelance writer based in New York.