Michael G. McLaughlin and William J. Holstein are authors of the forthcoming “Battlefield Cyber: How China and Russia Are Undermining Our Democracy and National Security,” from which this essay is adapted. 

Chase Cunningham is a highly experienced cyber warrior. He spent twenty years in the US Navy, much of it supporting the National Security Agency’s cryptographic efforts. After he left the Navy, he was sent on assignments to the CIA, FBI, and Defense Intelligence Agency. “I got to see everything,” he says now. Along the way, he earned a PhD in computer science and wrote a book on cyber warfare. He works as chief strategy officer for cybersecurity firm Ericom Software and has branded himself as “Dr. ZeroTrust.”

Given his expertise, we asked him: Is America’s software safe? “No. Unequivocally, no,” he immediately replied. “The very nature of the internet and how we build software means that, because it is borderless and boundaryless, there is inherent risk to it.”

How did it happen that smart people built such vulnerable systems? “It’s one of those things where the ‘nifty cool’ factor of the internet outpaced the ability to keep it secure,” said Cunningham, who is originally from Texas and hence prefers colorful language. “Like everywhere else, usability trumps security every day, twice on Sunday. In this instance, people figured out that they could make money on the internet with software, with the cloud. No one asked, ‘How do we do this correctly and securely with an eye on national security?’”

Result: America’s software is profoundly vulnerable, which is one reason China and Russia have been able to penetrate US computer systems so deeply. Understanding the depth of the challenge is the first step toward mapping out solutions. The very architecture of the vast majority of American computing systems must be shifted from the medieval concept of a castle whose walls can be defended to a more sophisticated “zero trust” model where it is assumed that attackers can and will penetrate. Boards of directors and top management must demand that their companies do not deploy software with “known vulnerabilities,” as identified by the National Institute of Standards and Technology (NIST). It will take sweeping action over a period of years to even begin creating secure software systems that actually protect data.

The heart of the issue is how software is “built.” Dating back to the era when creating the internet was seen as a great and noble undertaking, volunteers and nonprofit software developers built much of the foundational code that undergirds the internet and every computer system connected to it. They posted their software in open-source repositories so others could use it. Malicious actors can take advantage of this system in multiple ways. If a hacker or malicious insider inserts “bad” code into a repository, for example, the tainted code propagates into every program that later incorporates it. This practice is called repository poisoning.
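One partial defense against repository poisoning is to pin each dependency to a cryptographic hash recorded when the code was first vetted, so that a later, tampered copy is rejected automatically. A minimal sketch in Python, with the package name and contents invented for illustration:

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of raw artifact bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_artifact(name: str, data: bytes, trusted: dict) -> bool:
    """Reject any artifact whose digest does not match the pinned value."""
    expected = trusted.get(name)
    if expected is None:
        raise ValueError(f"{name}: no pinned hash on record")
    if sha256_digest(data) != expected:
        raise ValueError(f"{name}: digest mismatch, possible tampering")
    return True

# Pin the hash at vetting time...
vetted = b"contents of example-lib-1.2.0.tar.gz as originally reviewed"
trusted = {"example-lib-1.2.0.tar.gz": sha256_digest(vetted)}

# ...and later refuse any copy that differs, however slightly.
verify_artifact("example-lib-1.2.0.tar.gz", vetted, trusted)
```

Package managers implement the same idea at scale, for example with hash-pinned lockfiles; the point is that trust is placed in a recorded fingerprint, not in the repository itself.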

One of the best, simplest examples of what can go wrong with open-source software is what happened with Apache’s Log4j, a widely used logging library. Log4j was developed and is maintained by volunteers at the Apache Software Foundation, and software developers have incorporated its code into their programs thousands of times, perhaps hundreds of thousands of times.

The problem surfaced in November 2021, when researchers at the cloud computing arm of Alibaba, the Chinese technology giant, spotted a vulnerability in Log4j that was dubbed “Log4Shell.” Word of the flaw quickly spread globally. Hackers believed to be Chinese had figured out a way to subvert Log4j to steal sensitive information or take control of a system. It was like a nuclear bomb—the shockwaves were enormous. US officials said hundreds of millions of devices were at risk.
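The underlying mistake was that Log4j evaluated special lookup expressions wherever they appeared in a log message, including inside attacker-supplied input; the “${jndi:...}” variant could even cause remote code to be loaded. A toy Python re-creation of that design flaw (this mimics the lookup syntax only and is not Log4j itself):

```python
import re

def naive_render(message: str, lookups: dict) -> str:
    """Toy log formatter that expands ${key} lookups found ANYWHERE in the
    message, including inside attacker-controlled input. This mirrors, in
    miniature, the flaw behind Log4Shell: the logger treats data as
    instructions."""
    return re.sub(
        r"\$\{([^}]+)\}",
        lambda m: lookups.get(m.group(1), m.group(0)),
        message,
    )

# The application logs what it believes is inert user input...
user_input = "${secret_token}"            # hypothetical attacker-chosen name
lookups = {"secret_token": "hunter2"}     # internal value never meant for users
print(naive_render("login attempt by " + user_input, lookups))
# -> login attempt by hunter2  (the attacker's string was *evaluated*)
```

In the real vulnerability the stakes were far higher: instead of leaking a string, the lookup could reach out to an attacker-controlled server and execute what it fetched.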

“The internet is a house of cards, to be perfectly frank,” Cunningham told us. “It’s all kind of cobbled together and held up by a variety of collaborations between corporate this and open source that and volunteer here and private there and whatever else. If at any one time, any one of those entities decided to say, ‘Screw it. I’ve had enough,’ and they took their toys and left the sandbox, we’d be in a very bad place.”

The private sector does not presently have the right set of incentives to fully clean up its software act. Government will have to use a mix of sticks and carrots to change the cost-benefit calculus that takes place at the top of the corporate world.

To help improve the way the private sector produces software, the concepts of isolation and micro-segmentation are increasingly important. For many years, most software applications have been built as “monoliths,” meaning one giant body of code handles everything, which makes it difficult to build in controls. More recently, developers have started switching to a microservices architecture, in which each microservice serves only a single purpose and operates on its own data.
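The security payoff of micro-segmentation is default-deny: a service may talk only to the peers it has been explicitly granted, so a compromised service cannot roam the network. A minimal Python sketch, with the service names invented for illustration:

```python
# Default-deny segmentation policy: a call is permitted only if the
# (caller, callee) pair appears on an explicit allowlist.
ALLOWED_CALLS = {
    ("checkout", "payments"),
    ("checkout", "inventory"),
    ("payments", "ledger"),
}

def may_call(caller: str, callee: str) -> bool:
    """Anything not explicitly permitted is denied."""
    return (caller, callee) in ALLOWED_CALLS

# A legitimate path succeeds; lateral movement is blocked by default.
print(may_call("checkout", "payments"))   # -> True
print(may_call("inventory", "ledger"))    # -> False
```

Real deployments enforce this with network policies or service meshes rather than in application code, but the policy shape is the same.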

One lever the government has is writing its contracts for IT products with clear expectations about how a software program must perform over time. To a limited degree, this has begun to take shape. In September 2022, the US Office of Management and Budget (OMB) released a memorandum requiring federal agencies to obtain attestation from software developers that they followed secure development practices. The government should enshrine the OMB requirement in all its purchasing. Because of the purchasing power of the federal government, this would have a substantial spillover effect in the commercial sector.

Ultimately, it is the decision making at top levels of the corporate world that will be most important in securing US systems. Stephen Soble, the CEO of Assured Enterprises, which seeks to locate and remedy known vulnerabilities, is at the forefront in arguing that companies, led by their boards of directors, must organize themselves to protect their data rather than playing an elaborate blame game after a breach. While many companies have hired chief information security officers (CISOs), one knee-jerk reaction after a breach is to simply fire the CISO. Directors and officers also increasingly retain outside counsel and cybersecurity experts and rely on insurance to help them “mitigate” losses after a breach. Soble argues that managers and directors must place more emphasis on the protection of their data.

Most managements and their boards currently assume that it’s cheaper to deal with a breach than to implement better security. What the government might be able to do to shift the economics of corporate decision-making is to require that companies report data breaches above a certain size, which means the breaches would become publicly known. The Securities and Exchange Commission has the power to require publicly traded companies to disclose risks that are “material,” and it is in the process of requiring companies to disclose “material” cybersecurity incidents. That might help shift the cost-benefit calculus in the boardroom. What if protecting our data were actually more cost effective than hiring legions of lawyers, insurance firms, and cybersecurity companies?

Elsewhere, it is necessary to reduce companies’ “attack surface.” That means limiting the number of points of contact between a company’s IT systems and the open internet, which reduces the number of channels that malicious actors can exploit. Accepting that a network’s perimeter cannot be defended is absolutely essential. “Every user presents a risk, every device presents a risk, every data transaction presents a potential risk, and it’s all transiting an inherently dangerous environment,” Cunningham said. That’s why the concept of “zero trust” is so important. If the castle’s walls, meaning network perimeters, have been breached, what must be done? “The new, new concept if you want to continue with the medieval analogy is to make sure that everyone inside the castle is wearing a suit of armor instead of just kind of wandering around,” said Cunningham. “What we’re trying to do with micro-segmentation and isolation is make sure that every entity—every person, device, and bit of data—has got security controls around it.”
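Cunningham’s “suit of armor” can be sketched as a policy check that runs on every single request: identity, device posture, and an explicit grant are all verified, and network location confers no trust at all. A minimal Python illustration, with the users, resources, and posture flag invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    device_trusted: bool   # e.g. a device-posture check passed
    resource: str

# Per-resource grants, consulted on EVERY request -- there is no
# "inside the castle walls" shortcut based on where the request came from.
PERMISSIONS = {
    "payroll-db": {"alice"},
    "wiki": {"alice", "bob"},
}

def authorize(req: Request) -> bool:
    """Identity + device posture + explicit grant, or the request is denied."""
    if not req.device_trusted:
        return False
    return req.user in PERMISSIONS.get(req.resource, set())

print(authorize(Request("alice", True, "payroll-db")))   # -> True
print(authorize(Request("bob", True, "payroll-db")))     # -> False
print(authorize(Request("alice", False, "payroll-db")))  # -> False
```

Production zero-trust systems layer in strong authentication, short-lived credentials, and continuous re-evaluation, but the core idea is exactly this: every entity carries its own controls, and nothing is trusted by default.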

Michael G. McLaughlin, JD, is a veteran naval intelligence officer who served as senior counterintelligence advisor for United States Cyber Command, where he was responsible for coordinating Department of Defense counterintelligence operations in cyberspace. He is now a cybersecurity attorney and policy advisor in Washington, D.C.
   
William J. Holstein was based in Hong Kong and Beijing for United Press International and has been following US-China relations for more than forty years. He has worked for or written for Business Week, US News & World Report, the New York Times, and other top publications. His nine previous books include The New Art of War: China’s Deep Strategy Inside the United States.