For the first two decades of its existence – and nearly the first three – the tech sector was one of the least regulated industries in the economies of both the U.S. and the world at large. Things have certainly changed since then, with the development of the U.S. Computer Emergency Readiness Team (US-CERT) within the Department of Homeland Security, as well as a variety of cybersecurity and tech-related agencies and units in other Cabinet departments – not to mention in many governments around the world.
Particularly in the past couple of years, the question of how exactly to regulate the use of open-source code, especially within the government itself, has weighed heavily on these varied authorities in a number of ways. It's become perfectly clear that open source, as an avenue of software and app development, isn't going anywhere. For the most part, this is a good thing, as it democratizes the creation process and saves coders and engineers from having to start from scratch every time. In a perfect world, then, the use of open-source platforms like Java would never be a problem – but that's not the world we live in.
Regulators have much to consider if they hope to craft sets of standards that substantively address these security issues without putting undue burdens of restriction on developers (and the organizations that employ them). If you're thinking about legacy application modernization for your business, it will be prudent to review the ins and outs of this matter very closely.
US-CERT's May alert highlights continued presence of known Java defects
May 2, 2019 marked the release of this year's second cybersecurity alert from US-CERT, this one focused on vulnerabilities in SAP's Gateway and Message Server applications – both of which are Java-based. These configuration-centric issues are present 75% of the time in Gateway and 90% of the time in Message Server. According to Security Boulevard, the exploits hackers use to do damage on the back of these flaws have been known quantities in cybersecurity research for some time. In fact, the misconfigurations that expose SAP applications to such exploits have existed for more than 10 years.
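The misconfigurations in question center on missing or overly permissive access-control lists for the Gateway and Message Server. As an illustrative sketch only – the parameter names below are standard SAP profile parameters, but values and file locations vary by installation and should be verified against SAP's own security notes – the commonly recommended hardening looks something like this:

```
# SAP instance profile (illustrative values – verify against SAP security notes)
gw/acl_mode = 1                      # enforce restrictive defaults when no ACL files exist
gw/sec_info = $(DIR_DATA)/secinfo    # ACL: which programs may be started, and by whom
gw/reg_info = $(DIR_DATA)/reginfo    # ACL: which external servers may register
ms/acl_info = $(DIR_DATA)/ms_acl     # Message Server host access-control list
rdisp/msserv_internal = 3901         # separate internal Message Server port, kept behind the firewall
```

The point the alert underscores is that none of this requires a patch – it's configuration work that has been documented for years and is still routinely skipped.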
Simply put, weaknesses that existed in these Java-based applications' earliest days – before Oracle even came along and acquired original Java creator Sun Microsystems in 2010 – are still affecting what Security Boulevard described as "a majority of SAP implementations." Patching, log and tracing configurations and other best-practice security efforts may need to improve on customers' ends, but it's fair to say that SAP service providers and engineers also aren't taking enterprise resource planning application security seriously enough.
The larger open-source conundrum
Security issues facing Java-based applications have been at the forefront of many security experts' minds ever since the Equifax breach came to light in late July 2017. As cybersecurity reporter Jack Corrigan explained in an analytical piece for NextGov, Homeland Security informed the Big Three credit bureau that March of the vulnerability that would eventually facilitate the hack. It's not clear exactly how Equifax responded – whether the organization discounted the threat or simply failed to address it – but we do know that no action was taken. The Apache Struts software used to build an application through which consumers could post disputes about credit reports remained compromised, and by July's end, more than 145 million Americans' information had leaked in the breach.
All that being said, it's reductive to view the issues facing open-source code security – and the conundrum of how to impose related regulations enforcing it – as solely a Java problem. Every open-source ecosystem, including mobile-centric platforms like Android, has its vulnerabilities. According to a survey of development security operations professionals conducted by the software security firm Sonatype, about 25% of all developers working in both the public and private sectors said that their organizations underwent breaches in 2018 that stemmed from open-source code weaknesses. This represents a 75% increase in breaches of this type between 2014 and 2019.
It's also not strictly an open-source code problem, as Derek Weeks, vice president of Sonatype, explained in an interview with NextGov.
"No one can write perfect code," Weeks said to the news provider. "All code everywhere, anywhere, whether it's an open source component or written from scratch, probably has a security flaw in it somewhere."
Patching security holes in open-source code
Emile Monette, a cyber supply chain risk specialist with the Cybersecurity and Infrastructure Security Agency (one of the aforementioned Homeland Security agencies focused on web security), told NextGov that developers' lack of focus on creator-issued security updates for open-source code and applications is a common cause of the breach issues bedeviling government and commercial users. That approach stems from a somewhat understandable focus on functionality above all else, but such single-mindedness can all too easily turn into tunnel vision.
By slowing down somewhat and emphasizing security more effectively in the open-source development process, coders and software engineers can significantly reduce their exposure to breaches and hacks. At the same time, there's an element of racing against the clock to consider: Weeks told NextGov that, on average, only about three days pass between the announcement of a security flaw and hackers launching malware, rootkits and any number of other exploits against the companies it endangers. But if development teams working for the government emphasize security earlier in the app development process, bugs serious enough to require a long remediation cycle are much less likely to emerge in the first place.
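One concrete way to build that earlier emphasis into a Java project is to make the build itself fail when a dependency carries a published vulnerability. A minimal sketch using the OWASP Dependency-Check Maven plugin – the version number and severity threshold here are illustrative choices, not prescriptions:

```xml
<!-- pom.xml: fail the build when a dependency has a known vulnerability -->
<plugin>
  <groupId>org.owasp</groupId>
  <artifactId>dependency-check-maven</artifactId>
  <version>5.0.0</version> <!-- illustrative; use a current release -->
  <configuration>
    <!-- fail on any finding with CVSS score of 7 (high severity) or above -->
    <failBuildOnCVSS>7</failBuildOnCVSS>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

With this in place, an ordinary `mvn verify` checks each dependency against published vulnerability data and stops the build on a match above the threshold – shrinking that three-day window by surfacing known flaws before code ever ships.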
Enforcing better development security standards
The question of how to create and enforce regulation still lingers over all of these proceedings, in no small part because many government agencies still have a long way to go before achieving reasonable legacy system modernization.
Monette at CISA elaborated on this issue to NextGov. "We don't know what's in [the software], and that transparency problem then does not allow us to appropriately manage the vulnerabilities and weaknesses that [arise]," Monette said. "You can't manage what you don't know about."
Part of that murkiness is not knowing how much of a given application is open source and how much was written in-house. CISA has pushed other government departments to speed up the pace of bug fixes, cutting the remediation deadline from 30 days to 15 as of May 1, 2019 – but government infosec personnel already had plenty of trouble meeting the 30-day deadline, so it's unclear how effective tightening that requirement will be. In the meantime, government agencies – as well as the many vendors that work with them as independent contractors – cannot afford to lose ground in creating the software and applications that help civic operations run more smoothly.