Why Software Developers Must Keep Cybercrime in Mind

In the world of business, the instinct for self-preservation will always be king. Outsourcing work does not mean outsourcing risk, and assuming that a vendor will look out for you and your organization’s interests is unwise.


The risk of cybercrime is simply too great to be managed on trust alone. Business relationships, especially business-to-business ones, may well be built on trust, but trust does not settle the open debate over who is responsible for software vulnerability management.

Punishing whistleblowers creates a dangerous landscape

Let’s start with identifying and disclosing software vulnerabilities. Researchers and developers dispute where the responsibility for early detection and disclosure lies. Recent software supply chain attacks came as no surprise to the cybersecurity community: developers are notoriously poor at testing their own software for vulnerabilities, and glaring holes are left open for years. When researchers find them, they are often torn between making them public anonymously and notifying the developer, because developers have sought cease-and-desist orders, or even pressed criminal charges, against researchers who disclose. This has happened even where bug bounty programs are in place. Globally, the laws protecting vulnerability disclosure are weak, and developers often act to suppress a disclosure rather than have it properly recorded in the MITRE CVE database of vulnerabilities. The result is that vulnerabilities linger, leaving the door open to further attacks.

The software developers have it wrong. Delaying disclosure can expose a company to fines for SEC violations. Legal and financial liability is already clear and is being reinforced by court rulings that set precedent. More developers are turning a corner as shareholders weigh in and the media spotlight grows brighter.

These are just the table stakes. The parties most at risk are the customers, the companies using the software. Where are they in this debate? Should a company take on the responsibility of finding vulnerabilities in the products it purchases? Most customers have no testing or research arm of their own, so they must depend on looking only for known, disclosed vulnerabilities. Yet developers are under no obligation to provide those disclosures. The burden falls on the customer, without that ever being stated.

Know the vulnerabilities that come with outsourcing and plan around them

Companies outsourcing parts of their operation may think they are also outsourcing security and software management, and to an extent they are, but they retain the legal liability. As the party holding that liability, it is in their self-interest, even if only financially, to treat themselves as responsible for detecting attacks.

Detection should be a layer on your network, regardless of any vulnerability, known or unknown.

Vulnerability scanning tools and disclosed indicators of compromise (IOCs) are the informing sources for automated detection, and both depend on public disclosure; no company could provide a detection function without them. It is critical that vendors embrace this ecosystem.
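To make the point concrete, here is a minimal sketch of how a disclosed IOC can drive automated detection: file hashes on a host are compared against hash-based indicators published for a known attack. The directory path and the hash value are placeholders, and a real deployment would pull indicators from a threat-intelligence feed rather than a hard-coded set.

```python
import hashlib
from pathlib import Path

# Placeholder standing in for entries from a published IOC feed.
# In practice these would be ingested from a threat-intelligence source,
# not hard-coded; the value below is deliberately fake.
KNOWN_BAD_SHA256 = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}


def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def scan(directory: str) -> list[Path]:
    """Flag any file whose hash matches a disclosed indicator of compromise."""
    hits = []
    for path in Path(directory).rglob("*"):
        if path.is_file() and sha256_of(path) in KNOWN_BAD_SHA256:
            hits.append(path)
    return hits


if __name__ == "__main__":
    # Hypothetical install location for third-party vendor software.
    for hit in scan("/opt/vendor-software"):
        print(f"IOC match: {hit}")
```

The point of the sketch is that the detection logic itself is trivial; its value depends entirely on vendors and researchers publishing the indicators it matches against.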


Open source opens the door to transparency

While we can tip the scale toward the developer for vulnerability identification and disclosure, we need to tip it toward the customer for mitigation. Software-as-a-Service (SaaS) is the exception, since patches there are handled by the vendor. On-premise or privately hosted software cannot rely on the vendor for deployment and patching: too many dependencies create complexity that the vendor can only assist with. Any patching automation a vendor provides should be treated with caution and run under the watchful eye of proper change management. That does not let vendors off the hook; they need to make patching and remediation easy and well supported, and they absolutely must raise the bar on communication and transparency. It is something they can learn from the open source movement.

While it is a safe assumption that this lack of transparency will remain very much a part of doing business for most companies, it is for this reason that open source software has become a viable option. The open source movement exists partly because of the lack of transparency in the software supply chain. Large vendors with wide market reach typically guard their software, wary of proprietary intellectual property being examined by competitors or by bad actors with malicious intent. Open source software, on the other hand, is transparent, allowing external study and encouraging the discovery and discussion of vulnerabilities.

The concern with this lack of transparency is that disclosure of vulnerabilities and attacks can be delayed or hidden to protect a vendor’s reputation. The law is already very clear on the requirement to disclose, and the SEC has recently fined companies for failing to disclose breaches, companies that were accused of defrauding investors by hiding that information. Without access to the code, there is no real way to know what is inside the black box; it could even contain a back door, which is why some Chinese products have been banned by the U.S. government. Researchers can do amazing things even with a black box, but developers that provide their source code get more help in finding holes and are therefore more trusted.

Policy as effective policing

One solution would be to create and support auditing organizations that can institute software testing standards. Testing could be done with the cooperation of the software developers or by external auditors alone, and regulations could force developers to submit to testing if it does not simply become an expected part of doing business. At minimum, a software bill of materials (SBOM) could be provided, as CISA has recently advocated.
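For illustration only, the sketch below writes out a minimal software bill of materials in a CycloneDX-style JSON layout. The component names, versions, and package URLs are invented, and a real SBOM would normally be produced by build tooling rather than assembled by hand.

```python
import json

# A minimal, hypothetical software bill of materials in CycloneDX-style JSON.
# Component names, versions, and package URLs below are placeholders,
# not real products.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "example-logging-lib",
            "version": "2.14.1",
            "purl": "pkg:maven/com.example/example-logging-lib@2.14.1",
        },
        {
            "type": "library",
            "name": "example-http-client",
            "version": "4.5.13",
            "purl": "pkg:maven/com.example/example-http-client@4.5.13",
        },
    ],
}

# Persist the SBOM so a customer or auditor can check the listed
# components against disclosed vulnerabilities.
with open("sbom.json", "w") as fh:
    json.dump(sbom, fh, indent=2)
```

Even a simple inventory like this lets a customer answer the basic question raised above: does the product I bought contain a component with a known, disclosed vulnerability?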

As cyberattacks increase in scale and scope, and enforcement agencies take them more seriously, there will be a greater requirement to retain data for forensic examination and legal discovery. Proper detection by all parties aids investigation, informs the security operation, and can protect the company from legal liability. By establishing a system for retaining data for this purpose, an organization is also relieved of the burden of scrambling to collect it after the fact, which delays forensic work and creates a large project for the team at the very moment it is trying to mitigate a potential attack or breach.

Mitigating risk requires enhanced detection and communication

In the world of business, the instinct for self-preservation will always be king. Outsourcing work does not mean outsourcing risk, and assuming that a vendor will look out for you and your organization’s interests is unwise. Your organization’s cybersecurity must be a priority and anything vendors might do to protect you should be seen as supplementary, because legally, financially, and reputationally, the buck stops with you. 

Beyond making sure your organization is “covered,” collaboration and transparency in vulnerability identification and network detection actually improves security for all the parties involved. Ingesting intelligence and telemetry from the vendor side is necessary to have holistic coverage and truly be able to detect compromises from all lateral, ingress and egress points on the network and hosted workloads. It may require regulation and compliance to encourage vendors to be open with their customers regarding software vulnerabilities and embrace researchers, or it may become a standard part of good business, but this collaborative effort will be essential in fighting cybercrime. An unknown vulnerability or the lack of an IOC is a gap in detection for attackers to take advantage of.

 
