
A SecDevOps Perspective on SUNBURST

Much has already been said about the recently reported SolarWinds compromise. In this post, we are not attempting to further investigate the attack, but rather, to provide a SecDevOps perspective on a few of the underlying software and development processes that are reported to have been involved in the initial compromise at SolarWinds. These processes are not unique to SolarWinds, and in fact, are often considered best practices in software development.

But, "best practice" != "immune to compromise", and in some cases, extra automation/complexity may obscure attacks or provide a false sense of security, e.g. "the code is digitally signed, it MUST be safe".

While understanding processes within the software development life cycle will not prevent attacks, it certainly provides a broader perspective for analysts and may reduce blind spots in hunts.

Supply Chain Attack

Sunburst is being reported as a "supply chain attack" which simply means that the software was compromised before it was delivered to customers. The end users of said software did not necessarily do anything wrong in the process. They were actually provided pre-compromised software, touted as "good", containing a backdoor.

Supply chain attacks are not new. NotPetya, CCleaner, and even the infamous attacks on Target can be classified as supply chain attacks.

These can manifest in any of several ways: from dropping a bad executable on the Downloads page of the vendor website, to (old school) replacing CDs or install media in the mail, to (new school) inserting malicious code into a build pipeline. An overly simplistic delivery process:

code -> build -> delivery -> installation

It can be a bit counterintuitive to consider the implications of "how far left" the attack happens. Is it easier to detect an attack pre-build? Or during delivery? Providing a checksum/integrity verifier/signature may prevent drop-and-replace attacks, but it may also provide a false sense of confidence for pre-compromised software (which is perfectly valid according to the verifier).

The specifics of this attack appear to be fairly far left in the software process which makes it particularly challenging for external defenders to mitigate.

Build pipelines

If you aren't a developer, "build pipeline" may be an unfamiliar term. Builds, or pipelines, are automated processes used by developers to turn code into usable software and are an integral part of DevOps. This could be turning code into a publicly hosted web application, or into an executable delivered to customers.

Often, these processes are used at scale within an organization, which allows many developers to work on a shared set of code. A build pipeline may include things like functional tests (make sure the code works), review processes (deploy it to a test environment so someone can make sure the buttons click), and even automated deployment (everything looks good? yeet it into prod).

Automated builds are generally considered best practice in software development and are WIDELY used. They allow organizations that develop software to holistically enforce testing, deployment, and better development practices. BUT, build servers/services/processes provide one more attack surface for adversaries.

Consider the following development process:

  1. A developer writes some code
  2. Developer commits (saves) the code to a 3rd party shared code repository
  3. A different third party "build" service is monitoring the shared code repository
  4. The build service runs DIFFERENT code to automatically test the software and build it
  5. The build service runs more code to automatically deploy the software to a special testing environment
  6. If everything passes the "tests", the build service may run EVEN MORE code and deploy the code into production
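
As a toy illustration of what the build service is doing in steps 3-6, the sketch below drives the whole sequence from a single Python script. Real pipelines are defined in a CI service's own configuration format, and every command shown here (pytest, deploy.sh, etc.) is hypothetical, but it makes the key point visible: each step is code running more code, often on infrastructure the development team does not own.

    #!/usr/bin/env python3
    """Toy sketch of an automated build pipeline (not a real CI config)."""
    import subprocess
    import sys

    # Hypothetical commands -- each step is "code that runs more code",
    # and any of it is a potential target for tampering.
    PIPELINE_STEPS = [
        ("test",   ["pytest", "tests/"]),           # functional tests
        ("build",  ["python", "-m", "build"]),      # produce the deliverable artifact
        ("stage",  ["./deploy.sh", "staging"]),     # deploy to a testing environment
        ("deploy", ["./deploy.sh", "production"]),  # everything passed? yeet it into prod
    ]

    def run_pipeline() -> None:
        for name, cmd in PIPELINE_STEPS:
            print(f"[pipeline] step '{name}': {' '.join(cmd)}")
            if subprocess.run(cmd).returncode != 0:
                print(f"[pipeline] step '{name}' failed, aborting")
                sys.exit(1)
        print("[pipeline] all steps passed")

    if __name__ == "__main__":
        run_pipeline()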

Now, an attacker can target (a non-exhaustive list):

  1. The developer
  2. The developer's workstation
  3. The 3rd party shared code repository
  4. The organization's accounts at the 3rd party shared code repository
  5. The 3rd party build service
  6. The organization's accounts at the 3rd party build service
  7. The organization's build code
  8. The testing/staging environment
  9. ...

A successful compromise at any of these levels could lead to the deployment of pre-compromised software, which, in the hands of a talented adversary, may STILL pass the required functional tests and behave like the software should.

This is even FURTHER complicated by the scale and size of modern software. The SUNBURST attackers are reported to have compromised a component of the SolarWinds software suite: SolarWinds.Orion.Core.BusinessLayer.dll. Modern software is often not a single "thing"; it is made of many segmented pieces, which all work together to produce the end product. "Libraries", "modules", "packages" and other terms are used to describe these segments. So, an attacker need not compromise an entire software suite, but merely a small piece of the entire software puzzle.

In large organizations, these components may not be created by the same teams (or even the same company). They may have different levels of testing rigor, or use different 3rd party services at different steps, or have different security oversight into the processes.

So, builds/pipelines are good, but need special attention.

Code Signing

Code signing can take several forms. In its strictest sense, it means appending a digital signature to the delivered code, which can be verified against a trusted source and indicates the code is "from" who you think it's "from". It uses cryptographic hashes and a trusted certificate authority, and it provides a high level of confidence in the authenticity and integrity of the software. But, in this case, "integrity" means "the software has not been modified AFTER it was signed".
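
To make that last point concrete, here is a minimal sketch of verifying a detached RSA signature over a downloaded file using Python's cryptography package. This is not Windows Authenticode (which also involves certificate chains and a trusted CA), and the file paths are hypothetical, but it shows exactly what a passing check proves: the bytes have not changed since the vendor signed them, and nothing more.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    def signature_is_valid(artifact_path: str, sig_path: str, pubkey_path: str) -> bool:
        """Return True if the detached signature over the artifact verifies."""
        with open(pubkey_path, "rb") as f:
            public_key = serialization.load_pem_public_key(f.read())
        with open(artifact_path, "rb") as f:
            data = f.read()
        with open(sig_path, "rb") as f:
            signature = f.read()
        try:
            # Proves integrity *since signing* and the origin of the signing key --
            # it says nothing about what was in the code before it was signed.
            public_key.verify(signature, data, padding.PKCS1v15(), hashes.SHA256())
            return True
        except InvalidSignature:
            return False

    # Hypothetical paths:
    # signature_is_valid("installer.exe", "installer.exe.sig", "vendor_pubkey.pem")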

The same types of principles apply with simple hashes and checksums. E.g. a software provider can simply create a SHA-256 hash of the codebase, executable, etc. and provide that hash via a different medium than the download. The end user can hash the software locally and compare their derived hash to the vendor-posted version. This does NOT use a trusted source, and an attacker who gains access to the publicly posted hash could simply overwrite it with their own and break the implied integrity.
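
A minimal sketch of that manual check in Python (the filename and published hash below are placeholders, not real values):

    import hashlib

    def sha256_of(path: str) -> str:
        """Compute the SHA-256 digest of a file, reading it in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    published_hash = "0000...0000"                  # copied from the vendor's page (placeholder)
    local_hash = sha256_of("vendor-installer.exe")  # hypothetical download

    if local_hash == published_hash:
        print("Hashes match: the file was not swapped after the hash was posted.")
    else:
        print("HASH MISMATCH: do not install.")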

Looking back to the development process outlined earlier, if an attacker gains access BEFORE code is fully built and signed, the code produced could have a perfectly valid signature (it was signed by the vendor for reals!) but include any number of terrible things. When a customer receives the software, any digital signature checks pass, and their confidence is high that the software is good.

So, code signing is good. It prevents numerous types of attacks, just not the kind enacted during the SUNBURST attack.

Uh oh, now what?

Enter SecDevOps. SecDevOps is a field which exists, in part, to address these issues: how can we make our code more secure? and how can we make our DevOps processes more secure?

Highly automated, highly complex software development processes can introduce new avenues of attack for adversaries, some of which can be devastating (as demonstrated), and the security of these processes needs attention. Standard security hygiene practices are certainly in play. Some of the same things an organization uses to secure an email server also need to happen on a software build server. Best practices for organizational accounts in any 3rd party service need to be considered for 3rd party software build services. But, there are additional considerations as well.

How can we build automated tests to ensure our code is secure? How can we add canaries to our build processes to alert on unauthorized accesses? How can we validate or test EVEN FARTHER left, to catch bad stuff earlier?
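
As one hedged example of validating farther left, the sketch below is a build-time tripwire: it recomputes hashes of the pipeline's own scripts and fails the build if they differ from a manifest committed alongside the code. The manifest name and layout are hypothetical, and this is no substitute for proper access controls or signed build definitions, but it is the kind of cheap check a SecDevOps team can wire into a pipeline today.

    """Hypothetical build-time tripwire: fail the build if any build script
    has changed relative to a committed manifest of known-good SHA-256 hashes."""
    import hashlib
    import json
    import sys

    MANIFEST = "build-manifest.json"  # hypothetical, e.g. {"deploy.sh": "<sha256>", ...}

    def sha256_of(path: str) -> str:
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def main() -> int:
        with open(MANIFEST) as f:
            expected = json.load(f)
        failures = [path for path, digest in expected.items() if sha256_of(path) != digest]
        for path in failures:
            print(f"[tripwire] unexpected change to build file: {path}")
        return 1 if failures else 0

    if __name__ == "__main__":
        sys.exit(main())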

These are questions which may be outside the historical perspective of some security operations teams, but they are certainly worth asking if your organization produces code (and most of us do) or if the people you protect produce code (and most of them do).

Further Info

We (mostly Whitney) have written a ton about the measures we take to secure all of our awesome automations.

And, as always, we provide some bada$$ training to give analysts real world experience detecting attacks like Sunburst in our Live Online Network Defense Range Training.