Managing Misconfigurations to Stop a Data Breach

Vulnerability management is a mainstay of most cybersecurity programs. Enterprise teams see it as essential, yet defenders rarely get excited about finding and applying a missing patch or tightening access controls on critical systems. Sure, it feels good to know you’ve plugged a hole that needed plugging, but the glory goes to the threat hunters and red teamers who get to exploit a vulnerability first and then fix it (or at least tell others how to fix it).

Nonetheless, enterprises would be in a much worse state without sturdy vulnerability management programs. Bubbling to the top of the remediation list for these teams (often an amalgamation of system admins, database admins, cloud architects, security staff, and other asset owners) is configuration management. In a 2020 IDC survey of 300 CISOs,[i] 67% said that security misconfiguration is a top concern in cloud production environments. The 2020 Verizon Data Breach Investigations Report[ii] likewise shows misconfigurations as a top contributing factor to data breaches, growing as a facilitating factor since 2015 and rising 4.9% over the 2019 DBIR, the highest one-year jump of any action variety listed. Of those misconfigurations, a full 21% were due to error rather than malicious intent. What’s more, a McAfee study estimates that 99% of cloud misconfigurations go unnoticed.[iii]

There are many more statistics about the state of the problem, but let’s focus on what companies can do to drive down misconfigurations and the breaches that can result when one of them is exploited in their cloud environments.

Vie for visibility

One of the main causes of cloud misconfiguration is lack of visibility. Given how ephemeral and distributed cloud instances are, vulnerability management teams struggle to identify default settings that need security’s attention. Traditional vulnerability scanning may miss misconfigurations because the scanner isn’t pointed at the right resources, or because the scan isn’t continuous and therefore doesn’t account for resources spinning up and down.

Cloud-native security scanning and asset management tooling can help. Not all solutions are created equal, though; make sure the tool of choice doesn’t require vulnerability management teams to poke holes in firewalls to perform discovery, thereby creating yet another vulnerability for the organization.
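Even before adopting a full tool, a simple script built on the cloud provider’s own APIs can give the vulnerability management team a refreshable inventory to scan against. Below is a minimal sketch, assuming an AWS environment with boto3 and read-only credentials already configured; a commercial asset management or CSPM product would do far more, but the principle is the same.

```python
# Minimal asset-inventory sketch using boto3 (assumes AWS credentials with
# read-only EC2/S3 access and a default region are configured in the environment).
import boto3

def list_ec2_instances():
    """Enumerate EC2 instances across every region the account can see."""
    regions = [r["RegionName"] for r in boto3.client("ec2").describe_regions()["Regions"]]
    for region in regions:
        ec2 = boto3.client("ec2", region_name=region)
        for page in ec2.get_paginator("describe_instances").paginate():
            for reservation in page["Reservations"]:
                for instance in reservation["Instances"]:
                    yield region, instance["InstanceId"], instance["State"]["Name"]

def list_s3_buckets():
    """S3 bucket names are account-wide, so a single call covers them all."""
    return [b["Name"] for b in boto3.client("s3").list_buckets()["Buckets"]]

if __name__ == "__main__":
    for region, instance_id, state in list_ec2_instances():
        print(f"EC2 {instance_id} ({state}) in {region}")
    for bucket in list_s3_buckets():
        print(f"S3 bucket: {bucket}")
```

Because the script queries the cloud control plane directly, it sees short-lived resources without needing any firewall exceptions.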

Caretake your credentials

Needless to say, compromised or weak credentials pose a major risk of unauthorized access. A threat actor posing as a legitimate user gains unfettered access (especially if accounts are overprovisioned; see below) to the cloud environment and its sensitive or proprietary data.

To prevent compromised or weak credentials from becoming the vulnerability your organization doesn’t need, enable multi-factor authentication for all IAM users and deploy a tool that can discover and remediate unused security groups.
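Both checks can be automated. The sketch below, again assuming AWS with boto3 and read-only credentials, flags IAM users with no MFA device and security groups not attached to any network interface (one common heuristic for “unused”); it reports rather than deletes, since cleanup should be reviewed by the asset owner.

```python
# Hedged sketch: list IAM users without MFA and security groups that are not
# attached to any network interface. Read-only; remediation is a manual follow-up.
import boto3

def users_without_mfa():
    iam = boto3.client("iam")
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            if not iam.list_mfa_devices(UserName=user["UserName"])["MFADevices"]:
                yield user["UserName"]

def unused_security_groups(region="us-east-1"):  # illustrative region
    ec2 = boto3.client("ec2", region_name=region)
    all_groups = {g["GroupId"] for g in ec2.describe_security_groups()["SecurityGroups"]}
    in_use = set()
    for page in ec2.get_paginator("describe_network_interfaces").paginate():
        for eni in page["NetworkInterfaces"]:
            in_use.update(g["GroupId"] for g in eni["Groups"])
    return all_groups - in_use

if __name__ == "__main__":
    for name in users_without_mfa():
        print(f"IAM user without MFA: {name}")
    for group_id in unused_security_groups():
        print(f"Security group not attached to any interface: {group_id}")
```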

Polish up permissions

In the same vein as auditing cloud assets to identify risky settings, user and service account permissions must be a focus of cloud vulnerability management. Excessive permissions easily go unnoticed because the defaults for new resources and services are almost always broader than necessary. Threat actors can leverage excessive privileges on a compromised node to reach an adjacent node and find insecure applications and databases. The result: a destructive data breach with potential compliance consequences.

To remediate this vulnerability, make certain that access permissions are reviewed regularly, that least-privilege access is applied by default, and that no instance is publicly accessible (which is surprisingly common).
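One quick way instances end up publicly reachable is a security group rule open to the whole internet. The sketch below, assuming AWS with boto3 and read-only credentials, flags ingress rules allowing traffic from 0.0.0.0/0 or ::/0; which ports (if any) are acceptable is a policy decision for your team.

```python
# Hedged sketch: flag security group rules that allow ingress from anywhere,
# a common way cloud instances become publicly accessible unintentionally.
import boto3

def world_open_rules(region="us-east-1"):  # illustrative region
    ec2 = boto3.client("ec2", region_name=region)
    for group in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in group["IpPermissions"]:
            open_v4 = any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", []))
            open_v6 = any(r.get("CidrIpv6") == "::/0" for r in rule.get("Ipv6Ranges", []))
            if open_v4 or open_v6:
                # FromPort/ToPort are absent when the rule covers all traffic.
                yield group["GroupId"], rule.get("FromPort", "all"), rule.get("ToPort", "all")

if __name__ == "__main__":
    for group_id, from_port, to_port in world_open_rules():
        print(f"{group_id} allows ingress from anywhere on ports {from_port}-{to_port}")
```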

Ensure Encryption

Encryption is one of the easiest ways to prevent unauthorized individuals from seeing what data reside in a company’s systems. For instance, enabling S3 default bucket encryption protects all new objects stored in the bucket at rest (data in transit is protected separately, by enforcing TLS). That said, even though it’s called “default encryption,” the setting is, ironically, not enabled by default, although it is trivial for users to configure. Be mindful, however, of objects that already exist when the setting is turned on: anything stored in the bucket before flipping the switch must still be encrypted. In S3, this can be accomplished via Batch Operations.
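Turning the setting on is a one-call change. Below is a minimal sketch using boto3, with a hypothetical bucket name and SSE-S3 (AES256) as the assumed algorithm; the same call accepts KMS keys if your policy requires them.

```python
# Hedged sketch: enable default (SSE-S3) encryption on a bucket so new objects
# are encrypted at rest. Objects uploaded before this call still need to be
# re-encrypted, e.g. via an S3 Batch Operations copy job.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_encryption(
    Bucket="example-bucket",  # hypothetical bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)
```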

For every cloud environment, users/admins must review encryption settings to make sure the data is properly protected throughout its lifecycle. Encryption can be client side or server side—or both. Not all cloud providers’ environments are the same, though, so understand the “default” settings for each provider and what “default” means, then take appropriate action.
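A periodic audit closes the loop. The sketch below, assuming AWS with boto3 and read-only S3 credentials, reports which buckets have no default-encryption configuration; treat a missing configuration as a prompt to investigate, since providers’ defaults change over time.

```python
# Hedged sketch: report buckets that have no default-encryption configuration.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_bucket_encryption(Bucket=name)
        rules = config["ServerSideEncryptionConfiguration"]["Rules"]
        algorithms = [r["ApplyServerSideEncryptionByDefault"]["SSEAlgorithm"] for r in rules]
        print(f"{name}: default encryption {algorithms}")
    except ClientError as err:
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            print(f"{name}: no default encryption configured")
        else:
            raise
```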

Conclusion

At present, cloud misconfigurations present a high data breach risk. The reasons are myriad: lack of visibility, misunderstanding of “default” settings, inaccessibility of settings, insufficient expertise to manage configurations, time and resource constraints, and more. Fixing misconfigurations is manageable, however, with cloud-native vulnerability management technologies. From discovery to policy enforcement, tools and techniques are available to help organizations understand and control their security posture.

[i] https://www.businesswire.com/news/home/20200603005175/en/Ermetic-Reports-80-Companies-Experienced-Cloud-Data

[ii] https://enterprise.verizon.com/resources/reports/dbir/

[iii] https://www.helpnetsecurity.com/2019/09/25/cloud-misconfiguration-incidents/