Visibility and Control at the Application Layer


In the past, the axiom "humans are the weakest link" was the battle cry of many cybersecurity professionals. Users (that is, non-security employees) were constantly doing "dumb" things in the eyes of the practitioner. More recently, though, security practitioners have realized that labeling someone as "dumb" is neither productive nor accurate. The fact is, employees across all areas of the business are overworked and stretched thin; those aren't distinctions unique to cybersecurity teams. When a busy person receives what they believe is an invoice from a known vendor, yes, they might open the email attachment. Opening attachments is commonplace and a requirement for doing business. Similarly, employees traveling, even to high-surveillance countries such as China, India, or Russia, often need access to applications and files, and they haven't been given alternative ways to use business-critical tools.

In other words, anyone, anywhere can fall for phishing or inadvertently access a malicious entity, especially if they don't know what to look for, are overburdened, or if the attackers are crafty (which they often are). Fortunately, over the last several years, security pros have stopped blaming users and instead put extra effort behind endpoint controls, mitigating the risk that a compromised endpoint leads to a major security incident.

Gaining control

One such security professional is Danny Jenkins, now CEO of application security company ThreatLocker. With a background in both enterprise and vendor-side security, Jenkins saw how easy it was for cybercriminals to get past humans and wanted to build technology to ensure that if something risky happened at the endpoint, the criminal would be unable to further the attack. His idea was to take whitelisting, keep the good parts (i.e., the fine-grained control), and improve on the not-so-good parts (i.e., how labor-intensive whitelisting is to maintain).

“Whitelisting is a great technology,” Jenkins told me and Ed during a recent briefing, “but it must be adapted for dynamic environments and applications, especially today with DevOps. The goal of ThreatLocker was to build a solution to deal with whitelisting’s deployment overhead and tedious approval process, and that could handle constant updates.”

Jenkins ran us through a demo of ThreatLocker's Application Control product. The technology extends the concept of permitted versus denied by adding an automated learning model and least privilege access requirements. The first step after deployment is visibility; ThreatLocker scans the network to see and log which applications and files are running, which entities are talking to one another, and what apps/files can do once they connect. Jenkins says the "full granular audit of every executable, script, or library can take anywhere from one day to one month," though his team recommends a week in learning mode since not all apps or processes may be communicating on the first day. Though the product learns as it goes, the best-case scenario is gaining a complete view before blocking begins.

Next, Application Control combines what it has learned with pre-built definitions to build policy, ensuring that unapproved apps, files, or systems cannot communicate on the network. The approach is default deny, but administrators can override settings by clicking through each logged application.
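To make the default-deny model concrete, here is a minimal sketch of the general technique (not ThreatLocker's actual implementation): binaries observed during the learning phase are approved by cryptographic hash, and anything whose hash was never learned is blocked. All names here are illustrative assumptions.

```python
import hashlib
import os
import tempfile

def sha256_of(path: str) -> str:
    """Hash a binary on disk, so policy survives renames and moves."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

class Allowlist:
    """Default-deny application policy keyed by file hash."""
    def __init__(self):
        self.approved = set()  # hashes logged during the audit/learning phase

    def learn(self, file_hash: str) -> None:
        self.approved.add(file_hash)

    def is_permitted(self, file_hash: str) -> bool:
        # Default deny: unknown hashes are blocked, no exceptions.
        return file_hash in self.approved

# Simulate learning mode with a stand-in "binary" written to a temp file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"pretend this is app.exe")
    app_path = f.name

policy = Allowlist()
policy.learn(sha256_of(app_path))

print(policy.is_permitted(sha256_of(app_path)))  # True: audited and approved
print(policy.is_permitted("never-seen-hash"))    # False: default deny
os.remove(app_path)
```

The point of hashing rather than matching on filenames is that a trojan renamed to `winword.exe` still presents an unknown hash and is denied.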

Dealing with dynamism

But what about the dynamism mentioned previously? How can whitelisting handle today’s CI/CD approach? Jenkins says the company has relationships with major vendors and receives update information before it hits the wire. This means that new hashes are automatically added in ThreatLocker before the updates are pushed to the market, ensuring that customers’ environments won’t fall over when a patch is released or when an app is updated by the developer. This step removes a lot of the complexity and frustration of traditional whitelisting.

Then, ThreatLocker applies granular application control through ringfencing; based on the initial audit, admins can define how approved applications are permitted to communicate with each other and what resources—such as networks, files, or registries—they can access.
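A rough sketch of what ringfencing-style rules might look like: even an approved application is restricted to the files, network endpoints, and registry keys observed during the audit. The rule structure and names below are hypothetical, not ThreatLocker's API.

```python
# Per-application resource policy derived from the initial audit.
# Keys are resource categories; values are the targets the app may touch.
RINGFENCE = {
    "winword.exe": {"files": {"C:/Users/*/Documents"}, "network": set()},
    "chrome.exe":  {"files": {"C:/Users/*/Downloads"}, "network": {"*:443"}},
}

def is_action_allowed(app: str, category: str, target: str) -> bool:
    """Default deny: an app may only touch resources its policy lists."""
    rules = RINGFENCE.get(app)
    if rules is None:
        return False  # app has no ringfence policy at all -> blocked
    return target in rules.get(category, set())

print(is_action_allowed("chrome.exe", "network", "*:443"))   # True
print(is_action_allowed("winword.exe", "network", "*:443"))  # False
```

The second check illustrates the value of ringfencing: Word is an approved application, but because the audit never saw it making outbound connections, a macro that tries to phone home is still denied.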

The takeaway

With the recognition that the application layer is just as important as the network layer, it's nice to see a technology like ThreatLocker that provides network and security teams greater control over what's communicating and how, and gives admins a means to instantly block risky and malicious applications. Further, the approval process for permitting previously denied applications is easy once the admin knows what they're looking for (which should become pretty darn obvious if an employee is screaming about an inaccessible app). All that being said, the market for application controls is crowded, and technologies that require a good deal of manual intervention will face pressure.

If you're looking for greater control over your applications using a tried-and-true technology—albeit one updated and modernized to accommodate today's networking—give the team at ThreatLocker a call and let us know what you think of their approach to whitelisting.
