Log analysis is the art and science of making sense of raw data. Trunc contextualizes that raw data and creates insights for administrators.
One of the biggest challenges we face in the modern DevSecOps world is the mountain of data we are responsible for managing and making sense of. One of the most important pieces of that data is logs. Logs can be noisy, yet we are tasked with reviewing them, deriving intelligence from them, and storing them in such a way that we can retrieve them when necessary. Making sense of the noise can be an excruciating task, even for the most capable DevSecOps teams.
Logs are a critical pillar of any security program. They exist as a record that helps incident responders better understand what happened, because in the world of security it is not a matter of if you've been hacked, but when. With enough time and motivation, attackers are often able to defeat most defensive measures. Maintaining the integrity of your logs is paramount to the success of any incident review.
Not all logs are helpful. Although extremely important, logging is often an afterthought for most developers, which leaves us with very bad logs. Bad in that they collect the wrong information, collect information that is not helpful, or record it incorrectly. This adds to the challenges we face as administrators when we're trying to make sense of the information.
That is why at Trunc we work to get rid of the noise. We create parsers that help us contextualize the logs, making them helpful to an administrator, while removing the logs that are not.
Here is a simple example using SSHD. If you were looking at an SSHD log, this is what you would see:
Accepted password for john from 149.1.x.x port 23414
SSHD is an example of an application that records clean logs. That being said, when you have a lot of users, this entry would get lost in the noise, and you also lack any other information on whether this is good or bad (outside of the fact that John logged in with a password).
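Under the hood, the first step of making sense of an entry like this is turning the raw line into structured fields. Here is a minimal sketch in Python; the regular expression and field names are illustrative, not Trunc's actual parser:

```python
import re

# Illustrative pattern for the sshd "Accepted" line shown above.
# The field names (method, user, ip, port) are our own labels.
SSHD_ACCEPTED = re.compile(
    r"Accepted (?P<method>\w+) for (?P<user>\S+) "
    r"from (?P<ip>\S+) port (?P<port>\d+)"
)

def parse_sshd(line):
    """Return a dict of structured fields, or None if the line doesn't match."""
    m = SSHD_ACCEPTED.search(line)
    return m.groupdict() if m else None

entry = parse_sshd("Accepted password for john from 149.1.x.x port 23414")
# entry -> {'method': 'password', 'user': 'john', 'ip': '149.1.x.x', 'port': '23414'}
```

Once the line is structured, each field (the user, the source IP) becomes something you can look up, count, and alert on, rather than free text you have to eyeball.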
Log analysis is the process of extracting as much information from this log entry as possible to help determine whether it is good or bad. As the entry stands, that is nearly impossible, especially when compounded by the fact that it is one of thousands on one server.
In a real-world environment you're talking N servers, N entries, N log files, where N is effectively unbounded. It's a near-impossible task for an individual, or even a team, in most instances.
Let's take a look at what it means to contextualize the log, analyzing it to derive more intelligence, in an effort to make a determination if this is good or bad.
This is what Trunc sees with the same log from SSHD:
Login success via SSHD.
IP: 149.1.x.x (Germany)
149.1.x.x: Tor exit node.
149.1.x.x: Flagged in multiple blacklists.
That's interesting. So John SSH'd into a server, using a Tor exit node, and that exit node has been blacklisted for malicious activity across other networks. That node also happened to be in Germany, but John lives in California.
In the world of security, this would, and should, be a big red flag. But in many instances it falls through the cracks, because most organizations lack a mechanism to effectively parse, or make sense of, their logs.
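The enrichment step above can be sketched as a set of lookups against threat-intelligence data. The tables below are illustrative stand-ins; in practice they would be populated from a GeoIP database, the Tor exit-node list, and blacklist feeds:

```python
# Illustrative lookup tables -- stand-ins for real intelligence sources.
TOR_EXIT_NODES = {"149.1.x.x"}                       # e.g. the Tor exit-node list
BLACKLISTS = {"149.1.x.x": ["feed-a", "feed-b"]}     # hypothetical blacklist feeds
GEOIP = {"149.1.x.x": "Germany"}                     # e.g. a GeoIP database

def enrich(ip):
    """Cross-reference an IP against the lookup tables and return context notes."""
    notes = [f"IP: {ip} ({GEOIP.get(ip, 'unknown location')})"]
    if ip in TOR_EXIT_NODES:
        notes.append(f"{ip}: Tor exit node.")
    if ip in BLACKLISTS:
        notes.append(f"{ip}: Flagged in multiple blacklists.")
    return notes

for note in enrich("149.1.x.x"):
    print(note)
```

Each note maps directly to one of the context lines in the Trunc output above; the value is that the cross-referencing happens automatically, on every entry, instead of only when an analyst thinks to check.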
Trunc is on a mission to simplify this process. By default, all logs will be categorized against our rules and we have made it super easy to quickly sift through your logs according to those categories. For example, an administrator would be able to do a search like this right from their dashboard:
category:authentication_success AND category:tor_connection
We turn logs into actionable intelligence. Everything can be searched through our Google-like dashboard, making it simple to investigate incidents and conform to multiple compliance requirements (PCI-DSS, GDPR, SOC 2). The best part is you can parse through all your logs, not just one log file on one server, but all of them, from all locations, at the same time.