Eliminate toxic data from logs

Log files are piling up in the cloud in a dizzying variety of formats, originating from everything from mobile apps to SaaS services. Sensitive data is creeping into our log files, creating security incidents and compliance violations. How do you find and eliminate toxic data in logs before it becomes an incident?

Pick the right time to start

While even a small change can result in sensitive data leaking into logs, certain moments make a closer look imperative. For example, are you about to roll out a new application or an overhauled service? Are you newly hired and need to get a sense of your risk posture? Has there been a recent security incident and you want to make certain you’re in the clear? All of these (and more) are good reasons to analyze your logs for toxic data.

Ready, set, go!

While logs can reside in many locations, S3 is a great starting point for analysis as it is the storage service of choice for unstructured and semi-structured data at AWS. Open Raven will automatically discover your S3 buckets for any account or organization you have configured.

From there, creating a log analysis is straightforward. Select some or all of your available buckets, choose some or all file types, and configure the scan for completeness or for a faster partial sampling of the files within.
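If you want to see what this kind of enumeration and sampling looks like outside the product, the sketch below uses the AWS SDK for Python (boto3) to list buckets and pull a small sample from each. The log-like suffixes, object counts, and byte ranges are illustrative assumptions, not a description of how Open Raven scans under the hood.

```python
# A minimal sketch, assuming boto3 is installed and AWS credentials are configured.
# It approximates "partial sampling": list a handful of objects per bucket and
# read only the first few kilobytes of anything that looks like a log file.
import boto3

s3 = boto3.client("s3")
LOG_SUFFIXES = (".log", ".log.gz", ".json", ".txt")  # illustrative, not exhaustive
MAX_OBJECTS_PER_BUCKET = 25
SAMPLE_BYTES = 16_384

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    listing = s3.list_objects_v2(Bucket=name, MaxKeys=MAX_OBJECTS_PER_BUCKET)
    for obj in listing.get("Contents", []):
        key = obj["Key"]
        if obj["Size"] == 0 or not key.endswith(LOG_SUFFIXES):
            continue
        # Range reads keep sampling cheap; a "completeness" scan would fetch whole objects.
        body = s3.get_object(
            Bucket=name, Key=key, Range=f"bytes=0-{SAMPLE_BYTES - 1}"
        )["Body"].read()
        print(f"sampled {len(body)} bytes from s3://{name}/{key}")
```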

The final step is to select the type of data to look for: are you primarily concerned about finding developer credentials? Regulated personal data? Patient health information? Select some or all of the available data classes, or create your own if you have a special case.
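As a rough illustration of what a data class is, the snippet below defines two pattern-based classes, one for AWS access key IDs and one for credit-card-like numbers, and counts their matches in sampled log text. The patterns are deliberately simple assumptions and far looser than the built-in classes Open Raven ships with.

```python
# A minimal sketch of pattern-based data classes, assuming the sampled log
# text is already in memory (e.g., from the sampling step above).
import re

DATA_CLASSES = {
    # Illustrative patterns only; production data classes use stricter
    # validation (checksums, context keywords, entropy checks, etc.).
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "credit_card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(text: str) -> dict[str, int]:
    """Return a count of matches per data class for one chunk of log text."""
    return {name: len(pattern.findall(text)) for name, pattern in DATA_CLASSES.items()}

sample = "user=alice key=AKIAABCDEFGHIJKLMNOP card=4111 1111 1111 1111"
print(classify(sample))  # {'aws_access_key_id': 1, 'credit_card_number': 1}
```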

Creating a scan job inside Open Raven

Select S3 buckets to scan, and optimize the scan job for completeness or speed.

Working with results

When the scan completes, it will have automatically identified all relevant log files by type and examined them for instances of the data classes you selected. Fixing any discovered problems is a click away, as the results are deep-linked to the AWS Console itself for fast remediation.

Metadata associated with the file can help you identify the offending application, allowing you to stop the leak at the source.

Open Raven product showing an asset group with credit cards

Sensitive data found in logs is plainly visible in the scan results and directly linked to the data source.
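To make the "stop the leak at the source" step concrete, the sketch below pulls the pieces of S3 metadata that most often point back at the writing application: the key prefix, user-defined object metadata, and object tags. The bucket and key names here are hypothetical.

```python
# A minimal sketch, assuming boto3 and read access to the flagged object.
# Key prefixes, x-amz-meta-* metadata, and tags often reveal which service
# wrote the log (e.g., a CI job, a load balancer, or a specific microservice).
import boto3

s3 = boto3.client("s3")
bucket, key = "example-app-logs", "checkout-service/2024/05/app.log"  # hypothetical names

head = s3.head_object(Bucket=bucket, Key=key)
print("prefix:     ", key.rsplit("/", 1)[0])        # the key path often encodes the app
print("metadata:   ", head.get("Metadata", {}))     # user-defined x-amz-meta-* values
print("last write: ", head["LastModified"])         # narrows down the deploy or job

tags = s3.get_object_tagging(Bucket=bucket, Key=key)
print("tags:       ", {t["Key"]: t["Value"] for t in tags.get("TagSet", [])})
```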

Making routine log analysis fit your workflow

Automating detection of, and response to, future sensitive data exposure in logs is simple. Create a schedule inside Open Raven specifying the area to be monitored, the desired policy, and how often to check for problems, and you’re good to go.

Open Raven fits your existing response workflow through built-in integrations for Slack, GSuite, PagerDuty and more. Need a custom integration to make things work perfectly? Our firehose API and webhook features ensure we fit the way you already get things done.


Open Raven integrations including Slack, Jira, Gmail, and more

Stream results with the firehose API, send alerts to third-party services, or customize your integration with webhooks.
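If you do go the webhook route, the receiver can be as small as the sketch below. The endpoint path and the fields in the alert payload ("policy", "asset", "dataClasses") are assumptions for illustration, so check the webhook documentation for the exact schema.

```python
# A minimal webhook receiver sketch using Flask. The payload fields are
# assumed for illustration and will differ from the actual alert schema.
from flask import Flask, request

app = Flask(__name__)

@app.route("/alerts/open-raven", methods=["POST"])
def handle_alert():
    alert = request.get_json(force=True, silent=True) or {}
    policy = alert.get("policy", "unknown policy")
    asset = alert.get("asset", "unknown asset")
    classes = ", ".join(alert.get("dataClasses", [])) or "n/a"
    # Route the finding into your own workflow: ticketing, paging, chat, etc.
    print(f"[ALERT] {policy} matched on {asset} (data classes: {classes})")
    return {"ok": True}, 200

if __name__ == "__main__":
    app.run(port=8080)
```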