
Magpie Cookbook #1: Discover Your Public S3 Buckets

Jason Nichols
Principal Engineer
April 7, 2021

Overview

Welcome to part 1 of the new Magpie Cookbook series, where we'll demonstrate practical uses for Magpie in your environment. The goal of these cookbooks is to provide easy-to-follow guides for extracting useful information and building actions or automations with Magpie.

Today's cookbook covers discovering public S3 buckets within your AWS account.

If you'd like to know more about Magpie, check out the GitHub repository. To get regular development updates and chat with the core Magpie developers and the community, join us on Slack.

Prerequisites

For this cookbook you'll need the following on your local system (a quick way to check them follows the list):

  • Docker
  • jq
  • AWS credentials (in your ~/.aws/credentials file or in environment variables)
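
A quick way to verify these before you start (optional; any recent Docker works, and jq 1.5 or newer includes the streaming filters used below):

docker --version
jq --version
ls ~/.aws/credentials                 # or, if you use environment variables:
env | grep '^AWS_' | cut -d= -f1      # lists which AWS_* variables are set, without their values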

Running the Command

If you've set your AWS credentials via ~/.aws/credentials, run:

docker run -a stdout -a stderr --env MAGPIE_CONFIG="{'/plugins/magpie.aws.discovery/config/services': ['s3']}" \
-v ~/.aws:/root/.aws:ro quay.io/openraven/magpie:latest 2> magpie.log \
| jq -n --stream 'fromstream(1|truncate_stream(inputs))' \
| jq -c 'select(.isPublic == true) | {"bucket": .bucketName, "region": .region}'

If you've set them via environment variables, run:

docker run -a stdout -a stderr --env MAGPIE_CONFIG="{'/plugins/magpie.aws.discovery/config/services': ['s3']}" \
-e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN \
quay.io/openraven/magpie:latest 2> magpie.log \
| jq -n --stream 'fromstream(1|truncate_stream(inputs))' \
| jq -c 'select(.isPublic == true) | {"bucket": .bucketName, "region": .region}'

Now grab some coffee; if you have a large infrastructure, this command may take some time. Hint: if you're feeling antsy and want to watch the logs, just open another terminal and run:

tail -f magpie.log
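
If you only want to watch for problems, you can filter the same log as it grows (this assumes Magpie's log lines carry a conventional ERROR level):

tail -f magpie.log | grep -i error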

Command Breakdown

We start with a basic docker run and set a few parameters:

docker run -a stdout -a stderr --env MAGPIE_CONFIG="{'/plugins/magpie.aws.discovery/config/services': ['s3']}" \

The -a arguments attach the container's standard output and standard error streams to the terminal's. This lets us take advantage of Magpie's default behavior: log output goes to stderr and JSON output goes to stdout.
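
To see this stream separation in isolation, here's a minimal sketch using the stock alpine image; stdout stays on the terminal (or a pipe) while stderr lands in a log file:

docker run --rm -a stdout -a stderr alpine sh -c 'echo to-stdout; echo to-stderr >&2' 2> err.log
# prints "to-stdout"; err.log now contains "to-stderr"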

We then set the MAGPIE_CONFIG environment variable. This overrides Magpie's default configuration, which scans all AWS services; here we only want to scan S3. For the full service list and more about overrides, see the README.
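
As an illustration, scanning additional services is just a longer list. Note that the "ec2" identifier below is an assumption; check the README for the exact service names Magpie accepts:

MAGPIE_CONFIG="{'/plugins/magpie.aws.discovery/config/services': ['s3', 'ec2']}"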

The second line in both options primarily sets credentials: Magpie will search for and use credentials as explained here. We then specify the latest Magpie Docker image and redirect stderr to magpie.log.
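
If you're going the environment-variable route, export the credentials in your shell before invoking Docker; docker run -e NAME with no value forwards that variable from your shell into the container:

export AWS_ACCESS_KEY_ID=...        # your access key ID
export AWS_SECRET_ACCESS_KEY=...    # your secret access key
export AWS_SESSION_TOKEN=...        # only needed for temporary (STS) credentials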

The last two lines use the power of jq to 1) parse the JSON stream, 2) filter out entries that are not public, and finally 3) output JSON objects consisting of each bucket's name and region.
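
To see what those jq stages do in isolation, here's a self-contained sketch run against hand-written input shaped like the fields used above (two fake buckets; note jq's -n flag, which lets inputs consume the whole event stream):

echo '[{"bucketName":"a","region":"us-west-1","isPublic":true},{"bucketName":"b","region":"us-east-2","isPublic":false}]' \
| jq -n --stream 'fromstream(1|truncate_stream(inputs))' \
| jq -c 'select(.isPublic == true) | {"bucket": .bucketName, "region": .region}'
# prints: {"bucket":"a","region":"us-west-1"}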

Results

If your account has public buckets, you'll see them printed, one per line:

{"bucket":"my-western-bucket","region":"us-west-1"}
{"bucket":"my-eastern-bucket","region":"us-east-2"}

Magpie pulls a lot of data on S3 buckets; to see the raw JSON (and remove the isPublic filter), just drop the last two piped jq commands from the code above. The same techniques presented here can be used to extract any of Magpie's discovery data with custom filters, as in the sketch below. We'll explore more possibilities in future cookbooks.
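
For example, reusing only fields we've already seen, you could narrow the report to public buckets in a single region by swapping in a different final jq stage (a sketch; substitute whatever region you care about):

jq -c 'select(.isPublic == true and .region == "us-west-1") | {"bucket": .bucketName}'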
