By Craig Thomas

Leveraging Docker Secrets for Security AND Scalability

One of the key tenets of DevOps is its scalability and agility. However, we are often told this flies in the face of security. That picture has improved with the rise of DevSecOps and with security scanning integrated into the Continuous Integration/Continuous Deployment (CI/CD) pipeline. C2 Labs recently worked with one of our clients on another way security AND scalability can go hand in hand. We did this using Docker Secrets and Docker Swarm.


What are Docker Secrets?

A secret is a blob of data, such as a password, SSH private key, SSL certificate, or another piece of data that should not be transmitted over a network or stored unencrypted in a Dockerfile or in your application’s source code.

Outstanding. We now have a better way to use and store sensitive data. Previously, our options were to store this information as environment variables (easily exposed and viewable in code and in running containers) or as part of the Dockerfile (not very scalable and also easily exposed). Additionally, secrets are set up on the Swarm managers and are made available to the worker nodes running as part of your Swarm.


How Do We Set Up Docker Secrets?

Docker secrets are set up on manager nodes. This can be done via the CLI, or via the GUI if you are using the UCP manager as part of Docker Enterprise Edition (EE). As a reminder, these secrets are encrypted at rest on your manager nodes and in transit between the managers and the worker nodes. For this walkthrough, let's use the following example, which is a Base64-encoded "token" of user:password.

Secret Name: api_token

Value: dXNlcjpwYXNzd29yZAo=
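
From the CLI, a minimal sketch of creating this same secret from a shell on a Swarm manager node might look like this:

# Create the secret from stdin on a Swarm manager node
printf '%s' "dXNlcjpwYXNzd29yZAo=" | docker secret create api_token -

# Confirm it exists; the value itself is never displayed back to you
docker secret ls
docker secret inspect api_token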


If you are using Docker EE, I find it easiest to set this up via the GUI:

  1. Log in to UCP and navigate to Swarm->Secrets

  2. Click Create

  3. For Name, use api_token (from above)

  4. For Content, use dXNlcjpwYXNzd29yZAo= (from above)

  5. Click Collection on the left-hand side. This will allow you to tie the secret to just a specific collection of containers.

  6. Navigate to the Collection you want to be able to use this secret. If you want it to be accessible to everything in the Shared collection, navigate to Collections->Swarm and select Shared

  7. Otherwise, drill down to the correct collection, for instance C2Labs in this case

  8. Click Create

NOTE: Apply the principle of least privilege here. All containers within the collection you select will have access to the secret. In some cases, that is exactly what you want, but in other cases, you will not want every container to have access to it.


Once created, the secret is stored encrypted within the Swarm. You cannot edit it, so if you need to change the data, you will need to remove it and recreate it. You can change the collection, but not the data.
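
A minimal CLI sketch of that remove-and-recreate cycle (the new value below is just a placeholder, and a secret cannot be removed while a running service still references it):

# Remove the old secret (this fails if a running service is still using it)
docker secret rm api_token

# Recreate it with the new value (placeholder shown here)
printf '%s' "bmV3LXRva2VuLXZhbHVl" | docker secret create api_token -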


How Do We Use Docker Secrets?

Now that you have created your secret, you need to be able to use it. This is pretty straightforward, with a couple of caveats, which I will walk through.

Docker Secrets are simply stored as files. Within Linux containers, the default location for these is:

  • /run/secrets/secret_name

For Windows containers, these are located at:

  • C:\ProgramData\Docker\secrets\secret_name

Use via Docker Compose File

Here is an example Docker Compose file for a simple service, using an Alpine Linux image, that consumes the secret:
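
The full file is longer than what fits here; the line numbers referenced in the bullets below are from that full file. As a condensed sketch of its shape (the service name and image tag are illustrative assumptions):

version: "3.7"

services:
  secret-service:
    image: alpine:latest        # assumed image for this sketch
    command: sleep 86400        # keep the container alive so we can inspect it
    secrets:
      - api_token               # short-form list: just the secret's name

secrets:
  api_token:
    external: true              # already created in the swarm, as we did above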


Let’s take a look at the key pieces:

  • Lines 8-9 define the name of the secrets. This is a list, so you can list out multiple secrets to use.

  • There is also a long form for this list, which we will leverage and talk about shortly.

  • Lines 35-37 have more information about the secrets

  • Line 36 is the name of the secret

  • Line 37 defines it as “external,” which means it is stored as part of the swarm and has already been created as we did earlier

Now, if you simply do a docker stack deploy -c docker-compose.yml mysecret, you will see the file located at /run/secrets/api_token from within the container.
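
As a quick check, a hedged sketch of reading it back out of a running task (the container ID is a placeholder):

# Find a running container from the stack, then read the secret file
docker ps --filter name=mysecret
docker exec <container-id> cat /run/secrets/api_token
# should print: dXNlcjpwYXNzd29yZAo=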

Limitations

Note that these are created at runtime in the container, so you cannot cat the contents directly into an environment variable. However, you CAN use them within a script or point an environment variable simply TO the file.
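
A minimal sketch of both patterns (the API_TOKEN_FILE variable name is just a convention I am assuming, not anything Docker requires):

#!/bin/sh
# Inside the container: read the secret file at runtime in a script
API_TOKEN="$(cat /run/secrets/api_token)"

# Or, in the compose file, point an environment variable at the file itself:
#   environment:
#     - API_TOKEN_FILE=/run/secrets/api_token
# and have the application read the path from $API_TOKEN_FILE.
echo "Loaded a token of ${#API_TOKEN} characters"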


How Does This Help Us Scale Better?

So, this is great and provides us a way to securely transmit and store sensitive data. It ALSO allows us to scale across multiple environments. We can do this by renaming the secrets in the docker-compose file instead of having to rebuild our containers. For instance, I have a container in my swarm that needs to upload a file from a central location to two different APIs, each with its own API token, similar to the example above. I also have a development environment. I need to upload this file daily, so I want to build the image once in DEV and then deploy it daily to do the file uploads. Rather than modify my upload script, I simply do the following:

  1. Define the two Docker secrets in my production environment (api_token1 and api_token2)

  2. For this example, we can just use sample plaintext, but you can easily create hashes or other items as above (api_token1 = Testing1; api_token2 = Testing2; see the CLI sketch after this list)

  3. Build the image (actually optional, if you did the build previously; nothing has changed in the image)

  4. Build two services as a part of my deployment

  5. Pass the URL of API 1 as an environment variable for container 1

  6. Pass the api_token1 secret into container 1, renamed to api_token (which is what my container is looking for)

  7. Pass the URL of API 2 as an environment variable for container 2

  8. Pass the api_token2 secret into container 2, renamed to api_token (which is what my container is looking for)

  9. Deploy/run the service, and both containers will come up with the same name for the secret, but with completely different values

  10. You can easily scale this within your compose file without ever rebuilding your image.
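
Steps 1 and 2, sketched from the CLI on a manager node (values taken from the example above):

printf '%s' "Testing1" | docker secret create api_token1 -
printf '%s' "Testing2" | docker secret create api_token2 -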

Here is the docker-compose.yml file to do this:
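
A condensed sketch of its structure (the image, URLs, and service names are illustrative assumptions; the line numbers referenced in the bullets below are from the full file):

version: "3.7"

services:
  secret-service-1:
    image: alpine:latest                    # assumed image
    command: sleep 86400
    environment:
      - API_URL=https://api1.example.com    # assumed URL for API 1
    secrets:
      - source: api_token1                  # name of the secret in the swarm
        target: api_token                   # name the container sees under /run/secrets/

  secret-service-2:
    image: alpine:latest
    command: sleep 86400
    environment:
      - API_URL=https://api2.example.com    # assumed URL for API 2
    secrets:
      - source: api_token2
        target: api_token                   # same in-container name, different value

secrets:
  api_token1:
    external: true
  api_token2:
    external: true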

Let’s take a look at the key pieces:

  • Lines 6-7 define specific environment variables for the first service. We define a specific URL to the API_URL environment variable, so you can simply use this variable in scripts.

  • Lines 10-12 define the secrets for the first service. This is a list, so you can list out multiple secrets to use. This is the long form of the list, which is where the magic happens.

  • Line 11 is the source name for the secret. This is the name you gave the secret within the swarm

  • Line 12 is the target name of the secret within the container. This allows you to have the same name for the secret with different data

  • Lines 34-35 define specific environment variables for the second service. We define a different URL to the API_URL environment variable, so you can simply re-use this variable in scripts.

  • Lines 38-40 define the secrets for the second service. This is a list, so you can list out multiple secrets to use. This is the long form of the list, which is where the magic happens.

  • Line 39 is the source name for the secret. This is the name you gave the secret within the swarm

  • Line 40 is the target name of the secret within the container. This allows you to have the same name for the secret with different data

  • Lines 65-69 have more information about the secrets

  • Lines 66 and 68 are the names of the secrets

  • Lines 67 and 69 define each of these as “external,” which means they are stored as part of the swarm and have already been created, as we did earlier

Now, if you simply do a docker stack deploy -c docker-compose.yml mysecret, you will see the file located at /run/secrets/api_token and the environment variable within each container, with different values.


Secret Service 1: /run/secrets/api_token contains Testing1, and API_URL points at the first API.

Secret Service 2: /run/secrets/api_token contains Testing2, and API_URL points at the second API.

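A hedged sketch of checking this for yourself (the container IDs are placeholders):

docker exec <service1-container> sh -c 'echo $API_URL; cat /run/secrets/api_token'
# prints the first API URL, then Testing1
docker exec <service2-container> sh -c 'echo $API_URL; cat /run/secrets/api_token'
# prints the second API URL, then Testing2
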
This provides another way to truly create a DevOps environment, with items defined at deploy/run time in addition to just environment variables. Secrets can be leveraged by other teams while being set up by an operations or security team. Plus, they are stored and transmitted securely: Security AND Scalability.


At C2 Labs, we love to work on challenging problems such as this. We serve our clients as a Digital Transformation partner, ensuring their projects are successful from beginning to operational handoff. We would love to talk to you more about the exciting challenges your organization is facing and how we can help you fundamentally transform IT to Take Back Control. Please CONTACT US to learn more.
