What Are Containers?

Container technology—often referred to simply as “containers”—is a mechanism for packaging an application so that it can run in isolation from other processes. Getting their name from the shipping industry, containers are units of software, made up of code and dependencies, that enable applications to run quickly, reliably, and uniformly, regardless of their computing environment. 

In this post, we’ll briefly explain how containers work, examine why container technology is so popular today, introduce common container service providers, and outline some security best practices that you can follow when taking this approach to building software.

How do containers work and why should I use them?

Developers face many challenges when the supporting software, network topology, or security policies they use in different environments (e.g., staging vs. production environments, physical data centers vs. private or public cloud environments) aren't identical. Such inconsistencies can make it difficult to identify vulnerabilities effectively and can add unnecessary obstacles to app deployment.

Containers, often compared to virtual machines (VMs), help to minimize problems like these. How? Through a decoupling process that enables applications to run consistently in whichever environment the developer chooses. Like VMs, containers enable applications to be packaged alongside libraries and other dependencies, creating isolated environments for running software. But because containers virtualize at the operating system (OS) level rather than the hardware stack, sharing the host's OS kernel directly, they are far more lightweight and therefore easier to work with.

This allows developers to focus on application logic and dependencies, while also enabling them to work faster, deploy software more efficiently, and scale more easily. It also helps IT operations teams to focus on deployment and management without having to worry about application configurations and versioning issues.
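To make the workflow concrete, here's a minimal sketch of packaging and running an app with Docker. The app file, image name, and base image are illustrative assumptions, not a prescribed setup:

```bash
# A minimal sketch of the container workflow (names are illustrative):
# package an app and its dependencies into an image, then run it anywhere
# a container runtime is installed. Assumes an app.py exists alongside.
cat > Dockerfile <<'EOF'
# Base image supplies the OS userland and language runtime
FROM python:3.12-slim
# Add the application code
COPY app.py /app/app.py
# Command the container runs on start
CMD ["python", "/app/app.py"]
EOF

docker build -t my-app .   # bake code + dependencies into an immutable image
docker run --rm my-app     # run it as an isolated process on the host kernel
```

The same image runs unchanged on a laptop, in a data center, or in the cloud, which is the consistency the decoupling described above buys you.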

Popular container technologies

From popular enterprise solutions to free, open-source platforms, there are several container service providers to choose from, and each works a little differently.

  • Docker, for many developers today, is the standard for building containerized apps, so much so that "Docker" and "containers" are almost synonymous. Its open-source upstream, the Moby project, provides components used by many major infrastructure-as-a-service offerings and open-source serverless frameworks.
  • Kubernetes is a free, open-source container orchestration system. Where Docker builds and runs individual containers, Kubernetes automates the processes of deploying, managing, updating, and scaling them across clusters of machines.
  • Red Hat OpenShift is a platform that provides the foundation for on-premises, hybrid, and cloud containerization deployments. Put simply, it’s a Kubernetes-powered and Docker-supported platform-as-a-service offering that helps developers deploy applications.
  • Rancher, an open-source solution developed by Rancher Labs, was designed to simplify Kubernetes deployment and manage multiple clusters across different infrastructures. 

Benefits of containerization

Broadly speaking, container technology is easy to manage and maintain, meaning developers can more readily deploy applications at scale.

Greater efficiency and agility 

As previously mentioned, containers are far more lightweight than virtual machine environments because they virtualize at the OS level, with multiple containers running atop a single OS kernel. This means applications can be more rapidly deployed, patched, and scaled, which helps DevOps teams remain agile and accelerate production cycles. Containers also use far less memory than VMs.

More consistent environments

With containers, developers can create consistent environments that are more easily isolated from other applications. And since containers can also include software dependencies, such as binary code, programming language runtimes, configuration files, and other libraries, applications behave consistently regardless of where they are deployed.

Having a consistent environment means developers and IT teams can be more productive, spending less time on tasks like debugging and diagnosing environment issues and more time on developing the functionalities that users demand.

Increased portability

Containers can run on all major operating systems, including Linux, macOS, and Windows (on macOS and Windows, they typically run inside a lightweight Linux VM). They can also run on virtual machines, in data centers, or in the public cloud. This is ideal for developers because it gives them the flexibility to run their software wherever they prefer.

Challenges of containerization

Like most other software, containers can also introduce challenges and risks to the development process.

Barriers to adoption

First and foremost, it may be challenging to implement container technology across your organization. Adopting containers—and using them most effectively—requires changing processes and infrastructure, and it can be tough to get buy-in from developers.

Potential for larger attack surfaces

Containers are often considered less secure than VMs because they can present a larger attack surface for bad actors to exploit. A vulnerability in the host kernel, for instance, could offer an attacker a route into every container that shares it.

However, in recent years, many container platforms have made efforts to develop software that enhances Docker and container security. These solutions profile containers’ expected behaviors, processes, networking activities, and storage practices, ensuring any anomalous or malicious behavior is flagged.

Improper use by developers

Containers are agile, portable, and relatively easy to deploy and use. However, it's still possible for developers to make mistakes along the way. Building images from many loosely vetted layers, for example, can pull in unneeded packages and stale dependencies, expanding an organization's attack surface and making it more difficult to defend.
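One common remedy is a multi-stage build, which keeps compilers and build-time dependencies out of the image that actually ships. Here's a hedged sketch; the Go app and image tags are illustrative:

```bash
# A multi-stage build: stage 1 has the full toolchain, stage 2 ships
# only the compiled binary, shrinking the attack surface.
cat > Dockerfile <<'EOF'
# Stage 1: full toolchain, used only to compile the binary
FROM golang:1.22 AS build
COPY main.go /src/main.go
RUN cd /src && go build -o /out/app main.go

# Stage 2: minimal runtime image; build tools never reach production
FROM gcr.io/distroless/static
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
EOF

docker build -t my-app:slim .
```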

Container security best practices

Containers may be powerful, but like many digital solutions, they need to be carefully deployed and maintained in order to minimize risks. Implementing (and automating) security processes will save organizations time and help ensure that they deliver flexible, agile, and secure software.

To show how the best practices below can help secure containers, we'll use Docker as an example.

Only use trusted images

Developers who want to create Docker containers either write a Dockerfile from scratch or build on an existing base image, and both approaches can introduce issues related to image integrity, provenance, and vulnerabilities.

To protect your containers, it's vital that you only download official images from public repositories, such as Docker Hub, and signed images from trusted developers. Docker Content Trust uses public and private keys to verify image integrity and author identity.
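Enabling Docker Content Trust is a one-line change on the client. The image name below is illustrative:

```bash
# With content trust enabled, the Docker client verifies signatures
# on pull and push; unsigned images are rejected.
export DOCKER_CONTENT_TRUST=1
docker pull nginx:1.27   # succeeds only if a trusted signature exists
```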

Educate your dev team

Organizations can reinforce these practices by making sure their development teams understand them. One of the most effective ways to do this is to set up a Docker Trusted Registry, which enables teams to build an internal library of vetted images that can be published and reused.
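As a rough sketch of the internal-library idea, here's the same workflow using the open-source registry image (Docker Trusted Registry itself is a commercial product); hostnames and tags are placeholders:

```bash
# Stand up a private registry for the team
docker run -d -p 5000:5000 --name internal-registry registry:2

# Publish a vetted image to the internal library
docker tag my-app:1.0 localhost:5000/approved/my-app:1.0
docker push localhost:5000/approved/my-app:1.0

# Teammates pull only from the internal library
docker pull localhost:5000/approved/my-app:1.0
```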

Use secure container versioning

When new versions of a container image are introduced, they may unwittingly bring security flaws with them. Not only that, but newly patched versions of legitimate container images can also destabilize a build.

To avoid this, pin each Dockerfile to the specific version of the base image it was developed against, or better yet, to the precise image digest it used.
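The progression looks like this in a Dockerfile; the tags are examples and the digest is a placeholder, not a real hash:

```bash
cat > Dockerfile <<'EOF'
# Avoid: a floating tag that can change underneath you
# FROM node:latest

# Better: a specific version tag
FROM node:20.11.1-alpine3.19

# Best: pin the exact image by digest (placeholder shown)
# FROM node@sha256:<digest-of-the-image-you-tested>
EOF
```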

Scan for vulnerabilities

While Docker Hub provides some assurance that images are official, it's still important to scan all downloaded images for vulnerabilities, especially considering that many official images have been found to contain bugs.

Organizations can scan for bugs using tools like Docker Security Scanning or OpenSCAP, and can integrate vulnerability scanning into their container management policies.
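As a sketch, here's what scanning a pulled image can look like from the command line. Tool availability varies by platform, and the image name is illustrative:

```bash
docker pull nginx:1.27

# With OpenSCAP's container tooling (where installed):
oscap-docker image-cve nginx:1.27

# Trivy is another widely used open-source scanner with similar usage:
# trivy image nginx:1.27
```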

Observe network traffic

Security should be top of mind whenever containers are deployed, and that means monitoring network traffic. For example, in some container frameworks, the host OS permits all network traffic between containers by default, which may allow a malicious or compromised program to reach containers it has no business talking to. It could also enable an attacker to observe sensitive traffic.

To keep your containers safe, disable inter-container communication by starting the Docker daemon with the --icc=false flag.
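Here's a sketch of both ways to apply the setting, assuming a Linux host where you can edit the daemon configuration:

```bash
# Option 1: pass the flag when starting the daemon
dockerd --icc=false

# Option 2: set it persistently in /etc/docker/daemon.json
cat > /etc/docker/daemon.json <<'EOF'
{
  "icc": false
}
EOF
# Then restart the daemon (e.g., systemctl restart docker).
# Containers that genuinely need to talk can be joined to an explicit
# user-defined network instead of relying on the open default bridge.
```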

Avoid container breakouts

Docker's daemon runs as root, which means a process that gains root inside a container could potentially compromise the host OS. And because all containers share the host's kernel, such a breakout could grant an attacker access to multiple production containers.

Where possible, ensure containers aren't run as root, and apply least-privilege access to minimize risks.
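A minimal sketch of running as a non-root user, with capabilities dropped at runtime; the user name, IDs, and image are illustrative:

```bash
cat > Dockerfile <<'EOF'
FROM alpine:3.19
# Create an unprivileged user and switch to it
RUN addgroup -S app && adduser -S -G app app
USER app
CMD ["sleep", "infinity"]
EOF

docker build -t my-app:nonroot .
# Belt and suspenders: drop Linux capabilities and forbid privilege escalation
docker run --rm --cap-drop=ALL --security-opt no-new-privileges my-app:nonroot
```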

Automate security controls

Developers can automate security controls, such as image scanning, signature verification, and policy checks, across their environments and cloud container deployments. This speeds up the Docker instances used for development and lets teams move containers into production more securely.
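As a hedged sketch of what this can look like in a CI pipeline, here's a script that gates an image behind a vulnerability scan and signing before it ships. The scanner choice (Trivy) and the CI variable name are assumptions, not a prescribed stack:

```bash
#!/usr/bin/env bash
# Automated security gates in CI: build, scan, then push only if clean.
set -euo pipefail

IMAGE="my-app:${CI_COMMIT_SHA:-dev}"   # tag images by commit for traceability

docker build -t "$IMAGE" .

# Gate 1: fail the pipeline on high/critical vulnerabilities
trivy image --exit-code 1 --severity HIGH,CRITICAL "$IMAGE"

# Gate 2: only push images that passed the scan, with signing enforced
export DOCKER_CONTENT_TRUST=1
docker push "$IMAGE"
```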

Strengthen security with authentication

Authentication is crucial when running Docker containers at scale, especially considering that enterprise environments can contain thousands of them. Authentication layers like OAuth 2.0 and OpenID Connect can be used in conjunction with Kubernetes to issue an ID token that verifies users' identities and provides secure access to specific containers.
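Here's a sketch of how OpenID Connect wires into Kubernetes: the API server is pointed at an OIDC issuer, and clients present the ID token that issuer mints. The issuer URL and client ID are placeholders:

```bash
# Configure the Kubernetes API server to trust an OIDC identity provider
kube-apiserver \
  --oidc-issuer-url=https://id.example.com \
  --oidc-client-id=kubernetes \
  --oidc-username-claim=email

# Client side: authenticate with the ID token from the provider
kubectl --token="$ID_TOKEN" get pods
```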

Get started with containers

At Okta, we’ve done a lot of thinking around container technology—including Docker and Kubernetes. Check out the following resources to learn more about working with containers:
