How KernelCare Helps You To Keep Your Containerized Workloads Secure
OS virtualization was a huge step forward for the delivery of large-scale enterprise computing applications. But virtual machines were just the start. Containers take virtualization a step further, delivering unprecedented flexibility as applications become almost seamlessly transportable.
However, containers come with a hidden security risk that derives from the nature of containerization. In this article, we discuss the role of containerization in the enterprise, explain why containers can be an enterprise security risk – and point to effective solutions.
Hold on. What Is A Containerized Workload?
To understand why containers can be a security risk you need to understand what exactly a container is – and what a containerized workload is.
First, a step back to virtualization. Not that long ago, a computer OS and the hardware it ran on were inextricably linked – one physical server was associated with one operating system. Virtualization changes this by inserting a layer between the hardware and the operating system that runs on it, removing the one-to-one binding between the operating system and the hardware.
To a virtualized operating system it looks as if it is the only operating system on the hardware. In the background, the virtualization software (the hypervisor) manages the abstraction layer to ensure each OS is unaware of the presence of other operating systems on the machine.
Thanks to virtualization you can run several operating systems on one machine, and easily transport an operating system from one machine to another. As a result, you enjoy a boost in efficiency and flexibility.
From Virtualization To Containerization
We point to virtualization because it helps explain what containerization does. Where virtualization puts a layer of abstraction between the physical equipment and the operating system, containerization adds an abstraction layer between the operating system – and the application.
Essentially, where virtualization virtualizes hardware, containerization virtualizes the operating system. With containerization, each application is encapsulated in an isolated unit called a container. Containerized applications do not share the operating system environment, because each container operates discretely.
However, containers do share read-only access to elements of the operating system, including its kernel. Nonetheless, to each application, it looks like it is running alone in an operating system all of its own – and applications are mutually unaware that they share the operating system environment.
Consequently, a containerized workload is an environment in which the applications that support enterprise requirements run in isolated containers inside a host operating system.
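A quick way to see this shared-kernel arrangement in practice – assuming a Linux host with Docker installed – is to compare kernel release strings on the host and inside a container:

```shell
#!/bin/sh
# Print the kernel release the host is running.
echo "host kernel:      $(uname -r)"

# A container reports the *same* release string, because containers
# share the host kernel rather than booting one of their own.
# (Guarded so the script still runs where Docker is absent.)
if command -v docker >/dev/null 2>&1; then
    echo "container kernel: $(docker run --rm alpine uname -r)"
fi
```

If Docker is available, both lines print an identical kernel release – there is only one kernel on the machine.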
How Containers Are Used In The Enterprise
You might guess that isolating applications inside containers provides benefits in terms of security and stability, and you would be right. However, the benefits of containerization go far beyond that.
At its core, containerization allows enterprises to package and deploy applications across different types of infrastructure in a standardized manner. In theory, any host that is capable of hosting a (compatible) container is also capable of hosting your containerized application.
This happens because containerization abstracts the application layer. Container technology such as Docker facilitates this abstraction in a standardized manner. The key to this standardized behavior is something called a container image. A container image includes not only the application code but also system libraries and other key settings that ensure that a containerized application is ready to go.
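As an illustration of what goes into a container image, here is a minimal sketch of a Dockerfile; the application name and dependency file are hypothetical:

```
# Illustrative Dockerfile: package a Python application together with
# its dependencies so it runs identically on any compatible Docker host.
FROM python:3.11-slim

WORKDIR /app

# System libraries and language dependencies are baked into the image.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Application code and the command the container runs at start-up.
COPY app.py .
CMD ["python", "app.py"]
```

Everything the application needs is declared in the image, which is what makes the result "ready to go" on any compatible host.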
This brings us to a key benefit of containerization that goes beyond application security and stability: containers are intrinsically portable. It leads to several important advantages for enterprise applications:
- Portability means agility. Containerized applications can easily be run in a new, unfamiliar environment. A developer can release a container image and enterprise clients can confidently deploy the application as long as the container fits a standard containerized environment such as Docker. Containers allow for incredibly agile application deployment.
- Containers are resource light. It is easy and fast to ramp up a containerized application – far easier and faster than starting up a full virtual operating system. At the same time, a single host machine can support far more containers than virtual OS instances. That’s why containers have even bigger benefits than virtualization when it comes to data center efficiency.
- Automated deployment and management. The standardized nature of containers means that enterprises can dynamically deploy and manage containerized applications – a practice called container orchestration. Orchestration tools such as Kubernetes automate the roll-out and monitoring of applications at massive scale.
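To make the orchestration point concrete, here is a sketch of a Kubernetes Deployment manifest; the application and image names are illustrative:

```
# Illustrative Kubernetes Deployment: ask the orchestrator to keep
# three replicas of a containerized application running at all times.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app            # hypothetical application name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: example/app:1.0  # hypothetical container image
        ports:
        - containerPort: 8080
```

Kubernetes continuously reconciles the cluster toward this declared state – restarting failed containers and rescheduling them across hosts without manual intervention.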
Containerization extends and amplifies the benefits of virtualization. Enterprise users gain unprecedented levels of scalability and manageability when applications are deployed via standardized containers.
The Relation Between Containers And The Host
Containerization delivers significant benefits, particularly where large-scale, enterprise applications are concerned.
Containers are a sea change in the way applications are deployed, however, and from both a practical and a security viewpoint it is helpful to understand the relationship between a container and its host.
While it is true that containers run in an isolated fashion, it is important to understand that containers also share components. Doing so eliminates the overhead of running a separate operating system for every application in a container.
It also means that containers are quicker to start up than a full operating system. So, which elements of the host are shared by containers? The operating system kernel is the most important: there is only one running kernel on the host, and every container on the host shares it.
Next, drivers and OS application binaries are shared by the containers – while containers also share host OS storage, though the storage is isolated. Finally, containers also share the container platform – such as Docker, for example.
The ability of containers to run in such an isolated fashion stems from a few core Linux features that are used by container platforms (e.g. Docker) to enforce isolation. Kernel namespaces and cgroups facilitate the ability of independent containers to all share the same Linux host.
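The namespace handles that underpin this isolation are visible from any Linux shell; a rough look, assuming the standard paths found on modern distributions:

```shell
#!/bin/sh
# Every process holds one handle per namespace type under /proc/<pid>/ns;
# container runtimes give each container its own private set of these.
ls -l /proc/self/ns/ 2>/dev/null || echo "no /proc namespace view (non-Linux host)"

# cgroups, the other half of the picture, cap the resources a container
# may consume; the hierarchy is typically mounted under /sys/fs/cgroup.
ls /sys/fs/cgroup 2>/dev/null | head -n 5
```

On a Linux host you will see entries such as `pid`, `net`, `mnt`, and `uts` – each one a namespace type a container platform can isolate.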
Isolated, Yes, But Still Vulnerable
It is clear that containers deliver a high level of isolation, which in turn delivers a degree of protection – threats can’t propagate from one application to another all that easily.
However, due to the sharing of resources inherent to containerized workloads, containers remain vulnerable – and can indeed introduce new vulnerabilities that organizations must watch out for. Let’s take a look at a few examples:
- Image security. It is critical to ensure that you only run container images from a trusted source. Both a poisoned image and an unpatched image can open the door to attacks. Image safety and security really matter.
- Container platform security. Just like any other software, your container platform can contain security flaws. Consider, for example, the runC container breakout vulnerability (CVE-2019-5736) in the runC runtime that many container platforms rely on.
- Privilege escalation risk. Though applications should in theory not be able to break out of their containers, it is worth being very careful with the user privileges in use inside a container. If a container app runs as root for any reason, there is a risk that a breakout gives the attacker root access to the entire machine. The runC flaw in the previous point is a typical breakout security risk.
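One mitigation for the privilege escalation point – sketched here under the assumption of a Docker host – is to start the container process as an unprivileged user rather than root:

```shell
#!/bin/sh
# Run the containerized process as UID/GID 1000 instead of root, so a
# container breakout does not immediately hand the attacker root on
# the host. (Guarded so the script still runs where Docker is absent.)
if command -v docker >/dev/null 2>&1; then
    docker run --rm --user 1000:1000 alpine id \
        || echo "docker daemon unavailable"
else
    echo "docker not installed; command shown for illustration only"
fi
```

With `--user` set, `id` inside the container reports an unprivileged UID, which sharply limits the damage a breakout can do.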
So, while applications are isolated, the isolation mechanism – containers – bring their own security risks to the table and enterprises need to manage their containerized workloads in a way that mitigates these security risks.
Unpatched Kernels: A Hidden Container Security Risk?
The shared components of containers clearly lead to security risks. But arguably the biggest security risk for containerized applications is the shared OS kernel. The security risk posed by the host kernel essentially hides in plain sight.
Remember: every container on a host shares the same operating system kernel. There is a risk that organizations overestimate the isolation – and therefore security – benefits of containerization by neglecting to factor in the risks posed by that shared kernel. Once the OS kernel is compromised, the applications inside the containers can be compromised too. And we know that OS kernels have a long track record of vulnerabilities that lead to security breaches of all shapes and sizes.
That is why kernel security matters so much when it comes to securing containerized workloads. There are a few things you can do to ensure a secure kernel for your containerized workload: stay aware of the latest kernel security risks and ensure your Linux kernel only contains the services you need for container workloads.
Don’t forget, of course, about kernel patching.
The Problem With Patching Kernels
The Linux kernel is continuously patched. As vulnerabilities emerge, the community adjusts the kernel code to address them – and releases a patch. Unpatched systems are at risk; patched systems are not.
Patching should be a no-brainer: it’s a simple security measure that significantly boosts your container security. Yet patching consistently can be very difficult:
- Patching is time-consuming. Given the volume of Linux kernel patches that are released many organizations can struggle to keep ahead of patching – particularly across large technology estates with thousands of machines.
- Consistent patching is expensive. Unless automated, the roll-out of patches can drain the resources of even the best tech teams. Patching becomes an expensive process and an easy target for savings and cost cuts.
- Patching is disruptive. A kernel patch often requires a server restart. For many workloads, a restart will be highly disruptive. Restarting a container can be very fast – restarting the entire host can lead to a noticeable service disruption. As a result, patches are often delayed.
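To illustrate the disruption point: with conventional patching, the kernel in memory only changes at boot, and Debian-family distributions flag a pending reboot in a marker file. A rough check looks like this:

```shell
#!/bin/sh
# Until the host restarts, it keeps running the old (possibly
# vulnerable) kernel code, even after the new kernel is installed.
echo "running kernel: $(uname -r)"

# Debian/Ubuntu write this marker when an installed update (such as a
# new kernel) needs a reboot to take effect.
if [ -f /var/run/reboot-required ]; then
    echo "reboot required to activate the newly installed kernel"
else
    echo "no reboot currently flagged (or not a Debian-family host)"
fi
```

That mandatory reboot is exactly the service disruption that causes patches to be delayed on busy container hosts.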
Clearly, one of the biggest security risks for containerized workloads is, in fact, very difficult to manage consistently.
KernelCare Live Patching Integration
Patching automation is the first step to successfully reduce the risks of kernel vulnerabilities under containerized workloads. Another critical step is live kernel patching: the ability to update the OS kernel without requiring a server restart. KernelCare live patching offers both.
Enterprises that manage containerized workloads can use KernelCare to ensure that the kernels on their container hosts are consistently patched – and patched without disruption. Doing so is simple: install KernelCare on the host the way you normally would.
When you use KernelCare to patch your container hosts you also, by consequence, enjoy fully automated patching of the kernel used by the containers. Because containers share the host's kernel, you only need to install KernelCare on your container host once – and kernel updates will apply to all the containers on that host.
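In practice, managing live patches on the host comes down to a couple of commands; the flags shown are those documented by KernelCare at the time of writing, so check the vendor's current documentation:

```shell
#!/bin/sh
# kcarectl is KernelCare's management utility on the container host.
# (Guarded so the script still runs on machines without KernelCare.)
if command -v kcarectl >/dev/null 2>&1; then
    kcarectl --update   # download and apply the latest live patches
    kcarectl --info     # report the effective (patched) kernel version
else
    echo "kcarectl not installed; commands shown for illustration only"
fi
```

Because patches are applied to the running kernel, no reboot is needed – and every container on the host benefits at once.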
In short, KernelCare is an efficient and practical way to handle one of the biggest security risks associated with containerization.
Other Tips For Container Kernel Security
Before we conclude, we'll touch on a few other points worth considering when keeping your container host's kernel secure. Yes, patching is critical, but you should also consider the following points – most of which are simply good practice for server security:
- Remove the root user. By restricting the root user you can ensure that a container that breaks out of isolation does not automatically gain root access to the entire server.
- Limit the kernel modules you run. When you set up a server you can extend it by adding kernel modules. For maximum security, we suggest you install only the minimum number of kernel modules you need for your containerized environment – but do keep the critical modules that enforce security across roles and permissions.
- Make use of container security tools. There are tools specifically developed to scan your container configuration and match it against best practice. docker-bench-security is one example: it checks dozens of points in your configuration and evaluates them against best-practice rules.
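A sketch of running docker-bench-security against a Docker host – it needs git, a working Docker daemon, and root privileges to inspect the full configuration:

```shell
#!/bin/sh
# docker-bench-security audits the host and its containers against the
# CIS Docker Benchmark. (Guarded so the script degrades gracefully on
# machines without git or Docker.)
if command -v docker >/dev/null 2>&1 && command -v git >/dev/null 2>&1; then
    git clone --depth 1 https://github.com/docker/docker-bench-security.git \
        && cd docker-bench-security \
        && sh docker-bench-security.sh \
        || echo "unable to run the benchmark on this machine"
else
    echo "docker/git not available; commands shown for illustration only"
fi
```

The report flags each check as PASS, WARN, or INFO, giving you a prioritized list of configuration hardening work.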
As always, comprehensive security requires careful server configuration and ongoing, pro-active server management.
Containerization is transforming enterprise application workloads by reducing the cost of infrastructure, speeding up the process of rolling out applications, and introducing unprecedented flexibility and scalability.
However, running applications inside containers also brings unique security risks that may not be immediately obvious. Kernel security and patching is one critical area that must be handled with great care.
KernelCare gives your organization the ability to automatically update the kernels that support your containerized workloads – and to do it without the disruption and downtime associated with system restarts.