The Many Faces of Patching
Keeping your systems up to date can be done in many different ways, each with its own pros and cons. Some so-called “patching” methods are not even patching at all. This is your one-stop guide to making sense of the different patching offerings out there.
Patching is both an IT process and a fundamental security practice. It is generally described as the process of fixing issues in, or adding features to, a given system by adding new components or replacing existing ones with updated versions. This can be done at many different levels: subsystem code, dependencies, or core functionality. Similarly, you can patch specific applications, operating systems, drivers, or any other component.
While the goal is always to end up with a more up-to-date version of whatever is being patched, how you go about accomplishing this is what differentiates the various patching approaches.
As with other terms in IT, like debugging, patching derives from a physical process: it originally referred to covering holes in punch cards, i.e., you would literally “patch” a section of the code. That practice has long since dropped out of fashion, so we’ll cover the more modern approaches instead.
Traditional Patching

This is what most people will recognize as patching. It consists of downloading an updated version of a given piece of software, then replacing the corresponding files on disk with the new versions.
It’s relatively simple to implement and straightforward to understand: an old version of a given piece of software gets replaced by a new version, either entirely or partially. You need to restart the application or reboot the system to pick up the new version, but there are no other moving parts. Of course, therein also lies the biggest drawback: restarting applications or rebooting entire systems is a slow operation that causes significant disruption to whatever workload is running. Because of this, it’s not something that can be done in an ad-hoc manner, so it usually takes place in what is called a “maintenance window”: a pre-approved period of time during which systems are expected to have availability or performance issues.
Maintenance windows are themselves slow to set up and prone to overrunning their predicted time allotments when issues arise.
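At its core, the on-disk step of traditional patching is just a file swap. The sketch below illustrates that step in Python, under assumed conditions: the file names, paths, and the `replace_on_disk` helper are all hypothetical, and a real package manager does considerably more (verification, dependency resolution, hooks). Note how the swap alone changes nothing for an already-running process, which is exactly why the restart is needed.

```python
import os
import shutil
import tempfile
from pathlib import Path

def replace_on_disk(new_version: str, installed: str) -> None:
    """Swap the installed file for the updated one atomically.

    Any process already running the old version keeps using it until it
    is restarted, which is why traditional patching needs a restart or
    reboot (and hence a maintenance window) to take effect.
    """
    # Stage the new file on the same filesystem so os.replace() is atomic.
    fd, staged = tempfile.mkstemp(dir=os.path.dirname(os.path.abspath(installed)))
    os.close(fd)
    shutil.copy2(new_version, staged)
    os.replace(staged, installed)  # atomic rename on POSIX

if __name__ == "__main__":
    # Demo with throwaway files standing in for a real application binary.
    with tempfile.TemporaryDirectory() as d:
        installed = Path(d, "app.bin")
        update = Path(d, "app-v2.bin")
        installed.write_text("version 1")
        update.write_text("version 2")
        replace_on_disk(str(update), str(installed))
        print(installed.read_text())  # prints "version 2"
```

The atomic rename matters in practice: a crash mid-update leaves either the old file or the new file in place, never a half-written one.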
Virtual Patching

This is something that has “patching” in the name and appears, at face value, to deliver patching results, but it in fact operates at a completely different level. No actual code replacement or correction takes place with virtual patching. Instead, it consists of implementing threat detection at the firewall level, blocking known attack patterns. From an attacker’s perspective, the attack fails, so the attacker might assume the system is patched when, in fact, it isn’t.
This method of “patching” has several disadvantages: corrected code never actually reaches the systems, and the protection only covers known attack patterns, which means local-only issues are ignored entirely and the detection can be evaded by modifying the network traffic that corresponds to a given threat.
If those disadvantages don’t immediately disqualify this “patching” method, the pros include zero disruption (since nothing is actually changed inside a system), a single deployment covering multiple systems (it is essentially a specialized application firewall), and immediate protection for all systems behind it whenever new remote threat signatures are added.
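A toy sketch of the signature-matching idea behind virtual patching, with hypothetical signatures (real web application firewalls such as ModSecurity inspect full HTTP requests, not just a path string). It also shows the evasion weakness mentioned above: a trivially modified payload slips past the signature even though the underlying vulnerability is unchanged.

```python
import re

# Hypothetical signature list: each entry blocks one known attack
# pattern at the perimeter. The vulnerable code itself is never touched.
SIGNATURES = [
    re.compile(r"\.\./"),               # path traversal attempts
    re.compile(r"(?i)union\s+select"),  # a classic SQL injection probe
]

def virtual_patch(request: str) -> bool:
    """Return True if the request matches a known attack pattern and
    should be blocked before it reaches the (still vulnerable) system."""
    return any(sig.search(request) for sig in SIGNATURES)

print(virtual_patch("/download?file=../../etc/passwd"))    # True: blocked
print(virtual_patch("/reports?id=1 UNION SELECT pass"))    # True: blocked
print(virtual_patch("/reports?id=1"))                      # False: allowed
# Evasion: comment-obfuscated SQL defeats the whitespace-based signature.
print(virtual_patch("/reports?id=1 UNION/**/SELECT pass")) # False: slips through
```

The last call is the whole argument against relying on virtual patching alone: the signature, not the vulnerability, determines what gets blocked.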
Firmware Updates

Firmware updating is a specific case of traditional patching, with the added caveat that you’re patching code stored on media that is usually difficult to update, often written directly into EPROM (erasable programmable read-only memory) chips. This used to be the norm for early smart devices and first-generation IoT devices. The software would be loaded in the factory and would rarely, if ever, receive updates. This was partly because updating and security were not the primary concerns they are today, but also because the way the software was stored made it difficult to update. More recent devices keep the code, completely or partially, on more standard media, like SSDs, SD cards, or similar, which are much easier to modify.
When code ran on chips that had to be connected to specialized writing hardware, with proprietary software requirements and cables, the added complexity made the whole process very unappealing to most IT teams. As a result, those devices would simply never see an update throughout their entire useful lifetime. Unfortunately, many are still around: print servers, IP cameras, and data center sensors are all good examples of devices that operate this way.
Live Patching

Live patching is the process of modifying running code, replacing known-buggy sections or functions with corrected versions of those same sections. This is done entirely in memory and does not require the application to be restarted for the changes to be picked up. One moment the software contains a bug in a function; the next, a corrected version of that function is used instead.
There are no restarts, no reboots, and no disruption. Because of this, patches can be deployed immediately, as the operation does not require a maintenance window, allowing for a faster response to emerging threats. Rather than waiting weeks or months for a patch to be deployed, as with traditional patching, deployment can now happen within hours or even minutes of a patch becoming available.
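The core idea, redirecting calls from a buggy function to a corrected one while the program keeps running, can be loosely illustrated with runtime function replacement in Python. This is an analogy only: real live patching (e.g., the kernel’s livepatch mechanism) redirects execution at the machine-code level, and the function names below are invented for the example.

```python
# A "running application" containing a buggy function.
def discount(price: int) -> int:
    return price + 10  # bug: adds the discount instead of subtracting it

def checkout(price: int) -> int:
    return discount(price)  # looks up `discount` at call time

print(checkout(100))  # buggy behavior: prints 110

# The "live patch": a corrected implementation is swapped in while the
# program keeps running; no restart, and `checkout` is never modified.
def discount_fixed(price: int) -> int:
    return price - 10  # corrected logic

discount = discount_fixed  # redirect all future callers to the fix

print(checkout(100))  # patched behavior: prints 90
```

The trick works here because `checkout` resolves the name `discount` at every call; kernel live patching achieves the equivalent redirection for compiled code, which is where the complexity described below comes from.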
The added complexity of live patching lies at patch creation time. The patch has to be built in such a way that it can be loaded into the right memory space, using the same variable sizes and alignments as the original code. This complexity is hidden from the users of a live patching solution; it is only visible to the live patch provider.
The Linux kernel has had a live patching subsystem capable of deploying patches for over 10 years now, and multiple solutions have been built on top of it. KernelCare Enterprise is one example, able to live patch the Linux kernel, critical system libraries, databases, and even hypervisors.
There are several variations of live patching, including temporary and permanent approaches. The distinction lies in how the live patching process is implemented: as a stop-gap until the next traditional patch cycle, or as a permanent alternative to traditional patching.
There are many ways to approach patching. For any organization with even the smallest concern for keeping its systems, data, and users secure, patching is a fundamental issue that must be addressed at both the strategic and operational levels. Each environment has its own quirks, so be sure to choose the right process for your specific situation.