When Is Information “Too Much Information?”
You’ve surely noticed the trend – it’s hard to miss if you’ve been paying attention. Changelogs have been getting sparser and sparser, especially in the last year or so. Sometimes a simple “bugs fixed” or “performance improvements” is all that’s offered to describe what was done between versions. There are some pretty compelling reasons for this change, as well as several equally interesting counterarguments.
Gone are the days when a code change came with more paragraphs of text describing it than lines of code. Changelogs were enthralling pieces of text, crafted with penmanship capable of making some literary authors rethink their career choices. Happy days.
But those days are now behind us. Take any Windows update changelog, and you’ll have to dig deep to glimpse what exactly was wrong that required a fix and what was done to solve the problem. There was a considerable amount of (very loud) backlash when this came into effect some years ago, but Microsoft didn’t budge, and the policy is in place to this day – affecting both consumer and professional products.
One might think this was simply Microsoft being Microsoft, par for the course in the “large corporations” playbook. But look across the aisle to, say, the Linux Kernel, and the same thing is happening there. Sure, you can follow all the discussions, comments, and threads on multiple forums about bugs, vulnerabilities, and new features for the Kernel, but when you examine the changelog itself, you’ll struggle to identify the specific security fixes addressed in a given version.
Security through obscurity tends not to last for very long, and it’s a fine line that developers are treading here.
The Security Concerns
Advocates of reducing the amount of information in changelogs often argue that they are (in no particular order):
- Reducing the information that bad actors have to identify new vulnerabilities present in older, already deployed versions (and thus vulnerable);
- Providing a small amount of time for the “good guys” to patch their systems before hackers start trying to breach them, as the time between disclosing a vulnerability and exploits appearing “in the wild” is shortening to mere minutes;
- Asserting that users don’t need to know all the tiny details, since they’re supposed to adopt the latest version as soon as possible; always patching is cybersecurity 101.
However, there are just as compelling arguments on the other side:
- Bad actors have the resources, know-how, and incentive to compare the actual code differences between versions and don’t really need the changelogs. This is compounded by the fact that AI bots can now assist in this task, trivializing the knowledge level required.
- Hiding these details makes it harder, or even impossible, to prioritize what to patch since there is no comparable risk measurement available to assess.
- Taking the latest version every single time is not always completely safe, as no patch has ever caused any unintended consequences, right?
Arguments about this can become heated very quickly, and it’s easy to miss the point and devolve into “I know best” exchanges that are ultimately unproductive. Still, these debates highlight a problem with tangible effects on security. It is undeniably true that you should keep your systems running the most up-to-date version of any software on them. But common events like loss of hardware support or changes in scope, requirements, or availability can make that impossible. In that case, the lack of information hinders far more than it helps: you have no way of identifying a new vulnerability to which you are now exposed – until something bad happens.
Not explicitly labeling security issues as such also means that a CVE won’t be requested for the problem (or will be requested late). In a field like cybersecurity, which is focused almost exclusively on tracking, managing, scanning, and patching CVEs, having security issues that sidestep all that tooling and monitoring is, to say the least, debatable.
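Because CVE-based tooling keys off those identifiers, a fix that is never labeled simply never shows up in a scan. A minimal sketch of the kind of pattern matching such tooling performs (the changelog entries and version numbers here are invented for illustration):

```python
import re

# CVE identifiers follow the pattern CVE-<year>-<sequence of 4+ digits>.
CVE_PATTERN = re.compile(r"CVE-\d{4}-\d{4,}")

def find_cves(changelog: str) -> list[str]:
    """Return every CVE identifier mentioned in a changelog text."""
    return CVE_PATTERN.findall(changelog)

# Hypothetical entries: only the first is visible to CVE-driven tooling.
explicit = "2.4.1: Fixed CVE-2024-12345, a heap overflow in the parser."
obscured = "2.4.1: Bug fixes and performance improvements."

print(find_cves(explicit))  # ['CVE-2024-12345']
print(find_cves(obscured))  # []
```

The second entry may well contain the very same fix, but to every scanner, dashboard, and compliance report built around CVE identifiers, it is invisible.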
Where We’re Headed
There may come a time when changelogs effectively become worthless.
On the Windows Server Insider builds, there isn’t even one – and the irony here is that you’re running a beta version doing test work for free and yet you have no way of knowing what you’re supposed to be testing. This could become the norm rather than the exception. The consumer OS versions no longer provide easy access to the patch description, which now just contains a generic “bugs fixed” mention. You have to dig deeper to find more detail, and even that is gradually becoming available to a smaller audience.
Again, it’s important to stress that this isn’t something tied to just closed-source software. The open-source world is also adopting this approach as valid. In fact, this creates a peculiar situation for open-source projects. On one hand, you have freely available source code, bug reports, and a desire to incentivize developers to work on fixing bugs and adding features. But on the other hand, you purposefully hide information that would help in those tasks. Something doesn’t add up here.
There is a very lengthy list of arguments around this debate that you can find in this thread, where some Kernel leaders discuss the merits of their chosen approach to this issue – as well as the extensive criticism that goes along with it.
The AI Impact
AI is evolving rapidly, and it is hard to keep up with all the new features. But one well-established feature of the current AI bots is their ability to help you understand and find issues in code. It is trivial to paste a before and after code block into an AI bot chat interface and prompt it to “find the bugs that were solved with those code changes” and “how could those bugs be exploited.”
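To see how little the changelog prose matters once the code is public, consider a hypothetical before/after pair in which a fix adds a missing bounds check – a plain textual diff, the exact input you would paste into an AI bot, already points straight at the fix:

```python
import difflib

# Hypothetical vulnerable function: no validation of the index argument.
before = """def read_item(items, index):
    return items[index]
""".splitlines(keepends=True)

# The patched version adds a bounds check - the diff makes the fix obvious.
after = """def read_item(items, index):
    if not 0 <= index < len(items):
        raise IndexError("index out of range")
    return items[index]
""".splitlines(keepends=True)

# Produce a unified diff, the same format tools like `git diff` emit.
diff = "".join(difflib.unified_diff(before, after,
                                    "v1.0/reader.py", "v1.1/reader.py"))
print(diff)
```

The added `+` lines reveal exactly what was unchecked before, regardless of whether the release notes said anything beyond “bugs fixed.”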
Open-source software will feel this impact much faster than closed-source software, as its accessibility works against it in this regard. Checking diffs between versions of software to identify security vulnerabilities was already something threat actors engaged in, and the bar has now been lowered considerably. The good guys, meanwhile, rarely have the availability and resources to do the same themselves – they have plenty of other concerns, whereas attackers can focus solely on finding and exploiting bugs.
The “obscurity” in the “security through obscurity” is about to become pretty well illuminated.
Magic Bullets or Lack Thereof
There is no magic bullet for this problem. So far, it has proven intractable, and, incredibly, everyone believes their own approach is the best – which is why they are doing it that way. It can be as extreme as a single system running three applications with three different approaches – the OS, the web server, and the database. In the end, someone pays the price for this disparate approach to information sharing (or hiding), and it is yet another hurdle IT professionals must clear to do their jobs effectively.
Ultimately, the debate surrounding changelogs and the amount of information shared in them is complex and multifaceted. As technology and AI continue to evolve, the industry may need to reevaluate its approach to security and information sharing in order to strike a balance between transparency and protection.