

Why you should automate Linux kernel updates

June 10, 2019 - TuxCare PR Team

Software is complex and constantly changing. Bugs are inevitable. Before the internet age, bugs were just faults to fix. Now, they are opportunities, one of the ways hackers get unauthorized access to systems. The cybersecurity industry thrives on this threat. Their products ‘defend’ and ‘protect’ but cannot plug a simple security loophole: the exploitation of vulnerabilities that persist in outdated and unpatched operating systems and applications.

This article reviews the background to this problem and gives tips to remedy it, using unattended update packages for Ubuntu, Red Hat, and Fedora, and live patching solutions such as KernelCare, kpatch, kGraft, Ksplice, and Canonical Livepatch.

Change on Steroids

There’s a quiet crisis in software development.

For programmers, things have never been so good. Their work touches so many people in almost every walk of life. The opportunities for developers are stratospheric, and the possibilities for us, the users, mind-boggling.

Paradoxically, for programmers, things have never been so tough. The pressure on them to learn and make new stuff has never been greater. Long before cutting a single line of code, they spend hours reading and learning languages, then rereading and relearning as APIs and behaviors change. Languages even have their own monthly ‘hit parade’ (www.tiobe.com), and programmers applying to tech giants often need to list half a dozen languages on their résumés to be in with a chance of an interview. Gone are the days of getting a job with a dog-eared K&R and a photocopied SQL cheat sheet.

No more heroes, anymore

And there are no more heroes. Now, they all work in teams, and build on the work of teams. Their members are far-flung and flexible, connected and keen, yet few of them ever meet in person. From them code cascades and flows into other teams who test it and stretch it and do their best to break it.

This all takes time, but there is no time, because you and I are waiting for the new stuff, the next iteration, the latest update. We get it as code that comes coalesced into a great big bubblegum ball of concentrated human effort called a release. It’s the product of a lot of people writing code in a lot of languages. It’s change on steroids, and the prize is progress.


The Good, the Bad, and the Cuddly

How software complexity becomes risk.

First, let’s agree that progress is good. So, if progress is good, then change is good. But change means risk, and that’s not good, because code that always changes has a greater risk of bugs. Before the age of the internet, bugs were just bugs. Though risks lurked in unintended and undetected bugs, there was a limit to their scope and effect. In the age of the internet, limits no longer apply.

With the explosion of interconnectivity among applications, bugs are no longer merely an inconvenience. They’ve become an opportunity for a species of programmer whose talent equals that of the people writing the code: programmers who are trying to break it. They go by various names and work toward various ends. The most popular name is hackers, and their aims are multifarious.

I see three kinds of hacker: the good, the bad, and the cuddly. The good are the active research community, constantly probing new and old software in search of vulnerabilities. The cuddly do it for fun and thrills, for reputation and education. The bad do it for all these reasons, but mostly for money, or its equivalent, data.

The Cybersecurity Blind Spot

Outdated software is a cybersecurity loophole.

Cybersecurity is a large and growing sector. There is a plethora of products to choose from, all helping to some degree to protect computer systems from illegal infiltration and exploitation. These products include malware and virus scanners, firewalls of various kinds, login checkers, password checkers, and data sniffers: different components, each designed to defend against a different method, or vector, of attack.

But they have a blind spot. Even the most sophisticated security product can’t fully protect a system that is out of date. It should be simple to keep software updated, but we often don’t, as the many recorded incidents of systems compromised through vulnerabilities in outdated software show.

So let’s ask: “Why do systems get out of date?”

The most obvious answer is because we let them. It takes time and effort to install and update systems. We have to schedule downtime, log in, run commands or use a GUI. It’s fiddly and dull. We may have to repeat this many times a year on tens or even hundreds of servers.
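For example, a routine manual update session looks something like this (exact commands vary by distribution):

    # Debian/Ubuntu
    sudo apt update
    sudo apt upgrade

    # RHEL/CentOS/Fedora
    sudo yum update    # or: sudo dnf upgrade

Multiply that by every server, every patch cycle, and the tedium adds up.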

The task is easier with server configuration management tools, which update all servers at the same time. But the good ones cost money, and the free ones take time to learn; few of us can afford either. We can script it ourselves, as in the sketch below, but that takes programming skills that some of us don’t have.
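A minimal sketch of such a script, assuming key-based SSH access, passwordless sudo on each host, and a hosts.txt file you maintain yourself (all three are assumptions made for illustration):

    #!/bin/bash
    # Update every server listed in hosts.txt, one hostname per line.
    # Assumes key-based SSH access and passwordless sudo on each host.
    while read -r host; do
        echo "Updating ${host}..."
        ssh "${host}" 'sudo apt-get update && sudo apt-get -y upgrade'
    done < hosts.txt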

The Linux Perspective

Linux can’t self-update like other OSes can.

So far, everything said applies to most contemporary, mainstream, non-mobile computer platforms. I want to focus now on a subset of them: the community of Linux servers. Linux has evolved differently from other operating systems. It began as a hobby project, inspired by MINIX, and grew through community effort, mostly non-commercial. As the number of live Linux platforms increased, so did their appeal to the hacker community. Another attraction was Linux’s popularity as a cheap and flexible hosting platform. Hackers love these: a single server will often host hundreds of websites, and one hacked site can become a gateway to the others on the same server.

Linux has a large reservoir of free and well-supported software and a wealth of active distributions; such are the benefits of open source community development. The disadvantage is that features evolve slowly, possibly because of the phenomenon of design by committee, possibly because most contributing developers work for nothing.

For example, Linux still lacks a completely integrated, automatic, self-updating software management tool, although there are ways to do it, some of which we’ll see later. Even with those, the core system kernel cannot be automatically updated without rebooting.
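You can see the consequence on any system that has installed a kernel update but not yet rebooted: the kernel running in memory is older than the newest one on disk. For example:

    uname -r                          # the kernel currently running

    # Debian/Ubuntu: kernel packages installed on disk
    dpkg -l 'linux-image*' | grep '^ii'

    # RHEL/CentOS: installed kernel packages
    rpm -q kernel

Until the reboot happens, the machine keeps running the old, possibly vulnerable, kernel.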

And here we see another reason why systems get out of date so easily: the urge to avoid downtime. Server administrators may decide to put their active users and critical applications first, and put off installing patches that need a system reboot.

A paranoiac’s whisper

But of all the reasons, there is one that system administrators prefer not to say out loud. It is the fear that the server won’t start up again, that a patch will break something. It’s not always acknowledged because it smacks of paranoia and is impossible to prove.

I feel this is a legitimate worry. A kernel patch is, after all, a change to the core operating system. Badly written patches exist, and it’s not unheard of for a patch to break or subtly change a system. Whether the change is to performance or to functionality, to most managers looking after live systems, neither is acceptable.

Closing the Linux Security Loophole

Security improves when you automate Linux updates.

You can automatically update Linux applications and kernels yourself by combining a scheduling program, like cron, with your platform’s package manager, such as yum, apt, or dnf. Some Linux vendors have gone further, creating packages that do unattended updating for you. And as with everything in Linux, each flavor does it differently.
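As a sketch, here are the two most common setups (package and service names are current as of this writing; check your distribution’s documentation):

    # Ubuntu/Debian: install and enable unattended-upgrades
    sudo apt install unattended-upgrades
    sudo dpkg-reconfigure --priority=low unattended-upgrades

    # Fedora/RHEL 8+: install dnf-automatic and enable its timer
    sudo dnf install dnf-automatic
    sudo systemctl enable --now dnf-automatic.timer
    # then set apply_updates = yes in /etc/dnf/automatic.conf
    # (on RHEL/CentOS 7, the equivalent package is yum-cron)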

However, anyone using these without reconfiguring the settings is likely to get a shock like this at some point:

The computer needs to restart to finish installing updates.

This is because, unlike application updates, kernel updates can’t take full effect without a reboot; “unattended” does not mean “rebootless.”
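You can check whether a system is waiting on such a reboot:

    # Debian/Ubuntu: this file exists when a reboot is pending
    cat /var/run/reboot-required

    # RHEL/CentOS (from the yum-utils package): exit status 1 means a reboot is needed
    needs-restarting -r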

And there is the security loophole. No one wants to arbitrarily reboot servers that are in active use, which is why kernels are usually excluded from unattended update configurations. But outdated kernels are vulnerable kernels, and vulnerable kernels are prone to exploitation.
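On Ubuntu, for instance, the exclusion is a blacklist entry in the unattended-upgrades configuration (the patterns shown here are illustrative; adjust them to your system’s package names):

    // /etc/apt/apt.conf.d/50unattended-upgrades
    Unattended-Upgrade::Package-Blacklist {
        "linux-image";
        "linux-headers";
    };

    # RHEL/CentOS equivalent: add this line to /etc/yum.conf (or /etc/dnf/dnf.conf)
    exclude=kernel*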


Missing links and loopholes

The answer to this dilemma is live patching. It’s a way of keeping kernels updated to the latest security patches without the need to suffer a restart or tolerate downtime. It’s the missing link in the full automation of your Linux system updating strategy.

As with unattended updating, each Linux vendor does live patching differently. Also, doing it for free isn’t easy; live patching became too useful to avoid commercialization.

For Ubuntu, the Canonical Livepatch Service installs Linux kernel security patches without rebooting. Red Hat came out with kpatch, and SUSE with kGraft, both for the same purpose; the two vendors were spurred into action when Oracle bought Ksplice and withdrew support for anything but its own flavor. An unlikely savior emerged in 2014, when KernelCare joined the market, supporting all major vendors and kernels as old as 2.6.18.
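To give a sense of how lightweight enabling live patching can be, here is the Canonical Livepatch setup on Ubuntu (YOUR_TOKEN is a placeholder for the key Canonical issues you):

    # Ubuntu: install the livepatch client and enable it
    sudo snap install canonical-livepatch
    sudo canonical-livepatch enable YOUR_TOKEN
    canonical-livepatch status    # verify patching is active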

Conclusion

Automate your Linux updates for reasons of security, not convenience.

Linux both benefits and suffers from the way it’s developed. And no amount of spending on cybersecurity products can make up for unpatched vulnerabilities.

While there are solutions for auto-updating applications, the kernel remains the weak spot: updating it almost always means a reboot.

Live patching fills the gap in auto-updating strategies. You should consider it an essential part of your Linux server security strategy, not just a convenience.

Looking to automate vulnerability patching without kernel reboots, system downtime, or scheduled maintenance windows?

Learn About Live Patching with TuxCare
