
It's probably the most serious Linux local privilege escalation ever.

Look, the Azimuth people have forgotten more about reliable exploit development than I have ever known, but, no, as stated, this is clearly not true. Not long ago, pretty much all local privesc bugs were practically 100% reliable.

What I think they mean to say is that this is unusually reliable for a kernel race.

I still think, though, that the right mental model to have regarding Linux privesc bugs is:

1. If there's a local privesc bug with a published exploit, assume it's 100% reliable.

2. In almost all cases, whether or not there's a known local privesc bug, assume that code execution on your Linux systems equates to privesc; this is doubly true of machines in your prod deployment environment.



You said it: if you are not explicitly in the business of providing external access to your machine, the privesc isn't your real problem (it's a problem, and it's bad, but still); the problem is that anybody could run code to exploit the privesc in the first place.


no, because a bug like this turns any code execution exploit into remote root...


The point is that code execution is almost always remote root, because lots of bugs like this exist. Also: most engineers overestimate the relative value of root vs. simple inside-the-VPC code execution, which is almost always game over anyway.


Thomas has elaborated on this a few times over the years, but to recap for people who weren't around for those conversations: if you can make an HTTP request from inside the firewall, which probably doesn't require root, you can pivot the attack to a variety of internal services which are not designed with security in mind. That could let you e.g. reconfigure networking appliances, grab credentials to internal or external services from DevOps-y credential stores, grab all manner of business secrets, pivot to direct SQL access to the DB laundered through e.g. internal analytics dashboards or admin tooling, etc.
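As a concrete illustration (the hostnames and paths here are made up; the metadata endpoint is the standard AWS one), none of this needs root:

    # internal admin tooling often trusts anything inside the VPC
    curl 'http://analytics-dashboard.internal/api/query?sql=SELECT+*+FROM+users'

    # on a cloud host, temporary credentials sit behind the metadata service
    curl http://169.254.169.254/latest/meta-data/iam/security-credentials/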


If you are a client of such a business, you would have to care too.


Of course you care about this sort of flaw: you need as many lines of defense as possible. But if anybody can exploit it in the first place, you've already got a major security hole.


> "In almost all cases, whether or not there's a known local privesc bug, assume that code execution on your Linux systems equates to privesc; this is doubly true of machines in your prod deployment environment.

It depends. I've seen "oh well if someone has rce they probably have root anyway" used way too many times as an excuse to avoid defense-in-depth measures.


Those people might be right. Defense in depth is a legitimate tactic, but that's all it is, and it's often an excuse for people to waste time layering stupid stuff on top of real security controls.

ASLR, NX, and CFI would be an example of a defense in depth stack that is meaningful.

SSH, Fail2Ban, and SPA would be an example of a defense in depth stack that basically just wastes time.

I would be more comfortable with a system where I knew I had to burn the box if I lost RCE on it than I would be with a system that somehow depended on RCE not coughing up kernel, and persistence, to an attacker.

The other thing defense in depth can provide is increased attacker cost. That's why there are economically valuable DRM systems (BluRay's BD+ is an example here). All you have to do is push attacker cost across a threshold (for instance with BD+, that's keeping titles secure past the new release window) to make a defense in depth control valuable.

But if someone has a kernel exploit, probably nothing you've done for defense in depth is going to meaningfully increase costs.


> That's why there are economically valuable DRM systems (BluRay's BD+ is an example here). All you have to do is push attacker cost across a threshold (for instance with BD+, that's keeping titles secure past the new release window) to make a defense in depth control valuable.

A really good example of this is Spyro 3: The developers set up a system of overlapping checksums (which could in turn be part of the data being checksummed by other, overlapping, checksums) so that it was virtually impossible to change even a single bit without failing the test. It was eventually cracked, as the check only ran at boot time (it required 10 seconds of disk access, and adding 10 seconds to every loading screen in the game would have been unacceptable), which meant it took over two months for pirates to get a crack working (unusual for the time). And since most game sales come in the first two months...

But that's really just me using this as an excuse to share a bit of technical trivia.
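Since we're here, a minimal sketch of the overlapping-checksum idea in C (illustrative only, not Spyro 3's actual scheme):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* toy rolling hash standing in for a real checksum */
    static uint32_t checksum(const uint8_t *p, size_t n) {
        uint32_t sum = 0;
        while (n--) sum = sum * 31 + *p++;
        return sum;
    }

    int main(void) {
        uint8_t image[256] = "game code and data...";
        uint32_t c1 = checksum(image, 128);    /* covers the first region */
        memcpy(image + 128, &c1, sizeof c1);   /* c1 is stored inside the image */
        uint32_t c2 = checksum(image, 128 + sizeof c1);  /* covers region AND c1 */
        printf("c1=%08x c2=%08x\n", (unsigned)c1, (unsigned)c2);
        /* Patching the region invalidates c1; fixing up c1 invalidates c2.
           With enough overlapping layers, no single-point patch survives. */
        return 0;
    }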


I'm confused: how is SSH an example of defense in depth? It is an access method. You should absolutely harden your SSH configuration. Fail2Ban is useless on a properly configured SSH server (no root, no passwords, no kerberos, only keys). Managing the keys at scale, well, that is a different story.

I agree with you that ASLR, NX, and CFI are the most important system level defenses to employ.


> Fail2Ban is useless on a properly configured SSH server (no root, no passwords, no kerberos, only keys).

This assertion confuses me.

I use fail2ban on boxes I have key-only ssh configured for.

Are you aware fail2ban works for services other than ssh?

If an attacker / script knocks unsuccessfully on my ssh door, other doors are then closed to them.

I also get much (much!) cleaner logs thanks to fail2ban.
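For reference, a sketch of the kind of jail.local that does this (option names per fail2ban's docs; defaults and time formats vary by version):

    [DEFAULT]
    # ban the offending IP on all ports, not just the service it probed
    banaction = iptables-allports
    maxretry  = 3
    findtime  = 10m
    bantime   = 1h

    [sshd]
    enabled = true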


>This assertion confuses me.

I suspect that you're confusing fail2ban and port-knocking (or using fail2ban as a port-knocker).

The point of fail2ban is to prevent an attacker from brute-forcing your server. In a key-only config, the chances of getting brute-forced are smaller (by a few orders of magnitude) than the chances of you getting hit by an asteroid and the server getting hit by another one, so fail2ban doesn't really help.

_In theory_, the same would be true for port-knocking.

However, in practice, sshd can have security holes which a malicious scanner could exploit. And while port-knocking doesn't help against a determined attacker (it's subject to MITM and replay attacks), it does help with defense-in-depth.
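For the curious, the classic knockd sketch (essentially the stock example from its docs; the ports are arbitrary):

    [options]
        logfile = /var/log/knockd.log

    [openSSH]
        sequence    = 7000,8000,9000
        seq_timeout = 5
        command     = /sbin/iptables -A INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
        tcpflags    = syn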


That is true and a good use case for fail2ban. Useless was probably too strong a word; what I really meant was that it's of limited utility in increasing the security of the SSH service itself.


The main reason I use fail2ban is I got tired of the log file noise/bloat. I use key-only access on my servers already, with the key stored on a hardware token (Yubikey).


I guess the question then is why you're looking at failed auth logs. Failed auths are boring, doubly so on a key-only server. Successful auths are where the fun is at.


When I first set up fail2ban it was because I got annoyed that the machine on my desk was making regular "clunk...clunk...clunk" noises from the hard disk as it wrote another failed-auth attempt to the log every second or so...


SSH is fine. Stacking extra stuff on top of SSH to create a defense-in-depth stack for it is what's silly. Just disable passwords and use SSH.
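i.e. something like this in sshd_config (a minimal sketch; option names per sshd_config(5)):

    PasswordAuthentication no
    ChallengeResponseAuthentication no
    KerberosAuthentication no
    PubkeyAuthentication yes
    PermitRootLogin no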


Not entirely reasonable for all use cases. If there's a machine that you need access to from many different locations, a keyfile is more of a PITA than a long passphrase.


An HPC center (that is, lots of users coming in via ssh) I know about disabled key logins IIRC due to some incident where an attacker had got hold of a password-less key.

Too bad that sshd can't enforce use of password-protected keys on the server side...


You got the thing backwards. It's not "too bad that sshd can't enforce" some property that happened to be missing from the key the attackers got their hands on. It's "too bad the HPC center staff didn't have tools good enough to manage their servers". CFEngine and Puppet being two examples of such tools the staff missed (or didn't know how to put into use in this case).


The problem, AFAIU, was that some user had a password-less key stored on some external system (their personal home computer, for all I know). That system was hacked, and allowed the attacker to access the HPC system. I don't see how the HPC center staff getting the Puppet-gospel could have prevented that person from using a password-less key. Well, except by disabling key-based logins (which, AFAIU, they could have used Puppet/cfengine/whatever for).

My point is that in general it would be better to disable password auth and only use key based auth, but only if you could somehow guarantee that the users wouldn't do crazy things like use password-less keys. But as you can't do that on the server-side, what other options do you have?


> I don't see how the HPC center staff getting the Puppet-gospel could have prevented that person from using a password-less key.

It's about reaction of the staff to key leak:

>> An HPC center [...] disabled key logins IIRC due to some incident where an attacker had got hold of a password-less key.

This reaction seems just silly.


sshd can have vulnerabilities which port-knocking can (temporarily) block.


I know what everything else is, but what is CFI? An attempt at googling came up with results that didn't make any sense right away.


Control-Flow Integrity. It's a bit of the new hotness in exploit mitigation; however, it's quite complicated, and there are various implementations with different advantages and disadvantages. clang docs: http://clang.llvm.org/docs/ControlFlowIntegrity.html


Shorter CFI: when doing codegen for calls through function pointers (which will involve indirect calls through registers), emit extra code to make sure the address being jumped to is a legit function, thus breaking ROP payloads.

There's more to it, but that's the flavor of it.
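A minimal sketch of what gets instrumented (flags per the clang docs linked above; CFI requires LTO):

    /* fnptr.c */
    #include <stdio.h>

    typedef void (*handler_t)(void);
    static void greet(void) { puts("hi"); }

    int main(void) {
        /* volatile keeps the call indirect even under optimization */
        handler_t volatile h = greet;
        h();  /* CFI checks that h targets a function of type handler_t */
        return 0;
    }

Built with:

    clang -O2 -flto -fvisibility=hidden -fsanitize=cfi fnptr.c -o fnptr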


Sure, it can go either way. But in the absence of a kernel 0-day, segregating services on the same host is useful.


And if a kernel 0-day is available, putting the services in a VM might help. Depending on whether an exploitable bug in the hypervisor exists.


What's a better alternative to SSH?


SSH is a waste of time?


>assume that code execution on your Linux systems equates to privesc

Tell this to the container community. They would have you believe containers are as secure as VMs.


Given qemu's security track record, they're not necessarily wrong.


It's always a matter of increasing attacker cost. I am not sure that attacking QEMU, then finding a privilege escalation on the host that can break out of SELinux is much easier than just staying in the VM, hopping through the internal network until you find a host that lets you do what you want.

Chances are what you want is "simply" access to a shared folder rather than root.


That's a bit unfair since:

1. Most users won't be affected by all the exploits (you don't stuff every model of network card, SCSI controller, etc. into a VM)

2. Many deployments of QEMU (through Xen or Libvirt) are protected by AppArmor/SELinux. This would at least forbid access to /proc/self/mem, but I can't say if this is enough to prevent evasion. IMO, this is likely to make the task quite a bit harder.


To be fair, Docker now defaults to using AppArmor and seccomp too. And the defaults seem to be not completely toothless either (I had to "disable" seccomp multiple times to get things running; for example, you can't just ptrace() in a container).
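e.g., with real Docker flags (the image and command are placeholders):

    # allow ptrace inside the container (strace/gdb need it):
    docker run --rm --cap-add=SYS_PTRACE myimage strace ./app

    # or drop the default seccomp profile entirely (much blunter):
    docker run --rm --security-opt seccomp=unconfined myimage ./app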


Even if they break out of qemu, the best case is that they've reached the level of the container or user running it.


Which is mostly root. Rootless containers are still not widely deployed.


Citation needed.

That's certainly a goal, but I've never heard the claim.



> will emerge

> thin walls

This article is very hopeful and positively worded, but at its core it acknowledges that security parity is still a work in progress.


Well, another thing to keep in mind with this one in particular is that there is no way to mitigate it. grsecurity can't help with this kind of bug; nothing can. So it may not just be about the reliability of this exploit, but the fact that there's no mitigation other than to update.


It seems like both SELinux and AppArmor could be configured to block access to /proc/self/mem which should mitigate it.
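e.g., an AppArmor profile fragment along these lines (a sketch; the binary path is hypothetical and abstractions vary by distro):

    /usr/bin/confined-app {
      #include <abstractions/base>
      # deny writes to any process's /proc/<pid>/mem,
      # the write path this exploit abuses
      deny /proc/*/mem w,
    }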


It's sad, actually: this is the perfect type of exploit to block with SELinux, a simple write to unauthorized files. But since no one uses the user contexts of SELinux, no one blocks this.

Your shell runs unconfined because your user role is unconfined. Any process you might start will therefore run unconfined, unless stated otherwise in a policy.

So this exploit will run unconfined and will be allowed writes everywhere on the system.

I once tried the staff_r role on a Fedora 23 system and it worked out of the box, but there were more errors, and I would not recommend it for beginners.

I believe the same goes for AppArmor, since AppArmor only defines "armor" for processes, not for users. How many use pam_apparmor today? [1]

1. http://wiki.apparmor.net/index.php/Pam_apparmor
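For reference, on Fedora/RHEL-style policy, confining a login is one semanage call (a sketch; "alice" is a made-up user):

    # map the Linux user to the confined staff_u SELinux user
    semanage login -a -s staff_u alice
    # list current login-to-SELinux-user mappings
    semanage login -l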


>Your shell runs unconfined because your user role is unconfined. Any process you might start will therefore run unconfined, unless stated otherwise in a policy.

Just to clarify: this means any process you start from the shell, like the PoC exploit.

But in an actual scenario, if the exploit were launched from Firefox, or Nginx, it would run under a confined context and be prevented from overwriting most critical system files.


> Your shell runs unconfined because your user role is unconfined. Any process you might start will therefore run unconfined, unless stated otherwise in a policy.

I am actually surprised that sane and safe defaults are ignored and left to the user's discretion. Most users think Linux is secure by default.

It's interesting to see Windows going in the other direction and locking down more and more by default.


Yes, it is ironic that Windows and macOS are the desktop systems taking this route, while GNU/Linux is starting to look like the swiss cheese that many FOSS advocates used to mock the other OSes for being.

The problem has reached a scale where kernel security has become a major discussion subject.

http://arstechnica.com/security/2016/09/linux-kernel-securit...


Well, it's an ongoing effort in Fedora too. Every release of Fedora or CentOS shows some improvement around the use of SELinux.

I only wish I had the competence to help out because I think it's a very important effort.

Sad to say, in Fedora 23 I was able to easily put my user into the staff_r role, thereby confining it. But in Fedora 24 there seem to be only three default user contexts defined. Not sure what happened, but that likely means I have to define my own user context, and then I can't know how well supported it is in the policy.

It's impossible for ordinary users to do any of this.


Err... what about running SELinux in permissive mode? Processes keep working, and you get a nice log file filled with everything that would have been blocked.

It's invaluable in setting up new policies.
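The usual loop looks something like this (Fedora/RHEL tooling; the module name is made up):

    setenforce 0                   # permissive: log denials, don't enforce
    # ...exercise the application...
    ausearch -m avc -ts recent     # review what would have been denied
    ausearch -m avc -ts recent | audit2allow -M mypolicy
    semodule -i mypolicy.pp        # install the drafted module
    setenforce 1                   # back to enforcing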


I agree. There have been far easier local exploits in the past. For example CVE-2006-2451, whose exploitation was quite simple and didn't involve any race condition. Also CVE-2009-2692 or CVE-2010-3049. Browsing exploit-db makes it easy to find them.


Yup, the best solution here is to make privesc ineffective via VM isolation. Privilege escalations are rampant on most operating systems; they're not worth relying on. VM isolation breaks are much rarer.


> 2. In almost all cases, whether or not there's a known local privesc bug, assume that code execution on your Linux systems equates to privesc; this is doubly true of machines in your prod deployment environment.

I think this goes for any mainstream OS, Linux is not particularly special here.


So basically, if you wouldn't give a user sudo, they shouldn't have login access at all? Certainly works for some scenarios, but not practical for many others.


It depends on why you wouldn't give a user sudo. If you're worried that they might get bored and do an immature prank, or do something ill-advised (like changing the root password, or giving sudo to someone else) and render the system insecure/inoperable/unmaintainable, you probably can give them shell access. A good example here would be giving shell access to employees or the like, if their job is aided by it. The time and effort it takes to research a privesc vuln is usually sufficient to deter them, and if it isn't, you just revoke access and fire them if they do it.

If you're worried that someone might be trying to deliberately compromise your security, you can't give that person the ability to run code on your system.


The main reason you don't give users sudo is so they don't do anything stupid, not so much to prevent them from acting maliciously.


Correct.


Assume it on any system. Even OpenBSD.

Nobody's perfect. Not even Theo.


> FreeBSD

> Theo

Do you mean OpenBSD?


Why yes, yes I did. Thanks for pointing that out.

Edited to fix.



