I guess I don't really understand the objection to NIST's reasoning. While a larger capacity (c) would increase pre-image resistance, it wouldn't help generic collision resistance (which would always be limited by the output size). There are certainly contexts where a pre-image attack would be more damaging than a collision, but collisions can be a problem in other situations. Asking the user to figure out which security level applies to their application seems like a source of potential issues. Giving them a generic security level parameter to work with might make things easier.
On top of that, Keccak's software performance is nothing to write home about and SHA-256 and SHA-512 look a lot safer now than they did when the SHA3 competition was started. The reasonably large performance boost the capacity changes could bring about might help SHA-3's case a bit for software developers who have barely started moving away from SHA-1 (if they're not still hanging on to MD5).
While 512 vs 256 bits of pre-image security for SHA-3-512 (which the author seems to think is an unacceptable security downgrade) is interesting to discuss in an academic sense, it seems ludicrous to suggest the change would actually benefit someone trying to break SHA-3. Any adversary able to perform 2^256 operations is so far beyond modern cryptanalysis that everyone might as well just give up.
Designing a crypto primitive that generates N bits of output having only N/2 bits of primary preimage security is unprecedented. It's a radical design decision.
The competition Keccak proposals specified a healthy safety margin; NIST proposes to eliminate it entirely. Why? Is it worth being able to hash messages 24-89% faster if in return you have to make your MACs twice as large? For example, if you currently use SHA-1 and want to move to SHA-3 without losing any preimage security, instead of 20 bytes you'll need to send 48 bytes over the wire with every message.
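To make that 48-byte figure concrete, here is a back-of-the-envelope sketch (my own arithmetic, not NIST's), assuming the proposed parameters set capacity equal to output length, so generic preimage resistance drops to half the output length:

```python
# Under the proposed parameters, capacity c equals output length n,
# so generic preimage resistance is c/2 = n/2 bits.
STANDARD_SIZES = [224, 256, 384, 512]  # SHA-3 output lengths in bits

def smallest_digest_with_preimage(bits):
    """Smallest standard SHA-3 output length whose n/2 preimage
    level still meets the requested number of bits."""
    for n in STANDARD_SIZES:
        if n // 2 >= bits:
            return n
    return None

# SHA-1 nominally offers 160 bits of preimage resistance:
n = smallest_digest_with_preimage(160)
print(n, n // 8)  # 384 bits -> 48 bytes on the wire per MAC
```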
Attacks only ever get better and c/2 is an upper bound. If weaknesses are found in other parts of the function, we may wish there had been at least some safety margin built in.
I agree that it's a new design decision, but I think there is reasonable support for making it. Hash function history has pretty well demonstrated that most implementers don't understand the difference between pre-image security, hash output length and collision resistance. And the security margin is far from eliminated entirely. Not only is there significant margin in the round function itself over the full number of rounds, but the 256-bit security level provided by the proposed SHA-3-512 seems fairly substantial to me.
I don't quite understand your MAC output length math. As far as I can tell, at least HMAC-SHA1 security could be achieved with SHA-3-512 truncated to 20 bytes of output. From what I understand of sponge functions, pre-image security is based on capacity (more specifically, half the capacity), and truncating the output down to the desired security level is valid. The Keccak page parameter calculator claims 160-bit security can be achieved with a capacity of 320 bits and 160 bits of output.
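As a sanity check on that calculator claim, the generic sponge security levels can be tabulated directly. This is a sketch of my own understanding of the flat sponge claim (not Keccak-team code): for capacity c and output length n, generic preimage resistance is min(c/2, n) and generic collision resistance is min(c/2, n/2):

```python
def sponge_security(capacity, output):
    """Generic security levels (in bits) for a sponge construction
    with the given capacity and truncated output length, per the
    flat sponge claim: preimage = min(c/2, n), collision = min(c/2, n/2)."""
    preimage = min(capacity // 2, output)
    collision = min(capacity // 2, output // 2)
    return preimage, collision

# The parameter calculator's example: capacity 320, 160-bit output.
print(sponge_security(320, 160))  # (160, 80)

# SHA-3-512 (capacity 512) truncated to 20 bytes of output:
print(sponge_security(512, 160))  # (160, 80) -- same generic levels
```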
But I will admit that changing the association between pre-image security level and output length does seem like it could create some confusion among people who do understand the current hash security situation. To be honest, I would be OK with NIST making the capacity for both be 512 bits (or 576 as DJB suggested). SHA-3-512 performance doesn't suffer much (and it has the most to lose there), security levels are reasonable, and implementation is simpler. I would still like a higher performing version of Keccak given that there is already little enough reason to switch to it, but the 512/576 universal capacity might be a reasonable compromise since it wouldn't make SHA-3-256 THAT much slower than NIST's proposed parameters.
At this point, your fellow software developers are pointing and laughing at you for "inventing your own cryptography". Your function doesn't come standard in any library. Every single 3rd party audit of your architecture raises a red flag about this and you have trouble finding any official documentation to back you up. (Ask me how I know about this :-)
I'm not trying to say you don't know what you're doing or that your construction wouldn't meet the security properties, I'm just saying that the whole point of NIST defining these standards is to save us from having to come up with this kind of thing on our own.
NIST actually does define truncated versions of SHA-2, e.g., SHA-2-512/256. But they specify a different IV so that the functions are distinct.
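The distinct-IV point is easy to demonstrate with SHA-384, which FIPS 180-4 likewise defines as SHA-512 with a different IV plus truncation. A quick sketch in Python:

```python
import hashlib

msg = b"domain separation matters"

# Naive truncation of SHA-512 to 384 bits:
truncated = hashlib.sha512(msg).digest()[:48]

# Official SHA-384 (same compression function, distinct IV):
official = hashlib.sha384(msg).digest()

# Same length, same underlying function, yet different outputs,
# because the IVs make them distinct functions.
assert truncated != official
```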
> some confusion among people who do understand the current hash security situation
Right. I totally agree that most of the time we should assume that collision resistance is the relevant figure, whether we can see the attack or not. But still, SHA-2-256 has 256 bits of preimage resistance, just like a 256-bit random oracle. But SHA-3-256 will have 128? Can we use SHA-3-256 to derive an AES-256 key?
Don't forget that NIST has mentioned they'll standardize variable-output-length SHA3: SHAKE512 with a 20-byte output would be perfectly fine. They've also mentioned they might include MAC and AEAD standards, so I'm not convinced the situation is as bad as you make it out to be.
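For what it's worth, the SHAKE variants that did make it into modern libraries (SHAKE128 and SHAKE256) are exposed in Python's hashlib, so a 20-byte output really is a one-liner (illustrative only; the comment's "SHAKE512" naming may differ from what ships):

```python
import hashlib

# SHAKE256 with an arbitrary-length output -- here 20 bytes,
# the same tag size as HMAC-SHA1.
tag = hashlib.shake_256(b"message").digest(20)
print(len(tag), tag.hex())
```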
I think you're on to something there with regards to the performance. Schneier was making some noise towards the end of the competition that none of the entries really improved on speed ( https://www.schneier.com/blog/archives/2012/09/sha-3_will_be... ). Contrary to the article, hashing is used a lot, and there are a few areas where performance will be an issue for some time (the mobile space, and high-density virtualized environments). It may be that the key space was indeed deemed excessive, and of little added benefit compared to the rewards gained in greater speed.
As another commenter mentioned further down, no one is simply going around busting 256-bit keys, and when that finally happens, it will be with a computational paradigm that will require a defense built to an entirely different set of requirements: out of the scope of this competition.