Cryptocat is not any different from any of the other notable privacy, encryption and security projects, in which vulnerabilities get pointed out on a regular basis and are fixed.
I feel bad for the team that worked on this (although I stand by my belief that they shouldn't be working on it), but this is an extremely aggravating statement. When was the last cryptographic vulnerability discovered in any mainstream implementation of PGP?
Vulnerabilities are found semi-routinely in TLS, which was designed by several of the smartest crypto people in the world. But there's a key difference between TLS and Cryptocat: the whole world is working on TLS security. Vulnerabilities in TLS that are far less critical than this one are career-making. Vulnerabilities that devastate the security of Cryptocat earn a blog post.
I'm also a little confused: if the team put Steve Thomas on their thank-you page, why did Steve Thomas write a blog post linking directly to that page saying he wasn't on it?
I bring this up because it's a valuable lesson for startups. You should have a thank-you page. But you should also err on the side of quickly adding people's names to it when they report things. It looks (from the pull request) like much of this was reported over a month ago. Don't wait a month to thank people who report vulnerabilities in your code.
>I'm also a little confused: if the team put Steve Thomas on their thank-you page, why did Steve Thomas write a blog post linking directly to that page saying he wasn't on it?
The thanks on the Cryptocat page weren't there 4-5 hours ago (the last time I checked).
The Google cache [1] from 26th of June does not show Steve Thomas' name on that page, so it seems that Steve is right and CryptoCat are not telling the truth. That said, 26th is quite a few days ago so it might have changed between then and now.
Yes, this is scary, but I believe everything is/was over HTTPS, so this just means it was host-based security. Meaning we have to trust that Cryptocat didn't store/transfer encrypted messages or leak their SSL private key. They should generate a new private key, to prevent someone from breaking into their server, stealing it, and possibly decrypting old captured messages.
Cryptocat's response:

> Our SSL keys are safe: For some reason, there are rumors that our SSL keys were compromised. To the best of our knowledge, this is not the case. All Cryptocat data still passed over SSL, and that offers a small layer of protection that may help with this issue.
$ curl -vI https://crypto.cat
* About to connect() to crypto.cat port 443 (#0)
* Trying 94.254.0.157...
* connected
* Connected to crypto.cat (94.254.0.157) port 443 (#0)
* successfully set certificate verify locations:
[ ... snip ... ]
* SSLv3, TLS handshake, Finished (20):
* SSL connection using ECDHE-RSA-RC4-SHA
If my understanding is correct, ECDHE offers "perfect forward secrecy", as discussed on HN a few days ago — each session uses an ephemeral key created for just that session. So past sessions are already protected against an SSL private key compromise, and changing the key wouldn't add protection.
Of course, only the Cryptocat folks could clarify what percentage (all?) of their users actually connect over an SSL method that offers PFS.
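For the curious, you can at least check locally which of your own client's enabled suites offer forward secrecy. A rough Python sketch (filtering by suite name, which is a heuristic, not an authoritative classification):

```python
import ssl

def pfs_cipher_names():
    """Names of enabled cipher suites that use an ephemeral key exchange
    (ECDHE/DHE for TLS <= 1.2; all TLS 1.3 suites qualify)."""
    ctx = ssl.create_default_context()
    names = [c["name"] for c in ctx.get_ciphers()]
    # Heuristic: match by suite-name prefix. TLS 1.3 suite names start
    # with "TLS_" and always use an ephemeral exchange.
    return [n for n in names if n.startswith(("ECDHE", "DHE", "TLS_"))]

print(pfs_cipher_names())
```

What the server actually negotiates depends on its own configuration and preference order, as the comment below notes.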
As this blog post says, ECDHE was implemented only "in the past couple of weeks", which means there are still about six months of conversations that didn't use ECDHE and could easily be decrypted if someone leaked their SSL keys.
Assuming the server is set up to prefer these suites, and the browser supports them, this is good. Pretty sure all modern browsers will support this, though they may prefer other suites and the server may go with client preferences.
Of course, RC4 is considered more than a bit iffy itself these days.
OK I had to think about this for a sec, not being a crypto expert by any means.
Steve is not suggesting that their SSL private key HAS been compromised. He's saying that IF it were to be compromised, anyone who had captured CryptoCat traffic would now be in a position to break it easily.
The suggestion is to generate a new private key (and presumably destroy the old one irrevocably) to prevent this eventuality.
"The vulnerability was so that any conversations had over Cryptocat’s group chat function, between versions 2.0 and 2.0.42 (2.0.42 not included), were easier to crack via brute force."
That's quite an understatement - and they helpfully don't link to Steve Thomas's blog post so readers can't easily discover just how weak the encryption actually was. (See https://news.ycombinator.com/item?id=5989707 if it's dropped off the front page by the time you're reading this comment.)
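To make "easier to crack via brute force" concrete: the flaw drastically shrank the effective keyspace. Here's a toy discrete-log sketch (hypothetical numbers and a simulated bad RNG, not Cryptocat's actual code) showing why a small keyspace is fatal:

```python
import random

# Toy Diffie-Hellman-style setup. Real systems use elliptic curves or
# 2048-bit-plus groups; the point here is only the private-key space size.
P = 0xFFFFFFFFFFFFFFC5  # the prime 2**64 - 59
G = 2

def buggy_keygen(effective_bits):
    """Simulate a flawed RNG whose private keys span only
    2**effective_bits values instead of the full group."""
    priv = random.randrange(1, 2 ** effective_bits)
    return priv, pow(G, priv, P)

def brute_force(pub, effective_bits):
    """Exhaustively try every candidate key; trivial for a tiny keyspace."""
    for guess in range(1, 2 ** effective_bits):
        if pow(G, guess, P) == pub:
            return guess
    return None

priv, pub = buggy_keygen(16)
recovered = brute_force(pub, 16)
assert pow(G, recovered, P) == pub  # recovered almost instantly
```

Each doubling of effective key bits doubles the search; at the sizes reported for this bug, the search was well within reach of commodity hardware.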
" At Cryptocat, we’ve undertaken the difficult mission of trying to bridge the gap between accessibility and security. This will never be easy."
Really, Cryptocat should be commended for this, because the crypto people usually don't care, and in some cases make it extra hard for regular people (or even non-experts) to have security. Like the ECB fiasco: http://www.codinghorror.com/blog/2009/05/why-isnt-my-encrypt...
At least the documentation warns it's really a bad choice.
Making technology accessible to everybody is very important
Wanting hard problems to be easy doesn't make them easy. Computer science and math don't care how much you need cryptography.
And, a reminder: while I agree that we could use better UX for GPG, that's not what Cryptocat is. Cryptocat is simpler to use because it's a simpler, less secure system.
The Cryptocat team seems to have a flair for attractive, usable interfaces, and people seem to like the way Cryptocat works. They should spend their time building a similar interface for GPG and give up on the custom cryptosystem. They wouldn't be able to say they'd designed their own cryptosystem, but in reality, people with a professional understanding of cryptography would think more highly of them for that, not less.
I thought they actually had given up on a custom cryptosystem and went with OTR (for person-to-person chats)? Although only after a year or two of crying about how the community was being mean to Cryptocat, etc., and viewing every bug report as a personal attack.
The problem is there is no "mpOTR" for them to switch to for multiparty chat. I'd argue for SSL-protected IRC (like SILC) and some kind of transparent/tamper-evident server on an .onion as an interim measure, instead of a roll-your-own alternative.
Without perfect forward secrecy. I'd probably take perfect forward secrecy over not needing a trusted third party for the protest use case, although neither is ideal.
It's easy to fetishize forward secrecy, but the reality is probably that forward secrecy is much more important in the context of TLS, where the loss of a single key might compromise billions of messages, than it is in the desktop case, where an attacker who gets your key can likely also capture your keyboard input from then on.
Or in a scenario where seizure of a device is likely incident to arrest (i.e. your mobile phone), or where the key is stored online, or where your key is derived from a passphrase.
Resistance to compelled disclosure (at least for historical stuff; they could still turn someone and force him to chat with people with an FBI agent watching over his shoulder...) is the benefit for PFS in the user to user case.
No, that's not true. HTTPS is simpler than GPG and also provides less security. You rely on HTTPS/TLS today with certificate-pinned DHE HTTPS connections to Gmail. Clearly, the people who use Cryptocat don't feel like they're getting enough security from Gmail. HTTPS is simpler to use because it's a simpler, less secure system.
How much less secure exactly? Certificate pinning certainly helps in some cases (unless the pinning itself is attacked, which may be possible).
You just seem to be confirming my point, that in the name of "more security" they're shunning everything that's not their idea of "perfect security" (which may be faulty). Result: fewer people using encryption.
It's one thing to use a fragile encryption key like the Cryptocat failure, another to exclude RSA in favour of ECC (except if you're dealing with leaking government secrets of course)
I don't know how to answer this question. I'm not sure what you mean by RSA and ECC. The answer to how much more secure GPG on your desktop is than TLS to Gmail is "much more secure".
The question can be answered in terms of "how many people do I need to trust in each case for my communication to remain private?" In the former case, it's "the intended recipient" and in the latter case, it's "a few thousand people".
RSA refers to Rivest-Shamir-Adleman and ECC refers to elliptic curve cryptography. They are both approaches to public-key cryptography. RSA has been in use for longer than ECC and is considered better understood. However, there are some exciting things going on with ECC; for example, it is thought that you can get the same or better security than RSA with a much smaller ECC key size.
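The key-size point can be made concrete with the commonly cited comparable-strength figures from NIST SP 800-57 (approximate values; check the standard's tables for the authoritative numbers):

```python
# Approximate comparable-strength key sizes, per the commonly cited
# NIST SP 800-57 tables: security bits -> (RSA modulus bits, ECC key bits).
COMPARABLE_STRENGTH = {
    80:  (1024, 160),
    112: (2048, 224),
    128: (3072, 256),
    192: (7680, 384),
    256: (15360, 512),
}

# At the 128-bit security level, the ECC key is 12x smaller than the
# RSA modulus, and the gap widens at higher levels.
rsa_bits, ecc_bits = COMPARABLE_STRENGTH[128]
print(rsa_bits // ecc_bits)  # -> 12
```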
SSL has various problems, mostly related to CAs.
Also, it is only easy to use because the server admin has done the hard part for you; therefore you have to trust them.
To apply SSL to a cryptocat style system would require both parties to generate keys and somehow verify them, not so simple.
While I think SSL/TLS is "less secure" in some sense than GPG, which is fundamentally end-to-end, I still think SSL/TLS is a good system, if for no other reason than that it's the focus of a huge amount of research.
The CA problems with HTTPS/TLS aren't fundamental to SSL/TLS. They're a consequence of browser UI. If you build your own software that uses TLS to make secure connections, you have a variety of options to avoid the browser CA problem; you can run your own CA (every instance of Burp Suite, the industry standard web pentest tool, does exactly that), or you can verify certificate fingerprints statically, or you can invent a new authentication system.
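The static-fingerprint option is only a few lines. A rough Python sketch (the hostname and fingerprint arguments are placeholders you'd pin in your own application):

```python
import hashlib
import socket
import ssl

def cert_fingerprint(der_bytes):
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_bytes).hexdigest()

def verify_pin(host, port, expected_fingerprint):
    """Connect over TLS and compare the server's leaf certificate against
    a statically pinned fingerprint, independent of the browser CA store."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
            return cert_fingerprint(der) == expected_fingerprint
```

A real deployment would also plan for key rotation (pin multiple fingerprints, or pin the key rather than the certificate), but the core check is this simple.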
Yes, that's a good point. However, you are always somewhat reliant on a third party to do the security for you, whether that is a CA or an app developer, and that third party has to be trusted. It doesn't solve the fundamental problem of doing:
Alice -> Bob
without doing:
Alice -> Facebook -> Bob
unless both Alice and Bob know how to generate and manage their own keys (and verify each other's keys).
Right, but webs of trust/PGP are inconvenient in a way that "look for the padlock icon" isn't. Which is likely why they are not in widespread use by the general populace.
It's not because crypto people don't care, it's because privacy is hard.
Say you and a friend are in opposite corners of a crowded room and you have to somehow come up with a mutual secret by yelling at each other with everyone listening. Do you think you'd be able to design a way to do it? Thanks to Diffie and Hellman, that's not only possible, but rather easy.
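As a sketch of that "yelling across the room" trick, here is toy Diffie-Hellman in Python (a demo-sized prime; real deployments use much larger groups or elliptic curves):

```python
import secrets

P = 0xFFFFFFFFFFFFFFC5  # demo prime (2**64 - 59); far too small for real use
G = 5

a = secrets.randbelow(P - 2) + 1  # Alice's private value (never yelled)
b = secrets.randbelow(P - 2) + 1  # Bob's private value (never yelled)

A = pow(G, a, P)  # Alice yells A across the room
B = pow(G, b, P)  # Bob yells B back

# Everyone in the room heard P, G, A and B, yet only Alice and Bob can
# compute the shared secret, because recovering a or b from A or B is
# the (hard) discrete logarithm problem.
alice_secret = pow(B, a, P)
bob_secret = pow(A, b, P)
assert alice_secret == bob_secret
```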
Do you think you could design a seamless way for all your snail mail to be private, even when the post office has an unlimited budget and its #1 priority is to read your mail? I'm pretty sure you'd make some concessions, such as "you have to lock it in a box before sending", and then make sure the recipient had the corresponding key, etc etc. Saying "these security guys make me jump through all these hoops, if they cared about me I could just send the exposed letter and it'd somehow be secure" is not only unfair, but completely unrealistic.
>Making technology accessible to everybody is very important
What if the technology is fundamentally flawed and the users are at risk for using it?
GPG and such are terrible to use, IMO, but at least they work. Cryptocat could be exploited with relative ease, at least at times. Considering their intended demographic, this is quite dangerous.
"Making healthcare accessible to everybody is very important"
So we should commend young, inexperienced kids with no medical training and an obviously deeply flawed understanding of neuroscience - for bringing easy-to-use "home brain surgery kits" to "regular people or even non-experts".
"At Trepan-o-cat, we've undertaken the difficult mission of trying to bridge the gap between practical first aid kit use, and frontal cortex lobotomy. This will never be easy."
Does anyone know if there are any issues with the javascript OTR library they use (https://github.com/arlolra/otr)? Or are all the issues only with their custom multi party chat encryption system?
> Private chats are not affected: Private queries (1-on-1) are handled over the OTR protocol, and are therefore completely unaffected by this bug. Their security was not weakened.
Hey, that sounds great to me.
Also, wouldn't it be possible to talk to multiple parties by simply doing the OTR dance with every party? It would be kind of a hack, but it would let you use that cool OTR protocol that is actually secure. Or am I wrong somewhere?
I would be careful before trying this. I don't know much about crypto, but you might be able to do some interesting statistical attacks if you are encrypting the same plaintext with multiple keys.
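One concrete version of that worry, sketched in Python: if a stream-cipher keystream (or key+nonce pair) is ever reused across recipients, XORing the two ciphertexts cancels the keystream and leaks the XOR of the plaintexts. (This is an illustration of a classic pitfall, not a claim about how OTR itself would behave.)

```python
import secrets

def xor(a, b):
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

m1 = b"attack at dawn".ljust(32)
m2 = b"retreat at dusk".ljust(32)

keystream = secrets.token_bytes(32)  # the mistake: reused for both messages
c1 = xor(m1, keystream)
c2 = xor(m2, keystream)

# An eavesdropper who never sees the keystream still learns m1 XOR m2,
# which with known or guessable plaintext is often enough to recover both.
assert xor(c1, c2) == xor(m1, m2)
```

Proper protocols avoid this by deriving a fresh key (or at least a fresh nonce) per recipient and per message.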
The men in "black suits" who were following Nadim Kobeissi didn't have anything to do with these bugs, right?
However, I'm sure the NSA can crack SSL/TLS in a matter of minutes, not because they have such powerful secret hardware, but because they probably have agents who spotted or added bugs here and there in different software.
Wouldn't that mean that even if you invented a perfectly secure super encryption, they could wiretap your OS and get the message in cleartext without your consent, thanks to backdoors, bugs and incompetence?