Oktane19: Cryptographic Wrong Answers



L Van Houtven: Hello everyone, so thank you all for coming, and welcome to Cryptographic wrong answers. So brief introductions, hi, I'm LVH, I am originally a cryptographer, and now I do a bunch of other stuff at a start-up that I co-founded called Latacora.

L Van Houtven: Latacora is a consultancy and we kickstart security teams for start-ups. So the idea is, if you are thinking about your first security hire, instead of trying to find that person and trying to figure out how to vet them, you hire us. We come as a bunch of people with a bunch of different areas of expertise, for example cryptography. Most of what I do in my day job is manage our cloud practice.

L Van Houtven: Now disclaimer, this talk does represent the opinions of my employer because, you know, it's a tiny company. It's easy to say that when you're a co-founder. But I don't speak for Okta or any of their partners. Generally I don't speak for anyone other than myself and Latacora. So I'm going to talk about the past 25 years or so and the things we can learn from that.

L Van Houtven: In the last 25 years, crypto, and anytime I say crypto that always means cryptography and never cryptocurrency unless I explicitly say so. So crypto has become necessary. Because in 1995, SSLv2 had just been released, right? And this is like phlogiston-era crypto; phlogiston is a scientific theory that has been completely debunked for a very long time. But the bottom line is, we didn't really know how things worked, we had a couple of theories and they completely fell over. Things were not good.

L Van Houtven: In 2005 like maybe a couple of companies vaguely understood how to store passwords, right? But still, we're not talking like a significant maturity level.

L Van Houtven: Now in 2015, you have SSO everywhere, every single application has like 30 external services or 20 external services that all have API credentials that you need to store somehow. Decent HTTPS, decent password storage, decent data encryption are now table stakes. Like it's not even optional anymore, your session cookie is suddenly an encrypted blob for some reason. And we have magic Internet money that uses chained blocks and zero-knowledge proofs. So crypto has become much more a part of our daily lives.

L Van Houtven: And crypto has generally become, I think, less scary, at least for cryptographic engineers. I don't know if that's true for a wider developer audience. But again, in 1995, phlogiston-era crypto, nobody really knew what they were doing, right? And the official story, if you tried to learn more, was very often what I call abstinence-only education, where people just tell you, oh, you want to do some cryptography? Okay, don't.

L Van Houtven: In 2005, hypothetically you could get it right. The information was getting better, it was still kind of hard to access, but you had a snowball's chance in Hell of making it. There was still abstinence-only education, right? In 2005, if you wanted to go and learn something, odds are you were still going to get a door slammed in your face.

L Van Houtven: Now in 2015, I think that situation has markedly improved. There's a couple of reasons why it's improved, some of it is about accessibility, some of it's about education, and some of it is about conservatism.

L Van Houtven: So for accessibility, like in 2005, what did you have, right? Realistically, if you wanted to encrypt something in a database, you were going to call an OpenSSL API. And that was not going to end well, and it's not your fault, it's because a lot of these APIs are just a complete beast to use. Documentation is missing, incorrect, you know, all of the above.

L Van Houtven: In 2015, we really worked very hard on producing better libraries with primitives that do what it says on the tin. They come with instructions on how to use them, and ideally there is no wrong way to use them. And there's a much, much better chance that you're going to get a libsodium call right than that you're going to get an OpenSSL call of any kind right.
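To make that concrete without libsodium itself, Python's standard library has a similarly "does what it says on the tin" primitive: keyed BLAKE2. A minimal sketch of the API shape a good library gives you, one key, one function, no options to get wrong:

```python
import hashlib
import hmac
import secrets

key = secrets.token_bytes(32)  # one key, generated the one right way

def tag(message: bytes) -> bytes:
    # Keyed BLAKE2 is a MAC by construction: no HMAC wrapper to assemble,
    # no digest/key-size choices beyond the one parameter.
    return hashlib.blake2b(message, key=key, digest_size=32).digest()

def verify(message: bytes, candidate: bytes) -> bool:
    # Constant-time comparison, so there is no timing side channel to misuse.
    return hmac.compare_digest(tag(message), candidate)

t = tag(b"hello")
print(verify(b"hello", t))   # True
print(verify(b"hullo", t))   # False
```

The point is the shape: the misuse-resistant path is the only path.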

L Van Houtven: From an education perspective, we've done a lot of work as well. There's Cryptopals, which is named because it's supposed to be like a pen pals thing. The idea is, there's a bunch of exercises that teach you how to break cryptographic designs that have some kind of flaw in them. And once you complete a set, you go to the next one, and the next one, et cetera, et cetera.

L Van Houtven: And the idea is to make a lot of these attacks more approachable. I get the impression that still to this day, but certainly in the past, a lot of cryptographic attacks have been maybe not taken as seriously as they should have been. And I think that's because a lot of developers feel like, well, nobody's actually going to mount a cryptographic attack, right, that's for like ninja alien space hackers. But the idea behind Cryptopals is, well no, you, plus five minutes from now, plus a Perl script, or a Python script I guess these days, are going to be able to break this thing, right? This is not abstract, we can make this very concrete for you right now.
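To give a flavor of that "you plus a Python script" style (the message and the crude scoring heuristic here are made up for illustration, in the spirit of the early Cryptopals exercises): single-byte XOR looks like encryption, and brute force breaks it in a handful of lines.

```python
def xor1(data: bytes, key: int) -> bytes:
    # "Encrypt" by XORing every byte with one secret byte.
    return bytes(b ^ key for b in data)

def score(text: bytes) -> int:
    # Crude "looks like English" score: count ASCII letters and spaces.
    return sum(65 <= b <= 90 or 97 <= b <= 122 or b == 32 for b in text)

secret_key = 0x5A  # the attacker does not know this
ciphertext = xor1(b"attack at dawn, bring the perl script", secret_key)

# The attack: try all 256 keys, keep the most plausible plaintext.
best = max(range(256), key=lambda k: score(xor1(ciphertext, k)))
print(best == secret_key)              # the key falls right out
print(xor1(ciphertext, best).decode())
```

Five minutes, no ninja alien space hackers required.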

L Van Houtven: I wrote a book called Crypto 101, and I give it away for free. Same approach; the only difference is that instead of just focusing on exercises, it focuses on walking you through it.

L Van Houtven: One of the partners of Latacora, Thomas Ptacek, has the questionable honor of being the person with the most karma on Hacker News. And the dude posts a lot, right? And one of the things that he posts a lot about is cryptography. One thing that we wrote recently, Tom and I, is a blog post called Cryptographic Right Answers. It was an update to a series of previous blog posts, and the idea was: instead of you googling something and getting a bunch of answers, how about we just tell you what we think you should be doing for this one specific thing. So it's very straight and narrow. We're not saying this is the only thing you could do that might be correct; we're saying, here's what we think you ought to be looking at first. And obviously the name of this talk is a direct reference to that blog post.

L Van Houtven: And also generally, just conservatism. I think you can safely summarize this talk as: use less exciting things. There's a lot of stuff that will end up biting you. So really what I'm saying is, all is well, right? Well, maybe it's not. I don't think we're out of the woods yet.

L Van Houtven: A lot of the problems that we're still faced with are, you know, people will make bad choices today. Like the market penetration of our attempts at education isn't 100%. A lot of those choices are very hard to walk back. Maybe you're trying to encrypt some stuff in a database, and if you wanted to update it, you'd have to re-encrypt the entire database, and that might be really annoying. There might be wire protocols in embedded hardware, so if we told you to go upgrade, you'd now need to go RMA 10,000 devices. Even just waiting for people to patch stuff, even in the easy case, turns out that people take a very long time. I mean, there are entire companies, like for example Red Hat, whose main source of revenue, as far as I can tell, is software necromancy.

L Van Houtven: So even though we've done a lot of work and we've done a lot better, there are still going to be bad ideas tomorrow, and they're still going to impact people tomorrow. So the idea is, okay, let's look back at some of the things that haven't panned out over the course of the last five, ten, twenty years and let's see what we can learn from that. And let's see, if we come across a new spec tomorrow, can we decide ahead of time whether or not that's likely to be a good idea, whether we would still agree with it five years from now.

L Van Houtven: Now the talk is necessarily kind of phrased as a negative, right? As cryptographic wrong answers. I'm going to say a lot of things about protocols that are not very nice. And maybe that's okay, because we've said a lot of positive things in the past, and when we asked for feedback a lot of people seemed to say, no, we also want you to tell us how it goes wrong. But I want to be clear, I'm not impugning anyone's character; in fact, you'll notice that I name very, very few names in this presentation. First of all, this is just about the technology, and second of all, I'm not saying that the technologies are broken, I'm saying that they could have been done better.

L Van Houtven: So the alternative title slide for this talk is Cryptographic Wrong Answers, a rant in B-flat minor. B-flat minor is the funeral march, if you're wondering. That's to set the literal and proverbial tone for the rest of the talk. Now of course this is not the right room, because you're all here, you're all people of exquisite taste and distinction. You already know all of the things that I am about to say. Even then, I hope to make this talk useful, because you're going to be in a conversation tomorrow with someone on the Internet, or perhaps a co-worker, and that person's going to say something that might not be great. And it would be helpful if you had a bunch of rhetorical tools in your proverbial toolbox to help that conversation go right.

L Van Houtven: A lot of good talks, in my opinion, are born out of a specific frustration, and for me, the specific frustration for this talk, and I'm not saying this talk is good, but you know, the audacity of hope. The frustration is that a lot of really bad, no-good, and downright silly ideas can be made to sound like good ideas to smart and well-meaning developers. And when I say "be made to", that sounds like there's some sort of active deception going on; that is not what I'm saying. What I'm saying is just that it's very easy to make a bad idea sound good, and we'll show some examples of that.

L Van Houtven: So for example, OAuth 2.0. When you look at the predecessor of OAuth 2.0, OAuth 1.0a, it did HMAC-SHA1, which is a weird crypto thing, right? And the explicit goal in the spec was to sort of maybe kind of work without TLS. And the OAuth 2.0 early drafts immediately called out: weird crypto is bad, you should just use TLS, what are we doing with this whole HMAC-SHA1 thing? All of this sounds like good ideas, I think, to well-meaning, well-intentioned developers. So, some of the fallacies here. First of all, HMAC-SHA1 isn't that weird; I have a hard time thinking of a more conservative cryptographic primitive than HMAC-SHA1, and there are few things I have more faith in. Second of all, TLS gets you transport security, but if you look at the actual vulnerabilities OAuth 2.0 is faced with, very, very, very few of those are transport security vulnerabilities. There are also redirect bugs, CSRF bugs, domain confusion; we'll go through some of those in more detail. But the bottom line is that OAuth is perfectly capable of losing your credentials over TLS.

L Van Houtven: Another example is JWT. Now people say, we want encryption, encrypted tokens, crypto is complex, and interop is good. Some of the fallacies there: first of all, I doubt that you actually legitimately want a signed token, except in very, very few cases. If you're absolutely certain that it's better than 16 random bytes, it should be easy to answer why it's actually better than something you go store in the database. You have all sorts of problems like revocation and these sort of repeated, bananas vulnerabilities that lead to authorization bypass. Now you can say, look, those are all implementation vulnerabilities, it's not actually JWT's fault. I disagree. If I can find the same vulnerability in five popular libraries, then maybe the spec did a bad job of making sure people avoided that vulnerability. So in conclusion, a) I'm not entirely sure you wanted signed tokens to begin with, and b) if you do, I don't think you want JWT.
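As a sketch of the 16-random-bytes alternative (the function names and the dict standing in for a database table are illustrative, not a real implementation): the token is opaque, the server holds all the state, and revocation is a one-line delete.

```python
import hashlib
import secrets

SESSIONS = {}  # stand-in for a database table

def issue_token(user_id: str) -> str:
    token = secrets.token_urlsafe(16)  # 16 random bytes, URL-safe encoded
    # Store a hash of the token so a database leak doesn't leak live tokens.
    SESSIONS[hashlib.sha256(token.encode()).hexdigest()] = user_id
    return token

def check_token(token: str):
    return SESSIONS.get(hashlib.sha256(token.encode()).hexdigest())

def revoke_token(token: str) -> None:
    # Revocation: the part JWT makes hard is a dictionary delete here.
    SESSIONS.pop(hashlib.sha256(token.encode()).hexdigest(), None)

t = issue_token("alice")
print(check_token(t))   # alice
revoke_token(t)
print(check_token(t))   # None
```

Nothing to parse, no algorithms to negotiate, nothing for the client to forge.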

L Van Houtven: And then finally, DNSSEC. Again, a very well-intentioned idea: everything starts with a DNS look-up. DNS is plain text and it's very easy to spoof, right? If I'm in the same Starbucks as you, then I can basically make DNS say whatever I want it to say. So let's sign DNS records. There's a whole pile of fallacies in there, like for example DNSSEC doesn't protect the last mile of DNS: it doesn't do anything when your laptop is asking the local DNS server what something is. And spoofed DNS doesn't actually matter for compromising TLS security. Literally anyone who has logged into a captive portal, presumably either on the flight here or at the hotel, has run into this: your browser will tell you, hey, Google.com doesn't actually look like Google.com, because there's a naughty DNS server in the way.

L Van Houtven: There are all sorts of other problems, but if I keep talking about DNSSEC I'm going to be talking for the entire rest of the slot. So the pattern is: there's a problem, there is a proposed solution, it sounds like a good idea, and it turns out to be really bad. And I find that very frustrating. I want the good answer to be obvious.

L Van Houtven: Now let's be clear, I realize that I'm at Oktane, and I realize that Okta is a company that ships OAuth 2.0 and JWT. Just to repeat, anything I say, I don't speak for Okta, yadda, yadda, yadda. I am not saying that if you use OAuth 2.0 and JWT then therefore you have a vulnerability, definitely, for sure. I'm saying that they are poor specs. I'm saying that there are problems that were caused by deficiencies in the design that could have been avoided. So the question is, can we learn from that, and how? DNSSEC is definitely always bad though, so that doesn't count for DNSSEC.

L Van Houtven: So in conclusion, we're going to look for pitfalls that we can recognize. And to do that I'm going to run through a bunch of bad ideas that keep coming back and that every time lead to disaster for some reason.

L Van Houtven: And so one of the very popular ones is algorithmic agility. Same spiel, you know, the idea is: look, we have primitive A, and primitive A might be, I don't know, AES or something. But what happens if it breaks? We like it, we want to use it primarily, but we want to have an option just in case, something to fall back on. And the idea is we support both, and when A breaks, we just go turn on B, and everything's copacetic. A very related problem is negotiation. So let's say that I support A, B and C, you support B, C, and D, and somehow we're going to figure out that B and C are what we can both agree on, but we like C better, so somehow hopefully we're going to end up with C. This is a really, really plausible-sounding engineering decision, and it turns out to very regularly get us into trouble.
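A toy model of why negotiation bites (the suite names are made up): if the offered list isn't itself authenticated, a man-in-the-middle can strip the strong options and both sides happily agree on the weak one.

```python
def negotiate(server_prefs, client_offer):
    # Pick the server's most-preferred suite that the client also offered.
    for suite in server_prefs:
        if suite in client_offer:
            return suite
    raise ValueError("no common suite")

server = ["C-strong", "B-okay", "A-legacy"]
client = ["B-okay", "C-strong", "A-legacy"]

print(negotiate(server, client))  # C-strong: the happy path

# An attacker who can tamper with the unauthenticated offer just deletes
# the good options; the protocol "works" and lands on the legacy suite.
tampered = [s for s in client if s == "A-legacy"]
print(negotiate(server, tampered))  # A-legacy: downgrade
```

This is the shape of FREAK, Logjam, and friends: nobody chose the weak suite, the negotiation did.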

L Van Houtven: The poster child for this is TLS. So TLS has a cornucopia of things that you need in order to make it work, right? There's signing, there's key agreement, there's bulk encryption, there are MAC algorithms in there. I'm not even going to mention the variety of curve choices and key sizes. But for each of these choices, TLS gives you a handful of options. And it's not like a perfect Cartesian product, but it's pretty darn close. Now the question is, why does it hurt to support more things, you can just go turn them off. Well, it doesn't really work that way, because very often you'll see protocols come back from the dead. So FREAK and Logjam were real-world TLS vulnerabilities that exploited export-grade ciphers, which pretty much died out in the late nineties.

L Van Houtven: DROWN in 2016, a very recent TLS vulnerability, exploited vulnerabilities on real servers on the Internet, and a very significant portion of them were vulnerabilities in SSLv2, which is almost 20 years older. People generally don't minimize cipher suites: if you look at SSL Labs, at the A-plus servers, and you go click through on them, you'll see that sure, they support good things, but they also support a bunch of bad things. So it is true that TLS in the past has been saved by algorithmic agility, and I contend that this is still a bad idea. Those of you who remember BEAST, an attack from a couple of years ago: as a sort of emergency answer to BEAST, we started turning on RC4 everywhere. But the problem is, at the time, we already knew that RC4 was broken. It was not as badly broken as the BEAST-affected cipher suites were, but it was still pretty broken.

L Van Houtven: The reason I think that this is still not an argument for algorithmic agility is that the real problem is that updates lagged. People were way behind. BEAST was from 2011, POODLE was in 2014, a similar sort of vulnerability. But the core attack they exploit is from 2002, and it was fixed in TLSv1.1, which was 2006, again predating all of these attacks by many years. And the real reason is that browsers took like six years to implement it. In the end they only implemented it because there was a vulnerability and a clear and direct need for it. And I don't want to put all the blame on browsers; browsers were following the servers' lead, and the servers were even worse. So really, I don't think the answer here is we need algorithmic agility. The answer is, you need to patch your software.

L Van Houtven: As a counter-example to algorithmic agility, and this is an example I'm going to use a bunch in this talk: WireGuard. WireGuard is a modern VPN. It is currently available on every platform, and it brings me great joy to say that, because for a long time it was Linux-only. There's one version of WireGuard. There is no way to misconfigure WireGuard: it is either right or it doesn't work. It doesn't negotiate, it always gets you strong primitives. And we expect it to hold up for the foreseeable future. And if something happens to it, we will get a WireGuard 2.0. Well, I can't speak for the author of WireGuard, but with almost epistemological certainty I can tell you, nobody will add version negotiation to WireGuard. There will just be a new version of the protocol, and you either update or you don't.

L Van Houtven: JWT for some reason is not really the poster child for algorithmic agility. I don't know why that is, because it also supports a cornucopia of algorithms, and it does get it into trouble, and we'll talk about that more in the rest of the talk. But just to give you an idea: for example, with JWT you can do RSA encryption. You can do that with PKCS#1 v1.5 or RSA-OAEP. The good news is at least one of these is safe; the bad news is that it's not the one that anyone actually implements. Also, JWT supports alg: none, and I'm not sure if that counts as an algorithmic agility bug or not. But the takeaway is that algorithmic agility was a very defensible idea in the nineties; all of our primitives and protocols were legitimately worse then, and we had good reason to have less faith in them. But at the end of the day, it turns out that it caused significantly more problems than it solved, and even very recently, again DROWN in 2016, we're seeing attacks from 20 years ago come back just because of things that are fundamentally the consequence of algorithmic agility decisions. So instead: version your protocols, and update aggressively.

L Van Houtven: Another really bad idea is committees. The problem with committees is that they have no focus, and obviously I'm overgeneralizing. But they produce kitchen-sink specs, and they are very often so slow that they end up being unresponsive to the real problem, or they end up being so distracted by wanting to write a specification that they forget what the actual problem was that they set out to solve to begin with. Now, we're going to talk about kitchen-sink specs plenty in this talk, but just to give you an idea about unresponsiveness.

L Van Houtven: So I mentioned that OAuth has serious problems, and I think that committees are one of the reasons that OAuth has problems. So 2010, OAuth 1.0; just to give you an idea of contemporaneous technologies, that was Backbone, that was Angular, right? Like single-page apps were happening. 2012, OAuth 2.0: we've got the iPhone 4S, and React was right around the corner. So clearly native and single-page apps were already a thing when OAuth 2.0 came out. It was not just a fad; it was pretty clear that that was going to be a significant use case to support. So the OAuth 2.0 RFC supports a number of flows, which are just different ways that somehow a third party can get some credentials on behalf of a user.

L Van Houtven: Now when you look at RFC 6749, the original OAuth 2.0 RFC, and you ask it what to do about native apps, there's literally a section about native apps in there, and I really recommend that you read it, because it's very strange. It suggests things like, well, I guess you could register a scheme with the operating system. Or I guess you could run a local web server and then you could talk to that web server. Or maybe you want a web extension or something. Or I guess you could embed a browser. That one I particularly enjoy, because if you recall, the entire point of OAuth was, let's stop giving our credentials to random applications just because they want to access my pictures or whatever. And apparently the answer is, oh, actually you should type your username and password into a web view that is entirely controlled by that one application. Which is exactly the same thing as giving them your username and password.

L Van Houtven: Now, predictably, because there was essentially no recommendation, this resulted in disaster. A very common problem with OAuth 2.0 has been: let's say you have a phone with two apps on it, and because the phone is incapable of making sure that the redirect in the auth flow, the one carrying the authorization code, goes to the right app, a malicious app can intercept the code. And because you can download the app, I can inspect the APK, the thing that I download from the app store, so it's not like you can put a secret or credential inside the good app to distinguish it from the bad app. And as soon as you have the authorization code, it's game over.

L Van Houtven: Now OAuth eventually fixed this with a thing called PKCE. And PKCE, actually the diagram that I just showed you is lifted from the PKCE spec, reintroduces cryptographic binding. So if you remember, the entire point of OAuth 2.0 was, oh, forget about that whole HMAC-SHA1 thing, we should just do TLS instead; PKCE actually reintroduces cryptographic binding. This was an open problem for three years, but don't worry, I'm told absolutely nobody wrote any mobile apps in that period. Also, by the time that PKCE got done, the major mobile platforms already had, or were contemporaneously releasing, a mechanism to securely link into an application. And the fundamental problem that PKCE solves is: how do we work around the fact that iOS or Android can't securely link me into the right application?
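For reference, the cryptographic binding PKCE adds is small. Per RFC 7636, the challenge is the base64url-encoded (no padding) SHA-256 of the verifier's ASCII bytes; a sketch, with hypothetical function names:

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    # RFC 7636: the verifier is a high-entropy random string; the challenge
    # is base64url(SHA-256(ascii(verifier))) with the '=' padding stripped.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

def server_checks(challenge: str, presented_verifier: str) -> bool:
    digest = hashlib.sha256(presented_verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode() == challenge

v, c = make_pkce_pair()
print(server_checks(c, v))                   # True: the real client
print(server_checks(c, "intercepted-code"))  # False: the interceptor never saw v
```

The challenge travels with the initial request; the verifier only appears at token exchange, so an app that merely intercepted the authorization code can't redeem it.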

L Van Houtven: So even when it's successful, it's almost so slow as to be unsuccessful again. But the good news is that OAuth 2.0 is complete now, and we sort of know what to do, except for that whole PKCE thing. So it's 2019, and I have a single-page app and I want to know what to do. If you believe the Auth0 or Okta docs, the answer is the implicit flow. If you believe Aaron Parecki, an Okta employee I believe, the answer is an auth code flow with no secret and maybe PKCE. If you believe the security best practice RFC from December 2018, then the answer is, nope, everyone use PKCE at all times, forget the implicit flow. And if you believe the top-voted Stack Overflow answer, then the answer is, no, don't use PKCE, that makes no sense for single-page applications, didn't you read the spec?

L Van Houtven: So in conclusion, nobody knows. But the good news is, we do know how to do the auth flow, except for the whole PKCE stuff. Except there are still redirect bugs. So when I perform an auth flow, let's say I sign into Okta, I type in my username and password, and I get redirected back to the application, right? That redirect has to happen in some mechanical way; there's an HTTP thing that happens. Does anyone happen to know what the correct status code for that redirect is? Because there are a lot of 30x's in there.

Speaker 2: 302.

L Van Houtven: 302? 302 is extremely close, and you are usually okay as long as you're in browsers. So the underlying problem here, the really, really bad one, the one that's always bad, and 302 is only sometimes bad in this sense, is 307 Temporary Redirect. Which some web frameworks will encourage you to use if you're saying that something is not a permanent redirect. The underlying problem is, in the HTTP sense, sometimes a redirect means, okay, go look at this other thing next. And sometimes a redirect means, oh, the thing that you were asking for has a different name now, and you should go ask this other URL. But the problem is, with that second one, the browser is going to try to repeat the request. When you repeat that request, think about what the last thing is that you probably sent to whatever your IdP was before it decided to authorize you.

L Van Houtven: It is almost certainly your password and your TOTP code. So now you've conveniently disclosed your username, password, and TOTP code to a different third party. Now, there are other bugs, but my point is about specification, and the lack thereof. The point of this talk is not 30 minutes of dunking on OAuth.
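A simplified model of those redirect semantics (this mirrors how browsers behave in practice, not an exhaustive reading of the RFCs) makes the trap visible:

```python
def replayed_method(status: int, original_method: str) -> str:
    # 303 always switches to GET. 301/302 are, in practice, replayed as GET
    # by browsers when the original request was a POST (body dropped).
    # 307/308 repeat the request exactly: same method, same body.
    if status == 303:
        return "GET"
    if status in (301, 302):
        return "GET" if original_method == "POST" else original_method
    if status in (307, 308):
        return original_method
    raise ValueError("not a redirect status")

# The login form POSTs a password to the IdP; what reaches the next hop?
print(replayed_method(302, "POST"))  # GET  -- the body (password) is dropped
print(replayed_method(307, "POST"))  # POST -- the password is replayed verbatim
```

With 307, the credentials you just submitted get re-sent to whatever URL the redirect points at, which is exactly the disclosure described above.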

L Van Houtven: But I think the core point that I want to get to is: you either have to be a spec, or you have to be a meta-spec. OAuth 1.0a was a concrete specification, and by comparison, OAuth 2.0 was a set of hopes and dreams. And I mean that in a, well, not a super nice way, but a sort of nice way, in the sense that it's not bad to be a meta-spec. It is bad to be a meta-spec that pretends to be something that you can implement, because you can't.

L Van Houtven: And the RFC titles kind of give it away. The OAuth 1.0a spec calls itself a protocol; if you look at the OAuth 2.0 spec, it's "the authorization framework", which is significantly more vague.

L Van Houtven: OAuth 2.0 is not a spec. It looks like one to most people. Earlier, when I compared OAuth 1.0a to 2.0 at the beginning of the talk, and tell me if I'm wrong, I don't think there was anyone in the audience that went, no, no, no, that's a category error, one of those things is an orange and the other one is a wrench, you can't compare the two. The RFCs even say one deprecates the other, one is deprecated by the other; it's very clear that you're supposed to be able to replace them. But I don't think that's really true. I think OpenID Connect's success is in part because it gets us a lot closer to being an actual specification. There are serious problems with OpenID Connect as well, most of them inherited from OAuth 2.0, but at least you could plausibly implement it.

L Van Houtven: Now that said, this is the one slide that I'll admit is a little bit of a dig, but I think it's funny, it's in good humor. And it's not for lack of effort, because, serious question: what do you think has more pages? The JOSE IETF working group docs, that is JWT, JWK, JWS, JWA, JWE, all of those specs, which are a bunch of documentation, a bunch of pages. The OAuth IETF working group docs, also a bunch of documentation. And James Joyce's Ulysses. Which one of these has the most pages in it? Anyone want to guess? The punchline is it's OAuth. So OAuth has significantly more, well no, not significantly, slightly more pages than James Joyce's Ulysses.

L Van Houtven: So you can be a meta-spec and still be good. And my example of that is Noise. Noise is a pattern language for building wire protocols. You pick a pattern, you get some properties. It explicitly doesn't fix the implementation; it tells you what you can replace about it and still have a protocol at the end. That said, it's still far fewer options than JWT; we'll see later that JWT gives you honestly more options than you should have within the same spec. Everything sort of follows the same mechanism. And as an example of a spec built on it, again I'm going to use WireGuard: WireGuard is based on Noise. WireGuard is a very concretely, byte-level defined, you-either-do-WireGuard-or-you-don't version of Noise.

L Van Houtven: So the takeaways for good specs: they describe a current design, they're not building a new one, at least not in the context of the committee. Someone designed this and they're telling you about it. There's a handful of ways you can operate it, ideally only one. It's highly specified, it has test vectors: there's a hex string there, and if the thing you get out of your implementation isn't that hex string, it is wrong. There are clear usage guidelines, et cetera, et cetera. So I want narrower specs.
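That hex-string property is easy to demonstrate. For example, the SHA-2 standard publishes the digest of the input "abc"; a test vector is exactly this, a fixed input and the required output:

```python
import hashlib

# Published test vector for SHA-256("abc") from the SHA-2 standard:
VECTOR_INPUT = b"abc"
VECTOR_DIGEST = "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"

# If your implementation produces any other string, your implementation
# is wrong, full stop -- no room for interpretation.
assert hashlib.sha256(VECTOR_INPUT).hexdigest() == VECTOR_DIGEST
print("implementation matches the published vector")
```

A spec with vectors like this leaves implementers nothing to argue about.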

L Van Houtven: Another problem that keeps coming back is multi-step processing. The idea is that you have to parse the message in order to figure out what to do next. Typical examples of that are the JWT alg header, pretty much any SAML structure, and TLS. There's a concept called the cryptographic doom principle, first coined by Moxie Marlinspike, who is a very accomplished cryptographic engineer: if you have to perform any cryptographic operation before verifying a MAC on a message you've received, then it will somehow inevitably lead to doom.
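The safe ordering, encrypt-then-MAC with the check up front, is only a few lines. This sketch authenticates an opaque ciphertext blob; the encryption and decryption steps themselves are elided, since the point is purely where the MAC check sits:

```python
import hashlib
import hmac
import secrets

MAC_KEY = secrets.token_bytes(32)

def protect(ciphertext: bytes) -> bytes:
    # Encrypt-then-MAC shape: the tag covers the ciphertext, on the outside.
    tag = hmac.new(MAC_KEY, ciphertext, hashlib.sha256).digest()
    return ciphertext + tag

def receive(blob: bytes) -> bytes:
    body, tag = blob[:-32], blob[-32:]
    expected = hmac.new(MAC_KEY, body, hashlib.sha256).digest()
    # Constant-time check BEFORE any padding check, parsing, or decryption.
    if not hmac.compare_digest(tag, expected):
        raise ValueError("bad MAC")  # one error path, nothing for an oracle
    return body  # only now would the (elided) decryption step run

blob = protect(b"opaque ciphertext bytes")
print(receive(blob) == b"opaque ciphertext bytes")  # True
```

Because no cryptographic processing happens before `compare_digest` succeeds, a tampered blob produces exactly one observable behavior, which is what the doom principle asks for.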

L Van Houtven: One example of that is PKCS7 padding. If you have AES-CBC, which is a very common way to encrypt a stream, it can only encrypt multiples of 16 bytes. Obviously not every message is a multiple of 16 bytes, so sometimes you have to pad it at the end. And the way you do that is to add the missing bytes, each with the same value as the number of bytes that are missing. So if you have three bytes missing, you add three 03 bytes. And when you're decrypting, you have to check the padding. Now, this has messed up TLS: the POODLE attack and the BEAST attack, for example, are cases of TLS falling over because it doesn't validate a MAC first. And the reason for that is the way TLS does it: you first MAC the plain text, then you combine them, and then you encrypt. Which means that when you want to decrypt, you first have to decrypt, and now you have the plain text, the padding, and the MAC, and you have to check whether the padding is valid before you move on to the MAC.
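PKCS7 padding itself is tiny, which makes it easy to see where the oracle hides; a sketch:

```python
def pkcs7_pad(data: bytes, block: int = 16) -> bytes:
    # Always pad: a message already on a block boundary gets a full
    # block of padding, so unpadding is unambiguous.
    n = block - (len(data) % block)
    return data + bytes([n]) * n

def pkcs7_unpad(data: bytes) -> bytes:
    n = data[-1]
    if n < 1 or n > len(data) or data[-n:] != bytes([n]) * n:
        # In a MAC-then-encrypt design, whether THIS branch is taken is
        # observable to the attacker: that observable difference is the
        # padding oracle behind POODLE and friends.
        raise ValueError("bad padding")
    return data[:-n]

padded = pkcs7_pad(b"thirteen byte")  # 13 bytes -> three 0x03 bytes appended
print(padded[-3:])                    # b'\x03\x03\x03'
print(pkcs7_unpad(padded))            # b'thirteen byte'
```

Verify the MAC first and the attacker never gets to poke at this branch with chosen ciphertexts.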

L Van Houtven: And all of these attacks, Lucky Thirteen, Lucky Microseconds, just a giant pile of TLS attacks, are all a consequence of this very, very simple flowchart where there's just one box on the wrong side. There are only two boxes, and somehow they got put in the wrong order. And to be fair, this was 1995; we legitimately did not know what the right answer was. There were people who said, no, this is actually the superior design because XYZ. But the jury is no longer out on that.

L Van Houtven: In SAML, any part of the message can be signed or encrypted. Real IdPs really use a fraction of the functionality, but the problem is it's not always the same fraction. So implementers have to deal with the entire spec, and attackers get to play with the entire spec, which is not a good thing. SAML uses XML signatures, and there are basically two popular implementations: one of them is in the Java standard library, the other one is libxmlsec1. Now, those libraries are themselves sometimes somewhat troublesome, but neither of them is your XML parser, which means whatever's validating your signature can disagree with your XML parser about what that XML means.

L Van Houtven: For example, let's say that I had some SAML for user@user.com, and I put a comment there in the middle. I legitimately own the domain user.com.evil.com, and I manage to get an IdP to sign that for me. This was a real vulnerability that got published in the last two years or so. The problem is there are multiple canonicalization strategies. DSIG will typically do one called exc-c14n, because that's more or less the only one that counts. And parsers will generally just go, sure, whatever, it's some XML parsing. So the problem is signature validation will disagree with your parser about what the actual XML tree looks like, and then you get a vulnerability like that.
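A toy illustration of that disagreement, using Python's built-in ElementTree. This simulates the parser mismatch in miniature; it is not real SAML or xmlsec code, and the truncating "validator" is faked with a plain string split:

```python
import xml.etree.ElementTree as ET

# The attacker gets the IdP to sign this NameID; they own user.com.evil.com.
assertion = "<NameID>user@user.com<!---->.evil.com</NameID>"

# ElementTree drops the comment and joins the text around it:
full_identity = ET.fromstring(assertion).text

# A component that treats the comment as a boundary (simulated here with a
# string split) sees only the part before it:
truncated_identity = assertion.split("<!--", 1)[0].split(">", 1)[1]

assert full_identity == "user@user.com.evil.com"
assert truncated_identity == "user@user.com"   # two views of one signed blob
```

One component thinks the signed identity is the attacker's address, the other thinks it is the victim's, and both are looking at the same bytes.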

L Van Houtven: As a general rule: canonicalization, bad idea. JWT has a very similar problem, lots of supported modes of operation, all gated on alg. So you have asymmetric encryption, you have ECDH with an ephemeral and a static key, you have symmetric encryption, you have signing, you have MACing. These are totally different things, right? They don't fit in the same universe, they have completely different opinions on how you should be using them. But JWT combines them all in the same spec.

L Van Houtven: An example of where that breaks: JWT has RS256, which is an RSA signature. I send you an HS256 token, which is HMAC. Your JWT library uses your RSA key material for some reason that I can't even begin to fathom, and specifically it will use the public key. But of course the public key is public, which means I know the key that was used to authenticate the token, and you get the idea. And then you get arbitrary JWT forgery. Again, you could argue this is an implementation bug, not a spec bug. I disagree: if I can find the same vulnerability in multiple implementations, then I consider that significant evidence that the real problem is the spec did not do a good enough job preventing that vulnerability from existing.
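That key confusion can be reconstructed with a toy verifier. This is deliberately simplified, not a real JWT library, and the PEM string is a stand-in for actual RSA key material:

```python
import base64, hashlib, hmac, json

def b64(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def unb64(data: bytes) -> bytes:
    return base64.urlsafe_b64decode(data + b"=" * (-len(data) % 4))

def hs256_token(payload: dict, key: bytes) -> bytes:
    signing_input = b64(json.dumps({"alg": "HS256"}).encode()) + b"." + b64(json.dumps(payload).encode())
    return signing_input + b"." + b64(hmac.new(key, signing_input, hashlib.sha256).digest())

# The server's RSA *public* key: by definition, everyone has a copy of it.
RSA_PUBLIC_KEY = b"-----BEGIN PUBLIC KEY----- ...public, known to all..."

def vulnerable_verify(token: bytes, rsa_public_key: bytes) -> dict:
    header_b64, payload_b64, sig_b64 = token.split(b".")
    header = json.loads(unb64(header_b64))
    if header["alg"] == "HS256":
        # The bug: the attacker chose the alg, so the RSA key bytes get
        # reused as an HMAC secret.
        tag = hmac.new(rsa_public_key, header_b64 + b"." + payload_b64, hashlib.sha256).digest()
        if hmac.compare_digest(b64(tag), sig_b64):
            return json.loads(unb64(payload_b64))
    raise ValueError("bad token")

# The attacker forges a token HMAC'd under the well-known public key:
forged = hs256_token({"sub": "admin"}, RSA_PUBLIC_KEY)
assert vulnerable_verify(forged, RSA_PUBLIC_KEY)["sub"] == "admin"
```

The server expected RSA signatures, but because the algorithm choice lives in attacker-controlled data, the attacker downgraded it to an HMAC under a key everybody knows.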

L Van Houtven: Again JWT: alg: none. Obviously it's nice that it's mandatory to implement, and I say that as someone who is regularly in the position of an attacker, at least. You know, literally any token would validate, and it's not great. You could argue about what sort of bug that is, it depends on your perspective, and I don't really care for the ontology here. But the takeaway is: safe specs have one authenticator, it is always of the same type, and it is all the way on the outside of the thing. And you can't do anything until you validate that authenticator.
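That takeaway looks something like this as a minimal sketch (names illustrative): one authenticator, always the same type, over the entire opaque body, checked before anything is parsed.

```python
import hashlib, hmac, json, os

KEY = os.urandom(32)

def seal(payload: dict) -> bytes:
    body = json.dumps(payload).encode()
    # One HMAC over the whole message, all the way on the outside.
    return hmac.new(KEY, body, hashlib.sha256).digest() + body

def open_sealed(blob: bytes) -> dict:
    tag, body = blob[:32], blob[32:]
    # Validate the authenticator first; only then touch the contents.
    if not hmac.compare_digest(tag, hmac.new(KEY, body, hashlib.sha256).digest()):
        raise ValueError("invalid token")
    return json.loads(body)

assert open_sealed(seal({"user": "alice"})) == {"user": "alice"}
```

There is no algorithm field for an attacker to play with, and nothing inside the blob is interpreted until the tag has been verified.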

L Van Houtven: And your takeaway, if you're Satan and your job is to design poor cryptographic specifications, is that you want to make sure it's very confusing why your spec is bad. You want to find five or six different reasons, because then there's no single angle to start at it from. It makes it much harder to explain why it's bad. And before you know it, JWT.

L Van Houtven: So another problem: rich external error messages. Again, from an engineering perspective, it's super obvious that this is a good idea, right? Error messages are good, error messages should be descriptive. But in crypto this is really, really bad. A lot of the attacks that I just mentioned rely on something called an oracle. What that means is I will craft a message, I will manipulate a message in a very specific way so that it is almost certainly invalid, but the way you respond to the invalid message tells me something that I want to know and that I'm not supposed to know. It will tell me something like how to perform an RSA private key operation, it will tell me how to decrypt a message, et cetera, et cetera. Very often these will be repeated: I make one small modification and I try the next thing, and the next thing, and the next thing. And after a couple of thousand messages, maybe, I will learn how to decrypt a block.
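Here is that loop in miniature. To stay self-contained this uses a toy XOR "block cipher" instead of AES, but the CBC layer and the padding check are real, and the query loop against the oracle has the same shape as POODLE-style attacks:

```python
import os

BLOCK = 16
KEY = os.urandom(BLOCK)   # secret; the attacker only gets the oracle below

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(plaintext: bytes) -> bytes:
    # CBC mode over a toy invertible block cipher E(b) = b XOR KEY.
    pad = BLOCK - len(plaintext) % BLOCK
    padded = plaintext + bytes([pad]) * pad
    prev = os.urandom(BLOCK)
    out = [prev]
    for i in range(0, len(padded), BLOCK):
        prev = xor(xor(padded[i:i + BLOCK], prev), KEY)
        out.append(prev)
    return b"".join(out)

def padding_oracle(ciphertext: bytes) -> bool:
    # The server bug: it reveals whether the padding was valid.
    blocks = [ciphertext[i:i + BLOCK] for i in range(0, len(ciphertext), BLOCK)]
    plain = b"".join(xor(xor(c, KEY), prev) for prev, c in zip(blocks, blocks[1:]))
    pad = plain[-1]
    return 1 <= pad <= BLOCK and plain[-pad:] == bytes([pad]) * pad

def attack_block(prev: bytes, block: bytes) -> bytes:
    # Recover one plaintext block using nothing but the oracle.
    inter = bytearray(BLOCK)            # block-cipher output, found right to left
    for pad in range(1, BLOCK + 1):
        pos = BLOCK - pad
        for guess in range(256):
            fake = bytearray(BLOCK)
            fake[pos] = guess
            for j in range(pos + 1, BLOCK):
                fake[j] = inter[j] ^ pad   # force known bytes to the pad value
            if not padding_oracle(bytes(fake) + block):
                continue
            if pad == 1:                   # rule out accidental longer padding
                fake[pos - 1] ^= 0xFF
                if not padding_oracle(bytes(fake) + block):
                    continue
            inter[pos] = guess ^ pad
            break
    return xor(inter, prev)

ct = encrypt(b"attack at dawn!!")
recovered = attack_block(ct[:BLOCK], ct[BLOCK:2 * BLOCK])
assert recovered == b"attack at dawn!!"
```

Roughly 256 queries per byte, a few thousand per block, and the only thing the server ever said was "padding valid" or "padding invalid".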

L Van Houtven: So there's lots of ways you can have different behavior in the face of errors. One way is to have an explicit error code. This was the problem in the original SSL: it would literally give you a different error code depending on what the problem was, and it was very easy to mount that attack. Modern versions of TLS return the same error message, but some implementations will take a little bit more or a little bit less time to respond depending on what the problem is, and then you can leverage that instead.
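One defensive sketch, with illustrative names: compare MACs in constant time and collapse every failure mode into a single indistinguishable error.

```python
import hashlib, hmac, os

KEY = os.urandom(32)

def check_tag(message: bytes, tag: bytes) -> bool:
    expected = hmac.new(KEY, message, hashlib.sha256).digest()
    # compare_digest takes the same time wherever the bytes differ, unlike
    # `expected == tag`, which bails out at the first mismatch.
    return hmac.compare_digest(expected, tag)

def handle_request(blob: bytes) -> bytes:
    try:
        tag, body = blob[:32], blob[32:]
        if not check_tag(body, tag):
            raise ValueError
        return body
    except Exception:
        # One generic error for everything: no separate codes for "bad
        # length" vs "bad MAC" vs "bad padding" for an attacker to compare.
        raise ValueError("request failed")

good = hmac.new(KEY, b"hello", hashlib.sha256).digest() + b"hello"
assert handle_request(good) == b"hello"
```

This doesn't remove the timing channel entirely, but it removes the explicit error-code oracle and the most obvious short-circuit comparison.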

L Van Houtven: Another thing you should look out for is the lack of security proofs. I mentioned at the very beginning of the talk the phlogiston era of crypto; one of the big things that got us out of that are security proofs. So there are general proofs of security for primitives, and there are also techniques for proving things about protocols. I'm not going to try to make you an expert in security proofs, I don't think that's going to work in the time slot allotted to me, but I do want to make sure that you have at least an inkling of what to look for. If you see a spec, you know what the thing is that should give you a little bit more confidence. If you see words like game, adversary, advantage, that's a good sign. Specifically, if it tries to prove that there's a reduction to an already assumed hard problem, or a reduction to a different primitive, like AES or something else that you have a lot of confidence in, then that's a good example, that's the sort of thing that you're looking for.

L Van Houtven: Protocol proofs are, I'm not going to say newer, but they've gotten more popular recently. You're looking for words like Canetti-Krawczyk, you're looking for words like Tamarin. I'll put these slides up, please don't try to remember all that. But essentially what they argue is that in a particular protocol, something can't happen. So for example, it will argue that the key will remain secret if these things hold. It will argue that you get forward secrecy, it will argue that certain subtle bugs can't happen. And these work in real life: TLS 1.3, very, very recent, had a draft with a bug in it, and the automated protocol prover found it while none of the people looking at it did, or at least the prover found it faster. And similarly WireGuard, for example, has a Tamarin proof that proves all sorts of things can't happen.

L Van Houtven: Big problem here, and I promised that I wasn't going to name names, but a big problem I've noticed is that the quacks have gotten more sophisticated. In particular, just because it says QED doesn't mean that it's a proof; that's how I found a couple of those recently. So another failed idea that should be relegated to the ash heap of history is key parsimony. What I mean by that is trying to be stingy with key material, or reusing the same key in different contexts. For example with RSA: you can encrypt and you can sign, and all RSA cipher suites use that property. Because you will always sign with RSA if you have an RSA certificate in TLS, and you can optionally also choose to have the client encrypt some secret to you using that same key.

L Van Houtven: DROWN exploited this: the reason DROWN was possible was because TLS did that thing wrong with RSA. So just don't do that, have multiple keys. Same thing with SAML. Because SAML uses asymmetric cryptography, technically you can have one IdP with one set of keys, and every service in the world that you talk to all uses the same IdP key pair. Best practice tells you not to do that. There's a bunch of reasons why, but one of them is that audience restrictions are really easy to ignore; it's unsafe by default. The interesting caveat, though, is that because you end up with one key pair per IdP relying party, you might as well have used symmetric crypto. And then all of the problems that SAML has inherited because it used more complicated crypto end up being for naught, because in practice you don't actually get any benefit from it.
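"Have multiple keys" doesn't have to mean more secrets to manage: you can derive an independent key per purpose from one master secret. This is HKDF (RFC 5869) written out with the standard library; the label strings are purely illustrative:

```python
import hashlib, hmac, os

def hkdf(master: bytes, info: bytes, length: int = 32, salt: bytes = b"") -> bytes:
    # Extract: concentrate the input keying material into a pseudorandom key.
    prk = hmac.new(salt or b"\x00" * 32, master, hashlib.sha256).digest()
    # Expand: stretch the PRK into per-purpose output, bound to `info`.
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

master = os.urandom(32)
signing_key = hkdf(master, b"example.com/2019/signing")
encryption_key = hkdf(master, b"example.com/2019/encryption")
assert signing_key != encryption_key   # independent keys, one secret to store
```

Because the info label goes into the derivation, keys for different purposes are cryptographically independent even though you only store the one master secret.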

L Van Houtven: Another idea that we should stop having is key encapsulation. So the idea is that public key crypto is somehow easy to use, I mean it's not, but let's say it is. You can't really lose a public key, or at least that's the idea. But public key crypto is slow, so we're going to take a symmetric key, we're going to encrypt it with a public key, and that's how we go from there. The biggest problem with this is that very often you'll lack forward secrecy. What forward secrecy means is: if at some point my TLS certificate gets compromised, does that mean that all the previous conversations under that certificate are compromised, in the sense that you can go decrypt them now, or does it just mean the attacker can pretend to be me from now on? If you have forward secrecy, they can only do the pretend-to-be-me-from-now-on part; if you don't, then all of our previous conversations are no longer private.
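The ephemeral alternative looks like this: a toy finite-field Diffie-Hellman exchange. The Mersenne prime here is far too small a group for real use and exists only to keep the sketch self-contained; real deployments use vetted groups or elliptic curves:

```python
import hashlib, secrets

P = 2 ** 127 - 1   # toy prime, NOT a secure group size
G = 3

def ephemeral_keypair():
    secret = secrets.randbelow(P - 2) + 2   # fresh for every single session
    return secret, pow(G, secret, P)

a_secret, a_public = ephemeral_keypair()
b_secret, b_public = ephemeral_keypair()

# Each side combines its own secret with the peer's public value.
a_key = hashlib.sha256(pow(b_public, a_secret, P).to_bytes(16, "big")).digest()
b_key = hashlib.sha256(pow(a_public, b_secret, P).to_bytes(16, "big")).digest()
assert a_key == b_key

# Delete the ephemeral secrets: compromising a long-term key later no
# longer lets anyone reconstruct this session's traffic key.
del a_secret, b_secret
```

The long-term key's only remaining job is to sign the ephemeral public values so you know who you're talking to; it never protects the session key itself.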

L Van Houtven: Static Diffie-Hellman and static RSA, same idea. Anytime you see a long-term set of keys that is not being combined with a new ephemeral key, that is a bad idea. Just don't have it. One of the ways this is currently being implemented is in eTLS, where I am told the e stands for enterprise. And I think my opinion on that can be summarized as: TLS 1.3 is secure, and some vendors think that that is bad and therefore we need to fix it.

L Van Houtven: So some other problems, and you can probably see these coming, these are getting more and more obvious I think: raw primitives. There's a lot of cases where a primitive is not being put in an appropriate construction. People are being told to use the primitive itself, and that generally leads to disaster. For example RSA: I mentioned a pile of RSA attacks, but there are many, many more, and you need complex tricks to make RSA safe. These are typically called padding. I don't think padding is a good word, because it sounds too fluffy when it's extremely necessary; it's more like a safety brake or something. RSA was an extremely important algorithm, I don't want to beat it down too much, but it's time to put it out to pasture. It has led to way too many problems.

L Van Houtven: Finite field Diffie-Hellman, finally this thing is mostly gone. (EC)DSA: if you see (EC)DSA used directly, pretty unsafe, in particular because the k parameter in (EC)DSA is like the worst possible case for all cryptographic parameters ever. If you have a tiny, tiny, tiny bias in your randomness, for example, then eventually you'll leak your private key. This is not hypothetical: Sony lost control of PlayStation 3 game signing because they screwed up (EC)DSA. And generally direct ECDH: again, anytime you're taking a long-term static key and you're not combining it with a temporary ephemeral key, just avoid it.
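How badly can k fail? The algebra is the same in DSA and ECDSA, so here it is on a tiny DSA-style group (toy parameters chosen by hand, nothing like real sizes): reuse one nonce across two signatures and anyone can solve for the private key.

```python
P, Q, G = 809, 101, 256   # toy group: Q divides P - 1, G has order Q mod P

def sign(x: int, k: int, z: int):
    # Textbook DSA; ECDSA swaps the modular exponentiation for a curve point.
    r = pow(G, k, P) % Q
    s = pow(k, -1, Q) * (z + x * r) % Q
    return r, s

x = 57                    # private key
k = 33                    # the SAME nonce used twice: the bug
z1, z2 = 10, 77           # two message hashes
r, s1 = sign(x, k, z1)
_, s2 = sign(x, k, z2)

# From public data only: subtracting the two s equations eliminates x and
# yields k, then either signature equation yields x.
k_rec = (z1 - z2) * pow(s1 - s2, -1, Q) % Q
x_rec = (s1 * k_rec - z1) * pow(r, -1, Q) % Q
assert (k_rec, x_rec) == (k, x)   # nonce and private key fully recovered
```

This is essentially what happened to the PlayStation 3 signing key, and with lattice techniques even a small bias in k, rather than full reuse, is enough over many signatures.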

L Van Houtven: And finally I want to introduce a new concept called the axis of concern. The idea is I want to give you a tool so when you're looking at a spec, you can skim it, see what it introduces, and decide how worried you should be about this thing. I'm not saying that if you get a high axis-of-concern score, for lack of a better phrase, then everything has to be bad, but I am saying it deserves a lot more caution. So one thing that you can always do, where I'm essentially always okay with whatever you're going to do, is read a pile of bits from urandom. That is essentially always safe, and that is what session cookies should be. People don't use urandom enough, people introduce JWT for some reason. Just do this more often, please.
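What "read a pile of bits from urandom" looks like in practice: an unguessable session token is just random bytes used as a lookup key on the server, with no structure and no JWT. The dictionary here stands in for whatever session store you actually use:

```python
import secrets

session_token = secrets.token_urlsafe(32)        # 32 bytes from the OS CSPRNG
sessions = {session_token: {"user": "alice"}}    # all state stays server-side

assert len(session_token) == 43   # 32 random bytes as unpadded URL-safe base64
assert sessions[session_token]["user"] == "alice"
```

There is nothing for an attacker to parse, forge, or downgrade; revocation is deleting a dictionary entry.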

L Van Houtven: HMAC similarly, as I mentioned, is one of the more conservative specifications that I can think of. You can use a decent AEAD, but now we're already upping the concern score a little bit. You can use libsodium or KMS. If you start using signatures, I'm getting a little bit worried. Asymmetric encryption, same story as I mentioned earlier: there are way too many things that can go wrong there, and you have to be extremely careful. If you start using RSA anything, to me that just tells me it's not very good. Unfortunately it appears that my fonts have changed since the last time I gave this; the magic of presenting in the browser.

L Van Houtven: If you use bare primitives, if you see just AES or SHA-256 or whatever show up in the spec, at that point you should basically be ready to throw it away. It does get worse. If you see pairings or zero-knowledge proofs, I think those things are really, really cool, but they're similarly concerning; there's a lot of potential problems there, and you really want someone who knows what they're doing to review that. If anyone invents their own primitives: I hear there's a cryptocurrency that decided to implement a hash function in ternary for some reason, and obviously that's almost certainly not going to work out. Any kind of bespoke wire protocol, I'm generally concerned. Blockchains, just no. If your spec requires animal sacrifice, that's probably also a bad thing. And finally, if you use JWT, I don't know what to tell you.

L Van Houtven: So in conclusion, look, hindsight is 20/20, I get that, and please don't take this as me just ranting about OAuth for 45 minutes. But I could have told you that alg was a bad idea in 2015, which is when JWT came out. History rhymes, so a lot of the problems that we're seeing keep coming back; they're very similar problems. You see them in different contexts, but you can learn from history and prevent problems in the future.

L Van Houtven: My goal with this talk was to build some intuition. There are good things that come out of the IETF too, despite the plenty of bad things that I said about some of its specs; there are useful things you can get out of it. You too can learn crypto vulnerabilities, and that is possibly the most important lesson I want you to take away from this. Literally anyone in this room, if you are here, you are smart enough. There are a lot of people who for some reason think that crypto is weird alien science; it is not. We can teach you how to do this. And with that, thanks, that's all I have.

L Van Houtven: And finally, one more thing: by listening to me yap for 45 minutes you have unlocked the free tier of me. So if you have any follow-up questions, I'll be more than happy to take those, either through email or I'll be around the conference. So don't hesitate to talk to me. Cool.

Over the past few years, cryptographic engineers have worked towards safer APIs and primitives. Today, developers have access to a set of tools that are relatively straightforward to use and unlikely to get them in trouble. Latacora likes to call these "Cryptographic Right Answers": they're the things we like to hear when a company describes how their cryptographic designs work.

We're not out of the woods yet: people don't always choose those right answers, and companies are often bound by bad calls they made years ago. This talk walks through some of the common dangerous, discredited or otherwise bad ideas using popular protocols and designs as examples. By learning to recognize problematic patterns, you can make better decisions tomorrow.