Oktane19: Keeping Mobile Secure

Transcript

Hans R.: So, quick thing before we get started. How many of you are iOS devs? Cool. I apologize to all of you because I'm an Android dev. How many of you are Android devs? I apologize to you because I don't use Kotlin yet. So, cool. Got that out of the way. And the rest of you, I don't know why you're here, or maybe you just don't want to raise your hands, but cool.

Hans R.: Today, I'll be talking about how to develop mobile apps securely. Before we get started: I thought they were injecting JavaScript into my slides at first, but it turns out it's legalese. I'm sure it's important for something. Quick disclaimer of my own, I am an Okta speaker, but I do not speak for Okta. So anything I say is my own words. Any examples I may use, I'm not trying to pick on anybody, any one country or group or anything like that. It's just me making up examples on the spot.

Hans R.: Cool. So let's start things off. Is perfect security possible?

Speaker 2: No.

Hans R.: No, it's not. Even if you put your server in a cement block at the bottom of the ocean, somebody's got a submersible with a drill, all right? If you want to be usable, it's not possible to have perfect security. That's not what we're aiming for here, all right?

Hans R.: So what are we aiming for? We're aiming to prevent scalable attacks. Basically, hackers are lazy. We want to make sure that if they hack one of our users, they don't get everybody. We want to make them work hard. If yours is harder to hack than somebody else's, they're going to go for the easier target. So you want to make them really work for it. And then, finally, end users will use the easiest path forward. So you really need to think about everything you do in terms of their point of view. Try to make the easiest path the most secure path, the best path, because otherwise, they'll work around you, and that voids all of your effort, all right?

Hans R.: The top OWASP mobile risks ... OWASP puts out this list every year of the top attack vectors against mobile apps, and the top five are all things I'm going to try to at least touch on here today. So it's things like improperly using the OS and insecure data storage. That's a big one. Insecure communication and insecure authentication, basically using protocols wrong or just not using them at all. And then insufficient cryptography, which basically means trying to encrypt something and just doing it wrong.

Hans R.: So what are we trying to protect here? What are the things that are important? First off, we have session tokens or access tokens. Obviously, these are the things that let you hit your server endpoints. Your server is where the data is. That's where the important stuff is, all right? And along that same vein, anything that can get you an access token or a session token is just as valuable as the actual session or access token itself. That's something a lot of people forget and something you really need to keep in mind.

Hans R.: Thirdly, especially these days, more and more government regulations are out there, like GDPR or all these other things popping up in various US states. PII is just as valuable these days because, also tying into that second point, PII can be used to get a session token. Everybody has that forgot password flow or that forgot username flow, and those tend to fall back on PII to determine who you are, all right? And finally, you're all making your own applications. You all have your own secrets. I don't know what they are. You don't know what they are. If you're a bank, it could be your social security number. If you're a chat app, it could be your chat logs, your chat history. Every app has their own secrets.

Hans R.: Also, a few things that are listed here are some things I'm going to touch on in a minute. These are things that are key to your mobile code or should be that also are secret or at least pseudo-secret, things like cert pinning, tamper detection, obviously, usernames and passwords. I'm going to say this every time I put username and passwords on here. Don't store these on disk ever. And then, obviously, API tokens because, again, accessing things that are secret.

Hans R.: So I think there were other talks about this. I'm just putting it up here real quick. I'm not going to go deep into OIDC, but some of you may not be familiar with the flow or the things that come out of it. These are things I've been referencing, so I wanted to show a quick little common use case flow. So here, the client will talk to the server and be like, "Okay, hey, I want a code, an ID token, and I want offline access." The offline access especially is common in the mobile world because you don't want to have to prompt for username and password every single time you go through. And then the server endpoint will return back the ID token, the refresh token, and the authorization code.

Hans R.: So that authorization code, you then take to a token endpoint, which could be the same server, could be different. It really depends on what resource is being guarded. And you use that authorization code to get another ID token and an access token. So this is a very common OIDC flow, and these are the important things you get out of it. So a common use case would be you're a chat app. You want the user to sign in. You're doing, say, Google's OIDC login. So you kick out to a Google server for the authorize endpoint. You say, "Hey, I want a code so I can get an access token, this ID token so I know what their profile is, who I'm talking about here, and the offline access because I don't want them to have to sign in every single time they open the app," all right?

Hans R.: And then you get the code back from that flow, go to the actual chat server endpoint and be like, "Hey, I want to get the chat logs. Here's my authorization code. Give me the token for those." So I mentioned these real quick, but I'm going to go into it real quick. What are the important things I'm getting out of this flow? These are the secrets we're trying to protect. These are the important things.
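
To make that code-for-token exchange concrete, here's a rough sketch, assuming OkHttp. The endpoint URL, client ID, and redirect URI are placeholders, and a real mobile app should also be sending a PKCE code_verifier along with this request.

```java
import okhttp3.FormBody;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.RequestBody;
import okhttp3.Response;

class TokenExchange {
    // Trade the authorization code for tokens at the token endpoint.
    static String exchange(String authorizationCode) throws Exception {
        RequestBody form = new FormBody.Builder()
                .add("grant_type", "authorization_code")
                .add("code", authorizationCode)                   // from the authorize redirect
                .add("redirect_uri", "com.example.app:/callback") // placeholder
                .add("client_id", "my-client-id")                 // placeholder
                .build();
        Request request = new Request.Builder()
                .url("https://auth.example.com/oauth2/v1/token")  // placeholder
                .post(form)
                .build();
        try (Response response = new OkHttpClient().newCall(request).execute()) {
            // The JSON body carries access_token, id_token, and, because we
            // asked for offline_access, a refresh_token.
            return response.body().string();
        }
    }
}
```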

Hans R.: So first off, that authorization code I mentioned, this lets you get other tokens. This lets you get access tokens. This lets you get ID tokens. So that's something you need to protect. Anything that lets you get something else? Got to protect it, all right? The access token, obviously, it's the same as session. This lets you actually get the chat logs, get whatever information you need, whether it's profile information, chat logs, chat history, yada, yada, yada, all right?

Hans R.: That ID token? This is something specific to OIDC, so you may not be as familiar with it if you typically use plain OAuth. But this ID token typically has a lot of information about the user, various claims. It's very customizable depending on whatever server flow you've created. So this ID token is something that typically contains a lot of PII. So it's not really something you should be passing around a lot. Just get it once, use it for yourself, and don't send it along with every request, because you don't really want to add to the risk.

Hans R.: And then the refresh token. This is the important one. So the refresh token, the purpose of it is to be kept on disk, is to be kept around so that you can get access tokens later. So this is the one I'm going to try to focus on for a little bit here because this is important. You want to keep it around, but it can get you access tokens. So that's that second point I talked about earlier of anything that can get you something else is just as important as what you're getting.

Hans R.: But before I really get talking about how to protect it, first is what are the things I shouldn't be doing? What should I avoid? First off, and this is something I see surprisingly often, never hardcode secrets in your code. Your code is deployed to an OS. The OS has to execute the code. That means, by definition, anything in your code can be reverse compiled because otherwise Android or iOS would not be able to execute it. So if you're ever putting secrets in your code, that's a big no-no because even if you obfuscate, even if you do all this advanced stuff to it to make it hard to reverse compile, it's always possible to reverse compile. So somebody can always find those secrets. And as soon as the secret's lost, it's not a secret and it has no value.

Hans R.: So another one. Don't store anything on disk that's a secret in cleartext. Obviously, this is the big thing about the refresh token and other stuff. This is the obvious one. Don't store things in cleartext. Secondly, be really careful about allowing secrets to be backed up to the cloud. So the problem here is not that I don't trust Google or Apple. It's that I don't trust Google or Apple. We don't know what they do. It's a black box. We don't know if they cache these on the phone in an unencrypted manner. We don't know how they store them on the server.

Hans R.: There was that big thing a couple of years back where the NSA tapped into all of Google's server-to-server communications unencrypted. We don't know if they're doing that. So it's really best to just avoid backing up secrets at all. Back up non-sensitive data, sure. That way you have that nice, continuous user experience. But let's treat the secrets as a little more sensitive. Let's try to avoid having those backed up.

Hans R.: Don't try to create your own encryption algorithm. I'm not a PhD in math. I know PhDs in math who also don't want to create their own encryption algorithms. These things are tried and tested. Millions of people try to hack them every day. Trying to roll your own is just a great way to invite disaster. The math here gets really complex. It's just something you should avoid as an everyday developer. Go with the defaults. They're good enough. It's fine.

Hans R.: And then finally, try to avoid sending data over insecure connections. I'm not talking about just non-SSL internet connections here. I'm also talking about Bluetooth, NFC, ZigBee, all those local wireless communication channels. Those are typically unsecured, so you really shouldn't be passing things over them. Right now, I guarantee you, if I had my phone in Bluetooth mode and I was broadcasting signals, there's probably at least eight people out there sniffing just for fun. So try not to send anything over insecure connections.

Hans R.: But Hans, you say, storing things on disk in cleartext is bad, but all modern OSs are sandboxed. iOS? Strong sandboxing around apps. Android? Just as strong, especially on more modern versions of the OS. But it's completely worthless as soon as the phone is rooted or jailbroken, because anything that has superuser access ... Android specifically, it's Linux user permissions. As soon as an app has superuser permissions in Linux, it can change whatever access it wants. It can see anything in cleartext inside your user profile, inside your user permission space, inside the OS itself. All the sandboxing goes out the window as soon as the phone is jailbroken or rooted. So as long as you might be running on jailbroken or rooted phones, relying on the sandbox is really something you should avoid. And a lot more phones than you think are jailbroken or rooted.

Hans R.: All right, so have you ever heard of Heartbleed? And some of you may not have. It was a niche thing but a big deal. So a couple years back, OpenSSL itself broke, with this thing called Heartbleed. And it came down to a really subtle bug in the code, a missing bounds check in the heartbeat extension. And that brings up my point: OSs and their libraries are extremely complicated. There's always undiscovered bugs. There's always undiscovered issues. No matter how hard you try, no matter how sound you make it, there's going to be something that some clever attacker's going to find. So you shouldn't trust the OS to maintain its sandboxing. There's probably at least a dozen undisclosed vulnerabilities out there right now that we just don't know about, or we know exist but don't know how they work.

Hans R.: We know that the NSA has a couple. We know that the Israeli research firm that helped out the FBI in that San Bernardino incident got one that, still to this day, we don't know what it is. So you can't really trust the app sandboxing. So Hans, you're being really paranoid here. Yeah. That's our job. We're supposed to be paranoid. If you're developing in a secure manner, paranoia is your baseline. That's your standard line.

Hans R.: So, okay, now that I've gone and tried to scare you all a little bit, what can we do? What can I do to try to protect myself? There's a few basic patterns that you can go through. So first, there's obfuscation. Basically what obfuscation is, if you're not familiar with it, is you basically shuffle around your code, add a couple of fake paths, add a couple fake variables just to make it harder and more confusing to figure out what's going on. And there's a second category that some people think is not obfuscation but I definitely consider obfuscation, and that's quote, unquote, "encrypting" something when all the secrets you're encrypting with, they're right there next to the encrypted file.

Hans R.: If you have a JKS, a Java keystore, with a strong 64-character password and right next to it is a text file that says, "JKS password," with the password in it, that's not encryption. That's doing nothing. Even if it's not a text file next to it, if you're using something like the serial number of the device, something that's easily discoverable where all it takes is the attacker figuring out the pattern, that's not encrypting it, because they can reverse it as soon as they figure out your pattern. That's obfuscation, all right?

Hans R.: A step up from that is what you see in a lot of apps where they have a PIN code. Okta Mobile does this. And that's encrypting using a user passcode, some piece of information that's in your user's mind but not on the device. And what you do is basically use that user passcode either to derive a key every single time or to protect that key when you're storing it on the device, all right?

Hans R.: Then we have the new hotness, storing in the keychain or the keystore. This is the secure enclave. This is that trusted execution environment. This is where there's a separate chip on the device that the keys are generated in and used in, and they never leave that chip. The chips are actually designed so the keys are not exportable. And then we have the easiest, the best, arguably the least useful version, only keep it in RAM. Never even bother putting it down to disk. This is a good option in a lot of cases.

Hans R.: So let's dig in on these some. First, we have obfuscation. Like I was saying earlier, if the effectiveness of something requires the attacker not knowing how it works, then it's not security. It's just smoke and mirrors, all right? This should really only be used for things that are baked into your app code, things like key pinning, tamper detection, root detection. Your code has to be executed, so the best you can do is obfuscate. So it is helpful sometimes just to make it, again, like I said earlier, make it harder. Make them work for it. That's where obfuscation comes in, things where you can't practically encrypt it but at least make them put a little effort into it.

Hans R.: I'm going to dig in a little bit on these three points because these are things that not everybody's familiar with and I think are important as well. So first off, a quick aside on key pinning. If you're not familiar with key pinning, its intent is to help prevent man-in-the-middle attacks, all right? If you're not familiar with man-in-the-middle attacks, that's where somebody's sitting on your Starbucks Wi-Fi, they hijack the DNS so that your first communication goes to them instead of Google or wherever you're trying to get to, and then they basically rewrap your entire communication so they can read everything in between.

Hans R.: And it relies on CA certs and on subverting the CA cert system. A lot of state-level attackers use this route because they'll go directly for the ISP and be like, "Hey, you have that trusted CA cert. You're going to re-sign all your communications with that because everybody comes through your router anyways." So key pinning helps prevent both the state-level attacks and the guy-in-a-coffee-shop attacks. And there's two different aspects of this I want to dig in on because a lot of people aren't familiar with it.

Hans R.: Actually, let me talk about how it works first. So if you're familiar with SSL, or even if you're not, you have this cert chain. Basically, you have your company's cert, Okta.com, signed with Okta's key, and then we'll go up one layer to that CA, which signed it saying, "This is them. We vouch for them." And so on and so forth all the way up to those core, high-level CAs that governments trust, things like RSA or Comodo or what have you. So basically it's a chain of trust saying everybody trusts this guy, just because the internet wouldn't function otherwise, and he said, "Okay, I vouch for that guy, who vouches for that guy, who vouches for that guy."

Hans R.: So the problem with the attack we're trying to prevent is one of those middle-level guys not actually being trustable. So what you do is you check your leaf cert, the one from your servers, and you say, "Okay, I'm going to check the public key of that. I know my public key. I've talked to my ops guy. I know what keys we're actually using to sign our certs. If that key doesn't match what I'm expecting, then we have a problem." That's the core of key pinning.

Hans R.: So the trick there is, how do you get that public key? Where does that public key come from? You have to have a list on the device somehow so that your phone knows what public keys to check, what's actually coming from our ops guys. And you should use a list because, say an ops guy leaves or your CA cert's expiring, you'll be using a new key. These things rotate over time for a variety of reasons, so you want a backup list so your apps don't all break every time an ops guy leaves. So you have a list of them, all right? But how do you get that list? Where's it come from?

Hans R.: There's two approaches. One, bake it into your app code. That's where the obfuscation comes in. You have this hardcoded into your app, and that way, every single connection checks that list of keys that's in your code, and if it doesn't fit, throw it out, all right? The downside of this is it requires your users updating your apps. And I don't know about you guys, but getting users to update apps? Surprisingly difficult. So eventually, they'll fall out of date or be vulnerable as you rotate keys, because those old keys will still be trusted by them. But the upside is every single connection will have it. It's there as soon as they install the app.

Hans R.: The other approach is a trade-off. It's a much more updatable way to do it: the first time you talk to your servers, you say, "Hey, server. Along with your response, give me the list of public keys that I should be trusting." All right? And it sends back a list, and you just take that list, encrypt it onto disk, do whatever goodness with it. And that way, you can keep that list updated much more frequently, even on old code. But the downside is you have that one untrusted connection.

Hans R.: So really, choosing between the two is up to you. It's up to your use case. What's your tolerance? What do you feel like going with? It's a trade-off. And then there's two styles of pinning. There's open pinning, where you only pin against your sites, and if it's not your site, you're like, "Well, let's just let it go through the normal trust checks and see what happens." This is good if you use a lot of metrics in your app or third-party services, where it's calling Firebase or it's calling Apple servers, and you don't know Apple's public key. You don't know Firebase's public key.

Hans R.: The other option is closed pinning, where you know your app's only going to be talking to your own servers, so if you're ever talking to a different server, that in itself is something you should be alarmed about. So there you'd just use closed pinning and be like, "If I don't recognize the domain at all, just throw the whole thing out," all right?
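
As one concrete example of the baked-in approach, here's roughly what pinning looks like with OkHttp's CertificatePinner. The hash values are placeholders for your keys' SPKI SHA-256 fingerprints, and pinning a backup key alongside the active one is what keeps rotation from stranding old installs. Note this behaves like the open style: hosts you don't list still go through normal CA validation.

```java
import okhttp3.CertificatePinner;
import okhttp3.OkHttpClient;

class PinnedClient {
    static OkHttpClient build() {
        CertificatePinner pinner = new CertificatePinner.Builder()
                // Active key, plus a backup so rotation doesn't break old installs.
                .add("api.example.com", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
                .add("api.example.com", "sha256/BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB=")
                .build();
        return new OkHttpClient.Builder()
                .certificatePinner(pinner)  // unpinned hosts still get normal CA checks
                .build();
    }
}
```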

Hans R.: And I'll try to be quicker about this one, but tamper detection is basically trying to tell if your app's been rewrapped. A common attack pattern is to go through every app in the App Store, every app in the Play Store, download them, put a quick little script that injects something into them for tracking, and then upload them as new apps. So tamper detection helps prevent that. Basically, you're just trying to check to make sure your app's not been modified, so you check the hash of the code in your app against what it actually is on the device. You hardcode in what the hash should be, compare the two, and if it doesn't match, flag an alert or break the app or whatever you want to do. It's pretty easily bypassed, but it's great for those script kiddies who just run a script against everything in the store.
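
A minimal Android sketch of that idea, with the twist that it checks the signing certificate rather than hashing the code itself (a common variant). It assumes API 28+, and EXPECTED_CERT_SHA256 is a hypothetical constant: the base64 SHA-256 of your release signing cert, baked into, and ideally obfuscated within, the app.

```java
import android.content.Context;
import android.content.pm.PackageInfo;
import android.content.pm.PackageManager;
import android.content.pm.Signature;
import android.util.Base64;
import java.security.MessageDigest;

final class TamperCheck {
    // Hypothetical constant: base64 SHA-256 of your release signing cert.
    private static final String EXPECTED_CERT_SHA256 = "replace-me";

    static boolean looksResigned(Context context) {
        try {
            PackageInfo info = context.getPackageManager().getPackageInfo(
                    context.getPackageName(), PackageManager.GET_SIGNING_CERTIFICATES);
            for (Signature sig : info.signingInfo.getApkContentsSigners()) {
                byte[] digest = MessageDigest.getInstance("SHA-256")
                        .digest(sig.toByteArray());
                if (!EXPECTED_CERT_SHA256.equals(
                        Base64.encodeToString(digest, Base64.NO_WRAP))) {
                    return true; // re-signed: not our certificate
                }
            }
            return false;
        } catch (Exception e) {
            return true; // if we can't verify, treat it as suspect
        }
    }
}
```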

Hans R.: And then root detection. As Apple says, "Root detection is impossible." That's Apple's official stance, because it basically is. There's so many ways you can do it, and by definition, if something's been rooted, it's something the OS maker didn't intend anyways. So it's one of those chicken-and-egg problems. You can't reliably detect it. You can try, but it's a best-effort thing. Some people do it to try to take defensive measures or stuff like that. So it's difficult, but still an option for some people.

Hans R.: So, okay. Let's get past obfuscation. Let's get on to the real stuff, the stuff you care about: encrypting things on the device, any of those secrets like the refresh token. So user passcodes. Great option. Slightly more complex to implement. It takes effort. You have to add a whole UX flow for this. You have to get a PIN from them. You have to have a PIN creation flow. You have to have recovery flows. And it's really only useful for things you need while your app's in memory. So if you have background operations that happen a lot, you don't have the PIN while that's going on. It could have been six hours since the user typed it. You don't have it in RAM anymore. So anything encrypted by it, you can't really get access to. So it's only good for flows that happen while the user's present and actively in your app. But it's very useful for those flows.

Hans R.: A side note. When you're doing this, definitely have a brute force prevention mechanism in there. Don't let people just guess a million PINs, because the common use case is not that complex of a number space. You're probably doing six-digit PINs or something like that just because it's a usability trade-off. But it is reasonably secure, depending on how long the passcode is. It's not bad.
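
A minimal sketch of one way to throttle guesses, assuming Android SharedPreferences; the names and thresholds are illustrative. Keep in mind a rooted attacker can just clear this state, so it's a speed bump for on-device guessing, not a wall.

```java
import android.content.Context;
import android.content.SharedPreferences;

final class PinThrottle {
    private static final int MAX_ATTEMPTS = 5;
    private static final long BASE_DELAY_MS = 30_000; // 30s, doubling each lockout

    static boolean isLockedOut(Context ctx) {
        SharedPreferences prefs = ctx.getSharedPreferences("pin", Context.MODE_PRIVATE);
        int failures = prefs.getInt("pin_attempts", 0);
        long lockedUntil = prefs.getLong("locked_until", 0);
        return failures >= MAX_ATTEMPTS && System.currentTimeMillis() < lockedUntil;
    }

    static void recordFailure(Context ctx) {
        SharedPreferences prefs = ctx.getSharedPreferences("pin", Context.MODE_PRIVATE);
        int failures = prefs.getInt("pin_attempts", 0) + 1;
        SharedPreferences.Editor e = prefs.edit().putInt("pin_attempts", failures);
        if (failures >= MAX_ATTEMPTS) {
            // Exponential backoff: each extra failure doubles the lockout window.
            long delay = BASE_DELAY_MS << Math.min(failures - MAX_ATTEMPTS, 10);
            e.putLong("locked_until", System.currentTimeMillis() + delay);
        }
        e.apply();
    }

    static void recordSuccess(Context ctx) {
        // Correct PIN entered: reset the counter and lockout.
        ctx.getSharedPreferences("pin", Context.MODE_PRIVATE).edit().clear().apply();
    }
}
```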

Hans R.: So how do I do this, all right? Here's the high-level method. Hopefully you can read the code. I know it's a little bit small for you guys in the back. I'm sorry, guys. Not Kotlin. But first step, generate a random AES key, all right? You need something to encrypt things with. And then after that ... I'll dig into this method in a minute, but you store the key. This is where that passcode comes in. You're storing the key with the passcode. The passcode is not really there for the rest of the encryption algorithm. But I do want to go into the rest of the encryption algorithm, just for those of you who aren't as familiar with it.

Hans R.: So you initialize the cipher. And I won't really go into this method because there's a million examples online. It's fairly standardized. But you initialize a cipher and you get two things out of creating it. You get the cipher itself. This is what will do the encrypting. This is your encryption cipher. There's a couple different options; you can look online for good examples. And then this IV, which stands for initialization vector. This is there to prevent that second point I had back on that goals slide: scalable attacks, all right? This initialization vector means that every single thing you encrypt starts with a different seed.

Hans R.: So when you encrypt something, it's just like a cycle over a bunch of bytes, and it uses a little bit of the last block in the next one. So this initialization vector means that if they happen to crack one of your encrypted files by accident, they can't just use the exact same thing against all of them. It starts you from a different point. It helps the math out, but it's not really a secret. It helps prevent the scaling. It keeps every single thing you encrypt unique. But you have to actually store it on disk alongside the ciphertext, because otherwise you can't decrypt.

Hans R.: So that's why, for the rest of it, you convert your data to bytes and you do the encryption. But that last step is where the IV comes in, because you have to keep that around. So a common technique is to attach that IV to the bytes of the encrypted file, front or back, doesn't really matter, as long as you know what you did, all right? So typically, you just combine those bytes together, base64-encode them, there you go. You got an encrypted string, all right?
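
Pulling those steps together, here's a minimal sketch. AES-GCM is one reasonable choice of mode (the talk doesn't mandate one), and java.util.Base64 assumes Android API 26+; android.util.Base64 covers older devices.

```java
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

final class Encryptor {
    // Step one: a random AES key. Storing it safely is the next section.
    static SecretKey randomAesKey() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        return kg.generateKey();
    }

    static String encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[12];               // fresh 96-bit IV per encryption
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(plaintext);

        // Attach the IV to the front so we can decrypt later; it's not a secret.
        byte[] combined = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, combined, 0, iv.length);
        System.arraycopy(ciphertext, 0, combined, iv.length, ciphertext.length);
        return Base64.getEncoder().encodeToString(combined);
    }
}
```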

Hans R.: So let's dig into the storing part because that's the interesting part. So here's how you store the key. The technique I'm looking at here is the slightly easier technique to implement, and this is where you generate a key store and use the passcode as the password of the entry in the key store. So that's how you get the security aspect of it. So you make a salt or get a salt, depending on whether it's the first time through, because the salt, by the way, also gets stored. I'll go into the salt bit in a second. But then you hash the passcode, and you hash the passcode in case somebody has a bad passcode, in case it's a short passcode. So that's why you want to salt it and hash it, because this way, if it's one, two, three, four, and as you well know, 100,000 other users are going to be using one, two, three, four, the attacker can't just look at the hashed passcode and be like, "You know what? That's one, two, three, four, because I've got eight other examples of it."

Hans R.: So you take the salt and you take the passcode and you hash them together so they at least all look different. And that way, your actual key entry, because you're using this as your password, will be different. So they can't just be like, "Oh, I know one, two, three, four hashes into this. Boom. Let's try it in the JKS and see if we can get it." So what you do is you hash the passcode with the salt, which is unique per entry on the device, and there you go. You have it so it's unique for every single JKS entry. That way, if they sweep all your JKS files, they can't just say, "Here's the hash for one, two, three, four. Boom, boom, boom. I got 100 users." Prevents the scalable attack. Make them work for it.

Hans R.: And then you can follow the rest in the code. This is Java-specific, so it may not be useful for a lot of you. But you put it into the key store password protection type. You can use the default. Perfectly fine for this option. Put it in a secret key entry. Take the key store. Output it to the file. Boom. There you go. All right, so that salt. What is that salt? All a salt is, is random bytes. SecureRandom, X number of bytes. I did 64 here. It needs to be the same as the cipher size, I believe. So just secure random bytes. I store it straight to SharedPreferences. Not a secret. Purely there to prevent scalable attacks, all right? In fact, it can't be a secret because, like I said, you have to be able to use it over and over or you've kind of screwed yourself. Cool.
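
Here's a sketch of that storage step. It assumes Android's bundled "BKS" keystore type, since plain JKS can't hold secret-key entries (on desktop Java you'd reach for JCEKS or PKCS12), and PBKDF2WithHmacSHA256, which needs API 26+; the iteration count and sizes are illustrative defaults.

```java
import java.io.FileOutputStream;
import java.security.KeyStore;
import java.security.SecureRandom;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

final class KeyVault {
    static byte[] newSalt() {
        byte[] salt = new byte[64];            // random bytes; not a secret
        new SecureRandom().nextBytes(salt);
        return salt;                           // persist it, e.g., in SharedPreferences
    }

    static char[] hashPasscode(char[] passcode, byte[] salt) throws Exception {
        // Salted, stretched hash so "1234" looks different for every user.
        PBEKeySpec spec = new PBEKeySpec(passcode, salt, 100_000, 256);
        byte[] hash = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                .generateSecret(spec).getEncoded();
        return java.util.Base64.getEncoder().encodeToString(hash).toCharArray();
    }

    static void storeKey(SecretKey key, char[] entryPassword, String path)
            throws Exception {
        KeyStore ks = KeyStore.getInstance("BKS");
        ks.load(null, null);                   // fresh, empty store
        // The hashed passcode is the password protecting the entry.
        ks.setEntry("app-key", new KeyStore.SecretKeyEntry(key),
                new KeyStore.PasswordProtection(entryPassword));
        try (FileOutputStream out = new FileOutputStream(path)) {
            ks.store(out, entryPassword);      // write the store to disk
        }
    }
}
```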

Hans R.: All right. So that's how you do user passcode protection, but what if I need things in the background, all right? That's where hardware-backed solutions come in. This is the new hotness. It does depend on the device, though. iOS? Quite strong, especially on newer devices and OSs. Android? You know Android. It's a mixed bag, all right? So depending on the device, this can be quite strong. Most of these are their own chip on the device. Some are software-implemented. Those are the weaker ones, but most are hardware-backed these days.

Hans R.: This is actually a separate chip on the device. The key is generated in the chip, never leaves the chip, and whenever you need to encrypt something, you actually pass the data into that encryption chip and just get the result back. You never even see the key yourself. So very strong. And typically, and this is the nice part, it's available as long as the device is unlocked. And you can actually customize that. That's still evolving over time. They change things up all the time, but typically, as long as the device is unlocked, it's available, which is very handy if you have to do a lot of background operations.
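
In code, using such a key looks almost like ordinary JCE work; the difference is the key handle points into the secure hardware. A sketch, with a hypothetical alias:

```java
import java.security.KeyStore;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;

final class KeystoreCrypto {
    // Encrypt with a key living in the Android hardware-backed keystore.
    // "refresh-token-key" is a placeholder alias.
    static byte[] encrypt(byte[] plaintext) throws Exception {
        KeyStore ks = KeyStore.getInstance("AndroidKeyStore");
        ks.load(null);
        SecretKey key = (SecretKey) ks.getKey("refresh-token-key", null);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key);   // the work happens in the TEE
        byte[] iv = cipher.getIV();              // keystore picks the IV; persist it too
        byte[] ciphertext = cipher.doFinal(plaintext);
        return ciphertext;                       // combine with iv as shown earlier
    }
}
```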

Hans R.: If you're checking for chat logs in the background or if you're a banking app and you just want to send a heartbeat once in a while just to make sure you're still alive, that's where this really is handy. And it's the best you can get. On a mobile device, it's about the best you're going to get these days because they can't export that key. They can't try to brute force it. They can't just sweep the whole phone and do whatever they want offline.

Hans R.: But the downside, if you rely on transferring between devices, if you're one of those more common use cases where you want to be seamless as soon as the user gets a new phone, not quite possible with a hardware key store. Like I said, it's not exportable. That chip won't allow that key to be exported, so even if Apple wanted to, they couldn't pump it through iCloud and put it down the new device. So you're going to have to assume that anything encrypted by this key may be lost, may be gone. So make sure that you have recovery flows in place where you have most of your data but all the encrypted stuff is gone. Just make sure that you're able to bootstrap yourself back up from that. Have some sort of flows in place. Be defensive about it.

Hans R.: And this is great for modern OSs, but when they first added this, it was very finicky, both in iOS and Android. To speak from experience, in Android, basically 6.0 and up is about the only thing I would use it on. If you still support older OSs, be very cautious about using the hardware key stores. I believe iOS 9 is about where it stabilized pretty well, and these days, most people only support 9 and up anyways. But a lot of Android people support more legacy OSs. Be cautious on those older devices. You may want to do a split where older devices take one path, newer devices take another.

Hans R.: So how do I actually do this? Really, most of it's the same. It's still a key. You still have the same algorithms, and the APIs actually mask a lot of the usage portions of it. The difference is when you generate it. So right there, you see the KeyGenerator getInstance call with a provider. It's not the standard Bouncy Castle provider. It's actually "AndroidKeyStore". That provider is what's specific in Android for using the trusted execution environment. Apologies.
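
Reconstructing that slide roughly, for Android API 23 and up; the alias is a placeholder, and note the builder makes you declare purposes up front, which is the next point.

```java
import android.security.keystore.KeyGenParameterSpec;
import android.security.keystore.KeyProperties;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

final class KeystoreKeyGen {
    static SecretKey generate() throws Exception {
        // The "AndroidKeyStore" provider is the whole trick: the key is
        // created inside, and confined to, the trusted execution environment.
        KeyGenerator kg = KeyGenerator.getInstance(
                KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore");
        kg.init(new KeyGenParameterSpec.Builder(
                        "refresh-token-key",  // placeholder alias
                        KeyProperties.PURPOSE_ENCRYPT | KeyProperties.PURPOSE_DECRYPT)
                .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
                .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
                .build());
        return kg.generateKey();
    }
}
```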

Hans R.: So iOS, similarly, it's all down to when you generate the key, and then there's API wrappers around it for actually using it, which mask all the nitty-gritty. And there's two more important points here I wanted to point out. The first one is purposes. Especially in the new OSs, they're getting better about this, where this helps, again, in those unknown broken scenarios, the scenarios where the OS itself is vulnerable, because, again, we shouldn't be trusting the OS. So when you generate the key, you can dictate exactly what purposes this key is able to be used for.

Hans R.: I'm only using this for encrypting and decrypting things, so don't give it the signing purpose. Don't give it the verification purpose. Don't let anybody use it for those. That way, if they try to use it for that, you get a little flag and can be like, "Hey, what's going on? Somebody's trying to use my key here." Same if you're only using it for signing, like JWTs. Just give it sign and verify. There's no need to let people encrypt things with it. And then user authentication required is the next one. So this determines when you can access the key, and this is the one that's currently evolving on both platforms, where in Android now it's just yes or no, or you can make it biometric, I think.

Hans R.: Apologies. Let me get some water real quick. But this user authentication required, especially in iOS, is very customizable. So basically, you can say the key's only usable within five minutes of when the user authenticated, or 30 minutes, or an hour.

Hans R.: So that right there sets your paranoia level. That's what it's very useful for. So say you know that in your app, you only want the key used while the user's active or something like that. Then be like, "Cool. Only make it usable if the user's authenticated, and recently." Or, "This is for background purposes. I'm just doing these background checks periodically. I don't really care how recent it's been." Then just don't even require user authentication. You can use it whenever. You trust the key store. We don't need the user to have authenticated recently.
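
On Android, those knobs live on the same KeyGenParameterSpec builder from the earlier sketch. A rough illustration; setUserAuthenticationValidityDurationSeconds is the older form of the API (newer releases split it into finer-grained variants), and the 30-minute window just mirrors the example above.

```java
// Extending the builder from the earlier generation sketch:
new KeyGenParameterSpec.Builder("refresh-token-key",
        KeyProperties.PURPOSE_ENCRYPT | KeyProperties.PURPOSE_DECRYPT)
        .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
        .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
        .setUserAuthenticationRequired(true)        // key refuses to work until the user unlocks
        .setUserAuthenticationValidityDurationSeconds(30 * 60) // ...and within the last 30 min
        .build();
```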

Hans R.: Cool. So the last option I mentioned earlier is what about RAM-only storage? This is as secure as you can get. It's the best option. If they can read your RAM, you are beyond hosed. So it's really the best you got here. But it's really only good for things that are volatile because this is not something that you can bootstrap with. It's already been bootstrapped. If you had the refresh token RAM-only, it doesn't really do you any good because your app's not in RAM terribly long.

Hans R.: This is really good for things like access tokens, or PII that's very small or that you don't mind fetching often, very volatile stuff. And usernames and passwords should always be RAM-only. Any passcodes, anything that's part of the user-given portion of your encryption flows, should be RAM-only, because it's the safest. It's the best. If you don't put it on disk, then it can't be read by anything else reading your disk.
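
There isn't much code to "RAM-only," but one idiom is worth showing: hold such values as a char[] rather than a String so you can actually zero them when you're done. A sketch of a hypothetical holder, with the caveat that the garbage collector may still have copied the bytes around:

```java
import java.util.Arrays;

final class InMemorySecret {
    private char[] value;  // never written to disk, prefs, or logs

    void set(char[] secret) {
        value = secret;
    }

    char[] get() {
        return value;
    }

    void wipe() {
        if (value != null) {
            Arrays.fill(value, '\0');  // overwrite before dropping the reference
            value = null;
        }
    }
}
```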

Hans R.: But wait. There's more. So there's two more concepts I wanted to get into that aren't really about how to store things on disk, how to store things directly. So the first is: a client is only an expression of the server data. There are cases where your client is generating data or has business logic in it, but you shouldn't really be storing the heart of your customer data on the client, because you shouldn't trust these environments. So clients need to be reactive to the servers. The server at any given time can revoke your access token. They can revoke your refresh token. They can revoke or rotate certificates. So you really need to always code in a very defensive manner. Expect the server to do weird things at any given time. Want it to. That's a good thing. Having the ability to revoke these tokens is very good from a security posture perspective.
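
Concretely, "reactive" can be as simple as treating any 401 as a revocation event. A sketch assuming OkHttp; the Hooks callbacks are hypothetical app plumbing for wiping tokens and relaunching login.

```java
import java.io.IOException;
import okhttp3.Interceptor;
import okhttp3.Response;

final class RevocationInterceptor implements Interceptor {
    interface Hooks {              // hypothetical app callbacks
        void clearTokens();        // drop access + refresh tokens
        void startSignInFlow();    // send the user back through login
    }

    private final Hooks hooks;

    RevocationInterceptor(Hooks hooks) {
        this.hooks = hooks;
    }

    @Override
    public Response intercept(Interceptor.Chain chain) throws IOException {
        Response response = chain.proceed(chain.request());
        if (response.code() == 401) {  // server says our token is no good anymore
            hooks.clearTokens();
            hooks.startSignInFlow();
        }
        return response;
    }
}
```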

Hans R.: So you've got to make sure your client apps are reactive to that and they respect it. If they revoke a token, cool. Send the user back through the sign-in flow. And then have good sanitation. Rotate things often. So rotating things helps for those attacks where you don't know they're happening. You don't know it's happened yet. Say you had a bug in one version of your app, and in the very next version, you made some change that patched the bug without even realizing it, or you patched it but didn't think anybody had used it yet. If you rotate things often, that really narrows the window of how useful that attack was, because if you rotate, say, the refresh token every 30 days and make the user do a full reauth every 30 days, that means there's only 30 days that attacker could hit your servers without you being aware of it.

Hans R.: And then, cool, the next five years of Facebook's completely unencrypted passwords in the database are not secret, or, sorry, not vulnerable. So really, rotating things often is something you should consider for any of your secret stuff: any refresh tokens, any certificates, any signing keys, any keys used for encrypting data. Make the user rotate their passcode once in a while. It just really helps mitigate those undiscovered attacks.

Hans R.: And then as we wrap up here, if you forget everything else ... I've covered a lot of territory so far in this talk. What should you remember? Client code is always public information. If it can be run on an OS, somebody can find out exactly what's in your code. Nothing secret should go in client code.

Hans R.: Don't trust the OS or hardware. This is deployed code. This code's not on your servers behind a VPN, behind a firewall, where only people with privileged server access can get to it. These things are out on who knows what OS, who knows what hardware. Could be an emulator. You don't know. So don't trust it. Be defensive.

Hans R.: Perfection's impossible. It's just not. Not if a user wants to use it. Make your attackers hate you. Make them work. Hackers are lazy. Make them put some effort into it. Make it so there's another target that's a lot easier and is a better opportunity. That's what we're going for here.

Hans R.: Then don't let attacks scale. Ensure uniqueness. Every user should be unique. Every encryption should be unique. If they can break one piece of your data and break all of your user base, that's a high reward, low risk scenario for the attacker because they get everything for one single break, one single user with a bad password. Make sure that if one user gets compromised, everybody else is fine. Don't let it scale. That way, sure, they get that user's data, but they don't get everybody. They don't get everything. They don't take your whole company down.

Hans R.: And then rotate things often. Have good sanitation. Look at the OWASP guidelines. Look at the NIST guidelines. Just have good general sanitary habits about rotating things. And then ... and this is something people forget a lot, which I really want to drive home. If it can be used to get a secret, it's just as valuable as the secret, all right? All that PII data? Those are effectively passwords. All those things you see on Facebook where it's like, "Oh, if your birthday is in this month, post this, and if it's in that month, post that, and then you have some funny name." They're trying to get at your PII. They're trying to find your birthday.

Hans R.: How many recover password flows out there say, "What's your birthday? What's your social security last four digits? What's your grandmother's maiden name?" So all that data is just as valuable because if they can reset your password, they have your password. That's it. That's what you're trying to protect. So anything that can be used to generate a secret is just as valuable as the secret.

Hans R.: And with that, I'd like to open it up for any questions. I think we have a mic to be passed around, so yeah.

Speaker 3: Just go for it if you want?

Hans R.: It's right behind you.

Speaker 3: Oh, it is?

Hans R.: Yeah.

Speaker 3: So I was curious what the ...

Hans R.: Sorry. Is the ... ? Yeah. Flip the switch on it.

Speaker 3: There it goes. Okay. All right. Good. So I was curious what your opinion is on modern MDM encrypted containers. Say you're running everything out of encrypted containers. What's your feeling on the security level and the trustworthiness of, say, an AirWatch MDM container as another layer of security, potentially, if the device is compromised? What are your thoughts on that?

Hans R.: I would say it's good, but I wouldn't say it's another layer of security. And Android specifically, I can talk in depth about that. iOS has a different model that I'm not an expert on, so I'm not going to fully speak to it. But the encryption they're using is the same as the device encryption. So if the device is already encrypted, it's not adding a whole lot. It's changing it so that if there's a malicious app on the personal profile, it can't get anything in the work profile. So there is a secure boundary there. But it doesn't really protect against rooted attacks. It doesn't protect against anything that has escalated privileges, because if they can break the device encryption or they can break the permission model, then it doesn't really matter which profile it's in, whether it's a work or personal profile. But it does help against un-escalated attacks within a profile.

Hans R.: So if there's some vulnerability in intents, where you can manipulate intents or broadcasts or something like that to get at some of its data, the work profile does shield you against that, because those don't really cross the profile border. But when it comes to actual file-based encryption, it's the same file-based encryption. It may have a different passcode, but that's about the best you get. So it's not a huge leap up. It's more like a separation of concerns.

Hans R.: Any other questions? Cool. Well, if there's nothing else, I would like to thank all of you. Please like and subscribe, and thank you very much. Hope you have a great time.

Have you ever built a new mobile app and wondered where to put all those little secrets that pop out of flows like OIDC or OAuth? Or, have you been nervous about hackers targeting your app to get at your customer data? You should attend this talk! One of Okta's top mobile developers will share all the tips and tricks for making sure that secrets stay a secret within your mobile apps.