Speaker 1: Ladies and gentlemen, please welcome Karl McGuinness.
Karl: Welcome, I am excited about this afternoon's panel. For the last four Oktanes I've been doing a presentation called "The Future of Identity and Security." For our fifth Oktane, I wanted to do something a little bit special: I'd like to call up some guests and do a panel. George, John, Grant, could you come up please? I will let you sit down and introduce yourselves. Why don't we start off by just going down the line: give yourself an introduction, tell us what you have been up to and where you are working, and we will start from there.
John: John Bradley. I'm an architect at Yubico, working on the FIDO standards. Most of you probably know me from OpenID Connect and OAuth. I am responsible for all the parts you don't like.
George: George Fletcher. I'm an identity architect at Oath. If you're wondering what in the world Oath is, it's the combined company of AOL and Yahoo, formed after the Verizon merger and Verizon's purchase of Yahoo earlier this year. I work on B2C and B2B identity products.
Grant: I'm Grant. I'm a software engineer at Google. I work on identity and authentication teams on the enterprise side.
Karl: Excellent, we have some really deep insight here on the panel; I think it will make for great conversation. What I would like to do is go through some Q&A, and leave maybe 10 minutes at the end of the session open for audience questions with mics, because I think there will be some interesting feedback. I would like to start off by piggybacking on the themes from this morning.
The first theme was really about monetizing the extended enterprise. There are some significant challenges, obviously, as identity has shifted out to the cloud and outside the perimeter. I want to get a state-of-the-union comment; I'll go down the row. Can you describe the core challenges, what's happened in the last few years? Give the audience a sense of where we're at before we talk about where we're going.
John: Prior to this, I was doing federated identity standards for somebody whose name shall not be mentioned here. There has been a progressive move from SAML to OpenID Connect. I was in the UK a few months ago, meeting with one of the big banks. They explained to me that they didn't actually care about SAML anymore; what they were looking for were products that would deliver OpenID Connect and OAuth.
If we wanted to throw in SAML for free they'd take it, but they weren't looking for a SAML product. That was the point where I realized that OpenID Connect had its own momentum, had reached an inflection point; I didn't need to keep pushing that rock up the hill. The base parts of OpenID Connect were designed to be basically LoA3 compatible. One of the design goals was that it should be as secure as possible even in the simplest configuration.
It was massively more secure than the OpenID 2.0 that preceded it, even in the simple way that people do the code flow. I think that's progressing. I think some of the stuff we have been working on with token binding is an underlying glue that's going to tie together primary authentication like FIDO to OpenID Connect, all the way down to the access tokens. It's a matter of building on the momentum that we already have, and it is coming together.
Karl: What we have seen is rebuilding the protocol stack, solving the edge cases, thinking about some of the lessons learned from the previous protocol stacks, and reaching critical mass. The next-generation technology is at the point where even the conservative companies are realizing that it's the gold standard going forward; we don't necessarily have to build for the older protocols. That is pretty big progress. George, what are you seeing?
George: I think in addition to leveraging standards like OpenID Connect and OAuth2 externally, which has been the model in the past, there's a greater need to use them internally. The rationale for that, I think, is that we have to assume that our network is compromised. As soon as you do that, you have to basically authenticate and authorize every transaction, and the standards are there and usable in that context. I think there is a shift from just looking at them externally to looking at them internally as well.
Karl: Great point; that brings up zero trust models. Assume zero trust: that's a big initiative that Google's been working on. Why don't we hear about it from Google?
Grant: Sure. With zero trust, we made an observation in 2009, when we were attacked by China, that relying on the network perimeter was not going to be sufficient for providing security against state actors. We have been on a long transition towards reducing our reliance on the network and de-privileging pretty much all access, essentially moving endpoints outside of the network boundary.
We achieved that internally a couple of years ago; we're still improving on it, and we're still bringing it to bear in the platforms we're offering to other companies out there who are building on our cloud or otherwise. I think it is really true that even conservative companies, as George was saying, are starting to realize that the VPN-centric, network-based model is not going to work going into the future.
It started with mobile devices, but even now with desktops we're seeing the same thing: they need to move their endpoints outside the network. I think it is really a question of dealing with legacy applications, like how do we interoperate with a legacy application environment where maybe you as the security engineer don't really trust the developers to handle identity inside their application?
You need some way of providing the identity in a trustworthy way as part of the platform, one that still interoperates with the application yet doesn't rely on the network for security directly. That is a tricky little piece to navigate. Also, how do we enable the next generation of applications that are being written natively in the cloud?
Whether they are using serverless architectures, or containers, or whatever the latest and greatest technology is that people are using to build the next generation of applications, how do we make sure that the identity platform they're working on is also zero trust, is also rock solid? I think that's the next challenge in my mind that we have to solve, unfortunately in parallel with still dealing with all those apps that we have to move out of the network and into an identity-based security model.
Karl: Does that mean the proxy is going to stay around forever? We kill the VPN just to replace it with a reverse proxy? Is that where we're headed, at least in the near-term future?
Grant: I think the proxies aren't going away, but taking the endpoints out of the network is a huge part of the puzzle. If you take the endpoints and put them on the no-trust side, you have a much smaller trust surface. I think there are a lot of applications that won't need to rely on reverse proxies, right? Applications that are being built as SaaS applications, applications being built in a cloud-friendly way from the beginning, as the authentication protocols get simpler.
It's much easier to implement an OIDC relying party than it is to implement a SAML relying party. It becomes more possible, with good client libraries, for people to do identity correctly and securely in the app. Yeah, I think the reverse proxy is going to be with us for a long time, but it is not the only tool in the toolkit.
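Grant's point that an OIDC relying party is easier to implement can be made concrete. Below is a minimal sketch of the relying party's first step, building an authorization-code request with PKCE, using only the Python standard library. The endpoint, client ID, and redirect URI are placeholders, and a real deployment should use a maintained client library rather than hand-rolling this.

```python
import base64
import hashlib
import secrets
from urllib.parse import urlencode

def make_pkce_pair():
    # RFC 7636: high-entropy code_verifier plus its S256 challenge
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode()).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

def build_auth_request(authz_endpoint, client_id, redirect_uri):
    verifier, challenge = make_pkce_pair()
    state = secrets.token_urlsafe(16)   # CSRF protection for the redirect
    nonce = secrets.token_urlsafe(16)   # binds the ID token to this request
    params = {
        "response_type": "code",        # authorization code flow
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid profile email",
        "state": state,
        "nonce": nonce,
        "code_challenge": challenge,
        "code_challenge_method": "S256",
    }
    # the RP must persist verifier/state/nonce to validate the response
    return authz_endpoint + "?" + urlencode(params), {
        "verifier": verifier, "state": state, "nonce": nonce,
    }
```

Even in this "simplest configuration," the code flow with PKCE, state, and nonce gives the baseline protections John alluded to earlier.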
Karl: Maybe John can touch on that. What are some of the other tools? You mentioned token binding being another tool; maybe you want to expand on that.
John: We've had mutual TLS for a while. If you want to see a pained expression on someone's face, ask how hard that is to implement from a cloud perspective. You get all the credit for actually having done what I thought was probably impossible, but it's not for the faint of heart, and other people and SaaS providers shouldn't try to do that on their own.
One of the things token binding does is take that static configuration and abstract it so that it can work with SNI and all of the other parts of the cloud infrastructure that we're used to, and dynamically provision credentials on the clients. You can do things like have a centralized cloud identity provider, and have that identity provider bind a key pair for that browser to whatever ID token it's issuing.
That token then goes to the relying party, which can then, like Google does for its properties, issue token-bound cookies, which are used as proof of possession, so you are protected against those sorts of man-in-the-middle attacks. You don't have that static configuration; the cookie's key pair is bound to the actual domain name that you are talking to, as opposed to being negotiated lower down at the TLS layer. It's all designed to work with the modern cloud infrastructure.
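The proof-of-possession idea John describes can be sketched as follows. This is a deliberately simplified model: real token binding (RFC 8471/8473) negotiates a per-origin asymmetric key pair down at the TLS layer, while this sketch substitutes a symmetric HMAC key so the example stays self-contained. The point it illustrates is that a stolen token-bound cookie is useless without the client-held key.

```python
import hashlib
import hmac
import secrets

class BoundCookieClient:
    """Holds a per-origin key that never leaves the client."""
    def __init__(self):
        self.key = secrets.token_bytes(32)

    def key_id(self):
        return hashlib.sha256(self.key).hexdigest()

    def prove(self, challenge):
        # proof of possession over a server-supplied challenge
        return hmac.new(self.key, challenge, hashlib.sha256).digest()

class BoundCookieServer:
    def __init__(self):
        self._bindings = {}  # cookie value -> bound key id

    def issue_cookie(self, binding_key_id):
        cookie = secrets.token_urlsafe(16)
        self._bindings[cookie] = binding_key_id  # bound, not a bearer token
        return cookie

    def verify(self, cookie, challenge, proof, client_key):
        # a replayed cookie fails unless presented with the bound key
        if self._bindings.get(cookie) != hashlib.sha256(client_key).hexdigest():
            return False
        expected = hmac.new(client_key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, proof)
```

A man in the middle who steals the cookie still fails `verify`, because the proof must be computed with the key bound at issuance.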
Karl: You've got a big scale factor, aligned with the deployment model of the cloud and its ephemeral, scalable resources. The protocol stack is evolving toward an end-to-end ability to chain the protocols together and build trust there.
John: Most importantly, it doesn't suck for the user the way mutual TLS does. You've probably had the browser pop up and give you five different serial numbers and say: you have one chance to select the correct one for authenticating to this site, and if you pick wrong, the browser will remember that forever and you'll be denied access. It's probably happened to most of us.
Token binding gets rid of all those bad user experience problems. I spent 15 years trying to get browser vendors to actually make mutual TLS work properly, and it didn't work.
Karl: There is a key aspect there that I heard you mention; maybe George can speak to the user experience part of it. As we look at the extended enterprise and its challenges, you've said a lot about the protocol stacks and the technologies if you are building net-new applications. The other thing we've talked about a lot here is digital transformation, changing the experience side of it all.
As we scale and look at more identities on the external side, connected to our enterprises, with user-first experiences: what's happening on that side of the house? Can you comment on how it's changed?
George: I'm not sure specifically. Obviously things like fingerprint readers and iris scans are changing authentication as additional factors. I think there's a long way to go there; a single biometric tends not to be that secure on its own, but put it in combination with a whole bunch of others and it becomes a lot less easy to fake. I think there are some good things going down that path.
For sure, as we get more people involved in single sign-on federation, that reduces the number of credentials you have to manage. That plays into social-login-ish kinds of things, whether it is an actual social login provider or just an identity provider that you trust. I think there's some decent movement there.
I think there are still a lot of interesting challenges for us as an industry to resolve when it comes to supporting federated identities inbound to your applications, and how you manage that, especially the account recovery flows. We're making it better, especially on the authentication side; I think we have work to do on the account recovery side.
Karl: That definitely echoes a lot of things. Account recovery is the double-black-diamond problem of any authentication system: you design all of the nice front door mechanisms, multiple factors and devices, stand on one foot; and then the recovery channel is just convincing the guy next door to hit the reset button and you're in.
I think we definitely agree there are a lot of challenges on the recovery side. Does anybody on the panel have ideas as to how recovery might change in the next few years, or different models for recovery we can look at?
John: Facebook did put forward a proposal not too long ago about developing an account recovery protocol. In a lot of cases we have implicit federation through email; that's the way people do account recovery. If you are just firing off an email from Facebook to Google, Google doesn't know you just clicked on the URL. That doesn't give Google the chance to run the user through an account recovery process of its own.
The reason the person's Facebook account may have been compromised is that their Google account may have been compromised first. We need to do a better job of signaling. In the OpenID Foundation, we have the RISC working group that's looking at some of those issues: both signaling account compromise and doing a better job of signaling account recovery. And that's not to plug token binding too much, but it helps here as well.
If you have the appropriate credentials stored in the person's user agent, et cetera, you can get a huge uplift by getting the person to do account recovery from a device they've already logged in from, so that you at least have some factor to protect against remote attacks. Making some of these signals more persistent and more explicit at the account recovery stage would help things.
Grant: Let me throw in my two cents on account recovery. One of the things we realized a couple of years ago, when we started thinking seriously about account recovery, was that people often optimize for different metrics for account recovery than they do for login. When you are talking about sign-in, people are often optimizing for the metric: how often am I keeping the bad people out?
Whereas for account recovery, people are optimizing for the metric: how often am I recovering people's accounts? It turns out that sign-in and account recovery are exactly the same problem with different challenges. On the one hand you are optimizing for how many people you are letting in; on the other hand you're optimizing for how many people you are keeping out.
You end up having these metric conflicts that can lead you to create a much bigger backdoor than you thought you were creating. I think part of solving this problem is recognizing that it is in fact the same problem as sign-in, and you need to use the same metrics to judge both of them. You want to let the good people sign in or recover as easily as possible, but keep the bad people out.
That sounds obvious, but I think it's not obvious to a lot of people, and it's important to keep in mind. Then the second point: I think federated account recovery matters, whether it uses the protocol Facebook proposed or more traditional federated identity protocols, because again, account recovery and sign-in are very much the same problem.
It's really important because we're in this world where a very small number of entities know a lot of data about you that they can use to effectively challenge you to recover your account. Facebook knows your friend graph and can use that as a way of recovering your account. Google knows a lot about the data that you have: photos, emails, et cetera.
We can use that as data to challenge you to recover your account. Companies like Experian have their knowledge-based offerings, but I think KBA, knowledge-based challenges, can only take you so far. The small set of vendors that do know a lot about you as a person are in the best position to do that kind of account recovery, and then federate that out to the internet.
I worked in the Obama administration briefly, and I was there when OPM was hacked and my SF-86 was stolen by the Chinese. Basically, if the Chinese want to answer the knowledge-based questions that most websites ask you, they have all the data they need. I don't really have any interest in account recovery by anyone other than Google or Facebook, who know much more about me than what's in those forms.
I think that's really an important problem for us to keep working on.
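Grant's argument that sign-in and recovery should be judged by the same metrics could be sketched as a single evaluation function applied to attempt logs from either flow. The data shape here is illustrative, not from any real system:

```python
def challenge_metrics(attempts):
    """attempts: list of (is_legitimate_user, was_admitted) pairs,
    drawn from either the sign-in flow or the recovery flow."""
    good = [admitted for legit, admitted in attempts if legit]
    bad = [admitted for legit, admitted in attempts if not legit]
    # rate of locking out legitimate users (what recovery teams watch)
    false_reject = sum(1 for a in good if not a) / max(len(good), 1)
    # rate of admitting attackers (what sign-in teams watch)
    false_accept = sum(1 for a in bad if a) / max(len(bad), 1)
    return {"false_reject_rate": false_reject, "false_accept_rate": false_accept}
```

Judging recovery only by its false-reject rate, as Grant notes teams tend to, hides the backdoor that the false-accept rate measures; computing both, for both flows, makes the metric conflict visible.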
George: Another interesting aspect of account recovery is that we need to change the identity model so it is not an all-or-nothing scenario. Certain levels of challenge answers get you certain levels of access, and maybe to regain full access I need to use a very trusted device, potentially on a trusted network if it's an enterprise scenario.
Maybe I don't get back to full access until I go into the office and do some flow where I go to the help desk in person and show my driver's license or whatever. I can get some access sooner, but we don't tend to think about the models in that tiered way. Maybe that's something we should look at going forward: how we tier the level of access based on how much trust we have in the person's recovery process.
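The tiered-recovery model George sketches might look like the following. The tier names and signals are hypothetical, not from any real product:

```python
# Illustrative tiers for George's "not all-or-nothing" recovery model.
TIER_READONLY, TIER_STANDARD, TIER_FULL = 1, 2, 3

def recovery_access_tier(signals):
    """Map recovery evidence to a provisional access tier."""
    if signals.get("in_person_helpdesk"):
        # strongest: driver's license shown at the office help desk
        return TIER_FULL
    if signals.get("trusted_device") and signals.get("trusted_network"):
        return TIER_STANDARD
    if signals.get("kba_passed"):
        # weakest: knowledge-based answers alone
        return TIER_READONLY
    return 0  # no access restored
```

The design choice is that recovery grants a tier, not a boolean, so a weak recovery channel can never silently become a full-access backdoor.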
Karl: That is a great lead-in, because that's exactly what we tried to identify this morning with our initial first steps, and I was thinking through some of that. As you go into security operations and maybe review something: how could you constrain the access until you get approved? You could do the same in recovery as well.
Thinking about the recovery flows as part of the policies and authorization access, just as much as you typically think about the factors in authentication, is a pretty insightful way to look at it going forward. One thing I want to talk about next is SaaS. Obviously Okta has done a good job making SaaS successful in the enterprise, connecting it to the enterprise Active Directory, driving adoption of SaaS.
As SaaS becomes more and more the way people do business, it introduces more scale problems. We have the zero trust problem: it's great when we talk about our own app stacks, where we control the applications we build and can apply some of the zero trust techniques, but in the SaaS model it's now an ecosystem, ISV problem.
The same with the authorization problem statement: now you have all these different islands of OAuth clients and scopes and access across every single SaaS application, and you have to manage them. Federated identity is great, but now with mobile devices and workflows, once you're logged in you have a long-term session in those applications.
Maybe some of those signaling things apply here. Can you paint the picture of how we're going to deal with the scale factor of SaaS, now that we're past basic federation and basic user provisioning? What's the next part of the horizon?
John: One of the things that William Denniss from Google and I have been working on is device posture, and we've talked about this a bit. Essentially, in modern applications you care about: what is the device, what's the application, and who is the user? In some ways you have three different identities that you are trying to juggle and make authorization judgments about.
One of the things we don't have a good handle on at the moment is: what is the device, is it managed, does it have a device policy controller, and what is the application? Which means that we're flying about two-thirds blind half the time. Certainly in my previous life, one of the most common questions that larger enterprises asked was: could you just let the apps come in only from managed devices?
That sounds a lot easier than it actually is. The problems start as soon as you try: you have to log in to the device policy controller to be able to download the cert, but the account requires a cert to log in. You get a bootstrap problem, where even on Android at the moment you can't actually expose the certs on the device when you are logging in through Google, for a bunch of technical reasons.
The Android team is working on that. One of the things we have been looking at is allowing apps to actually take on more responsibility. On Android there's something called SafetyNet; there's something that works much less well on iOS, but the principle is the same, and Microsoft also has an attestation service.
In principle, you can get the attestation from the operating system, which tells you what the device is, what the signing key of the app was, et cetera, so that at least when you are talking to the token endpoint, using token binding as a way of doing remote attestation, you will know what the app is and can trust the app.
The app can actually tell you a lot about the device: whether it's managed, who is managing it, what have you. Karl and I have also been discussing that it's good for the SaaS to know what native app is talking to it. The next question is how we communicate that information, or some abstraction of it, back up to the identity provider, so that the enterprise can actually say: no, you shouldn't be using that app on that device, or such.
We have two options going forward for scaling. One: we can all do SCIM and have the enterprise push device policies down to the SaaS, which has pluses and arguably more minuses. Or we can push more of the information up to the authorization server. Now, authorization servers haven't typically done a very good job of this; your typical SAML IDP authenticates the user and just sends the assertion back.
Most people haven't really used that for making policy decisions like: is it a good relying party that you are going to? Do you have any rights for that relying party? Perhaps your role isn't appropriate for that relying party. There needs to be more policy built into the identity provider, so it can check whether you have the right role, group, et cetera, to go to that relying party, rather than just saying yes every time.
Is the thing that you are coming from actually allowed to do the thing that you want to do? I think we may be looking at a movement to make identity providers more intelligent and take on more responsibility, at the enterprise or identity-as-a-service level, as opposed to trying to have the SaaS providers intermediate.
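A rough sketch of the "more intelligent identity provider" John describes, where the token endpoint weighs user entitlements, device posture, and app attestation together before issuing a token. All field names here are illustrative, not from SafetyNet or any real attestation API:

```python
# Hypothetical policy check at an IdP token endpoint, combining the three
# identities John lists: the user, the device, and the application.
def authorize_token_request(user, device, app, relying_party):
    """Return (allowed, reason) for a token request to one relying party."""
    if relying_party["id"] not in user["entitled_rps"]:
        return (False, "user has no rights for this relying party")
    if relying_party.get("requires_managed_device") and not device.get("managed"):
        return (False, "unmanaged device")
    if app.get("signing_key") not in relying_party.get("allowed_app_keys", []):
        return (False, "unrecognized app attestation")
    return (True, "issue access token")
```

The point is that the IdP answers John's question, "is the thing you are coming from allowed to do the thing you want to do?", instead of just saying yes whenever the user authenticates.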
Grant: I will throw in a couple of points on that too. One of the issues that comes up when you try to do the thing John is talking about is session length. Session length is the bane of my existence, because for something like this, if the IDP is in control, the RP session has to be time-limited, or there has to be some alternate protocol for resynchronizing the state if it changes.
Having short RP sessions just leads to crappy user experiences, for all kinds of reasons. In some cases, if the RP opts in to it, you can have out-of-band protocols for handling those kinds of authorization decisions when the state changes dynamically. I think we need to figure out how to fit all of that into the federation protocols more naturally.
RISC is a step in this direction, but there is a lot that needs to be done in this space to really get a handle on synchronizing these sessions when you have much more dynamic identity states. Otherwise, you end up with a really crappy user experience, or a situation where I can only sign in on the managed device, and then I'm signed in forever on that device, even if it becomes unmanaged. You have to deal with this problem.
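The out-of-band synchronization Grant describes, in the spirit of the RISC work, could be sketched as an RP session store that reacts to pushed events instead of relying on short session lifetimes. The event shape and type names here are illustrative, not the actual RISC event schema:

```python
# Sketch: instead of expiring every RP session quickly, the IdP pushes a
# signal when identity or device state changes, and the RP ends sessions then.
class RelyingPartySessions:
    def __init__(self):
        self._active = {}  # session id -> subject

    def create(self, session_id, subject):
        self._active[session_id] = subject

    def is_active(self, session_id):
        return session_id in self._active

    def on_risc_event(self, event):
        # e.g. {"type": "device-compliance-change", "subject": "alice"}
        if event["type"] in ("credentials-revoked", "device-compliance-change"):
            for sid, subject in list(self._active.items()):
                if subject == event["subject"]:
                    del self._active[sid]  # revoke immediately, out of band
```

This keeps long-lived sessions for the common case while closing the "signed in forever on a now-unmanaged device" gap Grant mentions.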
George: I think one other aspect is their trust models. As you adopt more and more SaaS, you are adopting each provider's trust model for how they deal with potential attack. It is not just the functionality of the SaaS provider; it is their ability to do security and authorization, especially when we look at mobile apps and the shift toward mobile.
What that effectively means is that the mobile app is making API calls into the SaaS provider, and at the API level we need to do continuous authorization, if you want to think about it that way, just like we do continuous authentication. That means grabbing the signals John was talking about and putting them through your risk engine, and potentially every partner of the SaaS app will have a different risk model.
I think there are some really interesting challenges that we're going to have to face in that space.
Karl: One of the key things you mentioned is attestation, and attestation is about verified attributes. You can't have a conversation about verified attributes and the future of identity unless you talk about blockchain. I want to get a perspective on the centralized versus decentralized model for verified attributes. We look at some of the problems with KYC-proofing your users: being able to bring a verified identity into your system with low friction.
There is a lot of hype around blockchain and blockchain use cases, and there have been some attempts at real deployments. I would like to go through the panel and get a sense of where we're at. What problems do you think it is trying to solve? What use cases do you think it would or would not be applicable for?
John: Should I be honest? The main problem blockchain is trying to solve is startups' ability to raise money. The next problem is trying to find something sustainable and useful to do with the technology. There are good and interesting things you can do with distributed consensus models, and there are many services inside of Google that use distributed consensus for internal things.
Not everything is a digital currency. Creating different sorts of ledgers with immutability, where a distributed database, all the nodes, needs to come to an agreement: that's what some of these things are good at. They're not necessarily good at privacy protection, and the key management is a hard problem.
Having world-readable information in a globally available ledger is much easier than having information that needs to be decrypted, because then you need to manage the keys, perhaps on another blockchain, and it becomes circular. I'm not sure. There are people trying to do this self-sovereign identity thing, around individuals controlling their own identities.
Mostly, though, you have a completely self-controlled identity only as long as you go through somebody that's providing the identity to you, which, again, because you're outsourcing the key management, becomes somewhat problematic. Users aren't really good at managing keys. Yubico attempts to make some steps in that direction, but people losing all of their money and resources, without having anyone to point to for recovering them, could be a problem.
I think we still have a lot of traction in the existing federation model, where most enterprises believe that the attributes they are asserting to their partners are under their control: protecting them, having a fiduciary responsibility under GDPR, and all of that. I think we can still make a lot of progress there. There may be some things we can do around publishing keys in the blockchain, et cetera.
I think the focus should be on the real problems that actually get us traction; most of the blockchain stuff the hype is around is mostly startup stuff.
George: Just to comment, a lot of blockchain identity is all about: I get to put my verified claims in my wallet and show them when I want to. John brought up a crucial issue there: most users have no way to do that themselves. They're using some other third party. We haven't really gotten away from some entity out there in the cloud knowing that I am presenting this claim to this relying party.
I think there's some interesting stuff there, and it does invert the model in some ways, but I haven't seen anything yet that really solves the problem they are trying to solve. The knowledge the IDP and the authorization server have, and their capability to understand what users are doing, can actually benefit the user, by doing exactly the kinds of things John was talking about.
In a sense it is like: you are going to that relying party; I know that relying party is really bad; you shouldn't do that. I would agree with John: we have a lot of mileage left in the model we have.
Grant: I have almost nothing to add. I think there is a deeply libertarian streak in our industry, and in the tech industry generally, that has led a lot of people to really want a model which is decentralized and distributed and doesn't place any trust in central entities. I think that has encouraged a massive amount of hype around a technology that I don't really think is the right technology to solve this problem.
If someone proves me wrong, that's great. I think cryptocurrencies are interesting as a way for criminals to secretly move money around. I don't think they are a viable replacement for hard currency, in the same way I don't think blockchain is a viable replacement for our existing identity technologies.
Karl: I think this will be the last question before we open it up to everybody. IDaaS has matured; it's our fifth Oktane, we've been doing this for a while, and Gartner has even merged the MQs. As IDaaS matures, what can organizations do today to prepare themselves for a couple of years going forward?
What's top of mind? What is the thing I should be thinking about as an IAM architect today to help prepare for the future we'll see in the next few years? Grant, you can go first.
Grant: I think the big thing that's going to happen in the next five years is that the move to the cloud is going to push itself even further down the chain of organizations. If you're just starting to use a cloud-based identity provider today, that's going to mature into more SaaS applications, and then to hosting all of your legacy applications out of your data centers and running them on AWS, Azure, GCP, or whatever it's going to be.
As you do that: the reason I am in this industry is that I think it is a once-in-a-generation opportunity to rebuild our security foundations for the next 50 years, to do a better job against more sophisticated actors. It is hard, and I doubt we will be totally successful. The amount of data that's on the internet, the amount of data that all the companies you are IAM architects for are protecting, is massive.
We have to try, and I think identity is truly at the center of that, right? Relying on the network, relying on the firewall, is just not going to cut it. As we move to the cloud, we need to take this as an opportunity to reimagine our security paradigm and really take seriously the notions of device trust and identity-centric security models and all of these things.
You don't often get the chance to reinvent all of your infrastructure; it's a once-every-20-years thing. The last time this massive migration happened was when we started to seriously digitize things. Virtualization, I think, was a bit of a smaller transition. Anyway, it's a once-in-a-lifetime opportunity. I think it's really important that we move away from the network and towards identity-based security as we go to the cloud.
George: It’s not just to follow that but a simple thing. I think that we need to start looking at leveraging the security based practices around identity inside the enterprise. I have mentioned that earlier. I think it is petty critical leveraging all of the best practices that have been learnt in the sense of external and apply them internal. I think we still have a couple of interesting problems that we could solve to make it easier for enterprises to deploy.
I think in the next couple of years it is going to be pretty important for enterprises to address some of the zero-trust issues.
John: As we attempt to move more stuff to the cloud, especially identity, we need to take seriously moving towards proof of possession as opposed to bearer tokens. Keeping passwords of any sort in the cloud is just asking for trouble. I think we will see more traction for FIDO2-based authenticators, both built into the platforms (Google is working on it, Microsoft is working on it) and on external devices that some people sell.
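The distinction John draws between bearer tokens and proof of possession can be sketched in a few lines. This is a simplified illustration only: it uses a shared HMAC key for brevity, whereas FIDO2 uses asymmetric keys so the server never holds the signing secret, and every name in it is hypothetical.

```python
import hashlib
import hmac
import secrets

# A bearer token proves nothing about who presents it: anyone who
# steals the string can replay it as-is.
bearer_token = secrets.token_urlsafe(32)

# Proof of possession: the client holds a key and signs a fresh
# server challenge, so a captured response cannot be replayed.
client_key = secrets.token_bytes(32)  # provisioned at registration

def server_challenge() -> bytes:
    """Fresh nonce per authentication attempt."""
    return secrets.token_bytes(16)

def client_prove(key: bytes, challenge: bytes) -> str:
    """Client signs the server's nonce with its key."""
    return hmac.new(key, challenge, hashlib.sha256).hexdigest()

def server_verify(key: bytes, challenge: bytes, proof: str) -> bool:
    """Server recomputes the proof and compares in constant time."""
    expected = hmac.new(key, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, proof)

nonce = server_challenge()
proof = client_prove(client_key, nonce)
assert server_verify(client_key, nonce, proof)                    # fresh proof passes
assert not server_verify(client_key, server_challenge(), proof)   # replay against a new nonce fails
```

The key point is that the secret never crosses the wire; only a signature over a one-time challenge does, which is what makes phished or leaked credentials far less useful.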
We are seeing more work around being able to take a FIDO2 token and plug it into a Windows computer, and not just wake it up and unlock it, but actually take a stock computer, use a stock credential out of a FIDO2 token, and do a domain join to Azure Active Directory: completely provision the user’s credentials and get them in for both local and federated access.
There are things in the pipeline around being able to use your phone or some other enterprise device; even my watch has got a device policy controller in it these days. If you have a personal device, and there are wristbands that check your heart rate and such, then eventually, in the not-too-distant future, it will have some trusted, enterprise-provisioned root that you can bootstrap other devices from.
As people have multiple devices, it is unlikely that they will lose all of them at once. The current hurricane might prove me wrong about that, but in general people don’t lose all their devices at once, which means we can actually keep some cluster of trusted devices around. Account recovery and such from the zero state should become a fairly rare proposition that you can put a lot of resources into, as opposed to something that you have hundreds of people doing on a regular basis.
Karl: Makes sense. I’d like to open it up for any questions from the audience. If anybody has any questions for our panelists, there is one right here.
Audience: To follow on what you were just talking about, the trusted devices, things that you have versus things that you might virtually own: where do you see the move towards embedding devices into people, as definitely having something you own that you can’t lose?
John: Obviously you haven’t worked in a sawmill, which is what I did when I was young. I don’t know what the ethical issues are; certainly, I know that people in some bars and places have chips implanted so that they don’t have to carry purses, et cetera. I suppose where the demand is highest, people are already doing that sort of thing.
I suspect that asking employees to have embedded microchips may be going a little too far, at least in the near future. I suspect it will be some combination of other wearable devices, the phones and things that people have, and of course the waterproof, indestructible security tokens that are available from very considerate vendors, which you can keep in your safe deposit box and what have you.
We probably don’t need to chip people. There are privacy issues, especially with these RFID things that have a unique identifier that can just track you around passively. I prefer something that I could at least turn off if I wanted to, as opposed to something constantly beaconing out to anyone who wants to track my location, et cetera.
Audience: In terms of enabling trust, do you think progress in AI and machine learning can help in that area? Are you guys working on anything specifically using that technology?
Grant: I guess I will take that one. Obviously it can. In the last seven years we’ve seen computer vision go from computers being able to recognize hard objects but not doing well at soft object recognition, to computers being really good at soft object recognition. Same with games: we have seen things like AlphaGo and whatnot, right?
Obviously the growth of computing has enabled deep learning architectures to do things that the same technology, which has been around since the 80s, was unable to do with the previous generation of computers. I think it is unknown where the next cliff is, how far this particular round of AI can take us. I think it is clear, though, that at a certain compute density we can do some really amazing stuff.
I think in the authentication space there are examples of statistical machinery, whether using deep learning or more conventional technology, for risk analysis and continuous authentication; these technologies have been floating around for a while.
I think there are practical deployment problems, some of which are the ones we were talking about earlier, around synchronizing state between parties, when you do this, and all that kind of stuff. Continuous risk analysis using machine learning, at least for driving login challenges, is the status quo today with the major identity providers.
That technology will only get better. From my point of view, the more interesting future direction is this: how do we hook that technology in at more places during the session, doing session risk and those kinds of things? I think there will be a progression in the precision of the classifiers as the machine learning gets better.
There is also just an infrastructure and engineering problem, which is how you inject those classification decisions at the right point in this distributed, federated identity architecture. That problem is independent of the technology you are using to do the risk analysis. I think they are both really challenging problems.
One is a computer science research problem and the other is an engineering problem. If we can crack both of them, then we can do some really cool stuff.
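The split Grant describes, a classifier producing a risk score and an enforcement point acting on it, can be sketched with a toy scorer. The signal names, weights, and thresholds below are purely illustrative and not any provider's actual model:

```python
def risk_score(signals: dict) -> float:
    """Toy additive scorer; a real system would use a trained model."""
    weights = {                       # illustrative weights, not real values
        "new_device": 0.4,
        "impossible_travel": 0.5,
        "anonymizing_proxy": 0.3,
    }
    raw = sum(w for name, w in weights.items() if signals.get(name))
    return min(1.0, raw)

def decide(score: float) -> str:
    """Policy enforcement point: where the classifier's output is injected."""
    if score >= 0.7:
        return "deny"
    if score >= 0.3:
        return "challenge"            # e.g. step up to a second factor
    return "allow"

# A clean login sails through; risky signals trigger step-up or denial.
assert decide(risk_score({})) == "allow"
assert decide(risk_score({"new_device": True})) == "challenge"
assert decide(risk_score({"new_device": True, "impossible_travel": True})) == "deny"
```

The engineering problem Grant raises is visible even here: `decide` has to run at the identity provider during login today, but running it mid-session means the relying parties also need a channel to feed signals in and receive decisions back.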
John: One of the first things that would be nice would be to actually do single logout. The next stage: I did a spec at the OpenID Foundation, which hasn’t really been picked up, on distributed resource management, which starts that sort of bidirectional feed between the things you are doing at the resources, back up to something that could feed into your personal data.
Some agent could look over that and go, “Oh, wait a minute, he normally doesn’t do that at that time,” and provide some of the analytics. If you don’t have the information coming in, it is hard to actually do anything with deep learning. It also presents some privacy issues: in a perfectly privacy-preserving world of software and identity, there wouldn’t be any of those connections to provide that information for your agent to use. It is part of a trade-off. Ideally, we could come up with something that was perfectly privacy-preserving but still centralized all of your information in a way you could use. That’s probably not coming any time in the next year.
Karl: Any last question, we’ve got a minute left here, is there one more here?
Audience: Do we know when we’ll get support for token binding in the major browsers, and in things like Apache and such?
John: The document shepherd, which is me, is trying to get it through IESG review at the moment. That’s going to be happening; some vendors are holding off until there’s an actual RFC number against the spec. What I can say is that it’s currently behind a feature flag in Chrome, and Google’s front-end servers have it turned on.
It is currently turned on by default in Edge and IE on Windows 10, but Microsoft’s servers are behind a feature flag. When that happened, more or less simultaneously all of the Edge browsers started doing token binding to Google, and it actually worked; it didn’t blow up. The other browser vendors are in progress, but it is really just a matter of a few months before both Chrome and Edge have it turned on.
It is built into Windows 10, which is why it’s compatible with IE. There are libraries; we’ve been thinking of including it in the AppAuth stack for both Android and iOS. There are TLS libraries that support it. It’s no more than a year out, but platform support comes along slowly. One of the biggest roadblocks would be getting Oracle to update Java, where ironically one part of Oracle is demanding that it be updated.
They really want to use token binding, and another part of Oracle doesn’t care. There are complicated legal issues with Java, as some people on this stage would probably agree; it’s hard to wrangle, but we’re making progress. Google helped nginx do a token binding module, so it is available there. There are modules for Apache, and the Apache OpenID Connect client already supports token binding. It’s out there, it’s coming along.
Grant: I will also throw in OpenSSL, right? There are a couple of OpenSSL-level features that are required that are not in older versions, and OpenSSL versions going back decades are out there in the wild at this point. You need to be past a certain point on the OpenSSL curve, and I think in a couple more years a lot more people will be past that point. Over time, I hope that Python, Go, Ruby, these kinds of runtimes, will also pick it up after it gets standardized.
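The property being described here, a token tied to a client-held key rather than usable by any bearer, can be sketched at the application layer: the token embeds a hash of the client's public key (a "cnf"-style confirmation claim, in the spirit of RFC 7800), and the server checks it against the key the channel actually proved possession of. All names below are illustrative; real token binding operates at the TLS layer.

```python
import base64
import hashlib

def b64url(data: bytes) -> str:
    """Unpadded base64url, as used for key thumbprints."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def thumbprint(public_key: bytes) -> str:
    """SHA-256 hash of the key material, base64url-encoded."""
    return b64url(hashlib.sha256(public_key).digest())

def issue_token(user: str, public_key: bytes) -> dict:
    # Bind the token to the key by embedding its hash as a
    # confirmation ('cnf'-style) claim.
    return {"sub": user, "cnf": {"key_thumbprint": thumbprint(public_key)}}

def verify_binding(token: dict, presented_key: bytes) -> bool:
    # The token is only honored on a channel that proved possession
    # of the same key it was bound to at issuance.
    return token["cnf"]["key_thumbprint"] == thumbprint(presented_key)

alice_key = b"alice-public-key-bytes"   # stand-in for real public key material
token = issue_token("alice", alice_key)
assert verify_binding(token, alice_key)            # same channel: accepted
assert not verify_binding(token, b"attacker-key")  # exported token: rejected
```

This is what gives the "X.509-level security with cookies" effect Grant mentions next: a stolen cookie or token is useless on a connection that cannot prove possession of the bound key.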
It’s great; it is one of these things that is just a little bit too early right now. When you think about how the world fits together with token binding, you can truly do X.509-level security with cookies. Think about the impact that has on user experience. If any of you has ever been a government employee, probably no one has, but you get a little PIV card.
It has a client certificate on it, it has your name in it three times, and a dialog pops up and says, would you like to sign in as Grant W Dasher, or Grant W Dasher, or Grant W Dasher? You have to pick the third one, otherwise it doesn’t work. It’s great. Token binding will be worth the wait.
John: The FIDO standards use token binding, which then feeds into OpenID Connect, which also uses token binding, so you can have a high-security government deployment using the new standards and something based on, or derived from, PIV cards. Hopefully that will also allow cross-agency authentication to happen in a reasonable way beyond passwords.
Karl: I thank my panelists for speaking today and thank you guys for coming.
Fireside chat with industry experts on the future of identity, security, and access.