
Oktane18: Okta and AWS -- Tips and Tricks


Adam Fitzgerald: This is a disclaimer slide. They made me put it in. So, read it carefully. I'm going to talk today at probably a much higher level than might be indicated by "tips and tricks." I'm going to talk a little bit about the architectural journey that we're seeing customers have. But we do have some subject matter experts on site here from both AWS and Okta. So, when it gets to Q&A or post-session, if you want to really dig into some technical details, we're happy to do it. My name is Adam Fitzgerald. I've been at AWS for nearly five years. I run technical evangelism, developer marketing, startup marketing, and a couple of other things. And if you want to tweet at me, you can tweet at me right here. Let me just get a bit of understanding. Hopefully you guys might have heard of AWS. We've got a couple of computers. But I'd like to get an understanding of how many of you guys are actually AWS customers and actually using AWS. Well, thank you very much. It's very humbling to see so many hands.

I won't spend too much time explaining what AWS is. We've been in a very fortunate situation where the service that AWS provides has found use across a broad collection of industries. So, whether you're a startup customer, like many of these startup customers, the economics of cloud computing really make a lot of sense for startups. You have the ability to start with low capital expenditure, the ability to scale effortlessly with your business, the ability to stand on the shoulders of giants and use services that might previously have only been available to enterprise businesses. But they're also really valuable for our enterprise customers. Enterprises use AWS to reduce their total IT spend, become more efficient in their usage, and concentrate on things that make a difference to their business, as opposed to doing all the cruft and heavy lifting of managing IT that isn't differentiating for them. And the same is true whether you're in the private sector or in the public sector. Increasingly we're seeing large government bodies and organizations, as well as nonprofits and education institutions, all moving towards the cloud and adopting AWS.

So, there's lots of great value from AWS, and we've been fortunate to have customers like yourselves take advantage of our services. There's been a whole collection of changes at AWS, and maybe this is something that you see every day: this is the AWS console. So, there are a lot of things in AWS now. We now have over 125 different technical services to help you solve your technical problems and build intelligent solutions that help your business. All the way from the basics like storage, compute and networking, to advanced capabilities in analytics and machine learning, as well as data storage and isolation.

So, there's lots of things you can do with AWS. What I thought I'd try to do in today's talk is, first of all, talk a little bit about how we think about security, how we think about those services that you might be interested in using, and how you might protect those for your internal development teams using Okta. And then, more generally, talk about the journey that we see a lot of our customers going through when it comes to thinking about the architecture of the applications that are customer-facing. So, let's get started. Let's talk about the internal stuff first. Well actually, let's talk about some generalities first. AWS takes security very seriously. It's our top priority for our customers, and we take care of the security of our data centers: the physical security, the network security, the security of our systems, and the security of the people that are involved in interacting with our systems. We take care of all the security of the physical hardware and the networks, all the way up to the hypervisor level. That's AWS's responsibility, and it's our commitment to you as a customer that we're taking security seriously.

But security is a shared responsibility with our customers, and you need to take responsibility from your side for making sure you're doing the right things with the applications, the solutions, and the business-critical pieces that you're putting together to help your company. And we try to give our customers as many services as possible, whether that's the ability to encrypt your data in transit or at rest, to provide permission and access control, to monitor network traffic, protection against denial of service attacks, or firewall controls. There are all these different AWS services just for security, as well as audit mechanisms for tracking your security profile. But a lot of that is the responsibility of the customer, to work out what pieces matter most for their application. Maybe they've got a regulatory body they have to satisfy, maybe they've got a collection of controls they have to be responsible for, maybe there's something they're trying to do about information sharing that requires replication across geographic regions that have different data controls. AWS can't possibly understand all those different concerns on our customers' behalf, so we work very closely with our customers to try to make sure we meet their security profiles. And Okta's an important part of that. Okta gives you the ability to provide that secure, controlled access to your platform's applications through identity and access management mechanisms. In fact, Okta is certified as an AWS security competency partner. And this is actually a very high bar for evaluating partners' security profiles on our partner network. We don't just look at their architecture and say, "Are they meeting our standards?" We actually take a customer journey through Okta's systems to make sure we understand what kind of exposure our customers will have using Okta as a service. And I can say that Okta has passed that at the highest level and has the certification from AWS as a trusted security partner.

And there's lots of really great reasons for that. First of all, they make a great product. Second of all, you guys are using Okta yourselves; you understand what sort of value it provides to your business. But another thing that's really important from AWS's perspective is that Okta themselves are actually an AWS customer. That gives us great understanding of how they've built security into their own systems in order to protect you, and of what their security practices are. Okta runs on AWS, using EC2, S3, VPCs, CloudFront, Lambda, API Gateway, as well as other services, in order to build their solution for you. And it's a great example of how they've been able to scale across multiple availability zones and multiple regions to provide a global presence for you and your business, and a solution that's built on top of AWS. So, a great customer and a great partner for us. So let's move on to that idea about what it looks like for your internal teams if they're looking to use AWS. Maybe you've got somebody in your DevOps team, maybe in your IT management team, maybe one of your actual developers. And they want to store things in S3, or write things to a Kinesis stream, or access something on an API Gateway endpoint. And you're tasked with the responsibility of, "How do I make sure that the identity of the developer in my organization is the right identity for accessing these services?" Well, Okta's got a great solution: the Okta Cloud Connect for AWS solution, which provides secure access to the console. It gives you the ability to make a request, do your authentication through Okta against whatever your identity provider is, pass a token back to the developer, and then that token allows you to access the AWS services.

This works in the console, and there's an extension that allows you to do this through the AWS CLI, which gives you the ability to federate your identity there. And then also, there are certain services that AWS has, like Redshift, where you might want end-user access for making queries that's different from the administrator access for configuring your cluster, for example. And Okta's actually got a solution for that as well. I would encourage you guys to check out the developer portal for Okta, which has explanations and examples of some of these. So, this is a great way to think about using a solution from Okta to be the provider that brokers access to your underlying systems. But that's very internally focused. And what I want to do now, and frankly this is not exactly my area of expertise, is pivot towards: what does it look like for you when you think about the architecture of your application, the architecture of your solution, if you were going to write it on AWS, and what does that mean in terms of identity access? What happens in those situations?
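As a rough sketch of what that CLI federation looks like under the hood: the broker exchanges the SAML assertion from your IdP for temporary AWS credentials via STS. This is a hedged illustration, not Okta's actual implementation; the ARNs are placeholders, and in practice the Okta extension handles the login and assertion retrieval for you.

```python
def extract_role_pairs(saml_roles):
    """Parse the comma-separated 'role ARN,provider ARN' pairs that an
    IdP like Okta places in the SAML assertion's Role attribute."""
    pairs = []
    for entry in saml_roles:
        role_arn, provider_arn = [part.strip() for part in entry.split(",")]
        pairs.append((role_arn, provider_arn))
    return pairs


def assume_role_with_saml(assertion_b64, role_arn, provider_arn):
    """Trade a base64-encoded SAML assertion for temporary credentials."""
    import boto3  # AWS SDK; only needed when actually calling STS

    sts = boto3.client("sts")
    resp = sts.assume_role_with_saml(
        RoleArn=role_arn,
        PrincipalArn=provider_arn,
        SAMLAssertion=assertion_b64,
        DurationSeconds=3600,
    )
    # Contains AccessKeyId, SecretAccessKey, SessionToken, Expiration
    return resp["Credentials"]
```

The temporary credentials can then be written into the AWS CLI's credentials file, which is essentially what the federation extension does on your behalf.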

So, I'm going to steal a collection of slides from my friend Adrian Cockcroft. He works at AWS; he used to be the Chief Cloud Architect at Netflix, and he transformed Netflix into a cloud-first streaming delivery solution from a technical perspective. So, my apologies to Adrian for mangling his slides, and my thanks for letting me borrow them. Okay, so let's talk about the evolution of business logic, from a monolithic application into a service component architecture, all the way down to a function architecture. And we see this complete collection of architectures across the AWS customer base. There are lots of large organizations that have traditional applications that are monolithic, that they can't move or don't have the resources to break up and change. There are people that are already on their way to fully fledged microservice architectures, and then others that are on the cutting edge of fully serverless models using the functions that are available from AWS. So, let's walk through those in some detail. When I think about a monolith, I'm thinking about application structure from ten years ago, maybe longer, when I was actually hands-on-keyboard coding things up. When you think about a monolithic application, quite often you think about things that might be broken up into a couple of tiers inside the application, all inside one application server. Maybe you've got a presentation tier or a UI tier of some kind. You've got your business logic encoded in a collection of objects that have relationships with each other. Some of those objects themselves have a relationship with information that needs to be persisted, whether it's in a relational store or an enterprise information system, and there's some kind of transformation layer, an object-relational mapping tool, that maps your business objects to some persistent store.
Quite often, when you're buying solutions from companies that you're installing on-prem or maybe even in the cloud, you're actually buying something that looks like this monolith. You're buying a packaged solution that you go run. Whether it's your HR system, your internal review system, or your inventory tracking system, that's what your monolith looks like. It's not to say that the technology ten or 15 years ago didn't exist that would have allowed you to run this outside a single virtual machine or a single piece of hardware. A lot of these systems were designed with remote access capabilities, so whether it was with EJBs or .NET or CORBA or whatever your system was, they were actually designed with the idea that you'd be able to call from one object to another object running on a completely different machine.

The practicality of it was that that was way too much overhead for what was available in terms of computer networking at the time, and so everyone was forced into these situations where the application itself was this big tangle of code, all in one deployable unit. This matched up with development practices at the time, which were sort of waterfall-based, and maybe you got one release every twelve months, if you were lucky. That's the way that the application looked. From an identity perspective, what did that mean? Well, it meant that when somebody accessed your system and made a request, they wound up going through a dedicated part of the user interaction that was for collecting credentials from them, in some kind of way. Those credentials were passed to some kind of business object, usually something like an identity manager or some kind of permissions control, that usually delegated its responsibilities to some kind of internal configuration information, which could be like 57 pages of XML, if you were lucky, right?

That would actually be brokering out to the real identity system that you were talking to, whether it's Active Directory, LDAP, or in the early days, just storing users in a database. So, what's the problem with this? From an identity perspective, it's that identity logic is now pervasive throughout all the different parts of your system. It's part of your configuration, it's part of your business logic, it's part of your UI tier. If you want to go ahead and adapt your identity management, or you want to add OAuth capability, or you want to add support for a different language in your identities, those things are really, really hard to do. You've got to go rebuild your application and redeploy your application; you've got to worry about downtime and migration. All that intermingling of that capability inside the application is really painful.

So, it's been pretty clear that the monolith isn't actually serving the needs that we want, and it's not adaptable. It's not flexible. It doesn't move with the speed of business. So that leads a lot of people to think about, "How do we go ahead and break apart the traditional monolith into something different?" Ten years ago, we were kind of handicapped. The way that information was exchanged between big systems was XML, SOAP packets, distributed over multiple machines. The translation alone of on-the-wire information into something that could be used in memory was painful. The network speeds weren't fast enough. The compute usage for doing all that marshalling and unmarshalling was so costly, you weren't going to be spending much of your compute time actually doing the real business work. So splitting up a monolith was really, really hard. It didn't feel practical. So everybody instead started using scale-up capabilities: "I'll put more CPU, I'll put more RAM, on that box that runs that critical system." But, as CPU speeds increased, and as network capabilities inside your data center and inside the cloud improved, things became a lot easier.

There was also a big paradigm shift. The information that was being exchanged between layers stopped being exchanged in these heavyweight formats and moved to much more lightweight formats: JSON packets, or binary-encoded packets, distributed via REST interfaces between services. The move to this model made data exchange much more straightforward, and it had a real result. If you take the increase in network speeds, the increase in CPU power, and the decrease in packet overhead, Adrian told me that at Netflix, they estimated that in the span of about five years, they got almost a thousandfold increase in the speed with which information could be exchanged. So now the idea that everything had to live inside one machine made even less sense than it did before. So we should start thinking about taking these pieces of the monolith and breaking them up into their constituent parts.

Thinking about them means saying: okay, that login flow, that identity flow? That should be its own service. That information I had about a customer account? That should be its own service. The workflow process for sending messages to customers should be its own service. So, I can now talk about breaking up this monolith into service components, and I start thinking about a service architecture. When I think about a service architecture, each of these units is now dedicated to a collection of tasks that represents some kind of responsibility to the business or to the application, and they get connected together in a mesh that describes what the structure of the application is. This becomes a much more reasonable way to think about your application, and, as an additional benefit, you can actually scale each of these service components independently. So, if you run them separately from one another, one that has to deal with a lot more workload will scale independently from the others, you can have them recover from disasters or failures much more effectively, and you can also iterate on them more quickly.

So, this changes the way that you think about your development model, and this is entirely tied in with the move to continuous delivery and agile development that's happened in software delivery. So, five years ago, people were starting to move towards this microservice model, and at the time, in very early versions of this at Netflix, they actually did it using standardized Amazon Machine Images for particular runtimes. So they would say, "Here's the baked AMI with the Java runtime. Have at it. Go knock yourself out. Put anything you want on top of that JVM." The development team used that as their deployable unit. That was their unit of deployment. In the interim, there have been a lot of great developments beyond just that, and the rise of containers has really accelerated this move towards microservice architectures. The idea that you can say, "I've got a container environment that I run on my developer desktop, and it looks just like what's going to happen in production, but in production, it's just going to be run across multiple machines and scaled independently for me." That's been of incredible value to developers and to the speed with which they do development.

So, the container side and the move to virtualization accelerated microservices, and this has become a very, very popular model now for thinking about development. Much more discrete units that have their own individual scalability and disaster recovery characteristics, graceful failover, and then faster agility in terms of their development cycles and deployment cycles. When you look at CI/CD, it's all about delivering solutions that look like this. This means that there's still a service mesh with lots of information flowing from each node to another node. When you think about this picture on the left, you're probably thinking, "Okay, what's the first thing we do?" Well, that's probably login, right? That's probably identity. That's probably working out who this person is before I can actually go make these requests against a collection of different systems. So ultimately, it'll cascade over to some point on the right-hand side where you're talking about persistence or caching in some kind of way. Like I said, a datastore service, maybe some other kind of mechanism on the right-hand side. Once you start thinking about these services, you can start thinking, "Okay, well what are the things that really matter to my business? What are the ones that actually encode the business logic, and which of the rest of them are infrastructure?" Well, it's usually the stuff right in the middle that is the business logic, and less about the actual infrastructure. If you're going to do identity, you might think about replacing that very first service there with some kind of best-in-class identity service, and Okta's a great solution for that.

Once you start thinking about it, thinking that the business logic's in the middle: why should I be writing a database service? Why should I be writing an SMS service? Why should I be writing a streaming service? Why should I be writing an object store? Why should I be writing the things that are infrastructure instead of business? You start having the opportunity to think, "Well, maybe I should just select the best-in-breed solution for that infrastructure topic." And that's where AWS has seen a lot of adoption. When you start thinking about infrastructure as a service, and you start thinking about all the different things you can do, you don't want your development team spending their time writing another message queue, right?

There are SaaS message queues. Some of them are pretty good. Some of them can be managed for you, so why don't you use SQS, or a managed version of the message queue you're interested in, instead of having somebody invent something or build something that doesn't differentiate your business? Similarly for datastores, with DynamoDB, and for streaming, with Amazon Kinesis. By moving to these models, you concentrate your services on the business logic that matters to your company; that business logic becomes the glue between the infrastructure services. The next step is to look more closely at that business logic, and to say, "Is that the granularity I want?" With the introduction of AWS Lambda several years ago, and the rise of this concept of functional programming, of function execution as a service, that started to really change the way that companies are thinking about business logic.
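To make the "don't write your own queue" point concrete, here's a minimal sketch of putting a message on SQS instead. The queue URL, message shape, and function names are illustrative assumptions, not part of any product described in the talk.

```python
import json


def make_order_message(order_id, quantity):
    """Build the JSON body we'd place on the queue."""
    return json.dumps({"order_id": order_id, "quantity": quantity})


def enqueue_order(queue_url, order_id, quantity):
    """Send the message to SQS; AWS handles durability and scaling."""
    import boto3  # AWS SDK; only needed when actually sending

    sqs = boto3.client("sqs")
    return sqs.send_message(
        QueueUrl=queue_url,
        MessageBody=make_order_message(order_id, quantity),
    )
```

The point of the sketch is what's missing: no broker to run, no persistence layer to write, no scaling logic, just a payload and a send call.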

Some of the most forward-thinking enterprises and startups out there are now moving to a serverless model, where instead of having the collection of capabilities inside a service, we're thinking about breaking those things up into even more granular components: individual functions. Those individual functions themselves can be expressed as AWS Lambda functions. Who here has actually used AWS Lambda? All right, that's about a third of you guys, that's fantastic. So, Lambda, for those of you who don't know, is a mechanism from AWS that allows you to write a function in a collection of different languages: JavaScript, Go, Python, .NET and C#, Java. Did I say Java? Then we'll take care of the execution for you, and you're only charged for the compute time and the memory usage while that function runs. So, this is great. If you've got a lot of execution, we'll take care of scaling everything up, starting multiple instances. If you don't have a lot, we wind them all down. When they're not running, you don't pay for them. Okay.
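For a sense of how small the deployable unit gets, here is a minimal Python Lambda handler. It assumes the API Gateway proxy event shape; the greeting logic is purely illustrative, not anything from the session.

```python
import json


def handler(event, context):
    """Entry point Lambda invokes; event and context are supplied by AWS."""
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

That single function, plus its dependencies, is the entire deployment artifact; there is no server process, framework, or container image you have to manage yourself.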

And this granularity, moving to just the per-function level, has actually greatly simplified development. It's made the development models very, very small. And you can actually increase the speed with which you deliver new functionality and new capability by adding new functions to your service. It increases your architectural complexity a little bit. You ought to think about chaining functions together in order to build something that actually matters, and you've got to start thinking of it as an event-based model, as opposed to a sequential request-response model. But it still works in a way that's very effective. And so a lot of our customers have been replacing their service architectures with these ephemeral functions, which give you a path of execution that allows you to do things. Whether that thing might be, say, making a request to put something on a queue, or making a request to store something in a database and stream something out to a log or some kind of streaming service.

Whether it is storing an object from a mobile user's phone and sending them some kind of SMS notice about the fact that that object's now been stored in the cloud for them. These workflows can now be expressed as very, very simple functions, these Lambda functions, that are chained together to make this application model and allow you to do a lot more. So, there are a lot of really great benefits to this, as I've mentioned: when nothing's happening, everything shuts down and you don't pay for it. But it also means that you have this inbuilt scalability, and the additional benefit is that there's nothing easier to manage, as an IT operator, than no servers at all. Okay? So, not managing the server, in a serverless model, is one of the great benefits.
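The "object stored, notify the user" workflow above could be sketched as a single Lambda triggered by an S3 put event that publishes an SMS through SNS. This is a hedged sketch: the phone number is a placeholder, and the event wiring (S3 → Lambda → SNS) is an assumed configuration for illustration.

```python
def parse_s3_records(event):
    """Pull (bucket, key) pairs out of an S3 trigger event."""
    return [
        (r["s3"]["bucket"]["name"], r["s3"]["object"]["key"])
        for r in event.get("Records", [])
    ]


def handler(event, context):
    """Notify the user by SMS for each object that landed in S3."""
    import boto3  # AWS SDK; only needed when actually publishing

    sns = boto3.client("sns")
    records = parse_s3_records(event)
    for bucket, key in records:
        sns.publish(
            PhoneNumber="+15555550100",  # placeholder destination
            Message=f"Your file {key} is now stored in {bucket}.",
        )
    return {"notified": len(records)}
```

Notice that the function itself is just the business step; the triggering, retries, and scaling all live in the event wiring around it.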

Now of course there's actually a server underneath; AWS is just managing and running them for you. But from your perspective, it's all about deploying functions. And so, when you get to this point and you start thinking, "Well, my business functions are now just this cascade of Lambda functions chained together, why would I spend any energy writing a login function, or an identity management function? Why wouldn't I select the best-in-breed solution for managing that part of my infrastructure and application instead of doing it myself?" Why wouldn't I go ahead and say, "I don't want to have my engineers spending their time puzzling over identity management or anything else. Why don't I use somebody else that's already got that stuff figured out?"

And that's where Okta fits in. This is a great partner for AWS; they've got a fantastic integration with AWS Lambda and API Gateway that allows you to exchange access tokens, so you have a workflow from your IdP through the chain of execution in Lambda. And there's great information available on their developer portal about this, so you can actually have an execution where Okta provides your identity, the workflow goes through AWS Lambda, does the write to S3, notifies the customer, and then completes the execution, all within this environment.
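One common way to wire that together is an API Gateway custom (Lambda) authorizer that checks the token before the rest of the chain runs. This is a hedged sketch, not Okta's documented integration: `verify_token` here is a stand-in, and a real implementation would validate the JWT's signature, expiry, audience and issuer against Okta's published keys, as their developer portal describes.

```python
def build_policy(principal_id, effect, method_arn):
    """Build the IAM policy document a custom authorizer must return."""
    return {
        "principalId": principal_id,
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": method_arn,
            }],
        },
    }


def verify_token(token):
    """Stand-in check: a real version would verify the JWT against
    Okta's JWKS keys and check expiry, audience, and issuer."""
    if token.startswith("Bearer "):
        return {"sub": "demo-user"}  # pretend the token checked out
    return None


def handler(event, context):
    """Authorizer entry point: allow or deny the API Gateway call."""
    claims = verify_token(event.get("authorizationToken", ""))
    if claims:
        return build_policy(claims["sub"], "Allow", event["methodArn"])
    return build_policy("anonymous", "Deny", event["methodArn"])
```

API Gateway caches the returned policy, so every downstream Lambda in the chain runs only for requests that already carried a valid identity.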

So, that's kind of the tour. It's kind of the latest stage in the architectural evolution that I see when I talk to customers. We have lots of customers that still have monolithic applications. We have lots of customers that are still purchasing monolithic applications from other providers and having to run them in their data center, or running them in their cloud. But those applications show their age. They show their inflexibility. They don't move at the speed of business. And that's what's been driving people towards these more flexible, more discrete component architectures, at the service level all the way down to the function level. And some of the most forward-thinking companies today are using AWS Lambda to build full suites of applications that comprise their business. Whether that is providers that deliver online training about cloud computing, or a financial services company using it for the backends of their mobile applications or mobile banking applications, or companies that are doing data-intensive workflows and using Lambda to trigger off events that happen with batch processing of large data sets. There are hundreds and hundreds of different use cases for AWS Lambda, and it's really become one of the fastest growing services at AWS.

Okay. So, there's a collection of resources that I can point you to that cover the things we talked about here. There's the AWS shared responsibility model, which describes the responsibility AWS has for understanding security and providing security on the AWS platform. There is Okta Cloud Connect for AWS, the solution we've talked about for your internal teams; it'll provide you login access and identity control into the AWS console, as well as the extensions for the CLI tool and for things like the Redshift cluster. There is a tutorial on the AWS website for breaking the monolith: how to go ahead and take an example of a traditional multi-tier application, subdivide it into a collection of subcomponents, and deploy them as containers on the EC2 Container Service. Here's a link to AWS Lambda, your starting place for understanding AWS Lambda. There's an online workshop that you can take that will walk you through getting up to speed with AWS Lambda and how to use it. And then I also want to point you towards the Okta developer center, which has great resources covering not just Okta Cloud Connect for AWS, but also information on how to make sure that you've got this coordination between Okta, API Gateway and AWS Lambda for your identity provider management. And that's about all the architecture stuff I'm going to bombard you with.

So, I want to pause now and take any questions you might have from the audience. I can't promise I'll understand the technical answers but I'm hoping some of the gentlemen here in the front will be willing to help me out. I'm also willing to stay afterwards and answer questions. So, if you haven't already, I'll remind you guys to take the survey and possibly win an Amazon gift card. Fantastic. Okay? Yeah, what's your question?

Audience 1: You mentioned Okta access for Redshift, do you have the same integration for RDS?

Speaker 1: Yeah, I can take that question. I manage the Okta AWS alliance. So, right now for RDS, I think that's something we'll have to look into a little bit further. Redshift was the first request that we got, so our engineering teams have been working with a couple of customers and really perfecting that. But I'll take that as a takeaway and then see how quickly we can get that out.

Audience 2: So the biggest pushback I've heard about using AWS Lambda, or just AWS-specific functions, is that you get locked into AWS, right? So, in the future, if AWS ticks us off, what do we do, right? So, how do you reply to that?

Adam Fitzgerald: Sure. So when you're making a technical choice, you want to understand what your opportunities are for execution yourself. So, we understand those concerns. In the functions that you are writing, you're not writing anything that's AWS-specific code; it's still your code. The execution environment is just the environment that AWS runs. So if you want to, there are ways for you to go ahead and think about, "Are there other ways that I can execute this?" You can always move those functions to a shared cluster of containers that you run yourself, based on a collection of triggered events.

There are also opportunities for thinking about other providers that have function-as-a-service offerings that could do something similar. There's a local mode that we provide for developers to run and test Lambda functions locally on their desktop. I'm not recommending anybody use that in production. But it's a common question that people have, and the truth is that you're not actually locking in anything. You're writing a solution that provides some connective tissue between the different parts of your application. You always have the option to take that and move it to any other provider or any other platform you're interested in. But what you're doing is deferring a large amount of undifferentiated heavy lifting to somebody that's going to do that for you. It's the same as saying, "I've got a database, and I'd rather have someone else manage my backup automation, my patching and my migration than do it myself all the time." So it's the same sort of question: what do you value in your business? Do you value being able to iterate faster, build business solutions that matter to the business, and solve problems for your customers? Or do you want to worry about an existential threat, that maybe something's going to be a problem in a million years that you don't know about? We hear those questions all the time. We worry about them all the time. Okay?

Speaker 2: Any other questions?

Adam Fitzgerald: Alright. Well, thank you guys for attending, and thanks for your questions. We're happy to take more here up at the front. I hope you have a great week here in Las Vegas. Thank you.

Adam Fitzgerald, Head, Worldwide Developer Marketing, Amazon Web Services

By combining Okta and AWS, organizations can provide a seamless end-user experience to their customers with scalability and resilience. In this session, watch as AWS shows how to integrate Okta as the primary authentication source for AWS.
