CarahCast: Podcasts on Technology in the Public Sector

Build, Deliver and Improve Applications Faster Using AWS

Episode Summary

In this podcast, you’ll take back key action items to your team to accelerate the process of building, launching, and automating secure applications on AWS.

Episode Transcription

Speaker 1: On behalf of AWS and Carahsoft, we would like to welcome you to today's podcast, focused on building, delivering, and improving applications faster using AWS, where Megan McElroy, Public Sector Partner Sales Engineer at AWS, will discuss the tools and skills your team needs to automate your DevOps cycle, deployments, and monitoring implementations.

Megan McElroy: Hi everyone. Thank you so much for joining today. My name is Megan McElroy and I work on the public sector partner team dedicated to distribution at AWS as a sales engineer. It's a long title. What that means is I essentially get to work with different distributors, different partners, and ultimately end customers, such as those of you who have joined today, to truly understand the complex requirements that you might have when looking at cloud technology and how you can optimize, and really translate those into use cases to see how AWS and Carahsoft can help you.

My goal today is really to educate you all on DevOps, the role it plays for organizations who are really looking to efficiently build applications and tie that into how AWS can help. I also want to hit on the security aspect as well and explore some of the key tools that AWS offers to achieve synergy between security and DevOps.

Looking at DevOps from a bird's eye view, what it really is, is the combination of different cultural philosophies, practices, and tools, anything that really increases an organization's ability to deliver applications and services at high velocity. What we're really looking at is collaboration between different teams. Typically that's going to be your development team from a technical point of view and your operations team, and really collaborating between those two to make sure that things aren't working against each other, that things are in harmony, and that goals are being achieved across the business.

We're also looking to do automation where we can. As we dive into DevOps, there are a lot of manual processes that require human intervention if you haven't optimized with the correct services. We'll take a look at automation and how that can be woven in.

And then speed. In every process of building an application and getting something delivered, speed is crucial. Sometimes it's working for you, sometimes it's working against you, but what I really hope to show you is a way in which you can evolve and improve the products that you have at a faster pace. That is really going to be crucial for organizations who are looking to switch up the traditional software development and infrastructure management processes that they have. Maybe these processes have been in place for a long time, years, decades, and we really want to hone in on that speed, because speed is really going to be what enables organizations to better serve their customers and compete more effectively in the market.

Under a DevOps model, development and operations teams are no longer siloed. We see this collaboration, and sometimes these two teams are merged into a single team, where an engineer might work across the entire application life cycle, from development and test all the way out to deployment and operations, really developing a range of skills not necessarily limited to a single function. In other DevOps models, quality assurance and security teams may also become more tightly integrated with development and operations. I'll get to it in future slides, but when we're coupling in security, and that's the focus for everyone on that DevOps team, that's typically going to be referred to as DevSecOps. I'll bounce between those terms throughout the presentation, but I'm really looking at DevOps and weaving that aspect in.

Looking at the two teams, a lot of teams use practices to automate processes that have historically been pretty manual and pretty slow, and so we really want to look at that automation piece as well. These processes might have been using certain tools and requiring certain intervention from different teams, but if you're able to use the technology stack and tooling to operate and evolve applications, you're really going to see quick, reliable results that you can build on.

The tools that you choose within your DevOps practice, the ones you use to build your code, deploy it, and run through that whole pipeline, are really going to help engineers independently accomplish different tasks. Maybe it's deploying code, maybe it's provisioning infrastructure, but they'll be able to accomplish those tasks themselves instead of requiring help from other teams, and this will free them up to work on those projects and keep the team focused on the ultimate goal of, "I need to build these applications and I need to build them fast for my organization."

Looking at organizations and examining what is sometimes keeping them from moving as quickly as possible to deliver applications: it can be a whole host of things. It's not always just going to be the possible competing forces between two sides of the business coin. While certain applications might be a priority for the business as a whole, it's really imperative to understand: are there any roadblocks? Are there conflicting goals? What is keeping the development and operations teams from delivering these applications?

There's the push and pull of needing to build at speed, but also needing a stable environment. There are ongoing business operations; we can't just drop those because we want to build something quickly. Sometimes those can be opposing forces. I hope to show you today some of the tools that we can use to remedy that and ensure that everyone has something to stay on the same page.

If we then throw security into the mix, you'll really see the need for uniformity, some sort of planning, and really efficient services. You're now working across a larger swath of teams and incorporating security into your DevOps practice. That's going to be so beneficial on so many levels. The ability to integrate and automate sets up a pipeline that has security controls that are not only preventative, but also able to detect things as they're happening and be responsive. Having those guardrails in place is really important.

This also ties into the AWS Well-Architected Framework. That framework is something that helps cloud architects, or individuals who are building, create the most secure, high-performing, resilient, and efficient infrastructure possible for their applications. This is something that internally we really stand behind. It's a framework that we hope our partners and customers who are adopting AWS take into account too, because when you're architecting technology solutions on AWS, there are five pillars. If you neglect one of them, it can become a bit of a challenge to build a system that really delivers on the expectations, the requirements, and the standard that your organization is going to set. And security is going to be one of those pillars.

So along with security, you have operational excellence, reliability, performance efficiency, and cost optimization. I think those are things that a lot of architects and teams think about and say, "Yeah, I want to build that into my application. I want to make that a practice." But really ensure that you're acting on that, and that you're thinking of the different ways that you can weave in security and weave in operational excellence. It's really going to make a difference for those applications throughout the process, and at the end as well.

Why AWS for DevOps? AWS works to provide a set of flexible services designed to enable different companies to more rapidly and reliably build and deliver different products. Speed and reliability are key AWS tenets, and they really apply to DevOps practices as well. With the services that you can use from AWS, you can simplify provisioning and managing infrastructure, deploying application code, and automating software release processes. And then ultimately, after you've built that application: how can I monitor this? How can I monitor my application and the performance of my infrastructure? Really ensure that you have metrics in mind as you're building the application out, what you see as a measure for success. AWS has some services that I'll touch on later that you can then use to track and monitor things, and to constantly go back and ensure that you're meeting the success criteria that you set for yourself.

I want to touch on each of these just because I think they're really important. They set a foundation not only for DevOps, but also just... Maybe you've heard of AWS, maybe you're using it, but how does this apply to me if I'm really focused in on DevOps for my team? Get started fast. This is a key tenet of AWS as a whole, but each AWS service is really ready for you to use as long as you're able to open an account. That's something the Carahsoft team can help with, something that we can walk you through. But once you have an account, you don't have to install any software. There's no long setup. The services are very user-friendly and easy to get started with. You're not really having that barrier to entry in terms of, "Hey, where can I begin? How can I get through this?"

Fully managed services. These services that we offer for DevOps really help you take advantage of AWS resources very quickly. You're not worrying about setting things up, and you're not installing or operating infrastructure on your own. Allowing AWS to take on that lift really lets you focus on your core product. Built for scale. Scalability is a key tenet of the AWS cloud. If you're just using one instance, that's great, you can manage that. But let's say you need to scale to a thousand using our services. That's absolutely possible, and it's quick and easy to do. The services that we offer really help you make the most of flexible compute resources by simplifying the provisioning process, the configuration, and then ultimately the scaling. We try to plan for so many different scenarios and build that into how we view our services, and ultimately how we view your requirements as customers, so having the ability to scale up or scale down is game-changing in terms of the flexibility it offers.

It's also programmable. You're able to use each service through our command line interface, or it can be through APIs or SDKs. It's pretty flexible there, but you can also model and provision AWS resources, and honestly your entire AWS infrastructure, using a tool called CloudFormation. Infrastructure can be tricky, especially if you're coming from mainly being on premises and moving over to the cloud. Infrastructure is going to be key, but so is really getting it right. With CloudFormation, you're able to essentially create a template that declares all of the resources, services, and parameters that you need in order to run your infrastructure. You deploy that template, it sets everything up, and then you can manage that as needed. It's repeatable, it's declarative. You can go in and see exactly what you're doing, and it's easy to detect bugs as well. That's just a quick plug for CloudFormation. I think it's a fantastic tool if you haven't checked it out yet.
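To make the CloudFormation idea concrete, here's a minimal sketch of a declarative template, built as a Python dictionary purely for illustration. The logical ID "AppBucket" and the bucket name are hypothetical placeholders; in practice you'd write the template in YAML or JSON and hand it to CloudFormation.

```python
import json

# Illustrative sketch only: a minimal CloudFormation-style template
# declaring a single S3 bucket. Resource names are made up.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Minimal example: one S3 bucket declared as code.",
    "Resources": {
        "AppBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "my-example-app-bucket"},
        }
    },
}

# Serializing to JSON gives the artifact you would deploy (via the
# console, CLI, or an SDK) to have CloudFormation provision the stack.
template_json = json.dumps(template, indent=2)
print(template_json)
```

Because the whole stack is declared in one document like this, it can be version-controlled, reviewed, and redeployed repeatably, which is the point being made above.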

Let's hit on automation too. Automation is going to be really key during the DevOps process itself, and AWS is really going to help you use automation so you can build faster and more efficiently. We don't want to weigh you down. We don't want you to spend time wrangling things or trying to figure out what to do with all of these processes and tasks. We want to help you automate those so that you can really focus on your deployments. You can focus on how you want to do your dev and test workflows, and ultimately how you want to configure everything and manage that in turn. Automation is hopefully going to add to the speed at which you're hoping to run.

Security is going to be built into a lot of our tools, and we use Identity and Access Management, or IAM, the acronym you might see more commonly, for user permissions and policies. It gives you granular control over who can access your resources. How are they accessing them? What are they doing with those resources? That's going to be important in any company, in any vertical, but especially in the public sector. You have the ability to apply different security controls and really make access as limited as you need to, based on the requirements that you have. That really bakes security into your application as you go through this process.
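As a sketch of what that granular control looks like, here's a minimal IAM-style policy document, built in Python for illustration. The bucket ARN is a hypothetical placeholder; the `Version`, `Statement`, `Effect`, `Action`, and `Resource` fields follow IAM's published JSON policy grammar.

```python
import json

# Illustrative sketch: an IAM-style policy granting read-only access
# to a single (hypothetical) S3 bucket and nothing else.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::my-example-app-bucket",
                "arn:aws:s3:::my-example-app-bucket/*",
            ],
        }
    ],
}

policy_json = json.dumps(policy, indent=2)
print(policy_json)
```

Scoping the `Action` and `Resource` lists this tightly is the "as limited as you need to" idea: the policy says exactly who can do what, to which resources.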

All right. Last two. Large partner ecosystem. What's great about working with Carahsoft is they have a ton of great partners that they work with within the public sector who have expertise on different topics. That could be migration, it could be DevOps, it could be the federal vertical in general. But we, AWS and Carahsoft, support this ecosystem of partners to help you integrate any tools you might have. How can we help you with your end-to-end solution? How can we train you? If that's of interest, please let us know. We love our partners, and that's really how we help you grow and scale as well.

And then pay as you go. This is an AWS tenet for all services: you use them as needed and you only pay for what you use. There are no termination fees. We also have a free tier that helps you get started on AWS too, depending on the resources that you look at, and then after you're done with them, you terminate them. You're not racking up fees right away, and you're able to get started quickly and pay as you go.

Looping all of these eight tenets together, it makes getting started with DevOps easy. Easy to a degree. It takes away that lift that can sometimes be involved with some of the pipeline tools, some of the different source tools, or the tools that you'll need to monitor your deployments afterward. I hope you're able to see the benefit of AWS for DevOps as a whole.

Now that we've looked at AWS for DevOps, let's look further into those best practices and answer why AWS for DevSecOps. I want to go through each of these: the CI/CD aspect, microservices, infrastructure as code, and then logging and monitoring as well. We will go into these specifically.

Continuous integration. Continuous integration is going to be a software development practice where developers regularly merge their code changes into a central repository, after which automated builds and tests are run.

This is great because you're really able to find and address any bugs before they make an impact. You're able to address those bugs quickly, improve the software quality, and reduce the time it ultimately takes to validate and release new software updates. It depends on the organization and how teams are spread out. Typically you have your source code, and there might be multiple developers working on it, but we have the option of version control with different branches, so you can fork into those. What you really gain is the ability to not wait until the end and then have to go back, debug everything, and figure out why something isn't working. I know that's incredibly frustrating as a development team.

By continuously integrating, looking into where there might be errors, and fixing those, you're ultimately serving yourself in the long run, so that you're improving as you go, maybe even making improvements that you hadn't planned. Based on maybe a bug you see, or on reexamining the code, you're able to build that in, test it, and say, "Okay, great. That really was a great feature to add in." It could be a possible innovation.

We hope that by using some of the CI/CD tools that we have, you're able to speed things up and maybe have a little bit of time to go back thoughtfully through some of the code changes that you might've made.
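The continuous-integration loop described above can be sketched in a few lines: every merge into the central repository triggers an automated test run, and a failing change is rejected before it lands. This is a conceptual toy, not any AWS tool; the repo shape, function names, and "test suite" are all made up for illustration.

```python
def run_tests(code):
    """Stand-in for an automated test suite: check that the code
    object still exposes a callable 'add' function."""
    return callable(code.get("add"))

def merge_and_build(repo, change):
    """Merge a change into the repo, then run the automated tests.
    Reject the merge if the tests fail (the CI 'gate')."""
    candidate = {**repo, **change}
    if not run_tests(candidate):
        return repo, "rejected: tests failed"
    return candidate, "merged: tests passed"

repo = {"add": lambda a, b: a + b}

bad_change = {"add": "not a function"}     # a broken commit
repo, status1 = merge_and_build(repo, bad_change)

good_change = {"add": lambda a, b: a + b,
               "subtract": lambda a, b: a - b}
repo, status2 = merge_and_build(repo, good_change)

print(status1, "/", status2)
```

The point is the gate: the broken commit never reaches the shared code, so bugs are caught at merge time rather than at the end of the project.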

And then continuous delivery, on the flip side, is going to be another software development practice, but this is where the code changes are going to be automatically built, tested, and then prepped for release to your production environment. It expands on continuous integration by deploying all your code changes to a testing environment, and possibly to your production environment, after the build stage. So we go from source, to build, to test, and then ultimately to deploying and monitoring.

When continuous delivery is implemented properly, developers will always have a deployment-ready build artifact that has passed through their standardized test process, however that's defined team by team. The tools that we offer really help you securely store and version your application source code, and then automatically build, test, and deploy your application to AWS, and really see the benefit of how quickly you're able to work.

You start with your source code, you build out your continuous integration or continuous delivery workflow, and then you weave in some of our various DevOps-focused services. You're able to deploy that, you have your application, and then we can get into, "Okay, how do we want to monitor things? How do we want to measure these now that they're actually in either your staging area or your production environment after you've built this out?"

Looking at continuous integration, continuous delivery, continuous deployment: that's a mouthful with a lot of continuousness on this slide, but that's really the goal. That is what we're looking at. By incorporating best practices into your pipeline, your team is really able to stay agile and have continuous visibility into your overall process of building, testing, and deploying an application. Using tools that share the same vision, that take the same approach of meticulously vetting and testing things, makes everything very streamlined from start to finish. We want this to be streamlined. We want it to be efficient. We want you to feel like you have the power to innovate, because you're using tools that enable you to go through this process smoothly and ultimately deploy an application that you see fit based on your requirements and your standards.

We covered CI/CD. Moving on to microservices. I like this quote: "When the impact of change is small, release velocity can increase."

Looking at microservices. Typically, microservices do one thing. What I mean by that: when we look at a microservices architecture, this is an approach where you're able to build a single application as a set of small services. Each service runs in its own process and communicates with other services through a well-defined interface, typically something lightweight like an API. Microservices are going to be built around business capabilities, and each service is scoped to a single purpose. You can use different frameworks and different programming languages to write the microservices, and then you're able to deploy them independently, as a single service or as a group of services.
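The "each service does one thing, communicating through a well-defined interface" idea can be sketched like this. Two in-process Python functions stand in for services that would normally talk over HTTP; the service names, items, and payload shapes are all hypothetical.

```python
def inventory_service(request):
    """Single purpose: report stock for an item."""
    stock = {"widget": 3, "gadget": 0}   # hypothetical inventory
    return {"item": request["item"],
            "in_stock": stock.get(request["item"], 0)}

def order_service(request):
    """Single purpose: accept or reject an order. It never touches
    the stock data directly; it only calls inventory's interface."""
    inventory = inventory_service({"item": request["item"]})
    if inventory["in_stock"] >= request["quantity"]:
        return {"status": "accepted"}
    return {"status": "backordered"}

accepted = order_service({"item": "widget", "quantity": 2})
backorder = order_service({"item": "gadget", "quantity": 1})
print(accepted["status"], backorder["status"])  # accepted backordered
```

Because each service only depends on the other's interface, either one can be rewritten, redeployed, or scaled independently, which is the "small impact of change" benefit.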

In keeping with the agile theme and agile functionality, breaking our applications into interconnected microservices allows for a more lightweight, changeable architecture. When we think of legacy software, it's typically going to be monolithic, and you might feel a little bit stuck in terms of not necessarily being able to make a lot of changes.

It does do everything, but that's also the blessing and the curse of legacy software. If you wanted to make changes, or try to make something that was more lightweight, there'd be massive reconfigurations: tool-wise, training-wise, migrations, rearchitecting. A lot of things would need to change in order to effect any type of change application-wise. And it would likely not be something that a team would describe as speedy or pleasant. It's going to take a long time. Going with something in the style of a microservices architecture is really going to take a lot of lift off of the team, whether that's managing the application, or trying to effect that change and build something that evolves with how our technology is evolving now.

You have the option to use some really innovative technology too if you're interested in building out those microservices architectures. You could use containers, of which there are a few different types on AWS that you can use. Or you could use serverless technology like Lambda. Lambda functions are fabulously lightweight and very easy to use. It's a really cool compute resource, and I implore you to check it out if that's something that interests you, or if you want to chat about it later.
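To show how lightweight a serverless function can be, here's a minimal Lambda-style handler, invoked locally for illustration. The event shape and greeting are hypothetical; on AWS, the Lambda runtime is what calls `handler(event, context)` for you.

```python
def handler(event, context=None):
    """Minimal Lambda-style handler: read a name from the event
    and return an HTTP-shaped response."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Invoked locally here; on AWS the runtime supplies event and context.
response = handler({"name": "Carahsoft"})
print(response["body"])  # Hello, Carahsoft!
```

That's the whole deployable unit: no server, no provisioning, just the function, which is why it pairs so naturally with single-purpose microservices.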

But microservices are really going to change the way you're able to build applications that communicate via APIs, and let you see an interconnected architecture where you're able to make changes that speed up your projects, your applications, your timelines. There are a lot of things it's going to affect. Keep that in mind with microservices.

Now let's get into infrastructure as code. This is a practice where infrastructure is provisioned and managed, as the name says, using code and software development techniques. You're able to do version control, you can do continuous integration, but it's really an incredibly different outlook on how you look at your infrastructure.

If you take how infrastructure is typically managed and provisioned, this is really going to be a different path, making sure that your pipeline is treated as code and adopting the cloud's API-driven model. We take that model, with the different APIs and how lightweight that is, and we enable developers and sysadmins to interact with infrastructure programmatically and at scale. You're not needing to manually set up and configure resources. Engineers are really able to interface with infrastructure using code-based tools and treat it the way they would their application code.

So, because this is going to be defined by code itself, infrastructure and services can be deployed very quickly using standardized patterns. You can apply patches and versions, and really make this a repeatable process. As for the benefits that we see from infrastructure as code, there are about five pillars that I typically see. It brings a lot of benefits when you look at things in this kind of context.

Visibility. If you're looking at the templates that you can build to provision your infrastructure, they're going to serve as a very clear reference of what resources are on your account and what their settings are. You don't have to go all over the place finding those parameters and looking at different things. It's very clear where you are allocating your resources for infrastructure and what those parameters are. And you can ultimately read it right out of the code: "This is our goal. This is what we're provisioning, and this is what we're going to be managing."

You also have an incredible amount of stability. Let's say you accidentally change the wrong setting or delete the wrong resource. The dreaded, "I accidentally deleted the wrong resource in my web console." You think everything's broken. I feel like I've had that nightmare before; I've been afraid that I accidentally clicked something in the console getting started. But infrastructure as code is really going to help solve that, especially when combined with some sort of version control system.

Scalability. Playing into that DevOps tenet, with infrastructure as code you can write it once and then reuse it many times. Once you have that one well-written template, it can be used as a basis for so many services, it can be used across multiple regions, and it really makes it easier to scale horizontally. Once you write one template, you see how easy it is and the benefits. What we've seen with different partners at AWS is that it's really a springboard for teams to say, "Wow, this is so easy. This is how I'm managing things. What are some other things that I could innovate? What could I also use this template to automate?"

I see it as a way, obviously, to provision your infrastructure, but it's also a springboard into things that you can ideate on, to see if there are changes that you could also make in other areas.

Security. Security is going to be big here, with the different policies that we have, but this really gives you a place where you create that one well-secured architecture, and you can reuse it multiple times. What's great is that you have that template and that version, so each deployed version can follow the exact same settings. I think it's a way to have very solid peace of mind in terms of the resources that you're creating and the security that you have to have on those resources, and to have that be repeatable across your teams or across the different areas within AWS.

And it's also going to be transactional. Some of the services that we use not only create resources within your account; they also wait for them to stabilize while they start. If you've ever been in the AWS console and provisioned a resource, you'll typically see a yellow dot while it's provisioning, and then you'll see it turn to green or red, depending on whether that was a successful provision. What's great, though, is that it verifies whether the provision was successful or not. And if it's a failure, you can roll that infrastructure back to the last known good state. I think it avoids a lot of stress and really ties back into that stability aspect as well.
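That transactional behavior, apply a change, verify it stabilizes, roll back to the last known good state on failure, can be sketched like this. The states and the health check are hypothetical stand-ins for what a service like CloudFormation tracks for you.

```python
def apply_change(current_state, new_state, health_check):
    """Return the new state if it passes the health check,
    otherwise roll back to the previous (known good) state."""
    if health_check(new_state):
        return new_state, "deployed"
    return current_state, "rolled back"

# Hypothetical health check: the deployment must leave at least
# one instance running.
healthy = lambda state: state.get("instances", 0) > 0

good_state, ok = apply_change({"instances": 2}, {"instances": 4}, healthy)
final_state, outcome = apply_change(good_state, {"instances": 0}, healthy)
print(ok, "/", outcome, final_state)
```

The second change fails its check, so the system ends up back at the known good state instead of in the broken one, which is the stress the speaker says this avoids.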

I want to touch on logging and monitoring. This isn't directly in the pipeline as you're going through; it's going to be that extra step after you go through your DevOps pipeline.

A lot of organizations monitor different metrics and logs to see how their applications and infrastructure are performing, because that's ultimately going to impact the experience of your end users or your teams, depending on who you're facing. If you're able to categorize and analyze the data and the logs that are generated by these applications, you're able to understand how changes and updates affect users. Say there's a problem affecting one of your main applications: you can have a lot more insight into the root cause of it, or if it was caused by an unexpected change, how can we get to the bottom of it and figure out a way to remedy it and a way to avoid it? Active monitoring has become increasingly important as services are now available 24/7 and applications and infrastructure update far more frequently.

Creating alerts, performing real-time analysis: all of those processes are really feeding on the insights that these tools provide. This is going to help organizations more proactively monitor their services across the board. There are two that I really see as being key for the applications that you build: CloudWatch and CloudTrail. They both have "cloud" names and they're easy to confuse, but they do different things. If you're looking more on the cloud and network monitoring side, that's CloudWatch, our monitoring system. The way I've seen it remembered in a lot of sessions and trainings: think of "watch." Someone is watching this; they're detecting everything, they're protecting. That monitoring service for AWS is going to collect and track metrics, collect and monitor log files. You can set alarms, and you can automatically react to changes in your AWS resources.

Let's say you were looking at a compute resource and it goes above a certain threshold. You get an alarm, and it automatically adjusts using an auto scaling service. That's just an example, but it can really be applied to all the different services that you're using.
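Here's a local illustration of that alarm pattern: watch a metric, and when it breaches a threshold across the evaluation window, trigger a scaling reaction. The metric values and threshold are made up for the example; on AWS, CloudWatch alarms and Auto Scaling do this for you, so this is only a sketch of the logic.

```python
def evaluate_alarm(datapoints, threshold):
    """Alarm fires if every datapoint in the window breaches the
    threshold (a simplified version of an evaluation period)."""
    return all(dp > threshold for dp in datapoints)

cpu_window = [82.0, 91.5, 88.3]   # hypothetical CPU% samples
alarm = evaluate_alarm(cpu_window, threshold=80.0)
action = "scale out" if alarm else "no action"
print(action)  # scale out
```

The value of encoding the reaction this way is that the response is automatic: no one has to be watching a dashboard at 3 a.m. for the scaling to happen.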

The other side of logging and monitoring would be, "Okay, I want to track all the activity and API usage across the board." The service that we use there is called CloudTrail. CloudTrail is a web service that records AWS API calls for your account, and you're able to access those log files. Recorded information can include who the API caller is, when they called that API, the source IP, and the request parameters. You're really able to monitor any type of API call and a lot of the information that goes along with it. It's great for logging and monitoring and for weaving in that security aspect as well.
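To show what those log files give you, here's a trimmed, hypothetical CloudTrail-style record parsed to pull out the fields just mentioned: who called the API, which call it was, when, and from where. The values are invented; real records carry many more fields.

```python
import json

# A trimmed, hypothetical CloudTrail-style record.
record_json = """
{
  "eventTime": "2023-05-01T12:00:00Z",
  "eventName": "RunInstances",
  "sourceIPAddress": "203.0.113.10",
  "userIdentity": {"userName": "example-user"}
}
"""

record = json.loads(record_json)
summary = (record["userIdentity"]["userName"],
           record["eventName"],
           record["eventTime"],
           record["sourceIPAddress"])
print(summary)
```

Being able to answer "who did what, when, and from where" from a log like this is exactly the security angle the speaker is pointing at.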

I wanted to walk you through an example pipeline. There are some pipeline tools mixed in that I will get to, but the key principles that we really see are acting fast, acting thoroughly, and planning for failover. We'll walk through these. Essentially this combines some of the things that I've talked about, but I thought it was just a fantastic illustration. We start here with our source code: we have our developer working in their workstation on the code that they're ultimately hoping to deploy, and they're passing that on to code review. We're in the development stage, most likely working on some sort of local branch with some small changes. "Okay, I've finished with that. Let me publish this for review with my peers."

Then we go to that mid stage. "Now that we have the code, what are the dependencies it's running on, and how can I combine them with my code into one combined artifact?" Once you have that build artifact and everything's accounted for, you can run your unit testing, and you can package it up so it can be accessed as a build file. And this is really the time to be thorough, to really vet everything. This is pre-production: you're out of development, you're in that test and dev stage, and you need to ensure that everything is running smoothly and there are no bugs before this ultimately gets put into production. I know it's weird to say, "look for any reason to fail," but that's something we see constantly at AWS: always plan for failover.

You never want that to be your reality. It's kind of like disaster recovery: I plan out this whole strategy, but gosh, I hope I never have to use it. Only promote on success, test your failures, and if you fail, roll that back. Baking in that failure handling is really going to be crucial, because you don't want to be calling AWS or Carahsoft saying, "Hey, my production is down. What can we do?" Obviously, that's something that happens to people and that we're able to address and work on with our support teams, but ultimately you always want to monitor things and make sure you're baking in everything that you need in order to run a successful production application.

I want to move on to our DevOps tools on AWS. You've seen a few sprinkled in, and now that you've seen the principles, we've talked about some of the best practices that AWS recommends, what we offer, and what we stand by. I wanted to look at some of the key tools. We'll walk through these. Let's look at some of the pipeline services.

Within our developer tools, what we're ultimately trying to do is help you securely store and version your application source code, and then automatically build, test, and deploy that to AWS. You start with CodeCommit, which is where you're storing your source code. You move on to CodePipeline to build a CI/CD workflow that uses CodeBuild, and then CodeDeploy is going to help you deploy that out to your different environments.
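The flow just described can be sketched as a sequence of stages, with each developer tool mapped to the step it covers and CodePipeline playing the orchestrator that runs them in order. The stage payloads here are hypothetical labels, not real tool output.

```python
# Conceptual sketch of the developer-tools flow: each AWS tool mapped
# to the pipeline stage it covers. CodePipeline's role is the
# orchestration loop at the bottom, running each stage in order.
stages = [
    ("CodeCommit", lambda art: art + ["source stored"]),
    ("CodeBuild",  lambda art: art + ["built and tested"]),
    ("CodeDeploy", lambda art: art + ["deployed"]),
]

artifact = []
for name, stage in stages:
    artifact = stage(artifact)

print(artifact)  # ['source stored', 'built and tested', 'deployed']
```

Ordering matters: nothing reaches the deploy stage without first passing through source control and the build-and-test gate, which is the whole premise of the pipeline.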

But let's start with CodeCommit. This is going to be a fully managed source control service that hosts secure Git-based repositories. This makes it easy for teams to collaborate on code in a secure and highly scalable ecosystem. If you've ever used any type of open-source source control, the interface looks very similar, but what's great is it's fully integrated within AWS. You're not having to pop back and forth between different tools. That eliminates the need for you to operate your own source control system or worry about scaling its infrastructure.

You can use CodeCommit to securely store anything from your source code to binaries, and it works seamlessly with your existing Git tools. As we move forward, we're looking at the software release workflow stage. This is where AWS CodePipeline comes in. This is going to be our continuous integration and continuous delivery service for fast, reliable application and infrastructure updates.

CodePipeline is ultimately going to build, test, and deploy your code every time there's any type of change, based on the release process that you define. It enables you to rapidly and reliably deliver features and updates, and really work through that specific workflow.
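The "release process that you define" is typically expressed as a stage list. A hypothetical CloudFormation sketch of the three-stage pipeline described here (source from CodeCommit, build with CodeBuild, deploy with CodeDeploy) might look like this; the names MyRepo, MyBuildProject, MyApp, MyDeployGroup, and the role and bucket resources are all placeholders:

```yaml
# Hypothetical three-stage pipeline: CodeCommit -> CodeBuild -> CodeDeploy.
Pipeline:
  Type: AWS::CodePipeline::Pipeline
  Properties:
    RoleArn: !GetAtt PipelineRole.Arn   # assumed IAM role resource
    ArtifactStore:
      Type: S3
      Location: !Ref ArtifactBucket     # assumed S3 bucket resource
    Stages:
      - Name: Source
        Actions:
          - Name: FetchSource
            ActionTypeId: {Category: Source, Owner: AWS, Provider: CodeCommit, Version: "1"}
            Configuration: {RepositoryName: MyRepo, BranchName: main}
            OutputArtifacts: [{Name: SourceOutput}]
      - Name: Build
        Actions:
          - Name: BuildAndTest
            ActionTypeId: {Category: Build, Owner: AWS, Provider: CodeBuild, Version: "1"}
            Configuration: {ProjectName: MyBuildProject}
            InputArtifacts: [{Name: SourceOutput}]
            OutputArtifacts: [{Name: BuildOutput}]
      - Name: Deploy
        Actions:
          - Name: DeployToEC2
            ActionTypeId: {Category: Deploy, Owner: AWS, Provider: CodeDeploy, Version: "1"}
            Configuration: {ApplicationName: MyApp, DeploymentGroupName: MyDeployGroup}
            InputArtifacts: [{Name: BuildOutput}]
```

Each push to the tracked branch triggers the Source stage, and artifacts flow stage to stage through the S3 artifact store, which is the "build, test, and deploy on every change" behavior described above.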

Moving into the build and test phase, this is where CodeBuild is going to come in. This is a fully managed build service. It compiles your source code, runs tests, and produces the software packages that you'll ultimately deploy. You don't need to provision, manage, or scale your own build servers. CodeBuild is going to scale continuously, and it can process multiple builds concurrently. You're not sitting and waiting for those builds to happen, and they're not left waiting in a queue, which can be time-consuming. CodeBuild is a great tool for that.
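CodeBuild reads its instructions from a buildspec.yml file in the repository root. A minimal, hypothetical example for the compile-test-package flow described here (it assumes a Node.js project; swap in your own toolchain and output directory):

```yaml
# Hypothetical buildspec.yml: install dependencies, run unit tests,
# and package the result as the build artifact.
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 18
    commands:
      - npm ci            # install pinned dependencies
  build:
    commands:
      - npm test          # fail the build if any unit test fails
      - npm run build     # produce the deployable bundle
artifacts:
  files:
    - '**/*'
  base-directory: dist    # assumed build output directory
```

Because `npm test` runs inside the build phase, a failing unit test fails the whole build and nothing is promoted, which keeps the "test your failures" practice from earlier enforced automatically.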

And then deployment automation. This is where CodeDeploy comes in: it automates code deployments to any instance, including EC2 instances. CodeDeploy makes it really easy for you to rapidly release new features and helps you avoid downtime during application deployment, and it also handles the complexity of updating your applications. A very valuable last step in that pipeline.
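CodeDeploy, in turn, is driven by an appspec.yml file bundled with the build artifact. A hypothetical sketch for an EC2/on-premises deployment; the destination path and script names are placeholders:

```yaml
# Hypothetical appspec.yml: copy files into place and run lifecycle hooks
# on each EC2 instance in the deployment group.
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/myapp       # assumed install location
hooks:
  ApplicationStop:
    - location: scripts/stop_server.sh
      timeout: 60
  AfterInstall:
    - location: scripts/install_dependencies.sh
      timeout: 300
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 60
  ValidateService:
    - location: scripts/health_check.sh  # a non-zero exit here fails the deployment
      timeout: 120
```

The ValidateService hook is where the zero-downtime story connects back to rollback: if the health check script exits non-zero, the deployment fails and, with auto-rollback enabled, the previous revision is restored.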

Now I'd like to get to the reference architecture. We're in our AWS cloud environment, and we start either with the command line, with our SDKs, or with AWS Cloud9. This is an IDE, a browser-based development environment that you can use. I've used it before, and I used other services before that, but it's really cool. You stay within all the same tools, and the integration is seamless. I highly recommend it as well. Those are going to live within your EC2 instances.

Then you go out to CodeCommit, where you're building out your source code, and then this could feed your CodePipeline templates. You see CodePipeline and CodeDeploy, you're using your build, and this can be stored in Amazon S3 as well, if you want to integrate that into your architecture.

And then this goes out into some of that pen testing, ensuring that your integrations are all performing the way you want them to, closing with your logging and monitoring. It also has some of the features that fall under our auditing, management, and governance angles: AWS Config, CloudTrail, IAM, which is identity and access management, and then Security Hub. This is what we would recommend if you wanted a full DevSecOps approach using all of our tools. We also have KMS, our Key Management Service, where you can store your secret keys in order to access different services, use the command line, SDKs and whatnot, and then CloudFormation as well for your templating.

That would essentially be a final architecture. This pulls in a lot of different services. You don't have to have all of that mapped out, but you're able to see how these play into each other. Let's say you start using EC2, you deploy Cloud9, and then you say, "All right, I'm going to use CodeCommit. I'm going to start building out my source code." It's a great place to start.
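That starting point can itself be expressed as infrastructure as code. A hypothetical CloudFormation fragment for "a repository plus a Cloud9 environment cloned to it"; the names, instance type, and ImageId value are placeholders to adapt:

```yaml
# Hypothetical first step: a CodeCommit repository and a Cloud9
# environment that clones it on launch.
SourceRepo:
  Type: AWS::CodeCommit::Repository
  Properties:
    RepositoryName: my-app
    RepositoryDescription: Source for the DevOps pipeline

DevEnvironment:
  Type: AWS::Cloud9::EnvironmentEC2
  Properties:
    Name: my-app-dev
    InstanceType: t3.small
    ImageId: amazonlinux-2023-x86_64      # assumed supported image identifier
    Repositories:
      - PathComponent: /my-app
        RepositoryUrl: !GetAtt SourceRepo.CloneUrlHttp
```

From there, each later stage of the walkthrough (CodePipeline, CodeBuild, CodeDeploy, the S3 artifact store) can be added to the same template incrementally.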

We would love to talk more about this and help you integrate these. This example was in the standard AWS cloud environment, but you can do the same in AWS GovCloud, which I know is important depending on the agencies and organizations that you all as attendees work in. I hope this was helpful, and I hope it gets the wheels turning on how you're thinking about DevOps and what you're doing today.

Speaker 1: Thanks for listening. If you'd like more information on how Carahsoft or AWS can assist your organization, please visit www.carahsoft.com or email us at aws@carahsoft.com. Thanks again for listening and have a great day.