CarahCast: Podcasts on Technology in the Public Sector

Transforming DevSecOps with Cloud Native Security

Episode Summary

Federal agencies are increasingly viewing DevSecOps as an enabler of their migration to the cloud. DevSecOps brings rapid application development, more reliable applications, and increased security to their applications. Palo Alto Networks has continued to enable our customers to streamline their application development and shift security left. Listen to the podcast to hear Brian Wenger, Systems Engineer for Palo Alto Networks, discuss how solutions have enabled our customers to achieve a Continuous Authority to Operate (cATO) and implement Zero Trust Architecture for their applications.

Episode Transcription

On behalf of Palo Alto Networks and Carahsoft, we would like to welcome you to today's podcast focused on transforming DevSecOps with cloud native security, where Brian Wenger, Systems Engineer at Palo Alto Networks, will discuss how to achieve a continuous authority to operate and implement zero trust architecture for applications. 

Brian Wenger: There's a lot going on in DevSecOps in the federal, civilian, and SLED spaces. Today, we're going to focus predominantly on the security aspects of DevSecOps. I want to take a quick second to introduce myself. I'm a systems engineer with Palo Alto Networks. I've been in the industry designing and implementing solutions around cybersecurity and zero trust for 12 years, focusing on federal markets. I've designed some of the largest implementations of zero trust inside of federal groups. I hold a CCIE and CISSP and a bachelor's degree from Towson University. 

The topics we're going to cover today, like I mentioned, focus specifically on cloud security. We have a lot of customers that are migrating to the cloud, and everyone is on their own timeline for migration. It's my hope today that we can help you and your teams understand how we can speed up your application deployment and reduce the time it takes to get applications into production. I hope we can help you understand how we can increase the security posture of your applications without breaking functionality, and how we can get you deeper visibility into the actual health of the applications you're running inside of private cloud or public cloud. 

The cloud is driving modernization of the development process. It's doing this because we're constantly seeing changes in the way developers work, solutions are hosted, and security is implemented. In the past five to 10 years, we've started to see developers go from a waterfall approach to a more agile approach. Technologies are constantly changing things. 10, 15 years ago, we had monolithic applications. Five, six, seven years ago, containers really changed the way that we deployed services on those monolithic applications. With the advent of Kubernetes, we're seeing these applications actually deconstructed and deployed as microservices or containers inside of the cloud or inside of private cloud environments. 

We're also seeing deployments that used to take maybe two, six, or 12 months to get new updates out happening much more rapidly, and that's really driving the DevSecOps approach inside of the industry. We're also seeing the cloud help a lot of customers and development teams standardize their operating approach, to make sure they've got a templatized method to deploy their development environments, their QA environments, and continuously roll that into production environments. We're starting to see technology change a lot of things, but these changes don't come without their own challenges. 

From a security perspective, these challenges can mean a lot of things for federal [inaudible 00:03:33] customers. Our security research team has actually found that 43% of CloudFormation templates are insecure, and that's something that's extremely important for DevOps teams. Our DevOps teams are constantly looking to build their application stacks with more consistency, and they're heavily leveraging infrastructure as code to do that. Our threat research team is finding that 42% of these baseline infrastructure-as-code configurations have insecure security controls around them. That's something that we need to focus on. When we're looking at applications that are migrating into the cloud, we also need to be concerned with vulnerabilities, whether we're talking about vulnerabilities at the container level and the base images being used, or vulnerabilities within the host or some of the binaries being implemented from serverless functions. 
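The kind of insecure infrastructure-as-code configuration this research describes can often be caught with a simple static check before anything is deployed. Here is a minimal sketch; the rule and template below are illustrative and are not Prisma Cloud's actual scanning logic:

```python
# Minimal infrastructure-as-code check: flag security groups that open
# SSH (port 22) to the whole internet. Illustrative only -- real IaC
# scanners evaluate hundreds of rules like this one.

def find_open_ssh(template):
    """Return logical IDs of SecurityGroups with 0.0.0.0/0 ingress on port 22."""
    findings = []
    for name, res in template.get("Resources", {}).items():
        if res.get("Type") != "AWS::EC2::SecurityGroup":
            continue
        for rule in res.get("Properties", {}).get("SecurityGroupIngress", []):
            if rule.get("CidrIp") == "0.0.0.0/0" and rule.get("FromPort") == 22:
                findings.append(name)
    return findings

template = {
    "Resources": {
        "MgmtSG": {
            "Type": "AWS::EC2::SecurityGroup",
            "Properties": {
                "SecurityGroupIngress": [
                    {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
                     "CidrIp": "0.0.0.0/0"},
                ],
            },
        },
    },
}

print(find_open_ssh(template))  # -> ['MgmtSG']
```

Because a check like this runs against the template, the misconfiguration is caught before the stack ever exists in a cloud account.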

These are all things that we as technologists need to be concerned about as we're starting to migrate our applications into the cloud. Lastly, I know this is something that developers, cloud engineers, and network engineers don't really want to hear about, but compliance is a major driving factor as our applications move into the cloud. We no longer have the safety net of the security team implementing a huge firewall and security controls in front of our devices once these devices and applications are moved into the cloud. To actually make sure applications can run and the security team is going to bless these configurations and applications, we need a more streamlined approach to implementing compliance controls so our applications are able to be accredited and authorized to run in different environments. 

How do we do that? How do we meet these security concerns while leveraging the best of what the cloud has to offer? Palo Alto has a couple of key characteristics that we think your teams should be following. That starts with shifting security left. I know this is something a lot of developers have been hearing, and a lot of application and security practitioners have been preaching, for a while. But what we're actually talking about here is instead of waiting until an application is ready to go into production, we need to be building security into the entire life cycle of the application. We need to make sure it's built into the build phases, integrated into the deployment phases, and, again, something we can use to continuously monitor once these applications are up and running in the cloud and moved into production. 

We also need to make sure there's better collaboration between security teams and developers. IT is sort of a word soup of acronyms. Security teams have their own acronyms, their own tools, their own concerns. Development and operations teams also have their own tools, their own concerns, their own acronyms. Oftentimes, when developers, operations teams, and security practitioners meet to talk about how to improve their applications, there's a lot of friction between these teams. A lot of that comes down to the fact that there's no consistent terminology, no sharing of tools, and the sharing of information isn't at the level it should be. That's certainly something we need to address as we move forward and really implement DevSecOps, because this friction can increase the amount of time it takes for our applications to get from development into production. 

As technologists, I think it goes without saying that change is the only constant. Anyone who's been in the industry more than a few years knows that every single quarter we get new tools and new feature sets. If you're looking at the cloud, there are new services implemented almost weekly that your teams can leverage. What's really important for me as a security practitioner is to not tie the hands of the developer. I don't want to take services away from the cloud teams. The reason the cloud service providers are implementing these services is to give developers more flexibility when solving problems and to give operational teams more reliability out of their applications. As security practitioners, we need to take a mindset where we're enabling these teams to build their applications, to make them more reliable, and to make sure these updates and apps can get into the hands of users at a much faster rate. 

That's why Palo Alto Networks developed our solution, Prisma Cloud. This is a cloud native security platform. Our cloud native security platform has been built around solving the use cases we've seen as major stumbling blocks for cloud-hosted technologies, private cloud-hosted technologies, and development teams. Our security platform follows four major constructs that we use to implement cloud security. The first major construct is cloud security posture management. This gives teams deeper visibility into all of the assets deployed across your cloud, more visibility into the risks of those assets and any risky configurations being reported among them, and compliance and governance recommendations to help you more rapidly meet your different compliance frameworks, whether we're talking about NIST, PCI, HIPAA, SOC 2, or any number of other compliance frameworks. 

We also, most recently, provide data security around data hosted in the cloud. Think about any PII that might be hosted in object-based storage like a blob store; we have the ability to uncover that and report it as a potential threat for data exfiltration. That's what we call protecting the services plane of the cloud, but we protect the compute plane of the cloud as well. What do I mean by protecting the compute plane? Well, your cloud is running a number of things. Your applications could be running on VMs or hosts. They could be running as containers. You could even have serverless functions being used to deliver your applications. In some of your applications, you're openly exposing APIs and have web app functionality from those hosted devices and applications. 

That's exactly what I'm talking about with cloud workload protection. Prisma Cloud helps implement security and recommendations around the compute plane of the cloud. But not only are we doing cloud-specific and resource monitoring, we also have the ability to inject ourselves in line to provide network-based security. The network security we have, it delivers visualizations of the way all of your application components communicate with each other. It helps you map in real time how all of your apps talk to each other, what microservices communicate with each other, what serverless functions call other serverless functions. It even gives you the ability to implement micro segmentation all the way down to the process level within your cloud workloads.

The last major pillar of our cloud native security platform is our identity management solution. This gives you the ability to enforce permissions, uncover overly permissive rule sets for different roles, and even come across any secret access keys that might have anomalous behavior associated with them. As you can see, Prisma Cloud provides a very robust approach to cloud security. We don't just cover one single use case. We're not just looking to provide compliance and governance, and we're not just looking to provide a CMDB for all your cloud assets. We're looking to provide that, along with workload protection and a wide variety of other use cases. We do that across all cloud providers, so you don't have to worry about stitching together a bunch of point products or cloud-specific products. You can leverage one product to provide security across all of your cloud environments, across all of the different security concerns you might have, whether that's compliance and governance, workload protection, or in-line network security. We meet a wide variety of use cases. 

What's most important here is that we're doing this from a single pane of glass. Not only are we providing these cloud-based workload protections, we also have the ability to provide workload protections on prem for privately-hosted clouds or on-prem data centers. This is all managed from a single pane of glass, so your team has one device to learn, one set of technology to implement, and one place to get all of your visibility across the entire life cycle of your applications. As I mentioned, Prisma Cloud provides cloud security across the entire lifecycle of your application. This starts at the build, when your development teams are building their images and putting their applications together, and it transcends all the way through to deployment. So, across the entire system development lifecycle of your application, you have security being provided.

You have the confidence that there's visibility into all of the security posture during the entire time that your application is being built. You have the flexibility to do this across a wide variety of form factors. Prisma Cloud provides security for virtual machines and bare metal. It provides security for containers. It even provides security for serverless functions. Your teams have this flexibility to deploy their applications to meet their business needs, but you have the confidence to deploy those because you've got security across the entire lifecycle, across a wide variety of form factors. 

So, that brings us to our first use case. Of the use cases we're going to cover today, we're going to start with the continuous authority to operate. Any of the federal customers out there probably understand what an authority to operate is. For those of you who aren't familiar with an authority to operate, think of it as an accreditation and approval process. This is the process a lot of organizations go through before their applications are able to hit production. This is where they go through an audit process to validate they've met all their compliance concerns, and where they go through the process of scanning the applications to make sure there are no vulnerabilities inside of them. This is the process that all companies implementing an authority to operate go through as their applications are getting ready to go into production. 

Now, I know we started to talk about accreditations and approvals. This is something that probably sends a shiver down the back of a developer, a cloud team, maybe network engineers. We all have horror stories about the painstaking process it takes to get your application accredited and approved. I know that as the development process goes on, our developers are working day and night. They are pouring tons of creative energy into solving technical problems. They're building these amazing applications, and their teams often work late hours, fueling themselves on coffee and snacks. The last thing a development team wants is to go through a number of sprints getting ready to release a great technology that's going to have wide user adoption, and then have that security guy barge into their office with a ton of spreadsheets showing exactly why they're not ready to go live.

But that doesn't need to be the case. There's a much better way to go through the ATO process, or the accreditation and approval process. There's a much better way to handle this to keep yourselves from having to push back dates, and to actually allow your applications to be deployed into production at a much faster rate, driving the velocity of your updates. We do that with a continuous authority to operate. What is a continuous ATO? At a technical level, a continuous ATO closely follows the NIST Risk Management Framework. It goes through a number of processes that allow your team to streamline and implement your accreditation and approval process at a much faster rate. 


It's important right now because microservices are changing the way that this can be implemented. The fact that microservices are frequently deployed as images from repositories, the fact that the technology has changed to allow this, gives us an opportunity as technologists to change the way that the ATO and the accreditation and approval process has to happen. Instead of taking a snapshot of security like we used to do as we were going through the ATO process, cross our fingers and hope that nothing changes over time, we don't have to go through the process that way anymore. By leveraging tools that integrate with the different components that are used to build applications, we can continuously monitor the security posture across the lifecycle of the application. 

We can see the vulnerabilities that exist before the images are built. We can make sure that insecure images are never deployed. In the event new vulnerabilities are released, we have the ability to see which applications have these vulnerabilities and make sure that our teams are able to patch or implement compensating controls. The easiest way to think about a continuous ATO starts with a porterhouse steak. One of the best metaphors I've heard for the continuous ATO process is actually the way your meat goes from factory to delivery. When we look at it through that lens, it really helps us understand what the continuous ATO is. The USDA wants to make sure that we're only eating healthy hamburgers. To properly do that, they need to be able to put their stamp of approval on all the meat leaving these factories. 

Now, if you think about that, is it really possible for the USDA to individually inspect every single hot dog that comes off the line? Is it really possible for them to show up and make sure that every single filet mignon you're getting ready to cook has gone through the proper process? It's not, right? So, the way they make sure they can scale is, instead of inspecting every piece of beef that comes through the factory, they inspect the factories and make sure they're using sanitary processes and proper techniques. They actually provide the approval based off of those processes and techniques. I don't know about you, but I'm not looking to wait two to three months to get my steak to my dinner plate. So, what we're looking at from a technologist's perspective is something very similar to that. 

Instead of having an auditor go through every application and every update before deployment with a fine-tooth comb, and really slow down the velocity at which your updates and applications can be delivered, we're looking at working with these auditors to approve the processes being used. This requires implementing some security gates to make sure things can't get promoted through these processes without being inspected or passing. Once you go through the process of having an auditor make sure that all the proper techniques and workflows are being implemented, it's really going to streamline the way your applications go into production. It's really going to reduce the time it takes to get all your updates and your great features and functionality into the hands of your users. 

So, how do we do that? Well, we have a six-step process that I'm going to walk you through to help you understand, according to NIST's framework, how you can implement a continuous ATO. We're going to do this specifically with Prisma Cloud. The first step is to actually prepare. This could be getting the proper paperwork filled out. This could be deploying some technology, maybe implementing a new server or some cloud environment. Specifically, with Prisma Cloud, this is actually setting up and installing Prisma Cloud and then setting up our boundaries, which is going to be the portion or the applications that we're focusing on. We would start by simply creating a label that we're going to use to map to all of our images for this ATO process. 

The second step is to categorize. Categorizing is the process of attaching different categories based on the potential impact to the organization if a certain event should occur. The easiest way to understand this is sort of the way the DOD classifies data: we have unclassified data, classified data, secret data, top secret data. There's a wide variety of different classification levels. You can also classify by impact level. When we're talking about the classification of data, we actually use that to map the different vulnerabilities and compliance concerns as well. Once we know what the impact is, and once we know what our different classifications are, we start to map that to the different vulnerability levels. 

Maybe we're talking about making sure that our high value assets don't have any critical, medium, or low vulnerabilities in them. Well, then we would use that as our classification and our categorization. Maybe we're talking about something that's not as important as our high value asset, that doesn't have any PII in it, and we just want to make sure there are no critical vulnerabilities in that application; well, that's where this categorization step would come into play. The way Prisma Cloud executes and implements that is by using what's called "Collections". This means leveraging that same label you just created, and making sure that all of the images, all the containers, all of the components of the application are mapped to that collection. That's going to help you keep track of all the components and what their current vulnerability and compliance concern level and risk grading are as your application gets developed. 
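The collection idea described above can be sketched as a simple grouping exercise: map every component carrying the ATO label into one group and track the worst finding across it. This is an illustration of the concept, not Prisma Cloud's actual data model; all names and the severity ordering are made up:

```python
# Sketch of a "collection": group every component of an application under
# one label and track the worst finding severity across the group.

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def build_collection(label, components):
    """components: list of dicts with 'image', 'labels', and 'findings'."""
    members = [c for c in components if label in c.get("labels", [])]
    worst = "none"
    for c in members:
        for sev in c["findings"]:
            if SEVERITY_RANK.get(sev, 0) > SEVERITY_RANK.get(worst, 0):
                worst = sev
    return {"label": label, "members": [c["image"] for c in members], "worst": worst}

components = [
    {"image": "api:1.2", "labels": ["payments-ato"], "findings": ["medium"]},
    {"image": "web:3.1", "labels": ["payments-ato"], "findings": ["critical", "low"]},
    {"image": "batch:0.9", "labels": ["other-app"], "findings": ["high"]},
]

print(build_collection("payments-ato", components))
```

The point of the grouping is that everything downstream (policies, exceptions, reports) can reference the one label instead of each component individually.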

Once you have that classification built, once you have that collection mapped to your components, Prisma Cloud Compute will automatically generate a visual map of the way your application's components interact with each other. You'll have a nice dashboard showing all the components. It's going to give you an idea of the risk rating of these components by color, and help you see the way all of the different components interact with each other. You can find this inside of the radar view of Prisma Cloud. The next step would be to select. This is where we go through and select all the controls that need to be implemented. This is the process we can use to implement our different POAMs or mitigating controls for the application. 

The first step when selecting the controls is going to be exporting all of the different vulnerabilities and compliance concerns associated with the components of our application. Once we've exported them, we can go ahead and start filling out our POAMs or building out our mitigating criteria. Again, the Prisma Cloud product team has put together a number of scripts and automation tools that can be used to export this information out of Prisma Cloud. The whole point of selecting is to make sure that all of our components don't have any vulnerabilities and compliance concerns associated with them. 
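Exporting findings into a flat file that an auditor or POAM author can work from is conceptually a small flattening step. Here is a hedged sketch; the field names are illustrative and are not the actual export format of the product team's scripts:

```python
# Sketch of exporting findings for POAM paperwork: flatten each
# component's vulnerabilities into a CSV an auditor can work from.

import csv
import io

def export_poam_csv(findings):
    """findings: list of dicts with image, cve, severity, and fix version."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["image", "cve", "severity", "fix"])
    writer.writeheader()
    for f in findings:
        writer.writerow(f)
    return buf.getvalue()

findings = [
    {"image": "web:3.1", "cve": "CVE-2021-0001", "severity": "critical", "fix": "3.2"},
    {"image": "api:1.2", "cve": "CVE-2021-0002", "severity": "medium", "fix": ""},
]

print(export_poam_csv(findings))
```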

After we go through the select stage, we're actually going to put our technologist hat on and start to implement. We do this by taking all of the vulnerabilities we've recently exported and attaching different tags to them. Maybe we mapped a bunch of them to POAMs we've already put together, or maybe we've mapped a number of these vulnerabilities to mitigating or compensating controls we know we can implement to address these vulnerabilities and compliance concerns. Once we do that, that's where we actually go through and start to put up our stopgaps and our security gates. 

The way to put up those security gates is we're going to build a policy around it. We're going to use the collection that we defined early on in step one that's already been mapped to all of the components from step two, and we're going to make sure that we build a policy that specifically only focuses on that categorization, on that collection that we've built. Then, we need to make sure that as we continue to assess those applications through time, that we're not going to be bombarded with all of the vulnerabilities that we've already implemented a POAM around, that we've already implemented compensating controls around. 

The way we do that is to put together a list of exceptions. For all of those vulnerabilities and compliance rules we used in the previous steps, we're going to make sure the mappings and tags we created are used as our list of exceptions. That's going to make sure that every time we poll the system as we continuously monitor it, we're not going to keep firing alerts. We're not going to keep alerting people, because these are vulnerabilities we already know about and have already taken care of, either through POAMs or compensating controls. One thing to note here: with our POAMs, we can set an expiration date on the exceptions list. 

If we have six months to implement our POAMs, then we can set a timer that expires after six months. So, after six months, our scans are going to show those vulnerabilities or compliance concerns [inaudible 00:28:15]. The last part of the implement step is to actually put our goals in place. This ties back into our classification level. Maybe we want to alert on or block all critical vulnerabilities. Well, that's where we would select to alert and block on all critical vulnerabilities. Maybe this was our high value asset, and we want to block any vulnerability from making it into the system. Well, this is the section where we would set up our blocks to make sure that no components with vulnerabilities ever make it into production, QA, or development environments. 
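The exception-with-expiration workflow can be sketched as date-gated suppression: a finding covered by a POAM stays quiet until the POAM window closes, then starts alerting again. This is purely illustrative of the workflow, not the product's actual exception mechanism:

```python
# Sketch of an exception list with expiry: a finding covered by a POAM is
# suppressed until the POAM's due date, after which it alerts again.

from datetime import date

def active_alerts(findings, exceptions, today):
    """Suppress findings whose CVE has an unexpired exception."""
    live = {cve for cve, expires in exceptions.items() if today <= expires}
    return [f for f in findings if f["cve"] not in live]

findings = [{"cve": "CVE-2021-0001"}, {"cve": "CVE-2021-0002"}]
exceptions = {"CVE-2021-0001": date(2021, 12, 31)}  # six-month POAM window

print(active_alerts(findings, exceptions, date(2021, 7, 1)))   # suppressed while the POAM is open
print(active_alerts(findings, exceptions, date(2022, 1, 15)))  # fires again after expiry
```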


The important thing to note here is that this is applied across all phases of the application. As developers get ready to build their applications, the blocks can be applied there. As we start to leverage images that might have been sitting in a repository for a long time, once they get deployed, we have a security gate set up right there to ensure that doesn't happen. These will even take action in production as well. When we have applications that have been deployed and new vulnerabilities are released on those applications, we have the ability right then and there to fire off alarms so our security and operations teams know we need to patch some of these components of the applications. 

So, what do we do once we've implemented the controls? Well, this is where the assessment happens. Instead of having an auditor sit there and go through a vulnerability scan with 250 different vulnerabilities that were found throughout the entire application in an Excel spreadsheet, we can significantly reduce the amount of time it takes for our teams to assess the health of our applications, and we can leverage this as documentation to show the auditors what the health of the application looks like. Because we've already built out the exception list, we're not going to continuously be bombarded with all of the vulnerability and compliance concerns, which we either have POAMs documented for, or that we have mitigating controls for. 

Because we're monitoring from development inside of the repos to deployment and even through production, we're not just getting a snapshot of what the security posture of the application is. We're actually seeing this in real time, because the assessment is continuously happening and we've got all the logs and all of the reporting capabilities to show the current health of the system. That's because Prisma Cloud uses an intelligence stream. This is a stream that integrates with a wide variety of different third parties to get all of our vulnerability information. As you're looking at the vulnerability information inside of the reports, you know what packages are vulnerable, you know what the CVSS score is, and you know whether fixes are available for these packages or not. 

Once we go through the assess phase, this is where the stopgap actually gets implemented. This is the aspect of the security gates being put in action to ensure that no new vulnerabilities and compliance concerns are introduced into the application. We start at the build phase, when developers are putting together a new image and getting ready to check it into the repo and build it. Well, we integrate with a wide variety of different CI tools, and we can actually stop those images from being built. But most importantly, we give feedback to the developers inside of the tools they're using, so they know why their builds failed, whether it was a compliance issue or a vulnerability. There's also feedback inside of the CI tools to help them understand how to fix those problems. 

They can find out if there's an update available for one of the packages they used with a vulnerability in it. They can find out what compliance rules they actually broke. Maybe they're trying to run a container as root, or with elevated privileges. We'll have the feedback in their build tools so they can address those concerns much earlier in the development lifecycle, as opposed to at the end. We do more than just inject ourselves into the build phase as well. We can make sure that some of these vulnerabilities and compliance issues that exist in images already in the repository never make it into production. We do that through our integration with container orchestration systems, like Kubernetes. 
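A build-time gate of the kind described, one that fails the build and hands the developer actionable feedback, can be sketched as follows. The policy threshold, rule names, and data shape here are assumptions for illustration, not a real CI plugin interface:

```python
# Sketch of a CI security gate: fail the build when a severity threshold
# is crossed, and return actionable feedback (what broke, and whether a
# fix exists) rather than a bare failure.

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def evaluate_build(image, threshold="high"):
    """Return (passed, feedback) for a scanned image."""
    failures = []
    for v in image.get("vulnerabilities", []):
        if SEVERITY_RANK[v["severity"]] >= SEVERITY_RANK[threshold]:
            fix = f"upgrade to {v['fix']}" if v.get("fix") else "no fix available yet"
            failures.append(f"{v['cve']} ({v['severity']}): {fix}")
    for rule in image.get("compliance", []):
        failures.append(f"compliance: {rule}")
    return (len(failures) == 0), failures

image = {
    "vulnerabilities": [{"cve": "CVE-2021-0001", "severity": "critical", "fix": "2.4.1"}],
    "compliance": ["container runs as root"],
}

passed, feedback = evaluate_build(image)
print(passed, feedback)
```

Surfacing the fix version (or its absence) in the failure message is what lets a developer act immediately instead of hunting through a separate scanner report.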

As an image gets ready to be deployed, or as an insecure package gets ready to be deployed, we work with those orchestration systems to figure out if that image contains any vulnerabilities or any compliance concerns. You can take a passive approach and implement alerting only. Or, you can implement blocking, which would keep any of these vulnerable images, any of these images with compliance concerns, from inevitably making their way inside of a production environment. The last step is just to continuously monitor. NIST would call this exactly that: continuous monitoring. It gives us the ability to monitor the health and the security posture of the application throughout its entire lifecycle. We can start by getting deeper visibility into the containers and images as they're being built. We can see how things change over time. 
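The alert-versus-block choice at deploy time can be sketched as one policy evaluated in two modes. This mirrors the passive and enforcing approaches described above; it is not the real admission-control integration:

```python
# Sketch of a deploy-time gate: the same policy can run in "alert" mode
# (admit the image, but record a violation) or "block" mode (deny it).

def admit(image_severities, mode="alert", threshold="critical"):
    """Decide whether an image is admitted, given its finding severities."""
    rank = {"low": 1, "medium": 2, "high": 3, "critical": 4}
    violation = any(rank[s] >= rank[threshold] for s in image_severities)
    if not violation:
        return {"admitted": True, "alerts": []}
    if mode == "block":
        return {"admitted": False, "alerts": ["blocked: policy violation"]}
    return {"admitted": True, "alerts": ["alert: policy violation"]}

print(admit(["critical"], mode="alert"))
print(admit(["critical"], mode="block"))
print(admit(["low"], mode="block"))
```

Running in alert mode first is the usual way to measure impact before flipping the same policy to block.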


Because we have those security gates deployed, we have the ability to reduce, or completely eliminate, the amount of drift that happens inside a production system. It gives us the ability to make sure that the secure boundaries we've implemented throughout the accreditation and authorization process are just that: continuously secure. So, I'm going to take this moment to shift gears a little bit. We're going to go from talking about secure continuous monitoring into Palo Alto's approach to cloud native zero trust. For those of you who are not familiar with zero trust, quite honestly, you haven't spent enough time at security conferences or talking to your vendors, because that is the buzzword of the year. 

Zero trust is something NIST has recently released guidance on. They've put together their special publication, SP 800-207, which is a very detailed look at how to implement zero trust and the wide variety of use cases it covers. Our approach to zero trust for the workloads running in a private cloud or a public cloud actually does something extremely unique. We take a look at all the workloads, and what we do specifically is decouple security from the IP address. As you understand, implementing network segmentation across a containerized orchestration system is very difficult to do. That's typically because there's a lot of NATing going on. There's a lot of IP overlapping going on between on-prem deployments and off-prem deployments. You traditionally can't put a firewall in between these workloads. 

So what Palo Alto has done is we implement our network segmentation specifically based off of machine identity, not the IP address of the workloads involved. We start by learning about the application as it gets deployed. We can see the way that all of the different components of the application communicate with each other. We can see this for components that communicate from an on-premises deployment to a cloud deployment. We can see this as applications are migrated from a data center into a public cloud. We can monitor the communication between those two components, even inside of different clouds. If we're talking about applications running in AWS that have other components they need to reach out to, close to the manager, we can see that as well. 

Even more important than that, when we're talking about containers, we can see the communication that happens inside of the node. So, two containers running on the same node, well, we can see the way that they interact with each other as well. As we learn about the way the application functions, the different network paths that get used, the different processes that are called, we put together a very nice visual map of the application so your operational team, your security team, and maybe even some of your architects can have a better way of understanding all of the different components that make up your application and how they communicate with each other. 
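A toy version of that learning step might look like this. The workload identities and flow records are invented; the point is that the resulting map is keyed by machine identity, not IP address:

```python
# Sketch of aggregating observed flows into an application map keyed by
# workload identity. In a real system the identities would come from the
# orchestrator's metadata, not hard-coded strings.
from collections import defaultdict

flows = [
    {"src_id": "frontend", "dst_id": "api", "port": 443},
    {"src_id": "api",      "dst_id": "db",  "port": 5432},
    {"src_id": "frontend", "dst_id": "api", "port": 443},  # repeated flow
]

def build_app_map(flows):
    """Aggregate flows into a graph: identity -> set of (peer, port)."""
    graph = defaultdict(set)
    for f in flows:
        graph[f["src_id"]].add((f["dst_id"], f["port"]))
    return dict(graph)

app_map = build_app_map(flows)
# app_map now shows each component's observed dependencies, regardless of
# where (on-prem, AWS, another cloud) each workload happens to run.
```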

Then you take that, once you have the understanding of how these applications communicate, well, that's when you can start to put together your segmentation strategy. The most important thing that I think any security practitioner would tell you is that there is a framework that we need to follow when we're implementing security controls and segmentation, traditionally referred to as "Crawl, Walk, Run." We don't want to inject ourselves and stop processes from communicating with each other before we know what the impact is. So, we take a passive approach. As you start to implement your segmentation policies, we're passively going to tell you which rules would fire, which rules would not fire, which processes would fail, which ones would not. 

That way, as your team gets together and gets ready to build out your rules, there's zero impact to your production environment. You're strictly monitoring the way that the application functions and verifying that you've tuned your rules the right way before they get implemented. Once you are ready to implement security, it's simply a toggle of a switch. We provide authentication and authorization for all the requests, and we do this using that machine identity-based enforcement policy that we talked about before. That's going to help prevent lateral movement inside of your environment in the event that any threats make it in.
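That passive, dry-run evaluation could be sketched like this. The rule format, the default action while still in monitor mode, and the workload names are all assumptions for illustration:

```python
# Hypothetical dry-run of segmentation rules: report the action each
# observed flow WOULD receive, without enforcing anything.

rules = [
    {"src": "frontend", "dst": "api", "action": "allow"},
    {"src": "*",        "dst": "db",  "action": "deny"},  # too broad?
]

observed = [
    ("frontend", "api"),
    ("api", "db"),
]

def dry_run(rules, observed):
    """For each flow, return the action of the first matching rule."""
    results = {}
    for src, dst in observed:
        action = "allow"  # default-allow while still in monitor mode
        for r in rules:
            if r["src"] in ("*", src) and r["dst"] in ("*", dst):
                action = r["action"]
                break
        results[(src, dst)] = action
    return results

report = dry_run(rules, observed)
# The report flags that api -> db would be denied, so the broad deny rule
# needs a carve-out before enforcement is toggled on.
```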

That's the unique way that Palo Alto handles microsegmentation from a workload perspective. We provide zero trust network segmentation for any cloud. We've got a wide variety of different use cases and implementations that we can provide for this network segmentation strategy inside of the cloud. One, your teams can use it to separate different environments. Maybe you're separating environments based off of team. Maybe you have one team that builds one application and another team that builds another application, and they're deployed throughout the same cloud. Well, we can segment the environment that way. We can also segment the environment based off of what level the systems are in. Maybe you want to segment development from QA, from staging, from production. Well, this tool gives you the ability to implement that level of segmentation. 

Maybe we're talking about real zero trust. Maybe we're talking about implementing granular segmentation for a high-value asset. Well, Prisma Cloud will give you the opportunity to provide segmentation based off of, obviously, port and protocol, but it also gives you the ability to implement segmentation all the way down at the process level within your applications. As we spoke about before, continuous security is something that's top of mind. It's something that is actively being pursued throughout our federal [inaudible 00:40:37] team, and continuous security, like the continuous ATO we already spoke about, is the major use case that we see being deployed right now. Maybe we're talking about securing two different environments. As these applications start to get deconstructed and migrated into the cloud, some components may live on-prem, some components may live in the cloud, some components might live in a whole separate cloud provider. 
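Process-level segmentation can be thought of as an allow-list learned per workload. A minimal sketch, with hypothetical workload and process names:

```python
# Toy process allow-list: beyond port/protocol segmentation, only processes
# observed during the learning period may initiate connections from a given
# workload. The baseline below is invented for illustration.

learned_baseline = {"api": {"gunicorn", "python3"}}

def process_allowed(workload, process, baseline):
    """Deny any process the learning phase never saw on this workload."""
    return process in baseline.get(workload, set())

# The learned web server process is permitted; an unexpected tool like
# netcat is denied even if it uses an approved port.
```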

Well, wherever your application components live, we have the ability to implement segmentation across those environments, and most importantly, ensure that the same policies are implemented whether it's the public cloud, the private cloud, or inside of your own data center. The last major use case that we're going to cover is the ability to encrypt data in transit. There's a wide variety of use cases out there to implement user-level encryption. So, user [inaudible 00:41:34] application. A major construct of zero trust as NIST defines it is actually having the ability to implement encryption for east/west traffic. That's container-to-container level traffic. The easiest way to implement this is with Prisma Cloud. It's just a simple toggle of a switch, and then you can have all your east/west traffic encrypted, meeting your zero trust controls. 

I hope that I was able to give you guys a little bit more information on how you can improve the security posture of your applications. I hope your team has a deeper understanding of how we can help you with deeper visibility into your application across the entire system development lifecycle. 


Thanks for listening. If you'd like more information on how Carahsoft or Palo Alto Networks can assist your organization, please visit www.Carahsoft.com or email us at paloaltonetworks@carahsoft.com. Thanks again for listening, and have a great day.