CarahCast: Podcasts on Technology in the Public Sector

Topics in Government Mainframe Transformation to Azure Gov Cloud with Microsoft

Episode Summary

In this podcast, Topics in Government Mainframe Transformation to Azure Gov Cloud, Microsoft's Azure Global Engineering – Critical Infrastructure team discusses transitioning from mainframe, midrange, and other non-x86 platforms into Azure Gov Cloud, including use cases, technology patterns, and example reference architectures.

Episode Transcription

Speaker 1: On behalf of Microsoft and Carahsoft, we would like to welcome you to our podcast focused on topics in government mainframe transformation to Azure Gov Cloud, where Microsoft's Azure Global Engineering – Critical Infrastructure team Vice President and Chief Technology Officer Bill Chappell and principal program managers Jonathan Frost and Larry Mead discuss common topics in government transformation of mainframes, midrange, and other non-x86 platforms into Azure Gov Cloud.

Bill Chappell: It's an exciting day here to be able to present the work that the team is doing. My name is Bill Chappell, and I am the Vice President and CTO of Azure Global Engineering. We focus on special capabilities of our cloud, which includes things like enhancing critical infrastructure and making sure people can migrate onto our cloud and protect what they bring, but also unique uses of our cloud, like for the space industry and for 5G technologies. Today we're going to focus on getting onto the cloud in the first place: how do we upgrade our legacy systems, and how do we protect those systems once they are there? I'm joined by Jonathan Frost and Larry Mead. We'll walk through each of these topics throughout the day, and we'll start with our bios. I joined Azure from the government; I was the Microsystems Technology Office director at DARPA. Before that, I was an academic at Purdue University working on next-generation electronics, specifically with a focus on the intersection of digitization and the wireless spectrum. When I went to DARPA, I ran a series of programs on next-generation hardware and computing payloads for the Department of Defense. As I came to Microsoft, I was able to build a team that carries that intersection of next-generation technology development and national security and national defense, focusing on both government and critical infrastructure customers. So with that, I will pass it off to Larry to introduce himself.

Larry Mead: Hello, everyone. My name is Larry Mead. As mentioned earlier, I'm a principal program manager with the Azure Global Engineering team in the critical infrastructure area, and I've been with Microsoft for over 25 years; I started back in June of 1995. For 17 years prior to that, I worked on various technologies, including IBM and Unisys mainframes and what I call enterprise Unix systems, which would be things like AIX, Solaris, and HP-UX. I've worked in both the government space and the commercial space. Within the government space, I've worked on aerospace and defense as well as civilian systems; as part of that, I used to develop systems for mainframes and actually ran a few big ones way back when. I concentrate on transforming workloads that most people would consider mission critical, things that just can't go down, into running in Azure. Quite often those come from mainframe or other legacy systems, and what we try to do is make sure they run at both the volume and the availability that you need for those types of systems. That includes working with the software vendors and the systems integrator partners to make sure those run properly, but it also means we work with the architects and leaders at our customers. So that's the kind of thing our team does. And with that, I'm going to turn it over to Jonathan.

Jonathan Frost: Thanks, Larry. Thanks, Bill. I'm in the same group as Larry. My name is Jonathan Frost, also a principal program manager with the Azure Global Engineering group in the critical infrastructure area. Like Larry, I work in a very similar arena, where we are very focused on federal government, defense, and DoD, and on complex workloads that we like to call non-x86 workloads. The title for this webinar is mainframe transformations, but we're also going to be covering what we like to call midrange and other non-x86 platforms as well, and we'll be discussing some of those patterns and pathways into Azure. As for myself, I've been with Microsoft for over 13 years. Prior to my current team with Larry, I was a lead in our Azure data migration group, and prior to that I had spent a lot of my career architecting and leading development of large, complex enterprise systems, both in Azure and, before the cloud, on premises. With that, I will hand it back over to Bill for an introduction to a few areas around Azure and our global reach.

Bill Chappell: Great. So, when we talk about Azure, we talk about having the world's computer, and I'll be the first to say there's quite a bit of hubris behind a statement that grandiose. But the reality is, we do have 70-plus Azure regions, which means redundancy inside those regions and multiple data centers for high reliability, and 220-plus data centers across the globe connected by the first- or second-largest fiber network, depending on the accounting that you use. So it's a very large network with lots and lots of redundancy around the globe. What has emerged recently is ExpressRoute partners across the globe, where you get a direct connection into our network. We spend a fairly hefty amount of money every month building out this infrastructure across the globe, and this is for our global partners. So that is one side of the story, and it's what you'll often hear mentioned in the press. What's happening at the same time, though, is tailoring those resources specifically for unique customers like government. It's not about just building more and more global assets; it's actually tailoring those assets specifically for the needs of our highly regulated industries and our government customers. We have the most comprehensive compliance coverage in the industry. You can see this listed here; I won't drain the slide and go through every one of these, but you can see that we have a very robust set of regulatory hurdles that we've cleared, and when we clear them, it makes it easier for you to operate in these regulated environments. If you look at the aggregation effect of the cloud and what we can provide, one of the aggregation effects is not just lower-cost compute and higher- and higher-reliability compute, it's actually the ability to be certified, accredited, and trusted across these many different regulatory hurdles. Specifically in the US, if you look at what has been built and what's been announced, we have our commercial system, which is FedRAMP High; we have government, which we call US Gov, which covers Impact Level 4 and Impact Level 5 and handles ITAR information; and Secret is live now. We have multiple customers operating there with high redundancy and multiple separated regions, so you don't have any type of geo-specific failure mechanism; you still get the redundancy and reliability of the cloud, but at that higher level of support required for the government. What's recently been announced is our Top Secret system; it's going through accreditation right now, with similarly geo-redundant capabilities and a large enough scale that you can do very, very large workloads, even though it's catered to those specialized users. If you look at how we're approaching that cloud infrastructure, whether it's on the commercial side, in the DoD or intelligence space, or for the broader government, we are still maintaining our high level of availability. One of the things that we pride ourselves on is commercial parity. We aren't trying to build something for the government that is a unique snowflake that is hard to operate. If you have something on the commercial side, we're working very hard to have parity as you cross the barrier across those different clouds; we have a guarantee to the government of 30-day freshness and parity of our SLAs. We support hybrid cloud and on premises, so I think that would be useful for this group.
Our legacy is that we've done a lot of hybrid cloud, specifically for our internal uses: we have some of our major products that support systems around the world, and they are oftentimes a hybrid instantiation. We're putting cross-domain solutions across the community, and that's something we are building with our partners so that we can have a flow of information across the government that doesn't exist today, pairing that with our global fiber network. When you combine that commercial parity focus and that cross-domain focus, where we have lots of information that can get to where we need it, I think you have a unique capability that will support the government for years to come. We are all in. We have spent a lot of money to support the US government, that is not going to change, and these capabilities are here and here to stay. As for how that applies to legacy migration, I'll pass it off to the experts and they will take it from here.

Jonathan Frost: Thank you, Bill. Excellent background there and a great segue into our main topic for the presentation, which is what we refer to when we say the word legacy: it covers mainframe, midrange, and other non-x86 platforms. These are the classic high-performance, high-throughput critical infrastructure systems on premises, so the question is: what are the pathways to take these systems, keep parity for them while modernizing, and have them run in the Azure cloud, and specifically in the Gov cloud when it comes to DoD and the US government? This slide shows in one view our point of view on the landscape of these artifacts, and that all of these artifacts actually have a pathway into Azure. Talking briefly through some of these, with some quick background that we'll get more into in the subsequent slides when Larry talks in a few minutes: in the mainframe space, the two big classic systems that we see are the IBM Z, so that's going to be your z/OS and your z/TPF, and then also, now more commonly, z/Linux running on those mainframes. On the other side of the coin are the Unisys systems, historically known as the Libra and Dorado, which run the MCP and OS 2200 operating systems. For Unisys as well, we have pathways into Azure for both re-host and refactor patterns. On the midrange and enterprise Unix side, this is where we get into the IBM Power Systems, historically known as the AS/400 systems; this is where you have your IBM i, which is of course what the AS/400 evolved into. With these we again have pathways into Azure, and we also have dedicated hardware options for these as well. We have some very specialized ISV partners with whom we have engineering-to-engineering connections, where we are trying to improve Azure through iterations and through engineering feedback. And then Larry and I work with customers to also oversee migrations into Azure and learn from those experiences, to get more feedback and make Azure the best place possible for these platforms to land, whether in a re-host or a re-architected pattern. Lastly, we have other non-x86 such as Sun SPARC, where you have Solaris running on the SPARC processor. It's kind of counterintuitive how that could run in Azure on x86 hardware, but we actually have pathways for emulation, not only for SPARC but also for DEC Alpha and PA-RISC based systems, and lastly also for other Unix variants like AIX and HP-UX we have pathways into Azure as well. From a sizing standpoint, it's not only the smaller MIPS counts but also up to potentially 100,000 MIPS, so very large, complex systems in the mainframe space. Potentially, if it's an IBM z/OS system that has a Parallel Sysplex and coupling facilities, and you have that kind of high-availability and scale-out pattern, we actually have patterns and reference architectures for how to move that into Azure as well. For a bit of clarification, MIPS is millions of instructions per second, which is essentially a scaling factor for the throughput and compute power of the mainframe. And from here, I'll hand it back to my colleague Larry, who will talk through some of the transformation patterns from our point of view.

Larry Mead: Thank you, Jonathan. We find that customers, almost wherever they come from, whether DoD, civilian, or commercial work, fall into a number of different categories in terms of what they're after. There's no one thing that a customer is after, and sometimes we have to work with them to find out the reason they're doing things. Just to give you an example that applies to a DoD system I worked on about 16 years ago: we thought going into it that cost was the main driver, because there were a number of mainframes the customer had and they were looking at whether they should be doing something else with them. Interestingly enough, after talking to the senior leadership, one of the generals said to me, why would I put at risk my billion-dollar asset, 70 satellites at the time, to save $50 million a year? And I thought, well, you probably wouldn't. But the real problem they were having was really more about agility: they had new systems they were putting into orbit, and they had new features that they needed in those systems, so being able to respond in that legacy mainframe environment was a much more difficult thing for them to do. So what we did was find a way, as they were transforming and sending up new assets into space, to bring online new systems that could handle those types of capabilities. That's an example of what our team does: we try to find the best choice for working with a customer as they go through this sort of transformation. But there are other factors. Sometimes cost is the main driver; other times it's really about flexibility and choice, and there are other types of systems the customer wants to introduce. And then finally, skills. Skills in some of these legacy platforms, the mainframes, both Unisys and IBM, are sometimes hard to find, especially with some of the languages that aren't commonplace. The commonplace ones are things like COBOL and Fortran, but there are also less common ones out there like assembler, and certainly in the DoD space I've run into something called JOVIAL, and it's much less common to find someone who has those types of skills. So finding a way to migrate those off is one of the main kinds of things that we can do. But you can't just say, I'm going to move it; you have to have the ability to deliver the kind of functionality those systems have accumulated over years. Remember, a lot of these systems have been in production for maybe 20 or 30 years or longer, so they run very reliably because of that, and they've also had a lot of features built in. So we don't just say you can move it across without taking all of these things into account, to make sure that we give the same kind of capabilities, and hopefully better, do it at a more reasonable cost, and also gain the ability to enhance the platform as we go. So it's sometimes not a single factor that a customer has but more than one of these, and for a large customer you might have different systems, and each system might have a different reason for doing this.
The important thing is that, since we do have a fair amount of experience with doing this, we can at least help guide in what are the right ways of taking that and hosting it in our Gov cloud. So just to talk a little bit about what it is we do at Microsoft for this: we have what we call a mainframe transition program, and while Jonathan, Bill, and I represent engineering, we also work with other groups within Microsoft, such as our infrastructure teams that are more focused on direct customers, as opposed to doing the engineering piece at corporate. We work those people into our group, plus we work very closely with the tools and technology providers. Jonathan mentioned that briefly; we have partners that we work with to do that, and as I mentioned earlier, we work with systems integrators and managed service providers to also be able to host systems. One of the things we want to do is come in and make sure we're bringing the right ecosystem, with the tools, the service providers, and Microsoft solutions, to find what we need to run in Azure for these types of systems. What that does is reduce the risk through proven experience, and at least from my experience with government systems, reducing the risk is one of the things you really have to show before people will take this into account. In that example I gave with the Air Force general, he had no interest in any kind of risk, but he did have an interest in modernization; if we could show how to do that in a manner that reduced the risk, then that was something interesting to him. So let's look at some of the ways you can go into Azure from these systems. What we have here, if you're familiar with Gartner, is something called the five R's: retire, retain, replace, replatform, and re-envision, and we added a sixth one, which is remote hosting. I'm going to go to that one first. Remote hosting is where we have a partner who actually hosts mainframes that you might use as a stopover in order to get into Azure. Why would you do that? Well, if you have to close down your data center, and it has to be done very quickly, you may not have the time to go through the process we need to host it, with the reduced risk we need, in Azure. Under those conditions, you might actually need to have a Unix system or an IBM system running in one of these hosted environments. Another thing to keep in mind, as Jonathan mentioned: we have partners that have racks of Power Systems, and those are actually running in an Azure data center, more than one Azure data center. So we do host non-x86 hardware in Azure. If you need the Power platform, which is predominantly either IBM i or AIX, then that is another way of doing what I would call remote hosting of systems without really any change to them; then, when you want to transform them into more cloud-native solutions, you can do that at a pace that meets your schedule. I'll go through the others very quickly and show how they apply. Retire: we find that not all applications are used. I would say easily 20% of what we typically find with a customer is something that does not need to be migrated.
This is especially useful if I'm re-architecting the solution, because it means I don't have to invest in a system that I'm no longer using, and there are tools that will tell us whether these systems are still in use and whether or not they're getting CPU time in their current environments. We do run into retain: this is where a customer doesn't move everything off a mainframe at once. In other words, they've got some systems running in the Microsoft cloud and they've got some systems that are still on the mainframe. We know how to do that, sometimes that's called a hybrid system, and how to do it in a reliable fashion. The next one to look at is replace. Quite often, a lot of systems were built years ago, when there was not a packaged product that could do something similar. We've seen a number of customers, especially if they're getting into things like HR systems or maybe financial systems, go to something like SAP or something of that nature to move those, and those can be hosted very robustly within Azure. I'm not going to spend a lot of time on how that's done, because we actually have a separate team that discusses how to do that, for instance moving SAP into Azure. Probably the last two boxes, the one with re-platform, re-host, and refactor, and the one with re-envision, are what we do most often. First, what's the difference between re-platform, re-host, and refactor? Re-platform means I don't want to change anything in the execution of the systems; I might not even have the source code to recompile it on another system, so I've got to take the load modules as they exist on a mainframe and then be able to move those over to another platform. We do work with partners that allow that kind of capability. That's probably not something you do with all your systems, but if it's, for instance, the last thing that is keeping the lights on in the data center with the mainframe, maybe that's the right thing to do to get it moved into a cloud platform. So that is a possibility. Re-host, to us, means you're taking the source code and recompiling it for the target platform we're running in Azure. In Azure, the ways we can do that include, for instance, VMs, which is probably what most people are familiar with, and that could be either Linux or Windows. But we can also do it by deploying in container solutions: you're not changing the code, but you could deploy it, for instance, in Docker containers and things of that nature. So these are some capabilities that exist in the re-host arena. The last one I have in that box is refactor. Refactor means I want to get off the language, or I want to deploy much differently with the language that I currently use. This would be, for instance, I've got COBOL code and I want to change that code either into Java or into C#. With those refactors, I'm not planning to re-architect how the code works, but I want to have different languages, so I could perhaps use people who are familiar with these newer languages to do the work to maintain them going forward, and I'll worry less about maintaining the legacy skills. We see a fair amount of demand for that.
That particular area has really picked up. As I mentioned, for some of the systems that we work with, there are fewer and fewer people who have the capability of both maintaining and developing them. The last one is re-envision, sometimes referred to as re-architect. This is where I really need a different system; the old system is just not doing what I need it to do. That goes back to my example of the satellite system: re-envision was looking at the new assets coming online, which had different capabilities, so I needed to re-envision how I was going to integrate them into my environment and not try to force-fit them within my existing mainframe environment. That's an opportunity to leverage what I'll call cloud-native solutions that allow for scale-out capabilities and other things of that nature. So these are the different approaches we take; no one approach fits all, and that's why I say, when we look at working with customers, we look at what is the right thing for their situation. Sometimes the right thing is actually to take it in multiple steps: in other words, I might want to re-host as the first step, refactor as the second step, and then re-architect or re-envision as the third step. So there's the North Star of where I want to be versus the practicality of needing to get there now. These are all possibilities and things that we work with, and it comes back to this: what you're trying to accomplish drives which one of these types of solutions you want to pick, and we're more than happy to look at the best way to do it. We'll also be very open and honest in saying that if you're looking at doing certain of these, and you have certain business goals and capabilities in mind, perhaps one method is not going to meet them as well as another, and we're more than happy to at least make sure you understand what those trade-offs are. Okay, so now I'm going to go a little bit deeper, and I'm going to pick specifically on the IBM z/OS system. This is just looking at some of the things we've done over the years so that you can run things with the kind of reliability you'd expect from a mainframe system. My experience with mainframes goes back to the mid-to-late 70s, and we have another person who actually goes back to the early 70s, so we basically understand where you are coming from if you have a mainframe, and then we can see how you are best able to do this. This goes from everything from improving computer chip architecture with our hardware providers, to looking at how mainframes do some pretty nice things for scale-out capability in a tightly coupled manner. That's essentially what a Parallel Sysplex can do: it works with something called a coupling facility to allow multiple systems to scale out and have more power in a shared environment, and those can actually be run in a geo-dispersed environment. Okay, well, how would you do that on Azure?
Well, we have the capability of running some of the features we have, both within our database and within some of our partners' solutions, that allow us to share the data and the memory needed, also through things like caching, and to do that with concurrent systems that allow us to do rolling upgrades, which gives you the capabilities you would have with a Parallel Sysplex. And one of the great things about Azure, and Bill's already mentioned this, is that we have geo-dispersion for our systems, both for our compute and our data, so that's more or less built in; it's a built-in capability you can have if you choose to implement it. Some of the other things: mainframes were actually early to the game with virtualization, they've been doing virtualization since the 70s. Of course, we offer virtualization in Azure, and recently we're even working with other virtualization partners to do these types of things. Bill also mentioned security. The mainframe offers things like RACF, Top Secret, and ACF2 as security systems; what we do is support those types of capabilities using either our Active Directory or Azure Active Directory, where we enhance the schema of our systems to take in the attributes that these other systems have. You look at capacity on demand, which is something mainframes have; well, we have elastic compute, and not only that, we have things called scale sets that allow you to have compute on demand. You want to take the data and turn it into action; a lot of IBM systems use SAS and other types of tools, but we have a very rich environment you can run in Azure. One of the things we've seen, for instance, if you are moving data into Azure: I've worked with some Air Force aircraft manufacturers who've got maintenance data, and they can take that maintenance data and run analytics on it versus what's really been happening in real life, to anticipate fixes rather than be reactive to them. Other things: developer productivity, that's a big thing when you move to the cloud; you now have all the cloud DevOps and tools that are available to you. And when I mentioned changing the way you work with systems, we have the ability to work with some of the latest technologies to deploy into Azure. That would be things like Azure Kubernetes Service, which is what AKS is, to something called Service Fabric, which is more like serverless compute, to just regular raw images within containers like Docker, and we even work with OpenShift for deploying in our Azure cloud. So with that, this is one of the questions that comes up quite often, and I want to address it: how do I size something? I've got a mainframe, this big, powerful thing; how can I run this in a cloud? Well, we actually worked with Gartner to do some studies of the benchmark that we have; we did it with a partner called Micro Focus, and we also did it with HP (they were HP at the time). What we found is that you can run a lot of compute with the cores that are available in x86 systems.
In a typical batch workload, because most legacy applications have an online component and then batch systems that run in a batch window, you can get about 170 MIPS per core, whereas something that's running online through CICS, which is kind of like an app server on the mainframe, can actually get even more power per core. The point here is that we do have experience with sizing this, and you can actually get quite a bit of power out of these systems, and we've got the ability to deploy literally thousands of cores if we need to. So we can run almost any workload in Azure; we just have to properly size it and administer it. So with that, Jonathan and I will go through some of the systems you might want to consider putting up there. Some people might have thought, well, I don't want to do my mission-critical systems. But let's take high availability: I want to enhance the availability, and that's very expensive to do with a mainframe, where I might have to introduce a Parallel Sysplex. I can enhance my availability within Azure by adding another region; I can have the ability to fail over within a region and then fail over to a different region if that becomes an issue. So we can typically enhance overall availability, and certainly your SLA, your RTO and RPO, the recovery time and recovery point objectives for handling things if you have some kind of disaster. Another thing that we almost always run into is cost management. The thing about a cloud is that clouds are much more dynamic: you can spin things up and down as you need them, you can have scalable VMs, and if you're using PaaS services, they typically respond even more dynamically than VMs. And with that, maybe Jonathan, you want to talk a little bit about some of the features there.
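To make the sizing math concrete, here is a rough back-of-the-envelope sketch in Python using the roughly 170 MIPS-per-core batch figure mentioned above. It is only an illustration: the 11,000 MIPS workload and the 30% headroom factor are hypothetical, and real sizing would rest on benchmarking the actual workload mix.

# Rough, illustrative sizing sketch only; real sizing depends on workload mix,
# I/O profile, and benchmarking. The ~170 MIPS-per-core batch figure comes from
# the discussion above; the 11,000 MIPS workload and 30% headroom are made up.
import math

def cores_needed(workload_mips: float, mips_per_core: float, headroom: float = 0.30) -> int:
    """Estimate x86 cores for a given mainframe MIPS rating, with spare headroom."""
    raw = workload_mips / mips_per_core        # cores at 100% utilization
    return math.ceil(raw * (1 + headroom))     # round up after adding headroom for peaks

if __name__ == "__main__":
    example_workload = 11_000                  # hypothetical mainframe at ~11,000 MIPS
    batch_mips_per_core = 170                  # batch throughput per core cited above
    print(cores_needed(example_workload, batch_mips_per_core))   # -> 85 cores

Online CICS-style workloads get a higher MIPS-per-core figure, so the same arithmetic yields fewer cores for the online portion of the system.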

Jonathan Frost: Thanks, Larry. Great points there. Just to share a few more examples and a couple of quick stories from past projects in the federal and defense space. On the cost management topic: I was working on a recent migration, now in production, for a defense customer, where one of the key capabilities was moving the database and the data from a hierarchical database on a mainframe system into Azure SQL Database. Part of the great offering of the PaaS platform is that we have multiple tiers within it, so we have the General Purpose tier, but we also have a Business Critical tier, which has a higher SLA, a better RTO and RPO for recovery, and the newest and most powerful hardware underneath it, and that proved ideal for the requirements of the system. We're able to scale the number of cores dynamically, where we run test loads at one level and then, simply by changing the slider for the vCores, do another run and find exactly the point where we meet the performance and the response times we're looking for, without overcommitting or undercommitting the resources. That's one of the key advantages of going to that PaaS service, and we can leverage it for cost management to find that ideal spot for the service. Another key point here, which Larry touched upon earlier, and Bill as well, is unlocking cloud capabilities. A key pattern that we're seeing with re-host and refactor is that we are getting into Azure, which is fantastic, but there are so many services we want to be able to leverage beyond simply re-hosting a system in that environment. One quick example and story on that, in the re-host space: I'm working with a customer right now where they have a mainframe system they're looking to move into Azure, but one of the key capabilities they're looking at is Azure Cognitive Search, which is a machine-learning-enhanced search and indexing service that you can feed training data, which you can then leverage to have a richer search and indexing experience. Even though the mainframe data, if it's re-hosted into Azure, is still running in the hierarchical data model in a wrapped mainframe environment, there are actually data endpoints that can be exposed, which can then be consumed as training data for the cognitive search. So it's an example of how, even with some of these patterns like re-host, we can still take advantage of the capabilities that are in Azure and bridge the modernization. Maybe someday that re-host pattern can be converted to a re-architecture pattern within Azure, but you don't have to do that on day one; you can make incremental progress, and even in that first step there are ways to take advantage of those capabilities. Later on, we have a few reference architecture slides in which we show examples like that, and also things like virtual tape, and how you can take physical tape or virtual tape on premises and use things like Azure Blob Storage to have a lower-cost solution for that. So I think that brings us to our next slide, which is one of our refactor patterns. Larry, do you want to talk on this reference architecture?
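As a small illustration of the idea described above of changing the vCore slider, rerunning the test, and stopping at the smallest setting that meets the target, here is a hedged Python sketch. The latency numbers and the 250 ms target are entirely hypothetical, not results from the project mentioned.

# Illustrative only: hypothetical load-test results, not real project data.
# The idea: run the same load test at several Azure SQL vCore settings and pick
# the smallest setting that still meets the response-time target, so the
# service is neither overcommitted nor undercommitted.
TARGET_P95_MS = 250                      # hypothetical response-time goal

test_runs = [                            # (vCores, measured p95 latency in ms), imagined
    (4, 610),
    (8, 340),
    (16, 230),
    (32, 210),
]

def smallest_tier_meeting_target(runs, target_ms):
    """Return the lowest vCore count whose measured latency meets the target, or None."""
    passing = [cores for cores, p95 in runs if p95 <= target_ms]
    return min(passing) if passing else None

print(smallest_tier_meeting_target(test_runs, TARGET_P95_MS))   # -> 16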

Larry Mead: Sure. This one is where you essentially have COBOL programs that you either want to re-host as COBOL programs within Azure, which is the right-hand stack here, underneath the load balancer, or refactor, potentially into Java, to run within a Kubernetes cluster, which is what the left-hand stack is about. Essentially we have patterns, and we've worked with partners that do this, where they can take the code as it was originally written, without dramatically modifying the business logic, or perhaps not modifying it at all, and then deploy it either in a newer language like Java or keep the COBOL, because they want to move forward with COBOL. And interestingly enough, you can host the COBOL in a Kubernetes cluster too, with some of our partners. So there are capabilities to run it either in a more traditional VM or in more of a Kubernetes cluster. Just to show what this includes: the way we're able to take advantage of hosting in these environments is that Azure has built-in load balancers to do that. Importantly, off to the right side, we've got the different types of data stores, either to hold data in transition as it goes through the process or to store it permanently. We can be running analytics, so we have Spark analytics we can do with Databricks, for instance, or we could be doing Stream Analytics, and we could be doing that simultaneously while running through an update process; we have a number of customers going through that. When we need to store this, we probably need regional capability for the storage, the backups, etc. And Azure Data Factory is what we use to ingest data, or to take data and move it back out, I should say, from the cloud system you're running to other systems in use. So this gives you a little bit more of an idea that it's not just the programs that get transformed or re-hosted; they are then hosted within a cloud environment that provides all these types of capabilities. And with that, I think I'll let Jonathan talk about the next architecture.

Jonathan Frost: Thanks, Larry. Just to build on that and connect back to the non-x86 migrations into Azure, specifically for SPARC Solaris, this is an example of one of our other reference architectures; both this one and the previous one are published on the Azure Architecture Center, and there are links to them on our Carahsoft microsite as well. To talk through this one: it shows a few key things. First, if you have SPARC-based Solaris instances, workloads, and code running on premises, then by using backup and restore patterns we can actually have that running in Azure by virtue of an emulation technology that runs on Azure VMs, both Linux and Windows; in this case, we're showing mostly Linux VMs. With the agent running on those, we can essentially scale out the workloads, where you can have one or more child VMs inside the Azure VM running the instance of Solaris. It thinks it's still running on a SPARC processor, but actually it's running in a kind of container-like environment on the Azure VM. What's also key here is that these IP addresses are able to be specified by adding multiple network interface cards to the parent VM, so there are cases where we can actually preserve the private IP addresses from on premises into Azure to minimize the impact of the migration. This is a great example of one of those first steps, or bridge migrations, into Azure, where you can lift and shift that SPARC workload even if the source code is lost, even if the original installation bits are lost: by doing a backup and restore, just as if you're backing up to disk or to tape, you can restore this environment into an installation of Solaris in Azure. The other key piece of this architecture is the Azure storage account. This emulator also provides a virtual tape emulator, where we have worked with the engineering team for this ISV partner and documented how it works; we tested in our lab using Azure Storage for the virtual tape and were successful, so we can offload the virtual tape into Azure Storage, which of course is a much lower-cost storage infrastructure. Moving on to some other topics here, I'll let Larry touch upon some of these concepts and terminology.
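As a minimal sketch of the virtual-tape-to-Blob-Storage idea just described, here is a short Python example using the azure-storage-blob package. The connection string, container name, and tape image filename are hypothetical placeholders, and in practice the emulator product handles this integration rather than hand-written code like this.

# Minimal sketch only: offloading a virtual tape image to Azure Blob Storage.
# Requires the azure-storage-blob package; the connection string, container
# name, and file name below are hypothetical placeholders.
from azure.storage.blob import BlobServiceClient

CONNECTION_STRING = "<storage-account-connection-string>"   # placeholder
CONTAINER_NAME = "virtual-tapes"                            # hypothetical container
TAPE_IMAGE = "nightly-backup-0001.vtape"                    # hypothetical tape image file

service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
container = service.get_container_client(CONTAINER_NAME)

with open(TAPE_IMAGE, "rb") as data:
    # Cool or Archive access tiers can reduce cost further for rarely read tapes.
    container.upload_blob(name=TAPE_IMAGE, data=data, overwrite=True)

print(f"Uploaded {TAPE_IMAGE} to container '{CONTAINER_NAME}'")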

Larry Mead: Just to look at some of the things we address: we do address mainframes from their compute perspective. We look at what we'll call the general-purpose engines, which would be what you're running z/OS on, but we also look at the IFL, which is the Integrated Facility for Linux, and also the zIIPs, which are essentially integrated information processors, used largely as database processors. We are familiar with how to properly get those sized and running in a cloud environment like Azure. We are familiar with how LPARs work on mainframes and IBM Power systems, and that actually translates very easily into things like VMs, so we support the multiple operating environments mentioned. We even support migrating from z/TPF; it's not often used, but if you have it, you probably want to get off it, and I know there are a couple of government customers who have it and are probably looking for a way off. As mentioned, we've got both the legacy Libra and the legacy Dorado systems, MCP, OS 2200, the various SPARCs, etc. We're very familiar with how high-performance storage is used on mainframes. So again, there are lots of different languages that we've had to work with. Now, here's an interesting one: mainframes have something called HiperSockets, which allows very low-latency interaction between data and programs and between programs, for instance Linux programs talking to z/OS programs on the same mainframe backbone. Based on that, we have familiarity with how to deploy in Azure so that we don't introduce a lot of extra latency that can ruin that. We've also already talked about the coupling facility and Parallel Sysplex, which is essentially scaled-out, tightly coupled compute. And then we didn't really touch a lot on the different databases we can work with, but we've worked with things like relational DB2, hierarchical IMS, the databases that run on Unisys platforms, and also more flat-file kinds of things like VSAM, the virtual storage access method. We also have a lot of experience with the various transaction processors, both CICS and IMS TM, but also COMS and TIP on the Unisys machines. So we cover all of those different systems, and we are actually going to have reference architectures for each one of those as time goes on, too. Just to jump into the summary here a little bit: one of the main things I'd like to make sure we're getting through is that we've done this before. It's not the first time we've worked on these types of migrations; we have people with a lot of experience with these mainframe and Unix systems. As I mentioned, I go back to the mid-to-late 70s, and we have another person that I work with, a colleague, who goes back to 1971, so we've been doing this for a while. In fact, that person actually holds patents on some of the technology that IBM is still using today in their mainframes. That has allowed us to develop a mature ecosystem of both services and partners. It's not just a couple of people; we have companywide resources. It's the people in engineering, it's the people in our corporate sales.
It's our consulting organization and services folks. We have another group that works specifically on delivering the data solutions; in other words, if you wanted to migrate DB2 into a SQL Server PaaS solution, say Azure SQL Managed Instance, they have experience doing that. We also work tightly with the partners; we have something called a partner success unit that allows us to make sure we're kept up to date on the capabilities there. And then we have a lot of technologies that we can specifically apply to mainframe-style solutions to provide the types of performance and availability that's needed. I'm not going to drain this entire slide, but it's just a way of showing that we do have these capabilities in place. Thanks for listening.

Speaker 1: If you'd like more information on how Carahsoft or Microsoft can assist your mainframe transformation to Azure Gov Cloud, please visit www.carahsoft.com/microsoftazuregov, or email us at microsoft@carahsoft.com. Thanks again for listening and have a great day.