CarahCast: Podcasts on Technology in the Public Sector

Cybersecurity Executive Order: 10 Database Security Best Practices to Help Meet Mandates with Trustwave

Episode Summary

Listen to experts discuss database security best practices to assist in meeting the Presidential Executive Order on Cybersecurity.

Episode Transcription

Speaker 1: On behalf of Trustwave Government Solutions and Carahsoft, we would like to welcome you to today’s podcast, focused on the Cybersecurity Executive Order: 10 Database Security Best Practices to Help Meet Mandates, where you will hear data security experts at Trustwave Government Solutions share the greatest risks organizations face when it comes to database security, along with tips for creating a strong database security program.

Travis Lee: Thanks, everyone, for joining us today, for those of you listening live. To start off, I want to quickly go over what we're planning to share with you today to help you better understand how you and your agencies can protect and secure the databases within your environment in line with the Executive Order, and how you can work toward meeting the Executive Order requirements. We're going to go over the importance of protecting databases and sensitive data, and what the NIST-defined security measures are for protecting critical software. Then Joe and I are going to talk through 10 database security best practices that will help you meet those mandates and secure your databases. And I'll finish by talking through a proven methodology that, over 20 years of experience working with federal agencies and commercial organizations, we've seen succeed in achieving database security. To start us off, I'd like to share the importance of protecting databases and sensitive data as it relates to Cybersecurity Executive Order 14028. Most of you are by now very aware of the May Executive Order, the direction that came from it, and the follow-on information coming from NIST, CISA, and other organizations to define what critical software is and the security measures that are needed or recommended to meet the guidance in the Executive Order. With this, clear information has been shared about what should be done and the timeframes, given in a memorandum put out by the Executive Office of the President, the Office of Management and Budget. On August 10, it was clearly defined that agencies had 60 days to identify all agency critical software in use or in the acquisition process. You then have one year to incorporate the defined security measures for these specific categories of critical software, as well as those categories that will come out later in relation to what is deemed critical software or a direct software dependency. And we'll talk about that here today. NIST specifically called out that critical software is any software that has, or has direct software dependencies upon, one or more components with at least one of the following attributes: it is designed to run with elevated privilege or manage privileges; it has direct or privileged access to networking or computing resources; it is designed to control access to data or operational technology; it performs a function critical to trust; or it operates outside of normal trust boundaries with privileged access. All of these items help define what critical software is. Each organization and agency should have gone through the process of determining what critical software exists within its environment. This goes along with what Binding Operational Directive 18-02 and the CDM program have been directing for many years now: prioritizing, identifying, reporting on, and scanning all high value assets, or HVAs, for security. Those HVAs are inclusive of the critical software applications now being defined by NIST, along with establishing a single agency point of contact and then timely remediation. And in addition to that, the follow-on OMB guidance in M-19-03, reaffirming the focus on HVAs and the reporting of those and reinforcing the CDM guidance, is all in line with what is coming forward here with the Executive Order.
One of the key things I want to bring out is that the Executive Order doesn't specifically call out names of software, but gives us this guidance and direction. NIST further clarifies the term direct software dependencies, defined as other software components that are directly integrated into, or necessary for operation of, the software instance in question. The software components that help drive the critical software utilized by agencies can be manifold, and as we'd like to discuss today, databases are at the core of that direct software dependency. Without the integration of databases and the data that drives those applications, whether they be on-premises, software-as-a-service, or other applications, those applications will not function. And so databases are squarely in the realm, and on the radar, of direct software dependencies. With that understanding, we'd like to talk through the guidance and direction given around securing these direct software dependencies, and specifically databases. NIST came out and provided several objectives. We'll talk through four objectives today and the specific security measures that align with them. The first objective I'd like to speak to is protecting Executive Order critical software and critical software platforms from unauthorized access and usage. Of the security measures that were given, the three that specifically apply to protecting databases are measures 1.2, 1.3, and 1.4. These all help us understand the importance of determining who has access to the information, what type of information you have, and how you can protect it: identifying and authenticating each service attempting to access that database; knowing the privileged access that is given, whether to a user, a service account, or an application, and putting the principle of least privilege in place to manage it; and employing boundary protections such as network segmentation, isolation, and software-defined perimeters. For this, I'd like to share a couple of best practices, and we'll go into a lot more detail later on as Joe and I talk through them. Some of the best practices around this specific security measure are to limit user access rights and put in place the principle of least privilege: know who and what has access, and determine whether or not they should have access to that specific information. You need to discover those privileged users and service accounts, remediate any that should not have access, enforce the principle of least privilege, set baselines around acceptable use for users and service accounts, and then monitor the activity of those users, using machine learning to detect any anomalies around privileged user access. The next objective is protecting the confidentiality, integrity, and availability of data used by Executive Order critical software and critical software platforms. With this, we see that these platforms depend on databases to store the data they need to utilize within that critical software.
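
As a concrete illustration of that first set of measures, here is a minimal sketch of what a privileged-access review might look like, assuming a PostgreSQL target reachable with psycopg2. The connection string and the approved-account list are hypothetical, and this is not a depiction of Trustwave's tooling.

```python
# Minimal sketch: flag login-capable database accounts that hold elevated
# privileges but are not on an approved list (illustrative only).
import psycopg2  # assumes a PostgreSQL target; other engines differ

APPROVED_PRIVILEGED = {"postgres", "app_migrator"}  # hypothetical approved accounts

def find_unapproved_privileged(dsn: str) -> list[str]:
    """Return login-capable roles with superuser, CREATEROLE, or CREATEDB
    rights that are not on the approved list."""
    query = """
        SELECT rolname
        FROM pg_catalog.pg_roles
        WHERE rolcanlogin
          AND (rolsuper OR rolcreaterole OR rolcreatedb);
    """
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(query)
        privileged = {row[0] for row in cur.fetchall()}
    return sorted(privileged - APPROVED_PRIVILEGED)

if __name__ == "__main__":
    # Hypothetical connection string; point at a database you administer.
    for role in find_unapproved_privileged("dbname=appdb user=auditor"):
        print(f"REVIEW: unapproved privileged account: {role}")
```
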
The security measures specifically called out here related to database security are measures 2.1, 2.2, and 2.4: establishing and maintaining a data inventory; using fine-grained access control for data and resources by enforcing the principle of least privilege; and protecting data in transit by encrypting sensitive data communications. The best practices we recommend, which we'll detail later on, are these. You need to understand where your sensitive data resides; discovery across all your databases will then guide your user access controls and your monitoring policies, and determine how you will protect that information at rest, in transit, and in use. You need to classify that sensitive data, whether it's personally identifiable information, protected health information, or other categories of sensitive data. Then assign and enforce specific user access according to the sensitive data you've discovered and classified, so that you're protecting it accordingly. This then allows you to continually audit where that data resides, whether new databases or new data types are being created, and to enforce the principle of least privilege. So databases, in relation to the Executive Order, are considered a direct software dependency. Let's go on and talk about how we can protect them with our next objective, objective three. The guidance here is to identify and maintain Executive Order critical software platforms, and the software deployed to those platforms, to protect the critical software from exploitation. Specifically in relation to databases, there are a couple of security measures under this objective that need to be followed: establish and maintain a software inventory that includes cloud-based resources, and use patch management practices to prevent exploitation of known vulnerabilities by identifying, documenting, and mitigating risks through patching, updating, and upgrading to supported versions. Some best practices for this: continually audit the environment where your databases live to discover the databases you have and any new ones that appear, automate the identification and assessment of any vulnerabilities, misconfigurations, or weaknesses within your environment, and then report on those according to the risk, the vulnerability, and the STIG that is impacted. And then we definitely want to make sure we remediate those, hardening the environment so you can protect it from further potential breach or attack. The last objective we want to talk through is number four: quickly detecting, responding to, and recovering from threats and incidents involving critical software or critical software platforms. This is about monitoring those environments, making sure you're putting actions in place based on the remediation steps taken for the vulnerabilities and misconfigurations, understanding who has user access, specifically monitoring the access of those who have been given those privileges, and continually seeing what type of activity is taking place within your database environments.
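
To make the discovery-and-classification step concrete, here is a minimal, self-contained sketch using Python's built-in sqlite3 module and illustrative regex patterns. Real classification engines use far richer techniques, and the table and patterns here are hypothetical.

```python
# Minimal sketch: sample column values and classify likely sensitive data
# with regular expressions (illustrative patterns; production classifiers
# also use dictionaries, checksums such as Luhn, and context).
import re
import sqlite3

PATTERNS = {
    "SSN":   re.compile(r"^\d{3}-\d{2}-\d{4}$"),
    "EMAIL": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
}

def classify_table(conn, table, sample_rows=100):
    """Sample rows from a table and report which columns look sensitive."""
    cur = conn.execute(f"SELECT * FROM {table} LIMIT {sample_rows}")
    columns = [d[0] for d in cur.description]
    findings = {}
    for row in cur:
        for col, value in zip(columns, row):
            for label, pattern in PATTERNS.items():
                if isinstance(value, str) and pattern.match(value):
                    findings.setdefault(col, set()).add(label)
    return findings

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")  # stand-in for a real database
    conn.execute("CREATE TABLE employees (name TEXT, ssn TEXT, email TEXT)")
    conn.execute("INSERT INTO employees VALUES "
                 "('Ann', '123-45-6789', 'ann@example.gov')")
    for col, labels in classify_table(conn, "employees").items():
        print(f"{col}: {sorted(labels)}")
```
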
Another point here is to employ endpoint security protection for those databases so that you can identify, review, and minimize the attack surface going forward. Just putting a monitoring policy or monitoring practice in place is not sufficient. You need to continually assess, remediate, and strengthen those environments and control who has user access; otherwise your monitoring won't be effective. So the best practices around this are to monitor database activity, establish audit trails for all privileged activity to enable anomaly detection, and then use machine learning to detect anomalies and new threats as you baseline against normal activity. With that, we'd like to transition into talking about 10 best practices that we recommend, from our experience here at Trustwave, in relation to database security. As we've noted, databases are the lifeblood of today's digital business. The data that they maintain, utilize, and give access to is constantly under attack from organized, skilled, and well-funded cybercriminals. So we want to give you some guidance around ways you can ensure that you protect that data and those databases within your environment. To do this, Joe Malcolm, our director of sales engineering at Trustwave, is going to join me, and we're going to talk through these best practices. The first one I'd like to bring up is that we recommend you first describe clearly what your database security program will be and the actionable processes you want to put in place. Joe, can you give us a little direction on why you want to start with this first step of describing and putting that program in place?

Joe Malcolm: Sure, Travis. To use the old cliché, failure to plan is planning to fail. It's the same thing when you're looking at processes like this: a plan is the key to success. You've got to know what you have, you've got to know what your processes are, and you've got to know who the stakeholders are. That's one of the most important parts. When you look at any government agency, or any commercial entity in fact, there are different stakeholders across a security program: there are DBAs that own the databases, security folks that own the security process, and management that needs to be involved. There are already processes in place, and there may already be technology in place doing certain things. So understanding where the gaps are, and understanding who all the stakeholders are, to build an appropriate plan is the first step in any good process. You've got to have a plan of attack; you've got to have all of that lined up.

Travis Lee: Thank you. And I think it's clear that we can't do this solely within the security organization or the IT organization; it requires a coordinated effort, and your description captured that well. The next recommendation, or best practice, is to clarify a scope baseline by discovering all of your databases, inventorying them, and then determining what your plan is going to be. Can you give us some guidance on that?

Joe Malcolm: Sure, Travis. As many of you know, when CDM came out, one of the first functions within CDM was asset discovery. So many agencies already have asset management tools in place and the ability to gather what assets they have. But I will say, in my personal experience working with many government agencies, you don't always know where all your databases are. Sometimes there are databases that were added as part of a software package that installed a database you don't know exists. So it's very important to clearly understand where all your data sources are and where all your databases exist. And the key isn't just production databases: know where your non-production databases are. Keep in mind that there may be databases in the cloud, and there may be databases being run by a contractor. If you've got a program where different contractors are responsible for portions of your infrastructure, there may be databases in other places that you're not aware of. Or if you are a contractor providing a service to a government agency, make sure you really put that in scope to discover where all those assets are. Because it's hard to have a program and a policy in place if you don't know what assets you're scanning against.
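
As an illustration of the kind of discovery Joe describes, here is a minimal sketch that probes hosts for well-known database listener ports. The host range is hypothetical, and probing like this should only ever be run against networks you are authorized to scan.

```python
# Minimal sketch: probe hosts for common database listener ports to find
# databases that asset inventories may have missed (illustrative only).
import socket

DB_PORTS = {1433: "SQL Server", 1521: "Oracle", 3306: "MySQL",
            5432: "PostgreSQL", 27017: "MongoDB"}

def probe(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def discover(hosts):
    """Print any host:port pairs that answer on a known database port."""
    for host in hosts:
        for port, engine in DB_PORTS.items():
            if probe(host, port):
                print(f"{host}:{port} looks like {engine}")

if __name__ == "__main__":
    # Hypothetical in-scope hosts; replace with your authorized ranges.
    discover([f"10.0.0.{i}" for i in range(1, 5)])
```
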

Travis Lee: Thanks, Joe. The next best practice we recommend is defining the standards and the security and compliance policies you're going to utilize to make sure you're secure. Different industries have different policies or frameworks that they use. Can you tell us about this specifically for the government, and how it will help?

Joe Malcolm: Right, absolutely, Travis. As Travis said, there are a lot of policies out there, so you've got to figure out which policies are most important to your agency. Do you have a requirement for PCI? Do you have a requirement for HIPAA scans, or NIST scans, or FISMA scans? Do you have a distinct environment that you have to run the scans against? Now, one of the beautiful things that comes out of our database security product is an all-comprehensive scan that includes all of those. Keep in mind that, beyond all of these policies that exist, there are agencies out there that have their own standards. So make sure that as you're developing your policies and your frameworks, you know what differences or deviations there are from the normal standards, whether FISMA, the DISA STIGs, the CIS Benchmarks, and so on.
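
To show what checking a database against benchmark-style expectations can look like, here is a minimal sketch for PostgreSQL via psycopg2. The three expected settings are illustrative paraphrases of the kinds of items hardening benchmarks contain, not the official CIS or STIG content.

```python
# Minimal sketch: compare live PostgreSQL settings against a few
# benchmark-style expectations (illustrative checks only).
import psycopg2

EXPECTED = {          # hypothetical hardening expectations
    "ssl": "on",
    "log_connections": "on",
    "log_disconnections": "on",
}

def check_settings(dsn: str):
    """Fetch the named settings and report PASS/FAIL per expectation."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(
            "SELECT name, setting FROM pg_settings WHERE name = ANY(%s)",
            (list(EXPECTED),),
        )
        actual = dict(cur.fetchall())
    for name, want in EXPECTED.items():
        got = actual.get(name, "<unset>")
        status = "PASS" if got == want else "FAIL"
        print(f"{status} {name}: expected {want}, found {got}")

if __name__ == "__main__":
    # Hypothetical connection string; point at a database you administer.
    check_settings("dbname=appdb user=auditor")
```
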

Travis Lee: Excellent. Joe, can you tell us more about how to conduct those vulnerability and configuration assessments, and how that will help you determine how to strengthen your environment?

Joe Malcolm: Absolutely, Travis. As we move through this process, once you've understood what's in scope and what your policies are, you've got to scan these assets on a regular basis. Now, that regular basis is dictated by what governance you fall under. If you're under the CDM program, they recommend every 30 to 60 days; there are some agencies that are dictated to scan every 24 to 48 hours. As many folks that have worked with their DBAs know, there aren't major changes to the vulnerability and configuration posture of a database outside of the normal change control process. But that's why it's very important to understand what your process is going to be and what your organization's requirements are, and then conduct those vulnerability and configuration assessments within that window. It's also important to keep in mind that if you have the ability to scan pre-production and know what your normal accepted baseline is under your ATO, you now have the ability to run your regular assessments as a deviation from that norm, and then do your 30-to-60-day scans as a whole scan. That way you're able to make your program a little more palatable with the staff that you have, based on understanding what all those requirements are.
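
Here is a minimal sketch of the deviation-from-baseline idea Joe describes, with hypothetical finding IDs standing in for real scanner output.

```python
# Minimal sketch: compare a new scan's findings against the accepted
# baseline captured at ATO, reporting only deviations (finding IDs are
# hypothetical; real scanners emit much richer records).
import json

baseline_json = '["VULN-001", "CONF-017"]'              # accepted at ATO
new_scan_json = '["VULN-001", "CONF-017", "VULN-042"]'  # latest assessment

def deviations(baseline: str, new_scan: str) -> list[str]:
    """Return findings present in the new scan but absent from baseline."""
    return sorted(set(json.loads(new_scan)) - set(json.loads(baseline)))

if __name__ == "__main__":
    for finding in deviations(baseline_json, new_scan_json):
        print(f"NEW FINDING since baseline: {finding}")
```
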

Travis Lee: A key component of that best practice is making sure you understand what you've got, and then determining what you're going to do to make changes within that environment. And that's the next step, which is from a user control and user access perspective. Joe, tell us why it's important to identify who has access and then put the principle of least privilege in place.

Joe Malcolm: Absolutely. As many of you are aware, there are more mandates from the government around the Zero Trust initiative, and this ties directly back into Zero Trust. For those of us that have been around a while, least-privilege access control is the cornerstone. If I want to truly protect my data, I need to know who has access, but not only who has access; I need to reduce the effective access of those individuals that don't need that level. If I have two DBAs that are actively still DBAs, I need to make sure that their accounts mirror their tasks and the requirements to fulfill their jobs. If I have folks that have moved out of those roles, I need to be doing a normal baseline against that. We oftentimes find in the field that organizations have forgotten about different users that have permissions, and those permissions are still there for those users, which is very much a risk in today's world.
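
As one way to picture that baseline review, here is a minimal sketch that flags accounts still holding elevated roles despite long inactivity. The account records, role names, and 90-day window are all hypothetical; last-login data would come from your directory or database audit logs.

```python
# Minimal sketch: flag accounts that still hold elevated roles but have
# not logged in recently (illustrative data and thresholds).
from datetime import date, timedelta

STALE_AFTER = timedelta(days=90)
ELEVATED_ROLES = {"db_owner", "sysadmin"}

accounts = [  # hypothetical inventory: (user, roles, last_login)
    ("dba_smith", {"db_owner"}, date(2021, 10, 1)),
    ("dba_jones", {"db_owner"}, date(2021, 3, 15)),   # moved roles months ago
    ("app_reader", {"read_only"}, date(2021, 9, 30)),
]

def stale_privileged(accounts, today):
    """Yield users whose elevated roles have outlived their activity."""
    for user, roles, last_login in accounts:
        elevated = roles & ELEVATED_ROLES
        if elevated and today - last_login > STALE_AFTER:
            yield user, elevated, last_login

if __name__ == "__main__":
    for user, roles, last in stale_privileged(accounts, date(2021, 11, 1)):
        print(f"REVIEW: {user} still holds {sorted(roles)}, last login {last}")
```
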

Travis Lee: Great. Moving on to our sixth recommendation, which is implementing risk mitigation and compensating controls. Assessing and understanding what your vulnerabilities and misconfigurations are is not enough; you've then got to put steps in place to fix them. So, Joe, tell us how we can do that.

Joe Malcolm: Absolutely, Travis. Risk mitigation is one of the most important things. By addressing the vulnerabilities and misconfigurations in your database, you're able to reduce the risk, but it also gives you the understanding of, okay, what risks does my database have? Now compare that against your applications. Compare that against the users and groups that you have, to reduce those privileges. By getting that comprehensive view, you're able to put compensating controls in place, either at the database layer or at the application layer that accesses the database. But if you don't have an understanding of those vulnerabilities, you're never going to be able to do that. These steps go hand in hand: by understanding the who, the what, and the how, I can then put something in place to safeguard me as a mitigation against that.

Travis Lee: Great. The next practice is to put acceptable activity policies in place, defining customized policies around what acceptable user activity is. Tell us how we can do that.

Joe Malcolm: Absolutely, Travis. Once you know what these users' access is, you need to start looking at what policies you want to put in place. Do I want to alert when somebody other than a DBA escalates privileges? Do I want to monitor and establish some sort of activity policy around whether an individual did a large file transfer after hours? Understanding what my environment looks like, and then putting these acceptable-use cases in place, is really the next step in this progression.
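
Here is a minimal sketch of the two example policies Joe mentions, evaluated over hypothetical activity events; a real monitoring product would stream these from database audit logs rather than a hard-coded list.

```python
# Minimal sketch: evaluate acceptable-use policies against database
# activity events (event shape, roles, and thresholds are hypothetical).
from datetime import datetime

events = [
    {"user": "analyst7", "action": "GRANT", "role": "analyst",
     "when": datetime(2021, 11, 1, 14, 5), "bytes": 0},
    {"user": "analyst7", "action": "EXPORT", "role": "analyst",
     "when": datetime(2021, 11, 1, 23, 40), "bytes": 2_500_000_000},
]

def violations(events):
    """Yield (reason, event) pairs for events that break a policy."""
    for e in events:
        # Policy 1: privilege changes by anyone who is not a DBA.
        if e["action"] == "GRANT" and e["role"] != "dba":
            yield ("privilege escalation by non-DBA", e)
        # Policy 2: large transfers outside business hours (08:00-18:00).
        after_hours = not 8 <= e["when"].hour < 18
        if e["action"] == "EXPORT" and e["bytes"] > 1_000_000_000 and after_hours:
            yield ("large after-hours transfer", e)

if __name__ == "__main__":
    for reason, e in violations(events):
        print(f"ALERT: {reason}: {e['user']} at {e['when']}")
```
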

Travis Lee: And the next one is being able to audit that access, or the user activity, in real time. Tell us how we can do that.

Joe Malcolm: Absolutely, Travis. Once you've done all this, you've got your users, and you've got the policies in place for what normal user behavior is. Now you want to monitor that in real time. You want an alert to trigger that tells me, okay, there's been a privilege escalation, there has been a large file transfer after hours, there has been access to a table that is extremely sensitive and should only be accessed from certain places at certain times of day. This stuff doesn't get better with time; it's not wine, it does not age well. I want to know about it the minute it happens, or as close to the minute it happens as humanly possible, so that I can take the proper steps.

Travis Lee: Joe, can you tell me a little more about how the importance of knowing immediately plays into being able to pinpoint and customize what you're monitoring for, so that you're not flooded with every action and end up taking no action at all because you're overwhelmed with information?

Joe Malcolm: Absolutely, Travis. We live in a world of information; look anywhere and we get flooded with data. It is critical to understand what truly warrants an alert. As I said with privilege escalations, I don't want to get a privilege escalation alert for my DBAs every time they do something; I want privilege escalation alerts for the individuals that shouldn't be doing it. Large file transfers: I don't want to get triggered every time there's a large file transfer. There are groups of folks that do that, and there are times of day when that may occur. You want to hone in on those specific after-hours timeframes that make sense for your organization. And this is all going to be defined in that initial plan of understanding what is acceptable and what is not.

Travis Lee: Great, thanks. The next one is to deploy policies based on activity monitoring. We've talked a little about this already, but tell us how we can fine-tune and deploy policies that will give us that information, both from the actions that take place and from the anomaly detection that machine learning provides.

Joe Malcolm: Absolutely, Travis. As I'm sure everyone's noticed, we're building on all of this; these principles were built to build on each other. So once I have an understanding of all this, and I'm auditing the privileged users in real time and monitoring the behavior, then I can start to deploy policies based on what is normal. And Travis brought up a great concept here: anomaly detection, the ability for the system to identify that this is not normal behavior. We baseline your environment to say, okay, these are the things that normally occur with this database, with this table, with this schema, with these users. Any deviations outside of that norm are important. And this is all based on the policies that you have in place, and on the activity that's really occurring today, now, in this instance, against your database. And then, from a forensic standpoint, that gives you the ability to take proper action against it.
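
As a simple stand-in for the machine-learning anomaly detection described here, this sketch baselines each user's daily query volume and flags large deviations from their own norm. The counts and the z-score threshold are hypothetical.

```python
# Minimal sketch: baseline each user's daily query volume and flag days
# that deviate sharply from that user's norm (illustrative data only).
from statistics import mean, stdev

history = {  # queries per day observed during the baselining period
    "svc_app": [510, 495, 502, 488, 505, 499, 512],
    "dba_lee": [40, 35, 52, 38, 47, 41, 45],
}

def is_anomalous(user: str, todays_count: int, threshold: float = 3.0) -> bool:
    """Flag counts more than `threshold` standard deviations from baseline."""
    baseline = history[user]
    mu, sigma = mean(baseline), stdev(baseline)
    return sigma > 0 and abs(todays_count - mu) / sigma > threshold

if __name__ == "__main__":
    for user, today in [("svc_app", 498), ("dba_lee", 400)]:
        if is_anomalous(user, today):
            print(f"ANOMALY: {user} ran {today} queries vs. their baseline")
```
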

Travis Lee: And the last step is detecting, alerting on, and responding to those violations. We've set up and customized our policies, and we've baselined what normal activity is. Tell us how and why it's important to detect, alert on, and respond to violations.

Joe Malcolm: I've delved into this a little already, Travis, but when we look at this side of things, I'm doing the detection and I'm doing the alerting; now, how do I respond to it? And the responses are different across different organizations. Do I want to have an automated machine response that says, okay, if this thing occurs, send it to a honeypot, terminate this connection, terminate the session? Or do I want to monitor that behavior from a forensic standpoint, from a law enforcement standpoint? If we're now talking sensitive data, critical assets, things like that, I may want to monitor this behavior a little while longer, determine what kind of activity this individual is involved in, and start looking across the environment, especially when we live in a world of whistleblowers and leaks and, for lack of a better term, because it just happened locally to me, folks that are trying to sell companies' intelligence data. We just had an incident where there was a Navy individual that tried to sell nuclear data. If this individual was gathering that data out of a database at the place where they worked, and we were able to see those data transfers and respond to them in real time, not necessarily blocking them but alerting and building that forensic evidence trail, that can lead to apprehension and the stopping of these leaks, and then the ability to uncover who the handler is and who else is involved.

Travis Lee: Great example, thanks. So those are the 10 best practices we wanted to share with you today; there's a white paper available on our website, which we'll talk about later, that goes into more detail. With that, let's continue to the next part. What I'd like to do now, in summarizing what Joe and I just shared in these 10 best practices around data protection and sensitive data analysis, is go through a process for understanding how you can protect the information within your databases. It all starts with identifying risk. That continuous risk assessment is very important: you first need to inventory those databases and know where your classified or sensitive data resides across your environment. The next step is to determine whether there are vulnerabilities, weaknesses, or misconfigurations within your environment, and whether anyone has access control issues, that is, privileges to that sensitive data that they should not have. And very critical to this is not doing it one time. This is a rinse-and-repeat process: continually discover when new databases are being spun up, and make sure you're assessing whether there are weaknesses within those environments. Then comes the fact that you need to do something about it. You can't just have that assessment and not act on it; you've got to take steps to reduce that risk. That's where remediation comes in: applying patches, fixing the misconfigurations that are reported, and removing anything that creates a weakness within your environment. Through that evaluation, you may then need to revisit the access controls and continually assess whether the weakness, misconfiguration, or vulnerability is still there, whether it can be removed, or how you can otherwise strengthen that environment. And then you've got to continue to enforce this: enforce access to sensitive data, enforce that privileged access, and fix all of the items that you've come across. The last step is ongoing monitoring and understanding what actions are taking place within your environment, which comes down to the ability to detect, respond, and adapt. The first part is having real-time monitoring against those databases, specifically where you know there are vulnerabilities or misconfigurations you have not yet had a chance to remediate, or where you know users have access to specific sensitive data, and then specifically monitoring and honing in on that information. We talked about anomalous or unusual activity: setting a baseline of normal activity for each user and each service account will then allow you to alert when you see anomalies take place. Response is vital here, because you need to make sure that information gets to personnel who can take action on it quickly, and that you're not flooding them with false positives and overwhelming them with data. And the last part is taking that data and adapting to it: continually looking at how you can make changes and strengthen your environment based on what you're learning.
All of this comes together in the ability to apply a mature database security model. To bring it all together, from what we've shared today, we want to leave you with a proven database security methodology that we've seen work time and time again with the clients and agencies we work with. It is built around continuous assessment and continuous protection, tying these together so that you can harden, monitor, and strengthen your environments. The first key step, as we talked about, is inventory: knowing where your databases are and knowing where your sensitive data is. With that information, you then have the ability to assess, to scan, to understand where there are vulnerabilities and weaknesses within your environment. Once you have those assessed, you need to take action: fix those vulnerabilities so that you're strengthening your environment over time, and then continue to retest, repeating steps one, two, and three over and over again. You couple that with user access control: who has access to that sensitive data. Then control that access by putting least privilege in place, monitoring the activity of those users as well as any other suspicious behavior within the environment, protecting from the inside by continually alerting when anomalies are detected, and making changes in your environment and taking action against those. And last, the response: having your personnel, your threat hunters, your security operations center staff continually taking action against those alerts, which have been customized and are fed by the anomalies that take place. We've seen this process prove successful time and time again for our customers, securing the most sensitive data that resides within their databases and protecting those databases that are direct software dependencies of the critical software addressed by the Executive Order. That's the end of what we wanted to share with you today. Before we end, I want to extend an invitation: we have two white papers that will be of help to you. The first, on the 10 principles of database security program design, goes into much more detail than what Joe and I talked about today around the 10 best practices. The second, a white paper we just released this week on securing databases and complying with Executive Order 14028, gives you detail on the steps you can take to protect your databases in accordance with the Executive Order. I invite you to go to our Trustwave.com site, where you can search for these in our resource library and download them for your use going forward. The Executive Order is complex and the timelines are strict, and if you feel confused or overwhelmed, welcome to the club. We see our customers coming to us asking for help, and we are more than willing to work with you to see how we can help, specific to database security as well as Trustwave's ability to help you with endpoint security, managed security services, and other solutions that can help you meet the Executive Order requirements. We gladly work with over 160 agencies now, helping them through that process, and we'd be glad to work with you as well.

Speaker 1: Thanks for listening. If you would like more information on how Carahsoft or Trustwave Government Solutions can assist your institution, please visit www.carahsoft.com/trustwave or email us at trustwave@carahsoft.com. Thanks again for listening and have a great day.