CarahCast: Podcasts on Technology in the Public Sector

How Can the Government and Education Sector Improve Its Grades in Software Security Health with Veracode?

Episode Summary

In this podcast, Eric Wassenaar, the Senior Account Executive of SLED at Veracode, and Jason Phillips, the Senior Solutions Architect at Veracode, examine SOSS findings pertaining to the government and education sector.

Episode Transcription

Speaker 1: On behalf of Veracode and Carahsoft, we would like to welcome you to today's podcast, focused on how the government and education sector can improve its grades in software security health, where Eric Wassenaar, the Senior Account Executive of SLED at Veracode, and Jason Phillips, the Senior Solutions Architect at Veracode, will discuss the prevalence of security flaws in the government and education sector, and the steps organizations can take to nurture their applications back to health.

Eric Wassenaar: Thank you to everybody who has joined today. I am so excited to bring this discussion to everyone who is online. My name is Eric Wassenaar, Account Executive here at Veracode covering all things state and local government, as well as higher education and K through 12, throughout the Great Lakes region. I've been at Veracode, helping folks operationalize application security, for nearly three years now. It seems like just yesterday that I was getting started. And I'm joined by my friend and esteemed colleague out in the field, Jason Phillips.

Jason Phillips: Thank you, Eric. And hello everyone. I'm Jason Phillips, Senior Solutions Architect with Veracode. I too work with state and local government and education at Veracode, and I have been with Veracode for about two years now. Prior to that, I was actually working on the Veracode program at General Electric.

Eric Wassenaar: Awesome. And one thing that I always look forward to every single year at Veracode is our State of Software Security Report. And this is going to be the centerpiece for a lot of our discussion today. We're going to start out with an aggregate view of what the report said about application security overall, and then take a deep dive into what we identified specific to the state and local government and education sector when it comes to their own results and grades in software security. A little bit about what the State of Software Security is: Veracode is uniquely positioned, as a cloud-first and cloud-based platform, to take an aggregate view on an annual basis of all the analysis that we've done on a subset of sample applications, and to provide that analytics and data back to our customers and back to the industry as a whole, to make sure that folks understand the trajectory of certain trends in the field.

This year was really exciting. We had over 130,000 applications as part of this quantitative study; a million scans and over 10 million flaws comprised the data that we'll be talking about today. The data spans large and small companies, state and local governments, education and K through 12, commercial software suppliers, software outsourcers, and open-source projects, to help us get a sense of what is going on and what the pulse of application security is today. Jason, I know you have seen your fair share of State of Software Security Reports, as you were a former customer. And this is our 11th time generating this report. I know you had some fun stories that I'd love for you to talk about a little bit, as to how you used the data in this report to support your own role day-to-day when you were on the customer side.

Jason Phillips: So I definitely saw my fair share of reports. And I welcomed them every year, because there's a lot of nuggets inside that really can help regardless of your role. It can really help you with your journey, right? So from information such as what types of flaws are prevalent, how can we maybe gear our training around the most prevalent types of flaws? What types of languages carry the most seen flaws, or the most critical flaws? And you can really action that information internally, influence the different products that are being developed, and selection processes going forward, driving enablement for training. So definitely a lot of very useful information.

Eric Wassenaar: Cool. Yeah. And there are three big questions that we get to answer, that I think really encapsulate the usefulness of this exercise. How vulnerable are applications on the whole? Where are those vulnerabilities? And what vulnerabilities are most common of those that we identified? So let's take a look at how vulnerable are applications, which is a great question to start with.

The data points to a couple of interesting things. Of every application that we scanned, 75.8% had a flaw. That's high, medium, low, or informational; there was some sort of opportunity for improvement in that particular application. Of the flaws that we identified, 65.8% aligned with flaws in an industry standard called the OWASP Top 10, and 58.8% aligned with the SANS Top 25. And I think most interesting and most encouraging of all is that only 23.7% of the flaws identified across the applications we reviewed were of high severity.

I say that's a good sign, because although you can look at it and say, "Yes, there are flaws in almost every single application out in the wild," this indicates to me and everyone here at Veracode, and the folks that I work with, that not all flaws are catastrophic. And although it's likely that there is a flaw existing in an application, you're not alone. And Jason, I know as a previous developer yourself, that is a very comforting message when you first take a stab at examining the code in your application.

Jason Phillips: Absolutely. I think it normalizes it, right? As a developer, you could be doing everything correctly and, as we can see with 75.8% of all applications having a flaw, still not be immune. You're not alone in that journey. And it's okay. That's why we do the security testing. And that's why we build in the processes and procedures to be able to identify these flaws and get them fixed quickly.

Eric Wassenaar: Yeah, definitely. The next question that I'm excited that we have the opportunity to answer with this report is, where are the vulnerabilities? And this is where things get really, really interesting. And there's a second question that popped up here. Because it does tie back to kind of an opportunity to improve your grade in software security right away, based on the data we've already shared.

"Have you had any secure application training?" And I ask that question, because when we looked at the data, you'll see here that 69.1% of the findings that we saw in this particular report were identified in homegrown applications. Whilst 30.9% of the findings were in third party libraries. So this says a couple of things to me. First of all, there's an opportunity in the code that you have your hands on right away, to better educate developers and enable them to not introduce as many flaws into their applications.

It's also interesting to see that if you're only looking at that first party code, and ignoring or not taking into account the third-party libraries that developers use every single day to facilitate their development process, you could be missing out on about 31% of the entire holistic view of the application security posture of that particular app. Jason, what does this data say to you?

Jason Phillips: Yeah. I think that's a really valid point, Eric. Because you have flaws that you're introducing yourself, and you have flaws that are being introduced by other aspects and other libraries that are being brought into your application. And regardless of how these flaws enter your application, I think you're going to hear me say this quite a bit: training, right? You need to understand what these flaws are. I feel like we can really help prevent a lot of these flaws from being introduced into our code going forward by offering an appropriate amount of training for developers.

Eric Wassenaar: There's something to that 30.9%, too. And Jason, you can confirm or not confirm this, but in a lot of our conversations, it seems the expectation on the velocity of the software development life cycle is increasing. Meaning that governments are under increased pressure, and universities are under increased pressure, to deliver to their respective constituents, whether it's the citizen, inter-agency partners, students, faculty, or staff, functionality that enables them to do their work as quickly as possible.

And certainly now with remote working in mind. And the downward pressure of that expectation is sort of putting developers in a position where they already had really full plates. And so third party libraries are really the only way forward. You're not going to write a request function from scratch in Python; it's not feasible. You're going to leverage the requests library. What's your take on that? Do you see that number growing or shrinking here in the next few years?

Jason Phillips: I see third party usage increasing quite a bit. And it's just the nature of the need to supply new features quickly. And why create something new, when something nearly the way you need it is already available, right? So I definitely see that increasing over time for certain.
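Since third party usage is only going up, the flip side is keeping track of what those libraries bring with them. Here is a minimal sketch of what software composition analysis boils down to; the package names and vulnerable versions below are invented for illustration, and real SCA tools consult curated advisory databases rather than a hand-written table:

```python
# Illustrative only: this vulnerability "database" is made up for the
# sketch. Real SCA tooling pulls from curated advisory feeds.
KNOWN_VULNERABLE = {
    "examplelib": {"2.19.0"},  # hypothetical package and affected version
    "otherlib": {"0.12.2"},
}

def flag_vulnerable(dependencies):
    """Return the (name, version) pairs that match a known advisory."""
    return [
        (name, version)
        for name, version in dependencies
        if version in KNOWN_VULNERABLE.get(name, set())
    ]

deps = [("examplelib", "2.19.0"), ("otherlib", "1.1.2")]
print(flag_vulnerable(deps))  # -> [('examplelib', '2.19.0')]
```

The point of the sketch is simply that the third-party third of your attack surface is auditable mechanically, which is why pairing static analysis with an SCA scan covers the whole application.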

Eric Wassenaar: This next area is really cool. And I think there are a lot of lessons to be learned here, and a lot of interesting data. The vulnerabilities that are most common: we see this chart every single year, and it doesn't change too often. Information leakage, cryptographic issues, code quality, cross-site scripting. All of these things should be relatively familiar to folks if they've taken a look at the OWASP Top 10 or SANS Top 25, those industry standards for evaluating your application security. And I think, Jason, you had a couple of thoughts when you first saw this, things that really stuck out in your mind as a problem that keeps showing up industry-wide. Can you share a little bit of that with the folks here today?

Jason Phillips: Absolutely. So year over year, we're still seeing a lot of the same types of flaws. We have SQL injection still on this list, still in the OWASP Top 10, right? Cross-site scripting, information leakage. The most prevalent flaw that we have seen is information leakage. And just that alone is offering information about your infrastructure or your application that could be used to attack you in an automated way. You have low hanging fruit available for an attacker to be able to learn more about your environment and your application, so that they can send their little attack bots to go and try to do harm.

Eric Wassenaar: Yeah. That brings up a really interesting point that I hadn't thought of before. The colors here represent whether we identified a flaw most prevalently in third party libraries or in first party code. And something like information leakage being at the top of the list, and being most frequently found in third party libraries, I think means organizations may be adopting that particular top four set of flaws, in particular information leakage, without necessarily knowing that it's happening.

They're working with developers to get them to release features and functionality to help accomplish the mission of the organization faster and faster. And those third party libraries tend to have the top four problems. That's one aspect of the comment that you made. And then information leakage is an interesting one. I mean, we're seeing attacks more and more frequently leveraging automation, and that automation more often than not is leveraged to identify that low hanging fruit. So if a bot is crawling, or someone's leveraging a script to identify opportunities in the perimeter of a particular organization, and they're able to capture information like the web server, for example. It doesn't even matter what version it is, or the details. Just the fact that they can capture it would indicate, if I were a hacker, that there's an opportunity here for me to successfully infiltrate that organization. What do you think of that, Jason?

Jason Phillips: Yeah, I definitely agree. I would even go further to say that as an attacker, you only need to know their tech stack, right? But it helps even more to know their versions. If you can gather that information, you can make a very specific, targeted attack. So denying access to this information is the low hanging fruit I think that we can address. And it needs to be something that we work towards, whether it be through remediation guidance that we can offer, or through training where we really identify what measures we can take to improve this going forward.
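To make the "deny them the banner" point concrete, here is a hedged sketch of scrubbing stack-advertising response headers. The header names are common examples only; what your own stack emits, and where you suppress it, depends on your server and framework:

```python
# Headers that commonly advertise the tech stack (illustrative list).
LEAKY_HEADERS = {"server", "x-powered-by", "x-aspnet-version"}

def scrub_headers(headers: dict) -> dict:
    """Return a copy of the response headers with stack-identifying ones removed."""
    return {
        name: value
        for name, value in headers.items()
        if name.lower() not in LEAKY_HEADERS
    }

response = {"Content-Type": "text/html", "Server": "Apache/2.4.29 (Ubuntu)"}
print(scrub_headers(response))  # -> {'Content-Type': 'text/html'}
```

In practice this is usually done in web server or proxy configuration rather than application code, but the effect is the same: the automated reconnaissance described above gets nothing useful back.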

Eric Wassenaar: So I want to pivot a little bit to taking a deeper dive into the industry comparisons. We spoke previously about the aggregate data, what the industry as a whole looks like. And now I really want to focus on the public sector industry. We labeled it government, but it encapsulates state and local government, as well as higher education and K through 12, across a couple of different metrics. The four metrics we're looking at here are any flaws, high severity flaws, their fix rate, and ultimately their half-life. And you'll notice here that, for better or for worse, the government sector came up dead last: 80% of their applications, when analyzed, had a flaw, a few points higher than the industry average. On the other hand, the government sector is doing very well in not allowing high severity flaws into their applications.

So they're about middle of the pack, with 23% of applications having a high severity flaw. They're trending an okay fix rate, right? Not dead last, but certainly not leading the pack, at 66% of flaws being fixed once identified. And then there's this idea of a half-life: how long does a flaw exist once it's been identified in the environment? The half-life is a little bit high, and this one's a little bit worrying. It's about 233 days that a flaw is likely to remain open upon subsequent scans after it's first identified. Jason, I know under the hood of this there was some more detailed information specific to some CWEs that really jumped out at you in terms of flaw prevalence.
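To put that half-life number in perspective: if flaw closure roughly follows exponential decay (a modeling assumption on our part, not a claim from the report), the expected fraction of findings still open after a given number of days works out like this:

```python
HALF_LIFE_DAYS = 233  # the report's half-life figure for this sector

def fraction_open(days: float, half_life: float = HALF_LIFE_DAYS) -> float:
    """Expected fraction of findings still open after `days`,
    assuming exponential decay with the given half-life."""
    return 0.5 ** (days / half_life)

print(round(fraction_open(233), 2))  # -> 0.5  (half remain at the half-life)
print(round(fraction_open(466), 2))  # -> 0.25 (a quarter after two half-lives)
print(round(fraction_open(365), 2))  # roughly a third still open after a full year
```

Under that simple model, a finding opened today has about a one-in-three chance of still being open a year from now, which is why the later discussion of scan cadence and developer enablement matters so much.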

Jason Phillips: Yeah. So when we look at the previous slide, right? When we break that out by this industry, government and EDU, we're still seeing information leakage at the top. We're really at about the same level there as all the other industries. And again, we need to stop giving away information that's meant only for us internally.

And I think that would definitely help to eliminate ... Not eliminate, but it will reduce the amount of automated attacks against our applications. Cross-site scripting, this is really interesting. We have identified these flaws in 49% of applications within our industry, against 30% in others. So that's almost a 20-percentage-point difference over other industries.

Insufficient input validation. So this is where we're taking input from end users and we're not sanitizing it, right? We're finding that we're also notably higher here: 47% versus 35% in other industries. So you take all this information: we're telling our attackers about us, we're producing more cross-site scripting issues, and we're not sanitizing the inputs as they're entered into our applications. This can really be a recipe for some serious types of flaws. So we just need to do a little bit more training, I think, in these areas. And it'll go a long way.
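As a minimal illustration of the sanitization point, here is output encoding with Python's standard library. Most web frameworks auto-escape in their templates, so treat this as a sketch of the principle rather than a recommended hand-rolled defense:

```python
from html import escape

def render_comment(user_input: str) -> str:
    """Escape user-supplied text before embedding it in HTML,
    so markup in the input is displayed rather than executed."""
    return "<p>" + escape(user_input, quote=True) + "</p>"

payload = "<script>alert(1)</script>"
print(render_comment(payload))
# -> <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
```

The same discipline applied at every point where user input crosses a trust boundary (HTML output, SQL queries via parameterized statements, shell commands) addresses both the cross-site scripting and the input validation findings discussed above.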

Eric Wassenaar: Definitely. And that leads us to this concept of nature and nurture that we had the opportunity to really dig into in this year's report. I'll speak a little bit to nature, because that relates to components of the environment in which application development occurs that a developer really doesn't have much control over.

So when you think about what we identified via the data as an ideal environment: smaller organizations with smaller applications that are relatively new and have a low flaw density are in a better position to achieve a lower mean time to remediation when it comes to fixing flaws that have been found in the applications they're developing. Versus a less ideal environment, which encapsulates large organizations that are rather complex.

Large applications that are rather old or legacy in nature, and have a high flaw density. I am kind of leading folks along here, but it doesn't take too long to understand and start to see why the government is landing where it is amongst its peers. Because candidly, the environments that we all have to work within land squarely, more often than not, in that less ideal category. We'll dig in a little bit more as to how we can work with that instead of feeling defeated by it.

Jason Phillips: Yeah. And that's actually where I get really excited as a solutions architect. Because the nurturing side of things, this is where we're able to do something about it, Eric. We can do things with intent and purpose behind them to improve our position. Right? So providing a regular cadence of scans, being able to provide scans of different types, right? So maybe not just static analysis, but also dynamic analysis. Being able to move more into that ideal state, away from scanning on-demand or scanning with only one type of tool. Especially when you look at it from a static and SCA perspective, right? SCA being software composition analysis. You're able to scan the first party code that you're writing in-house, and also, by adding on that extra type of scan, see what's happening in the code that someone else wrote that you brought into your application.

Eric Wassenaar: Yeah. That's a really great point. And I think that leads nicely into this deeper look at exactly what we mean, and what sort of attributes we can pay attention to, in this nature versus nurture. Who is driving application security in the organization? Sometimes it's owned by the security operations group. Sometimes it's owned by the IT and infrastructure group. We have folks who wear many, many hats in our industry, so perhaps it's only you. And some folks may have no function right now for application security at all. Whatever your answer, I'd ask that you keep it in mind as we think about these attributes, particularly on the nurture side. Because it's going to help you think about what action you can take after today, if you're trying to drive more participation in your application security program.

So with that, I'm going to focus on the nature side, because this data really struck me as very interesting. When we looked at the application's age, and that's age in Veracode, meaning how long or how frequently we have seen this application scanned in this environment before, folks in the government and education sector had the applications freshest to Veracode, which is interesting. Especially in juxtaposition with the fact that the government and education space places last in flaw density.

So when I connect those two dots, right? The fact that the very first time we scan an application, we're finding more flaws than in any other industry vertical we examined, indicates to me that the struggles described in a less ideal environment, around the nature of the organization in which app dev happens, really are present in the government space. The very first time we scan your application, whether you're a university or an agency, or a department within a larger organization, we're probably going to find a ton of flaws. And one thing that struck me as kind of curious was that although I think most people assume the government and ed space is going to have the largest applications out there, they're actually about middle of the pack. So application size was sort of a neutral attribute that contributed to some of the rankings and statistics that we've reviewed so far.

Jason Phillips: Yeah. And I think as we approach this from the nurturing perspective, we need to give ourselves some credit, right? Scanning frequency is top-ranked, and we're utilizing APIs better than most. So this is awesome. We have opportunity to improve. We can start looking at adding different types of scans, as we've mentioned. That's an easy one we could add on the nurturing side to improve our position in the rankings. And the next one is scan cadence.

So scan cadence is really important, right? This is where we're going to take the opportunity to set up a regular cadence, whether it be a weekly or monthly dynamic scan. In addition to that, we set up a cadence for our static analysis and build that into our process. Just a quick story of how a cadence was really important to me in my past: I was running a program where we were doing a monthly cadence of scanning of our applications. It was our health benefits onboarding for the year, so everyone was coming on board to do their benefits, and so on. About two days prior to that launching, there was an update that was released. And our regular cadence caught a very critical flaw that, with hundreds of thousands of people at that point in time signing up for benefits, would have been catastrophic. So just having that regular cadence really does help contribute towards a better application security posture.
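The cadence Jason describes can be enforced with a trivially small policy check. This sketch only decides whether a scan is overdue; in a real program you would wire it into CI and kick off the scan through your scanning vendor's APIs, and the 30-day window below is just an example policy, not a recommendation from the report:

```python
from datetime import date, timedelta

SCAN_CADENCE = timedelta(days=30)  # example policy: scan at least monthly

def scan_overdue(last_scan: date, today: date,
                 cadence: timedelta = SCAN_CADENCE) -> bool:
    """True when an application has drifted past its scan cadence."""
    return today - last_scan > cadence

print(scan_overdue(date(2021, 1, 1), date(2021, 2, 15)))  # -> True
print(scan_overdue(date(2021, 2, 1), date(2021, 2, 15)))  # -> False
```

The value of a check like this is that cadence stops being a matter of someone remembering: the pipeline flags the drift, the same way Jason's monthly schedule caught the critical flaw two days before launch.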

Eric Wassenaar: That's a really cool story. One thing that jumped out to me as I looked at this data is connecting the dots and trying to understand the why behind scanning frequency being the highest whilst it's not happening at a steady pace, as you indicated here, and how that ties back to flaw density. That to me indicates that although the technology is perhaps working in a good way, people are leveraging automation, they're in the build pipelines, and people are leveraging all the integrations that make it easy for the developer and the operations team to execute a scan.

But the flaw density remains, and the half-life of flaws remains 233 days. That to me says there's a breakdown, or there's an opportunity, for operations and development to communicate a little bit more strongly. I'm alluding to DevOps or DevSecOps, whatever you want to call it. But this collaboration between the two groups seems to be the way forward to be able to close that gap and see an increased ranking in those particular metrics.

Jason Phillips: Yeah. I couldn't agree with you more, Eric. If this exists only in operations, and there is no organized way of approaching your remediation, you're going to carry that tech debt month over month. So I definitely agree that that's a recipe for improvement.

Eric Wassenaar: Definitely. When we think about nature versus nurture, I want to think about, what is the impact that we can have on these two aspects? We have the opportunity to think about the nature of an organization, and sort of what that means to the development team that has to live within it. And what can you do to sort of nurture their way through the process? Particularly when you start to add additional scanning into the mix. And when you start to add automation into the mix.

I think automation is a really great opportunity, because it doesn't cost anything. You can think about your implementation plan and really ensure that you're leveraging the APIs, and that you're scanning at a steady cadence. And you're going to be able to reduce your mean time to remediate by about 17 and a half days, just by ensuring that automation is in place, and that you are implementing and utilizing the integrations for the technology and tool sets that you've selected and are trying to roll out within your organization. And particularly interesting, Jason, you spoke a little bit about the impact that adding dynamic analysis has while also doing static analysis. Speak a little bit to how that works, and how you've seen it manifest itself in the field.

Jason Phillips: Yeah, definitely. So we have seen remediation efforts that are 24.5 days quicker when you're including dynamic with your static analysis. That's a different stage of the life cycle: you're testing early and often throughout your software development life cycle, and then you're adding in dynamic, which tells you a lot of different things. For one, you now have operations and security working together with the development team, because the development team has already been testing with static. So I think what you get there is a great recipe for success, because you don't just have a bunch of flaws; you have developers who are already part of that process and who can work towards fixing the flaws sooner.

Eric Wassenaar: Yeah. And I don't want to beat this up too much. But this idea of having a steady scan cadence, again, is something where you can look at your own organization and the tooling you have, regardless of who the vendor is, and think about, "Have I implemented this in a way that provides a consistent feedback loop to the folks who have to action the change?" And that's the developer. If the developer is not empowered with a clear view of what flaws are there, help when they get stuck trying to remediate a flaw, and a strong exceptions process, all happening with enough frequency to make an impact on the release cycle that your organization adopts, things are really tough. And when you do leverage a steadier scan cadence, we tend to see a decrease in the mean time to remediate identified flaws by about 16 days, which is a huge impact.

And really the focus there is not so much on patching up that gap with more technology and more acquisition of technology. It's really thinking about the process and the people involved with that technology, to ensure that they have all the tools that they need to be successful. So this is a really interesting graph. It's kind of busy, but I'm going to try and break it down for you. This figure shows how quickly teams with positive and negative behaviors are able to close flaws in each of their applications. On the axis over here on the left, we have the probability that a finding is still open, whilst we are also looking at the remediation rates of positive and negative attributes over a timeline of six months, one year, and one and a half years. And what's really kind of curious about this is that the slowest line, which is the top line to the upper right, represents an application that's challenged on both the nature and the nurture side.

So they're challenged with negative attributes of the organization in which development is occurring, and negative behaviors from the nurturing of their developers: a lacking scan cadence, not leveraging APIs, only scanning with a single scan type, et cetera. They do tend to close flaws at a much slower pace than folks who are leveraging either a single good attribute, or both good attributes and good actions. The quickest line, which is down at the bottom left of this particular graph, represents the best case scenario. That is, positive attributes of the environment in which app dev is happening, and a proactive development team that has positive behaviors. So if we put this in line with the nature versus nurture metrics that we looked at before, these teams are scanning with multiple scan types; they're leveraging dynamic, SCA (software composition analysis), and static analysis.

Things are automated. The cadence is steady. Their application is small. And the organization is nimble and less complex. In reality, quite frankly, as most of you probably already know, a lot of applications fall in between the two. And what's interesting about that is the impact that good practices can make. These are things where you can lean on the nurture side to drive an impact right out of the gate, by adjusting the behavior within the organization.

So an idealized good app with just good practices and a relatively poor environment means that 50% of flaws are closed in just under two weeks, about 13 days. While bad practices on that same application can mean it will take almost twice as long to remediate those flaws. That's a huge difference. The difference is even larger when you're looking at bad applications, so those attributes that make up a less ideal environment for the application developer, where the behavior of that particular development team is also not that great. A team with bad practices working in a less ideal environment for that particular application can take almost a year to close 50% of the flaws that have been identified. I just think it's really interesting to think about what we can do today, and what we can do with what we have, to make a really big impact. Jason, I'm wondering if there's anything that sort of stuck out to you about how it might make sense to tackle some of this, knowing this information?

Jason Phillips: I don't know if we would call it offense or defense. But it kind of reminds me of the triangle offense or defense, where we're going to attack this with technology, people, and process. All three really need to be in place in order to take this line from the graph and achieve the good actions and good attributes that improve flaw remediation speed. So that we don't find, as this is showing, that a year and a half later we still have the same open flaws in our application. A real three-pronged approach, I think, is really necessary in order to take positive action on this.

Eric Wassenaar: And the idea that a developer can take it upon her or himself to influence the outcome of the security posture of that app, and the data showing that positive action matters even if the environment is not so great, I think bodes well for leaders to take back to their org and say, "You really can make a difference. Here are the things that we're expecting. And here are some tools to empower you to take those actions on a day-to-day basis." And start to turn the tide on how quickly flaws are fixed once they're identified in their apps.

So I want to round this out here and think a little bit about how folks can make the grade. We've talked a lot about the data. We've talked a lot about this idea of nature versus nurture. And I think three things really stand out to me that are indicative of the successful programs that I see and work with every single day. I think the organizations that are most successful with application security do take a hard look at the nature of their organization, and they in fact leverage it to their advantage.

And what I mean by that is, those folks who are building an application security program, or trying to increase the maturity of an existing program, engage the leadership with a documented process. They get the right people with the right process, and make sure that the technology supports the vision that they're trying to realize. Despite the size and complexity of a lot of SLED organizations, I've seen, not just in IT but in general, that the government and education sector takes a little bit of time for the windup. But once that ball is out in the world, it seems to accelerate, and those organizations can make a positive change en masse as it relates to software development life cycles. And I think understanding the nature of the organization you're working with is key to making that happen.

Jason Phillips: Yeah. And I would also add that we are better together. So working across departments, working amongst development teams, working with security and development teams, working with DevOps and everyone together. Bringing in different layers of leadership to be able to speak to policy, and to what kinds of flaws you're looking to eliminate first in terms of your prioritization approach. Just being able to work across all the different parts of the organization really helps build the expectations, it helps build the foundation, and it helps grow the application security program.

Eric Wassenaar: Right. I think that ties in well to this last point that you and I have both called out multiple times. And that is: build momentum toward a grander vision that you have as a goal for the future. Those organizations that start with a small, focused set of dedicated people who are on board with the project, a well-defined inventory of applications, and key performance metrics around that particular core set, or pilot set if you will, make sure that that's successful first. Then you can take that success story and that narrative to other parts of your organization, and get buy-in that way. You start to see this momentum build up pretty quickly. And I think those organizations that try to do everything all at once are oftentimes overwhelmed, primarily by just the sheer number of flaws that are introduced.

If this is your first time scanning your apps, the data shows us that you're probably going to have hundreds, maybe even thousands, of flaws. And it can be debilitating for a developer, and sort of demotivating in a way, to be saddled with all of that additional work and not really understand how to work through it in a systematic way. Building momentum with a smaller scale process, validating that it works, validating that your organization is ready for it, and then ramping up as your organization becomes more and more mature over time, I think is the smartest way, and is indicative of all the successful programs that we get to work with here at Veracode and in the government and education sector. Jason, do you have anything else to add to this?

Jason Phillips: Yeah, I would agree 100%. I think as you bring in more people, it really helps you level-set expectations. And getting a large number of flaws returned from one of your scans isn't as daunting when you're doing it together. And you can set those expectations, those goals, the small milestones to get to where you want to be. It's a cool process to watch and be involved with.

Eric Wassenaar: It is. It is really cool. We have one particular customer, actually. They've done a really tremendous job over the years of building a really strong program that is making the A plus grade in almost every metric that we can throw their way. This particular org developed an application development security oversight group that helps facilitate conversations amongst security and developers. And that's been wildly successful. And I think it's a great model and a great example of collaborating across the boundary lines that can kind of get in the way of a lot of different projects. It's been a huge impact. And really, really cool to watch. So I want to say thank you. I know that folks are busy, and I know that everyone has a schedule to keep. And I just want to thank you for choosing to spend your time with us today.

Speaker 1: Thanks for listening. If you would like more information on how Carahsoft or Veracode can assist your institution, please visit www.carahsoft.com/veracode. Or email us at veracode@carahsoft.com. Thanks again for listening. And have a great day.