What is ASPM: A breakdown of the current state and its future | James Berthoty (Security at PagerDuty)
Welcome to the Elephant in AppSec, the podcast to explore, challenge, and boldly face the AppSec Elephants in the room.
Today, we're excited to have an amazing guest, James Berthoty, joining us.
James has been in technology for over 10 years across engineering and security. An early advocate for DevSecOps, he's passionate about positioning security teams as contributors to the product.
With all his experience, he's currently building Latio Tech, a platform helping organizations find the best security tools.
In our latest episode with Tristan Kalos, we challenged James about his recent article on ASPM. We discussed what's right and wrong with its current state, what’s missing from Gartner's perspective, and what ASPM might look like in the future.
In our conversation, James shares:
- Why security teams are overwhelmed with the number of alerts
- Why "too many bad tools" is the real problem
- What ASPM is, and why it's a strategy to tackle the security tool sprawl and alert fatigue problem
- What motivated James to write about ASPM
- What’s missing in Gartner's perspective
- What makes certain ASPMs unique
- ASPM's missing capability: no one is taking fixing issues seriously
- Who will win the race for complete coverage?
- The elephant in the room: CSPM and ASPM are converging (CSPM for the cloud and ASPM for the code), and CSPM is now a feature of ASPM.
- Advice for young listeners
Let’s dive in!
Find the recap of this podcast below:
Security people want fewer tools - is it true?
Tristan: James, a year ago, I recall reading an article by Mark Curphey, a security founder, called "A security tool crash is coming." Its thesis was that security people want fewer tools, not more tools. You maintain a repository of application security tools called Latio Tech. What do you think of that statement? Do you agree with it? Do you think there are too many security tools out there?
James: Yeah, I was glad to see this article and work my way through it, and I'd say I half agree. I fundamentally don't think security teams are overwhelmed with the number of tools, but they are overwhelmed with the number of alerts. The core problem remains: how can we fix things and actually make progress? Many teams walk away thinking, "I have all these alerts because I have all these tools," but in reality, if you had 20 tools generating true positive alerts, you would probably stick with having 20 tools, because if they were all providing value, you wouldn't get rid of them. So I definitely understand the sentiment, going through the rest of his article as well. I think a lot of what he wrote applies to the large CSPM marketplace, because all of those vendors were heavily VC funded, heavily betting on overtaking Palo Alto and CrowdStrike, thinking, we are going to be giant public companies.
When you have that sort of race to the top, it does exactly what he's talking about. There's no reason to have Wiz, Lacework, Sysdig, and Aqua all in the same environment; there's just no benefit to that. But while I think some of those companies will fail, the smaller companies, especially within AppSec, that are operating really well, I don't see a reason those have to go away. One of the main things I've learned, even though my background is not in giant enterprises, is that giant enterprises are the main consumers of application security tools. And if you're a well-funded lean startup that's providing real value, you only need like five to 15 of those contracts to stay afloat.
And if your goal isn't to, you know, take over the world, to put an end to Palo Alto or whatever, then you can really sustain yourself for a long period of time. Before I got into the Latio space, I was very unaware of the amount of tooling that existed out there and the amount of things that I could do. I would have thought the same way: do I choose between Wiz, Sysdig, or Lacework, which were the main ones I was aware of? Aqua too. But knowing that there are so many more options that are doing much more specific things and providing value, I really don't see it that way. It is getting harder, because of budget constraints, to argue for why these niche tools are needed, because the attention span is shorter on that stuff. So really, the main battle is everyone versus the enemy of "well, I'll just increase our Palo Alto or CrowdStrike budget, and then we'll get all the functionality we need; we don't need these new tools." I think that's the main struggle, but I don't think it's primarily a "too many tools" problem. I think it's a "too many bad tools" problem.
Tristan: That's a strong statement.
What is a bad tool?
Alexandra: What do you define as a bad tool?
James: I mean, I usually just tell the story of the first AppSec tool I used, which I won't name. I thought I was just stupid or something, because I was getting thousands and thousands of alerts into a terrible dashboard across like a hundred microservices. And I was like, how do we ever make progress on this?
And I was having quarterly meetings with my developers, trying to talk through: is this a true positive? Is this a true positive? I was a lot worse at coding then. I'm still not great at it, but I'm better than I was. I was going through, piece by piece, wondering what was wrong with me. And then I finally tried Snyk, when Snyk was very much taking over the AppSec market in that consolidation phase. And I was like, whoa, I'm not stupid. This tool I was using just sucks.
Because Snyk's telling me, oh, I need to go from this version to this version, and this is probably a false positive, and here's better information about the vulnerability. And I can just use webhooks to scan in the pipeline; I don't need to stand up my own Docker images and all this stuff. So really, within that story, a bad tool is one that's just flooding you with pointless information that's not fundamentally helpful for developers, because every time I walk into a developer meeting with a bad tool, it makes me look foolish and it makes the developer waste a bunch of time. The best tool is whatever helps security look smart, and it fundamentally does that by telling the developers what they need to know to fix the issues.
Is ASPM the strategy to tackle alert fatigue?
Alexandra: And do you think that ASPM is the strategy to tackle that alert fatigue problem that you mentioned?
James: So this is why I fundamentally think ASPM is at a big crossroads, and I disagree so much with how a lot of the market has taken it, where it's garbage in, garbage out. If you view ASPM as this single pane of glass for all of your application vulnerabilities, and those tools are giving bad information, all you're doing is creating, like, look, you have three more widgets to filter by; you can see which pipelines have this tool versus that tool. That's pretty meaningless information that doesn't get to the heart of the problem. So I think the real opportunity with ASPM shows up if you parallel it directly to CSPM.
Where Wiz took over CSPM so heavily is because they didn't just aggregate a bunch of findings. They instead tried to push to the limits of like, what can we do without an agent to get visibility and to surface those findings in a way that makes a lot of sense to security teams. And in the same way, ASPM is recognizing the sprawl with too many bad scanners and the opportunity there.
But the actual opportunity is not like, let's consolidate all of this into one dashboard. The actual opportunity is like, let's fix and figure out how much we can do without an agent to scan all of this code and put the story together and give developers something that they can fix. And so there is an opportunity with ASPM, but it's not through the dashboarding approach.
What is ASPM
Alexandra: Most of our listeners are application security engineers, but maybe some are just, you know, at the beginning of their journey. So, it could be great to actually provide the definition of what ASPM (Application Security Posture Management) is and what it represents.
James: Yeah, for sure. I'll give my definition first, because I think it's way better: it's everything you need to scan your product, in one tool.
And I think that's such an easier way to understand what we're doing. All of these -SPM tools fundamentally just scan all of your stuff, whether it's code, APIs, cloud environments, whatever, and they surface to you: here are the security results that you need to go fix.
Fundamentally, ASPM is just a recognition that, at this point, I've identified at least eight different scanning possibilities within a pipeline. Instead of having eight different tools, let's just have one tool that can put that full story together. And so it's really telling that full, to use the buzzword, Code to Cloud story of here's how your code is getting to production, and let's make sure we're scanning it meaningfully for security issues every step of the way.
But the reason there's confusion in the industry right now is because, and I learned this since I haven't worked at enterprises, I had no idea that ASOC was a category, which was the initial category from Gartner. This was a category of tools that just existed to correlate all of your vulnerability data into a single place.
So the biggest, older providers are ones like Nucleus, and Vulcan, which is newer but still a bigger company, and there's a ton of startups right now that are very much focused on that market. But Gartner has since turned that term into ASPM, which is this idea of: we take all the scan data in and serve as a place to prioritize and remediate. And I just think that's fundamentally missing the opportunity to provide your own scanning value, instead of just this aggregation dashboard.
Motivations behind the ASPM article
Alexandra: Yes, I think that's what you mentioned actually in your article: WTF is ASPM?
That's why I also was very curious to read it. I think it's a very insightful article. What actually motivated you to write about it in the first place? And, you know, compare it to Gartner?
James: It's because I've met with so many vendors. In the last three months, I've probably talked to 20 or 30 different ASPM vendors, because they've all rebranded as this.
It's wildly different. Some, like ArmorCode or Phoenix Security, focus on aggregating your vulnerability data. They don't offer their own scanner; it's a light touch. It reminds me of a business analyst-type platform with filters, writing ETLs against your standardized vulnerabilities database.
These tools resemble startups in the Nucleus space, such as Dazz, Avalor, Opus, and Silk; there are numerous players in that market. To me, the ArmorCode or Phoenix approach isn't that different from Avalor's: aggregating vulnerability data and perhaps adding a few unique touches. These tools, labeled ASPMs, sit alongside companies like Arnica and Snyk, which provide specific value of their own.
You can also look at Apiiro, Ox, and many others that have built their own scanning value across different tools, going beyond just aggregating your dashboard. So I want to differentiate between vulnerability management solutions, as half of these acronyms seem to exist to serve that market, and those doing all-in-one scanning, which is what I think Snyk initiated. Snyk, initially an SCA tool, wasn't that interesting, but when they started offering SAST, IAC, and container scanning in one place, it evolved into an ASPM. Other vendors have since caught up or surpassed them in functionality on that platform.
Tristan: That makes total sense. So, in a way, you disagree with the Gartner definition because, for them, ASPM is basically the same as ASOC. But in your opinion, ASPM is ASOC plus native scanning capabilities to cover all eight possibilities in a pipeline.
James: Yeah, absolutely. I wouldn't have even categorized ASOC separately. It's essentially vulnerability management: misconfigurations, CVEs, etc. It's a dashboard to manage and prioritize vulnerabilities. Each individual pipeline scanner is quickly becoming commoditized. Though I hesitate to say that, since entities like Endor Labs are doing amazing things with just SCA. However, as a whole, the market is consolidating towards providing SCA, IAC, container scanning, etc., all in one place. People seek value in consolidating tools because managing 10 different tools in one pipeline, especially with different Dockerfiles, becomes challenging. Having 10 different tools making PR comments every time a developer pushes a line of code is not ideal. People prefer avoiding that tool sprawl.
It makes total sense to consolidate around scanning, but there's no reason to include vulnerability management providers in that. The workflow doesn't even make sense theoretically. Having Semgrep scan something in the pipeline, getting results, and then having Snyk come in via webhook to get results, but then sending them somewhere else for PR comments—it adds unnecessary complexity. On the other hand, the alternative pitch is appealing: removing all those tools and replacing them with ours. That's the actual consolidation that people find enticing.
Unique aspects of some ASPM tools
Alexandra: Of the 20 to 30 ASPM vendors you spoke with, as you mentioned, each had something unique. Can you provide examples of these unique aspects?
James: Absolutely, it boils down to the details, which I believe will be the main focus for the next year. During evaluations, it's challenging to discern which tool excels in specific areas. I usually ask, "What in-house creation are you most proud of?" This usually leads to showcasing one particular thing. Some examples include Apiiro and Ox's unique technology for data mapping. While their individual scanners may not stand out notably, Apiiro, similar to Bionic, examines configuration files to create an API mapping for data correlation and prioritization. It's unique, but is it distinctive enough for a preference over others? That's a tougher question. Oxeye, on the other hand, employs an eBPF agent that maps out your application. Arnica boasts outstanding out-of-the-box workflows, relying on a GitHub user map to send Slack alerts, allowing users to ignore or act on them, even generating Jira tickets for ignored alerts.
Separately, Ox provides a comprehensive approach to assembling code-to-cloud pictures, linking container images to their GitHub repo locations. Cycode impresses with remarkable querying capabilities. This illustrates my point about the various little differentiators that make it challenging to discern the value you're getting from each specific tool.
ASPM's missing capability: no one is taking fixing issues seriously
Tristan: If you examine the current market, there are numerous vendors in ASPM. Everyone is rebranding as ASPM. Some lack scanning capabilities, while others possess them. Among those with scanning capabilities, there are slight differences—small tweaks, unique features—that set them apart. Despite these nuances, they generally maintain a similar level of quality, making it challenging to differentiate. However, do you believe there's a capability lacking in all of them? Is there an obvious feature that none of them currently possess?
James: I often find it surprising that no one seems to take fixing issues seriously. In a recent conversation, I realized it's because vendors are overly focused on listening to security professionals rather than developers. When I express concerns about my day-to-day job, it's mainly about wanting more effective ways to prioritize tasks. The common issue is wasting time having developers fix things that were never actually vulnerable, necessitating better false positive categorization. This should be the priority, but the real problem is that no one is simplifying how to fix the actual problems.
Security teams often take a time-consuming approach, creating a mini Proof of Concept for every vulnerability to determine if it's exploitable. For instance, dealing with a transitive dependency vulnerability in a tool like Vault can take weeks. The broken approach is to create a zero-day for each vulnerability when the actual solution is often as simple as upgrading to a newer version of the tool.
I'd like to see a shift in focus within tools, moving from counting alerts to quantifying fixes. Instead of saying you have 20,000 vulnerabilities, it should be framed as having a hundred code changes to make. This is something you can present to a development team, prioritize, and address. The market trend of introducing more filters to reduce a list of 20,000 vulnerabilities is akin to putting lipstick on a pig—it doesn't address the root problem.
In the container space, most fixes involve redeploying images due to base image patches. Rather than having every image with 10,000 alerts, a more consolidated approach would be one alert per image: redeploy it. The key to consolidation isn't more filtering but rolling up by fixes.
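James's "roll up by fixes" idea can be sketched in a few lines. This is a minimal illustration of the grouping logic, not any vendor's implementation; the finding fields (`cve`, `image`, `fix`) are hypothetical scanner output invented for the example.

```python
from collections import defaultdict

# Hypothetical scanner output: each finding names the affected image
# and the remediation (here, a base-image rebuild) that resolves it.
findings = [
    {"cve": "CVE-2024-0001", "image": "api:v3", "fix": "rebuild on debian:12.5"},
    {"cve": "CVE-2024-0002", "image": "api:v3", "fix": "rebuild on debian:12.5"},
    {"cve": "CVE-2024-0003", "image": "worker:v7", "fix": "rebuild on alpine:3.19"},
]

def roll_up_by_fix(findings):
    """Collapse per-CVE alerts into one actionable item per fix."""
    fixes = defaultdict(list)
    for f in findings:
        fixes[(f["image"], f["fix"])].append(f["cve"])
    return [
        {"image": image, "action": fix, "resolves": cves}
        for (image, fix), cves in fixes.items()
    ]

actions = roll_up_by_fix(findings)
print(len(findings), "alerts ->", len(actions), "fixes")  # 3 alerts -> 2 fixes
```

The key move is the grouping key: instead of listing every CVE, the report becomes "two images to rebuild," which is something a development team can actually schedule.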
Alexandra: Is that something you'd like to know during product feature development: how many potential vulnerabilities there are, what code fixes are needed, and where you need to pay attention before developing? Do you think it would provide valuable information?
James: Yeah, I think the whole aspect of pre-planning risk lacks sufficient tooling currently. There are a couple of tools I'm aware of that take a threat modeling approach, assisting with threat modeling on different services. The challenge is that evaluations for vulnerability tools often focus on stopping immediate issues or preventing the introduction of new vulnerabilities. However, in day-to-day AppSec, vulnerabilities typically don't arise from new code, because new code is actively written and watched.
When I import a library, developers aren't intentionally incorporating 10-year-old libraries; the library becomes aged based on its tenure in the environment. So, in response to your question, the evaluation focus should be on how the tool helps address tech debt and ensures clear asset ownership. It's not about the tool's sophistication in detecting peculiar coding practices. The real issue lies in dealing with the 5,000 SAST findings post-deployment, not the rare instance it finds something before the production release.
The Elephant in the Room - Convergence of ASPM and CSPM
Alexandra: Yeah, very clear, that's exactly what I asked. Also, you know, we love to explore the elephants in the room, hence the name of the podcast. In the article, you mentioned that the elephant in the room is that ASPM and CSPM are converging: CSPM for the cloud and ASPM for the code. Now, CSPM is kind of a feature of ASPM. Why do you think this is happening?
James: Yeah, one of the proudest vendor consolidations I was able to do was to get rid of CSPM entirely in a former role, because if your environment is a hundred percent controlled by infrastructure as code, there's no reason you should need to scan it for misconfigurations at runtime; you can detect all of it as code. CSPM, not in the sense of runtime assets but just crawling cloud provider APIs to look for misconfigurations, is obsolete with infrastructure as code. ASPM has a real opportunity to take over this market. I was hopeful this would happen with Snyk Cloud, but it didn't quite pan out.
What excites me the most about ASPMs are the ones taking the full application code-to-cloud context seriously. They offer a complete all-in-one security scanning solution. Those are the ones where you can say, "I don't need a CSPM because I'm scanning everything as code, creating a runtime environment picture entirely from scanning the code."
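James's "detect misconfigurations as code" point can be illustrated with a toy check over Terraform plan output, run before anything is deployed. This is a hedged sketch: the plan JSON below is a hypothetical, heavily trimmed stand-in for what `terraform show -json` emits, and the rule (flagging public S3 bucket ACLs) is just one example policy.

```python
import json

# Hypothetical, trimmed-down Terraform plan JSON; real plans are
# produced by `terraform show -json plan.out` and are far larger.
plan = json.loads("""
{
  "planned_values": {"root_module": {"resources": [
    {"type": "aws_s3_bucket_acl", "name": "logs",
     "values": {"acl": "public-read"}},
    {"type": "aws_s3_bucket_acl", "name": "data",
     "values": {"acl": "private"}}
  ]}}
}
""")

def find_public_buckets(plan):
    """Flag S3 bucket ACLs that grant public access, before deploy."""
    resources = plan["planned_values"]["root_module"]["resources"]
    return [
        r["name"] for r in resources
        if r["type"] == "aws_s3_bucket_acl"
        and r["values"].get("acl", "").startswith("public")
    ]

print(find_public_buckets(plan))  # ['logs']
```

Because the misconfiguration is caught in the plan, it never reaches the cloud account, which is exactly why an IaC-controlled environment has less need for an API-crawling CSPM.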
However, it's challenging as CSPM vendors are entering this space, recognizing that CISOs want an all-in-one visibility tool. CSPM tools, with more funding and market presence, are ahead. Still, it might take a while for developer mindshare to shift towards them. On the other hand, tools built more for developers might have an easier time shifting right but with fewer resources due to a newer, smaller market. Consolidation is inevitable, but it's uncertain who will emerge victorious—more money or existing adoption. I honestly don't know which side will win, but there will be a winner.
What happened at Snyk
Tristan: You've mentioned Snyk several times. So, specifically, what happened at Snyk?
Why do you think they didn't just build this full-featured ASPM platform and go straight from SCA to DAST to runtime protection? Why did they stop at some point, instead of adding more scanners and more features?
James: I mean, Snyk is still an amazing product. I want to make sure I don't come across like I'm bad-mouthing them or anything. What they have on the vulnerability side is way ahead of what most people reference them for—the detailed work of creating a line-by-line database for function-level reachability to CVE findings, which is difficult and often overlooked. The positives are clear, and Snyk recently launched their ASPM platform, taking the Gartner approach to manage all risk in a single dashboard.
However, the Snyk Cloud launch was intriguing because there was a conflict at the foundational level. They weren't initially going to get into runtime, but scanning cloud involves scanning runtime. This conflict influenced the product. Now, it's rolled into IAC, forming part of the infrastructure-as-code offering, which makes more general sense. The code-to-cloud picture is crucial, and while the platform can tell you the repo and deployments, it struggles to show where a Docker image lives in production. This complete code-to-runtime picture is what excites me about any tool, but currently, Snyk faces challenges in providing that full picture.
What would the first full ASPM tool look like
Tristan: Yeah, of course. ASPM can merge with CSPM, and by CSPM, I mean the runtime perspective. It provides both the view of the code and where it's actually running. This raises another question. You mentioned Snyk stopping at some point due to a strategic decision, choosing not to go for full scanning capabilities and focusing on some. Do you think the first tool with full scanning capability, the first full ASPM tool, will be a code scanner extending its capabilities to cover the full pipeline, or will it be a new entrant to the market that starts as an ASPM tool right away? Who will win this race?
James: Yeah, I mean, that's why on the Latio site, Ox is the choice—not just for the value of any single scanner but because they cover it all, telling the full story. In the ASPM realm, the two companies I like the most are Ox and Aikido. Aikido makes sense for startups aiming for SOC 2 compliance, offering a straightforward solution with multiple scanners. Ox has been smart in allowing customers to bring their own tools or use theirs, essential for a smooth transition. Arnica, on the other hand, takes a bolder approach, claiming to have a superior scanner and encouraging a direct switch. Both strategies have their merits.
Tristan: Yeah, of course. But, in any case, it's very hard to move a company away from all the scanners because there's a lot of configuration involved. For instance, if I take Semgrep, most companies using Semgrep or any static analysis tool have their own carefully crafted rules by application security engineers over years to match their precise code base. Trying to move them away from this knowledge and expertise embedded in the configuration would be very hard.
James: That's definitely something easy for me to forget about without an enterprise background. While I've used some custom rules, it's never been at the scale of having hundreds of them across thousands of repos. If you're launching a full ASPM platform, it either needs to be with a small team within the company or capable of handling everyone else's scanners. Otherwise, you're competing against tools that integrate with all existing solutions and orchestrate them effectively. Legit's success lies in ingesting data from various scanners and presenting comprehensive pipeline coverage, making it a smart approach for enterprise sales.
Tips for learning more about ASPM
Tristan: That makes total sense. For the listeners that are new to the topic, for instance, that are not familiar with ASPM, what would you advise to learn more about the latest innovations in the field of ASPM and application security in general?
James: I think it's a hot market, and so there's going to be plenty of like marketing stuff.
To navigate this, I recommend focusing on individuals who excel in specific tools. Rather than learning about ASPM broadly, understand each tool's unique aspects. To assess effectively, you must grasp what distinguishes a good IAC scanner from a bad one and comprehend the associated challenges. Demos often showcase IAC findings, but without understanding the why, it's challenging to interpret them. Explore leaders in various categories, such as reachability or DAST running in pipelines, to grasp their specialties. Feel free to check out my LinkedIn, YouTube, and Latio.tech for additional insights.
Tristan: Certainly, we'll include the link in the podcast description. It's excellent to centralize data and expertise in one place. Navigating all security tools is challenging due to their varying pros and cons. It's essential to find what suits your use case. Instead of diving into ASPM broadly, focus on understanding each vendor's strengths and weaknesses. This approach helps in making informed choices.
James: Right.
Yeah, a significant moment for me was trying to figure out if I was bad or not. People should feel comfortable blaming the tool instead of themselves for not understanding what's happening. I was lost until I came across something from Jeevan and others during a Twilio, Netflix, Meta talk. They discussed the state of app security and how they deal with it. Application security involves thousands of vulnerabilities, with many left unaddressed. It's helpful to hear honest perspectives, as companies won't openly disclose the number of vulnerabilities in their backlog. This transparency helps assess whether one is doing a good job or not.
Tristan: Sure. So great. And one last piece of advice for our listeners in general.
James: There's an idea that keeps coming to the forefront of my mind; I can't express it perfectly, but here it is: learning Kubernetes is crucial.
Tristan: This is a great one.
James: It's surprising that many cloud tools I talk to still don't support Kubernetes. It's where your application is deployed, providing the overall context of findings. Kubernetes skills are challenging to hire, and it's foundational. In these tools, the blind spot often lies in container security. Snyk excels in this area, tackling the complexity of container security that many prefer to avoid. Understanding the container lifecycle is challenging but presents a significant opportunity and skill gap.
Alexandra: Thanks a lot for the great conversation today. It was nice to have you on our podcast. Have a great day.
Tristan: Thanks. Excellent conversation, James.
James: Thank you both.
💡 Want to discover other episodes? Check them out below:
- SCADA systems: How secure are the systems running our infrastructure? with Malav Vyas
- Threat modeling: the future of cybersecurity or another buzzword with Derek Fisher
- Security experience: top-down vs bottom-up with Jeevan Singh