Webinar recap: How to combine SAST and DAST for complete application coverage
Modern applications come with a whole host of challenges that legacy SASTs and DASTs simply cannot keep up with. If you want to be sure no vulnerabilities slip through the cracks in these applications, the key is combining a modern SAST and DAST.
Why?
Last week, Escape’s CEO Tristan Kalos brought the DAST, and Amit Bismut, Head of Product at Backslash Security, brought the SAST, as they joined forces to explore exactly that, in a webinar titled “The Future of AppSec: SAST + DAST combined for complete application coverage.”
You can watch the replay here:
This article is an overview of everything Tristan and Amit illustrated, from what SAST and DAST actually are, to precisely why modern SASTs and DASTs are imperative to ensuring comprehensive coverage of your applications, especially when testing complex vulnerabilities like access control issues, BOLAs (Broken Object Level Authorization), and IDORs (Insecure Direct Object References).
We will then outline exactly where throughout the SDLC you can implement SAST and DAST (and there are numerous places!) before rounding up with a summary of why SAST and DAST are powerful individually but stronger together. That’s not all, though, as you can also find the Q&A section from the webinar at the end. With so many exciting ideas to get through, let’s dive straight in.
What are SAST and DAST?
Before considering why SAST and DAST are better together, Amit and Tristan ran through exactly what SAST and DAST are, and why legacy SASTs and DASTs simply cannot match up to their modern counterparts when it comes to securing modern applications.
SAST
Static Application Security Testing (SAST) analyzes application source code to identify vulnerabilities such as SQL injections, path traversal, and cross-site scripting. But while traditional SASTs rely on pattern matching to uncover vulnerabilities, in 2025 modern SASTs go far beyond just analyzing code in order to actually understand the application.
How modern SAST stands out
Snapshot summary:
- Modern SASTs actually understand the dataflow and the relationships between code components.
- This means they can discover secrets, configurations, and business logic flaws.
- They excel at finding triggerable injections in code and triggerable vulnerable packages.
- By analyzing security frameworks, they can see whether vulnerabilities are actually exploitable.
From understanding the Software Bill of Materials (SBOM) and the different packages an application invokes, to understanding the dataflow and how these packages then interact with one another, a modern SAST analyzes the code and connects all of these (SBOM, dataflow, application configuration, etc.) into a single graph that tells the story of the application. It then tries to find all of the entry points (sources) and exit points (sinks). When a dataflow is discovered that starts at an entry point and ends at a vulnerable exit point, that's where an exploitable vulnerability can be detected.
This is where modern SASTs stand apart from legacy SASTs. Not only can a modern SAST understand the relationships between code, it can also tell whether any security frameworks sit in the dataflow for a given vulnerability, and therefore whether that vulnerability is actually exploitable. A modern SAST goes far beyond the capabilities of legacy SASTs because it understands the operation of the application as a whole: it analyzes the relationships between files and the data that flows through them, and dives deeper into the code, allowing it to spot and test for secrets, configurations, and business logic flaws.
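To make the source-to-sink idea concrete, here is a minimal, illustrative sketch of walking a dataflow graph from entry points to exit points. This is not Backslash's actual engine, and all node names are hypothetical:

```python
from collections import deque

# Hypothetical dataflow graph: each node is a code location, each edge a
# propagation of data (assignment, call argument, return value, etc.).
GRAPH = {
    "http.request.param('id')": ["build_query"],       # source: user input
    "build_query": ["db.execute"],                      # string concatenation
    "db.execute": [],                                   # sink: raw SQL execution
    "config.load": ["logger.info"],                     # benign flow
    "logger.info": [],
}

SOURCES = {"http.request.param('id')"}   # attacker-controlled entry points
SINKS = {"db.execute"}                   # dangerous exit points
SANITIZERS = {"orm.parameterize"}        # security frameworks that break the flow

def exploitable_flows(graph, sources, sinks, sanitizers):
    """BFS from every source; a path that reaches a sink without passing
    through a sanitizer is a potentially exploitable dataflow."""
    findings = []
    for src in sources:
        queue = deque([[src]])
        while queue:
            path = queue.popleft()
            node = path[-1]
            if node in sanitizers:
                continue                 # flow is neutralized, stop exploring
            if node in sinks:
                findings.append(path)    # source reaches sink unsanitized
                continue
            for nxt in graph.get(node, []):
                if nxt not in path:      # avoid cycles
                    queue.append(path + [nxt])
    return findings

for path in exploitable_flows(GRAPH, SOURCES, SINKS, SANITIZERS):
    print(" -> ".join(path))  # http.request.param('id') -> build_query -> db.execute
```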
DAST
Dynamic Application Security Testing (DAST) analyzes a running application by simulating attacks to identify vulnerabilities from an external perspective. A standard DAST is easy to deploy and scale, as you simply point it towards a running application and it can help find the issues that are directly exploitable by malicious users.
But that’s where the benefits of legacy DASTs end. They are rife with limitations, to the point where some security professionals think “DAST is dead” - but this is where a modern DAST with business logic testing capabilities comes in.
How modern DAST stands out
Snapshot summary:
- Automatically pentests like a human because it understands an application’s structure, allowing it to find business logic flaws.
- Can test for complex vulnerabilities like access control issues, BOLAs, and IDORs.
- Native support for testing APIs, microservices, and single-page applications.
- Empowers developers with actionable remediations and effective prioritization.
Modern applications are built on APIs (which is where 80% of web traffic flows!) and complex business logic. However, legacy DAST tools are built to test web pages, meaning they are simply not equipped to provide adequate coverage for these modern applications, unlike a modern DAST.
Using powerful reinforcement learning techniques to create a “chain of thought”, Escape’s modern DAST can automatically assess an application the way a human pentester would. This enables it to test the business logic of an application, because it actually understands the structure of the application and the dependencies between different data and resources.
“Modern applications, we know, are based on business logic flows. The user will call an API, get some data from the API, and reuse this data in another transaction. It is a graph of requests that have dependencies on each other. A modern DAST is smart enough to understand the structure of the application and run very efficient testing at the business level.” - Tristan Kalos, CEO, Escape
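To make this “graph of requests” concrete, here is a minimal sketch of a dependency-aware test against a hypothetical REST API - the response from one call feeds the next, which is exactly the structure a modern DAST infers and then mutates:

```python
import requests  # third-party HTTP client (pip install requests)

BASE = "https://api.example.com"  # hypothetical target application

session = requests.Session()
session.headers["Authorization"] = "Bearer <token-for-user-a>"  # placeholder token

# Step 1: call an API and capture data from the response...
orders = session.get(f"{BASE}/orders", timeout=10).json()
order_id = orders[0]["id"]  # assuming a list of order objects with integer IDs

# Step 2: ...and reuse that data in a dependent request. A DAST that
# understands this dependency can mutate order_id to probe whether objects
# belonging to other users are reachable - a BOLA/IDOR test.
probe = session.get(f"{BASE}/orders/{order_id + 1}", timeout=10)  # neighboring object

if probe.status_code == 200:
    print("Potential BOLA: user A can read an order they may not own")
```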
So, to compare, a modern DAST is:
- Actionable for developers: Unlike legacy DASTs, a modern DAST actually streamlines the remediation process for developers.
- Easy to implement in the SDLC: They also help you shift security left by being easy to implement in the SDLC (again, unlike legacy DASTs), so developers can take action on findings.
- API-native: While legacy DAST tools are built to test web pages, a modern DAST like Escape has native support for testing APIs, microservices, and single-page applications - all the building blocks we use to actually produce modern applications.
- Able to test business logic flaws: Unlike a modern DAST, legacy DAST tools cannot test the dependencies between data and resources.
With such a significant difference between modern and legacy DASTs, not only in time to value but also in accuracy and efficacy, it is clear that legacy DASTs can no longer compete in modern application testing strategies.
How they fit into the SDLC
Where to implement SAST in the SDLC:
- IDE: Give developers insights into potential vulnerabilities right as they are writing their code.
- Pre-Commit: Scanning the code after a secret has already been pushed is ineffective at preventing the leak, so use SAST to find the secret before it reaches the Git provider (a minimal sketch follows this list).
- PR Check: A harder gate that actually blocks vulnerabilities - whereas in the IDE they are only identified - by scanning and reviewing the new code that enters the application.
- Continuous monitoring: Primarily to identify packages when they actually become vulnerable, since 99% of packages aren’t vulnerable any earlier in the process - the vulnerability is typically disclosed only after the code has shipped.
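As referenced in the Pre-Commit bullet above, here is a minimal, illustrative pre-commit hook - not Backslash's scanner, and real tools ship far richer pattern sets - that blocks a commit when the staged diff looks like it contains a secret:

```python
#!/usr/bin/env python3
# Illustrative pre-commit secret check: save as .git/hooks/pre-commit and make executable.
import re
import subprocess
import sys

# A few common secret shapes; production scanners use hundreds of curated patterns.
PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                        # AWS access key ID
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),  # private key blocks
    re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{16,}"),
]

# Inspect only the staged diff so the hook stays fast.
diff = subprocess.run(
    ["git", "diff", "--cached", "--unified=0"],
    capture_output=True, text=True, check=True,
).stdout

hits = [
    line for line in diff.splitlines()
    if line.startswith("+") and not line.startswith("+++")
    and any(p.search(line) for p in PATTERNS)
]

if hits:
    print("Possible secrets in staged changes - commit blocked:")
    for line in hits:
        print("  ", line[:120])
    sys.exit(1)  # non-zero exit aborts the commit
```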
“There is kind of a lie that the industry told security teams: that shifting left is the silver bullet that fixes everything. So if you have packages that are vulnerable, just put the security gate very early in the process and the problem will go away. But in reality, there are many different findings that we can uncover by statically analyzing the application.” - Amit
Where to implement DAST in the SDLC:
- Continuous monitoring: Regularly scan the production environment so results from exposed applications are sent to developers to remediate.
- Staging: To shift security left, the DAST is connected to Git or the CI/CD and re-scans and tests the pre-production environment every time there is a new version of the code, so developers can continuously fix issues (see the sketch after this list).
- Live preview environments: If you have these, you can trigger a DAST scan for every version of the code, giving developers insights in real time.
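As a concrete illustration of the staging bullet above, here is a minimal sketch of a CI step that triggers a DAST scan of the pre-production environment after each deploy. The endpoint, payload, and response fields are hypothetical (they are not Escape's actual API), and the scan is assumed to run synchronously for brevity:

```python
import os
import sys

import requests

# Hypothetical DAST API - swap in your vendor's real endpoints and schema.
DAST_API = "https://dast.example.com/api/v1/scans"
TOKEN = os.environ["DAST_API_TOKEN"]                  # injected by the CI runner
TARGET = os.environ.get("STAGING_URL", "https://staging.example.com")

# Kick off a scan of the freshly deployed pre-production environment.
resp = requests.post(
    DAST_API,
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"target": TARGET, "commit": os.environ.get("CI_COMMIT_SHA", "")},
    timeout=30,
)
resp.raise_for_status()
scan = resp.json()  # assuming the scan runs synchronously and returns results

# Fail the pipeline when new high-severity findings appear, so developers
# fix them before the release ships.
if scan.get("new_high_severity", 0) > 0:
    print(f"DAST found {scan['new_high_severity']} new high-severity issues")
    sys.exit(1)
```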
Where you implement the DAST depends upon the maturity of your development process and what would work best for your environment, but whether in one or all parts of your SDLC, modern SASTs and DASTs hugely streamline remediation in comparison to their legacy counterparts.
Why SAST and DAST are stronger together
Overall, SAST and DAST target largely different sets of vulnerabilities, meaning that, put together, you get coverage of a greater range of issues. Static approaches can dive deep into any code issues - whether cloud-native, in APIs, or even not connected to the internet - discovering issues like hard-coded secrets or anything related to best practices that you may not be able to find in the interface.
The strength of a modern DAST, on the other hand, is its ability to detect vulnerabilities in any interface - including SPAs, APIs, and microservices - even when that interface doesn’t hold code. For example, a GraphQL API does not necessarily hold a piece of code that defines it; it’s the GraphQL interface itself that might be vulnerable. These issues, such as access control flaws, can’t necessarily be found in the code. DASTs also simulate attacks that touch the entire application, not just the entry point.
“There are many vulnerabilities that are way easier to find and fix at the static level and other vulnerabilities that are easier to find and fix at the dynamic level. So there is a complementarity here that is very strong, and I’m bullish on both.” - Tristan
So, while both SAST and DAST can uncover issues like SQL injections or XSS, they are designed to detect different vulnerabilities and help with remediation at different levels. SAST helps with finding phantom packages and giving the full SBOM, whereas the modern DAST will help you uncover shadow assets and undocumented APIs.
Combining both means vulnerabilities are easier to fix because they are detected earlier in the SDLC. With SAST, you can pinpoint the line of code and the developer that introduced the vulnerability. The DAST, on the other hand, is easier to scale, because you can launch it on thousands of applications without even knowing who develops them. It understands the context and detects business logic issues that only surface when the code runs against a live database, finding the issues that are directly exposed to your attackers.
It is only by combining the two that you can understand both how developers are building an application as well as how and where anything is getting exposed, and this is what will give you full visibility over the entire SDLC and a complete process of securing your applications.
“The combination of both is really not only a way to detect different vulnerabilities, but also to get different levels of remediation for vulnerabilities at scale that are easy to remediate.” - Tristan
Often, when we think about SAST and DAST, they are put in the respective boxes of white and black box testing, but modern SASTs and DASTs like Backslash and Escape go beyond this. A goal in security testing is not only to detect issues but also to protect security’s relationship with the development team, by ensuring they are only working on relevant fixes. Therefore, these tools go beyond this white/black box scope to ensure that any issues found are real exploitable risks, avoiding the alert fatigue from the slew of false positives in legacy SASTs and DASTs.
Escape employs a hybrid approach where the Escape DAST analyzes code to guide the dynamic testing, ensuring more sensitive and accurate results; similarly, Backslash’s SAST doesn’t just analyze the manifest file but also the code, to find phantom packages that would otherwise go undiscovered.
“I see so many security teams that have a bad experience with tools where their developers do not really trust their process. So we’re super sensitive about finding things that are accurately exploitable, starting with the right things that are real risks and true positives.” - Tristan
Conclusion
Modern SASTs and DASTs go beyond basic static and dynamic testing to actually understand how an application operates, which is imperative to uncover vulnerabilities like phantom packages, shadow APIs, and hidden assets that you simply would not be able to find otherwise.
Crucially, these tools can test for business logic flaws, to find a whole range of issues that are fundamental to the operation of modern applications.
SAST and DAST each target different layers of applications, with the SAST uncovering issues earlier in the SDLC and the DAST coming in from pre-production.
Implementing both is the key to complete coverage over a greater range of issues, from how developers are building an application to exactly what they’re exposing on the other end. Modern SASTs and DASTs discover critical issues missed by legacy tools, uncover business logic flaws at scale, and empower developers to remediate risks effectively and efficiently.
Q&A
What about IAST?
IAST is Interactive Application Security Testing. It is a form of testing that installs an agent to instrument the application and guides the dynamic security testing with information from inside the application at runtime. It has been praised for its ability to understand exactly what happens inside the application during testing. On the other hand, it is quite hard to set up, because you have to install an agent, and if you have many different languages, supporting them all is complicated. At Escape, we have a hybrid approach that reads the code of the application and uses this understanding of the code to guide the dynamic testing. So we analyze an application statically, reading the code, and then we can launch very precise tests without needing to install an agent or modify the application at runtime. You want security tools that can give you 100% coverage and time to value very quickly, which is why being agentless has become a big trend in recent years. This is likely one of the reasons that IAST didn’t become as popular: it involves changing your code, which basically means more friction with developers.
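To sketch that hybrid, agentless idea under stated assumptions - a Flask-style codebase in an app/ directory and a hypothetical staging URL - the flow looks something like this: extract entry points statically, then aim dynamic tests at them, with no agent installed:

```python
import re
from pathlib import Path

import requests

BASE = "https://staging.example.com"  # hypothetical running application

# Step 1 (static): read the code to discover entry points. Here we simply grep
# for Flask-style route decorators; a real engine builds a full code model.
ROUTE_RE = re.compile(r"@app\.route\(['\"]([^'\"]+)['\"]")
routes = set()
for src in Path("app").rglob("*.py"):
    routes.update(ROUTE_RE.findall(src.read_text(errors="ignore")))

# Step 2 (dynamic): use that understanding to launch precise tests against the
# running application - no agent to install, no runtime modification.
for route in sorted(routes):
    url = BASE + route.replace("<int:id>", "1")  # naively fill path parameters
    resp = requests.get(url, timeout=10)
    print(f"{resp.status_code} {url}")
```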
Regarding a DAST strategy, is it recommended to do a complete scan on the entire application as releases occur or only scan the parts of the application that have materially changed?
If you think about business logic and the application as a whole, even a small change on a specific route can generate a BOLA at the application level. The approach we advise: if you have DAST in the CI/CD, scope it to what your developers are doing. You don’t want a full scan every time your developers release something; instead, run full scans on your pre-prod or prod environment. Modern DAST scans are very fast - they run in 6 minutes - so do not hesitate to run those full scans every time you release a major change, like a new version of your application, because this can really help with finding and assessing the application as a whole.
It’s also a good question for static analysis. When you integrate SAST into the CI process, you really want a very fast scan, because it creates friction: developers need to wait for it. One of the best ways to do that is a delta scan. Let’s say you have a feature branch with all the changes you made. The recommendation is to analyze those changes and understand how they affect security and whether they introduce vulnerabilities. That’s one way to make the scan really quick. Another is to make sure we only surface issues that are new, because we don’t want to show developers issues they already had - that’s not the right workflow, not in the CI. For that we have tickets and another process. So we would do that for code vulnerabilities and static analysis; DAST takes a different approach.
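A minimal sketch of that delta-scan idea, assuming the feature branch is checked out in CI and main is the comparison base:

```python
import subprocess

# Files changed on this feature branch relative to main - the scan scope.
changed = subprocess.run(
    ["git", "diff", "--name-only", "origin/main...HEAD"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

# Hand only source files to the analyzer; skip docs, lockfiles, etc.
scope = [f for f in changed if f.endswith((".py", ".js", ".ts", ".java"))]

print(f"Delta scan: analyzing {len(scope)} changed files instead of the whole repo")
# analyze(scope)  # hypothetical call into a SAST engine, scoped to the delta
```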
How do SAST and DAST tools adapt to evolving or zero-day threats? How does it stay up-to-date with the latest vulnerabilities and attack vectors?
There is no cheat code for it. On Escape’s side, we have a research team and they work every day on finding new kinds of attacks and doing security research. When everyone started using LLMs in their apps, we did some research on how to test LLMs with DAST and we implemented that directly. So every time there is a new technology, a new threat, or a new vulnerability that is exploited, we add a security test inside of Escape, and all our customers can have this test running on their application instantly without doing anything. In Escape's documentation, which is publicly accessible, we have a continuously updated list of all the security tests, attack scenarios, and vulnerabilities that we can discover out of the box. So you can see these numbers grow every week with new default security tests and new vulnerabilities that are discovered.
The same thing goes for static analysis. The packages part relies on known vulnerabilities, so it’s not really relevant for zero-days, but analyzing the code itself uses different vulnerable patterns. So we also have a research team who analyze different patterns for vulnerabilities, and once we find them, we run research across open-source code to see whether those patterns actually exist in the wild, so that we can add them to our engine and find whether similar patterns are vulnerable across our customer base.
How are pipelines analyzed to prevent misconfigurations? Same thing with CI/CD analysis. And how is severity calculated? How is it auto-remediated?
I think that static analysis is really good for misconfigurations. There are patterns that are really well-defined for misconfigurations. One obvious option is analyzing infrastructure as code. Maybe the simplest example is an S3 bucket that is not encrypted or is publicly open - a flag in the configuration that we can easily detect. There are more sophisticated ways of thinking about misconfigurations. Many people think of development packages as unimportant, but what if your configuration doesn’t take that into consideration, and when you build your code you don’t specify which environment you’re using? Then you’re taking development packages into production. This is something we found to be very common, and we can find it very easily using static analysis.
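As a minimal illustration of the S3 example - assuming a Terraform plan rendered to JSON with `terraform show -json plan.out > plan.json`, and the pre-v4 AWS provider schema where `acl` and `server_side_encryption_configuration` are bucket attributes - the checks are simple flag inspections:

```python
import json

# Load a Terraform plan rendered to JSON.
with open("plan.json") as f:
    plan = json.load(f)

resources = plan.get("planned_values", {}).get("root_module", {}).get("resources", [])
for res in resources:
    if res.get("type") != "aws_s3_bucket":
        continue
    values = res.get("values", {})
    # Misconfiguration 1: the bucket ACL grants public access.
    if values.get("acl") in ("public-read", "public-read-write"):
        print(f"{res['address']}: bucket is publicly open")
    # Misconfiguration 2: no server-side encryption configured.
    if not values.get("server_side_encryption_configuration"):
        print(f"{res['address']}: bucket is not encrypted")
```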
The part about severity is also interesting. The traditional way of thinking is that each type of vulnerability has a fixed severity - an SQL injection versus a path traversal versus something else. In reality, when we analyzed the last 6 years of known vulnerabilities, you can see that when the security teams at the National Vulnerability Database (NVD) analyzed each of the Common Weakness Enumerations (CWEs), they gave them very different severities based on the CVSS score, and it really depends not just on the type but also on many other conditions. For example, it’s really different if the SQL injection is on a POST, PUT or GET, and there are more and more ways to look at it. So, based on the pattern itself, we define the severity.
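A toy illustration of that pattern-conditioned severity idea - the scores below are invented for the example, not Backslash's or NVD's actual ratings:

```python
# (CWE, HTTP method) -> severity. Illustrative values only.
SEVERITY = {
    ("CWE-89", "GET"):  "high",      # SQL injection on a read endpoint
    ("CWE-89", "POST"): "critical",  # SQL injection that can mutate state
    ("CWE-22", "GET"):  "medium",    # path traversal, read-only
}

def severity(cwe: str, method: str) -> str:
    # Fall back to a type-level default when no conditioned rule matches.
    return SEVERITY.get((cwe, method.upper()), "medium")

print(severity("CWE-89", "post"))  # critical: same CWE, different condition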
Let's say there is a code base that doesn't get updated often, and a new vulnerability in one of its packages hits the application. What is the journey in Backslash to guide the developer team to this new threat?
This is the natural workflow for dealing with known vulnerabilities in packages. What happens is that a new vulnerability comes out and is published, and Backslash automatically re-scans the code whenever new code is pushed or whenever a new vulnerability matches code that exists and is vulnerable. We have different integrations, and a very common way to deal with this is to create a policy in Backslash that sends a notification to platforms like Teams and Slack and automatically creates a ticket for the team responsible for that piece of code.
Since IDORs are tightly bound to business logic, how does the scanner identify these kinds of issues?
This is a great question, because IDORs and BOLAs are very tightly coupled to the business logic of an application, and who has access to what is sometimes not even clear to the developers or the product managers. At Escape, we solved this in two different ways. First, we have built-in rules that identify the most common problems. For instance, if a public user has access to the PII of many users, we know there is a problem: no public access to an API should expose PII from many different users. So we pinpoint the problem. The second approach is custom rules. We allow any user, security engineer, or developer to create custom rules in a simple YAML language - a simple configuration to say, for instance, that a user can only access their own account data. If, during a scan, a user is able to access two different accounts, then there is a leak. And this is how, with simple checks and simple rules, we can detect very complex business logic issues, at runtime and during the software development life cycle.
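A minimal sketch of that "a user can only access their own account data" rule: run the same request under two identities and flag any cross-account read. The endpoint, account IDs, and tokens are placeholders:

```python
import requests

BASE = "https://api.example.com"  # hypothetical target application

def fetch_account(token: str, account_id: str) -> requests.Response:
    return requests.get(
        f"{BASE}/accounts/{account_id}",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )

# Two real users of the application, each owning exactly one account.
USER_A = {"token": "<token-a>", "account": "1001"}
USER_B = {"token": "<token-b>", "account": "1002"}

# Rule: a user may only access their own account data. If user A can read
# user B's account, the scan has found an IDOR/BOLA.
resp = fetch_account(USER_A["token"], USER_B["account"])
if resp.status_code == 200:
    print("IDOR detected: user A can read user B's account data")
```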
What do you think about AI in SAST?
As I showed in the demo, at Backslash we also leverage AI and LLMs. Users have their journey, and whenever they find an issue, they usually try to understand whether it is a false positive. LLMs are really great for that. Beyond that, we leverage LLMs to optimize the rule base itself and to really understand and find all the entry points, so it's part of the core of how we do SAST. And as I showed, we can really understand the business logic and business processes, which are very tightly coupled to the code itself. So it gives a lot of insight not just into whether there is a vulnerability, but into which specific parts of the application hold those vulnerabilities. It really allows for innovation, and for understanding and prioritizing things much better.
Do you think GitHub Advanced Security (GHAS) is a fast solution?
I think it's part of their position as a provider for code - that is their main core business. Their approach is to do security statically in a very simple way, so they have the very basic features and they are very easy to deploy. But they will not have any advanced features, and they do not fall under the category of an advanced security tool for static analysis.