Webinar Recap: Securing AI-Driven Applications with DAST

“You ask the developer why they wrote the code the way that they did, and they don’t have an answer anymore. They’re like, ‘Well, AI wrote it.’” - Seth Kirschner, DoubleVerify

AI is now deeply embedded in software development workflows, and the implications for application security teams are profound. Gone are the days when engineers wrote every line of code themselves, fully understanding the rationale and risks behind each implementation. Today, developers are working side-by-side with tools like GitHub Copilot and Cursor that suggest, autocomplete, and sometimes even write entire logic blocks.

This evolution brings speed, but it also adds ambiguity. Code now reaches production with unclear origins and missing ownership, and without developers fully understanding what was built or how it works. As a result, new risks are introduced without being recognized by their owners, and resolving them often takes longer.

In this environment, AppSec teams can no longer rely solely on traditional testing or manual review processes. As Seth Kirschner, Sr. AppSec Manager at DoubleVerify, put it, “closing that gap has been one of our major priorities.” This webinar, featuring AppSec leaders from DoubleVerify, Applied Systems, and PandaDoc, was not about vague future-gazing.

It was a grounded discussion on how security organizations are adapting operationally, culturally, and technologically to secure machine-written code by focusing on people and processes, with the help of modern dynamic application security testing (DAST) tools.

Here’s what the experts shared:

Webinar replay: Securing AI-driven applications with DAST

Meet the experts

Before diving into the key takeaways, it’s worth knowing who you’re hearing from. They’re practitioners leading AppSec efforts at fast-moving, high-growth companies where AI is already reshaping how code gets built and secured.

Seth Kirschner

Seth is the Senior Application Security Manager at DoubleVerify, a global public AdTech company that measures billions of ad impressions in real-time. With a background spanning Deloitte, MUFG Securities, and a health tech startup he co-founded, Seth’s approach to AppSec blends enterprise structure with startup agility. He currently leads a globally distributed AppSec program covering everything from pen testing and cloud security to AI risk and security automation.

Nathan Byrd

Nathan is a Principal Application Security Architect at Applied Systems. He started his career on the dev side building enterprise-grade systems for companies like Mastercard before shifting into security. Today, he brings a developer’s mindset to the challenge of securing the SDLC at scale, with a special focus on how AI tools are changing the way teams build and ship.

Nick Semyonov

Nick is the Director of IT & Security at PandaDoc, a fast-scaling SaaS company known for its document automation platform. Nick oversees everything from infrastructure to AppSec, guiding the org through growth, compliance, and IPO readiness. With his dual lens on IT operations and application security, he’s particularly tuned in to how AI is transforming development teams and the tools they rely on.

AI is here to help - but teamwork is key to secure development

You open a pull request. It’s clean. The logic looks tight. The tests pass. But then you ask the developer, “Why did you write it this way?”

They hesitate.

“Well… Copilot wrote it.”

That’s the new norm in software development. AI tools like GitHub Copilot and Cursor aren’t just autocomplete; they’re collaborators. They generate functions, write business logic, and sometimes drop in entire components with zero manual input. And while this is helping dev teams ship faster than ever, it’s leaving security teams with a big problem: how do you secure code no one truly understands?

At PandaDoc, Nick Semyonov saw this firsthand. His developers weren’t waiting for a policy to explore AI; they were already using it. Instead of pushing back, Nick’s team leaned in. They tested over 30 AI tools before settling on Copilot and Cursor. The decision wasn’t about popularity; it was about control. They opted for paid models to limit data leakage, restricted third-party integrations, and embedded oversight directly into the IDE.

But even with all that governance in place, Nick made the risk clear:

“If you treat AI like an intern, their focus is to make things that work, not very secure.” - Nick Semyonov, PandaDoc

And as Seth Kirschner from DoubleVerify pointed out, that loss of developer accountability has ripple effects:

“You ask the developer why they wrote the code the way that they did, and they don’t have an answer anymore. They’re like, ‘Well, AI wrote it.’” - Seth Kirschner, DoubleVerify

The takeaway? Code is getting written. Fast. But security can’t just sit back and assume it’s safe. AppSec teams have to make sure that code won’t expose anything vulnerable later.

Some developers don’t know what the code does anymore

When AI writes the code, who owns the risk?

In the old workflow, once all the vulnerabilities were triaged, a security engineer would create tickets for the most important issues, reach out to the developer, and ask, “Why did you write this like that?”

You’d get an explanation, maybe a little back-and-forth, and leave with a fix and a better understanding.

Now? The developer didn’t write it. Cursor or Copilot did. The reasoning behind each function is gone. And without context, security teams are left reverse-engineering logic that no one really understands.

Nick Semyonov from PandaDoc framed the root issue clearly:

“What we’ve seen with a lot of AI-generated code is the focus is to generate code that works, not necessarily code that is secure.” - Nick Semyonov, PandaDoc

It’s not just that the code is insecure - it’s that developers aren’t even learning why. With AI doing more of the heavy lifting, their opportunity to understand and internalize secure coding patterns is disappearing.

As Nathan Byrd from Applied Systems pointed out during the webinar:

“If they’re no longer looking at the code in the first place, then they don’t have that opportunity for learning.” - Nathan Byrd, Applied Systems

That creates a dangerous ripple effect. Without understanding, there’s no ownership. Without ownership, triaging takes longer. And without learning, those same vulnerabilities just keep coming back. AppSec teams aren’t just managing threats; they’re filling in the gaps AI leaves behind. And unless those gaps are closed with better review processes, testing, and developer education, the black box of machine-generated logic will only grow.

Nathan Byrd sees security testing feedback as a way to restore that learning loop:

“It takes those junior programmers that we started talking about and turns them into the senior programmers because they see the issues, they get to learn a little bit about them, they resolve them, and over time they start writing more secure code.” - Nathan Byrd, Applied Systems

This iterative learning is what turns a junior dev into a security-aware engineer. Every scan failure becomes a teaching moment right in the feedback loop. And that’s essential in an AI-first world, where “teachable moments” could be shrinking fast.

How AppSec teams are responding to AI’s rapid development pace

You don’t need a dashboard to tell you what’s happening; just look at your pull requests. The code volume is exploding. Thanks to tools like Copilot and Cursor, developers are shipping faster than ever. But there’s one big problem: your AppSec team isn’t growing at the same pace.

At PandaDoc, Nick Semyonov is seeing that velocity firsthand:

“Now our developers are a lot more productive, but we’re also pushing twice as much or three times as much code into production. My team is not three times the size.” - Nick Semyonov, PandaDoc

With more code flying into production, there’s more opportunity for critical issues to sneak through. Think: unsanitized inputs, hardcoded secrets, poorly scoped access. AI doesn’t naturally think in threat models. To keep up, security teams are shifting from manual gatekeeping to embedded automation. The panel emphasized the importance of intermediate testing - not just at pre-prod, but continuously across the lifecycle. If AI is injecting new logic on every commit, guardrails need to be proactive, not reactive.
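
What does an embedded guardrail look like in practice? Here is a minimal sketch of a CI gate that kicks off a DAST scan against the environment built for a pull request and blocks the merge on high-severity findings. The scanner API, endpoints, and environment variables are hypothetical placeholders, not any specific vendor’s interface.

```python
# Hypothetical CI gate: trigger a DAST scan and fail the build on high-severity findings.
# The scanner URL, endpoints, and tokens below are illustrative placeholders, not a real vendor API.
import os
import sys
import time

import requests

SCANNER_URL = os.environ["SCANNER_URL"]      # e.g. an internal DAST service
API_TOKEN = os.environ["SCANNER_API_TOKEN"]
TARGET_URL = os.environ["TARGET_URL"]        # the preview/staging deployment for this commit
HEADERS = {"Authorization": f"Bearer {API_TOKEN}"}

# 1. Start a scan against the ephemeral environment built for this pull request.
resp = requests.post(f"{SCANNER_URL}/api/scans", json={"target": TARGET_URL}, headers=HEADERS, timeout=30)
resp.raise_for_status()
scan_id = resp.json()["id"]

# 2. Poll until the scan finishes (CI jobs are time-boxed, so cap the wait).
for _ in range(120):
    status = requests.get(f"{SCANNER_URL}/api/scans/{scan_id}", headers=HEADERS, timeout=30).json()
    if status["state"] in ("finished", "failed"):
        break
    time.sleep(30)

# 3. Fail the pipeline if any high or critical findings came back.
findings = requests.get(f"{SCANNER_URL}/api/scans/{scan_id}/findings", headers=HEADERS, timeout=30).json()
blocking = [f for f in findings if f["severity"] in ("high", "critical")]
if blocking:
    print(f"{len(blocking)} blocking finding(s) - see the scanner dashboard for details.")
    sys.exit(1)
print("DAST gate passed.")
```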

And the stakes? They’re multiplying fast. As Nathan Byrd from Applied Systems put it:

“I don’t think it’s just AI versus AI or AI with AI. It’s AI versus AI versus AI versus AI… You’ve got the development AI, you’ve got the security AI, and of course the attackers have AI as well… so do hopefully the testers.” - Nathan Byrd, Applied Systems

In other words, this isn’t a pipeline; it’s an arms race. And if your tooling is still based on manual reviews or static tests, it’s not built for this battlefield.

Seth Kirschner from DoubleVerify underscored the same pressure:

“Closing that gap has been one of our major priorities.” - Seth Kirschner, DoubleVerify

It’s not just volume; it’s velocity without review. As Nathan Byrd pointed out, some developers are now pasting in AI-generated code and shipping it straight to production - “vibe coding.” It works great for side projects or quick testing, but in a real environment it’s code going live without eyes on it. And that’s when small mistakes become real risks.

The bottom line? Security needs to scale at the speed of AI - through smarter automation, contextual DAST, and developer education that keeps pace with code generation.

Because when the code’s coming from AI, and the attacks are too, you need your defense to be just as smart and just as fast.

AI adoption is spreading across teams - here’s how security can keep up

AI-powered tools are now in the hands of everyone from finance teams automating spreadsheets to HR spinning up chatbots for onboarding. And guess what? They’re building with the same underlying APIs, plugins, and integrations your devs use - but with zero oversight from security.

This isn’t hypothetical. Across industries, non-engineering teams are quietly launching AI automations with access to sensitive data and internal systems. They’re wiring up OpenAI APIs, syncing documents, and even scraping internal dashboards - all with the best intentions, but absolutely no guardrails.

For AppSec and DevSecOps teams, this means the traditional perimeter is gone. Your risk doesn’t start at the repo anymore; it starts in collaboration tools and low-code environments - whether that’s Notion or shared n8n workflows (we know this firsthand as avid n8n users at Escape!).

As Nick Semyonov from PandaDoc explained,

“We are using n8n at PandaDoc for general use cases… We have an OpenAI subscription with a key that’s shared with the team, so they don’t get to use their own models or personal AI tooling - it’s a shared key.” - Nick Semyonov, PandaDoc

This approach adds a layer of control, allowing non-dev teams to automate tasks securely while minimizing unmanaged AI exposure.

Scaling security without slowing down development

If your developers are now three times more productive, your security strategy needs to keep up - not just by hiring more people, but by evolving how you approach security testing. Dynamic Application Security Testing (DAST) is a powerful way to scale security alongside the rapid pace of development driven by AI tools.

During the webinar, an audience member asked a pointed question: Why choose Escape over a more established player like Rapid7?

Seth Kirschner from DoubleVerify shared valuable insights on what truly sets Escape apart. He explained that Escape excels by generating thousands of unique test cases against application endpoints, providing broader coverage and deeper visibility than many older solutions.

Unlike traditional tools, which require you to explicitly define every endpoint to test, Escape leverages integrations to automatically build an inventory and scan applications almost as if they were production environments, with minimal human intervention. According to Seth, this capability brings unmatched scanning depth and breadth:

“Escape shines … coming up with thousands, if not several thousands of unique test cases to put forth against your application endpoints, and be able to get wider coverage and visibility.” - Seth Kirschner, DoubleVerify

Why Escape is better than Rapid7

He contrasted this with the limitations of older solutions like Rapid7:

“For Rapid7 or their equivalents that are sort of older solutions in the marketplace, it requires for you to very clearly specify your application and definitively type out here are my endpoints that you need to test.” - Seth Kirschner, DoubleVerify

Seth added how Escape’s integrations streamline the scanning process:

“It’s doing a lot of that inventory capability just based off of the integrations that we’ve already established. From there, beginning to scan everything and almost treating it all like a production environment so we can get crawling capabilities and additional discovery without much human in the loop.” - Seth Kirschner, DoubleVerify

Seth also emphasized that Escape provides the most comprehensive visibility and testing coverage they’ve seen so far:

“Escape gives us definitely the most visibility in coverage as well as the most testing coverage relative to any previous solutions that I’ve seen introduced.” - Seth Kirschner, DoubleVerify

Nick Semyonov from PandaDoc echoed these points, highlighting how Escape’s smarter testing approach really stands out from legacy DAST tools. He noted that many older tools run the same set of predefined tests repeatedly without understanding the context of the API or endpoint they’re scanning. Escape, on the other hand, intelligently adapts its testing based on what it discovers, probing for vulnerabilities with greater precision. Nick described Escape as a “smart DAST” that goes beyond simple checklist testing:

“Escape is like a smart DAST, it’s a lot smarter than the other solutions we tested. Again, thinking about the legacy DAST solutions, they just have pre-hardcoded test cases and they don’t really understand the context of where it is inside its testing. It just runs the 30 test cases, goes to the next step, further down to the API code, runs the same 30 test cases.” - Nick Semyonov, PandaDoc

He continued to explain Escape’s contextual awareness:

“We saw Escape being a lot smarter, understanding what’s happening, where it is located. For example, it’s finding a billing API, it’s found what it thinks is a billing ID, like 001, and it tries a few other IDs to see if it has access to get some other people’s billing info. It’s a lot more understanding of what’s happening where it’s at. I think this is where tooling and security tooling overall is going.” - Nick Semyonov, PandaDoc
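
To make that behavior concrete, here is a minimal sketch of the kind of object-level authorization (BOLA/IDOR) probe Nick is describing: after discovering a billing record ID, the scanner retries neighboring IDs with the same user’s credentials and flags any that return someone else’s data. The endpoint shape, IDs, and token here are hypothetical examples, not Escape’s internal logic.

```python
# Minimal sketch of a BOLA/IDOR probe like the one described above.
# The API base URL, endpoint shape, and IDs are hypothetical examples.
import requests

BASE_URL = "https://api.example.com"
SESSION_TOKEN = "user-a-session-token"   # credentials belonging to one known test user
KNOWN_OWN_ID = "001"                     # a billing ID this user legitimately owns

headers = {"Authorization": f"Bearer {SESSION_TOKEN}"}

# Probe a handful of neighboring IDs with the *same* user's credentials.
candidate_ids = [f"{i:03d}" for i in range(1, 6)]
accessible = []
for billing_id in candidate_ids:
    if billing_id == KNOWN_OWN_ID:
        continue
    resp = requests.get(f"{BASE_URL}/billing/{billing_id}", headers=headers, timeout=10)
    # A 200 returning another customer's record suggests broken object-level authorization;
    # a 403 or 404 is the expected, properly scoped response.
    if resp.status_code == 200:
        accessible.append(billing_id)

if accessible:
    print(f"Possible BOLA: billing records {accessible} readable with user A's token.")
else:
    print("Neighboring billing IDs correctly denied.")
```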

Nathan Byrd from Applied Systems added an important reminder that the tool alone isn’t the full story. The team you work with - their responsiveness, their willingness to listen and adapt - is just as crucial when selecting a security vendor. As Nathan said:

“Whatever tool you’re using, the most important thing is the team you’re working with, how they respond, and how much feedback is taken. Those things are very important when choosing a vendor.” - Nathan Byrd, Applied Systems

With AI accelerating code delivery, security testing must evolve from slow, manual reviews to fast, automated, context-aware testing that integrates smoothly with CI/CD pipelines. The panel agreed: it’s about testing early, testing often, and testing dynamically - matching development velocity without becoming a bottleneck.

Conclusion: Securing the Future - Before It Ships

AI isn’t a “next quarter” risk. It’s already writing the code your developers are pushing to production, and it’s doing it at a scale, speed, and opacity that traditional AppSec tools just weren’t built for.

In this session, our AppSec guest experts showed how teams are stepping up with smarter tooling, better processes, and a new cultural posture around what it means to build securely in an AI-native world.

Nick Semyonov tested 30+ AI tools before settling on the right developer setup. Nathan Byrd explained why vibe-driven dev doesn’t belong in prod. Seth Kirschner shared how DAST has to evolve to match modern endpoints. And together, they showed us the blueprint:

  • Collaborate closely with developers to integrate AI tools safely and effectively.
  • Embed guardrails throughout the development lifecycle, not just at the end.
  • Scale testing with context-aware automation instead of relying solely on more headcount.
  • Educate, integrate, and automate security without becoming a bottleneck.

AI isn’t asking permission; it’s already shipping. The key question now is whether your security practices are evolving fast enough to keep pace through collaboration and innovation.

With a solution like Escape DAST, you can confidently scale your security, embed context-aware testing throughout your pipeline, and empower your teams to build securely, no matter how fast your code moves.

Ready to take the next step? Book a demo with our product expert to see for yourself how Escape DAST can help you gain wider coverage and visibility, and understand the context of your applications in the AI-driven development era.


💡 Discover how to scale secure development in the age of AI