How Small Security Teams Scale and Optimize Workflows in Decentralized Environments

    If you're an application security engineer on a small team inside a large, decentralized organization, you already know the feeling: hundreds of products shipping continuously, engineering teams moving fast, and you and your colleagues outnumbered 100 to 1 while trying to keep it all secure.

    We recently hosted a webinar where two practitioners, both customers of ours, shared how they've tackled this exact problem. Their approaches are different, but the underlying philosophy is the same: you can't do it all yourself, so build systems and workflows that let others carry the load.

    You can also watch the replay on YouTube.

    Meet the speakers

    Daniel Ilies is an IT Security Engineer at Visma, based in Romania. He's been in the IT field for over 17 years, with nearly seven of those spent at Visma. Daniel currently owns Visma's DAST program, meaning he's the central point responsible for dynamic application security testing across the entire organization.

    To put that in perspective, Visma has roughly 300 products to scan, and acquires 40 to 50 new companies every year, all of which need onboarding into their security programs.

    Gabriel Berrios is an Application Security Engineer at Schibsted, a major media house in Norway that operates several newspapers and digital products. Gabriel's team covers security at every layer, from code scanning and application monitoring to cloud security and a bug bounty program. He went through a company split and a migration process, which meant rebuilding the security tooling and processes from scratch with a new small team for an almost entirely new company structure.

    Both work in environments where engineering teams are numerous, autonomous, and constantly shipping. And both have found ways to make that work.

    The core problem: too many teams, not enough security people

    When asked about his biggest constraint, Daniel didn't hesitate: "The main issue that I had was the manpower. There are simply not enough colleagues to help me with all these scans and making them run smoothly and successfully for every scan."

    Running a DAST program from a central point means you're responsible for scan creation, maintenance, authentication flows, timeout errors, custom settings, and all of that multiplied by hundreds of products. It was consuming enormous amounts of time before he found a way to push ownership outward.

    Gabriel described a similar challenge from a different angle. At Schibsted, the problem isn't just scan management, it's alignment. "Since we are a large organization, our time is limited because there is continuous delivery, development cycles releasing new products all the time. To be able to align with those teams might also be a bit time-consuming," he explained.

    His team wants to scan deeper and cover more surface area, but engineering priorities don't always line up with security timelines: "The challenges, mostly, can be constrained by engineering priorities. For example, you have a central team that is pushing something, and it might take some time for us to align with them regarding some vulnerabilities as well. So, like time coverage and progress are together."

    The solution both landed on: empower the product teams

    Daniel and Gabriel arrived at similar conclusions independently. You can't be the bottleneck. You have to give product teams real ownership of their own security posture, not just tell them to "care about security", but actually hand them the tools, the visibility, and the accountability structure to follow through. The specifics of how they each do this are worth digging into.

    Daniel's approach: set clear expectations, measure everything, act on the gaps

    At Visma, the security program operates on a tiered system in which every product team has a target tier, with platinum as the highest. Daniel's role is to set clear expectations for each product and then check whether they're meeting them. The checks are concrete: Is the product enrolled in DAST (and, eventually, other security services)? Are old vulnerabilities being fixed within the remediation timelines set for each severity? Are the scans actually running successfully and authenticating?

    How to set expectations for each product team

    That last point matters more than it might seem. "The scan can be a success, but if there is no authentication or authentication is failing, you will not scan anything behind the login," Daniel explained. "So again, the scan will not bring a lot of value." A green checkmark on a scan that isn't hitting authenticated surfaces is worse than useless: it's false confidence.
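The sanity check Daniel describes can be sketched as a small post-scan gate. This is an illustrative sketch, not any specific DAST tool's API: the report fields (`status`, `authenticated_requests`) are hypothetical names for data most scanners expose in some form.

```python
def scan_is_trustworthy(report: dict) -> bool:
    """Treat a scan as healthy only if it finished AND hit authenticated surfaces."""
    if report.get("status") != "success":
        return False
    # A green checkmark with zero authenticated requests means nothing behind
    # the login was tested; flag it instead of reporting false confidence.
    return report.get("authenticated_requests", 0) > 0
```

Wiring a check like this into the scan pipeline turns "the scan ran" into "the scan ran and actually tested what matters," which is the distinction Daniel is making.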

    The results are centralized and visible. If a product team doesn't meet its targets, penalty points accumulate, making it impossible to stay on their desired tier.

    These targets come from management, not from the security team. "It's not the security team that sets those targets for the teams," Daniel said. "We just gave them the tools and the measurement system. That's my job." Leadership can see exactly where every product stands, which creates natural accountability without Daniel having to chase hundreds of teams individually.
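The mechanics of a penalty-point tier system like the one described can be sketched in a few lines. Everything here is an illustrative assumption, not Visma's actual implementation: the tier names beyond "platinum," the point weights, and the thresholds are all invented for the example.

```python
from dataclasses import dataclass

# Maximum penalty points a product can carry and still hold each tier.
# Tier names (other than platinum) and limits are hypothetical.
TIER_THRESHOLDS = {"platinum": 0, "gold": 3, "silver": 6, "bronze": 10}

@dataclass
class ProductStatus:
    name: str
    enrolled_in_dast: bool
    scans_authenticated: bool
    overdue_criticals: int  # criticals past their remediation deadline
    overdue_highs: int

def penalty_points(p: ProductStatus) -> int:
    points = 0
    if not p.enrolled_in_dast:
        points += 5
    elif not p.scans_authenticated:
        # A "successful" scan without auth misses everything behind the login.
        points += 3
    points += 2 * p.overdue_criticals + p.overdue_highs
    return points

def achieved_tier(p: ProductStatus) -> str:
    pts = penalty_points(p)
    for tier, limit in TIER_THRESHOLDS.items():
        if pts <= limit:
            return tier
    return "unranked"
```

The key design property is the one Daniel highlights: the security team defines the measurement, leadership sets the targets, and the score makes each product's standing visible without anyone chasing teams individually.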

    His advice to others considering this model: empower the product teams, give them the necessary tools, and be transparent. Teams shouldn't be afraid of critical findings; a critical finding means the tools and the service are delivering value. Fix it, then share how you fixed it with colleagues on other products.

    Gabriel's approach: tool adoption, automation, and alerting

    At Schibsted, Gabriel structured his team's impact around three pillars that he presented during the webinar: tool adoption, automation, and alerting.

    How to make an impact with your AppSec team according to Gabriel

    Tool adoption means making sure security champions across every engineering team are actually using the tools effectively. It's not enough that the tools exist; teams need to be onboarded to the right projects, have the right access, and actively engage with findings across code scanning, cloud scanning, DAST tools, and the bug bounty program. "It's our job to make sure that the security champions are having good tool adoption, and that's on every level, not only from a code scanning, but also from cloud scanning, web application scanning, and so on. Bug bounty is a bit different. The measurements are how fast we patch everything, or how many criticals we have, or mediums and so on, or how much money we pay out," Gabriel said.

    Automation is where Gabriel is investing heavily, particularly around risk-based triage with clearly defined exploitability criteria. The goal is to stop wasting time on informational findings and focus on what's real. "It's better for the teams to have 1 or 2 or 3 really real findings than 100 informational ones," he said.

    Alerting ties the first two together: improved detection and routing so that findings and alerts land in the right team's Slack channel or inbox, not in a central queue the security team has to sort through manually. Teams set up dedicated alert channels and get notified about findings relevant to their specific projects. Where the tool allows it (as Escape does), engineers can log in, see only the vulnerabilities tied to their own project, and focus on fixing them.

    Gabriel also runs a security champions program with monthly sessions, shout-outs for teams doing well, and extra attention for teams carrying high vulnerability counts. "Security is a shared responsibility in the organization," he said. "Not all vulnerabilities can be solved by the security team alone. It requires a collaborative effort."

    Making triage less painful with AI

    One of the most time-consuming activities, both speakers mentioned, is manual triage. In the day-to-day work, they need to constantly validate whether a finding is actually exploitable and assess its real impact.

    Gabriel has been exploring automation to tackle this: "I would like to have a risk-based triage. Is the finding actually exploitable or not? This is very time-consuming." He's been experimenting with AI agents that can gather context about a vulnerability, test it against sandbox environments, and help his team determine exploitability.

    His advice was practical: "You can use scripting, you can use AI agents to gather a lot of information about a particular vulnerability based on experience and available data, and then run it against different tests." The goal isn't to replace human judgment, but to filter the noise so the team spends time on what actually matters.
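Before reaching for agents, the same filtering idea can start as a plain scoring script over clearly defined exploitability criteria. The criteria and weights below are illustrative assumptions, not Gabriel's actual setup; the shape of the approach is what matters: drop informational noise, surface the likely-real findings first.

```python
# Hypothetical exploitability criteria and weights for risk-based triage.
EXPLOITABILITY_CRITERIA = {
    "reachable_unauthenticated": 3,  # exposed without login
    "known_exploit_available": 3,    # public PoC or exploit-kit entry
    "internet_facing": 2,
    "confirmed_by_second_tool": 1,
}

def triage(findings: list[dict], threshold: int = 4) -> list[dict]:
    """Return findings worth a human look, highest score first."""
    scored = []
    for f in findings:
        score = sum(weight for crit, weight in EXPLOITABILITY_CRITERIA.items()
                    if f.get(crit))
        if f["severity"] != "informational" and score >= threshold:
            scored.append({**f, "triage_score": score})
    return sorted(scored, key=lambda f: f["triage_score"], reverse=True)
```

An AI agent slots in naturally as the thing that populates those boolean criteria (by gathering context and probing a sandbox), while the scoring and the final call stay deterministic and reviewable.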

    Justifying change to leadership

    Both speakers addressed a question many security engineers face: how do you convince leadership to invest in new tooling or approaches?

    Daniel's approach is methodical: "I'm always trying to come with a good POC to demonstrate what I want to implement. The advantages but also the disadvantages. Comparing with what you already have and highlighting the advantages. For me, this always worked." He emphasized covering as many realistic scenarios as possible and presenting concrete numbers after the proof of concept.

    Gabriel took a complementary angle, noting that the security champions program itself becomes a tangible asset when talking to leadership. "You can show: OK, you have this program, this is how you're working, and the remediation cycles end up being shorter and shorter every time. That's a good thing." When leadership is already pushing AI adoption, he added, it's easier to slot in automated triage tools under that existing mandate.

    On AI limitations for engineers

    A question from the audience asked about AI limitations placed on engineering teams, a topic both speakers found deeply relevant.

    Gabriel was candid about the tension: "There are people that know development and might use AI as a support, they can spot hallucinations and move on. And then you have people who are not so experienced in coding but still try things and push something to production that might be vulnerable." His recommendation was clear: establish guidelines and policies around which AI tools are approved for enterprise use, and improve detection on the code review and scanning side rather than trying to contain usage through prohibition.

    "If you try to contain it, they will just go around the managed device and try to do it anyways," he warned. "The improvement is to kind of improve on the detection part."

    Practical advice for overwhelmed security teams

    When asked for their single best piece of advice, both speakers offered grounded, actionable guidance.

    Daniel's advice centered on the transparency and empowerment he'd emphasized earlier: give product teams the necessary tools and let them own their security. Critical findings are nothing to be afraid of; they're proof the tools have value. Fix them, then share how you fixed them with colleagues on other products.

    Daniel's advice for overwhelmed security teams

    Gabriel focused on fundamentals: "Understand your systems. Read the documentation. If you just blindly trust outputs from a tool all the time, the process might be slower." He also strongly recommended scripting away repetitive tasks.

    For example:

    • Syncing user access
    • Auditing system configurations
    • Checking for orphaned resources after decommissioning

    Gabriel's advice for overwhelmed security teams
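The third item on that list, checking for orphaned resources, often reduces to a set difference between what is actually live and what the asset inventory says should exist. A minimal sketch, with hypothetical data sources (in practice the two sets would come from a cloud API listing and an inventory export):

```python
def find_orphans(live: set, inventory: set) -> set:
    """Resources still running but no longer tracked: candidates for cleanup."""
    return live - inventory

def find_stale_entries(live: set, inventory: set) -> set:
    """Inventory entries whose resource is gone: safe to prune from records."""
    return inventory - live
```

Running a diff like this on a schedule turns a tedious manual audit into a report that only needs attention when either set is non-empty, which is exactly the kind of repetitive task Gabriel recommends scripting away.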

    What's actually working: feedback on Escape

    Toward the end of the session, both speakers shared the specific capabilities that have made a difference in their day-to-day work of operationalizing DAST.

    Daniel highlighted a few capabilities he hasn't found elsewhere. IDOR scanning with support for multiple users was a standout: "You can add multiple users, more than two, which means there are a lot of scenarios you can play with. Very few DAST tools currently have this." The role-based access control was another win. When a team requests onboarding to DAST, everything is automated from that point, giving them access to their own project only.

    "This is our goal: to empower the teams to take care of their own scans. We provide the tools, we provide ways to measure their security posture, but we empowered the product teams to take care of all of this. Now that we have, I'd say, the proper tool to do this."

    💡
    Learn more about Escape Projects here

    And the AI-based authentication surprised him: "It's able to detect some clicks, not necessarily instructed to click on accept cookies on every button, but it simply knows where to click. In most cases, it's just working and I like that."

    Gabriel pointed to the flexibility of custom scanning rules: "You can create custom rules for every technology that you're scanning." He also agreed that API scanning depth was a differentiator, something he hadn't seen matched elsewhere. The ability to configure global settings, whitelist or blacklist endpoints and domains, gave his team the control they needed across a diverse portfolio of products.

    For both, the common thread was the same: the tool needs to reduce the operational burden on a small team, not add to it. If the security team still has to babysit every scan and manually route every finding, no amount of detection capability matters.

    💡
    If you're in a similar position, with a small security team and dozens (or hundreds) of product teams to cover, book a demo with Escape to see how teams like Visma and Schibsted are scaling DAST without scaling headcount.

    Q&A Highlights

    What are the general improvements in operational efficiency you see with automation?

    Both speakers agreed that automating the measurement and reporting layer is where the biggest wins come from. Extracting security posture data across hundreds of products and presenting it to teams manually would be unsustainable. Daniel noted that automated metrics are essential at Visma's scale, while Gabriel emphasized that the quality of data going into automation matters enormously, especially for triage, where tuning takes significant time before the outputs become trustworthy.

    What AI limitations do you place on your engineers?

    Gabriel shared that the key is not prohibition but detection and guardrails. Approved enterprise tools should be clearly identified, and code review processes need to account for AI-generated code that might introduce vulnerabilities. Engineers who aren't experienced developers should be encouraged to involve experienced colleagues before pushing AI-generated projects to production.

    How do you handle the onboarding of newly acquired companies into your security programs?

    Daniel explained that Visma's workflow and measurement system is already established company-wide. When a new company is acquired, they're onboarded into the existing security program structure. Management can see exactly where each product stands relative to its target tier, which creates organic pressure to onboard without the security team having to chase every new acquisition individually.


    Want to learn more from security practitioners? Check out the following articles: