Security experience: top-down vs bottom-up | Jeevan Singh (Rippling, Twilio) | The Elephant in AppSec Podcast

Welcome to the second episode of The Elephant in AppSec, the podcast where we explore, challenge, and boldly face the AppSec Elephants in the room.

Today we’re excited to have an amazing guest, Jeevan Singh, Senior Staff Security Engineer at Rippling, joining us. 

Jeevan is responsible for a wide variety of tasks, including architecting security solutions and working with development teams to resolve security vulnerabilities. With over 15 years of experience in various development and leadership roles, he enjoys building security culture within organizations. 

That’s why we’re extremely excited to talk with him today about the top-down vs. bottom-up approach to security. Throughout our talk, we had a chance to challenge him on his vision and opinions, and ask some "spicy" questions!

In our conversation, Jeevan shares his take on top-down versus bottom-up security, how he builds security culture with engineering teams, and his advice for engineers getting started in security.

Let’s dive in!

💡
Want to be a guest speaker? Fill in this form, and we'll get back to you!


💡
Listen now on Spotify and YouTube. The Elephant in AppSec caters to all: Whether you prefer listening, watching, or reading, we have something for everyone. You can find the full transcript at the bottom 😌

Find the full transcript below:

Alexandra  

We’re very happy to have you here today with us on the podcast to cover different topics, mainly focusing on the top-down versus bottom-up approach to security. These are very exciting topics, and I think you have a lot of knowledge. We're really looking forward to learning from you and challenging you a little bit as well, because that's the goal, you know?

So let's start. For the people who have just joined us and probably have different levels of experience in security, can you explain a little bit: what do top-down and bottom-up mean?

What are the top-down versus bottom-up approaches to security?

Jeevan

For sure. Thanks for having me. I don't know if I have a lot of knowledge. I definitely have a lot of opinions. 

So, traditionally, when I think about a top-down approach, I'm really thinking about the CEO, the exec suite really, pressuring the whole organization to think about security in all aspects of the business. The CEO will pressure the exec team. The exec team will pressure their VPs. VPs will pressure the directors, and so on, all the way down to the individual contributors within the organization.

And traditionally you find this in more of the, I was going to say older, but more like the traditional companies. So, in banking, finance, and insurance, security is really important. I definitely don't want other people to have any idea of my healthcare information or my financial information, things like that.

So, more often than not, we see that happen in the more traditional organizations. A bottom-up approach is where the ICs, and more likely than not it's going to be on the engineering side of things, the software developers, are evangelized. They are really empowered to care about security. You see that with more of the tech companies and with programs like security champions, where you work really closely with the individual contributors and make sure that they are knowledgeable about security and have the information they need to make the right decisions for their teams. So, there is definitely a big difference between top-down, where you're pressured to do security, and bottom-up, where you're empowered to do security. But a lot of companies have a mixture of both as well.

Factors influencing preferences 

Tristan

Actually, you answered this question already, but I feel that there has been a shift in the trend towards favoring either top-down or bottom-up approaches.

It looks like enterprises are more top-down, while startups and new tech companies are more bottom-up, but what are the factors that influence this preference?

Jeevan

Yeah, it differs from company to company, but for me, personally, I like both approaches.

I've been at companies where, as a security engineer, I try to evangelize, I try to motivate engineers to care more about security.

And they do, actually; they're very passionate and caring about security. But the challenge is that they don't have the autonomy to make the decisions. Their engineering managers are telling them: “Hey, we have deadlines. We have to hit these goals to ship these features.” And you have product managers saying: “Yeah, forget about security. Forget about technical debt; we've got to keep shipping new products all the time.”

So now, in those particular situations, it doesn't matter how much the engineers individually care about security; the security work is not going to happen. Which is why you need both a bottom-up and a top-down approach.

And the top-down approach makes a lot of sense, because a product organization is not optimized to pay down technical debt or security technical debt. That particular organization is usually rated on how many new features they push out or how much revenue they generate for the company.

And while I really care about security and I think it is a revenue generator, most companies don't think of it that way. People will not buy your tool because you have the best security, but the flip side also works: people will avoid your tool if it doesn't have good security. So, it's really hard to motivate a product organization to focus on security itself, which is why you need to make sure that their bosses and their bosses’ bosses actually care about security, so that they have to care about security too.

So you're right that a lot of the high-tech companies, especially on the startup side, don't really have the passion or the desire to do security, or they're in survival mode and just trying to push out their product. But in more of the enterprise, even the high-tech enterprise companies, there is an expectation. If your clients are spending seven-figure, eight-figure licensing deals on this product, there's an expectation of security built right into the product.

Alexandra

Yes, makes sense. I think I'm going to challenge you here a little bit. You said you prefer both. Is there no preference at all for one over the other?

Both top-down and bottom-up approaches are needed to ensure the best security experience

Jeevan

No, you have to do both. You absolutely need a top-down approach, because if the exec suite does not care about security, you're wasting your time in that organization. As a security engineer, you should go work for a company that actually does care about security, because otherwise it's going to be an uphill battle all the way.

And I've been fortunate enough in my career not to have that. But there were places where I thought we had very strong security, and I knew that there wasn't going to be any additional investment, that they didn't want next-generation, bleeding-edge security, which was totally appropriate for that company at that time.

But you need that top-down approach to really make sure that the VPs of Engineering and the directors all care about security, and you also need that bottom-up approach. If the individual engineers don't know how to fix security vulnerabilities and don't know what good security looks like, you're going to have that struggle as well.

So, traditionally, when I join an organization, I like to start top-down. I like to get to know the directors, the VPs, the CTO, so that I can get buy-in at the highest levels. Once I have buy-in and have convinced those folks that they should care about security, and more often than not in 2023 people actually do care about security, so it doesn't require that much convincing, then I will go down and work with the individual engineers and really teach them how they can be good stewards of security for us.

How fear-driven decisions contribute to the shift

Alexandra

And in the companies mentioned here, in healthcare or finance, where the stakes are higher and where potential fear is a big factor, do you think we tend to shift more towards top-down control?

Jeevan

I agree with that. When you are worried about what regulators may say, that definitely has an impact in itself.

So, if I'm worried about the SEC, or about GDPR or privacy regulators, I'm really going to care about the processes that are put in place, and it's going to be top-down heavy: we're going to be implementing it in this particular way. But I don't think it would get to the point of FUD, fear, uncertainty, and doubt.

It's not like: hey, we've got all of these nation-states that are going to attack us if we don't do A, B, and C, that type of uncertainty and fear. I wouldn't want to implement a program that way. As I talk to directors and VPs, I would just say: hey, if we fix this particular vulnerability, we'll be much better placed for when auditors come in and ask us questions.

I'd never, I will never lie to an auditor. I'll be very truthful to an auditor. So either way, you'll look bad, because I'd make sure that we'd had these conversations and that they had plenty of time to remediate these issues. And if the auditor was really strong at probing and asking the right questions, they were going to get the truth out.

So I definitely want to make sure that we work with our engineering partners so that they are aware of the issues, but also that we have a balanced approach and are not trying to scare them into doing the right thing. They are the ones that actually own the risk within the organization, and they are the ones that should make the ultimate decision.

I'm there to really help guide them and help them understand the risk. Let me clarify that: if they're doing the really, really wrong thing, I'm going to step in and say, no, we're not doing this. Here's an example of a really, really wrong thing.

If I noticed that we had a backdoor into our software, that's it.

I don't care, I don't care if it's a lose-my-job situation: we're fixing that. Or if there was some egregious vulnerability out there that I felt an adversary with even a normal level of programming skill could exploit, I would put my foot down and say we absolutely have to fix this.

But if it was something like, we need to have service-to-service authentication in order to do better in this particular audit: service-to-service authentication is not easy. It requires a lot of preparation and work. It might take multiple quarters, depending on the size of the organization. We would have conversations around that and say: okay, maybe we can implement this small layer first, and then we can iterate to get to where we think is appropriate.

So it really depends on the situation, but we want to give them the full context to make the right decisions.

Always facing resistance 

Alexandra  

And have you ever encountered resistance? Let's say you come to a new company, and, as you said, you start with a top-down approach. Have you met resistance on the other side, from the engineers you've just met, or vice versa?

Jeevan

Yeah, I've always encountered resistance; a small amount of resistance is always healthy. So we had a new VP of engineering join us back in my days at Twilio, at Segment.

A new VP of engineering joined us, and he was a very, very strong security advocate. So the partnership, the relationship, was already there, but he challenged us on everything. Severity and prioritization was a great conversation, because he wanted to make sure that his teams, his entire org, I think he had maybe 200-300 engineers reporting into it, were actually really focused on the most important of the security items.

So, he basically told his org: I don't want to see P1s out of SLA; everything has to be fixed. Engineers looked at them and felt that some of those P1s weren't actually P1s.

And so we looked at those, and we sat down, and we showed them how we actually map out severity, and asked them: do you disagree with these things? And the engineers did, and that was good, because for us on the security side, we have a very small sliver of knowledge of all the things that they do.

So, I remember one particular situation: I sat down with a director of engineering, and we walked through maybe 4 or 5 of his vulnerabilities, and 2 of them he completely disagreed with. So I showed him the rating, how we actually rated it, and the different capabilities. Oh, we were using CWSS, the Common Weakness Scoring System, internally.

And I like to take a lot of notes when I score things. So we opened it up, we refreshed it, and we looked through it, and he said: I disagree with this capability and this one. And I asked: okay, why do you disagree? And he talked about all the controls that he had in place that I, on the security side, wasn't aware of.

As much as I want to be intimately knowledgeable about everything that engineering does, I'm not going to know everything, because they're in that code day in, day out, and I'm not. So we sat down, we had that conversation, and he convinced me: yeah, you're right, those are great controls to have there. And I think we pushed that to a P2, from a high to a medium.
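
💡
As an illustration of the kind of severity re-rating Jeevan describes, here is a minimal, purely hypothetical Python sketch of downgrading a finding once compensating controls are documented. It is not the actual CWSS formula; all field names, weights, and thresholds are assumptions made for the example.

```python
# Illustrative only: a simplified severity re-rating, not the real CWSS formula.
# All names, weights, and thresholds are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Finding:
    title: str
    base_score: float                       # 0-100 from the initial scoring session
    compensating_controls: List[str] = field(default_factory=list)

def priority(score: float) -> str:
    """Map a numeric score to a priority bucket (hypothetical cut-offs)."""
    if score >= 80:
        return "P1"
    if score >= 60:
        return "P2"
    if score >= 40:
        return "P3"
    return "P4"

def rerate(finding: Finding, discount_per_control: float = 0.10) -> float:
    """Reduce the score for each documented control, capped so that
    controls alone can never zero out a finding."""
    factor = max(0.4, 1.0 - discount_per_control * len(finding.compensating_controls))
    return finding.base_score * factor

if __name__ == "__main__":
    f = Finding("Auth endpoint lacks rate limiting", base_score=82.0)
    print(priority(f.base_score))            # "P1" before the conversation
    f.compensating_controls += ["WAF rule on the endpoint", "anomaly alerting"]
    print(priority(rerate(f)))               # "P2" once the controls are factored in
```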

So having those good conversations at both the director and IC level really made sure that we had the right severity and the right priorities, and we also had great conversations at the VP level about whether we were really focused on the right types of risks. At the director and engineering level, the items were maybe weeks' worth of work, whereas at the VP level, we're talking about months' or quarters' worth of work.

So we had that healthy conversation with them and said: okay, these are the risks that we are seeing, and they had their own opinions on the risks that they were seeing, to make sure that we were really focused on the most important things.

Can the ideal world be reached?

Tristan

It's crazy to think that you can't really do great cybersecurity and great application security without actually talking to everyone and understanding what everyone is doing, what code they own, and what their responsibilities are. I've had this discussion with a lot of security professionals, actually: a huge part of security is the relationship and the discussion with the stakeholders. So, to take a look at the future, do you think we have to continue doing that? Can we reach an ideal world where developers create secure code from the very beginning? Or will we still have to have conversations, think about security, think about security architecture, think about risk? Forever.

Jeevan

Oh, I love this question. I think if you build a better mousetrap, you're going to get better mice in general. So I love the comparison: the better we get at security, the more interesting and more challenging our adversaries will become.

I feel that, even today, we're still dealing with the foundational issues. There are just process issues; people still don't know much about security. As an engineer, you have to be at least at a senior level to really be able to provide good security guidance for your organization and your team.

So I do expect everyone at the senior, staff, and principal levels these days to at least understand well what exactly is important for security. But I really see how security echoes quality. Way back, maybe 20 years ago, there was a big shift in the QA world. In the past, engineers would develop their code.

They would just hand it off to QA without telling them anything: we made this feature, go figure out where all the bugs are. And then there was a shift. As engineers, we started talking to QA a lot more, and they started getting involved in the engineering process, giving input on how we should actually be developing and reducing quality issues and defects right from the get-go.

And that collaboration really improved the quality of the code that we see. QA folks started writing a lot better tests and integration tests for developers, and developers actually started writing the unit tests themselves. So we saw this evolution where quality improved as we had much stronger collaboration.

And over time, we see fewer and fewer QA folks or quality engineers. And I do hope the same happens with security. I don't hope it happens soon, I don't want to be out of a job in the next decade, but I do hope that over time we start with much stronger collaboration with our engineering partners, and a lot of companies are already good at that. So much so that, regularly, we have senior engineers saying: no, we're not doing it this way, because these are the security implications, without having security in the room. And then over time that trickles down to the intermediate and even junior engineers, where they understand that in everything they do, they should be thinking about security.

So I think it's going to be a long time before that happens, and I suspect that, as security folks, we'll probably stop focusing on the foundational issues and will be looking at much more difficult issues that our tooling has a hard time finding, like business logic issues, which are very difficult, or even AI-related issues, securing AI and LLMs.

When organizations make a shift from basic issues to business logic vulnerabilities  

Tristan

That's a super interesting answer. So I hope we still have a bit of time before developers take over all these jobs and become infrastructure engineers, security engineers, and QA engineers all at the same time.

Jeevan

Yeah. I don't think you want me to write code again. I'm a little bit rusty. 

Tristan

Well, same on my end, I'll be honest. It's interesting what you're saying about how, at some point, the security people stop looking at the basic issues, like injections, and take a look at the broader scope, like business logic vulnerabilities. This part is interesting because we are working on specifically this topic at Escape with AI, but also on big infrastructure security issues that involve many services working together.

I've noticed with larger companies that, when the company becomes more mature in terms of security processes, the issues tend to be related to architecture or deeper business logic. Is that something you have observed as well? Is there a shift at some point in the maturity model of a company where you start to have mostly these broader-scope, infrastructure-level issues?

Jeevan

I do think so. Larger companies typically have larger security teams, and they do have a lot more architects and even higher-level ICs that really influence how you should be building out your features and functionality. That being said, it's also very, very difficult to do the basics at large organizations.

So, one of the challenges I've seen in my career is that, at times, we don't even know all of the domains that we own. I've been at places that suffered from that: we owned thousands to tens of thousands, maybe even hundreds of thousands, of domains.

And with that whole footprint, it is hard to even understand what we own. But those large companies do focus a lot on the foundations and the fundamentals initially. And once they get into a good place, they are able to move up that chain of challenging security issues: once the foundations are done, they can start looking at things that are more focused on the business logic side, where there isn't tooling.

And you have to really sit down and map out exactly how you're going to fix those things. Startups, on the other hand, a lot of startups don't have security people, so it's really difficult for them to consider those sorts of challenges and problems. As companies mature and grow, they will have more security engineers to help them.

And hopefully those security engineers know how to do the basics, so that they can focus on the more difficult problems.  

Best security practices for startups

Tristan

That’s actually quite interesting. Say you had a friend, or someone you know professionally, who is the CTO of a startup that is growing, but not to the point where they can hire their first security person. And this person asks you: can you help me put the best practices in place for security in the long run? What would your advice be, taking into account the fact that they don't have many resources to invest in security? What would the priority be for them?

Jeevan

This happens a lot more these days.

There have been a lot of them reaching out, not just to me, but to a lot of folks in the industry. Before, you would have at least a hundred engineers, maybe 200 engineers, before you'd think about security at all or make your first security hire. But I've been chatting with startups of 14 people, 20 people, and they are like: no, we want to embed security right away.

Like “we think we're already too late”. So, a lot of folks are hiring when there are only maybe 30 or 50 engineers; they're making their first security engineering hire.

So one of the things that I definitely talk to them about is what to focus on…

Microsoft put out a great SDL, the Security Development Lifecycle, about 20 years ago, and it's still relevant today. Really focus on the left side. Everyone talks about shifting left, and there's a reason: it's very cheap to do security when you shift left. The very leftmost side of shifting left is training.

So provide some sort of training to your engineers, so that they think about security more and build up those processes very, very early on. Then think about it while you're in the requirements phase: should we be shipping this data to this third party? Does that make sense for us as a business? What are the implications for us? What if that company gets breached? What happens to us, and what do we need to do on our end?

And then you have your design phase, and that's where you just spend a little bit of time. At bigger companies, we threat model: we dive really deep into the features, we look at all the individual risks that can happen anywhere, and we try to mitigate those risks. But even at a small startup, just spend an hour and ask: what can go wrong? What are we going to do about it? And did we do a good enough job? And then there's the remaining one of the four threat modeling questions: what are we building?

So: what are we building? What can go wrong? What are we going to do about it? Did we do a good enough job? Those are the four questions that you need to ask yourself, and anyone can ask those questions and have responses for them. So smaller companies, I think, can actually really think about security early on, and if they focus on training, requirements, and design, they'll be in a really, really good place. Then, when they do hire their actual first security engineer, hopefully there isn't too much work that needs to be done.
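
💡
As a lightweight illustration of how a small team might capture those four questions without any dedicated tooling, here is a minimal Python sketch. The structure and field names are assumptions for the example, not a prescribed format.

```python
# A minimal, tool-free record of the four threat modeling questions for one feature.
# The structure is illustrative, not a prescribed format.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ThreatModel:
    what_are_we_building: str
    what_can_go_wrong: List[str] = field(default_factory=list)        # threats
    what_will_we_do_about_it: List[str] = field(default_factory=list) # mitigations
    did_we_do_a_good_enough_job: str = "not reviewed yet"             # follow-up / retro

    def unmitigated(self) -> List[str]:
        """Naively pair threats with mitigations by position; anything left over
        is a reminder that the model isn't finished."""
        return self.what_can_go_wrong[len(self.what_will_we_do_about_it):]

if __name__ == "__main__":
    tm = ThreatModel(
        what_are_we_building="Export customer reports to a third-party analytics vendor",
        what_can_go_wrong=[
            "Vendor breach exposes customer PII",
            "Export job runs with over-broad database credentials",
        ],
        what_will_we_do_about_it=[
            "Strip or pseudonymize PII fields before export",
        ],
    )
    print(tm.unmitigated())   # -> ['Export job runs with over-broad database credentials']
```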

Getting developers to be well-trained on security topics

Tristan

This is very interesting, especially thinking about training. There are a lot of discussions about training. For example, just a few days ago, I went on Reddit and asked the question: how do you make your developers interested in cybersecurity? I put that in the blue team security subreddit, right? And there were dozens of comments and likes, and everyone was talking about it, because this is a topic that captures a lot of attention from security people.

What's your take on it? How do we get developers to be well-trained on security? How do we train developers the right way?

Jeevan

I think you nailed the first part, which is: how do we get them interested in security? And the interest does come from just chatting with them a lot more.

A long time ago, I chatted with someone from Salesforce who was working on the security awareness and training team there. Marissa, awesome individual. And I had the question: how do you scale security? It's really difficult to scale security. And she said: the best way to scale security is to talk to people one-on-one.

That doesn't make sense; it doesn't look like scale to me. But I took her advice to heart, and I started talking to people one-on-one, and wow, what a difference it made. Once you can show your genuine passion and interest in security, it is infectious, and other people get interested in security.

By talking to people one-on-one, I was really able to help scale security in that particular organization. We had about 150 to 200 engineers there, and it took time, a couple of quarters' worth. I didn't talk to everyone, but once you talk to enough people, there are enough people caring about security that you can really impact how things happen within the organization.

And yeah, so that's the first part: just start talking. Talk to people individually, talk to them about your concerns, and people will actually come to care about your concerns. And then the training part: I always like making custom training for the organization that I'm at.

Mostly because of the type of training that I want newcomers to see. We had great training at Segment, with two parts that every single new engineer had to go through. The first was to think like an attacker: we give you the tools that attackers have, and you start exploiting a vulnerable application.

And that really helped people figure it out. We show them how they can exploit it, and then we give them the challenges and say: can you find this type of vulnerability in the system? So it really made them think about how people are going to be attacking their software, and hopefully helped them think about how they can prevent that.

The second part of that training was secure code review: how do you look for vulnerabilities in your code, or in other people's code, when you do a review? And when you have that first collaborative exercise with developers as they join the organization, they don't think of you as an adversary.

They think of you more as a peer, and it really helped shift the culture within the organization. Later on, at Segment, we rolled out threat modeling training. We had a hypothesis: we noticed that more and more security reviews and threat models were coming our way, and we knew that the engineering org was going to scale way faster than the security org.

So we wanted to get ahead of it and say: okay, let's try training all of the engineers on threat modeling, and if they can discover vulnerabilities, we can focus on the core threat models that we need to do, and they can do the rest of them. So, going through it, we started teaching developers.

We taught them in a way that was very collaborative and engaging, and it completely shifted the culture there. It was one of the things that completely surprised me: everyone really started caring even more about security. They already cared a lot, and they cared even more.

One of the stories I like telling: there was a particular high-severity vulnerability in the programming framework that Segment was using, and one of the engineering managers noticed it and discovered three areas in the ecosystem that had that vulnerability. He patched his system and alerted the other engineering managers. They're all on the East Coast, so he patched one and pushed it out, and the other two were going to be patched by the end of the day. When we woke up on the Pacific coast, everything was done for us, and we didn't have to do anything.

The culture was so embedded, and they cared so much because they knew more about security, threat modeling, and their systems. It was internal; they had that desire to make sure that they reduced the attack surface.

So before the training, you have to really evangelize, show people that security is not going to block them. We're not here to just say no all the time; we actually want to partner with you. So:

  1. Really evangelize and make sure that you get people excited about security.
  2. Then provide them with the training they need.
  3. Get to that next level: show them how attackers hit your system, how you can defend your system, and how you can discover risks early on, so that they don't have to worry too much about these sorts of issues.

That will really get them to the place they need to go.

Why involving developers in security tooling POCs is important

Tristan

That’s where you get to: once you start training and making developers interested in security, they tend to actually fix the issues themselves; they volunteer to help with security work. Do they also make better decisions when it comes to security strategy or buying new security tooling?

Jeevan

Definitely on the security strategy side of things, I feel so. Engineers are much smarter about their systems than security people are, mostly because they're building these systems, while we get maybe an hour or two with a system and provide the guidance that we can provide.

But if you have that partnership and discussion, you'll get to a much, much better place. And those individuals, when they're trained, will already know what needs to be done, so the amount of convincing we need to do is zero. Sometimes they even have a better solution for the problem than we do.

With respect to security tooling, I still feel that's the domain of the security team, because we are looking for something very specific. Having said that, I hate doing POCs without engineering being involved, especially if the tool is very impactful for engineering. A new static analysis tool, for example, is very, very impactful for engineering.

I want to make sure I'm pulling in at least a sample of engineers from within the organization to talk to them, have them involved as part of the POC, and get their opinions. Is this going to add too much friction to the process, or not enough? What are their opinions on it? I really want to make sure that they are partners in it. Because if I were to make that decision alone, and three or six months down the road everyone hates it and is finding ways to bypass it wherever possible, that's the wrong decision that I made. But if we did it collaboratively, I already know all the problems with the various tools that I've worked with, and we absolutely have to choose a tool anyway.

So I can work with them to choose the least-worst tool, if that's where we end up, or the best tool if everything worked out well for them. I really want to make sure that we partner with them on those tooling decisions.

Friction around choosing security tools and how to get buy-in

Alexandra

Have you had an experience where it was a big fight? For example, you absolutely wanted a tool to be implemented, and someone from the engineering team said: no, not at all?

Jeevan

I don't think I've had that sort of friction. Usually, the partners that we have are the platform engineering teams, who are responsible for the developer experience, the DevEx, for the entire ecosystem. So they already understand the challenges that they, or the other teams, will have with a tool.

And they've already bought into the why. Before we even select the tool itself, I want to sit down with them and say: this is why we need the tool. It might be that we have to fulfill a regulatory requirement, or that our enterprise customers, these million-dollar, 10-million-dollar customers, are expecting us to have this sort of thing.

So when you can actually put a business value or a dollar value on why we need to do it, it really makes things a little bit easier. The friction is definitely around the type of tool. And I feel that typically engineering is not nearly as opinionated as security teams, but they will tell me: nope, this is bullshit, we're definitely not integrating this particular tool into our ecosystem; or: yeah, we can get this tool in here, but it's going to cost us three quarters' worth of work to really shore it up so it's working optimally in our ecosystem.

It's really about understanding the balance, the pros and cons of each tool, and coming to a decision collaboratively.

What is restraining us from reaching the perfect top-down vs bottom-up combination & the bottlenecks for application security today

Tristan

What I understand is that you try to involve both the security team and the engineering team when onboarding new tools. Of course, the security team has to be involved, because they're the primary users and the primary controllers of security tools. But speaking again about bottom-up versus top-down, what is restraining us from reaching the perfect combination? What is the bottleneck in today's practices for application security in general? What is not working in the decision processes, in the strategies?

Jeevan

That's a really good and tough question. 

Tristan

I know. We told you. We told you.  

Jeevan

Every company is different, and in every company, top-down is always a problem, and bottom-up is always a problem as well.

So, top-down, the challenge that we come across is the amount of investment that we need to make in security. Execs, yep, they know that we have to care about security, and they are passionate about security, but they also have a business to run. So how much of the budget does security get? Will we be able to hire five new people next year, or are we getting zero headcount and having to deal with the fact that engineering is still growing and we're not? So top-down, it's really those sorts of budgetary, financial challenges: making sure that we're getting what we need to be successful in our roles.

And bottom-up, you'll run into individuals who are naysayers. They're like: nope, we don't care about security; we're just going to do what we do. Sometimes they're very vocal about it, which can be toxic, and you don't want your security engineers to work with someone toxic. The mitigation for that is really sitting down with them, understanding where they're coming from, and seeing if you can come to an agreement.

If you can't, you just escalate. Talk to their manager, director, VP, whatever you need to do, or remind them: you might not agree, but you have to do your job, and this is part of your role and responsibilities. Or, worse, they'll say they'll do it, but they never do it. Which, again, is why I love showing data and metrics.

So if there's a director that says: yeah, I'm absolutely passionate about security, we always patch and do these things, well, I have always liked to run a data-driven program. I like to collect information, and being in security is sort of, I'm going to age myself with The X-Files here, that "trust no one" model.

I like to trust people, but I like to verify with data. So maybe I get data showing that this particular team always extends their SLAs and never actually fixes vulnerabilities, or the tools show that this particular team has a ridiculous amount of security technical debt.

I would show that data to those directors and try to make sure that we drive it to a good place, saying: hey, all of your peers have about half the vulnerabilities in their ecosystem, you have double, what's going on? How can I help you? And if I can't really get through to them with a partnership, then I would go back to the top-down approach and say: hey, everyone else is doing much better.

This individual, I tried working with them. These are the things that we've tried to do. Nothing's working. I need you to step in and start making sure that they're committing to what they need to commit to.  
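
💡
As a small illustration of the kind of data-driven comparison Jeevan describes, here is a hypothetical Python sketch computing per-team SLA compliance from vulnerability records. The record shape, SLA windows, and team names are assumptions for the example.

```python
# Illustrative per-team SLA compliance metrics; fields, windows, and teams are hypothetical.
from collections import defaultdict
from datetime import date

SLA_DAYS = {"P1": 30, "P2": 90}   # hypothetical remediation windows per severity

def sla_compliance(findings):
    """Return {team: fraction of closed findings fixed within their SLA}."""
    hit, total = defaultdict(int), defaultdict(int)
    for f in findings:
        if f["closed_on"] is None:
            continue                      # still open; track separately if needed
        total[f["team"]] += 1
        days_to_fix = (f["closed_on"] - f["opened_on"]).days
        if days_to_fix <= SLA_DAYS[f["severity"]]:
            hit[f["team"]] += 1
    return {team: hit[team] / total[team] for team in total}

if __name__ == "__main__":
    findings = [
        {"team": "payments", "severity": "P1",
         "opened_on": date(2023, 1, 5), "closed_on": date(2023, 1, 25)},
        {"team": "payments", "severity": "P2",
         "opened_on": date(2023, 1, 5), "closed_on": date(2023, 6, 1)},
        {"team": "growth", "severity": "P1",
         "opened_on": date(2023, 1, 5), "closed_on": None},
    ]
    print(sla_compliance(findings))       # e.g. {'payments': 0.5}
```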

Tristan

Yeah, I totally understand. You try the bottom-up approach first, and the top-down approach is a kind of last resort, in case everything else fails.

Jeevan

Exactly, the carrot and the stick. I much prefer giving out carrots to people, of course, or chocolate bars, whatever they fancy, rather than sticks.

Tristan

What will be, according to you, the next revolution in application security processes? You've talked about bottom-up and shift left, but what is the next big thing?

Jeevan

Yeah, I think this is a two-parter. 

First, you already hit on part of it with training. There's an expectation that every engineer will have to be good at security. So we'll see a lot more training, a lot better training, being put out within companies, and probably a lot more companies spun up specifically to think about training and how you can actually do it well and scale it with the organization.

And then, I hate to say it, but there's always AI and ML. How is that going to play out? It's a hot topic; you can't go to a single security conference or have a single conversation without someone saying AI or ML, but it is definitely going to change how we do things.

So both: how do we incorporate it to help us roll out better and stronger security, but also, since a lot of us work at companies that collect a lot of data, how do we make sure that we are leveraging that data in a safe and secure way, that no one is exposing those data sets, and that no one is leveraging the AI in a way that gets them secrets or access to data that they shouldn't have? Being able to secure the data sets that we have within our organization.

So yeah, training and AI/ML. It's going to be very, very interesting.

And the other part: I always worry that adversaries are always a few… It feels like they're always a few steps ahead, just in general. So, how are they going to be leveraging AI to attack us? We have to keep that in the back of our minds. They're writing better phishing emails these days; what other techniques are they going to be able to use to attack us?

Jeevan’s recommendations to young security engineers

Tristan

It's hard to tell if it's phishing or sales. Thank you, Jeevan. I have two last questions for you, and I believe those are mostly for our listeners: they're security engineers, security professionals, and perhaps developers who are fond of security. First, what advice would you give to younger security engineers who want to progress in their careers?

And, second, do you have a book to recommend about application security or about security in general that our listeners should definitely read?

Jeevan

Great, great, great questions. For the younger folks, or people who are just getting started on the security side: I think the most important skill set, especially when I'm looking to hire individuals, is their desire and hunger to consume more security knowledge.

So if I'm hiring a junior or an intermediate, I'm okay with them not coming in with too much knowledge; that's expected at the junior or intermediate level. I don't expect you to be able to fix all of the things, but I do expect you to want to learn, to consume, to be a sponge, and to be super, super hungry to learn and grow.

And I guess that rolls into the second part, the books and that type of information. I see a lot of folks who are not nearly adept enough at threat modeling these days; I've even chatted with senior folks who are just getting started in threat modeling. Threat modeling is not too difficult in itself.

We talked about those four questions that you just need to ask. In my career, especially early on, I did it wrong, and I didn't realize I was doing it wrong until maybe a couple of years in. So, I highly recommend that you get books on threat modeling. Learn about threat modeling, join the OWASP Slack, where there's a big channel that literally the best threat modelers in the world are part of. Listen in and see what type of stuff they're thinking about.

So that's one thing. The other is that I've found podcasts to be a very good way of listening and learning about security. This one's a great one to listen to, but there are also very specific ones around AppSec. So, depending on the flavor that you like, there are a number of them.

So, podcasts and books on threat modeling. That's the advice I would have for folks looking to get better at security.

Alexandra

Well, I need the link from you after the podcast. 

Jeevan

Yeah, absolutely. 

Alexandra

That was exciting. Well, thanks a lot for the conversation. I think it was great, and I've learned a lot because I'm quite new to security, so for me it was definitely super insightful. Thanks a lot for taking the time as well.

Jeevan

So yeah, thank you for having me. A lot of spicy questions today. I love those challenging, hard ones, and the ones you just have to be very careful navigating.


💡
Explore additional episodes in our series! Dive into our conversation with Aleksandr Krasnov (Meta, Thinkific, Dropbox) as we dissect the shortcomings of current DAST tools.