Welcome to the Elephant in AppSec, the podcast to explore, challenge, and boldly face the AppSec Elephants in the room.
Today we’re excited to have an amazing guest, Derek Fisher, joining us.
Derek is a cybersecurity leader, a speaker on various cybersecurity topics, and the author of “The Application Security Program Handbook.” Having been a developer before shifting to application security more than a decade ago, Derek has had the unique opportunity of seeing the industry from the perspective of someone who builds the software that needs to be secured.
In our latest episode with Tristan Kalos, we challenged Derek on whether threat modeling is the future of cybersecurity or just another buzzword. We discussed how to do threat modeling right (and wrong), what’s wrong with its current state, and what its future might look like.
In our conversation, Derek shares:
- The benefits of threat modeling and why you must adopt an adversarial mindset
- How threat modeling can be integrated into security processes
- How the approach to threat modeling differs across types of organizations
- How to avoid a check-box mentality
- Where to start your threat modeling journey
- Why threat modeling is not just about following a framework but a mindset shift
- How to do threat modeling wrong and what can be done better
- The short-term and long-term impact of AI on threat modeling
- The future of threat modeling
- Why enjoying what you do matters for finding the right cybersecurity field
Let’s dive in!
Resources mentioned in the episode:
- MGM cyber-attack that led to data breach
- OWASP Threat Dragon
- Microsoft Threat Modeling Tool
- STRIDE model
- DREAD model
- PASTA threat modeling
- MITRE ATT&CK®
- The Application Security Program Handbook
- Alice and Bob learn Application Security
- Derek's courses on Udemy
Find the full transcript below:
Setting the stage
Hi, Derek. We're excited to have you with us today. In the industry, the importance of threat modeling is increasingly discussed, and that's the topic we want to explore today. Welcome to our podcast.
Let's start by setting the stage with a real-world example. Some listeners may recall the significant cyber-attack on both MGM Resorts and Caesars Entertainment last September. It gained attention on social media, impacting MGM customers' ability to make credit card transactions and withdraw money from ATMs. The systems were completely shut down, resulting in an estimated $100 million in losses for MGM Resorts. In your opinion, could this have been prevented with proper threat modeling practices?
Yeah, first of all, thank you for having me on the podcast; I appreciate the opportunity to discuss this. I was actually at Black Hat right before this incident occurred. Often, when a major security conference happens and a cyber attack follows, people point to the conference and say, "Hey, somebody there was just fooling around." In this case, though, it went much deeper than a casual connection to the conference.
To your point, it was rather costly for MGM, with reported losses ranging from $80 million to over $100 million. There were significant operational disruptions, including issues with slot machines, credit card access, and guest room entry. The total outage lasted about a day and a half to two days, resulting in substantial financial and operational costs.
When teaching threat modeling, I often use examples like Amazon. As a big online retailer, if they have an availability issue and go offline, even for an hour, that's a huge revenue hit for a company like Amazon. And I'm tying all this back to threat modeling.
I promise I'll get there. But it's not just the dollars-and-cents impact; a lot of people were inconvenienced. There's the short-term impact of, hey, we're not bringing in revenue for X amount of hours, but there's also the long-term impact of reputational damage.
And not just that, there's a longer-term cost: now we need to hire additional security people; now we need to look at compliance-related frameworks we need to implement. There's a long tail of additional costs that aren't really baked into that initial breach.
I want to caveat all this: it's not that MGM Resorts International didn't have a security team. They have a security team. For all of us in the security industry, we know that usually we're one crisis or one vulnerability away from a total disaster. It's a case of being ever-vigilant. I don't want to sling mud at MGM; I'm sure they had everything in place that they thought was appropriate to try to thwart this. When you tie it back into threat modeling, the easiest way to think about it is: What do we have? What are our assets? What can go wrong? And what are we going to do about it? For MGM, that means asking: What happens if we're offline for 4 hours, 12 hours, 36 hours, a week? What happens to our payment systems? What happens to our clients who can't get into their rooms? Asking those questions and saying, okay, what are we going to do about it? Those are simple questions somebody can ask themselves, and again, I'm not saying they didn't. I have never worked at MGM Resorts International, and I don't know anybody who works there.
Have you been there?
I have been there, but I don't know their security team. I'm sure they are doing the best they can. A lot of times, it comes down to a cost balance: What is the risk of this occurring, and what is it going to cost us to do something about it right now? There are different ways to balance that and say, "Okay, this risk is very low." We do this all the time in security, balancing risk against the business. Is this risk probable? Is it going to happen? Do we have the right controls in place to protect against it? If not, we need to do something about it. If we feel comfortable that our controls will hold and the risk is low, maybe we're not going to do anything about it. That may have been part of the calculation they did here, but it comes down to asking those basic questions: What do we have? What can go wrong? And what are we going to do about it?
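The risk-versus-cost balance described above can be sketched as a back-of-the-envelope annualized-loss calculation. This is a minimal illustration, not anything from MGM; all numbers and names are invented:

```python
# Back-of-the-envelope risk balance: expected annual loss vs. mitigation cost.
# All figures are invented for illustration.

def expected_annual_loss(likelihood_per_year: float, impact_per_incident: float) -> float:
    """Classic ALE-style estimate: how much this risk costs per year on average."""
    return likelihood_per_year * impact_per_incident

# A risk we think happens once every 20 years and costs $2M per incident.
risk = expected_annual_loss(likelihood_per_year=0.05, impact_per_incident=2_000_000)

mitigation_cost = 250_000  # yearly cost of the extra control

# Spend on the control only when the expected loss exceeds what it costs.
decision = "mitigate" if risk > mitigation_cost else "accept"
print(f"expected annual loss: ${risk:,.0f} -> {decision}")
```

Real risk decisions weigh far more than two numbers, but this is the shape of the "is it worth it?" question being described.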
In this case, it was an example of a real targeted attack, and those are very difficult to defend against. It wasn't a mass phishing campaign where they sent out emails to employees at MGM; they targeted a specific individual working at MGM. Not all the details are clear, because some of them haven't been disclosed, but what's publicly known is that somebody was targeted with a vishing attack, a voice phishing phone call.
We put security controls in place for mass attacks: persistent phishing campaigns, constant port scanning, endpoint malware drops. However, we're not always looking at the individual being targeted, which is a more complicated defense.
Okay, so in the future, we would also potentially have to look at what could be prevented at the individual level. Could threat modeling be part of the future of cybersecurity, helping security people do that?
I think, you know, this is still a threat, right? You still have to model it, and MGM is probably doing this now, saying, "Okay, what happens if one of our employees is targeted in an attack and their credentials are compromised?" We look at threat modeling from the perspective of not so much Jane working at the front desk, but what happens if a privileged account gets compromised? What are we going to do about that? So it still ties into threat modeling.
Benefits of threat modeling and adversarial mindset
And if you had to describe or summarize the benefits of threat modeling, what would it be?
Let's take the MGM example, where we say a privileged account has been compromised. This is where you start talking about isolation: How do we reduce the blast radius? How do we incorporate something like zero trust? We say, "Okay, assume compromise; assume everybody within our network is an adversary." Honestly, that is the mindset all of us in the security space need to have, assuming that everybody within your network or your system is adversarial, because they may not be today, but they could be tomorrow, whether it's an intentional malicious insider or an insider who has had their credentials compromised.
So take that scenario: okay, we have an insider in our system, in our network, and they've been compromised. How do we limit that exposure? How do we ensure the blast radius around that individual is as small as possible?
How threat modeling can be integrated into security processes
Okay, very clear to me. And how can we implement it in our processes? Security is a lot about processes and working with developers, and we pay a lot of attention to how those processes are constructed. So how could we integrate threat modeling?
Yeah, that's the difficult part, because with threat modeling in product security and application security, we're very accustomed to buying a tool and implementing it in the software development lifecycle. We get scan reports out of it and burn down those reports; implementing static analysis is a typical example.
We get 100 vulnerabilities thrown at us a week, and we just burn down those 100 vulnerabilities over a period of time. We can argue about how effective that is and whether it's the right approach, but we've figured that process out.
From a threat modeling perspective, that's more difficult because to do it correctly or to do it accurately, you want to make sure that you have the right people involved that can answer the right questions. That often involves product people with knowledge of the product, people knowledgeable of the technology, those that are knowledgeable of the operations of that application. You may need to get, depending on your technology stack, cloud specialists in there, and you need security people. So you now have this group of individuals, each coming from a different set of disciplines, to really be able to tackle that threat model and say, "Okay, here are the things that can go wrong. Here's how we're going to correct them."
That is very time-consuming, and it doesn't scale because it's a very people-intensive process. When you look at the tooling perspective, there are plenty of tools out there. OWASP has one called Threat Dragon, and Microsoft has one called the Threat Modeling Tool, I think. Those are free graphical user interfaces: you drop blocks on the screen, draw some lines between them, and say, "Here's my architecture, my data flow."
It'll spit out a bunch of threats that you need to protect against and give you recommendations on how to protect against them. The issue is that you come to a point where it feels almost like you're checking a box. It's like writing code: I've finished that function, that function is checked into the branch, it's running in production, we're good. You get that same mentality with the artifact you're creating in threat modeling: you create an artifact, it's done, somebody's reviewed it, you're fine. But it doesn't mean you're actually improving your security.
There's automated threat modeling, which has been catching fire lately, but it really comes down to the questions you need to ask at the user story level or the requirements level: "Hey, we're building this new feature, this new functionality for a client; what can go wrong?"
Threat modeling in different types of organizations
You know, what happens if the credentials within this function we're creating get compromised? What happens if someone is able to view that data without authorization, exfiltrate it, or corrupt it? And it's going to vary depending on the organization and the industry you work in. I've worked in healthcare, I've worked in the financial space, I've worked in the military space. All of them have different types of requirements and different concerns they're worried about.
For a hospital application that operates in, say, an emergency room or an operating room, uptime and ensuring that the data is correct are paramount. You want to make sure that you have access to the information you need and that the information is correct. A financial institution, on the other hand, may be more concerned about the confidentiality of its data. Those kinds of questions need to be put into the context of the industry and the organization you work within.
Do you think that, since all those different industries have very different problems but also different compliance regulations, threat modeling can also help a lot with that?
Yeah. And that's again, where I think it means that you have to have the right people involved when you're doing those threat models.
Because they're basically checking the boxes again, right?
Yes, because I think one of the biggest issues is that not everybody's a subject matter expert on everything, right? Security people are very good at knowing how to secure something, how to create the architecture, and often how to break into things. That's what they think about, right?
Compliance people are always thinking about the impacts of this data being breached from a business perspective. We have fines, we have contracts that we may be in breach of, and things like that. And the business people or the product people are going to know what are the impacts to the clients. So I think that's why it's important to get that view from different angles of different individuals to really get a holistic view of that threat model.
Where to start your threat modeling journey?
And once you decide to integrate threat modeling in your organization: you mentioned different models already, like automated threat modeling and manual threat modeling, and tools that help with a nice graphical UI.
So, for instance, today we have people listening to us who want to start their threat modeling journey. What do you advise them? Where exactly should they start in practice?
It's not going to be a fun answer, but the best way to start is, honestly, the manual approach, because you need to know the ins and outs of threat modeling. You need to know how to build a threat model. The same is true in development: before you automate something, do it manually first, so that you know how it works, where the connection points are, the ins and outs and all that. Then automate it.
And I think that's true for those trying to get started on their threat modeling journey: What does it take to create a threat model? What are the inputs and outputs? What are my expectations? Understand the different types of methodologies that are out there. And then start leaning on an automated tool.
Now, you can use a graphical tool like Threat Dragon or Microsoft's Threat Modeling tool to help you understand how the architecture aligns with threats. But it's good to get your hands kind of dirty on basic threat modeling before diving into something that offloads it for you.
Threat modeling is not just about following a framework, it's a mindset shift
Of course, there are standards like STRIDE, DREAD, and PASTA, and frameworks like MITRE ATT&CK related to threat modeling. Are those good for getting started? Do they give real insights, or are they for more advanced power users of threat modeling?
It depends. Each one has a different application. STRIDE is probably the most well-known, but using it doesn't automatically make your threat model right. It's a way to categorize threats that can occur. Saying "we're going to utilize STRIDE to do this threat model," what does that mean? If a threat falls outside the spoofing, tampering, and repudiation categories, does that mean it doesn't exist? It goes back to the check-the-box activity: "Hey, I created the threat model; it's done." You can fall into the same traps with a very specific threat modeling methodology, saying, "This doesn't fall cleanly into the methodology we're using, so do we even worry about it?" I talk about this in my class: we do threat modeling on a daily basis, whether we know it or not. We think about it when we're walking to our car, while we're out shopping, or while we're going out for a run.
I mean, we're constantly in our minds, you know, threat modeling, we're not following a process. We're just thinking about like, "Hey, what can go wrong if I'm doing this? " And so I think that's a mentality shift. I think those frameworks help people, you know, focus on certain aspects of threat modeling, but threat modeling is more than following a framework.
It's a mindset shift.
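To make the framework point concrete, here is a minimal sketch (not from the episode) that treats the six STRIDE categories as brainstorming prompts rather than a pass/fail filter. The questions are paraphrased, and a threat that fits none of them still counts:

```python
# STRIDE as a prompt list: each category is a question to ask about a
# feature, not a test that a threat must pass to "exist."

STRIDE = {
    "Spoofing": "Can someone pretend to be another user or service?",
    "Tampering": "Can data or code be modified in transit or at rest?",
    "Repudiation": "Can someone deny an action because we lack evidence?",
    "Information disclosure": "Can data leak to someone unauthorized?",
    "Denial of service": "Can the feature be made unavailable?",
    "Elevation of privilege": "Can a user gain rights they shouldn't have?",
}

def brainstorm(feature: str) -> list[str]:
    """Turn the STRIDE categories into concrete questions for one feature."""
    return [f"{feature}: [{cat}] {q}" for cat, q in STRIDE.items()]

for prompt in brainstorm("password reset flow"):
    print(prompt)
```

Running it against a hypothetical "password reset flow" just prints six questions; the value is in the discussion they trigger, not the list itself.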
A quick example of threat modeling: shampoo in the hotel room
Exactly, especially because thinking out of the box is one of the main aspects of threat modeling, right? Anticipating where you'll be attacked, where problems may arise, not just following the rules.
So, a quick example from several years ago: I attended a threat modeling class. The instructor worked in cybersecurity, just like everybody else in the room, and he shared a story about a hotel he had stayed at.
In his hotel room, they had dispensers in the shower for shampoo, conditioner, and body wash. (I don't need the shampoo, so I'd be fine.) But he went down to the front desk and asked the person there, "What if somebody replaced that shampoo with hair removal cream?"
And the person at the front desk was like, "Why would anybody do that?" And he said, "I don't know, I'm just asking. There's no control on it. Somebody could just pour hair removal cream in there, and the next person who goes in there suddenly looks like me."
Never going in a hotel room again.
But I mean, it was a very silly example, but it's that mentality, that mindset shift, right?
Thinking, "Look at this thing sitting in front of me; what can go wrong?" You start playing those things out in your head, and it's like, yeah, somebody could do that, and then you ask the question.
Yes, yes, exactly. But, you know, and then you start asking yourself, well, what can we do about it?
We could put little locks on each one of these and give the key to the cleaning staff, but then it's like, okay, is that really worth it? The risk is so low, and doing that would be costly for us and have operational impacts.
And again, that's kind of a silly example, but it's the mentality shift, right? It doesn't follow STRIDE or DREAD or PASTA or those other things. It's just: I'm thinking about how this can be broken, how this can go wrong, and what I'm going to do about it.
ROI of our actions & how to avoid over threat modeling
But you still have to calculate the ROI of doing something, because sometimes there is a risk, but it's acceptable compared to the cost of fixing it, right? In the shampoo case, for instance, putting locks on every dispenser is not necessarily worth it compared to the risk. Is it the same with threat modeling in the context of a company or an organization? How do you avoid over-threat-modeling, doing it too much?
Do you mean like identifying threats? Yeah. This is another area where, when I talk about threat modeling, the common phrase is that your threat model is never truly done. Because your system changes the moment it goes into production. For example, if you're creating an application, the moment it goes into production, you're already working on the next thing. You might receive feedback about something not working or needing changes. It's a constant environment, especially in DevOps, where there's a continuous set of changes. The days of releasing applications every six months or occasional hotfixes are long gone.
Companies now push code into production multiple times a day. To answer your question, there's no definitive end to threat modeling. This leads to the realization that tools alone are not the solution. You can't expect someone to fire up Threat Dragon every time there's a code change to update their threat model. It depends on the size of the organization, but in large organizations with tens of thousands of developers, that's not feasible. It comes down to whether we can automate, ask basic questions, and educate developers, architects, and those involved in system development to reflexively understand threats. They need to know that when they do X, Y could happen and consider what they'll do about it.
How to do threat modeling wrong
So I guess those are the steps you can take to apply it and do it right.
And I think the question that comes up is how to do it wrong: what shouldn't people do at all, based on your experience? How shouldn't they approach it when they start?
I think one of the most challenging parts of threat modeling is scope. What I mean by that is knowing what is part of your system and what isn't. The concept is that the threat model is never done, and you're always going to be threat modeling. The reason scope is crucial is that it involves a lot of work. You want to identify what you can truly control in this threat model. If I can't alter the code, that's out of scope. There's a difference when working with a third party, like calling an API: I can't change the API's code, but I can alter the data I send or receive, making that in scope. Knowing your scope is often a tricky part of threat modeling, and sometimes that can go wrong. Identifying assets, understanding what's part of your system (data, systems, components, people), that's the hardest part. Once you identify these, the pieces of threat modeling fall into place.
What can be done better?
That makes total sense. If we take a step back and think about what's wrong with threat modeling, especially the current state of threat modeling, what do you think are the main bottlenecks? What can we do better?
It's a lack of, I don't want to say training, but a lack of knowledge and understanding for individuals in system development and software development. We work in an environment where everything has to be done immediately. There's constant pressure to move on to the next task. Things that require more time to think about or implement often fall down the priority list due to immediacy.
One bottleneck is the lack of knowledge, not necessarily formal training. People don't need to know how to create the perfect threat model that you can present at the next threat modeling conference. Just understanding the basics, what the risks are, what the threats are, and who the threat actors are, goes a long way.
And I think that's all about mindset, in fact.
And another bottleneck, I hate to say it, is trying to get the organization to fall behind a single process of threat modeling. That can't work, especially in a large organization. In a small organization, it may: you say, "Okay, we'll use Threat Dragon. Here's the process; everyone just do it this way." That's great in an organization with a hundred people. In an organization with tens of thousands of people, it doesn't necessarily work.
You can't say the only option for threat modeling is this way, following this process. That gets challenging. So there has to be flexibility, giving people the freedom to create the threat model in a way that makes sense for them. In security, we get hung up on creating processes where we can track metrics and say, "Hey, here's how we're doing." I'm not saying it's true for everybody, but in a lot of cases we focus on the process and not on what we're actually trying to do.
I think we have to keep asking ourselves: What is the risk? What are we trying to protect against, and how are we going to do that? Are we making the organization more secure day over day? It doesn't matter how we get there; as long as the organization is becoming more secure on a daily basis, that's good, right?
So I think being a little looser on the process of threat modeling, and giving more individuals within the organization the ability to produce those threat models, are good ways to unblock those bottlenecks.
The short-term and long-term impact of AI on Threat Modeling
Do you think AI will help each individual adapt their process in their organization, or is it actually maybe a threat?
I'm glad you asked; I guess we can't get through a podcast anymore without talking about AI. I actually had a conversation about this yesterday around AI and threat modeling. I don't think AI is there yet. The way it is today with generative AI, where you're asking it to produce something, there are challenges with hallucination, poisoning, prompt injection, and a whole set of new challenges. But where I think it can be helpful is in how AI can help developers write more secure code or, in the case of threat modeling, how it can help us produce some of those artifacts.
We won't get AI to open up Threat Dragon and create a diagram for us, but there are examples of using something like Threagile, a way to codify threats, risks, and mitigations. We can go into a generative AI tool and say, "Here's a threat I'm thinking about; can you codify that?" It can create JSON for you, and you can use that in automation when creating Threagile threat models.
In the short term, it's a way to use generative AI by having it create code, whether it's JSON or YAML, depending on the tool. Describing a threat, mitigation, or system to it and saying, "Spit out the code for this so that I can put that into my build process," making the security team happy with an artifact in the build. That's a good short-term way of looking at AI.
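As a sketch of the short-term idea above (a threat codified as a machine-readable artifact you can drop into a build), here is a hypothetical example. The field names are illustrative and not tied to any particular tool's schema:

```python
import json

# Hypothetical sketch of codifying a threat as a machine-readable artifact
# that can be checked into a repo and consumed by a build pipeline.
# The field names are invented for illustration.

threat = {
    "id": "THREAT-001",
    "title": "Privileged account compromised via vishing",
    "category": "Spoofing",
    "severity": "high",
    "mitigations": [
        "Phishing-resistant MFA for privileged accounts",
        "Least-privilege access and session isolation",
    ],
}

# Serialize to JSON so the artifact can live next to the code it describes.
artifact = json.dumps(threat, indent=2)
print(artifact)
```

The point is less the format than the workflow: describing the threat in code means the security team gets a reviewable artifact in the build instead of a diagram on someone's laptop.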
I think long term, we've seen AI change drastically in the past couple of weeks, and let alone in the past couple of months. I mean, it's changing so rapidly. I think it's hard to predict where we're going to be 3, 6, 9, 12 months from now. We may be in a year from now in a place where we can input a data flow diagram or an architecture and have an AI model just create the threats for us and spit out a list of mitigations and stuff.
We may be there in some time, but who knows?
The future of Threat Modeling
Actually, I wanted to ask you this question. If we talk about the future of threat modeling, and I think you've kind of answered it already: what will it look like in 10 years?
If we think about AI, in 10 years we may just have to input the data flow diagram of the app and have all the threat modeling done for us. But AI and other new technical innovations will bring more threats to applications as well.
So how do you think we will deal with that? Do we have to invent novel ways of doing threat modeling to counter the evolution of these threats?
I mean, none of us are going to be out of a job anytime soon, I don't think. I'm old enough to remember doing security in on-premise data centers.
And that's a whole different model, right? I could look out my window and see my data center across the street, so I knew where it was and where my data was. And then came the cloud, and that upended everything; now it's a different threat model.
You know, and then containers came along, and that's a different set of threats, and now you have AI and everything in between. I was talking to somebody at a networking event a week ago about how I think this is the challenge for security folks.
You know, I used to be a software engineer. I worked in .NET and C#, but I dabbled in Java. I was not an enterprise software engineer for Java; I was an enterprise software engineer for C#. But stepping into the security space, I need to know Java, I need to know C#, I need to know Python, I need to know Rails, and now Rust; the list goes on. And now I need to know cloud, and not just AWS, but GCP and Oracle Cloud.
And so AI is just another one of those things that gets heaped on security: "You guys go figure out how to secure this."
And I think that's the challenge for security folks: you need to be a Jack of all trades, or a Jill of all trades, and know what the implications are from a security perspective for those technology stacks or that technology innovation.
But it all comes down to the same things: confidentiality, integrity, availability. Those are the three things we talk about all the time in security. As long as you're bringing those new innovations and new technologies back to those basic principles, you're moving in the right direction. It's a challenging space, and things are going to have to evolve just as the technology is evolving.
Derek's learning recommendations
I totally agree with you, and thanks for sharing your vision on that. It all comes down to availability, confidentiality, and integrity.
We always ask it during the podcast. Derek, do you have any advice for the younger generation of security professionals, and do you have a book to recommend for them apart from yours?
You can pick up mine. I do get asked frequently about advice for those getting into the security space and how they can get into it.
There are a lot of challenges with getting into security. A lot of us in the security space are looking for more people to get into it, and there are plenty of people who want to, but we still have challenges making those connections. I think some of it is that we have these requirements for individuals to have a certain set of experience, background, and knowledge, and that limits who we can hire. So I try to ask individuals: what are you interested in?
If you want to get into application security or product security, I can certainly help you get there. If you're interested in information security, then that's a different branch. So it's important to really know what interests you because if you don't like what you're doing and if you don't wake up in the morning and say, like, I'm looking forward to doing the things that I'm doing, you're going to have challenges.
I think it's important to really like what you're doing. I look at the people who work in information security, and I don't get it. I don't know how you wake up in the morning and say, "I love doing information security," because to me it's just so boring, so dry. I know it's needed, and they probably wake up in the morning and say the same thing about what I do. So I think it's important to know where you want to go within security; then it's easier, or at least you can start connecting the dots on the path to get there. But the biggest thing, especially for those moving into a more technical space, is that you really do need a technical background. It doesn't have to be in security, but if you're going to work in application security, you'd better know how software is developed.
If you're going to be a SOC analyst or somebody who works in security operations, then you need to know how to identify ongoing activity and how to spot malicious activity.
If you're going to be a pentester, show examples of how you've done penetration testing in your labs and things like that.
It's one of those catch-22s people talk about all the time: you can go out and get all the certifications you want, but if you don't have experience, oftentimes an organization is not going to hire you.
So experience, I think, plays a much bigger part in your journey than certifications do.
Individuals often ask me, "Which certification should I go for?" I tell them I wouldn't spend a whole lot of time chasing certifications, because if you don't have the background or the experience, it's going to be hard to justify that certification to an organization. That also plays into getting a role: if you want to be a SOC analyst and you can't get in because you don't have experience, can you take a help desk job? I know that's not glamorous, and I know it's not the answer people are looking for, but can you get a help desk type of role to start building knowledge about how an organization works?
That'll get you the experience, and you can start chasing your security certifications while you're doing it. And here's the caveat: oftentimes organizations will help pay for that, right? They'll help send you to training to get your certification. Then you have an opportunity, working on the inside, to move into a security role.
So again, it's not a glamorous answer, but it comes down to this: know what you want to do and get that technology background under your belt, and that'll help grease the wheels to get you in.
So learn about the technology, build a technology background, and then define precisely what you're chasing. What is the position you're dreaming about? Then you can ramp up and learn about it.
And what about a book you could recommend?
I'm so terrible at this, because there are so many books out there, and a lot of good ones. There are a lot of people I follow on LinkedIn who have written books and are very good people to follow.
Chris Hughes is one. Tanya Janca is another; she wrote Alice and Bob Learn Application Security. Matt Rose is another guy I follow on LinkedIn, who wrote the foreword to my book. Unfortunately, as much as I love reading, and I usually have a book in hand, I just have not had time to get caught up on reading lately. It's been a lot of LinkedIn reading instead, trying to follow individuals. So I'll have to pass on a recommendation, just because I can't justify recommending something I haven't read recently.
Tristan: Okay, you actually gave us the names of really great people in cybersecurity who have a lot of interesting insights. On my side, I have read your book, "The Application Security Program Handbook", and I really recommend everyone read it. It's very insightful, and it goes from a very high-level view of security down to the details. I really enjoyed that, so I recommend it to everyone listening to us.
Derek: I appreciate that.
Tristan: Thank you, Derek. It was incredible to have you as a guest today, and it was a really nice deep dive into threat modeling. I really appreciate you sharing your insights with us.
Derek: Yeah, I appreciate it. Thank you.
Alexandra: Thanks a lot for being here with us.