Episode Summary
Episode Video
Episode Show Notes & Transcript
Show Highlights
About Crystal Morin
Links
- Sysdig’s 2025 Cloud-Native and Security Usage Report: https://sysdig.com/2025-cloud-native-security-and-usage-report/
- Sysdig on LinkedIn: https://www.linkedin.com/company/sysdig/
- Crystal’s LinkedIn: https://www.linkedin.com/in/crystal-morin/
Sponsor
Transcript
Crystal Morin: The way that I like to compare Sysdig Sage to my previous work is the genius guy that used to sit behind me when I didn't know something, and I could turn around and swivel my chair and be like, "Hey Dave, what does this mean?" and point to my screen, and then he would wheel his chair over and be like, da da da da da da da da and explain it all to me, and I'd be like, "Cool, thanks."
That's what Sysdig Sage is to me. It's my Dave, but I don't have to bother anybody to get the answer.
Corey Quinn: Welcome to Screaming in the Cloud. I'm Corey Quinn, and
in a remarkably refreshing change of pace, I'm going to start off by doing the ad read on this one live instead of splicing it in later, because this episode is brought to us by our friends at Sysdig. Sysdig helps companies secure and advance innovation in the cloud, because building in the cloud enables businesses to accelerate time to market, which is, of course, important to those folks. But the cloud has introduced a world where it only takes 10 minutes to initiate an attack. It would take less time if the cloud providers would get off of it, but so far, kind of fortunately, they haven't. But what that means is that security teams have to protect the business without slowing it down.
So, how do they identify and prioritize the real risks? Well, that's where Sysdig comes in. Sysdig's a complete CNAPP, that's C-N-A-P-P, which probably means something to security people, and it uses AI, because of course it does, to help security teams prioritize and stop the threats that matter most. Now, to learn more, you can visit Sysdig.com, or you can listen to me talk to Crystal Morin. Crystal's a cybersecurity strategist at Sysdig. Crystal, thank you for listening to me.
Crystal Morin: Thank you for having me, Corey. I'm excited to be here.
Corey Quinn: I assume you don't disagree meaningfully with any of the ad read that I just did, given that it's your folks that sent it to me.
But if you want to argue, I am thrilled to wind up doing that. Oh boy, it's drama time.
Crystal Morin: No, not at all. That's exactly what we do. We like to help protect organizations from the attackers that want to get all of the good stuff in your organization and use it against you. Your data, your customer information, your money, all of that stuff.
We want to help protect you from them.
Corey Quinn: I've spoken with some of your colleagues in previous years whenever your usage report comes out, which, like clockwork, it just has again. I was taken by the 5/5/5 Benchmark that you folks talked about last time. The part that really resonated with me was that the time from breach to things starting to be exploited is roughly five minutes, and I forget what the trailing fives are, which probably indicates I'm not the only one, so could you please refresh me on what the 5/5/5 Benchmark is?
Crystal Morin: Yes, so 5/5/5 stands for five seconds to detect, five minutes to investigate or correlate and triage what's going on, and five minutes to respond.
So that equals 10 minutes or 10 minutes plus that extra five seconds. So 10 minutes in total.
Corey Quinn: 10 minutes-ish, 'cause I haven't synced with an NTP server lately.
Crystal Morin: Exactly.
Corey Quinn: Wonderful. The takeaway that I took with it is that you have to be able to respond rapidly because most people are not going to answer a page in less than 10 minutes.
Therefore, computers basically have to do a lot of the auto remediation for you. The obvious challenge with that is that computers making random decisions and turning off production is usually frowned upon. At least on the security side, it's more defensible. I help companies fix AWS bills.
So, when we take something down, suddenly we're not allowed to save money ever again. So, it's a little bit lower stakes. In your case, there's a very good argument to, "No, no. Instead of having a breach, we DID turn everything off." But, I'm told there's some middle ground between those two approaches.
Crystal Morin: Yes, there absolutely is.
Do you want to dig into it?
Corey Quinn: I do.
I'd love to learn a little bit more about what Sysdig does, other than, to be very frank, creating a report that I find compelling and giving me interesting things to comment on. But under the hood, what is it you folks actually do?
Crystal Morin: Okay, so this is what we do. I can dig into this 5/5/5 and what we actually found in the report and what goes into how we stop attackers in less than 10 minutes, because that will give you an idea of what's going on under the hood in those few minutes. Shall we?
Corey Quinn: Indeed, please, lead on.
Crystal Morin: Okay, so it starts in less than five seconds. What we found, to begin with, is that within five seconds of something happening in a customer's environment, there are triggers going off. An attacker is starting to go into their environment, right? That triggers an alert.
Something's happening in their environment, but there are several hops that have to happen for that alert to go from their environment to their inbox, or Slack channel, or wherever their SOC analyst gets that alert, right? So that they're like, "Oh no, something's happening," and they need to actually go look at their computer screen, turn around, and go do something about it.
Right? So, there are a lot of things that need to happen in the little computer network for them to actually respond.
Corey Quinn: And let's be clear, this also assumes a 24/7 SOC or, alternately, attackers who are polite enough only to operate during business hours.
Crystal Morin: Yes. So, within milliseconds, there are several different hops that the alert takes to go from the event happening to the Slack alert on your computer screen, and that all takes less than five seconds. So, pretty much in near real time, you're being alerted that something is happening right then and there. From that point, you have the alert on your computer screen. Now you need to figure out what exactly that means. Okay, I see that something's happening.
What exactly is going on? Let's try to put some information together. For that next five, we say you need to figure this out in five minutes. In the data for our report, we actually found that, on average, our customers are able to correlate and investigate an incident in three and a half minutes. So, well within the five minutes we tell them they should be aiming for.
So with some of our functions and automated responses and being able to correlate information from the dashboards that we provide them, information about identities that are involved in the incident, the containers that might be involved, things that are happening in their cloud environment, putting all of that information together in one place, being able to visualize the attack chain, right?
Where did the attacker enter? Where are they moving to? Being able to see perhaps where your crown jewels are located, so where might the attacker want to go? Being able to see exactly what that looks like, and not having to guess or piece it together in your head, you're able to make a deduction and move on to the response part much, much faster.
Corey Quinn: Folks talk about the fog of war a lot, but I'm more accustomed, since I'm sort of a peaceful type, to the fog of production, where you have these things in your head, but you're trying to sort out, okay, I'm seeing this event on this system; where does this fall on the diagram? There's mental overhead involved in unpacking that. And this all, of course, presupposes that you're not using the good old days of AWS user accounts with IAM, where it can take a decent part of that three and a half minutes to dig out the authenticator just to log into the console in the first place.
Crystal Morin: Yes. And we actually, this isn't just me talking.
We've spoken to some of our customers as well, and those quotes are in the report. They've said, too, that they have been able to do this. And this is usage data that I looked at to write this report, so I'm putting together averages. This is a correlation of what they're doing with our platform; this isn't survey data, folks just kind of guessing and putting together their best estimates. Where they used to take days or weeks to investigate an incident, some of them have said it's now between 10 and 15 minutes to be able to look and see what actually is going on when they receive an alert from us.
So that was really exciting to find when writing this report. And then, if we're done with that, I can move on to the respond part.
Corey Quinn: But back in the early days, I just want to point out that when Sysdig first launched, it took upwards of 20 minutes for a CloudTrail event to show up, showing that something had happened in an AWS account.
That team's done amazing work and gotten that down to within a second or two most of the time, which is awesome, but back in those days, that speed didn't exist. So it's pretty clear that what Sysdig does operates by looking at the workload directly and not waiting for some of the slow provider mechanisms to take their time and work through whatever systems they have.
Crystal Morin: I've used other dashboards in the past too, and I would do some proactive threat hunting: you write a script, and you're trying to go look and see if something happened, and it could take hours to get results back. Then you have to change and manipulate that to get results, and still it could take an entire day to see if what you want to look for is actually happening.
It's impossible to get anything done that way. So.
Corey Quinn: So, as you mentioned, getting back to the report, the 2025 Cloud Native Security and Usage Report. I've always liked these things from two perspectives. First, it's a good read for folks who have not seen one of these things before, which I tend to assume is the majority of folks, just because there's always more people that haven't read a thing than have read a thing.
But I also like to look at it from the perspective of what has changed year over year in these things to identify broader trends. Which direction do you want to attack this from?
Crystal Morin: So I guess I can move on to that last part of the five because there is actually some trend information that has evolved over the last year for that part.
With automated response, that's what I looked at in particular. I looked at container drift, right? That's when you have a container that you start with and it changes from, say, its golden image, what it's supposed to be when it's in production. That can happen maliciously because someone has gotten into it, or developers could be changing the container while it's in production, and that's okay too. So, you can turn on an alert for container drift, and you can be alerted that something's going on while the container's in production. We have options for automated response for container drift, and a couple of other things as well, like malware and crypto miners.
But for container drift, you can pause, stop, or kill the container. If you kill a container and you don't have a mature system, right, you have developers, for example, who like to go and make changes, and you have an automated response to kill a container set up, that could cause some issues, right? You could have operations stop.
So you definitely need to have mature security practices in place to be able to have these kind of automated responses.
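To make that trade-off concrete, here is a minimal sketch of the kind of automated-response decision being described: alert-only by default, escalating to pause or kill only when the team has explicitly opted in. The event fields and action names are hypothetical illustrations, not Sysdig's actual API.

```python
# Hypothetical container-drift response policy. The safe default is to
# alert a human; destructive actions require opting into automation,
# which (as discussed above) presumes mature security practices.
from dataclasses import dataclass

@dataclass
class DriftEvent:
    container_id: str
    severity: str        # "low", "medium", or "high"
    in_production: bool

def choose_response(event: DriftEvent, automated_response_enabled: bool) -> str:
    """Pick an action for a detected container drift event."""
    if not automated_response_enabled:
        # Most organizations in the report stop here: alert only.
        return "alert"
    if event.severity == "high" and event.in_production:
        # Killing is tolerable because containers are designed to be
        # ephemeral and the orchestrator can reschedule a clean copy.
        return "kill"
    if event.severity == "medium":
        return "pause"
    return "alert"
```

For example, `choose_response(DriftEvent("c1", "high", True), True)` returns `"kill"`, while the same event with automation disabled only returns `"alert"`.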
Corey Quinn: People do like to forget, given modern system stability, that containers are designed to be ephemeral.
Crystal Morin: Yes. So, last year, there was a very small number of organizations that we saw using container drift automated responses.
I believe it was about 4%. This year, that has actually tripled. It's still small, about 11% of organizations. A majority of organizations still just have alerts, so they do get alerted to container drift, but there's now 11% of organizations who are using automated response actions for that kind of thing, and we also saw an increase in the number of organizations who are writing automated response actions for malware and crypto miners and things like that as well. So that's really exciting to see, because it's not just us. There are tons of vendors and evangelists and thought leaders who are telling you, "You have to automate response," right? There's playbooks and SOARs. Everything has to be automated for incident response.
Corey Quinn: The attacks are largely automated. It's the only way that works. This is not the '80s anymore, where people are sitting there at their keyboards, thinking really hard about what they're going to do next. They have automated tooling that, in many cases, let's be honest here, is more robust and well built than the production environments they're attacking with it.
Crystal Morin: And it can be scary too. I understand that, right? Because you don't want to break anything. But as long as you're communicating between teams, and you all understand you're on the same page, you can absolutely do these things. And in the report, too, we talk about the variety of different ways that you can go about automatically responding to some of these incidents.
And kind of building and manipulating your own responses and tailoring them to what you want. It's not just a set and forget kind of thing. You can make it what you want it to be. So that kind of helps too. So hopefully we'll see more of that next year.
Corey Quinn: There's also another finding I want to get into around the prevalence of AI, because, you know, it's 2025.
We're legally obligated to talk about AI things. But first, that's right, it's time for me to talk to you about the company you work for again.
Sysdig is sponsoring this episode. What is a Sysdig? Well, they help companies secure and advance innovation in the cloud, because building in the cloud enables businesses to accelerate that all important time to market.
And yet the cloud has introduced a world where it only takes, as we've just said, 10 minutes to initiate an attack. Therefore, security teams have gotta protect the business without slowing it down and becoming the Department of No of yesteryear. So how do they identify and prioritize the real risks?
You guessed it. Sysdig, which is a complete CNAPP that uses AI, which we'll get to in a second, to help security teams prioritize and stop the threats that matter the most. Now, learn more at sysdig.com.
Now, let's talk about AI in particular, Crystal, because I want to figure out what is real and what is hype.
Crystal Morin: All right, well, we can get the smaller part of it out of the way, and then we get to the security part, because that's the really exciting part of this. The hype: implementing AI. That's the hype, right? Everybody's using AI. That's what we hear in the news. But how many organizations are actually using AI for security?
Sysdig has an AI tool for security. It's a GenAI security assistant that we have integrated into our platform that you can use to help correlate investigations and things like that. It's really cool. It's called Sysdig Sage. As of the end of last year, after four months of general availability, 45% of our customers had begun using Sysdig Sage.
75% of them are DevOps folks. So, like I said, they're using it just to, you know, speed up. Most of them grab their cup of coffee and ask Sysdig Sage, "Hey, what happened last night that I need to be on top of this morning while I start my day?" That's what they use it for. Like, what's going on in this container? What's going on in this environment with this identity?
Corey Quinn: That sounds legitimately useful. Something that you might, dare I say, not have to shove onto people. Everyone's talking about how AI is the next thing. Well, okay, but if it's half as amazing as people like to say, I mean, not to abuse a metaphor here, but I have a four-year-old daughter who's extraordinarily sugar motivated.
I don't have to shove ice cream down her throat. It's a pull rather than a push. With genuinely useful AI, yes, it exists, that is the model: people seek it out. They use it proactively, like this. I don't know if other folks who are listening to this have met DevOps folks before, but having been one myself for many years, you can't get me to willingly do anything before a cup of coffee in the morning.
So the timing sequence of that speaks volumes.
Crystal Morin: So, maybe you'll appreciate this too. The way that I like to compare Sysdig Sage to my previous work is the genius guy that used to sit behind me when I didn't know something, and I could turn around and swivel my chair and be like, "Hey Dave, what does this mean?" and point to my screen, and then he would wheel his chair over and be like, da da da da da da da da and explain it all to me, and I'd be like, "Cool, thanks." That's what Sysdig Sage is to me. It's my Dave, but I don't have to bother anybody to get the answer.
Corey Quinn: To continue the Dave metaphor, I've worked with several Daves who, when they didn't know the answer, were terrified to lose face and begin making things up.
The hallucination problem is challenging, but in this case, it almost feels like that is not the bad pattern when it comes to security. "Well, I made up an attack that didn't exist." Yes, it's annoying. You had an impromptu fire drill. But someone actually being in the midst of an attack while the message reads "nope, all quiet on the Western Front" is not a terrific message to be sending out there.
How do you split that difference?
Crystal Morin: Yeah, it's just my senior security analyst is what I consider it to be. It's not trying to just put out fires. It's just trying to help me figure out what something is, period.
Corey Quinn: It's the assistant rather than the replacement. It's the categorization: "Here are the things you probably want to look at first," versus, "Oh, you want to know what happened last night?
Here's eight gigabytes of logs," and it just dumps them onto your screen. Terrific. Yeah. Let me get right on that.
Crystal Morin: So that's that.
In other exciting news.
Corey Quinn: Yes, the usage story of AI among the customer base.
Crystal Morin: Right. So, we looked at, not browser-based AI or anything like that, but the number of AI and machine learning packages in workloads that are running in a customer environment, and we found 500% growth in the number of packages in running workloads. And I went through all of them. There were a lot. There are some tables in the report that break out all of the names and types of GenAI and machine learning, all the different kinds of names that we saw.
There's some absolutely massive growth for OpenAI, TensorFlow, and Transformers. So you can see those numbers in the report, of what that 500% growth actually looks like. The GenAI packages specifically, because again, a lot of it is actually machine learning packages, but GenAI alone doubled. The number of GenAI packages running in workloads doubled, which again kind of aligns with, you know, almost 50% using Sysdig Sage.
GenAI packages doubled over the last year. So that kind of makes sense as far as growth goes. Amidst all of this growth of AI and the introduction of AI in our customer environments, I did find something very soothing that made us very happy: public exposure of these workloads with AI, right?
So, exposure to the internet. Attackers are constantly scanning the internet for, you know, IPs, websites, workloads, whatever is exposed, so they can look for misconfigurations, vulnerabilities, whatever will let them into your environment so they can wreak havoc, right? That's the easiest way they can get in, unless they have credentials or something like that. So they're always scanning the internet.
Corey Quinn: I get those all the time. You even see it on the security researcher side. I recently got something in the email about a dangling subdomain that I had, where it was assigned to an Elastic IP that had since been released, and someone wrote this very long write-up talking about the danger and asking for tips via PayPal.
And it sounded incredibly well researched and professional, except for the small, minor problem that it was targeting one of my test domains. I'm not expecting anyone to necessarily know the purpose of a given domain, but maybe finding a security exploit in the shitposting.monster domain might not be the high-value target you think you just found.
So there's a lot of using AI to create noise with these things. Seeing people create value with it is much more interesting to me.
Crystal Morin: Yes. So, public exposure. In April of 2024, we saw public exposure of workloads with AI at 34%. So 34% of workloads containing AI, which potentially have sensitive information, right, because people are feeding sensitive information, data, whatever, into AI, GenAI, potentially, 34% of those workloads were publicly exposed.
By the end of the year, that was down to less than 13%. So there was a 38% reduction in the number of workloads with AI publicly exposed to the Internet. So 500% growth in workloads with AI and 38% reduction in public exposure. So there's a massive use of AI, but there is an obvious prioritization of the security.
That correlation is huge for us. That's really, really exciting to see that even though everybody is looking forward to and trying to use GenAI, AI, whatever they may be trying to do, that they are trying to keep security at the top of mind. There's also another graph in the report as well. I looked at public exposure and then broke it down further into, do those packages have critical and high vulnerabilities?
Are those in use? Are they in production, right? Because those are the kinds of things that attackers are going to look for. So: can I get to that publicly exposed workload? Are there vulnerabilities I can take advantage of? Is it running in production? When you layer all of those kinds of things, there's almost nothing there. It was less than 1%. So the security of AI is definitely a high priority, which made me very, very happy to see.
Corey Quinn: I wonder if that's a natural outgrowth of what I consider to be many companies overindexing on the value of their data as a competitive advantage. I understand the reasons behind it.
Truly, I do. But at the same time, I'm not convinced that, even if you were to get the complete code base of a large competitor, that's necessarily a meaningful gain for you, especially at large scale, where everyone implements things differently. I think it's a little bit less clear, but companies are remarkably concerned about it.
Is this lockdown that you're seeing of AI workloads in response to that concern?
Crystal Morin: It could be. I hope so.
Corey Quinn: Honestly, I'll take the win wherever I can get it.
Crystal Morin: Exactly. Like I said, the positivity makes me happy. The prioritization of security is good. The fact that people are thinking of it is a good thing.
Corey Quinn: This was a part of the report that made me happy.
Now I want to talk about a part of the report that made me sad because it struck a little close to home. Specifically, the wild proliferation of machine identities, which to me in my mind, please correct me if I'm wrong on this, is things like instance roles or execution roles within the AWS context.
Things designed for automated systems as opposed to human beings logging in for things.
Crystal Morin: Applications, API calls, really, I mean, it could be anything that's not a human that is connected to your cloud environment.
Corey Quinn: I'm the only user in one of my AWS accounts. It has 400 roles in it, most of which were created by AWS automated managed service things.
So, are those properly scoped? I don't know. If there's something important in there, it is buried under a pile of other things.
Crystal Morin: Well, thank you for validating my findings in the report. I really appreciate that statistic.
Corey Quinn: No, thank you for confirming my own biases and suspicions with actual data. It's great.
Yay, the confirmation bias thing, where we cherry-pick things from reports to talk about that resonate. This is one where there's a wild rise in the number of machine identities. I'm a big fan of casting shade on these things. I want to pull up the actual numbers on this so I can do it more effectively, but you compared, even among different cloud providers, and the number one, by I think a couple orders of magnitude, was Azure.
It's because apparently, as a user maneuvers through various Microsoft properties, every action they do counts as a different user, presumably billed by the seat, but it was wild. 67 times more users, according to the report.
Crystal Morin: First, we looked at human users, and we looked at the three major providers: Azure, Google, and AWS.
And this is, again, just human users alone. GCP and AWS had 100 to 200 human users on average for those organizations, and Azure had over 7,000. And I looked at those numbers and I was like, well, that doesn't make sense. That seems like a bit of an outlier. So I went to some of our engineers and I was like, "Can you help me make sense of this? Why does Azure have 67 times more users than the other two? Shouldn't those probably be about the same?" So we dug into it, and then we realized that for organizations that use Azure, every time a human user logs into a Microsoft service, like OneNote and PowerPoint and Excel and Word, for every service you log into, it counts you as another user.
So you have 100 employees, but each one logs into seven different services; now each of them is eight users, and that's how you get to numbers like 7,000. So that makes managing human users in Azure very complicated. Why they're counted that way, I'm not quite sure; I don't know if it makes them more complex to manage. I don't know if all of our customers who manage Azure accounts know this and understand it, but I did speak to one who gets it. He knew it, and he helped me understand it with one of our engineers. So yeah, that's just the way it's counted. It's just really strange. So, if you didn't know that, now you know.
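Crystal's counting arithmetic can be sketched in a couple of lines; the per-employee service count here is an assumption for illustration, since the actual inflation depends on how many Microsoft services each person touches.

```python
# Back-of-the-envelope model of the Azure counting behavior described
# above: each human is counted once as a base login, plus once per
# Microsoft service they log into.
def counted_users(employees: int, services_per_employee: int) -> int:
    # One counted identity per (employee, service) pair, plus the base
    # login itself, per the "now you are eight users" description.
    return employees * (services_per_employee + 1)

# 100 employees each using 7 services already inflates to 800 counted
# users; heavier service usage pushes the count toward the thousands.
print(counted_users(100, 7))  # 800
```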
Corey Quinn: I have to assume this is based on legacy account structures.
Every company has these. My personal favorite example is when you log into the Google Cloud console and watch your address bar as it steps through various places: it bounces out and then back to accounts.youtube.com as part of the way you log into your company's, maybe a bank's, infrastructure provider systems. Because, you know, the video site where kids say horrible things in the comments is absolutely something you want in your critical path.
I digress.
All account management, all user management, is horrible. But what is the answer to this, between individual users having massive proliferation and machine identities scaling at similarly massive rates? How do you wrap your head around this?
Crystal Morin: One of the things that we found, and this is actually a silver lining, and we'll get to machine identities, we haven't even gotten there yet, but we did find a wonderful statistic of maturity for managing human identities. 15% of organizations had zero human users in their environments, which was a good thing. It means that they're using a third-party SSO provider. So rather than logging directly into your cloud environment, you're logging in via a third party, adding that additional layer of security.
Like Okta, for example, right? That's probably the best and most well-known example. So, again, instead of an attacker having direct access, being able to log directly into your cloud account with, again, 100, 200, or 7,000 options, there are none. They have to go through that third party to get in.
Corey Quinn: And then they wind up getting a role dispensed, which is inherently time-bound, as opposed to these permanent credentials that wind up in a backup somewhere that gets discovered three years later and then used to exploit you.
Crystal Morin: Yes. So we found 15% of organizations did have a little layer of maturity and no cloud human users in their environments.
Corey Quinn: Did you measure that in previous years?
Crystal Morin: No, I have not. So the last two years we looked at excessive permissions, and those were really, really bad numbers the last two years. So this year was a different approach looking at the human and machine identities. So I'm hoping next year I'll probably look at these same numbers again and see if we can find some new trends next year.
Corey Quinn: I certainly hope so. Now getting back to the machine identity piece.
Crystal Morin: Okay, so the human users, those numbers were weird. For machine identities, I found 40,000 times more machine identities than humans in an organization. 40,000 times more. There was one organization in particular, I don't remember how many users there were,
but between machine identities and service accounts, there were 1.6 million machine identities in their environment.
Corey Quinn: Were they creating a new one every time a container would spin up and then just never deleting it?
Crystal Morin: So,
Corey Quinn: Were they using these identities as a database? Lord knows I've done stranger things.
Crystal Morin: Poor provisioning. What we think happened is that they're just being poorly provisioned. A majority of these have no access, so they're probably very low risk. They don't have any assigned permissions, right? So when we think of racking and stacking high- to low-risk priorities, these would fall pretty low risk because there are no permissions.
If an attacker tried to get into one of these identities, it would be a little more complicated than with, say, others that do have permissions.
Corey Quinn: Maybe they try to enumerate the identities and see what has permissions, and this is defense through, well, it's not even security through obscurity. It's security through "what the hell are these people doing?"
You know, you have a strange approach to things when an attacker breaks in, fixes your environment and then leaves you AWS credits to have a better attempt the next time.
Crystal Morin: So they shouldn't be there. You could remove a lot of these identities, but when you have other concerns, high and critical vulnerabilities in your production environment, those are a higher priority than these non-provisioned machine identities that are sitting around, right? I mean, you have to prioritize your risks. These aren't a risk, but it's still not good.
Corey Quinn: This feels like an enormous pile of hay for the needles to hide in.
Crystal Morin: So, 40,000 times more machine identities does seem a little outrageous. So I did some data manipulation to try to, I don't know, make it seem more palatable, I guess.
And I took out some of the outliers, like the 1.6 million machine identities and the Azure organizations with 7,000 human identities. So I ended up taking out 11% of organizations, and the numbers were, like I said, a little more digestible. They were about 150 users to 5,000 machine identities, which is about a 35-times difference.
So again, that, in my mind, makes more sense. Still, 150 employees to 5,300 machine identities does not sound like something I want to manage. 150 employees sounds like a real business, though.
Corey Quinn: It really does, but you need a department to handle that.
Crystal Morin: That's still not good. Human identities are being provisioned well.
Machine identities are still a very high risk. They're not being provisioned well. They need to be taken care of. If they're not being used, they can go away. That needs to be the next priority. There's a lot of good things in this report, and some other things that we need to work on too, but if there's one thing that I could highlight, it's that we need to focus on these non-human identities, because they're definitely an issue.
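The triage described here, where unused identities can go away and permissioned ones matter most, can be sketched roughly as follows. The field names and the 90-day idle threshold are hypothetical illustrations, not drawn from Sysdig's platform or any particular cloud provider's API.

```python
# Sketch of cleanup triage for machine identities: flag those that are
# never used or long idle, and surface the permissioned ones first,
# since an unused identity that still holds permissions is the bigger
# risk than one with no permissions at all.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional

@dataclass
class MachineIdentity:
    name: str
    permission_count: int
    last_used: Optional[datetime]  # None means never used

def cleanup_candidates(identities: List[MachineIdentity],
                       now: datetime,
                       idle_days: int = 90) -> List[MachineIdentity]:
    """Return identities that look safe to retire, highest-risk first."""
    cutoff = now - timedelta(days=idle_days)
    stale = [i for i in identities
             if i.last_used is None or i.last_used < cutoff]
    # Sort descending by permission count: permissioned-but-unused
    # identities are the real cleanup priority.
    return sorted(stale, key=lambda i: -i.permission_count)
```

Running this over an inventory export would surface the "1.6 million unused identities" class of problem without anyone paging through consoles by hand.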
Corey Quinn: If people want to get a copy of this report for themselves, where's the best place for them to do so?
Crystal Morin: They can go to sysdig.com. There are some banners there for you.
There's a press release, but yeah, it'll be pretty easy to find if you go there. You can also go to our LinkedIn page, or to my LinkedIn page; I've got it there too. Come find me or go to our website. It'll be really easy for you to find. No problem.
Corey Quinn: And all of this will, of course, be in the show notes.
Crystal, thank you so much for taking this time to speak with me today. I appreciate it.
Crystal Morin: Thanks for having me. That was a lot of fun.
Corey Quinn: Crystal Morin, Cybersecurity Strategist at Sysdig. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five star review on your podcast platform of choice.
Whereas if you've hated this podcast, please leave a five star review on your podcast platform of choice, along with an angry comment from one of your 40,000 user accounts.