Episode Summary
Andrew Peterson launched his career working in sales at The North Face. After stints at Google, the Clinton Foundation, and Etsy, he launched his own company—Signal Sciences—maker of a next-gen WAF and RASP web application protection solution that detects and stops attacks wherever applications run. Join Corey and Andrew as they explore why Signal Sciences is an “accidental” security vendor, why security is no longer solely about preventing breaches but increasingly about responding to them quickly and effectively, how organizations are taking a more proactive approach to security and privacy in the GDPR era, and more.
Episode Show Notes & Transcript
About Andrew Peterson
Andrew Peterson is the CEO and Cofounder of Signal Sciences. Under Peterson’s leadership, Signal Sciences has become the #1 and most trusted provider of next-gen WAF and RASP technology and one of the fastest growing cybersecurity companies in the world. As CEO, Peterson is responsible for overseeing all business functions, go-to-market activities, and attainment of strategic, operational and financial goals.
Prior to founding Signal Sciences, Peterson spent over fifteen years building leading-edge, high-performing product and sales teams across five continents at companies such as Etsy, Google, and the Clinton Foundation. In 2016, O’Reilly published his book Cracking Security Misconceptions to encourage non-security professionals to take part in organizational security. He graduated from Stanford University with a BA in Science, Technology, and Society.
Links Referenced
- Twitter: @ampeters06
- LinkedIn: https://www.linkedin.com/in/andrewmarshallpeterson
- Signal Sciences: signalsciences.com
- Sponsor: X-Team
Transcript
Narrator: Hello, and welcome to Screaming in the Cloud with your host, cloud economist, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud. Thoughtful commentary on the state of the technical world. And ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
Corey: This week’s episode of Screaming in the Cloud is sponsored by X-Team. X-Team is a 100% remote company that helps other remote companies scale their development teams. You can live anywhere you like and enjoy a life of freedom while working in first-class company environments. I gotta say, I’m pretty skeptical of “remote work” environments, so I got on the phone with these folks for about half an hour, and let me level with you: I believe in what they’re doing, and their story is compelling. If I didn’t believe that, I promise you I wouldn’t say it. If you would like to work for a company that doesn’t require that you live in San Francisco, take my advice and check out X-Team. They’re hiring both developers and DevOps engineers. Check them out at the letter x dash team dot com slash cloud. That’s x-team.com/cloud to learn more. Thank you for sponsoring this ridiculous podcast.
Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. I'm joined this week by Andrew Peterson, CEO of Signal Sciences. Welcome to the show, Andrew.
Andrew: Thanks for having me Corey.
Corey: No, thanks for joining me.
Corey: So, let's start at the very beginning. What is a Signal Science, and given that you have several of them, what do you folks do?
Andrew: Yeah, so the marketing term we call ourselves is a next-generation web application firewall and/or a runtime application self-protection tool, or RASP. You can thank Gartner for that one. But both tools are essentially about how you protect your web applications, your APIs, your microservices, basically all layer seven traffic, across any type of platform you're running it on. That's essentially what we do.
Corey: In order to do the disambiguation between, "Oh, a security vendor. I've never seen one of those before..." I guess every security vendor in most cases tends to go in a bit of a different, differentiating direction, if for no other reason than it's very sad when they don't. But what is it that makes Signal Sciences different from the typical, run-of-the-mill, endless sea of folks at RSA, all independently trying to sell me something with the word firewall in it?
Andrew: Yeah, so I'll start with this. I think a lot of it just comes from where we come from and our background. We, for better or for worse, didn't wake up some day dreaming to be a security vendor, so we're the accidental security vendors in some ways. Our background was actually building technology and products and security tools in-house before. Me and my two co-founders worked at a company called Etsy; that's about 10 years ago when we first started working together. A lot of people know Etsy, E-t-s-y; it's a big retail marketplace based out of New York. And their backstory is actually really interesting from a technology perspective, because they were really on the forefront and one of the pioneers of the DevOps movement.
And so our challenge, and how we started working together and coming up with some of the lessons learned that have turned into the vendor that is Signal Sciences now, is that we were trying to build a security program there that was really counter to a lot of the kinds of security programs we had seen before. The old model of security was, "Look, we're going to be grumpy. We're not going to like dealing with engineers. We're going to blame engineers for all the bad things they put into our code all the time that make it insecure. And we're going to tell them 'no, they can't do anything' all the time." That doesn't really work when the goal of the entire business, and especially the engineering program there, was about how we empower people to launch code faster, to make changes quicker, to make our systems more resilient and more reliable. These are all the tenets of DevOps, and doing that in a culture where you're getting these siloed teams to really work together.
So, in many ways, as we built the security program there, it was probably one of the first DevSecOps types of programs, and you know I hate using all these buzzwords here, but it was really about how you get these three teams to work better together. And the lessons we learned in that context were that it's really helpful when security teams can not just say "no" but can say "yes," and think about how they can really contribute to making these teams better.
But when you actually start thinking about it, if we as a security team can build products that are not only incredibly easy for people to use but also make the engineering teams feel like they're learning things about the behavior of people using their applications or their software in ways they never could before, that's actually helping them do their job. They're actually going to want to pull in and use those tools. And I think that's been our unique approach to becoming a vendor: to say, "Look, if we're going to go to the dark side and go to that other bad place of security, which is the world of security vendorship, we're going to do it with a lot of empathy for what actually works from a practical perspective, because we were in-house building a lot of this stuff before." But also, our philosophy was that the only way to scale security effectively is to scale it through the engineering teams. So we sure as heck better be working with them and taking their feedback into account the entire time.
Corey: Absolutely. The hard part, I think, when you're running an application or a website or any significantly scaled-out service or product, is that security has always been one of those things that is inherently an afterthought for most of us. Everyone likes to say, "Oh, security is job zero," or, "Security is the most important thing." Well, a quick look at what companies spend research and development budget on proves that isn't true. It's like insurance: most people should have some form of insurance, but you don't expect your house to burn down. So it's never the number one thing you think about when you're setting up something new. But it does need to be something that folks care about. I come from a similar perspective where I look at cloud costing; it's never job one, it's always a trailing function. How does that manifest for you, both among your clients as well as in running a company yourself and having a good security posture internally, given that you are a security company and security issues would be problematic?
Andrew: I mean, it's a great question. This hearkens back to our initial experience in-house at Etsy, because you're really wrestling with these issues there. Every security person on the planet wants to say, "Hey, security is the most important thing," right? "That's all that matters, and that's what we should be prioritizing first."
But I was running product teams before, and our goals in developing products and features and software were really way more business-related: "Hey, we have to get these features out because we're trying to help the business improve, right? We're trying to make money, we're trying to help our customers. We're trying to help people actually get things done."
The initial work with our security team was, they were like, "Hey, you have all these potential bugs or vulnerabilities in your code, so before you can actually ship this to production, you have to solve these things." Yeah. That doesn't really work. Because guess what, the business is going to move forward with or without security. So that's the old relationship that we've seen.
The thing we've seen with our customers now, and our big aha moment when we were doing this stuff in-house before, was: look, if you're going to go talk to an engineer and tell them they have security flaws in their code, they're going to come back and say, "Well, yeah, that's one of many bugs that I have. I know I have bugs in my code. My question is, why should I prioritize working on this one over other functional bugs I could go solve that are actually going to make our product better and help our customers use the technology better?"
And in the past, I think a lot of the response from the security team was, "Well, because security is important, and don't you want to not get hacked?"
Look, I get where that's coming from, but it's not terribly productive, and it certainly doesn't speak to the way I think engineers, and especially modern engineering organizations, are thinking about this stuff. They need data. You need to have some data behind why these things are important. And for us, what really changed that specific conversation, about why we should build in security initially, why I should even be fixing some of these bugs that I know are security bugs in the first place, was when we could set up monitoring to track what attackers are actually attempting to do across the different parts of the application itself. It really changed the conversation.
Like, before, I think engineers really thought, when we would have these conversations, "Well, I just don't think we're actually even being attacked right now. So, the security guy with the tinfoil hat over there who's super paranoid about everything, of course he's going to be screaming that we're being attacked all the time, but I just don't really think it's happening."
So, the easiest way to respond to that was to say, "Okay, well, we set up monitoring to track different types of attack behavior happening on different parts of the app. This is the sub-directory, or the mobile site, or whatever part of the application you're working on, and here are the actual attacks happening on it right now." That made it not only real, it was, "Okay, this is real data we're looking at right now, and this is actually really helpful."
But then it immediately got alignment internally in the organization: "Hey, developer team, I'm not actually fighting against the security team who's just on my back all the time trying to get me to fix things. The security team and the development team are now aligned against the real problem, which is the attackers on the outside who are trying to get in." So that data and that visibility, that ability to have detection on that type of behavior, completely changes the conversation you can have between your security and engineering teams in-house.
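As a rough sketch of the per-route attack visibility Andrew describes, here is what tallying crude attack signatures per section of an application might look like in Python. Everything in it is an illustrative assumption: the regex signatures, the sample paths, and the section names are hypothetical stand-ins, not Signal Sciences' detection logic, which is far more sophisticated than pattern matching.

```python
import re
from collections import Counter

# Naive signatures for common layer seven attack classes (illustrative only;
# real detection is much more nuanced than a handful of regexes).
SIGNATURES = {
    "sqli": re.compile(r"(union\s+select|'\s*or\s+1=1)", re.I),
    "xss": re.compile(r"(<script|javascript:)", re.I),
    "traversal": re.compile(r"\.\./"),
}

def section_of(path: str) -> str:
    """Map a request path to the app section that owns it, e.g. /mobile/login -> mobile."""
    parts = path.lstrip("/").split("/")
    return parts[0] or "root"

def tally_attacks(request_paths):
    """Count suspected attacks per (app section, attack class) pair."""
    counts = Counter()
    for path in request_paths:
        for name, pattern in SIGNATURES.items():
            if pattern.search(path):
                counts[(section_of(path), name)] += 1
    return counts

if __name__ == "__main__":
    sample = [  # hypothetical access-log paths
        "/mobile/search?q=<script>alert(1)</script>",
        "/checkout/cart?id=1' or 1=1--",
        "/mobile/profile?file=../../etc/passwd",
        "/blog/post/42",
    ]
    for (section, attack), n in tally_attacks(sample).items():
        print(f"{section:10s} {attack:10s} {n}")
```

The detection quality is beside the point; even a crude per-section tally like this gives each engineering team concrete evidence of what is hitting the code they own.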
Corey: I think we're also seeing an emerging class of vulnerability. When people who work at a company go to sleep at night, their prayer before bed is, "And finally, dear Lord, please don't let me be subject to a breach. But if I am, at least have it be something incredibly convoluted and clever, not something stupid like an open S3 bucket." Effectively, there's this narrative that's entered the public consciousness that when a company suffers a data breach, they're obviously idiots who didn't invest at all in cybersecurity and failed at something very basic.
And I don't think that narrative works anymore. I think there's a lot of nuance to this. There's a tremendous number of interesting attack vectors that need to be defended against. Despite what we tell ourselves, it's never going to be the topmost job for a company to care about. But this stuff still happens. And yes, it is a failure, especially when it's not your data that gets breached, but rather data you've been entrusted with. But in the public consciousness it's still, "Oh, you got breached, you must hire morons." That isn't true. It simply isn't. Do you see that narrative changing at all in the public awareness, or is that a losing battle from the get-go?
Andrew: I do. And I actually think it's a really important question, because there are two sides to this. One is, is it a losing battle for companies to try to change how they're protecting themselves in the first place and try to change their security posture? The second question, which I think a lot about as it relates to security professionals overall, is, "Is there any way to win at security? At your job? Or are you basically just sitting there waiting to lose?" Which I think by and large it is, or at least it has been for a long time. But the thing that's changing, and the hope I have for the industry, is this: I like to use the example of how operations has changed, and how success for ops teams has changed.
And if you look 10 years ago, when ops and/or DevOps teams were a lot more immature, the expectation was, "Look, we have to have one hundred percent uptime. We will never go down." It was a binary concept: we're either up or we're down, and the goal is 100%. Very similar to security, right? Either we're breached or we're not breached, there's no middle ground, and nothing else matters. We should just try to never be breached, ever.
The reality is that that goal doesn't hold up in the more and more complex technology world we live in, where, as you said, Corey, there are more and more nuanced ways for people to get access to data. What a breach even looks like is going to be totally different in the future. I think we need to see the same maturing in security that we've seen on the ops side.
I think now, when you look at really great ops teams and how their success is even measured in the first place, it's not about uptime and downtime per se. If you do go down, it's only a small functional component of your application, or a small functional component of the infrastructure. You're also doing a really good job of identifying when those things go down and communicating that back to your consumers, and you're finding and fixing those things faster. So the success metrics are not "are you up or are you down," but: how fast did you identify it? How small can you contain the impact of that service outage? How fast and how well can you communicate it back to your customers? And ultimately, how small an impact can you have on their business, their lives, or their use of the product?
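For concreteness, the ops-style success metrics Andrew points at here (how fast you detect, how fast you resolve, how contained the impact was) fall out of simple arithmetic over incident records. A minimal sketch, with invented incident data:

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records: (started, detected, resolved, affected component).
incidents = [
    (datetime(2019, 1, 3, 9, 0), datetime(2019, 1, 3, 9, 4),
     datetime(2019, 1, 3, 9, 40), "checkout"),
    (datetime(2019, 2, 11, 22, 15), datetime(2019, 2, 11, 22, 45),
     datetime(2019, 2, 12, 0, 5), "mobile-api"),
]

# Minutes from onset to detection, and from onset to resolution.
ttd = [(d - s).total_seconds() / 60 for s, d, _, _ in incidents]
ttr = [(r - s).total_seconds() / 60 for s, _, r, _ in incidents]

print(f"mean time to detect:  {mean(ttd):.0f} min")
print(f"mean time to resolve: {mean(ttr):.0f} min")
print("components affected:", sorted({c for _, _, _, c in incidents}))
```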
And that's really where I'm hopeful, and where I'm starting to see the security community go. I'm also starting to see it in consumers' expectations: so many consumers I talk to, or just friends and family, are saying, "I feel like having my data get breached at various companies is kind of inevitable." And their judgment is on how that breach is handled. I hate picking on specific breaches, but the Equifax breach, for example, has stayed in the limelight because of how poorly it was handled, not necessarily because of the exact breach itself. How many other breaches have come and gone in the last few years? The Equifax one keeps coming up, I think, because of the way the management team and the communications around it were handled.
So I think that's the stuff where it's like, "Look, if you have really good communication, we can start scoping out our actual architecture and infrastructure, such that we can reduce the surface area or the amount of data that actually gets breached in a given attack." Those are going to be things that I think are bigger success factors for security teams and security people.
And I'd like to think the future is consumers looking at a breach and saying, "Hey, this company really handled this well," not just, "Oh, they're another one that got breached; they must all be dumb." Instead: "Wow, of course they got breached, because that's inevitable to some extent. But I really feel like they were on top of their game. They communicated this well to me, and I actually feel safer in some ways knowing that they're so well informed and were so fast to take action on it."
Corey: I see an awful lot of companies with the mistaken idea that, "Well, we're paying a large cloud vendor to run all of our infrastructure and they have a bunch of services that they offer of varying degrees of utility. What do we need partners for? Why can't we just have everything be first party and that's the end of it?" And the honest answer to that is, "Well have you tried it? That's why." But you can't exactly say that to customers in some cases. How do you find those conversations tend to unfold?
Andrew: So there's a bunch of different things to unpack here, because there are a bunch of angles to that. The first is one of the things I've heard from a lot of customers. Let's use AWS as an example, and actually, let's compare AWS and Azure as two different platforms. Folks say AWS has a lot of features, right? They have launched a lot of different types of functional features around security. And one of the biggest challenges I've heard people have using AWS is, let's say it's a development group, they have every intention of doing the right thing by setting up the right security features in the first place, but they're not security people themselves. And when they talk to their, let's say, network-focused security teams, those teams don't actually give them a great roadmap for what to use in the first place. So they're on their own to figure out what features to select and use.
And they're not going to take one of everything. They're not going to say, "Okay, I'll turn on a hundred different features." They're trying to figure out the basic ones to start working with and turn those on. And they're not really getting a whole lot of direction from the Amazon folks right now, I think.
So this is one of the areas where I've heard Azure is in some ways actually preferable, because it's a bit simpler and a lot more well-defined: "Hey, here's a reference architecture for the security features you should use when you're using this," right?
So that's sort of step one: I think folks need a little more guidance on what they should be using or not. Then step two, to your point, Corey, is when they start using these features, the question is, "Okay, they're there, but are they actually good? Are they solving real problems? Can I automate these things? Are they helping me stop real problems? Or are we reverting back to, 'Okay, well, if I just turn it on and I have it there, then I've covered my rear and I'm not going to get in trouble from a compliance perspective or something.'"
I don't like that, right? There are certain people who say, "Okay, I have some of these pieces in place. I'm just checking boxes," and to me that's a reversion back to compliance-based security rather than security that's really focused on solving problems. But this gets back into the issue that it's really hard to find people who have not only, let's call them cloud and application development skills, but also security skills. Most of the people we have in the security world have a network-focused background, and most application developers really know applications but don't necessarily know security.
So that cross-section between the two is rare, and it's hard then to set up systems that say, "Hey, here's the functionality we're expecting from these different types of products we're going to add on in our cloud environments," so that they can take some kind of objective view on the value or the efficacy of that feature or function.
Corey: Something you said really resonates: specifically, the idea of treating security as something beyond the checkbox, for the compliance dance. For anyone who's ever listened to me for more than 30 seconds, this will come as no surprise, but I have challenges when it comes to checking off box items and doing things for the sake of bureaucracy. I have zero tolerance for that, which makes me not a great employee, but that's beside the point. It tends to make me not the sort of person you want in the room dealing with auditors and dealing with compliance. Because I tend to see those checkboxes and ask, "Okay, what is the actual intent behind this control? What is the problem it is attempting to solve for?" And you step down that path and try to solve the actual issues, but auditors want the box checked. They want to make sure that you're rotating your API credentials and your IAM users every 60 days, for example.
Even NIST doesn't recommend that anymore, and in the real world that we live in, well, if you compromise a credential by checking it into a repository on GitHub, the time between that happening and the time you start to see it being exploited is less than a minute. A 90-day or 60-day rotation does nothing to stop that. In many cases, the alarm that goes off to show that it's been compromised is the bill: "Surprise! You've been mining a whole bunch of Bitcoin this month!"
That's where it really tends to fall by the wayside. But you can't, as a company, ever bypass compliance and say, "Yeah, it's a stupid requirement, so we're not going to do it." You don't get the beautiful shiny certificate you need to remain in business if you go down that path. So how do you reconcile that?
Andrew: Well, in general, I think the more Coreys of the world that can be running security programs, the better, for most everyone. So we are fully in that camp. Look, as a product category, we help check compliance boxes for a lot of our customers. But from the very beginning, we have basically told people, unapologetically, "We are not in the business of solving compliance for people. We're in the business of solving security problems." And if we can do both of those things at the same time, great. But the people we work with, and the people we're really seeing start to take over the security industry, are those who are highly engineering-focused on exactly what you were saying: understanding what the actual problem is you're trying to solve, and then coming up with solutions to those problems.
So I think there's probably a set of security vendors out there that are terrified about this movement, where fewer and fewer auditors are controlling security programs. There are certainly still compliance and audit programs within every company, including our own, and there's a world where those things are still valuable. But splitting compliance and security, I think, is actually quite important to the future of being able to solve these problems.
So, back to the original question: how do we separate checkbox compliance that isn't actually doing anything from real compliance? One of the positive movements I've seen is that the people writing the compliance standards are actually becoming more pragmatic about solving these problems instead of just having a checkbox for checkbox's sake. One of the things I've seen is a much more relaxed definition of different types of solutions. So on the security engineering side, the side focused on more than just checking the checkbox, they can really start to say, "Hey, this functionality we have here is solving the core problem, the spirit of what the compliance checkbox was trying to check, and we're able to check the compliance checkbox even if it doesn't fall into that exact definition." Because either the definitions are being relaxed, or the auditors themselves are starting to understand and get smarter about being flexible on those things.
So I think that's been a really great change on the auditor-versus-security front. The other thing you brought up, in some of those examples, is: look, I don't actually care about rolling creds every 60 or 90 days. I really care about when someone has actually compromised those credentials, because that's ultimately the root of the problem you're trying to identify. So focusing on getting capabilities around detecting when that happens, and then ideally having some sort of automated response to actively respond to that issue, that's where the technology-based or engineering-based security group goes immediately: "That's how we identify that problem, and that's how we solve it."
And that is head and shoulders, or light years, ahead of where we were five years ago, when we'd just say, "Oh well, we have this basic change control in place, so everything's good."
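As a minimal sketch of that detect-and-respond pattern, applied to the leaked-credential example from earlier: rather than waiting out a rotation window, deactivate an AWS access key the moment it is used from an unexpected network. The event shape, the known address range, and the user name are assumptions for illustration; a real pipeline would feed this from audit logs such as CloudTrail.

```python
import ipaddress
import boto3

# Hypothetical allowlist of networks where this key is expected to be used.
KNOWN_RANGES = [ipaddress.ip_network("203.0.113.0/24")]

def is_known_source(source_ip: str) -> bool:
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in KNOWN_RANGES)

def handle_api_event(event: dict) -> None:
    """event: {'user_name': ..., 'access_key_id': ..., 'source_ip': ...} (assumed shape)."""
    if is_known_source(event["source_ip"]):
        return
    # Automated response: deactivate the key as soon as misuse is suspected,
    # instead of waiting for the next 60- or 90-day rotation.
    iam = boto3.client("iam")
    iam.update_access_key(
        UserName=event["user_name"],
        AccessKeyId=event["access_key_id"],
        Status="Inactive",
    )
    print(f"Deactivated {event['access_key_id']}: used from {event['source_ip']}")

handle_api_event({
    "user_name": "deploy-bot",                 # hypothetical IAM user
    "access_key_id": "AKIAIOSFODNN7EXAMPLE",   # AWS's documented example key ID
    "source_ip": "198.51.100.7",               # outside every known range
})
```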
Corey: Yeah. We're also seeing security, from my perspective at least, emerge in different directions. Say you have a system that's designed to do one thing, but you take a look at what its permissions are scoped for, and it has the capability to do an awful lot of other things. Now, there's the first approach of, "Hey, how about we alert when it does any of those other things," which is great and handy and useful. But in some ways the better approach might be, "Why don't we take away those excess powers that it doesn't need?" The principle of least privilege seems to have, in some respects, fallen by the wayside. And I don't think it's intentional. I think it often starts as, "Oh, we're going to make it work, so we're going to start with a broad scope, and we'll come back in step two and narrow it down." But we never get to step two. It gets dropped and we move on to other burning fires.
Andrew: Yeah. This is a tough one, because we've lived this in practice in previous lives. Look, if you are living in this DevOps world, a lot of which to me is about developer empowerment, it really changes who the power groups within these ultimately political organizations are. The folks who ran hardware used to have a lot of that power because they had huge budgets to buy big hardware, and now a lot of the investment money is going into the development organization and actually building software. So guess what? The power is going over there as well. So the default attitude from a lot of those groups is to say, "I should have access to everything, to be able to do anything I want at any time, because if I don't have access to everything, it'll slow me down and I can't do anything."
So yes, you want to empower people to do things and move fast and get access to things. But you've got to have a responsible conversation around that. I really think things like GDPR lend themselves to saying, okay, especially from a data access perspective, let's really think about privacy, and data privacy by design, as something we implement at the beginning, such that we can limit different people's internal access to different types of data sets. I think that's just a great thing to do from a security hygiene perspective in the first place, but it also falls into this compliance standard we now need to follow because of things like GDPR.
So this is where I think good changes are happening in the industry right now in how we're implementing new types of compliance standards. The new standards give a lot of people headaches sometimes, but the intent behind them is good, not only for consumers and their data, but also just as a basic engineering practice: making it so that not everybody internally has access to every type of data.
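To make the least-privilege point concrete, here's a sketch of the difference between the "step one" wildcard policy Corey describes, which never gets revisited, and the scoped policy the service actually needs. The bucket, prefix, role, and policy names are hypothetical.

```python
import json
import boto3

# What often ships in "step one" and never gets narrowed in step two:
BROAD_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "s3:*", "Resource": "*"}],
}

# What the service actually needs: read-only access to one prefix of one bucket.
SCOPED_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-app-data/reports/*",
    }],
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName="report-reader",               # hypothetical role
    PolicyName="least-privilege-s3-read",
    PolicyDocument=json.dumps(SCOPED_POLICY),
)
```

Starting from the scoped document and widening it deliberately forces the conversation Andrew describes, instead of leaving the wildcard in place by default.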
Corey: No, I think you're absolutely right. It's an evolving question: what is the right security posture, and how does that map to an individual organization's needs and requirements? The hard part is figuring out where people fall on that spectrum. And then, of course, figuring out why we're going to invest in this before we get to the point right after we really, really, really should have been investing in it.
Andrew: Yeah. And to be totally fair, it's not an easy conversation. It's not an easy change for people to make, because there are meaningful trade-offs between access to data and speed, versus privacy and responsible security architecture. I'm actually not in the camp of saying, "I'm going to dictate that this is exactly how it should be, one way or the other." But at the very least, things like GDPR are forcing people to have these conversations, and it's good to just have the conversation.
Because let's put it this way. If you want to go down one road and say, "Hey, this is going to be our philosophy and we're going to make this decision," at least you're making it consciously: "We understand this is a riskier path, because we are making a lot of these tools, or a lot of this data, or a lot of these systems available to more people internally than we would under a decision optimized around fewer people having that data." But you've made that choice consciously, and you've actually had that conversation internally.
Whereas in the past, the default was, "Look, we don't even need to have that conversation," because it just wasn't something people were thinking about at the beginning. And they probably would have made different decisions on that architecture, or on those internal policies, if they had had that conversation in the first place.
Corey: I think it's always hard to get buy-in, and to some extent a company's security posture is almost entirely going to be dictated by how effective information security leadership is at articulating a vision and telling a story. If we want to be cynical about it, we could even extend that to spreading fear, uncertainty, and doubt about what could possibly happen, and scaremongering to drum up budget. I mean, hey, whatever it takes.
Andrew: I think it comes back to, again, and it's nice to have some recurring themes in what we're talking about, the security teams I've seen have way more success at creating a culture of security that's embraced internally, and/or creating tie-ins with other business units, are the ones that are able to show and use visibility. Basically, making investments in visibility into what actual attackers are doing across their systems, versus just sitting there saying, "The sky is falling, the sky is falling. We need to focus on security."
And if people don't respond, those teams just revert to, "Well, nobody ever cares about security, and we're never going to get anything done unless we have buy-in from the top." I just don't think that's an effective way to do things, and I don't think it ever will be.
But being able to use data and real-time information and visibility that you can point to, with all these teams internally, to say, "Look, this isn't a theoretical thing. This is a real thing: we are being attacked in these different places all the time, and what we're going to do is be smart about how we set up our security programs, so that they don't hinder your job. Ideally, they would actually help you do your job better, but at the very least, we're going to make this stuff easy and really understand your goals as different business units, to make sure we're not impacting those goals."
Yeah, that is a completely different way to approach that discussion, rather than just being the people who say no to everybody all the time.
Corey: Absolutely. Andrew, thank you so much for taking the time to speak with me today. If people want to hear more about what you folks are up to, where can they find you?
Andrew: Yeah, for sure. At this point, everybody's building some type of software, and everybody's running some type of web application or service. All the themes we talked about today really fit into what we do: we help give you visibility into the people who are trying to impact or attack those layer seven architectures you run. You can find out more at signalsciences.com. I promise we won't browbeat you with too much vendor-speak.
Corey: We will hold you to that. Thanks again for taking the time to speak with me today. I appreciate it.
Andrew: Yeah. Thanks so much Corey.
Corey: Andrew Peterson, CEO of Signal Sciences. I'm Corey Quinn. This is Screaming in the Cloud.
Narrator: This has been this week's episode of Screaming in the Cloud. You can also find more Corey at screaminginthecloud.com, or wherever fine snark is sold.
Credits: This has been a HumblePod Production. Stay humble.