Episode Summary
Guy Raz is a senior systems engineer at ExtraHop, makers of cloud-native cybersecurity solutions. Prior to joining ExtraHop in 2017, Guy worked as a network engineer at Cox Communications and a software consultant and professional services team lead at AirWatch. He holds a master of science degree in electrical engineering from Georgia Tech and is an AWS certified solutions architect.
Join Corey and Guy as they talk about what exactly ExtraHop does, how too many organizations treat security as an afterthought in the cloud, how most organizations have a ton of network data sitting there but few analyze it, the delicate balance between minimizing the attack surface and understanding and reacting to damage as quickly as possible, how Corey’s opinion of ExtraHop has evolved over time, how long it takes for ExtraHop to learn what anomalies look like in your environment, and more.
Episode Show Notes & Transcript
About Guy
Guy Raz is a Sr. Systems Engineer at ExtraHop with previous experience as a Network Engineer and Solution Architect. Guy is one of the SMEs leading the unique ExtraHop approach to cloud-native NDR for the hybrid multi-cloud enterprise. Before joining the Sales Engineer team, Guy was one of the ExtraHop Solution Architects, responsible for conducting deep technical and business discovery sessions, assisting in troubleshooting and problem resolution during war-room and security/network investigations, and developing strategies for acquiring high-value data from the wire; requiring in-depth technical understanding of L2-L7 networking principles.
Links:
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
Corey: This episode is sponsored in part by Thinkst. This is going to take a minute to explain, so bear with me. I linked against an early version of their tool, canarytokens.org in the very early days of my newsletter, and what it does is relatively simple and straightforward. It winds up embedding credentials, files, that sort of thing in various parts of your environment, wherever you want to; it gives you fake AWS API credentials, for example. And the only thing that these things do is alert you whenever someone attempts to use those things. It’s an awesome approach. I’ve used something similar for years. Check them out. But wait, there’s more. They also have an enterprise option that you should be very much aware of canary.tools. You can take a look at this, but what it does is it provides an enterprise approach to drive these things throughout your entire environment. You can get a physical device that hangs out on your network and impersonates whatever you want to. When it gets Nmap scanned, or someone attempts to log into it, or access files on it, you get instant alerts. It’s awesome. If you don’t do something like this, you’re likely to find out that you’ve gotten breached, the hard way. Take a look at this. It’s one of those few things that I look at and say, “Wow, that is an amazing idea. I love it.” That’s canarytokens.org and canary.tools. The first one is free. The second one is enterprise-y. Take a look. I’m a big fan of this. More from them in the coming weeks.
Corey: This episode is sponsored in part by our friends at Lumigo. If you’ve built anything from serverless, you know that if there’s one thing that can be said universally about these applications, it’s that it turns every outage into a murder mystery. Lumigo helps make sense of all of the various functions that wind up tying together to build applications. It offers one-click distributed tracing so you can effortlessly find and fix issues in your serverless and microservices environment. You’ve created more problems for yourself; make one of them go away. To learn more, visit lumigo.io.
Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. Once a year in San Francisco, if I find myself being overly cheerful, all I have to do is walk up and down the RSA Expo Hall and look at a bunch of vendors talking about how their on-premises product kind of sort of works in the cloud, and then I’m not overly cheerful anymore. One notable exception to this is a company called ExtraHop. I’ve spoken about them before, and on this promoted episode, we’re going to dive a little bit deeper. Today, my guest is Senior Systems Engineer Guy Raz. Guy, thanks for taking the time to speak with me.
Guy: Thanks, Corey, happy to be here.
Corey: So, for those who have not caught previous episodes, or heard me ranting from the rooftop about it, at a very basic level for folks who have not even, I guess, dipped their toes in the RSA space because they, you know, want to be happy with their lives, what is ExtraHop?
Guy: ExtraHop is a cloud-native approach for analyzing wire data. Historically, customers have, kind of, looked at TAPs and SPANs, but with cloud, there’s a ton of ways of getting this natively. You know, AWS, GCP, Azure give us ways of collecting this data. ExtraHop is a platform for analyzing that network traffic and, in real-time, providing context to application and security teams.
Corey: So, when you take a look at that from, I guess, the perspective of security, it’s easy to sit here and say, “Oh, so how do you wind up thinking about security in a place or time of cloud?” Because there’s an awful lot of ways to view it: you can go down the path of, “Ah, I’m going to just use all the first-party tooling from my provider, and that’s it,” which, that could be fair. Alternatively, you could go down a different path of, “I’m going to just go ahead and buy whatever they’ll sell me at RSA,” which is great because the hardest part there is the booth attendees not making actual cash register sounds with their mouths when you walk past with an open checkbook. But security always feels like a thing that’s kind of an afterthought. It’s something that is tied too closely, on some level, to this idea that you’re never going to be secure, so you may as well just give up. It’s also something people only care about after it’s been a little too late, where they really should have been caring about it. How do you see that?
Guy: It’s a really unfortunate space, but you’re absolutely right, Corey, there. What we end up seeing is that for a lot of customers, and just the industry as a whole, security tends to be an afterthought when it comes to cloud. They assume cloud-native solutions or built-in free solutions have their best foot forward, have their best interest in mind. And that’s not always the case. There’s a lot of, like you mentioned, built-in solutions that these cloud providers can give us.
And while a lot of them are kind of scratching the surface of what security in the cloud can provide, there’s a lot that it kind of leaves unanswered. And the unfortunate thing is, the cloud journey isn’t always the easiest. There’s a lot of lift-and-shift, there’s a lot of refactor, and sometimes the security portion of that gets put on the side street until it becomes a priority or an event happens.
Corey: So, given that you can effectively not even swing a dead cat anymore without hitting 15 different security vendors all claiming to do everything you’d want, start to finish, what makes ExtraHop different? How do you approach security that’s differentiated from the rest of the, I guess, entire security industry?
Guy: Yeah, that’s a really good question. I think my favorite part, and one of the reasons I love our product is the data stream that we collect. Network data is a huge source of information that’s just sitting there silently, kind of, waiting to be consumed and analyzed. In the old on-premise environment, there were legacy packet capture solutions, or ways of grabbing this information from a SPAN or a TAP. But it’s still the same data stream as we go to the cloud, it’s just a slightly different way of collecting it.
So, the biggest thing that I would encourage people is, use the data that’s there. The network traffic is passing through your infrastructure: it’s EC2s hitting your S3 buckets, it’s RDS instances going through a load balancer to a Lambda function. It’s all just traversing infrastructure that you just don’t own anymore, but getting that information is a huge differentiator. You’re talking about every packet of every transaction being analyzed in real-time at cloud scale, which, you know, if you need a smaller instance today and a bigger instance tomorrow, it just auto-scales up.
Corey: Now, back in the world of data centers, I agreed an awful lot with what you’re saying, as far as looking at the network as the first point of, I guess, the arbiter of truth, for lack of a better term. And, on some level in cloud, I feel like I’ve drifted away from that. Now, back in our days at data centers, you don’t know what’s running on these systems; you don’t know what various engineers have shoved onto them, but generally speaking, you can mostly trust the network. Please don’t email me. So, once you move into a cloud world, everything sort of changes a bit.
You don’t really have to think about any of the layer 2 networking, and most of the layer 3 networking sort of goes away, too. Plus, let’s be very realistic; from the perspective of the virtual machines you’re running in a cloud environment, everything beyond that is kind of a lie. There’s a bunch of encapsulation, you’re higher up the stack, you’re not on hardware anymore so, on some level, it always felt that, eh, networking is not really the same thing in the cloud environment. I can ignore it. And I have to admit, back when I first started talking to you folks, I was something of a skeptic.
And then you, more or less, made me change my perspective through a very sneaky approach of spinning up a test account for me with ExtraHop, and now I get it in a way I never did before. Is that aha moment common to the, I guess, the cloud-native set, or do most people come into this with a much more rational and reasoned approach to networking in the cloud?
Guy: I would say it’s both. We have customers who are familiar with the type of information we can provide going through their cloud journey, or are starting their cloud journey and they want the same type of visibility. But for our net-new customers, when we hit that whitespace, that aha moment comes, and it’s so much fun to see. Someone who had no idea what this type of data can provide; they’re used to legacy telemetry or log information. So, that aha moment is something that, as someone who gets to interact with customers, is one of my favorite parts of the job. And I would say it’s fun to play with and show that.
Corey: Now, I want to be clear, again, in the interest of full disclosure: since I’ve put this in my test account, ExtraHop is now the second most expensive consumer of AWS services in that account. But it’s not as bad as folks might think. It’s using a VPC mirror in order to look at traffic, and that costs me the princely sum of somewhere between $10 and $11 a month. And that doesn’t really vary, regardless of how much traffic I shove through this thing. It’s not doing a whole lot in the AWS account; if I didn’t know that was there and what it was doing, I would ignore the spend line entirely. How does this work? What are you doing in order to get access to seeing what is happening, “On the wire,” quote-unquote, in a cloud environment?
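A minimal back-of-the-envelope sketch of where that flat “$10 to $11 a month” figure can come from: VPC Traffic Mirroring is generally billed per mirrored ENI per hour rather than per gigabyte, which is why the number doesn’t move with traffic volume. The rate below is the commonly quoted us-east-1 figure and should be treated as an assumption; check current AWS pricing before relying on it.

```python
# Rough estimate of the monthly VPC Traffic Mirroring charge for one mirrored instance.
# The hourly rate is an assumption (commonly quoted us-east-1 pricing), not a quote.
hourly_rate = 0.015       # USD per mirrored ENI-hour (assumed)
hours_per_month = 730     # average hours in a month
mirrored_enis = 1         # a single mirrored instance, as in the account described above

monthly_cost = hourly_rate * hours_per_month * mirrored_enis
print(f"Estimated mirroring cost: ${monthly_cost:.2f}/month")  # roughly $10.95
```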
Guy: Just focusing on AWS for a second, since that’s what you called out. It’s using a native built-in functionality that Amazon provides. It’s called VPC packet mirroring. It’s super simple: you deploy an ExtraHop collector into your VPC, you set that up as a destination for your traffic, and then you configure what’s called a mirror session in the VPC. You can say, I want it to mirror based on these tags, I want it to send traffic based on this subnet—or any combination thereof—and it just kind of works. You know, it’s beautiful.
And where we’re kind of taking this to the next step is using some intelligent Lambda automation to ensure that anytime a new instance gets spun up, whether it’s tagged, untagged, deployed into a different VPC, or is a different instance size, it gets automatically added into this data feed. So, you know, you talk about the ephemerality of the cloud and how instances can spin up and spin down almost instantaneously, as soon as an instance is up, before it even gets any traffic sent to it, traffic is [laugh] coming to the ExtraHop, right? We’ll see IMDS traffic, we’ll see instance metadata, we’ll get the ENI information, all just by sitting there, passively listening.
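As a rough sketch of the Lambda pattern Guy describes here—an illustration, not ExtraHop’s actual automation—auto-enrolling a newly launched instance into a mirror session with boto3 can look like the following. It assumes a Traffic Mirror target (pointing at the collector) and a Traffic Mirror filter already exist, with their IDs supplied through hypothetical environment variables, and that the function is wired to an EventBridge rule for EC2 instance state-change notifications.

```python
# Hypothetical Lambda: when an EC2 instance enters the 'running' state, mirror each of
# its network interfaces to a pre-created Traffic Mirror target (the collector).
import os
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    # Triggered by an EventBridge rule on "EC2 Instance State-change Notification"
    # events filtered to the "running" state.
    instance_id = event["detail"]["instance-id"]

    reservations = ec2.describe_instances(InstanceIds=[instance_id])["Reservations"]
    instance = reservations[0]["Instances"][0]

    for idx, eni in enumerate(instance["NetworkInterfaces"]):
        ec2.create_traffic_mirror_session(
            NetworkInterfaceId=eni["NetworkInterfaceId"],          # source of the mirrored packets
            TrafficMirrorTargetId=os.environ["MIRROR_TARGET_ID"],  # the collector's target (assumed to exist)
            TrafficMirrorFilterId=os.environ["MIRROR_FILTER_ID"],  # which traffic to copy (assumed to exist)
            SessionNumber=idx + 1,                                 # priority among sessions on this ENI
            Description=f"auto-mirror {instance_id}",
        )
```

Hooked up that way, an instance can be feeding packets to the collector moments after it comes up, which is the behavior described above.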
Corey: One of the things that I found particularly, I guess—appreciated about your entire approach is I didn’t have to change anything about what was actually running in this account. I didn’t have to teach the EC2 instances that something else was going on. I didn’t have to reconfigure anything on an application basis. This was purely done in the underlying VPC configuration. It was done without any downtime whatsoever.
And I feel like that is an understated benefit for an awful lot of tooling. “Oh, just go ahead and roll this thing out to all of your environment.” Like, yeah, there are tens of thousands of instances and VMs scattered throughout our entire estate. Exactly how long do you think we’re going to spend on this? You don’t have that problem here, and it’s kind of nice.
Guy: It is really nice. And not to take anything away from some agent solutions because they do have their [crosstalk 00:09:46]—
Corey: Oh, I will, but please go on.
Guy: [laugh]. But this approach to security and monitoring in the cloud, to your point, Corey, is seamless. Application owners don’t know it’s there. It doesn’t add any load. I’m a former network engineer. Troubleshooting different instances or different virtual machines, the first thing I used to do is turn off those agents, right? Is this consuming CPU resources? Is this slowing down my application? That’s no longer the case in cloud. That’s no longer the case with this network-based approach.
Corey: I’ll also point out that it always feels like there’s a false dichotomy when we’re talking about security vendors. And it either feels like, oh, you’re in a bunch of data-center style environments, you’re migrating into the cloud, but basically today, your environment is a bunch of VMs, and maybe a load balancer or an object store. And a lot of tooling speaks super well to that use case. But then if you take a step back and look at well, the lie that all these companies love to tell themselves, and I’m no more immune to this than they are, to be very clear here, but we all tell ourselves this beautiful lie which is after this next sprint ends, then, then we’re going to go ahead and pay off all of our technical debt and things are going to be done properly with a capital P. And it never happens, but it’s the lie we tell ourselves.
And we make financial decisions, in some cases, tied to that false vision of, “Well, why would I wind up embracing something that is aimed at that particular use case because once we wind up going full-on cloud-native and embracing our provider of choice, all of this stuff is going to change?” What I like about ExtraHop is, all right, assume you’re in that mythical born-in-the-cloud world where you have a significant estate that everything runs on top of these higher-level services. ExtraHop is still there, still working, and still doing exactly the sorts of things we’re talking about here. No matter where you are on that transformational journey, it feels like there’s an answer here. Is that accurate? Have I been gargling the marketing tea too heavily? What’s the story here?
Guy: No, that’s pretty accurate. And it doesn’t really matter where you are on your cloud journey; security can’t be foregone for the sake of this cloud instance. We see this day in, day out. You know, if you subscribe to as many news alerts as I do, it’s a scary world. Just even recently this past weekend, we had a—not our customer, but there was an attack against an oil pipeline.
That came through a cloud vulnerability. IAM account leakage, and service accounts, and open S3 buckets. It’s a scary part of this cloud journey. We want to make sure that we’re scaling, we want to reduce our physical footprint, but we can’t forgo the security and the trust that our customers have in our applications. And that means that having an approach to security in the cloud needs to be top of mind, regardless of where you are in that cloud journey.
Corey: I think one of the, I guess, biggest concerns in the security space is very similar to what I deal with in the cost optimization space, which is people care about it only after they really, really, really should have cared about it, on some level. Now, over in the billing world that I live in, people generally have a failure mode of, “Well, we spent a little too much money,” and that is generally a very survivable thing. I used to say—tongue-in-cheek, only I was being completely serious—one of the reasons I went with AWS billing as my direction of choice was that no one is going to come and call me at two o’clock in the morning with a billing emergency; it is strictly a business hours problem. Security is a very different world. But if you screw up the bill, you spent too much money.
If you screw up security, well, your company’s name is mud, you could try and pull a SolarWinds with a ring of ablative interns to wind up trying to pass the buck off onto, but in practice, you’re probably losing a CSO and a few other high-level execs as a sort of token offering to the market gods. And it’s painful, and I’m hard-pressed to name a company these days that has not suffered at least some form of data breach somewhere. It almost feels like it’s a losing game.
Guy: It’s not a losing game, but it is a post-breach world, right? It’s not a question of if you get breached. It’s more a question of what security holes have been left open, and what can they collect from these holes? And minimizing that attack surface is obviously critical, but understanding the damage and reacting to it as fast as possible is just as important. And honestly, that’s kind of one of my favorite parts about the cloud.
You know, I can see something like a suspicious transaction, or a large increase in web traffic, and then fire off an API call to Lambda that says, “Deploy the security group onto this instance.” That whole process takes milliseconds. So, the reaction time that we have with the cloud vastly surpasses what we ever had in the data center. And yeah, you’re right, maybe that ends up costing a little bit more, or creates a slightly higher bill because we called a couple Lambda functions, but no exfiltration of data; no loss of customer information. You can’t trade that off, at the end of the day.
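For a sense of what that detection-to-reaction hop can look like in practice, here is a minimal sketch—again an illustration, not ExtraHop’s own integration—of a Lambda that swaps an instance’s security groups for a pre-created, deny-everything quarantine group. The group ID and the event shape are hypothetical.

```python
# Hypothetical quarantine Lambda: replace every security group on the flagged instance
# with a locked-down group, cutting it off while leaving it running for forensics.
import os
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    # Assumes the detection payload carries the offending instance ID.
    instance_id = event["instance_id"]

    reservations = ec2.describe_instances(InstanceIds=[instance_id])["Reservations"]
    instance = reservations[0]["Instances"][0]

    for eni in instance["NetworkInterfaces"]:
        ec2.modify_network_interface_attribute(
            NetworkInterfaceId=eni["NetworkInterfaceId"],
            Groups=[os.environ["QUARANTINE_SG_ID"]],  # pre-created group with no inbound/outbound rules
        )
```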
Corey: The thing that always, I guess, sort of bothered me about various breaches or various security reports is whenever companies will say definitively, “We have never suffered a security breach,” that might mean that they are absolutely on point—though, you always have this probabilities question—but it could also mean that they have no effective visibility or effective logging, and that is the dangerous part. It’s similar to this idea of back once upon a time in the early days of unbreakable Linux, when Oracle was pushing that and they said, “It is unhackable.” The entire internet proved them wrong within hours because everything can be broken into at some point. It’s just a question of how high do you raise that bar? Ideally, a little bit above random people just scanning S3 buckets.
Guy: Yeah, and you know, that’s really scary, kind of, the data that we get to see when—you know, you called this earlier that aha moment. Because we’re an always-on solution, we get to see the hygiene of the network, too. I can tell you when someone hit an insecure S3 bucket, or an IAM role logged in at two in the morning that it never has before, or someone sent an API command to Lambda to spin up another instance at two in the morning, using a service account that has admin permissions. It’s a scary world in the cloud, and making sure you have that surface covered gets you to those aha moments quicker.
Corey: This episode is sponsored by ExtraHop. ExtraHop provides threat detection and response for the Enterprise (not the starship). On-prem security doesn’t translate well to cloud or multi-cloud environments, and that’s not even counting IoT. ExtraHop automatically discovers everything inside the perimeter, including your cloud workloads and IoT devices, detects these threats up to 35 percent faster, and helps you act immediately. Ask for a free trial of detection and response for AWS today at extrahop.com/trial.
Corey: One thing that I do want to draw a little bit of attention to as well, having kicked the tires on ExtraHop for a few months now: I keep forgetting that I have it in place. And the only time I really get reminded is that $10 a month for that attachment to the VPC that I see on my bill when I go over that thing with a fine-tooth comb, because of who I am and what I do. My point is that I have instances in that account that are doing a bunch of relatively strange things from time to time. And the behavior is not consistent from day to day. One of them has an IRC bouncer hanging out on it because I used to spend a disproportionate amount of my time on freenode, and it does a whole bunch of different things that look super weird.
And every time I wind up pointing a typical security product at it, it starts shrieking its head off—if it can even get that far into it—with, “This thing is clearly exploited. Shut it down, shut it down, shut it down.” And none of that happens. I mean, this thing looks very weird on the network, I’m not going to deny otherwise. This is my development box.
When I’m on the road—remember back when we used to travel places?—and I would just be connecting from an iPad and remoting into this thing, and then I would have it do all of the things I would normally do on a desktop computer. But it doesn’t make noise. Now, to be clear, I also have a somewhat decent security posture on this thing so it’s not a story of it getting actively exploited and it should be making noise. But it just doesn’t say anything. It just sort of sits there quietly in the background. And it works. Whenever I log in, I have to click around to make sure it actually is still working because there’s nothing on the dashboard where it’s just giving you noise to talk about noise. Why is this such a rarity?
Guy: [laugh]. So, your environment is probably pretty secure. I imagine you’re not deploying hundreds and thousands of containers and EC2s and spinning up all this type of data, but—
Corey: No. It’s tiny, I spend 50 bucks a month on this account.
Guy: So, it’s not atypical, the behavior you see. You know, I’ve been in POCs and proof of values where we deployed the ExtraHop, and it doesn’t see too much. And so one thing I’ve started doing for a lot of my customers is deploying a lab for them. Do you trust that something like an ExtraHop will see ransomware? Do you trust that ExtraHop will see credential harvesting, and lateral movement, and exfiltration?
Or are you using your ExtraHop to troubleshoot your web applications? Let me spin up a lab for you, throw some workloads in there. We’ll drop a Kali instance or a Kubernetes cluster and show you what an attack surface can look like. Not to scare or, kind of, build on what customers are experiencing, but knock on wood, I don’t want any of my customers to be attacked, but I also have to build that confidence that if or when something happens, they’re covered.
Corey: Back when I first had ExtraHop demoed for me, I was convinced it was going to be garbage, let me be very honest with you. And the reason was that the dashboard looked like it was demoware. It was well-designed, well-executed, it had a very colorful interface. It felt like bossware, if I’m being perfectly honest. My belief has always been, you either get a good interface that works and is easy to use and navigate within, or you get something that looks super flashy when you do a demo on stage somewhere, but it is almost impossible to wind up effectively nailing both of those use cases. And then I started using this and I am having to eat those words because you actually did it. You wound up building something that looks great and is easy to navigate. How much work did that actually take? I mean, is that where all the engineering on this product has gone?
Guy: We really appreciate it. Our UX team and our engineering group work very, very hard. We spend more on R&D than we do on a lot of our marketing and front-end sectors, and it shows. The product kind of speaks for itself. And the experience that you’re describing, with the easy-to-consume UI and the data to support that experience behind it, is our goal. And I’m happy to hear that you’re enjoying it in your lab.
Corey: I just did a little poking around while I have you on the phone, and if I dig deep enough, it does tell me that there’s some weak ciphers in use. And every single one of these things is talking to an AWS-owned endpoint, which is, first, a little bit on the hilarious side, since I keep this thing current. Awesome. Secondly, the fact that I had to dig for that and it wasn’t freaking out about it. There are no alerts; it doesn’t show up on the dashboard.
I had to really start diving into this. Because, yeah, it’s good to know if I’m doing some sort of audit activity, it’s good to know if I need to dive in and look at these things, but it doesn’t need to wake me up at two in the morning because, “Holy crap. The Boto3 library isn’t quite using the latest cipher suite.” How much tuning did this take?
Guy: Not much. So, there is a learning period, as with any application that has a backend built on behavioral analytics. But most of my customers, usually two to three weeks after we start seeing a data feed, are in a state of excellent tuning. Very little manual tuning required: the system will learn normalities, it’ll learn behaviors, and it’ll flag anomalies, kind of, on its own. So, the same experience that you’re having where you’re running a compliance scan, or you’re running an audit, or—in this world where, I’m going to make a joke here, we all have free time—you have the time to go look at, you know, “How do I clean up some of these hygiene issues that are not currently causing me heartache?” The data is there. That’s the beauty of the network: some of your users may be familiar with Wireshark, or something like tcpdump. There’s boatloads of data in there. There are thousands and thousands of data points you can analyze. If you want the data, it’s there, but like you said, no reason to wake you up at two in the morning unless we see things that are super critical.
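For a sense of what “learn normalities, flag anomalies” means at its simplest, here is a toy baseline-and-threshold sketch. It is purely an illustration of the concept; ExtraHop’s actual behavioral analytics are far more involved, and the numbers here are made up.

```python
# Toy behavioral baseline: learn the normal range of a metric, then flag outliers.
from statistics import mean, stdev

# Hypothetical requests-per-minute observed during the learning period.
baseline = [120, 115, 130, 125, 118, 122, 127, 119, 124, 121]
mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(observation: float, threshold: float = 3.0) -> bool:
    """Flag anything more than `threshold` standard deviations from the learned norm."""
    return abs(observation - mu) > threshold * sigma

print(is_anomalous(123))  # False: within normal variation
print(is_anomalous(900))  # True: the kind of spike worth surfacing
```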
Corey: Encrypt everything sort of becomes the theme, especially when Amazon’s CTO slaps it on a t-shirt, and then in some cases charges extra for it; but that’s a diversion. What is the story as you start seeing more and more traffic wind up being encrypted at a bunch of different levels? In fact, I’ll take it a step further. With the rise of customer-managed keys and things like KMS in the AWS world, does that mean that ExtraHop is effectively losing visibility beyond just the typical TCP flow?
Guy: So, ExtraHop is unique in the space in that we have the ability to decrypt TLS 1.3 data. It came out a couple years ago and it’s a way of encrypting traffic between servers and clients in a manner that isn’t as breakable as historic encryption mechanisms were. We can parse that data, we can ingest those decryption mechanisms in real-time, without being a man-in-the-middle—so we’re not breaking any of the trust chain that you’d otherwise have to explicitly build out to the internet in a lot of cases—and you don’t have to upload any of your private keys to the ExtraHop. So, it’s a super unique approach for how we can unpack that envelope.
This goes back to when we were kids, and we all got those Christmas presents and you shake the box and you try to guess what’s inside. And maybe you’re right, maybe you’re not, but until you open that wrapper, you can’t really know what’s being said. So, something like a hidden database transaction underneath a web call just shows up as a web call when you’re not unpacking the envelopes. Decryption is an underrated feature, in my opinion, and I would say, you know, a true security posture team should probably have something where they can look inside those payloads.
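To make the “ingest decryption secrets without a man-in-the-middle” idea concrete in general terms, here is a minimal sketch of the standard out-of-band key-log approach—not ExtraHop’s own session-key forwarding mechanism. A TLS client (Python 3.8+ shown here) can export its per-session secrets to a key log file; a passive analyzer that holds both the packet capture and that file, Wireshark for example, can then decrypt the session after the fact without ever terminating or re-encrypting the connection.

```python
# Export TLS session secrets out-of-band so a passive tool can decrypt captured traffic.
import socket
import ssl

context = ssl.create_default_context()
context.keylog_filename = "/tmp/tls-keys.log"  # NSS key log format; protect this file carefully

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(tls.recv(1024))
```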
Corey: This is where it starts to get a little weird, too, because, on some level, great, the whole premise of TLS is that my application talks to something far away—or nearby, it doesn’t really matter—but there’s a bit of a guarantee that from the point it leaves that application and hits the encryption side on the instance to the other end, there should be no decryption there. The only way I’ve ever seen to get around that is effectively man-in-the-middling these things, which on some level, “Oh, decrypt all of your secure traffic in the name of security,” always felt a little on the silly side.
Guy: Not only is it silly, it’s a little harder to manage when we talk about cloud because those man-in-the-middle decryption mechanisms typically involve building explicit trust so that they can decrypt the traffic, and then the client and the server both agree that, “Yeah, sure. You can read my information. You use your own certificate. I don’t care.” That gets harder to do as you start talking about containers, as you start talking about ephemeral instances.
Sure, you can build a golden image of a container and make it trust your IPS—which most people should have—but you still have to have the ability to see this traffic when you’re bypassing certain metrics. If you’re passing traffic back to your data center so you can [unintelligible 00:24:45] your point of sale application, or if, maybe, you’re in a multi-cloud environment where you have to pass from cloud to cloud to consume all of your data, you still have to be able to see that data to understand what’s really being said during the conversation without always being able to break that trust chain.
Corey: One thing that I want to make very clear I call out because otherwise, I am going to get letters on this. This is a promoted episode. You folks have paid to sponsor. Thank you. It is appreciated. But I want to be very clear you buy my attention, not my opinion. I know I’ve been, sort of, gushing about what ExtraHop does, and how it works, and how I view these things, but that’s not because you’re paying me to do that. I am legitimately excited about the product itself.
This is one of those things where it finally is giving me visibility into something that I understand from my olden sysadmin network admin days combined with how I know the cloud works today, and I’m looking at this and the strange spots that I see of, “Ohh, I would improve that a bit,” there aren’t that many and they’re not that big. This is something that is legitimately awesome, and I would encourage people to kick the tires and see what they think.
Guy: Yeah, we appreciate that feedback, Corey. A lot of us are previous users. I myself, you know, before coming to ExtraHop, used ExtraHop at a previous job, and one of the big reasons I came to work for the company is that I believe in the software. A lot of our people here are long-term employees who believe in what we do. And our goal is to build this partnership and trust with our customers, too, so that they have the same experience that you do. It’s a fun product to play with, and kicking the tires is fun—we’d love to show you.
Corey: When you start talking to folks who are going through their, I guess, ExtraHop journey of discovery—don’t ever use that term. It sounds awful—what do you find that they are getting the most confused about? What do they misunderstand that would be helpful for them to have more clarity around?
Guy: There’s a lot of what ExtraHop can provide when it comes to data ingestion, and data collection, and even data aggregation, but where a lot of my customers fall in the confusion space tends to be, “Do I care about this data? Should I care about this information?” And that really comes down to the individual user’s responsibility. A security team cares about all of it, whereas an application team may only care about the website’s performance, or the network latency, or the error rates. And it runs the gamut.
So, one thing that I do with a lot of my customers is weekly training sessions, or give them access to videos that we’ve recorded in advance so they can self-teach. As an engineer myself, I hate when people talk me into things: I like to play, and I like to see. So, let me give you a guide, you want to play with it, kind of poke the toes, kick the tires, have fun, that seems to get customers excited, and again, back to that aha moment a lot quicker. There’s so much data that gets exposed, and sometimes it can be overwhelming. But when it comes to visibility, it’s all stuff that’s useful at the end of the day.
Corey: If people want to learn more, where can they go next? How do they begin this journey? And of course, mention me just because every time someone talks to a sponsor and brings my name up, the reflexive wince is just my favorite look in the world.
Guy: Yeah, so definitely mention Corey’s name. [laugh]. We have online demos where people can play with the lab; you go to extrahop.com/demo. We also offer AWS trials if you want to actually deploy one and see what it looks like in your environment for a period of time. And we have teams all over the world—the United States, EMEA, APAC—that are happy to help answer questions, help deploy, and help automate a lot of this, whether it be through something like a CloudFormation template, or Terraform scripts, whatever infrastructure-as-code language you choose to use.
Corey: Excellent. Thank you so much for taking the time to speak with me today. I really do appreciate it.
Guy: Yeah, Corey, it’s been a pleasure talking to you. And I’m looking forward to maybe having another one with you in the future.
Corey: Oh, I would expect so. I’m curious to see what happens next. Guy Raz, senior systems engineer at ExtraHop. I’m Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you’ve enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you’ve hated this podcast, please leave a five-star review on your podcast platform of choice and an insulting comment that will no doubt get flagged by ExtraHop as being something that shouldn’t be on the network.
Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
Announcer: This has been a HumblePod production. Stay humble.