Episode Summary
Aviad Mor is the CTO and co-founder of Lumigo, a serverless observability platform that helps developers understand and troubleshoot serverless applications. Prior to this role, he wore many hats at Check Point Software Technologies, Ltd. over the course of 12 years, rising to group manager of R&D for next-generation architecture by the end of his tenure.
Join Corey and Aviad as they talk about what Lumigo does, how the most interesting serverless environments are in AWS, what a hybrid serverless environment might look like, what the true promise of serverless is, why observability is really just hipster monitoring, what sets Lumigo apart from other players in the serverless space, how the serverless space continues to grow and diversify, why the future of serverless is exciting, and more.
Episode Show Notes & Transcript
About Aviad
Aviad Mor is the Co-Founder & CTO at Lumigo. Lumigo’s SaaS platform helps companies monitor and troubleshoot serverless applications while providing actionable insights that prevent business disruptions. Aviad has over a decade of experience in technology leadership, heading the development of core products in Check Point from inception to wide adoption.
Links:
- Lumigo: https://lumigo.io/
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
Corey: This episode is sponsored by ExtraHop. ExtraHop provides threat detection and response for the Enterprise (not the starship). On-prem security doesn’t translate well to cloud or multi-cloud environments, and that’s not even counting IoT. ExtraHop automatically discovers everything inside the perimeter, including your cloud workloads and IoT devices, detects these threats up to 35 percent faster, and helps you act immediately. Ask for a free trial of detection and response for AWS today at extrahop.com/trial.
Corey: This episode is sponsored in part by our friends at Lumigo. If you’ve built anything from serverless, you know that if there’s one thing that can be said universally about these applications, it’s that it turns every outage into a murder mystery. Lumigo helps make sense of all of the various functions that wind up tying together to build applications.
It offers one-click distributed tracing so you can effortlessly find and fix issues in your serverless and microservices environment. You’ve created more problems for yourself; make one of them go away. To learn more, visit lumigo.io.
Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. I periodically talk about how I bolt together a whole bunch of different serverless tools in horrifying ways to write my newsletter every week. At last count, I was up to something like four API Gateways, twenty-nine Lambda functions, and counting. How do I figure out if something’s broken in there? Well, honestly, I just keep clicking the button until it works, which is a depressingly honest story.
Now, that doesn’t work for everyone. Today’s promoted episode is brought to us by Lumigo. And my guest today is Aviad Mor, their CTO, and co-founder. Aviad, thanks for taking the time to suffer my slings and arrows.
Aviad: Thank you, Corey. I’m very glad to be here today.
Corey: So, let’s begin at, I guess, the very easy, high-level question: what is Lumigo and is ‘loom-ago’ an accepted alternate pronunciation?
Aviad: [laugh]. So, Lumigo is a monitoring and debugging platform for serverless environments. And yes, you can call it whatever you want as long as it’s Lu-mi-go. What we do is integrate with the customer’s AWS account, make a very quick connection to their Lambdas, and then we’re able to show them exactly what’s going on in their system: what’s going well, what’s going wrong, and how to fix it.
Corey: So, let’s make sure that we hit a few points here at the beginning. It is AWS specific at this time?
Aviad: Yes, it is. We’re not officially exclusive with AWS, but right now we see the most interesting serverless environments in AWS, so it’s a pretty easy call. But we are keeping our eye open to, you know, Google, Microsoft, even Oracle.
Corey: Oh, Oracle Cloud has some phenomenally interesting serverless stories that I don’t think the world is paying enough attention to yet. But one of these days, I’m hoping that that’s going to change just because they have so much savvy locked up in that platform.
Aviad: Right. They do have serverless functions. Yeah, so.
Corey: They acquired the iron.io folks a while back, and those people were way ahead of Lambda at the time.
Aviad: Right, right. So, we’re waiting for the big breakout of serverless in Oracle, and then we’ll build the best monitoring solution for them.
Corey: So, something else, I think, that you have successfully navigated as far as, I guess, the traps that various observability tooling falls into, you also talk on your site about monitoring AWS Lambda as the center around which everything winds up being captured. You also, of course, integrate with the things that tied directly into it, such as API Gateway—or ‘opi-gateway,’ as I’m sure they mispronounce it at AWS—but that’s sort of where you stop. You don’t also show all of the container workloads that folks are running, and, “Oh, hey. While we have access to your API, here’s a whole story about ECS, and RDS, and all the rest.” And eventually, it feels like everything, in the fullness of time, tries to become Datadog version two.
And that always drove me nuts because I want something that’s going to talk to me specifically about what it is that I’m looking at in a serverless application context, not trying to be all things to all people at once. Is that a fair assessment of
the product strategy you folks have pursued?
Aviad: Right. So, we’re very focused on serverless. We think there’s a lot of interesting things we can do there, and we’re actually seeing more and more use cases of serverless. And it is important to say what we mean when we say serverless: Lambda, of course, and API Gateway, DynamoDB, S3, and so on.
There are a lot of services in that ecosystem, and when they’re all tied together in a serverless cloud application, we’re able to monitor all of it; not only monitor it at a high level, but also get into the details and show you things which are very specific, because this is what we do all day, and sometimes all night. And then there are those boundaries of where we go beyond serverless. So, there are some hybrid environments out there. And when I say ‘hybrid,’ there’s the easy hybrid, where you have two different applications which just happen to be on the same AWS account; one of them is completely serverless, and the other one is EC2. So, that’s kind of hybrid.
But the more interesting hybrid is those applications which start with an API Gateway and a Lambda, and then are directly connected to something else, which is maybe Fargate, ECS, EKS, and so on. So, we are very much focused on serverless, but we are also getting a lot of requests from our customers: “Show us the other parts, too.” We’re starting to look at that, but we’re not losing our focus. Our focus is still very much on serverless, while allowing you, if you do have some other aspects in your environment, to see them all tied together.
Corey: So, you’ve done a number of things that I would consider best in class as you’ve gone through this. First and foremost, let’s begin with the easy stuff. It doesn’t appear that your dashboard, your tooling itself, is purely serverless. I can tell this because when I click around in your site, the site loads super quickly. It’s not waiting for cold starts or for the latency inherent to Lambda.
It’s clear that you have not gone so far down the path of being, I guess, religiously correct around everything must be serverless at all times in favor of improving customer experience. That’s something that I’ve seen a number of different vendors fall into the trap of: “Why is the dashboard so slow to load?” “Ah, because everything is itself a Lambda function.” Is that accurate, or have you just found a way to improve Lambda [laugh] function performance in an ungodly way?
Aviad: [laugh]. We call ourselves serverless first, but the customer is really the first. So, if there’s a place where serverless is not the best solution, we’re going to use whatever is the best solution. But the truth is, we’re, I’d say, something like 99% serverless. And specifically, anything which is dashboard-facing, customer-facing, that’s actually completely serverless.
So, we did have to put in a lot of work, but I also have to say that AWS has come a very long way in the last two years, allowing us to deliver much better latencies in different parts of the dashboard. So, all of that is serverless, and it goes together with the new features of Lambda and API Gateway, and a lot of small things we had to do in order to provide the best experience to the customer.
Corey: The next thing that I think was interesting, as far as, I guess, capturing the way in which people use these things. One of the earliest problems I had, in the early days of this, I guess, new breed of serverless tools, was getting everything instrumented correctly. It felt like, in some cases, it took more time to get the observability pieces working than it did to write the thing in the first place. So, you’re integrating out of the gate with a lot of the right things, as best I can tell. Your website advertises that you integrate with the Serverless Framework, and you integrate with a bunch of other [processes 00:07:52] as well. Chalice, which I haven’t seen used in anger too much, but okay; Terraform, which everyone’s using; Stackery, et cetera. Is AWS’s SAM on the list as well?
Aviad: Yes, it actually is. And once we started seeing more and more users using SAM, we had to provide a way to allow them to easily do the integration. Because one of the things that we learned is that our users are developers and, just like you said, they don’t want to spend time on anything which is not the thing that they actually want to do. Especially in serverless, because the whole serverless premise is: work on what you do best, and don’t spend time on everything else. So, we actually spend a lot of time ourselves making the integrations as easy and as quick as possible, and that also means working with a lot of different tools to fit all the different environments our users are using out there.
Corey: It looks like you’re doing this through the application—judiciously—of a bunch of custom layers. In other words, whatever you wind up using winds up being built as an underpinning of the existing Lambda functions, so it’s super easy to wind up grabbing the necessary dependencies for anything that you folks support without having to go back and refactor existing applications. Is that directionally correct?
Aviad: Right. That’s correct. We’re using layers in order to, on one hand, do this deep integration with the Lambda, allowing us to do different instrumentations, collecting the data that’s being passed into the Lambda and passed out of the Lambda. And on the other hand, the developer doesn’t have to make any code changes, and can make whatever changes they want. They don’t have to think about Lumigo at any point, and the Lumigo layer does everything for them automatically.
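To make the mechanism concrete, here is a minimal sketch of how layer-based instrumentation can be wired up with boto3. The layer ARN and the wrapper handler name are illustrative assumptions, not Lumigo's actual internals; the general pattern is to attach a layer and point the function's handler at a wrapper that delegates to the original:

```python
import boto3

lambda_client = boto3.client("lambda")

# Hypothetical tracing layer; a real integration would publish and version this.
TRACER_LAYER_ARN = "arn:aws:lambda:us-east-1:123456789012:layer:tracer:42"

def instrument_function(function_name: str) -> None:
    """Attach the tracing layer and route the handler through a wrapper."""
    config = lambda_client.get_function_configuration(FunctionName=function_name)
    layers = [layer["Arn"] for layer in config.get("Layers", [])]
    if TRACER_LAYER_ARN in layers:
        return  # already instrumented
    layers.append(TRACER_LAYER_ARN)
    env = config.get("Environment", {}).get("Variables", {})
    # Remember the original handler so the wrapper can delegate to it.
    env.setdefault("ORIGINAL_HANDLER", config["Handler"])
    lambda_client.update_function_configuration(
        FunctionName=function_name,
        Layers=layers,
        Handler="tracer_wrapper.handler",  # hypothetical entry point shipped in the layer
        Environment={"Variables": env},
    )
```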
Corey: How do you handle support of the Lambda@Edge functions, which seem an awful lot like regular Lambda functions, except they’re worse in nearly every single way, every time I’ve tried to use them? In fact, in my experience, the best practice has been to immediately rip out Lambda@Edge and replace it with something else. Recently, it was formally disclosed that they only ran in a subset of 13 regional cache locations, and they still took a full CloudFront distribution update cycle every time you did a deployment, which dramatically slowed everything down; they were massively expensive to run at significant scale, and they would log to whatever region was closest, so it was a constant game of whack-a-mole to figure out what was going on. But, you know, other than that, they were great. How do you approach those?
Aviad: Lambda@Edge functions are not very easy to use, and, let’s say, they’re full of surprises [laugh] because not everything they do is exactly what you find in the documentation. But again, since our users are using them, we had to make sure that we give them proper support. And giving them proper support—other than running and collecting the data—means handling the things you mentioned, like the fact that each one logs to the specific region it’s running in, so you have to go and collect all this data from different places, and you don’t really know exactly where it’s going to run. So, the main thing here is just to make things easy. It’s a bit of a mess when you’re looking at it directly, so taking all the information and putting it in one place, so you as a user can just go ahead and read it without caring where it’s running and what it’s doing, that was the main challenge which we worked on and added to the product.
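As a rough illustration of the whack-a-mole being described here: Lambda@Edge writes its logs to whichever region served the request, under a log group named after the source function, so collecting them means sweeping every region. A hedged sketch (the function name is hypothetical):

```python
import boto3

FUNCTION_NAME = "my-edge-function"  # hypothetical
# Lambda@Edge log groups are prefixed with the region the function was created in.
LOG_GROUP = f"/aws/lambda/us-east-1.{FUNCTION_NAME}"

def collect_edge_log_streams() -> dict:
    """Return the most recent log streams for the edge function, per region."""
    found = {}
    for region in boto3.session.Session().get_available_regions("logs"):
        logs = boto3.client("logs", region_name=region)
        try:
            streams = logs.describe_log_streams(
                logGroupName=LOG_GROUP,
                orderBy="LastEventTime",
                descending=True,
                limit=5,
            )["logStreams"]
        except logs.exceptions.ResourceNotFoundException:
            continue  # this region never executed the function
        if streams:
            found[region] = [s["logStreamName"] for s in streams]
    return found
```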
Corey: So, across the board, it seems like you folks have been evolving in lockstep with the underlying platform itself. Have you had time to evaluate their new CloudFront Functions, I believe is what they’re calling it? Or is it CloudFront Workers? I can never quite keep it straight; between all the different providers, all the words start to sound alike. But the thing that only runs for a millisecond or two, only in JavaScript, only in all the actual CloudFront edge locations, et cetera, et cetera. Rather than fixing Lambda@Edge, they decided to roll something completely different out, and I haven’t looked at anything approaching the observability story yet because I’m still too angry about it.
Aviad: [laugh]. Right. So, there’s a lot of things coming out, and we’re also very close partners with AWS, so in many cases, we’re actually beta users of new services or new functionality in Lambda. But one of the hardest parts is that we cannot spend all our time checking everything new. So, this is one of the things which is still on the to-do list; we’re going to check it out very soon.
I think it’s interesting to see how we can actually use it, and whether it’s as quick as they say. What they say usually works; we’ll see if it works already today, or whether we have to wait a little bit until it works exactly like they said. But that’s one of the things on my to-do list. I’m really looking forward to checking it out.
Corey: So, it looks like once I set this up and it starts monitoring my account—or observing my account. I know I know, observability is just hipster monitoring, but no one agrees with me on that, so I’m going to keep rolling with it anyway just to irritate people—it looks like I can effectively more-or-less click a button, and suddenly, you will roll out an underlying Lambda layer to all of my existing Lambda functions. How does that get maintained whenever I wind up, for example, doing a new deployment with the serverless framework or something like it that isn’t aware of that underlying layer, so it—presumably—would revert that layer itself in the definition? Or am I misunderstanding how that works?
Aviad: No, no. You’re actually getting it right. So, unless you’re using, for example, our Serverless Framework plugin, where this is an integral part of your deployment, one of the things that we need to do is automatically identify that a deployment is happening so we can automatically update the Lambda layer to be the right one, and you won’t miss anything. And this deep integration, which happens without the user having to know anything about it, is, I think, one of the most important parts. Because in serverless, as you know, you have so many components, and you can very easily reach, you know, hundreds of Lambdas, which is something we’re seeing. So, if a user has to take care of and maintain something across a hundred Lambdas or more, you can be sure that it won’t be maintained, because they have something much more important to do.
So, behind the scenes, as the deployment is happening, we can recognize that it’s happening immediately, and then update the layer that’s required. And by the way, layers now have a new part called extensions, which allows us and everybody else to do a lot more with those layers, basically allowing code to run in parallel to the Lambda. So, this is a new thing that AWS has started to roll out, and we think it will allow us to give an even better experience to our users.
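One plausible way to recognize deployments automatically (offered here as a sketch under stated assumptions, not a description of Lumigo's implementation) is to have an EventBridge rule forward CloudTrail's Lambda management events to a housekeeping function that re-applies the layer:

```python
# Assumes an EventBridge rule matching CloudTrail events from
# "lambda.amazonaws.com" invokes this function; re_instrument() is a
# hypothetical helper standing in for layer-update logic like the
# instrument_function() sketch shown earlier.

def handler(event, context):
    detail = event.get("detail", {})
    if detail.get("eventSource") != "lambda.amazonaws.com":
        return
    # CloudTrail event names are versioned, e.g. "UpdateFunctionCode20150331v2".
    if not detail.get("eventName", "").startswith(("UpdateFunctionCode", "CreateFunction")):
        return
    function_name = detail["requestParameters"]["functionName"]
    re_instrument(function_name)  # hypothetical helper; see earlier sketch
```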
Corey: Let’s have a look across, I guess, the ecosystem of different approaches to this stuff. One thing that has always annoyed me about a whole raft of observability and monitoring tools is they wind up charging me whatever it is they charge me; that’s generally fine—and I don’t really have a problem with that. You know in advance, going in, what things are going to cost you. Incidentally, what is your pricing model?
Aviad: So, our pricing model is according to the number of invocations you have. We have basically two models right now, and each customer can decide which suits them better. If you want to know in advance exactly how much you’re going to pay, you can go with the tiered model, meaning: I want to pay for, let’s say, a million invocations each month, and then you’re sure that you’re paying exactly what you have budgeted for. And it’s always related to how much your AWS account is doing, similar to how much you’re paying for your Lambdas. And then there’s another way, which is dynamic pricing, which is very similar to serverless payment.
So, it’s really according to the number of invocations you have; you don’t need to decide in advance, and at the end of each month, you get a bill for exactly the number of invocations you had.
Corey: And let’s be clear, if I wind up exceeding the number of invocations under my plan, it just stops tracing and observing these things, it doesn’t break my app.
Aviad: Yeah, right. [laugh].
Corey: Always good to triple-check those things. It seems like that might hurt.
Aviad: That’s very important. You’re totally correct. And, yeah, we never do anything bad to your Lambdas. That’s written on the top of our door: “Never hurt a Lambda.” We make sure that nothing bad happens; we just stop collecting data.
And by the way, even after you pass your limit, we still collect the basic metrics so you can see what’s going on in your system. But you won’t be able to see the rich information, all the information that allows you to do the debugging, or to see the full end-to-end traceability of all the invocations and how they’re connected to each other.
Corey: This episode is sponsored in part by Thinkst. This is going to take a minute to explain, so bear with me. I linked against an early version of their tool, canarytokens.org, in the very early days of my newsletter, and what it does is relatively simple and straightforward. It winds up embedding credentials, files, that sort of thing in various parts of your environment, wherever you want to; it gives you fake AWS API credentials, for example. And the only thing that these things do is alert you whenever someone attempts to use those things. It’s an awesome approach. I’ve used something similar for years. Check them out.

But wait, there’s more. They also have an enterprise option that you should be very much aware of: canary.tools. You can take a look at this, but what it does is it provides an enterprise approach to drive these things throughout your entire environment. You can get a physical device that hangs out on your network and impersonates whatever you want to. When it gets Nmap scanned, or someone attempts to log into it, or access files on it, you get instant alerts. It’s awesome. If you don’t do something like this, you’re likely to find out that you’ve gotten breached, the hard way.

Take a look at this. It’s one of those few things that I look at and say, “Wow, that is an amazing idea. I love it.” That’s canarytokens.org and canary.tools. The first one is free. The second one is enterprise-y. Take a look. I’m a big fan of this. More from them in the coming weeks.
Corey: So, the pricing makes perfect sense, and that is in line with what I would expect, but the thing that irritates me then is, “Great. I know what I’m going to be paying you folks on a monthly basis, and that’s fine.” And then I use the monitoring tool and it costs me over three times as much in AWS charges, both direct and indirect, where it’s, “Oh, now CloudWatch is going to suddenly be the largest component of my bill, and data transfer for sending everything externally winds up spiking into the stratosphere.” What’s your experience been around that?
Aviad: So, since we are collecting data and making API calls, it will affect your AWS bill. But because we don’t want to irritate you, or anybody else, we put a lot of focus on having the absolute minimum possible effect on your system. For example, as we collect data from your Lambda, we do our best to add only milliseconds to its running time so you don’t end up paying a lot more for the runtime. And for the API calls and data transfer, we’ve done a lot of optimization, so the impact on your AWS bill is really very, very small; it’s not something that you will notice. And sometimes, when people do ask us, we go into their account together with them and show them exactly how their bill was affected by Lumigo, so they have assurance that nothing crazy is going on there.
Corey: Which is I guess one of the fundamental problems of the underlying platform itself. I have a hard time blaming you for any of this stuff. This is the perpetual joyless story of getting to work with a variety of AWS services. It’s not something that I see that you folks have a good way around just on basis of how the underlying platform works.
Aviad: Yeah. And then there are a lot of different prices for a lot of small things that you do, and you need to be able to collect it all in order to have the big picture of the effect. And yeah, we don’t have a silver bullet for it, but we can show exactly where we’re going, what we’re adding, to show how low it is.
Corey: One of the things that I think is not well understood for folks who are not into the serverless ecosystem is just how these applications tend to look. In most organic environments, you’ll see a whole bunch of Lambda functions that are all
tied together with basically spit and baling wire. They talk to each other, either directly on the back end—which is an anti-pattern in many respects, let’s not kid ourselves—or alternately, they’re approaching through a lens of, we’re going to now talk to each other through hardened REST APIs, which is generally preferred, but also a little finicky to get up and running. So, at some point, you have a request come in, and it winds up bouncing around through a whole bunch of different subsystems. Tracing, and a lot of the observability story around serverless is figuring out, all right, somewhere in that rat’s nest, it winds up failing.
Where did it break? What was it that actually threw the exception? What was it that prevented something from working? Or alternately, adding latency: where is the bulk of the time serving that request being spent? And you would think that this is the sort of thing that AWS could handle itself.
And they’ve tried with their X-Ray distributed tracing option, which more or less feels like a proof of concept demonstrating what not to do. And if you take a look from their application view, and all the rest, it is the best sales pitch I can possibly imagine for any of the serverless monitoring tools that I’ve seen because it is so badly articulated. You have to instrument all of your stuff by hand. There’s none of this, oh, I’ll look at it and figure out what it’s talking to and build an automated trace approach, the way that Lumigo does. And that’s always frustrated me because I shouldn’t have to turn every weird analysis into a murder mystery. Am I missing something obvious in how this could be done using native tools directly, or is it really as bad as I believe it is?
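For contrast, this is roughly what hand-instrumenting a function with X-Ray looks like using the aws-xray-sdk for Python (active tracing must also be enabled on the function; the table and annotation names are illustrative). Every custom subsegment is the developer's responsibility, which is the gap automated tracing aims to close:

```python
import boto3
from aws_xray_sdk.core import xray_recorder, patch_all

patch_all()  # monkey-patch boto3, requests, etc. so their calls emit subsegments

table = boto3.resource("dynamodb").Table("orders")  # illustrative table name

def handler(event, context):
    # The Lambda runtime opens the trace segment; we add subsegments by hand.
    with xray_recorder.in_subsegment("validate-order") as subsegment:
        subsegment.put_annotation("order_id", event["order_id"])
        # ... validation logic would go here ...
    table.put_item(Item={"order_id": event["order_id"]})
    return {"status": "ok"}
```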
Aviad: [laugh]. I won’t say it’s as bad as you’re saying it is. I think X-Ray is a great place to start. If you have just a few Lambdas and you’re starting to check out the serverless world, X-Ray can be good enough if you don’t want to start with a third-party tool right at the beginning. But then, as it gets a little bit complex, it’s going to get hard, especially if you’re trying to do it yourself.
That’s usually the wow part when people start using Lumigo and we show them a demo: seeing how everything is tied together. So, once you see how everything is tied together: the whole system, which components are talking to each other, and how they’re affecting each other. And for example, if one of them goes down, does it mean that the whole system is now not working, or maybe, eh, it wasn’t that important and everything is working; I’ll fix it next week. But I think the most important part is actually what we call the transactions.
So, as you said, there’s an API call at the very beginning, with an API Gateway or AppSync, and then it can go through dozens of components. Some of them are not directly related; it’s like a Lambda putting something into DynamoDB, which triggers a DynamoDB stream, and then another Lambda is called, and so on, and so on. It’s crucial to be able to see how everything is connected, and to see it visually, so you can understand it. There’s only so much you can understand when looking at a list as a human being, right?
You need to see visually how everything is connected. But then, after you understand how everything is connected in this specific transaction, if, for example, you have an issue in a specific invocation, you need to understand the story of that invocation. Maybe you’re looking at a Lambda which starts to throw an exception, and you didn’t change anything in its code today, yesterday, or the day before that. So, take care of that exception, but the root cause is probably not in that Lambda; it’s probably upstream. You need to be able to understand exactly what the chain of events was, all the calls being made until that specific Lambda was called, and to see the data being passed, including the data that Lambda maybe passed to a third-party API—like Stripe or PayPal—and what it got in return. Only when you’re able to see all of that can you solve an issue quickly, time after time, not a murder mystery like it might be, and without having to think about what code changes you’d need to make to keep capturing these transactions.
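A minimal sketch of the correlation problem being described: to stitch an API Gateway → Lambda → DynamoDB stream → Lambda chain into one transaction, some identifier has to ride along with the data. The transaction_id attribute below is an illustrative convention, not Lumigo's internal mechanism:

```python
import json
import uuid

import boto3

table = boto3.resource("dynamodb").Table("orders")  # illustrative table name

def api_handler(event, context):
    """First hop: mint a transaction id and persist it alongside the record."""
    txn_id = (event.get("headers") or {}).get("x-transaction-id") or str(uuid.uuid4())
    table.put_item(Item={"order_id": str(uuid.uuid4()), "transaction_id": txn_id})
    return {"statusCode": 202, "body": json.dumps({"transaction_id": txn_id})}

def stream_handler(event, context):
    """Second hop: the stream record carries the id, so downstream logs can be joined."""
    for record in event["Records"]:
        new_image = record["dynamodb"].get("NewImage", {})
        txn_id = new_image.get("transaction_id", {}).get("S", "unknown")
        print(json.dumps({"transaction_id": txn_id, "event": record["eventName"]}))
```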
Corey: So, taking a look at the somewhat crowded space—if I’m being perfectly honest with you—of the entirety of, let’s call it the serverless observability space—or ‘observerless,’ as I’m a big fan of calling it—what is it the differentiates Lumigo from a number of other offerings that people could wind up pulling out of the hat?
Aviad: Right. So, that’s a great question. And every time somebody asks me, the first thing I say is that the more people I see getting into this space, the more I think that’s a great sign. Because it means there’s more serverless activity, more companies doing serverless, and it means that the serverless space is interesting. People see an opportunity there, and they want to try to solve the issues that we’re seeing there.
And I think there are a few things. One of them is serverless expertise. So, if you look at a lot of the big companies—like I’ll mention Datadog and New Relic—they’re doing a lot of great things, but in the end, in a serverless environment, there are very specific things which you need to know and do in order to be able to do that distributed tracing: the distributed tracing which allows you to correlate specific transactions together, and then bring in the metrics and the logs which are relevant for a specific transaction. That’s a lot of hard work which we put in, in order to do the transactions and the distributed tracing in the best way possible, and then show it to you in the simplest way possible. And today, I think that Lumigo does that in a very good way. And if we’re looking around at other players, not only the big ones but also players which are focused more specifically on serverless, I think you’ll still see that Lumigo is the one which is doing serverless the most, let’s call it.
So, as serverless is expanding, we’re still not becoming generic—something that we mentioned before—and this allows us not only to do the best distributed tracing, but also to show you, out of the box, a lot of issues which might be hiding in your environment. So, it’s not only, “Okay, you have an exception here;” it’s also things more specific to serverless. For example, because it’s event-driven, sometimes you’ll get duplicate events: Kinesis or SQS might send you the same event over and over. The fact that we can show you that automatically and put a spotlight on it can save you a lot of time in trying to understand why things are not working the way you think they should be. And we can scan your environment and show you misconfigurations which are specific to serverless. These are the kinds of things that, once you use Lumigo, you get automatically without having to do anything special, and that can save you a lot of time.
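Since Kinesis and SQS are at-least-once delivery, duplicates like the ones mentioned here are expected behavior, and the usual guard is an idempotent handler. A hedged sketch using a conditional write to a dedupe table (the table name and process() are illustrative assumptions):

```python
import boto3
from botocore.exceptions import ClientError

dedupe_table = boto3.resource("dynamodb").Table("processed-events")  # illustrative

def handler(event, context):
    for record in event["Records"]:
        event_id = record["eventID"]  # present on Kinesis and DynamoDB stream records
        try:
            # The conditional write fails if we've already seen this event id.
            dedupe_table.put_item(
                Item={"event_id": event_id},
                ConditionExpression="attribute_not_exists(event_id)",
            )
        except ClientError as err:
            if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
                continue  # duplicate delivery; skip it
            raise
        process(record)  # hypothetical business logic
```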
Corey: I think that’s a relatively astute position to take. I’m a big believer in getting the right tool for the right job. I don’t necessarily want the one single pane of glass to look at everything. I think that is something that is very often misunderstood. Yeah, I might be using three or four different clouds in different ways.
I don’t need to see a roundup of all of them; I don’t necessarily care what the billing looks like on all of them; I don’t necessarily want to spend my time thinking about different aspects of these things juxtaposed to one another, and it’s a pain in the butt to have to sort through to find the thing I actually care about. So yeah, on some level, I definitely want there to be a specific tool. And let’s be clear, you have a terrific stack of things that you integrate with for alerting, for opening tickets, for remediation—or issues, as the case may be. Nomenclature is always a distraction. Don’t at me—but yeah, across the board, I see that you’re doing a lot of things right that if I were going to be entering the space, I would make a lot of those decisions very similarly. And then expect to hear it from the internet. You’ve been around for years now and are continuing to grow. What’s next for you, folks?
Aviad: So, that’s a great question, which I ask myself every morning. I’ll actually take the two things that you mentioned together. One is how we’re focused on serverless, and the second is where we want to grow from there. When you have this great focus, you have to make sure that what you’re focusing on is big enough. And as we’re growing, we’re very happy to see that serverless is growing with us.
We’re seeing more and more places using serverless. We see a lot more users, companies, and developers going into serverless. And we see new types of users. So, it’s not only the bleeding-edge technologists who want to use it and are really trying to find out how they can. We’re seeing more and more places, for example, enterprises that maybe had one architect in the beginning who said, “Okay, I’m going to use serverless.”
And now, a year or two afterwards, they see that it’s working and it’s saving them money. They’re able to build faster, and now it’s spreading virally to other teams which are starting to use it, and the initial project, which started two years ago, is growing and becoming bigger and more complex. Also, that team which was just starting with serverless two years ago now has maybe a second and third product. So, what we’re doing is looking at how we can give better and better monitoring for the new services entering the serverless field. And we’re also very strong believers that developers today are doing much of that monitoring—or observability, you can choose whichever you want—and that means it goes all the way into debugging.
So, we think that bringing the monitoring and debugging together is a great opportunity to save our users more time, because it’s the same person who’s going to do both of those things. Keeping best of breed in serverless while doing those two together, that’s going to be hard. And that’s exactly the challenge that we’re taking on, and we want to see how we can do it best.
Corey: And I think that that is probably the best way to approach it. If people want to learn more about what you’re up to, how you view these things, and ideally, kick the tires on Lumigo and see for themselves, where can they find you?
Aviad: So, the easiest thing you can do is just search for Lumigo in Google, and you’ll get to lumigo.io. And from there, it’s very easy to try us out.
Corey: And we will, of course, put links to that in the [show notes 00:31:24]. Thank you so much for taking the time to speak with me today. I really appreciate it.
Aviad: Thank you, Corey. It was great fun, and I’m looking forward to the next time.
Corey: Absolutely. Aviad Mor, co-founder and CTO at Lumigo. I’m Chief Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you’ve enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you hated this podcast, please leave a five-star review on your podcast platform of choice along with a long rambling comment telling me how very wrong I am on the wonder that is Lambda@Edge.
Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
Announcer: This has been a HumblePod production. Stay humble.