Episode Summary
AJ Stuyvenberg began his career writing software for St. Jude Medical. Today, he’s a senior software engineer at Serverless, Inc., makers of the increasingly popular open source Serverless framework designed to make it easier to deploy applications across cloud vendors.
Join Corey and AJ as they discuss what a day in the life of an engineer at Serverless looks like, what the Serverless framework actually is and how it helps developers, how an open source company makes money, how Serverless differentiated itself from AWS, the differences between Serverless plugins and components, what’s in the company’s future, and more.
Episode Show Notes & Transcript
About AJ Stuyvenberg
Aaron Stuyvenberg (AJ) is a Senior Engineer at Serverless Inc, focused on creating the best possible Serverless developer experience. Before Serverless, he was a Lead Engineer at SportsEngine (an NBCUniversal company). When he's not busy writing software, you can find him skydiving, BASE jumping, biking, or fishing.
Links Referenced:
Transcript
Speaker 1: Hello and welcome to Screaming In The Cloud with your host cloud economist, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world and ridiculous titles for which Corey refuses to apologize. This is Screaming In The Cloud.
Corey: This week’s episode of Screaming in the Cloud is sponsored by X-Team. X-Team is a 100% remote company that helps other remote companies scale their development teams. You can live anywhere you like and enjoy a life of freedom while working on first-class company environments. I gotta say, I’m pretty skeptical of “remote work” environments, so I got on the phone with these folks for about half an hour, and, let me level with you: I’ve gotta say I believe in what they’re doing and their story is compelling. If I didn’t believe that, I promise you I wouldn’t say it. If you would like to work for a company that doesn’t require that you live in San Francisco, take my advice and check out X-Team. They’re hiring both developers and devops engineers. Check them out at the letter x dash Team dot com slash cloud. That’s x-team.com/cloud to learn more. Thank you for sponsoring this ridiculous podcast.
Welcome to Screaming In The Cloud. I'm Corey Quinn. I'm joined this week by AJ Stuyvenberg, a senior engineer at Serverless Inc. Welcome to the show, AJ.
AJ: Thank you, Corey. Thanks for having me.
Corey: So, we've had Austin Collins, the founder and CEO, if I'm not mistaken, of Serverless Inc. on the show before, but enough has changed since then that it's time to have a different conversation ideally with a different person. So, we've at least validated now that two people work at Serverless.com.
AJ: That's correct. And there are at least two of us.
Corey: Excellent. We have not yet proven that you folks stay longer than 15 minutes. But that's a Serverless joke. Ba-dum-tiss!
AJ: Love it.
Corey: So, let's start at the very beginning. What do you do at Serverless Inc?
AJ: Yeah. So I am a senior platform engineer and I'm working on some of our new features that we launched on July 22nd that we call the Serverless Framework Dashboard. It's sort of a sister product that is launched along with our Serverless Framework CLI that everyone kind of knows and loves already and the goal is really to offer a full life cycle Serverless application experience.
Corey: Got you. Let's start at the beginning. I'm sure you've told the story enough, so I'm going to take a whack at it about what the Serverless Framework does and please correct me when I get things hilariously wrong.
Once upon a time we would build Lambda functions and things like it in AWS, or their equivalent in other, lesser providers, and you would wind up seeing that it was incredibly painful and manual to do yourself. There were a lot of boilerplate steps, a lot of things that were finicky, and you'd tear your hair out. Then we sort of evolved to the next step, at least in my case, of writing terrible Bash scripts that would do this for us. From there we iterated a step further and thought, okay, now I could do a little bit of this with Python and hey, there's this framework. Now I write in my favorite configuration language, YAML, where I wind up effectively giving only a few lines telling it what to do. This interprets it in Node, the worst of all languages, and then spits out, effectively under the hood, CloudFormation, then applies it with no interaction from me into my AWS environment. Have I nailed the salient historical points?
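The handful of YAML lines Corey describes look roughly like this minimal sketch (the service, handler, and path names are illustrative, not from the episode):

```yaml
# serverless.yml -- a minimal, illustrative example
service: newsletter-pipeline

provider:
  name: aws
  runtime: python3.7

functions:
  process:
    handler: handler.process      # handler.py, function process()
    events:
      - http:                     # the framework wires up API Gateway for you
          path: issues
          method: post
```

Running `serverless deploy` against a file like this packages the code, generates the CloudFormation template under the hood, and applies it to the configured AWS account.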
AJ: Yeah, I think you did along with your beautiful commentary as well.
Corey: Well thank you. And that's effectively what this did a year ago when I finally made the switch, saw the light, et cetera, et cetera. My ridiculous newsletter, for example, has about a dozen of these things all tied together under the hood for the production pipeline and hey, using Serverless made this a lot easier. But everything I've described so far, first has sort of been in a bit of stasis for a little while, and secondly is entirely open source and available to the community. So first, have I just been not paying attention? Or has there been a bit of a lull in feature releases until recently?
AJ: That's a great question. We've been making a lot of headway supporting a lot of different runtimes just in general, along with lots of, you know, supporting lots of new features that AWS has launched. So specifically, I would point to the recent launch of EventBridge. Almost two weeks after that was launched we actually had a support for it inside of the Serverless framework. So a lot-
Corey: As of time of this recording, it's been out for about a month and there is still no cloud formation support.
AJ: That's correct. We had to implement it using a custom cloud formation resource.
Corey: Because everything is terrible.
AJ: Yeah. And to get something done in life, you have to suffer a little bit.
Corey: Well, NDAs are super important and whenever you're building something at AWS, you sign one that agrees that you won't tell the rest of the world what you're doing. Then you start building a new service and you sign a second NDA saying that you won't breathe a word of what you're building to either the CloudFormation or the tagging teams.
AJ: That doesn't make any sense to me, but I don't work there.
Corey: I can imagine that none of that is actually true, but that's my head narrative of why things come out without baseline support for these things.
AJ: It does feel like that sometimes and a lot of what we've done over the last year is really try and support the breadth of the services, not only inside AWS but elsewhere. Because what we found is to make a compelling offering on a framework level, we really have to have everything that people want to do, right? If people end up going back to that sort of a Bash script, deploy pipeline kind of world you described earlier, we've really failed. Right? So in that time that maybe some people perceive as us going dark, we've really been working on supporting a lot of different services and making, making improvements to the framework that maybe aren't as big as far as like a big splashy product on Hacker News kind of launch.
Corey: Okay. I would also point out that I am the absolute worst kind of customer in that I'm not really a customer at all. Everything I've been using the Serverless framework for is freely available as open source. I pay you nothing and I complain incredibly loudly. I'm like the internet brought to life.
AJ: Absolutely.
Corey: So with that in mind, what is effectively the business model here? At what point do people start paying you? I imagine it's not an out of the goodness of their heart situation and I don't think that the investors behind you folks have suddenly decided to turn this into the weirdest form of philanthropy we've ever seen.
AJ: Yeah, that's a great question. So alongside with the framework and open source contributions we've made over the last year, we've also been really hard at work on this dashboard product and that's what we actually do sell. There's commercial offerings. It is completely free to try. Free up to 1 million invocations. We'll track them, we'll give you all sorts of insights onto what your services are doing. You'll be able to use things like our secrets functionality, which will allow you to encrypt secrets and then decrypt them at run times so you can pass them between your service without actually having been floating around in plain text or in get repo.
You can use Safeguards, which is a policy-as-code framework. It allows you to control what your team can and can't do: which regions you can use, which AWS accounts you can deploy to, when you can deploy, et cetera. All this stuff is completely free to use. But we do have paid plans, and that's where we do make money. So after you go past a million invocations in a month, we'll charge $10 per million invocations and $99 per seat. And then we do have Enterprise plans available, which allow you to run this entire thing on your own cloud infrastructure.
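As a sketch of the secrets flow AJ describes, a dashboard-managed secret gets referenced from serverless.yml and resolved at deploy time rather than committed to the repo (the `${secrets:...}` variable syntax and names here are illustrative; check the dashboard documentation for the exact form):

```yaml
functions:
  charge:
    handler: handler.charge
    environment:
      # resolved from the dashboard's encrypted secrets store,
      # so the value never sits in the Git repo in plain text
      STRIPE_KEY: ${secrets:stripeKey}
```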
Corey: Awesome. Okay. So let's break down some of the releases in a bit of a ... I guess order that they were released. Correct me if I get any of this wrong. The big one that really caught my attention was that the Serverless framework is apparently now going full life cycle around offering things around testing, deployments, monitoring and observability and probably a few pieces I'm missing.
AJ: Yeah, that's completely correct. On July 22nd we announced this kind of new and expanded Serverless framework, which includes the real-time monitoring, the testing, the secrets management, other security features. They live inside the Serverless framework dashboard, which is kind of integrated into this Serverless framework CLI that we already know. And again, this dashboard is completely free for you to use, up to a million invocations per month.
Corey: Got you. So the way that I always thought of Serverless framework is it wound up ... I would run effectively a wrapper command around a whole bunch of stuff deep under the hood and it would package up my Lambda functions. It would deploy things appropriately. It would wire up API gateway without forcing me to read the documentation for API gateway, which was at the time impenetrable. It's like a networking Swiss army knife. It can do a whole bunch of different things and the documentation was entirely in Swiss German. So you'd sort of get it wrong, get it right by getting it wrong a bunch of times first. But now it's added a bunch of capabilities that go beyond just pushing out functions and keeping them updated. What have you added?
AJ: Yeah, absolutely. So the biggest thing would be the kind of monitoring and observability capabilities we've added on this dashboard. So we'll get you insights into things like hey, a brand new error was just detected. Here's the full stack trace pointing to where the error was thrown. Here's how many times it was thrown in the last X amount of time. Here's the complete reconstructed logs from Lambda, it kind of allows you to immediately diagnose and describe the issue to your coworkers or yourself to go off and patch and figure out and solve the problem.
Those types of insights are also available kind of in aggregate, where you're able to see ... okay, so let's say, for example, during the average week I might do 5,000 invocations per day, and then one day I might do 10,000 or a hundred thousand invocations. We'll trigger an automated insight that says, "Hey, this function is now doing a lot more invocations than it was doing previously. This might be something you want to look into."
So it's sort of the full life cycle of your application: more than just the packaging, more than just the configuration of the services that you're interacting with inside of your favorite cloud provider, but also bringing it all together into an experience that, you know, someone who's not necessarily traditionally familiar with Serverless would be able to understand and grapple with.
Corey: Understood. So if I take a look across the ecosystem now, I think that the biggest change is that historically I would use the Serverless framework to package these things up and get my applications up and running. I'd use a different system entirely for the CI/CD stuff that I was doing. I would pay a different vendor to handle the monitoring and observability into it. And now it seems like you're almost making a horizontal play across the ecosystem. What's the reasoning behind that?
AJ: Yeah, that's a great question. So we think we're in the best position to offer the best experience using the Serverless framework. We don't think that anyone should be forced to cobble together their own solution using multiple providers or writing their own custom log pipeline to do analytics or any of the sort. We think that we should be able to offer something compelling out of the box and easy. After all, that's kind of the Serverless promise. Like get up and running, very little configuration, scale to zero, scale to infinity, paper execution.
And that's the type of thing we're trying to bring to the entire lifecycle of your app. Because once it's running in production, you need more than simply easy access to different services and an easy way to package and deploy your application. You need to monitor it, right? You need to handle secrets management. You need to make sure that proper safeguards are followed and things are done according to your company or your group's policies. And you need to be able to keep an eye on things. And that's what we're trying to do. We're trying to be the one-stop shop for all things Serverless.
Corey: Got you. It's interesting because historically in order to get all these things done responsibly with best of breed, you had to go build a microservices architecture by stringing together a microservices vendor strategy where you have a bunch of different companies doing the different components for you and then tying that all together into something that vaguely resembled some kind of cohesive narrative. Now it seems like that's no longer the case.
AJ: Yeah, absolutely. You know, the downside was sort of that approach and experiences that you end up with this really sort of fragile ecosystem surrounding your application. And these applications don't live in a vacuum. They have to interact with other services, other applications. So to have this sort of really immense configuration alongside of it simply to monitor your applications isn't really a solution anymore. So we needed this way to have one place to go and look and see what is my service? What is my application? What is my function doing at this time? And why is it broken? Let me get there and fix it quickly.
Corey: Right. And that tends to lead to some interesting patterns where you effectively have to pass through a whole bunch of different tooling in order to get a insight into it. Which I guess raises the real question I've got. Again, this is not a sponsored episode. You're here because I invited you here. It's ah, nice of me, wasn't it? But it also means that it's not a sales pitch. So you get to deal with the fun questions. Namely, if I'm going to effectively pick a single vendor and go all in with them for all of my Serverless needs, why wouldn't I go with, for example, AWS themselves, if that's what I'm doing? I mean they have services that do this, they have services that do everything up to and including talking to satellites in orbit. So if I'm going to wind up going down that strategy, why pick you instead of them?
AJ: Great question. So the answer is simply that we think we offer the best experience on top of the Serverless framework that you're already using. We understand everything that's going on in that Serverless EMO file that you're configuring. If you have multiple Serverless apps, we are understanding how they're talking across things like API gateway or SQS or SMS. So it's a lot simpler for us to give you a perspective, you as the customer, a perspective of your application that mirrors what you understand it and not simply a bunch of little services linked together.
Now I think there's competing offerings all over the map here. And if you still want to go through the joy of creating your own log pipeline or all of your own metrics or ingest system or monitoring or what have you, you still can. The Serverless framework is still completely open source. You're free to do that. But if you're looking for one place to get up and running quickly, to get started and get your code out the door to production as simply as possible, I think we offer the best solution there.
Corey: Got you. I've got to say that's ... as much as I like to challenge you on this, I obviously agree. I've been using you folks for a while now. So what came after the full life cycle release? There was something else.
AJ: Yeah. Just a couple of weeks after we finally announced the Serverless components, which is sort of a new take on using Serverless services in your entire application ecosystem. The idea is you should be able to deploy an entire Serverless use case, like a blog or a registration system, a payment processing app or an entire full stack page. Should be able to do that on top of whatever you're doing in the cloud without ever managing that configuration. Right? That's kind of the vision behind components. And the idea is that you can define these use cases, these Serverless use cases as components and you interact with them in a way that you would be familiar with if you are using React, for example.
Corey: Got you. Didn't you release something that was called Serverless Components a year or so ago?
AJ: I think it went into beta officially a year or so ago and then we finally released a GA.
Corey: Okay. So is it fair to view this as effectively, I need a portion of an app to do a thing. Maybe it's an image re-sizer, it's a classic canonical Serverless example. And normally you might consider using something like AWS' Serverless application repo, but maybe you don't hate yourself enough to wind up using SAM CLI instead. So this winds up meaning you don't have to make that choice.
AJ: Yeah, you can sort of pick and choose what aspects of Serverless use cases you want. And like you had said, the image re-sizer is like a super, super common example, but there's so much more than that. Right? If you want to run a really, really simple monolith out the side of ... or I'm sorry, on top of your application, you can. There are examples for how to do this where you might have like a ... what we would call like a mono Lambda structure where you have a rest API that's routed under the root domain, right? And this entire application can simply be deployed with one command using Serverless components.
Corey: Got you. So as you look at this across the board, what inspired you folks to build this out? What customer pain was not being met by other options?
AJ: Yeah, that's a great question. I think the biggest was reuse, right? When we talk about developer practices, things like Solid, what we really want to do is reduce the coupling between aspects of your software. And we're trying to do the same thing for Serverless use cases. So instead of having ... You might have that image re-sizer could be part of one application in one aspect or one area of your microservices architecture, but you're going to want to use it somewhere else likely. And sometimes that means either redeploying it or other times it means simply has to go around a route to do that. Either way with components you can package these things up in a really easy to use way. Include them just like you would any other piece of code, right? And then inject it into your service. And I think that's where that really came out of. That was kind of the inspiration.
Corey: So how much of what you've built out as a part of Serverless Components is, I guess, tied to your enterprise offering versus available to the larger community?
AJ: Yeah, it's 100% open source right now. There are a few last steps we have to complete before we'll tie it into the enterprise offering. Like I said, we did just launch it. However, I don't think that road will be very long.
Corey: Yeah, there's some of the things you've said are compelling. At various times in my evolution of what I built, it would have been useful, for example, to have a payment gateway that I could have dropped in rather having to build my own.
AJ: Totally.
Corey: For better or worse, I'm not irresponsible enough to try rolling my own payment system or shopping cart or crypto. So I smiled, nodded and paid someone else to make all that problem go away. But there's something valuable about being able to take what other people have built and had audited and done correctly and just drop it in place.
AJ: Precisely. And that value extends to more than simply the use case inside of that generic image re-sizer or payment gateway. But put yourself in a position in a larger corporate environment where you might have several teams working together and let's pretend that their application has specific API contracts that kind of bind the different services together and you want to just deploy that middleware layer anywhere you want inside of your application. You could write that as a component and then simply share it so you have one source of truth for that sort of interoperability and you can share that between all the different teams. And now it's very simple to get started instead of kind of each team implementing their own flavor, which I'm sure you've experienced at different parts in your career.
Corey: So something else you launched recently was called Safeguards. What is that?
AJ: Great question. So Safeguards is a feature that is built into our Serverless dashboard. What it is really is a policy as code framework, which means you can define different policies for your Serverless applications in code. Now we include several for free that you can try out. Some really simple examples are Whitelist. You know, AWS regions you can deploy to. For example, you can Whitelist specific accounts that you can deploy to. You could restrict things like, for example, people often wildly over-provision IAM roles with wildcards. So we can easily restrict things like no wildcards in your IAM roles. You can also-
Corey: Is that done via config rules? Service control policies? Something else?
AJ: It's done by ... it's actually ingesting your Serverless YAML file. So because we understand what you're trying to do and we are the ones who are ... like, our framework is responsible for translating your YAML into cloud formation and then we can actually, we can use safeguards to digest that and appropriately allow or deny those configuration changes. But it's more than just configuration management. It also allows you to control when your application could or couldn't be deployed. For example, if you're one of the many groups that has a no deploy on Friday policy or you say no deploying on Friday afternoon.
Corey: Careful, say that three times and you'll end up summoning Charity Majors. They yell at you.
AJ: I personally believe that we should deploy forever and always, right? As frequently as possible. But I understand that some people don't.
Corey: Well that depends, too. Are we talking about code that you trust or code that someone else wrote?
AJ: Absolutely. I mean we're at the size at Serverless Inc. thankfully where the answer is both. We have seven people who've been working on this Serverless dashboard offering, so the group is small and the knowledge is tribal at this point. But we're still, we're growing fast and we're making lots of good changes as we go.
Corey: This week’s episode is sponsored by CHAOSSEARCH. If you’ve ever tried managing Elasticsearch yourself, you know that it is of the Devil. You have to manage a series of instances, you have to potentially deal with a managed service. What if all that went away? CHAOSSEARCH does that. It winds up taking the data that lives in your S3 buckets and indexing that and providing an Elasticsearch compatible API. You don’t have to manage infrastructure, you don’t have to play stupid slap-and-tickle games with various licensing arrangements, fundamentally, you wind up dealing with a better user experience for roughly 80% less than you’ll spend on managing actual Elasticsearch. CHAOSSEARCH is one of those rare companies where I don’t just advertise for them, I actively recommend them to my clients because, fundamentally, they’re hitting it out of the park. To learn more, look at CHAOSSEARCH.io. CHAOSSEARCH is of course all in capital letters because despite CHAOSSEARCHING they cannot find the caps lock key to turn it off. My thanks to CHAOSSEARCH for sponsoring this ridiculous podcast.
Corey: How do you wind up building something like this, I guess in the shadow of AWS, because they kind of cast a large one? Where this started gaining traction and then it felt like they realized what was going on. Shrieked, decided they were going to go in their own direction and started trying to launch the SAM CLI, which despite repeated attempts, I can't make hide nor head or tail of, and it still feels to me at least like it is requiring too much boiler plate and it doesn't make the same intuitive level of sense that the Serverless framework does. That's just my personal opinion, but it seems to be one that's widely shared. You take a look at the rest of the stuff that you're offering and they are building offerings around that stuff as well. At some point, does it feel like you have diverged from them in a spiritual alignment capacity?
AJ: I think we diverged from the very beginning. I mean our goal is to let you build Serverless applications on whatever cloud provider you want. We support AWS, we support Azure, we support Google Cloud Platform and we support IBM OpenWhisk. So that's something that SAM is never going to compete with on a philosophical level. They would not build tools for their competitors and that's where I think is kind of the ideological separation of the two. It's really-
Corey: Yes. But not for nothing. I mean that's valuable in a tool, sure. But at the same time, how many people do you really see using the Serverless framework and then deploying from the same repository, for example, into multiple providers?
AJ: Yeah, that's a great question. I haven't personally seen it. I would expect that that will probably come up a lot more as different vendors kind of continue to either dominate or introduce new features that people want to use. Obviously it's all about capabilities, right? It's all about using services that these vendors provide and that's something that I think we have the most compelling offering on right now.
Now, your question was, why would you build something like this kind of in the shadow of AWS? The answer was: we needed it. We weren't getting enough from the services to do what we needed to do. So you know, Serverless Inc. is a big believer in dogfooding our own product. Our entire dashboard application is built using the Serverless framework. A lot of aspects of our development are monitored using the Serverless dashboard. So we're using it every day. And we think that that sort of mentality really can put us a step in front.
Corey: I would agree with you and I think there is value in a tool being able to speak to anything. As far as any individual customer, I get the sense that they probably don't care. For example, I care profoundly about your support for AWS functions, but I don't use Serverless technologies from other providers so I could not possibly care less about the state of your support for those things. I feel like it's one of those things that matters in the aggregate, but on the individual customer level it's pretty far down the list of things anyone cares the slightest about.
AJ: Yeah. And concerns about that type of thing vary depending on who you are. Right? Like a developer or an individual contributor like yourself doesn't care about a service they're not using. But a chief information officer really does care if they have the capability to move aspects of their Serverless application from one vendor to the other if needed. So it really depends on the target audience.
Corey: Got you. So next, normally the way that one contributes to an open source project is they open issues on GitHub, which is how I insist upon pronouncing it, but I don't have to do that because I have a podcast. Instead, I'm going to give you a litany of complaints about Serverless for you to address now. That's right. It's ambush hour.
AJ: Let's do it, I'm ready.
Corey: All right. For starters, I have to use NPM to get it up and running, which exposes a lot of things under the hood, namely NPM. Does require NPM in the first place?
AJ: A great question. Right now our framework is published on NPM. We are experimenting with publishing binaries on our own...
Corey: Now in theory, I could wind up just rolling it myself without ever touching NPM and just use the Java script and compile it manually, but that sounds like something a fool would do.
AJ: It does sound like something a fool would do. Yes. And we are, like I said, trying to work through a point where you can download this binary on your own.
Corey: Right, because invariably I find that everything wants different versions of NPM, so I have to use NVM to manage NPM versions and now I'm staring down at a sad abyss that annoys me. I want to be able to do things like brew install Serverless. Or I don't know, app get install Serverless. Or if I'm using Windows I just go home and cry for a while. Then I get a Linux box and then I can Yum install Serverless.
AJ: Yeah, absolutely. I think we see that vision, too, and like I said, that's been on the roadmap and that's one of the things we're really working towards is being able to do binary drop in installations of our framework.
Corey: Okay, next complaint. It feels like it is fighting ideologically with SAM. AWS is a Serverless application model. Part of this is SAM's complete and inability to articulate what it's for in any understandable capacity. You read the documentation, you are more confused than when you started. This feels like it's an artifact of AWS' willingness to be misunderstood for long periods of time and that being interpreted as licensed to mumble.
AJ: Yeah. I mean I'm not going to comment necessarily on your interpretation of SAM, but a big part is buying into sort of the ethos and the vision of the tool you're using, right? Like our vision is to let you just deploy use cases simply and really focus on writing your business logic in the form of a Lambda. You should not be responsible for going out and trying to figure out how to wire your Lambda up to API gateway or SNS or SQS. That's not something that any developer wants to spend their time on. And that's what we're trying to do. We're trying to abstract away the configuration of these services and let you as a developer focus solely on the experience of building your business logic.
Corey: Fair enough. Next complaint. It seems like you try to be all things to all supported Lambda runtime. So there is, of course, the whole story of running your own custom layer, which generally is not a best practice if you don't have to, but it does definitely feel like there are favorites being played. For example, it is way easier for me to build a function in Python than it is in COBOL, which is probably as it should be. But do you find that the experience is subpar depending upon other lesser widely deployed languages?
AJ: Yeah, it really depends. Clearly if you look into the large ecosystem of Serverless plugins that are available, you'll notice a trend towards things like Python and Node.js. I think that reflects just the reality of the world we're in right now in the modern web development age. If you do really want COBOL, I mean, I know the head maintainer of the Serverless framework and we can talk with them about it, but I don't expect you to get much traction because I don't think it's really being demanded.
Corey: Fair enough. Last time I played with this in significant depth, in the wake of the Capital One over-scoped IAM role issue, it was ... you could set an IAM policy ... sorry, an IAM role within a service and it would apply to all functions in that service. But scoping that down further on an individual function basis, for example, that function needs to be able to write to DynamoDB, but none of the others do, was painful. I'd have to wind up rolling a fair bit of custom CloudFormation myself. So I just shrugged and over-scoped IAM roles because I'm reckless. Is that still the case or has that changed and I just never noticed?
AJ: So all of the functions take an IAM role statement, so you can actually grant IAM access control on an individual function level. But at the same time, that still creates a role. It doesn't prevent you from inheriting it in another function. Cloud security's a really tricky thing and, as the Capital One breach et al. show, we see that routinely it gets misconfigured, and that's a big part of what we're trying to do around Safeguards: to define these and limit their access.

But for your specific question, the answer is you can. Right underneath the name of the function, it takes a parameter called iamRoleStatements, which then takes an array of IAM role statements: effect, allow, action, whatever, resource, whatever.
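To make the shape AJ is spelling out concrete, a statement looks roughly like this (treat the placement as an illustration: in the framework, iamRoleStatements has historically lived under the provider block and applied service-wide, with true per-function roles commonly added via the community serverless-iam-roles-per-function plugin; the table ARN is invented):

```yaml
provider:
  name: aws
  iamRoleStatements:           # array of IAM role statements
    - Effect: Allow
      Action:
        - dynamodb:PutItem     # grant only what this service needs
      Resource: arn:aws:dynamodb:us-east-1:123456789012:table/my-table
```

Scoping Action and Resource down like this is exactly the alternative to the over-scoped roles Corey describes shrugging into.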
Corey: Fair enough. Another one is I need to use the universe of plugins to do a lot of common things, and that tends to cause a few problems. One, I have to wind up installing them and then NPM shrieks whenever it can't find them and I get to go back into node hell, which isn't really your fault. But then the quality of some of those plugins is uneven. There are plugins out there that let me integrate with CloudFront and Route 53 for domain management, but they in turn then, oh, we're going to update your CloudFront distribution. Not going anywhere for a while? Grab a Snickers. And that's painful, for example, when you're doing this as part of a CI/CD pipeline, because you pay by the minute in CodeBuild. So that feels like it's one of those cases where I never quite know whether I can trust a plugin as something that is a first-class citizen or something that someone beat together in the middle of the night. Is there any curation around that?
AJ: That's a really good question, Corey, and the answer is yes. If you go to serverless.com/plugins we have a full plugin directory that you can search. There are check boxes for certified, approved, and community, so there are different levels. It starts at community, then there's approved and certified. And that'd be what I'd suggest going to as your first resource, to kind of determine-
Corey: And the ones without the check marks install Bitcoin miners?
AJ: I can't guarantee that, I haven't read the code, but it is open source and I would encourage you to do that.
Corey: Excellent. I encourage me to what? Read the code or install Bitcoin miners on other people's systems?
AJ: Read the code and then funnel the Bitcoin funds to me. Thank you.
Corey: Absolutely. It turns out as we said on the show before, it is absolutely economical to mine Bitcoin in the cloud. The trick is to use someone else's account to do it.
AJ: Yeah. It's even trickier with Lambda.
Corey: So one of my personal favorite things to make fun of is naming things and this is no exception, as well. First, the fact that you called it Serverless at all. Are we talking the architectural pattern? Are we talking about the Serverless framework? Are we talking about the company? It's very challenging to disambiguate that from time to time. On one hand, awesome SEO juice, but on the other, it feels like that tends to cause a fair bit of customer confusion.
AJ: Yeah. Austen touched on that actually in your first episode with him and I would echo his sentiment that-
Corey: But he hasn't renamed the company since, so we're touching on it again.
AJ: I think we're really pushing Serverless Inc. as the brand for the actual company and Serverless to refer to the framework, would be my answer to that question.
Corey: Understood. And that's fair. We're not done with naming yet. You have plugins and you have components and it's going to become increasingly challenging, at least for me, to keep straight which does which. Am I the only person that's seeing issues with that, I guess, overlap between which side of the fence one of those things would go on? Or is that something that is ultimately designed to be aligned along the same axis?
AJ: I don't think you're the only one confused about that. I think that'd be a stretch to say. Serverless components are really about reusing Serverless use cases. Right? And Serverless plugins are really about enhancing the Serverless framework to do other things on top of the open source offering. So that would be how I would kind of delineate between the two.
Corey: That's fair and understood. So I think that really runs out my list of things to complain about. What haven't I complained about that I really should?
AJ: I think we're all still waiting for AWS Lambda in-VPC cold start times to go down. I don't know about you, but I come from a very relational database background using MySQL or PostgreSQL and right now-
Corey: My relational database of choice remains Route 53.
AJ: Oh wow. That's one option. You do send out-
Corey: You can use things that are not databases as databases and it's a lot of fun and scares people away from asking further questions.
AJ: It's true. Anything's a metadata service if you try hard and believe in yourself.
Corey: Exactly.
AJ: One of the things that I would really like to see out of the Serverless ecosystem is a reduction in the cold start time of AWS Lambda functions inside of a VPC. That would really allow us to start to utilize all the services that Amazon includes. Things like RDS, right? Relational databases that you can't use right now, where you're kind of stuck using HTTP implementations. Obviously we've seen Jeremy Daly's blog posts about Aurora getting a lot better over the last year and I think it's a great step. But ultimately, I think for me, the biggest thing that I'd love to be able to interact with inside of a Serverless application is a relational database. I think that's kind of the last big piece before all the services that developers are frequently using, things like Redis or Memcached or Postgres, can actually be utilized in an efficient way, because right now that cold start time is just a killer.
Corey: Understood. One last question I have for you around this, and it's a bit of a doozy I suppose, is if I take a look across the ecosystem, and as a cloud economist I tend to see a fair bit of this, there doesn't seem to be any expensive problem around Serverless technologies, if we restrict that definition to functions, yet. Sure, S3 you can always store more data there and it gets really expensive, DynamoDB too if you're not careful, but Lambda functions always seem to be a little on the strange side as far as no one caring about the cost. For example, last month my Lambda bill for all the stuff I do was 27 cents before credits. And if you take a look at other companies, whenever you see hundreds of dollars in Lambda, you're seeing many thousands or more in EC2 usage. Are there currently expensive cost-side problems in the world of, let's say, Lambda functions?
AJ: Yeah, that's a really good question and I'll actually answer that in a couple of ways. The first, we should admit that compute is a commodity at this point. Would you agree?
Corey: Absolutely.
AJ: Right. So like any commodity, the providers are finding more and more efficient ways to provide it. Lambda is sort of the natural evolution of that progression. Previously, AWS was, you know, selling EC2 instances, virtualized instances on top of machines. But they were guaranteeing a certain amount of memory, a certain amount of CPU power. Really, a certain amount of compute was being sold to you. Lambda takes that a step further by not guaranteeing you anything and just saying, we'll run your function when it gets called, which allows them to really pack more of these Lambda functions and runtimes into smaller servers at the end of the day. I mean we say Serverless, but somewhere down there are servers, I just don't care about them.
AJ: So from that standpoint, you're correct in saying that the compute bills are generally cheap now. The expensive part depends on which services you interact with and how they're set up. I've read several blog posts about people getting burned by ridiculous DynamoDB and API Gateway bills. There have been popular blog posts that made the rounds discussing how you can save, you know, 30 or 60% of your AWS bill by switching from API Gateway to Application Load Balancer, which I think is all true.

I think a lot of people getting burned on the cost front comes from not recognizing essentially what they're provisioning, or not using the correct data model or data access pattern for their use case. That being said, it makes sense that your Lambda bill will be cheaper than your EC2 bill, for the most part. Right? Your EC2 bill is like your house with the air conditioning running all day long, versus your Lambda bill, which is more like stepping into your car with the air conditioning running and then turning it off when you're done. It's night and day.
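AJ's house-versus-car analogy is easy to put numbers on. As a back-of-the-envelope sketch (the unit prices are assumptions based on published us-east-1 Lambda pricing around the time of recording, the free tier is ignored, and the traffic figures are invented for illustration):

```python
# Rough monthly Lambda cost estimate. Prices are assumptions:
# $0.20 per 1M requests, $0.0000166667 per GB-second of compute.
REQUEST_PRICE_PER_MILLION = 0.20
GB_SECOND_PRICE = 0.0000166667

def lambda_monthly_cost(invocations, avg_duration_ms, memory_mb):
    """Estimate a monthly Lambda bill in dollars, ignoring the free tier."""
    request_cost = invocations / 1_000_000 * REQUEST_PRICE_PER_MILLION
    # Compute is billed in GB-seconds: duration times allocated memory.
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    compute_cost = gb_seconds * GB_SECOND_PRICE
    return request_cost + compute_cost

# Three million invocations a month at 120 ms average on 256 MB
# still comes out to only a couple of dollars.
print(round(lambda_monthly_cost(3_000_000, 120, 256), 2))  # → 2.1
```

The arithmetic illustrates why, as both speakers note, the expensive line items tend to be the services around Lambda rather than Lambda itself.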
Corey: It absolutely is. The concern that I have is that it's always challenging to wind up convincing a company that's spending, I don't know, $300 a month on their Lambda bill to spend even at least as much, if not more, on the tooling around Lambda. I was using a monitoring product for a while that would tell me in big letters on the dashboard that this month's Lambda bill is 22 cents. That is 7 cents higher than it was last month. Maybe look into this. Yeah. How about I spend my time doing literally anything else, because it's more valuable than that, and it continually almost eroded the value proposition I was getting. I was thrilled to pay more for that than I was for my Lambda functions, but the focus on cost and cost optimization in that scenario felt like a hell of an anti-pattern.
AJ: Yeah, absolutely. I mean there's a price for your time and at 7 cents it seems like it's a little cheap.
Corey: A little bit.
AJ: That being said, you know, scale to zero and pay-per-execution are what Serverless is all about. Only paying for what you use is what it's all about. And I think we're at that point now where it's really enlightening for people that have been AWS customers for five, ten years, who are used to paying hundreds of dollars in compute bills, to see new services cost pennies, right? Now is that worth an alert in your inbox? I don't know. I would guess that a person in your position doesn't read too much email anyway-
Corey: I email 15,000 people a week. I assure you I read more than you'd think.
AJ: Oh man, that sounds awful.
Corey: People have opinions on the internet.
AJ: Yeah, and they have to be heard. And that's why we follow you on Twitter, Corey.
Corey: Exactly. Wait, people read that? I thought that was a write only medium?
AJ: Nope. No, you'd be shocked. It's a multiplexing system.
Corey: Oh dear.
AJ: One to many.
Corey: Okay, so if people want to learn more about Serverless or what you're up to in particular, for some godforsaken reason, where can they find you?
AJ: Yeah, you can check out what we're doing at www.Serverless.com. You can catch us on GitHub, github.com/Serverless. If you have any interest in following me whatsoever, I don't recommend it for the same reason you don't recommend people follow you, you can follow me on Twitter. Reach out.
Corey: Well, thank you so much for taking the time to speak with me today. I appreciate it.
AJ: Absolutely, Corey, thanks for having me.
Corey: Of course. If you're listening to this show and you love it, please give us a positive rating on iTunes. If you're listening to this show and can't stand it, please give us a positive rating on iTunes. I'm Corey Quinn. This has been AJ Stuyvenberg. This is Screaming In The Cloud.
Speaker 1: This has been this week's episode of Screaming In The Cloud. You can also find more of Corey at screaminginthecloud.com or wherever fine snark is sold.
Speaker 1: This has been a HumblePod production. Stay humble.