Episode Summary
Join Jesse, Amy, and Tim again to get more on AWS and the kind of insight only our cloud economists can provide. This week, AWS Application Cost Profiler is in the rotation. What exactly is it? What is actually going on under all the buzzwords? What problems is it supposed to solve? Tune in to this week's “Friday From the Field” for the latest.
Our hosts dive into the details of how AWS Application Cost Profiler works and the kinds of issues it is advertised to solve. Some of the power under the hood is notable and worth “kicking the tires” on. But will it pay off, given the amount of work it requires from clients? They’ve taken it for a test drive and have all that you need to know. At the end of the day it comes down to one question: is it worth it? This week we find out.
Episode Show Notes & Transcript
Transcript
Corey: This episode is sponsored in part by LaunchDarkly. Take a look at what it takes to get your code into production. I’m going to just guess that it’s awful because it’s always awful. No one loves their deployment process. What if launching new features didn’t require you to do a full-on code and possibly infrastructure deploy? What if you could test on a small subset of users and then roll it back immediately if results aren’t what you expect? LaunchDarkly does exactly this. To learn more, visit launchdarkly.com and tell them Corey sent you, and watch for the wince.
Jesse: Hello, and welcome to AWS Morning Brief: Fridays From the Field. I’m Jesse DeRose.
Amy: I’m Amy Negrette.
Tim: And I’m Tim Banks.
Jesse: This is the podcast within a podcast where we talk about all the ways we’ve seen AWS used and abused in the wild with a healthy dose of complaining about AWS for good measure. Today, we’re going to be talking about a recent addition to the AWS family: AWS Application Cost Profiler.
Tim: But hold on for a second, Jesse, because AWS Application Cost Profiler we can get to; that’s rather unremarkable. I really want to talk about how impressed I am with AWS InfiniDash. I’ve been benchmarking this thing, and it is fan… tastic. It’s so good. And we could probably talk about it for a while, but suffice to say that I am far more impressed with AWS InfiniDash than I am with AWS Application Cost Profiler.
Jesse: You know, that’s fair. And I feel like InfiniDash should absolutely get credit where credit is due. I want to make sure that everybody can really understand the full breadth of everything that InfiniDash is able to accomplish. So, I want to make sure that we do get to that; maybe in a future episode, we can touch on that one. But for right now, I have lots of feelings about AWS Application Cost Profiler, and what better place to share those feelings than with two of my favorite people, Amy and Tim, and then all of you listeners who are listening in to this podcast. I can’t wait to dive into this. But I think we should probably start with, what is AWS Application Cost Profiler?
Amy: It is [unintelligible 00:01:54] in a trench coat.
Jesse: [laugh].
Amy: Which is the way AWS likes to solve problems sometimes. And in this case, it’s talking about separating billing costs by tenant and by service, which is certainly something a lot of people have problems with.
Jesse: That is a lot of buzzwords.
Amy: A lot of words there.
Jesse: Yeah. Looking at the documentation, the sales page, “AWS Application Cost Profiler is a managed service that helps us separate your AWS billing and costs by the tenants of your service.” That has a lot of buzzwords.
Tim: Well, to be fair, that’s also the majority of the documentation about the service.
Jesse: Yeah, that is fair. That is a lot of what we saw, and I think we’ll dive into the documentation in a minute. But before we share our thoughts on this service, because we did kick the tires on it and we want to share what our experience was like, I do want to call out that the problem AWS Application Cost Profiler is trying to solve, this idea of cost allocation for shared resources, is a real, valid problem, and it is one that is difficult to solve.
Amy: And we’ve had clients that have had this very explicit problem, and our findings have been that it’s very difficult to accurately splice usage and spend against what are essentially consumption-based metrics—which is how much a user or request is using all the way along your pipeline—if they’re not using dedicated resources.
Jesse: Yeah, when we talk about cost allocation, generally speaking, we talk about it from the perspective of tagging resources and moving resources into linked accounts, then separating or allocating spend by linked account. But if you’ve got a shared compute cluster, a shared database, any kind of shared resources where multiple tenants are using that infrastructure, slapping one tag on it isn’t going to solve the issue. Even putting all of those shared resources in a single linked account isn’t going to solve that issue. So, the problem of cost allocation for shared resources is real; it is a valid problem. So, let’s talk specifically about AWS Application Cost Profiler as a solution for this problem. As I mentioned, we kicked the tires on this solution earlier this week and we have some thoughts to share.
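To make the shared-resource problem Jesse is describing concrete, here is a minimal, hypothetical sketch (in Python) of the arithmetic teams usually end up doing by hand: take one bill line for a shared cluster and split it across tenants in proportion to a consumption metric collected from their own telemetry. The cost figure, tenant names, and request counts below are all made up.

# Hypothetical example: split one shared cluster's monthly cost across tenants
# in proportion to a usage metric (request count) gathered from your own telemetry.
shared_cluster_cost = 12_000.00  # a single shared-infrastructure line item, in dollars

requests_by_tenant = {
    "tenant-a": 4_500_000,
    "tenant-b": 3_000_000,
    "tenant-c": 500_000,
}

total_requests = sum(requests_by_tenant.values())
for tenant, requests in requests_by_tenant.items():
    allocated = shared_cluster_cost * requests / total_requests
    print(f"{tenant}: ${allocated:,.2f}")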
Tim: I think one of the main things around AWS Application Cost Profiler, like I said, is that there are some problems that could be solved there, there are some insights that people really want to gain here, but the problem is people don’t want to do a lot more work or rewrite their observability stack to do it. And that’s exactly what AWS Application Cost Profiler seems to be doing, or seems to want you to do. I think it only gets data from certain EC2 services, and it’s doing things that you can already do in other tools to do aggregation. And if I’m going to do all the work to rewrite that stack to be able to use the Profiler, wouldn’t I rather spend that time doing something else? That kind of comes down to the bottom line about it.
Jesse: Yeah, the biggest thing that I ran into, or that I experienced when we were setting up the Cost Profiler, is that the documentation basically said, “Okay, configure Cost Profiler and then submit your data.” And [unintelligible 00:05:54] stop, like wait, what? Wait, what do you mean, ‘submit data?’ And it said, “Okay, well now that you’ve got Cost Profiler as a service running, you need to upload all of the data that Cost Profiler is going to profile for you.” It boggles my mind.
Tim: And it has to be in this format, and it has to have these specific fields. And so if you’re not already emitting data in that format with those fields, now you have to go back and do that. And it’s not really solving any problems, but it offers to create more problems.
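For readers who want a sense of what “submit your data” actually involves, here is a minimal sketch, in Python with boto3, under a few assumptions: you generate tenant-usage records from your own telemetry, upload them to S3, and then point Application Cost Profiler’s ImportApplicationUsage API at the file. The field names in the sample record are illustrative rather than the service’s exact schema, and the bucket, key, account ID, and resource ID are all hypothetical.

# Hypothetical sketch of feeding tenant-usage data to AWS Application Cost Profiler.
import json
import boto3

s3 = boto3.client("s3")
acp = boto3.client("applicationcostprofiler")

# You are responsible for producing these records from your own instrumentation;
# the field names here are illustrative, not the service's exact schema.
usage_records = [
    {
        "ApplicationId": "example-app",
        "TenantId": "tenant-a",
        "UsageAccountId": "111122223333",
        "StartTime": 1625097600,   # epoch seconds for the usage window
        "EndTime": 1625101200,
        "ResourceId": "i-0abcd1234example",
    },
]

# Upload the usage file to S3 (hypothetical bucket and key).
bucket, key = "example-tenant-usage-bucket", "usage/2021-07-01.json"
s3.put_object(
    Bucket=bucket,
    Key=key,
    Body="\n".join(json.dumps(record) for record in usage_records),
)

# Ask Application Cost Profiler to import the uploaded usage data; it returns an import ID.
response = acp.import_application_usage(
    sourceS3Location={"bucket": bucket, "key": key}
)
print("Import ID:", response["importId"])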
Amy: And also, if you’re going to have to go through the work of instrumenting and managing all that data anyway, you could send it anywhere you wanted to. You could send it to your own database, to your own visualization tooling. You don’t need Profiler after that.
Jesse: Yeah, I think that’s a really good point, Amy. AWS Application Cost Profiler assumes that you already have this data somewhere. And if not, its documentation explicitly says that to generate reports, you need to submit tenant usage data of your software applications that use shared AWS resources. So, it explicitly expects you to already have this data. And if you are looking for a solution that is going to help you allocate the cost of shared resources and you already have this data somewhere else, there are better solutions out there than AWS Application Cost Profiler. As Amy said, you can send that data anywhere. AWS Application Cost Profiler probably isn’t going to be the first place that you think of because it probably doesn’t have as many features as other solutions.
Amy: If you were going to instrument things to that level, and let’s say you were using third-party services, you could normalize your own data and build out your own solution, or you can send it to a better data and analytics service. There are more mature solutions out there that require you to do less work.
Corey: This episode is sponsored in part by ChaosSearch. You could run Elasticsearch or Elastic Cloud, or OpenSearch, as they’re calling it now, or a self-hosted ELK stack. But why? ChaosSearch gives you the same API you’ve come to know and tolerate, along with unlimited data retention and no data movement. Just throw your data into S3 and proceed from there as you would expect. This is great for IT operations folks, for app performance monitoring, for cybersecurity. If you’re using Elasticsearch, consider not running Elasticsearch. They’re also available now on the AWS Marketplace, if you prefer not to go direct and have half of whatever you pay them count toward your EDP commitment. Discover what companies like Klarna, Equifax, Armor Security, and Blackboard already have. To learn more, visit chaossearch.io and tell them I sent you just so you can see them facepalm yet again.
Jesse: I feel like I’d missed something, broadly speaking. I get that this is a preview, I get that this is a step on the road for this solution, and I’m hoping that ultimately AWS Application Cost Profiler can automatically pull data from resources. And also, not just from EC2 compute resources, but from other shared services as well. I would love this service to be able to automatically dynamically pull this data from multiple AWS services that I already use. But this just feels like a very minimal first step to me.
Tim: And let’s be honest; AWS has a history of putting out services before they’re ready for primetime, even if they’re GA—
Jesse: Yeah.
Tim: —but this seems so un-useful that I’m not sure how it made it past the six-pager or the press release. It’s disappointing for a GA service from AWS.
Amy: What would you both like to see, other than it just being… more natively picked up by other services?
Tim: I would like to see either a UI for creating the data tables that you’re going to need, or a plugin that you can automatically put with those EC2 resources: an agent you can run, or a sidecar, or a collector that you just enable to gather that data automatically. Because right now, it’s not really useful at all. What it’s doing is basically the same thing you can do in an Excel spreadsheet. And that’s being very, very honest.
Jesse: Yeah, I think that’s a really good point. Ultimately, a lot of this data is not streamlined, and that’s the thing that is most frustrating for me right now. It is asking a lot of the customer in terms of engineering time, in terms of design work, in terms of implementation details, and I would love AWS to iterate on this service by providing that dynamically, making it easier to onboard and use this service.
Amy: Personally, what I would like is some either use case, or demonstration, or tutorial that shows how to track consumption costs using non-compute resources like Kinesis especially, because you’re shoving a lot of things in there and you just need to be able to track these things and have that show up in some sort of visualization that’s like Cost Explorer. Or even have that wired directly to Cost Explorer so that you can, from Cost Explorer, drill down to a request and be able to see what it is actually doing, and what it’s actually costing. I want a lot of things.
Jesse: [laugh]. But honestly, I think that’s why we’re here, you know? I want to make these services better. I want people to use the services. I want people to be able to allocate costs of shared resources. But it is still a hard problem to solve, and no one solution has quite solved it cleanly and easily yet.
You know what? Amy, to get back to your question, that’s ultimately what I would love to see, not just specifically with AWS Application Cost Profiler necessarily, but I would love to see better native tools in AWS to help break out the cost of shared resources, to help break out and measure how tenants are using shared resources in AWS, natively. More so than this solution.
Amy: I would love that. It would make so many things so much easier.
Jesse: Mm-hm. I’m definitely going to be adding that to my AWS wishlist for a future episode.
Tim: How many terabytes is your AWS wishlist right now?
Jesse: Oh… it is long. I, unfortunately, have made so many additions to my AWS wishlist that are qualitative things—more so than quantitative things—that just aren’t going to happen.
Amy: You become that kid at Christmas who gets onto Santa’s lap in the mall with a wish list that’s a roll of paper that just hops off the platform and goes down the hall, and all the other kids are staring at you and ready to punch you in the face when you get off. [laugh].
Jesse: [laugh]. All right, well that’ll do it for us this week, folks. If you’ve got questions you’d like us to answer please go to lastweekinaws.com/QA, fill out the form and we’d be happy to answer that question on a future episode. If you’ve enjoyed this podcast, please go to lastweekinaws.com/review and give it a five-star review on your podcast platform of choice, whereas if you hated this podcast, please go to lastweekinaws.com/review, give it a five-star rating on your podcast platform of choice and tell us how you allocate the costs of shared resources.
Announcer: This has been a HumblePod production. Stay humble.