S3’s Hidden Features and Quirks with Daniel Grzelak

Episode Summary

Corey Quinn and Daniel Grzelak take you on a journey through the wild and wonderful world of Amazon S3 in this episode. They explore the fun quirks and hidden surprises of S3, like the mysterious "Schrodinger's Objects" from incomplete uploads and the head-scratching differences between S3 bucket commands and the S3 API. Daniel and Corey break down common misunderstandings about S3 encryption and IAM policies, sharing stories of misconfigurations and security pitfalls.

Episode Show Notes & Transcript


Show Highlights: 

(00:00) - Introduction
(03:49) - Schrodinger's Objects
(05:23) - S3 Permissions and Security
(06:44) - Incomplete Multipart Uploads Causing Unexpected Billing Issues
(10:28) - Historical Oddities and Unexpected Behaviors of S3
(12:00) - Encryption Misconceptions
(15:17) - Durability and Reliability of S3
(17:49) - AWS Security and Trust
(21:01) - Practical Tips for S3 Users
(26:10) - Compliance Locks and Data Management
(29:13) - Closing Thoughts

About Daniel:

Daniel Grzelak is a 20-year cybersecurity industry veteran, currently working as Chief Innovation Officer at Plerion. He is no longer the CISO at Linktree nor the Head of Security at Atlassian, but he tries to stay relevant by hacking AWS and Cloud in general.

Links Referenced:

Personal Website: https://dagrz.com/
Things you wish you didn't need to know about S3: https://blog.plerion.com/things-you-wish-you-didnt-need-to-know-about-s3/

S3 Bucket Encryption Doesn't Work The Way You Think It Works: https://blog.plerion.com/s3-bucket-encryption-doesnt-work-the-way-you-think-it-works/

Transcript

Daniel: And the best part of it is, it actually works. Like that's, that's what I love about Amazon. Like, oh, it's so complicated. Uh, there's so much scale to it and it continues to work.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. A couple of years back, I had my annual charity t-shirt focus on S3 as the eighth wonder of the world, because it legitimately is. It is an amazing service that has in many cases transformed the way that we store things, the way we improperly use things as message queues or databases that perhaps shouldn't be, and myriad other things.

However, I firmly believe that there is nothing so perfect that it cannot be made fun of, nitpicked to death, or in other ways dragged through the public square. My guest today apparently feels somewhat the same. Daniel Grzelak is the Chief Innovation Officer at Plerion. Daniel, thank you for joining me.

Daniel: I'm excited. I'm a big fan of your shitposting, Corey.

Corey: Well, thank you. I try to be as well. Cause otherwise I'd get really bored really fast. Cause let's face it. Our industry is boring if you take it too seriously. This episode has been sponsored by our friends at Panoptica, part of Cisco. This is one of those real rarities where it's a security product that you can get started with for free, but also scale to enterprise grade.

Take a look. In fact, if you sign up for an enterprise account, they'll even throw you one of the limited, heavily discounted AWS Skill Builder licenses they got, because believe it or not, unlike so many companies out there, they do understand AWS. To learn more, please visit panoptica.app/lastweekinaws.

That's panoptica.app/lastweekinaws. You had a post that came out about a week before this recording titled "Things You Wish You Didn't Need to Know About S3." And I saw that come across my desk, and okay, great, let's look into this, because I've seen blog posts with similar titles somewhat frequently over the years, and it's, I bet you didn't know this one weird trick, and invariably there's not a whole lot of new information hidden in those posts.

There was a lot of new information hidden in this post, so I absolutely wanted to talk to you about it. Where did this post come from? I guess is probably the best starting point for us.

Daniel: Right. So, so, so before I jump into that, I actually don't think there was much new in the post, but there was something new for everyone.

Like everyone found something interesting. And the genesis of it was, we were trying to build some more detailed risk analysis about S3 at Plerion. So I went and started having a look at like, how does, how does it work? Make sure we get everything right. Make sure we've got all the details correct. And so I started testing S3, started playing with it.

And every time I found a quirk, I went, Oh, that's, that's not exactly how I thought it would work. I would send it to my engineering team and they would go, no, that's not right, that can't be right. And so the more I kept going, the more of these quirks I would find. And that's how we ended up making the list.

It's just someone would always go, I thought it worked differently in some way.

Corey: When I say a lot of this was new information, I mean, some of it was new to me. Uh, if I haven't seen it before, it's new to me. I don't know that there were necessarily any groundbreaking revelations that the S3 team is gonna be reading this and going, holy crap, it does what?

But it, but it addresses a bunch of things that I had either not been aware of or in some cases not thought about for a while, and others I'd never bothered to think about at all. I mean, your first point's terrific where you talk about how S3 buckets are the S3 API. It never occurred to me to dig into the, the question of why the AWS command line interface has a subcommand of s3 and a separate subcommand of s3api.

What is that? I know I use the latter when I want to handle weird modifications to buckets and whatnot, and the S3 subcommand, generally when I just create or delete a bucket and work with objects within it, but it had never occurred to me to delve into the nuances of why in the way that you did.

Daniel: Yeah.

And I'm not clear on why there's two command lines, but I think the big difference is the endpoint that you end up sending API operations to. Typically it's a central endpoint that controls the whole service, and then you provide the parameters that you want it to use. Here, we basically send the API calls to the S3 bucket.

Now, I'm not sure if underlying that is some big general endpoint, but for the user, that's the way it looks. And so when you are able to go and then delete that bucket by just sending it an HTTP request, that's something that people don't necessarily expect to happen.
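
A minimal sketch of what Daniel is describing, assuming a hypothetical bucket name: with boto3 the call is still "send a request at the bucket's own endpoint," and with signing disabled it becomes the anonymous HTTP request he mentions.

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

BUCKET = "some-wildly-misconfigured-bucket"  # hypothetical name

# An anonymous client: no credentials, no request signing.
anonymous_s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# Under the hood this is just an HTTP DELETE aimed at the bucket's own
# endpoint (e.g. https://some-wildly-misconfigured-bucket.s3.amazonaws.com/).
# It only succeeds if the bucket policy really does hand s3:* to everyone
# and the bucket is already empty; otherwise S3 returns AccessDenied.
anonymous_s3.delete_bucket(Bucket=BUCKET)
```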

Corey: I would honestly expect that not to happen just because CloudFormation likes to lose its mind every time you try to delete a stack that has a bucket in it that has a lingering thing there.

And they recently fixed the ability to, oh, you can just go ahead and delete the stack now and check the box to orphan the bucket. No, I want you to clean the thing up and get rid of it. I'm telling you explicitly, go ahead and blow out the data. I don't care. This is all for scratch stuff anyway. And I understand why you don't want to make it too easy to do that in production by accident, but there are different use cases for different things.

Daniel: Yeah. And I think that's still true. The bucket still needs to be emptied. The interesting thing in my blog post was that if you accidentally, say, give s3:* permissions on a bucket, you can actually just send a delete request to the bucket and the bucket will be deleted. Now, obviously, permissions have to be wildly misconfigured for something like that to happen.

Corey: Yeah, and who would ever screw up IAM permissions, right?

Daniel: Exactly. That's what everyone's been doing for a very long time. And that's, that's the point here. It's like, AWS has done a really good job over the years of removing many of the foot guns. I couldn't see a use case for where a bucket could be deleted by literally anyone on the internet.

So that's a foot gun that I think could be removed in the future.

Corey: I am very surprised by that. Did you test whether it works if there's data in the bucket?

Daniel: I didn't, but, but I expect it doesn't. I still think you've got to remove all the objects inside before you run it.

Corey: That would at least make it a little bit better in that I don't, I can't think of any buckets in my accounts that are purely empty with not a single object or version stored within them.

Daniel: But I expect if you have s3:* permissions misconfigured, then you can probably just send delete requests for all the objects as well.

Corey: Yeah, I guess by the point you can delete, you can definitely do a list and see the buckets. Uh, frankly, at that point, there's no reason you couldn't also just send a lifecycle policy up as well and configure it to just blow everything away.

Daniel: It's just a, I mean, it's working as expected. You've literally given anyone on the internet all the permissions for the S3 bucket. It's just, I just don't think there's a use case for something like that.
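
For concreteness, this is roughly the shape of the misconfiguration being discussed: a bucket policy that hands s3:* to every principal on the internet. The bucket name is hypothetical, and nobody should ever apply a policy like this.

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "some-wildly-misconfigured-bucket"  # hypothetical name

# The foot gun under discussion: Principal "*" plus Action "s3:*" means any
# anonymous caller on the internet can list, read, write, and delete, up to
# and including DeleteBucket itself. (On current accounts, S3 Block Public
# Access has to be disabled first for this policy to even be accepted.)
dangerous_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            f"arn:aws:s3:::{BUCKET}",
            f"arn:aws:s3:::{BUCKET}/*",
        ],
    }],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(dangerous_policy))
```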

Corey: You found a whole bunch of strange things in this. Uh, I, it's been a while since I thought about this, but I have found clients at smaller scale where this becomes significant.

At, at large enough scale, all, all weird billing misconfigurations become small enough not to matter because they would have gotten fixed otherwise. Uh, you wind up with the Schrodinger's Objects, as you call them, of incomplete multi part uploads for objects. If the upload fails midway through, those objects that have already been received by default sit around forever.

They don't show up in the console, they don't show up under a list objects without very specific parameters, and they do charge you. So I have seen in the early days when I was working at a much smaller scale, yeah, someone said that, alright, I have a 1TB bucket, why am I being charged for 50TB? And incomplete multi part uploads were the issue, which at that point became clear that there's something systemically wrong here.

Figure out what keeps dying trying to upload things and make sure that gets fixed. And also, here's a lifecycle policy to clean those out to fix the end result of it. But it's been a while since I've seen that because most folks are not gonna be spending a hundred million dollars a year on AWS and discover that $20 million of it is incomplete, uh, S3 uploads.

That just, that isn't a thing that happens here in the real world.

Daniel: No, uh, and I think that the interesting thing here is that you could have it happen by accident, where you end up with a bucket with all these objects that you don't know about. But also, if you allow anonymous people on the internet to dump stuff in your buckets,

then they could potentially do it on purpose.

Corey: Oh yeah, and it is the least discoverable thing in the world. There is no way, except maybe S3 Storage Lens, if you really go looking, to figure out how many of those you have account-wide and what it is you're being charged for those.

It's one of those things where you have to know the secret passcode to get into the hidden speakeasy in order for it to begin making sense, or to even occur to you that it might exist.

Daniel: Yeah, and like you said, though, you can put a lifecycle policy on all of your buckets to protect against this kind of thing.

It's just that I'm not sure that anyone does that by default.
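
The lifecycle rule being discussed is straightforward to apply. A sketch, assuming a hypothetical bucket name, that tells S3 to abort and clean up any multipart upload that hasn't completed within seven days:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",  # hypothetical name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "abort-incomplete-multipart-uploads",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to every object in the bucket
            # Parts of uploads that never completed are removed after 7 days,
            # so they stop accruing storage charges invisibly.
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
        }]
    },
)
```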

Corey: I did not know, for example, that multi part upload listings will return the principal ARNs. That was novel to me.

Daniel: Yeah, that's a fun one. And look, there's not much confidential about a principal ARN, but in some cases an attacker, like, wants to do something and they don't know what the resource identifier is that they need to target.

And so when you leak these kind of bits of information all over the place, there's some very specific edge cases in which, hey, knowing a resource name or knowing a full ARN is really important from an attacker's perspective.
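
A sketch of the listing in question, with a hypothetical bucket name: the Initiator field of each in-progress multipart upload carries the identity of the principal that started it, which is the leak Daniel describes.

```python
import boto3

s3 = boto3.client("s3")

# Lists multipart uploads that were started but never completed or aborted.
response = s3.list_multipart_uploads(Bucket="my-example-bucket")  # hypothetical name

for upload in response.get("Uploads", []):
    # For IAM principals, Initiator["ID"] is the ARN of whoever began the upload.
    print(upload["Key"], upload["UploadId"], upload["Initiator"]["ID"])
```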

Corey: What I've found is that when I've talked to the S3 team, it is pretty clear that, I mean, they put a good spin on it, don't get me wrong, but it is abundantly clear that nobody, in their wildest dreams when this service was built, what, damn near 20 years ago now, imagined what it would grow into, what it would become.

And for every other AWS service where I've spoken to service teams, they learn more about their services from how customers use or in some cases misuse them than was ever accounted for in any planning document that could have existed.

Daniel: I think AWS, the S3 history, is one of the fun parts of this. It's obviously one of the most robust, most used services that's been around for a long time.

But because of that, it's got some of the sort of the archaeology of its past that's now gone away. For example, ACLs. Now all resources in AWS are protected by fancy policies that are very well defined and very well understood. But in the past, S3 had this concept of ACLs, which is now turned off by default, but you can still do interesting things with it, things that you perhaps don't expect.

And the mental model for that is very different to what it is for IAM policies.

Corey: Absolutely. I was always so confused by bucket ACLs. I inadvertently reported what I thought was a security issue, politely, because I'm never sure of what I'm looking at, when I didn't understand the interplay of an ACL with an IAM policy.

And I found very quickly that, nope, the problem is that I'm a fool. Okay, great, I can own and accept that. But I'm also never the only fool. I generally have good company in people making poor decisions as I travel throughout the industry.

Daniel: Uh, do you remember the old Authenticated Users group in ACLs?

Corey: With the checkbox next to it in the S3 console, people would click it. I know I did in the very early days, assuming, oh, any user authenticated to my AWS account should be able to look at this bucket because it's a company wide thing. Yeah. Turns out that meant every AWS S3 user on the planet.

Daniel: Yeah. And that was a fun one where people could make that mistake very legitimately, going, like, it's authenticated users,

it must be ours, it's not everyone on the planet. But I think that's part of the interesting archeology of S3. And so ACLs have a bunch of interesting quirks like that. For example, the other one that I, uh, that I ran into was that you can provide permissions to people based on the email address associated with the root user of their AWS account.

And there's a fancy error message that comes up and tells you if that email address doesn't exist in the AWS database. So you can basically figure out if an email has an AWS account associated with it, which is another quirk of how it works.
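
For illustration only, and hedged heavily: this is roughly how those legacy ACL grants look through the API, assuming a hypothetical bucket that still has ACLs enabled (they are disabled by default on new buckets) and sits in one of the older regions that supports email grantees. The error S3 returns when an email has no AWS account attached is what makes the enumeration possible.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-legacy-acl-bucket"  # hypothetical; ACLs must be enabled on it

# The old "Authenticated Users" group: this grants read to *every* AWS
# account on the planet, not just identities in your own account.
s3.put_bucket_acl(
    Bucket=BUCKET,
    GrantRead='uri="http://acs.amazonaws.com/groups/global/AuthenticatedUsers"',
)

# Granting by root-account email address (only supported in a handful of
# older regions). If the address has no AWS account behind it, S3 responds
# with an UnresolvableGrantByEmailAddress error, which is the enumeration
# quirk Daniel describes.
s3.put_bucket_acl(
    Bucket=BUCKET,
    GrantRead='emailAddress="someone@example.com"',
)
```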

Corey: That is wild to me because remember the canonical user ID where you used to have to, where there was this giant, I don't know what the hell it was, it was some huge alphanumeric string as I recall.

That was the canonical user that owned the bucket, because S3 significantly predates IAM and everything else. It's part of the, basically, the fossil record at this point. But it's, it was always a separate AWS user identity in some ways. I never saw it used for much other than S3 policies. That really messed with me.

Daniel: Yeah, I think it's a 32 character hex string. And another fun thing about that is, if you find that string anywhere, for example, uh, you know, in an object listing, you take that canonical string, chuck it in an IAM policy, save it. And when you see that policy come back up again, it'll have that string resolved, uh, to the ARN of that canonical user.
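
A tiny sketch of where that identifier still surfaces today, assuming a hypothetical bucket you own: ACL responses and bucket listings report the account as its canonical user ID rather than an ARN.

```python
import boto3

s3 = boto3.client("s3")

# The Owner of a bucket's ACL is reported as the account's canonical user ID,
# the long hex string that predates IAM ARNs.
acl = s3.get_bucket_acl(Bucket="my-example-bucket")  # hypothetical name
print(acl["Owner"]["ID"])

# list_buckets reports the same canonical ID for the calling account.
print(s3.list_buckets()["Owner"]["ID"])
```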

Corey: Wild to me that, that, that still works. I mean, it makes sense that it does. S3 has so much lore around it now that people legitimately don't know when I'm shitposting or not. I'll talk sometimes about how S3 used to basically have a BitTorrent endpoint if you enabled it on a bucket, and people thought I was making it up.

That was one of the few deprecations that AWS has done, because it turned out approximately nobody needs it; it's not how we transfer large files across the modern internet anymore. But at a time when a lot of folks were highly bandwidth constrained, that was not nothing.

Daniel: Yeah, it's actually still in the documentation.

That's one of the things I ran into. Oh, that can't be right. Went and tested it and found out it was deprecated, but it's still hanging around in, uh, in some writing.

Corey: The documentation is basically engraved upon stone tablets. As I recall, they have a couple of versions of the API. Like one is like a 2012 date.

And I think that's the last time it was updated, where you still have to specify for some things a version string that references a date that is older than my elementary school child. Nice.

Daniel: That's fun.

Corey: Few things are better for your career and your company than achieving more expertise in the cloud.

Security improves, compensation goes up, employee retention skyrockets. Panoptica, a cloud security platform from Cisco, has created an academy of free courses just for you. Head on over to academy.panoptica.app to get started. The thing is, you can't change these things very much. I was talking with Jeff Barr once, and he made a great observation that I asked if we could turn into a blog post, and he wrote the intro for it, which was lovely of him.

But he talked about the idea that S3 at this point has become a generational service, where they have no idea what's in any given S3 bucket. They aren't scanning stuff, and there are encryption practices and policies in place to prevent them from ever doing this. But it's definitely something they have to think about.

Which, if you don't know what's in a given bucket, maybe it's a bunch of shitposting meme images. Maybe it's incredibly important bank records. Maybe it's the nuclear codes. They just don't know. So they have to not lose data, they have to make sure that it is accessible via a variety of embedded API calls that are never going to get updated anywhere.

And they have to make plans for this to still be there in 500 years. Because as long as the bill gets paid on an account, who's to say whether something's right or wrong? Lord knows I have a bunch of old S3 buckets that I have no idea what's in and will never touch again. And they round up to less than a penny, so I don't particularly care.

But those things have to still exist.

Daniel: And the best part of it is, it actually works. Like that's, that's what I love about Amazon. Like, oh, it's so complicated. Uh, there's so much scale to it and it continues to work. But I really would love to touch on your encryption point if you don't mind, cause I think that's, that's another area where people make mistaken assumptions about how S3 encryption works.

And one of my friends was actually talking to a CISO after a major breach, and the CISO was telling them, hey, it's okay that our objects got stolen, because we've got encryption at rest enabled, so it's, it's perfectly fine. And so people's mental model for encryption is: the file is encrypted, so if it goes away, when the attacker tries to open the file,

it'll be a bunch of garbage. They won't have the keys, so they won't be able to decrypt it. But that's not exactly how encryption works in S3. I can see you want to say something.

Corey: Oh, last year I wrote a whole article on this. It's on my site. I'll put a link to it in the show notes. S3 encryption at rest does not solve for bucket negligence.

And I go into a whole spiel on exactly what you're talking about. You're right. I always found encryption at rest in the cloud context to be basically a box-checking exercise and little more. Because, okay, if you can break into an AWS datacenter, steal the drives, get out alive, uh, having stolen them from the right places to reassemble the, uh, the sharded objects and recombine them, you kind of earned it at that point.

That's not really my threat model. Encryption at rest matters a hell of a lot more for laptops that you're going to leave at the coffee shop or in your car. It matters a lot more for your crappy data center where the security guard forgets to go and lock the door at night. Those are going to be areas where it absolutely matters.

With this, it just isn't a realistic threat model, because regardless of how well encrypted at rest something is, it still is going to be returned via the API when it's requested, assuming the permissions are right. There are exceptions with KMS encryption in certain ways. Please continue.

Daniel: And that's exactly the point here, is, in S3, if you don't have access to the key, it actually works as an access control mechanism rather than you getting back a bunch of garbage data.

So if you don't have access to the key, you just don't get the object. The only way you get the object is if you have access to the key, in which case you get it in plain text. And so if your data gets walked, it gets walked in plain text.
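
A sketch of that behavior, with a hypothetical bucket, key, and KMS key alias: with SSE-KMS, a caller without kms:Decrypt on the key gets an access-denied error instead of ciphertext, while a caller with the key gets plaintext back, which is Daniel's point.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-example-bucket"          # hypothetical name
KMS_KEY_ID = "alias/my-example-key"   # hypothetical KMS key

# Encrypt the object at rest with a customer-managed KMS key.
s3.put_object(
    Bucket=BUCKET,
    Key="reports/secret.csv",
    Body=b"account,balance\n42,1000000\n",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId=KMS_KEY_ID,
)

# A principal without kms:Decrypt on that key gets AccessDenied here, so the
# encryption acts as an access control. A principal *with* the key gets the
# object back already decrypted, in plain text, so exfiltrated data leaves
# in plain text.
body = s3.get_object(Bucket=BUCKET, Key="reports/secret.csv")["Body"].read()
```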

Corey: Do you believe that there is a risk of AWS using its privileged position to scan the contents of S3 buckets?

I know that people love to have conspiracy theories around this all the time, that they're looking through all the data you put in S3. I have a rough idea, at least, of what general order of magnitude of compute power it would take to actually do that, and I have some sub questions about your conspiracy theory.

But again, you focus on this stuff a lot more than I do. What are your thoughts?

Daniel: Yeah, I don't know how it works underneath, and I don't see inside AWS, but that's not a theory I subscribe to. It just, it doesn't make sense from a business perspective. It doesn't make sense from a, like a technical perspective.

Why would you do that? They've got way more to lose than they have to gain by doing something like that.

Corey: That's always been my perspective. And we know it's expensive to do it because look at how they priced Amazon Macie when it launched. And even after they redid the pricing, this is something that explicitly looks through your S3 buckets on your behalf, looking for sensitive information so that you can make sure that you know where it lives in your environment.

Yeah. And it is extortionately slow, incredibly expensive, and not widely deployed for those two reasons. I have a really hard time imagining that if they had this magic thing on the back, on the back end that would just tell them what everyone had in every bucket, that they wouldn't find a more cost effective way, a more widely adopted way of being able to perform that task.

I just don't see it.

Daniel: Yeah, look, I just don't think, I think AWS is a good actor. Fundamentally. They're not, they're not a bad actor. They have so, so much complexity, you know, like over 300 services, like tens of thousands of API calls. Like at that scale, you end up seeing weird things happen because there's just so many things that could, that could happen.

But fundamentally I've always found them to, uh, be a good actor, try their best to do security right, and all of that kind of thing.

Corey: I would agree with that sentiment. There are, AWS does a lot of things that I find questionable and weird, but they don't tend to touch security, particularly of foundational services.

They, they mean well, and there are enough people I know who work there that I think of as canaries, who would resign on the spot, on ethical grounds if nothing else, if something like that were to take place, so I'm comfortable, uh, making that assertion. I don't know if that's enough for some people. I mean, obviously, if you're a government and it's like, well, Corey's got a good feeling about that,

that does not check your audit box, nor should it. Let's be very clear here. But for my dumb Twitter-for-Pets startup, yeah, that's good enough for me in my use case.

Daniel: Yeah. How would you even check that assumption if you wanted to?

Corey: Exactly. The way that they've done it before is, oh, well, we have all these third party audits that validate the things that we've said are correct, etc, etc.

I know that there are people that I've spoken to that I trust. These are phenomenal technologists, and they are supremely confident that it functions as described. But I've always viewed, on some level, you have to trust your cloud provider or your data should not live within that cloud provider. Because there's nothing out there that says, oh, when it's Corey's specific requests, we're going to send him to an identically performing set of API endpoints that just don't do all that pesky encryption stuff under the hood, and we can inspect every aspect of what he does.

I don't think that they're doing that. But there's nothing to my understanding that would prevent them, on a technical basis, from doing so.

Daniel: Yeah, look, and you touch on an interesting point. There was a great post by Nick Frichette, uh, recently. He's been digging into, uh, sort of non-production endpoints, things that, like, you don't expect to be there, uh, and where you can send production data.

And so, I would encourage everyone to read that blog post. I think, again, that's, that's a case of, hey, there's so much complexity that these non-production endpoints have snuck into the sort of the production landscape and can very occasionally be used with production data, but AWS will fix all of that stuff once they find out about this kind of thing.

Corey: That's generally the response that I've gotten. Since this article, as you say, doesn't include anything groundbreaking, new revelations around S3, have you gotten any feedback from the S3 team about, oh, hey, we didn't realize this, or, or you misunderstood something, which I, I get a fair bit when I wind up writing deep technical dives, because this stuff is complicated and no one has all the pieces in their head at once.

Have you gotten feedback at all from them on this, or is this just one of those things where we let our work speak for itself?

Daniel: No, look, I always send them my stuff just to make sure, because, like, I'm, I'm often wrong. And it's 5am and I just got, I got an email this morning, and I know the big, the big thing in there was they wanted to make sure that people understood that these weren't vulnerabilities.

And the vast majority of these things could be protected against with native configuration. And I 100 percent agree with them. These are not vulnerabilities. And that's why I specifically in the blog post, I said, I call them quirks and oddities and just things you need to know rather than, hey, these are things that AWS needs to fix.

Corey: Yeah, it's a well written post and it's very engaging. But at no point from the point where I first saw it come across my desk to now, did I, was I ever under the misconception that, oh, this is a vulnerability. This is, I see things that could lend themselves to customer-side vulnerabilities if the customer had a misunderstanding about how something functioned.

Now that is not to say that AWS has a vulnerability on their side, but technically the All Authenticated Users group was not a vulnerability on their side either, and it led to thousands upon thousands of customer-side vulnerabilities, because misconfiguration is one of the biggest threat vectors in cloud.

Daniel: Exactly.

The way I think about it is, if you've got an expectation or a mental model that's one way because of the way that things are worded or because of your experiences, often the complexity of AWS will result in a slight deviation from that mental model or that expectation. And so you'll end up making a mistake that perhaps you otherwise wouldn't.

Um, one of the examples I give in the blog post about that is object keys inside S3. Now, object keys look very much like file names on your file system, but they're specifically called keys because they're not files. But most people will assume they will function like file names. And it turns out, that because they're keys and not file names, you can put any sort of characters that you want in them.

Slashes, hashes, percentage signs, etc. And that's completely fine and okay to do, and very well documented. But if your application treats them as if they were file names, it might end up malfunctioning or introducing a vulnerability itself because it thinks it's a file name.
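
A quick sketch of the point about keys, with a hypothetical bucket: all of these are legal S3 object keys, and an application that naively maps them onto filesystem paths can be tripped up by them.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-example-bucket"  # hypothetical name

# All valid object keys: they are just strings, not paths.
awkward_keys = [
    "reports/2024/q1.csv",          # looks like a directory tree, isn't one
    "../../etc/passwd",             # path traversal if treated as a filename
    "logs/#archive/%00/file.txt",   # hashes, percent signs, and more
    "  leading and trailing spaces  ",
]

for key in awkward_keys:
    s3.put_object(Bucket=BUCKET, Key=key, Body=b"hello")
```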

Corey: Exactly. It's the, it's the interpretation of a very complicated, very explicit set of things that are designed from the ground up as very base level service primitives.

That in turn compose together into something truly incredible. And a lot of those incredible things are in fact emergent properties, as best I can tell. Because I don't think that there are any people with the perfect ability to predict the future hanging out on the S3 team 15 years ago. This is stuff that happens.

And, uh, Mai-Lan Tomsen Bukovec gave a talk at re:Invent a few years ago, mentioning how after the S3 apocalyptic event, I think in 2017, they rebuilt all of S3 as 235 microservices. And my comment on that immediately was, this is important for S3. This is not a how-to guide. They are not Pokemon. You need not collect them all.

Your five person startup should absolutely not do this. Because, yeah, it makes perfect sense for them to do it the way that they have at their scale. Whatever your startup is doing, I promise it is not S3 levels of scale and won't be for many, many, many years and at least seven rewrites. So you're fine.

Don't, don't view it as an instructional guide. But that, that glimpse under the hood, that they were able to completely rewrite all of S3 and customers never knew because it still supports the same APIs with bug-for-bug levels of reproducibility, is nothing short of amazing.

Daniel: Yeah. And that's the beauty of it.

There's all that complexity and it's just really simple to use. You just send your files there and then they live there forever unless you ask for them to be deleted. It's beautiful, I love it.

Corey: Yeah, one of the scarier things for enterprise is, okay, when I say delete and you say it's deleted, how deleted is it at that specific moment?

People always wonder about that one, and they're not wrong to have that question in their minds, because yeah, there are legal terms, there are legal definitions in contracts around what exactly deleted means, and let's not blunder our way into inadvertently making representations to our own customers that turn out not to be strictly true.

Daniel: Yeah, but again, it's one of those things like, it doesn't matter if your object is deleted right now or a minute from now; across the internet, for companies and most customers, there really is no difference. One of the fun things I found was, uh, this idea of compliance locks. So if you're a legal team and, let's say, someone sent you a subpoena or something like that and said, hey, do not delete these things under any circumstances,

you're able to implement that in an S3 bucket and say, hey, this, this object cannot be deleted under any circumstances. And in fact, once the bucket has that compliance mode enabled and the object has been set to a compliance lock, the only way that it can be deleted is to delete the entire AWS account.
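
A sketch of what that looks like through the API, with hypothetical names. Object lock has to be switched on when the bucket is created, and once a COMPLIANCE-mode retention date is set, it cannot be shortened or removed before it passes.

```python
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")
BUCKET = "my-legal-hold-bucket"  # hypothetical name

# Object lock can only be enabled at bucket creation time. (Outside us-east-1
# you would also pass a CreateBucketConfiguration with the region.)
s3.create_bucket(Bucket=BUCKET, ObjectLockEnabledForBucket=True)

s3.put_object(Bucket=BUCKET, Key="evidence/exhibit-a.pdf", Body=b"...")

# COMPLIANCE mode: the retention cannot be removed or shortened before the
# date passes; the escape hatch Daniel mentions is deleting the whole account.
s3.put_object_retention(
    Bucket=BUCKET,
    Key="evidence/exhibit-a.pdf",
    Retention={
        "Mode": "COMPLIANCE",
        "RetainUntilDate": datetime(2045, 1, 1, tzinfo=timezone.utc),
    },
)
```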

Corey: Exactly. I've always wondered if there was an end run around this because, okay, I break into someone's account. I now go ahead and apply the compliance lock, uh, the legal hold, uh, I forget which version of it it is, and I can set it for up to a century. Congratulations, your great-grandchildren will despise your negligence because they will still have to pay your AWS bill, unless you can delete everything else in the account.

Is there a mitigation for bad actors? And I've never gotten a satisfactory response on that question.

Daniel: So, so two things. Well, I don't know the answer to that. I found in the past, for example, when I've made a KMS key policy that meant that I could never delete the key, if I got on an enterprise support plan, AWS would find a way around it and help me do it.

So I think it's possible that if you made a cataclysmic mistake, they would help you delete the objects. However, the way that object uploads work means that if the bucket has compliance mode enabled, the uploader actually gets to set whether the object has compliance locking enabled or not, which again is just a different quirk of how the service works.
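
That second quirk, sketched with the same hypothetical bucket: when object lock is enabled on a bucket, a principal permitted to upload (and set retention) can attach the compliance retention in the PutObject call itself.

```python
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")

# A writer to an object-lock-enabled bucket can decide, at upload time, that
# the object is compliance-locked for decades.
s3.put_object(
    Bucket="my-legal-hold-bucket",  # hypothetical name
    Key="uploads/hostile.bin",
    Body=b"...",
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime(2124, 1, 1, tzinfo=timezone.utc),
)
```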

Corey: I wish to hell that you could do that for storage classes. Everything in this bucket or prefix is going to be, uh, Intelligent-Tiering, go, without having to teach every single thing that I've got, including some legacy desktop applications that are proprietary that I cannot modify, yeah, make sure that you put that into the Intelligent-Tiering storage class.

Instead, I have to go through with a lifecycle policy, which means that every object gets written again, and that counts as a chargeable fee, which at scale is not nothing. And then, and only then, it starts aging into that, which is frustrating.
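
The workaround Corey is describing looks roughly like this; a sketch, with a hypothetical bucket, of a lifecycle rule that transitions everything into Intelligent-Tiering rather than a true bucket-level default storage class.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",  # hypothetical name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "everything-to-intelligent-tiering",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # every object in the bucket
            # Each object is rewritten into the new storage class, which is a
            # billable transition request; the bucket itself cannot be told to
            # write new objects into Intelligent-Tiering by default.
            "Transitions": [{"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}],
        }]
    },
)
```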

Daniel: But it's the way it works, and it matters for some customers, but not most.

Corey: I don't think it would break anything to be able to say, okay, by default, yeah, everything goes into Standard, because that's the way it currently works today, but you now have an option where the bucket can specify that anything placed into it winds up being put into that storage class. I think that would be a welcome enhancement.

I don't know that it would necessarily break anything customer-side. It might very well break things on the AWS side. I know I've told them this years ago, and "no one was listening" is not the reason that they haven't gone ahead and implemented something like this. I'm sure it's complicated, but man, as a customer, wouldn't that be nice?

Daniel: It would be indeed.

Corey: Uh, I really want to thank you for taking the time to not just go ahead and write all this up, but also to speak with me about it. If people want to learn more, where's the best place for them to find you?

Daniel: Uh, thanks, by the way. Uh, yeah, on blog.plerion.com. That's where I generally do my shitposting.

Luckily, my employer allows me to write in the style that I like to. So, uh, it's good fun and there's a good bit of research on there.

Corey: Excellent. And we will of course, put links to all of this in the show notes. Thank you so much for taking the time to speak with me. I really appreciate it.

Daniel: Been a good time.

Thanks, Corey.

Corey: Daniel Grzelak is the Chief Innovation Officer at Plerion. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you enjoyed this podcast, please leave a five star review on your podcast platform of choice. Whereas if you hated this podcast, please leave a five star review on your podcast platform of choice, along with an angry, insulting comment that will live there forever because some joker wound up turning on compliance lock.
