Complex Tech, Public Learning, & Impostor Syndrome with Kyler Middleton

Episode Summary

Kyler Middleton, a Senior Principal Engineer at Veradigm and co-host of the Day Two Cloud podcast, joins Corey on this episode of Screaming in the Cloud to talk about how tech careers are changing and the outsized impact AI is having on getting started in tech. Kyler, who once planned to be a librarian, traces her path into the industry and makes the case for learning, and sharing what you learn, in public. Corey and Kyler also dig into how AI is reshaping entry-level roles and what that means for anyone breaking into the field today. Kyler's perspective on using and teaching technology offers practical advice for anyone looking to get into, or move up in, the tech world.

Episode Show Notes & Transcript



Show Highlights: 
(00:00) - Introduction
(01:49) - Kyler describes her multiple roles
(03:21) - Discussion on the realities of 'Day Two' operations in cloud environments
(07:38) - Insights into technical debt and the concept of 'Day Two' in DevOps 
(13:54) - The importance of sharing knowledge and learning in public to benefit others in the tech community
(20:07) - The use and limitations of AI in professional settings
(26:41) - Debate on the overreliance on AI technology in decision-making processes and its potential consequences
(32:05) - Closing remarks & where listeners can connect with Kyler


About Kyler:
Kyler grew up in rural Western Nebraska, fixing neighboring farmers’ computers in exchange for brownies and Rice Krispies. She planned to become a librarian to help people find the information they need, then discovered that computers were a real job, and more than just a fix for her munchies. She has since been a systems, network, call center, and security engineer, and is now a DevOps lead and software engineer. She speaks at any conference that will have her, hosts the Day Two Cloud podcast from Packet Pushers, and writes up cool projects with approachable language and pictures as part of her Medium series, Let's Do DevOps, with the intention of upskilling anyone of any skill level. She has an insatiable curiosity and a desire to help the folks around her succeed and grow. So - Let's Do DevOps.


Links Referenced:
Kyler's Blog on Medium: https://kymidd.medium.com/


Transcript

[00:00:00] Kyler: Imagine that as a human person that you're hiring for a job, and you would say, of course not, you're not allowed in my company, I'm not giving you any authorization over my systems. And yet with AI, we're just like, eh, someone will catch it, probably. That is concerning.

[00:00:19] Corey: Welcome to Screaming in the Cloud, I'm Corey Quinn. I periodically have lamented the fact that the path I walked to get into the role that I'm in, which, by the way, is not something to aspire to so much as a cautionary tale, uh, has long since closed. Where does the next generation come from when we're talking about technologists?

Sending people down the same technological path that I went down isn't viable for a variety of reasons. Here to talk about that and several other things as well is Kyler Middleton, who is a Senior Principal Software Engineer at Veradigm. Kyler, thank you for joining me.

[00:00:55] Kyler: Absolutely, I'm really excited to be here.

Thanks for having me.

[00:00:58] Corey: This episode's been sponsored by our friends at Panoptica, part of Cisco. This is one of those real rarities where it's a security product that you can get started with for free, but also scale to enterprise grade. Take a look. In fact, if you sign up for an enterprise account, they'll even throw you one of the limited, heavily discounted AWS Skill Builder licenses they got, because believe it or not, unlike so many companies out there, they do understand AWS.

To learn more, please visit panoptica.app slash last week in AWS. That's panoptica.app slash last week in AWS. So let's start at the beginning, for folks who have no idea who you are: what place do you occupy in our ridiculous ecosystem? Where do you start? Where do you stop?

[00:01:49] Kyler: Oh my goodness, I am collecting jobs.

So that's a great question that I ask myself each day too. So my day job is a Senior Principal DevOps Engineer at a healthcare company in the United States. So I'm writing automation with GitHub. I'm trying to help the software team develop and deploy their software and have it actually work and do interesting stuff.

[00:02:08] Corey: What a novel concept.

[00:02:09] Kyler: Oh my goodness. I also do consulting on the side, just direct hours consulting to help people with stuff, mostly because I get a little bored and have ADHD and it helps me stay entertained. And job number three is hosting a podcast with Ned Bellavance for Packet Pushers called Day 2 DevOps.

And you should totally come listen to us. We're awesome. And also just writing blogs. I generally try to do my work out loud and release all the stuff that I can legally release. Uh, as open source tools and open source content.

[00:02:39] Corey: Let's start with the podcast piece. And I think that's probably the, uh, the easiest point of entry.

Because every time I've seen the name, I can't help but snicker. Amazon has famously said it's always day one. What does day two look like, someone once asked Bezos, and he gave an answer that looks suspiciously like the Amazon of 2024. And great, like they're in denial that it's day two. But it's very much day two.

I'm curious what day three looks like. But yeah, day two DevOps, in many cases Microsoft mechanics, seems kind of like what has set in over there in a bunch of different respects. That said, that is certainly not how you intend the podcast to come across, because most people don't live their lives playing a game of inside baseball with Amazon corporate references.

What is day two in your context?

[00:03:21] Kyler: I think it's run. In the crawl, walk, run sphere, this is run. You are in the cloud, you've deployed, and now you have to keep the damn thing on. Um, I hope cursing's okay. I think it is. Oh, I think we'll allow it. Yes. Sweet. So I just think it's interesting, and I think it's really a science that we're discovering together collectively as we go.

How do you control your costs? How do you control your security? When anyone with a credit card can build their own VPC or VNet and deploy software to it, and maybe even connect it to your corporate network, how do you secure everything?

[00:03:58] Corey: Uh, generally after the fact, in my experience.

People care about these things in a reactive context a lot more than they do proactively. It's, it's like buying fire insurance for your building. People care about it right after they really wish they cared more about it. Same as backups.

[00:04:11] Kyler: Yep, flood insurance sales go nuts after a flood, but not before, of course.

[00:04:16] Corey: I've always felt that the day two approach of how to run things has been dramatically underserved, because there's a universe of blog posts of varying quality on here's how to set up a thing, ranging from a vhost in an Apache configuration all the way up to Kubernetes itself. Great, okay, now it's up and running.

How do I maintain this thing on an ongoing basis? Now something has gone wonky. How do I troubleshoot what that is? And historically, in the roles that I've had, the way that this was addressed was they hired ops people, uh, who were generally a little older than a lot of the developers who were around there.

Because there's no such thing, in my experience, as a junior DevOps person or junior sysadmin. The answer you want to hear when there's a weird problem is, oh yeah, I've seen this before, this is what it is, and here's how you fix it. As opposed to, this is an interesting problem, which is scary when you, oh, I don't know, work at a healthcare software company, for example.

[00:05:12] Kyler: I previously worked for a security startup called IAM Pulse, which was focused on IAM in the cloud, which is foundational security. It's how absolutely everything works for authentication and authorization. And if you Google any of that to go learn it on the internet, which is how we all teach ourselves this job anyway, right?

You will find all of these examples that say this is how it works. And then a little note at the bottom that says don't do this. It's totally insecure. So we're teaching all of our newbies, which are all of us at one point when we're learning something new, the wrong way to do it. We're not teaching them the way to run the cloud.

We're teaching them the way to set it up in a terrible way. So that's just endemic to a lot of these fields.

[00:05:52] Corey: But it's worse and stupider than that. Because now, not only do you have the blog post where, in tiny type at the bottom for a human to read, it says, great, oh yeah, go ahead and just don't ever do it this way.

We're just doing this because it's expedient to get it out there. Yeah, you know what else doesn't read that thing? These AI, uh, large language models that are trained on everyone else's work, and then it's, oh, okay, this must be how you're supposed to do it. Because surprise, virtual dumbasses tend to lack context, almost like actual dumbasses.

Hi, I say that as a dumbass myself. But the problem then is that you wind up with this very confident, also wrong, uh, robot that's doing the best damn impersonation of a white guy in tech that I can imagine. Because being confidently wrong is my job, goddammit. I feel like I should have a union.

Maybe I shouldn't have fought against it for all of those years. The white guys' local. Yeah, no, the problem that you have, though, is that now you have all these bad examples out there, and they're terrible, and people learn from them. Uh, to their credit, AWS has gone back and fixed virtually all of those blog posts historically, where it was just, we're going to grant star permissions on everything.

Because if we don't do it that way, the first half of the blog post is going to be setting up permissions. At which point I freaked out and yelled at them, like, how about that? Because that's what everyone does: yeah, I want to get this thing up and running, I'll go back and fix the security later.

Spoiler: later never comes, and that TODO in the comments becomes load-bearing.
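
For illustration only (this is not something either speaker spelled out on the show): the gap between the expedient "grant star permissions" shortcut and the scoped policy that the first half of those blog posts would have walked through looks roughly like the two documents below, sketched as Python dictionaries. The bucket name and the choice of S3 read-only actions are made-up examples.

    # Hypothetical illustration: the "fix it later" policy versus a scoped one.
    # The bucket name and actions below are invented for the example.

    # The expedient shortcut many walkthroughs reach for:
    ALLOW_EVERYTHING = {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": "*", "Resource": "*"},  # the load-bearing TODO
        ],
    }

    # A scoped alternative: only the actions and resources the tutorial actually needs.
    READ_ONE_BUCKET = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    "arn:aws:s3:::example-tutorial-bucket",
                    "arn:aws:s3:::example-tutorial-bucket/*",
                ],
            },
        ],
    }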

[00:07:16] Kyler: Absolutely. I have reviewed so many pull requests that are, Oh, I'm just going to do it this way for now to get it running. And I, I swear that's a lesson that we all learn eventually, but it's years down the road. And we have built so much crap that our production now relies on.

Really, we gotta start teaching that to our entry level folks. Do not do stuff just for now. It'll never go away.

[00:07:38] Corey: Yeah, tomorrow never comes, and technical debt is something that tends to get a bit of a bad rap. You're always gonna make trade offs and compromises in favor of getting to a goal that matters to the business.

It's not necessarily a bad thing, but you have to understand what you can give up to get there, what should not be addressed, and what, at some point, you need to go back and service and clean up. Amazon, to its credit, does have the concept of one-way doors: don't casually make decisions that are going to be very difficult and painful to unwind.

Most decisions don't look like that, but recognizing where one kind starts and the other one stops becomes a bit of a challenge as far as identifying them in advance, and that's often where I think experience plays in. Absolutely.

[00:08:23] Kyler: And I like to think that's what the podcast Day 2 DevOps is about.

The Day 2 is we're interviewing people that have been through those painful exploratory journeys, where they've done it wrong, and they've come here to tell you what not to do. Sometimes they even tell you what to do. But even if you're getting started, you want to listen to the folks that have done it wrong before and noticed, because they'll help you avoid those same pitfalls. And that's all we can really hope for, right?

[00:08:48] Corey: That's part of the challenge I keep running into is people, for whatever reason, are reluctant to talk about their own failures. They're reluctant to talk about things that they have done going down a path that didn't work out.

I still have a strong memory, early on in my conference speaking career, where I was watching another presenter give a talk about how they had this amazing infrastructure where they worked. And I turned to the person next to me like, wow, I would love to work at a place that ran infrastructure like that.

And they said, yeah, I would too. And I looked at their badge, and they worked at the same company as the presenter. Everyone gets up and tells a modified, glorified version of a story. And understand, I do not believe in mindless adherence to literal truth when giving a conference talk, because sometimes you have to embellish a story to make it make sense, or dramatically abbreviate it, because you don't want to tell a joke that effectively takes 20 minutes of setup in order to get to a punchline.

So you have to make the story work. But by going in the other direction and just hand-waving over everything as if it were perfect, everyone else starts to feel inherently bad about their environment. Now, I've been a consultant to an awful lot of big companies, and I have worked in a variety of environments over the 20 years I've been in this space, and I have never yet found an environment that wasn't on some level a dumpster fire internally.

And I'm sure that some of my clients are going to be upset if they're listening to that. Like, hey, our environment's awesome. It's like, is it though? What about X, Y, and Z? And they start going, well, and my point is not to name and shame anyone. It's to name and shame everyone. Because every single environment, including the stuff that I built six months ago, is trash.

And there is technical debt there. And I would never do these things a second time the same way. But that's the way that infrastructure inherently works. That is the nature of the reality. And Amazon, Google, Microsoft, all the giants, they don't have this magical Valhalla style infrastructure. They have a different kind of problem in some cases, but they very much have infrastructure fires.

[00:10:51] Kyler: Absolutely. There are thousands of engineers at all of those providers running around, putting out fires all day, all the time. And when your webpage loads for Facebook, you think, oh, their infrastructure is perfect. Nothing ever goes wrong. That's not the case. Stuff breaks all the time. Thank you, Steve Jassy.

But you do need to just put out the fires, learn, and help scale. We have the same problem with people: especially when you're a junior engineer, you look at your seniors and think, oh my goodness, they're geniuses, they've been the smartest people in the room their entire careers. And that is such a silly idea, because I am the smartest person in the room sometimes.

And I have done so many dumb things and broken so many systems. And that's how I've learned. I've broken a ton of stuff. And that's why I know stuff today. So, uh, I think it really sets our learners back when we're not up front as senior engineers that, you know what, I've done a lot of dumb stuff and I'm here to tell you about it and what not to do.

[00:11:43] Corey: Oh, I've done that a few times now, and I have always found that being unassuming about those things is incredibly important, because otherwise it's like, huh, why aren't the slides working? Someone's going to chime in, just hang on a second, the smartest person in the room forgot to plug in the projector.

Like, great. We all, like... pride goes before the fall. It always does. Now, I want to be clear: I introed you with the idea of talking about where the next generation comes from and where we find the next generation and bring them up. That is not you. Again, you are a senior principal engineer, and unless title inflation has gotten bizarrely out of hand where you work, you are very clearly not a junior person.

But the reason I say that, the reason I allude to you in that sense, is that you have done a lot of learning in public. You are passionate about passing on the things that you learn, learning in public every chance that you get, and, you know, my gripe is just that this is such a rare thing. I feel like people are terrified that they're going to be discovered as the giant fraud that they think they secretly are.

Yeah, you and everyone else, there's a support group for this, it's called All of Us, and we meet at the bar.

[00:12:50] Kyler: And I feel that way all the time still. I have done so much in my career that is incredible, and still, just about every day, I have a moment where I'm like, how did I trick all these people into letting me on shows like this, into letting me lead meetings like this?

It's, it's ridiculous. Um, yeah, I've done as much as I can to be introspective and think about when I was a new learner and I knew nothing because you do when you're new to anything. I Googled stuff and I read people that released information for free because you don't have a lot of money when you're getting started and you're young and you're just trying to survive.

And that stuff only existed to get me here because people like me made it free, made it available, put it on the internet, took the time to write it and speak it. And so, as much as I can, I'm, um, releasing all the software that I write that I can legally get away with. Uh, hello, Veradigm lawyers, if you're listening to this, I do that.

And also just write everything that I can down. And I put it on Medium to pay for coffee, all the caffeine that I imbibe that lets me write all those blogs, um, as much as I can.

[00:13:54] Corey: And it's useful. It's the sort of thing where I have lost count of the number of times I have gone looking for how to do a specific thing and discovered a great blog post that explains exactly how to do it, written by someone who is clearly smarter than I will ever be.

And then I look at who wrote it, and it was me, five years ago or something. And it's, oh, huh, I guess I have forgotten the nuances of those things. I mean, half of the blog posts I write are basically notes to my future self, because I'm probably going to come back this way around again.

There's a lot to be said for knowing how to look for things and how to figure out the answer to something you don't know. If people were to ask me, when they're getting into tech as a whole, or honestly most things, what's the first thing they should learn? One of the things that I would come back with almost instantly is how to ask questions in productive ways.

This used to be a problem in IRC. It's been a persistent, condescending problem people make fun of folks for in really obnoxious ways over on Stack Overflow. But you see it again and again and again: it's not working. It's the worst bug report in the world. I always liked the approach of breaking it down into a small reproduction case.

Okay, I'm trying to do X, and I'm not seeing it. Instead, I'm seeing Y. The documentation says this, but that's not what I'm seeing. And over half the time, when I'm putting together that minimal reproduction case, I solve the problem. It's, oh, I forgot something simple. And occasionally, sometimes, it's, oh, the documentation is wrong.

Or, huh, I found a really weird bug. What's going on? Because even if I discover these things, I am never the only person to make that mistake. I just have zero problems saying, Hey, I'm a fool. You know, commas are super important.

[00:15:34] Kyler: And I remember learning stuff. Anytime I have a problem, I go to Google and I find a Stack Overflow question that's, oh, it's my problem.

That's great. And the most upvoted answer is, only an idiot would ever ask this, you should never be doing what you're doing here. And it makes me... I have been mad for 15 years about those types of responses. So I am out there to seed the world, as much as I can, with stuff that says it's okay to not know.

In fact, it's probably better. That's where you should start. Ask those questions and educate yourself. And if someone doesn't know, teach them, don't make them feel bad. You were there once too.

[00:16:09] Corey: Few things are better for your career and your company than achieving more expertise in the cloud. Security improves, compensation goes up, employee retention skyrockets.

Panoptica, a cloud security platform from Cisco, has created an academy of free courses just for you. Head on over to academy.panoptica.app to get started. Anyone who comes back with, oh, you shouldn't ask that question, it's only something a moron would ask, it becomes ridiculous. Who answers a question like that in good faith?

Now, I will absolutely answer bad-faith questions like that, but you've got to do a lot of work to convince me you're asking something in bad faith. And to be honest, the signs are pretty freaking obvious.

[00:16:54] Kyler: Absolutely. Someone just saying, it doesn't work. Here's a picture. Like, I get it. You probably need to put a little more effort into giving me your situation and context and architecture and error messages.

But still, sometimes that's people who are busy. It's easy to give people grace and it's free to give people grace too.

[00:17:11] Corey: Remember, today someone else doesn't know something, or is having an issue, or whatnot, but tomorrow it's going to be you. And how do you want to be treated when that happens? Spoiler: it's probably not with a bunch of sarcastic jokes when you're in the middle of a production outage.

What is it that, I'm curious, what got you into the idea of effectively learning in public and writing everything down as you discover it yourself? Was this just something that you came by naturally? It's just a part and parcel of who you are? Is it a habit you had to train yourself to do?

[00:17:40] Kyler: It's a funny story.

When I first started doing computers as a young teenager, they kind of just made sense to me, like the color-coded connectors; it's very logical. That's how my brain works anyway, so it's great for me. And I didn't realize until college that that is a career, because it doesn't come easy to everyone.

And until that point, I was planning to be a librarian, because I like to help people. When I was a kid, the librarian was always there, and they were always helpful, and they never judged me for my dumb questions. And so I mixed that with just absolute chaos ADHD that can't remember anything: wanting to learn while not remembering anything.

I, you know, combined everything into writing everything down, doing as many podcasts as I can, because before it leaves my brain, which is about two weeks time, I need to write it down or I will not remember it.

[00:18:32] Corey: I am in the same boat. I've tried, but the ADHD urge to set up a system of note-taking and the rest...

Setting up a system is fun. That's day one. Day two, using the system? Nope. I have dozens of them all over the place. The thing that I have found that works for me is I can basically have a heap of notes. I use Drafts, historically. I'm getting into Obsidian. I'm sure I'm using it wrong, but it's a bunch of Markdown text files.

I know how to handle those. But you know what's gotten really, really good over the years? Searching through a bunch of text files with my old buddy grep. Who knew? So I can find the thing in the reference. It's not like a room where I have a big pile of components, like, now where did that hammer go?

And then I'm trying to find it and I can't, and, well, I guess technically isn't anything a hammer if you hold it right, and that leads to disaster and I've replaced an iPad. But yeah, there's this entire approach of, oh, I'm just going to have this system that works. The only system I found that works is letting computers do the things that I'm inherently bad at.
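
A minimal sketch of that "heap of text files plus search" workflow, written here in Python rather than with grep itself; the notes directory, the *.md extension, and the case-insensitive matching are all assumptions made for the example.

    #!/usr/bin/env python3
    """Grep-style search across a folder of Markdown notes."""
    import re
    import sys
    from pathlib import Path

    def search_notes(notes_dir: str, pattern: str) -> None:
        """Print each matching line as path:line_number:text."""
        regex = re.compile(pattern, re.IGNORECASE)
        for note in sorted(Path(notes_dir).expanduser().rglob("*.md")):
            for line_no, line in enumerate(note.read_text(errors="ignore").splitlines(), start=1):
                if regex.search(line):
                    print(f"{note}:{line_no}:{line.strip()}")

    if __name__ == "__main__":
        # Example: python search_notes.py ~/notes "nat gateway"
        search_notes(sys.argv[1], sys.argv[2])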

[00:19:31] Kyler: Absolutely. And that's their job. They're, they're dumb. They're not terribly creative, but they're incredibly fast. So use them for that. You're the creative one. And that's why I'm not terribly afraid of AI. And I suppose if I'm eating my words and eventually I'm put out of business or enslaved by an AI, like, I guess we all will be.

For now, you should be using AI to help you. It is there to assist. It's gonna be dumb, and it's gonna be not creative, but it can totally help you get there. And I similarly have, I think, 2,500 Apple Notes in my little Notes app, and I just search through them when I need to remember how to do something.

[00:20:07] Corey: I do want to talk about AI, because in 2024, I'm legally required to in every conversation I have, apparently. But I was a big skeptic of machine learning and AI for a long time. And it took a single instance, I think with GitHub Copilot, to radicalize me. Which was, I asked it, for funsies, okay, you think you're good at writing code, try this one.

Uh, go ahead and query the AWS Public Pricing API to come up with the hourly cost per region of a managed NAT gateway, and then display it in a table going from most to least expensive. And it did it, and it worked with very little tweaking, which was, okay, there is actually something of value here. This would have taken me a few hours to do myself, because I am bad at data structures, and this is the AWS Pricing API, because they are worse at data structures.
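
For anyone who wants to reproduce that experiment by hand rather than through Copilot, here is a rough sketch in Python with boto3. It assumes NAT Gateway pricing is filed under the AmazonEC2 service code with a productFamily of "NAT Gateway", and that the hourly rate is the on-demand price dimension whose unit is "Hrs"; treat those filter values as assumptions to double-check against the Pricing API documentation rather than a definitive recipe.

    """Hourly managed NAT gateway price per region, most to least expensive."""
    import json
    import boto3

    # The Pricing API is only served out of a couple of regions; us-east-1 is one of them.
    pricing = boto3.client("pricing", region_name="us-east-1")

    def nat_gateway_hourly_prices() -> dict:
        prices = {}
        next_token = None
        while True:
            kwargs = {
                "ServiceCode": "AmazonEC2",
                "Filters": [
                    {"Type": "TERM_MATCH", "Field": "productFamily", "Value": "NAT Gateway"},
                ],
                "MaxResults": 100,
            }
            if next_token:
                kwargs["NextToken"] = next_token
            response = pricing.get_products(**kwargs)
            for raw in response["PriceList"]:  # each entry is itself a JSON string
                item = json.loads(raw)
                region = item["product"]["attributes"].get("location", "unknown")
                for term in item["terms"].get("OnDemand", {}).values():
                    for dimension in term["priceDimensions"].values():
                        if dimension.get("unit") == "Hrs":  # skip per-GB data processing dimensions
                            usd = float(dimension["pricePerUnit"]["USD"])
                            prices[region] = max(prices.get(region, 0.0), usd)
            next_token = response.get("NextToken")
            if not next_token:
                break
        return prices

    if __name__ == "__main__":
        for region, usd in sorted(nat_gateway_hourly_prices().items(), key=lambda kv: kv[1], reverse=True):
            print(f"{region:<35} ${usd:.3f}/hr")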

And it looks like JSON, and often is not. But it's an annoying layout here. This would have taken a couple of hours for me to do easily. And I showed this to a senior engineer I work with, and his immediate response was, Well, that's great for like the easy stuff you'd throw to like a developer on Upwork, but there is always going to be a place for senior engineers.

And I thought that was an interesting response on a few axes. First, and obviously, was the defensiveness. Interesting. I didn't expect it, but I guess I'm not surprised that that does make sense. Like, is this thing coming for my job? Followed as well by, Okay, but let's be honest with ourselves for a second here.

Senior engineers don't just emerge fully formed from the forehead of some god. What is a senior engineer but a junior engineer who's fucked up enough times that now they know where the sharp edges are? So what happens if you outsource all of the easy, low-end stuff? Quote unquote easy; let's be clear, nothing is easy if you don't know how to do it.

So where do these people come from? And I've seen this in the ops world enough. I started off doing support, and that was a gateway to doing really interesting things and moving up and expressing curiosity. Today, those jobs don't exist the way they once did. They're largely metricked to death.

Effectively, you will quit the same job that you entered. There's not nearly the level of upward mobility that there once was, as the industry becomes increasingly stratified. So I don't know what it would be like to be entering the sector in 2024. I don't have advice that is useful. In fact, I worry that most advice I would have would be actively harmful in this era.

And I don't want to give boomer style interview advice. Oh, hit the bricks. Print out your resume on fancy paper. Ask to speak to the owner. Have a firm handshake. You'll have a job by dark. I don't want to give bad advice. But I do not know where I'd even start if I were entering the space today. What's your take on it?

[00:22:44] Kyler: I have the exact same perspective. I get asked all the time, well, how do I get started? I'm a janitor or something, I don't use computers for my day job, and I want to, because look at all the money there. And there is, there's so much money in tech today. And, um, my advice is the same: go out and do IT tier one, do support, learn how networking works.

But when AIs are doing those jobs, which they soon will be, right, we're gonna start to have licensed full-time-employee seats at larger companies that handle, you know, tier one. I have no idea how we get folks to tier two without them being able to do tier one. And my best advice is go to school for it.

But I'm a little bit worried that if you get a two- or four-year degree, by the time you're there, AI might have taken tier two as well. And so it's concerning for entry-level folks, and for just overall the health of the ecosystem here that leads to senior engineers. I don't have the answer here, and it's concerning.

[00:23:39] Corey: I'm very interested in the idea of sending the elevator back down. I find the attitude of, well, I got mine, kids will figure it out for themselves, to be largely abhorrent. I got supremely lucky in the course of my career. I am enough of a statistical aberration that I should not exist. And when people ask, oh, how do I grow my career the way that you do?

My immediate response was, oh my God, don't do that. Don't do that. Part of the reason I'm good at thinking on my feet and telling stories quickly is because I was great at getting myself fired from jobs. And when rent's due in six weeks and you don't have it, because I was also bad with money in my 20s, you learn to tell the story they want to hear during a job interview and get the offer quickly.

I don't suggest, then, as a result, well, how do I get to be more like you? Walk in tomorrow, call your boss an asshole, and get fired. That's step one. Don't do that. It is counterproductive.

[00:24:34] Kyler: I think of AI as an assistant that makes a lot of mistakes but is very fast. It's a tier one that is just kind of bad at their job.

But if you don't have the exposure in your career path, your ecosystem, to call it out when it does dumb shit, then I don't know what you do. I think you learn to trust it. And that problem will only get worse as it gets smarter and it starts to make mistakes a smaller percentage of the time. So today it's maybe right 90 percent of the time, and that means, like, intuitively, I don't trust it. If someone tells me a lie 10 percent of the time, I'm going to check the 90 percent that are right.

[00:25:10] Corey: Exactly. If I'm interviewing a candidate and they make up an answer to a technical question, which, by the way, if an interviewer asks you a technical question, they probably know what the right answer is, and the candidate is confidently wrong, I can't trust anything that they tell me.

The correct interview answer, by the way, is, I don't know, but if I had to guess, and then speculate wildly. If you're wrong, you've already disclaimed it. And if you're right, you've shown an ability to pick up concepts quickly and reason your way through a problem. Either way, it's a win. But I guess my challenge right now is that I see AI as being terrific at delivering a surface, topical level of answer to things.

But as soon as I start asking it about anything that I'm more than passingly familiar with and questioning it, its answers fall completely apart. And it's, okay, this is a thin veneer of bullshit on some level. And the disturbing part is the realization of just how much of the world functions on a thin veneer of bullshit.

And that's not to say it doesn't have value; it is useful. It is terrific at taking my emails, which are very terse, which codes as rude, and turning them polite and friendly. And it adds four paragraphs. And people are like, oh, it was such a lovely email you sent me. My prompt was simply, make this polite.

Give me the file. Like that. Great, make that polite. Good. And that's fine. But there's the danger of it: the hallucination problem, I think, is endemic. I don't know that there's necessarily going to be a fix there. And at some level, I can't shake the feeling that companies are over-indexing on AI as the solution to all things.

Way more aggressively than any aspect of the technology currently deserves.

[00:26:41] Kyler: I've seen so many jokey-style posts from people outside the industry saying: tech leaders see this, you know, technology that hallucinates 10 percent of the time, and they're like, oh great, let's put it in everything.

I'm sure nothing bad can happen. But of course it can. In the interview scenario you set up, I look for the exact same thing. I want candidates to say, I don't know, because that is very important to stop and evaluate when you're on a project and you don't know the answer. I would rather you go look or ask for help than just make it up, because make it up is where you destroy stuff.

You break your databases, you delete your data, you bring down production. So unless we can have AI say, I don't know, I need help, I'm going to have a hard time trusting it, ever. And I feel that we should have that response, and we don't. As an industry right now, we're just accepting that AI is going to lie to us and AI is going to make up stories.

And I don't think we should. I think that's concerning. I know they're banking on AI being fixed, whatever fixed means, by the time that it rolls out broadly. I sure hope that's true. I don't have the answer to that one either.

[00:27:46] Corey: I've seen no indication that the hallucinations are getting better. What I have seen in some example tests that I run, because I have a whole library of fun prompts I like to hit things with from time to time, and I usually don't share unless the answers are outright hilarious, but I have noticed an increased tendency of AI to double down when wrong.

[00:28:05] Kyler: Imagine that as a human person that you're hiring for a job, and you would say, of course not, you're not allowed in my company, I'm not giving you any authorization over my systems. And yet with AI, we're just like, eh, someone will catch it, probably. That is concerning.

[00:28:19] Corey: Something you learn pretty quickly as a consultant: when a client says something to you that you know for a fact to be wrong, you don't exactly win points by contradicting them outright.

You say, oh, that's interesting, that doesn't match my understanding and experience. Let's look it up together as a quick detour. Oh, look! It works! Oh, it turns out it does work this other way. Huh, I guess now we both know. And it's polite, it makes people feel like you've been along with them on the journey, and you didn't just call them out in front of their boss, which is helpful.

But there's a human element to this, and I think that there is an increasing direction of AI drift: nope, I'm going to choose my own facts, and basically, if you don't like it, you're wrong.

[00:28:57] Kyler: Absolutely, and I'm concerned. I'm very concerned about that part. I don't exactly think it'll take all of our jobs, but I am worried that we're going to pivot so heavily into a technology that imagines its own reality that we're going to start to over-depend on it and underdevelop our own skills.

And I see that as the future of AI: we're going to be shepherding the AIs and catching their bullshit and trying to correct their mistakes. And I don't know if I want that job as much as I like just building technology. Regardless, that is potentially the future in five or ten years: automating that AI.

[00:29:33] Corey: I have had such a mixed-bag experience with so much of the AI stuff. It's great, but if you send this output to the outside world to speak on behalf of your company, or as you, without human review at the least, you're a fool. I've done a number of experiments to see whether it can analyze AWS product releases in the tone of voice that I use and with the insight that I tend to apply.

And the answer is, on a terrific day, maybe 60 percent of it. It is tricky to get there. It misses a lot of it. And sometimes it goes in horrifying directions, like brand-destroying directions if this stuff saw the light of day, which is why it never does. It's, great, you've given me some awesome turns of phrase.

Thanks, AI. And you can create hilarious images when I basically bully you into it, but that's about it at the moment. I'm sure this will change, but today at least, I'm not willing to put my entire professional future in the hands of a robot.

[00:30:33] Kyler: I've heard an engineer, I can't remember their name now, say that AI today is the internet in 1999.

And sure, a 15-year-old will break into the FBI a couple of times and destroy a telco. That stuff's going to happen, so put your safeguards around your AI, but that doesn't mean stop using it. That means learn to use it better, learn how it works, develop it. And I think, for better or worse, that's our future, so that's the skill set you should be learning: how to use it and what its limitations are.

[00:31:01] Corey: I don't disagree. It reminds me on some level of the math teachers back in the 90s when I was in school telling me, Oh, you won't have a calculator in your pocket all the time, so you've got to be able to do it all by hand. And I think the better, more realistic answer is you have to understand enough about this to know that when the calculator gives you an insane answer, there's an error somewhere in there and maybe just don't blindly trust it.

But there are definitely scenarios where it almost feels like it's a protectionist thing. Like, well, I wouldn't do well in an AI world, so therefore I'm going to boo-hoo the whole thing. No, I think it would be terrific if this stuff worked as advertised. I am tired of it being shoved down my throat in every product under the sun and taking all the oxygen out of the room compared to, you know, things in infrastructure that I really care about today that are not touching AI. But I feel like that does get fixed in the fullness of time. I hope, anyway.

[00:31:47] Kyler: I hope so, too. And I don't have the answer. I don't think anyone does, because we're still collectively building it together. And they're sure, um, you know, collecting all the data that I put on the internet.

So, hey, I'll do my best to help out.

[00:31:59] Corey: Exactly. I really want to thank you for taking the time to speak with me about all of this. If people want to learn more, where's the best place for them to find you?

[00:32:07] Kyler: Totally. I have the hilariously URL'd kyler.omg.lol, which is a real website that you can go to, fantastically.

I'm very active on LinkedIn. Please connect and message me on there, and also on Medium, where I'm writing Let's Do DevOps, and Packet Pushers' Day 2 DevOps with Ned Bellavance.

[00:32:28] Corey: And we will put links to all of that in the show notes. Thank you so much for taking the time to speak with me today. I really appreciate it.

Thank you so much. Kyler Middleton, Senior Principal Engineer at Veradigm. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice. Whereas if you hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry, insulting comment that says, in small text at the very end, that this is not actually how you're supposed to do it, which I'm sure everyone will read.
