Episode 29: Future of Serverless: A Toy that will Evolve and Offer Flexibility
Are you a blogger? Engineer? Web guru? What do you do? If you ask Yan Cui that question, be prepared for several different answers.
Today, we’re talking to Yan, who is a principal engineer at DAZN. He also writes blog posts and develops courses. His insightful, engaging, and understandable content resonates with a variety of audiences. And he’s an AWS Serverless Hero!
Some of the highlights of the show include:
Some people get tripped up because they don’t bring the microservice practices they learned into the new world of serverless, and they face many challenges as a result
Educate others and share your knowledge; Yan does, as an AWS hero
Chaos Engineering Meeting Serverless: Figuring out what types of failures to practice for depends on what services you are using
An environment predicated on specific behaviors may mean enumerating the bad things that could happen, instead of building a resilient system that works as planned
API Gateway: Confusing for users because it can do so many different things; what is the right thing to do, given a particular context, is not always clear
Serverless feels like a toy right now, but it is good enough to run production workloads; the future of serverless is that it will continue to evolve and offer more flexibility
Serverless is used to build applications; DevOps/IoT teams and enterprises are adopting serverless because it makes solutions more cost-effective
Links:
Yan Cui on Twitter
DAZN
Production-Ready Serverless
Theburningmonk.com
Applying Principles of Chaos Engineering to Serverless
AWS Heroes
re:Invent
Lambda
Amazon S3 Service Disruption
API Gateway
Ben Kehoe
Digital Ocean
Episode 28: Serverless as a Consulting Cash Register (now accepting Bitcoin!)
Is your company thinking about adopting serverless and running with it? Is there a profitable opportunity hidden in it? Ready to go on that journey?
Today, we’re talking to Rowan Udell, who works for Versent, an Amazon Web Services (AWS) consulting partner in Australia. Versent focuses on specific practices, including helping customers with rapid migrations to the Cloud and going serverless.
Some of the highlights of the show include:
Australia is experiencing an increase in developers using serverless tool services and serverless being used for operational purposes
Serverless seems to be either a brilliant fit or not quite ready for prime time
Misconceptions include keeping functions warm and setting up scheduled invocations
Simon Wardley talked about how the flow of capital can be traced through an organization that has converted to serverless
Concept of paying thousands of dollars up front for a server is going away
Spend whatever you want, but be able to explain where the money is going (dev vs. prod); companies will re-evaluate how things get done
Serverless is either known as an evolution or revolution; transformative to a point
We’re winding up with a large number of shops that, when something breaks, don’t have the experience to fix it; gain practical experience through sharing
Seek developer feedback and perform testing, but know where and when to stop
With serverless, you have little control of the environment; focus on automated parts you do control
Serverless Movement: People have opinions and want you to know them
Understand continuum of options for running your application in the Cloud; learn pros and cons; and pick the right tool
Reconciliation between serverless and containers will need to play out; changes will come at some point
Blockchain + serverless + machine learning + Kubernetes + service mesh = raise entire seed round
Links:
Rowan Udell’s Blog
Rowan Udell on Twitter
Versent on Twitter
Lambda
Simon Wardley
Open Guide to AWS Slack Channel
Kubernetes
Aurora
Digital Ocean
Episode 27: What it Took for Google to Make Changes: Outages and Mean Tweets
Google Cloud Platform (GCP) turned off a customer that it thought was doing something out of bounds. This led to Internet outrage, and GCP tried to explain itself and prevent the problem from recurring.
Today, we’re talking to Daniel Compton, an independent software consultant who focuses on Clojure and large-scale systems. He’s currently building Deps, a private Maven repository service. As a third-party observer, we pick Daniel’s brain about the GCP issue, especially because he wrote a post called, Google Cloud Platform - The Good, Bad, and Ugly (It’s Mostly Good).
Some of the highlights of the show include:
Recommendations: Use enterprise billing - costs thousands of dollars; add phone number and extra credit card to Google account; get support contract
Google describing what happened and how it plans to prevent it in the future seemed reasonable; but why did it take this for Google to make changes?
GCP has inherited cultural issues that don’t work in the enterprise market; GCP is painfully learning that it needs to change some things
Google tends to focus on writing services aimed purely at developers; it struggles to put itself in the shoes of corporate-enterprise IT shops
GCP has a few key design decisions that set it apart from AWS; focuses on global resources rather than regional resources
When picking a provider, is there a clear winner? AWS or GCP? Consider company’s values, internal capabilities, resources needed, and workload
GCP’s tendency to end service on something people are still using vs. AWS never ending a service tends to push people in one direction
GCP has built a smaller set of services that are easy to get started with, while AWS has an overwhelming number of services
Different Philosophies: Not every developer writes software as if they work at Google; AWS meets customers where they are, fixes issues, and drops prices
GCP understands where it needs to catch up and continues to iterate and release features
Links:
Daniel Compton
Daniel Compton on Twitter
Google Cloud Platform - The Good, Bad, and Ugly (It’s Mostly Good)
Deps
The REPL
Postmortem for GCP Load Balancer Outage
AWS Athena
Digital Ocean
Episode 26: I’m not a data scientist, but I work for an AI/ML startup building on Serverless Containers
Do you deal with a lot of data? Do you need to analyze and interpret data? Veritone’s platform is designed to ingest audio, video, and other data through batch processes to process the media and attach output, such as transcripts or facial recognition data.
Today, we’re talking to Christopher Stobie, a DevOps professional with more than seven years of experience building and managing applications. Currently, he is the director of site reliability engineering at Veritone in Costa Mesa, Calif. Veritone positions itself as a provider of artificial intelligence (AI) tools designed to help other companies analyze and organize unstructured data. Previously, Christopher was a technical account manager (TAM) at Amazon Web Services (AWS); lead DevOps engineer at Clear Capital; lead DevOps engineer at ESI; Cloud consultant at Credera; and Patriot/THAAD Missile Fire Control in the U.S. Army. Besides staying busy with DevOps and missiles, he enjoys playing racquetball in short shorts and drinking good (not great) wine.
Some of the highlights of the show include:
Various problems can be solved with AI; companies are spending time and money on AI
Tasks that are too complex to handle with simple hand-written software can now be automated
Machine learning (ML) models are applicable for many purposes; real people with real problems and who are not academics can use ML
Fargate is instant-on Docker containers as a service; handles infrastructure scaling, but involves management expense
Instant-on works with numerous containers, but there will probably be a time when it no longer delivers reasonable fleet performance on demand
Decision to use Kafka was based on workload, stream-based ingestion
Veritone writes code that tries to avoid provider lock-in; it wants to make each integration as decoupled as possible
People spend too much time and energy being agnostic to their technology and giving up benefits
If you dream about seeing your name up in lights, Christopher describes the process of writing a post for AWS
Pain Points: Newness of Fargate and unfamiliarity with it; limit issues; unable to handle large containers
Links:
Veritone
Christopher Stobie on LinkedIn
Building Real Time AI with AWS Fargate
SageMaker
Fargate
Docker
Kafka
Digital Ocean
Episode 25: Kubernetes is Named After the Greek God of Spending Money on Cloud Services
Google builds platforms for developers and strives to make them happy. There's a team at Google that wakes up every day to make sure developers have great outcomes with its services and products. The team listens to the developers and brings all feedback back into Google. It also spends a lot of time all over the world talking to and connecting with developer communities and showing stuff being worked on. It doesn't do the team any good to build developer products that developers don’t love.
Today, we’re talking to Adam Seligman, vice president of developer relations at Google, where he is responsible for the global developer community across product areas. He is the ears and voice for customers.
Some of the highlights of the show include:
Google tackles everything in an open source way: Shipping feedback, iteration, and building communities
Storytelling - the Tale of Kubernetes: in a short period of time, it has gone from an open source project that Google spearheaded to something sweeping the industry
Rise of containerization inside Linux Kernel is an opportunity for Google to share container management technology and philosophy with the world
Google Next: the Knative journey toward lighter-weight serverless-based applications; and GKE On-Prem, for customers and teams running Kubernetes on premises
Innovation: When logging into the GCP console, you can terminate all billable resources assigned to a project and access a tab for building by hand
GCP's console development strategy includes hard work on documentation, making things easy to use, and building thoughtfulness in grouping services
Google is about design goals, tradeoffs, and metrics; it’s about hyper scale and global footprint of requirements, as well as supporting every developer
Misconception 1: Google only builds hyperscale, read-centric, user-partitioned apps and doesn’t build globally consistent, data-driven apps
Misconception 2: Software engineers at the top Internet companies write amazing code instantly
12-Factor App: Opinions of how to architect apps; developers should have choices, but take away some cognitive and operating load complexity
Businesses are running core workloads on Google, which had to put atomic clocks in data centers and private fiber networking to make it all work
Perception that Google focuses on new things, rather than supporting what's been released; industry is on a treadmill chasing shiny things and creating noise
Industry needs to be welcoming and inclusive; there’s demand for software, apps, and innovation, but the number of developers remains flat because everyone’s not included
Human vs. Technology: More investment and easier onboarding with technology and an obligation to build local communities
Goal: Take database complexity and start removing it for lots of use cases, simplifying things for users dealing with replication, sharding, and consistency issues
DevFest: Google has about 800 Google developer groups that do a lot of things to build local communities and write code together
Links:
Adam Seligman on Twitter
12-Factor App
I Want to Build a World Spanning Search Engine on Top of GCP
DevFest
Kubernetes
Docker
Heroku
Google Next
Google Reader
Episode 24: Serverless Observability via the bill is terrible
What is serverless? What do people want it to be? Serverless is when you write your software, deploy it to a Cloud vendor that will scale and run it, and you receive a pay-for-use bill. It’s not necessarily a function of a service, but a concept.
Today, we’re talking to Nitzan Shapira, co-founder and CEO of Epsagon, which brings observability to serverless Cloud applications by using distributed tracing and artificial intelligence (AI) technologies. He is a software engineer with experience in software development, cyber security, reverse engineering, and machine learning.
Some of the highlights of the show include:
Modern renaissance of “functions as a service” compared to past history; is as abstracted as it can be, which means almost no constraints
If you write your own software, ship it, and deploy it - it counts as serverless
Some treat serverless as event-driven architecture where code swings into action
When being strategic to make it more efficient, plan and develop an application with specific and complicated functioning
Epsagon is a global observer for what the industry is doing and how it is implementing serverless as it evolves
Trends and use cases include focusing on serverless first instead of the Cloud
Economic Argument: Less expensive than running things all the time and offers ability to trace capital flow; but be cautious about unpredictable cost
Use the bill to determine how much compute time has been spent and where
Companies seem to be trying to support every vendor’s serverless offering; when it comes to serverless, AWS Lambda appears to be used most often
Not easy to move from one provider to another; on-premise misses the point
People starting with AWS Lambda need familiarity with other services, which can be a reasonable but difficult barrier that’s worth the effort
Managing serverless applications may have to be done through a third party
Systemic view of how applications work focuses on overall health of a system, not individual function
Epsagon is headquartered in Israel, along with other emerging serverless startups; Israeli culture fuels innovation
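The bill-as-observability idea above boils down to simple arithmetic: Lambda charges a per-request fee plus GB-seconds of duration, so a function’s usage profile maps directly onto dollars. A minimal sketch (the prices below are illustrative assumptions, not current list prices):

```python
# Rough AWS Lambda cost model: per-request fee plus GB-seconds of compute.
# Prices are illustrative assumptions; real prices vary by region and change.
PRICE_PER_REQUEST = 0.20 / 1_000_000   # $0.20 per million invocations
PRICE_PER_GB_SECOND = 0.0000166667     # duration price per GB-second

def lambda_monthly_cost(invocations, avg_duration_ms, memory_mb):
    """Estimate one function's monthly cost from its usage profile."""
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return invocations * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND

# 10M invocations/month at 120 ms average on 512 MB comes to about $12.
cost = lambda_monthly_cost(10_000_000, 120, 512)
```

Reading the bill this way also shows where optimization pays off: halving average duration or memory halves the duration component of the bill.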
Links:
Epsagon
Email Nitzan Shapira
Nitzan Shapira on Twitter
Heroku
Google App Engine
AWS Elastic Beanstalk
Lambda
Amazon CloudWatch
AWS X-Ray
Simon Wardley
Charity Majors
Start-Up Nation
Digital Ocean
Episode 23: Most Likely to be Misunderstood: The Myth of Cloud Agnosticism
It is easy to pick apart the general premise of Cloud agnosticism being a myth. What about reasonable use cases? Well, generally, when you have a workload that you want to put on multiple Cloud providers, it is a bad idea. It’s difficult to build and maintain. Providers change, some more than others. The ability to work with them becomes more complex. Yet, Cloud providers rarely disappoint you enough to make you hurry and go to another provider.
Today, we’re talking to Jay Gordon, Cloud developer advocate for MongoDB, about databases, distribution of databases, and multi-Cloud strategies. MongoDB is a good option for people who want to build applications quicker and faster but not do a lot of infrastructural work.
Some of the highlights of the show include:
It’s easier to assume distributed data is reliable and available than to plan for it not being reliable and available
People spend time buying an option that doesn’t work, at the cost of feature velocity
If Cloud provider goes down, is it the end of the world?
Cloud offers greater flexibility; but no matter what, there should be a secondary option when a critical path comes to a breaking point
Hand-off from one provider to another is more likely to cause an outage than a multi-region single provider failure
Exclusion of Cloud-Agnostic Tooling: The more we create tools that do the same thing regardless of provider, the more agnosticism we’ll see from implementers
Workload-dependent where data gravity dictates choices; bandwidth isn’t free
Certain services are only available on one Cloud due to licensing; but tools can help with migration
Major service providers handle persistent parts of architecture, and other companies offer database services and tools for those providers
Cost may or may not be a factor in why businesses stay with one Cloud instead of multi-Cloud
How much RPO (recovery point objective) and RTO (recovery time objective) play into a multi-Cloud decision
Selecting a database/data store when building; consider security encryption
Links:
Jay Gordon on Twitter
MongoDB
The Myth of Cloud Agnosticism
Heresy in the Church of Docker
Kubernetes
AWS Secrets Manager
JSON
Digital Ocean
Episode 22: The Chaos Engineering experiment that is us-east-1
Trying to convince a company to embrace the theory and idea of Chaos Engineering is an uphill battle. When a site keeps breaking, Gremlin’s plan involves breaking things intentionally. How do you introduce chaos as a step toward making things better?
Today, we’re talking to Ho Ming Li, lead solutions architect at Gremlin. He takes a strategic approach to deliver holistic solutions, often diving into the intersection of people, process, business, and technology. His goal is to enable everyone to build more resilient software by means of Chaos Engineering practices.
Some of the highlights of the show include:
Ho Ming Li previously worked as a technical account manager (TAM) at Amazon Web Services (AWS) to offer guidance on architectural/operational best practices
Difference between and transition to solutions architect and TAM at AWS
Role of TAM as the voice and face of AWS for customers
Ultimate goal is to bring services back up and make sure customers are happy
Amazon Leadership Principles: Mutually beneficial to have the customer get what they want, be happy with the service, and achieve success with the customer
Chaos Engineering isn’t about breaking things to prove a point
Chaos Engineering takes a scientific approach
Other than during carefully staged DR exercises, DR plans usually don’t work
Availability Theater: A passive data center is not enough; exercise DR plan
Chaos Engineering is bringing it down to a level where you exercise it regularly to build resiliency
Start small when dealing with availability
Chaos Engineering is a journey of verifying, validating, and catching surprises in a safe environment
Get started with Chaos Engineering by asking: What could go wrong?
Embrace failure and prepare for it; business process resilience
Gremlin’s GameDay and Chaos Conf allows people to share experiences
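The "start small" advice above can be made concrete with a tiny fault injector: wrap a single call path and add random latency to it, which is about the smallest chaos experiment possible. A hypothetical sketch (the names and parameters are assumptions, not Gremlin’s API):

```python
import random
import time

def inject_latency(probability=0.1, max_delay_s=0.5, seed=None):
    """Decorator that randomly delays calls, simulating a slow dependency."""
    rng = random.Random(seed)
    def wrap(fn):
        def inner(*args, **kwargs):
            if rng.random() < probability:
                time.sleep(rng.uniform(0, max_delay_s))  # the injected fault
            return fn(*args, **kwargs)
        return inner
    return wrap

@inject_latency(probability=0.5, max_delay_s=0.01, seed=42)
def fetch_user(user_id):
    # Stands in for a real downstream call whose latency you want to test.
    return {"id": user_id}
```

Running your existing tests or a load generator against the wrapped function answers the "what could go wrong?" question for one dependency before you scale the experiment up.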
Links:
Ho Ming Li on Twitter
Gremlin
Gremlin on Twitter
Gremlin on Facebook
Gremlin on Instagram
Gremlin: It’s GameDay
Chaos Engineering Slack
Chaos Conf
Amazon Leadership Principles
Adrian Cockcroft and Availability Theater
Digital Ocean
Episode 21: Remember when RealNetworks used to– BUFFERING
Are you about to head off to college? Interested in DevOps and the Cloud? Is there a good way for someone like you who is starting out in the world of technology to absorb the necessary skills? The Open Source Lab (OSL) at Oregon State University (OSU) is one program that helps students and serves as a career accelerator. OSL is a unicorn because OSU is willing to invest in open source.
Today, we’re talking to Lance Albertson, director of OSL at OSU. OSL does a variety of projects to provide private Clouds that are neutrally hosted on its premises. The lab also gives undergraduate students hands-on experience with DevOps skills, including dealing with configuration management, deploying applications, learning how applications deploy, working with projects, and troubleshooting issues. OSL is for any student who has a general interest or passion for it, and a willingness to learn.
Some of the highlights of the show include:
Workflow focuses on what students need to learn about Linux and giving access to various repos; then they experience the lab’s configuration management suite
Interview Process: Put out a posting, student submits an application online, each candidate is reviewed, and the student is given a screening quiz
If a student passes the screening process, they are brought in for an in-person interview for personality and technical questions
Students tend to initially have the least amount of experience and most difficulty with a repository that has multiple people committing to it and dealing with PRs
Spinning up VMs and understanding how configuration management is connected, how services communicate, and how to set up an application
Round-Robins and System Sprint Meetings: Focus on discussing and documenting processes, issues, suggestions, comments, and other information
Younger students are mentored by Lance and the older students; every generation has to evolve because the environment and industry evolve
OSL made OpenStack work on POWER8, PowerPC, and PowerPC little-endian; gateway into Cloud - having OpenStack instance to offer services
Vast majority of OSL’s revenue comes from donations; no direct support from the university; finding companies to serve as sponsors is beneficial to all
Future of OSL: Providing more Cloud-like services; creating a more internal, private Cloud; and containerized ways of running or deploying applications
Links:
Apache Software Foundation
BusyBox
Buildroot
Chef
Ruby
Freenode
OpenStack
Sphinx
Docker
Neutron
Ceph
Rackspace
CoreOS
Kubernetes
Digital Ocean
Episode 20: The Wizard of AWS
Today, we’re talking to Jeff Barr, vice president and chief evangelist at Amazon Web Services (AWS). He founded the AWS Blog in 2004 and has written more than 2,900 posts for it and another 1,100 for his personal blog. As chief evangelist, Jeff strives to explain the benefits of Cloud computing and Web services to anyone who will listen.
Jeff is the voice of AWS. He does what he does best - exploits his superpower of explaining technology in ways that people can understand. He tries to be the same person all the time. He loves to meet people and goes out of his way to say “Hello.” So, if you see him at re:Invent, say “Cheese” and take a selfie with him!
Some of the highlights of the show include:
Jeff uses AWS Workspaces for his blog; one of Jeff’s blogging principles is to not take anybody else's word for anything to the absolute best of his technical ability
Zero Client: Jeff has no rotating hardware, disk drives, just a zero client; wherever he is, it's the same workspace
AWS has something for everyone; it builds things in response to customers’ questions, requests, and feedback
Naming Services and Products: Is it helpful? Is it descriptive? Does it have any hidden meanings?
Amazonian DNA and Dog Friendly Workspace: Jeff went from super fearful to accepting, to now thinking of dogs as incredible creations because they add fun and excitement to the office
As part of hiring, each interviewer is assigned Amazon leadership principles (LPs) to ask questions that measure a candidate against those LPs
What is the secret to getting hired at Amazon? Study the LPs to understand what they're about and be able to express your philosophies and history with LPs
re:Invent makes sure customers understand services - What is it? What does it do? How do they put it to work? What are the best use cases for it?
Things can never be too simple; you start from zero, put a lot of different things in there, and then you need the feedback to build in simplicity
AWS is following a more on-demand approach than traditional reserved instances; it opens the door to being used in a lot of ways
AWS does a lot of work before a launch to make sure it’s got infrastructure, scaling, monitoring, and capacity in place
If you are a customer, talk to AWS and let them know what they're doing right or wrong; write a blog post, tweet about it, share it with them in some way
Is the breadth of product offerings from AWS too vast? Is it offering too many things?
AWS was not explicit about where it was going with Cloud computing, nor did it do analyses or projections about it; it simply launched SQS and let it speak for itself
Customer feedback shapes what Amazon works on; customers share and then AWS re-prioritizes to make sure it’s delivering the right thing at the right time
Remember: It's not just bits and bytes, it's about the organic life form
Links:
Jeff Barr on Twitter
Jeff Barr on LinkedIn
AWS
AWS Blog
Jeff Barr’s Blog
Amazon Machine Images
Zero Client
AWS Workspaces
AWS Lambda
Amazon Leadership Principles
re:Invent
The Robot Uprising Will Have Very Clean Floors
Serverlessly Storing My Dad Jokes in a Dadabase
Days Until re:Invent
Episode 19: I want to build a world spanning search engine on top of GCP
Some companies that offer services expect you to do things their way or hit the highway. However, Google expects people to simply adapt the tech company’s suggestions and best practices for their specific context. This is how things are done at Google, but this may not work in your environment.
Today, we’re talking to Liz Fong-Jones, a Senior Staff Site Reliability Engineer (SRE) at Google. Liz works on the Google Cloud Customer Reliability Engineering (CRE) team and enjoys helping people adapt reliability practices in a way that makes sense for their companies.
Some of the highlights of the show include:
Liz figures out an appropriate level of reliability for a service and how a service is engineered to meet that target
Staff SRE involves implementation, and then identifying and solving problems
Google’s CRE team makes sure Google Cloud customers can build seamless services on the Google Cloud Platform (GCP)
Service Level Objectives (SLOs) include error budgets, service level indicators, and key metrics to resolve issues when technology fails
Learn from failures through incident reports and shared post-mortems; be transparent with customers and yourself
GCP: Is it part of Google or not? It’s not a division between old and new.
Perceptions and misunderstandings of how Google does things and how it’s a different environment
Google’s efforts toward customer service and responsiveness to needs
Migrating between different Cloud providers vs. higher level services
How to use Cloud machine learning-based products
GCP needs to focus on usability to maintain a phase of growth
Offer sensible APIs; turn up, turn down, and update in a programmatic fashion
Promotion vs. Different Job: When you’ve learned as much as you can, look for another team to teach something new
What is Cloud and what isn’t? Cloud deployments require SRE to be successful but SREs can work on systems that do not necessarily run in the Cloud.
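The SLO machinery mentioned above reduces to simple arithmetic: a 99.9% availability target leaves 0.1% of requests (or minutes) as the error budget you are allowed to spend on failures. A minimal sketch, with hypothetical numbers:

```python
def error_budget_minutes(slo, period_minutes=30 * 24 * 60):
    """Minutes of allowed downtime in a period for a given availability SLO."""
    return (1 - slo) * period_minutes

def budget_remaining(slo, total_requests, failed_requests):
    """Fraction of the error budget still unspent (negative = overspent)."""
    allowed_failures = (1 - slo) * total_requests
    return 1 - failed_requests / allowed_failures

# A 99.9% SLO over a 30-day month allows ~43.2 minutes of downtime.
downtime = error_budget_minutes(0.999)
# 400 failures out of 1M requests spends 40% of a 99.9% budget.
remaining = budget_remaining(0.999, total_requests=1_000_000, failed_requests=400)
```

When the remaining budget approaches zero, the team shifts effort from shipping features to reliability work; that trade-off is the point of having a budget at all.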
Links:
Cloud Spanner
Kubernetes
Cloud Bigtable
Google Cloud Platform blog - CRE Life Lessons
Google SRE on YouTube
Episode 18: Sitting on the curb clapping as serverless superheroes go by
What’s serverless? Are you serverless now? Is going from enterprise to serverless a natural evolution? Or, is it a “that was fun, now let’s go ride our bikes” moment? Is serverless “just a toy?” Is it a wide and varied ecosystem, or is it Lambda plus some other randos? What's up with serverless vs. containers?
Today, Forrest Brazeal is here to answer those questions and discuss pros and cons of serverless. He was a senior Cloud architect prior to joining Trek10. Forrest spent several years leading AWS and serverless engineering projects at Infor. He understands the challenges faced by enterprises moving to the Cloud and enjoys building solutions that provide maximum business value at a minimal cost.
Some of the highlights of the show include:
Bimodality: Backend development going away and being replaced by managed services; undifferentiated items are being moved to the Cloud
Serverless is application designs with “Backend as a Service” (BaaS) and/or “Functions as a Service” (FaaS) platforms; everything is managed for you
AWS Lambda: Is it today’s trend or a bias that everyone is using it; Lambda makes up 80% of current FaaS adoption
Serverless Ecosystem: You can build it however you want, and you’re doing it right; but don’t take that at face value; no two Lambda environments are alike
Cloud services at this scale have not been knitted together to form applications that are serving major workloads; best practices need to be established
Native Cloud providers will consolidate, and individual frameworks will be created with components of application stacks tied together to build systems
Serverless vs. Containers: No need for disparity - we can learn to get along; people use containers because it is easier than going serverless
Serverless Heroes series features people thinking out-of-the-box and helps identify emerging trends; serverless is growing, and it’s not just about startups
Went from working with a Sharpie to Procreate for the FaaS and Furious cartoon series; the serverless component of the process handles invoicing
Changes? Packaging to handle sharing; more knobs on console; unified process needed because too many building own workflow and tooling
Certification: Proof-positive that you know what you’re talking about or is it questionable value if not backing up expertise in the real world?
Links:
Forrest Brazeal on Twitter
Invoiceless
Summon the vast power of certification - Dilbert cartoon
Trek10 blog
A Cloud Guru ThinkfaaS podcast
A Cloud Guru - Serverless Superheroes
Why We’re Excited About AWS AppSync
Serverless Architectures with Mike Roberts
AWS Lambda
AWS Serverless Application Model (SAM)
Procreate
AWS Certified Cloud Practitioner
Serverlessconf
Digital Ocean