Episode 26: I’m not a data scientist, but I work for an AI/ML startup building on Serverless Containers
About the Author
Corey is the Chief Cloud Economist at The Duckbill Group, where he specializes in helping companies improve their AWS bills by making them smaller and less horrifying. He also hosts the "Screaming in the Cloud" and "AWS Morning Brief" podcasts, and curates "Last Week in AWS," a weekly newsletter summarizing the latest in AWS news, blogs, and tools, sprinkled with snark and thoughtful analysis in roughly equal measure.
Episode Summary
Do you deal with a lot of data? Do you need to analyze and interpret it? Veritone’s platform is designed to ingest audio, video, and other data through batch processes, analyze the media, and attach output such as transcripts or facial recognition data.
Today, we’re talking to Christopher Stobie, a DevOps professional with more than seven years of experience building and managing applications. Currently, he is the director of site reliability engineering at Veritone in Costa Mesa, Calif. Veritone positions itself as a provider of artificial intelligence (AI) tools designed to help other companies analyze and organize unstructured data. Previously, Christopher was a technical account manager (TAM) at Amazon Web Services (AWS); lead DevOps engineer at Clear Capital; lead DevOps engineer at ESI; cloud consultant at Credera; and worked in Patriot/THAAD missile fire control in the U.S. Army. Besides staying busy with DevOps and missiles, he enjoys playing racquetball in short shorts and drinking good (not great) wine.
Some of the highlights of the show include:
A wide range of problems can be solved with AI, and companies are spending real time and money on it
AI can automate tasks that require too much intelligence to handle with simple, hand-written software
Machine learning (ML) models are applicable to many purposes; real people with real problems, not just academics, can use ML
Fargate offers instant-on Docker containers as a service; it handles infrastructure scaling for you, though that management comes at a cost (see the launch sketch after this list)
Instant-on works even with numerous containers, but at some scale it will probably stop delivering a reasonably performant fleet on demand
The decision to use Kafka was driven by the workload: stream-based ingestion (see the producer sketch after this list)
Veritone writes code that tries to avoid provider lock-in and keeps each integration as decoupled as possible (see the adapter sketch after this list)
People spend too much time and energy staying technology-agnostic and give up real benefits in the process
If you dream of seeing your name up in lights, Christopher describes the process of writing a blog post for AWS
Pain points: the newness of Fargate and unfamiliarity with it; limit issues; and an inability to handle large containers
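To make the "containers as a service" idea concrete, here is a minimal sketch of launching a one-off Fargate task with boto3. The cluster name, task definition, subnet, and security group are placeholders, not values from the episode.

```python
import boto3

ecs = boto3.client("ecs")

# Launch a single task on Fargate; AWS provisions the underlying
# compute, so there is no EC2 fleet to size or patch.
response = ecs.run_task(
    cluster="media-processing",               # placeholder cluster name
    launchType="FARGATE",
    taskDefinition="transcription-engine:1",  # placeholder task definition
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0abc1234"],     # placeholder subnet
            "securityGroups": ["sg-0abc1234"],  # placeholder security group
            "assignPublicIp": "ENABLED",
        }
    },
)

print(response["tasks"][0]["taskArn"])
```

The trade-off shows up in the bill: there are no hosts to manage, but Fargate’s per-vCPU and per-GB pricing is typically higher than an equivalent EC2-backed cluster.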
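For the stream-based ingestion piece, a bare-bones Kafka producer might look like the sketch below (using the kafka-python client); the broker address, topic name, and payload shape are illustrative, not Veritone’s actual schema.

```python
import json

from kafka import KafkaProducer

# Connect to the Kafka cluster; the broker address is a placeholder.
producer = KafkaProducer(
    bootstrap_servers=["localhost:9092"],
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Publish one ingestion event per incoming media asset. Downstream
# consumers (transcription, facial recognition, etc.) read the stream
# at their own pace.
producer.send(
    "media-ingest",  # hypothetical topic name
    {
        "asset_id": "abc123",
        "media_url": "s3://example-bucket/clip.mp4",
        "requested_engines": ["transcription", "face-detection"],
    },
)
producer.flush()
```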
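On avoiding lock-in, one common way to keep an integration decoupled is to hide the vendor SDK behind a thin interface; the sketch below is a generic illustration of that pattern, not Veritone’s code.

```python
from abc import ABC, abstractmethod


class ObjectStore(ABC):
    """Application code depends on this interface, never on a vendor SDK."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class S3Store(ObjectStore):
    """AWS adapter; swapping providers means writing another adapter, not rewriting callers."""

    def __init__(self, bucket: str) -> None:
        import boto3  # the vendor import stays inside the adapter

        self._s3 = boto3.client("s3")
        self._bucket = bucket

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()
```

As the episode notes, chasing total agnosticism has a cost, so an abstraction like this is only worth it where switching providers is actually plausible.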
Links:
Veritone
Christopher Stobie on LinkedIn
Building Real Time AI with AWS Fargate
SageMaker
Fargate
Docker
Kafka
Digital Ocean
Episode Show Notes & Transcript