People tend to think of ChatOps as “a conversation-driven means of running software.” But that, my friends, is an oversimplification that misses a crucial point.
ChatOps is “the novel operational practice of expanding your security perimeter to anyone who has access to the right Slack channel or to Slack’s production infrastructure.” This is obviously my own definition, and people tend not to talk about it this way. I’m afraid that’s going to be a big problem.
You see, there seems to be a large-scale aversion to discussing the risks of ChatOps in public, and I can’t shake the feeling that this is going to bite all of us in the end.
Slack: Storage for all your secrets
Unless you’ve been living in a hole for the last decade, you’ve encountered Slack. Yes, some people use Microsoft Teams for work instead. I neither understand nor endorse this behavior, and neither should you, because Teams is trash.
Slack, a Salesforce company, is also the single organization I would attempt to breach if I were looking to do some real damage.
Why? While people store code and databases and naughty videos in their AWS accounts, they talk about things ranging from lunch plans to mergers and acquisitions to their passwords to their extramarital affairs to their insider trading crimes within Slack. This is largely considered a boon for regulators looking to simplify their e-discovery.
People treat chat as if it were ephemeral, with messages gone soon after they’re sent — but this isn’t Snapchat we’re talking about here. All of your Slack messages live not in some ephemeral database like an early version of MongoDB, but rather as rows in MySQL. Slack’s security team is excellent, because it pretty darn well has to be. If it isn’t, your deepest chat secrets are but a SQL query away.
Anyway, some enterprising folks eventually instrumented Slack a bit, because “Jimothy, do you want to go to lunch?” isn’t that far removed from “AWS, deploy to production.” The sound effect Slack plays when that message arrives is the creeeeak of Pandora’s Docker Container opening.
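If you’ve never seen that mechanic up close, here’s a minimal sketch of what “instrumenting Slack” tends to look like, using Slack’s Bolt framework for Python and boto3. The /deploy command, pipeline name, and tokens are all hypothetical inventions of mine; the point is how little stands between a chat message and a production API call.

```python
# A minimal sketch of the ChatOps mechanic, not any particular company's bot:
# a Slack slash command that kicks off an AWS deployment. The command name,
# pipeline name, and tokens are hypothetical.
import os

import boto3
from slack_bolt import App

app = App(
    token=os.environ["SLACK_BOT_TOKEN"],
    signing_secret=os.environ["SLACK_SIGNING_SECRET"],
)
codepipeline = boto3.client("codepipeline")


@app.command("/deploy")
def deploy(ack, command, respond):
    """Anyone in the channel who can type can now trigger a deploy."""
    ack()
    execution = codepipeline.start_pipeline_execution(name="prod-deploy-pipeline")
    respond(
        f"Started deploy {execution['pipelineExecutionId']} "
        f"because {command['user_name']} asked nicely."
    )


if __name__ == "__main__":
    app.start(port=3000)
```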
Enter AWS ChatOps and start panicking
Never one to spy an ill-defined buzzword without enthusiastically launching a service into the category, AWS created a full-on service called, of course, AWS Chatbot. It’s roughly here that, as they say, our troubles begin.
With the magic of ChatOps, I fear that among the profound secrets Slack holds is full root access to your company’s AWS accounts.
The AWS Chatbot documentation has a deep dive into how to configure Chatbot permissions, which approximately nobody reads or implements. I mean, look at this terrifying thing! Users can be assigned roles, they can change roles, they can assume roles, and at least some of the roles we’re talking about are IAM roles.
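For the record, here’s a rough sketch of what that deep dive is nudging you toward: a channel role that trusts the Chatbot service principal and gets read-only access rather than the keys to the kingdom. The role name is mine, invented for illustration; season to taste for what your channel actually needs.

```python
# A sketch of the "actually read the deep dive" version: a channel role that
# trusts the AWS Chatbot service principal and gets read-only access rather
# than admin. The role name is hypothetical.
import json

import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "chatbot.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="chatbot-incident-channel",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Read-only role for the incident-response Slack channel",
)

# ReadOnlyAccess instead of AdministratorAccess: the bot can describe things,
# not change them.
iam.attach_role_policy(
    RoleName="chatbot-incident-channel",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
)
```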
Folks are rarely as diligent as we (and, belatedly, they) wish they were when it comes to security. There’s a whole mess of fiddly-to-troubleshoot bits in Chatbot setup that people often override, saying, “The hell with it, I trust the team, we’ll just grant them admin-level access and fix it later.” “Later” never comes, and the rise of the ChatOps phenomenon leaves those Slack users with the access to do truly terrible things in sensitive environments.
If a company invites the wrong user to the wrong channel, if Slack suffers a security lapse, or if there’s an insider threat at Slack itself, there’s now an entirely new attack vector against that company’s AWS environment. “Insider trading” cuts less viscerally at the engineering mind than “access to production” or “how AWS might respond to a passive-aggressive API call.”
What makes this pernicious and borderline unique is that I don’t see people talking about this risk as anything other than a passing abstract thought.
They’re trying: AWS’s permission policies
Now, let’s be sure to give AWS some credit here. There are a bunch of permissions that AWS flat-out will not support via Chatbot, no matter how badly you misconfigure the thing.
That said, there’s more than a little ambiguity here. It’s a denylist rather than an allowlist, so who’s responsible for keeping that list updated with new and excitingly dangerous services? Is there a baked-in permissions boundary that won’t shed these restrictions the moment the Chatbot assumes a different role via STS? As one example, it blocks an EC2 permission, but disturbingly not the associate-iam-instance-profile variant.
I haven’t gone in-depth with this yet, but I can envision a few ways a bad actor could brush aside these limitations as currently written.
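If you’d rather not wait to find out, one option is to backstop AWS’s denylist yourself: a permissions boundary on the channel role that explicitly denies the instance-profile shuffle and the role-hopping that would shed Chatbot’s restrictions. This is a sketch under those assumptions, not a guarantee that every hole is closed, and the policy and role names are made up.

```python
# One way to backstop the denylist yourself: a permissions boundary on the
# channel role that explicitly denies the instance-profile swap actions and
# the role-hopping that would leave Chatbot's restrictions behind. A sketch,
# not a guarantee; the policy and role names are hypothetical.
import json

import boto3

iam = boto3.client("iam")

boundary_doc = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowWithinTheBoundary",
            "Effect": "Allow",
            "Action": "*",
            "Resource": "*",
        },
        {
            "Sid": "DenyTheEscapeHatches",
            "Effect": "Deny",
            "Action": [
                "ec2:AssociateIamInstanceProfile",
                "ec2:ReplaceIamInstanceProfileAssociation",
                "iam:PassRole",
                "sts:AssumeRole",
            ],
            "Resource": "*",
        },
    ],
}

boundary = iam.create_policy(
    PolicyName="chatbot-channel-boundary",
    PolicyDocument=json.dumps(boundary_doc),
)

iam.put_role_permissions_boundary(
    RoleName="chatbot-incident-channel",
    PermissionsBoundary=boundary["Policy"]["Arn"],
)
```

A boundary only caps the role it’s attached to, which is why the sts:AssumeRole deny is doing the heavy lifting here: without it, a single role hop leaves the boundary behind.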
Now’s the time to talk about Slack
I’m not suggesting that Slack is a bad company or product, nor that they have suffered or will suffer a breach. I’m also not suggesting that AWS doesn’t have the tools in place to appropriately limit the blast radius of ChatOps. I’m not even suggesting that any of this is a bad idea!
I am suggesting that people are fallible. From where I sit, Slack with AWS Chatbot feels like a major risk factor that largely goes unacknowledged by the folks responsible for managing risk appropriately. If that’s you, you might want to look a little more closely into your company’s ChatOps guardrails.
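If you’d like somewhere to start, here’s a rough audit sketch. It assumes boto3 and that your Chatbot channel roles trust the chatbot.amazonaws.com service principal, and it simply flags the roles most likely to ruin your week.

```python
# A rough audit sketch: find every role your Chatbot channels can assume and
# flag the ones that look like "we'll fix it later." Assumes the channel roles
# trust the chatbot.amazonaws.com service principal.
import json

import boto3

iam = boto3.client("iam")

for page in iam.get_paginator("list_roles").paginate():
    for role in page["Roles"]:
        trust = json.dumps(role["AssumeRolePolicyDocument"])
        if "chatbot.amazonaws.com" not in trust:
            continue

        name = role["RoleName"]
        attached = iam.list_attached_role_policies(RoleName=name)["AttachedPolicies"]
        policy_names = {p["PolicyName"] for p in attached}

        # ListRoles omits PermissionsBoundary, so fetch the full role.
        boundary = iam.get_role(RoleName=name)["Role"].get("PermissionsBoundary")

        if "AdministratorAccess" in policy_names:
            print(f"{name}: Slack has admin here. Maybe don't.")
        if boundary is None:
            print(f"{name}: no permissions boundary capping what this role can do.")
```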