Slack employs several robots to augment the product with additional information from around the web. You are probably here because you found us in your access logs and you have questions or are curious. We are here to help.
Each robot we use identifies itself uniquely through its user agent.
Slackbot-LinkExpanding 1.0 (+https://api.slack.com/robots)
This robot responds to links that Slack users post into their channels. It fetches as little of the page as it can (using HTTP Range headers) to extract meta tags about the content. Specifically, we are looking for oEmbed and Twitter Card / Open Graph tags. If a page's tags refer to an image, video, or audio file, we will fetch that file as well to check validity and extract other metadata.
An example rich-media embed using oEmbed:
An example embed using Twitter Card / Open Graph tags:
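As a rough illustration of the technique described above (this is a sketch, not our production code), a client like this one might request only the first chunk of a page with an HTTP Range header and then pull any Open Graph / Twitter Card tags out of whatever comes back. The function name and byte limit below are illustrative:

```python
# Sketch: fetch only the head of a page and collect og:* / twitter:* meta tags.
from html.parser import HTMLParser
from urllib.request import Request, urlopen


class MetaTagParser(HTMLParser):
    """Collects og:* and twitter:* <meta> tags from (possibly truncated) HTML."""

    def __init__(self):
        super().__init__()
        self.tags = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        key = attrs.get("property") or attrs.get("name") or ""
        if key.startswith(("og:", "twitter:")):
            self.tags[key] = attrs.get("content", "")


def fetch_unfurl_metadata(url, max_bytes=32768):
    # Ask for only the first max_bytes of the page; servers that ignore the
    # Range header will simply return the full body instead.
    req = Request(url, headers={
        "Range": "bytes=0-%d" % (max_bytes - 1),
        "User-Agent": "Slackbot-LinkExpanding 1.0 (+https://api.slack.com/robots)",
    })
    with urlopen(req) as resp:
        head = resp.read(max_bytes).decode("utf-8", errors="replace")
    parser = MetaTagParser()
    parser.feed(head)
    return parser.tags


if __name__ == "__main__":
    print(fetch_unfurl_metadata("https://example.com/"))
```

If the collected tags point at an image, video, or audio file, that file would then be fetched separately to confirm it is valid and to extract further metadata, as described above.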
Responses to these requests are cached globally across the service for around 30 minutes, so you should not see more than one request from us for the same URL within that window. See our documentation for more about how link unfurling works to display summary content.
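In rough terms, that behavior amounts to a per-URL cache with a ~30-minute TTL (the sketch below is purely illustrative; our real cache is shared globally across the service):

```python
# Sketch: cache unfurl results per URL so repeat links within ~30 minutes
# do not trigger another outbound request.
import time

CACHE_TTL_SECONDS = 30 * 60          # roughly the window described above
_cache = {}                          # url -> (fetched_at, result)


def cached(url, fetch):
    """Return a cached result for `url`, refetching only after the TTL expires."""
    now = time.time()
    hit = _cache.get(url)
    if hit and now - hit[0] < CACHE_TTL_SECONDS:
        return hit[1]                # within the TTL: no new request
    result = fetch(url)              # expired or never seen: fetch once
    _cache[url] = (now, result)
    return result
```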
Slack-ImgProxy 0.19 (+https://api.slack.com/robots)
This robot is used to fetch and cache images posted into Slack channels. Proxying images in this way allows us to hide detailed referrer information (which can include team and/or project names), ensure these images are served over HTTPS, and improve performance.
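Conceptually, a proxy like this fetches the original file itself and re-serves the bytes, so the origin server never sees the viewer's referrer and the image always reaches the client over HTTPS. Here is a toy sketch of that idea (the /proxy?url=... endpoint shape is made up for illustration and is not our actual URL format):

```python
# Sketch: a minimal image proxy that fetches the upstream image without
# forwarding any Referer header and re-serves the bytes to the client.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse
from urllib.request import Request, urlopen


class ImgProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expect requests like /proxy?url=https://example.com/pic.png
        query = parse_qs(urlparse(self.path).query)
        target = query.get("url", [None])[0]
        if not target:
            self.send_error(400, "missing url parameter")
            return
        # Fetch the upstream image; no Referer is sent, only a user agent.
        upstream = Request(target, headers={
            "User-Agent": "Slack-ImgProxy 0.19 (+https://api.slack.com/robots)",
        })
        with urlopen(upstream) as resp:
            body = resp.read()
            content_type = resp.headers.get("Content-Type", "application/octet-stream")
        self.send_response(200)
        self.send_header("Content-Type", content_type)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), ImgProxyHandler).serve_forever()
```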
Slackbot 1.0 (+https://api.slack.com/robots)
This is our default, does-everything, kitchen-sink robot. Any time we need to make an HTTP request that is not covered by the above, we use this robot. Examples include making API requests to services we integrate with and sending Outgoing Webhooks that users have configured on their teams.
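The receiving end of an Outgoing Webhook, for example, is simply an HTTP endpoint that accepts the POST this robot sends and replies with JSON to be posted back into the channel. A minimal sketch follows; the field names ("user_name", "text") follow the legacy Outgoing Webhooks payload and are shown only for illustration:

```python
# Sketch: a tiny endpoint that answers an Outgoing Webhook POST with JSON.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs


class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        form = parse_qs(self.rfile.read(length).decode("utf-8"))
        user = form.get("user_name", ["someone"])[0]
        text = form.get("text", [""])[0]
        reply = json.dumps({"text": "Hi %s, you said: %s" % (user, text)}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(reply)))
        self.end_headers()
        self.wfile.write(reply)


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8081), WebhookHandler).serve_forever()
```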
As we are mere imperfect humans who direct the robots, it is entirely possible they are doing something they should not. We are more than happy to add you to a blocklist/allowlist or answer questions you have about our robots. Please contact us or send us a tweet @slackapi.
We do not currently honor robots.txt files. After implementing and experimenting with doing so, we received too many complaints from our users, because a great portion of the Internet is inaccessible to crawlers. As we are not a crawler (we don't follow links and we're acting on behalf of a human), it made sense to stop processing robots.txt files as if we were one. If you would still like your site blocked from slack.com embeds, or would like changes to how your embeds are displayed, please contact us.