Testing shared channels compatibility with Steno

By Ankur Oberoi

Published: September 21, 2019

Shared Channels are an exciting addition to the Slack experience, enabling two organizations to work together from the comfort of their own workspaces. Developers should make sure their applications are ready to go as teams begin to adopt shared channels within their workspaces.

Your development workspace might not have access to shared channels right now, so how can you prepare your app to deliver an uninterrupted experience when your app's users start to enable it? Further, how can you build an even better user experience in preparation for the network effects introduced by Shared Channels?

Build bridges to the future through testing

If there's one thing we hope you take away from this article, it's that working with a test suite will help a great deal toward future-proofing your app.

Aside from the immediate benefit of knowing your application is working correctly, having a well-crafted test suite and process gives you the hooks needed to trial any kind of API modification.

Steno, a tool we built for helping developers test their Slack apps, gives you the freedom to tweak API requests and responses to mimic any new API behavior with very little effort.

Let's walk through an example. We'll take a sample app and use Steno to prepare it for shared channels.

Preparing your environment

In this tutorial we'll work with the source code for the Actionable notifications app blueprint, an app that uses the Web API and Interactive Messages.

To set our environment up we'll follow these steps, explained in more detail below:

  1. Download and install Node.js
  2. Download and install Steno
  3. Clone the template-actionable-notifications blueprint repo
  4. Check out the before-shared-channels branch
  5. Install dependencies

Download Steno and extract the zip file. Find the binary for your platform (macOS, Windows, or Linux) and place it in a directory in your PATH (on macOS and Linux, we recommend /usr/local/bin).

The blueprint is developed in Node.js, which you should download and install now if you haven't already.

With Node.js installed, clone the template-actionable-notifications repository, and then check out the branch named before-shared-channels. Finally, install the project's dependencies.

$ git clone https://github.com/slackapi/template-actionable-notifications.git
$ cd template-actionable-notifications
$ git checkout before-shared-channels

Follow the setup instructions in the README.md to create and configure the Slack app. Continue once you have the app running in a development workspace.

Part I: Build a test suite for the present behavior

Our goal is to build a suite of automated tests that can run quickly, independent of any real requests or responses coming to or from the Slack API. We accomplish this by using Steno to record real interactions with the API and store them alongside our tests. When the test suite is run, Steno assists us by reading those interactions and replaying them for our app, masquerading as the Slack API.

To focus on the changes coming thanks to shared channels, the before-shared-channels branch already has the test suite implemented with previously recorded scenarios. But there are no tricks up our sleeves! You'll learn how to build these cases in Part II. Let's examine the project.

Make the Slack API URL configurable

Steno records outgoing interactions by listening for requests from your application and forwarding them to Slack. This means that your application should direct any requests bound for https://slack.com/... to Steno's outgoing proxy instead (by default, http://localhost:3000/...). We make this substitution configurable so that it isn't performed in production.

In the src/util.js file, we've created the function getSlackBaseUrl() to inspect the SLACK_URL environment variable and return the appropriate root URL for outgoing Slack API requests. Throughout the rest of the application, we use that root URL instead of hardcoding https://slack.com.

In the same file, the rewriteUrlForSlack(inputUrl) function also helps with substitution. It modifies the inputUrl, an absolute URL, by substituting the SLACK_URL as its base. We use this function in any place where our code receives a URL programmatically (not hard-coded).

Since this application uses Interactive Messages, each response_url in a message action is processed this way. The SLACK_WEBHOOK URL is also run through the function.
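As a sketch, the two helpers might look like the following (the real src/util.js may differ in detail; this version leans on Node's built-in WHATWG URL class):

```javascript
// Sketch of the helpers described above; the real src/util.js may differ.

// Use SLACK_URL (e.g. Steno's proxy at http://localhost:3000) when set,
// otherwise fall back to the real Slack API.
function getSlackBaseUrl() {
  return process.env.SLACK_URL || 'https://slack.com';
}

// Substitute the base of an absolute URL (such as a response_url or the
// SLACK_WEBHOOK URL) with the configured base, keeping the path intact.
function rewriteUrlForSlack(inputUrl) {
  const base = new URL(getSlackBaseUrl());
  const rewritten = new URL(inputUrl);
  rewritten.protocol = base.protocol;
  rewritten.host = base.host;
  return rewritten.href;
}
```

In production, SLACK_URL is simply left unset, so every request goes to https://slack.com as before.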

Testable interfaces

Steno will take the role of mimicking the Slack API in your tests, but there are likely other pieces in your application that need to be rigged up specially for the test environment. Our sample app (like many others) has a database for persisting state. In order to verify the behavior of our app, we need a back door to inspect the contents of the database and make assertions for each change we expect during the test case.

Open src/index.js and find the function assigned to app.start. This function optionally receives an argument which allows our tests to inject the database interface. Normally the db object would be created by the app, but injecting it allows the tests to independently access the database and verify the state changes.
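The pattern looks roughly like this (a sketch, not the exact source; createDb() here is a stand-in for however the real app builds its datastore):

```javascript
// Sketch of the injectable-database pattern described above. The helper
// createDb() is a stand-in for however the real app builds its datastore.
function createDb() {
  const store = new Map();
  return {
    saveTicket: (ticket) => store.set(ticket.id, ticket),
    getTicket: (id) => store.get(id),
  };
}

const app = {
  start(db) {
    // In production no argument is passed and the app creates its own db;
    // tests pass in a db they also keep a reference to, so they can
    // independently inspect state changes after each action.
    this.db = db || createDb();
    return this;
  },
};
```

Because the test holds its own reference to the injected db, it can assert on state the app changed without any additional plumbing.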

Identifying scenarios and building test cases

Building a test suite begins by identifying the high-level behaviors the app performs. For each of these behaviors, we write a test case that arranges, acts, and then asserts. Below we'll explain these three phases; Steno helps in all three.

Open test/integration/test.js, where we'll take a look at a test case.

In this application, we chose the popular test runner Mocha.js to structure and run the test suite. Steno allows you to freely pick whichever test runner (in whichever language) you prefer. The only requirement to use Steno is to send HTTP requests to the control API. The control API is just a couple of HTTP endpoints used in the arrange and assert phases of each test.

The first test case starts with the following line:

it('should post a Slack notification to the channel when a new incoming ticket is created', function () {

This is a description of the first behavior we want to test. The remainder of the code inside this function is the test case.

In the first phase, arrange, we prepare everything the application expects before this particular behavior. In this case, the value newTicket contains a dummy set of data, called a fixture, used as input to the application.

Next we call Steno's control API endpoint /start using the startReplayForScenario() helper, which loads the scenario named new_incoming_ticket from the test/integration/scenarios directory. Loading the scenario for replay is like hiring Steno as an actor to play the part of Slack in this test and giving Steno the script for the scene called new_incoming_ticket.

🎥   Now that Steno knows its lines, it's time to act.

In the second phase, act, we trigger the behavior. In this case, we call the helper sendIncomingTicket() to send the newTicket fixture into the app. Note that in production this request is not sent from Slack but rather from an outside ticketing system. If it were sent from Slack, Steno would have sent the app the request as soon as it loaded the scenario for replay. The app responds by saving the ticket to its database and then sending a request to an incoming webhook in Slack. Steno reads from the scenario and knows how to respond to that incoming webhook request. The next line of code gives the system one second to finish this replay.

In the last phase, assert, we inspect the state of the system to verify that the actions we expected actually occurred.

The next few lines inspect the database directly to see whether the ticket was stored. After that, Steno's control API endpoint /stop is called using the stopReplay() helper, which responds with a JSON object we name history. This value contains data about how the replay actually played out, not just what was in the script.

At a high level, the history object contains a collection of interactions (pairs of requests and responses) and metadata (such as the duration and the number of unmatched interactions). The remainder of the test case inspects the history to verify that every part we expect of the message sent to the incoming webhook was present.

The rest of test/integration/test.js is built the same way but describes the other behaviors of our sample app and loads interactions from other named scenarios inside the test/integration/scenarios directory. Feel free to read through those test cases and use them as examples for building your own.

Let's run the test suite in order to see it all working. At the command line, run npm run test:integration. This will call the "test:integration" script described inside package.json. That script launches steno in replay mode and then starts the mocha test runner. It should report that 4 tests passed.

<script type="text/javascript" src="https://asciinema.org/a/34hb2ulzWy5TgfHoGtaSOEohJ.js" id="asciicast-34hb2ulzWy5TgfHoGtaSOEohJ" async data-autoplay="1" data-loop="1"></script>

Part II: Applying changes for shared channels

The changes required for shared channels did not emerge in isolation. To fully understand the breadth of changes that you need to anticipate and verify your application's behaviors against, you should carefully read each of the following documentation updates: Shared Channels, username changes, and the Conversations API.

In this part of the tutorial, our goal is to extend our existing scenarios to include how the application will function in a shared channel context. You must think about how your app functions in order to anticipate where these changes will manifest.

Fortunately, at this point we have a finite set of scenarios, so all we need to do is evaluate how each of them works in shared channels with external users, potentially dealing with strangers.

The sample application uses an incoming webhook, the Web API (chat.postMessage and users.info), and interactive messages. Let's keep these platform features in mind as we examine the scenarios.

Check out the add-scenarios branch to proceed.

$ git checkout add-scenarios

If you have access to a workspace with a shared channel, create new scenarios by setting up a new webhook inside the shared channel, starting Steno in record mode, and walking through each of the existing scenarios individually. The add-scenarios branch includes three new scenarios suffixed with _in_shared that contain interactions recorded exactly that way. Note that sensitive data like tokens and webhook URLs were redacted to make the code shareable.

If you don't have access to a workspace with a shared channel, you can arrive at roughly the same place by duplicating the directory for each of the original scenarios, giving it a name suffixed with _in_shared, and editing the contained interactions individually. In order to edit the interactions correctly, you'll need to refer to the documentation and manually project the changes it describes, such as adding source_team to API payloads. You may also be able to avoid some of the manual work by asking another developer who has access to a workspace with a shared channel to record the scenarios for you. Steno still helps here: it gives you a tangible starting point, so you can replay each scenario with any change you anticipate, and if you gain access to a workspace with a shared channel in the future, you can swap out the manually altered interactions for actual recordings. Fortunately, the add-scenarios branch already contains these recordings for you.
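For example, when projecting shared-channel changes onto a recorded message event, the documentation describes fields such as source_team and user_team appearing alongside team. A manually edited payload might look like the following (the IDs are made up, the payload is abbreviated, and you should confirm the exact fields against the current docs):

```json
{
  "type": "message",
  "channel": "C123SHARED",
  "user": "U999EXTERNAL",
  "text": "New ticket assigned",
  "team": "T001LOCAL",
  "source_team": "T999REMOTE",
  "user_team": "T999REMOTE"
}
```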

Open test/integration/in_shared_test.js in your editor. It is a duplicate of the test file we examined in part I, except it loads the _in_shared scenarios rather than the originals. With this set up, we can run the larger test suite one more time to find any defects in our application related to running in shared channels.

Fortunately, our sample application already works identically, so we will just see the test cases pass! If instead the application had issues that would cause it to break, we likely would have run into them while re-recording our scenarios. That's a great thing too: it means we uncovered edge cases we previously wouldn't have seen, and we have a handy set of test cases to satisfy in order to address them.

Now that we have a basic check of the app's functionality in shared channels, you should start to consider each of the platform features being used in the app (recall from above) to add more test cases and scenarios for other variations. For example, our sample app has a user_assigns_ticket_in_shared scenario that uses interactive messages.

Those interactions were recorded for a local user assigning a ticket to an external user. Because interactive messages can also be triggered by external users, we should record an additional scenario called external_user_assigns_ticket_in_shared to cover the case where an external user assigns a ticket to another external user.

<script type="text/javascript" src="https://asciinema.org/a/BqTR9a26oMNQl0PVZ9MDOk6Gf.js" id="asciicast-BqTR9a26oMNQl0PVZ9MDOk6Gf" async data-autoplay="1" data-loop="1"></script>

Part III: Building a better app for tomorrow

The test suite we've developed so far gives us the confidence we need to deliver the app to workspaces with and without shared channels, and the best part is that we can run these tests on every commit to continue to work with confidence.

We have a great opportunity in front of us now. With the codebase stable and testable, we can start to think about how to uniquely leverage shared channel functionality in a meaningful way with our app. Maybe you want to onboard external users who haven't interacted directly with the app with some information in a DM. Or maybe you'd like to send one introductory message to the channel when you detect it has become a shared channel. Think about how to deliver more value to your users and leverage the network effects.

The opportunity isn't just about new functionality. As we learned about shared channels, we also ran across new recommendations related to name tagging and the Conversations API, and you should consider adopting them. As the last change in this tutorial, we'll update how we present the "Agent" field in tickets to use the recommended formatting for a user mention, rather than depending on the user.name property, which will be phased out.

Check out the apply-recommendations branch to proceed.

$ git checkout apply-recommendations

Open src/ticket.js and examine the setAgent(userId) method. We made a small change to store the user ID rather than the name, and then use the <@U...> mention formatting syntax to change the message. Open src/template.js. In the fill() function, we made a small change to code that builds attachment.fields in order to also use the mention formatting syntax.
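The change can be sketched like this (illustrative only; the real src/ticket.js and src/template.js differ in detail):

```javascript
// Sketch of the change described above: store the user ID and format a
// mention, instead of relying on the soon-to-be-phased-out user.name.
class Ticket {
  setAgent(userId) {
    this.agentId = userId; // previously this stored the user's name
  }
}

// In the template, build the "Agent" field using mention syntax. Slack
// renders <@U...> as the user's current display name, so the message stays
// correct even as username behavior changes.
function agentField(ticket) {
  return {
    title: 'Agent',
    value: ticket.agentId ? `<@${ticket.agentId}>` : 'Unassigned',
    short: true,
  };
}
```

Letting Slack resolve the mention also means the field renders sensibly for external users in shared channels, whose display names your app may not otherwise know.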

With those source changes, if we try to run our tests again, they will fail. We must update our scenarios. The options are to re-record the scenarios, or to modify them by hand to match our expectations. In the branch, you'll find the updated scenarios.


We've walked through preparing the sample application to take advantage of shared channels and the related changes to the API. The same process and decision making framework can be applied to any existing application. Along the way, we leveraged Steno in order to generate fixtures, implemented a test suite, and learned how to use Steno's control API to replay scenarios.

We hope these skills and tools help you ship your apps with new features proudly and confidently and make your users happy.

Send me your thoughts and feedback on this tutorial. You can find me on Twitter at @aoberoi.

Do you need help working with Steno? Create an issue on GitHub.

Learn more about Shared Channels and what they can do for your team.
