In our project we have a JSON REST API that is used both internally by our mobile clients and offered externally to third parties. Since we were consolidating all of our functional testing into Cucumber, it was natural to experiment with how best to test APIs in Cucumber.
The common first instinct is to write out the requests, payloads, and response expectations directly in the feature files. After all, a major point of Cucumber and ATDD is specification by example. The problem is that this results in very verbose descriptions that are hard to follow.
Consider the following example given in the json_spec gem:
Feature: User API
  Scenario: Index action includes full user JSON
    Given the following user exists:
      | id | first_name | last_name |
      | 1  | Steve      | Richert   |
    And I visit "/users/1.json"
    And I keep the JSON response as "USER_1"
    When I visit "/users.json"
    Then the JSON response should be:
      """
      [
        %{USER_1}
      ]
      """
Consider that for a moment. Do you understand what’s happening in this use case? Now imagine handing it over to your business owner for a feature review.
One of the goals of Cucumber and specification by example is to provide a common language between developers and business. The above example contains so much detail that the intent of the use case is lost. The code reminds me a lot of an example that Aslak Hellesøy gave of using the web_steps.rb “training wheels” originally included in the Cucumber-Rails gem:
Scenario: Successful login
  Given a user "Aslak" with password "xyz"
  And I am on the login page
  And I fill in "User name" with "Aslak"
  And I fill in "Password" with "xyz"
  When I press "Log in"
  Then I should see "Welcome, Aslak"
This kind of detail should be abstracted out of the scenarios into higher-level steps backed by page objects. Such refactoring makes the tests read better and leaves them much less brittle.
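For illustration, a higher-level step backed by a page object could look something like this sketch; the step text, the LoginPage class and the "/login" path are all made up here:

When /^"([^"]*)" logs in with a valid password$/ do |name|
  password = "xyz"                      # would come from test data setup
  LoginPage.new.log_in(name, password)
end

# A minimal page object wrapping the form interaction from web_steps.rb.
class LoginPage
  include Capybara::DSL                 # assumes Capybara is loaded, as in Cucumber-Rails

  def log_in(name, password)
    visit "/login"                      # assumed path of the login page
    fill_in "User name", with: name
    fill_in "Password", with: password
    click_button "Log in"
  end
end

The scenario then shrinks to the user setup, a single "logs in" step and the welcome-message assertion.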
So if you don’t write explicit operations, what do you write? Consider what a business owner would write as individual requirements. Describe the intent of the API, not the explicit details.
I came up with a rather nice format for writing the API tests. Take registration as an example:
Scenario: Successful registration
  When I perform registration with the required parameters
  Then the request should be successful
  And I should receive a valid access token
This describes the essential functionality of the API, without any of the gory details. The corresponding step definitions could be something like:
When /^I perform registration with the required parameters$/ do
  # Populate request information
  @request_method = "POST"
  @request_url = "/rest/register"
  @request_body = {email: generateEmail, password: "pass123",
                   terms_accepted: true}
  @successful_response_code = 201
  @error_response_code_email_reserved = 409
end

Then /^the request should be successful$/ do
  # Perform the request and assert the response code
  @response = RestApi.call(@request_method, @request_url,
                           body: @request_body, expect: @successful_response_code)
end

Then /^I should receive a valid access token$/ do
  # ... test that the token in @response is valid
end
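RestApi here is just a thin wrapper around the HTTP client. A minimal version using Ruby's Net::HTTP could look something like the following sketch (the base URL and error handling are made up for illustration):

require "net/http"
require "json"

module RestApi
  BASE_URL = "https://api.example.com"  # assumed; the real host is project-specific

  def self.call(method, path, body: nil, expect: nil)
    uri = URI.join(BASE_URL, path)
    request = Net::HTTP.const_get(method.capitalize).new(uri)
    if body
      request["Content-Type"] = "application/json"
      request.body = body.to_json
    end
    response = Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == "https") do |http|
      http.request(request)
    end
    # Fail the step if the status code does not match the expectation.
    if expect && response.code.to_i != expect
      raise "Expected HTTP #{expect}, got #{response.code}: #{response.body}"
    end
    response
  end
end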
The interesting part is that the action steps (“When”) only store fields in the World, and it is the post-condition “Then the request should be successful” that actually performs the request. This allows adding more directives that modify the request, resulting in very natural language:
Scenario: Failed registration; email address in use
  When I perform registration with the required parameters
  But the email address is already reserved
  Then the request should fail due to email reserved
Here the step definition for “But the email address is already reserved” would either register the email address held in @request_body or set it to an address that is known to be reserved. The corresponding “should fail” step expects the response code stored in @error_response_code_email_reserved.
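Those two step definitions could be something along these lines (in Cucumber, the keyword used in the step definition does not have to match the “But” in the scenario):

When /^the email address is already reserved$/ do
  # Reserve the address up front so the registration under test collides with it.
  RestApi.call("POST", "/rest/register", body: @request_body,
               expect: @successful_response_code)
end

Then /^the request should fail due to email reserved$/ do
  @response = RestApi.call(@request_method, @request_url,
                           body: @request_body,
                           expect: @error_response_code_email_reserved)
end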
This approach also abstracts away the setup of the request parameters. When we needed to change the mandatory registration parameters, only one method had to be updated.
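If several steps need to build the same payload, one way to keep that single point of change is a small World helper (the module name here is illustrative):

module RegistrationHelpers
  # The one place to update when the mandatory registration parameters change.
  def required_registration_params
    {email: generateEmail, password: "pass123", terms_accepted: true}
  end
end
World(RegistrationHelpers)

The request setup step then simply assigns @request_body = required_registration_params.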
This structure has been a great success in our project. Very often I’ve noticed that once the feature file is defined, it takes only 5-10 minutes to implement the steps. There’s a lot of step reuse: often you only need to write the request setup step, and possibly a few custom post-condition steps.
One caveat of this approach is that the exact HTTP methods, URLs and parameter names are not presented in the feature file, so it does not function as documentation on its own. We have a separate API document, and we’ve been considering moving the corresponding parts into the free-text description area of the feature files. That would not be executable specification, but I think it’s an acceptable compromise: I feel it is more important to keep the scenarios succinct and readable, and thus more maintainable.