Introduction to UI test automation with Ruby

Introduction

Automated UI testing is a (very) oft-discussed topic within our team, and across Showmax Engineering more widely. No one likes manual, repetitive work, so we have automated UI testing for our applications that have a user-facing interface.

In our case, such applications are written in Ruby on Rails. Although there are plenty of gems and tutorials related to Rails UI test automation, it was not easy to find a comprehensive tutorial on how to stitch the technologies together to reach the ultimate goal: functioning, automated tests.

Here’s my story on how we achieved this goal, our technology stack, and a few of the issues we encountered along the way.

The components

Our task was to have fully-automated UI tests that met the following requirements:

  • The tests verify the main regression flows from the end user perspective.
  • The tests are executable locally on a developer’s machine, and on our CI infrastructure.
  • The tests are implemented together with the code of the application. This is done to avoid the issue of constantly changing the application, where the tests are just catching up with the latest changes.
  • The same tests can also be executed against our staging (pre-production) environment to identify issues caused by other components in our microservice ecosystem.
  • The tests verify both functional and visual regressions.

To get there, we ended up with the following technology stack:

  • Webdrivers - A tool for the automated testing of web applications. Selenium WebDriver acts as a bridge between our testing code and the browser in which the user journey is executed, providing capabilities for navigating to web pages, user input, JavaScript execution, and more; the webdrivers gem keeps a matching driver binary (e.g. ChromeDriver) installed.

  • Capybara - A framework for testing web applications in Ruby that lets us write test scenarios describing the user behavior in the browser, using its DSL.

  • Site Prism - This provides a semantic DSL for describing the application web pages using the Page Object Model pattern. This allows you to define each page in a single place which (usually) translates into well-structured and DRY code.

  • VCR - This lets you record the test HTTP interactions of your application to a “VCR cassette”, and replay them during future test runs. This makes tests more deterministic and accurate.

  • ImageDiff - An open-source tool developed by us right here at Showmax. It detects visual deviations of rendered pages from a defined baseline. Read more about this cool tool in our blog post “Automated UI testing and catching visual regressions.” We’ll use it in the second part of this blog.

Put it all together

Here’s how to connect the technologies together to achieve the zen-like state of having fully-functional, easily-maintainable tests. For demonstration, we’ll use a minimal Rails application:

0. Environment setup

For our purposes, we’ll use Google Chrome as the main browser, and Linux with Ruby 2.6 installed.

First, create a new Ruby on Rails 5.2 project with the rspec gem:

<your_file_path>$ ruby -v
ruby 2.6.2p47 (2019-03-13 revision 67232) [x86_64-linux]

<your_file_path>$ gem install rails -v 5.2
<your_file_path>$ rails new <project_name>
<your_file_path>$ cd <project_name>

The Gemfile autogenerated by the rails new command contains a test section with some of the required gems. Add the rspec, site_prism, vcr, webdrivers, and webmock gems to this section, so it looks like this:

group :test do
  # Adds support for Capybara system testing and selenium driver
  gem 'capybara', '>= 2.15', '< 4.0'
  gem 'selenium-webdriver'
  gem 'webdrivers'
  gem 'site_prism'
  gem 'rspec'
  gem 'vcr'
  gem 'webmock'
end

Then install the gems and initialize RSpec:

<path_to_project>$ bundle install
<path_to_project>$ rspec --init

IMPORTANT: Check that your ChromeDriver version is compatible with your version of Google Chrome. You can use the ChromeDriver downloads page, or verify it from the command line:

<your_file_path>$ chromedriver -v
ChromeDriver 2.46.628388 (4a34a70827ac54148e092aafb70504c4ea7ae926)

<your_file_path>$ chromedriver --minimum-chrome-version
minimum supported Chrome version: 71.0.3578.0

So, we’ve created a clean project; now we can focus on the test definition. In the ./spec folder, create a new folder called features and put a new file google_test_spec.rb there. The _spec suffix lets RSpec discover the file automatically, so you can run the rspec command without specifying the spec files.

Our folder structure now looks like this:

spec
├── features
│   └── google_test_spec.rb
└── spec_helper.rb

1. First Capybara test

Our sample application has no functionality implemented yet. To keep things simple, we write a test that connects to google.com instead of to our application, and then searches for “Showmax”. Edit the google_test_spec.rb file so it looks like this:

require 'capybara/rspec'
require 'webdrivers'

feature 'Google test', type: :feature do

  Capybara.app_host = 'https://google.com'
  Capybara.run_server = false
  Capybara.default_driver = :selenium_chrome

  scenario 'Visit Google' do
    visit '/'
    expect(page.title).to have_content('Google')
    fill_in 'q', with: 'Showmax'
    find('body').send_keys :enter
    find(:xpath, ".//input[@name='btnK']").click
    expect(page).to have_content('Showmax')
  end
end

Run the test with the command:

<path_to_project>$ rspec

2. Time to add Site Prism

At first, it may look like we’ve made the tests more complicated than before, but the truth is quite the opposite. There is a significant improvement, especially when working on complicated projects with a lot of tests. Now we have all of the selectors and page methods in one place: we can change things (e.g. a selector) in one page element and have the change reflected in all of the tests that use it.

First, create a folder pages in the features folder. In this folder, create the files google_page.rb and google_result_page.rb that will contain the classes describing the tested pages. Each class extends SitePrism::Page, and specifies the page URL and its elements with their selectors.

The folder structure looks like this:

spec
├── features
│   ├── google_test_spec.rb
│   └── pages
│       ├── google_page.rb
│       └── google_result_page.rb
└── spec_helper.rb

With the page definitions:

# google_page.rb:
module Pages
  class GooglePage < SitePrism::Page
    set_url '/'
    element :body, 'body'
    element :search_field, 'input[name=q]'
    element :submit_btn, :xpath, ".//input[@name='btnK']"
  end
end

# google_result_page.rb:
module Pages
  class GoogleResultPage < SitePrism::Page
    set_url_matcher '/search'
    element :body, 'body'
  end
end

Now, create a simple file app.rb, with a Pages::App class that includes methods for page initialization, in the features folder. This factory approach provides a useful additional level of indirection.

spec
├── features
│   ├── app.rb
│   ├── google_test_spec.rb
│   └── pages
│       ├── google_page.rb
│       └── google_result_page.rb
└── spec_helper.rb

# app.rb:
require_relative 'pages/google_page'
require_relative 'pages/google_result_page'

module Pages
  class App
    def google
      @google ||= GooglePage.new
    end

    def google_results
      @google_results ||= GoogleResultPage.new
    end
  end
end

The last step is to rewrite our google_test_spec.rb, leveraging the Page Object Model (POM) pattern.

require 'capybara/rspec'
require 'webdrivers'
require 'site_prism'
require_relative '../features/app'

feature 'Google test', type: :feature do
  let(:app) { Pages::App.new }

  Capybara.app_host = 'https://google.com'
  Capybara.run_server = false
  Capybara.default_driver = :selenium_chrome

  scenario 'Visit Google' do
    app.google.load
    expect(app.google.title).to have_content('Google')
    app.google.search_field.set 'Showmax'
    app.google.body.send_keys :enter
    app.google.submit_btn.click
    expect(app.google_results).to be_displayed
    expect(app.google_results).to have_content('Showmax')
  end
end

3. Add VCR and we’re almost done

Usually, we want to keep our tests isolated and not dependent on other services in our ecosystem, or on third-party APIs. To mock the HTTP interactions, we can use the VCR gem.

To demonstrate how the VCR gem works, we add another test that calls the accuweather.com API for the current weather conditions in Beroun (CZ), with a particular focus on the temperature. The test, together with the VCR configuration, goes into a new file current_weather_spec.rb in the features folder.

NOTE: Be sure to replace the API KEY for accuweather.com in your test, otherwise you will get an invalid response.

require 'capybara/rspec'
require 'webdrivers'
require 'site_prism'
require 'vcr'
require 'webmock/rspec'
require 'json'
require 'net/http'
require_relative '../features/app'

feature 'Call current weather API', type: :feature do

  VCR.configure do |c|
    c.cassette_library_dir = 'spec/cassettes'
    c.allow_http_connections_when_no_cassette = true
    c.configure_rspec_metadata!
    c.hook_into :webmock
    c.ignore_localhost = true
  end

  Capybara.app_host = 'https://google.com'
  Capybara.run_server = false
  Capybara.default_driver = :selenium_chrome


  vcr_options = { cassette_name: 'current_temperature_beroun', record: :once }
  scenario 'Always comfortable temperature in Beroun', vcr: vcr_options do
    res = Net::HTTP.get_response(URI('http://dataservice.accuweather.com/currentconditions/v1/125991?apikey=<YOUR_KEY>'))
    expect(res.code).to eq '200'
    temperature = JSON.parse(res.body)[0]['Temperature']['Metric']['Value']
    expect(temperature).to be >= 10
  end
end

Run the tests and you should see the current_temperature_beroun.yml file in the spec/cassettes folder. We have the responses from Accuweather stored in the cassette, and all subsequent test runs will work even without an internet connection.
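For reference, a cassette is just a plain YAML file. An abridged, illustrative sketch of what spec/cassettes/current_temperature_beroun.yml might contain (all values below are made up, not recorded data):

```yaml
# Illustrative cassette sketch - the headers are abridged and the
# response body is invented for demonstration purposes.
---
http_interactions:
- request:
    method: get
    uri: http://dataservice.accuweather.com/currentconditions/v1/125991?apikey=<YOUR_KEY>
    body:
      encoding: US-ASCII
      string: ''
    headers: {}
  response:
    status:
      code: 200
      message: OK
    headers: {}
    body:
      encoding: UTF-8
      string: '[{"Temperature":{"Metric":{"Value":21.0,"Unit":"C"}}}]'
  recorded_at: Tue, 02 Apr 2019 10:00:00 GMT
recorded_with: VCR 4.0.0
```

Deleting the cassette file forces VCR to re-record the interaction on the next run (given the record: :once mode used above).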

The final directory structure looks like this:

spec
├── cassettes
│   └── current_temperature_beroun.yml
├── features
│   ├── app.rb
│   ├── current_weather_spec.rb
│   ├── google_test_spec.rb
│   └── pages
│       ├── google_page.rb
│       └── google_result_page.rb
└── spec_helper.rb

We’re done, but we can’t leave it like this. We need to polish the code a bit, so we move the configurations and require statements to the spec_helper.rb file. The final result looks like this:

# google_test_spec.rb
require_relative '../features/app'

feature 'Google test', type: :feature do
  let(:app) { Pages::App.new }


  scenario 'Visit Google' do
    app.google.load
    expect(app.google.title).to have_content('Google')
    app.google.search_field.set 'Showmax'
    app.google.body.send_keys :enter
    app.google.submit_btn.click
    expect(app.google_results).to be_displayed
    expect(app.google_results).to have_content('Showmax')
  end
end

# current_weather_spec.rb
require 'json'
require 'net/http'
require_relative '../features/app'

feature 'Call current weather API', type: :feature do
  let(:api_key) { 'YOUR_API_KEY' }
  let(:location) { 125991 }
  let(:uri) { URI("http://dataservice.accuweather.com/currentconditions/v1/#{location}?apikey=#{api_key}") }

  vcr_options = { cassette_name: 'current_temperature_beroun', record: :once }
  scenario 'Always comfortable temperature in Beroun', vcr: vcr_options do
    res = Net::HTTP.get_response(uri)
    expect(res.code).to eq '200'
    temperature = JSON.parse(res.body)[0]['Temperature']['Metric']['Value']
    expect(temperature).to be >= 10
  end
end

# spec_helper.rb
require 'capybara/rspec'
require 'webdrivers'
require 'site_prism'
require 'vcr'
require 'webmock/rspec'

VCR.configure do |c|
  c.cassette_library_dir = 'spec/cassettes'
  c.allow_http_connections_when_no_cassette = true
  c.configure_rspec_metadata!
  c.hook_into :webmock
  c.ignore_localhost = true
end

Capybara.configure do |c|
  c.app_host = 'https://google.com'
  c.run_server = false
  c.default_driver = :selenium_chrome
end
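One of the original requirements was running the same tests against our staging environment. A minimal sketch of how spec_helper.rb could support that, assuming a hypothetical APP_HOST environment variable (the variable name and staging URL are illustrative, not part of the setup above):

```ruby
# Hypothetical spec_helper.rb fragment: pick the target host from an
# environment variable, falling back to the default used in this post.
# Run against staging with e.g.: APP_HOST=https://staging.example.com rspec
APP_HOST = ENV.fetch('APP_HOST', 'https://google.com')

# Inside the Capybara.configure block you would then set:
#   c.app_host = APP_HOST
```

This keeps the test code itself unchanged; only the environment decides which deployment the suite exercises.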

My experience

This all looks nice and straightforward, but anyone who has tried to write UI tests will agree it’s a painful and complicated task that is pretty much never successful on the first try.

Don’t underestimate the importance of the webdriver installation, and of the version compatibility between the driver and the Chrome browser. When developers run the tests locally, you either need to help them with the setup again and again, or make sure they have a well-defined environment prepared (we’ll get into that in more depth in part 2).
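If auto-detection ever picks a driver that doesn’t match the locally installed Chrome, the webdrivers gem also lets you pin the driver version explicitly. A sketch for spec_helper.rb, assuming the webdrivers gem is in the bundle (the version number below is only an example):

```ruby
# spec_helper.rb fragment: pin ChromeDriver to a known-compatible version
# instead of relying on auto-detection. The version string is illustrative;
# use one that matches your installed Chrome.
require 'webdrivers'

Webdrivers::Chromedriver.required_version = '73.0.3683.68'
```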

It isn’t so obvious from the example above, because there are only two independent tests, but my experience is that you must respect isolation and independence. Imagine that you have tests for creating, editing, and deleting a user. If you count on the fact that create will always come first, and you reuse the created user in the next two tests… that is the highway to hell. It could happen, for example, that somebody starts the tests in a random order, or that the first test fails and you automatically have three failures instead of one.

Every single test should be executable by itself, without any dependency on any other test result. Use hooks like before and after to prepare clean conditions for every test, and tear them down after the test finishes - even if it costs more time. Shortcuts are tempting, but the extra effort pays off, I promise.
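The idea can be sketched outside of any test framework. In this hypothetical example, UserStore stands in for the application under test, and a yield/ensure block plays the role of the before and after hooks:

```ruby
# Framework-agnostic sketch of per-test setup and teardown.
# UserStore is a hypothetical stand-in for the application under test.
class UserStore
  def initialize
    @users = {}
  end

  def create(name)
    @users[name] = { name: name }
  end

  def edit(name, attrs)
    @users.fetch(name).merge!(attrs)
  end

  def delete(name)
    @users.delete(name)
  end

  def exists?(name)
    @users.key?(name)
  end
end

# Every test creates the state it needs and removes it afterwards,
# even when the test body raises - so execution order never matters.
def with_fresh_user(store, name)
  store.create(name)   # "before" hook: prepare clean conditions
  yield
ensure
  store.delete(name)   # "after" hook: tear the state down again
end

store = UserStore.new
with_fresh_user(store, 'tester') do
  store.edit('tester', role: 'admin')
end
store.exists?('tester') # => false, the fixture was cleaned up
```

In RSpec, the same role is played by before/after hooks (or let with cleanup), so the edit and delete scenarios never depend on the create scenario having run first.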

You are testing your application, but it usually interacts with the outside world, and VCR does not save you in all cases. VCR is able to record the HTTP interactions of the tested application, but it has no control when part of the user journey flows outside of your application (imagine testing a payment that takes you to a third-party payment gateway). Consider mocking the third-party API, and whether testing the part of the flow that depends on the external application actually makes sense.

Test stability is crucial, as bad tests can lose the respect of developers very quickly. It’s better not to enable the tests at all than to have them be flaky and/or report false positives. It’s very hard to win developers back to taking the tests seriously, so do not enable new tests before they are verified to give reliable results.

Don’t forget about maintenance! Even if you have your CI process set up properly, and test modifications are required from the developer making the change in the application, there is still plenty of room for the tests to unexpectedly break and require additional maintenance.

Conclusion

We don’t need to automate everything, but automation is super useful! In my opinion, it makes sense and saves us a lot of time and boring work. The fact is that automated tests help us reveal errors that could - and probably would - be overlooked in manual testing.

Don’t be afraid of automation - embrace it! You may even have a bit of fun.

Please check the original version of this article at