Execute asynchronous JavaScript from Selenium

12 Aug 2015 | By Matt Robbins | Comments | Tags testing selenium webdriver javascript

When implementing end user acceptance tests in the browser using Selenium-Webdriver, it's a good idea to follow these rules:

  1. Wherever possible, test the site from the perspective of the user, i.e. with no access to the internals of the page
  2. If you really need some data exposed from JavaScript and made available to the test (data the user wouldn't see), have some kind of 'debug' mode where JavaScript events are logged to a debug JS object in the page, which you can then query from Webdriver using synchronous execute_script calls (see the sketch after this list)
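
For example, a hedged sketch of rule 2 in Ruby - the window.debugLog object and the event shape here are hypothetical, purely for illustration:

require 'selenium-webdriver'

driver = Selenium::WebDriver.for :firefox
driver.navigate.to "http://www.example.com"

# synchronously read the page's (hypothetical) debug log of JS events
events = driver.execute_script("return window.debugLog || [];")
raise "expected a 'play' event" unless events.any? { |e| e['type'] == 'play' }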

Occasionally, however, you can't add debug objects to the page (perhaps it's not under your control), and it would be useful to know that some asynchronous event has occurred by, for example, registering for an event.

Until recently I didn’t think this was possible from Selenium…however I was completely wrong!

I stumbled across this blog post which shows you how to do it.

I thought it was worth expanding a little bit on this with some working examples and more links.

It turns out there is a method called execute_async_script which exists in most language bindings. In the JavaDoc it’s also well documented with example scripts.

I have converted one to Ruby so you can prove to yourself this really works. If you run this you should observe the client code blocking until the asynchronous JavaScript has returned. The Selenium method has an implicit callback (accessed via arguments[arguments.length - 1]) which we pass as the first argument to the asynchronous setTimeout JavaScript function. The Benchmark result proves this behaves as expected.

require 'selenium-webdriver'
require 'benchmark'

driver = Selenium::WebDriver.for :firefox
driver.navigate.to "http://www.example.com"
driver.send(:bridge).setScriptTimeout(5000) # needs to be > setTimeout value

result = Benchmark.realtime do
  driver.execute_async_script("window.setTimeout(arguments[arguments.length - 1], 4000);")
end

fail "Times do not match!" unless result.round(0) == 4

Note that the setScriptTimeout method exists only on the bridge class and so is not accessible via the public API in the Ruby bindings (hence the send hack). This millisecond value must be higher than the time you expect your callback to take to return, otherwise the call will fail with a script timeout error.
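
Depending on your selenium-webdriver version, the script timeout may also be settable via the public API; this is a hedged sketch (check the method exists in your installed version, and note it takes seconds rather than milliseconds):

driver.manage.timeouts.script_timeout = 5 # seconds; must exceed the expected callback delay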

For a more tangible example, here we register for a custom JavaScript event and access some of the event data…you can copy the HTML to a separate file and run it on your own machine.

<!DOCTYPE html>
<html>
  <head>
    <style>
      #green-box {
        height: 50px;
        width: 50px;
        background-color: green;
      }
    </style>
    <script>
      window.onload = function() {
        greenBox = document.getElementById('green-box');

        var myEvent = new CustomEvent("green-box-click", {
          detail: {
            username: "matt"
          },
          bubbles: true
        });

        window.addEventListener("green-box-click", function(e) {
          document.getElementById('status').textContent = 'clicked by: ' + e.detail['username'];
        });

        greenBox.onclick = function() {
          greenBox.dispatchEvent(myEvent);
        };
      };
    </script>
  </head>
  <body>
    <div id='green-box'><span id='status'>click me</span></div>
  </body>
</html>

require 'selenium-webdriver'

driver = Selenium::WebDriver.for :firefox
driver.navigate.to "file:///home/matt/workspace/selenium-debug/test.html"


driver.send(:bridge).setScriptTimeout(30000) # allow yourself time to click the box

puts driver.execute_async_script(
  "var callback = arguments[arguments.length - 1];
   window.addEventListener('green-box-click', function(e) {callback(e.detail['username'])});"
)


As Johannes points out in his blog post, running such code has the potential to affect the behaviour of the page under test. Double check that the page functions as expected after your code has executed, and be aware of issues such as an event not bubbling correctly up the DOM.

Hopefully this has given some more insight into executing asynchronous JavaScript from Selenium; certainly not something you would want to do unless you really had to, but worth knowing about.

Rerun failing tests with Cucumber - a solution for nondeterministic UI tests?

13 May 2015 | By Matt Robbins | Comments | Tags testing cucumber selenium

The evolution of acceptance test automation, or at least my experience of it, has been something like this:

2005 - 2008 : QA runs automated tests on their machine prior to raising deployment ticket

2009 - 2012 : CI runs automated tests - downstream, everybody ignores them

2012 - present : Continuous Delivery…UI tests broken, no deployments…now we have your attention!

So hopefully you have arrived at stage 3 and have done the following to ensure your UI tests are as resilient as possible:

  1. Followed Martin Fowler’s test pyramid and not implemented every conceivable system test via Selenium
  2. Implemented sane retry logic to find UI elements
  3. Isolated your UI using stub backends to guard against unexpected data
  4. Added spoonfuls of helpful debug logging to highlight issues

But perhaps a couple of your tests still appear now and again to be nondeterministic, and people are getting frustrated.

This can happen; UI testing is hard, and no matter how much you defend against it, peculiarities of the runtime environment can conspire against you.

In this instance it might be handy to have 'one more go' when you get some failures, and Cucumber's rerun formatter allows this.

Caveat - generally I would say a nondeterministic test should either be fixed for good or deleted. However, sometimes the world is just not that perfect; maybe, for example, you aren't able to improve underlying infrastructure issues. Hence this may still be a valid course of action.

The Example

This is about the simplest example I could come up with.

The test will fail 50% of the time, allowing you to see the rerun kicking in on selected runs.

You can run the test using bundle exec rake

The test should be run for a second time if it fails and should terminate with an appropriate exit code.
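
A minimal sketch of the kind of Rakefile wiring involved (the file names and task layout here are illustrative, not copied from the project):

task :cucumber do
  rerun_file = 'rerun.txt'
  File.delete(rerun_file) if File.exist?(rerun_file)
  # first pass: the rerun formatter records any failing scenarios
  passed = system("cucumber --format rerun --out #{rerun_file}")
  # second pass: rerun only the recorded failures and exit with their status
  if !passed && File.exist?(rerun_file) && !File.read(rerun_file).strip.empty?
    passed = system("cucumber @#{rerun_file}")
  end
  exit(passed ? 0 : 1)
end

task :default => :cucumber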

You can see some examples of this working in the project’s Travis build.

Other Thoughts

Another useful thing might be to run any failing tests from previous CI runs first and you could probably adapt this approach to do just that!
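
As a hedged sketch of that idea (the artifact path is illustrative): archive rerun.txt from each CI run, then on the next run execute any recorded failures before the full suite.

task :failures_first do
  previous_failures = 'artifacts/rerun.txt' # hypothetical archived artifact
  if File.exist?(previous_failures) && !File.read(previous_failures).strip.empty?
    exit(1) unless system("cucumber @#{previous_failures}")
  end
  exit(system("cucumber") ? 0 : 1)
end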


I just thought I would illustrate with an example what others have already highlighted.



From zero to tested with Centos7, Chrome and Selenium

16 Jan 2015 | By Matt Robbins | Comments | Tags testing centos selenium chrome headless

If you are restricted to Centos in your CI environments and you need to run browser tests, previous solutions included running in PhantomJS, an old version of Firefox under Xvfb, or farming out to a Selenium Grid.

With Centos7 you now have another option as Chrome is fully supported on this OS.

You can easily run a headless (ish…Xvfb) Chrome browser.

Here I show you the bare minimum required to get from nothing to a working example.

Once you have it working you would need to consider repeatability e.g. provisioning the environment via something like Puppet, spinning it up in a Docker container or maybe just a simple script.

Grab a Centos 7 VM

vagrant init matyunin/centos # a good community base image at time of writing
vagrant up
vagrant ssh

As root

Add google yum repo

cat << EOF > /etc/yum.repos.d/google-chrome.repo
[google-chrome]
name=google-chrome - \$basearch
baseurl=http://dl.google.com/linux/chrome/rpm/stable/\$basearch
enabled=1
gpgcheck=1
gpgkey=https://dl-ssl.google.com/linux/linux_signing_key.pub
EOF

As vagrant

Install dependencies

sudo yum install -y ruby ruby-devel gcc xorg-x11-server-Xvfb google-chrome-stable
gem install selenium-webdriver headless
version=`curl -s http://chromedriver.storage.googleapis.com/LATEST_RELEASE`
curl -s "http://chromedriver.storage.googleapis.com/$version/chromedriver_linux64.zip" > chromedriver.zip
unzip chromedriver.zip && sudo mv chromedriver /usr/local/bin/

Create test script

cat << EOF > test.rb
require 'rubygems'
require 'headless'
require 'selenium-webdriver'
headless = Headless.new
headless.start
driver = Selenium::WebDriver.for :chrome
driver.navigate.to 'http://google.com'
puts driver.title
driver.quit
headless.destroy
EOF

Run test

ruby test.rb


Selenium 141 - Alternatives to BrowserMob Proxy

15 Aug 2013 | By Matt Robbins | Comments | Tags testing webdriver selenium browsermob-proxy

Selenium Issue 141

The long story of Selenium Issue 141 is not worth recounting here as it has been covered in depth by Selenium core committers Simon Stewart and David Burns.

In a nutshell, issue 141 is a hotly contested debate as to whether Selenium-Webdriver should provide a public API for querying HTTP status codes and response headers.

I pretty much agree with the Selenium guys and can see how introducing this API would be the start of a very long and slippery slope for the project. However, in the day-to-day work of an automation developer it is often essential to have access to this information. For example, most web apps have to do some kind of tracking, which takes the form of a request for a 1x1 image with a query string containing the tracking key-value pairs, and we are frequently asked to validate these calls.

BrowserMob - the solution….maybe

The conventional wisdom is that you should use the Selenium approved scriptable proxy BrowserMob Proxy to get around this problem.

Up until recently I followed this thinking and had used BrowserMob with reasonable success (see my earlier post about using BrowserMob through a proxy). However, I had never had to use it in a critical CI pipeline, and when I did things got a little less appealing.

There is no doubt that BrowserMob is a great project but using it via the Ruby bindings I found that all too often the Java process would not shut down cleanly after a test run (particularly if the test run failed for some other reason) meaning that on subsequent runs the port was locked and the proxy would not restart.

All these problems are solvable of course and I intend to keep using BrowserMob and contribute any fixes if I can.

However, for the project I was on I needed something bulletproof which limited the number of moving parts in my CI pipeline to capture and validate tracking calls from a media player.

Browser Extensions - an alternative approach…

These trials took me to an alternative approach - browser extensions!

Starting with Chrome, I knocked together an extension that uses the chrome.webRequest API to capture network requests and push them back into the page under test via local storage. I then raise custom events which the page under test can register for.

Of course this assumes you are in control of the page under test; in my case I was testing a component on a test page, so that was fine: I listened for the custom events and pushed them out into the DOM so I could scrape them using Selenium as normal. If this is not the case for you (which is likely) I think you could still use this solution, for example by updating the extension to push the events into a global object which you could query by injecting JavaScript from Selenium. Of course this is not uber-clean, but no solution in this area is!
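
As a hedged sketch of what the Selenium side of that might look like - window.__capturedRequests is a hypothetical global that the modified extension would populate, not something the extension does as written:

# scrape the (hypothetical) global the extension pushes request URLs into
requests = driver.execute_script("return window.__capturedRequests || [];")
tracking = requests.select { |url| url.include?('track') }
puts tracking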

I won't go into the code, as you can have a play for yourself; it turns out to be very simple, and of course it is a bulletproof and fast way of capturing this information.

But what about the other browsers?

Of course this only addresses Chrome…to compete with BrowserMob we need to be able to cover as many of the other browsers supported by Selenium as possible.

For this I used Crossrider, which provides a limited API but allows you to generate extensions for Chrome, Firefox, IE and Safari.

You cannot be as sophisticated as you can using the Chrome APIs directly but you can still accomplish a fair amount.

Again feel free to have a play around with the simple crossrider extension I wrote to scrape net requests.


There is no 'right' way of dealing with the lower level aspects of web automation. In some cases using BrowserMob makes perfect sense and should be your go-to solution. However, if you need a resilient way of capturing net requests, blocking net requests, or accessing response codes or headers, then a browser extension might just be the way to go.

Building Erlang-r16 and Elixir on OSX using HomeBrew

06 Aug 2013 | By Matt Robbins | Comments | Tags erlang elixir osx homebrew

A quick one just in case anybody is struggling to build Erlang-r16 on OSX. I wanted to play around with Elixir, the latest and slightly more offbeat (dare I say hipster) functional language to hit the big time. Elixir runs on the Erlang VM, and you need Erlang-r16 to build and run it.

To install Erlang-r16 on OSX using Homebrew, in theory you just need to add the homebrew-versions tap and brew install:

brew tap homebrew/versions
brew install erlang-r16

However, I got compile errors when running the install (note: I think this may even be specific to the latest MacBooks with the new Haswell chips).

erl -noshell -noinput -run prepare_templates gen_asn1ct_rtt \
           asn1rtt_check.beam asn1rtt_ext.beam asn1rtt_per_common.beam asn1rtt_real_common.beam asn1rtt_ber.beam asn1rtt_per.beam asn1rtt_uper.beam >asn1ct_rtt.erl
erlc -W  +debug_info -I/private/tmp/erlang-r16-HFuS/otp-OTP_R16B/lib/stdlib -Werror -o../ebin asn1ct_rtt.erl
Running Erlang
asn1ct_rtt.erl:1: syntax error before: Running
asn1ct_rtt.erl:14: no module definition
make[2]: *** [../ebin/asn1ct_rtt.beam] Error 1
make[1]: *** [opt] Error 2
make: *** [secondary_bootstrap_build] Error 2

The reason for this is that newer Macs with the latest versions of Xcode use LLVM-GCC and not GNU GCC. Homebrew now attempts to build with LLVM by default, which is probably sensible as this is the platform compiler; however, it is likely that for a fair number of packages this won't work, as their builds have not been tested on LLVM. This seems to be the case with the latest Erlang versions.

The additional problem at the moment seems to be that the Homebrew project is currently changing the way they handle the situation around cross compiling with different C compilers.

You can use the homebrew-versions tap to install a specific gcc e.g.

brew tap homebrew/versions
brew install gcc48

However, in Homebrew 0.9.4 at least, there seems to be no way of telling it to use these GCC versions when building. You can specify --use-gcc or set the HOMEBREW_CC environment variable, but neither worked for me. This issue got me thinking, as the only success people seemed to have had was where they had the apple-gcc42 compiler.

When you do export HOMEBREW_CC=gcc Homebrew seems to be hardcoded to look for the apple-gcc42 package and not any from the homebrew-versions tap. I think this pull request will make this integration smoother.

For the time being then, to get a working Elixir and Erlang do the following:

export HOMEBREW_CC=gcc
brew update
brew tap homebrew/dupes
brew install apple-gcc42
brew install erlang-r16
brew install elixir

You can probably avoid exporting HOMEBREW_CC and just use the following (but I haven't tested this method):

brew update
brew tap homebrew/dupes
brew install apple-gcc42
brew install --use-gcc erlang-r16
brew install elixir

This gets the GCC 4.2 that Homebrew is currently hardcoded to look for; indeed, you can see this if you run brew --config, as it now lists a GCC-4.2 value which would have been absent before:

ORIGIN: https://github.com/mxcl/homebrew
HEAD: 6b5b347332f7fdad35a5deab79fa71018e02a2b4
HOMEBREW_CELLAR: /usr/local/Cellar
CPU: quad-core 64-bit haswell
OS X: 10.8.4-x86_64
Xcode: 4.6.3
GCC-4.2: build 5666
LLVM-GCC: build 2336
Clang: 4.2 build 425
X11: N/A
System Ruby: 1.8.7-358
Perl: /usr/bin/perl
Python: /usr/bin/python
Ruby: /Users/matthewrobbins/.rbenv/shims/ruby

Git Clone via SSH using Jenkins on Windows

28 Jun 2013 | By Matt Robbins | Comments | Tags ci jenkins

This blog post is nothing revolutionary but more of a note to self so that I never have to learn this painful lesson again!

My current team has Windows-specific CI dependencies, hence I had no choice but to set up our Jenkins instance on Windows. The cleanest way to do this is to use the Jenkins installer, which installs Jenkins as a Windows Service.

That works fine, but of course the first challenge you face is cloning your Git repo over SSH (the default for the Jenkins Git plugin).

I’ll assume you have done the following:

  • Installed Git Client
  • Created SSH Keys and ensured that you can successfully clone a repo from your Git Bash (running as current user).

    Note: In theory you should not use passwordless keys, as anybody obtaining your key will have access to your account. However, for Jenkins you cannot have a password, as the job would hang when attempting to clone. From Git Bash this problem is solved using ssh-agent, but that won't work in Jenkins as we are using the Git plugin and not the shell.

    Bottom line - for Jenkins use a passwordless key - you can mitigate the risk by having a dedicated Jenkins Git User and only allowing them pull access.

  • Set up the Git plugin in Jenkins via 'Manage Jenkins' and point it to the Git cmd file on the system.


Now we get to the tricky part. When Jenkins runs as a Windows Service it does not run under a normal user account; it runs under the "Local System Account". Hence even though you have Git clone working as the current user, it will fail in Jenkins.

To solve this problem we need to put our private ssh key in the correct location for the Local System Account.

First of all we need to find out where this user’s home directory is, since it will not exist under C:\Users…

  • Download PSTools
  • Extract to a convenient location and, using a command prompt, navigate to that location (or add it to your path if you prefer)
  • Start the PsExec tool by running:
PsExec.exe -i -s cmd.exe
  • Now we need to find out where this user’s home directory is:
echo %userprofile%
  • For me this location was:
  • Now you can open Git Bash within this shell by running the following command:
C:\Program Files (x86)\Git\bin\sh --login -i
  • Copy your existing private key into the correct location for the Local System user; for me this was something like the following:
cd ~
cp /c/Users/matt/.ssh/id_rsa ~/.ssh/
  • Finally you can test that you can successfully authenticate to Github as the Local System User:
ssh -T git@github.com
  • If all is successful you should see the usual message:
Hi username! You've successfully authenticated, but GitHub does not provide shell access.

All that’s left to do now is test from Jenkins…Huzzah!

3DES Zeros Padding with Ruby and Openssl

29 Nov 2012 | By Matt Robbins | Comments | Tags testing ruby

Although many encryption libraries exist for Ruby (and here I am talking ‘C’ Ruby in the main) you will most typically want to use Openssl.

Recently I needed to encrypt a simple JSON string using 3DES (ECB mode).

IMPORTANT - You should never use ECB mode; it is insecure and highly prone to brute force attacks…but I had no choice, as this is what the external service required.

It all seemed so simple and my initial script looked something like this:

require 'openssl'
require 'base64'
require 'json'

des = OpenSSL::Cipher::Cipher.new('des-ede3')
des.key = 'yourchosendeskey' 

edata = des.update('{"somekey":"somevalue"}') + des.final
b64data = Base64.encode64(edata).gsub("\n",'') # remove EOLs inserted by Base64 encoding

Although this worked perfectly in my local test harness the external PHP service failed to decrypt the data. The reason for this is all to do with padding.

Certain encryption schemes (such as 3DES) are known as 'block ciphers' and operate on a fixed block size, i.e. in order for the data to encrypt/decrypt successfully it must be presented as a multiple of the required block size (in this case 8 bytes).

Openssl ensures this by inserting PKCS#5 padding; this padding scheme is preferred because it can easily be distinguished from the actual data and allows for basic integrity or password checking.

Sadly (and possibly unsurprisingly) the PHP library I was integrating with did not use Openssl and instead used MCRYPT, which uses 'zeros padding' and not PKCS#5.

Openssl does not support zeros padding out of the box, but you can do it yourself by telling Openssl not to pad the data and by ensuring the data is a multiple of the block size, appending '\0' (NUL) characters to the end of the data. The code looks something like this:

# continuing with the des cipher set up above
block_length = 8
des.padding = 0 # tell Openssl not to pad
json = '{"somekey":"somevalue"}'
json += "\0" until json.bytesize % block_length == 0 # pad with zeros
edata = des.update(json) + des.final
b64data = Base64.encode64(edata).gsub("\n",'')
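
For completeness, here is a sketch of the matching decryption step under the same assumptions (same key, same caveats about ECB); with padding disabled we have to strip the trailing NUL bytes ourselves:

des = OpenSSL::Cipher::Cipher.new('des-ede3')
des.decrypt
des.key = 'yourchosendeskey'
des.padding = 0 # again, tell Openssl not to expect PKCS#5 padding
plain = des.update(Base64.decode64(b64data)) + des.final
json = plain.sub(/\0+\z/, '') # strip the zeros padding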

IMPORTANT - you should choose a much better encryption scheme and padding mechanism, but if you really have to integrate with some 'interesting' PHP, this may help!

Setting two space indent for Gherkin Features in Vim

26 Nov 2012 | By Matt Robbins | Comments | Tags testing vim cucumber

Users of Vim (or GVim, MacVim etc) who write Gherkin feature files will know that most installations (on Mac certainly) appear to have auto-magical support for Gherkin feature file syntax, i.e. nice colours, auto format etc.

This is possible because the distribution of Vim installed on the machine, or which you installed, comes shipped with Tim Pope's excellent vim-cucumber plugin.

If your Vim does not ship with this you should most certainly install it right away!

I have always auto-indented using gg=G and never really thought too much about the indent level. It turns out that presently the vim-cucumber plugin does not specify this (although I note that it is commented out in the source at the bottom of the syntax file).

My vimrc defines this globally as follows:

set expandtab
set tabstop=4
set shiftwidth=4

So of course I was getting 4 level indent in my feature files.

The current team I am working with was not keen on this, and I can see their point: it does seem excessive. Two would seem more than enough, and in fact is what is used in the examples on cukes.info.

So how best to do this?

One option would be to navigate to the file relevant to your Vim install location:


And change this line:

" vim:set sts=2 sw=2:

To this:

set sts=2 sw=2

This is a hack though (when you reinstall Vim this change would be lost) and it would need to be done for all copies of Vim on the machine.

The answer is to use Vim’s ftplugin functionality.

First of all you need to switch this on in your vimrc, adding the following:

filetype plugin indent on

Then you have two options. Firstly, you can create a file '~/.vim/after/ftplugin/cucumber.vim' and add the following:

setlocal expandtab
setlocal shiftwidth=2
setlocal softtabstop=2

Or add the config directly into your .vimrc:

au FileType cucumber setl sw=2 sts=2 et

This is obviously common knowledge to experienced Vim users but may be useful for anybody out there who is relatively new to Vim and wants to customize syntax on a per language basis in a clean way.

Cucumber and the global scope problem - bring on page objects and the MGF pattern

06 Aug 2012 | By Matt Robbins | Comments | Tags testing cucumber capybara

There is no argument: Cucumber has added a new dimension to acceptance testing. Gherkin features are living, executable requirements, and that for me is the benefit we should never lose sight of.

So you've clocked this and created a bunch of automated tests driven from your features…but can too much Gherkin be a problem?

In my experience it can, and it's something you need to keep a close eye on to avoid your tests becoming as big a maintenance burden as the application code.

The Global Scope Problem

For a project with just a few features you can quite happily get away with a project structure that looks something like the one below: all your steps in a single file containing the automation code for your features.

/ /~step-definitions/
/ / /-steps.rb
/ /-example1.feature
/ /-example2.feature
/ /~support/
/ / /-env.rb/

This is fine for a small project, but let's say you get above 10-15 scenarios: soon the steps file will become very big and you will start spreading steps over a few files with roughly representative names. Then you run headfirst into the 'Global Scope Problem'…

Cucumber has the concept of a 'World' object, a kind of giant mixin which allows your Features, Steps and Support files to share context through the use of Ruby instance variables (the ones preceded by @ characters). Whilst this is essential to the functioning of Cucumber, it is also our enemy.

As your project grows this shared scope can cause chaos. An IDE or editor that understands Cucumber syntax will help, but ultimately you will have features mapping to steps which could be in any steps file, and where state can be affected by any other step in any file via instance variables.

This can end up as a mess of procedural spaghetti.

The Page Object solution and the MGF Pattern

An excellent blog post by Joseph Wilk got me thinking seriously about this when I was working on a project which was being bitten hard by the global scope problem. Integrating the Page Object Pattern into your Cucumber tests is really your only option for containing the problem of global scope.

At this point I am going to introduce what I am calling the MGF pattern, which stands for ‘Model, Glue, Feature’.

  • Model - Your page objects
  • Glue - The step definitions
  • Feature - The Gherkin Features

The Page Object pattern will be familiar to most of those reading this blog, so I won't go over it again. However, I will suggest the following 'Golden Rules', which I have adapted from Simon Stewart's wiki page on the Selenium site because, as with most things, he has it nailed:

  1. PageObject methods return other PageObjects or Self (whenever a page load is triggered)…
  2. Assertions should nearly always be in the Glue (step defs) not Page Objects (exception - ‘correct location’ check)
  3. Different results for the same action are different methods (‘save_expecting_error’, ‘save_expecting_submit’)
  4. Never call steps from steps (add helpers to objects if compounding steps are needed)
  5. Have a hash of element locators in each PageObject (or register custom locators with Capybara)
  6. NEVER pollute test code with page internals i.e. no css xpath in test code

Page objects possess all the knowledge about the application under test; the Glue simply binds the objects to the Gherkin features.
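
To make the Glue layer concrete, here is a hedged sketch of step definitions binding Gherkin steps to the page objects sketched later in this post (the @session container and the step wording are illustrative):

When /^I sign in with valid credentials$/ do
  signin_page = SigninPage.new(@session)
  signin_page.visit
  @current_page = signin_page.signin_expecting_success
end

Then /^I should be on my dashboard$/ do
  @current_page.correct_page? # the 'correct location' check is the one assertion we allow in an object
end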

If you follow this pattern I would expect you to end up with a project layout something like that below.

/ /~examples/
/ / /~step-definitions/
/ / / /-steps.rb
/ / /-signin.feature
/ / /-dashboard.feature
/ /~support/
/ / /-env.rb/
/ / /~objects/
/ / / /-signin.rb
/ / / /-dashboard.rb

Integrating Capybara

Many people, including myself, like using Capybara, so it is worth noting how to integrate it into your page objects. Because page objects live outside the Cucumber 'World', you have two options available:

  1. Include the Capybara::DSL module in your page objects
  2. Pass around an instance of a Capybara Session

The advantage of the first method is that you only have to do this in a 'base' class which all other objects can inherit from to obtain those methods; the advantage of the second method is that you could potentially pass in a mock if you wanted to unit test your page objects (though the value of doing this is debatable and could fill an entire blog post).

Below is an example base class page object which may help get you started. In this example @session is an instance of an object which acts as a container for our current page and is the only variable that couples the step definitions (or Glue); it also acts as a way of storing data that needs to persist across a test run, e.g. the user.

class GenericPage
  include Capybara::DSL
  include Capybara::Node::Matchers
  include RSpec::Matchers

  attr_accessor :url

  def initialize(session)
    @session = session
  end

  def correct_page?
    page.should have_css self.class::ELEMENTS['page check element']
  end

  def element_exists?(element)
    page.has_selector? self.class::ELEMENTS[element]
  end

  def current_url
    page.current_url
  end
end

class SigninPage < GenericPage

  LOCATION = 'http://myapp.com/signin'

  ELEMENTS = {
    'page check element'            => '.myapp-signin > h1',
    'page check value'              => 'Welcome to MyApp',
    'username'                      => '#username',
    'password'                      => '#password',
    'save'                          => '#submit_button',
  }

  def initialize(session)
    super(session)
  end

  def visit
    Capybara::visit LOCATION
  end

  def signin_expecting_success
    signin
    DashboardPage.new(@session) # golden rule 1: a page load returns a new page object
  end

  def signin_expecting_error_using(unique)
    # sketch: submit the bad data identified by `unique` and return self,
    # as we expect to remain on the signin page with an error shown
    signin
    self
  end

  private

  def signin
    fill_in ELEMENTS['username'], :with => @session.user.username
    fill_in ELEMENTS['password'], :with => @session.user.password
    click_on ELEMENTS['save']
  end
end

Adding page objects and following the MGF pattern will help you combat the 'Global Scope' problem that can hit large projects using Cucumber to drive automated tests. It will mean changes can be isolated to single points in the code and will add logical structure, reducing the maintenance burden.


I have just finished reading some excellent posts on Nathaniel Ritmeyer's blog which elaborate on how we can manage page objects, thus tackling the issues of step coupling. He also maintains SitePrism, which looks like an interesting DSL for page objects (and removes the need for the ugly constants I have in the examples above).

Anyway I think I will follow up this post in a couple of months and elaborate on the issues around coupling in our glue code and how best we can minimize what is one of the major problems with Cucumber.

Passing parameters to a downstream Hudson (Jenkins) job

03 Aug 2012 | By Matt Robbins | Comments | Tags testing ci

I wanted to run an acceptance test in a downstream job. However I needed to grab the value of a build parameter from the upstream job. It turns out Hudson (Jenkins) does not support this ‘out of the box’.

You can add a plugin, however this was going to cause a time delay in my environment as I did not have admin rights to do this.

So a thought struck me…trigger the downstream job from the shell in the upstream and use the REST API.

Now this is all fine and, to be fair, there is a reference on the Hudson wiki, but it is not totally obvious.

You can't do a GET with a query string; it needs to be a POST with params. Below is an example using curl.

json="{\"parameter\": [{\"name\": \"NAMESPACE\", \"value\": \"test\"}, {\"name\": \"ENVIRONMENT\", \"value\": \"production\"}], \"\": \"\"}"
curl -X POST $url -d token=FRAMEWORKS --data-urlencode json="$json" -k -E "/data/certs/mycert.pem"

You'll notice you need the security token in order to run this. This can be set on the job being triggered via 'Configure > Trigger Builds Remotely > Authentication Token'.

The funny thing is I did all of this and then realised the version of curl on the Red Hat 5 system I was on didn't support --data-urlencode; luckily --data sufficed for my needs.
