The goal had been set a day or two before the hackathon began: we were hoping to make Gerrit better at recommending relevant reviewers for a given commit. For those who haven't heard of it, Gerrit is a web-based code review system. It is a nifty Google-backed open-source project evolving amid an active community of users. We have been using it here at Intersec since 2011, and some famous software projects also rely heavily on it for their development process.
This would be a good metaphor to illustrate our mindset at the beginning of the hackathon! (credits: Team Fortress 2)
Our team consisted of five people: Kamal, Romain, Thomas, Louis and Romain (myself).
Presentation of the Project
As testers, we spend a lot of time working on behave, our test automation framework ((behave is a clone of cucumber written in Python. It is based on BDD (Behavior Driven Development) principles: tests are described as a succession of English sentences (assumptions, then actions, then results) which are themselves mapped to the corresponding Python code.)). Our test framework is a great tool, but starting and initializing the product and then running the tests one by one takes a lot of time.
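To make the sentence-to-code mapping concrete, here is a minimal sketch of the mechanism, in plain Python with illustrative names. This is not behave's actual implementation or API (behave uses `@given`/`@when`/`@then` decorators and a richer context object); it only shows the principle of registering step functions against sentence patterns:

```python
import re

# Registry of (pattern, function) pairs -- a simplified stand-in for the
# step catalog that a BDD framework like behave builds from decorators.
_registry = []

def step(pattern):
    """Register a Python function for sentences matching `pattern`."""
    def decorator(func):
        _registry.append((re.compile(pattern), func))
        return func
    return decorator

def run_sentence(context, sentence):
    """Find the step matching `sentence` and call it with captured groups."""
    for regex, func in _registry:
        match = regex.fullmatch(sentence)
        if match:
            return func(context, *match.groups())
    raise LookupError("no step definition matches: %r" % sentence)

# Hypothetical example steps, in the given/when/then spirit:
@step(r"the product is started")
def start_product(context):
    context["started"] = True

@step(r"I create (\d+) users")
def create_users(context, count):
    context["users"] = int(count)
```

A scenario is then just a list of such sentences played in order, each one dispatched to its Python function, e.g. `run_sentence(ctx, "I create 3 users")`.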
We are not only test automation developers. We also need to explore the product under test, experimenting again and again based on the information we gather along the way. Manual testing is the natural approach for non-trivial experiments, but it can benefit from automated testing, which offers a quick and reliable way to bring the product into any given state.
The hackathon ((a hackathon is an event in which computer programmers and others involved in software development, including graphic designers, interface designers and project managers, collaborate intensively on software projects. Intersec organizes such events to foster project innovation)) was the perfect opportunity to build ourselves a new exploratory tool for interactive automated testing. To do that, we needed to be able to:
- Gain more control over the setup steps
- Pause the run in order to manually inspect the state of the product
- Select a few more steps to run, check again, and so on
- Explore with some automated help
We designed an interactive mode to control the run of a scenario. The goal was to be able to play each sentence on demand. This mode would later help us reproduce issues and set up the environment faster, whether for testing purposes or even for customer demonstrations.
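The play-on-demand idea can be sketched as follows. This is a hypothetical outline, not our actual behave integration: a scenario holds its remaining sentences, and each call to `play_next` runs exactly one of them, leaving the tester free to inspect the product in between:

```python
# Hypothetical sketch of an interactive scenario runner: sentences are
# played one at a time, so the product state can be inspected manually
# between steps. `executor` stands in for whatever runs one sentence.
class InteractiveScenario:
    def __init__(self, sentences, executor):
        self.sentences = list(sentences)  # sentences left to play
        self.executor = executor          # callable executing one sentence
        self.played = []                  # history, useful to replay a setup

    def play_next(self):
        """Run the next sentence on demand and record it in the history."""
        sentence = self.sentences.pop(0)
        self.executor(sentence)
        self.played.append(sentence)
        return sentence

    def play_until(self, n):
        """Play sentences until `n` of them have run, to reach a known state."""
        while len(self.played) < n and self.sentences:
            self.play_next()
```

The `played` history is what makes the tool useful beyond exploration: the same recorded prefix can be replayed to reproduce an issue or to restage a demonstration environment quickly.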