FITTEST Java Unit Testing Tool Contest

The winner of the tool competition for FITTEST 2013 was the EvoSuite tool by Gordon Fraser and Andrea Arcuri, with second place going to Wishnu Prasetya and his T3 tool.


From left to right: Tanja Vos, Wishnu Prasetya, Andrea Arcuri, Sebastian Bauersfeld

More pictures from the workshop can be found on Facebook: https://www.facebook.com/FITTESTproject


We invite developers of tools for Java unit testing at the class level to participate in a tools competition! Competition entries are in the form of short papers (maximum of 10 pages in LNCS format) describing an evaluation of your tool against a benchmark supplied by the workshop organisers.

The results of the tools competition will be presented at the workshop. We additionally plan to co-ordinate a journal paper jointly authored by the tool developers that evaluates the results of the competition.

The Contest

The contest is targeted at developers/vendors of testing tools that generate test input data for unit testing Java programs at the class level. Each tool will be applied to a set of Java classes taken from open-source projects and selected by the contest organisers. The participating tools will be compared on metrics such as achieved branch coverage and mutation score.

Each participant should install a copy of their tool on the server where the contest will be run; to this end, each participant will be given SSH access to the contest server. The benchmark infrastructure will run the tools and measure their outputs fully automatically, so tools must be capable of running without human intervention.

Participate

To participate, please send a mail to Tanja Vos describing the following characteristics of your tool: name, testing techniques implemented (e.g. search-based testing, symbolic execution, etc.), compatible operating systems, tool inputs and outputs, and optionally any evaluative studies already published.

You will be sent credentials to log in to the contest server.

Instructions

To allow automatic execution of the participating tools, they need to be configured and installed on the contest server. Details of the server will be made public soon.

You should install and configure your testing tool in the home directory of your account. You can do this in any way you want, subject to the exceptions below.

The Benchmark Automation Protocol

The runtool script/binary is the interface between the benchmark framework and the participating tools. Communication between runtool and the benchmark framework uses a very simple line-based protocol over the standard input and output channels. The following table describes the protocol; every step consists of a line of text received by the runtool program on STDIN or sent on STDOUT.

Step  STDIN               STDOUT              Description
1     BENCHMARK                               Signals the start of a benchmark run; the directory $HOME/temp is cleared
2     directory                               Directory with the source code of the SUT
3     directory                               Directory with the compiled class files of the SUT
4     number                                  Number of class path entries (N)
5     directory/jar file                      Class path entry (repeated N times)
6     number                                  Number of classes to be covered (M)
7                         CLASSPATH           Signals that the testing tool requires additional class path entries
8                         number              Number of additional class path entries (K)
9                         directory/jar file  Additional class path entry (repeated K times)
10                        READY               Signals that the testing tool is ready to receive challenges
11    class name                              Name of the class for which unit tests must be generated
12                        READY               Signals that the testing tool is ready to receive more challenges; the test cases in $HOME/temp/testcases are analysed and the directory is then cleared; go to step 11 until M class names have been processed

To ease the implementation of a runtool program according to the protocol above, we provide a skeleton implementation in Java.
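As an illustration only (not the provided skeleton), the protocol above can be sketched in plain Java as follows; the class name RunTool, the helper generateTests, and the extra class path entry /path/to/mytool.jar are hypothetical placeholders for your own tool:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

// Minimal sketch of a runtool that follows the benchmark protocol.
// RunTool, generateTests and /path/to/mytool.jar are made-up names;
// plug in your own tool's entry point.
public class RunTool {
    public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));

        in.readLine();                                // step 1: "BENCHMARK"
        String srcDir = in.readLine();                // step 2: SUT source directory
        String binDir = in.readLine();                // step 3: SUT compiled classes
        int n = Integer.parseInt(in.readLine());      // step 4: N class path entries
        String[] classPath = new String[n];
        for (int i = 0; i < n; i++) {
            classPath[i] = in.readLine();             // step 5: class path entry
        }
        int m = Integer.parseInt(in.readLine());      // step 6: M classes to cover

        System.out.println("CLASSPATH");              // step 7: request extra entries
        System.out.println("1");                      // step 8: K = 1
        System.out.println("/path/to/mytool.jar");    // step 9: hypothetical entry
        System.out.println("READY");                  // step 10: ready for challenges
        System.out.flush();

        for (int i = 0; i < m; i++) {
            String className = in.readLine();         // step 11: class under test
            generateTests(className, srcDir, binDir, classPath);
            System.out.println("READY");              // step 12: tests written to
            System.out.flush();                       //          $HOME/temp/testcases
        }
    }

    // Placeholder: invoke your test generator here and write the
    // resulting JUnit test files into $HOME/temp/testcases.
    static void generateTests(String cls, String src, String bin, String[] cp) {
    }
}
```

Because all communication goes over STDIN/STDOUT, the sketch can be exercised simply by piping a scripted benchmark dialogue into the program.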

Test the Protocol

To test whether your runtool implements the protocol correctly, we will install a utility called fittestcontest on the machine. If you run it, it will output:

fittestcontest <benchmark> <tooldirectory> <debugoutput>
            Available benchmarks: [mucommander, jedit, jabref, triangle, argouml]

The first line shows how the tool is used. <benchmark> is one of the installed benchmarks as shown in the second line. <tooldirectory> is the directory where your runtool resides, and <debugoutput> is a boolean value that enables debug information. The benchmarks are collections of classes from different open-source projects. triangle lends itself to testing the basic functionality, as it is a very simple benchmark consisting of only two classes. An example invocation would be:

fittestcontest triangle . true

If you implemented the protocol correctly and generated all the test cases, the tool will create a transcript file in the runtool's directory. This file shows several statistics, such as the achieved branch coverage and mutation score.

Test Case Format

The tests generated by the participating tools must be stored as one or more Java files containing JUnit 4 tests.
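To show the expected shape of such a file, here is a hypothetical example of a generated JUnit 4 test class; it exercises java.util.ArrayList purely for illustration, and the class and method names are made up:

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.util.ArrayList;
import java.util.List;

import org.junit.Test;

// Hypothetical example of a generated test file: a plain JUnit 4
// test class with one or more @Test methods. A real generated file
// would target a class of the benchmark SUT instead of ArrayList.
public class GeneratedTest {

    @Test
    public void addIncreasesSize() {
        List<String> list = new ArrayList<>();
        list.add("a");
        assertEquals(1, list.size());
        assertTrue(list.contains("a"));
    }
}
```

Such files would be written into $HOME/temp/testcases, where the benchmark framework picks them up after each READY message.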

The generated test cases will be compiled against

If you have any queries, please contact the program chairs: Kiran Lakhotia (k.lakhotia (at) cs.ucl.ac.uk) or Tanja Vos (tvos (at) pros.upv.es).