FITTEST 2013, November 12, 2013, Istanbul
Time | Session
---|---
09:00 - 09:15 | Welcome to the FITTEST International Workshop
09:15 - 10:15 | The FITTEST project and its final results. Tanja Vos (FITTEST coordinator, Universidad Politecnica de Valencia, Spain)
10:15 - 10:45 | Assessing the Impact of Firewalls and Database Proxies on SQL Injection Testing. Dennis Appelt, Nadia Alshahwan and Lionel Briand
10:45 - 11:15 | N-Gram Based Test Sequence Generation from Finite State Models. Paolo Tonella, Roberto Tiella and Cu Duy Nguyen
11:15 - 11:45 | Logging to Facilitate Combinatorial System Testing. Peter M. Kruse, Wishnu Prasetya, Jurriaan Hage and Alexander Elyasov
11:45 - 12:45 | Lunch
12:45 - 13:00 | Introduction to the 2nd round of the FITTEST Java Unit Testing Tools competition. Sebastian Bauersfeld (Universidad Politecnica de Valencia, Spain)
13:00 - 13:20 | Participant: EvoSuite. Andrea Arcuri and Gordon Fraser (online via Skype)
13:20 - 13:40 | Participant: T3. Wishnu Prasetya
13:40 - 14:00 | Outcome of the Java Unit Testing Tools competition. Sebastian Bauersfeld (Universidad Politecnica de Valencia, Spain)
14:00 - 14:15 | Prize awards. Tanja Vos (FITTEST coordinator, Universidad Politecnica de Valencia, Spain)
14:15 - 14:30 | Coffee break
14:30 - 15:00 | Demonstration of the FITTEST Integrated Testing Environment (ITE). Peter Kruse (Berner & Mattner, Germany)
15:00 - 15:30 | Discussion on future research directions for FI testing. Moderated by TBC
1st Future Internet Testing (FITTEST) Workshop
The First International Workshop on Future Internet Testing (FITTEST 2013) will be held on 12 November 2013 in Istanbul, Turkey, in conjunction with the 25th IFIP International Conference on Testing Software and Systems (ICTSS 2013). FITTEST 2013 will consist of:
- A dedicated workshop for the software testing community working in the domain of Future Internet (FI) applications. Topics include (but are not limited to):
- new techniques and tools for testing Future Internet applications that deal with the dynamic, adaptive and self-* properties of FI applications,
- evaluation of testing techniques and tools on real systems through case studies.
- A tools competition for Java Unit Testing tools (ROUND 2). This competition is a follow-up to the competition run at SBST 2013 at ICST 2013 in Luxembourg. It will be based on a different benchmark made for FITTEST 2013. The objective is to facilitate a comparison of testing tools and to stimulate discussion at the workshop about the advantages and disadvantages of particular approaches to the problem. Entrants will demonstrate their tools at the workshop and present their competition results.
Call For Papers
The Future Internet (FI) will be a complex interconnection of services, applications, content and media, possibly augmented with semantic information. It will offer a rich user experience, extending and improving the current hyperlink-based navigation. Key technologies contributing to the development of FI services and applications include rich, complex, dynamic and stateful client-server systems. In such systems, the clients interact asynchronously with the servers, where applications are organised as services, usually around an enterprise service bus, taking advantage of dynamic service discovery, replacement and composition. Adaptivity and autonomy improve the user experience, by dynamically changing both the client and the server side, through capabilities such as self-configuration and self-healing. As a consequence, FI applications will exhibit emergent behaviours, which are hard to predict at design time.
Contributions can be:
- Long papers (16 page limit). This format will be used for papers on original research results in the domain of testing Future Internet applications.
- Short papers (10 page limit). This format will be used for (a) papers on novel techniques, ideas and positions that have yet to be fully developed, and (b) tool competition entry papers that describe the results of evaluating a tool against a benchmark provided by FITTEST 2013.
All contributions have to be submitted electronically in PDF format via EasyChair. All submissions have to follow the Springer LNCS paper format.
Accepted full papers will be published by Springer in the LNCS series; accepted short papers will appear in the same LNCS volume in a separate section.
Important Dates:
- Abstract deadline: August 27, 2013
- Submission of papers: September 30, 2013
- Notification date: October 8, 2013
- Camera ready: October 20, 2013
About FITTEST:
FITTEST is a European project (2010-2013) that will develop an integrated environment for the automated and continuous testing of Future Internet applications. The Future Internet will be a complex interconnection of services, applications, content and media, on which our society will become increasingly dependent for critical activities such as public utilities, social services, government, learning, finance, business, as well as entertainment. Consequently, Future Internet applications have to meet high quality demands. Testing is the most widely used quality assurance technique in industry. However, the complexity of the technologies involved in the Future Internet makes testing extremely challenging and demands novel approaches and major advances in the field.
The overall aim of the FITTEST project is to address these testing challenges, by developing an integrated environment for automated testing, which can monitor the Future Internet application under test and adapt to the dynamic changes observed. The environment will implement continuous post-release testing to address self-modifiability and run-time adaptation of Future Internet applications. Since services can be dynamically discovered and added, intended use of the application can change after release. The environment will integrate, adapt and automate various techniques for continuous Future Internet testing (e.g. dynamic model inference, model-based testing, log-based diagnosis, oracle learning, combinatorial testing, concurrent testing, regression testing, etc.).
The environment will make use of evolutionary search-based techniques, making it possible for the above-mentioned techniques to deal with the huge search space associated with the Future Internet testing challenges. In this way, we can address new, emerging or unexpected behaviour that may originate from dynamism, autonomy and self-adaptation. FITTEST results will be evaluated in case studies using real Internet systems, such as virtual worlds, social networking, highly scalable service providers and a SaaS-enabled CASE tool, all highly relevant to the Future Internet vision.
The project partners are:
- Universidad Politecnica de Valencia, Spain
- University College London, United Kingdom
- Berner & Mattner Systemtechnik, Germany
- IBM Israel - Science and Technology LTD, Israel
- Fondazione Bruno Kessler, Italy
- Universiteit Utrecht, Netherlands
- Softeam, France
Visit FITTEST on Facebook and follow the project on Twitter @FITTESTproject.
Program Committee
- Nadia Alshahwan, University of Luxembourg, Luxembourg
- Alessandra Bagnato, Softeam, France
- Nelly Condori Fernandez, Universidad Politecnica de Valencia, Spain
- Mark Harman, University College London, UK
- Yue Jia, University College London, UK
- Atif Memon, University of Maryland, USA
- Bilha Mendelson, IBM Haifa, Israel
- John Penix, Google, USA
- Justyna Petke, University College London, UK
- Andrea Polini, University of Camerino, Italy
- Wishnu Prasetya, Universiteit Utrecht, The Netherlands
- Simon Poulding, University of York, UK
- Scott Tilley, Florida Institute of Technology, USA
- Paolo Tonella, Fondazione Bruno Kessler, Italy
- Joachim Wegener, Berner & Mattner, Germany
FITTEST Java Unit Testing Tool Contest
The winner of the tool competition for FITTEST 2013 was the EvoSuite tool by Gordon Fraser and Andrea Arcuri, with second place going to Wishnu Prasetya and his T3 tool.
From left to right: Tanja Vos, Wishnu Prasetya, Andrea Arcuri, Sebastian Bauersfeld
More pictures from the workshop can be found on Facebook: https://www.facebook.com/FITTESTproject
We invite developers of tools for Java unit testing at the class level to participate in a tools competition! Competition entries are in the form of short papers (maximum of 10 pages in LNCS format) describing an evaluation of your tool against a benchmark supplied by the workshop organisers.
The results of the tools competition will be presented at the workshop. We additionally plan to co-ordinate a journal paper jointly authored by the tool developers that evaluates the results of the competition.
The Contest
The contest is targeted at developers and vendors of testing tools that generate test input data for unit testing Java programs at the class level. Each tool will be applied to a set of Java classes taken from open-source projects and selected by the contest organisers. The participating tools will be compared on:
- execution time
- achieved branch coverage
- test suite size
- mutation score
Each participant should install a copy of their tool on the server where the contest will be run; to this end, each participant will have SSH access to the contest server. The benchmark infrastructure will run the tools and measure their outputs fully automatically, so the tools must be capable of running without human intervention.
Participate
To participate, please send a mail to Tanja Vos describing the following characteristics of your tool: name, testing techniques implemented (e.g. search-based testing, symbolic execution, etc.), compatible operating systems, tool inputs and outputs, and optionally any evaluative studies already published.
You will be sent credentials to log in to the contest server.
Instructions
To allow automatic execution of the participating tools, they need to be installed and configured on the contest server. Details of the server will be made public soon.
You should install and configure your testing tool in the home directory of your account. You may do this in any way you like, with the following exceptions:
- You must have an executable (or shell script) named $HOME/runtool that implements the protocol described below
- Your tool must store intermediate data in $HOME/temp/data
- Your tool must output the generated test cases in JUnit4 format in $HOME/temp/testcases
The Benchmark Automation Protocol
The runtool script/binary is the interface between the benchmark framework and the participating tools. The communication between runtool and the benchmark framework takes place over a very simple line-based protocol on the standard input and output channels. The following table describes the protocol; every step consists of one line of text received by the runtool program on STDIN or sent on STDOUT.
Step | Messages STDIN | Messages STDOUT | Description
---|---|---|---
1 | BENCHMARK | | Signals the start of a benchmark run; the directory $HOME/temp is cleared
2 | directory | | Directory with the source code of the SUT
3 | directory | | Directory with the compiled class files of the SUT
4 | number | | Number of entries in the class path (N)
5 | directory/jar file | | Class path entry (repeated N times)
6 | number | | Number of classes to be covered (M)
7 | | CLASSPATH | Signals that the testing tool requires additional class path entries
8 | | number | Number of additional class path entries (K)
9 | | directory/jar file | Additional class path entry (repeated K times)
10 | | READY | Signals that the testing tool is ready to receive challenges
11 | class name | | The name of the class for which unit tests must be generated
12 | | READY | Signals that the testing tool is ready to receive more challenges; the test cases in $HOME/temp/testcases are analyzed and the directory is then cleared; go to step 11 until M class names have been processed
To ease the implementation of a runtool program according to the protocol above, we provide a skeleton implementation in Java.
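To make the exchange concrete, here is a rough sketch of what the main loop of a runtool could look like in Java. It is an illustration of the protocol only, not the provided skeleton: the generateTests method is a hypothetical placeholder for invoking the actual tool, and the sketch simply reports zero additional class path entries in steps 7-9.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

public class RunTool {
    public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));

        in.readLine();                                   // step 1: BENCHMARK
        String srcDir = in.readLine();                   // step 2: SUT sources
        String binDir = in.readLine();                   // step 3: SUT class files
        int n = Integer.parseInt(in.readLine().trim());  // step 4: N
        List<String> classPath = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            classPath.add(in.readLine());                // step 5: N class path entries
        }
        int m = Integer.parseInt(in.readLine().trim());  // step 6: M

        System.out.println("CLASSPATH");                 // step 7
        System.out.println(0);                           // step 8: no extra entries (K = 0)
                                                         // step 9 is skipped since K = 0

        System.out.println("READY");                     // step 10: ready for challenges
        for (int i = 0; i < m; i++) {
            String className = in.readLine();            // step 11: class under test
            generateTests(className, srcDir, binDir, classPath);
            System.out.println("READY");                 // step 12: tests written, next class
        }
    }

    // Hypothetical placeholder: run the actual tool here, writing JUnit 4 test
    // classes to $HOME/temp/testcases and intermediate data to $HOME/temp/data.
    private static void generateTests(String className, String srcDir,
                                      String binDir, List<String> classPath) {
        // tool-specific
    }
}
```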
Test the Protocol
In order to test whether your runtool implements the protocol correctly, we will install a utility called fittestcontest on the machine. If you run it, it will output:
fittestcontest <benchmark> <tooldirectory> <debugoutput>
Available benchmarks: [mucommander, jedit, jabref, triangle, argouml]
The first line shows how the tool is used. <benchmark> is one of the installed benchmarks as shown in the second line. <tooldirectory> is the directory where your runtool resides and <debugoutput> is a boolean value to enable debug information. The benchmarks are collections of classes from different open source projects. triangle lends itself to testing the basic functionality, as it is a very simple benchmark consisting of only 2 classes. An example invocation would be:
fittestcontest triangle . true
If you have implemented the protocol correctly and generated all the test cases, the tool will create a transcript file in the runtool’s directory. This file shows several statistics, such as the achieved branch coverage, the mutation score, etc.
Test Case Format
The tests generated by the participating tools must be stored as one or more Java files containing JUnit 4 tests (see the example below). In particular, you must:
- declare classes public
- add a zero-argument public constructor
- annotate test methods with @Test
- declare test methods public
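For illustration, a generated test file that satisfies these requirements could look as follows; Triangle and its classify method are hypothetical stand-ins for a class from the benchmark:

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class TriangleTest {               // the class is declared public

    public TriangleTest() {               // zero-argument public constructor
    }

    @Test                                 // test methods are annotated with @Test ...
    public void testScalene() {           // ... and declared public
        // Triangle and classify() are illustrative names only.
        Triangle t = new Triangle(3, 4, 5);
        assertEquals("scalene", t.classify());
    }
}
```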
The generated test cases will be compiled against
- JUnit 4.10
- The benchmark SUT
- Any dependencies of the benchmark SUT