Eclipse Testing Day 2012 Talks

From Eclipsepedia

Revision as of 03:26, 31 August 2012




Back to Testing Day 2012 page

Testing of Eclipse RCP based Products

Stability is a key requirement for an established product. Software is never bug free, of course, but a bug once fixed must stay fixed in later releases. Since manual testing is error-prone and quite expensive, consistent automated testing is the logical consequence.

Migration and system testing are especially important for Eclipse RCP applications and plug-ins, which are delivered to different target platforms and installed into many possible client configurations.

So we're facing challenges on multiple, very different layers. In the talk we present our approach to clearing the fog, from unit testing to black-box UI tests, and talk about our dos and don'ts of using technologies from Jenkins to Jubula.

Manuel Bork, Yatta Solutions GmbH

Manuel Bork is a passionate software developer, committed IT consultant and dedicated Eclipse enthusiast. At Yatta Solutions, Manuel works on UML Lab's Round-Trip-Engineering technology. He is an Eclipse committer on the Eclipse MFT project, and he loves mountain biking in his free time.
Twitter: @manuelbork

Cutting the right corners: Achieving the balance between effort and payoff for GUI test automation

If budgets, resources and time are tight, then one of the first things to suffer is testing. While it’s only natural (and perfectly correct) to want to minimize any waste, it’s important not to cut corners in the wrong place.

This talk looks at the case study of a project that began in a fixed-price phase, with automated GUI tests running in parallel with development. The quality of the released software convinced the team and customer that test automation was a good investment. However, due to personnel and project constraints, it was not possible to have a dedicated tester for the project after the fixed-price phase.
Through various iterations, the team was able to find a good balance between effort and payoff with GUI test automation. They were able to identify factors that can be reduced without negative consequences (constant tester presence in the project, for example) as well as aspects that are mandatory for success (continuous build, regular test analysis and maintenance, good test design). As well as technical requirements for good-value test automation, they were able to identify ways to find and train good testers.
This talk presents the project's journey and the ways the team achieved good results with minimal effort.

Alexandra Schladebeck, BREDEX GmbH

Alexandra earned a degree and an MA in linguistics from York University before starting work at BREDEX GmbH, where she is a trainer and consultant for automated testing and test processes. As product owner for the Eclipse Jubula Project, and its “big brother” GUIdancer, she is responsible for developing user stories with customers as well as documentation and testing. Alex frequently represents BREDEX at conferences, where she talks about agility and testing from project experience. She is also involved in event organisation and customer communication.

Java Development with Contracts in Eclipse

In our talk, we introduce the concept of, and tool support for, Java development with contracts in Eclipse.

Design by Contract (DbC) is a long-established methodology that allows software to be specified by formulating pre- and post-conditions explicitly in the code. Largely forgotten for years, it has been rediscovered with the increasing complexity of modern software and is becoming more popular again, for example with Code Contracts for .NET.
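The idea of pre- and post-conditions can be sketched in plain Java using `assert` statements (a hand-rolled illustration of the DbC concept, not C4J's actual API; the `Account` class and its methods are hypothetical):

```java
// A hand-rolled Design-by-Contract sketch in plain Java (run with `java -ea`
// so that assertions are enabled). This only illustrates the DbC idea;
// C4J expresses contracts differently and keeps them refactoring-safe.
public class Account {
    private int balance = 0;

    public void deposit(int amount) {
        // Precondition: callers must pass a positive amount.
        assert amount > 0 : "precondition violated: amount must be positive";
        int oldBalance = balance; // capture old state for the postcondition
        balance += amount;
        // Postcondition: the balance grew by exactly the deposited amount.
        assert balance == oldBalance + amount : "postcondition violated";
    }

    public int getBalance() {
        return balance;
    }

    public static void main(String[] args) {
        Account account = new Account();
        account.deposit(50);
        System.out.println(account.getBalance()); // prints 50
    }
}
```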

In order to provide similar functionality for Java, we have developed a contracts framework for Java named C4J. It can easily be integrated into new and existing Eclipse projects. Compared to similar approaches, it has the advantage of being 'refactoring-safe': the contract code does not become invalid after automated refactorings in the IDE. On top of that, we’d like to share our progress in developing an accompanying Eclipse plug-in that provides similar functionality for C4J as Eclipse and MoreUnit already do for JUnit.

Stefan Schürle
Stefan Schürle is a software developer with a main focus on developing RCP applications with agile methods such as TDD and Scrum.

Ben Romberg
Ben Romberg is a software developer at andrena objects and the developer of C4J 4.0. His main interests are automated tests and contracts for Java.

Iterative Model Based Generation of Test Cases for Graphical User Interfaces

Using activity diagrams as a means to generate and understand test cases has various advantages. Activity diagrams are reasonably easy to create and understand, and can offer a clearer view of a test case in graphical form.
Nevertheless, using activity diagrams for test case generation does involve some tricky aspects, such as keeping the diagram and the test cases in sync. In this presentation, we show how test cases can be automatically generated from activity diagrams and vice versa. We discuss the practical uses as well as the limitations of the technique.

Raimar Bühmann

Raimar received his BSc in Computer Science in 2010 from the Technische Universität Braunschweig. He is currently studying for his Master’s degree in Computer Science and writing his MSc thesis on “Model-Based Testing of Applications with Graphical User Interfaces” in cooperation with BREDEX GmbH.

Johannes Bürdek

Johannes Bürdek is an employee at BREDEX GmbH and Master’s student in Computer Science at the Technische Universität Braunschweig. The topic for his Master’s thesis is the generation of activity diagrams from test cases for graphical user interfaces.

eDeltaMBT - Delta oriented Model-based Software Product Line Testing

Testing variant-rich software systems, e.g., software product lines (SPLs), is very challenging. Due to the high number of possible software product variants, for instance in modern automobiles, testing an SPL product by product is in general infeasible. The development of efficient SPL testing approaches with comprehensive tool support to counteract the increasing testing effort is still an open field of research.

In this presentation, we introduce a tool chain for efficient model-based SPL conformance testing. It consists of the RCP eDeltaMBT for modeling and assembling reusable delta-oriented state machine test models, and of IBM Rational Rhapsody for generating test cases based on the MC/DC coverage criterion. Built on the Eclipse Modeling Framework, the RCP offers FODA feature modeling, a guided specification of product configurations, and SPL test modeling. The Rhapsody Eclipse plug-in enables the import of the assembled product test models, and the Rhapsody add-on ATG is used for automated test case generation. Our tool chain was evaluated in a case study from the automotive domain, and we obtained positive results showing a reduction of the testing effort.
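For readers unfamiliar with the MC/DC criterion mentioned above (modified condition/decision coverage): it requires that each atomic condition be shown to independently affect the outcome of a decision. A small, hypothetical Java illustration for the decision `a && (b || c)`:

```java
// Hypothetical illustration of MC/DC (modified condition/decision coverage)
// for the decision a && (b || c). MC/DC requires, for each condition, a pair
// of test vectors that differ only in that condition and flip the outcome.
public class McdcDemo {
    static boolean decision(boolean a, boolean b, boolean c) {
        return a && (b || c);
    }

    public static void main(String[] args) {
        // Four vectors suffice for three conditions (the minimum is n + 1).
        System.out.println(decision(true, true, false));  // true;  paired with
        System.out.println(decision(false, true, false)); // false: shows a's effect
        System.out.println(decision(true, false, false)); // false: with vector 1,
                                                          // shows b's effect
        System.out.println(decision(true, false, true));  // true:  with vector 3,
                                                          // shows c's effect
    }
}
```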

Sascha Lity is a PhD student at the Technische Universität Braunschweig. He finished his Master of Science in 2011 and has worked as a research assistant at the Institute for Programming and Reactive Systems since 2012. His main research interests are software product lines, model-based testing and delta-oriented product line testing.
Malte Lochau studied Computer Science at the Technische Universität Braunschweig and received his Diploma in March 2007. He is currently a research assistant there at the Institute for Programming and Reactive Systems. His research interests are software product lines, model-based testing and formal semantics.
Ina Schaefer is a full professor of Software Engineering at the Technische Universität Braunschweig. Her research interests are software variability, formal methods and automotive software engineering.

Lightning talk 1: Testing for Tool Qualification in Eclipse

Currently, Eclipse is on the road toward tool qualification, to make it also usable for the development of safety-relevant applications.
We will enable (but not enforce) the development of qualifiable Eclipse plug-ins using a qualification model that extends the current Eclipse meta model. The model satisfies all DO-330 requirements and therefore contains different requirement and test elements as well as links between them. The coverage to be achieved by the testing depends on the tool's confidence need (in conformance with the safety standards), which can be determined automatically from the process-model part of the DO-330 model, in which the user describes his use cases of the plug-in.

In the talk we briefly present the roadmap and some testing aspects of the model, and show how coverage can be measured within Eclipse, how test specification documents can be generated, etc.

Dr. Oscar Slotosch studied computer science and did his PhD at the chair of Prof. Broy in Munich. He led the award-winning academic modeling tool AutoFOCUS. He is a founder of Validas AG and an expert in model-based testing, tool qualification and the avoidance of tool qualification.

Lightning talk 2: Mind-meld across teams, tools and time-zones with Eclipse Mylyn

Benjamin will explain why now is the time to embrace developer-centric, lightweight collaboration and social coding tools to increase velocity. For many Java developers, Mylyn has become the tool of choice for connecting team communication with coding. Mylyn is now a top-level Eclipse project backed by a thriving open source community of integrations connecting developers to more than 70 tools. This talk will highlight how Mylyn can bridge the gap between developers and testers, using tools such as HP ALM/ALI together with the best-of-breed tools of the developer world. Live demos will showcase how Mylyn’s task-focused interface integrates existing open source, enterprise and agile tools in the IDE, supports automatic knowledge capture, and helps unify distributed teams by reorganizing the IDE around collaborative tasks.

Benjamin Muskalla is a software developer at Tasktop Technologies in Karlsruhe, Germany. He is an active committer on Mylyn (Versions, Builds) and EGit, the Git integration for Eclipse. Ben also contributes to several other Eclipse projects including RAP, Platform UI and JDT. Ben has been deeply involved in the Eclipse community for more than six years and is a regular speaker and author on Eclipse-related topics. Ben is passionate about the quality of the Eclipse community and the transformational productivity gains that Eclipse and Mylyn enable.

Panel discussion

Jana Knopp (Hoffmann), Xing

Jana Knopp studied computer science at the department of Cooperative Studies of the Berlin School of Economics and Law. There she combined full-time classroom study with regular on-the-job training at SIV.AG, where she acquired deep technical insights into various programming languages. After receiving her Bachelor’s degree, Jana Knopp switched her focus from programming to quality assurance and started a Master’s degree in computer science at Bremerhaven University of Applied Sciences, where she graduated with honors.

Since 2011, Jana Knopp has been working on various projects as a Junior Quality Assurance Manager at XING AG. She started out in the Member Growth team before switching to the Jobs team where she is responsible for test management, test data creation, test execution, and automation of all product parts related to job searches and job management.


Twitter: @holule1

Dr. Frank Simon, SQS AG

Dr. Frank Simon (43) works for SQS Software Quality Systems AG, where he heads the research unit that looks ahead to new hypes, trends and related services around quality management and testing. He studied computer science and did his PhD in quality assurance of large IT systems. He has published several books and many articles and is involved in national and international research projects. He leads the working group "Software Development Processes and Tools" within BITKOM, is a member of the German Testing Board (GTB) and publishes regularly.

Michaela Greiler, TU Delft

Michaela Greiler is a PhD researcher at the Delft University of Technology in the Netherlands. Recently, she was a visiting researcher at the University of Victoria in Canada. Her research consists of the development of techniques and tools to support the maintainability of test code, with a strong focus on the comprehension of test code for plug-in based systems. She holds a Master's degree in Computer Science, has presented her work at international industry and research conferences and workshops (e.g. EclipseCon, ICSE, WCRE), and received the Google Anita Borg Memorial Scholarship in 2012.

Dr. Klaudia Dussa-Zieger (Moderator)

Dr. Klaudia Dussa-Zieger studied computer science at the University of Erlangen-Nuremberg. Following her master studies at the University of Maryland, USA, she gained her PhD in computer science at the University of Erlangen-Nuremberg in 1998. She worked for several years as a consultant for software testing, quality assurance and software process improvement. In 2006 she began working for Method Park Software AG. One of her interests is the practical implementation of relevant industrial standards. She is head of the DIN working group 043-01-07 AA „System and Software Engineering“. Klaudia Dussa-Zieger is an iNTACS™ Competent Assessor (ISO/IEC 15504) for Automotive SPICE and has been qualified since 2004 as an ISTQB Certified Tester Full Advanced Level; she is also a trainer for ISTQB CT Foundation and Advanced Level. At the ASQF e.V. she heads the expert group for software testing. Klaudia Dussa-Zieger also lectures at the University of Erlangen-Nuremberg on testing software systems, and she is co-author of the book “Software Engineering nach Automotive SPICE”.

Advance questions for panel

  1. Some people say that testing, as we know it, is dead, because a testing phase is something that only happens in waterfall projects at the very end. Do you agree with this statement?
  2. Most software projects are developed by teams of five or fewer developers. Have you ever seen a meaningful conventional (aka closed) testing phase in such a software project? Did the absence or presence of a closed testing phase have an impact on the success of the product?
  3. When an active community of users works with a product, they are likely to give feedback in the form of bug reports and enhancement requests. How do you suggest incorporating their feedback into the process for the further development of the product?
  4. The community is a great source of feedback. Nevertheless, it is advantageous to release as few bugs as possible. What do you recommend to strike the balance between community feedback and minimizing bugs in production?
  5. A frequent claim when talking about testing is “my developers don’t make any errors, so we can save money and time by not testing”. What is your view on this?
  6. The Eclipse community receives between three and four thousand new bug entries a month. Around half of them are non-duplicates. What is your opinion of these numbers? Can they tell us anything about the projects or the community?
  7. Community feedback needs people to oversee the state of the bug entries. How would you suggest planning time and resources to ensure that bug entries are well-documented, reproducible and non-duplicates?
  8. Is there a difference between a product and a project? Do open source projects have different quality standards? Should they have different quality standards?
  9. With shorter release cycles, existing functionality has to be retested more and more often. GUI test automation is worth most once a feature/workflow has settled in a product. How do you identify which features should be tested automatically and which still need to be retested manually?
  10. In internal projects, we have some control over who is performing the testing. The skillsets and background of external testers (i.e. users and community) are generally undefined and unknown. Do you see that as a benefit or a disadvantage? What sorts of skillsets and mindsets should we look for in our testers?
  11. How do you deal with test coverage in terms of traceability? What is missing in the current tool chains that would help to improve this area?
  12. What proportion of automated tests, scripted manual tests and exploratory testing is a good balance in your experience? If it is project-dependent, when do you decide what balance to use, when do you perform each type of test, and what factors can lead to the balance being changed?
  13. What do you think of the statement that agile testing focuses solely on white-box testing and forgets about black-box testing?
  14. Is it always advantageous to have testing as close to development as possible? Are there any cases where it is disadvantageous?

Back to Testing Day 2012 page