Eclipse Testing Day 2013 Talks
Revision as of 03:23, 1 August 2013
Contents
- 1 Talks
  - 1.1 Keynote: Flying sharks with Eclipse m2m
  - 1.2 Energy testing and optimization of mobile applications
  - 1.3 Testing software the crowdsourced way – How to enhance software quality by utilizing real people and real devices
  - 1.4 Quality assurance for mobile applications - case studies for GUI test automation
  - 1.5 Testing of mobile solutions: new wine in old skins?
  - 1.6 Mobile Cross-platform development and testing with Eclipse technologies
  - 1.7 Post-release monitoring of apps
- 2 Panel members
= Talks =
== Keynote: Flying sharks with Eclipse m2m ==
Did you ever keep a shark as a pet? Those of you who have already had the chance to live under the same roof with such an animal will know that they are very hard to tame. They are mainly driven by their instincts, and you need to ensure that they remember their lessons learned over and over again.
Using an electronic shark, we will demonstrate how m2m.eclipse.org can be used to control a shark remotely. A lunifera.org Vaadin web UI will send commands to the shark, which will execute them. Both single commands and complex maneuver commands can be processed.
To ensure that the shark remembers its previously "taught" lessons, we will use unit tests. They will send predefined commands to the shark. Using ultrasonic sensors, the unit tests obtain the shark's position in three-dimensional space and assert that the shark glides through the air – sorry, swims – smoothly.
The second unit test ensures that the hardware components themselves work properly. When a command has been executed at hardware level, the hardware sends an "execution done" response back to the unit test, so we are also able to verify the proper functioning of the hardware.
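The hardware-acknowledgement test described above might be sketched roughly as follows. The `SharkClient` class, its command names, and the response format are invented stand-ins for illustration only; the real setup communicates with the device via m2m.eclipse.org.

```java
// Hypothetical sketch of a hardware-acknowledgement unit test.
// SharkClient and its commands are illustrative stand-ins, not the real API.
import java.util.ArrayList;
import java.util.List;

public class SharkCommandTest {

    /** Minimal stand-in for the remote shark: returns an ack per command. */
    static class SharkClient {
        private final List<String> log = new ArrayList<>();

        String send(String command) {
            log.add(command);
            // A real client would wait for the device's response here.
            return "EXECUTION_DONE:" + command;
        }

        List<String> sentCommands() { return log; }
    }

    public static void main(String[] args) {
        SharkClient shark = new SharkClient();
        // Send a predefined maneuver and check each hardware acknowledgement.
        String[] maneuver = {"DIVE", "TURN_LEFT", "ASCEND"};
        for (String cmd : maneuver) {
            String response = shark.send(cmd);
            if (!response.equals("EXECUTION_DONE:" + cmd)) {
                throw new AssertionError("No acknowledgement for " + cmd);
            }
        }
        if (shark.sentCommands().size() != 3) {
            throw new AssertionError("Command log incomplete");
        }
        System.out.println("All commands acknowledged");
    }
}
```

The same assertion pattern works for the position checks: replace the acknowledgement comparison with a tolerance check on the coordinates reported by the ultrasonic sensors.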
== Energy testing and optimization of mobile applications ==
Energy consumption has become an important factor for user satisfaction, especially on mobile devices, where it is highly correlated with device uptime and thus usability. In this talk, we present a study of app store user comments showing that users notice negative energy behavior and rate the affected applications more negatively. We then introduce the JouleUnit tools, which enable energy testing and optimization of mobile applications. Profiling is supported both locally at the developer's desk and remotely via a cloud service that executes applications on real Android devices while profiling their energy consumption in parallel. Developers receive energy-consumption feedback for individual test cases and are thus able to identify and optimize the bottlenecks that negatively influence their applications' energy consumption. The talk includes both a theoretical presentation of JouleUnit and a live demo.
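The per-test-case energy feedback described above can be pictured with a small sketch. The `PowerMeter` interface, the `measure` helper, and the joule budget are assumptions made for illustration; they are not the actual JouleUnit API.

```java
// Illustrative sketch of an energy test: measure the joules a workload
// consumes and fail when a budget is exceeded. All names are assumptions,
// not the real JouleUnit interfaces.
public class EnergyTestSketch {

    /** Stand-in for a device power meter sampled by the profiler. */
    interface PowerMeter {
        double joulesConsumedSoFar();
    }

    /** Runs a workload and returns the energy it consumed, in joules. */
    static double measure(PowerMeter meter, Runnable workload) {
        double before = meter.joulesConsumedSoFar();
        workload.run();
        return meter.joulesConsumedSoFar() - before;
    }

    public static void main(String[] args) {
        // Fake meter for the sketch: each sample advances consumption by 0.5 J.
        PowerMeter fakeMeter = new PowerMeter() {
            double total = 0;
            public double joulesConsumedSoFar() { return total += 0.5; }
        };

        double joules = measure(fakeMeter, () -> { /* app code under test */ });

        // An energy test fails when the measured consumption exceeds a budget.
        double budgetJoules = 2.0;
        if (joules > budgetJoules) {
            throw new AssertionError("Energy budget exceeded: " + joules + " J");
        }
        System.out.println("Energy used: " + joules + " J");
    }
}
```

On real hardware the meter would be backed by the device's battery interface or an external measurement rig, sampled in parallel while the test case runs.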
== Testing software the crowdsourced way – How to enhance software quality by utilizing real people and real devices ==
Crowdsourced software testing is a relatively new approach to testing software. Besides supporting functional testing with access to the whole range of devices, it can be used to gain new insights from an outsider's viewpoint, especially a customer's or end user's viewpoint, which is crucial for the success of any application.
Crowdsourced software testing cannot replace established types of software testing, but it can add value to the quality of software.
• What do customers actually think about your augmented reality app?
• How well can your field sales force handle their new applications?
• What do your employees think about the new HR self-services?
• And does it run smoothly on every system out there?
The speaker will present examples and case studies on how to integrate crowdsourced software testing into different types of more or less agile development processes, as well as the value and insights it can generate for product managers, developers and marketing staff.
== Quality assurance for mobile applications - case studies for GUI test automation ==
Over twenty years’ experience in building and testing enterprise desktop applications has taught us the value of cross-platform development and testing (write once, run anywhere), as well as the importance of automated tests. With the increase in customer projects involving mobile technology, an important question for us was: how do platform-independence and test automation fit in?
This talk looks at two projects as case studies for these aspects – and shares what we learned. One application is a customer project that runs only on iOS, the other is an internal project that is developed for cross-platform use. Alongside the actual development of both projects, our aims were to find out:
- What is a sensible test strategy for mobile applications?
- How well can mobile applications be tested automatically?
- How do continuous integration and testing work for mobile projects?
- How realistic (or desirable) is cross-platform development and testing for mobile?
Over the course of the talk, we present the projects, our experiences, a short demonstration, the answers to our questions, and the new questions that arise from them.
== Testing of mobile solutions: new wine in old skins? ==
According to the ISTQB, testing comprises activities "to determine that products satisfy specified requirements and that they are fit for purpose". This technique-independent definition suggests an easy migration to mobile solutions. So what is new in testing mobile applications? At first glance, test execution does not really differ between a mobile and a desktop application.
However, mobile devices generate a new ecosystem, having strong impacts on the objects and requirements to be tested as well as the test process itself.
In this talk we present the 12 most important pitfalls in testing mobile applications:
- Pitfalls in test requirements: e.g., don't forget to test energy efficiency!
- Pitfalls in test objects: e.g., don't forget to test the network provider!
- Pitfalls in the test environment: e.g., don't ignore the need for a dedicated test environment with virtualized services to test interoperability!
- Pitfalls in the test process: e.g., don't create a separate test branch for every target device!
Both speakers come from BLUECARAT, a German SME for IT consulting and software development. BLUECARAT has a dedicated business unit for mobile business and has gathered extensive experience in developing, testing and establishing mobile solutions (e.g. for Deutsche Bahn). Today BLUECARAT plays an active role in BITKOM's "Mobile Business" task force, working with other partners to create cross-platform guidelines for high-quality mobile solutions.
Dr. Frank Simon
Dr. Marcus Iwanowski
== Mobile Cross-platform development and testing with Eclipse technologies ==
Tabris is a toolkit for cross-platform development of native mobile apps based on the Eclipse Remote Application Platform (RAP).
This talk is about testing Tabris apps with MonkeyTalk. MonkeyTalk is an Open Source toolkit for automating functional tests for native, mobile, and hybrid iOS and Android apps.
MonkeyTalk tests require the integration of the MonkeyTalk framework into the respective native parts of the applications. This allows record and playback testing, making it easy to get started.
In this session you will see the good, the bad and the ugly of MonkeyTalk. Based on our experience of testing Tabris applications, we will present some dos and don'ts for testing your own native apps.
To round up, we will discuss some further ideas on integration testing of native and Tabris apps.
== Post-release monitoring of apps ==
Device fragmentation presents a major challenge to mobile app development and testing. On Android alone there are thousands of different devices, each with different screen sizes, internal hardware, OS and ROM version, etc. As a consequence it has become virtually impossible to thoroughly test apps before release.
Users often avoid unstable apps, leaving bad reviews or uninstalling them. Many solutions to this problem have been proposed, from SaaS platforms using real devices to crowdtesting. However none of these approaches can cover the entire spectrum of devices and use cases in the real world.
Post-release monitoring of an app presents a potential solution. By monitoring an app during real-world operation, it becomes possible to detect and fix technical malfunctions before they seriously damage an app's reputation. We show how tools like the Developer Garden App Monitor can make large-scale post-release testing an important part of the application lifecycle.
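As an illustration of the client-side building block of such monitoring, the sketch below captures uncaught exceptions for later upload. The report format is an assumption for this example; a real monitor such as the Developer Garden App Monitor would also collect device metadata and batch the uploads.

```java
// Minimal sketch of client-side crash capture, the basic building block of
// post-release app monitoring. The report format is illustrative only.
import java.util.ArrayList;
import java.util.List;

public class CrashReporterSketch {
    // Reports queued for upload; a real monitor would persist and send them.
    static final List<String> pendingReports = new ArrayList<>();

    /** Installs a process-wide handler that records every uncaught exception. */
    static void install() {
        Thread.setDefaultUncaughtExceptionHandler((thread, error) -> {
            // A real monitor would also capture device model, OS version, etc.
            pendingReports.add(thread.getName() + ": " + error);
        });
    }

    public static void main(String[] args) throws InterruptedException {
        install();
        // Simulate a crash in a background thread of the running app.
        Thread worker = new Thread(() -> {
            throw new IllegalStateException("simulated crash");
        }, "worker-1");
        worker.start();
        worker.join();
        System.out.println("Captured reports: " + pendingReports.size());
        System.out.println(pendingReports.get(0));
    }
}
```

Aggregated across the installed base, such reports reveal device- and OS-specific failures that no pre-release test matrix could have covered.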
Richard Süselbeck, Senior Developer Evangelist, Deutsche Telekom AG
= Panel members =
Andre Jay Meissner is a passionate (tec) diving enthusiast, developer, entrepreneur and currently a BDM/DevRel Web & Mobile at Adobe. He focuses on web standards and multi-platform development. Jay is the founder of LabUp! (http://lab-up.org), a project to help establish Open Device Labs, and he runs OpenDeviceLab.com and the Berlin Desknots.
Marcus Schauber worked as a developer and a consultant before moving to his current role at DB Systel, where he is a Central Process Manager for testing.