QA Navigator Software Lifecycle Manager Documentation


Table of Contents

Software Lifecycle Manager Suite

Introduction

The UID Server

Configuring the Test Workbench

QNConfig.xml

QNDefs.xml

QNProjDat.xml

QNLicense.xml

The Test Case Editor

Editing a Test Case

The Info Field

The Status Field and the Log Field

The Test Case Set Compiler

The Test Campaign Manager

The Campaign Status

Expanding Test Case Sets

Customizing Test Cases

Removing Multiple Test Cases

Refreshing the List

Test Case Execution Information

The Campaign Statistics Overview

The Test Case Executer

Executing Campaign Test Cases

The Test Case Execution Title

Reporting Errors

Test Case Execution Time

Canceling a Test Case Execution

The Report Manager

User Administration

The Fix Processor

Processing Fixes

The Statistics Generator

Statistics for Campaigns

Statistics for Reports

Common Features

Ending Applications

Context Menus

Read-Only Text Fields

Larger View of Text Fields

Saving a Document

Known Problems

Special Characters in Config Files


The Software Lifecycle Manager Suite

Introduction

This documentation is intended to provide you with helpful information rather than bore you with the obvious. It therefore explains only what cannot be understood at first sight.

The applications are designed to be as self-explanatory as possible and to be intuitive to use.

It is therefore recommended to read this documentation in its entirety to understand the features and effects that cannot be discovered just by examining the product.

The UID Server

The UID server is a central part of the Software Lifecycle Manager (SLM). All applications of the SLM need access to a UID server. The access for the Test Workbench is defined in the configuration file.

Only one UID server is required per installation. You may set up a local UID server if you need to work disconnected from the network.

The UID server reads the license information from the license file, which determines the number of users allowed.

Configuring the Test Workbench

To successfully use the tools of the QA Navigator suite, some settings have to be defined first. The configuration is stored in XML files that can be edited with any plain text editor. The configuration files already contain all required markup. Do not rename, remove or add any markup.

QNConfig.xml

QNConfig.xml contains definitions for the individual tester and his workstation.
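
As a rough illustration, a QNConfig.xml could look like the sketch below. The element names and values are invented; they only hint at the kind of per-tester settings mentioned in this documentation (tester name, UID server access, default statistics path) and are not the actual markup shipped with the product. Use the provided sample configuration file as your template.

    <!-- hypothetical sketch, not the shipped markup -->
    <QNConfig>
        <Tester>Jane Tester</Tester>
        <UIDServer host="uid.example.local" port="4711"/>
        <StatisticsPath>C:\QANavigator\Statistics</StatisticsPath>
    </QNConfig>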

QNDefs.xml

QNDefs.xml contains definitions of labels, tags and values for processing test cases. These definitions are typically valid throughout the entire company; they are therefore defined centrally and are not meant to be changed by the individual tester.

All definitions consist of a combination of a label and a value. While the labels may be changed freely, the values should not be changed, at least not during a test campaign. All changes must be kept consistent across all QA Navigator installations that work together.
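
For illustration only, a label/value definition in QNDefs.xml might have the following general shape; the element and attribute names are hypothetical and do not reflect the actual markup:

    <!-- hypothetical priority definitions: labels may be renamed, values must stay stable -->
    <Priority>
        <Entry label="High"   value="1"/>
        <Entry label="Medium" value="2"/>
        <Entry label="Low"    value="3"/>
    </Priority>

In this sketch, renaming the label "High" to "Critical" would be harmless, while changing the value "1" during a running test campaign would make already recorded test case data inconsistent.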

QNProjDat.xml

QNProjDat.xml contains information about the testing environment, such as the application to be tested, its functions, test sections, etc. This information typically remains valid for the duration of a project.

The QA Navigator suite comes with sample configuration files.

QNLicense.xml

QNLicense.xml contains the license information for running the UID server. It is only required on the machine where the UID server is running.

Do not change the file! Changing the license file breaks the license code, which causes the UID server to refuse to start.


The Test Case Editor

Editing a Test Case

For the editor's convenience, the fields and selection menus “Application”, “Section”, “Function Test”, “Priority”, “Author” and “Version” retain the values of the test case that was edited previously. The fields “Date” and “Time” are set to the current values, but may be changed if necessary.

The Info Field

The Test Case Info Field contains information about the author, the editing time and date, and the version of the test case. The author name is preset during start-up of the TC Editor with the value defined in the configuration file. Date and time are preset with the current date and time.

Any of these fields can be overwritten.

The Status Field and the Log Field

At the bottom of the Test Case Editor you will find the status field, which indicates the current status of the application. Normally it has a green background, combined with the word “OK”. While the application is processing, the field contains the text “Processing” on a yellow background, indicating that the application will not accept any input. In case of an error the text “Error” appears, colored red.

Next to the status field there is the processing log. It contains the messages of past processing results. To view the messages, open the drop-down field and scroll to the message you are looking for. To view a message in a more convenient fashion, select the message and use the context menu entry “Display Entry”.


The Test Case Set Compiler

The Test Case Set Compiler lets you compile an arbitrary number of test cases into a test case set without any restrictions. This makes compiling and managing campaigns easier.

To allow accurate counting of Test Cases and execution data in the statistics, the names of TC Sets must be unique, at least within a Campaign.

Please be aware that although the Test Case Set Compiler lets you nest TC sets within TC sets to an unlimited depth, the Campaign Manager only supports two levels so far.


The Test Campaign Manager

The Test Campaign Manager is the heart of the QA Navigator suite. Here all the preparation effort comes together and the execution of the testing is monitored.

The Campaign Status

The different campaign statuses control the usage of the campaign document. Only test cases of a campaign that is in the state "In Execution" can be selected for execution. While the test cases of a campaign are being executed, test case data and execution data are linked to each other. That is why test cases and test case sets can no longer be deleted from a campaign document once the campaign enters the state "In Execution". For the same reason the campaign can also no longer be reset to the state "Open".

To suspend the execution of a campaign's test cases for a while, the campaign can be set to the status "Ceased". It can later be set back to "In Execution".

The status "Closed" is there to set the campaign document into a archival state. After that the test can not be executed any more, the campaign can no longer be saved and the status cannot be changed anymore. This status is irreversible, so select it only when it is final.

Expanding Test Case Sets

The '+' sign in the first column of an item in the test case list indicates that the item is a test case set and can be expanded. To expand the item double-click the entry.

The Campaign Manager supports only two test case set levels, so far.

Customizing Test Cases

To customize a test campaign, single test cases can be deleted from the list of tests of a test case set. This action cannot be reversed.

Removing Multiple Test Cases

Several entries of the campaign list can be deleted at a time by multi-selecting the items. To avoid unintended actions, the removal is only performed for elements of the same type (test cases vs. test case sets), checked from top to bottom. Removal stops at the first element that is not of the same kind as the preceding elements.

Refreshing the List

The menu item "Refresh List" reloads the entire list of test cases and test case sets from the database, including information about the execution of the test cases.

Test case sets and test cases that have been added to the list but have not been saved yet are dropped from the list when executing this command.

Test Case Execution Information

The Campaign Manager provides information on the execution of the assembled test cases. If a test case has been executed, the field in the first column of that test case gets a green background.

If the execution of the test case has failed at least once, the field in the second column of that test case gets a red background.

The statuses of the test cases are accumulated into a status for the test case set. If all test cases of a test case set have been executed, the field in the first column of the test set gets a green background. If one or more test cases of the test case set failed, the field in the second column of the test set gets a red background.

If a test case has been executed more than once the displayed execution time is the calculated average execution time of all executions.

The Campaign Statistics Overview

Behind the list of Test Cases and TC Sets there is a tab that contains a brief overview of the statistics of the Campaign.

The Campaign Statistics Overview shows the essential data related to the execution of the Campaign.

For the execution time, two types of times are provided: the time originally estimated for execution and the actual execution time. The actual execution time is further differentiated into the total execution time, i.e. the accumulated times of all runs of a test within the campaign, and the average run time for that Test Case.

Based on the relation between the estimated time that has already been executed and the remaining estimated time, a forecast of the actual remaining time is calculated.
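
As a worked illustration only (the exact formula the Campaign Manager uses is not documented here, so the simple proportional scaling below is an assumption): if test cases with a combined estimate of 10 hours have already been executed and actually took 12 hours, and test cases with an estimate of 5 hours remain, the forecast of the actual remaining time would be 5 h × 12/10 = 6 h.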


The Test Case Executer

The Test Case Executer behaves differently from the other parts of the Test Workbench. When a test case execution document is saved, the display is reset to empty contents instead of re-displaying the document, to signal that the execution of the test is completed and a new test case can be executed.

Executing Campaign Test Cases

Selecting a campaign document through the context menu of the "Campaign" text field changes the behavior of the Test Case Executer. Selecting the menu "Execute TC" then opens a dialog that shows the test case sets and test cases of the selected campaign instead of the standard dialog, which shows the list of test cases in the database.

A campaign cannot be selected in the TC Executer unless it is in the state "In Execution".

The Test Case Execution Title

To make test case executions identifiable, the title of a test case execution document is constructed by combining the test campaign's name with the title of the test case.
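
For example, with a campaign named "Release 2.1 Regression" and a test case titled "Login with invalid password", the execution document title would combine the two, e.g. "Release 2.1 Regression - Login with invalid password". The campaign and test case names here are invented, and the exact separator used by the tool may differ.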

Reporting Errors

To report an error during the execution of a test, simply click the “Error” button. This invokes the Report Manager tool in the predefined browser and presets the fields of the report, as far as they can be derived from the information of the TC execution.

When the “Error” button is pushed, the timer is automatically stopped. Please do not forget to press the “Continue” button before you carry on testing, so that the time record of your testing activities remains accurate.

Test Case Execution Time

If the test case is closed as executed, the execution time is always rounded up to a minimum of 1.

Canceling a Test Case Execution

Closing a test case always results in writing an execution protocol document.

If you wish to stop the execution without writing an execution protocol, you can exit the application, load a new test case for execution, or close the window.


The Report Manager

The Report Manager is a web-based tool for creating and administrating error reports. It has its own extensive documentation.


User Administration

The SLM User Administration Tool is a web application that allows SLM administrators to create or change users and their information, and normal users to administer their own data.


The Fix Processor

Processing Fixes

When a report's status is changed to "Fixing", the Report Manager automatically generates a fix document and assigns it to the user that the report is assigned to.

For evaluation purposes the user can navigate to the report that initiated the fix by using the context menu of the report list.

The Fix Processor allows the user to add descriptive information about the fix and to append a list of sources changed for the fix. For this purpose the context menu of the fix list provides a menu entry "Add Source", which takes the user to a dialog where the changed source can be selected from the configured VSS database.

If the last checked-in version of the file was not checked in by the current user, a list of all versions of the file is displayed and the user has to select the right version.

The user can change the status of the fix to "Fixing", signaling that the fix is being processed, and (later) to "Ready for Test", which signals to the supervisor that the fix has been processed. This also ends the processing sequence of the fix.


The Statistics Generator

Statistics for Campaigns

To generate statistics for a Campaign, start the Statistics Generator tool, press the "Campaign" button and select the Campaign you want to generate the statistics for.

The next dialog asks for the target folder where the statistics are to be stored. The default path is retrieved from the configuration file.

The Statistics Generator extracts all data related to the execution of the Campaign and provides it in five different CSV (comma-separated values) files: one for the Campaign, one for the TC Sets, one for the Test Cases, one for the TC Executions and one for the Report data. The naming convention of the files is <Campaign Title>-Stats-<date>T<time>-<Type>.csv.
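
For example, statistics generated for a Campaign named "WebShop Release 2" on 13 May at 14:30 might be stored in files such as WebShop Release 2-Stats-2024-05-13T14-30-Campaign.csv and WebShop Release 2-Stats-2024-05-13T14-30-TCSets.csv. The campaign name, the timestamp format and the exact spelling of the <Type> part are illustrative assumptions; the tool determines the actual values.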

The data can then be compiled into reports according to your requirements. The QA Navigator installation provides a sample Excel workbook, found in the "Statistics" folder of the installation, that shows what such a report could look like. To use it, simply copy the data of your statistical extraction into the correspondingly named worksheets. There is a "Read Me" worksheet with further information.

The data provided for the TC Sets are cumulative, which means the numbers of all subordinate TC Sets of a TC Set are added into one sum. For the execution time, two types of times are provided: the time originally estimated for execution and the actual execution time. The actual execution time is further differentiated into the total execution time, i.e. the accumulated times of all runs of a test within the campaign, and the average run time for that Test Case.
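
For example, assuming a TC Set that directly contains 4 test cases and two subordinate TC Sets with 5 and 7 test cases respectively, its cumulative test case count would be reported as 16; execution times are summed up in the same way. The numbers are invented for illustration.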

Statistics for Reports

To generate statistics for all Reports, start the Statistics Generator and press the button for the desired reports.

A dialog asks for the target folder where the statistics are to be stored. The default path is retrieved from the configuration file.

The Statistics Generator extracts all data related to the reports, except for reports that are already closed. The data is stored in a CSV (comma-separated values) file. The naming convention of the files is PR-Stats-<date>T<time>-<Type>.csv.

The data can then be compiled into reports according to your requirements. The QA Navigator installation provides a sample Excel workbook, found in the "Statistics" folder of the installation, that shows what such a report could look like. To use it, simply copy the data of your statistical extraction into the correspondingly named worksheets. There is a "Read Me" worksheet with further information.


Common Features

Ending Applications

All applications may be ended at any time. The applications do not save any data without an explicit command. You will not be reminded to save changes made in the meantime.

If you are uncertain whether you have saved your changes, please refer to the log at the bottom of the window and look for a corresponding entry.

Context Menus

Some of the features of the applications are only accessible through context menus. Try right-clicking the applications' items to discover these features.

Read-Only Text Fields

To indicate read-only text fields, these fields have a light steel blue background color in all applications.

Larger View of Text Fields

If a multi-line text field is too small to comfortably read or edit the text, you can get a larger, resizable view of that field. Simply double-click the field and a window will pop up showing the content of the field. If the original field is editable, the pop-up window is also editable and changes made in that window are written back to the original field.

Saving a Document

When a document is saved by its editor, the display is reset and the newly written document is reloaded from the database. This way the correct saving of the document can be verified.


Known Problems

Special Characters in Config Files

Adding umlauts (ä, ö, ü, etc.) or other special characters to the configuration XML files causes the applications to crash. In some cases the resulting error report points in the wrong direction.

Special characters are not supported for the configuration files.