
AdaCAST

A White Paper
by
Vector Engineering
November 1993


Introduction

This white paper introduces a new software tool called AdaCAST, which focuses on unit and regression testing for Ada language programs. Background information on software testing, and on unit and regression testing in particular, is included for completeness.

A Missing Component to the Ada Toolset

AdaCAST represents a significant advance in the state of the art by automating the development of the application-specific test harnesses required to unit and regression test Ada language programs. We regard a tool of this type as the missing component in the expanding market of Ada language productivity tools. AdaCAST will help transform the software development process in three distinct ways. First, it will help contain the spiraling costs of developing Ada language systems by reducing the man-hours spent on unit and regression testing, a process that historically represents the largest single expense in the development of large software systems. Second, it will ensure higher quality Ada software products by exposing flaws earlier in the software life-cycle, when those flaws are more easily and inexpensively resolved. Third, it will automate the labor-intensive and repetitive exercise of developing test harness software.

Ada Programming Language Basics

The Ada programming language is designed to allow for the modular development of large software programs. The basic building block of a large Ada program is the package. A common implementation of the package concept groups a collection of data definitions, data objects, operators, and subprograms that together provide one functional portion of a complete software program. An Ada package has two parts: the specification and the body. The package specification contains the well-defined interface that users of the package need in order to invoke its processing. This interface takes the form of data type definitions and subprogram definitions. The actual implementation details of both the data and the processing are hidden in the package body. The data and procedures contained in the package body cannot be accessed by other software components in the program. This allows a great deal of freedom in the development of large software programs, since once the details of the interface between different software components are determined, the actual implementation can be carried out autonomously. The concept of partitioning software components into visible and invisible sub-components is called data hiding. It is this facility of the Ada language, more than any other, that makes the unit testing of Ada software difficult.
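To make this concrete, consider a minimal sketch (the Counter package is our own illustration, not taken from any particular system):

   package Counter is
      procedure Increment;
      function Current return Natural;
   end Counter;

   package body Counter is
      Count : Natural := 0;   -- hidden state: not visible outside the body

      procedure Increment is
      begin
         Count := Count + 1;
      end Increment;

      function Current return Natural is
      begin
         return Count;
      end Current;
   end Counter;

A test driver can call Increment and Current through the specification, but it has no way to read or set Count directly. That hidden state is precisely what makes thorough unit testing of package bodies difficult.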

Because it is not possible to access the data structures and subprograms contained in a package body, many programmers use a technique called instrumentation to test Ada software. Instrumentation is an intrusive technique that has often been used to test software components written in earlier generation languages. It involves the insertion of additional lines of code into the Unit Under Test (UUT) to record data as the software executes. The flaw in this approach is that the performance of the UUT is affected by the additional lines of code, and the tested software cannot be delivered into a configuration control system until the instrumentation is removed. Additionally, regression testing is not easily accomplished because the software would have to be re-instrumented each time to recreate the test cases. For embedded and real-time systems, timing constraints must be verified; to assess time-critical software, it is essential to collect data in as unobtrusive a manner as possible [SEI 92].
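For illustration, instrumenting the Counter body sketched above might look like the following; the Text_IO calls are the inserted probes that alter timing and must be stripped out before delivery:

   with Text_IO;
   package body Counter is
      Count : Natural := 0;

      procedure Increment is
      begin
         Text_IO.Put_Line ("Increment entered");                 -- inserted probe
         Count := Count + 1;
         Text_IO.Put_Line ("Count =" & Natural'Image (Count));   -- inserted probe
      end Increment;

      function Current return Natural is
      begin
         return Count;
      end Current;
   end Counter;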

Unit Test Methodologies

Unit testing is the process by which individual software components, sometimes as small as individual subprograms, are tested in a stand-alone environment. Unit testing these components can be broken down into three major categories [SEI 92]:

1) Specification-Oriented

2) Implementation-Oriented

3) Error-Oriented

Specification-Oriented Testing seeks to show that every requirement is addressed by the software. An unimplemented requirement may be reflected in a missing path or missing code in the software. Specification-oriented testing assumes a functional view of the software and is sometimes called functional or black-box testing [Howden 86].

Implementation-Oriented Testing derives its test data selection criteria from the implementation itself [Howden 75]. The goal is to ensure that various computational characteristics of the software are adequately covered, in the hope that test data satisfying these criteria have a higher probability of discovering faults. Each execution of a program exercises a particular path. Hence, implementation-oriented testing focuses on the following questions: What computational characteristics are desirable to achieve? What paths through this program achieve these characteristics? What test data will execute those paths? What are the computational characteristics of the set of paths executed by a given test?

Implementation-oriented testing addresses the fact that only the program text reveals the detailed decisions of the programmer. For the sake of efficiency, a programmer might choose to implement a special case that appears nowhere in the specification. The corresponding code will be tested only by chance under specification-oriented testing, whereas a structural coverage measure such as statement coverage will indicate the need for test data covering this case.
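As a hypothetical example, suppose a specification reads only "return the sum of the integers from Low to High." An implementor might still short-circuit the empty range:

   function Sum_Range (Low, High : Integer) return Integer is
      Total : Integer := 0;
   begin
      if Low > High then   -- special case chosen by the programmer;
         return 0;         -- it appears nowhere in the specification
      end if;
      for I in Low .. High loop
         Total := Total + I;
      end loop;
      return Total;
   end Sum_Range;

Test data drawn from the specification alone might never exercise the early return; a statement coverage measure would flag it and demand a test case with Low greater than High.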

Error-Oriented Testing is necessitated by the potential presence of errors introduced during the programming process. Techniques that focus on assessing the presence or absence of such errors are called error-oriented.

Our experience tells us that specification-oriented testing, and more specifically input domain testing, which consists of extremal and midrange test techniques (those that cover the extremes and interiors of the input domains) [Myers 79], is the most widely used. Nevertheless, the unit testing methodologies described above share a single common denominator: they all require an application-specific test environment from which all tests, regardless of method, can be executed. Whichever method of testing is used, AdaCAST provides this application-specific environment and the components necessary for the testing to be carried out.
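For a subprogram with a bounded input domain, extremal and midrange selection reduces to choosing the domain endpoints and an interior point. A minimal sketch (the Grading package is our own hypothetical example):

   package Grading is
      subtype Percent is Integer range 0 .. 100;
      function Grade (Score : Percent) return Character;
   end Grading;

   -- Input domain test cases for Grade:
   --   extremal:  Score = 0 and Score = 100  (boundaries of the domain)
   --   midrange:  Score = 50                 (interior of the domain)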

Unit Test Terminology

Test Driver (DRIVER)
A driver is a software component, created by the test engineer, that contains calls to the visible subprograms in the UUT. Generally this component will contain multiple calls to the UUT, using a different set of input data for each call. The driver will normally record output data to a file for use in examining the results of each test run and determining the performance of the UUT. A driver can be as complex or as simple as the project standards require. Some engineers develop elaborate drivers that produce detailed reports, complete with formatted output data for each test run; this makes the review of test results during software code reviews quite painless. Others provide listings of raw data dumps that require a detailed understanding of the UUT processing to comprehend.
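A minimal driver for the hypothetical Grading package sketched earlier might look like this; the report format is our own invention:

   with Text_IO, Grading;
   procedure Grade_Driver is
      procedure Run_Case (Score : in Grading.Percent) is
      begin
         -- invoke the UUT and record the result for later review
         Text_IO.Put_Line ("Grade (" & Integer'Image (Score) & " ) = " &
                           Grading.Grade (Score));
      end Run_Case;
   begin
      Run_Case (0);     -- extremal: lower bound of the input domain
      Run_Case (50);    -- midrange: interior of the input domain
      Run_Case (100);   -- extremal: upper bound of the input domain
   end Grade_Driver;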

Software Component Stub
A stub is a software component body fragment used during unit test to take the place of a component that the UUT depends on to perform its processing. Stubs are generally used when a tested version of the dependent component is not available, or when the variability that another new software component would introduce into the test environment is undesirable. The processing coded into the stub may vary depending on the approach used by the test engineer. For example, the stub may record the values of parameters passed to it in the course of the execution of the test.
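Continuing the hypothetical example, suppose the UUT depends on a Sensor package that is not yet available. A stub body might record the parameter it receives and return a canned value:

   package Sensor is
      function Read (Channel : in Integer) return Float;
   end Sensor;

   with Text_IO;
   package body Sensor is              -- stub: stands in for the real body
      function Read (Channel : in Integer) return Float is
      begin
         -- capture the input parameter for later examination
         Text_IO.Put_Line ("Sensor.Read called, Channel =" &
                           Integer'Image (Channel));
         return 72.5;                  -- canned value chosen by the tester
      end Read;
   end Sensor;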

Test Bed
Also referred to as a test harness, a test bed is the combination of all the STUBS, the UUT, and the DRIVER into a single executable program that allows all UUT functionality to be stimulated.

This test bed generally provides the following capabilities:

• Invocation of Visible Subprograms
• Manipulation of UUT Inputs
• Capture of UUT Outputs
• Generation of Test Reports
• Capture of STUB Input Parameters
• Manipulation of STUB Output Data
• Testing of Exceptions (see the sketch following this list)
• Comparison of Regression Data
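As one illustration of the exception-testing capability listed above, a test bed fragment can deliberately pass the hypothetical Grade function an out-of-range score and verify that the expected exception is raised:

   with Text_IO, Grading;
   procedure Exception_Case is
      Bad    : Integer := 101;         -- outside the Percent range 0 .. 100
      Result : Character;
   begin
      Result := Grading.Grade (Bad);   -- expected to raise Constraint_Error
      Text_IO.Put_Line ("FAILED: no exception was raised");
   exception
      when Constraint_Error =>
         Text_IO.Put_Line ("PASSED: Constraint_Error raised as expected");
   end Exception_Case;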

Problems Associated With Unit Test

It has been our experience that the design and coding phases of the software development process are efficient when using Ada; however, most projects become bogged down during the unit test phase. In addition to the lengthy test period, the unit test drivers and procedures developed are often suited only to a single version of the software being tested. Test procedures require extensive revision before they can be used to regression test later versions of software components.

Historically, unit testing is performed at the end of the official coding phase. After unit testing is complete, the software goes through many iterations as the design and requirements are refined, and as a result the final product can bear little resemblance to the software components that underwent initial unit testing. Additionally, it is likely that changes implemented to solve one group of problems introduce other problems that may remain hidden through integration and product release.

There are many reasons that unit testing of Ada software is not efficient. Some of the problems are applicable to all programming languages, and some are unique to the Ada language. The following is a list of these problems:

(1) Schedule and Staff Constraints

Rarely is time allocated within a program schedule to run unit test procedures as a means of regression testing software as the requirements and design evolve. Enough knowledgeable engineers are rarely available to perform initial unit testing or regression testing because of the schedule pressure that most large development programs are under. Building an environment with which to test software units is not a simple task: knowledge of the entire surrounding environment is necessary before a test can be written, and inexperienced engineers can spend much time spinning their wheels building such environments.

(2) Test Procedure Maintenance Problems

It is a tedious process to update the software test drivers and component stubs used to perform unit testing. Most often, the tests were developed months earlier by engineers other than those now tasked with re-unit testing or regression testing the updated software. Because they are often not deliverable software components, test drivers and component stubs are seldom well documented, making reuse by anyone other than the original author virtually impossible. Most often, the old test environment is discarded and a completely new unit test environment is constructed to test each iteration of the software.

(3) Ada Specific Problems

Ada is inherently difficult to unit test using traditional techniques. Implementation details of a software component that must be verified during unit test are hidden from other program components, including test drivers.

(4) Lack of Detailed Testing Guidance

Unit testing methods are almost always left to the discretion of the engineer. This results in as many flavors of unit test approaches and output data formats as there are engineers on the project. Indeed, the GAO report for the AN/BSY-2 Combat System program noted this as a fundamental obstacle in the development of quality software components (GAO/IMTEC-91-30, "BSY-2 Development Risks and Production Schedule").

AdaCAST Features

Although it is widely recognized by software professionals that a lack of quality unit testing results in a lack of quality in the final product, no commercially available software tools currently exist that directly address the problems identified above. It is this gap that Vector Engineering has filled with AdaCAST. This non-intrusive software test tool automatically generates a complete test environment for a unit to be tested: the test drivers and component stubs needed to exercise the unit under test, a test case interface to assist in the creation of test cases, and a full range of automatically generated reports for comparing actual test results against expected test results.

AdaCAST automates a proven unit testing methodology developed by Vector Engineering. This methodology has been used successfully on all company Ada language software development projects since 1990. AdaCAST will make the unit test process integral to the entire software development life-cycle: each time a software component is modified, it can be quickly retested (using previously developed test cases) and the results compared against those for the previous version of the component. This regression testing ensures that each change made to the software has only the intended effects on its functionality.

Problems Solved

Most importantly, this tool will solve each of the problems identified earlier in this paper. AdaCAST will:

(1) Ease Schedule and Staff Pressures

We project that the time dedicated to unit testing will be reduced by at least fifty percent on large projects. This will allow for quicker off-loading of personnel during the unit test phase and ensure more productive use of limited staff engineers.

Automating the unit test process frees staff to write the application software that contributes to the profits of the company, rather than the disposable test software that can drain a company's budget.

The AdaCAST tool will automate the building of the complex environment of stubs and drivers necessary to exercise a particular unit. This automation will save inexperienced and experienced developers alike the time required to research how a particular unit fits into the system.

(2) Eliminate Test Maintenance

Because the entire test environment is generated automatically, no maintenance of test software is required. All of the environment components are automatically regenerated for each revision of the software component being tested, and the test cases and results are automatically retained for use in regression testing and analysis.

(3) Ada is Well Suited for Automated Testing

There are several attributes of the Ada language that make it well suited to a tool of this type. The strong data typing enforced by the language guarantees that the interfaces between components are well defined and that data values have finite ranges. Even numeric types not explicitly bounded by the user have an implementation-defined range that is accessible through the syntax of the language (e.g., Integer'Last). The exception processing provided by the language for handling erroneous results during program execution makes it easy to develop software that is fault tolerant. In the case of unit test drivers this is important because the software components being tested are not mature, and it is very likely that they contain catastrophic errors.
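Both attributes are visible in a driver skeleton such as the following sketch: the type bounds come from the language itself, and a catch-all handler keeps the test run alive when the immature UUT fails:

   with Text_IO;
   procedure Robust_Driver is
   begin
      -- the range of the type is available from the language itself,
      -- not from documentation that may be out of date
      Text_IO.Put_Line ("Integer spans" & Integer'Image (Integer'First) &
                        " .." & Integer'Image (Integer'Last));
      --  calls to the UUT would go here
   exception
      when others =>
         -- an immature UUT may fail catastrophically; record the failure
         -- and allow the test run to terminate cleanly
         Text_IO.Put_Line ("UUT raised an unexpected exception");
   end Robust_Driver;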

(4) Replace Detailed Testing Guidance

AdaCAST will reduce software unit test guidance to a single instruction: "use AdaCAST to test each version of the software prior to delivery into configuration control." AdaCAST will give each organization and individual developer the ability to perform unit testing and generate unit test reports in a common format.

In Conclusion

We predict that tools such as AdaCAST will become as indispensable to the software engineer as compilers are today. Imagine if the engineer had to manually check a software component for compliance with the syntax rules of the language, or if symbolic debuggers did not exist and every engineer had to write test software to extract and display each data object to be examined while tracking down a bug. AdaCAST, like compilers and symbolic debuggers, will become an integral part of the software development process and, in a very short time, an indispensable part of any Ada development toolset. One of the prime design goals for this tool was to provide an easy-to-use, automated implementation of test concepts with which software engineers are already familiar. The result is a true productivity tool that can be used with minimal training.

Bibliography

SEI 92

Morell, Larry J., and Lionel E. Deimel. "Unit Testing and Analysis." Software Engineering Institute Curriculum Module SEI-CM-9-2.0, June 1992.

Howden 75

Howden, William E. "Methodology for the Generation of Program Test Data." IEEE Trans. Computers C-24, 5 (May 1975), 554-560.

Howden 86

Howden, William E. "A Functional Approach to Program Testing and Analysis." IEEE Trans. Software Eng. SE-12, 10 (Oct. 1986), 997-1005.

Myers 79

Myers, Glenford J. The Art of Software Testing. New York: John Wiley, 1979.

For More Information Contact:

Vector Software, Inc. (Formerly Vector Engineering)
1130 Ten Rod Road, E-307
North Kingstown, R.I. 02852
Phone: 401-295-5855
Fax: 401-295-5856
e-mail: info@vectors.com