Ada Compiler Evaluation System

User's Guide

for

Version 2.1

FINAL

Contract Number F33600-92-D-0125

CDRL A0014

Prepared for:

High Order Language Control Facility

Business Software Support Branch

88 CG/SCTL

Wright-Patterson AFB OH 45433-5707

Prepared by:

CTA INCORPORATED

5100 Springfield Pike, Suite 100

Dayton, OH 45431

Availability of the Ada Compiler Evaluation System

The ACES software and documentation are available by anonymous FTP from the host "sw-eng.falls-church.va.us" in the directory "public/AdaIC/testing/aces/v2.1" and from other Ada-related hosts. Document files are included in PostScript format and as ASCII text. The ACES files are also accessible via the World-Wide Web. The home page URL is "http://sw-eng.falls-church.va.us/AdaIC/testing/aces/".

For further information about the ACES, contact the High Order Language Control Facility. As of 1 March 1996, the appropriate contact is:

Mr. Brian Andrews

88 CG/SCTL

3810 Communications, Suite 1

Wright-Patterson AFB, OH 45433-5707

(513) 255-4472

E-mail: andrewbp@email.wpafb.af.mil

ABSTRACT

This document is the User's Guide for the Ada Compiler Evaluation System (ACES) contract. The purpose of this document is to guide an ACES user in running the ACES benchmark test suite and supporting tools.


LIST OF APPENDICES

APPENDIX A - ACES COMMANDS AND THEIR SYSTEM TRANSLATIONS

APPENDIX B - PRETEST REPORT FORM

APPENDIX C - SYMBOLIC DEBUGGER ASSESSOR README FILE

APPENDIX D - DIAGNOSTIC ASSESSOR README FILE

APPENDIX E - PROGRAM LIBRARY ASSESSOR README FILE

APPENDIX F - CAPACITY ASSESSOR README FILE

APPENDIX G - TROUBLE SHOOTING GUIDE


1. APPLICABLE DOCUMENTS

The following documents are referenced in this User's Guide.

1.1 Government Documents

ANSI/MIL-STD-1815A Reference Manual for the Ada Programming Language (LRM)

ISO/IEC 8652 (1995) Programming Language Ada, Language and Standard Libraries (RM 95)

1.2 Non-Government Documents

Readers Guide for Ada Compiler Evaluation System (ACES), Version 2.1

High Order Language Control Facility

Business Software Support Branch

88 CG/SCTL

Wright-Patterson AFB OH

Version Description Document (VDD) for

Ada Compiler Evaluation System (ACES), Version 2.1

High Order Language Control Facility

Business Software Support Branch

88 CG/SCTL

Wright-Patterson AFB OH

Primer for Ada Compiler Evaluation System (ACES), Version 2.1

High Order Language Control Facility

Business Software Support Branch

88 CG/SCTL

Wright-Patterson AFB OH

"Improving a Poor Random Number Generator," C. Bays and

S. D. Durham, ACM Transactions on Mathematical Software,

Volume 2, Number 1, March 1976.

Ada Evaluation System AES/1

User Introduction to the Ada Evaluation System, Release 2, Version 1, Issue 1, Crown Copyright, 1990.

Ada Evaluation System AES/2

Reference Manual for the Ada Evaluation Compiler Tests, Release 2, Version 1, Issue 1, Crown Copyright, 1990.

Ada Evaluation System AES/3

System User Manual, Parts 0 and 1, Introduction and General Information, Release 1, Version 1, Issue 2, Crown Copyright, 1990.

Introduction to the Theory of Statistics, A. Mood and F. Graybill, McGraw-Hill, 1963.

"Proposed Standard for a Generic Package of Elementary Functions for Ada," Ada Letters - A Special Edition from SIGAda, Volume XI, Number 7, Fall 1991.

Software Manual for the Elementary Functions, W. Cody, Jr., and W. Waite, Prentice-Hall Series in Computational Mathematics, Prentice-Hall, Inc., 1980.

"The Need for an Industry Standard of

Accuracy for Elementary-Function Programs," C. Black, R. Burton, and T. Miller, ACM Transactions on Mathematical Software,

Volume 10, Number 4, December 1984.

2. OVERVIEW OF THE ACES

ACES Version 2.1 is an update of ACES Version 2.0. The first version (1.0) of the ACES was a merged product between the Ada Compiler Evaluation Capability (ACEC) Version 3.1 (sponsored by the Ada Joint Programming Office (AJPO)) and the Ada Evaluation System (AES) Version 2.0 (sponsored by the Ministry of Defence of the United Kingdom).

The ACES is a set of tests, tools, and assessors to assist in the evaluation of an Ada compilation system. The test suite is designed to measure the performance of an Ada compilation system, emphasizing execution time, code size, and compilation speed, as well as the capabilities of its symbolic debugger, diagnostic messages, program library system, and system capacities. The ACES is contained in its entirety in the distribution files. The Primer, the User's and Reader's Guides and the Version Description Document are included in PostScript and ASCII formats. Fundamental instructions on the operation of the ACES are found in the Primer. Content of the ACES is listed file by file in the VDD. Instructions on how to use the ACES are contained in this User's Guide. Interpretation of the ACES and definitions of the purpose of tests and the analysis of test results are described in the Reader's Guide.

Users who want to quickly estimate execution speed should use the ACES Quick-Look facility. This subset of the ACES performance tests and support software produces data similar to that produced by the PIWG test suite. Typical time for downloading and executing the Quick-Look tests is less than one day. The Quick-Look facility is discussed in detail in Section 8 of the ACES Primer.

Figure 2-1 illustrates the paths through the ACES evaluation process. The first step is performing the Pretest activity. (The Pretest automation tool, ZP_SETUP, is useful here.) The results of the Pretest are recorded on the Pretest Report.

If the user wishes to run the performance tests, then the next step is the use of the Harness program (compiled during the Pretest activity). This program generates command scripts for compiling and executing the performance tests. The user then executes these scripts and captures the results in log files. The Harness program reads these log files and gives execution status information so that the user can choose to re-run selected tests or go on to other tests. This process is iterative.

When the user has run as many performance tests as desired, he/she runs the Analysis Menu program, first selecting the Condense tool. Condense reads the log files and produces the Analysis Database files. The user then selects either Comparative Analysis (in which case Analysis Database files for other systems must be available) or Single-System Analysis. These Analysis programs produce extensive reports, as directed by the user.

The user may elect to run the Assessors before, after, or independently of the performance tests, provided that at least part of the Pretest has been completed. The Assessors may be run in any order. The results from each are recorded on the appropriate Assessor Report form.

The following documents provide the user with the information necessary to set up the ACES, execute the ACES, and interpret the results generated by the ACES. The user's level of expertise with the ACES and the information needed will guide the user as to which document to reference.

Primer for ACES: All new users of the ACES are first directed to this document which provides an overview of the system. More importantly, it provides a quick start-up guide. This document provides the basic instructions for setting up the system, running the system, and interpreting the results.

User's Guide for ACES: Advanced users of the ACES and users who run into problems while working with the Primer are referred to this document to help in resolving problems and to gain more insight into the details of how the ACES is organized.

Reader's Guide for ACES: Users are referred to this document when attempting to interpret test results or attempting to find out the purposes of the ACES tests.

Version Description Document for ACES: Users are referred to this document when it is necessary to find out the components of a test. This document provides a variety of cross-reference tables that describe the contents of the ACES.

Figure 2-1 Overview of the ACES Evaluation Process

2.1 The Support Software

The Support Software Computer Software Configuration Item (CSCI) of the ACES consists of the automated support tools. It contains several separate tools, including:

1. Quick-Look - A set of programs and tools for obtaining execution speed measurements using a subset of the ACES performance tests.

2. Pretest - A set of programs and procedures to prepare for execution of the performance tests and analysis tools.

3. Include - A tool to perform text inclusion into an Ada source text file. It will assist in adapting programs to particular targets.

4. Harness - A program that provides an interactive interface for selecting performance tests, tracking their status, and generating command files which will compile, link, and run the selected tests.

5. Analysis Menu - An interactive interface for calling any of the analysis programs.

6. Condense - A tool to extract the timing data, the code expansion data, compile and link speed data, and certain ancillary data from the output produced by compiling and executing the Operational Software and to write this information in a format usable by the Comparative Analysis program and by the Single System Analysis program.

7. Comparative Analysis (CA) - A tool to statistically compare results of running the performance tests on various systems.

8. Single System Analysis (SSA) - A tool to analyze the performance results from a single system.

2.2 The Operational Software

The Operational Software CSCI of the ACES consists of the following items:

1. Test Suite - A set of performance tests to assess execution time, code expansion size, and compilation and link time.

2. Symbolic Debugger Assessor - A set of programs and procedures to assess symbolic debuggers.

3. Diagnostic Assessor - A set of programs and procedures to assess the diagnostic message quality.

4. Library System Assessor - A set of programs and procedures to assess Ada program library systems.

5. System Capacity Assessor - A set of programs to assess Ada compilation and run-time capacity limits.

2.3 Version 2.1 Modification Highlights

As directed by the funding agency, Version 2.1 development focused on three issues:

* Increased testing of language features introduced by Ada 95;

* Increased ease of use through provision of

+ Default processing choices in the Quick-Look and Pretest activities,

+ Test selection according to performance areas of interest to the user, and

+ Default report choices in the Analysis activity; and

* Simplified inclusion of user-defined benchmark tests in the test selection and Analysis activities.

3. INTRODUCTION TO THE USER'S GUIDE

This Guide provides detailed instructions on using the ACES. It attempts to anticipate problems that may arise and to provide enough information to enable the user to resolve most problems. The Primer for ACES provides a much simpler view of the process. For most users, it is probably best to follow the Primer, using this User's Guide only when problems arise or when more detailed information is desired.

The User's Guide is organized into 11 informational sections and 7 appendices. The informational sections discuss the actual use of the ACES, while the appendices provide step-by-step instructions for the Assessors. In addition, a User Feedback section, a Notes section (containing a list of acronyms), and an Index are provided. We recommend that the user who is depending on the User's Guide read the section on the Pretest or selected Assessor before going to the step-by-step instructions in the relevant appendix.

Section 1, "Applicable Documents," describes the documents, other than the User's Guide itself, that are relevant to the ACES.

Section 2, "Overview of the ACES," gives a very high-level overview of the entire system and the evaluation process.

Section 3, "Introduction to the User's Guide," is the current section. It presents a high-level description of the contents of this document.

Section 4, "What You Need to Know Before Starting," outlines the prerequisites for ACES testing and describes the organization of the ACES.

Section 5, "Getting Started," describes the Pretest activity. It should be read before or in parallel with the "Pretest Readme File" appendix. This section describes the pretest activity when done manually. For automation support see the ACES Primer.

Section 6, "Running the Harness," describes the use of the Harness tool, which may be used to produce command scripts for running the performance tests and to track the run-time status of these tests.

Section 7, "Running the Performance Tests," discusses the actual testing process (not necessarily using the Harness tool).

Section 8, "Running the Assessors," discusses each of the four Assessors. This section should be read before or in parallel with the "Assessor Readme File" appendices.

Section 9, "Running the Analysis," discusses the use of the Analysis Menu, the Condense tool, the Comparative Analysis tool, and the Single System Analysis tool.

Section 10, "ACES User Feedback," provides contact information and forms for problem reports and change requests.

Section 11, "Notes," provides a list of acronyms and abbreviations used in the User's Guide.

Appendix A, "ACES Commands and their System Translations," describes the syntax and semantics of the language used in the template command script files. This appendix is most useful when a user must create her/his own command scripts, based on the distributed samples.

Appendix B, "Pretest Report Form," contains a copy of the file "zp_tmplt.txt", template for the Pretest report.

Appendix C, "Debugger Assessor Readme File," gives detailed instructions for running the Debugger Assessor. It should be read after or in parallel with the corresponding section of Section 8.

Appendix D, "Diagnostic Assessor Readme File," gives detailed instructions for running the Diagnostic Assessor. It should be read after or in parallel with the corresponding section of Section 8.

Appendix E, "Library Assessor Readme File," gives detailed instructions for running the Library Assessor. It should be read after or in parallel with the corresponding section of Section 8.

Appendix F, "Capacity Assessor Readme File," gives detailed instructions for running the Debugger Assessor. It should be read after or in parallel with the corresponding section of Section 8.

Appendix G, "Trouble Shooting Guide," provides descriptions of common problems, along with possible corrective actions.

4. WHAT YOU NEED TO KNOW BEFORE STARTING

This section provides the context for understanding the ACES and its use. It includes a discussion of the resources required and a description of the structure of the ACES.

4.1 Resources

In preparation for running the ACES, some resources must be acquired.

* A compilation system (host platform with compiler)

* A target system (if different from the host)

+ A host-targeted Ada compiler is necessary in this case, since some tools run on the host.

* A copy of the ACES software and documentation.

* An evaluation objective

+ At a minimum, it should be clear whether the objective is the comparison of two or more systems or the analysis of one particular system.

4.2 Level Of Effort Estimates

Resources necessary for the use of the ACES include an operable system with a compiler and a significant amount of disk space. How much space is required varies greatly between software implementations due to the usage characteristics of the host system, but the following information on disk space requirements can be used for planning purposes.

4.2.1 Disk Space Requirements

To run the ACES, disk space will be required to contain performance test problems, support tools, assessor programs, command files, readme files, and documentation present on the distribution tape. This totals over 35 megabytes. Space will also be required for the temporary files created by the zg_incld tool, the intermediate forms created by compiling files in the Ada program library, the executables created by linking test programs, the output files from running the test programs (log files for the performance tests and temporary files for some of the test problems which test the I/O facilities of a system), and both final and intermediate files from the analysis tools.

The sample command files and the command files generated by the Setup and Harness programs are set up to:

* Delete the temporary files created by zg_incld after they are compiled;

* Delete units from the program library when they will not be referenced again; and

* Delete the temporary files created by executed I/O test programs. Some of these are as large as five megabytes.

There are various options if disk space is limited. Users may decide to delete the executables for the performance tests after executing them to conserve disk space. However, because these executables may have to be run more than once (for example, when the verification code fails), and recompiling them might be a slow process on some systems, the Harness has a user option governing whether the generated command files delete executables.

4.2.2 Time Investment

In addition to the disk space guidelines, the following information on time allotment to execute the ACES product can also be used for preliminary planning.

* Preparation time

Preparation is accomplished during the Pretest activity. The time required depends on the user's familiarity with the operating system(s) and compiler being used and their similarity to any of the default systems provided. Two days should normally be sufficient for downloading, decompression of files, and completion of the Pretest activity.

* Time for compilation and execution of the performance tests

After the timing loop code has been included, the performance tests consist of approximately half a million lines of code. For example, it took approximately 90 hours to compile and execute all the performance tests for one compilation system on a MicroVAX II, including the rerunning of some tests. Testing may be MUCH slower for embedded targets due to downloading. The time varies greatly with the system and the compilation options selected. If errors or restrictions are found in the compilation system being evaluated, then the time for the testing process may increase. As a general guideline, a user should allow 1 to 3 weeks to complete the full set of performance tests and analyze the results.

* Total time

In addition to the preparation time and the compilation/execution of performance tests, the time to execute the four ACES assessors must be factored in to get the total time investment to execute the ACES product. One week should be allowed for each of the assessors on the average. Therefore, as a rough estimate, it could take one programmer about 8 weeks to thoroughly evaluate a compilation system using the ACES test suite and assessors. The amount of time varies with the experience of the programmer, the reliability of the system being tested, and the amount of free disk space. Due to the organization of the performance tests into groups/subgroups and separate assessors, it is possible to run subsets of the ACES in less time.

4.3 Organization Of ACES

The objective of this section is to describe how to start the ACES tool evaluation process.

4.3.1 Grouping of Files

All ACES Operational and Support files are divided into major categories by prime purpose. There are 21 execution-time performance test groups, four assessor groups, seven support groups, and the Quick-Look group. The names of all ACES source files and performance tests reflect the group to which they belong, so they can easily be identified and sorted.

The individual files in the Operational and Support files are prefixed by a standard block of comments, as shown in Figure 4-1.



-- Name              : This will identify the test problem or tool
-- Prime Purpose     : This states the prime goal of the problem
-- Optimization      : Applicable optimizations
-- Related tests     : List of similar and related tests
-- Author            : Identify the author
-- Reviewer          : Identify the reviewer of test
-- Date              : Date of original test problem
-- Source            : Citation of source of algorithms or test, if appropriate
-- Dependencies      : Mention use of system dependent features which may
--                     require adaptation or warn of non-portability
-- Other Information : Anything else relevant
-- Revisions         : Change history, if required
--    Author         : Identify the author
--    Reviewer       : Identify who reviewed the changes
--    Date           : Date of revision
--    Change         : Identify the change made


Figure 4-1 Comment Template

4.3.2 Packaging of Performance Tests

Each of the 21 major groups of performance test files in Figure 4-2 contains one or more subgroups of tests. This extra level makes it possible to do subgroup analysis. Most performance tests are in separate compilation units to isolate the potential effect on other test problems, in the event of a failure to compile a test problem. See the VDD Appendix B, "Test Problem to Source File Map", for subgroups and main programs that contain the tests.

The performance test groups are listed below.



* ap: Application group


* ar: Arithmetic group

* cl: Classical group

* do: Data Storage group

* dr: Data Structures group

* dt: Delays and Timing group

* xh: Exception Handling group

* gn: Generics group

* io: Input Output group

* in: Interfaces

* ms: Miscellaneous group

* oo: Object Oriented

* op: Optimizations

* po: Program Organization group

* pt: Protected Types

* sr: Storage Reclamation group

* st: Statements group

* su: Subprograms group

* sy: Systematic Compile Speed group

* tk: Tasking group


* ud: User Defined



Figure 4-2 Major Groups of Performance Tests

Command files that compile, link, and execute all the performance tests by group can be created using the Harness build command (see Section 6 "RUNNING THE HARNESS"). These command files can be concatenated together before execution for the more robust systems, or can be further broken down by self-contained subgroups to give the flexibility to execute only the selected subgroups.

* Dummy Versions - Before the "real" test is compiled, a "dummy" version of each problem which prints the problem name and a compile-time error code is compiled into the program library. There is a file for each of the subgroups containing "dummy" versions of the test problems for that subgroup. This file must be compiled before compiling the performance tests for that subgroup. Dummy files are created by the Harness.

* Common Packages - There are common packages at the subgroup level wherever needed. These packages must be compiled before compiling the performance tests for that subgroup.

* Main Programs - Test problems are grouped into main programs within subgroups and by compiler options. There is a default maximum of nine tests per main program. The packaging of multiple tests into programs eases the download task for embedded targets.

* Support Command Files - There are command files to:

+ Set the default Ada library.

+ Write a time stamp before and after a compile or link statement.

+ Copy an Ada source file from the source directory into the working directory.

+ Compile an Ada program without time stamps.

+ Compile an Ada program with time stamps.

+ Copy Timing Loop initialization parameters.

+ Include Timing Loop code into a test problem.

+ Delete files and library units.

These command files are created by the Setup program or (manually) by using their template files as guides. See Section 5 for more detail.

4.3.3 Naming Conventions

All source file names have eight characters or less and all suffixes have three characters or less. The first three characters (two-character group name abbreviation followed by an underscore) of any file name in the ACES test suite identify the group to which the file or test belongs, so files can easily be identified and sorted. The execution-time groups do not have any two-letter codes beginning with "y" (reserved for the assessor groups), "z" (reserved for the support groups), or "q" (reserved for the Quick-Look group). All file names are unique throughout the ACES Operational and Support CSCIs.

4.3.3.1 Execution-Time Performance Test Subgroups

All execution-time performance test groups are comprised of subgroups of tests. The subgroup name abbreviation is formed by the fourth and fifth characters in all the execution-time file and test names. This enables easy identification of those subsets of tests of particular interest and their associated support files.

4.3.3.1.1 File Names

The convention for the performance test file names is: a two-letter code for the group name, an underscore, a two-letter code for the subgroup name, two digits (representing the number of a file within that subgroup), and an underscore (indicating that this is an original release). The trailing underscore will be replaced with an "a" after the first revision. For example, under the Applications group and the Avionics subgroup, the first file has the name "ap_av01_.inc". The suffix ".inc" indicates that this is a file that must have the Timing Loop code "included" by Include before compilation.

* Dummy Versions - The convention for the dummy versions is: a two-letter group name code, an underscore, a two-letter subgroup name code, and the abbreviation, "dum". For example, the dummy file for the Avionics subgroup of the Applications group is named "ap_avdum.ada".

* Common Packages - The convention for the common packages is: a two-letter group name code, an underscore, a two-letter subgroup name code, and the abbreviation, "pkg". For example, the file containing the common packages for the Applications group and the Avionics subgroup is named "ap_avpkg.ada".

* Main Programs - The convention for the main programs is: a two-letter group name code, an underscore, a two-letter subgroup name code, an "m" (identifies the file as a main program), and then incremental numbering to identify the series of main programs within that subgroup. For example, the first main program for the Avionics subgroup of the Applications group is named "ap_avm01.ada".

* Command Files - The naming convention for the test suite support command files needed during the compilation/execution step is covered in Section 4.3.3.3 "Support Groups".

A list of all distributed files is contained in the VDD Appendix C.

4.3.3.1.2 Ada Unit Names

The maximum length of any performance test name is 28 characters. All test names begin with the two-letter code for the group name, an underscore, a two-letter code for the subgroup name, and then an underscore. The remaining characters reflect the test's prime purpose, if possible. For example, ap_av_arti_asum can be identified as a member of the Applications (ap) group and the Avionics (av) subgroup. Otherwise, an abbreviation of the subgroup name, plus an underscore and two digits, is used, for example, dr_ba_bool_arrays_01. This test can be identified as a member of the Data Structures (dr) group and the Boolean Arrays (ba) subgroup.

4.3.3.2 Assessor Groups

All two letter codes for the assessor group names begin with a "y" to distinguish them from the execution-time test group or the support group names.

* Debugger Assessor - The first three characters of a Debugger Assessor name are "yb_" and the next five characters identify the file's purpose, except for the test names, which are numbered incrementally. The sixth character of a Debugger Assessor file name is an underscore character, "_", or a "t"; a test containing tasking constructs has a "t". Where several files are required for one test, they are distinguished by an alphabetic character in the seventh position of the name. An example of a Debugger Assessor test name is:

yb_01_a.ada -- test 01, file a

yb_01ta.ada -- tasking version of test 01, file a

* Capacity Assessor - The first three characters of a Capacity Assessor name are "yc_" and the next five characters identify the file's purpose, except for the test names which are numbered incrementally. For example, "yc_serch.com" is a command file for the Capacity Assessor. The Capacity Assessor tests are divided into two subgroups with identifying characters in the fourth and fifth positions of the name as follows:

ct -- Compile-time tests

rt -- Run-time tests

An example of a Capacity Assessor compile-time test name is:

yc_ct01g.ada -- source generator for compile-time test 01

yc_ct01_.ada -- generated source code for compile-time test 01

* Diagnostic Assessor - The first three characters of a Diagnostic Assessor name are "yd_" and the next five characters identify the file's purpose, except for the test names, which are numbered incrementally. Where several files are required for one test, they are distinguished by an alphabetic character in the eighth position of the name. The file "yd_compl.com" is the Diagnostic Assessor's command file (VMS) to compile the tests. The Diagnostic Assessor tests are divided into four subgroups with identifying characters in the fourth and fifth positions of the name as follows:

cw -- Compiler Warnings

ce -- Compiler Errors

lt -- Link Time Errors

rt -- Run Time Errors

An example of a Diagnostic Assessor run-time test name is:

yd_rt01a.ada -- Run-time test 01, file a

* Program Library Assessor - The first three characters of a Library Assessor name are "yl_" and the next five characters identify the file's purpose, except for the test names. These contain the characters "ib" in the fourth and fifth positions, and then a number which increases incrementally in the sixth and seventh positions. Where several files are required for one test, they are distinguished by an alphabetic character in the eighth position of the name. The Library Assessor Summary Report Form file is "yl_tmplt.txt". An example of a Library Assessor test name is:

yl_ib14a.ada -- Library Assessor test 14, file a

4.3.3.3 Support Groups

All support group names begin with a "z". Here are examples from each of the seven major support groups:

* Analysis Group - The first three characters for all files in the Analysis group are "za_". Some of the files in the Analysis group are further broken down into subgroups, one for common files and four for files related to a specific analysis tool. The files in the tool subgroups are identified by the next two characters in the file name:

co -- Common

ca -- Comparative Analysis (CA)

cn -- Condense

mn -- Menus

sa -- Single System Analysis (SSA)

The last three characters in these file names identify the file's purpose. An example of an analysis file name is "za_saopt.ssa" which is a template file used by SSA in generating its Main Report.

* Command File Templates - The first three characters for the templates for low-level command scripts and associated files are "zc_". The next characters (up to five) identify the file's purpose. An example of a test suite command file template name is "zc_adaop.tpl" which invokes an Ada compiler with a compiler option of optimization.

* Documentation - The first three characters for the distributed documentation files are "zd_". The next characters (up to five) identify the file's purpose. An example of a documentation file name is "zd_readg.txt" which is the Reader's Guide delivered as an ASCII text file.

* Global and Timing Loop Files - The first three characters for the Global and Timing Loop files are "zg_". The next characters (up to five) identify the file's purpose. There are four possible suffixes for some of the files in this group based on which measurement technique is desired for testing. The three-character extension is composed of two parts: the two left-most characters for time measurement ("el" for elapsed time, "cp" for CPU time) and the right-most character for code size measurements ("g" for the "ZG_GETAD" function, "l" for the "label'ADDRESS" attribute). An example of a Global package name is "zg_glob3.elg".

* Harness Group - The first three characters for the Harness group are "zh_". The next characters (up to five) identify the file's purpose or are numbered incrementally. An example of a Harness file is "zh_ap.txt", which contains test and file-name data for the Application (ap) group.

* Math Group - The first three characters for the Math group are "zm_". The next characters (up to five) identify the file's purpose. An example of a file name in the Math group is "zm_math.ada".

* Pretest Group - The first three characters for the Pretest files are "zp_". The next characters (up to five) identify the file's purpose. An example of a Pretest file name is "zp_stp05.tpl" which is the template for the Setup Step 5 command script.

4.3.3.4 Quick-Look Group

The Quick-Look includes files from the support groups and the performance test groups. In the distribution, these shared files are duplicated in the Quick-Look subdirectory. Files that have special versions applicable to Quick-Look have ".ql" suffixes (e.g., "zg_glob3.ql"). Files that are unique to Quick-Look are identified by the first three characters ("ql_").

5. GETTING STARTED

This section discusses the Pretest activity, the Pretest report form, and user adaptations that may be necessary for running the performance tests.

NOTE: For a greatly simplified process producing limited performance data, see the discussion of the Quick-Look facility (Primer, Section 8).

5.1 Pretest

The ACES was designed to be portable, and the test suite should be ready to use with minimal system-specific adaptations. The Pretest allows the user to identify, perform, and test these adaptations in an organized, semi-automatic fashion. The Pretest process is described in detail in Section 3 of the ACES Primer. The current section provides discussions of possible problems, special actions that may be necessary for some systems, and examples of special units that may be required.

This section is organized into subsections, discussing special situations that may arise. For each such situation, the subsection title indicates both the problem area and the Pretest step in which it might arise.

5.1.1 Measuring Code Size (Running Setup; Pretest Step 2)

Depending on the system, there are two alternative methods available for measuring code size expansion. If the system under evaluation supports the label'ADDRESS attribute, then this measurement is straightforward to compute by taking the difference between two addresses. An ACES user should review documentation on the Ada compilation system to see if the attribute is supported. If it is not, an assembly language function can be written which returns the address of its caller, and this function can be used to bracket code whose size is to be measured.

The ACES uses the label'ADDRESS attribute to collect code expansion size measurements. Not all implementations support this attribute correctly. Some systems accept the use of the attribute, but always return zero for a value, while other systems generate a compile time error message.

The Pretest program "zp_label.ada" helps the user decide which measurement technique to use throughout the Pretest and the execution of the performance tests. This test verifies that the system's label'ADDRESS attribute is working correctly. This test reports in eight-bit bytes the difference between two label'ADDRESS statements surrounding a one-parameter procedure call. The expected range for this value should be anywhere from four to 16 bytes. Then the user can compare this value with a system map or machine code listing in order to verify the accuracy.

If a system's label'ADDRESS attribute is not supported, or "zp_label.ada" shows that the code size value is outside the expected range, a sample assembly routine, "zg_getad.mar", is provided for the VMS targeted systems. It returns the address of its caller and can be adapted by the user to calculate code expansion size. If the user does not care about collecting code expansion sizes, the performance test timing measurements can still be collected. Simply adapt the ZG_GETAD function (in "zg_glob3.elg" or "zg_glob3.cpg") so that it always returns the same value. For Ada 95 implementations, the value "System.Null_Address" is appropriate.

An example that works on DEC Ada Version 2.3 under VMS is shown in Figure 5-1.



        .TITLE  ZG_GETAD
;
; This procedure returns the value of the calling
; module's call address in R0
;
        .PSECT  CODE PIC, SHR, NOWRT, LONG
        .ENTRY  ZG_GETAD ^M<>
;
; Move the PC contents to R0
;
        MOVL    16(SP), R0
        RET
        .END


Figure 5-1 VAX VMS Procedure Returning the Address of the Caller

To use the "zg_getad.mar" function, select either the set of files with the suffix ".elg" which incorporates this modification for collecting code expansion size and elapsed time measurements, or the set with the suffix ".cpg" for CPU time. The assembly code function should be submitted to the assembler before Pretest Step 5. The command script for Step 5 may need to be modified to make the resulting object file available to the Ada library. It may also be necessary to modify the linker commands in zc_link, zc_linkd, and zc_lnk to specify the library where the assembler object exists.

Details of adapting this routine to other implementations depend on the provisions for calling assembler routines from Ada programs and are highly system dependent. If there is not enough system documentation to adapt the ZG_GETAD assembly routine, a user could write ZG_GETAD as an Ada function that returns a constant of type SYSTEM.ADDRESS. The difference between any two addresses returned by ZG_GETAD will then be 0. The constraints of this method (a minimal stub satisfying them is sketched after the list) are:

* The name of this function must be ZG_GETAD.

* The pragma INTERFACE statement for ZG_GETAD must be removed from one of the files listed below, depending on the user's choice of measurement technique (either elapsed or CPU time, as determined in the next Pretest step).

+ zg_glob3.elg for measurement of elapsed time

+ zg_glob3.cpg for measurement of CPU time
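A minimal stub satisfying these constraints might read as follows (Ada 95 is shown, since it provides SYSTEM.NULL_ADDRESS; on an Ada 83 system any convenient constant ADDRESS value will do):

with SYSTEM;
function ZG_GETAD return SYSTEM.ADDRESS is
begin
   -- Always return the same address, so every code expansion size
   -- computed from ZG_GETAD differences will be zero.
   return SYSTEM.NULL_ADDRESS;
end ZG_GETAD;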

To compute the code expansion size, the Include tool (zg_incld) inserts a unique label after the last line of the insertion for Startime (zg_start); inserts another unique label before the first line of Stoptime0 (zg_stop0); and generates an assignment statement setting the variable ZG_GLOB3.EXPANSION_SIZE to the difference between the addresses of these two labels for Stoptime2 (zg_stop2).

The SUBTRACT_ADDRESS function defined in "zp_label.ada" (Pretest Step 2) and zg_glob3 (Pretest Step 5) must be adapted for 16-bit machines due to their 16-bit addresses. Use a 16-bit unsigned integer arithmetic algorithm or whatever techniques are acceptable for the compiler being evaluated. If the compilation system defines a minus operator for address types (as Ada 95 requires), use that option.
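On a compilation system that does define address subtraction, the adaptation can be as simple as the following sketch (the actual profile of SUBTRACT_ADDRESS in "zp_label.ada" and zg_glob3 may differ):

with SYSTEM; use SYSTEM;
with SYSTEM.STORAGE_ELEMENTS; use SYSTEM.STORAGE_ELEMENTS;
function SUBTRACT_ADDRESS (LEFT, RIGHT : ADDRESS) return INTEGER is
begin
   -- The "-" operator on ADDRESS values comes from System.Storage_Elements
   -- (required by Ada 95); no hand-coded 16-bit unsigned arithmetic is needed.
   return INTEGER (LEFT - RIGHT);
end SUBTRACT_ADDRESS;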

5.1.2 CPU Time vs. Elapsed Time (Running Setup; Pretest Step 3)

The ACES timing measurements can be executed with either elapsed or CPU time. Two versions of the timing code are distributed. One calls CALENDAR.CLOCK, and the other calls an ACES function which must be adapted to call an operating system CPU time function. This test step verifies that the system clock is working accurately. The ACES is set up to measure elapsed time as a default. The user may wish to collect CPU time rather than elapsed time; this may be done if the target operating system provides a CPU time function.

The elapsed timing measurements are performed using the function CLOCK in the predefined package CALENDAR. CALENDAR must work accurately for the Timing Loop code to function. The Timing Loop files for elapsed time have a suffix of ".ell" or ".elg" which is determined by choosing label'ADDRESS or a ZG_GETAD routine for code size measurements. These files are compiled in Pretest Step 5. CALENDAR is tested with the programs "zp_tcal1.ada" and "zp_tcal2.ada" in this Pretest step.

On bare machine targets, elapsed time measurements are the appropriate metric to collect. On multiprogramming target systems, using the CPU time metric will permit the collection of measurements without having to shut the system down to eliminate contending jobs. Measurements of the IO performance tests using CPU times are not comparable with measurements on other systems using elapsed times.

An ACES user can choose to run the Timing Loops using CPU time rather than elapsed time. The Timing Loop files for CPU time have a suffix of ".cpl" or ".cpg", determined by choosing label'ADDRESS or a ZG_GETAD routine for code size measurements. The separate function zg_cpu ("zg_cpu.dec"), which is called by CPU_TIME_CLOCK, will need to be replaced with code that queries the Ada runtime environment and returns the CPU time as a value of the predefined (and system-dependent) type DURATION, and the file renamed "zg_cpu.ada". These files are compiled during Pretest Step 5. To use CPU time measurements, it is necessary that the target environment maintain job CPU time (see Section 5.1.3).

Figure 5-2 is an example of a function ("zg_cpu.dec") which will access CPU time.



-- This library unit contains the function that is called by the function,
-- CPU_TIME_CLOCK, in zg_glob3.CPG or zg_glob3.CPL. This enables the user
-- that wants CPU time measurements to only have to adapt this function
-- to their system dependencies one time, here in the function zg_cpu.
-- This function was developed by the ACM SIGAda PIWG (Association for Computing
-- Machinery, Special Interest Group on Ada, Performance Issues Working Group).
-- It is their program A000012. This version is compatible with DEC VAX Ada,
-- calling on the VMS System Service routine "$GETJPI". Refer to the VAX/VMS
-- System Services Reference Manual, Order No. AA-Z502C-TE, for more information.
--
-- The Ada function has a return type of DURATION.
--
-- A common implementation technique introduces errors in using CPU time for
-- timing measurements. One field in the Task Control Block (TCB) will
-- represent cumulative CPU time, but is only updated on task scheduling.
-- A system call which returns the field from the TCB will ignore the time
-- the task has expended in the current quantum (that is, since last scheduled).
-- This would appear to a program as a clock which "stutters", keeping the
-- same value for a relatively long time and then updating itself by several
-- "ticks" at one time. Such a clock can keep long term accuracy, but programs
-- using it must accommodate substantial amounts of jitter. To compute current
-- CPU time, the time since last initiation of the task should be added to the
-- value stored in the TCB. If the built-in system call does not do this,
-- a user can. If not done, the ACES Timing Loop will compute a larger than
-- otherwise necessary value for the jitter compensation time, and the
-- time to execute the test suite will be longer than it needs to be.
-- Accuracy should not be seriously degraded. The VMS system call performs the
-- desired compensation.

with SYSTEM; use SYSTEM;
with CONDITION_HANDLING; use CONDITION_HANDLING;
with STARLET; use STARLET;
FUNCTION zg_cpu RETURN duration IS

   CPUTIM : INTEGER;
   pragma VOLATILE ( CPUTIM );

   JPI_STATUS : COND_VALUE_TYPE;
   JPI_ITEM_LIST : constant ITEM_LIST_TYPE :=
      ( ( 4, JPI_CPUTIM, CPUTIM'ADDRESS, ADDRESS_ZERO ),
        ( 0, 0, ADDRESS_ZERO, ADDRESS_ZERO ) );
   CPU_TIME_AS_DURATION : DURATION;

BEGIN
   -- Call GETJPI to set CPUTIM to total accumulated CPU time
   -- (in 10-millisecond ticks)
   GETJPI ( STATUS => JPI_STATUS, ITMLST => JPI_ITEM_LIST );
   CPU_TIME_AS_DURATION := DURATION ( LONG_FLOAT ( CPUTIM ) / 100.0 );
   return CPU_TIME_AS_DURATION;
END zg_cpu;


Figure 5-2 zg_cpu for DEC Ada

5.1.3 Appropriateness of Time Choices (Pretest Step 4)

Every sixty seconds, for a period of fifteen minutes (or until the program is aborted by the user), "zp_tcal1.ada" and "zp_tcal2.ada" print out a count of elapsed minutes. Tests zp_tcal1 and zp_tcal2 should be run interactively. The ACES user should be ready with a stopwatch to verify that a line is generated every 60 seconds. Some error is tolerable, but most systems should show no discernible error. A one-second drift in two minutes is less than a 1% error. If CALENDAR.CLOCK doesn't work, the ACES user should get it fixed before proceeding. The first thing to check is that the Ada system has been properly installed.
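A minimal stand-in for this kind of check, illustrative only and not the distributed zp_tcal1 source, might look like:

with CALENDAR; use CALENDAR;
with TEXT_IO;
procedure MINUTE_MARKS is
   START_TIME : constant TIME := CLOCK;
begin
   for MINUTE in 1 .. 15 loop
      while CLOCK - START_TIME < DURATION (MINUTE * 60) loop
         null;   -- busy-wait on CALENDAR.CLOCK rather than DELAY
      end loop;
      TEXT_IO.PUT_LINE ("Elapsed minutes:" & INTEGER'IMAGE (MINUTE));
   end loop;
end MINUTE_MARKS;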

Program "zp_tcal3.ada" is to be executed only if the user wants to verify CPU time. Programs zp_tcal1 and zp_tcal2 must be run first in order to verify that elapsed time is accurate. Program zp_tcal3 should be run interactively. The first time through, zp_tcal3 must be run as the only task in the system in order to compare that result against a run with contention.

Program zp_tcal3 computes the elapsed time and the CPU time for executing a loop multiple times. It reports the variation between the results. This test verifies that:

* The CPU time measurement is less than, and close to, the elapsed time when the program is run as the only task in the system. The reported difference between CPU and elapsed time is an estimate of operating system overhead in the form of background processing.

* The CPU time is consistent over multiple loop executions.

* The CPU time measurements are affected by contention from concurrent jobs. To test this, two copies of "zp_tcal3.ada" are executed concurrently. The CPU measurements should be shorter than they were the first time zp_tcal3 was executed in standalone mode.

Program "zp_tcal4.ada" is to be run only when users want to collect CPU measurements for the compilation/link process. Program "zp_tcal4.ada" tests whether the CPU function returns a value relative to the current process or the current program. The distinction is important if the user wants to measure CPU time for a compilation process since the measurement technique provided involves several separate programs.

Two programs, zp_tcal4 and zp_tcal5, are contained in "zp_tcal4.ada". These programs consume a significant amount of CPU time and write a value of CPU time as output when they complete. If the CPU time output from the second run (zp_tcal5) is roughly double the CPU output from the first run (zp_tcal4), then the user can conclude that the CPU time being returned by the system is job time. If the outputs are roughly equal, the CPU time is probably program time. To measure compilation CPU time, job time is desired. Operating systems which provide program time will often provide job time through a similar system call. This comparison is performed in the program, which will report whether job or program time is being used.
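The principle can be sketched as follows; the reference to CPU_TIME_CLOCK assumes the function exported by zg_glob3 in its CPU time configuration, and the loop bound is arbitrary. Running two copies of such a program back to back in the same job and comparing the two reported times reveals whether the clock accumulates per job or per program:

with TEXT_IO;
with ZG_GLOB3;   -- assumed to export CPU_TIME_CLOCK returning DURATION
procedure BURN_AND_REPORT is
   X : LONG_FLOAT := 1.0;
begin
   for I in 1 .. 50_000_000 loop                -- consume significant CPU time
      X := X + 1.0 / LONG_FLOAT (I);
   end loop;
   TEXT_IO.PUT_LINE (DURATION'IMAGE (ZG_GLOB3.CPU_TIME_CLOCK));
   TEXT_IO.PUT_LINE (LONG_FLOAT'IMAGE (X));     -- keep the loop from being optimized away
end BURN_AND_REPORT;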

5.1.4 Choice of Math Package (Pretest Steps 0, 6, 7)

Parts of the ACES test suite and analysis tools require a math library of elementary functions. The ACES uses a compatible subset of the Association for Computing Machinery, Special Interest Group on Ada, Numerics Working Group (ACM SIGAda NUMWG) proposed specifications for an elementary math function library from Ada Letters - A Special Edition from SIGAda, "Proposed Standard for a Generic Package of Elementary Functions for Ada" pages 9-46. It is recommended that the vendor's math library be used where it is available. The ACES provides a portable implementation of a math library to permit execution on compilation systems which do not provide any math library support, or for which the functions are not sufficiently accurate.

For a detailed discussion on adapting the math library options to the compilation system, see Section 5.1.6.1 "Math Packages". The math library is tested in Pretest Step 8. The zp_mt* programs test the accuracy of the math library on a type declared with six digits of precision; the zp_dm* programs test the accuracy on a type declared with nine digits of precision. There are four approaches to adapting the Math packages, "zm_math.ada" (single precision), and "zm_dblma.ada" (double precision), listed below in decreasing order of preference:

* Use the Ada 95 math packages.

If the user has indicated that the system under test is an Ada 95 system, then Setup will use the required math packages. No user action is required in this case. (The instantiation this amounts to is sketched after this list.) If the system has not implemented the required packages, then modify the generated script for Step 5. Using the template file "zp_stp05.tpl" as a guide, replace the copy and compile statements with the portable math package options.

* Instantiate a system-provided version of the NUMWG package.

If a compilation system provides a NUMWG package, it can be directly instantiated. Where this alternative is available, execution times may be fast because the body of the package may have been tailored to the target hardware. "zm_math.ada" and "zm_dblma.ada" are the versions of Math and Double Math which assume NUMWG support.

* Interface with an implementation-provided non NUMWG math library.

If a compilation system provides a math library which is not compatible with NUMWG recommendations, adapt zm_math and zm_dblma by providing bodies for the functions which pass through calls to the provided non-NUMWG library. Where this alternative is available, execution times may be fast because the procedures may have been implemented as interfaces to highly optimized routines which are tailored to the target hardware.

* Use the ACES portable math package zm_genma with "zm_depen.por".

This combination was developed for use on systems not providing a usable math library. Instantiating this package to implement zm_math should involve the least user effort on systems which do not implement the NUMWG recommendations. The program zp_dptst in Pretest Step 7 can be used to verify correct execution. The performance of this option will probably be slower than for systems where the implementors have provided math packages tailored to the target hardware.
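For reference, the first (Ada 95) approach amounts to little more than an instantiation of the language-defined generic. The sketch below is illustrative only: ZG_GLOB1.FLOAT6 is the six-digit type used by the suite, but the distributed zm_math declares additional items beyond the bare instantiation:

with ZG_GLOB1; use ZG_GLOB1;
with Ada.Numerics.Generic_Elementary_Functions;
package ZM_MATH_SKETCH is
   new Ada.Numerics.Generic_Elementary_Functions (FLOAT6);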

5.1.5 The ZG_CPY files (Pretest Step 5)

The system timing characteristics are determined during this Pretest step. The initialization program is compiled and executed four times (once for each compilation option) in Step 5 for the selected measurement technique. THIS STEP MUST BE RUN WITHOUT CONTENTION ON THE SYSTEM TO INSURE THAT THE INITIALIZATION PARAMETERS ARE AS ACCURATE AS POSSIBLE. If there is a wide variance of results in these values, this command file should be rerun. This program, "zg_init.*" (*= ".cpg", ".cpl", ".elg", or ".ell") sets the value of the Timing Loop variables LOOP_TIME, TIME_PER_TICK, MIN_JITTER_COMPENSATION, OVERALL_MIN_TIME, OVERALL_MAX_TIME, and NULL_LOOP_SIZE for each compilation option. These values are output to a text file, "zg_cpy.*". (The character "*" in a file name represents one of the compilation options.) The four compilation options are:

* op => optimize time, suppress constraint checking

* no => no-optimize time, suppress constraint checking

* ck => optimize time, enable constraint checking

* sp => optimize space, suppress constraint checking

As part of the initialization of each performance test program, the initialization file, "zg_verfy.*" (*= ".cpg", ".cpl", ".elg", or ".ell") incorporates the appropriate copy parameter file, "zg_cpy.*", and executes some code to verify that these values are consistent with the target environment. This is done by executing a null statement within a Timing Loop and confirming that the measured time, using the supplied values for the Timing Loop parameters, is consistent with the variations in the Timing Loop observed during the Pretest. If the test program is not within an acceptable tolerance of the range, it halts with an error message.
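The idea underlying these parameters can be sketched as follows. This is illustrative only; the distributed Timing Loop is considerably more elaborate, with jitter compensation and the verification just described. The sketch also shows why an optimizer that deletes the null loop is fatal to the method, as the next paragraph explains:

with CALENDAR; use CALENDAR;
procedure TIMING_SKETCH is
   ITERATIONS : constant := 1_000_000;
   procedure TEST_PROBLEM is
   begin
      null;   -- stands in for the code under measurement
   end TEST_PROBLEM;
   T0, T1, T2 : TIME;
   NULL_LOOP_TIME, TEST_TIME : DURATION;
begin
   T0 := CLOCK;
   for I in 1 .. ITERATIONS loop
      null;                        -- null loop: measures bare loop overhead
   end loop;
   T1 := CLOCK;
   for I in 1 .. ITERATIONS loop
      TEST_PROBLEM;                -- timed loop: overhead plus test problem
   end loop;
   T2 := CLOCK;
   NULL_LOOP_TIME := T1 - T0;
   TEST_TIME := (T2 - T1) - NULL_LOOP_TIME;   -- subtract the loop overhead
end TIMING_SKETCH;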

If an optimizing compiler is able to translate the timing loop into a no-op, then all the non-null ACES timing measurements will be larger than they should be, because the compilation system will be executing the loop overhead instructions but subtracting off a computed timing loop overhead of zero. The "zg_init" program will write an error message if the measured loop overhead is not statistically significantly greater than zero. The error message informs the user that it may be necessary to adapt the timing loop by inserting a call on an assembler routine into the timing loop. The modification might be as simple as adding an unconditional call on an assembler language procedure as the first statement in "zg_glob3.proc_spoil". If testing shows that this is not sufficient to make the measured timing loop overhead significantly greater than zero, it may be necessary to make all the calls on "zg_glob3.proc_spoil" unconditional by modifying the appropriate IF statements in "zg_init", "zg_stop0", and "zg_verfy". It will be necessary to rerun the "zg_init" programs after these changes are made.

5.1.6 Math Package Errors (Pretest Steps 7, 8, 9)

5.1.6.1 Math Packages

The ACES test suite and tools do not use all the functions and features of the NUMWG specifications. Unused functions are not provided in the portable package zm_genma (GEN_MATH). The NUMWG functions which are not required are:

* The hyperbolic (and inverse hyperbolic) functions.

* The trigonometric (and inverse) functions with a cycle parameter.

* The logarithm function with a base parameter.

* The COT function.

If the zm_genma package is instantiated with a constrained type, it will not operate as a NUMWG-conforming package would: it may raise an exception when any intermediate variable in a computation is out of range, rather than only when initial or final values are out of range.

The functions required by the ACES test suite are:

* "**" - The power function, raising a real number to a real exponent.

* ARCCOS - The trigonometric arc cosine.

* ARCSIN - The trigonometric arc sine.

* ARCTAN - The trigonometric arc tangent.

* COS - The trigonometric cosine.

* EXP - The exponential function.

* LOG - The natural logarithm.

* SIN - The trigonometric sine function.

* SQRT - The square root function.

* TAN - The trigonometric tangent function.

5.1.6.1.1 Alternative Methods for Math

The following sections discuss details of the alternative methods of providing a math package.

5.1.6.1.2 Instantiate System-Provided NUMWG package

Where provided, a NUMWG package should be straightforward to instantiate and efficient to use.

5.1.6.1.3 Adapting to a Non-NUMWG Math Library

This alternative involves providing a package which passes through a function call by interfacing to an implementor-provided routine, mapping name changes, exception processing, and argument definitions as required. The following code shows how this would appear for the ARCTAN function on DEC Ada.



with zg_glob1; use zg_glob1;
package zm_math is

   base : CONSTANT := 2.0;  -- Binary floating-point machine
   num_digits_in_mantissa : CONSTANT INTEGER := float6'MACHINE_MANTISSA;
   argument_error : exception;
   ...
   function arctan (y : float6; x : float6 := 1.0) return float6;
   ...
end zm_math;

with zm_math_lib;
package body zm_math is

   package vms_math_lib is new zm_math_lib (float6);
   ...
   function arctan (y : float6; x : float6 := 1.0) return float6 is
   begin
      if x = 1.0 then
         return vms_math_lib.atan (y);
      else
         return vms_math_lib.atan2 (y, x);
      end if;
   exception
      when others => raise argument_error;
   end arctan;
   ...
end zm_math;


For a compilation system which provides a math library but does not contain all the functions the ACES test suite requires, an ACES user might: adapt the ACES portable math library to provide the missing functions (and access the implementor library for the functions which are provided); use only the portable math library; or use the implementor library and not run the test programs which use the unsupported functions. An implementor-provided math library might not handle exception conditions in a comparable manner - rather than raise an exception for an invalid argument (as the ACES portable math library does) it might crash the program. Such behavior complicates the task of interfacing the ACES test suite to an external library and can make the zp_mt* (MATH_TEST) programs, for example, impossible to execute without source code modifications.

An ACES user can construct a version of zm_math (or zm_dblma) with interfaces to a vendor-provided math library by writing a package zm_math (or zm_dblma). This package specifies the functions, and provides bodies for each function which return the value of a call on the appropriate vendor library function. An example of such an adaptation for the DEC Ada compilation system is distributed in the files "zm_math.dec" and "zm_dblma.dec".

5.1.6.1.4 Making a Non-generic Math Package

The ACES portable math library "zm_genma.ada" (provided as a generic package) may be too large a generic unit for some systems to handle. If so, the compilation system being evaluated might accept the package if it were "de-genericized." To do this, edit the package source as follows (a schematic before/after sketch follows the list):

* Remove the "generic" specification and the generic formals.

* Replace the generic package name (zm_genma) with the name of the desired instantiation (zm_math for the type ZG_GLOB1.FLOAT6; zm_dblma for the type ZG_GLOB6.FLOAT9). A text editor which can replace the string "zm_genma" everywhere it occurs will accomplish this (the string "zm_genma" is NOT embedded within other names in the source file).

* Replace the generic formal parameter name ("FLOAT_TYPE") with the name of the desired type. A text editor which can replace the string "FLOAT_TYPE" everywhere it occurs will accomplish this.
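Schematically, the net effect of these edits on the package specification is as follows (the fragment shown is illustrative, not the actual zm_genma source):

-- Before: the generic version.
generic
   type FLOAT_TYPE is digits <>;
package zm_genma is
   function sqrt (x : FLOAT_TYPE) return FLOAT_TYPE;
   ...
end zm_genma;

-- After: "de-genericized" for single precision.
package zm_math is
   function sqrt (x : float6) return float6;
   ...
end zm_math;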

Comparing the performance of such a "de-genericized" version with a system which used the provided generic version may introduce an unfavorable bias, in that the system which supported the generic version without requiring special adaptation effort may run slower than it would if it had also been adapted. This is another reason why it is important that ACES users record all the modifications made to adapt the math library.

5.1.6.1.5 zm_genma with Portable zm_depen

To provide a math library for target systems which do not provide one, the ACES distributes a portable version of a generic math package which can be readily adapted to additional targets.

The math package provided is based on the book Software Manual for the Elementary Functions by William J. Cody, Jr., and William Waite, published by Prentice-Hall in 1980.

The ACES math packages zm_math and zm_dblma depend on two generic packages:

* zm_depen

This is a representation-dependent generic package which provides functions permitting the manipulation of fields of floating point numbers. It is instantiated using the declared types. zm_depen is discussed in more detail later.

* zm_genma

This is a generic package which instantiates zm_depen and contains the algorithms for the supported elementary math functions.

The ACES contains the following versions of the file zm_depen (MATH_DEPENDENT).

.por portable version

.dec DEC Ada VAX host/VAX target

The file zm_depen provides access to the following three attributes of a real number:

* INTEXP ( x ) which returns the integer representation of the exponent in the normalized representation of its floating-point number parameter. For example, INTEXP ( 3.0 ) = 2 on binary machines because 3.0 = 0.75 * ( 2**2 ).

* ADX ( x, n ) which adds N to the integer exponent in the floating-point representation of X, thus scaling X by the N-th power of the radix. For example, ADX ( 1.0, 2 ) = 4.0 on binary machines because 1.0 = 0.5 * ( 2.0**1 ) and 4.0 = 0.5 * ( 2.0**3).

* SETEXP ( x, n ) which returns the floating-point representation of a number whose mantissa is the mantissa of the floating-point number X, and whose exponent is the integer N. For example, SETEXP ( 1.0, 3 ) = 4.0 on binary machines because 1.0 = 0.5 * 2.0**1 and 4.0 = 0.5 * ( 2.0**3 ).

There are two different approaches to implementing zm_depen. The first approach uses implementation-dependent features to manipulate fields within a floating point number, either through record representation clauses or through calls on assembler routines. The second approach is not implementation-specific, but relies only on the arithmetic definition of floating point numbers. It uses a table of powers of two: it searches the table for the INTEXP function, multiplies by table entries for the ADX function, and calculates the SETEXP function by scaling the input (effectively dividing by 2.0**INTEXP(X)) and then multiplying by 2.0**N.

The portable version of zm_depen is coded by using operations on the floating point values without directly manipulating the bit patterns used to represent the values. To see how this is possible, consider the three functions exported by zm_depen in turn.

* Function ADX

This function is straightforward to implement by multiplying or dividing by an appropriate power of two. It can be tolerably efficient using a precomputed array containing the powers of two.

* Function SETEXP(X,N)

Using the function INTEXP to determine the power of two of a floating point value, the result of SETEXP can be computed as

RETURN ( X / 2.0 ** INTEXP(X) ) * 2.0 ** N;

which is representation independent. This formulation is intended for clarity and would be rather slow if coded as shown. The exponentiation can be efficiently calculated using an array of the powers of two.

* Function INTEXP

The implementation of this function is the key to the proposed representation-independent implementation. The binary exponent of a floating point value can be determined directly by searching an array of powers of two for the smallest power of two strictly greater than the value, instead of manipulating the bits of the floating point representation. This will not be as efficient as a direct "bit manipulation" approach, but it is independent of where exponent fields are located in floating point numbers.

The functions must be coded carefully to avoid numeric overflow or underflow.
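The following is a minimal, representation-independent sketch of the three functions (the package name is hypothetical, and the repeated-scaling loops stand in for the faster table search described above):

generic
   type float_type is digits <>;
package depen_sketch is
   function intexp (x : float_type) return integer;
   function adx (x : float_type; n : integer) return float_type;
   function setexp (x : float_type; n : integer) return float_type;
end depen_sketch;

package body depen_sketch is

   function intexp (x : float_type) return integer is
      y : float_type := abs x;
      n : integer := 0;
   begin
      if y = 0.0 then
         return 0; -- convention assumed here; zero has no exponent
      end if;
      -- Normalize abs x into 0.5 .. 1.0, counting the scalings; n is
      -- then the binary exponent: intexp (3.0) = 2 because
      -- 3.0 = 0.75 * 2**2.
      while y >= 1.0 loop
         y := y / 2.0;
         n := n + 1;
      end loop;
      while y < 0.5 loop
         y := y * 2.0;
         n := n - 1;
      end loop;
      return n;
   end intexp;

   function adx (x : float_type; n : integer) return float_type is
   begin
      -- Scale x by the n-th power of the radix.
      return x * 2.0 ** n;
   end adx;

   function setexp (x : float_type; n : integer) return float_type is
   begin
      -- Strip the current exponent, then apply the requested one
      -- (the formulation from the SETEXP discussion above).
      return (x / 2.0 ** intexp (x)) * 2.0 ** n;
   end setexp;

end depen_sketch;

The distributed portable version replaces the loops with a search of a precomputed array of powers of two, trading storage for speed.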

5.1.6.1.6 zm_genma with Tailored zm_depen

The Ada model numbers are defined in terms of binary radix. For a target machine with a non-binary radix, the error bounds produced by using the Ada model numbers will not be as tight as they are with binary radix targets. This will be most apparent in the zp_mt* (single precision) and zp_dm* (double precision) programs which verify the accuracy of the math library. These programs use attributes MACHINE_MANTISSA, MACHINE_EMAX, and MACHINE_EMIN to obtain the properties of the target machine used to calculate tolerable error bounds. For a non-binary radix machine representation, the value of these attributes will be the smallest binary value which is consistent with the actual representation. Using this definition in the zp_mt* programs will permit (slightly) more numeric errors in the implementation of the math functions before errors are reported.

In the following sections, two separate points are discussed. The first is what types of modification may be necessary to zm_depen, and the second is how various types of errors in the implementation of zm_depen would show up in zp_dptst (DEPTEST) results.

The package zm_depen must be adapted to reflect both the characteristics of the target machine floating point hardware and the facilities which the Ada compilation system provides to manipulate bit fields in floating point variables.

The size and location of the sign, exponent, and mantissa of a floating point number are critical, as are other representation details such as the encoding of the exponent field (biased, sign magnitude, or complement number representation). This information should be extracted from the documentation on the target machine. It is often included in Appendix F of the compiler's documentation.

Once information on the floating point representation is determined, there may still be a problem in coding zm_depen. The fundamental reason is that Ada is designed to be portable and system-dependent operations are not universally supported.

It is possible for an ACES user to implement a tailored version of zm_depen by interfacing with an assembly language routine. This might produce the fastest execution speeds.

There are several approaches to adapting zm_depen to a target system.

* The cleanest approach is the use of record representation clauses to treat the fields of a floating point number as integer (sub)types. Adaptation will involve modification to reflect the target representation. Remember that different compilers for the same target hardware may choose to number the bytes in a record differently (left-to-right versus right-to-left).

* To isolate fields in a floating point value, it is necessary to sidestep normal Ada type rules. This can be done by:

+ Defining several access types which point to integer and floating point objects and arranging for them all to point to the same actual object. That is, instantiate UNCHECKED_CONVERSION between the access types (pointers to integers and pointers to floats) and, as part of system setup, initialize all the pointers to the same actual location.

+ Using instantiations of UNCHECKED_CONVERSION, either between floating point types and integer types, or between scalar types and record types. This approach was used for the DEC Ada version of zm_depen.

+ Bit field extraction can then be accomplished using record representation clauses or integer arithmetic:

- Divide and MOD to extract fields (note, however, that dividing a negative value on a machine using two's complement integer arithmetic is not a logical shift).

- Multiply by powers of two to shift left.

- Add to OR fields (after ensuring that the field in one operand is zero).

- Negation to complement bits.

All these methods have disadvantages. The first is the most straightforward, but may not compile when the sizes are different (even though the conversion between different-sized objects occurs in a piece of code which will not be executed when the sizes differ). The second alternative relies on PRAGMA SUPPRESS being honored; however, a compiler which determines at compile time that a constraint violation would occur when a statement is executed may generate a compile time error (warning) and generate code which would raise the CONSTRAINT_ERROR exception at execution time. SUPPRESS grants a compiler permission to omit checking, but the LRM and RM 95 explicitly allow compilers to ignore a PRAGMA SUPPRESS.
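As an illustration of the integer-arithmetic style, the following sketch extracts the zm_depen exponent using UNCHECKED_CONVERSION plus divide and MOD. The bit positions and bias assume a 32-bit INTEGER and an IEEE-like single-precision layout; both are assumptions which must be adapted to the actual target:

with unchecked_conversion;
with zg_glob1; use zg_glob1; -- declares float6

package exponent_sketch is
   function intexp (x : float6) return integer;
end exponent_sketch;

package body exponent_sketch is

   function to_bits is new unchecked_conversion (float6, integer);

   -- Assumed IEEE-like single precision: 8-bit exponent field in bits
   -- 23 .. 30, biased by 127; the bias is 126 relative to the Ada
   -- 0.5 .. 1.0 mantissa convention used by zm_depen.
   exponent_bias : constant := 126;

   function intexp (x : float6) return integer is
      bits : constant integer := to_bits (abs x); -- abs clears the sign bit
   begin
      -- Divide to shift the exponent field down, MOD to mask it off.
      return ((bits / 2**23) mod 2**8) - exponent_bias;
   end intexp;

end exponent_sketch;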

5.1.6.2 zp_dptst (DEPTEST)

The program zp_dptst (DEPTEST) tests the functions in zm_depen (MATH_DEPENDENT) with a range of arguments which will expose many of the potential errors in the implementation of these functions. The program contains a series of statements which call the functions and compare the results returned to the correct answers. In the zp_dptst output file, discrepancies are flagged with the string "<<< ERROR >>>" starting in column 65, making them easy to detect.
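The style of the checks can be sketched as follows (the procedure name and the assumed library instantiation depen_f6 are hypothetical, and the actual program also aligns the flag at column 65):

with text_io; use text_io;
with depen_f6; -- assumed library-level instantiation of zm_depen for float6
procedure dptst_style_sketch is

   procedure check (name : string; got, expected : integer) is
   begin
      if got /= expected then
         put_line (name & " returned" & integer'image (got) &
                   ", expected" & integer'image (expected) &
                   "   <<< ERROR >>>");
      end if;
   end check;

begin
   check ("INTEXP (3.0)", depen_f6.intexp (3.0), 2); -- 3.0 = 0.75 * 2**2
   check ("INTEXP (1.0)", depen_f6.intexp (1.0), 1); -- 1.0 = 0.5  * 2**1
end dptst_style_sketch;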

Below are listed several symptoms of errors which might be reported by zp_dptst, and some candidates for what the underlying source of the errors might be:

* If INTEXP returns a constant for all values, the function is probably not extracting the correct bit field from the number. Perhaps parameters are grossly wrong (e.g., with record representation clauses, the bit numbering is backwards) or there are byte-ordering problems (machine architectures can number bytes from the high end or the low end; when the target orders bytes differently than the programmer expected, it will appear that the bytes have been interchanged).

* If INTEXP returns a value which is a constant power of two off, the exponent bias is probably not set correctly.

* If SETEXP or ADX modify any bits of the mantissa, the computations to adjust the exponent field are wrong. When the computed results are not some power of two different from the expected results, the exponent field is not being isolated properly. Check for byte-interchange, bit numbering and field sizes.

* If ADX returns a value which is a constant multiple of the correct value, the exponent field location may be a bit or two off.

* If the result of SETEXP is off by a constant power of two, the exponent bias may be wrong. When ADX works properly an invalid bias is a likely cause of a constant factor error.

* Negative values, of either the floating point value or of the exponent field, are given special processing. If results for negative values are wrong, the code for processing negative values needs to be reviewed.

* Optimizing compilers may do strange things with these functions. Consider the following example, which occurred on one compiler during development. The SETEXP function assigned to a float (through an access type), did an UNCHECKED_CONVERSION of the access-to-float to another access type, performed some manipulations through that second type, and finally returned the float value pointed to by the first access type. The compiler performed flow analysis, decided that there were no modifications to the value pointed to by the first access type before it was returned, and "optimized" away the load of the modified result as invariant. This transformed the SETEXP function into an identity function and made it useless. The compiler vendor agreed after inspection that the UNCHECKED_CONVERSION should have made the flow optimizer aware that an alias had been created which could modify the value of the object pointed to by the access type, and has modified the compiler to be aware of this fact. The immediate workaround was to insert an external procedure call between the assignments to the access type variables; this worked, but increased the execution time of the functions.

If zp_dptst does not initially work correctly, and the errors observed do not fit one of the patterns described above, the first thing a user should try is to compile the package zm_depen with no optimizations. On some systems, requesting support for a debugging option is a good way to suppress optimizations. The package may work then. If it does, the ACES user may decide to: isolate the difference optimization makes, perhaps by examining the listing of the machine code generated; or refer the package to the compilation system maintainers for correction; or simply not use any optimization options on that compilation system.

These functions must work properly for zm_genma to work. It is futile to try to verify that zm_genma is correct by running the zp_mt* programs until zm_depen has been verified with zp_dptst. It is much simpler and faster to isolate and correct errors in the zm_depen functions using zp_dptst than using zm_genma. It is much easier to debug a function when the expected results are checked by a test program than when a programmer must observe that a complex function (such as LOG) sometimes returns a wrong result and trace the problem to an error in a low-level function which it calls. Most of the problems uncovered at execution time by the zp_mt* programs while transporting zm_genma onto new compilation systems have been due to errors in functions in the zm_depen package.

5.1.6.3 zp_mttst and zp_dmtst Programs

The "zp_mt*.ada" and "zp_dm*.ada" programs test the accuracy of the math packages. These should be run to insure that the math libraries are performing correctly. These test programs, executed in Pretest Steps 8 and 9 might reveal accuracy flaws in a vendor library, or flaws in adapting the ACES generic math package to a new target. For the ACES test problems, it is sufficient if the math library is not grossly inaccurate - say not losing more than 10 bits of accuracy. However, ACES users should carefully examine the accuracy requirements of their applications before using a math library which is not essentially accurate to target machine precision.

The NUMWG has recommended accuracy standards for the elementary math functions in terms of permissible maximal errors over various ranges; the zp_mt* and zp_dm* programs compare the observed errors against this standard and report if it is exceeded. The NUMWG also recommends that the value of the functions at some specific points (usually zero) be exact; the zp_mt* and zp_dm* programs compare the calculated results for these values to the recommended values. These programs test various identities and special cases for the elementary math functions, and output the number of bits in error in the computation of the function.

There are several groups of tests, covering ranges of the functions. Over each range, the program computes 2,000 sample points distributed at random (usually dividing the range into 2,000 intervals and selecting a point within each interval using a uniform random distribution). On each range, the program displays both the maximum relative error and the root-mean-square error in terms of the number of bits of precision lost. The root-mean-square is the square root of the mean of the squares of all the errors. It is commonly used as a measure of the "average" error in a set of numbers.
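A sketch of these two statistics over n sample points (the variable names here are assumed; the actual programs also convert the relative errors into bits of precision lost):

-- Maximum relative error and root-mean-square relative error over
-- arrays observed and expected of length n (declarations omitted).
max_err := 0.0;
sum_sq  := 0.0;
for i in 1 .. n loop
   err := abs ((observed (i) - expected (i)) / expected (i));
   if err > max_err then
      max_err := err;
   end if;
   sum_sq := sum_sq + err * err;
end loop;
rms_err := zm_math.sqrt (sum_sq / float6 (n));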

The programs zp_mt* and zp_dm* will write an error message whenever the maximum error is larger than that recommended by the NUMWG specifications, whenever some of the specific identities that the NUMWG recommends fail, and whenever selected examples which should (or should not) raise an exception, based on the NUMWG specifications, do not (or do).

These programs are an adaptation of the work of Cody and Waite. The interested reader is referred to their book, cited in Section 1.2, for details.

The zp_mt* programs require the package zm_ran (RANDOM), which contains a random number generator. There are two versions of zm_ran, one using 16-bit integers (zm_ran16) and another using 32-bit integers (zm_ran32). For systems which support 32-bit integer types, zm_ran32 should be used. This package uses a linear congruential pseudo-random number generator and should be fairly fast. For compilation systems which do not support integer types with that range, zm_ran16 must be used. That package uses a Tausworthe random number generator with a shuffling technique as described in "Improving a Poor Random Number Generator," by C. Bays and S. D. Durham, ACM Transactions on Mathematical Software, Volume 2, Number 1, March 1976. zm_ran16 should be fairly portable, although it assumes that it can perform an UNCHECKED_CONVERSION between a packed boolean array of 16 elements and an integer type.
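For reference, the general shape of such a generator is sketched below using the well-known Park-Miller "minimal standard" constants with Schrage's overflow-free decomposition; these constants are illustrative, not the ones zm_ran32 actually uses:

with zg_glob1; use zg_glob1; -- declares float6
package lcg_sketch is
   function next return float6; -- uniform in (0.0, 1.0)
end lcg_sketch;

package body lcg_sketch is

   -- Assumes INTEGER is at least 32 bits.
   a : constant := 16_807;        -- multiplier
   m : constant := 2_147_483_647; -- modulus, 2**31 - 1
   q : constant := 127_773;       -- m / a
   r : constant := 2_836;         -- m mod a

   seed : integer := 1;

   function next return float6 is
   begin
      -- seed := (a * seed) mod m, computed by Schrage's method so
      -- that no intermediate value overflows a 32-bit integer.
      seed := a * (seed mod q) - r * (seed / q);
      if seed <= 0 then
         seed := seed + m;
      end if;
      return float6 (seed) / float6 (m);
   end next;

end lcg_sketch;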

The results of the zp_mt* programs should show very few bad tests, and the loss of significant bits should not be larger than the NUMWG recommendations.

An ACES user may be presented with a choice between using a fast, implementation-provided math library on which zp_mt* programs detect errors, and a zm_genma-based version which is slower, but with smaller errors and which processes exceptions as the NUMWG specifications recommend. This is not a trivial choice. The ACES test suite and support tools do not strongly rely on the NUMWG-recommended exception processing.

The treatment of exceptions may be the only discrepancy reported by the zp_mt* and zp_dm* programs when testing an adaptation of a vendor-provided, non-NUMWG math library. In this case, users may elect to use the vendor library for the applications they develop; that is, easy portability to other NUMWG-based systems may not be a concern. Such users would probably not consider testing with anything but the supplied math library. If zp_mt* and zp_dm* detect large numeric errors, an ACES user must decide, based on the expected usage of the math library, which math library to use for testing.

5.1.7 Harness Compilation (Pretest Step 11)

It is possible that some systems may fail to compile and link the Harness program, though the development team believes that this program should operate correctly on any system that is compatible with either Ada 83 or Ada 95. If another compilation system is available for the host, it is worthwhile to attempt to compile and link Harness using the other system. Note that all six of the "zg_glob*" packages must be compiled before attempting to compile the Harness files. The particular version of "zg_glob3" is not important to the Harness, so the simplest one ("zg_glob3.ell") should be used.

5.1.8 Analysis Tool Compilation (Pretest Step 12)

If the Analysis tools cannot be compiled on the system under test, then performance testing can continue. However, to perform analysis of the results, another compilation system should be used to compile them. Note that both "zg_glob1" and "zm_math" must be compiled before attempting to compile the analysis tools. (The development team has observed at least one compilation system that cannot process the first three analysis tool files because of the size of the enumeration types.)

On most Ada compilation systems, the analysis programs can be linked and the analysis tools run from the Menu program. If this first option is not possible due to capacity limitations, there are two alternatives. The second option compiles one of the tools (Condense, CA, or SSA) and dummy versions of the other two tools, and then links these programs with the Menu program. The dummy versions are in the files "za_cndum.ada", "za_cadum.ada", and "za_sadum.ada". If that option is not possible, the third option is to compile and link Condense, CA, SSA, and Menu independently of each other. The common data modules need to be compiled regardless of which option is necessary. See Section 9 "RUNNING THE ANALYSIS" for more information.

5.2 Pretest Report Form

The ACES Pretest provides a report form for collecting results. To fill in the report form, the user can invoke a system text editor and edit an on-line version of "zp_tmplt.txt", replacing the blanks with YES/NO or the appropriate answer. Alternatively, a hard copy can be printed and the blanks filled in by hand. Space is provided to insert comments into the report form. The user should note all adaptations that were made, along with general comments about system operations. When the Pretest is finished, the completed report form provides a summary report for future reference.

5.3 Modifications To Performance Tests

Several test problems may require system-dependent adaptation in order to operate. On compilation systems where adaptation is required, these tests will fail to compile unless the adaptation is performed first. Some users may want to perform the adaptation before attempting the problems. Files where adaptation may be required are:

Issue              Files
------------------------------------------------------------------
interrupts         tk_in01_.inc .. tk_in11_.inc and tk_lf20_.inc
FORM string        io_dapkg.inc
time-slicing       dt_dp16_.inc .. dt_dp21_.inc
pragma interface   ms_il01_.inc
asynchronous IO    io_tx01_.inc and io_tx02_.inc,
                   tk_la01_.inc .. tk_la03_.inc

5.4 Other Adaptations

5.4.1 Running the ACES on a Simulator

Some projects developing code for embedded targets may not have target hardware available or accessible when an evaluation is desired. If a software simulator is accessible, it may still be possible to run the ACES with some adaptation. The ACES user must determine how the simulator treats time. If programs referencing CALENDAR.CLOCK running on the simulator return the actual time-of-day, then the ACES Timing Loop will provide measurements of the execution speed of the simulator. The speed of the simulator will rarely be of much interest; the primary concern is the execution speed on the target.

Some simulators provide estimates for what the eventual target speed would be (based on information on the speed of individual instructions). These estimates of "simulated" time may be accessible to programs running on the simulator. It would be easiest for ACES use if the "simulated" times were reflected in the values returned by CALENDAR.CLOCK (then the ACES might be run without modification). If "simulated" time is accessible from some other call, the ACES must be modified to use it. Perhaps the simplest way is to use the ACES CPU time option with the function returning the "simulated" time replacing the CPU time function. Also, the CPU time cutoff code in ZG_GLOB3.TERMINATE_TIMING_LOOP and ZG_GLOB3.STOPTIME2 would need to be removed because a simulator's measured CPU time would be more than the elapsed time.

If it is possible to measure simulated time, various values in the ACES must be adapted to minimize the testing time. The values of BASIC_ITERATION_COUNT_LOWER and BASIC_ITERATION_COUNT_UPPER should be set to small values. (NOTE: the program zp_basic (Pretest Step 6) will determine these, but as distributed this test itself will probably take an excessive time to run.) MIN_ITERATION_COUNT and MAX_ITERATION_COUNT should also be set to smaller values, both because simulated times should be more reliable and to limit the execution time (values of 2 and 3 might be appropriate). It is appropriate to attempt only a subset of ACES tests to reduce the testing time on a simulator.

If it is not possible to measure simulated time, it might still be useful to run some of the ACES performance test problems, not to collect timing data but simply to observe whether failures occur. This may be particularly helpful for the implicit storage reclamation tests (although, if successful, they will take a long time to complete on a simulator).

5.4.2 Deleting Library Units

The following observations made during ACES testing provide some ways to work around problems with deleting library units:

* One compilation system created library units for local generic instantiations. The Harness command files do delete these units if the user has asked for library deletes. Of course, on systems that do not create these extra library units, this will result in delete requests for nonexistent units.

* One compilation system would not remove a subunit from the program library when a "zc_delbo" was used, even though that system identified the type of the unit as a body when a library directory was displayed. To work around this, an ACES user could modify the calls on "zc_delbo" for subunits to a command which will delete subunits. It may also be feasible to adapt the code within "zc_delbo" to delete all the library entries with the specified name, rather than simply deleting the body.

* A compilation system might create an explicit library unit for a specification for a library subprogram which was compiled with an implicit specification (that is, there was only one file which contained the subprogram definition and there was NOT a file compiled containing only the subprogram specification). It would be possible on such a system to adapt the "zc_delbo" command file to also delete specifications.

* One compilation system always created a library body entry, even for packages containing only a specification part. The sample command files assume this does not occur and will leave the "unexpected" body units in the program library. This might be worked around by making all the zc_del* files delete both specification and body units.

* One compilation system was very sensitive to the order of deleting units from the program library and would not delete a unit which had any other units in the library dependent on it (for example, it was improper to remove a package specification before removing the package body). The command files generated by the Harness try to be consistent with this restriction.

* There was one system which did not provide the capability to delete units from the program library. On this system it was necessary to delete the entire program library when it filled up and create a new one.

One adaptation alternative to the zc_del* files is to use library level deletes or sublibraries. Rather than deleting units as they become obsolete, the ACES users could periodically delete and create the program library and re-enter the basic units compiled by Pretest Steps 1, 3, and 5. It may be possible to avoid recompiling either by setting up a sublibrary structure (where the basic units are in a shared library and the sublibrary for the working units is periodically deleted and recreated) or by using facilities to copy intermediate forms of the basic units from a saved library into the newly created one.

5.4.3 Modifying Tests

In evaluating a compilation system, it is often useful to explore the performance of some implementation-specific extensions. Portability is often not a strict requirement of a project. There are existing ACES tests which might be easily modified to test for some areas where implementation-dependent features are often provided. These include:

1. Alternative interrupt mechanisms

Many implementations for embedded targets provide alternatives to the LRM-specified mechanism to interface interrupts to a task entry. One common approach is to specify a procedure which is to be called when an interrupt occurs, with various constraints placed on the procedure (typically it must be a library procedure which cannot contain any tasking constructions or DELAY statements). Such variants can be much faster than the general LRM tasking construction, and the constraints placed on them may be quite acceptable. The tests in the interrupt subgroup of the tasking group may be used as models for a set of interrupt tests using implementation-dependent mechanisms for communicating with hardware interrupts.

2. Linkages to other languages

The ACES provides one test for linkage to a procedure without parameters that is written in assembly language. Some systems provide linkages to other languages, and some projects may be very interested in using such linkages. The test ms_il_interface_lang_assem_01 may be used as a model for these tests.

3. Multiple program libraries

Some compilation systems provide for multiple program libraries. The compile and link time for programs exploiting this capability can be rather different from the times for problems using only one library. The tests in the Systematic Compile Speed group can be used as models for additional tests in this area.

4. Operating system services

Most Ada compilation systems provide for access to operating system services. Projects targeted to a specific operating system may want to test the performance of specific OS features. While not necessarily portable to all other operating systems, the performance of some of the features may be critical to some projects. Examples of features which the user may want to test include: windowing system calls; OS process spawning; transaction processing systems interfaces; and database system interfaces.

While there are not individual ACES tests which could serve as specific models for exercising these services, the general approach of the ACES test suite may be appropriate.

Users who wish to add new tests to the ACES and include them in the analysis produced by Comparative Analysis should use the User-Defined (ud) group, as discussed in Section 5.5. It is also possible to add tests to the existing Harness and Analysis software and data files. See Section 9.1.6 "Adding Subgroups, Tests, and/or Main Programs" and Section 6.13 "ADDING NEW TESTS TO THE HARNESS".

5.5 User-Defined Benchmarks

Distribution directory "aces/tests/ud" (the User-Defined group) contains one file, "ud_tmplt.__". This file may be used as a template for up to 20 user-defined benchmarks (up to 10 for an Ada 83 implementation). The template file contains instructions for its use.

The Harness and Analysis Menu programs recognize two subgroups of the User-Defined group (the Ada 83 subgroup and the Ada 95 subgroup). When testing an Ada 83 implementation, only the Ada 83 subgroup is available. Ada 95 implementations may use both subgroups. Each subgroup has 10 predefined performance test names that are recognized by Harness and the Analysis tools.

WARNING: If the Harness user selects User-Defined test names that have not been created (by adapting the template file), then executing the generated script will produce errors (due to missing files).

6. RUNNING THE HARNESS

6.1 Tutorial On Using The Harness

This section contains some guidance on how the Harness might be used. It is essential that the System Name file ("zh_cosys.txt") be examined before beginning. It is highly recommended that users who will be evaluating more than one Ada compilation system adapt this file (at least by changing the system name) for each system to be evaluated. It is also recommended that the database file names (produced for each group) also be adapted for each system.

An algorithm for using the Harness is given below in Figure 6-1. This is certainly not the only way the Harness might be used, but it has proven helpful in previous work.



LOOP
   Select a group and its tests
   LOOP
      Build a command file for the selected tests
      Submit it (this step must be done outside the Harness)
      Update status
      Use Choose_by_status to select tests with non-valid times
      Examine these test results in the log file more closely
      EXIT WHEN out of time OR ELSE all tests in this group have been accounted for
   END LOOP
END LOOP


Figure 6-1 Harness Use Algorithm

The Harness always begins with the Harness test selection methods screen. This allows the user to choose tests by group, subgroup, or test name, as provided in Harness versions 1.0 through 2.0, or to select tests by performance topic. To select tests by performance topic, see Section 6.14 "SELECTING TESTS BY PERFORMANCE TOPICS". If the user chooses to select tests by group, subgroup, or test name, the Harness displays the Groups display. (If the user has identified the system as supporting Ada 95, all 21 groups are listed; otherwise only the 18 groups whose tests are Ada 83-compatible will be available.) The first time the Harness is run, this display will show nothing selected and all tests in the No_data category. A group can be selected by typing its number. All the tests in this group can then be selected by entering "//all" (see Section 6.4 "BASIC SELECTION" and Section 6.5 "ADVANCED SELECTION" for a detailed discussion of how selections are made). After entering this, the display will show that the number of tests selected is equal to the number of tests in the group.

If "b" for Build is then typed, a screen like the one in Figure 6-22, Build Command - One Group - Compile Speed ON, will be displayed. See Section 6.10.7 "Build_com (B)", for more details on possible choices.

After Building the command files, the next step would be to run them (probably, but not necessarily, in batch mode). Gather the results in a log file. Then the Harness command Update_status_from_log (U) can be used to read the execution time results from this log file into a Harness database file (names are specified in the system name file ("zh_cosys.txt")). Then the totals in the Show_Groups display (see Figure 6-15) can be observed, and the results for individual tests can be examined in the Show_Tests display (see Figure 6-17). The Choose_by_status command is helpful if the user wishes to examine only the tests that do not have valid execution times. For example, the tests in the status range 4..14 (err_unreliable_time .. err_no_data) might be chosen. Or, if the user cares most about Ada failures, the tests in the status range 8..14 (err_packaging .. err_no_data) might be chosen, since status codes 4..7 represent difficulties in gathering times, but not difficulties in compiling, linking, or running the test programs. The official status list is shown in Figure 6-19, Choose Tests - Sample Screen.

In using the Choose_by_status command you do need to remember that you lose the current selection information. The command works by selecting a subset of the already selected tests in the current group. If the current selections are important to you, you can save them by using the SAve_selection command (see Section 6.9.8 "SAve_selection (SA)"). Every time the Harness is exited, you have the option to save the current selection information in a file which will be automatically read when the Harness is restarted. However, you may also save selection information at any time, and you may read this selection information back in at any time (see Section 6.9.7 "Read_Selection (R)").

The Harness also saves execution time status information which has been read into a Harness database. This information may also be later used by the analysis programs: Single System Analysis (SSA) and Comparative Analysis (CA). The Harness does not gather Compile or Link times. For this purpose you must use Condense. The Harness only monitors and reports on execution time results.

There are 25 tests in the Systematic_Compile_Speed group for which execution time results are not produced. These tests are labeled in the Show_Tests display as library command or compile only. Since there are no execution results associated with these problems, and the Harness only recognizes execution results (not compilation times), the Harness will never display any results for them. Furthermore, you cannot select based on these categories. This is an inconsistency in the Harness. One helpful alternative is to use the SET_status (SET) command to set the status of these tests (or any tests for which you do not care to gather data) to Not_applicable (NA).

6.2 The System Name File

The system name file ("zh_cosys.txt") is the interface between the Harness and the user file system for both input and output files. The subsections below explain each part of this file. Here, as with the analysis tools, this file, which has a name hard coded into the program, allows the user to easily change almost every other file name.

All output file names are concatenated with the output_path; input files must be in the current directory, or include the path in their name.

The database files (>exe_condensed) are input and output files; they are treated as input files in that they do NOT use the output_path. See Section 6.2.3 "Execution Time Database File(s)".

Many entries are REQUIRED, but all except the system_name have default values which are assigned in the program, and overwritten from the system name file if they are present.

6.2.1 Weight File(s) needed by the Harness

The Harness needs the same Weight (structure) information that is needed by the analysis programs. However, because the Harness operates with data arrays that are limited in size (and therefore only contain one group at a time), this information is packaged in a separate file for each group. The file names are given in the system name file ("zh_cosys.txt") with entries that begin with ">weight_file" as in Figure 6-2. A single structure file could be used by the Harness, but this set of entries, one for each group, is not optional. If a single file were used, that file name would need to appear on the line for each group.



>weight_file - (ap) application := zh_ap.txt
>weight_file - (ar) arithmetic := zh_ar.txt
>weight_file - (cl) classical := zh_cl.txt
. . .
>weight_file - (sy) systematic_compile_speed := zh_sy.txt
>weight_file - (tk) tasking := zh_tk.txt
>weight_file - (ud) user_defined := zh_ud.txt



Figure 6-2 Weight (Structure) Files Used by the Harness

6.2.2 Execution Time Log File

This entry is optional and may not be used by many Harness users. When the Harness starts up, if this log file is present in the current directory, then it is automatically read and the information is used to update the appropriate Harness database(s). The Update_status_from_log command is another option which may be used to interactively perform the same function.

>exe_log := test.log

Experience with the Harness has prompted the following recommendation: leave this field blank in the system name file (or make sure that no file with this name is present) and use the Update_status command to read new log files as they become available.

6.2.3 Execution Time Database File(s)

The execution time database(s) can be created by Harness or by Condense. The Harness will create a separate execution time database file for each group, while Condense creates one database file for execution time data and one for compile/link data. The Harness can use either a single file or one for each group, but the appropriate name(s) must be given for each group in the system name file ("zh_cosys.txt"). See Figure 6-3 for an example. The Harness will be slower if there is only one database file. Also, it will not be possible to set statuses and save this information. For these reasons, it is recommended that the Harness use the separate database files that it creates. Later, when the analysis tools are used, there is an option for Condense to merge these Harness database files so that the results can be used by the analysis tools.



Note: these file names could all be identical if Condense creates the database file.



>exe_condensed - (ap) application := test_ap.dbs
>exe_condensed - (ar) arithmetic := test_ar.dbs
>exe_condensed - (cl) classical := test_cl.dbs
. . .
>exe_condensed - (su) subprograms := test_su.dbs
>exe_condensed - (sy) systematic_compile_speed := test_sy.dbs
>exe_condensed - (tk) tasking := test_tk.dbs
>exe_condensed - (ud) user_defined := test_ud.dbs


Figure 6-3 Harness Database Files

6.2.4 Files Required by the Build Command

The following files are REQUIRED by the Build command if a Build is requested for the Systematic_Compile_Speed group.

>sy_template_main := zh_sy.tpl

>sy_template_1000 := zh_sy000.tpl

>sy_template_cu := zh_sy_cu.tpl

The following files are REQUIRED by the Build command if Ada library deletions are requested.

>lib_delete - (ap) application := zh_ap.lib

>lib_delete - (ar) arithmetic := zh_ar.lib

>lib_delete - (cl) classical := zh_cl.lib

. . .

>lib_delete - (sr) storage_reclamation := zh_sr.lib

>lib_delete - (su) subprograms := zh_su.lib

--lib_delete - (sy) systematic_compile_speed := --------- NOT NEEDED

>lib_delete - (tk) tasking := zh_tk.lib

>lib_delete - (ud) user_defined := zh_ud.lib

6.2.5 Summary Status Information File

The Harness writes the summary group status information whenever the Quit command is issued. During startup, this file, if present, is read and used to initialize the group totals which appear in the Show_Groups display. In this way, the Harness does not have to reread all of the database files to produce these totals. Whenever a group is selected, its database file is read and these status totals are brought up-to-date. The names for these files are given in the system_name file. The user does not ordinarily need to know what these file names are, since the process is transparent.

>harness_summary := <test>.sum

The default name is "test.sum". If present and corrupted, harness will not start. Delete the file and restart Harness. It will create a new file.

6.2.6 Selection Summary File

Whenever the Harness terminates, the user is asked if a selection summary file should be written. This file saves the current selection information on all groups, subgroups, and tests. If present, it will automatically be read when the Harness is restarted, and all selection information will be restored. The name for that file is given in the system_name file. The user does not ordinarily need to know what that file name is, since the process is transparent.

>harness_selection := <test>.sel

The default name is "test.sel". If present and corrupted, harness will not save any results. Delete the file and restart Harness. It will create a new file.

6.2.7 Optional Files Produced by the Harness

Build_com (B) - group abbreviation & script_suffix

Write_Groups (WG) - "groups.txt"

Write_Subgroups (WS) - group abbreviation & ".sub"

Write_Tests (WT) - group abbreviation & ".tst"

Write_Chosen (WC) - group abbreviation & ".cho"

6.3 General Harness Command Information

Multiple commands cannot be entered on one line; however, multiple selections can be made without requiring that each selection be entered on a separate line. Harness commands other than selections are either alpha commands or numeric commands. Alpha commands are entered by typing the command or its abbreviation. Numeric commands are entered by typing their number (which is associated with a description in the current menu) and (sometimes) an optional parameter (often a file name). Some of these commands are "toggle" commands which switch a value between two choices.

An attempt was made to produce an interface that limits the user as little as possible. Any command that is "possible" is accepted at any point; other input causes an error message to be generated, but has no other effect. Some of the commands in the menus for operations like Write, Build, and Update look like selection commands, but have other uses (see Section 6.8). In addition, only one toggle command can be entered per line, and only one command that can take a parameter can be entered per line.

Note: If a user enters a command incorrectly, an error message will be displayed. This may cause the display to scroll past the point where it can be read. The display can be refreshed by typing the command for this menu. For example, if the user is looking at the groups menu, type Show_Groups (SG).

6.4 Basic Selection

When groups, subgroups, tests, or statuses are displayed, the user may make a selection by typing the appropriate number (listed to the left). Preceding the number by a minus (-) sign will deselect. Ranges of numbers may also be entered in Ada style: "3..7". Ranges preceded by a minus sign will deselect: "- 3 .. 7" will undo "3..7". "all" or "+all" selects all. "-all" deselects all ("*" is a synonym for "all").

6.4.1 Select Groups

Only one group can be the current group, which is the group operated on by all but the Build command. The Build command operates on all currently selected groups. Selecting or deselecting a group does not select or deselect the subgroups or the tests in the group.

6.4.2 Select Subgroups

All subgroups can be selected at once. Selecting or deselecting a subgroup does not select or deselect tests in the subgroup.

6.4.3 Select Tests

All tests can be selected at once. You may select individual tests or ranges of tests. Selecting or deselecting tests does not change the selection status of the group or the subgroup of which they are a part.

6.4.4 Select Statuses

All statuses can be selected at once. You may select individual statuses or ranges of statuses. Status selection is used to reduce the set of already selected tests to the subset consisting of those that have a given status (or have a status in a given range of statuses).

6.4.5 Set Status

Only one status may be selected for this purpose. After typing the "Do" command, the existing status for all selected tests will be changed to the selected status.

6.5 Advanced Selection

6.5.1 Semantics Of Selection

This section discusses the effect of making group, subgroup, and test selections. The essential point to remember is that all of these selections are independent: selecting one has no effect on the selection status of any other part of the structure.

6.5.1.1 Groups

The last group selected is always the current group. This is important for assessing the behavior of other commands. ALMOST ALL ACTION REFERS TO THE CURRENT GROUP. Commands dealing with subgroups and tests, such as Write_Subgroups or Show_Tests assume the current group. If no current group has been specified, these commands cannot be invoked.

The Advanced Selection commands, discussed below, can sometimes operate independently of the current group. However, they operate only on the current group when no group is explicitly denoted in the command.

The only command in which the current group plays no special role is the Build command. This command operates on all selected groups (and within these groups, on all selected tests). In no other command does it matter which groups are selected, EXCEPT for the current group.

6.5.1.2 SubGroups

Subgroup selections are the least important selections. The reason subgroups are included is to ease the task of dealing with all of the performance tests. There is a current subgroup which is similar to the current group in that it is the last subgroup selected. However, it is never necessary to select any subgroup.

The Advanced Selection commands show how the user may use subgroups to facilitate the selection of tests in several ways. For example, users may easily select all of the tests in a subgroup, or the first five tests in every subgroup.

6.5.1.3 Tests

The performance tests are the core of the ACES. Ultimately, the goal of the Harness is to make it easy for the user to monitor the status of each test, and to select tests of interest for further action. The group and subgroup organization is designed to further that goal.

The Advanced Selection commands allow the user to select any test, or any combination of tests at any time, from the group menu, or any subgroup menu, or any test menu. These commands may be more powerful than most users will need. However, they make it easy for the user to manipulate all of the tests in selected groups and/or selected subgroups.

6.5.2 Basic Selection

The selection command, in its simplest form, operates on the choices displayed in the current menu. In all cases, the choices are denoted by a number given on the left side of the line, before the item to be selected (or chosen - these two words are used interchangeably). For example, in the Groups Menu below, the Statements group can be chosen by typing its number (16) at the prompt. Notice that this group could be chosen, even if it were not visible in the display. All groups can be selected by typing "all" (the all command). Deselection occurs by preceding the choice with a minus sign: "-3" would deselect group 3, and "-all" deselects all groups. In the display, selection is shown with a leading plus, deselection with a leading minus. Multiple individual selections can be made on the same command line, including individual choices and ranges. In Figure 6-4, groups 3, 11, 14, 15, 16, and 17 are being chosen. The command to deselect these groups is "-3,-11,-14..17".



----------------------------------------------------------------------------
----- Groups ---------------------------------------------------------------
        Va Nu Dy Un Vr Ng Xc Pk Cm Ln Rn Dp In  No NA Wd  Sum
----------------------------------------------------------------------------
- 15 protected_types (pt)            2 subgroups  # tests chosen = 0
        -  -  -  -  -  -  -  -  -  -  -  -  -    12  -  -   12
- 16 statements (st)                10 subgroups  # tests chosen = 0
        -  -  -  -  -  -  -  -  -  -  -  -  -    92  -  -   92
- 17 storage_reclamation (sr)        2 subgroups  # tests chosen = 5
        -  -  -  -  -  -  -  -  -  -  -  -  -    65  -  -   65
- 18 subprograms (su)                8 subgroups  # tests chosen = 0
        -  -  -  -  -  -  -  -  -  -  -  -  -    80  -  -   80
- 19 systematic_compile_speed (sy)  13 subgroups  # tests chosen = 0
        -  -  -  -  -  -  -  -  -  -  -  -  -   109  -  -  109
- 20 tasking (tk)                    9 subgroups  # tests chosen = 12
        -  -  -  -  -  -  -  -  -  -  -  -  -   142  -  -  142
- 21 user_defined (ud)               2 subgroups  # tests chosen = 0
        -  -  -  -  -  -  -  -  -  -  -  -  -    73  -  -   73
############################# end of list ##################################
Sum:    -  -  -  -  -  -  -  -  -  -  -  -  -  1863  -  -  1863
----------------------------------------------------------------------------
pick groups by number   SET_status   Build_com   Choose_by_status   Help
Show | Write (Groups | Subgroups | Tests | Chosen)   Previous   Next   Quit
=> 3,11,14..17 <cr>


Figure 6-4 Group Level Selection

Selecting and deselecting may be mixed in the same command. An example is "-3,5..7,9,-12..17". Contradictory instructions may also be given, but the last one will prevail. Thus, "-3,5..7,3,-5..7" will leave group three selected and groups 5, 6, and 7 deselected. (As with all selection commands, this is true regardless of the original selection status of the groups.)

The selection process is exactly the same, regardless of whether groups, subgroups, or tests are being displayed. The effect of selection varies, depending on what is being selected, but the following rules always apply. Selecting (or deselecting) a group has no effect on the selection status of the subgroups and tests in the group. Selecting (or deselecting) a subgroup has no effect on the selection status of the tests in the subgroup. In the groups displayed above, 12 tests in group 20 (tasking) have been selected, but group 20 is not currently selected. (Group 17 also has tests selected.)

There are several ways for a user to select tests. The user may select a group (the last group chosen becomes the current group). The user may then enter the command to show tests (ST). The test menu will appear as in Figure 6-5. The user might then decide to choose test 13 by typing the number 13 following the prompt. Or the user might want to choose all tests in subgroup 1 and do so by entering the range "1..11". If the user also wants to choose all tests in subgroup 2, this can be done with the range "12..64" (64 = 12 + 53 - 1). However, this approach is open to error: some of us occasionally make mistakes when doing mental arithmetic. Another technique is discussed in the section on ADVANCED SELECTION below. It is also possible to select tests using the Get_Test_name_list command discussed in Section 6.9.3.



------ All Tests in tasking (tk) \\20 --------------------------------------
        Va Nu Dy Un Vr Ng Xc Pk Cm Ln Rn Dp In  Sum
############################ start of list #################################
\  1 Subgroup: interrupt (in)                 chosen/total = 0/11
-  1  1 int_00 *1  -  -  -  -  -  -  -  -  -  -  -  -    1
-  2  2 int_01 *1  -  -  -  -  -  -  -  -  -  -  -  -    1
-  3  3 int_02 *1  -  -  -  -  -  -  -  -  -  -  -  -    1
-  4  4 int_03 *1  -  -  -  -  -  -  -  -  -  -  -  -    1
-  5  5 int_04 *1  -  -  -  -  -  -  -  -  -  -  -  -    1
-  6  6 int_05 *1  -  -  -  -  -  -  -  -  -  -  -  -    1
-  7  7 int_06 *1  -  -  -  -  -  -  -  -  -  -  -  -    1
-  8  8 int_07 *1  -  -  -  -  -  -  -  -  -  -  -  -    1
-  9  9 int_08 *1  -  -  -  -  -  -  -  -  -  -  -  -    1
- 10 10 int_09 *1  -  -  -  -  -  -  -  -  -  -  -  -    1
- 11 11 int_10 *1  -  -  -  -  -  -  -  -  -  -  -  -    1
\  2 Subgroup: language_feature_tests (lf)    chosen/total = 0/53
- 12  1 task_01 *1  -  -  -  -  -  -  -  -  -  -  -  -    1
- 13  2 task_02 *1  -  -  -  -  -  -  -  -  -  -  -  -    1
----------------------------------------------------------------------------
pick tests by number   SET_status   Build_com   Choose_by_status   Help
Show | Write (Groups | Subgroups | Tests | Chosen)   Previous   Next   Quit
=>


Figure 6-5 Test Level Selection

6.5.3 Intermediate Selection

The Harness supports a selection syntax which allows all selections to be made from any test or subgroup menu, or from the groups menu. In this syntax there is always a complete specification of any selection. For example, to designate the second test in the second subgroup in the Tasking group (group 20), the notation is "20/2/2". This notation is always top-down - group/subgroup/test. The first number given is always relative to the current menu.

6.5.3.1 Group Level Commands

At the group level, entering "20/2/2" says: in group 20, in subgroup 2, select test 2. This is the same test we might have chosen in the test menu above by typing the number 13. Notice that there are two numbers before each test. The first is the group test number; the second is the subgroup test number. These commands can be more complicated. To select all tests in all subgroups in the Tasking group, we would type "20/all/all". To select all tests in the second subgroup the command is "20/2/all". To select all tests in the first and third subgroups the command is "20/1,3/all".

It is also possible to skip the subgroup level entirely. The command "20//all" will select all tests in group 20. Entering "20//13" will select the same test that was selected above with the command "20/2/2", which is the same test we selected by entering "13" at the test level.

Note: these command examples above only show test selection. This form of the selection command only acts on the last level designated. Thus "20/2/2" does not select or deselect group 20 nor does it select or deselect subgroup 2 in group 20. Only test 2 is selected. "20/2" would select subgroup 2 in group 20. "20" would select group 20. "20/2/-2" deselects test 2 in subgroup 2 in group 20. "20/-2" deselects subgroup 2 in group 20. "-20" deselects group 20. The ADVANCED SELECTION commands section below discusses how to make selections at more than one level with just one command. Often this facility is not needed.

6.5.3.2 SubGroup Level Commands

At the subgroup level, entering "2/2" will select the second test in the second subgroup. The group is implied: it is whatever group is the current group (the group whose subgroups we are looking at). Entering "2/all" will select all tests in the second subgroup. The command "2/1..5,8" will select the first through fifth tests in this subgroup, as well as test 8.

Any command that can be given at the group level can be entered at the subgroup level. This is accomplished by preceding the group level command by the up-level symbol "\". For example, to select the second test in the second subgroup in the 20th group from the subgroup level, the command is "\20/2/2". If you are looking at the subgroups display for the 20th group, this is unnecessary, but this form of the command works from ANY subgroup display, regardless of the group.

6.5.3.3 Test Level Commands

All of the commands possible at the group and subgroup level can be given from the test level. First, raise your level and then follow it with the appropriate command. From the test level, the command "\\20/2/2" will select the second test in the second subgroup in the 20th group. This command works even if the user is looking at the tests for group 1, Applications.

Figure 6-6 gives equivalent commands for selecting the same set of tests, depending on the level at which the command is given. Each column gives different commands for selecting the same tests, with each row representing a level. For the Subgroup level, it is assumed that the current group is 20. In the second and third columns, it is assumed that a specific subgroup has been selected. The commands in the first column all select the second test in the second subgroup of group 20 (the current group). The commands in the second column select the 13th test in group 20 (which is the second test in the second subgroup of group 20). The commands in the third column again select the 13th test in group 20. The fourth column commands have a different effect; they select all the tests in the first subgroup of group 20.



Level Commands


------------------------------------------------------------

Group Level 20/2/2 20//13 20//13 20/1/all

SubGroup Level 2/2 -- \20//13 1/all

Test Level \2/2 13 \\20//13 \1/all

------------------------------------------------------------


Figure 6-6 Corresponding Commands

It is not recommended that users examine the status results of tests and make selections in an entirely different group, but this is allowed. A more common use of this facility would be to select tests by subgroup while in the test menu. A user who wishes to select all of the tests in the third subgroup from the test menu would type "\3/all".

6.5.4 Advanced Selection

The difference between Intermediate Selection and Advanced Selection is the use of explicit pluses in the path. The command "20/2/2" only selects at the test level because there is no selection asked for at the group or subgroup level. "+20/+2/2" will select group 20, select subgroup 2 in group 20, and mark the second test in this subgroup as selected. This command is not as intuitive as the Intermediate version, but it is extremely powerful and allows the user to explicitly select every group, subgroup, and test referenced in the command.

There is an inconsistency between the requirement for an explicit plus to select a group or subgroup in the path and the fact that no explicit plus is required for the last entry in a selection, nor when doing basic selection. While consistency was a goal in the design of the language, requiring an explicit plus for selections at the basic level was felt to be unintuitive. Ada, like most programming languages, assumes that an unsigned number is positive, and this is the convention when writing and using numbers generally. To reduce the inconsistency between the basic and advanced selection modes, explicit pluses are not required at the lowest level. Users who are bothered by such inconsistencies may always use explicit pluses and minuses - and will always get these results: pluses will select; minuses will deselect; using no sign in the path will not change the selection status of the path.
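

These three outcomes can be captured in a few lines of code. The following Ada fragment is an illustrative sketch only - the program and function names are invented, and it is not Harness source code - showing the effect of a sign, or its absence, on a single component of a selection path.

with Ada.Text_IO; use Ada.Text_IO;

procedure Sign_Semantics is

   type Status_Change is (Selected, Deselected, Unchanged);

   --  Effect of one path component such as "+20", "-2", or "20".
   --  The final component of a path always selects (or deselects);
   --  an unsigned intermediate component changes nothing.
   function Effect_Of (Item : String;
                       Last : Boolean) return Status_Change is
   begin
      if Item (Item'First) = '-' then
         return Deselected;
      elsif Item (Item'First) = '+' or else Last then
         return Selected;
      else
         return Unchanged;
      end if;
   end Effect_Of;

begin
   Put_Line (Status_Change'Image (Effect_Of ("+20", Last => False)));
   --  SELECTED   : explicit plus selects group 20
   Put_Line (Status_Change'Image (Effect_Of ("20", Last => False)));
   --  UNCHANGED  : unsigned intermediate level is left alone
   Put_Line (Status_Change'Image (Effect_Of ("2", Last => True)));
   --  SELECTED   : the last entry needs no explicit plus
   Put_Line (Status_Change'Image (Effect_Of ("-2", Last => True)));
   --  DESELECTED : minus always deselects
end Sign_Semantics;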

6.5.5 Miscellaneous Examples

To select all groups and all of the tests in the groups:

group level: "+all//all"

subgroup level: "\+all//all"

test level: "\\+all//all"

note: Without the plus in front of the first all, every test in every group will be selected, but the selection status of the groups will remain as before.

To select all groups and the first 50 tests in each group:

group level: "+all//1..50"

subgroup level: "\+all//1..50"

test level: "\\+all//1..50"

note: If a group does not have 50 tests, then all tests in that group will be selected. This is not considered an error.

note: Without the plus in front of the first all, the first 50 in every group will be selected, but the selection status of the groups will remain as before.

To select all groups and, within each group, every tenth test up through test 100:

group level: "+all//10,20,30,40,50,60,70,80,90,100"

subgroup level: "\+all//10,20,30,40,50,60,70,80,90,100"

test level: "\\+all//10,20,30,40,50,60,70,80,90,100"

note: If a group does not have 100 tests, then every tenth test in that group will be selected. This is not an error.

note: Without the plus in front of the first all, the selection status of the groups will remain as before.

To select the first five tests in all subgroups in groups 1, 3, 12:

group level: "1,3,12/all/1..5"

subgroup level: "\1,3,12/all/1..5"

test level: "\\1,3,12/all/1..5"

note: If a subgroup does not have 5 tests, then some selections will be ignored. This is not considered an error.

To mix selections and deselections:

group level: "1,+3,12/all/1..5,-6..99"

note: This command will not change the selection status of groups 1 and 12; group 3 will be selected. Then, in each subgroup of all three groups, tests 1..5 will be selected and tests 6..99 will be deselected. Any inapplicable actions will be ignored.



WARNING: The selection language is powerful enough to get unwary users into trouble. Caution is advised.



6.5.6 Selection Command Grammar

This section on the formal grammar (see Figure 6-7 and Figure 6-8) is entirely optional and is provided for those with an interest in such issues.



Formal Grammar: [] means zero or one occurrence


{} means zero, one, or many occurrences



selectCommand :== [ upCommand ] [ upCommand ] [ selection ] [ downCommand ]
                  [ selection ] [ downCommand ] selection

upCommand :== "\"

downCommand :== "/"

selection :== range { "," range }

range :== number [ ".." number ] | [ sign ] AllCommand

number :== [ sign ] unsigned_integer

sign :== "+" | "-"

AllCommand :== "all" | "*" -- case insensitive

Notes: The final selection is not optional.

       Selections never precede going up a level. You only go up
       in order to be able to issue commands at a higher level.

       Out of range selections are ignored. Warning messages will
       be issued, but in some circumstances are just noise.

       Multiple selections on multiple levels are done with nested loops:

       FOR selected groups
           FOR selected subgroups in the selected groups
               FOR selected tests in the selected subgroups

       FOR selected groups
           FOR selected tests in the selected groups


Figure 6-7 Formal Grammar
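

For readers who want to experiment with the grammar, the following is a minimal, illustrative recognizer for it, written in Ada. It is a sketch only - it is not part of the ACES distribution, the name Parse_Selection is invented, and, for brevity, it does not enforce the limits of two up-levels and three selection levels that the Harness observes.

with Ada.Text_IO;
with Ada.Characters.Handling;

procedure Parse_Selection is

   use Ada.Text_IO;
   use Ada.Characters.Handling;

   Syntax_Error : exception;

   Input : constant String := "\\+20/+2/1..5,-6";  -- hypothetical command
   Pos   : Natural := Input'First;

   function Peek return Character is
   begin
      if Pos > Input'Last then
         return ASCII.NUL;                 -- sentinel: end of input
      else
         return Input (Pos);
      end if;
   end Peek;

   procedure Unsigned_Integer is
   begin
      if not Is_Digit (Peek) then
         raise Syntax_Error;
      end if;
      while Is_Digit (Peek) loop
         Pos := Pos + 1;
      end loop;
   end Unsigned_Integer;

   --  range ::= number [ ".." number ] | [ sign ] AllCommand
   procedure Parse_Range is
   begin
      if Peek = '+' or else Peek = '-' then
         Pos := Pos + 1;                   -- sign
      end if;
      if Peek = '*' then
         Pos := Pos + 1;                   -- AllCommand, short form
      elsif Pos + 2 <= Input'Last
        and then To_Lower (Input (Pos .. Pos + 2)) = "all"
      then
         Pos := Pos + 3;                   -- AllCommand, case insensitive
      else
         Unsigned_Integer;
         if Peek = '.' then                -- ".." number
            Pos := Pos + 1;
            if Peek /= '.' then
               raise Syntax_Error;
            end if;
            Pos := Pos + 1;
            if Peek = '+' or else Peek = '-' then
               Pos := Pos + 1;
            end if;
            Unsigned_Integer;
         end if;
      end if;
   end Parse_Range;

   --  selection ::= range { "," range }
   procedure Selection is
   begin
      Parse_Range;
      while Peek = ',' loop
         Pos := Pos + 1;
         Parse_Range;
      end loop;
   end Selection;

begin
   while Peek = '\' loop                   -- [ upCommand ] [ upCommand ]
      Pos := Pos + 1;
   end loop;

   loop                                    -- selections separated by "/";
      if Peek = '/' then                   -- an empty level is an implied
         Pos := Pos + 1;                   -- group or subgroup, as in "20//13"
      else
         Selection;
         exit when Peek /= '/';
         Pos := Pos + 1;
      end if;
   end loop;

   if Pos <= Input'Last then
      raise Syntax_Error;                  -- trailing characters
   end if;

   Put_Line ("""" & Input & """ is a well-formed selection command");
exception
   when Syntax_Error =>
      Put_Line ("syntax error at position" & Natural'Image (Pos));
end Parse_Selection;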



Allowed Combinations: All Implied and Actual Parameters



"(select)" means that selection is optional


group level commands

s/s (select) group / select subgroup

s//s (select) group / / select test

s/s/s (select) group / (select) subgroup / select test


/s implied group / select subgroup



//s implied group / / select test


/s/s implied group / (select) subgroup / select test

subgroup level commands

s/s (select) subgroup / select test

/s implied subgroup / select test

subgroup level - goto group level (all group level commands available)

\s \ select group

\/s \ implied group / select subgroup

\s/s \ (select) group / select subgroup

\//s \ implied group / / select test

\/s/s \ implied group / (select) subgroup / select test

\s//s \ (select) group / / select test

\s/s/s \ (select) group / (select) subgroup / select test

test level - goto subgroup level (all subgroup level commands available)

\s \ select subgroup

\/s \ implied subgroup / select test

\s/s \ (select) subgroup / select test

test level - goto group level (all group level commands available)

\\s \ \ select group

\\/s \ \ implied group / select subgroup

\\s/s \ \ (select) group / select subgroup

\\//s \ \ implied group / / select test

\\/s/s \ \ implied group / (select) subgroup / select test

\\s//s \ \ (select) group / / select test

\\s/s/s \ \ (select) group / (select) subgroup / select test


Figure 6-8 All Allowed Selection Command Combinations

6.6 Display Commands

Sample display screens are shown below in Section 6.10 "SAMPLE DISPLAY SCREENS". These commands (and all non-numeric commands) are selected by typing enough letters of the command to uniquely identify them (underscores are optional), or by typing their abbreviation (the capital letters in the command name). The short form of each command is given in parentheses following the command name.

Choose_by_status (C) - Select a subset of the already selected groups, subgroups, or tests based on their status codes. Any command can be entered at this level.

Show_Groups (SG) - Display a list of current groups. Any command can be entered at this level.

Show_Subgroups (SS) - Display a list of subgroups in the current group. Any command can be entered at this level.

Show_Tests (ST) - Display a list of tests in the current group. Any command can be entered at this level.

Show_Chosen (SC) - Display a list of selected tests in the current group. Any command can be entered at this level.

6.7 Commands That Generate Files

Sample output is shown later. These commands (and all non-numeric commands) are selected by typing their abbreviation (the capital letters in the command name). These commands can also be selected by typing enough letters of their names to uniquely identify them (underscores are optional).

6.7.1 Build_com (B)

Construct a command script for processing the currently selected tests in the currently selected groups. The file name will be group abbreviation & script-suffix. Dummy tests and main programs are generated at the same time.

6.7.2 Write_Groups (WG)

Write the same information as the Show_Groups Command. The file name will be "groups.txt". All status codes will be displayed.

6.7.3 Write_Subgroups (WS)

Write the subgroups in the current group, as the Show_Subgroups command does. The file name will be the group abbreviation & ".sub". All status codes will be displayed.

6.7.4 Write_Tests (WT)

Write all of the tests in the current group. The file name will be group abbreviation & ".tst". All status codes will be displayed.

6.7.5 Write_Chosen (WC)

Write the current selected tests in the current group. The file name will be group abbreviation & ".cho". All status codes will be displayed.

6.8 List Of Non-Selection Commands

All of the non-numeric (non-selection) commands are listed in Figure 6-9. This display is available from the Harness by entering "help help"; the user will then be prompted for a specific command for which further information will be displayed. (Note that the two commands with abbreviation "C" are never available at the same time, so there is no ambiguity.)



------------------------------------------------------------------------


Command Name Abbreviation Command Name Abbreviation


------------------------------ ------------------------------


All (A) Screen_Length (SL)


Build_command (B) SEt_status (SE)

Cancel (C) Show_Chosen (SC)

Choose_by_status (C) Show_Groups (SG)

Do (D) Show_Subgroups (SS)

Find_test (F) Show_Tests (ST)

Get_test_name_list (G) Update_status_from_log (U)

Help (H) WArning_toggle (WA)

Next (N) Write_Chosen (WC)

Previous (P) Write_Groups (WG)

Quit (Q) Write_Subgroups (WS)

Read_selection (R) Write_Tests (WT)

SAve_selection (SA)


Figure 6-9 All Non-numeric Commands

6.9 Miscellaneous Commands

These commands (and all non-numeric commands) are selected by typing their abbreviation (the capital letters in the command name). These commands can also be selected by typing enough letters of their names to uniquely identify them.

6.9.1 WArning_toggle (WA)

This toggle switch controls whether most messages require the user to enter a carriage return to continue. Help messages always require a carriage return to continue.

6.9.2 Find (F)

Operates on complete test names, or on name fragments consisting of at least the first two to five characters - that is, the group and subgroup abbreviations. Test names have the form gg_ss_test_name, where "gg" is the group abbreviation and "ss" is the subgroup abbreviation. Truncated names, as displayed in the Harness or in Comparative Analysis, cannot be processed. If only the group and subgroup can be identified, the command returns the first test in the subgroup. Figure 6-10 below is a sample output display from this command.



=> f ap_sd


----------------------------------------------------------------------------

== Group = APPLICATION Group Number = 1

== SubGroup = symmetric_deadzone SubGroup Number = 14


== Test Name = ap_sd_sym_deadzone_01 Test Number in Group = 93



Test Number in SubGroup = 1


Enter carriage return <cr> to continue . . .


Figure 6-10 Find Test Name - Sample Output
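

As a worked illustration of this naming convention, the following Ada fragment extracts the group and subgroup abbreviations from the name shown in Figure 6-10. It is a sketch only; the procedure name is invented, and the Harness itself performs a more thorough lookup against the weights files.

with Ada.Text_IO; use Ada.Text_IO;

procedure Find_Prefix is
   --  A full test name (or a fragment of at least five characters)
   --  has the form gg_ss_test_name.
   Name : constant String := "ap_sd_sym_deadzone_01";
begin
   if Name'Length < 5 or else Name (Name'First + 2) /= '_' then
      Put_Line ("cannot identify group and subgroup in: " & Name);
   else
      Put_Line ("group abbreviation    = "
                & Name (Name'First .. Name'First + 1));      -- "ap"
      Put_Line ("subgroup abbreviation = "
                & Name (Name'First + 3 .. Name'First + 4));  -- "sd"
   end if;
end Find_Prefix;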

6.9.3 Get_test_name_list (G)

Reads in a list of test names and marks them as selected. This is in addition to any tests already selected. A sample prompt is displayed in Figure 6-11. The user supplies the file name by entering "1 = <file_name>". The file should contain a list of test names, one per line.



=> g


---------------------------------------------------------------------------

Get Test Name List From File

---------------------------------------------------------------------------

Enter the file name to read the current selection information from.

1 Test File name := ""

---------------------------------------------------------------------------

Make a choice by typing its number, followed by "=" <new-value>

Do Cancel Help Quit

When you are satisfied with your choices, type "do" to read the file

=>


Figure 6-11 Get Test Name - Sample Prompt

6.9.4 Scroll Commands

The scroll commands can take a parameter which must be greater than zero. Previous (or Next) with no parameter means up or down one screen.

6.9.4.1 Previous (P)

Scroll up 1 screen (or parameter lines), but not past the beginning of the list. When scrolling up one screen, the top line of the current screen becomes the last line of the new screen.

6.9.4.2 Next (N)

Scroll down 1 screen (or parameter lines), but not past the end of the list.

6.9.5 Help (H)

Context sensitive: accepts commands as parameters, as well as certain key words such as status, groups, subgroups, and tests. "Help Help" will display a list of all commands.

6.9.6 Quit (Q)

Quit the program immediately. The user is prompted so that a choice can be made about saving the current selection information.

6.9.7 Read_Selection (R)

Reads selection data from a user-given file which was written by SAve_selection. This command will completely redo the selection status of all tests, subgroups, and groups. Unlike Get_test_name_list (G), you will lose whatever selections are currently made. A sample screen of Read_Selection is depicted in Figure 6-12 below.



=> r


---------------------------------------------------------------------------

Read Selection Information ---

------------------------------------------------------------------------

Enter the file name to read the new selection information from.

1 Selection File name := "SG.sel"

---------------------------------------------------------------------------

Make a choice by typing its number, followed by "=" <new-value>

Do Cancel Help Quit

When you are satisfied with your choices, type "do" to read the info

=> 1=my_selection.text


Figure 6-12 Read_Selection - Sample Screen

6.9.8 SAve_selection (SA)

Writes the current selection data to a user-given file which can later be read by Read_Selection. This is the same kind of file that the Harness will (optionally) write upon quitting and always read (if present) upon startup. A sample screen of SAve_selection is shown in Figure 6-13 below.



=> sa


---------------------------------------------------------------------------

Save Selection Information

---------------------------------------------------------------------------

Enter the file name to write the current selection information to.

1 Selection File name := "SG.sel"

---------------------------------------------------------------------------

Make a choice by typing its number, followed by "=" <new-value>

Do Cancel Help Quit

When you are satisfied with your choices, type "do" to save the info

=> 1=my_selection.text


Figure 6-13 SAve_selection - Sample Screen

6.9.9 Screen_Length (SL) <numeric parameter>

Dynamically set the length of the display screen.

6.9.10 Update_status_from_log (U)

Update_status_from_log (see Figure 6-14) allows the user to give a new file name (the default name comes from the system name file entry for "<exe_log"). This command allows the user to update test status information whenever a new log file becomes available without exiting and re-entering the Harness.

When the Harness is processing a log file, a large number of warning messages may be generated. It is never necessary to examine these messages. However, if some results are missing, these messages can provide clues as to where in the log file to look for problems.

Some test problems in the systematic_compile_speed group are shown in the Harness display as either library commands or compile only. These 25 problems are never set by the update commands, since there are no execution time results associated with them.



=> u


---------------------------------------------------------------------------

Read Log File

---------------------------------------------------------------------------

Enter the new log file name or you may use the old one.

1 Log File name := "do.log"

---------------------------------------------------------------------------


Make a choice by typing its number, followed by "=" <new-value>



Do Cancel Help Quit


When you are satisfied with your choices, type "do" to read the log file

=>


Figure 6-14 Update_status_from_log - Sample Screen

6.10 Sample Display Screens

6.10.1 Show_Groups (SG)

Display the list of groups. Any command can be entered at this level (see Figure 6-19, Choose_by_status (C), for an explanation of column headers). Figure 6-15, Show Groups - Sample Screen, is an example of the output from this command. Depending on screen size, some or all of the 21 groups are displayed, along with the status counts for the tests in each displayed group, and the total status count for all tests in the ACES.

One possible anomaly in the display of the status information should be noted here. If the user had produced the database files by running Condense, the Harness would not be aware of the contents of all of these files. The Harness only reads the database file for the current group, if any, upon starting. If the database file(s) have been updated for any reason outside of the Harness, the user should select all groups and the Harness will read all of the database files and thereby update all of the status information.



----------------------------------------------------------------------------


----- Groups ---------------------------------------------------------------

Va Nu Dy Un Vr Ng Xc Pk Cm Ln Rn Dp In No NA Wd Sum

----------------------------------------------------------------------------

- 15 protected_types (pt) 2 subgroups # tests chosen = 0

- - - - - - - - - - - - - 12 - - 12

- 16 statements (st) 10 subgroups # tests chosen = 0

- - - - - - - - - - - - - 92 - - 92

- 17 storage_reclamation (sr) 2 subgroups # tests chosen = 0

- - - - - - - - - - - - - 65 - - 65

- 18 subprograms (su) 8 subgroups # tests chosen = 0

- - - - - - - - - - - - - 80 - - 80

- 19 systematic_compile Speed (sy) 13 subgroups # tests chosen = 0

- - - - - - - - - - - - - 109 - - 109

- 20 tasking (tk) 9 subgroups # tests chosen = 0

- - - - - - - - - - - - - 142 - - 142

- 21 user_defined (ud) 2 subgroups # tests chosen = 0

- - - - - - - - - - - - - 73 - - 73


############################# end of list ##################################



Sum: - - - - - - - - - - - - - 1863 - - 1863


----------------------------------------------------------------------------

pick groups by number SET_status Build_com Choose_by_status Help

Show | Write (Groups | Subgroups | Tests | Chosen) Previous Next Quit

=> 3,11,14..17 <cr>


Figure 6-15 Show_Groups - Sample Screen

6.10.2 Show_Subgroups (SS)

Display a list of subgroups in the current group. Any command can be entered at this level. (See Figure 6-19, Choose_by_status (C), for an explanation of column headers.) A sample screen of Show_Subgroups is shown in Figure 6-16 below.



----------------------------------------------------------------------------


----- Subgroups in tasking (tk) \20 ----------------------------------------

Va Nu Dy Un Vr Ng Xc Pk Cm Ln Rn Dp In No NA Wd Sum

############################ start of list #################################

- 1 interrupt (in) # tests chosen = 11

- - - - - - - - - - - - - 11 - - 11

- 2 language_feature_tests (lf) # tests chosen = 53

- - - - - - - - - - - - - 53 - - 53

- 3 language_feature_tests_abort (lb) # tests chosen = 6

- - - - - - - - - - - - - 6 - - 6

- 4 language_feature_tests_async_io (la) # tests chosen = 3

- - - - - - - - - - - - - 3 - - 3

- 5 language_feature_tests_select (ls) # tests chosen = 35

- - - - - - - - - - - - - 35 - - 35

- 6 rendezvous (rz) # tests chosen = 23

- - - - - - - - - - - - - 23 - - 23

- 7 storage_size (ss) # tests chosen = 3

- - - - - - - - - - - - - 3 - - 3

############################# end of list ##################################

Sum: - - - - - - - - - - - - - 142 - - 142

----------------------------------------------------------------------------

pick subgroups by number SET_status Build_com Choose_by_status Help

Show | Write (Groups | Subgroups | Tests | Chosen) Previous Next Quit


=>



Figure 6-16 Show_Subgroups - Sample Screen

6.10.3 Show_Tests (ST)

Display a list of tests in the current group. Any command can be entered at this level. (See Figure 6-19, Choose_by_status (C), for an explanation of column headers.) A sample screen of Show_Tests is shown below in Figure 6-17.



-----------------------------------------------------------------------------



---- All Tests in tasking (tk) \\20 -----------------------------------------



Va Nu Dy Un Vr Ng Xc Pk Cm Ln Rn Dp In Sum



############################ start of list #################################



\ 1 Subgroup: interrupt (in) chosen/total = 0/11


- 1 1 int_00 *1 - - - - - - - - - - - - 1

- 2 2 int_01 *1 - - - - - - - - - - - - 1

- 3 3 int_02 *1 - - - - - - - - - - - - 1

- 4 4 int_03 NA Not applicable (set by user)

- 5 5 int_04 *1 - - - - - - - - - - - - 1

- 6 6 int_05 *1 - - - - - - - - - - - - 1

- 7 7 int_06 No No data available

- 8 8 int_07 *1 - - - - - - - - - - - - 1

- 9 9 int_08 *1 - - - - - - - - - - - - 1

- 10 10 int_09 *1 - - - - - - - - - - - - 1

- 11 11 int_10 *1 - - - - - - - - - - - - 1

\ 2 Subgroup: language_feature_tests (lf) chosen/total = 7/53

- 12 1 task_01 *1 - - - - - - - - - - - - 1

- 13 2 task_02 Wd Erroneous test: has been withdrawn

- 14 3 task_03 *1 - - - - - - - - - - - - 1

- 15 4 task_04 *1 - - - - - - - - - - - - 1

----------------------------------------------------------------------------

pick tests by number SET_status Build_com Choose_by_status Help

Show | Write (Groups | Subgroups | Tests | Chosen) Previous Next Quit

=>


Figure 6-17 Show_Tests - Sample Screen

6.10.4 Show_Chosen (SC)

Display a list of selected tests in the current group. Any command can be entered at this level (see Figure 6-19, Choose_by_status (C), for an explanation of column headers). A sample screen of Show_Chosen is depicted in Figure 6-18 below.



----------------------------------------------------------------------------


----- Selected Tests in tasking (tk) \\20 ----------------------------------

Va Nu Dy Un Vr Ng Xc Pk Cm Ln Rn Dp In Sum

############################ start of list #################################


\ 1 Subgroup: interrupt (in) chosen/total = 7/11


+ 3 3 int_02 *1 - - - - - - - - - - - - 1

+ 4 4 int_03 NA Not applicable (set by user)

+ 6 6 int_05 *1 - - - - - - - - - - - - 1

+ 7 7 int_06 No No data available

+ 9 9 int_08 *1 - - - - - - - - - - - - 1

+ 10 10 int_09 *1 - - - - - - - - - - - - 1

+ 11 11 int_10 *1 - - - - - - - - - - - - 1

\ 2 Subgroup: language_feature_tests (lf) chosen/total = 3/37

+ 12 1 task_01 *1 - - - - - - - - - - - - 1

+ 13 2 task_02 Wd Erroneous test: has been withdrawn

+ 15 4 task_04 *1 - - - - - - - - - - - - 1

\ 3 Subgroup: language_feature_tests_abort (lb) chosen/total = 1/ 6

+ 50 2 delay_abort_02 - - - - - - - *2 - - - - - 2

\ 4 Subgroup: language_feature_tests_async_io (la) chosen/total = 2/ 3

+ 55 1 async_03 No No data available

+ 56 2 async_04 - - - - - - - 1 - - - *1 - 2

############################# end of list ##################################

pick tests by number SET_status Build_com Choose_by_status Help

Show | Write (Groups | Subgroups | Tests | Chosen) Previous Next Quit

=>


Figure 6-18 Show_Chosen - Sample Screen

6.10.5 Choose_by_status (C)

Select a subset of the already selected tests based on their status codes. See Figure 6-19 below for a sample screen of Choose_by_status selections.



-----------------------------------------------------------------------------


---- Choose Tests By OFFICIAL Status ----------------------------------------

############################ start of list #################################

- 1 valid Va Valid non-zero execution time

- 2 null_time Nu Execution times of 0.0


- 3 delay_problem Dy Measures time for delay statement



- 4 err_unreliable_time Un Timing measurement was variable


- 5 err_verification Vr Verification error

- 6 err_large_negative_time Ng Timing ERROR: large negative time

- 7 err_claim_excess_time Xc Claim test that ran out of time

- 8 err_packaging Pk Packaging error - might be OK

- 9 err_at_compilation_time Cm Compile time error

- 10 err_at_link_time Ln Link time error

- 11 err_at_execution_time Rn Run time error

- 12 err_dependent_test Dp System dependent test failure

- 13 err_inconsistent_results In Execution and compile inconsistency

- 14 err_no_data no No data available

- 15 not_applicable NA Not applicable (set by user)

- 16 err_withdrawn_test Wd Erroneous test: has been withdrawn

############################# end of list ##################################

pick statuses by number SET_status Build_com Choose_by_status Help

Show | Write (Groups | Subgroups | Tests | Chosen) Previous Next Quit

When you are satisfied with your choices, type "do" to make the selection(s)

=>


Figure 6-19 Choose_by_status - Sample Screen

6.10.6 SET_status (SET)

Only one status may be selected; it becomes the official status for all selected tests. Not all statuses may be set: only those preceded by a number. No status that involves valid numeric data may be set. Applicable undoes not_applicable; otherwise it has no effect. Setting err_no_data destroys existing data, while setting err_withdrawn_test or not_applicable merely hides existing data. To undo err_withdrawn_test and set everything to zero, use err_no_data. To undo err_withdrawn_test and restore the original state, use not_applicable and then applicable. (This is expected to be rare.) See Figure 6-20 below.



Semantics of changing status: apply the top horizontal line (7..17) to
existing data (1..16)

Codes: Add : add code (if not present): make it official status code
       -   : no change
       Rpl : replace old official status code with new code
       R-- : replace old official code with former code or no_data
       Des : replace official status code and DESTROY all old data

==========================================================================

\ new status: | Xc Pk Cm Ln Rn Dp In no Wd NA

+---------+ +------------------------------------------------------

old status \ | 7 8 9 10 11 12 13 14 15 16 17

-------------------+------------------------------------------------------

1 valid Va | Add Add Add Add Add Add Add Des Add Add -

2 null_time Nu | Add Add Add Add Add Add Add Des Add Add -

3 delay_proble Dy | Add Add Add Add Add Add Add Des Add Add -

4 err_unreliab Un | Add Add Add Add Add Add Add Des Add Add -

5 err_verifica Vr | Add Add Add Add Add Add Add Des Add Add -

6 err_large_ne Ng | Add Add Add Add Add Add Add Des Add Add -

7 err_claim_ex Xc | - Add Add Add Add Add Add Des Add Add -

8 err_packagin Pk | Add - Add Add Add Add Add Des Add Add -

9 err_at_compi Cm | Add Add - Add Add Add Add Des Add Add -

10 err_at_link_ Ln | Add Add Add - Add Add Add Des Add Add -

11 err_at_execu Rn | Add Add Add Add - Add Add Des Add Add -

12 err_dependen Dp | Add Add Add Add Add - Add Des Add Add -

13 err_inconsis In | Add Add Add Add Add Add - Des Add Add -

-------------------+

14 err_no_data no | Rpl Rpl Rpl Rpl Rpl Rpl Rpl - Rpl Rpl -

15 err_withdra Wd | Rpl Rpl Rpl Rpl Rpl Rpl Rpl Des - Rpl -

16 not_applica NA | Rpl Rpl Rpl Rpl Rpl Rpl Rpl Des Rpl - R--

-------------------+

17 applicable | -- not present in DB

-------------------+-------------------------------------------------------


Figure 6-20 Semantics of Changing Status

Sample output from the Set Status command may be found in Figure 6-21.



---------------------------------------------------------------------------


------ Set OFFICIAL Status for Selected Tests -----------------------------

############################ start of list #################################


valid Va Valid non-zero execution time



null_time Nu Execution times of 0.0


delay_problem Dy Measures time for delay statement

err_unreliable_time Un Timing measurement was variable

err_verification Vr Verification error

err_large_negative_time Ng Timing ERROR: large negative time

- 7 err_claim_excess_time Xc Claim test that ran out of time

- 8 err_packaging Pk Packaging error - might be OK

- 9 err_at_compilation_time Cm Compile time error


- 10 err_at_link_time Ln Link time error



- 11 err_at_execution_time Rn Run time error



- 12 err_dependent_test Dp System dependent test failure



- 13 err_inconsistent_results In Execution/compile inconsistency


- 14 err_no_data No No data available

- 15 not_applicable NA Not Applicable (Set by user)

- 16 err_withdrawn_test Wd Erroneous test-has been withdrawn

- 17 applicable Applicable (undo Not Applicable)

applicable is not a status - it is only used to undo NA

############################# end of list ##################################

pick ONE (1) status SET_status Build_com Choose_by_status Help

Show | Write (Groups | Subgroups | Tests | Chosen) Previous Next Quit

When you are satisfied with your choices, type "do" to set the status code

=>


Figure 6-21 Set_status - Sample Screen

6.10.7 Build_com (B)

This command will construct a command file for the current operating system and compilation system information. In most cases, this command also generates the dummy files and the main programs that are needed. The options are:

* Option 1, Compilation-Time Analysis. This option is always possible if you have selected all of the tests in each subgroup. (See Figures 6-22 and 6-23.) Otherwise, the command files generated will not gather compile time measurements. Compile (and link) time measurements are directly tied to executables, which in the ACES are main programs. In order for this data to be comparable across systems, main programs must contain the same tests. However, if you have selected only some of the tests in some subgroup, then the resulting main programs will not be comparable. This is the reason that Options 1 and 2 are linked together. If you request that the maximum number of tests per main program be changed, the organization of tests in main programs will also be changed, and we no longer have comparable units for the analysis of compile and link times. (See Figure 6-24.)



----------------------------------------------------------------------------


---- Building command files for application


----------------------------------------------------------------------------


1 Compilation Time Analysis is ON (Off) <toggle>

ON => Compile and Link Time Data Will Be Generated

=> Maximum number of tests / main program := 9

OFF => Compile and Link Time Data Will NOT Be Generated

2 Maximum number of tests / main program := 9 (1..999)

Changing tests/main program => Compilation Time Analysis := OFF

3 Ada Files DELETED (Saved) <toggle>

4 Include Files DELETED (Saved) <toggle>

5 Executables DELETED (Saved) <toggle>

6 Library Units DELETED (Saved) <toggle>

-

8 File name := "ap.unx"

9 Output Directory := "/usr/people/barbara/test/"

---------------------------------------------------------------------------

IF <toggle> command, type its number to change the value, ELSE

Make a choice by typing its number, followed by "=" <new-value>

Do Cancel Help Quit

When you are satisfied with your choices, type "do" to start the build

=>


Figure 6-22 Build Command - One Group - Compile Speed ON



---------------------------------------------------------------------------


---- Building command files for application

---------------------------------------------------------------------------

1 Compilation Time Analysis is OFF (On) <toggle>

ON => Compile and Link Time Data Will Be Generated

=> Maximum number of tests / main program := 9

OFF => Compile and Link Time Data Will NOT Be Generated


2 Maximum number of tests / main program := 9 (1..999)


Changing tests/main program => Compilation Time Analysis := OFF


3 Ada Files DELETED (Saved) <toggle>


4 Include Files DELETED (Saved) <toggle>

5 Executables DELETED (Saved) <toggle>

6 Library Units DELETED (Saved) <toggle>


8 File name := "ap.unx"



9 Output Directory := "/usr/people/barbara/test/"



Figure 6-23 Build Command - One Group - Compile Speed OFF



---------------------------------------------------------------------------


---- Building command files for application

---------------------------------------------------------------------------

1 Compilation Time Analysis Is NOT Possible

Compile and Link Time Data Will NOT Be Generated

2 Maximum number of tests / main program := 99 (1..999)

3 Ada Files DELETED (Saved) <toggle>

4 Include Files DELETED (Saved) <toggle>

5 Executables DELETED (Saved) <toggle>

6 Library Units DELETED (Saved) <toggle>

-

8 File name := "ap.unx"

9 Output Directory := "/usr/people/barbara/test/"


Figure 6-24 Build Command - One Group - Compile Speed Impossible

* Option 2, Maximum number of tests per main program. The standard organization of the ACES performance tests is a compromise between several competing goals. More tests per main program usually simplify the compiling, linking, and running of the performance tests, unless problems or capacity limitations are encountered. Particularly for embedded systems, where downloading to a target can sometimes be a major part of the total testing time, it is advantageous to have as many tests in each executable as possible. However, when problems arise in compiling or running tests, it is often helpful to isolate each test in a separate executable, which in the ACES means a separate main program. Our recommendation to users who wish to run all of the ACES performance tests is to start with the default organization. If any tests fail to compile or run for some reason, and you wish to try these tests again, then we suggest that you set the number of tests per main program to 1. The Build command will generate the appropriate command files and the appropriate main programs to meet your request in almost all cases. The exceptions are:

+ (1) The Systematic_Compile_Speed group where user choices are severely constrained: you can only generate command files for all tests in the group (no subsetting is supported) and the file names are not all user selectable;

+ (2) The Tasking group; and

+ (3) The Storage_Reclamation_Implicit subgroup in the Storage_Reclamation group, where subsetting is allowed, but users cannot change the number of tests per main program (which is usually one).

* Options 3, 4, and 5 are concerned with deleting (or not deleting) three kinds of files that will be present in the working directory. Some of these files will come from the ACES distribution. Some will be produced in the process of Including, compiling, and linking the tests. The "Include" files all come as part of the distribution. Some of the "Ada" files come in the distribution; some are produced by running Include. All of the executable files are produced at the link step during the invocation of the command (script) file. The user will need to make some choices here, based on the following considerations:

+ The amount of available disk space is the major constraint. If there is enough space, the recommendation is to delay deleting any files (except possibly the Included files) until after you have completely finished with the group (or groups) you are working on. Saving source files will make it easy to recompile any test where compile problems were encountered (sometimes turning off optimization will help).

+ Saving executables will make it easier to run any tests where verification errors or unreliable results were encountered. However, if you are short of disk space, and need to delete as many files as possible, the Harness does make it much easier to retry just those tests with questionable results. When the Build command is invoked, it always produces, for each selected group, a list of files needed. This file is always named group_abbreviation & ".lst". If you are running all of the tests in a group or subgroup, it is simpler to fetch (or copy) all of the files with the appropriate prefix, but if you are only running a few tests, then the list of files which Build produces will simplify the task of making sure that you have the proper files to complete your task.

* Option 6, "Library deletes", is also related to disk space considerations. If disk space is limited, then it can be helpful to delete Ada units from the Ada library immediately after the link step, when they are no longer needed. See the discussion in Section 5.4.2. The Build command will attempt to generate a command file that does this, if you request it. However, there are several difficulties here. The first difficulty is the lack of a standard in Ada library systems. For example, some Ada libraries create a specification and a body for every procedure; others do not. Some Ada libraries create "extra" entries for generics. Some require that units be deleted in a specified order. Others do not care. The practical importance of all this is that the deletes in the command files generated by the Harness may not work correctly on your system. In addition, the information available to the Build command (stored in the files named "zh_<group_abbreviation>.lib") is not complete; this is a weakness in the Harness as delivered. There is an alternate solution to this difficulty that works on many Ada library systems that support sublibraries. If the ACES global packages are compiled into a main library and then the individual performance tests (and supporting packages) are compiled into a sublibrary, then it is possible either to delete all units in the sublibrary at the end of each subgroup, or to delete and recreate the sublibrary at that time.

* Option 7 is not present on all Build menus. This option only applies when a build is taking place for more than one group. (See Figures 6-25 and 6-26.) In that case, you may decide to have the command files written to one file, or to separate files for each group. In every case but one, you may select the names of the files. The options for the Systematic_Compile_Speed group are somewhat more limited. You may select the name for one of the three files explicitly on the Build menu in Option 8. However, three files are actually written; the names of the other two files are not menu selectable. The prefix for one is "sy_cu" and the prefix for the second is "sy_1000"; the suffix comes from the adaptation file discussed in Section 6.11 "ADAPTATION FOR DIFFERENT ADA SYSTEMS AND DIFFERENT OPERATING SYSTEMS".



---------------------------------------------------------------------------


---- Building command files for All groups

---------------------------------------------------------------------------

1 Compilation Time Analysis is OFF (On) <toggle>

ON => Compile and Link Time Data Will Be Generated

=> Maximum number of tests / main program := 9

OFF => Compile and Link Time Data Will NOT Be Generated

2 Maximum number of tests / main program := 9 (1..999)

Changing tests/main program => Compilation Time Analysis := OFF

3 Ada Files DELETED (Saved) <toggle>

4 Include Files DELETED (Saved) <toggle>

5 Executables DELETED (Saved) <toggle>

6 Library Units DELETED (Saved) <toggle>

7 Write to ONE file (separate files) <toggle>

8 File name := "groups.unx"

9 Output Directory := "/usr/people/barbara/test/"

---------------------------------------------------------------------------


Figure 6-25 Build Command - Multiple Commands - Write to One File

* Option 8 varies depending on whether a Build has been requested for one, or more than one group, and if more than one group, depending on Option 7. If only one file name is needed, then the user has complete control over that name (See Figure 6-25). However, if several files will be written, one per group, then the user can only specify the suffix. (See Figure 6-26.) The prefix will always be the group abbreviation.



---------------------------------------------------------------------------


---- Building command files for application, exception_handling, tasking

---------------------------------------------------------------------------

1 Compilation Time Analysis is OFF (On) <toggle>

ON => Compile and Link Time Data Will Be Generated

=> Maximum number of tests / main program := 9

OFF => Compile and Link Time Data Will NOT Be Generated

2 Maximum number of tests / main program := 9 (1..999)

Changing tests/main program => Compilation Time Analysis := OFF

3 Ada Files DELETED (Saved) <toggle>

4 Include Files DELETED (Saved) <toggle>

5 Executables DELETED (Saved) <toggle>

6 Library Units DELETED (Saved) <toggle>

7 Write to SEPARATE files (one file) <toggle>

8 File name suffix := ".unx"

File name prefixes are group abbreviations.

File Names = "ap.unx", "xh.unx", "tk.unx"

9 Output Directory := "/usr/people/barbara/test/"

---------------------------------------------------------------------------


Figure 6-26 Build Command - Multiple Commands - Write to Separate Files

* In Option 9, the user can specify the output directory that all files (command files, dummy files, and main program files) will be written to. The default for this directory comes from the system name file ("zh_cosys.txt"), but can be changed by the user here - for this command only.

6.11 Adaptation For Different Ada Systems And Different Operating Systems

When Pretest Step 11 is executed, a user adaptation file (zh_usr.txt) is created from the corresponding template (zh_usr.tpl). It is normally not necessary to modify this file. The Harness requires this file to adapt the command files it generates to accommodate the different operating system and Ada compiler combinations. If you did not run Step 11, either do so before continuing with the Harness, following the instructions in Section 3.2.11 of the Primer, or copy the "zh_usr.tpl" file to "zh_usr.txt" and add the values needed for your system. Figure 6-27 shows an example of a file generated for a UNIX system.



---------------------------------------------------------------------------


-- user defined elements for system adaptation

---------------------------------------------------------------------------

First_Of_Line ""


Comment_Begin "# "



Comment_End ""


Invoke_Com_File ""

--

Include_Command "zg_incld.unx"

Run_Command "zc_run.unx"

Delete_Command "rm"

Link_Command "zc_link.unx"

Link_Command_no_time "zc_lnk.unx"

Compile_Command "zc_adaop.unx"

--

Echo_Command "echo"

Echo_Delimiter """

Source_Directory "/base/test/aces/source/"

Copy_File "cp"

Current_Directory "./"

Set_Library "zc_setli.unx"

If_Ada_95_Then "n"

--

Delete_Lib_Unit "zc_delsb.unx"

Delete_Lib_Unit_sb "zc_delsb.unx"

Delete_Lib_Unit_so "zc_delso.unx"

Delete_Lib_Unit_bo "zc_delbo.unx"

--

Inc_File_Suffix ".inc"

Ada_File_Suffix ".a"

Exe_File_Suffix ".exe"

Com_File_Suffix ".unx"

--

Delete_Suffix ""

--

Group_Begin "#!/bin/csh"

Group_End ""

SubGroup_Begin ""

SubGroup_End ""

--

Compilation_Strings ""

optimize "zc_cmpop.unx"

nooptimize "zc_cmpno.unx"

check "zc_cmpck.unx"

space "zc_cmpsp.unx"

Compilation_Strings_no_time ""

optimize "zc_adaop.unx"

nooptimize "zc_adano.unx"


check "zc_adack.unx"



space "zc_adasp.unx"


Copy_Strings ""

optimize "zc_cpyop.unx"

nooptimize "zc_cpyno.unx"

check "zc_cpyck.unx"

space "zc_cpysp.unx"

----------------------------------------------------------------------------


Figure 6-27 Example Adaptation File (zh_usr.txt) for UNIX System
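

The entries in this file are simple keyword / quoted-string pairs. As an illustration only - this is not the code the Harness actually uses, and the treatment of blank, comment, and continuation lines is assumed rather than documented - a reader for this format might look like the following Ada sketch, which echoes each keyword and its quoted value.

with Ada.Text_IO; use Ada.Text_IO;

procedure Read_Adaptation is
   File : File_Type;
begin
   Open (File, In_File, "zh_usr.txt");
   while not End_Of_File (File) loop
      declare
         Line  : constant String := Get_Line (File);
         Quote : Natural := 0;
      begin
         --  Skip blank lines and the "--" separator lines.
         if Line'Length > 0 and then Line (Line'First) /= '-' then
            for I in Line'Range loop      -- find the first quote
               if Line (I) = '"' then
                  Quote := I;
                  exit;
               end if;
            end loop;
            if Quote > 0 then
               Put_Line ("keyword <" & Line (Line'First .. Quote - 1)
                         & ">  value <" & Line (Quote .. Line'Last) & ">");
            end if;
         end if;
      end;
   end loop;
   Close (File);
end Read_Adaptation;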

6.12 Write Commands

After entering one of the Write commands, the following choices menu appears (see Figure 6-28). The user can change the number of columns in a line, the number of lines on a page, the output file name, and the output directory. Only the header differs between the write commands. The effect of changing the number of columns in a line may not be all that a user might hope for: it affects only the output of the Write_Groups and Write_Subgroups commands. The screen displays take two lines per entry; if the output line is long enough, the written reports take one line per entry instead. This is the only effect of changing this parameter. Examples of the output from these commands follow. The information in these files is almost exactly the same information that appears in the screen displays, except that selection status is not written.



---------------------------------------------------------------------------


------ Writing group information to a file --------------------------------

---------------------------------------------------------------------------

---------------------------------------------------------------------------

------ Writing subgroup information to a file -----------------------------

---------------------------------------------------------------------------

---------------------------------------------------------------------------

------ Writing test information to a file ---------------------------------

---------------------------------------------------------------------------

---------------------------------------------------------------------------

------ Writing selected test information to a file ------------------------

---------------------------------------------------------------------------

1 Number of columns on page := 80

2 Number of lines on page := 66

3 File name := "groups.txt"

4 Output Directory := "/usr/people/barbara/test/"

--------------------------------------------------------------------------

Make a choice by typing its number, followed by "=" <new-value>


Do Cancel Help Quit



When you are satisfied with your choices, type "do" to write the file.


=>


Figure 6-28 Write Commands - Sample Menu

6.12.1 Write_Groups (WG)

Write the status information for all groups. The default file name will be "groups.txt". Sample output may be found in Figure 6-29, Write Groups Report.



Group Data


-----------------------------------------------------------------------------

Va Nu Dy Un Vr Ng Xc Pk Cm Ln Rn Dp In No NA Wd Sum

-----------------------------------------------------------------------------

1 application (ap)

- - - - - - - - - - - - - 99 - - 99

2 arithmetic (ar)

100 - - - - - - - - - - - - 31 - - 131

3 classical (cl)

- - - - - - - - - - - - - 87 - - 87

4 data_storage (do)

- - - - - - - - - - - - - 101 - - 101

5 data_structures (dr)

180 - - - - - - - - - - - - 84 - - 264

6 delays_and_timing (dt)

- - - - - - - - - - - - - 42 - - 42

7 exception_handling (xh)

- - - - - - - - - - - - - 58 - - 58

8 generics (gn)

- - - - - - - - - - - - - 27 - - 27

9 input_output (io)

- - - - - - - - - - - - - 122 - - 122

10 interfaces (in)

- - - - - - - - - - - - - - - - 0

11 miscellaneous (ms)

- - - - - - - - - - - - - 17 - - 17

12 object_oriented (oo)

- - - - - - - - - - - - - 9 - - 9


13 optimizations (op)



- - - - - - - - - - - - - 324 - - 324


14 program_organization (po)

- - - - - - - - - - - - - 75 - - 75

15 protected_types (pt)

- - - - - - - - - - - - - 12 - - 12

16 statements (st)

- - - - - - - - - - - - - 92 - - 92

17 storage_reclamation (sr)

- - - - - - - - - - - - - 65 - - 65

18 subprograms (su)

- - - - - - - - - - - - - 80 - - 80

19 systematic_compile_speed (sy)

- - - - 77 - - - 3 - - - - 29 - - 109

20 tasking (tk)

120 - 8 - - - - - 1 - - 1 - 12 - - 142

21 user_defined (ud)

- - - - - - - - - - - - - 73 - - 73

-----------------------------------------------------------------------------

------ Group totals : official status ------

Va Nu Dy Un Vr Ng Xc Pk Cm Ln Rn Dp In No NA Wd Sum

-----------------------------------------------------------------------------

400 - 8 - 77 - - - 4 - - 1 - 1373 - - 1863

-----------------------------------------------------------------------------

-----------------------------------------------------------------------------

ACES Version <ver #> Page <Page #> <Date> <Time>


Figure 6-29 Write_Groups Report

6.12.2 Write_SubGroups (WS)

Write the status information for all subgroups in the group. The default file name will be group abbreviation & ".sub". Sample output may be found in Figure 6-30, Write SubGroups Report.



SubGroup Data


------------------------------------------------------------------------------


- application (ap)



Va Nu Dy Un Vr Ng Xc Pk Cm Ln Rn Dp In No NA Wd Sum


-----------------------------------------------------------------------------

1 artificial_intelligence (ai)

9 - - - - - - - - - - - - - - - 9

2 avionics (av)

15 - - - - - - - - - - - - - - - 15

3 avl_tree (at)

12 - - - - - - - - - - - - - - - 12

4 cyclic_redundancy_check (cr)

- - - - - - - - - - 5 - - - - - 5

5 data_encryption_standard (de)

6 - - - - - - - - - 5 - - - - - 11

6 error_correcting_code (ec)

4 - - - - - - - - - 1 - - - - - 5

7 filter (fi)

6 - - - - - - - - - - - - - - - 6

8 integration (in)

2 - - - - - - - - - - - - - - - 2

9 kalman_filter (kf)

1 - - - - - - - - - - - - - - - 1

10 lag_filter (lf)

2 - - - - - - - - - - - - - - - 2

11 matrix_operations (mo)

12 - - - - - - - - - - - - - - - 12

12 object_based (ob)

1 - - - - - - - - - - - - - - - 1

13 polynomial_coding_style (pc)

4 - - - - - - - - - - - - - - - 4

14 simulation (si)

8 - - - - - - - - - - - - - - - 8

15 symmetric_deadzone (sd)

2 - - - - - - - - - - - - - - - 2

16 symmetric_limiter (sl)

2 - - - - - - - - - - - - - - - 2

17 trie (tr)

2 - - - - - - - - - - - - - - - 2

------------------------------------------------------------------------------

------- Group totals : official status - application (ap)

Va Nu Dy Un Vr Ng Xc Pk Cm Ln Rn Dp In No NA Wd Sum

-----------------------------------------------------------------------------

87 - - - - - - - - - 11 - - - - - 99

-----------------------------------------------------------------------------


Figure 6-30 Write_SubGroups Report

6.12.3 Write_Tests (WT)

Write the status information for all tests in the current group. The default file name will be group abbreviation & ".tst". Sample output may be found in Figure 6-31, Write Tests Report.



Test Data


------------------------------------------------------------------------------

Group: tasking (tk)

------------------------------------------------------------------------------

interrupt Va Nu Dy Un Vr Ng Xc Pk Cm Ln Rn Dp In No NA Wd Sum

(tk_in_) ----------------------------------------------------

Subgroup totals 11 - - - - - - - - - - - - - - - 11

----------------------------------------------------

1 int_00 *1 - - - - - - - - - - - - - - - 1

2 int_01 *1 - - - - - - - - - - - - - - - 1

. .

10 int_09 *1 - - - - - - - - - - - - - - - 1

11 int_10 *1 - - - - - - - - - - - - - - - 1

------------------------------------------------------------------------------

. .

------------------------------------------------------------------------------

storage_size Va Nu Dy Un Vr Ng Xc Pk Cm Ln Rn Dp In No NA Wd Sum

(tk_ss_) ----------------------------------------------------

Subgroup totals 3 - - - - - - - - - - - - - - - 3

----------------------------------------------------

129 task_54_mod *1 - - - - - - - - - - - - - - - 1

130 task_55_mod *1 - - - - - - - - - - - - - - - 1

131 task_56 *1 - - - - - - - - - - - - - - - 1

------------------------------------------------------------------------------

------ Group totals : official status - tasking (tk)

Va Nu Dy Un Vr Ng Xc Pk Cm Ln Rn Dp In No NA Wd Sum

------------------------------------------------------------------------------

106 - 8 - 1 - - - - - - 1 - 28 - - 142

------------------------------------------------------------------------------


Figure 6-31 Write_Tests Report

6.12.4 Write_Chosen (WC)

Write the current selected tests in the current group. The default file name will be group abbreviation & ".cho". All status code information is displayed. Sample output may be found in Figure 6-32, Write Chosen Report.



Selected Test Data


----------------------------------------------------------------------------

Group: tasking (tk)

----------------------------------------------------------------------------

interrupt Va Nu Dy Un Vr Ng Xc Pk Cm Ln Rn Dp In No NA Wd Sum

(tk_in_) ----------------------------------------------------

Subgroup totals 11 - - - - - - - - - - - - - - - 11

----------------------------------------------------

1 int_00 *1 - - - - - - - - - - - - - - - 1

2 int_01 *1 - - - - - - - - - - - - - - - 1

8 int_07 *1 - - - - - - - - - - - - - - - 1

9 int_08 *1 - - - - - - - - - - - - - - - 1

11 int_10 *1 - - - - - - - - - - - - - - - 1

----------------------------------------------------

Selected Subgroup Totals 5 - - - - - - - - - - - - - - - 5

----------------------------------------------------------------------------

. .

----------------------------------------------------------------------------

------ Selected Group totals : official status - tasking (tk)

Va Nu Dy Un Vr Ng Xc Pk Cm Ln Rn Dp In No NA Wd Sum

----------------------------------------------------------------------------

15 - - - - - - - - - - - - 1 - - 16

----------------------------------------------------------------------------

------ Group totals : official status - tasking (tk)

Va Nu Dy Un Vr Ng Xc Pk Cm Ln Rn Dp In No NA Wd Sum

----------------------------------------------------------------------------

106 - 8 - 1 - - - - - - 1 - 28 - - 142

----------------------------------------------------------------------------


Number of tests selected = 16



----------------------------------------------------------------------------


Status Codes:

1 valid Va Valid non-zero execution time

2 null_time Nu Execution times of 0.0

3 delay_problem Dy Measures time for delay statement

4 err_unreliable_time Un Timing measurement was variable

5 err_verification Vr Verification error

6 err_large_negative_time Ng Timing ERROR: large negative time

7 err_claim_excess_time Xc Claim test that ran out of time

8 err_packaging Pk Packaging error - might be OK

9 err_at_compilation_time Cm Compile time error

10 err_at_link_time Ln Link time error

11 err_at_execution_time Rn Run time error

12 err_dependent_test Dp System dependent test failure

13 err_inconsistent_results In Execution and compile inconsistency

14 err_no_data no No data available

15 not_applicable NA Not applicable (set by user)

16 err_withdrawn_test Wd Erroneous test: has been withdrawn


Figure 6-32 Write_Chosen Report

6.13 Adding New Tests To The Harness

New tests can be added to the ACES so that they can be recognized and handled just as existing tests are. Most of that process is addressed in Section 9.1.6 "Adding Subgroups, Tests, and/or Main Programs". The Harness gets all of its information about tests, main programs, and subgroups from the weights (structure) files which are listed in the system name file. See Section 6.2.1 "Weight File(s) needed by the Harness". Users who change the single weight file used by the analysis tools can make that information available to the Harness in two ways. One way is to replace the names of the supplied structure files (one per group) with the name of the new structure file - there must still be one entry for each group in the system name file, but each entry may name the same file. Functionally, this will work fine, but because the Harness accesses this file frequently, execution speed may suffer. The other alternative is for the user to edit the modified structure file and produce eighteen (or twenty-one, for an Ada 95 implementation) separate files, one for each group, which can then be accessed more efficiently by the Harness.
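

As an illustration of the second alternative only - the actual record layout of the weights file and the per-group file names the Harness expects must be taken from the system name file - the following Ada sketch splits a combined file into per-group files. It assumes, purely for illustration, that each entry line begins with a test name of the form gg_ss_..., that entries for a group are contiguous, and that the input and output file names shown are hypothetical.

with Ada.Text_IO; use Ada.Text_IO;

procedure Split_Weights is
   Input   : File_Type;
   Output  : File_Type;
   Current : String (1 .. 2) := "  ";   -- group abbreviation now open
begin
   Open (Input, In_File, "my_weights.txt");        -- hypothetical name
   while not End_Of_File (Input) loop
      declare
         Line : constant String := Get_Line (Input);
      begin
         if Line'Length >= 2 then
            declare
               Group : constant String :=
                 Line (Line'First .. Line'First + 1);
            begin
               if Group /= Current then
                  if Is_Open (Output) then
                     Close (Output);
                  end if;
                  --  Hypothetical per-group file name; the names the
                  --  Harness expects are listed in the system name file.
                  Create (Output, Out_File, Group & "_weights.txt");
                  Current := Group;
               end if;
               Put_Line (Output, Line);
            end;
         end if;
      end;
   end loop;
   if Is_Open (Output) then
      Close (Output);
   end if;
   Close (Input);
end Split_Weights;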

Tests can also be added using the User-Defined Benchmark facility. (See Section 5.5.)

6.14 Selecting Tests by Performance Topics

If the user wants to select a set of tests based on a performance topic that spans the defined groups, then the user should choose Option 2, "Choose Tests by Performance Topic" in the initial Harness screen. This selection will present the user with a screen as given below in Figure 6-33.



-------------------------- PERFORMANCE ISSUES MENU ------------------------



1. - Concurrency

2. - Floating-Point Data/Operations

3. - Integer Data/Operations

4. - Fixed-Point Data/Operations

5. - Character and String Data/Operations

6. - Representation Clauses and Attributes

7. - Child Library Units

8. - Controlled Types

9. + New ACES 2.1 Tests

10. + All Ada95 Specific Tests

---------------------------------------------------------------------------



pick issues by number Harness menu Clear all issues

Next screen Previous screen Select all issues Quit

>


Figure 6-33 Performance Issues Menu

Default selections are indicated by a "+" in the menu. The user can select or deselect any topic by number. The current default, using an Ada 95 compiler, will provide the user with the union of all the new ACES 2.1 tests and all of the Ada 95 specific tests (the two sets overlap).

If the user selects "H" or "Harness" as an input, then the Harness group level menu will be displayed with the tests selected as determined by the Performance Issues Menu.

The "Next screen" and "Previous screen" allow the user to look at additional performance issues, if they exist. The "Clear all issues" command allows the user to deselect all performance issues and the "Select all issues" command allows the user to select all performance issues. The "Quit" command will exit the Harness program.

The data file used to identify the performance issues and the tests that belong to each performance topic is "zh_prfmi.txt" for Ada 83 compilers and "zh_prfmi.a95" for Ada 95 compilers. Figure 6-34 below gives a partial view of the current "zh_prfmi.a95" file.



Concurrency


Floating-Point Data/Operations

Integer Data/Operations

Fixed-Point Data/Operations

Character and String Data/Operations

Representation Clauses and Attributes

Child Library Units


Controlled Types



+New ACES 2.1 Tests


+All Ada95 Specific Tests

ar_dc_decimal_attributes_01 Fixed-Point Data/Operations

ar_dc_decimal_ops_01 Fixed-Point Data/Operations

ar_dc_decimal_ops_02 Fixed-Point Data/Operations

ar_dc_decimal_ops_03 Fixed-Point Data/Operations

ar_dc_decimal_ops_04 Fixed-Point Data/Operations

ar_dc_decimal_attributes_01 New ACES 2.1 Tests

ar_dc_decimal_ops_01 New ACES 2.1 Tests

ar_dc_decimal_ops_02 New ACES 2.1 Tests

ar_dc_decimal_ops_03 New ACES 2.1 Tests

ar_dc_decimal_ops_04 New ACES 2.1 Tests

ar_dc_decimal_attributes_01 All Ada95 Specific Tests

ar_dc_decimal_ops_01 All Ada95 Specific Tests

ar_dc_decimal_ops_02 All Ada95 Specific Tests

ar_dc_decimal_ops_03 All Ada95 Specific Tests

ar_dc_decimal_ops_04 All Ada95 Specific Tests

ar_el_gen_elem_fcns_01 Floating-Point Data/Operations

ar_el_gen_elem_fcns_02 Floating-Point Data/Operations

ar_el_gen_elem_fcns_03 Floating-Point Data/Operations

ar_el_gen_elem_fcns_04 Floating-Point Data/Operations

ar_el_gen_elem_fcns_05 Floating-Point Data/Operations

ar_el_gen_elem_fcns_06 Floating-Point Data/Operations

ar_el_non_gen_elem_fcns_01 Floating-Point Data/Operations

ar_el_non_gen_elem_fcns_02 Floating-Point Data/Operations

ar_el_non_gen_elem_fcns_03 Floating-Point Data/Operations

ar_el_gen_elem_fcns_01 New ACES 2.1 Tests

ar_el_gen_elem_fcns_02 New ACES 2.1 Tests

ar_el_gen_elem_fcns_03 New ACES 2.1 Tests

ar_el_gen_elem_fcns_04 New ACES 2.1 Tests

ar_el_gen_elem_fcns_05 New ACES 2.1 Tests

ar_el_gen_elem_fcns_06 New ACES 2.1 Tests

ar_el_non_gen_elem_fcns_01 New ACES 2.1 Tests


Figure 6-34 Partial View of "zh_prfmi.a95"

The "zh_prfmi.a95" file is structured as follows where n is the number of performance topics with a maximum of 30:


Line             Column         Contents                                   

                                                                           

1..n             1              "+" or " " to indicate default selection   

1..n             2..42          Performance Topic                          

n+1                             Blank line to indicate end of performance  
                                topics                                     

n+2 ... end of   1..m           Test name                                  
file                                                                       

n+2 ... end of   m+1            Blank                                      
file                                                                       

n+2 ... end of   m+2 .. m+42    Performance topic for this test            
file                                                                       



The user can add performance topics, up to the maximum of 30, and can assign any number of tests to any performance topic, as sketched below.
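As a minimal sketch, a user-added topic (the topic name "My Project Tests" and its single test assignment here are hypothetical) appears once in the topic list, before the blank line that terminates that list, and once per assigned test in the body of the file. The "+" in column 1 makes the new topic a default selection:

+My Project Tests

cl_ac_acker_01 My Project Tests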

7. RUNNING THE PERFORMANCE TESTS

The following three sections discuss running the performance tests.

7.1 How To Run Tests

This section describes how to compile and execute the performance test programs. The Pretest should have been completed before starting to run the performance tests. The Pretest sets up the program library with the necessary shared compilation units. In Step 11 of the Pretest, the ACES user adapted and executed a representative sample of ACES performance test programs and became familiar with the "core" operations necessary for running the performance tests.

The Harness program, which is compiled and tested in Step 11 of the Pretest, should be used to generate the scripts to run the performance tests. It is recommended that the performance tests be run by group. The scripts generated by the Harness use the information supplied by the user in Pretest to copy the performance test files from the "source" directory to the "working" directory before compiling and running the tests. The user must manually copy the performance test files from the "distribution" directory to the "source" directory before running the scripts. The purpose of this manual copy is to preserve the "distribution" directory, since some of the performance tests must be edited in the "source" directory before the tests are run (see Sections 5.3 and 5.4).
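On UNIX, the manual copy might look like the following sketch; the directory names are illustrative and should be replaced with the paths established during Pretest:

# Preserve the distribution area: copy the performance test sources
# into the "source" directory, where any required edits are made.
cp /aces/distrib/* /aces/source/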

The files may be run as a batch job (via a SUBMIT command on VMS or an "at" command on UNIX) or from a terminal session. Similar options should be available on any host which supports command files. Batch jobs are attractive because they save a log of results and because their execution can be scheduled for user-specified times.
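On UNIX, for example, a Harness-generated group script could be scheduled to run overnight with "at"; the script name "zz_tkcmp.unx" here is hypothetical:

# Submit the group script to run at 2:00 a.m., saving all compiler
# and test output in a log file for later processing by Condense.
at 0200 <<'EOF'
sh zz_tkcmp.unx > zz_tkcmp.log 2>&1
EOF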

It is possible to submit one (large) command file which calls each of these group files in turn. On a MicroVAX II, one compiler took 90 hours to compile and run the performance tests, although this time is highly variable across systems. That may be longer than a user wants to tie up a system at a stretch. One attractive alternative is to submit a job to compile and run everything overnight, abort it the next morning, and resubmit a job the following night to finish what was not completed. This approach assumes that the compilation system (and the program library) is not corrupted when a job is aborted. When running the performance tests on some Ada systems, some programs have hung the system due to errors. This is most likely for the implicit reclamation tests and the tasking tests.

The following test programs must be executed interactively:

* io_txm01

* io_txm02

* io_txm03

* io_txm07

* tk_lam01

* tk_lam02

The Systematic_Compile_Speed group may take a particularly long time to compile, when it does not crash the compiler outright. One set of problems compiles a thousand units into the program library; on several systems this step took many hours to complete (over 36 hours on one system).

The following test programs are particularly likely to execute for a long time:

* sr_im* (may provoke thrashing in a virtual memory system)

* io* (may be slow on systems with slow IO devices and no caching or buffering)

* tk_lfm28

The sr_im* subgroup has been observed to crash several systems, so some care should be taken in running it.

7.2 How To Restart

It may be necessary to restart the job running the performance tests for various reasons:

1. The user aborted the job because someone needed to use the system; executing the job while other users contend for the machine is discouraged because contention distorts the timing measurements.

2. The system may have encountered a fatal error which crashed the job (or the system), bypassing the normal error handling in each test program.

3. Individual test programs may have failed and need to be selectively rerun. Sometimes a failure will indicate that additional system-dependent adaptations are necessary. Other times a simple resubmittal without modification is appropriate. For example, a Timing Loop verification failure can be caused by contending processes, and simply rerunning the program can often resolve the problem. If the compiler failed because it ran out of disk space, resubmittal without modifying the programs is appropriate if enough space is now available.

To restart using the Harness, request that the Harness build a command file to compile and run the remaining tests. If the Harness cannot be used, the user may edit the command file to skip over the commands already performed. The simplest way to proceed is on a subgroup/main program basis. To recompile a main program, it is generally necessary to re-execute the subgroup setup commands (the statements textually between the subgroup identification comment lines and the first command line with the "BEGIN MAIN" comment), which perform the overhead operations, compile the dummy units, and (if necessary) compile the common packages for the subgroup. When rerunning only a subset of the main programs in a subgroup, the recompilation of the dummy units will leave undeleted units in the program library corresponding to the test problems excluded from the subset; users may want to review their program library and clean up these "extra" units. For Harness users, Appendix B of the Version Description Document provides the link between test names (used in the Harness) and file names (used in the command scripts): the command scripts indicate which files make up a main program, and the VDD appendix gives the corresponding test names.
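When editing a command script by hand, the marker comments described above can be located quickly; for example (the script name is again hypothetical):

# List the "BEGIN MAIN" marker lines with their line numbers to find
# where each main program's commands start in the script.
grep -n "BEGIN MAIN" zz_tkcmp.unx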

7.3 Output

The output of the performance tests follows a standard layout, as shown in Figure 7-1.



\ACES begin mainprogram\ ******************** xh_prm04


outer loop count

inner loop count |

bits microseconds | |

problem name size min mean | | sigma

\aces_problem_name\ xh_pr_prgma_sup_range_ck_08

ri := 1; -- range check enabled

\aces_measurements\ 40.0 2.8710E-01 4.4011E-01 18 19 35.9%#

\aces_problem_name\ xh_pr_prgma_sup_range_ck_09

ri := ll; -- range check enabled

\aces_measurements\ 224.0 1.5642E+00 1.7069E+00 17 9 7.2%

\aces_problem_name\ xh_pr_prgma_sup_range_ck_10

r2 := ei; r1 := ei; --range check enabled

\aces_measurements\ 248.0 1.9055E+00 2.0674E+00 18 6 4.8%

\aces_problem_name\ xh_pr_prgma_sup_range_ck_11

r2 := ei; r1 := r2; -- range check enabled

\aces_measurements\ 248.0 2.0794E+00 2.3060E+00 17 8 7.2%

\aces_problem_name\ xh_pr_prgma_sup_range_ck_12

r2 := ei - ei; -- range check enabled

\aces_measurements\ 240.0 1.9374E+00 2.0967E+00 17 6 6.0%

\aces_problem_name\ xh_pr_prgma_sup_range_ck_13

r2 := 0; --range check enabled

\aces_measurements\ 24.0 3.7110E-01 5.0731E-01 17 19 19.0%#

\aces_problem_name\ xh_pr_prgma_sup_range_ck_14

--dynamic testing of range constraint test which fails

\aces_measurements\ 632.0 337.2 338.7 12 3 0.4%

\ACES end mainprogram\ ******************** xh_prm04



Figure 7-1 Performance Test Output

Note the following points:

* There are lines starting with "\ACES begin" and "\ACES end" bracketing the execution of each main program. These lines include the main program name, which will simplify the task of determining what program was executing if a failure is observed in the output. If the trailing line is not present in the output, the program did not reach normal termination. The main programs are structured so that when an exception is propagated to the main program, there will be an exception handler which writes the trailing line.

* The next four lines contain labeling information identifying the performance test results. These are:

+ Problem Name - The name of the test problem is output left-justified and prefixed by the string "\aces_problem_name\".

+ Size - The code expansion size in bits is output as a floating point number.

+ Min and mean execution times - The minimum and the mean execution time for the test problems are output in microseconds. For values less than ten microseconds, and for large values, the time is output in an exponential format so that column alignments can be maintained while displaying the results with at least as much precision as they are measured with. Negative values of minimum time are used as coded values to indicate various error conditions which are enumerated in Figure 7-2.

+ Inner loop and outer loop count - These Timing Loop variables are displayed (see the Reader's Guide for details of their significance). The number printed for the inner loop count is the value of N such that 2**N - 1 is the actual inner loop count; for example, a printed value of 18 means the inner loop body was executed 2**18 - 1 = 262,143 times.

+ Sigma - The standard deviation as a percentage of the mean is printed in the column headed by the title "sigma".

+ Unreliable indicator - The "#" occurring after the "%" symbol for some entries in the output is an indicator that the measurement is not considered statistically reliable. See the Reader's Guide for a discussion of the criteria for printing this indicator.

+ Error codes can be printed in place of a minimum time measurement in various cases. The codes are listed below in Figure 7-2. Some of these error codes are generated automatically by the system and some must be entered manually, based on user analysis. For more information on error codes, see Sections 9.3.4.1 "Diagnosis of Errors" and Section 9.3.6 "Modifying the Database Manually".

* A verification failure message such as the following might occur immediately after the main program message.

Verification of Timing Loop parameters failed. There may be a mismatch between options used to compile INITTIME and those used in this program or there may be competing tasks in the system. If simply rerunning this program does not resolve the problem, read the guides for a troubleshooting discussion.

The measured null_loop_time for this program is: 16.7

A value was expected between: 13.9 and 15.5.

This indicates that Timing Loop verification failed. Often rerunning the test program is sufficient to remove the problem, if it was caused by transient contention from other processing such as the activity of a system daemon. For a main program in which this message occurs, all the test problems will be flagged with the ERR_VERIFICATION code.

* Ancillary information is prefixed with the string " >>> ", as in the following example.

>>> cl_ac_acker_01

>>> time per call is 12.0

The first line identifies the test problem name the ancillary information is associated with. The next line contains the text of the information.

Ancillary information is provided for various constructs, including the instruction execution rates as defined for the Dhrystone and Whetstone benchmarks; rendezvous times for several of the tasking tests; information about the magnitude of observed errors; deductions about the use of particular implementation techniques; and inferences based on comparing results from several different test problems (such as program tk_lfm29, which computes an estimate of the task-switch time from the results of tk_lf_task_60, tk_lf_task_61, and tk_lf_task_62).

* The min time field may contain one of the following error codes, as shown in Figure 7-2.



NAME CODE MEANING


err_at_compilation_time -1.0 Problem did not compile and link

err_at_execution_time -2.0 Problem failed runtime check

err_no_data -3.0 No information on problem

err_dependent_test -4.0 Problem is system dependent

err_packaging -5.0 Failure of an earlier problem in the same main program precluded this problem from running

err_unreliable_time -6.0 Timing measurement is unreliable

err_withdrawn_test -7.0 Problem has been withdrawn

delay_problem -8.0 Problem measures non-null DELAY and is not appropriate to compare with CA

err_at_link_time -9.0 Problem had error at link time

err_claim_excess_time -10.0 One of the SR_IM* problems would have taken an excessive time to complete

err_verification -11.0 Timing Loop verification test failed

err_large_negative_time -13.0 Error in the Timing Loop measurement



Figure 7-2 Error Codes

Output of the performance tests is input to Condense for use in Comparative Analysis and Single System Analysis. In addition, results from the Systematic Compile Speed group are used to complete the Library Assessor report.

8. RUNNING THE ASSESSORS

A brief description of each of the assessors is provided below. For detailed instructions in their use, see the appendix identified for each of the assessors listed. An extensive background discussion of the assessors may also be found in the ACES Reader's Guide, Section 8.

Each of these assessors includes a summary report form on which the user records the results. Summary reports are text files which may be completed on-line, or printed so that the hardcopy may be filled out by hand. They are part of the readme files shown in the appendices of this document.

Before attempting any of the assessors, at least part of the Pretest should have been completed. At a minimum, the global packages zg_glob1, zg_glob2, zg_glob3, zg_glob4, and zg_glob5 must have been compiled and an executable must have been created for zg_incld. These operations are performed in Pretest steps 1 and 5.

The command script files for the assessors must be adapted to the appropriate operating system. Sample command scripts are provided (".com" for VAX/VMS and ".unx" for UNIX). In addition, these command scripts invoke lower-level command scripts that are compiler-specific. If the Pretest activity has been completed, these script files should already have been adapted. Otherwise, manual adaptation is required for the following low-level command scripts: zc_adack, zc_adadb, zc_adano, zc_adaop, zc_cpydb, zc_delbo, zc_delsb, zc_delso, zc_link, zc_linkd, zc_setli, and zg_incld. Finally, the Diagnostics, Library, and Capacity assessors have their own low-level command scripts (yd_compl, yl_compl, and yc_compl) that must be adapted to the compiler.

8.1 Symbolic Debugger Assessor

The ACES Symbolic Debugger Assessor is a procedure to assess the quality of a symbolic debugger associated with an Ada compilation system. The Symbolic Debugger Assessor determines the functional capabilities of a symbolic debugger, and measures the performance impact when a program is executed under the debugger. The Symbolic Debugger Assessor includes a set of programs, a set of scenarios describing what operations the evaluator should use the symbolic debugger to perform, and instructions for evaluating the performance of the debugger. Because symbolic debuggers provide different user interfaces and capabilities, ACES users will have to manually adapt the tests for each implementation. Detailed procedures for running the Debugger Assessor test are found in the text file "yb_readm.txt", which is reproduced in Appendix C of this document.

8.2 Diagnostic Assessor

The Diagnostic Message Assessor is a procedure to assess the quality of the diagnostic messages provided by an Ada compilation system. The assessor contains a set of erroneous programs, and instructions explaining how to evaluate system responses to the errors.

There is a separate Ada compilation unit for most of the diagnostic test problems. This will minimize cases where a failure to process one condition leads to another condition not being attempted. There are command files or scripts to compile, link, and execute the diagnostic test problems on two systems, VMS and UNIX. Users can adapt these examples to the implementation-dependent requirements for the system they are evaluating. Detailed procedures for running the Diagnostic Assessor tests are found in the text file "yd_readm.txt", which is reproduced in Appendix D of this document.

8.3 Program Library Assessor

Ada program library systems are assessed by using a set of compilation units and operations (scripts) to be performed on a library. Because there is no standard for library commands, the ACES user will have to adapt the scripts to the compilation system being evaluated.

The Library Assessor contains different scenarios, or scripts, which the ACES user must adapt to the system under test. Each scenario describes what must be accomplished to perform it. Users will typically measure execution time, disk space, or whether some operation is possible. Results from the Systematic Compile Speed group in the performance tests are required to complete the assessor. Detailed procedures for running the Library Assessor tests are found in the text file "yl_readm.txt", which is reproduced in Appendix E of this document.

The scripts define each step to perform and the measurements to make. The scenarios are as order-independent as possible; however, because the sequence of library updates can affect the correctness and performance of library commands, the sequence of commands within a scenario must be the same on different systems to ensure that the operations are comparable.

8.4 Capacity Assessor

The Capacity Assessor consists of a set of programs, common data files, usage instructions and scripts for running the tests.

This assessor tests for compile-time and run-time capacity limits of the Ada compilation system. It tests language features whose limits, if lower than desirable, could prevent the timely and satisfactory completion of some projects. The Capacity Assessor is designed to use a branch-and-bound search technique, where applicable, to find limits in the compile-time and run-time tests.

The ACES user will have to adapt the scripts to the system being evaluated. The user will typically measure maximum compilable sizes of generated Ada source code within user-specified ranges, and maximum attainable sizes of run-time constructs in supplied Ada source programs.

Detailed procedures for running the Capacity Assessor tests are found in the text file "yc_readm.txt", which is reproduced in Appendix F of this document.

9. RUNNING THE ANALYSIS

9.1 Preparing Output For Analysis

This section describes preparation for analysis and applies to the analysis programs Condense, Comparative Analysis (CA), and Single System Analysis (SSA). The following topics are discussed.

* Compiling/Running the analysis programs

* Preparing the data

* Preparing to run the analysis programs

* Modifying the System Names file, "za_cosys.txt"

* Modifying the Structure (weights) file, "za_cowgt.txt"

* Adding groups, subgroups, tests, and/or main programs

* Using "readStructure" routines

* Using Condense

9.1.1 Compiling/Running the Analysis Programs

The analysis programs may be run in several ways.

1. Run the Menu program, make selections interactively, and execute them directly from the "run or save" screen of the menu.

2. Use an existing request file and execute it through the menu.

3. Run the Menu program and make selections interactively. Then save the selections to a Request file to be run later from the menu or from a standalone analysis program.

The most convenient way to run the analysis programs is through the Menu, when all of the analysis programs are linked into a single executable. However, Condense, CA, and SSA will each run as an independent program if the appropriate Request files are prepared (the names of these files appear in the System Names file, "za_cosys.txt").

Figure 9-1 shows the shell of a sample command script to compile and link the analysis program set. Unless the user plans to do a great deal of analysis, it is recommended that these files be compiled with no optimization. The time saved during compilation will probably outweigh the time lost while running the tools, and this choice may make the difference between success and failure at compile time.

If the user chooses to compile the tools separately, the command script for Step 12 can be adapted for that purpose. The user will need to compile and link the Menu program as well as each of the three tools in order to have a way to create the request files that can then be used by the appropriate tools. After creating the menu and tools, the user will need to clean excess files from the directory.

WHEN RUNNING THE TOOLS INDEPENDENTLY OF THE MENU SYSTEM, THE USER WILL NEED TO FIRST RUN CONDENSE TO CREATE THE DATABASES NEEDED BY BOTH COMPARATIVE ANALYSIS AND SINGLE SYSTEM ANALYSIS.
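As a sketch, when the tools have been linked as the separate executables described in Sections 9.1.1.2 through 9.1.1.4 below, a complete batch analysis run follows this order (each tool reads its request file as named in "za_cosys.txt"):

# Build the condensed databases first; then run the analysis tools.
condense
ca
ssa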

Note that in the command script for Pretest Step 12 there are commands for copying the "dummy" files. These commands have been commented out for standard use. The aces/support directory contains the "dummy" files for each of the tools. These files were also included in the list of files that were copied over at the beginning of Pretest. They are named:

- za_cndum.ada Condense

- za_cadum.ada Comparative Analysis

- za_sadum.ada Single System Analysis

9.1.1.1 To Create A Separate MENU

This will allow the user to create request files to be run independently by a single tool.

1. Copy the command file from Pretest Step 12 to a new command file.

2. In that new command file, comment out or delete the lines for copying, compiling, and deleting ALL FILES EXCEPT THE DUMMY FILES for:

- Condense

- Comparative Analysis

- Single System Analysis

3. Use the commands given as comments in the new file for copying, compiling, and deleting the DUMMY FILES for:

- Condense

- Comparative Analysis

- Single System Analysis

4. Be certain to leave in the lines that copy and compile the COMMON FILES (za_co*) and the MENU files (za_mn*).

5. Be certain to leave in the line to link the MENU file.

6. Run the new command file. You will have an executable file named "menu".

9.1.1.2 To Create A Separate CONDENSE Tool

1. Copy the command file from Step 12 to a new command file.

2. Comment out or delete the lines in that new command file for copying, compiling, and deleting ALL FILES for:

- Comparative Analysis

- Single System Analysis

- Menu

3. Note: For this tool only, you will need none of the DUMMY FILES.

4. Be certain to leave in the lines that copy and compile the COMMON FILES (za_co*) and the CONDENSE (za_cn*) files.

5. Be certain to change the name of the file for the link command from "menu" to "condense".

6. Run the new command file. You will have an executable file named "condense".

9.1.1.3 To Create A Separate COMPARATIVE ANALYSIS Tool

1. Copy the command file from Step 12 to a new command file.

2. Comment out or delete the lines in that new command file for copying, compiling, and deleting ALL FILES EXCEPT THE DUMMY FILES for:

- Condense

- Single System Analysis

- Menu

3. Use the commands given as comments in the new file for copying, compiling, and deleting the DUMMY FILES for:

- Condense

4. Be certain to leave in the lines that copy and compile the COMMON FILES (za_co*) and the COMPARATIVE ANALYSIS (za_ca*) files.

5. Be certain to change the name of the file for the link command from "menu" to "ca".

6. Run the new command file. You will have an executable file named "ca".

9.1.1.4 To Create A Separate SINGLE SYSTEM ANALYSIS Tool

1. Copy the command file from Step 12 to a new command file.

2. Comment out or delete the lines in that new command file for copying, compiling, and deleting ALL FILES EXCEPT THE DUMMY FILES for:

- Condense

- Comparative Analysis

- Menu

3. Use the commands given as comments in the new file for copying, compiling, and deleting the DUMMY FILES for:

- Condense

4. Be certain to leave in the lines that copy and compile the COMMON FILES (za_co*) and the SINGLE SYSTEM ANALYSIS (za_sa*) files.

5. Be certain to change the name of the file for the link command from "menu" to "ssa".

6. Run the new command file. You will have an executable file named "ssa".

9.1.1.5 To Clean Up The Directory and Eliminate Excess Files

1. Copy the command file from Step 12 to a new command file.

2. Comment out or delete all the lines in that new command file down to the last section, entitled "Clean up sources and Library Units". Refer to the command file or files you created in one of the previous sections to verify exactly which files need to be deleted.

3. Run the new command file. This should clean up your directory.


#!/bin/sh

#

# This UNIX command script compiles and links the Analysis

# tools for analyzing the performance test results. All

# the tools are linked into a single executable, Menu.

#

# For self-hosted system

#

zc_setli.unx

#

# Sources common to all Analysis tools

#

zc_adano.unx za_co01.ada

# ... -- other za_coXX files

zc_adano.unx za_co28.ada

#

# Sources for Condense

#

zc_adano.unx za_cn01.ada

# ... -- other za_cnXX files

zc_adano.unx za_cn07.ada

#

# Sources for Comparative Analysis

#

zc_adano.unx za_ca01.ada

# ... -- other za_caXX files

zc_adano.unx za_ca13.ada

#

# Sources for Single System Analysis

#

zc_adano.unx za_sa01.ada

# ... -- other za_saXX files

zc_adano.unx za_sa13.ada

#

# Sources for Menu

#

zc_adano.unx za_mn01.ada

# ... -- other za_mnXX files

zc_adano.unx za_mn06.ada

#

# Link

#

zc_lnk.unx menu

#


Figure 9-1 Command File to Compile and Link Analysis Program Set

9.1.2 Preparing the Data

The available output for each system should be concatenated into one log file (or one file each for execution and compilation results) and the names of the log files should be entered in the System Names file. If all log data cannot be made available in one file (or one file each for execution and compilation data) then Condense must be run explicitly on each log file to build the database incrementally. See Section 9.3.5 "Adding Data to the Database (Incremental Mode)".
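As a UNIX sketch (the log file names here are illustrative), the per-group logs for one system can be combined before the resulting file name is entered in the System Names file:

# Concatenate the per-group performance test logs into one log file.
cat tk.log gn.log io.log > system_1.log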

If the log files for a system are read by the Harness, the Harness will create a database of execution-time data for each group. These databases can be input to Condense, instead of the execution log files. This method preserves any changes the user has made to the data in the Harness. To use the Harness-created execution databases as input to Condense, the directory containing the Harness files for a system must be listed in the System Names file. Compilation data, if any, must be read from the log file as usual.

9.1.3 Preparing to Run the Analysis Programs

The System Names file, "za_cosys.txt", must be modified, either by editing it directly or by selecting the "View/modify the system names file" option from one of the Analysis Menu screens. It is required by all analysis programs (Condense, CA, SSA). It is strongly recommended that system names be short (fewer than eight characters). See Figure 9-2 for an example of the System Names file.



>output_path := [terrell.aces.analysis] -- entered by user


>error_file := za_coerr.txt -- entered by user

>request_file_CA := request.ca -- entered by user

>request_file_CON := request.con -- entered by user

>request_file_SSA := request.ssa -- entered by user

>database_CA := CAdb.txt -- default name

>weight_file := structur.txt -- entered by user

>system_name := system_1 -- entered by user

--System_1 comments

>ada_version := 83

>execution_log := [terrell]cmp_1.log -- entered by user

>execution_condensed := system_1.dbs -- entered by user

>compilation_log := [terrell]cmp_1.log -- entered by user

>compilation_condensed := system_1.dbc -- entered by user

>harness_directory := -- entered by user

>system_name := system_2 -- entered by user

>ada_version := 83

>execution_log := [terrell]cmp_2.log -- entered by user

>execution_condensed := [terrell]cmp_2.e01 -- default

>compilation_log := [terrell]cmp_2.log -- entered by user

>compilation_condensed := [terrell]cmp_2.c01 -- default

>harness_directory := [terrell.sys_1] -- entered by user

>system_name := system_3 -- entered by user

>ada_version := 83

>execution_log := -- entered by user

>execution_condensed := system_3.dbs -- entered by user

>compilation_log := [terrell]cmp_3.log -- entered by user

>compilation_condensed := system_3.dbc -- entered by user

>harness_directory := [terrell.sys_3] -- entered by user



Note: Ada style comments on the far right are for information purposes and are not part of the System Names file.



Figure 9-2 Sample System Names File (with VAX VMS Names)

9.1.4 Modifying the System Names File, "za_cosys.txt"

The System Names file, "za_cosys.txt", is used by Condense, Menu, Comparative Analysis, and Single System Analysis. It must be modified by the user to specify the name of each system to be analyzed and the names of the performance test output files (log files) for each system. The directory containing a system's Harness System Name file, "zh_cosys.txt", and the Harness-created execution databases should be specified if the execution data is to be obtained from the Harness databases instead of the log file. Non-default names for some analysis input and output files may also be specified in the System Names file. See Figure 9-2, Sample System Names File, for an example of the format.

To modify the system names file interactively, choose the option "View/modify the system names file" from the Systems menu screen. The next screen presents default values for several file names. These may be changed, but are generally satisfactory. Choosing "Next" brings up a sequence of identical screens, one for each system identified in the system names file. You may replace any of the values given. Note that for each system, you must specify at least one log file or at least one condensed data file. To delete a system from the list, enter "clear". To add a system, choose "Next" until a system with all blank names appears. Enter "Do" at any screen to store ALL system data (including data for any systems that you have not yet viewed).

While the names of other analysis files may be changed either in the Menu or in the System Names file, the name "za_cosys.txt" must be used for the System Names File (unless changed in source code) because programs running in batch mode must be able to find this file. The only time another name may be used is when running the Menu, which will prompt for an alternate name if "za_cosys.txt" is not found. The Menu will immediately copy the alternate file into a file named "za_cosys.txt", however, so that the file is available for the other analysis programs.

9.1.4.1 System Data

For each system, create a system record specifying the system name and the names of its data files or the location of its Harness execution databases. The data files for a system are the execution and compilation log files, the Condensed "database" files containing execution and compilation results, and the Harness group "databases" (optional). See Section 9.6, "ADAPTATIONS AND LIMITATIONS", for a discussion of these "database" files. It is not necessary to have both execution and compilation data to perform analysis, but at least one data file must be specified for each system. Data file names may consist of a prefix and a suffix, each a maximum of 40 characters (the suffix includes the '.'). If the name and label will not fit on an 80-character line, place the name on the line following the label. The Harness directory path name may be a maximum of 40 characters.

* System Name

Specify a system name to the right of the label ">system_name := ". This system name will be used to identify the system in the analysis reports and database files. The system name must be one word (no internal blanks). The system name may be a maximum of 40 characters; it will be truncated to as few as 8 characters for some reports, however, so the name should be unique in the first 8 characters.

* Language Version

Specify 83 or 95 to the right of the label ">ada_version :=", indicating whether the tested system is an implementation of Ada 83 or of Ada 95.

* System Comments

Comments describing the system may follow the system name on subsequent lines. These comments will be inserted in the analysis reports. Each comment must occupy a separate line, beginning with "--", and must be 70 characters or less. System comments are not supported by the Menu's "View/Modify system names file" option.

* Log file names

Specify the name of the performance test execution log file to the right of the label ">execution_log := ". Specify the name of the performance test compilation log file to the right of the label ">compilation_log := ". Both compilation/link times and execution data may be in one log file. If this is the case, enter the file name after both labels. One or both performance test output logs must be specified unless there is an existing database that is to be used in analysis, or unless Condense is being run only to merge the Harness databases.

* Condensed Database Names

The names of the condensed execution database file and the condensed compilation database file may be specified after the labels ">execution_condensed :=" and ">compilation_condensed :=". When a condensed database is created, its name will be written to the System Names file automatically. The user need not specify condensed database file names unless:

+ There is no existing database, and a non-default name is wanted for a database that is to be created; or

+ An existing database file that is not already listed in the System Names file will be used in analysis.

The default name for a database file consists of the prefix of the corresponding log file and the suffix ".e00" for execution data or the suffix ".c00" for compilation data. The "00" in the suffix is incremented each time a new database file is written with that file name prefix. This is done to prevent accidentally overwriting an existing database on systems that do not have file version numbers. A user-specified database file name that contains the string "00" or any two-digit number at the end of the suffix will also be incremented. Database file names with a two-digit number at the end of the suffix must be used on systems without version numbers when Condense is being run to add data to an existing database (because both the old and new files will be open at the same time).

* Harness Directory

The path of a directory containing Harness files for a system may be specified to the right of the label ">harness_directory := ". This must be done if Condense is to create an execution database by reading and merging the databases that the Harness creates for each group. The directory must contain the Harness System Name file, "zh_cosys.txt", and the Harness databases. The Harness directory need not be specified if execution data is to be read from a log file or from an existing database.

Users of the Harness who have tracked their progress by having the Harness read the log files as they were produced will already have all of the execution time results in the Harness databases. If the set_status command has been used, these databases will also contain information that is not available in the log files; in this case, the Condense option to merge the Harness databases into one execution database should always be used. If the set_status command has not been used, the user may either merge the Harness databases or use the database produced by Condense from the log file(s); the result is the same.

However, none of the compilation time data is extracted by the Harness. Users who are not concerned with compilation time analysis do not need to run Condense against the log files at all; users who do want compile times must run Condense.

For users who plan to run Condense, it will be easier if all of the log files to be processed have been concatenated together. For users who have separate logs for compile and for execution results, and who have complete Harness results, it will be faster to merge the Harness databases to produce the execution (and size) results and only run Condense against the compilation logs.

9.1.4.2 User-Specified File Names

The System Names file may be used to specify an output path for analysis reports, and to specify non-default file names for some analysis files. These entries are optional.

* Output path - A directory path for reports output by Condense, CA, and SSA may be specified to the right of the label ">output_path := ". The path name may be a maximum of 40 characters.

* Error File - The Analysis Tools' common Error Message file has a default name of "za_coerr.txt". A different name may be specified to the right of the label ">error_file := ".

* Request File for Comparative Analysis - The request file to be read by CA when running in batch mode has a default name of "za_careq.txt". A different name may be specified to the right of the label ">request_file_CA := ".

* Request File for Condense - The request file to be read by Condense when running in batch mode has a default name of "za_cnreq.txt". A different name may be specified to the right of the label ">request_file_CON := ".

* Request File for Single System Analysis - The request file to be read by SSA when running in batch mode has a default name of "za_sareq.txt". A different name may be specified to the right of the label ">request_file_SSA := ".

* Comparative Analysis Database file - The database file written and read by CA (not a condensed database file) has a default name of "za_cadb.txt". A different name may be specified to the right of the label ">database_CA := ".

* Structure (Weights) file - The Structure file has a default name of "za_cowgt.txt". A different name may be specified to the right of the label ">weight_file := ".

9.1.5 Modifying the Structure (Weights) File, "za_cowgt.txt"

The weights are only used by the CA program, but the other information in this file is used elsewhere. This file must be available (and accurate). The information in this file is used to initialize the data structures in the organization package that define which problems are in which subgroups, which subgroups are in which groups, and so on. This file is delivered with all weights set to 1.0. Modification requires editing the file. For a sample file, see Figure 9-3.

If some test problems are considered more important than others, they can be given a larger weight for the Comparative Analysis program. Weights can vary from 0.0 to 10.0. Weights only matter when problems are assigned different weights: if all problems being analyzed have the same weight, whatever its value, the results are the same as if all weights were 1.0.

Three sets of weights are used in the CA program: weights for test problems (which affect execution time and size results); weights for main programs (which affect compile and link time results); and weights for groups (which affect group summaries of execution time, size, and compile speed data). In the weights (structure) file there is also a space for weights after subgroups. This value, if present, overrides the individual problem weights in that subgroup, making it easy to change all of the weights in the subgroup. This subgroup value will also override all main program weights (used in the compile and link speed analysis).

In Figure 9-3, the weight of 0.0 for the Instantiation subgroup means that all of the weights in that subgroup will be given this value. The absence of a value in this field (as in the Subprogram weight field) means that the weights for the individual tests and main programs are used. Thus, the weights for almost all entities in this subgroup are 1.0, except for gn_su_subprogram_06 and gn_su_subprogram_08 (0.0) and for gn_su_subprogram_09 and gn_su_subprogram_13 (10.0).

Weights of zero (0.0) exclude a problem from analysis.

In the Special Analysis mode, problems may be selected from the entire test suite (this is not possible otherwise). To make this process as painless as possible, the following conventions are followed. Groups that are not selected are excluded (the selection process for groups is explained below). Within the selected groups, subgroups that have weights of zero (0.0) are excluded, and subgroups that have default weights greater than zero are included in full. In subgroups which do not have default weights, problems with weights of zero are excluded; all others are included (with their given weight).

9.1.6 Adding Subgroups, Tests, and/or Main Programs

New tests and main programs can be added and their results will be handled automatically by CA. New subgroups can also be added if the proper conventions are followed (see below).

The first order of business is to build a structure file ("za_cowgt.txt"). This file is a concatenation of all of the Harness weight files (zh_??.txt, where ?? is a group identifier). In order to run any new tests, main programs, or subgroups, the user should already have modified the appropriate Harness weight files. The user must now concatenate these files together, in the proper order, to arrive at a new structure file. The files are concatenated alphabetically, except that "zh_io.txt" must come before "zh_in.txt" and "zh_xh.txt" must come between "zh_dt.txt" and "zh_gn.txt". A sketch is given after this paragraph.
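A minimal UNIX sketch of the concatenation, assuming the Harness weight files are in the current directory. Only the span containing the two ordering exceptions is shown; the remaining groups are appended in plain alphabetical order before and after these five:

# Append each group's weight file in order; the five shown here
# illustrate the two exceptions (xh between dt and gn, io before in).
cat zh_dt.txt zh_xh.txt zh_gn.txt zh_io.txt zh_in.txt >> za_cowgt.txt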

Once the weight files have been concatenated into a structure file, copy this structure file ("za_cowgt.txt") to a new file named "structur.new". Once this is done, the user needs to compile and link two programs: "za_redo.ada" and "za_semic.ada". The code in the file "za_redo.ada" rebuilds the Analysis software modules that identify test names, main program names, and subgroup names. The code in the file "za_semic.ada" rebuilds the Analysis software module that records the number of lines of code and the number of semi-colons in each of the tests and main programs.

The redo program reads through the "structur.new" file and produces one output file named "names.new". This output file contains the specifications for three packages named "Subgroup_Names", "Main_Names", and "Test_Names". The user needs to divide this file into three files, one for each package. The following list identifies the file that is to contain each package specification. Note that the user may want to retain the header comments currently in these files.

Specification Name                   File Name

Subgroup_Names                       za_co03.ada

Main_Names                           za_co01.ada

Test_Names                           za_co02.ada


Before running the semi-colon count program, the user needs to build all of the ".inc" and ".ada" files for the entire ACES. To do this, run the Harness, select all groups, subgroups, and tests, and select the Build operation to generate the main programs (and command script). The value of the Save/Delete option is irrelevant. It is not necessary to execute the resulting script.

The semi-colon count program prompts the user for the name of the structure file, the name of the file to write the output to, and the suffix for Ada programs. This program generates a file containing the package body for the TestRecord package. This file needs to be renamed to "za_sa02.ada" if the default name was not used.
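As a sketch, the three prompts can be answered from a here-document when running non-interactively; the executable name "za_semic" and the answers shown are illustrative:

# Answers, in order: structure file, output file, Ada source suffix.
za_semic <<EOF
structur.new
za_sa02.ada
.ada
EOF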

* Adding subgroups

New subgroups can be added to the Structure file. The prefix (gg_ss) of a subgroup's first main program must provide a subgroup prefix (ss) which is unique within its group. This prefix identifies subgroup membership when the results are displayed. All names of main programs within a subgroup must begin with this prefix (gg_ssm), and all names of tests within a subgroup must begin with this prefix followed by an underscore (gg_ss_).



Group generics wt 1.0


SubGroup instantiation wt 0.0

pckage notneeded file

Main gn_inm01 wt 1.0

status normal compile optimize

Test gn_in_enum_io_01 file gn_in01_.inc wt 1.0

Test gn_in_enum_io_02 file gn_in02_.inc wt 1.0

Test gn_in_enum_io_03 file gn_in03_.inc wt 1.0

Test gn_in_enum_io_04 file gn_in04_.inc wt 1.0

Test gn_in_enum_io_05 file gn_in05_.inc wt 1.0

Test gn_in_enum_io_06 file gn_in06_.inc wt 1.0

Test gn_in_enum_io_07 file gn_in07_.inc wt 1.0

Main gn_inm02 wt 1.0

status normal compile optimize

Test gn_in_enum_io_08 file gn_in08_.inc wt 1.0

Main gn_inm03 wt 1.0

status normal compile optimize

Test gn_in_enum_io_09 file gn_in09_.inc wt 1.0

Main gn_inm04 wt 1.0

status normal compile optimize wt 1.0

Test gn_in_no_formals_01 file gn_in10_.inc wt 1.0

SubGroup subprogram wt

pckage nochecknoinclude file gn_supkg.ada

Main gn_sum01 wt 1.0

status normal compile optimize

Test gn_su_subprogram_01 file gn_su01_.inc wt 1.0

Test gn_su_subprogram_02 file gn_su02_.inc wt 1.0

Test gn_su_subprogram_03 file gn_su03_.inc wt 1.0

Test gn_su_subprogram_04_a file gn_su04_.inc wt 1.0

Test gn_su_subprogram_05 file gn_su05_.inc wt 1.0

Test gn_su_subprogram_06 file gn_su06_.inc wt 0.0

Test gn_su_subprogram_07 file gn_su07_.inc wt 1.0

Test gn_su_subprogram_08 file gn_su08_.inc wt 0.0

Main gn_sum02 wt 1.0

status normal compile optimize

Test gn_su_subprogram_09 file gn_su09_.inc wt 10.0

Test gn_su_subprogram_10 file gn_su10_.inc wt 1.0

Test gn_su_subprogram_11 file gn_su11_.inc wt 1.0

Test gn_su_subprogram_12 file gn_su12_.inc wt 1.0

Test gn_su_subprogram_13 file gn_su13_.inc wt 10.0

Test gn_su_subprogram_14 file gn_su14_.inc wt 1.0

Test gn_su_subprogram_15 file gn_su15_.inc wt 1.0


Figure 9-3 Sample Structure (Weights) File

* Adding main programs

New main programs within existing groups and subgroups (or new ones) can be added. The name of the main program must begin with the appropriate six letter prefix (gg_ssm). This prefix identifies group membership and subgroup membership in displaying the results. The file name prefix and the name of the Ada program within the file should be identical. Only one is stored. The file name appears in the command script files, which must be manually modified, if the user wants the new main programs (and tests) to be run with existing tests. The name of the Ada program appears in compile speed analysis. If there is no compile speed analysis, then the correspondence between file name and Ada program name does not matter. Condense uses the Ada program name when gathering the compile speed times.

If compile speed data is to be gathered by Condense, then it is also necessary to modify the command files so that the appropriate information is written to the log file produced while compiling the tests.

Compilation options in the structure file are for descriptive purposes and do not have to be accurate. They have no impact on the analysis programs; they were only used in producing the preliminary version of the command script files. It is essential, however, that the main program status values of "multipleP" and "normal" be distinguished. (The other values, "noInitTime", "Interactive", "MainWritten", and "noIncludes", are equivalent to "normal".) A status of "multipleP" means that several tests are included in one file, rather than the normal practice of a separate file for each test. Again, if there is no compile speed analysis, this does not matter. A hypothetical entry is sketched below.
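A hypothetical structure file entry for such a main program might look like the following; all names here are invented for illustration, and the shared file name on the Test lines reflects the several-tests-in-one-file packaging:

Main gg_ssm05 wt 1.0

status multipleP compile optimize

Test gg_ss_pair_01 file gg_ss05_.inc wt 1.0

Test gg_ss_pair_02 file gg_ss05_.inc wt 1.0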

* Adding tests

When adding tests, it is important to use the appropriate six character prefix (gg_ss_). Besides identifying group and subgroup membership, the display of test names in CA excludes the first six letters to make the tables more compact. This is done regardless of what these letters are.

Tests should always be added at the end of a subgroup. This is important because file names are not stored; they are inferred. Again, if you are not doing compile speed analysis, this does not matter. If you are doing compile speed analysis and want to use Condense to read the results from the log file, Condense must be able to identify which file name goes with which test, which also means that the files must be named appropriately. File names for tests are numbered sequentially within subgroups. They have eight-character prefixes (gg_ssnn_) and three-letter suffixes; the suffixes do not matter, as Condense does not look at them. The first two letters, "gg", identify the group; "ss" identifies the subgroup; and "nn" is a two-digit sequential number, beginning with "01". For example, the third test of the "su" subgroup in the "gn" group has the file name prefix "gn_su03_".

* Using readStructure routines

The readStructure routines are not very robust, but the user should get a message indicating where an error occurs. Error recovery is very limited: if the file contains five errors, the user will probably have to run the program five times, correcting one error per run.

Figure 9-4 contains a list of the key words. Key words serve the same function in this file as reserved words do in Ada: they help the program which reads the file to identify what kind of information to expect. Capitalization is not important; spelling is. The routines that read the file are in the package STRUCTURE.



Group <name>


SubGroup <name>

pckage notNeeded -- TYPE CommonPkgStatus

Assembly

CheckInclude

noCheckInclude

CheckNoInclude

noCheckNoInclude

Dual

Main <name>

status normal -- TYPE MainStatus

notYetWritten

MultipleP

noInitTime

Interactive

MainWritten

noIncludes


ExeNoFile

CompileOnly

libCommand

Custom

compile UnknownCmp -- TYPE compilationOptions

optimize

check

space


Test <name>

file <8 character name>.<3 character name>

wt <real number: 0.0 .. 10.0>


Figure 9-4 Key Word List in Structure File

9.1.7 Using Condense

Condense must be run on the performance log file(s), or the Harness-created databases (for execution data only) from each system. Condense formats the data from log files and writes it to database files, which are read by CA and SSA. Condense will be called automatically by CA or SSA if neither database file exists for the system being processed. CA and SSA will not run Condense automatically if a database file already exists, because the same log file might be needlessly reprocessed each time. If a condensed database does not exist, and the Harness directory is specified, the Harness' execution databases will be merged and written to a condensed database. If a condensed database does not exist, and the Harness directory is not specified, execution data will be extracted from the log file. Compilation data will be extracted from the log file when condensed databases do not exist.

Condense must be run explicitly if it is to be used in incremental mode, that is, when the user is running one group at a time, and then adding a new group to the database. It may also be run explicitly if the reports that it produces are wanted. See Section 9.3 "CONDENSE" for more detail.

9.2 Using The Analysis Menu

The Analysis Menu allows the user to specify data to be processed by the analysis programs Condense, Comparative Analysis, and Single Systems Analysis. The analysis programs may be called from the Menu, or the requests may be saved to files which are then used as input to the analysis programs when the programs are run in batch mode. The use of the Analysis Menu is optional. See Figure 9-5, The Flow through the Menus, for an overview of the menu organization.



Figure 9-5 The Flow through the Menus

9.2.1 Analysis Menu Inputs

The primary input file for the Analysis Menu is the System Names file ("za_cosys.txt"). See Section 9.1.4 "Modifying the System Names File". The weights file ("za_cowgt.txt") is also required. See Section 9.1.5, "Modifying the Structure (Weights) File".

Other input for the Analysis Menu is keyboard input from the user. See Section 9.3.7 "Condense Menu," Section 9.4.2 "Comparative Analysis Menu," and Section 9.5.2 "Single System Analysis Menu", for detailed examples of user input.

Menu items are selected by typing a number, or a range of numbers (for example, "1..3"), corresponding to an item, and entering a carriage return. Multiple items, or an item and a control sequence, can be entered on the same line, but must be separated by a comma (for example, "2,ne"). Items may be deselected by entering a minus sign: "-", and then the number corresponding to the item.

In some menus, values (usually file name components) are displayed to the right of an item and are preceded by the Ada assignment operator: ":=". These values may be changed by typing the selection number, space, the assignment operator, and then the new value. Example: "1 := users_file_prefix". See Section 9.6 "ADAPTATIONS AND LIMITATIONS". If the new value is too long to fit, type the number, space, the assignment operator, and carriage return, then type the value on a new line. File name prefixes may be a maximum of 40 characters. Suffixes (which include the character '.' as the first character) may also be 40 characters.

Control sequences (shown below in Figure 9-6) may be abbreviated to one character. Control sequences provided (where appropriate) in the Menu are:



Help            Display help information for the current menu.

Next menu       Proceed to the next menu.

Previous menu   Return to the previous menu.

Main            Return to the main (first) menu.

Quit            Return to the operating system.

Default names   Restore default names and values.

Recall          Recall the groups selected in the last request that was run or saved.

Clear           Deselect all items, or remove system data from the System Names file.

Do request      Perform the action chosen (run or save the request, or save the System Names file data).

View/Mod Syst   View or modify the System Names file.


Figure 9-6 Control Sequences in the Menu

9.2.2 Analysis Menu Outputs

The output of the Analysis Menu consists of the request files for Condense, CA, and SSA; the error message file; and a temporary file used when appending to the request files.

* Request files - See Section 9.3.2 "Condense Inputs", for a description of the Condense request file. See Section 9.4.2 "Comparative Analysis Menu", for a description of the CA request file and Section 9.5.2 "Single System Analysis Menu", for a description of the SSA request file.

* Analysis Error File ("za_coerr.txt") - The Menu writes error and informational messages to the common error file used by the analysis tools. The name of this file may be changed in the System Names file ("za_cosys.txt").

* Temporary file ("deleteme.tmp") - Will be created if the option to append to a request file is chosen. It may be deleted after running the Menu.

9.2.3 Analysis Menu Limitations

See Section 9.6 "ADAPTATIONS AND LIMITATIONS", for information on Analysis Menu limitations.

9.3 Condense

The Condense tool prepares ACES performance test data for processing by Comparative Analysis, Single System Analysis, and spreadsheet software, and also produces some reports on the performance test results. Condense extracts compilation-time and link-time data from the performance test log files. Execution and code size data may be extracted from the log file, or from the Harness-created execution databases (one for each group). Condense writes this data to database files (one for execution data and one for compilation/link data). The database files produced by Condense are the input to CA and SSA. Condense can also transform the database files into comma-delimited data files which can easily be loaded into spreadsheet software.

There are two ways to run Condense. The first is to run Comparative Analysis or Single System Analysis, which call Condense; the second is to run Condense explicitly from the Analysis Menu or from the command line. When called automatically, Condense produces no reports.

Condense will be called automatically by Comparative Analysis and Single System Analysis and will create the database files if none exist. If one or both database files exist, Condense exits without processing any data; this avoids reprocessing the same log files each time CA or SSA is run. If condensed database files do not exist, execution data will be extracted from the Harness databases when the Harness directory is specified; otherwise it will be extracted from the execution log file. Compilation data will be extracted from the compilation log file.
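This invocation rule can be summarized in a minimal Ada sketch (illustrative only; the flag names are hypothetical and this is not the ACES source):

with Ada.Text_IO;

procedure Auto_Condense_Rule is
   --  Hypothetical state for one system:
   Exe_DB_Exists         : constant Boolean := False;
   Cmp_DB_Exists         : constant Boolean := False;
   Harness_Dir_Specified : constant Boolean := True;
begin
   if Exe_DB_Exists or Cmp_DB_Exists then
      --  A condensed database is already present: exit without
      --  reprocessing any log files.
      Ada.Text_IO.Put_Line ("Condense exits; nothing is reprocessed.");
   else
      if Harness_Dir_Specified then
         Ada.Text_IO.Put_Line ("Execution data: merge Harness databases.");
      else
         Ada.Text_IO.Put_Line ("Execution data: extract from execution log.");
      end if;
      Ada.Text_IO.Put_Line ("Compilation data: extract from compilation log.");
   end if;
end Auto_Condense_Rule;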

9.3.1 Running Condense Explicitly

Condense may be run explicitly from the Analysis Menu (if all Analysis units are linked into a single Menu executable); or, if linked as a separate executable, by itself in batch mode. Condense should be run explicitly if:

* You want the reports that it produces.

* You want the comma-delimited data files for compilation and execution.

* You are creating the database incrementally. If Condense has already been run for a system, and you wish to add data from a log file to that system's database, Condense must be run explicitly. When called by CA or SSA, it will not rerun when one or both databases exist. See Section 9.3.5 "Adding Data to the Database (Incremental Mode)".

* There is not enough space to link CA and SSA with Condense.

To run Condense explicitly do one of the following:

* Compile and link the Analysis Menu with Condense, invoke the Analysis Menu, and select options for Condense in the Menu. See Section 9.3.7 "Condense Menu".

* Run Condense in batch mode by linking it separately and creating a Request file to specify the systems to process and the report options desired. See Section 9.3.2 "Condense Inputs", for instructions on creating a Request file.

9.3.2 Condense Inputs

Input files for the Condense program are listed below.

* The System Names file ("za_cosys.txt"), which contains the name of each system and the names of the log files for each system. A sample file which the user may modify comes with the test suite. See Section 9.1.4 "Modifying the System Names File, za_cosys.txt", for instructions.

* Performance test log files for each system. One or both log files must be specified unless Harness execution databases are to be used; in that case the execution log file will not be processed and the compilation log will be processed, if available. Depending on how the log data was captured, both logs may be in one file. In this case, the file name should be entered in the System Names file ("za_cosys.txt") after both labels ">execution_log" and ">compilation_log".

+ The execution log file contains execution times and sizes. For information on the format of the log file, see Section 7.3 "OUTPUT".

+ The compilation log file contains compile and link times. For information on the format of the compilation and link results, see the Reader's Guide Section 5.1 on "OPERATIONAL SOFTWARE OUTPUT".

* One or both Condensed databases from a previous run (only in incremental mode). If data from a log file is added to an existing database, the database must be specified in the System Names file. Data from the existing databases and the log files can be merged; data from an existing condensed database cannot be merged with the Harness databases.

* Harness System Name file (only if execution data was processed by Harness). If execution data has been processed by the Harness, and the Harness databases are to be used as input to Condense, then the Harness directory must be specified in the System Names file. In the Harness directory, a Harness System Name file ("zh_cosys.txt") must be available.

* Harness Execution Time/Code Size databases (only if execution data was processed by Harness). If execution data has been processed by the Harness, and the Harness databases are to be used as input to Condense, then the Harness directory must be specified in the System Names file. In the Harness directory, the Harness databases must be available. The names of the databases may be specified in the Harness System Name file or the default names may be used.

* A Request file (only in batch mode). The default name of the Condense Request file is "za_cnreq.txt". The name can be changed in the System Names file ("za_cosys.txt"). Condense Request files can be created by:

+ Running the menu and taking the option to save the request to a Request file. See Section 9.3.7 "Condense Menu".

+ Copying and modifying the sample Request file ("za_cnreq.txt").

The format of the Request file is as shown in the example below. Each request consists of: each system name (as listed in the System Names file, "za_cosys.txt"), followed by the report options in the order shown, the comma-delimited data option, and options for creating or merging databases. Each item is selected with a "+" or deselected with a "-". File name prefixes for the reports may be specified to the right of the system names, and file name suffixes for the reports may be specified to the right of each report name. The default prefixes are the system names and the default suffixes are shown. If the last characters of a user-specified suffix are "00", or any two-digit number, the number will be incremented each time a report with that name is written.

For compilation data, one of two options must be chosen. The first option is to create the compilation database by processing the log data, discarding any existing database. The second option is to create the compilation database by merging the data from the log files with an existing condensed database. For execution data, one of three options must be chosen. The first two options are similar to the compilation database options: to create the execution database from log file input only, or to merge log file input with an existing condensed database. The third option is to create the execution database by merging the Harness-created database files, discarding any existing condensed database, and ignoring the execution log file.

Each system selected will be processed in a separate run, and each report selected will be created for each system selected in that request. Figure 9-7 displays the format of a Request file for Condense.



-- Request file for CONDENSE


-- Selection file name (or prefix or suffix)

+system_1_name := system_1_name

. . .

+system_N_name := system_N_name

+NO_DATA_REPORT := .nda

+EXCEPTIONAL_DATA_REPORT := .exc

+MULTIPLE_RESULTS_REPORT := .mul

-Comma_delimited_data

-Cmp_database_from_log

+Cmp_database_and_log

-Exe_database_from_log

+Exe_database_and_log

-Exe_database_from_harness


Figure 9-7 Condense Request file Format

9.3.3 Condense Outputs

The output of the Condense program is listed below and then discussed in further detail.

* Database files

+ Execution database

+ Compilation database

+ Transportable execution data

+ Transportable compilation data

* Reports

+ No data report

+ Exception data report

+ Multiple results report

* System Names file, "za_cosys.txt"

* Analysis Error file, "za_coerr.txt"

A discussion of each of these items follows.

* Database files

For each system requested, Condense produces a database file for each log file input. The default name for a database file consists of the prefix of the corresponding log file and the suffix ".e00" for execution data or the suffix ".c00" for compilation data. If an execution database was created by merging Harness database files, the default prefix is "mergehrn" and the default suffix is ".e00". The database name may be changed in the System Names file. See Section 9.1.4 "Modifying the System Names File, za_cosys.txt".
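For example, if the log files for a system shared the prefix "perflog" (a hypothetical name), the default execution database would be "perflog.e00" and the default compilation database would be "perflog.c00".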

+ The Execution Database contains the execution time and code size data. The first two lines of the database list the system name and the date that the database was produced by Condense. Following that is a table of error codes. The data for each group is preceded by a group marker. Each result for a problem consists of: problem name, size, minimum time, mean time, inner and outer loop counts, sigma, and, if the measurement is unreliable, the unreliable-data flag "#". The minimum time may be replaced by a negative error code. Each result must occupy one line. A result may be followed by two or more lines of ancillary data preceded by the marker ">>>". If a test problem has several results, only one will be selected for analysis. A result selected for analysis must have a name beginning in Column 1; other results are deselected with "--" in Columns 1 and 2. The measurement values must be in the order specified, but the exact columns occupied by each item are not important. See Figure 9-8, Execution Time/Code Size Database.

+ The Compilation Database contains the compilation time and link time data. The first two lines of the database list the system name and the date that the database was produced by Condense. Following that is a table of error codes. The data for each group is preceded by a group marker. Each result for a main program consists of: main program name, total compilation time for the main program and each of the test files called by it, main program compile time, and link time. Each result for a test file consists of file name and compile time. Any of the times may be replaced by a negative error code. Each result must occupy one line. A result selected for analysis must have a name in Column 1. If a file or main program has several results, only one will be selected for analysis. Other results will be deselected with "--" in Columns 1 and 2. The values must be in the order specified, but the exact columns occupied by each item are not important. See Figure 9-9, Compilation Time/Link Database.

If comma-delimited data is requested, Condense will produce two transportable data files, one for execution and one for compilation. These transportable data files can then be easily read and manipulated by spreadsheet software. The execution file name will be the prefix of the corresponding log file and the suffix ".ecd". The compilation file name will be the prefix of the corresponding log file and the suffix ".ccd".

+ The Transportable Execution data file contains all the data found within the Execution database separated by commas.

+ The Transportable Compilation data file contains all the data found within the Compilation database separated by commas.
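As an illustration (assuming the fields appear in the same order as in the database), the first APPLICATION result of Figure 9-8 might appear in the Transportable Execution data file as:

AP_AI_A_STAR,448.0,720.3,726.2,8,3,0.8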



-- System: system_1_name

-- Condensed: 5 Dec 1991 9:44:56


---------------------------------------

-- err_at_compilation_time = - 1.0

-- err_at_execution_time = - 2.0

-- err_no_data = - 3.0

-- err_dependent_test = - 4.0

-- err_packaging = - 5.0

-- err_unreliable_time = - 6.0

-- err_withdrawn_test = - 7.0

-- delay_problem = - 8.0

-- err_at_link_time = - 9.0

-- err_claim_excess_time = - 10.0

-- err_verification = - 11.0

-- err_inconsistent_results = - 12.0

-- err_large_negative_time = - 13.0

-- not_applicable = - 14.0

---------------------------------------

**** GROUP APPLICATION

(bits) (microseconds)

problem name size min mean in out sigma

-----------------------------------------------------------------------------

AP_AI_A_STAR 448.0 720.3 726.2 8 3 0.8

--AP_AI_ARTIE 0.0 -1.0 0.0 0 0 0.0

AP_AI_ARTIE 64.0 469118.3 483584.4 1 7 6.7

AP_KF_KALMAN 224.0 1516903.9 1538662.6 1 3 1.7

>>> ap_kf_kalman

>>> approximate time per filter call: 1896.1 number of iterations: 800

**** GROUP ARITHMETIC

(bits) (microseconds)

problem name size min mean in out sigma

-----------------------------------------------------------------------------

--AR_CX_CONV_FIXED_01 576.0 1.4220E+00 1.4293E+00 17 3 0.7

AR_CX_CONV_FIXED_01 576.0 1.4073E+00 1.4215E+00 17 3 0.9

AR_CX_CONV_FIXED_02 384.0 1.1217E+00 1.1286E+00 17 3 0.6


Figure 9-8 Execution Time/Code Size Database



-- System: system_1_name


-- Condensed: 5 Dec 1991 9:44:56

---------------------------------------

-- err_at_compilation_time = - 1.0

-- err_at_execution_time = - 2.0

-- err_no_data = - 3.0

-- err_dependent_test = - 4.0

-- err_packaging = - 5.0

-- err_unreliable_time = - 6.0

-- err_withdrawn_test = - 7.0

-- delay_problem = - 8.0

-- err_at_link_time = - 9.0

-- err_claim_excess_time = - 10.0

-- err_verification = - 11.0

-- err_inconsistent_results = - 12.0

-- err_large_negative_time = - 13.0

-- not_applicable = - 14.0

---------------------------------------

**** GROUP APPLICATION

program/file total_compile_time compile_time link_time

----------------------------------------------------------------------

ap_ai01_ 23.0

AP_AIM01 39.0 16.0 18.0

ap_ai02_ 75.0

AP_AIM02 89.0 14.0 17.0

**** GROUP ARITHMETIC

program/file total_compile_time compile_time link_time

----------------------------------------------------------------------

ar_cx01_ 8.0

ar_cx02_ 7.0

--AR_CXM01 31.0 16.0 18.0

AR_CXM01 21.0 6.0 17.0


Figure 9-9 Compilation Time/Link Database

* Reports

Condense produces three optional reports. The reports may be requested by selecting them in the Menu, or by preceding the report names with "+" in the Request file. The default file names of the reports are made up of a prefix that is the same as the system name specified in the System Names file, and a suffix of ".nda" for the No Data Report, ".mul" for the Multiple Results Report, and ".exc" for the Exceptional Data Report. See the Reader's Guide Section 5.2 "CONDENSE OUTPUT", for a detailed description of the reports.

+ No Data Report

Lists each test which has no execution time and each file or main program which has no compilation time or no link time. If an entire group is missing, the group is listed. If a group is already in the database, it is not included in the report unless data is added to it. Tests marked "not applicable" are not listed as missing.

+ Exceptional Data Report

Lists each test problem which has an exceptional execution result and each file or main program with an exceptional compilation or link result. If a group is already in the database, it is not included in the report unless data is added to it. Possible exceptional result codes are listed below in Figure 9-10. Tests marked "not applicable" are not listed as exceptional.



EXECUTION TIME ERROR CODES

ERR_AT_COMPILATION_TIME

ERR_AT_EXECUTION_TIME

ERR_DEPENDENT_TEST

ERR_PACKAGING

ERR_UNRELIABLE_TIME

ERR_WITHDRAWN_TEST

ERR_AT_LINK_TIME

ERR_CLAIM_EXCESS_TIME

ERR_VERIFICATION

ERR_LARGE_NEGATIVE_TIME

COMPILE TIME ERROR CODES

ERR_AT_COMPILATION_TIME

ERR_DEPENDENT_TEST

ERR_AT_LINK_TIME

ERR_WITHDRAWN_TEST

ERR_INCONSISTENT_RESULTS


Figure 9-10 Exceptional Data Report Error Codes

+ Multiple Results Report

Lists each test which has more than one result (has been run more than once). If a group is already in the database, it is not included in the report unless data is added to it.

* System Names file ("za_cosys.txt")

Condense rewrites the System Names file, inserting the names of the new database files. The System Names file is then ready for use by Comparative Analysis and Single System Analysis.

* Analysis Error file ("za_coerr.txt")

Error and informational messages are written to this file.

9.3.4 Condense Processing

Condense is designed to select one result to be used in analysis for each test problem, to diagnose some execution-time errors, and to check compilation-time and link-time results against the execution-time results for a test problem.

9.3.4.1 Diagnosis of Errors

Condense will insert an error code (see Section 7.3 "OUTPUT", Figure 7-2, Error Codes) for a test result in the following situations:

* An execution error is suspected in the execution results. When Condense finds a performance test name without a corresponding test measurement result, it will insert an execution time error code for the problem. It is assumed that the test failed after writing its name, but before writing results.

* An execution result invalidates a compilation and link time. If both the execution-time and the compilation-time log are available, Condense will check the compilation-time results against the execution-time results, and will invalidate compilation results in certain situations.

+ Some errors appear only in the execution results, but apply to the compile and link time result also.

- Compilation error codes are output at run time, because they are output by dummy program units which execute instead of the real test units when the test units fail to compile. Compilation error codes are extended to test file compilation and link results if the codes appear in the execution results.

- Withdrawn error codes appear only in the execution-time results. A withdrawn error code will be inserted for a compilation or link result, if one appears in the corresponding execution results.

+ Errors in the execution results raise questions about the validity of the compilation, or make it impossible to choose among several results. In these cases, the compilation and link result is not used for analysis; an "inconsistent results" error is inserted in the database. If the user wishes to select a result, the inconsistent error can be deleted and another result uncommented.

- If the execution result for a test is an execution error, it is not clear whether the compilation should be considered valid. The failure of the test could be due to improper compilation, or to a run-time issue such as lack of space.

- If there is a set of valid execution results, and a set of errors, and more than one compilation time, Condense cannot determine which is the "good" compilation time. The user may select one. If there is only one compilation result, it is used in analysis. The assumption here is that all execution results are from the same compile - and sometimes the test fails at run time for reasons not related to the compiler.

9.3.4.2 Selecting from Several Results for Analysis

Condense outputs all results to the database, but only one is chosen for the analysis tools to use. The chosen result is uncommented; other results are commented out with the Ada comment characters "--".

* Selecting execution times and sizes

If a test problem has a not_applicable result code, that result is selected and all other results, whether valid or errors, are commented out.

If results are valid, the smallest minimum time is chosen for analysis, along with the size that corresponds to that time (sizes are not expected to vary with each run).

If results are invalid, error codes are chosen in this order:

+ ERR_WITHDRAWN_TEST

+ ERR_UNRELIABLE_TIME

+ ERR_VERIFICATION

+ ERR_CLAIM_EXCESS_TIME

+ ERR_LARGE_NEGATIVE_TIME

+ ERR_DEPENDENT_TEST

+ ERR_AT_EXECUTION_TIME

+ ERR_PACKAGING

+ ERR_AT_LINK_TIME

+ ERR_AT_COMPILATION_TIME

* Selecting compilation and link times

If results are valid, and execution results are valid or are missing or not applicable, the smallest compilation time is chosen for analysis, along with the corresponding link time. (This link time may not be the smallest.)

If checking against execution results produces an error code, that code is selected in the database for use by CA and SSA. Otherwise, if a test has different error codes from multiple test runs, an error code is selected for SSA and CA in the following order (error codes occurring earlier in the list override codes occurring later); a short sketch of this selection rule follows the list.

+ ERR_WITHDRAWN_TEST

+ ERR_DEPENDENT_TEST

+ ERR_AT_LINK_TIME

+ ERR_AT_COMPILATION_TIME

+ ERR_INCONSISTENT_RESULTS
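Because error codes occurring earlier in these lists override later ones, the selection reduces to scanning an enumeration in declaration order and taking the first code present. The following minimal Ada sketch illustrates the idea for the compilation/link precedence list (the type, set representation, and sample input are illustrative assumptions, not the ACES source):

with Ada.Text_IO;

procedure Pick_Compile_Error is
   --  Declaration order encodes the precedence given above.
   type Error_Code is
     (Err_Withdrawn_Test,
      Err_Dependent_Test,
      Err_At_Link_Time,
      Err_At_Compilation_Time,
      Err_Inconsistent_Results);

   type Code_Set is array (Error_Code) of Boolean;

   --  Hypothetical input: two different codes seen across test runs.
   Seen : constant Code_Set :=
     (Err_At_Link_Time | Err_At_Compilation_Time => True,
      others                                     => False);
begin
   for Code in Error_Code loop
      if Seen (Code) then
         --  The first code present, in declaration order, wins.
         Ada.Text_IO.Put_Line ("Selected: " & Error_Code'Image (Code));
         exit;
      end if;
   end loop;
end Pick_Compile_Error;

Running this sketch would report ERR_AT_LINK_TIME, since it precedes ERR_AT_COMPILATION_TIME in the declaration order.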

9.3.5 Adding Data to the Database (Incremental Mode)

Condense can be used to add data from a log file to an existing database when run explicitly from the Menu or in batch mode. Condense will not add to a condensed database when run automatically from CA or SSA, and will not add Harness databases to an existing condensed database. Data may be added to the database by running Condense multiple times with different log files specified in the System Names file ("za_cosys.txt").

By default, Condense adds data to an existing database instead of creating a new database each time it is run. In the request file, the options to add data to the database are labeled "CMP_DATABASE_AND_LOG" and "EXE_DATABASE_AND_LOG". In the Analysis Menu, both options are labeled "Append log data to existing database".

The suffix of the database file name will be incremented if it ends in "00", or a two-digit number, as the default name does. An incrementing database name should be used on operating systems without file version numbers, because both old and new database files must be open at the same time when databases are merged.
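For example, with a hypothetical log file prefix of "perflog", the first run would create the execution database "perflog.e00"; a later incremental run would write the merged data to "perflog.e01", leaving "perflog.e00" intact while both files are open during the merge.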

Adding data to the database incrementally will be helpful on systems with limited space, where Condense may require too much memory when processing all performance test results at once. If data for a group is in the database, but no new data for that group is in the log file, the group data will be copied from the old database to the new one. Such a group does not appear in the reports.

9.3.6 Modifying the Database Manually

The user may modify the database manually by using a text editor to insert or delete data, or to select or deselect results. The format described in Section 9.3.3 "Condense Outputs", must be followed. The user may wish to modify the database manually in the following situations:

* Some error codes (see Figure 7-2) cannot be issued by the test programs or diagnosed by Condense.

+ System-dependent tests - Tests that fail due to system dependencies are likely to appear as missing tests, or be marked as compile-time or execution-time errors. The user should mark these tests in the database with the numeric code for a system-dependent test (-4.0); an example of such an edit appears after this list.

+ Link-time errors - Link-time errors cannot be diagnosed by the test suite or by Condense. If the user determines that a test failed at link time, the numeric code for a link-time error (-9.0) can be inserted in the database for this test.

+ Execution errors - Most execution errors will be diagnosed by the test or by Condense. Some execution-time failures may occur before the test has output a test name or any results. If the user determines that there was an execution-time failure, the execution-time error code (-2.0) should be inserted for the test.

* The user may resolve inconsistent results errors in the compilation/link time database by selecting one result for analysis when there are several compilation times, and corresponding execution times are a mixture of valid times and error codes. The inconsistent results error code will also occur when corresponding execution times are all errors. The user must decide whether to use compilation/link times in this situation.

* The user may want to select a different result than that chosen for analysis by Condense. All results are written to the database. Results that are not selected for analysis are commented out. The user may select another result by uncommenting it and commenting out the result that had been chosen by Condense.

* The system may not support capturing log results to a file.
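As an illustration of such an edit (hypothetical values, following the execution database format of Figure 9-8), a test known to have failed because of a system dependency could be given a result line whose minimum time is replaced by the system-dependent code:

AP_AI_ARTIE 0.0 -4.0 0.0 0 0 0.0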

9.3.7 Condense Menu

Condense may be run by using the Analysis Menu, or the user may save the selections to the Request file. The Request file may also be constructed manually. Figures 9-11 through 9-15 display the Menu screens for Condense. Sample user responses are shown enclosed in quotation marks, after the prompt arrow "=>".



---------------------------MAIN MENU-------------------------


Tools:

1. -CONDENSE

2. -COMPARATIVE_ANALYSIS

3. -SINGLE_SYSTEM_ANALYSIS

Report Selection Methods

4. -INTERACTIVE SELECTION OF REPORTS

5. -DEFAULT REPORT SELECTION

6. -REPORT SELECTION FROM AN EXISTING REQUEST FILE

--------------------------------------------------------------------

Help Quit

Next menu

Select one tool and one report selection method, and Next to continue

=> "1,4,n"



Figure 9-11 Main Menu Select Condense

This response says, "select Condense, interactively select reports and report options, and go to the next menu immediately".



---------------------------SYSTEMS MENU-----------------------------


(The present file contains data for the following 5 systems)

1. -system1 := system1

2. -sys_1 := sys_1

3. -sys_2 := sys_2

4. -sys_3 := sys_3

5. -sample := sample

6. -All Systems

--------------------------------------------------------------------


Help Quit Main


Next menu Previous menu Default names View/Mod syst

Select system(s) to be analyzed and Next, or command

=> "6,n"


Figure 9-12 Systems Menu Select All Systems

This response says, "select all of the available systems and go to the next menu immediately."

Condense produces three optional reports for each system. The file names for the reports are made up of a prefix and a suffix, each up to 40 characters long. The file name prefixes for each system appear to the right of the system names, after the assignment symbol ":=". The default file name prefixes are the same as the system names.

If the user had wanted to change the prefix for the first system, but make the same choices, the user might have responded with:

"1 := new_name"

After each carriage return, the menu would have been rewritten, showing the last requested change. Another alternative would be:

"1 := sys1, n"



----------------------CONDENSE: REPORT OPTIONS----------------------


1. -NO_DATA_REPORT := .nda

2. -EXCEPTIONAL_DATA_REPORT := .exc

3. -MULTIPLE_RESULTS_REPORT := .mul

4. -All Above

5. -Comma delimited data

Compilation/Link database alternatives (SELECT 1):

6. -Create new database from log data

7. +Append log data to existing database (DEFAULT)

Execution/Code Size database alternatives (SELECT 1):

8. -Create new database from log data

9. +Append log data to existing database (DEFAULT)

10. -Merge Harness databases

--------------------------------------------------------------------

Help Quit Main

Next menu Previous menu Default names

Select options if desired

=> "4,n"


Figure 9-13 Condense Report Options

This response says, "select all reports, append data to existing databases, and go to the next menu immediately."

The default names of the report files for system 1 would be:

No Data Report : system1.nda

Exceptional Data Report : system1.exc

Multiple Results Report : system1.mul

If the default name had been changed to "newname":

No Data Report : newname.nda

Exceptional Data Report : newname.exc

Multiple Results Report : newname.mul



----------------------RUN OR SAVE REQUEST--------------------


Current Selection Is:

PROGRAM : CONDENSE

SYSTEMS : system1, sys_1, sys_2, sys_3, sample

OPTIONS : NO_DATA_REPORT, EXCEPTIONAL_DATA_REPORT, MULTIPLE_RESULTS_REPORT

Produce textual data

Append log to existing cmp database.

Append log to existing exe database.

1. -Run immediately

2. -Store request in new request file

3. -Append request to existing request file

--------------------------------------------------------------------

Help Quit Main

Previous menu Do request

Select one option and enter 'Do' to apply

=> "1,do"


Figure 9-14 Run or Save Request Menu

This response says, "I want to run Condense; do it immediately." If the response "1" <cr> had been entered, the menu would be rewritten with option "1" selected with a "+", but Condense would not be executed until the user entered "do" <cr>.

This request, like all requests, may also be written to a new Request file (overwriting the current Request file) ("2,do"), or appended to the current Request file ("3,do"). In this manner, the request(s) can be run in batch mode. In addition, a Request file is a way to save a record of what analysis has been done. The default name for the Condense Request file is "za_cnreq.txt". The user can opt to use a different Request file by creating a new Request file with a system editor and declaring its name using Option 6 on the Main Menu.

The following Request file summarizes the choices made. Alternatives can be selected by editing the Request file, changing "+" to "-" or vice versa, and changing the file prefixes and suffixes, or by running the Menu.



-- Request file for CONDENSE


-- Selection file name (or prefix or suffix)


+system1 := system1

+sys_1 := sys_1

+sys_2 := sys_2

+sys_3 := sys_3

+sample := sample

+NO_DATA_REPORT := .nda

+EXCEPTIONAL_DATA_REPORT := .exc

+MULTIPLE_RESULTS_REPORT := .mul

-Comma_delimited_data

-Cmp_database_from_log

+Cmp_database_and_log

-Exe_database_from_log

+Exe_database_and_log

-Exe_database_from_harness


Figure 9-15 Sample Request file for Condense

9.4 Comparative Analysis

9.4.1 Input and Output Files

The input and output files, and their default names as given in the System Names file, are listed below.

* Input Files

+ System Names file - "za_cosys.txt".

+ Request file - given in the System Names file. Default is "za_careq.txt".

+ Comparative Analysis database file - given in the System Names file. Default is "za_cadb.txt". Used to produce the Summary of All Groups report.

+ Database files (produced by Condense) - given in the System Names file.

+ Structure (weights) file - given in the System Names file. Default is "za_cowgt.txt".

+ Sample Data file for execution times and code size data based on the trial systems - given in the System Names file. This optional file is "za_smple.e00".

+ Sample Data file for compilation and link time data based on the trial systems - given in the System Names file. This optional file is "za_smple.c00".

* Output Files

The Comparative Analysis database file is also an output of Comparative Analysis (CA). It is written to summarize the results from individual group analyses.

The output file(s) depend on the options selected from the Menu or the Request file. If the Single Output file option is selected, then all output from the current request will be written to one file with the designated file name. Otherwise, one file will be produced for each group selected (and for the Summary of All Groups, if selected). This holds true for each metric.

CA may be run by using the Menu program, or the user may choose selections to be saved to the Request file, and run CA in batch mode later.

9.4.2 Comparative Analysis Menu

The following Figures 9-16, 9-17, 9-18, 9-19a, 9-19b, and 9-20 display the Menu screens for Comparative Analysis. Sample user responses are shown enclosed in quotation marks, after the prompt arrow "=>".



------------------------------MAIN MENU-----------------------------


Tools:

1. -CONDENSE

2. -COMPARATIVE_ANALYSIS

3. -SINGLE_SYSTEM_ANALYSIS

Report Selection Methods

4. -INTERACTIVE SELECTION OF REPORTS

5. -DEFAULT REPORT SELECTION

6. -REPORT SELECTION FROM AN EXISTING REQUEST FILE


--------------------------------------------------------------------

Help Quit

Next menu

Select one tool and one report selection method, and Next to continue

=> "2,4,n"


Figure 9-16 Main Menu Select Comparative Analysis

The sample user response means, "select Comparative Analysis, interactively select reports and report options, and go to the next menu immediately."



---------------------------SYSTEMS MENU-----------------------------


1. -system1

2. -sys_1

3. -sys_2

4. -sys_3

5. -sample

6. -All Systems

--------------------------------------------------------------------

Help Quit Main

Next menu Previous menu View/Mod syst

Select 2 or more systems to be compared or select 6 and enter 'Next':

=> "6,n"


Figure 9-17 Systems Menu Select Systems to be Compared

This response says, "select all of the available systems and go to the next menu immediately."



---------------------------METRICS MENU-----------------------------


1. -EXECUTION_TIME := .tim

2. -CODE_SIZE := .siz

3. -COMPILATION_TIME := .cmp

4. -LINK_TIME := .lnk

5. -COMBINED_COMPILATION_LINK_TIME := .cml

6. -All Metrics

--------------------------------------------------------------------

Help Quit Main

Next menu Previous menu Default names

Select one or more metrics or a command

=> "1,3,n"


Figure 9-18 Metrics Menu

This response says, "select the Execution-time data and the Compilation-time data for analysis and go to the next menu immediately."



----------------------------GROUPS MENU-----------------------------


--------------------------PAGE ONE OF TWO---------------------------

0. -All Groups

1. -APPLICATION := applic00

2. -ARITHMETIC := arithm00

3. -CLASSICAL := classi00

4. -DATA_STORAGE := storag00

5. -DATA_STRUCTURES := struct00

6. -DELAYS_AND_TIMING := delays00

7. -EXCEPTION_HANDLING := except00

8. -GENERICS := generi00

9. -INPUT_OUTPUT := input_00


10. -INTERFACES := interf00


11. -MISCELLANEOUS := miscel00


(Go to Next Menu to Review Remaining Groups)


--------------------------------------------------------------------

Help Quit Main Recall

Next menu Previous menu Default names Clear

Select one or more groups

=> "4,n"


Figure 9-19a Groups Menu (1 of 2)

This response says "Select the Data Storage Group (4) and go to the next menu immediately."


----------------------------GROUPS MENU-----------------------------



--------------------------PAGE TWO OF TWO---------------------------


0. -All Groups

11. -MISCELLANEOUS := miscel00

12. -OBJECT_ORIENTED := object00

13. -OPTIMIZATIONS := optimi00

14. -PROGRAM_ORGANIZATION := progra00

15. -PROTECTED_TYPES := protec00

16. -STATEMENTS := statem00

17. -STORAGE_RECLAMATION := reclam00

18. -SUBPROGRAMS := subpro00

19. -SYSTEMATIC_COMPILE_SPEED := system00

20. -TASKING := taskin00

21. -USER_DEFINED := user_d00

(Go to Previous Menu to Review Remaining Groups)

--------------------------------------------------------------------

Help Quit Main Recall

Next menu Previous menu Default names Clear

Select one or more groups


=> "16,20,n"


Figure 9-19b Groups Menu (2 of 2)

This response says "Select the Statements Group (16) and the Tasking Group (20) for analysis and go to the next menu immediately."

If the user had wanted to change the prefixes (on either of these Groups Menu screens), but make the same choices, these entries might have been made:

"4 := data_storage" <cr>

"16 := statements" <cr>

"20 := tasking" <cr>

Note that these menus always appear sequentially, even if All Groups (0) is selected on screen 1 of 2.

After each carriage return, the menu would have been rewritten, showing the last requested change.



-----------------COMPARATIVE ANALYSIS (CA): REPORTS-----------------


1. -GROUP_LEVEL_REPORTS

2. -SUMMARY_OF_ALL_GROUPS_REPORT := summry00

3. -Both of the above

4. -SPECIAL_REPORT (APPLICATION PROFILE) := specia00

(CHOOSE one or both of the next two options)

5. +Produce Text Reports

6. -Produce Comma-Delimited Reports

Additional optional selections

7. -Write text reports chosen in current

request to a SINGLE file := compar00.rpt

8. -Write Comma-Delimited reports in

current request to a SINGLE file := cd_all00.rpt

9. -Change length of Text Report output

line to := 80

--------------------------------------------------------------------

Help Quit Main

Next menu Previous menu Default names

Select one or more reports or ('num := newvalue')

=> "3,n"


Figure 9-20 Comparative Analysis Report Options

This response says, "select both the Group Level and the Summary of all Groups Report and go to the next menu immediately." Because Option 5 is selected as a default the system will generate a text file report. The current request, generated from previous menu screens, indicates that the Data Storage group, the Statements group, and the Tasking group should be used with the Execution Time and Compile Time metrics for each analysis.

The user must choose at least one report (1, 2, or 4), and must choose Text Reports (Option 5), and/or Comma-Delimited Reports (Option 6). These report options are applied to each of the groups and metrics chosen in previous menu screens.

The option to create Comma-Delimited Reports will create a file or files that contains the raw and residual data for the requested reports. The comma-delimited data is in a form that can easily be imported into spreadsheet software or other analysis tools.

Options 7 and 8 indicate that all of the textual report data or all of the Comma-Delimited report data will be written to one file. The user can change the default name by entering (for example):

"7 :=ALL_TEXT_DATA.RPT"

or

"8 :=ALL_CD_DATA.RPT"

If Option 7 is not chosen, then the text reports will be written to file names that are a combination of the file prefix indicated in the Groups Menu and the file suffix indicated in the Metrics Menu. For example, the previous request would generate the six files:

storag00.tim statem00.tim taskin00.tim

storag00.cmp statem00.cmp taskin00.cmp

If Option 8 is not chosen, then the Comma-Delimited report files will have the name prefix as given from Figure 9-21 and the suffix from the Metrics Menu.



APPLICATION                 cd_ap00

ARITHMETIC                  cd_ar00

CLASSICAL                   cd_cl00

DATA_STORAGE                cd_so00

DATA_STRUCTURES             cd_dr00

DELAYS_and_TIMING           cd_dt00

EXCEPTION_HANDLING          cd_xh00

GENERICS                    cd_gn00

INPUT_OUTPUT                cd_io00

INTERFACES                  cd_in00

MISCELLANEOUS               cd_ms00

OBJECT_ORIENTED             cd_oo00

OPTIMIZATIONS               cd_op00

PROGRAM_ORGANIZATION        cd_po00

PROTECTED_TYPES             cd_pt00

STATEMENTS                  cd_st00

STORAGE_RECLAMATION         cd_sr00

SUBPROGRAMS                 cd_su00

SYSTEMATIC_COMPILE_SPEED    cd_sy00

TASKING                     cd_tk00

USER_DEFINED                cd_ud00

Figure 9-21 File Prefix for Comma-Delimited Reports
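Continuing the earlier example (assuming Option 8 is not chosen), the comma-delimited reports for the Data Storage, Statements, and Tasking groups would be written to cd_so00.tim, cd_st00.tim, and cd_tk00.tim for execution time, and to cd_so00.cmp, cd_st00.cmp, and cd_tk00.cmp for compilation time.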

Option 9, which changes the length of the text report output line, is necessary when many systems are being compared. Naturally, it does not help unless the user has an appropriate way to display or print the wider line. Suggested values: 1-3 systems, the default of 80; 4 systems, 87 characters; 5 or more systems, 87 characters plus 11 for each system beyond the fourth.
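For example, comparing seven systems would call for a line length of about 87 + (3 x 11) = 120 characters.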

The Special Report (Application Profile Report) option should not usually be chosen along with the Group-level (Figure 9-22) or Summary of all Groups Reports (Figure 9-23). The Special Report is valuable for users who want to compare weighted averages of selected data points. A Special Report may encompass data from several groups. A regular report will never cross group boundaries.



----------------------CA: GROUP-LEVEL OPTIONS-----------------------



------------------FOR REPORTS ON SELECTED METRICS-------------------


1. -Summary Report Only

2. -Summary Report and All Full Reports

3. + Main Report (default)

4. - Test Problems with Errors Report

5. - Sorted List of Outliers Report

--------------------------------------------------------------------

Help Quit Main


Next menu Previous menu




Select one or more reports



=> "2,n"



Figure 9-22 CA Group-Level Report Options

This response says, "select all Full and Summary reports and go to the next menu immediately."

It is not possible to select the Full Report and not select the Summary Report, since the summary is a part of the full report. Also, if the Full Report is chosen, a Main Report is always produced regardless of which options are selected.

The Summary of all Groups Report depends on a database file that is produced when running the Group Level Report. The Summary of all Groups Report should not be selected unless all Group Level Reports (for the current set of systems) have been selected or have previously been generated. If the Summary is selected without having all the Group Level Reports, then the program will terminate after issuing error messages regarding the state of the database file ("za_cadb.txt").



-----------CA: SUMMARY-OF-ALL-GROUPS-LEVEL REPORT OPTIONS-----------


------------ONE REPORT PRODUCED FOR EACH SELECTED METRIC------------

1. -All Report Sections: High, Intermediate and Full Summary

2. - Both High Level (Bar Chart) Sections (3..4)

3. - Vertical Bars (Overall System Factors and Successes)

4. - Horizontal Bars (Overall System Factors and Successes)

5. - All Intermediate Level Summary Sections (6..9)

6. - System Factors for each group - comparing all systems

7. - System Factors for each system - comparing all groups

8. - Successes for each group - comparing all systems

9. - Successes for each system - comparing all groups

10. + Full Summary Section only (also selected in all other cases)

--------------------------------------------------------------------

Help Quit Main

Next menu Previous menu

Select one or more reports

=> "1,n"


Figure 9-23 CA Summary of all Groups Level Report Options

This response says, "select all Summary of all Groups level reports including the high level, intermediate level, and the full report for selected metrics and go to the next menu immediately."

The Summary Of All Groups Report is based on the results from the regular group analyses. The number of groups included in this report depends on which data is available in the CA database, which is written after each regular group analysis is completed. Thus, it is not necessary to request a report for each group at the same time that the Summary Of All Groups is requested. However, it is necessary that the group reports have been requested previously.

One difficulty can arise here. The CA database always maintains a set of consistent findings, consistent in the sense that they are based on the same systems. If the user selects reports comparing different subsets of the available systems, CA will recognize this and will not add the inconsistent results to the same database file. The management of such reports is a user responsibility; it is not done automatically. CA will start a new database when an inconsistency is discovered. This may result in the loss of previous results unless the name of the CA database file is changed in the System Names file, "za_cosys.txt". See the Reader's Guide Section 5 for an explanation of the content of the different report options. A sample of a Run or Save Request Menu is shown in Figure 9-24 below.



------------------------RUN OR SAVE REQUEST-------------------------




Current Selection Is:

PROGRAM : COMPARATIVE_ANALYSIS

SYSTEMS : system1, sys_1, sys_2, sys_3, sample

METRICS : EXECUTION_TIME, COMPILATION_TIME

GROUPS : DATA_STORAGE, STATEMENTS, TASKING

OPTIONS : Text Reports

Group-Level Report: Full

Summary-of-All-Groups-Group-Level Report:

High Level, Intermediate Level, Full Report

Output line length: 80


1. -Run immediately


2. -Store request in new request file


3. -Append request to existing request file


--------------------------------------------------------------------

Help Quit Main

Previous menu Do request

Select one option and enter 'Do' to apply

=> "1,d"


Figure 9-24 Run or Save Request Menu

This request says, "I elect to have my request done now; go do it."

This request, like all requests, may also be written to a new Request file (overwriting the current Request file) ("2,do"), or appended to the current Request file ("3,do"). In this manner, the request(s) can be run in batch mode. In addition, a Request file is a way to save a record of what analysis has been done. The default name for the CA Request file is "za_careq.txt". The user can opt to use a different Request file by creating a new Request file with a system editor and declaring its name using Option 6 on the Main Menu.

The Request file name may be changed in the System Names file, "za_cosys.txt".

9.4.3 Comparative Analysis Request File

The following Request file (Figure 9-25) summarizes the choices made. Alternatives can be selected by editing the Request file, changing "+" to "-", or vice versa, and by changing file prefixes and suffixes.

Notice that some choices override others. The selection of a single output file makes the changes to the default prefixes for group reports irrelevant. However, if that choice is revoked, then those names would be used.



-- Request file for COMPARATIVE_ANALYSIS


-- Selection file name (or prefix or suffix)

-Single_output_file := compar00.rpt

-Single_CD_output_file := cd_all00.rpt

+Text_Reports

-CD_Reports

+system1

+sys_1

+sys_2

+sys_3

+sample

+EXECUTION_TIME := .tim

-CODE_SIZE := .siz

+COMPILATION_TIME := .cmp

-LINK_TIME := .lnk

-COMBINED_COMPILATION_LINK_TIME := .cml

-APPLICATION := applic00

-ARITHMETIC := arithm00

-CLASSICAL := classi00

+DATA_STORAGE := storag00

-DATA_STRUCTURES := struct00

-DELAYS_AND_TIMING := delays00

-EXCEPTION_HANDLING := except00

-GENERICS := generi00

-INPUT_OUTPUT := input_00

-INTERFACES := interf00

-MISCELLANEOUS := miscel00

-OBJECT_ORIENTED := object00

-OPTIMIZATIONS := optimi00

-PROGRAM_ORGANIZATION := progra00

-PROTECTED_TYPES := protec00

+STATEMENTS := statem00

-STORAGE_RECLAMATION := reclam00

-SUBPROGRAMS := subpro00

-SYSTEMATIC_COMPILE_SPEED := system00

+TASKING := taskin00

-USER_DEFINED := user_d00

+GROUPLEVELREQ

-SUMMARYONLY

+ERRORS

+OUTLIERS

+SUMMARYLEVELREQ := summry00

+VERTICAL

+HORIZONTAL

+SYSTEMFACTORS

+ERRORS

+EACHGROUP

+EACHSYSTEM

-SpecialReport := specia00

+Output_line_length := 80


Figure 9-25 Request file for Comparative Analysis

The options are the same in either interactive or batch mode. In the Request file an option is selected when preceded with a plus sign ('+') and not selected when preceded with a minus sign ('-'). The options are:

* Single output file - If this option is selected, then text output from this request is written to the file named here. All other prefixes and suffixes are ignored.

* Single CD output file - If this option is selected, then comma-delimited output from this request is written to the file named here. All other prefixes and suffixes are ignored.

* Text Reports - If this option is selected, then text reports are generated for the current request. At least one of the text report options or the comma-delimited report option must be chosen.

* Comma-Delimited Reports - If this option is selected, then comma-delimited reports are generated for the current request. At least one of the text report options or the comma-delimited report option must be chosen.

* Systems - Two or more systems must be selected from the list in the System Names file, "za_cosys.txt".

* Metrics - Four kinds of measurement data are gathered while compiling and running the ACES test suite. They are performance test execution-time measurements, performance test code-size results, file compilation times, and program link times. Some library command execution times are measured and treated as compilation times. In addition, a combined compile/link time may be chosen. One or more metrics must be chosen. Separate reports are produced for each metric. If the data is not available for the requested report, then the report will say that no cases are available.

If the single output file has not been selected, then separate report files will be written with names formed by concatenating the corresponding prefix for the group choice(s) with the corresponding suffix for the metric choice.

* Groups - One or more groups must be selected or the Summary of All Groups Report must be selected under report options.

If the Summary of All Groups Report is selected, it will be produced after reports are generated for each group selected.

If the single output file has not been selected, then separate report files will be written with names formed by concatenating the corresponding prefix for the group choice(s) with the corresponding suffix for the metric choices.

* Report options - The Summary of All Groups for the selected metric(s) report will be based on the data available in the CA database at that time. Remember that only consistent data will be in the CA database at any time. (Consistent means that the data is from the same set of systems.) This means that all of the groups that you wish included in the summary must either be selected this time, or have been selected, with the same systems and the desired metric, in previous sequential runs. Otherwise, the CA database will not have the information needed when the summary is attempted. This report (like all CA reports) is separately generated for each metric. It does not summarize results from different metrics in the same report.

For the individual group reports, the Summary Report is a subset of the Full Report. You cannot select the Full Report without the Summary. A separate report is produced for each selected group. Except for the Special Report discussed, reports never cross group boundaries. The Summary of All Groups Report is based on the findings from each group, not on the concatenated data from the groups. Special Report results are not included in the Summary of All Groups findings.

The Special Report should be run by itself, with no other report options. The reason for this restriction is that the Special Report requires its own weights file, which will probably not produce meaningful results for other analyses. Along with the special weights file, the user can select which test problems to include by selecting the appropriate groups. All groups selected are included; all other groups are excluded. Within a group, problems are selected by weight. A weight of "0.0" excludes a problem; a nonzero weight includes it. Subgroup weights override individual weights for tests and for main programs. In this manner, whole subgroups can be excluded. An example of the weight entries appears below.
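For example, a fragment of a special weights file might use the "Test" and "wt" keywords of Figure 9-4 as follows (the test names and weight values are purely illustrative):

Test AP_AI_A_STAR
wt 0.0
Test AP_KF_KALMAN
wt 2.5

Here the first problem is excluded by its zero weight, while the second is included and weighted.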

9.5 Single System Analysis

9.5.1 Input and Output Files

The input and output files for Single System Analysis (SSA), and their default names as given in the System Names file, are listed below.

* Input Files

+ System Names file - "za_cosys.txt".

+ Request file - given in the System Names file. Default is "za_sareq.txt".

+ Structure (weights) file - given in the System Names file. Default is "za_cowgt.txt".

+ Table templates for SSA

- "za_salft.ssa" - template file for language feature tests

- "za_saopt.ssa" - template file for optimization tests

- "za_sarts.ssa" - template file for run time system tests

- "za_sasty.ssa" - template file for coding style tests

* Output Files

+ Main Report : system_name.rep

+ Missing Data Report : system_name.mis

+ High Level Summary : system_name.hls

9.5.2 Single System Analysis Menu

The following Figures 9-26, 9-27, 9-28, 9-29, 9-30, and 9-31 display the Menu screens for Single System Analysis. The sample user responses are shown enclosed in quotation marks, after the prompt arrow "=>".



------------------------------MAIN MENU-----------------------------


Tools:

1. -CONDENSE

2. -COMPARATIVE_ANALYSIS

3. -SINGLE_SYSTEM_ANALYSIS

Report Selection Methods

4. -INTERACTIVE SELECTION OF REPORTS

5. -DEFAULT REPORT SELECTION

6. -REPORT SELECTION FROM AN EXISTING REQUEST FILE

--------------------------------------------------------------------

Help Quit

Next menu

Select one tool and one report selection method, and Next to continue

=> "3,4,n"


Figure 9-26 Main Menu Select Single Systems Analysis

This response says, "select Single System Analysis, and interactively select reports and report options, and go to the next menu immediately."



---------------------------SYSTEMS MENU-----------------------------


(The present file contains data for the following 5 systems)

1. -system1 := system1

2. -sys_1 := sys_1

3. -sys_2 := sys_2

4. -sys_3 := sys_3

5. -sample := sample

6. -All Systems

--------------------------------------------------------------------

Help Quit Main

Next menu Previous menu Default names View/Mod syst

Select system(s) to be analyzed and Next, or command

=> "6,n"


Figure 9-27 Systems Menu Select All Systems

This response says, "select all of the available systems and go to the next menu immediately." For SSA, this is a request for a separate report for each system selected. The file names for the reports are made up of a prefix and a suffix, each up to 40 characters long. The file name prefixes for each system appear to the right of the system names, after the assignment symbol ":=". The default file name prefixes are the same as the system names.



---SINGLE SYSTEM ANALYSIS (SSA): HIGH LEVEL SUMMARY REPORT OPTIONS--


1. -High Level Summary File Extension := .hls

Sections - MUST Select One or More of the Following Sections

2. - All

3. - Execution

4. - Code Size

5. - Compilation

6. - Errors

--------------------------------------------------------------------


Help Quit Main


Next menu Previous menu Default names

Select one or more reports and a command

=> "2,n"


Figure 9-28 SSA High Level Summary Report Options

This response says, "select all of the report options, and then go to the next menu."

SSA report options are explained more fully in the Reader's Guide Section 5.4 where examples are given for the types of reports which may be requested. The options are only meaningful with report Sections 3..6. For each of those sections requested, the options chosen will be applied to the report generated.

The default prefixes for the reports are the system names displayed in the Systems Menu.



----------SINGLE SYSTEM ANALYSIS (SSA): MAIN REPORT OPTIONS---------



1. -Main Report extension := .rep

Sections: MUST Select At Least ONE Section

2. - All Sections

3. - Language Feature Overhead

4. - Optimizations

5. - Runtime System Behavior

6. - Coding Style Variations

7. - Ancillary Data

Options: MUST Select At Least ONE Option

8. - All Options

9. - Write Problem Descriptions

10. - Write Statistical Tables

11. - Write Table Summaries

12. - Write Missing Data Report := .mis

--------------------------------------------------------------------

Help Quit Main

Next menu Previous menu Default names

Select one or more options

=> "1,3,9,n"


Figure 9-29 SSA Main Report Options

This response says, "select the Language Feature Overhead Section and Write Problem Descriptions for the Main Report options, and then go to the next menu."



------------------------RUN OR SAVE REQUEST-------------------------


Current Selection Is:

PROGRAM : SINGLE_SYSTEM_ANALYSIS

SYSTEMS : system1, sys_1, sys_2, sys_3, sample

OPTIONS : High Level Summary

Sections: EXECUTION, CODESIZE, COMPILATION, ERRORS

Main Report

Sections: LANGUAGEFEATURES

Options: DESCRIPTIONS



1. -Run immediately


2. -Store request in new request file

3. -Append request to existing request file

--------------------------------------------------------------------

Help Quit Main

Previous menu Do request

Select one option and enter 'Do' to apply

=> "1,d"


Figure 9-30 Run or Save Request Menu Select "Run"

This response says, "I elect to have my request done now; go do it."

This request, like all requests, may also be written to a new Request file, overwriting the current Request file ("2,do"), or appended to the current Request file ("3,do"). In this manner, the request(s) can be run in batch mode. See Section 9.1.1 for details of creating the individual executable files. In addition, a Request file is a way to save a record of what analysis has been done. The default name for the SSA Request file is "za_sasys.txt". The user can opt to use a different Request file by creating one with a system editor and declaring its name using Option 6 on the Main Menu.

9.5.3 Single System Analysis Request File

The Request file name may be changed in the System Names file "za_cosys.txt".



-- Request file for SINGLE_SYSTEM_ANALYSIS


-- Selection file name (or prefix or suffix)

+system1 := system1

+sys_1 := sys_1

+sys_2 := sys_2

+sys_3 := sys_3

+sample := sample

+HIGH_LEVEL_SUMMARY := .hls

+MAIN_REPORT := .rep

-MISSING_DATA_REPORT := .mis

+EXECUTION

+CODESIZE

+COMPILATION

+ERRORS

+LANGUAGEFEATURES


-OPTIMIZATIONS



-RUNTIME


-STYLE

-ANCILLARY

+DESCRIPTIONS

-STATISTICALTABLES

-SUMMARIES


Figure 9-31 Request File for Single System Analysis

Options for the SSA are the same whether running in interactive or batch mode. In the Request file, options are selected by preceding them with a plus sign ('+') and deselected by preceding them with a minus sign ('-'); a sketch of such a file follows the list below. The options are:

* Systems - One or more systems must be selected from the list in the System Names file, "za_cosys.txt". SSA only runs on the data from one system at a time. If you choose several systems, you will get a separate SSA report for each system.

* Report options - These options are discussed more fully in the Reader's Guide.
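
For example, the following Request file sketch, patterned on Figure 9-31, requests only a High Level Summary (Execution section) for the "sample" system; every other system, report, section, and option is deselected. The sketch retains every line of Figure 9-31 and flips only the signs; whether deselected lines may simply be omitted is not shown in the figures.

-- Request file for SINGLE_SYSTEM_ANALYSIS

-- Selection file name (or prefix or suffix)

-system1 := system1
-sys_1 := sys_1
-sys_2 := sys_2
-sys_3 := sys_3
+sample := sample
+HIGH_LEVEL_SUMMARY := .hls
-MAIN_REPORT := .rep
-MISSING_DATA_REPORT := .mis
+EXECUTION
-CODESIZE
-COMPILATION
-ERRORS
-LANGUAGEFEATURES
-OPTIMIZATIONS
-RUNTIME
-STYLE
-ANCILLARY
-DESCRIPTIONS
-STATISTICALTABLES
-SUMMARIES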

9.6 Adaptations And Limitations

The following adaptations may be made:

* Line length - The line length of CA reports can be changed in the menu or in the CA Request file. The line length of SSA reports cannot be changed.

* Page length - The page length of CA reports can be changed by editing the page-length value in the CA package body, then recompiling the package body ("za_ca08.ada") and relinking (see the sketch following this list). The page length of SSA reports cannot be changed.

* File names - All file names can be changed. Report names can be changed either by running Menu or by editing the Request files. Database and log file names can be changed by editing the System Names file, "za_cosys.txt". The name of the System Names file itself can only be changed by editing the source code. If the structure file is being modified (for example, to add new tests), then the names of the input and output files for the process can be changed by editing the program "za_redo" and recompiling this small procedure.
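
For example, the page-length adaptation amounts to editing one declaration and rebuilding. The following Ada fragment is a hypothetical sketch; the actual constant name and default value in "za_ca08.ada" may differ.

-- Hypothetical fragment of the CA package body ("za_ca08.ada").
-- The actual constant name and default value may differ.
Page_Length : constant Positive := 60;
-- After editing this value, recompile "za_ca08.ada" and relink CA.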

The following limitations should be noted:

* On all Analysis Menus that display options with file names, there is a limitation on the user input for changing the prefixes or suffixes: there must be a space between the selection number and the ":=" assignment operator (see the example following this list).

* The line length of CA reports must be at least 80 characters; this is enforced in the code.

* If you want a Summary of All Groups Report from CA and you proceed incrementally, then you must either leave your system choices unchanged or change the name of the CA database; otherwise, the data from the group analyses will be lost.

* If the user adds tests and wants to do compile speed analysis, then the tests must be added at the end of a subgroup, and the file naming conventions must be followed. Everything else is done automatically when the "structure" file is changed and then used to regenerate the Names packages.

* If the user adds tests, the test naming conventions (for the first six characters) must be followed.

* If the user adds main programs, the test naming conventions (for the first five characters) must be followed.

* The "database" files are text files and are not protected or manipulated by any database management system. There is no mechanism to provide for concurrent updating or accessing of the files. Only one executable program may run against the database files at any time.

* The Analysis Menu is designed to display up to 26 items on each menu. If more than 25 systems or groups are to be processed, the Menu cannot be used; instead, Request files should be constructed manually, and Condense, CA, and SSA run in batch mode.
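
For example, a hypothetical response on the SSA Main Report Options menu that changes the report suffix of selection 1 to ".out" must include the space before the assignment operator:

=> "1 := .out" (accepted: space between the selection number and ":=")

=> "1:= .out" (rejected: no space after the selection number)

The suffix ".out" here is only illustrative.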

10. ACES USER FEEDBACK

ACES users have two formal paths for providing feedback to influence future ACES development: they can submit written problem reports, and they can submit change requests. No telephone support is provided.

10.1 How To Submit A Problem Report

Not every problem an ACES user encounters with the test suite will be appropriate to report through the ACES problem reporting system. If an ACES program uncovers a clear error in a compilation system, this should simply be reported to the organization maintaining the compiler for resolution. Not all ACES programs will be portable to all systems, since the test suite includes test problems to explore the performance of some implementation-dependent features, and may have some programs which test features not supported on all targets. For example: tying tasks to interrupts; file I/O operations; operations on extended precision floating point types; interface to assembler routines; and some large programs which may exceed the capacity of some systems (at either execution time or compile time).

Failure of a test program is not sufficient reason to write an ACES problem report, unless the user believes that the failure is due to an error in the test problem itself, and is neither a reflection of an implementation error, nor a (legally) unsupported feature, nor a capacity limitation. Alternatively, a test program may compile and execute without errors on an Ada implementation, but a user could still believe that the program is erroneous and submit an ACES problem report. This might occur when:

* The program improperly uses implementation-dependent features, even though it worked on all the systems tested until the problem was discovered.

* A test problem is unexpectedly optimizable into something much different from the original "intent" of the test problem, as stated in its purpose. An example would be one where an optimizing compiler determines that the initialization code for a test problem can be folded into the body of the test problem, so that the test problem simplifies into a literal assignment, when the stated purpose of the problem is not to check for folding. The test problem, as distributed, may be a valid test problem for detecting the presence of the "unexpected" optimization, but the purpose is wrong and the original intent may not be adequately tested for in the suite.

* A test problem does not perform essentially the same computations on each repetition of the Timing Loop; such a test problem is invalid and should be corrected, and the case should be reported.

After completing the form on the next page, mail it to:

Brian Andrews, HOLCF Technical Director

Ada Compiler Evaluation System Software Problem Report

88 CG/SCTL

3810 Communications Suite 1

Wright-Patterson AFB, OH 45433-5707

or e-mail to: andrewbp@email.wpafb.af.mil

Ada COMPILER EVALUATION SYSTEM

SOFTWARE PROBLEM REPORT

ORIGINATOR IDENTIFICATION

Originator's Name ________________________________________________

Organization ________________________________________________

Address ________________________________________________

Telephone ________________________________________________

e-mail ________________________________________________

Date ________________________________________________

SYSTEM IDENTIFICATION

ACES VERSION __________________________________

Compilation System Version __________________________________

Host Operating System Version __________________________________

Target Operating System Version __________________________________

Hardware Identification __________________________________

(If a test program is submitted for incorporation into the ACES, identify where it has been tested)

PROBLEM DESCRIPTION

Source File with Problem ___________________________________________________

Explanation _____________________________________________________________

_____________________________________________________________

_____________________________________________________________

_____________________________________________________________

_____________________________________________________________

_____________________________________________________________

_____________________________________________________________

_____________________________________________________________

_____________________________________________________________

_____________________________________________________________

_____________________________________________________________

(attach more pages if necessary)

10.2 How To Request Changes

The procedure for requesting changes, in either operations or interpretation, is the same as for submitting problem reports. Readers and users may submit different types of requests. Readers would be likely to request modifications to analysis output, or the addition of new test problems (or areas which should be tested). Users may request changes in the packaging of problems into programs, or modifications to control procedures.

The depth of detail of a change request may vary. Users may request the incorporation of a new test problem (which is submitted for consideration), or there may be a less specific request asking for more emphasis on some areas of concern. The more specific a request is, the easier it will be to respond to. The change request will be logged and evaluated, and a determination will be made.

After completing the form on the next page, mail it to:

Brian Andrews, HOLCF Technical Director

Ada Compiler Evaluation System Change Request

88 CG/SCTL

3810 Communications Suite 1

Wright-Patterson AFB, OH 45433-5707

or e-mail to: andrewbp@email.wpafb.af.mil

Ada COMPILER EVALUATION SYSTEM

CHANGE REQUEST

ORIGINATOR IDENTIFICATION

Originator's Name ________________________________________________

Organization ________________________________________________

Address ________________________________________________

Telephone ________________________________________________

e-mail ________________________________________________

Date ________________________________________________

SYSTEM IDENTIFICATION

ACES VERSION __________________________________

Compilation System Version __________________________________

Host Operating System Version __________________________________

Target Operating System Version __________________________________

Hardware Identification __________________________________

(If a test program is submitted for incorporation into the ACES, identify where it has been tested)

CHANGE DESCRIPTION AND JUSTIFICATION

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

(attach more pages if necessary)

11. NOTES

11.1 Abbreviations, Acronyms

ACEC Ada Compiler Evaluation Capability

ACES Ada Compiler Evaluation System

ACM Association for Computing Machinery

AJPO Ada Joint Program Office

CA Comparative Analysis

CPU Central Processing Unit

DEC Digital Equipment Corporation

HOLCF High Order Language Control Facility

LRM (Ada) Language Reference Manual (ANSI/MIL-STD-1815A)

NUMWG Numerics Working Group (ACM SIGAda organization)

OS Operating System

SIGAda Special Interest Group on Ada (ACM sponsored organization)

SSA Single System Analysis (ACES analysis tool)

VAX Virtual Address eXtension (DEC family of processors)

VDD Version Description Document

VMS Virtual Memory System (DEC operating system for VAX processors)

12. INDEX

Cross Reference Index for ACES Document Set

Acronyms, Abbreviations
Primer 9
RG 13.1
UG 11
Adding/Modifying Tests
RG 6.8
UG 5.4.3, 6.13, 9.1.4, 9.1.5, 9.1.6, 9.3.6
Addressing
Primer 3.2.2, 3.2.5
RG 2.4.3, 6.7, 8.4.1, 10
UG 4.3.3.3, 5.1.1, 5.1.2
Analysis, Running
Primer 1.2, 2.1.3, 3.2.4, 3.2.11, 3.3, 4, 4.1.1, 5, 5.2.1, 5.3.1, 6.4.1, 6.5, 6.5.2, 7.3.3
RG 2.4.2.3, 2.4.2.4, 2.4.2.5, 3.2.7.3, 3.6, 3.6.3, 3.6.4, 5.1
UG 5.4.1, 7, 9.1, 9.2, 9.3, 9.4, 9.5, 9.6
Capacity Assessor
Primer 6.1, 6.5, 6.5.1, 7.3.4
RG 2.4.3, 3.6.4, 8.4
UG 4.3.3.2, 8.4, App. F
Code Size
Primer 1.4.1, 1.4.2, 3.2.2, 3.2.5, 5.3.2, 7.2.1, 7.2.2
RG 5.2.2.1.1, 5.2.2.2.1, 5.2.2.3.1, 5.4.2, 5.4.2.2, 5.4.4.1
UG 4.3.3.3, 5.1.1, 5.1.2, 9.3, 9.3.2, 9.4
Comparative Analysis
Primer 3.2.14, 5.2, 5.2.1, 7.1
RG 2.4.2.3, 5.3, 7.2
UG 2.1, 5.4.3, 6.1, 6.9.2, 9.4
Compatibility of Test Suite
RG 11
UG 4.3.2, 4.3.3.3, 5.1.4, 5.1.6.1, 9.4.3, 10.1
Condense
Primer 3.2.13, 5.1
RG 5.2
UG 2.1, 5.1.8, 6.1, 6.2.3, 6.10.1, 9.1.4.1, 9.1.6, 9.1.7, 9.3
CPU Time
Primer 3.2.2, 3.2.3, 3.2.4, 3.2.5
RG 6.3.2.6, 6.4.2.2, 6.4.2.3
UG 4.3.3.3, 5.1.1, 5.1.2, 5.1.3, 5.4.1
Data Summary Table
Primer 7.1.2
RG 5.3.2.2.3
Decision Issues
Primer 1.4, 1.4.1, 1.4.2, 1.4.3, 3, 3.1, 7.2.2
RG 3.2.6.1, 3.6.2, 5.3, 5.3.2.2.6, 5.3.2.2.7, 5.4.4.3, 6.4.1, 7.2, 7.3
UG 5.1.1, 5.1.6.3, 5.4.3, 6.1, 6.3, 6.9.6, 6.10.7, 9.1.1, 9.3.7, 9.4.3, 9.6
Diagnostic Assessor
Primer 6.3, 7.3.2
RG 2.4.3, 3.6.3, 8.2
UG 2.2, 4.3.3.2, 8.2, App. D
Erroneous Tests
RG 10
UG 6.10
Exceptional Data Report
Primer 3.2.13, 5.1
RG 5.2.2.2
UG 9.3.3
File Name Conventions
Primer 2.1
RG 5.2.2
UG 4.3.3.1.1
Globals
Primer 3.2.1, 3.2.5, 6.5, 6.5.1
RG 3.2.5.2, 8.4.2
UG 4.3.3.3, 6.10.7, 8
Harness
Primer 2.1.2, 3.2.11, 4.1, 4.3
RG 5.2
UG 5.1.7, 6.0
History (ACES)
Primer 1.1
RG 2, 2.1, 2.2, 3.1, 7.1
UG 2
Include
Primer 2.1, 3.2.5, 3.2.10, 4.3.2
RG 4.3.3.1.1, 5.5.2, 6.5, 6.11, 9.1.6
UG 2.1, 4.3.3.1.1, 5.1.1, 6.10.7
Interfacing Spreadsheets
Primer 1.1, 5.1, 5.2.4, 7.1.2
RG 2.3, 9.3, 9.3.1, 9.3.2
UG 2.3, 9.3
Interpreting Results
Primer 1.2, 3, 4.2, 7
RG 5.3.2.2.1, 6.1, 7
UG 2, 2.1, 4.2, 9.3.4.2
Level of Effort
Primer 6.5.1.3, 6.5.4
RG 2.4, 7.2, 7.6, 8.1
UG 6.5.1.3, 6.5.4
Math Implementation
Primer 3.2.5
RG 3.2.3
UG 5.1.4, 5.1.6.1
Operating System Commands
See UNIX Commands
Optimization
Primer 5.3.1, 5.3.3, 7.2.2
RG 6.2, 6.3.2, 6.6
UG 5.1.6.2, 9.1.1
Output
Primer 1.2, 1.4.1, 3.1.1, 3.2.1, 4.3.1, 7.1.1
RG 5
UG 5.1.3.13, 5.1.6.2, 6.2, 6.9.2, 7.3, 9.1, 9.2.2, 9.3.3, 9.4.1, 9.5.1
Performance Tests
Primer 1.2, 2.1.3, 4, 4.1
RG 5.4.2.2, 5.4.4.4, 8.3.1
UG 4.3.2, 5.3, 7.0
Pretest
Primer 3.2, 3.2.1 - 3.2.15
RG 3.2.3, 6.3.1, 6.5, 10
UG 5.1, 5.2, App. B
Program Library Assessor
Primer 6.3, 6.4, 6.4.1, 7.3.3
RG 2.4.3, 3.6.2, 8.3
UG 4.3.3.2, 8.3, App. E
Quick-Look
Primer
Referenced Documents
RG 1, 1.1, 1.2
UG 1, 1.1, 1.2
Reports
Primer 4.2, 6.5.2
RG 2.4.2.3, 2.4.2.4, 2.4.2.5, 2.4.3, 3.1, 3.2.4.2, 3.2.7.4, 3.6, 4, 5
UG 4.2.1, 6.5.3.3, 6.10.7, 6.11, 7.1, 7.3, 8
Resources Needed
Primer 3.5, 5.1, 6.3.1, 6.5.2
RG 3.2.7.3, 3.4, 3.6.2, 3.6.4, 8.2.1, 8.3.1
UG 4.2, 4.2.1
Setup
See Pretest
Simulators
RG 6.9
UG 5.4.1
Single System Analysis
Primer 3.2.15, 5.3, 7.2
RG 5.4
UG 9.5
Symbolic Debugger Assessor
Primer 6.2, 6.2.1, 6.2.2, 7.3.1
RG 2.4.3, 3.6.1, 8.1
UG 8.1, App. C
Testing Scenarios
Primer 1.4
RG 2.1.3.1, 3.2.1, 3.2.3, 3.6.2, 5.1, 5.4, 6.3.2.6, 8.1, 8.2, 8.4, 10
UG 4.2.2, 6.11, 8.3
Timing Techniques
Primer 2.1, 2.1.3, 3.1, 3.2.5, 3.2.10, 6.4, 7.1.2
RG 6.0, 6.2, 6.3, 6.3.1, 6.3.2, 6.3.2.5, 6.7, 7.3
UG 5.1.1, 5.1.2, 5.1.5, 7.3
Usability
Primer 1, 1.2
RG 2.1, 2.4.2.2, 3.1, 3.2.1, 3.2.3, 3.6.2, 5.1, 5.4, 6.3.2.6, 8.1, 8.2, 8.4, 10
UG 4.2.2, 5.1.6.1.3, 6.11
User Adaptation
RG 2.4
UG 5.4, 6.11, 9.6
User Feedback
RG 12
UG 10