!section 04.01 (04) Hans Hurvig 892408 8301328
!version 1983
!topic Access 'out' parameter as attribute prefix
!reference 6.2(5), A62006D.ADA, B62006C.ADA
The rule in 4.1(4) forbidding an access 'out' parameter as a
prefix for any attribute is overly restrictive.
Its justification is that such a prefix can imply that the
access value is dereferenced, namely for the attributes that require the
prefix to be appropriate for the designated type:
'CALLABLE, 'FIRST, 'LAST, 'LENGTH, 'RANGE, and 'TERMINATED.
But using an access 'out' parameter as a prefix for other
attributes is harmless, and making it illegal is a distinct loss
of functionality.
For instance, 'ADDRESS is quite well-behaved for any 'out'
parameter, and it is very odd indeed to single out those that
happen to have an access type.
I'd argue that the rule in 4.1(4) should simply be deleted as
redundant; paragraph 6.2(5) says that an 'out' parameter cannot
be read, and using an access object as a prefix for one of the
attributes listed above constitutes reading it, because the attribute
refers to the denoted entity, thereby making it illegal.
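To illustrate the distinction being drawn (the names ACC_STR and SHOW
are hypothetical, not taken from the referenced tests):

   type ACC_STR is access STRING;
   procedure SHOW (P : out ACC_STR) is
   begin
      ... P'ADDRESS ...   -- harmless: refers to the parameter P itself
      ... P'LAST ...      -- would dereference P, i.e. read it
   end SHOW;

Under the suggested reading, P'ADDRESS is unobjectionable, while
P'LAST is already illegal by 6.2(5) alone, with no need for the
blanket rule in 4.1(4).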
*****************************************************************************
!section 04.01 (05) Peter Belmont 830923 8300071
!version 1983
!topic When is a prefix a function call?
4.1(5) says:
"If the prefix of a name is a function call, then
the name [containing the prefix] is ... an attribute ...
(if the result is an access value) of the object
designated by the result."
Is the following ambiguous under this rule or not:
   type xxx is access integer;
   foo : xxx := new integer;
   function get_foo return xxx is ...
   ... get_foo'address ...
In this example, the programmer has possibly meant to refer to the
address of "get_foo" itself, but may have meant to refer to the
address of the integer pointed at by the access value returned by a
call of "get_foo".
If the trial meanings of a simple name, "get_foo", include both its
meaning as the name of a function and also its meaning as a
parameterless call of that function, then the rule of 4.1.4(3) requires
the compiler to declare this program fragment ambiguous, since the
prefix cannot be resolved independently of context. On the other hand,
4.1.4(3) does not allow any context to influence the determination of
this meaning. Without context, can "get_foo" be understood as a
call? 4.1(5) doesn't fit well with 4.1.4(3), since the former looks
at context ("if the prefix of a name is ...") and the latter asks
that the context of the prefix be ignored.
************************************************************************
!section 04.01 (05) 13.07.02(06) Ron Brender 850726 8300593
!version 1983
!topic A data point regarding resolution of attribute prefixes
!reference AI00015, 8300418
At the May 1985 LMC meeting, several examples served as focal points
of our discussion. I ventured the opinion that whatever resolution
the LMC might reach should be suspect if it meant that any of the
examples were ambiguous. I made this statement because I believe
nearly all (currently) validated implementations would find them not
ambiguous.
As one data point, here is the result of compiling these examples,
namely in lines 8, 9, 14 and 15, with Digital's VAX Ada:
1
2 with SYSTEM; use SYSTEM;  -- needed to use 'ADDRESS
3 procedure AI15EX is
4
5 procedure P (X : INTEGER);
6 function P (X : INTEGER) return STRING;
7
8 type ARR is array (P(3)'range) of INTEGER;  -- a) P(3) ambiguous?
9 O1 : INTEGER := P(3)'SIZE;  -- b) P(3) ambiguous?
................................1
%E, (1) Result of function P at line 6 is not an object, type, or subtype
10 O2 : ADDRESS := P(3)'ADDRESS;  -- c) P(3) ambiguous?
................................1
%E, (1) Result of function P at line 6 is not an object, program unit, label,
or entry
11
12 function G return STRING;
13
14 O3 : INTEGER := G'LAST;  -- d) G ambiguous?
15 O4 : INTEGER := G'SIZE;  -- e) G ambiguous?
............................1
%E, (1) Function specification G at line 12 is not an object, type, or subtype
16 O5 : ADDRESS := G'ADDRESS;  -- f) G ambiguous?
17
18 type ACC_STR is access STRING;
19 function F (X : INTEGER) return ACC_STR;
20 function F return ACC_STR;
21
22 O6 : ADDRESS := F(1)'ADDRESS;  -- g) F(1) ambiguous?
.............................1
%E, (1) Name is ambiguous; an object, program unit, label, or entry is required
%I, (1) For F the meanings considered are:
Function specification F at line 19 with result type ACC_STR at
line 18
Call of F at line 20 with no parameters returning ACC_STR at line 18
Function specification F at line 20 with result type ACC_STR at
line 18 (discarded)
For unresolved application F the meanings considered are:
Call of F at line 19 returning ACC_STR at line 18
Array component {component of array STRING} in predefined STANDARD
of type CHARACTER in predefined STANDARD
For literal 1 the result type is any integer
23
24 procedure P (X : INTEGER) is
25 begin null; end;
26
27 function P (X : INTEGER) return STRING is
28 begin return ""; end;
29
30 function F (X : INTEGER) return ACC_STR is
31 begin return null; end;
32
33 function F return ACC_STR is
34 begin return null; end;
35
36 function G return STRING is
37 begin return ""; end;
38
39 begin
40 null;
41 end;
Note that while lines 9 and 15 have errors reported, these result from
post-resolution legality checks -- no ambiguity was involved.
For good measure, a few other variations are included, including lines
18 through 22, which are taken from comment 8300418. VAX Ada finds
F(1)'ADDRESS ambiguous in line 22; in 8300418, Goodenough argues it
is not (should not be) ambiguous.
*****************************************************************************
!section 04.01 (10) M Woodger 881113 8301119
!version 1983
!topic A mistake?
!reference Comment 8300533, p29 of AI00039/06
Delete the last words ", except in the case of the prefix of a representation
attribute (see 13.7.2)".
*****************************************************************************
!section 04.01.01 (04) F.Mazzanti  S&M 880712 8300990
!version 1983
!topic discriminant change after prefix evaluation

The evaluation of an indexed component (or slice), whether as a name or a
value, can be erroneous if the prefix denotes a subcomponent
that depends on discriminants of an unconstrained record variable, and the
value of any of these discriminants is changed during the evaluation of the
index expressions (or discrete range).

E.g. In the following example:

type VARIANT (DISCR : BOOLEAN := TRUE) is
   record
      case DISCR is
         when TRUE =>
            T_COMPONENT : STRING(1..80);
         when FALSE =>
            F_COMPONENT : REAL;
      end case;
   end record;

UNCONSTR: VARIANT;

function F return INTEGER is
begin
   UNCONSTR := (DISCR => FALSE, F_COMPONENT => 1.0);
   return 1;
end;

C : CHARACTER := UNCONSTR.T_COMPONENT(F);  -- potentially erroneous

If the expression used as index (the function call F) is evaluated after
the prefix (i.e. while the value of the discriminant is TRUE), the execution
can be unpredictable (erroneous), because the component denoted by the prefix
ceases to exist and the further selection of the array component is undefined.

If the expression of the index is evaluated before the prefix, the
execution is well defined (i.e. not unpredictable), because CONSTRAINT_ERROR
is raised per 4.1.3(8) when the prefix is evaluated.

In any case the indexed component is an incorrect construct, but this fact
should not interfere with the issue of its erroneousness (the evaluation can
be unpredictable, and not simply implementation dependent in a way discouraged
by the reference manual).

Suggested addition in 4.1.1 (and/or 4.1.2):
For the evaluation of an indexed component (or slice), if the prefix denotes
a subcomponent that depends on discriminants of an unconstrained record
variable, then the program execution is erroneous if, once the prefix has
been evaluated, the value of any of these discriminants is changed before the
completion of the evaluation of the indexed component (or slice).

*****************************************************************************
!section 04.01.03 (14,16) J. Goodenough 840313 8300343
!version 1983
!topic Using a name decl by a renaming decl as a selector
A renaming declaration does not declare an entity; it declares "another name
for an entity" [8.5(1)] (see also [3.1(1)]). The rules in 4.1.3 for expanded
names require that the selector of an expanded name be either:
"an entity declared in the visible part of a package" or
"an entity whose declaration occurs immediately within a named
construct"
The implication is that the name declared by a renaming declaration can never
be used as the selector in an expanded name.
This conclusion does not, presumably, reflect the intent. The Standard should
replace "an entity" in the above rules by "a name".
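A sketch of the situation in question (the names PKG, X, Y and V are
hypothetical):

   package PKG is
      X : INTEGER;
      Y : INTEGER renames X;   -- declares a name, not an entity
   end PKG;
   ...
   V : INTEGER := PKG.Y;       -- presumably intended to be legal

Read literally, the quoted rules would reject PKG.Y, since Y is not
itself "an entity declared in the visible part of a package".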
************************************************************************
!section 04.01.03 (14-18) Peter Belmont 830427 8300137
!version 1983
!topic which kind of expanded name
The LRM seems ambiguous on the treatment of a name like
renamed_package.X
when it is used to denote an entity, X, in the visible part of a package
that has been renamed as "renamed_package".
Rule (e) seems to allow this usage, but rule (f), in what appears
to be its second paragraph, disallows it.
I speak, of course, of the situation where the text of the name
occurs within the package.
I would assume that the LRM means to forbid this usage even when
rule (e) would seem to allow it.
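A minimal sketch of the case being discussed (the names are
hypothetical):

   package PKG is
      package renamed_package renames PKG;
      X : INTEGER;
      V : INTEGER := renamed_package.X;   -- rule (e) vs. rule (f)?
   end PKG;

Rule (e) looks only at the visible part of the denoted package; rule (f)
forbids a prefix declared by a renaming declaration.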
************************************************************************
!section 04.01.03 (15) Ada Group Ltd 831201 8300301
!version 1983
!topic renamed package as a prefix
Consider
package FRED is
   A : INTEGER;
   package JIM renames FRED;
end FRED;
Can we now write
use FRED;
and then refer to FRED.A as JIM.A (presumably yes), or even as
JIM.JIM.JIM.JIM.A?
The latter seems curious, but no rule seems to prevent it.
Furthermore, can we write
use FRED.JIM;
(that is, without saying use FRED;) and then refer to A directly?
************************************************************************
!section 04.01.03 (15) Ada Group Ltd 840123 8300324
!version 1983
!topic expanded names
An expanded name can denote an entity declared in the visible part of a
package, and a renaming declaration can be used in such an expanded
name. Consider this example.
package OUTER is
   A : INTEGER;
   package INNER renames OUTER;
   B : INTEGER := INNER.A;   -- (1)
end;
Paragraph (e) in section 4.1.3 seems to allow the declaration at (1),
since INNER denotes OUTER and A is declared in the visible part of the
latter. However, paragraph (f) also permits us to regard A as an entity
declared immediately within a named construct. The prefix INNER denotes
OUTER, which is a "program unit" since packages are program units
(Chapter 6). But under this reading, the name is illegal since paragraph
9 of 4.1.3 states that "a name declared by a renaming declaration is not
allowed as the prefix". Is the intention to allow this example under
rule (e)? It seems that the LRM is genuinely ambiguous here. Now consider
what happens if we insert the word "private".
package OUTER is
private
   A : INTEGER;
   package INNER renames OUTER;
   B : INTEGER := INNER.A;
end;
Now A is not in the visible part of OUTER, so rule (e) does not apply.
A is still immediately within OUTER, and so an expanded name OUTER.A
would be legal, but the actual example (INNER.A) is apparently illegal
because the prefix is a name declared by a renaming declaration.
(If the intention was to forbid only subprogram renames as prefix for
expanded names, this should be made clear).
************************************************************************
!section 04.01.03 (15) J. Goodenough 860907 8300802
!version 1983
!topic Correction to AI00187 discussion
!reference AI00187/04
The discussion of this AI gives the following example:
package FRED is
   A : INTEGER;
   package JIM renames FRED;
end FRED;
The discussion points out that we can refer to FRED.A and FRED.JIM.A, or
FRED.JIM.JIM.A "since FRED.JIM names a package enclosing the declaration of JIM
and A, and so does FRED.JIM.JIM, etc.)."
The quoted reason is incorrect, since it suggests that if A were not declared
in the visible part, FRED.JIM.A would also be allowed within package FRED.
However, since JIM is declared by a renaming declaration, it is not in general
allowed as the prefix of an expanded name [4.1.3(18)]. The only reason
FRED.JIM.A is legal is because of AI00016, which allows the use of JIM as the
prefix of an expanded name as long as the selector is declared in the visible
part of the package denoted by the prefix.
The discussion section of AI00187 should be revised to replace the quoted
phrase, perhaps by saying:
(since both A and JIM are declared in the visible part of a package and
therefore can be denoted by an expanded name, even when the prefix is
declared by a renaming declaration; see AI00016. FRED.JIM.A would
be illegal if A were not declared in FRED's visible part; similarly
FRED.JIM.JIM would be illegal if JIM were not declared in FRED's
visible part).
*****************************************************************************
!section 04.01.03 (15-18) Peter Belmont 851231 8300699
!version 1983
!topic Using a renamed package prefix inside the package
!reference AI00016/05
I wish to give argument here for allowing the use of a renamed
package denoter within an expanded name for referring to items
defined in the visible part of a package from within that package.
(A) We have allowed
with X; package body X is ...
in order, so far as I can see, to allow idiosyncratic programming
style, and because it does no harm to allow this rather odd notation.
(B) There is a very reasonable programming style which is based on
using a large number of utility packages. These packages have long,
informative names (which do not conflict in the program library) and
short renamings. A typical program looks like
with long_name_1;
with long_name_2;
package body long_name_3 is
   package LN1 renames long_name_1;
   package LN2 renames long_name_2;
   package LN3 renames long_name_3;
   ...
   LN3.var_1 := LN2.const_1;
There is no reason for the programmer, working within this system, to
have to be aware of which package he is immediately (or deeply)
within. He wants to be able to use a uniform notation for referring
to items declared in the visible parts of the packages that he is
working with.
This is a useful and innocent practice which encourages uniformity of
notation. RM 4.1.3(15) can be interpreted to allow this. Why, faced
with the contradiction between 4.1.3(15) and 4.1.3(18), should we
decide to enshrine 4.1.3(18) and thus to forbid a useful practice?
For the Future:
RM 4.1.3(18) forbids the use of a renaming as a prefix in an expanded
name which appears within the renamed entity. This rule should be
eliminated. If one is within nested overloaded subprograms, there
is no way, other than the use of expanded names based on renamings,
to refer to items within one or the other of these subprograms with an
expanded name syntax. If the use of expanded names within a named
entity has any utility at all, and I think it does, and if
overloading has any utility, and I think it does, and if nesting
overloaded subprograms has any utility, and it may well have,
then the rule of 4.1.3(18) forbids something that has utility and
which cannot be done otherwise.
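A hypothetical sketch of the nested overloaded case (the names are
invented for illustration):

   procedure OP (X : INTEGER) is
      procedure OP (X : FLOAT) is
         V : INTEGER;
      begin
         OP.V := 0;   -- ambiguous by 4.1.3(18): two visible enclosing OPs
      end OP;
   begin
      ...
   end OP;

A renaming of the inner OP would provide an unambiguous prefix, but
4.1.3(18) forbids a name declared by a renaming declaration as the
prefix.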
*****************************************************************************
!section 04.01.03 (17) J. Goodenough 870819 8300941
!version 1983
!topic Visibility of accept statements
4.1.3(17-19) say for expanded names within accept statements:
The prefix must denote a construct that is either a program
unit, a block statement, a loop statement, or an accept
statement. In the case of an accept statement, the prefix must
be either the simple name of the entry or entry family, or an
expanded name ending with such a simple name (that is, no index
is allowed). ...
... If the prefix is the name of a subprogram or accept
statement and if there is more than one visible enclosing
subprogram or accept statement of this name, the expanded name
is ambiguous, independently of the selector.
If, according to the visibility rules, there is at least one
possible interpretation of the prefix of a selected component
as the name of an enclosing subprogram or accept statement,
then the only interpretations considered are those of rule (f)
as expanded names (no interpretations of the prefix as a
function call are then considered).
These paragraphs talk as though accept statements can be named as denoted
entities, but accept statements are not mentioned in the list of declared
entities (see 3.1(1)), nor are accept statements declared. The visibility
rules associate identifiers with declarations, so, strictly speaking, it is
unclear what the Standard means when it speaks of a name that "denotes" an
accept statement or even the name "of" an accept statement. What is the
intent here? What does it mean for an accept statement to be "visible" when
it is not declared? Note that 8.3(2) says:
The visibility rules determine a set of declarations [for an
identifier] that define possible meanings of an occurrence of
the identifier. A declaration is said to be visible at a given
place in the text when, according to the visibility rules, the
declaration defines a possible meaning of this occurrence.
In these terms, an accept statement is never visible since it is never declared.
In considering this issue, note that the declarative region for an entry
includes the accept statements for the entry. Also, consider the following
examples:
task T is
   entry SINGLE(X : INTEGER);
   entry FAMILY(BOOLEAN)(Y : INTEGER);
end T;

task body T is
begin
   accept SINGLE(X : INTEGER) do
      declare
         function SINGLE return INTEGER is
            Y : INTEGER;
         begin
            SINGLE.Y := X;  -- illegal
At the point of the assignment statement, it is clear that two declarations of
SINGLE are visible: one within the declarative region associated with the
entry declaration and accept statement, and the other within the region
associated with the function body. SINGLE.Y is clearly an expanded name since
SINGLE can be interpreted as the name of an enclosing subprogram [4.1.3(19)].
Now we come to 4.1.3(18):
If the prefix [of an expanded name] is the name of a subprogram or
accept statement and if there is more than one visible enclosing
subprogram or accept statement of this name, the expanded name is
ambiguous, independently of the selector.
"The name of an accept statement" is not a technically meaningful phrase, but
given the context, the intent is clear: presumably it means "the entry or
entry family name given in an accept statement", and the visibility rules do
apply to these names. In the above example, there is an enclosing accept
statement associated with the single entry SINGLE, and an enclosing function
named SINGLE. The visibility rules say that both the entry and the
subprogram are visible at the point of the assignment statement, so the rule
in paragraph 18 means that SINGLE.Y is illegal.
Now consider a similar example with the entry family:
accept FAMILY(TRUE)(Y : INTEGER) do
   declare
      function FAMILY return INTEGER is
         Z : INTEGER;
      begin
         FAMILY.Z := Y;  -- legal
Presumably this is okay because the normal visibility rules make only the
function FAMILY visible at the assignment statement since entry families are
not overloadable. In this case, one could say the enclosing accept statement
is not visible, but presumably such a phrase must be understood as saying
that the entry or entry family given in the enclosing accept statement is not
visible.
*****************************************************************************
!section 04.01.03 (18) Ron Brender 850917 8300661
!version 1983
!topic Using a renamed package prefix inside a package
!reference AI00016, 8300137, 8300324
Draft AI00016/03 tentatively takes the position that Rule (e) in
4.1.3(14-15) cannot be used to allow an expanded name when Rule (f) in
4.1.3(18) disallows the expanded name. There are several variations
that need to be considered:
package P is
   A : INTEGER := 0;
   package RP1 renames P;
   B : INTEGER := RP1.A;   -- [1]
private
   C : INTEGER := RP1.A;   -- [2]
   package RP2 renames P;
   D : INTEGER := RP2.A;   -- [3]
end;

package RP3 renames P;

package body P is
   X : INTEGER := 1;
   package RP4 renames P;
   E : INTEGER := RP1.A;   -- [4]
   F : INTEGER := RP1.C;   -- [5]
   G : INTEGER := RP2.A;   -- [6]
   H : INTEGER := RP2.C;   -- [7]
   I : INTEGER := RP3.A;   -- [8]
   J : INTEGER := RP3.C;   -- [9]
   K : INTEGER := RP4.A;   -- [10]
   L : INTEGER := RP4.C;   -- [11]
   M : INTEGER := RP1.D;   -- [12]
   N : INTEGER := RP4.D;   -- [13]
end P;
The rule disallowing the prefix to be a renaming has the one virtue
that it makes all of the above selected components illegal.
It has been suggested, however, that perhaps rule (e) should not be
precluded by rule (f) when they appear to overlap. In that model,
cases 1, 2, 3, 4, 6, 8 and 10 (all those that name A) would be legal
while cases 5, 7, 9, 11, 12 and 13 (all those that name C or D) would
be illegal. This seems very strange indeed. The contrast between the
pairs 6-7, 10-11 and 12-13 is especially curious because the prefixes
RP2 and RP4 are themselves declared and only visible within P, so the
only utility they can possibly have is for naming other entities also
declared within P (as well as P itself) from within P. Why A should
be namable in this manner and not C or D truly escapes me.
Comment 8300324 closes by suggesting that it might have been intended
to forbid only subprogram renamings as the prefix. This leads one to
wonder why the prohibition against renamings exists in the first
place. (Note that without it, all of the above selected components
would be legal -- also a nice simple outcome.) But I can find neither
LRM text nor technical rationale for making such a distinction (nor
rationale for the restriction itself).
There is one piece of ambiguity regarding the phrase "within the
construct itself" in 4.1.3(18). It is not entirely clear that a
package declaration (specification) and package body are intended to
be one construct for the purposes of this section, although I admit
this seems the most plausible intent. Moreover, if a distinction is
made, then it appears that one can argue that 1, 2, 3, 12 and 13 are
illegal, while 4 through 11 are okay. This doesn't seem to be a
useful line of inquiry.
If I had my choice, I would discard the restriction about renamings in
the prefix altogether. But if that possibility gains no support in
light of the existing LRM text, then I recommend that AI00016/03 be
adopted as is (with the discussion to incorporate more of the cases
considered above).
Perhaps a better rationale would be to assert that rule (e) is
intended to apply only to selected components that occur outside of a
package (in which case a renaming is allowed in the prefix) while
rule (f) is intended to apply only within a construct (including a
package, in which case a renaming is not allowed for the prefix).
This seems to eliminate the apparent overlap on a more plausible
basis.
*****************************************************************************
!section 04.01.03 (18) M. Woodger/Alsys 860211 8300710
!version 1983
!topic History of using a renamed package prefix inside a package
The July 1982 version of the Standard (Internal Version 16)
contained the following paragraph at the end of section 8.5:
"Renaming of a subprogram or package is not allowed within
the subprogram or package itself."
In response to Comments #2529 and #3715 by D. Taffs, and #4482 by
R. Eachus (820917), it was accepted to delete the restriction,
and the Response stated it had been "motivated by difficulties
with expanded names".
However, in response to #5135 (820830) by Taffs, who showed in
detail how the restriction could be circumvented, a new sentence
was inserted into 4.1.3(18) for Version 17 (821125), which
reimposes the restriction in a stronger form:
"A name declared by a renaming declaration is not allowed as
the prefix."
The latest Commentary AI16/07, and the previously approved AI
187/03, go some way to removing such restrictions. The
"difficulties" have not been explained.
*****************************************************************************
!section 04.01.03 (18) M. Woodger 870430 8300924
!version 1983
!topic Prefix of expanded name
!reference 8300812, AI00119/01
I have checked the records, which are quite clear. The sentence "A name
declared by a renaming declaration is not allowed as the prefix" was inserted
into what is now 4.1.3(18) in response to comment #5135 by D. Taffs
(820830), who pointed out that without this restriction one could achieve
renaming of a subprogram or package within itself. That used to be forbidden
explicitly by an extra paragraph 8.5(10), which was accordingly deleted.
Another point on AI119 is that F.X should not be called an expanded name
until it has been determined that rule (f) applies. Before that, the correct
description is "selected component". This applies to "expanded name" in line
15 of the response, and the first line of the final large paragraph, which
could just say "The name F.X is ..."
I never got round to responding to comment 8300812. We are at
crosspurposes, because I am considering the case where F can name an
enclosing subprogram, and you are not. I agree with you, of course, that for
T1=T2, my F.X must be interpreted as selecting from the result of a function
call, and is legal. I fell into the same trap as described above! I do not
accept the last paragraph of the comment: I do not say that 4.1.3(19) applies
to names declared by renaming declarations.
*****************************************************************************
!section 04.01.04 (02) M Woodger 881105 8301120
!version 1983
!topic Universal_static_expression should be static_universal_expression
The syntactic category "universal_static_expression" should be
"static_universal_expression", since adjectives are prefixes and there is
already a category "universal_expression" that can be qualified as static.
*****************************************************************************
!section 04.01.04 (03) Peter Belmont 830923 8300070
!version 1983
!topic independence of context for attribute
I assume that the meaning of 4.1.4(3):
"The meaning of the prefix of an attribute
must be determinable independently of the
attribute designator and independently of the fact
that it is the prefix of an attribute."
would be better stated:
"The prefix of an attribute is a complete context
in the sense of 8.7(3); that is, the meaning of
the prefix and of all of its components must be
completely determinable without regard to context."
or, had the style of the LRM been somewhat different, by a rule
forbidding the passage of information down into the prefix's tree
from above (visibility apart), while naturally allowing information
to be extracted from the prefix and passed up the tree.
************************************************************************
!section 04.01.04 (03) J Storbank Pedersen (DDC) 830829 8300243
!version 1983
!topic The meaning of prefixes
The last sentence of 4.1.4(3) reads: "The meaning of the prefix of an
attribute must be determinable independently of the attribute
designator ...". The question is: What is "the meaning" of a name
designating a parameterless function? The "meaning" must be either the
function itself or a call of the function. These two "meanings" must according
to 4.1.4(3) give rise to an ambiguity because the attribute designator is not
considered. This implies that for example the special rules for 'ADDRESS
(13.7.2(6)) must be irrelevant because the "meaning" of the prefix is
ambiguous in the first place. If this comment does not reflect the intended
semantics of "meaning", the RM should contain its own special definition of
the word "meaning", see 1.5(3).
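The difficulty is visible already in the simplest case (GF is a
hypothetical name):

   function GF return INTEGER;
   ...
   ... GF'ADDRESS ...   -- the function GF itself, or the result of a call?

If both trial meanings must be considered before looking at the
attribute designator, the prefix is ambiguous even though 13.7.2(6)
resolves the choice for 'ADDRESS.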
************************************************************************
!section 04.01.04 (03) Software Leverage, Inc. 840119 8300272
!version 1983
!topic Implicit Conversions within Prefixes
!reference 8300070
An earlier Ada Comment 8300070 by Peter Belmont suggested a rewording
of 4.1.4(3), to wit, replacing "The meaning of the prefix of an
attribute must be determinable independently of the attribute
designator and independently of the fact that it is the prefix of an
attribute" with "The prefix of an attribute is a complete context in
the sense of 8.7(3); that is, ...". We wish to point out that the
two are not necessarily the same, due to implicit conversions.
Consider this code:
with TEXT_IO;
procedure P is
   A : STRING(1..100);
   function "<" (L, R : INTEGER) return INTEGER is separate;
   function F (X : INTEGER) return INTEGER is separate;
   function F (X : BOOLEAN) return INTEGER is separate;
begin
   if A(F(1 < 2))'ADDRESS = A(1)'ADDRESS then
      -- Which F is meant?
      TEXT_IO.PUT("Same");
   end if;
end P;
Here, if the first prefix were a complete context, the rules in
4.6(15) could be invoked to give it a unique meaning. As it isn't,
but must have its meaning determined before the "innermost complete
context" is even examined (the possible meanings of the latter require
resolving the attribute's interpretation first!), the above is
ambiguous.
This of course doesn't bear on whether Peter's suggestion is a good
one. It is worth observing, though, that as things stand the change
isn't just a rewording but makes a semantic difference.
************************************************************************
!section 04.01.04 (03) Software Leverage, Inc. 840128 8300294
!version 1983
!topic Implementation Defined Attributes
In 4.1.4(3) we find: "An attribute can be a basic operation
delivering a value; alternatively it can be a function, a type, or a
range."
Does this imply that an implementation-defined attribute can only be
one of these? In particular, may an implementation-defined attribute
denote
1. A subtype?
2. An exception?
3. A package?
4. A procedure?
5. An entry?
6. A task?
7. A generic unit?
8. An object (as opposed to a value)?
9. A label?
10. A block or accept statement?
We observe that July 1980 Ada had a similar statement but also defined
the attribute 'FAILURE as an exception, which may be a hint that a
restrictive interpretation was not intended.
Also, we find in 13.4(8) that "An implementation may generate names
that denote implementation-dependent components... ([these] could be
implementation-dependent attributes)". Since components aren't
values, this also suggests that the wording of the cited passage was
just saying what attributes could be, and not forbidding them to be
other things.
Is this correct?
************************************************************************
!section 04.01.04 (03) J. Goodenough 850801 8300601
!version 1983
!topic The meaning of an attribute prefix
!reference AI00015, 8300418
Comment 8300418 gave the following example:
type Acc_String is access String;
function F (X : Integer) return Acc_String;  -- F#1
function F return Acc_String;                -- F#2
...
F(1)'ADDRESS
I argued that F(1) was unambiguous despite 4.1.4(3)'s statement that "the
meaning of the prefix of an attribute must be determinable independently of
the attribute designator and independently of the fact that it is the prefix
of an attribute." At the time of the February 1985 meeting, I was not able
to generate much support. At the May 85 meeting, I found some examples that
caused further consideration of the issue. The purpose of this note is to
describe these examples and also to review some of the history behind the
current wording of the RM. My discussion first takes a "legalistic"
approach, and then later I review relevant documentation on the intent. It
can reasonably be argued that my legalistic reasoning is contrary to the
intent, but I will address that point in my conclusion.
THE "LEGALISTIC" ARGUMENT
The basic issue is to what extent syntactic considerations are allowed to
influence the determination of an attribute's "meaning". Consider the
following declarations:
EXAMPLE A
type ACC_STRING is access STRING;
procedure F (X : INTEGER) is ... end;
function F (X : INTEGER) return ACC_STRING is ... end;
... F(3)'LAST ...   -- legal?
... F(3)'SIZE ...   -- legal?
4.1.4(3) says:
The meaning of the prefix of an attribute must be determinable
independently of the attribute designator and independently of
the fact that it is the prefix of an attribute.
Taken literally, one could conclude that F(3)'LAST is ambiguous since we
can't tell whether F(3) is a procedure call or a function call, so we don't
know which F is meant.
Now consider:
EXAMPLE B
function G return ACC_STRING is ... end;
... G'LAST ...  -- legal?
... G'ADDRESS ...  -- legal?
Here G unambiguously denotes a specific function, but the prefix of an
attribute can be a name or a function call. Is G's "meaning" in the first
case that G is called (i.e., that G is parsed as a function call) and in the
second case, is G's "meaning" that it is a name? Should G'LAST be considered
ambiguous because we can't decide whether it is to be parsed as a name or as
a call, but is G'ADDRESS to be considered legal because 13.7.2(6) says that
in this case, the prefix is to be considered the name of a function (not the
result of calling the function)? How is it consistent to let G'ADDRESS be
legal in light of 4.1.4(3)'s requirement that the "meaning" of the prefix be
determinable "independently of the attribute designator?"
I pose these questions to show that there are real problems of interpretation
that must be resolved. One cannot say that the implications of 4.1.4(3) are
so clear it is impossible to understand how the rule could be stated more
clearly!
Well, I didn't pose these problems without having a solution in mind. I
think there is a consistent reading of the RM that resolves these problems.
The resolution hinges on the term "meaning," which is defined in 8.3(1):
The meaning of the occurrence of an identifier at a given place
in the text is defined by the visibility rules and also, in the
case of overloaded declarations, by the overloading rules.
(Note that the prefix F(3) does not have a meaning in the sense of 8.3(1)
since it is not an identifier, but we can understand 4.1.4(3)'s use of the
phrase "meaning of the prefix" as shorthand for "the meaning of the
identifiers in a prefix".)
8.7(2) continues:
For overloaded entities, overload resolution determines the
actual meaning that an occurrence of an identifier has,
whenever the visibility rules have determined that more than
one meaning is acceptable at the place of this occurrence. ...
8.7(3) goes on to discuss how overloading resolution requires that each
constituent of the innermost complete context have exactly one
"interpretation".
Now consider Example B in light of these rules. First each use of G has an
unambiguous meaning because only one declaration of G is visible.
Nonetheless, G'LAST is ambiguous and illegal because we don't know whether
to parse G as a name or as a function call. There is no rule that allows us
to resolve this ambiguity about the "interpretation" of G. G'ADDRESS is
unambiguous, however, because of 13.7.2(6).
Now let's consider example A. Here it is clear that two declarations of F
are visible and overloading resolution is required. 8.7(7) says:
When considering possible interpretations of a complete
context, the only rules considered are the syntax rules, the
scope and visibility rules, and the rules of the form described
below [in 8.7(815)].
In particular, note that use of "the syntax rules" is required in deciding
what the interpretation (i.e., meaning) of an identifier is, and AI00157
concludes that "the syntax rules" include rules that are stated narratively
in the text, such as the rule in 13.7.2(6). In this case, the syntax of
attribute prefixes implies that F(3) cannot be parsed as a procedure call, so
the meaning of F (i.e., the declaration denoted by F) is unambiguous for both
attributes. Since the prefix of LAST is allowed to be "appropriate for an
array type" [A(21)], F(3) can be invoked and implicitly dereferenced to
denote the designated object, so F(3)'LAST is legal. F(3)'SIZE is not legal
since the prefix of SIZE is not allowed to be a value (and also for reasons
listed in 13.7.2(6)).
The resolution of examples A and B in both cases turns on resolving syntactic
ambiguities. The basic question is WHEN these syntactic ambiguities are to
be resolved  in the process of deciding the meaning of identifiers that are
used in the prefix, or after their meaning has been determined. If we say
"after the meaning of identifiers has been determined," then F(3)'LAST is
ambiguous in example A.
Another way of phrasing the basic question is: is the rule given in 4.1.4(3)
equivalent to saying that the prefix of an attribute forms a complete context
in the sense of 8.7? If the prefix is considered a complete context, then my
argument regarding the role of syntax in resolving the prefix is consistent
with the use of syntax in general to resolve potential meanings of
identifiers.
In short, it seems to me that the syntax rules are used all the time to
resolve the prefix of attributes. The example given in comment 8300418 is
just a logical consequence of the reasoning used for Examples A and B.
INTENT -- SOME HISTORY
The arguments given so far are legalistic, based on the precise wording that
exists in the RM. It was suggested at the LMC meeting that regardless of the
result of a legalistic interpretation of the wording, the intention was that
no overloading resolution be required when analyzing the prefix of
attributes. In defense of this position, one might argue that neither 4.1.4
nor 8.7 define the prefix of an attribute as a "complete context" (which
would normally be the way of reducing the context to be considered in
interpreting an identifier). The wording in 4.1.4(3) was the direct result
of comments #5261 and #5408:

section 04.01.04 (03) D Taffs 820830 #5261.
version 16
class Amendment: Resolution
topic Are attributes of function return values allowed?
It is necessary to clarify how overloading resolution of attribute
prefixes works. For example, resolution of a 'SIZE prefix should not use
the fact that the attribute is not "available" for entries.
F.E(3)'SIZE
If F returns an access to a record having component E that is of an array
type, and another F returns a task having entry family E, is the above use
of F resolvable? The 'COUNT attribute should not use entryness for
overloading resolution of its prefix, either.
I suggest adding the following sentence after the first sentence of
paragraph 4 of 4.1.4:
The prefix of an attribute must be determinable independently of
its context, except for the prefixes of attributes appropriate
for an array or task type.
This should also be included among the overloading rules listed in 8.7.
This change, together with clear definitions of "declared" and "entity",
would clarify all issues mentioned above, as long as the definition of each
attribute specifies exactly which prefixes it allows.
RESPONSE: It is highly desirable to define overload resolution for attributes
in a manner that is implementation independent (not influenced by
implementation dependent attributes). For this reason a simpler restriction
is preferable.
"The type of the prefix of an attribute must be determinable independently of
the attribute designator."

section 04.01.04 (03),08.07(15) D Taffs 830102 #5408.
version 21
class Proposed Change
topic overload resolution of attribute prefixes
The last sentence of this paragraph is worded poorly, because the prefixes of
attributes can be program units, labels, and entries, which do not have
types.
Also, the context in which the attribute occurs should not be used to resolve
the prefix of an attribute. I cannot imagine (yet) how to consider all
possible designators, given a set of potential prefixes and a context. Using
the context complicates the process extraordinarily, and yet is seemingly
currently required. For example, apparently F'FIRST = X must now consider ALL
attributes (including all implementation-defined ones) to find an F that has
some (any!) attribute that returns a value of the type of X! This is clearly
ridiculous, because overloading resolution of prefixes of language-defined
attributes is thus implementation dependent! I suggest replacing this
sentence with:
The prefix of an attribute must be determinable
independently of its context.
Note that a corresponding change must also be made to 8.7(15).
RESPONSE: The meaning of the prefix of an attribute must be determinable
independently of the attribute designator and independently of the fact
that it is the prefix of an attribute.

In short, the goal was to define overloading resolution for attribute
prefixes in a way that would not be "influenced by implementation dependent
attributes". In particular, if an implementation dependent attribute forbids
a function call, the intent was that the resolution of the prefix should not
depend on this requirement. This rationale applied to the revision of the
July 1982 version. The proposed wording referred to the "type" of a prefix,
but comment #5408 pointed out that not all attribute prefixes have a type and
that the context in which an attribute occurs should not be considered in
resolving the prefix. Taffs proposed that the prefix of an attribute be
considered a complete context. This wording was not adopted, however. The
currently existing wording was used.
My feeling is that the current wording of the RM can be interpreted clearly
and consistently to resolve examples A and B and the example in 8300418. In
short, no fix is necessary, except, of course, to note my line of reasoning
as an "obvious" ramification (smile). I grant that my interpretation
apparently does not satisfy the intent, but given that the intent was to
prevent a resolution of a prefix in a pathological (implementation defined)
case, I don't think the current wording (and my interpretation) is so
seriously wrong as to require a search for an alternative.
*****************************************************************************
!section 04.01.04 (03) R.S. Kotler/General Transformation 851024 8300679
!version 1983
!topic Prefix of an attribute.
!reference AI00015/05
I suggest that the following be illegal:
function f return string;
a: address := f'address;  -- illegal, the prefix is ambiguous
l: positive := f'last;    -- illegal, the prefix is ambiguous
The prefix has two meanings,
1) That it is a function call
2) That it is the name of a function.
This is supported strongly by the syntax of the language:
namely that a prefix can be either a name or a function call.
I don't see how things could be any clearer.
I sympathize with those who feel that the rule that the
prefix of an attribute be determined independently of the
attribute may have been too strong, but this is the rule,
and it seems very clear in the RM.
It seems to be very dangerous to start considering names to be
sequences of lexical elements, where the interpretation of
what the lexical elements constitute can be ignored until later
resolution as long as the simple names that we will associate
with the lexical elements will be the same. I'm sure that
such an interpretation is bound to yield numerous anomalies
in the future.
It seems that the rules for what information about the
attribute can be used when resolving the prefix should be
changed, rather than making unclear RM interpretations.
The following extra sentence could be added to 4.1.4(3).
"The only exception to this rule is that if there exist
exactly two resolutions, and one is as the name of a
function and one is as a parameterless function call of the
same function, then we may consider this to be a single
meaning for the above purposes".
This seems to cover the important cases without requiring
us to allow full overloading of prefixes of attributes.
Note that with this resolution, all other problems of
this nature that deal with overloading of the function
name, can be worked around in the code using renaming
declarations.
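The renaming workaround mentioned above can be sketched as follows (hypothetical
declarations; FF is an invented name, and the overloaded Fs follow Example A):
type ACC_STRING is access STRING;
procedure F (X : INTEGER);
function F (X : INTEGER) return ACC_STRING;
-- A renaming declaration gives the function a name that denotes
-- exactly one entity, so the attribute prefix involves no overloading:
function FF (X : INTEGER) return ACC_STRING renames F;
... FF(3)'LAST ...  -- prefix resolvable without looking at the attribute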

*****************************************************************************
!section 04.01.04 (03) Hans Hurvig 890704 8301324
!version 1983
!topic Resolving attribute prefixes
!reference AI00015
Consider the following example:
type ACC_STR is access STRING;
function F ( X: INTEGER := 0 ) return ACC_STR;
...
... F(3)'ADDRESS ...
Assuming there are no other visible Fs, the meaning of F is
unique, but there are two legal interpretations of F(3):
1. the access value F(X=>3)
2. the character F(X=>0).all(3)
Only 2. is legal for 'ADDRESS, but can this be used when
resolving F(3)? (I would think no)
AI00015 does not address this example, because there are
two interpretations as a prefix, but still a unique 'meaning'
of the prefix, that is, meaning of the identifiers in it.
Nor is it a matter of deciding whether to call a function or not.
*****************************************************************************
!section 04.02 (03) J. Goodenough 861004 8300819
!version 1983
!topic Graphic characters not in a string literal's component type
Presumably CONSTRAINT_ERROR should be raised when a string literal is
written whose graphic characters do not belong to the literal's component
subtype. For example, consider:
type STR is array (1..3) of CHARACTER range 'A'..'C';
VAR : STR := "ABD";  -- CONSTRAINT_ERROR?
4.2(3) says:
A string literal is a basic operation that combines a sequence
of characters into a value of a onedimensional array of a
character type;
Clearly, "ABD" is not a value of type STR, since 'D' is not a permitted value
of STR's component type, but the quoted sentence does not specify clearly
that any exception is to be raised when "ABD" is evaluated.
4.2(5) says:
The character literals corresponding to the graphic characters
contained within a string literal must be visible at the place
of the string literal.
This requirement is clearly met by the above example, since 'D' is a visible
value of type CHARACTER, so the string literal is legal.
Since 4.2(3) says the string literal formation operation produces a value of
an array type, the intent is certainly that the string literal be treated, in
some sense, like an array aggregate. In particular, the rule in 4.3.2(11) is
relevant:
a check is made ... that the value of each subcomponent of the
aggregate belongs to the subtype of this subcomponent. ...
The exception CONSTRAINT_ERROR is raised if any of these checks
fails.
Since this rule does not clearly apply to string literals, it is not clear
that CONSTRAINT_ERROR must be raised for "ABD".
In short, it should be stated explicitly that CONSTRAINT_ERROR is raised if
any character in a string literal does not belong to the string literal's
component subtype.
*****************************************************************************
!section 04.02 (03) Norman Cohen 900312 8301368
!version 1983
!topic Clarify rules giving the upper bound of ""
The current presentation of the rules for the upper bound of a null
string literal is extremely confusing. The first sentence of 4.2(3) says
that the bounds (note the plural) are determined according to the rules
given for positional aggregates in 4.3.2. The second sentence appears to
contradict the first by giving (without any word like "however"
suggesting a restriction to the previous rule) a distinct rule for
computing the upper bound of a null string literal. To add to the
confusion, this rule for the upper bound is expressed in terms of the
lower bound, suggesting that the rules in 4.3.2 are still invoked to get
the bounds, but that the upper bound given by those rules is somehow
thrown away, and replaced by the predecessor of the lower bound given by
those rules.
Examination of 4.3.2(9) leads to further confusion: The rules there
appear to have been written with concrete positional aggregates (which
always have two or more component values) in mind, since 4.3.2(9) gives a
rule that is ambiguous in the case of a null array: Once the lower
bound is determined, the upper bound is "determined by the number of
components."
Only at this point does the role of the second sentence in 4.2(3) become
clear: It is meant to qualify the application of the rules in 4.3.2(9),
by clarifying the meaning of "determined by the number of components" in
the case where the number of components is zero.
This can be clarified by rewording 4.2(3) as follows:
A string literal is a basic operation that combines a sequence
of characters into a value of a onedimensional array of a
character type; [the] {for a non-null string literal, both}
bounds of this array are determined according to the rules for
positional array aggregates (see 4.3.2). For a null string
literal, {the lower bound is determined according to the rules
for positional array aggregates and} the upper bound is the
predecessor, as given by the PRED attribute, of the lower
bound. The evaluation of a null string literal raises the
exception CONSTRAINT_ERROR if its lower bound does not have a
predecessor (see 3.5.5).
This wording ensures that the upper-bound rule in 4.3.2(9) is only
invoked in cases where its meaning is well defined. Alternatively, the
upper-bound rule for null string literals can be moved from 4.2(3) to
4.3.2(9), where it can be consolidated with the general rule for upper
bounds of positional aggregates. This requires the following
deletion from 4.2(3):
A string literal is a basic operation that combines a sequence
of characters into a value of a onedimensional array of a
character type; the bounds of this array are determined
according to the rules for positional array aggregates (see
4.3.2). [For a null string literal, the upper bound is the
predecessor, as given by the PRED attribute, of the lower
bound. The evaluation of a null string literal raises the
exception CONSTRAINT_ERROR if its lower bound does not have a
predecessor (see 3.5.5).]
It also requires the following additions to 4.3.2(9):
... For a positional aggregate, the lower bound is determined
by the applicable index constraint if the aggregate appears in
one of the contexts (a) through (c); otherwise, the lower bound
is given by S'FIRST where S is the index subtype[;]{.} [in]
{In} either case, the upper bound is determined by the number
of components{; for a null string literal, the upper bound is
the predecessor, as given by the PRED attribute, of the lower
bound; the evaluation of a null string literal raises the
exception CONSTRAINT_ERROR if its lower bound does not have a
predecessor (see 3.5.5)}.
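A sketch of the case the CONSTRAINT_ERROR sentence is about (hypothetical
declarations, assuming the reading of 4.3.2(9) described above):
type NS is array (INTEGER range <>) of CHARACTER;
X : constant NS := "";
-- No applicable index constraint applies, so the lower bound is the
-- index subtype's 'FIRST, here INTEGER'FIRST; its predecessor does
-- not exist, so evaluating "" raises CONSTRAINT_ERROR (see 3.5.5).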
*****************************************************************************
!section 04.02 (04) M Woodger 881105 8301121
!version 1983
!topic After "context" insert "(see 8.7)"
Helpful comment. "Context" is being used in a technical sense.
*****************************************************************************
!section 04.02 (05) MT Perkins/BDM 850307 8300643
!version 1983
!topic Visibility of character literals.
I would like to point out an area of ambiguity in the Ada language
standard and suggest a related change to the standard. The ambiguity is
illustrated by the Ada program shown below. I believe this program is
correct according to the standard. It fails to compile on the Data
General/Rolm compiler, however, producing the error shown in the program.
I have shown this listing to Data General Software Support. They maintain
that the compiler is behaving according to the standard. They cite
Section 4.2 Paragraph 5 of the Ada Language Reference Manual, which states
that the character literals corresponding to the characters contained
within a string literal must be visible at the place of the string literal.
They say that this paragraph implies that a Use Statement must be included
in the program to make the character type directly visible. I believe
that the renaming type declaration in line 11 of the example should suffice
to make the character type in the string literal visible, and therefore
the program should compile.
My preferred solution to this problem would be to remove Section 4.2
Paragraph 5 from the language standard. It makes protecting the visibility of a
character data type awkward. In lieu of deleting Section 4.2 Paragraph 5,
making the character type visible by renaming the type is preferable to a
Use Statement, since other objects in the package remain not directly
visible.
A member of my staff gave a copy of this program to Jerry Fisher at the
recent SIGAda meeting in San Jose. Mr. Fisher requested that we also send
AJPO a letter describing the problem. This letter is the result. Thank you.
procedure ptest is
type roman_digit is ('I','V','X','L','C','D','M');  -- 314
type roman is array (positive range <>) of roman_digit;  -- 328
ninty_six : constant roman := "XCVI";  -- 332
package dd is  -- Data Dictionary
type roman_digit2 is ('I','V','X','L','C','D','M');
type roman2 is array (positive range <>) of roman_digit2;
ninty_six2 : constant roman2 := "XCVI";  -- 332
end dd;
subtype roman_digit2 is dd.roman_digit2;
subtype roman2 is dd.roman2;
thirty : constant roman := "XXX";
thirty2 : constant roman2 := "XXX";
==> THIRTY2 : constant ROMAN2 := "XXX";
*** ROMAN2 literal "XXX" contains 'X', which is not in type
ROMAN_DIGIT2 (line 6).
*** ROMAN2 literal "XXX" contains 'X', which is not in type
ROMAN_DIGIT2 (line 6).
*** ROMAN2 literal "XXX" contains 'X', which is not in type
ROMAN_DIGIT2 (line 6).
begin
null;
end ptest;
*****************************************************************************
!section 04.03 (02) M Woodger 881105 8301122
!version 1983
!topic Add "(For the syntax of choice, see 3.7.3)."
Helpful comment.
*****************************************************************************
!section 04.03 (06) Bevin Brett/Ron Brender 840115 8300254
!version 1983
!topic Null array interactions with other legality rules  case 1
Consider
type A is array(INTEGER range <>, INTEGER range <>) of INTEGER;
I, J : constant INTEGER := {some-function-call};
V : constant A := (1..2 => (1..2 => 1),
-- 3 is deliberately omitted
4..5 => (2..1 => 2));
Depending on whether the implementation decides the aggregate bounds
are (1..5, 1..2) or (1..1, 2..1), this program is either illegal
because not all elements (namely the (3, *) elements) are specified, or
raises constraint error because it is a null array aggregate and hence
all elements are specified but the 2-dim bounds aren't equal.
One might like to appeal to the requirement of 4.3.2(3) that "a named
association of an array aggregate is only allowed to have ... a
choice that is a null range, if the aggregate includes a single
component association and this component association has a single
choice" as a basis for making this aggregate illegal. Because this
rule applies to each subaggregate independently, it cannot be applied
to the combination 1..2 and 2..1.
Should this aggregate be considered illegal on the basis of the first
dimension despite the possible null array effect of the subaggregates?
************************************************************************
!section 04.03 (06) Bevin Brett/Ron Brender 840115 8300255
!version 1983
!topic Null array interactions with other legality rules  case 2
Consider
type A is array(INTEGER range <>, INTEGER range <>) of INTEGER;
I, J : constant INTEGER := {some-function-call};
V : constant A := (1..2 => (I..J => 1),
-- 3 is deliberately omitted
4..5 => (I..J => 2));
This is a worse case, since RM 4.3(6) says that each component of the
aggregate MUST be represented once and only once in the aggregate.
Now if I..J is a null range then this rule is satisfied, and the
aggregate is legal, but if I..J is not a null range then this rule is
not satisfied and the aggregate is illegal. However you can't decide
until run time...
This is a variation of the problem posed in an earlier comment where,
in this case, the possible presence of a null array cannot be answered
until runtime.
Is this aggregate to be considered illegal?
************************************************************************
!section 04.03 (06) Software Leverage, Inc. 841010 8300458
!version 1983
!topic Null Others choice for Array Aggregates
LRM 4.3(6) says that "each component of the value defined by an aggregate
must be represented once and only once in the aggregate. Hence each aggregate
must be complete and a given component is not allowed to be specified by more
than one choice."
Our question concerns the rule in 4.3.2(3) that "A named association of an
array aggregate is only allowed to have a choice that is not static, or
likewise a choice that is a null range, if the aggregate includes a single
component association and this component association has a single choice." We
assume that an association with the choice 'others' may specify no components,
since it is not a range and hence not a "null range". Is this correct? For
example:
type T is (Red, Blue, Green);
type A is array(T) of Boolean;
subtype S is T range Red..Green;  -- the entire type
...
X: A := (S => True, others => False);  -- legal?
************************************************************************
!section 04.03 (06) Software Leverage, Inc. 841009 8300459
!version 1983
!topic Aggregates With Components Outside Their Value
Consider the following peculiar array aggregate:
Subtype Small_String is String(10..20);
...
Small_String'(4..5 => 'A', 8 => 'B', 7..15 => 'C', others => 'D')
According to LRM 4.3.2(4), "The bounds of an array aggregate that has an others
choice are determined by the applicable index constraint". The applicable
index constraint for the above aggregate is (10..20), so the aggregate defines
a value of type String and subtype (10..20).
According to LRM 4.3(6), "Each component of the value defined by an aggregate
must be represented once and only once in the aggregate. Hence each aggregate
must be complete and a given component is not allowed to be specified by more
than one choice".
The above aggregate obeys this rule, since each component in "the value defined
by the aggregate" (that is, a value of type String(10..20)) is represented once
and only once. However, this aggregate also contains some extraneous values
that do not represent any component of the value of this aggregate. These
extraneous values do not seem to be prohibited by 4.3(6), or by any other rule.
This seems surprising, since extraneous values are prohibited in the choices of
case statements and variant parts.
According to LRM 4.3.2(11), the index values defined by the choices are
constraint checked with respect to the index subtype of the array type.
However, there is no mention of any check to see if the index values defined by
the choices fall within the constraint on the aggregate's value.
Which of the following statements are true about the above aggregate?
1. The aggregate is illegal, and must be rejected by a compiler.
2. The aggregate is legal, but evaluation of the aggregate raises
Constraint_Error.
3. The aggregate is legal, and returns a value of "CCCCCCDDDDD". The
extraneous values are ignored and not evaluated.
Note that this question can also arise with record aggregates, if components
not in the current variant are specified. An example (using the definition of
PERIPHERAL in an example in LRM 3.7.3) would be:
PERIPHERAL'(DISK, CLOSED, TRACK => 5, CYLINDER => 12, LINE_COUNT => 2)
************************************************************************
!section 04.03 (07) M Woodger 881105 8301123
!version 1983
!topic After "context" insert "(see 8.7)"
Helpful comment. "Context" is being used in a technical sense.
*****************************************************************************
!section 04.03 (09) Don Clarson 830630 8300007
!version 1983
!topic References: ...{, context 8.7}
************************************************************************
!section 04.03.01 (01) Ron Brender 840504 8300360
!version 1983
!topic Record aggregates with multiple names in a choice
Section 4.3.1(1) makes clear than when more than one record component
is specified in a single choice, the components must all be of the
same type. Section 4.3.1(3) goes on to require that each expression
is evaluated once for each associated component and its value is
checked against the subtype of the respective component.
Now consider:
N : INTEGER := ...;
type REC is record
S1 : STRING(1 .. 9);
S2 : STRING(1 .. N);
end record;
OBJ : REC := (S1 | S2 => (1 => '1', others => '*'));  -- Illegal?
--                        ^                       ^
--                        +--- array aggregate ---+
--                             of type STRING
In this example, presumably the inner array aggregate is illegal.
That is, because S1 has static index constraints, the aggregate would
be legal if used only as the expression for S1; however, because S2
does not have static index constraints, the aggregate is not legal
according to 4.3.2(3).
Is this analysis correct?
************************************************************************
!section 04.03.01 (01) D. Emery (emery@mitre.org) 881214 8301254
!version 1983
!topic Can't declare a constant of a 'null' record type.
Consider the following "abstraction":
package ABSTRACTION is
type T is private;
Null_T : constant T;
private
type T is record
null;
end record;
Null_T : constant T := ???????
end ABSTRACTION;
There is no way to initialize the object Null_T, because there is no
way to generate an expression that is of type T. Actually, there is
one way, but this way is very ugly:
Bogus_Object : T;
Null_T : constant T := Bogus_Object;
One can only hope that the compiler doesn't generate any storage for
records of this type. Furthermore, it is clear that in some sense the
value of Null_T is very undefined, which goes against the whole idea
of declaring a constant. (This is true of any type where a constant
is declared using an uninitialized object. The difference here is that
this is the only choice I have to initialize Null_T.)
A potential solution is
Null_T : constant T := (others => 3);  -- pick any number....
The problem with this is that the 'others' clause can't be used here because
4.3.1(1) requires that 'others' represent at least one component.
In part it seems to me that this is due to the dual use of "null". In
the declaration, it represents 'nothing'. But in an expression, it
represents 'null access value'. I think in some respects
Null_T : constant T := T'(null);
would make sense, if it weren't for the fact that 'null' in this case
must clearly represent a 'null access value', and not 'nothing'.
A third reasonable alternative would be:
Null_T : constant T := T'();
However, that is pretty ugly, too, as well as potentially harmful to
parsers.
Overall, this is not a major problem, but is a significant surprise.
This should be considered for Ada 9X, but not as a top priority item.
dave emery
emery@mitre.org
*****************************************************************************
!section 04.03.01 (02) Ph. Kruchten NYU 840220 8300303
!version 1983
!topic order of evaluation of components in a record aggregate
The sentence "For the evaluation of a record aggregate, the expressions
given in the component associations are evaluated in some order that is
not defined by the language" is somewhat misleading: associations
corresponding to discriminants must be evaluated first, especially when
they are used to determine the "applicable index constraint" mentioned
in 4.3.2(9). Consider the following example:
subtype INT is INTEGER range 1..3;
type A is array(INT range <>,INT range <>) of INTEGER;
subtype DINT is INTEGER range 0..10;
UN : INTEGER := 1;
DEUX : INTEGER := 2;
TROIS : INTEGER := 3;
type REC(D,E:DINT:=UN) is record
U : A(1..D,E..3) := ( 1..D => ( E..3 => UN));
end record;
R : CONSTANT REC := (DEUX,DEUX, U => ((UN,DEUX),(DEUX,TROIS)));
When evaluating U (we are then in the context defined in 4.3.2(8)(c)),
if the discriminants have not been evaluated before and used to
determine the subtype of U, what is the "applicable index constraint",
the subtype INT? (Certainly not the default values.)
************************************************************************
!section 04.03.02 Hans Hurvig 890704 8301326
!version 1983
!topic Do slices supply applicable index constraint?
Aggregates containing an 'others' choice require an applicable
index constraint. Slices are not mentioned in connection with
such index constraints, but seem to play the same role, e.g.:
S(1..10) := (others=>'x');
Clearly it is the range 1..10 that is significant, and not what
constraint may apply to S, but nowhere does it say that a slice
contains/supplies/replaces an applicable index constraint.
Just saying that a slice in effect contains an index constraint
is a bit problematic, because then slicing can supply an index
constraint to something that is already constrained, which
happens nowhere else.
How does a slice affect staticness of 'others' choices:
S(1..10) := ('y',others=>'z');
This is harmless, but what if the subtype of S is dynamic?
*****************************************************************************
!section 04.03.02 (02) Don Clarson 830630 8300008
!version 1983
!topic Use of string values in multidimensional array aggregates
Is the use of a string value in a multidimensional array aggregate
restricted to a string literal (which must fit on one line), or may it be
the value of a static expression (in particular, a catenation of string
literals, as suggested by the note in section 2.6(6))?
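A minimal sketch of the two readings (this example is mine, with invented names, not part of the original comment):

```ada
type Grid is array (1 .. 2, 1 .. 6) of CHARACTER;

G1 : Grid := ("abcdef",              -- string literal: clearly allowed
              "ghijkl");
G2 : Grid := ("abc" & "def",         -- static catenation: the question
              "ghijkl");
```

Whether G2 is legal is precisely what is being asked; a subaggregate position may well admit only aggregates and string literals, not general expressions.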
************************************************************************
!section 04.03.02 (02) Ada Europe/Russell 840511 8300408
!version 1983
!topic A string literal is allowed...one-dimensional array of...
Should this read "one-dimensional array aggregate of..."?
This is what I think is meant, mainly because I cannot see where an
array of any dimension can be allowed in a multidimensional array
aggregate, unless of course the component type is a one-dimensional
array.
If so, is the string treated as equivalent to an array aggregate
containing positional component association, or named component
association with lower bound equal to POSITIVE'FIRST? In the latter
case a constraint_error could be raised as per (11) even if the length
of the string is correct.
In our implementation we are assuming that 'a one-dimensional array
aggregate with positional component association' is what is intended.
************************************************************************
!section 04.03.02 (03) J. Goodenough 841025 8300461
!version 1983
!topic OTHERS choices and static index constraints
If an aggregate has more than one component association, and the last
component association has an others choice, then the others choice must be
static. 4.3.2(3) defines a static others choice as follows: "An OTHERS choice
is static if the applicable index constraint is static." Now consider the
following example (drawn from test B43201BB of Version 1.5):
   N : INTEGER := 3;
   subtype NON_STATIC is INTEGER range 1..N;
   type ARR is array (NON_STATIC, 1..3) of INTEGER;
   ...
   ARR'(2 => (1..3 => 2),
        others => (1..3 => 3))    -- illegal
   ARR'(1..3 => (2 => 2,
                 others => 3))    -- illegal?
Both aggregates require static others choices. The first aggregate is clearly
illegal; the corresponding index constraint is nonstatic. Is the second
aggregate also illegal?
One might argue that the second aggregate is illegal because the corresponding
index constraint is (NON_STATIC, 1..3), and this index constraint is
nonstatic. (4.9(11) says "a static index constraint is an index constraint
for which each index subtype of the corresponding array type is static and in
which each discrete range is static.") Since the term "index constraint"
refers to a syntax rule [1.5(6)], there is no one-dimensional corresponding
index constraint that could be considered nonstatic.
On the other hand, one might argue that the second aggregate is legal because
4.3.2(2) says "the rules concerning array aggregates are formulated in terms
of one-dimensional aggregates," implying that as long as the corresponding
dimension has a static index subtype and has bounds specified with a static
discrete range, an others choice is considered to be static.
Either interpretation is easy to implement. The second interpretation (which
makes the second aggregate legal) is also more intuitive.
The ACVC test in Version 1.5 says the second aggregate is legal, but this test
was protested by an implementer who pointed out the first argument. It has
since been argued that the test correctly reflects the intent of the Standard
and should not be withdrawn from release 1.5 or changed in release 1.6.
************************************************************************
!section 04.03.02 (03) J. Kelly/J. Goodenough 860304 8300720
!version 1983
!topic Null choices for array aggregates
4.3.2(3) says:
A named association of an array aggregate is only allowed to
have a choice that is not static, or likewise a choice that is
a null range, if the aggregate includes a single component
association and this component association has a single choice.
The wording "null range" refers only to choices that syntactically have the
form L..R or ARR'RANGE (since "range" is a syntactic term [1.5(6)]). In
particular, it does not include a null others choice or a subtype indication
that has a null range. Consider the following examples:
   type ARR is array (1..3) of INTEGER;
   subtype NR is INTEGER range 2..1;
   X : ARR := (1, 2, 3, others => 4);       -- 1
   Y : ARR := (1, 2, 3, 2..1 => 4);         -- 2
   Z : ARR := (1, 2, 3, NR => 4);           -- 3
   Z1: ARR := (1, 2, 3, NR => 4, NR => 5);  -- 4
Example 1 seems to be legal because 4.3(5) allows a final others choice that
"specifies all remaining components, if any." So the intent seems to be to
allow a null others choice.
Example 2 is clearly illegal since 2..1 is a null range.
Example 3 is probably intended to be illegal, but is not ruled out by the
current wording, since NR is not a range, but is a discrete_range.
Example 4 is surely intended to be illegal, but is also not ruled out by the
current wording.
Was the intent to disallow choices that are null discrete ranges?
*****************************************************************************
!section 04.03.02 (03) M Woodger 881105 8301124
!version 1983
!topic Replace "is a null range" by "defines a null range"
Not meant (see AI00414).
*****************************************************************************
!section 04.03.02 (03) Wm. R. Wagner  Hazeltine Corp. 890818 8301301
!version 1983
!topic Nonstatic or null range choices in an array aggregate
In the second sentence, move the word "only" to precede the word "if".
Otherwise the statement restricts a single component association from being
static or non-null, which is contrary to most understandings of its
intention (and example F in paragraph 16).
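For reference, the kind of aggregate the rule is meant to permit is one with a single component association having a single nonstatic choice; a sketch (mine, not the commenter's):

```ada
N : INTEGER := 10;
V : STRING (1 .. N) := (1 .. N => ' ');   -- one association, one nonstatic choice
```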
*****************************************************************************
!section 04.03.02 (04) G. Mendal/D. Bryan/Stanford 860313 8300724
!version 1983
!topic Applicable constraint for others clause
We request that the LMC clarify the semantics of the following:
   procedure Agg is
      type R is
         record
            X : String (1 .. 2);
            Y : String (5 .. 6);
         end record;
      A : R;
   begin
      A := (others => (others => 'a'));   -- 1.
      A := (X | Y => (others => 'a'));    -- 2.
   end Agg;
The questions are:
(1) Should these assignments raise Constraint_Error?
(2) Is the subaggregate (others => 'a') evaluated once or twice?
(3) Do things change if the components X and Y are of different lengths?
Note: We have run these tests on several compilers (Alsys, Verdix, Dec,
Rational, Data General) and have gotten mixed results. A careful reading
of the LRM [4.3(4,7..8), 4.3.1(1,3), 4.3.2(1,4..6,8,10..11), 5.2.1(5)]
does not settle the semantics of what the applicable bounds of an
"others" choice denote if this choice represents more than one array
component of a record.
*****************************************************************************
!section 04.03.02 (04) M Woodger 881105 8301125
!version 1983
!topic First occurrence of a technical term should be italic
The first occurrence of the phrase "applicable index constraint" should be
italic.
*****************************************************************************
!section 04.03.02 (05) Ron Brender 880316 8300963
!version 1983
!topic 'Others' in an array aggregate for a formal parameter
The following example has been reported to be accepted by one
validated Ada compiler and rejected as illegal by another:
   procedure PROC (STR : in out STRING) is
   begin
      STR := (others => 'x');   -- Legal?
   end;
It is not clear from 4.3.2(5) whether this should be legal or not.
The following exchange between myself and Goodenough details the
issues.

From: Goodenough@sei.cmu.edu
Subject: Re: re 4.3.2(5) -- what do you think?
In-reply-to: Your message of 10 Feb 88 12:06:00 +0000.
<8802101847.AA04708@decwrl.dec.com>
Date: Wed, 10 Feb 88 14:28:43 EST
Sender: jbg
I doubt that this case is explicitly tested. The tests in 6.4.1 don't
do it and the relevant objective in 4.3.2 has no tests written for it.
As for legality, I think it's clearly legal. 4.3.2(5) says an others
choice is allowed if
the aggregate is an actual parameter, a generic actual parameter,
the result expression of a function, or the expression that follows
an assignment compound delimiter. Moreover, the subtype of the
corresponding formal parameter, generic formal parameter, function
result, or object is a constrained array subtype.
The question is whether the subtype of the object is a constrained
array subtype. The requirement that a formal parameter have a
constrained array subtype only applies when an aggregate with an
others choice appears as an actual parameter, since the four cases in
the second sentence parallel the four cases in the first sentence, and
"corresponding" in this case means actual parameter corresponding to
the formal parameter, generic actual parameter corresponding to the
generic formal parameter, etc. For the case in question, 6.2(9) says
the object has a constrained array subtype.
It's a good test case. Sorry I didn't think of it explicitly. There
are probably some compilers that don't accept it.

John,
Yes, well, I guess the argument you are making hinges on
interpreting the word "subtype" as specifically NOT meaning the
subtype denoted by the type mark for the given entity. While I like
the conclusion (it seems really dumb for 'FIRST and 'LAST to be well
defined, but 'others' not allowed!), I'm still not sure how to get it
out of that wording without a bit more language lawyering.
First, consider
X : constant STRING := (others => 'x');
This clearly has to be illegal. Your reading of 4.3.2(5) would
suggest, however, that the source of the illegality is not the
aggregate per se (a priori, EVERY array object is constrained....).
So we look to 3.6.1, especially (7). We notice: (a) it doesn't quite
seem to rule out this example, and (b) it seems to use 'subtype' to
mean the "subtype declaration denoted by the subtype indication".
A similar difficulty seems to arise for
   function F return STRING is
   begin
      return (others => 'x');
   end;
5.8 is no help at all here; only 4.3.2(5) would seem to offer a
basis for illegality, but not with your interpretation.
Actually, I suppose I overstate your argument: it is not that "every
array object is constrained" but rather very narrowly based on 6.2(9).
Still, I am not comfortable; the conclusion to be drawn from the
LRM is unclear.
Further thoughts? Perhaps an AI is in order?

From: DECWRL::"Goodenough@sei.cmu.edu" "7-Mar-88 22:18 EST"
To: abszk::brender
Subj: Re: More re the 'others' example
The wording in 3.6.1(7) does seem to be wrong. For
X : constant STRING := ...
the RM shouldn't say the subtype of the constant is unconstrained.
Rather it should say the subtype indication is unconstrained, since
the constant will have a constrained subtype, if the declaration is
otherwise legal. This does pose a difficulty for the wording in
4.3.2(5), however. Looks like an ARG issue to me.
I don't see the problem with return (others => 'x'). This falls under
the rule that the aggregate is "the result expression of a function",
and therefore the function result must be a constrained array type.
I also pointed out in a follow-up message that:
procedure P (X : STRING := (others => 'x'));
is illegal because 6.2(9) doesn't apply; we are not "within the body
of a subprogram", although we are within a subprogram_body. (How's
that for a subtle distinction!)
All in all, definite ARG material.
*****************************************************************************
!section 04.03.02 (05) M Woodger 881105 8301126
!version 1983
!topic The formal parameters need not be constrained
!reference AI00568/01
Replace "a constrained array subtype" by "an array subtype in the first two
cases, a constrained array subtype in the last two cases".
*****************************************************************************
!section 04.03.02 (06) J. Goodenough 861005 8300820
!version 1983
!topic Named associations for default array aggregates
Consider the following example:
procedure P (X : STRING (1..3) := (1 => 'a', others => ' '));
...
P ( (1 => 'a', others => ' ') );
Although the aggregate used in the default expression for P is the same as the
aggregate used in the call, the default expression is illegal and the call is
legal! 4.3.2(6) says:
For an aggregate that appears in such a context [in particular,
as an actual parameter or as the expression that follows an
assignment compound delimiter] and contains an association with
an OTHERS choice, named associations are allowed for other
associations only in the case of a (nongeneric) actual
parameter or function result.
Since actual_parameter is a syntax term, the default expression in P's
declaration cannot be considered an actual parameter in terms of 4.3.2(6)'s
rule, even though 6.4.2(2) says that a default expression is "used as an
implicit actual parameter" in calls where the default is needed. Since the
contexts allowing the use of named associations together with an OTHERS
choice do not include use as a default expression of a formal parameter,
P's declaration is illegal. The call, however, is clearly allowed by
4.3.2(6).
The 4.3.2(6) rule would be equivalent to saying that named associations
together with an OTHERS choice are allowed in contexts where "sliding" of
bounds does not occur, if it were not for the fact that no sliding is allowed
for aggregates used as default expressions of subprograms.
Was it intended for the aggregate in P's declaration to be considered legal?
*****************************************************************************
!section 04.03.02 (06) G. Mendal 870302 8300908
!version 1983
!topic Error in comment 8300820
!reference 8300820, AI00473
The example in AI00473/00 contains a slight syntax error. Types
of formal parameters of a subprogram must be type marks. Obviously,
you only need to define such a subtype and use it as the formal type
in the example.
*****************************************************************************
!section 04.03.02 (06) M Woodger 881113 8301127
!version 1983
!topic More precise ("this restriction" is not clear)
Replace the last sentence by:
"For a multidimensional aggregate that appears in such a context, if one of
its subaggregates contains an association with an others choice, then named
associations are not allowed for other associations of that subaggregate."
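As an illustration (my sketch, not part of the original comment), the proposed wording would make the first subaggregate below illegal, even in a function-result context where named associations together with an others choice are otherwise permitted:

```ada
type T is array (1 .. 2, 1 .. 3) of INTEGER;
function Make return T is
begin
   return (1 => (1 => 0, others => 1),   -- named choice plus others within a subaggregate
           2 => (others => 2));
end Make;
```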
*****************************************************************************
!section 04.03.02 (08) B. Spinney 840117 8300261
!version 1983
!topic Use of others in a multidimensional aggregate
Consider the following aggregate:
   P (Var => (1 => (2 => (others => 'a'))));
and the following possible array declarations:
   type A_2dim is array (Integer range <>, Integer range <>)
                  of String (1..4);
   type A_3dim is array (Integer range <>, Integer range <>,
                         Integer range <>) of Character;
   procedure P (Var : ...);   -- either of the unconstrained array types
The intent of the RM is that if Var is of type A_3dim, the aggregate is
illegal, since Var is unconstrained. But if Var is of type A_2dim, the
aggregate is legal, since the innermost aggregate appears in the context
defined by the first sentence of (8). However, even when Var is of type
A_2dim, the aggregate is illegal according to the second sentence of (8),
since there is no question that the aggregate enclosing (others => 'a') is a
multidimensional array aggregate, and the multidimensional aggregate does not
appear in one of the three contexts.
(8) should have said something like the following:
"The aggregate is the expression of the component association of an
enclosing (array or record) aggregate{, but is not a subaggregate. If the
aggregate is a subaggregate of a multidimensional array aggregate, then
the multidimensional aggregate} is itself in one of these three contexts."
************************************************************************
!section 04.03.02 (08) M Woodger 881105 8301128
!version 1983
!topic Replace "the component" by "a component"
Not meant.
*****************************************************************************
!section 04.03.02 (09) Norman Cohen 880114 8300957
!version 1983
!topic Bounds of a positional aggregate with too many values
AI00309 addresses aggregates with OTHERS choices and choices outside
the aggregate's index subtype. A similar issue arises with purely
positional aggregates:
   type Enum is (A1, A2, A3, A4, A5);
   function E return Enum;
   type A is array (A1 .. E) of Character;
   type AA is access A;
   X: AA := new A'('1', '2', '3', '4', '5', '6');
   Y: AA := new A'('1', '2', '3', '4');
4.3.2(9) states:
For a positional aggregate, the lower bound is determined by the
applicable index constraint if the aggregate appears in one of the
contexts (a) through (c); otherwise, the lower bound is given by
S'FIRST where S is the index subtype; in either case, the upper
bound is determined by the number of components.
An allocator is not "one of the contexts (a) through (c)." Therefore,
the lower bounds of the aggregates above are A1 in each case. In the
declaration of X, there is no possible value for the upper bound based
on the number of components, so the rule of 4.3.2(9) is not well defined.
Presumably, CONSTRAINT_ERROR should be raised by evaluation of the
aggregate; the declaration of Y demonstrates that this is not a legality
issue; the aggregate in that declaration may or may not have a legitimate
upper bound, depending on the value returned by E.
*****************************************************************************
!section 04.03.02 (09) M. Woodger 890318 8301272
!version 1983
!topic "choices given" -> "index values specified"
The second sentence says:
For an aggregate that has named associations, the bounds are
determined by the smallest and largest choices given.
But choices can be ranges, and smaller or larger is not defined for
ranges.
Consider for example the case of the array aggregate
(1..7 => 0.0, 8..7 => 1.0)
4.3(5) says "a discrete range specifies the components at the index
values in the range". So the null range does not specify any
components, and the other range specifies the components at the index
values 1..7, the smallest and largest of which are the array bounds.
Thus the quoted sentence should say
... determined by the smallest and largest index values specified.
*****************************************************************************
!section 04.03.02 (11) Peter Belmont 830615 8300003
!version 1983
!topic evaluations in multidim array aggregates
In a multidimensional array aggregate, some lower dimensions
may have different bounds from others. In this case, by 4.3.2(11),
CONSTRAINT_ERROR is raised.
Does Ada specify whether or not (or in what order) any component value expressions
are evaluated before the determination to raise CONSTRAINT_ERROR is made and
the exception raised? Note that after step one (4.3.2(10), the choices
of the aggregate and its subaggregates have been evaluated and the possibility
of making the determination is available, so that no other expressions
need be evaluated.
In some cases, the evaluation of some of these choice expressions may
determine that CONSTRAINT_ERROR should be raised for the reason given in
4.3.2(11). Does Ada require that all choice expressions be evaluated?
Must all subaggregates have the same bounds if they are all NULL? 4.3.2(11)
says YES.
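A concrete instance of the null-bounds question (my sketch; identifiers invented):

```ada
type M is array (INTEGER range <>, INTEGER range <>) of INTEGER;
-- Both subaggregates are null but have different bounds;
-- if 4.3.2(11) requires matching bounds even for null
-- subaggregates, this raises CONSTRAINT_ERROR.
X : M (1 .. 2, 1 .. 0) := ((1 .. 0 => 0),
                           (5 .. 4 => 0));
```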
Suggestion: Since all determinations of the "shape" of an aggregate
may be made after the choices (if any) have been evaluated,
I would suggest rewriting 4.3.2(10,11) as follows:
The evaluation ... three steps. First, the choices
of this aggregate and its subaggregates, if any, are evaluated
in some order that is not defined by the language.
Second, for an N-dimensional multidimensional aggregate, a check is
made that all (N-1)-dimensional subaggregates have the same
bounds, even if they are null subaggregates. CONSTRAINT_ERROR
is raised if this check fails. If the aggregate is not a null
aggregate, a check is made that the index values of the aggregate
and of all subaggregates belong to their corresponding index subtypes,
the index values being defined either by choices or positionally.
CONSTRAINT_ERROR is raised if this check fails.
Third, the expressions of the component associations of the
array aggregate are evaluated in some order that is not defined
by the language; the expression of a named association is
evaluated once for each associated component, and for each
evaluation a check is made that the result of the evaluation
belongs to the subtype of the array's component. If any such
check fails, CONSTRAINT_ERROR is raised, and the further evaluation
of component values is stopped.
************************************************************************
!section 04.03.02 (11) Peter Belmont 830615 8300004
!version 1983
!topic checking for incorrect numbers of elements in positional arrays
Note that 4.3.2(11) seems slightly in error: index values are definable
positionally as well as by choices. The example ((1,2,3),(2,4)) ought to
raise CONSTRAINT_ERROR but has no choices.
************************************************************************
!section 04.03.02 (11) Software Leverage Inc. 830929 8300076
!version 1983
!topic Index values in array aggregates
The RM is clear that CONSTRAINT_ERROR is raised if an array aggregate
has a named component_association with a choice that doesn't belong to
the corresponding index subtype. Should a similar check apply to
positional component_associations? (The key phrase in 4.3.2(11) is
"...a check is made that the index values defined by choices
belong...".)
A similar check would catch an array aggregate with positional
associations and "too many components". An example:
   type LITTLE is new INTEGER range 1..3;
   type UC is array (LITTLE range <>) of CHARACTER;

   function F return UC is
   begin
      return ('a', 'b', 'c', 'd');   -- too many components
   exception
      when CONSTRAINT_ERROR =>
         DO_SOMETHING;
   end F;

   .. F ..   -- call to F
When F is called, will the procedure DO_SOMETHING be called?
************************************************************************
!section 04.03.02 (11) Peter Belmont 830615 8300130
!version 1983
!topic checking bounds of array aggregates
1. Does Ada specify whether or not component value expressions are evaluated
before making the checks specified in paragraph 11? Note that after step one
(4.3.2(10)), the choices of the aggregate and its subaggregates have been
evaluated, so the required checks can be made before evaluating component value
expressions.
2. Can CONSTRAINT_ERROR be raised before all the choices have been evaluated,
e.g., for ((1..F => 1), (1..G => 1), (1..H => 1)), if F /= G > 0, can
CONSTRAINT_ERROR be raised before H is evaluated?
3. Must all subaggregates have the same bounds if they are all NULL?
4.3.2(11) says YES.
************************************************************************
!section 04.03.02 (11) John Goodenough 830706 8300131
!version 1983
!topic checking bounds of array aggregates (8300130)
1,2. It seems to me one can argue that paragraph 10 implies that all
expressions are evaluated before checks are made. Or one can say that since
the time at which the checks are made is not specified precisely, the RM allows
any check to be made as soon as possible. Since the RM is sometimes careful to
specify when a check is made (see 5.2, for example), I think it would be most
consistent to say that the check can be made before the component expressions
are evaluated, and even before all the choices are evaluated.
3. Despite comment #5410 that the bounds of such arrays are not defined by the
language, I think Peter is correct here. The second sentence of paragraph 11
does not include the qualification "For the evaluation of an aggregate that is
not a null array", and so the second sentence applies even to null arrays.
************************************************************************
!section 04.03.02 (11) Paul N. Hilfinger 830707 8300132
!version 1983
!topic checking bounds of array aggregates (8300131)
I agree with John's analysis.
I believe, however, that Peter's point 2 is a "difference that makes no
difference." The aggregate
((1..F => 1), (1..G => 1), (1..H => 1))
is illegal if F, G, or H is nonstatic (4.3.2(3)). Hence, there can be no
question of side effects (as there could be if one were a general parameterless
function) and it is semantically indistinguishable just where
CONSTRAINT_ERROR is raised in the evaluation of this aggregate (it will
always be raised in a legal program, of course, since either F, G, or H is <
1, which is illegal according to 4.3.2(3), or there is overlap).
************************************************************************
!section 04.03.02 (11) P. Hilfinger 831020 8300182
!version 1983
!topic Checking for incorrect numbers of elements in positional array aggregates
!references 8300076, 8300004, AI00019/00
Consider the following variation (due to Gary Dismukes) of previously
offered examples:
   type LITTLE is (x, y, z);
   type UC is array (LITTLE range <>) of CHARACTER;

   function F return UC is
   begin
      return ('a', 'b', 'c', 'd');   -- too many components
   end F;

   PUT(LITTLE'IMAGE(F'LAST));
As in previouslymentioned cases, the language does not require that
CONSTRAINT_ERROR be raised at any point here (note, in particular, that
there is no range constraint on the parameter to LITTLE'IMAGE). A friendly
reading of the standard, therefore, (i.e., one that contorts the meaning of
the words as little as possible) indicates that CONSTRAINT_ERROR must not be
raised; however, the result in this case is certainly undefined.
This case is clearly going to have to be fixed. While we're at it, we might
as well take a friendly reading of the LANGUAGE (i.e., one that does not
make the language appear ridiculous) and assume that the standard is in
error here, and that the wording of the first sentence of (11) should be
changed to the following:
For the evaluation of an aggregate that is not a null array,
a check is made that the index values defined by choices belong to
the corresponding index subtypes, {that the number of elements does
not exceed the number of values in the corresponding index subtypes,}
and also ....
***
Finally, I'd like to anticipate one misconception that I've encountered in
discussing this problem. Consider something closer to the original example:
   subtype LITTLE is INTEGER range 1..3;
   type UC is array (LITTLE range <>) of CHARACTER;
   subtype INDEX_SUBTYPE is LITTLE range 1 .. 2;
   subtype LUC is UC(INDEX_SUBTYPE);
   . . .
It has been suggested that the four-element aggregate of type UC is
acceptable, since the base type of LITTLE has at least four elements.
Clearly this argument does not apply to the enumerated type above.
Moreover, the argument confuses the role of LITTLE, the index subtype, with
that of INDEX_SUBTYPE, an index constraint. LUC is a subtype of UC, but UC
is not a subtype of anything. Aggregates that are assigned to variables of
type LUC may contain more than two values, since they are of type UC and not
constrained (CONSTRAINT_ERROR is raised on the assignment), but there is no
type ``less constrained than'' UC to which an aggregate can belong.
************************************************************************
!section 04.03.02 (11) J. Goodenough 831206 8300239
!version 1983
!topic positional aggregates with too many components
The main difficulty with these aggregates is when the upper bound does
not exist for the index base type, e.g.,
   type ENUM is (A, B, C);
   subtype SMALL is ENUM range A..B;
   type A is array (SMALL range <>) of INTEGER;
   function F (X : A) return A is
   begin
      return X;
   end;
   ...
   F((1,2,3,4))'LAST   -- no exception and no value! (1)
   F((1,2,3))'LAST     -- no exception; equals C (2)
   (1,2,3) in A        -- FALSE; no CONSTRAINT_ERROR (3)
   (1,2,3,4) in A      -- FALSE; no CONSTRAINT_ERROR (4)
One might argue that, in principle, the upper bound of an aggregate is
computed by applying 'SUCC to the lower bound the required number of
times, or by computing 'VAL((length - 1) + 'POS(lower bound)).
CONSTRAINT_ERROR will be raised by 'VAL or 'SUCC when the upper bound
does not exist. This reason for raising CONSTRAINT_ERROR in cases (1)
and (4) would be similar to the explanation given for null string
literals in 4.2(3).
The only inconsistency in allowing (1,2,3) for type A is theoretical:
this value does not belong to the type, according to 3.6(4) [type A has
no values with three components or more], and yet no exception is raised
by the aggregate. We could just decide to accept this somewhat
theological anomaly, or we could say that "obviously" a legal aggregate
should either produce a value belonging to the type or raise an
exception.
The smallest change to the language to resolve this problem is to say
that CONSTRAINT_ERROR is raised (by the computation of the upper bound)
if the upper bound does not exist (a la 4.2(3)).
A more intuitive rule, but one that imposes a greater change, is to say
that CONSTRAINT_ERROR is raised for nonnull aggregates if the upper
bound does not belong to the index subtype.
A legalistically conservative position would be to require, as a binding
interpretation, that CONSTRAINT_ERROR is raised when the upper bound does
not exist, and, as a nonbinding interpretation, that CONSTRAINT_ERROR
is raised when the upper bound belongs to the base type of the index,
but not to the subtype. Under these interpretations, CONSTRAINT_ERROR
will be raised for all four cases above.
************************************************************************
!section 04.03.02 (11) Jean D. Ichbiah 840301 8300307
!version 1983
!topic Checking for incorrect number of COMPONENTS in aggregates
The previous paragraphs of 4.3.2, in particular paragraphs (4) through (9),
define the bounds of an array aggregate  whether in named or in positional
notation. Hence a simple correction for (11) is to delete the reference to
choices and formulate the rules in terms of bounds, so that it applies for
both notations:
"For the evaluation of an aggregate that is not a null array, a check
is made that the [index values defined by choices] {bounds} belong to
the corresponding index subtypes, ..."
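Under the reformulated check, a positional aggregate would be caught the same way as a named one; a sketch (mine; names invented):

```ada
subtype S is INTEGER range 1 .. 3;
type V is array (S range <>) of CHARACTER;
-- The aggregate's bounds are 1 .. 4; the upper bound 4 does not
-- belong to the index subtype S, so the bounds check would raise
-- CONSTRAINT_ERROR.
X : constant V := ('a', 'b', 'c', 'd');
```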
************************************************************************
!section 04.03.02 (11) Ada Europe/Russell 840712 8300394
!version 1983
!topic What exactly is the index subtype of an array aggregate?
Example:
   type SUB is range 1 .. 3;
   A : array (SUB) of INTEGER;
   B : array (1 .. 3) of INTEGER;
   C : array (INTEGER range 1 .. 3) of INTEGER;
   ...
   A := (2 => 0, 3 => 0, 4 => 0);
   B := (2 => 0, 3 => 0, 4 => 0);
   C := (2 => 0, 3 => 0, 4 => 0);
Is CONSTRAINT_ERROR raised on aggregate evaluation in any of these cases?
Does sliding take place? I get a different interpretation from everyone I
ask, even after a careful reading of the LRM, so at the very least the
wording in the LRM should be tightened up. I think that CONSTRAINT_ERROR is
raised on aggregate evaluation (4.3.2(11)) in all three cases, and I will
try to give the reasons.
Section 4.3.2(11) states that "a check is made that the index values
defined by choices belong to the corresponding index subtypes"; so the
problem I have is deciding what the index SUBTYPE is.
Section 4.3(7) seems to be explicit here  but only on the TYPE of an
aggregate, no mention is made of index subtypes (presumably because
resolution is only concerned with TYPES, and not constraints). So I
interpret this section as giving the type of the aggregate, but can I
assume it also gives the index SUBTYPE? In the example above, is this 1..3?
This problem occurs elsewhere, e.g. for the loop parameter.
Section 4.3.2(9) states "For a positional aggregate, the lower bound is
determined by the applicable index constraint if the aggregate appears in
one of the contexts ...". One of these contexts, (a), does apply here; thus
the index subtype is taken from the constrained array subtype of the
object. Although the aggregate in question is named, not positional, I
don't see how this can change the applicable index constraint.
Sections 3.6(5) and (14) through (16) define that the index subtype of all
three examples above is 1..3.
To me, the above argument seems reasonably sound; however, I am more than
willing to be convinced otherwise. For completeness, I include some of the
counter arguments that I have been given.
It has been argued that 4.3.2(9) "the bounds of an array aggregate ... the
bounds are determined by the smallest and largest choices given.", actually
defines the index subtype. I disagree, I read this as specifying the bounds
of the ACTUAL aggregate given, NOT the index subtype.
Section 3.6(2) states that "For a discrete range used in a constrained
array definition ... an implicit conversion to the predefined type INTEGER
is assumed ...". Does this mean that, in the case of array B, the index
subtype of the aggregate is not 1..3, but actually INTEGER'FIRST to
INTEGER'LAST? Hence there are no constraint checks to be made (ever) as per
4.3.2(11), because either the index is the wrong type, or else the index
can have any value within the base. I do not think this is the case, I
think the index subtype of the aggregate is still 1..3.
Section 5.2.1 specifies that in an assignment statement, implicit subtype
conversion takes place (i.e. sliding). So the assignment would not raise
CONSTRAINT_ERROR; that I'm happy with (I think). Because sliding can take
place, the index subtype of the aggregate can be considered to be the whole
range INTEGER'FIRST .. INTEGER'LAST. This I don't agree with, for the
reasons laid out above.
Note that if all the ranges in the above examples had been 1..5 and if the
assignments had been to the slices A(1..3), B(1..3) and C(1..3), then
sliding would take place and no exception would be raised.
One validated compiler did not raise any error on any of the cases
mentioned. The draft rationale gives an example similar to array B, stating
that it is "well-defined" (Rationale 4.5.2(7)), i.e. no exception would be
raised.
The examples given above are simple cases, more complex cases occur when
checking that subaggregates of a multidimensional aggregate all have the
same bounds (4.3.2(11)). In the innermost aggregates, one could be named,
another positional, as in:
((1 .. 3 => 0), (0, 0, 0))
Section 4.3.2(9) says that, for a positional aggregate, "the lower bound is
given by S'FIRST where S is the index subtype". So the index SUBTYPE is
important. Likewise when catenating an aggregate, what is the index
subtype?
In the absence of any resolution to the contrary, our implementation will
follow the reasoning I have laid out above, i.e. cases A, B and C will raise
CONSTRAINT_ERROR.
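The reasoning above can be made concrete with a small executable sketch
(Python used here purely as pseudocode; the helper names are invented).
Under the reading that the index subtype of the aggregate is 1..3, all
three assignments fail the 4.3.2(11) check before any sliding could occur:

```python
# Model of the questioned semantics: under the submitter's reading, the
# aggregate (2=>0, 3=>0, 4=>0) is checked against the index subtype 1..3
# taken from the target's constrained array subtype, so cases A, B and C
# all raise CONSTRAINT_ERROR at aggregate evaluation.

class ConstraintError(Exception):
    pass

def check_aggregate_choices(choices, index_subtype):
    """4.3.2(11) under the submitter's reading: every index value
    defined by a choice must belong to the index subtype."""
    lo, hi = index_subtype
    for c in choices:
        if not lo <= c <= hi:
            raise ConstraintError(f"choice {c} outside {lo}..{hi}")

index_subtype = (1, 3)          # taken to be 1..3 for A, B and C alike
aggregate = {2: 0, 3: 0, 4: 0}  # (2=>0, 3=>0, 4=>0)

try:
    check_aggregate_choices(aggregate.keys(), index_subtype)
    result = "no exception"
except ConstraintError:
    result = "CONSTRAINT_ERROR"
print(result)  # under this reading: CONSTRAINT_ERROR
```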
************************************************************************
!section 04.03.02 (11) J. Goodenough 841105 8300466
!version 1983
!topic Index bounds for null multidimensional aggregates
[An implementer submitted the following comment with respect to four tests in
Version 1.4 of the test suite. (The tests are C43206A, C43207A, C43207B, and
C43214A.)]
Four tests in Version 1.4 of the test suite contain multidimensional array
aggregates with choices given by a mixture of null and non-null ranges. The
interpretation of the tests is that the evaluation of a null array aggregate
(i.e., an aggregate with one or more null ranges) does not perform any checks
that the index values of choices belong to the corresponding index subtypes,
even for choices that are not null ranges. We disagree with this
interpretation of the language.
Paragraph 2 in section 4.3.2 of the Ada reference manual gives a description
of multidimensional array aggregates in terms of one-dimensional array
aggregates and concludes by saying that "In what follows, the rules concerning
array aggregates are formulated in terms of one-dimensional aggregates."
This implies that in subsequent paragraphs using the generic term 'aggregate',
the wording is to be interpreted as applying to one-dimensional aggregates.
The first sentence of paragraph 11 states that "For the evaluation of an
aggregate that is not a null array, a check is made that the index values
defined by choices belong to the corresponding index subtypes, ..." If we
read this as applying to a subaggregate of a two-dimensional aggregate, then
there is a check to be performed for the choices (if any) as long as the
subaggregate itself is not null. Thus, for an aggregate such as ( 1..0 => (
1..9 => 0 ) ), a check would be performed that the range 1..9 is compatible
with the index subtype of the array type's second dimension, even though the
choice given for the outer aggregate is a null range. To not perform this
check would be inconsistent with the idea of only producing values of a type
that are fully consistent with that type (where we consider the notion of
value to include such things as attributes). (Consider also that elaboration
of an index constraint given for a multidimensional array type performs checks
for all indexes of the type regardless of whether any discrete range given in
the constraint is a null range. Furthermore, to implement the semantics
implied by these tests requires generation of special code (in the general
case) to test whether any of the given choices of the aggregate are null
ranges in order to determine whether to apply the index checks to any of the
choices.)
Should these tests be withdrawn?
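The disagreement reduces to two candidate checking procedures, which can be
sketched as follows (Python as pseudocode; the names are invented, and a
second index subtype of 1..5 is assumed purely for illustration):

```python
# Two candidate readings of 4.3.2(11) for ( 1..0 => ( 1..9 => 0 ) ),
# where the second index subtype is taken to be 1..5.

def is_null(rng):
    lo, hi = rng
    return lo > hi

def compatible(rng, subtype):
    lo, hi = rng
    slo, shi = subtype
    return is_null(rng) or (slo <= lo and hi <= shi)

outer_choice = (1, 0)     # null range for dimension 1
inner_choice = (1, 9)     # non-null range for dimension 2
dim2_subtype = (1, 5)     # hypothetical index subtype of dimension 2

# Test-suite reading: a null outer aggregate suppresses all checks.
suite_reading_fails = False

# Implementer's reading: each one-dimensional (sub)aggregate is checked
# on its own, so the inner choice 1..9 is tested against 1..5.
implementer_reading_fails = not is_null(inner_choice) and \
    not compatible(inner_choice, dim2_subtype)

print(suite_reading_fails, implementer_reading_fails)  # False True
```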
************************************************************************
!section 04.03.02 (11) Paul Hilfinger 841105 8300467
!version 1983
!topic Index bounds for null multidimensional aggregates
!reference 8300466
I think that the ``intent'' here was that the checks NOT be performed,
contrary to the implementor's petition. However, the implementor has a
clever legalistic justification and (more importantly) a reasonable case,
and I am therefore tempted to suggest withdrawing the tests and clarifying
the manual.
This temptation was strengthened by the following related example:
subtype Little is INTEGER range 1 .. 10;
type ARR is array (INTEGER range <>, Little range <>) of INTEGER;
A: constant ARR := (1..0 => (0..100 => 0));
begin
for i in A'RANGE(2) loop
 (*)
Now I believe we have decided that i has subtype Little. But of course, its
value won't be of that subtype. This raises several issues:
1. Are we to interpret 3.6.2(7) as meaning that the for statement
is EXACTLY equivalent to
for i in A'FIRST(2) .. A'LAST(2) loop
so that CONSTRAINT_ERROR is raised by the for loop? This is not
really clear from the LRM, since I don't know what it means to
``yield'' a range. If it is the meaning, furthermore, what about
for i in FunctionWithSideEffect(...)'RANGE(2) loop
Does the side effect happen twice?
2. If we allow the loop to execute without raising CONSTRAINT_ERROR,
this means that the invariant ``an initialized variable or
constant of a given subtype has a value obeying that subtype'' is
violated and the compiler must check uses of i. This seems a
little unfortunate to me.
3. If we require the loop to raise CONSTRAINT_ERROR, then it must
perform a check that might better have been performed (once) at
array allocation or construction time.
I claim that what we have here is an inconsistency between 3.6.1(4) and what
I see as the original intent of 4.3.2(11). 3.6.1(2) defines the
compatibility of an index constraint ``if and only if the constraint defined
by each discrete range is compatible with the corresponding index subtype.''
In other words, a null array subtype is not a special case and
B: ARR(1..0,0..100);
raises CONSTRAINT_ERROR. However, 4.3.2(11) seems to say that the obvious
analog of this rule does not apply to aggregates, and
A: constant ARR := (1..0 => (0..100 => 0));
causes no errors.
To boil this all down: the implementor ought to be right. Withdraw the
tests and let's discuss the issue in the LMC.
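The violated invariant can be exhibited with a small sketch (Python as
pseudocode; the names are invented):

```python
# The troublesome situation: i gets subtype Little (1..10) from the
# array's index subtype, yet A'FIRST(2)..A'LAST(2) is 0..100 because the
# null aggregate escaped the 4.3.2(11) check.

LITTLE = (1, 10)
a_range_2 = (0, 100)     # bounds stored for the (null) array A

def loop_values(rng):
    lo, hi = rng
    return list(range(lo, hi + 1))

# Values delivered by the loop that violate the loop parameter's subtype:
bad = [i for i in loop_values(a_range_2)
       if not LITTLE[0] <= i <= LITTLE[1]]
print(len(bad))   # 91 of the 101 loop values lie outside 1..10
```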
************************************************************************
!section 04.03.02 (11) J. Goodenough 841105 8300468
!version 1983
!topic Index bounds for null multidimensional aggregates
!reference 8300467
I agree with Paul in this case. The only justification for the test's
interpretation is that 4.3.2(11) says "For the evaluation of an aggregate that
is not a null array, a check is made that the index values defined by the
choices belong to the corresponding index subtypeS"; the use of the plural for
subtypes shows that the writer had in mind multidimensional arrays. However,
since a null multidimensional array aggregate seems to be the only way one can
get non-null bounds that don't satisfy an index subtype, and since such bounds
are not consistent with the intent of other rules in the language, I think the
test is not fully justified. The LMC needs to issue a clarification here.
************************************************************************
!section 04.03.02 (11) R.Tischler, Tandem Computers, 860626 8300759
!version 1983
!topic Evaluation of multidimensional array aggregates
Section 4.3.2 (10) tells what is evaluated for a multidimensional
array aggregate, and section 4.3.2 (11) tells what checks are to be
performed. The manual is not specific about the ordering, so the
checks could be performed at various times during the evaluation.
Consider the following example:
type T is array (1..2, 2..5) of INTEGER;
OBJ : T := (1 => (2..5 => ... ),
            2 => (3..6 => ... ));
I'm specifically concerned with checking that subaggregates have
identical bounds, and I think it's okay to do the checks as soon
as possible. For instance, in this example, you should be able to
raise CONSTRAINT_ERROR after noticing that the lower bounds of the
subaggregates are unequal (2 /= 3), before calculating the upper
bounds. This would make a difference if these bounds were given
by expressions with side effects.
It isn't clear to me if commentary AI00018, which addresses this
issue, covers this particular detail; it speaks about evaluating
"ranges", not the individual bounds that comprise the ranges. The
latest implementer's guide that I have, which is dated 831213,
specifically agrees with me in paragraph 4.3.2 (S21). However,
ACVC test C43212A disagrees; it specifically tests that both the
lower bounds and the upper bounds are calculated before checking
whether the bounds are equal for all the subaggregates.
*****************************************************************************
!section 04.03.02 (11) Bevin Brett/Ron Brender 861016 8300845
!version 1983
!topic Meaning of "index subtype" and "null array"
Consider:
subtype S is STRING(1..2);
X : S;
function F(I : INTEGER) return CHARACTER is separate;
...
X := S'(21..29 => F(1), others => F(2));
LRM 4.3.2(11), together with AI00019/07, specifies a constraint check
to be performed in terms of the "corresponding index subtype". The
only definition of "index subtype" seems to be given in LRM 3.6(5).
If "index subtype" is understood in that sense, then in this example,
the check will not raise CONSTRAINT_ERROR. In this case it would
appear the expressions for the spurious components should be evaluated
(4.3.2(10)), and then discarded.
Alternatively, the phrase "corresponding index subtype" may actually be
intended to refer to the bounds of the array aggregate (in some cases
also called "the applicable index constraint"), as determined by
4.3.2(4-9). In this case the spurious values would in fact be outside
the specified range, and the check would raise CONSTRAINT_ERROR.
In a similar manner, it is not clear which interpretation, and hence
which bounds, are intended to be used in determining whether the
aggregate is a "null array" for the purposes of "For the evaluation of
an aggregate that is not a null array...", which begins this
paragraph.
Which (if either) of these interpretations is intended? We lean
strongly toward the second interpretation as the one having the most
sensible consequences.
The following further illustrates the issues.


-- Consider the following pieces of Ada code...

procedure EG_1 is
subtype S is STRING(1..2);
X : S;
function F(I : INTEGER) return CHARACTER is separate;
begin
X := S'(21..29 => F(1), others => F(2));

-- The bounds on the aggregate are determined by 4.3.2 (b) to be 1..2


-- Interpretation (1)

--  a. This is a NOT null array, so para 11 does apply, so there are
--     checks between the 21..29 and POSITIVE range <>.
--     This check does not fail.

--  b. Calls are done for the "21..29 => F(1)",

--  c. 1..2 calls are done for the "others => F(2)", and the resulting
--     values used in the aggregate's value.


-- Interpretation (2)

--  a. This is a NOT null array, so para 11 does apply, so there are
--     checks between the 21..29 and 1..2.
--     This check DOES fail.
end;
procedure EG_2 is
subtype S is STRING(1..2);
X : S;
function F(I : INTEGER) return CHARACTER is separate;
begin
X := S'(F(1), F(2), F(3), F(4), others => F(5));

-- The bounds on the aggregate are determined by 4.3.2 (b) to be 1..2

-- Interpretation (1)

--  a. This is a NOT null array, so para 11 does apply, so AI019
--     implies there are checks between 1..4 and POSITIVE range <>.
--     This check does not fail.

--  b. Calls are done for the "F(1), F(2)" and the resulting values
--     are used in the aggregate's value.

--  c. Calls are done for the "F(3), F(4)", and the resulting
--     values are discarded.

--  d. No components correspond to the "others=>F(5)", so no calls to
--     "F(5)" are done.


-- Interpretation (2)

--  a. This is a NOT null array, so para 11 does apply, so AI019
--     implies there are checks between 1..4 and 1..2.
--     This check DOES fail.
end;


-- There is a twist to this discussion, that hinges on the open predicate of
-- para 11, "For the evaluation of an aggregate that is not a null array..."

-- It would appear that what is really intended here is "for the evaluation of
-- an aggregate that has a positional association, or a choice with a single
-- expression, or a choice with a non-null discrete range", since the following
-- example shows that it is not always simple to decide whether or not the
-- aggregate "is a null array".


procedure EG_3 is
subtype S is STRING(2..1);
X : S;
function F(I : INTEGER) return CHARACTER is separate;
begin
X := S'(1..9 => F(1), others => F(2));

-- The bounds on the aggregate are determined by 4.3.2 (b) to be 2..1

-- QUESTION: Is this a null array?

-- Interpretation (1)

--  a.1 This IS a null array, so para 11 does NOT apply, so there
--      are no checks between the choices and the POSITIVE range <>, or

--  a.2 This IS NOT a null array, so the checks are done and pass.

--  b. Calls are done for the "1..9 => F(1), " and the resulting values
--     are discarded.

-- Interpretation (2)

--  a.1 This IS a null array, so para 11 does NOT apply, so there
--      are no checks between the choices and the 2..1, or

--  a.2 This IS NOT a null array, so the checks are done and
--      CONSTRAINT_ERROR is raised.

--  b. Calls are done for the "1..9 => F(1), " and the resulting values
--     are discarded.

end;
*****************************************************************************
!section 04.03.02 (11) M Woodger 881113 8301129
!version 1983
!topic Non-null bounds belong to the index subtype
!reference AI00019, AI00313/03
Replace the text before "and also" by:
"For the evaluation of an aggregate, a check is made that the index values
belong to the corresponding index subtypes, but omitting the bounds of null
ranges,"
*****************************************************************************
!section 04.03.02 (11) M. Woodger 890318 8301273
!version 1983
!topic Index checks for aggregates with null ranges
!reference AI00019, AI00313/03
For the evaluation of an aggregate [that is not a null array], a check
is made that the index values [defined by choices] {specified} belong to
the corresponding index subtypes, ...
This wording follows 4.3(5) in that a choice that is a null discrete range
specifies no index values, so that its bounds are not to be checked.
*****************************************************************************
!section 04.03.02 (13..14) Eberhard Wegner 19830818 8300044
!version 1983
!topic Insert "(some in qualified expressions)" after "aggregates".
Second best, delete "TABLE'" twice and "SCHEDULE'" twice.
Reason: Every aggregate must begin with a left parenthesis: 4.3(2).
************************************************************************
!section 04.04 (04) M Woodger 881105 8301130
!version 1983
!topic After "context" insert "(see 8.7)"
Helpful comment.
*****************************************************************************
!section 04.05 (06) M Woodger 881105 8301131
!version 1983
!topic Insert at the end: ", and in 4.10 for universal expressions"
Incomplete statement.
*****************************************************************************
!section 04.05 (07) Gary Morris 890106 8301256
!version 1983
!topic Raising NUMERIC_ERROR on remainder operations
I have a question about the "rem" operation and the Ada rules that
apply to it. Specifically: is it acceptable to raise NUMERIC_ERROR
for a "rem" (or "mod") operation when the operands are INTEGER'FIRST
(-32768) and -1?
Consider a machine with two's complement integer arithmetic, where the
predefined integer type has a range that is symmetric around zero with
an extra negative value (such as -32768..32767). The result of the
operation (-32768 rem -1) has a valid mathematical result of 0, but the
division operation (-32768/(-1)) used to compute the remainder yields
NUMERIC_ERROR (because the mathematical result of the division is
32768, outside the range of the type).
Note that this problem only arises with these two values of the
operands, all other values for either operand yield the desired
result.
In the RM 4.5/7, this operation may raise NUMERIC_ERROR only if the
mathematical result is not a value of the type. Since the
mathematical result (0) is a value for this type, RM 4.5/7 requires
that no exception be raised.
However, according to the RM 4.5.5/34, integer remainder is defined
in terms of integer divide and multiply:
> Since integer division and remainder are defined by the relation
>
> A = (A/B)*B + (A rem B)
Or: A - (A/B)*B = (A rem B)
With a 16-bit two's complement integer type, where A is -32768 and B is
-1, the mathematical result of the division is 32768. This value is
outside the range of the base type (range -32768..32767) and
NUMERIC_ERROR is raised on divide. Since a divide is used on almost all
machines in the computation of remainder (divide instructions
typically return both quotient and remainder) it makes sense for
NUMERIC_ERROR to be allowed for "rem" (and "mod") when it is allowed
for divide. To disallow NUMERIC_ERROR for this situation would
require an explicit check of the operands on every remainder operation
to see if they were 'FIRST and -1. This seems an unreasonable
overhead for an unusual case.
A survey of three different Ada compilers showed that all three are
not in compliance with RM 4.5/7 when performing a "rem" or
"mod" with the values 'FIRST and -1. The compilers tested were
TeleSoft VAX/VMS Ada version 3.22, Sun Ada 1.3, and DEC VAX Ada 1.5.
The two VAX compilers raised NUMERIC_ERROR and the Sun compiler
returned an incorrect result without raising an exception. I have an
ACVC-style test program that I used to test these compilers and would
be willing to provide it if requested.
Is it acceptable to raise NUMERIC_ERROR in this situation, even though
the mathematical result of the "rem" (or "mod") is in the range of the
integer type?
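The arithmetic in question can be replayed with unbounded integers (a
sketch in Python; ada_div and ada_rem are invented names implementing the
truncating division and the defining relation of 4.5.5):

```python
# Ada's "/" truncates toward zero; rem satisfies A = (A/B)*B + (A rem B).
def ada_div(a, b):
    q = abs(a) // abs(b)
    return q if (a < 0) == (b < 0) else -q

def ada_rem(a, b):
    return a - ada_div(a, b) * b

FIRST, LAST = -32768, 32767  # 16-bit two's complement INTEGER

a, b = FIRST, -1
q = ada_div(a, b)            # mathematically 32768
r = ada_rem(a, b)            # mathematically 0

print(q, r)                  # 32768 0
print(q > LAST)              # True: the intermediate quotient overflows
                             # the base type, although the remainder is
                             # a value of the type
```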
*****************************************************************************
!section 04.05.01 (01) C Bendix_Nielsen, AdaFD, DDC 861009 8300830
!version 1983
!topic Definition of predefined operators.
According to 4.5.1(1) "and", "or" and "xor" are predefined, but their
meaning is not defined, although the note 4.5.1(5-6) defines their
conventional meaning.
According to 4.5.6(1) "abs" and "not" are predefined, but their meaning is
not defined, not even their conventional meaning.
Presumably, all operators have their conventional meaning (allowing
for 4.5.7).
*****************************************************************************
!section 04.05.01 (01) E. Guerrieri 861016 8300846
!version 1983
!topic Definition of predefined operators
!references 8300830, AI00474
1.5(3) states:
All other terms are in the English language and bear their
natural meaning, as defined in Webster's Third New
International Dictionary of the English Language.
We thus have the following definitions:
absolute value (abs): of a real number: the value irrespective of sign
conjunction (and): a statement that is true only if both its components
are true
disjunction (or, xor): the relation of the terms or clauses of a logical
proposition or judgement expressing alternatives; also: a
statement of such a proposition usu. taking the form (1) pvq
meaning p or q or both or (2) p+q meaning p or q but not both
called also respectively (1) inclusive disjunction, (2)
exclusive disjunction.
negation (not): a statement that is true provided the unqualified original
statement is false
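These dictionary definitions correspond to the usual truth tables, which
can be checked mechanically (a trivial sketch in Python):

```python
# The Webster definitions, rendered as truth tables: conjunction ("and")
# is true only when both components are true; inclusive and exclusive
# disjunction ("or" and "xor") differ exactly in the (True, True) case.

cases = [(False, False), (False, True), (True, False), (True, True)]
conj = [a and b for a, b in cases]
incl = [a or b for a, b in cases]
excl = [a != b for a, b in cases]

print(conj)  # [False, False, False, True]
print(incl)  # [False, True, True, True]
print(excl)  # [False, True, True, False]
```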
*****************************************************************************
!section 04.05.01 (03) MATS WEBER, DALIN SOFTWARE 860428 8300746
!version 1983
!topic BOOLEAN ARRAYS
It does not seem to me that the Language Reference Manual clearly
mentions whether the overloading of a logical operator influences the
effect of that operator on boolean arrays.
*****************************************************************************
!section 04.05.01 (04) MC Orton 850621 8300642
!version 1983
!topic ".. or else delivers the same results as or."
Delete the last sentence in the paragraph, to wit:
If both operands are evaluated, "and then" delivers the same result
as "and", and "or else" delivers the same result as "or".
This sentence is incorrect, because the short-circuit control forms
specify the order of evaluation of their operands (Section 4.5.1(4)),
whereas the logical operators do not (Section 4.5(5)).
*****************************************************************************
!section 04.05.02 (10) Lee Carver, Science Appl. Inc. 830525 8300279
!version 1983
!topic Intended types
Current wording is: "the membership tests IN and NOT IN are
predefined for all types." It should read: "the membership
tests IN and NOT IN are predefined for all scalar types."
My rationale is that the current wording implies that IN is
defined over records and arrays. At a minimum the semantics
of IN operating on structures should be given, if this is the
intent.
************************************************************************
!section 04.05.02 (11) Eberhard Wegner 19830818 8300045
!version 1983
!topic Change "cars are identical" to "cars are equal".
Even if numbers and owners of MY_CAR and YOUR_CAR are the same,
there may (illegally) be two cars. "Identical" usually means that
there is only one.
************************************************************************
!section 04.05.03 (06) Japanese comments on DP8652 850510 8300561
!version 1983
!topic CONSTRAINT_ERROR raised by catenation
The 6th paragraph of 4.5.3 says "the exception CONSTRAINT_ERROR is
raised by catenation if the upper bound of the result exceeds the range
of the index subtype, unless the result is a null array."
See the following example.
#1 procedure CAT is
#2 S : STRING(1..8);
#3 SS : STRING(1..3);
#4 begin
#5 S := "12345678";
#6 SS := S(7..8) & 'a';
#7 end CAT;
(1) CONSTRAINT_ERROR should not be raised at line #6, because the index
range of the result of "&" is 7..9 and 9 does not exceed POSITIVE'LAST,
which is the upper bound of the index range of type STRING.
Is this true?
(2) CONSTRAINT_ERROR should be raised at line #6, because 9 exceeds the
upper bound of the discrete range (1..8) of the index constraint of S,
which is the left operand of "&". Is this false?
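Reading (1) can be sketched as follows (Python as pseudocode; the names
are invented, and a 32-bit INTEGER'LAST is assumed for illustration):

```python
# 4.5.3(6) under reading (1): the result of S(7..8) & 'a' has bounds
# 7..9, and the upper bound 9 is checked against the index subtype of
# the base type (POSITIVE, i.e. 1..INTEGER'LAST), not against S's
# constraint 1..8.

POSITIVE_LAST = 2**31 - 1              # a typical INTEGER'LAST (assumed)

def catenate_bounds(left_bounds, right_len):
    lo, hi = left_bounds
    return lo, hi + right_len          # lower bound comes from the left operand

lo, hi = catenate_bounds((7, 8), 1)    # S(7..8) & 'a'
raises = not (hi <= POSITIVE_LAST)     # the null-result case is ignored here
print(lo, hi, raises)  # 7 9 False: no CONSTRAINT_ERROR at the catenation
```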
*****************************************************************************
!section 04.05.03 (06) J. Goodenough 850617 8300567
!version 1983
!topic re: CONSTRAINT_ERROR raised by catenation
!reference 8300561
The analysis proposed in the comment is correct. CONSTRAINT_ERROR must not be
raised at line 6. 4.5.3(6) says "CONSTRAINT_ERROR is raised if the upper bound
of the result exceeds the range of the index subtype, unless the result is a
null array." SS's index subtype is POSITIVE, since that is the index subtype
of STRING, which is SS's base type. (See the definition of index subtype in
3.6(5).)
*****************************************************************************
!section 04.05.05 (03) B Wichmann 901221 8301405
!version 83
!topic Meaning of INTEGER'FIRST rem (-1)
The referenced paragraph defines a relationship between INTEGER division and
remainder. However, in the case of a 2's complement machine,
INTEGER'FIRST/(-1) overflows. It is therefore not clear if:
a) INTEGER'FIRST rem (-1) should raise CONSTRAINT_ERROR (AI00387),
given that there is no appeal to 11.6, or
b) the result 0 should be produced.
b) The result 0 should be produced.
Paragraph 4.5.5(14) states that:
INTEGER'FIRST rem (-1) = INTEGER'FIRST rem 1, which is clearly 0;
and also:
INTEGER'FIRST rem (-1) = -((-INTEGER'FIRST) rem (-1)), which
clearly overflows.
Paragraph 4.5.5(12) does not help since it clearly does not give all the
cases in which NUMERIC_ERROR (CONSTRAINT_ERROR per AI00387) can arise, since
INTEGER'FIRST / (-1) will raise an exception on a 2's complement machine
(ignoring 11.6).
The Language Compatible Arithmetic Standard defines the result to be zero.
(This is because the rounding function rndI gives a result in Z, not in the
integer type.) Note that this problem did not arise in the NPL report DITC
167/90, since 'rem' is not defined in Pascal and the coding produced to
implement this (function remI, page 23), made a special case of this.
This issue was originally combined with other cases in which NUMERIC_ERROR is
raised in performing various operations (AI00159). However, it has been
decided that this issue should be separated out. The tentative conclusion of
a discussion of this at the ARG meeting in September 1990 was that
implementations should be allowed to raise an exception, although the
language requires that 0 be produced. The rationale for this is that adding
a performance penalty for this one case did not seem reasonable.
Note that the situation with INTEGER'FIRST mod (-1) is somewhat different
since the wording in 4.5.5(5) conjectures an 'integer value N' which
could imply that the value need not be of the appropriate integer type
(which it is not in this case, due to overflow). Implementations may well
use similar code sequences for rem and mod, and therefore the fact that
the wording in the RM is different is not so relevant to the resolution.
Since then, the only compiler known to NPL which gave NUMERIC_ERROR has been
amended to give 0. Hence the case for raising an exception seems rather weak.
The following compilers give 0: VADS (Sun/3), VAX (DEC), XD68000 and Alsys
(IBM PC). One vendor made a decision to add one machine instruction to ensure
that 0 was produced.
I conclude from the above that the ARG should reconsider the issue and make
it a requirement that the result be 0. I do not agree with the issue being
regarded as a pathology, for two reasons. Firstly, I do not think the
performance penalty is too high, and secondly, making this a special case
would adversely affect program proof tools. In practice, an implementation
for which the performance penalty was high would invoke the 'incorrect'
optimization by a mode switch, not used for validation.
Extract from Minutes of September 1990 ARG meeting
The Raising of CONSTRAINT_ERROR by REM and MOD
Brian Wichmann noted that if the operators in 4.5.5(3) are taken to be Ada
operators rather than mathematical notation, the value INTEGER'FIRST REM -1
is not defined by the reference manual. Wichmann suggested that Ada
semantics ought to be consistent with the proposed Language-Compatible
Arithmetic Standard (LCAS). LCAS calls for X REM -1 to complete normally and
return zero.
This approach was approved unanimously in a straw vote. Wichmann will write
an AI of class "pathology."
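The 'integer value N' twist for mod can be replayed in the same style (a
sketch with unbounded integers; the names are invented):

```python
# 4.5.5(5): A mod B satisfies A = B*N + (A mod B) for some integer N,
# with the result having the sign of B. For INTEGER'FIRST mod (-1) the
# result is 0, but the witnessing N equals 32768, which itself is not a
# value of a 16-bit base type.

FIRST, LAST = -32768, 32767

def ada_mod(a, b):
    return a % b     # Python's % already yields the sign of the divisor

a, b = FIRST, -1
r = ada_mod(a, b)
n = (a - r) // b     # the N of the defining relation
print(r, n)          # 0 32768
print(FIRST <= n <= LAST)  # False: N is not a value of the type
```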
*****************************************************************************
!section 04.05.05 (08) C Bendix_Nielsen, AdaFD, DDC 861009 8300829
!version 1983
!topic Integer multiplication of fixed point values.
4.5.5(8) says: "Integer multiplication of fixed point values
is equivalent to repeated addition. Division of a fixed point
value by an integer does not involve a change in type but is
approximate (see 4.5.7)."
How is multiplication by a negative integer performed?
What is the precision of dividing a fixed point value by an
integer (what is the model (safe) interval for an integer)?
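One natural reading of "equivalent to repeated addition" for a negative
multiplier is F * (-N) = -(F * N); a sketch (the helper name is invented,
and exact rationals stand in for fixed point values):

```python
# Multiplying a fixed-point value by an integer via repeated addition,
# with a negative multiplier handled by negating the N-fold sum. This is
# exact, so no model-interval question arises for the multiplication
# itself (division by an integer is the approximate case).

from fractions import Fraction

def fx_times_int(f, n):
    s = sum([f] * abs(n), Fraction(0))  # repeated addition
    return -s if n < 0 else s

f = Fraction(3, 8)          # a fixed-point value with small 1/8
print(fx_times_int(f, -4))  # -3/2, still a multiple of the small
```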
*****************************************************************************
!section 04.05.05 (10) P.Kruchten 830631 8300128
!version 1983
!topic can a real literal be an operand of a fixed point multiply ?
Query:
Can a universal_real be an operand of a fixed point multiplying
operator ? Is 'universal_real' a case of 'any fixed point type' ?
or else:
Is there an implicit conversion of the real literal to one of the
fixed point types visible at that point ? ( LRM 4.6(15) )
and then:
If there are several fixed point types visible at that point,
shall the choice of the type be made on criteria such as the range
or accuracy ?
Example:
procedure MAIN is
type FX1 is delta 0.1 range -100.0 .. 100.0;
type FX2 is delta 0.001 range -1.0 .. 1.0;
A : FX1 := 5.0 ;
begin
A := FX2( A * 0.001 ) ;  -- type violation ?
                         -- ambiguous ?
                         -- equiv. to: FX2( A * FX2(0.001) ) ?
end MAIN;

************************************************************************
!section 04.05.05 (10) P. N. Hilfinger 830531 8300129
!version 1983
!topic can a real literal be an operand of a fixed point multiply ? (8300128)
It was the intent of the LDT (not clearly expressed in 4.10 and 4.5.5)
that universal_real be a fixed point type for the purposes of 4.5.5.
************************************************************************
!section 04.05.05 (10) Ron Brender 831031 8300201
!version 1983
!topic Can a real literal be an operand of a fixed point multiply?
!reference AI00020, 8300129
I concur with comment 8300129 that it was intended that a real
literal be allowed as an operand of a fixed point multiply (and
division) operator. However, there is a subtle point that bears
further consideration.
RM 4.5.5(11) states that
"Multiplication of operands of the same or of different fixed
point types is exact and delivers a result of the anonymous
predefined fixed point type universal_fixed whose delta is
arbitrarily small. The result of any such multiplication must
always be explicitly converted to some numeric type".
Pragmatically, it was always understood that fixed point multiplication
(at least where the deltas are powers of two) is essentially an
"integer" multiplication producing a double-length product which is
then appropriately scaled to the target type of the conversion
(typically also a fixed point type). That is, for the unconverted
result of the multiplication, a delta equal to the product of the
deltas of the two operands was always sufficiently small. However, a
real literal (or other real universal operand) has no defined delta as
such; indeed, real literals and static universal operands generally
are required to be exact according to 4.10(4). This leads to the
conclusion that universal arithmetic (involving both unbounded
accuracy and/or unbounded range) may be required at RUNTIME.
The following examples will help make this concrete.
Example 1:
type T is delta 0.125 range -100.0 .. 100.0;
X , Y : T;
N : constant := 13#7.0#E-1;  -- 7/13th
...
X := 13.0;     -- a model number of T
...            -- anything to stop constant propagation
Y := T(X*N);   -- Exactly 7.0
Because the value of X (13.0) is a model number of T, N (7/13th) is a
static universal operand, and the exact result (7.0) is also a model
number of T, Y must have the exact result 7.0. In general,
arbitrarily accurate computation at RUNTIME will be required to
assure this result on most reasonable machines.
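Brender's Example 1 can be checked with exact rational arithmetic. The sketch below (Python rather than Ada, used purely to model universal arithmetic) confirms that the based literal 13#7.0#E-1 denotes exactly 7/13, that the required product is exactly 7.0, and that no scaled-binary representation of N can be exact, since 13 is not a power of two:

```python
from fractions import Fraction

# The based literal 13#7.0#E-1 denotes 7 * 13**(-1), i.e. exactly 7/13.
N = Fraction(7, 13)
X = Fraction(13)        # 13.0, a model number of T (delta 0.125)

# 4.5.5 requires the mathematically exact product when the result is a
# model number of the target type:
assert X * N == 7

# But 7/13 has no finite binary expansion (13 is not a power of two),
# so no fixed "delta" makes N exact in ordinary scaled-integer form:
assert all((N * 2**m).denominator != 1 for m in range(128))
print(X * N)   # 7
```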
In the following, assume that FLOAT is the most accurate floating
point type supported by an implementation.
Example 2:
type T is delta 2.0**(-20) range -1000.0 .. 1000.0;
N : constant := FLOAT'SMALL*(2.0**(20));
X : T := 2.0**(-20);   -- a model number of T
...
Y := FLOAT(X * N);     -- exactly FLOAT'SMALL
Here again, the value of X (1.0/(2.0**20)) is a model number of T, N is
a universal operand and the exact result (FLOAT'SMALL) is a model
number of FLOAT, so that Y must have this exact result as its value.
In general, unbounded range will be required at RUNTIME to assure
this result.
I do not believe that it was ever contemplated or intended by either
the LDT or DRs that universal arithmetic would/should be required at
runtime. If true, then further work is required to specify just what
is required of an implementation in such examples as the above.
************************************************************************
!section 04.05.05 (10) Ron Brender 831118 8300207
!version 1983
!topic Can a real literal be an operand of a fixed point multiply?
!reference AI00020, 8300128
Regarding Hilfinger's analysis of 31 Oct 83, the following is of
interest. In short, it appears the approach suggested can work when
the target type is itself a fixed point type, but it appears to break
down when the target type is a floating point type. The argument is
presented in the following:

From: BRETT 8-NOV-1983 08:48
To: BRENDER,MITCHELL,STOCKS,GROVE
Subj: Sigh
Paul Hilfinger's response is correct as far as it goes...
(1) Using the technique he gives is going to require 2N-bit integer arithmetic
to implement N-bit fixed point multiplication that yields another fixed point
number. This is true for other reasons as well, so isn't too much of a worry
(although annoying).
(2) His midnight high-school math did not address the issue of fixed point
division yielding a fixed point result (of the form N/X, the other way round of
course can be replaced by X/N <=> X*(1/N)).
In this case, and assuming D1/D2 = 1,...
0 <= P/X - N/X < 1, where P is an approximation for N
=>
0 <= P - N < X
=>
N <= P < X + N
Fortunately, the largest N for which N/X will not overflow is only X*X, which
still only requires 2*F'mantissa bits, so his technique is still adequate.
(3) His midnight high-school math did not address the issue of conversions to
FLOATING POINT types.
Consider the equality A = B * (A/B)
Let
F be the floating point type used
N = A/B
P = an approximation for N
E = P - N
X = 1.0 - (greatest number for F that is less than 1.0)
Y = (least number for F that is greater than 1.0) - 1.0
(notice that this means X = Y/2 on most machines)
Then
A * (1.0 - X/2) < B * P < A * (1.0 + Y/2)
to guarantee that after rounding the answer is A
=>
A * (1.0 - X/2) < B * (N+E) < A * (1.0 + X)
A * (1.0 - X/2) < A + BE < A * (1.0 + X)
- A * X/2 < BE < A * X
- A/B * X/2 < E < A/B * X
Now, when 1/2 < A/B < 2/3, the range covered by
A/B - A/B*X/2 .. A/B + A/B*X
is less than X in width, and thus MAY HAVE NO MODEL NUMBERS in it.
Furthermore if N is done via a divide and multiply, more precision/range than is
provided by F is going to be required, which will be difficult if F is the most
precise/greatest range type available in the implementation.
For instance, on the VAX-11 architecture using H-precision arithmetic, there
is no H-precision number adequate to express 7.0/12.0, sigh.
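Brett's closing remark generalizes beyond VAX H-floating: 7/12 has no finite binary expansion, so no binary floating-point format of any precision holds it exactly. A quick check with IEEE doubles (Python, for illustration only):

```python
from fractions import Fraction

q = Fraction(7, 12)
# 12 has the prime factor 3, so 7/12 has no finite binary expansion and
# no binary floating-point format of any precision represents it exactly:
assert all((q * 2**m).denominator != 1 for m in range(128))

# The IEEE double nearest 7/12 is therefore off by a small nonzero
# amount (at most half an ulp, i.e. less than 2**-53 in [0.5, 1.0)):
err = abs(Fraction(7.0 / 12.0) - q)
assert 0 < err < Fraction(1, 2**53)
print(err)
```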

(As a side note, Goodenough has also pointed out to me that the
approach can't be used in the case of certain attributes that yield
"runtime universal" values... I leave it to him to present the
details.)
************************************************************************
!section 04.05.05 (10) J. Goodenough 831118 8300208
!version 1983
!topic real literals for fixed point multiply and divide
!reference AI00020 8300128
My feeling is that we should stick to the wording of the RM here. Paul
Hilfinger's analysis assumes that all universal real values are static, and
this is not the case. There are several ways to get arbitrary nonstatic
universal real values, e.g., 2.0**N/3.0**M or 1.0*A'LENGTH or 1.0*T'POS(M).
Since we can't interpret the RM to allow just static universal real values,
the runtime consequences of using nonstatic universal real operands for fixed
point multiplication and division are just too unpleasant to contemplate.
Even if there were not nonstatic universal real values, I would still argue
now that static universal real operands should not be allowed. Although the
technique Paul sketches is probably feasible (I haven't really analyzed it
closely), it is a technique that an implementation must, in general, only
support when it allows non-powers of 2 for 'SMALL. If an implementation has
chosen to restrict representation clauses for 'SMALL, it should not have to do
the extra work required by Paul's technique.
The current wording of the RM certainly disallows real literals as operands in
fixed point multiplication or division because there are always two fixed
point types in scope (DURATION and the anonymous fixed point type required by
3.5.9(7)). Therefore, there is never a unique fixed point type that can serve
as the target of an implicit conversion, and so C(1.1*F) is always ambiguous,
and hence, illegal. I think we should stick with this reading of the RM.
************************************************************************
!section 04.05.05 (10) P. N. Hilfinger 831119 8300217
!version 1983
!topic real literals for fixed point multiply and divide
!reference 8300207, 8300208, 8300128, AI00020
I had only intended to recapitulate what I thought was the original intent
of the LDT (or at least of Brian Wichmann) on this subject. If this was not
the intent, then I concur that we may as well get rid (or stay rid) of the
capability of multiplying universal_real*fixed_type (I would just as soon get
rid of built-in fixed point types altogether anyway.) In case Brian should
come up with a strong argument for wanting to provide the capability, here
are a few comments.
1) ``Paul Hilfinger's analysis assumes that all universal real values are
static, and this is not the case. There are several ways to get arbitrary
nonstatic universal real values, e.g., 2.0**N/3.0**M or 1.0*A'LENGTH or
1.0*T'POS(M). Since we can't interpret the RM to allow just static
universal real values, the runtime consequences of using nonstatic
universal real operands for fixed point multiplication and division are just
too unpleasant to contemplate.'' [Goodenough]
Comment: True. In an expression such as
F((2.7**N) * X)   -- N an INTEGER, X of some fixed type,
                  -- F a fixed point type.
we would not know in advance what the proper delta is to ascribe to the left
operand. The only thing that keeps us from having a simple implementation,
however, is 4.5(7), which says that we are not allowed to raise
NUMERIC_ERROR if the mathematical result of an operation is in a safe
interval (i.e., 4.5(7) disallows hidden intermediate computations that could
overflow.) Without this requirement, the computation above can be performed
as follows for a delta that is a power of 2 and a machine that represents a
fixed-point number, X, as a simple integer REP(X):
if abs REP(X) > MAX_FLOAT_MANTISSA then raise NUMERIC_ERROR;
else
   TEMP := (2.7**N) * BIG_FLOAT(X);
   if abs TEMP > MAX_ACCURATE_FLOAT_FOR_F then
      raise NUMERIC_ERROR;
   else RESULT := F(TEMP);
   end if;
end if;
Here, BIG_FLOAT is the highest precision floating type on the machine
(mentioned in 4.10(4)); MAX_FLOAT_MANTISSA is the largest integer that can
be exactly represented as a BIG_FLOAT; and MAX_ACCURATE_FLOAT_FOR_F is the
maximum floating point number that can be converted accurately to an F. It
may seem strange to be using floating point for this computation, but note
that runtime floating point (or better) is required in any case for
computations of nonstatic universal_real quantities (see 4.10(4)). I do
not believe, in other words, that these runtime consequences are ``too
unpleasant to contemplate.''
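The guard-test scheme above can be sketched in executable form. In the Python model below, floats stand in for BIG_FLOAT, and the specific SMALL and bound values are illustrative assumptions, not derived limits:

```python
# REP(X) is the scaled-integer representation of fixed-point X with
# SMALL = 2.0**(-3); Python's float plays the role of BIG_FLOAT.  The
# bound constants mirror the pseudocode's names, but their values here
# are illustrative assumptions, not derived limits.
SMALL = 2.0 ** -3
MAX_FLOAT_MANTISSA = 2.0 ** 53            # largest integer exact in a double
MAX_ACCURATE_FLOAT_FOR_F = 2.0 ** 31 * SMALL

class NumericError(Exception):
    """Stands in for Ada's NUMERIC_ERROR."""

def times_universal(rep_x: int, g: float) -> float:
    """Compute F(g * X), raising NumericError as in the sketch above."""
    if abs(rep_x) > MAX_FLOAT_MANTISSA:
        raise NumericError                # X not exactly convertible
    temp = g * (rep_x * SMALL)            # exact conversion, then multiply
    if abs(temp) > MAX_ACCURATE_FLOAT_FOR_F:
        raise NumericError                # outside F's accurate range
    return temp

print(times_universal(8, 2.7 ** 3))       # X = 1.0, so the result is 2.7**3
```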
2) ``It is a technique that an implementation must, in general, only
support when it allows non-powers of 2 for 'SMALL. If an implementation has
chosen to restrict representation clauses for 'SMALL, it should not have to do
the extra work required by Paul's technique.'' [Goodenough]
Comment: This is true if you disallow universal*fixed computations. When
they are allowed, Brender's original objection seems to apply regardless of
the legal values of 'SMALL. Namely, the result of a fixed point
multiplication has infinite accuracy, and when the mathematically exact
answer is a model number of the target type, it must be produced exactly.
By the way, support for 'SMALLs other than a power of 2 causes some
interesting headaches. For example, again because of 4.5(7), a computation
such as f(p*q) or f(p/q) (f a fixed type, p and q fixed variables) must not
overflow if the mathematical result is in a safe interval of f. However,
for general 'SMALLs, these computations will actually be translated,
respectively, as something like (REP(p)*REP(q)*K1)/K2 and
REP(p)*K1/(K2*REP(q)) (or possibly REP(p)*K1/K2/REP(q)). What's interesting
here is that if K1 is allowed not to be a power of 2, then the first
computation will involve multiplication of a double-length result by a
single-length result (rather than the usual single times single yielding
double). Furthermore, if K2 is allowed not to be a power of two, then the
second computation will involve either division of a double-length quantity
by a double-length quantity, or division of a double-length quantity by a
single-length quantity yielding a double-length result (rather than the
usual double by single yielding single).
One wants to conclude that support for 'SMALLs other than powers of two will
be rare. However, this puts a slight burden on the implementor of DURATION,
since machines that yield clock or timer values in units of 10E-6 seconds or
1/60 second sort of cry out for weird 'SMALL values.
3) ``His midnight high-school math did not address the issue of conversions to
FLOATING POINT types.'' [Brett]
Comment: True, and this is a problem. As for point (1) above, what prevents
an easy solution are the stringent requirements on NUMERIC_ERROR.
Specifically, in a computation such as
F(G * X)
where G is a universal_real quantity and X a fixed point variable, there are
constants MAX1 and MAX2 such that we can compute
if abs X > MAX1 then raise NUMERIC_ERROR;
else
   TEMP := BIG_FLOAT(G) * BIG_FLOAT(X);
   if abs TEMP > MAX2 then raise NUMERIC_ERROR;
   else RESULT := F(TEMP);
   end if;
end if;
(The constants MAX1 and MAX2 can be improved if conversion to F rounds.)
SUMMARY
In short, it is very difficult to generate code that computes the correct
answers for the cases above if we have to produce answers in all cases
required by the LRM. Should it be deemed desirable to allow multiplications
of universal_real by fixed quantities, it seems that a slight relaxation of
4.5(7), together with the analyses presented before, would suffice (the
analyses are not particularly burdensome on a compiler implementation that
has rational arithmetic).
On the other hand, it is merely a sense of intellectual fair play that
prompts me to defend a feature such as fixed point, which might just as well
be deep-sixed and replaced with a specialized generic package for all I care.
************************************************************************
!section 04.05.05 (10) R P Wehrum, Siemens A.G., Muenchen 830602 8300248
!version 1983
!topic Ambiguous Expressions Involving Universal Real Values
Let
... V1 : SOME_FIXED_POINT_TYPE := ...;
V1 := SOME_FIXED_POINT_TYPE(V1 * 3.14);
...
The rhs of the assignment should be legal (at least the programmer will
expect that). However, according to the RM it is not. The type of the
literal is universal_real. Thus an implicit conversion of the literal to
some fixed_point type is needed (or another predefined operator for "*");
but the context does not suffice to determine the target type of the
conversion; the expression is ambiguous; some semantic rule is missing.
(Cf. Section 4.5.5(10), 4.6(15).)
Is this an oversight of the language designers?
************************************************************************
!section 04.05.05 (10) Ron Brender 850808 8300603
!version 1983
!topic Universal real operands with fixed point * and /
!reference AI00020
AI00020/5, as approved by the Ada Board/WG9 in February 1985, is fine
as far as it goes. However, it has been pointed out to me that the
same analysis applies when an operand of a fixed point multiplication
or division operator is a named number of type universal real; thus, a
universal real named number should be disallowed as well.
Moreover, a moment's reflection makes it clear that the same analysis
applies to ANY universal expression (as defined in 4.10) of type
universal real. This includes certain named numbers, certain
attributes, expressions such as 1.0+2.0, -(1.0+0.5), and so on, as well
as just real literals.
AI00020 should be revised to better reflect its true scope and
implications.
*****************************************************************************
!section 04.05.05 (10) Terry Froggatt 861210 8300889
!version 1983
!topic Fixed Multiplication & Division with Real Literals
In Ada, a floating point number can be multiplied or divided by a real
literal constant (or any universal real named number) whereas a fixed-point
number cannot be: the programmer has to give the literal a type.
This is AI20. In fact this restriction is unnecessary.
To perform A := A_TYPE(C*B) or A := A_TYPE(B*C) or A := A_TYPE(B/C), where
C is a constant, we simply multiply or divide the scaling factor associated
with the conversion of A to B, by the value of C, in the compiler; then
generate exactly the same code that we would have used for A := A_TYPE(B)
but using the revised scaling factor (which can now be negative).
(But note that A := A_TYPE(C/B) is a different problem).
This is implemented using a multiplication by one constant then a
division by another constant: the ratio of the constants being a
continued fraction approximation to the scaling factor. Of all the
operations on fixed point numbers which involve the use of scaling
factors, this one (fixedtofixed conversion) is the only one which
can be implemented easily.
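The conversion scheme Froggatt describes can be sketched as follows (Python rather than Ada; the continued-fraction convergent is obtained here via Fraction.limit_denominator, and the example values are arbitrary):

```python
from fractions import Fraction

# A value B of a fixed type with small SB, multiplied by a universal
# constant C and converted to a type with small SA, needs its integer
# representation scaled by (SB * C) / SA.  Approximating that ratio by
# a fraction K1/K2 turns the whole operation into one integer multiply
# and one rounded integer divide.
def convert_scaled(rep_b: int, sb: Fraction, c: Fraction, sa: Fraction,
                   max_den: int = 2**15) -> int:
    ratio = (sb * c / sa).limit_denominator(max_den)
    k1, k2 = ratio.numerator, ratio.denominator
    # rounded division: REP(A) = round(REP(B) * K1 / K2)
    return (2 * rep_b * k1 + k2) // (2 * k2)

# Example: B = 10.0 with small 1/8, C = 355/113 (a rational stand-in
# for pi), target small 1/64.
rep_b = 80                              # 10.0 / (1/8)
rep_a = convert_scaled(rep_b, Fraction(1, 8), Fraction(355, 113),
                       Fraction(1, 64))
print(rep_a / 64)                       # approximately 10 * pi
```

The rounded division `(2*r*k1 + k2) // (2*k2)` rounds to nearest for non-negative operands; a production version would also handle signs.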
So it is strange that the reason given for the lack of the literal
operations is the uncertainty over the accuracy to which the constant
has to be held at runtime, (see Ada Letters IV2.68 & VI.677).
There are considerably worse problems over the representation of
scale factors for fixed-to-integer, fixed-to/from-float, and
universal-fixed-to-any-numeric-type.
Note that it is already possible in Ada to multiply or divide fixed
values by named numbers, without having to specify any reduction in
the named number's accuracy:
PI: constant := 3.14..........................................;
type PI_TYPE is delta PI range 0..2*PI;
for PI_TYPE'SMALL use PI;
TYPED_PI: constant PI_TYPE := PI;  -- Still as exact as the named number.
....
FIXED_VALUE := FIXED_TYPE ( FIXED_VALUE * TYPED_PI );
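The trick can be modeled with exact rationals: because 'SMALL is the named number itself, TYPED_PI is held as the integer 1 and loses nothing, and the later multiplication merely folds PI into the conversion's scale factor. A sketch (Python, with a 20-digit rational stand-in for the named number PI):

```python
from fractions import Fraction

# Values of a fixed type are integer multiples of the type's SMALL.
# With SMALL chosen as the named number PI itself, TYPED_PI is
# represented exactly by the integer 1, so giving the named number a
# type loses no accuracy.
PI = Fraction(314159265358979323846, 10**20)   # a many-digit named number

PI_SMALL = PI                 # for PI_TYPE'SMALL use PI;
rep_typed_pi = 1              # TYPED_PI = 1 * PI_SMALL, exactly PI
assert rep_typed_pi * PI_SMALL == PI

# FIXED_TYPE(FIXED_VALUE * TYPED_PI): the product's scale is the
# product of the operand SMALLs, so multiplying by TYPED_PI multiplies
# the conversion's scale factor by PI exactly.
FIXED_SMALL = Fraction(1, 2**8)
rep_fixed = 512                                # FIXED_VALUE = 2.0
product = (rep_fixed * rep_typed_pi) * (FIXED_SMALL * PI_SMALL)
assert product == 2 * PI                       # exact, before conversion
```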
*****************************************************************************
!section 04.05.05 (10) M Woodger 881113 8301132
!version 1983
!topic Important restriction
!reference AI00020/07
Add a Note that "A real literal is not allowed as an operand of a fixed point
multiplication or division." Also in paragraph 16.
*****************************************************************************
!section 04.05.05 (11) Ron Brender 840314 8300344
!version 1983
!topic Explicit conversion of universalfixed
RM 4.5.5(11) requires that the result of either fixedpoint
multiplication or fixedpoint division "must always be explicitly
converted to some numeric type". It is not clear whether this means
that such fixedpoint operations must occur syntactically as the
immediate operand of a (numeric) type conversion, as in
DUR : DURATION;
...
DUR := DURATION(DUR*DUR);
or whether there are ANY other allowed syntactic variations. In
particular, what about
DUR := DURATION((DUR*DUR));    -- legal? [extra parens]
DUR := DURATION(((DUR/DUR)));  -- legal? [extra parens]
I am inclined to believe that this latter example is NOT legal, but
would just like to be sure.
Please confirm.
************************************************************************
!section 04.05.05 (11) Terry Froggatt 861209 8300888
!version 1983
!topic Counter-Productive Accuracy of Universal Fixed
There are areas where the Ada language insists on too much accuracy,
and so violates any rationale based on accuracy/time/store tradeoffs.
The best-known case is that of floating-point exponentiation, but far
worse problems arise when implementing fixed-point arithmetic fully.
These problems have not come to light sooner because the implementation
of arbitrary "small" representation clauses has been made optional in the
language. So as far as I am aware, no compiler yet fully implements them
because of uncertainty as to whether the accuracy requirements could be met.
However, this is putting things the wrong way round. It is important to
implement arbitrary smalls so that the classical fixed point range-related
scalings can be used. Our customers want us to do this. The only problem is
whether to honour Ada's accuracy requirements or do something more sensible.
In my paper "Fixed-Point Conversion, Multiplication, & Division, in Ada(R)",
to appear shortly in Ada Letters, I show that the operations
required for classical fixed-point working can be implemented to the
accuracy required by the Ada language, using finiteprecision arithmetic.
For example, I show that a fixedpoint multiplication or division, of two
operands of the same length, which is converted to another fixed point
type of the same length, can be achieved, but by triple-length arithmetic.
If the result type is integer or float, longer arithmetic may be needed.
Thus the current accuracy requirements are counterproductive:
on a typical machine having 16 bit arithmetic with 32 bit products
and 32 bit arithmetic with 64 bit products, only 16 bit fixed point
types can be fully implemented by hardware. With relaxed accuracy
requirements, 32 bit fixed point types could be implemented: the
overall effect is to provide greater accuracy in less time.
So, as a matter of some urgency, the accuracy required of scaled
fixed-point multiplication and division should be relaxed,
so that they can be "handled simply by the underlying hardware",
using nothing more than the double-length arithmetic already needed.
This relaxation can be made without upsetting any existing Ada users.
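The triple-length point is easy to make concrete: with 16-bit operand representations and a general (non-power-of-two) scale numerator K1, the intermediate product already exceeds double-length before the final division brings it back into range. A rough illustration (Python, arbitrary values):

```python
# With 16-bit operands and a general scale ratio K1/K2, the
# intermediate REP(p)*REP(q)*K1 needs roughly three 16-bit words
# before the final division by K2.  The values here are arbitrary.
rep_p, rep_q = 30000, 30000        # near-maximal 16-bit representations
k1 = 40000                         # a 16-bit scaling numerator
intermediate = rep_p * rep_q * k1
print(intermediate.bit_length())   # well beyond double-length (32 bits)
assert intermediate.bit_length() > 32
```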
*****************************************************************************
!section 04.05.05 (17) Norman Cohen 920113 8301442
!version 1983
!topic Inappropriate cross-reference
The cross-reference "actual parameter 6.4.1" should be replaced with
"generic actual parameter 12.3". There is no mention of subprogram or
entry actual parameters in 4.5.5, but 4.5.5(9) does mention generic
actual parameters.
*****************************************************************************
!section 04.05.06 (03) Amiram Yehudai 870331 8300915
!version 1983
!topic Logical operators producing out of range results
A colleague of mine, Yossi Veler of AITECH has come up with the following
program in Ada, which seems to create a serious problem.
procedure BOOLSUB is
   subtype BOOL is BOOLEAN range TRUE..TRUE;
   type ARR is array(1..10) of BOOL;
   A : ARR := (1..10 => TRUE);  -- this seems like the only legal value
begin
   A := not A;
   -- Here A(1)=A(2)=...=A(10)= FALSE !!!! No exception occurs etc.
   A := (1..10 => FALSE);
   -- This does cause an exception
end BOOLSUB;
The program seems legal: we inspected the LRM and also the implementers'
guide, and we ran it on both the DDC and VERDIX compilers. It seems that a
combination of innocent features in Ada produces a result that seems to
contradict the basic philosophy of the language, namely that an object
possesses a value which is not in the appropriate type.
It seems that several features interact to produce this undesirable
situation:
1) Boolean is an enumerated type, and one can take a subtype of it.
2) Boolean array operations, which are the only ones operating on all
elements of an array.
3) At run time, array assignments are not checked element by element
(I believe in all but this case this check is indeed not required).
Has anyone noticed this before? Is there a way out of it?

01Apr87 20:33 ihnp4!homxb!houxm!hjuxa!pets Re: language problem
From: ihnp4!homxb!houxm!hjuxa!petsd!joe@ucbvax.Berkeley.EDU (Joe Orost)
Our compiler (C3Ada R0001.02/Beta) correctly raises CONSTRAINT_ERROR on the
statement "A := not A;".

02Apr87 23:14 David S Rosenblum Re: language problem
From: David S Rosenblum
I think that the LRM is pretty clear on this point.
4.5.6 (2) says that the "not" operator takes a value of any array of boolean
components, and returns a value of the same array type. The important words
here are SAME TYPE. 3.6 (5) says that an array type is characterized by a
set of index types and a component subtype. Thus, in the example the value
of "not A" must be a value of the type of A, which is ARR. Since the
resulting component values do not satisfy the component subtype constraint,
the result is NOT a value of type ARR. Ergo, CONSTRAINT_ERROR must be raised
by the evaluation of "not A". Note that the fact that the expression appears
in an array assignment is not germane; the evaluation of such a "not A" in
any context should raise CONSTRAINT_ERROR.
 David.
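Rosenblum's reading can be modeled directly: "not" operates on the whole array type, but each component of the result must still satisfy the component subtype's constraint. A Python model of that reading (the check is folded into evaluation, as he argues; this is an illustration, not Ada semantics):

```python
# Model of: subtype BOOL is BOOLEAN range TRUE..TRUE;
#           type ARR is array(1..10) of BOOL;
# "not" is defined on the array TYPE, but every component of the
# result must satisfy the component subtype's constraint.
class ConstraintError(Exception):
    """Stands in for Ada's CONSTRAINT_ERROR."""

BOOL_RANGE = (True, True)      # the single-value subtype TRUE..TRUE

def not_arr(a):
    result = [not x for x in a]       # a value of the array type
    lo, hi = BOOL_RANGE
    for x in result:                  # subtype check on each component
        if not (lo <= x <= hi):       # in Python, False < True
            raise ConstraintError
    return result

a = [True] * 10
try:
    not_arr(a)
except ConstraintError:
    print("CONSTRAINT_ERROR raised")
```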

03Apr87 02:19 vrdxhq!deller Re: language problem
From: vrdxhq!deller@seismo.css.gov (Steven Deller)
...
As for the original problem mentioned of a constraint error being missed in
an array operation, the Verdix 5.41 compiler does indeed fail to raise a
constraint error as required by the RM (the code produces the correct
"enumerical" :) result, and only fails to raise the constraint error).
Note that raising a constraint error is not a sufficient test of correct
execution of the statements presented. Robert Dewar has given an excellent
presentation at a SIGAda conference illustrating the problems with this
apparently simple construct. If I remember correctly (any errors are mine,
not his) the primary problem with the assignment is that it must be atomic,
either performed for EVERY array element, or not performed for ANY array
element. The code generated must ensure, at the point it detects any result
with a constraint error, that all elements are either assigned to by the
assignment, or that the elements assigned to so far are "backed out".
Efficiency concerns preclude the easy out of using an intermediate temporary
array and copying it to the result array if there are no constraint errors.
I believe that this code generation complication is part of the reason for
the VADS compiler missed the constraint error. The example is clearly a
pathological test. A type derived from a boolean but constrained to a single
value, yet not a constant, does not appear to be very useful. VADS is built
to operate efficiently on types normally encountered in practice, and works
well in those cases. Clearly, however, there is an error in VADS 5.41 for
this case, which requires a "more proper" solution within the compiler to
provide both the efficiency desired and the correctness required by the RM.
As a test of the boundary conditions for our code generation with array
operations, the original code serves very well. We would like to thank
Amiram for providing this test. This error will be fixed in the next release
of VADS.
...

03Apr87 21:34 dday Re: language problem
From: dday@mimsy.umd.edu (Dennis Doubleday)
In article <12291368304.48.ROSENBLUM@Sierra.Stanford.EDU> ROSENBLUM@SIERRA.STANFORD.EDU (David S. Rosenblum) writes:
>I think that the LRM is pretty clear on this point.
>
You are right, but it must have been missed by almost everybody. I tried the
same program out on DEC Ada V1.3; the results were the same (the operation
completed successfully without raising CONSTRAINT_ERROR). But if it is
impossible to efficiently handle type and constraint checking for every
possible pathological boundary condition then I have to come down strongly in
favor of efficient implementation of constructs that programmers are actually
going to use.

07Apr87 07:49 sdcrdcf!burdvax!eric Re: language problem
From: sdcrdcf!burdvax!eric@hplabs.hp.com (Eric Marshall)
in article <6109@mimsy.UUCP>, dday@mimsy.UUCP (Dennis Doubleday) says:
> .... But if it is impossible to efficiently handle
> type and constraint checking for every possible pathological
> boundary condition ...
At least in this example, the compiler can determine if the situation could
occur at execution time, and take appropriate actions by emitting code for
the efficient common situation, or less efficient code for the pathological
case.
First, the only operators which can partake in this silliness are NOT and
XOR, and only if their operand(s) is a subtype of a boolean type constrained
to a single value (Yuch). If such a type may exist at execution time, the
compiler could emit an if statement like the following.
if funny_boolean_type'first /= funny_boolean_type'last then
   emit efficient code for NOT and XOR
else
   emit funny code for NOT and XOR
end if;
The funny code for NOT is easy, just raise CONSTRAINT_ERROR (what could be
more efficient than that :). For XOR, a component by component test needs to
be performed for the detection of CONSTRAINT_ERROR.
So it turns out, for efficiency at least, that the XOR operation is the worst
case, NOT is the best case, and for the common case (a boolean type with both
values), an additional if statement is needed. Remember, all of this is only
for a boolean type which could turn out to be constrained to only a single
value.
Eric Marshall
Unisys Corporation
P.O. Box 517
Paoli, PA. 19301
(215) 6487223
*****************************************************************************
!section 04.05.06 (05,6) R. Jones 860908 8300809
!version 1983
!topic Allow nonintegral powers for exponentiation
I have a pocket calculator (Texas Instruments TI57 programmable
bought in 1979 at UKL 24.00) that is quite capable of exponentiation
by negative and fractional exponents and I feel that in consequence
it is reasonable to expect any computer manufacturer to provide
similar capability in their hardware.
Accordingly, I suggest that the definition in paragraph 5 be
amended to read as follows:
Operator  Operation       Left oprnd type  Right oprnd type  Result type
   **     exponentiation  any num type     any num type      some num type
It is possible that you may consider that the Right operand type
should not include floating point types but this I leave to your
discretion, though I would mention that my calculator has this
facility.
It would also be necessary to reflect this alteration in paragraph
6.
*****************************************************************************
!section 04.05.06 (06) Software Leverage, Inc. 831029 8300192
!version 1983
!topic Exponentiation with floating point operand
The RM demands an inefficient implementation of exponentiation for
floating point operands.
The problem is that the RM defines the model interval of the result in
terms of the model intervals of a series of multiplications from left
to right. Unfortunately this sometimes excludes model numbers
obtained for calculations used in usual algorithms, e.g. the one that
uses the identity
x**(2*n+e) = (x*x)**n * x**e
where e = 0 or 1, so x**e is either 1.0 or x.
The simplest case of this is calculating x**4 as sqr(sqr(x)) where
sqr(x) is shorthand for x*x. Even assuming we have a machine that
rounds, there are cases in which this method will fail. Adding a
guard bit doesn't help; the only result we have been able to obtain
is that with a number of guard bits equal to the number of bits in the
exponent, one can construct an algorithm using the above which works.
There is no mathematical result we can find to improve this.
It was presumably not the intent to require that, for example, x**128
take 127 multiplications to calculate, but this follows from 4.5.6(6).
Deviation from the manual is difficult to test for here, but
implementers should be able to use power trees a la Knuth without fear. In
this case, we believe the language in the RM should be changed; there
remains the question of what to do until the next Standard. Since
conformance is very hard to enforce anyway, it presumably would
suffice to publish some form of statement to the effect that deviation in
this area is allowed if the implementation instead conforms to a
changed version of 4.5.6(6), perhaps along the following lines:
"Exponentiation with a positive exponent delivers the left operand for
an exponent of one, and otherwise (for exponent n) is equivalent to
multiplication of the results of exponentiation of the same left
operand to powers k and n-k, for some positive k less than n. For an
operand of a floating point type..." (only the first sentence
changes).
One might wish to also change 4.5.7(9), for the sake of clarity, to be
something like this:
For the result of exponentiation, the model interval associated with
the expression x**n is determined as follows:
1. If n = 1, the model interval is that of the expression x.
2. If n > 1, the model interval is the union of the model
intervals for expressions (x**k)*(x**(n-k)) for all k in the range
1 <= k < n.
3. If n = 0, the model interval consists of the single number
1.0.
4. If n < 0, the model interval is that for the expression
1.0/(x**(-n)).
This might need changing for the sake of rigor (evaluation of x might
cause side effects); but the idea is there.
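The multiplication-count motivation (x**128 should not need 127 multiplications) is the usual repeated-squaring argument. A sketch of what the proposed wording would permit (Python, using the simple binary method rather than a general Knuth power tree):

```python
def power(x: float, n: int):
    """Return (x**n, multiplication count) by binary (repeated-squaring)
    exponentiation; n must be a positive integer."""
    assert n >= 1
    result, mults = None, 0
    base = x
    while True:
        if n & 1:
            if result is None:
                result = base           # first factor costs no multiply
            else:
                result *= base
                mults += 1
        n >>= 1
        if n == 0:
            return result, mults
        base *= base                    # one squaring per remaining bit
        mults += 1

val, mults = power(2.0, 128)
print(mults)   # 7 multiplications instead of 127
assert val == 2.0 ** 128
```

Note that 4.5.6(6) as currently worded would still forbid this, since each squaring may round differently from the left-to-right chain; that is precisely the relaxation the comment requests.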
Counterexamples where sqr(sqr(x)) doesn't give the answer required by
the RM for x**4 are given below; the list is complete for mantissas
up to 17 long. They are expressed in octal notation with "b" as the
mantissa length (some values of b don't come up in Ada, but the
examples suggest that counterexamples can be found for arbitrarily long
mantissas).
The situation would presumably be worse if machine rounding wasn't
assumed.
b = 14:
0.43642
0.44650
0.64452
0.64032
b = 15:
0.45360
0.62234
0.63025
0.63116
b = 16:
0.442350
0.445264
0.453420
0.454100
0.611304
0.641400
0.642074
b = 17:
0.427562
0.443354
0.447526
0.447626
0.451304
0.455276
0.455472
0.455626
0.457266
0.624100
0.624720
0.631352
0.643616
0.647376
0.652222
These examples also show that a small number of guard bits doesn't
suffice. The number 0.62234 fails for b = 14 when 1 guard bit is used
(so the machine mantissa length is 15).
************************************************************************
!section 04.05.06 (06) Jean D. Ichbiah 840301 8300312
!version 1983
!topic Exponentiation by real exponent
There is nothing wrong with the definition given by the RM.
Nothing prevents one from having a function EXP that uses repeated
squaring.
(Most exponentiations are by small integers, so why bother with
pathologies that would make the language more complex. The problem was
known; the design choice was deliberate.)
************************************************************************
!section 04.05.06 (06) C Bendix_Nielsen, AdaFD, DDC 861009 8300828
!version 1983
!topic Exponentiation with a negative exponent.
!reference AI137
4.5.6(6) says: "For an operand of a floating point type, the exponent
can be negative, in which case the value is the reciprocal of the
value with the positive exponent."
4.5.7(9) mentions a final division but what is the model interval of
the dividend?
*****************************************************************************
!section 04.05.07 J. Goodenough 851206 8300691
!version 1983
!topic Accuracy of the assignment operation
!references see also 3.5.8(16) and 3.5.10(15)
Consider the declarations
type T is digits 5;
subtype ST is T digits 3 range 12345.0 .. 15099.0;
X : ST := 12345.0;
It is the intention of the Standard to allow the value of X to be approximated
as a model number for ST rather than requiring that the value being assigned
have at least 5 digits of accuracy. This intention is not actually achieved,
because the necessary approximation rule is not given in 4.5.7 for assignment
statements.
It was probably considered unnecessary to state such a rule in 4.5.7 because
3.5.8(16) says:
The operations of a subtype are the corresponding operations of
the type except for the following: assignment, membership
tests, ...: the effects of these operations are redefined in
terms of the subtype.
But consider how the effect of assignment is defined [5.2(3)]:
For the execution of an assignment statement, the variable name
and the expression are first evaluated ... A check is then
made that the value of the expression belongs to the subtype of
the variable ... Finally, the value of the expression becomes
the new value of the variable.
In the case of the assignment to X, evaluation of the expression means
implicitly converting 12345.0 to T's base type and checking that the
converted value belongs to T's range. Since T's base type has at least 5
digits of accuracy, 12345.0 is a model number, and certainly belongs to T's
range. This value is then assigned to X.
"Redefining the effect" of assignment in terms of subtype ST doesn't help
with respect to the accuracy of assigned values unless one understands
"becomes the new value of the variable" to mean "is allowed to become a model
number of the variable's subtype." If so, this would seem to allow the
approximation to a model number of ST to occur after the range check has been
performed, and this is too late. If 12345.0 is approximated as 12300.0 and
becomes the value of X, it will be the case that
X in ST
is FALSE. (The membership test is performed using the predefined operators of
the type [3.5(3)]. Since the predefined operators use at least 5 digits of
accuracy, 12300.0 in ST must evaluate to FALSE. Moreover, it must be the
case that ST'FIRST = 12345.0 since nothing allows the bounds of ST to be
approximated to 3 digits.) In short, if assignment's range check is not
performed on the approximated value, an out of range value can be assigned to
X, which obviously can't be allowed.
To fix all this, 4.5.7 should say something about how when performing an
assignment, the model interval for the assigned value is determined by the
model numbers defined for subtype of the target variable. 5.2 should perhaps
be rephrased to say that the value to be assigned to the variable is range
checked (instead of referring to the value of the expression). In this way,
the revised 5.2 phrasing would ensure that range checks are performed after
any approximation of the expression's value has occurred.
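The failure mode can be imitated numerically (a sketch; decimal significant-digit rounding stands in for approximation to a model number of ST, and the helper name is mine):

```python
from math import floor, log10

def round_sig(value, digits):
    # keep only `digits` significant decimal digits -- a stand-in for
    # approximating the assigned value to a model number of subtype ST
    scale = 10 ** (floor(log10(abs(value))) - digits + 1)
    return round(value / scale) * scale

lo, hi = 12345.0, 15099.0        # the range of subtype ST
x = 12345.0                      # passes the range check exactly...
approx = round_sig(x, 3)         # ...but is then approximated to 12300.0
assert not (lo <= approx <= hi)  # so "X in ST" would yield FALSE
```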
*****************************************************************************
!section 04.05.07 (07) Software Leverage Inc. 830706 8300028
!version 1983
!topic NUMERIC_ERROR for real results
Section 4.5.7 gives the rules for when NUMERIC_ERROR is to be raised in real
calculations. It gives rules for determining the model interval for the
results of real calculations, and then states that "the result model interval
is undefined if the absolute value of one of the above mathematical results
exceeds the largest safe number of the result type. Whenever the result model
interval is undefined, it is highly desirable that the exception NUMERIC_ERROR
be raised if the implementation cannot produce an actual result that is in the
range of safe numbers."
Consider a floating type
type T is digits 7;
Let us say that an implementation uses a predefined type with T'MANTISSA = 24
and T'SAFE_EMAX = 127, so the safe numbers range in absolute value up to
8#0.777777770# * 2**127. Let us further say that the floating point hardware
does not detect overflow until a number exceeds 8#0.777777777# * 2**127 in
absolute value.
(1) May this implementation have T'MACHINE_OVERFLOWS = TRUE for this type if
it only raises NUMERIC_ERROR for numbers whose absolute values are greater
than 8#0.777777777# * 2**127? Or must it also raise NUMERIC_ERROR for
numbers whose absolute values are greater than 8#0.777777770# * 2**127?
Consider a fixed point type
type T is delta 0.25 range -1.0 .. 1.0;
Let us say that an implementation uses a predefined fixed point type with
T'SMALL = 2.0**(-31) and T'MANTISSA = 31. The largest safe number is 1.0 -
2.0**(-31); the largest representable number is 1.0 - 2.0**(-31); the
smallest representable number is -1.0. (This is a two's-complement machine.)
(2) Is the implementation allowed to have T'MACHINE_OVERFLOWS = TRUE for this
type if it only raises NUMERIC_ERROR when a result is less than -1.0 or
greater than 1.0 - 2.0**(-31)? Or must it also check to see if the result
is less than -(1.0 - 2.0**(-31))?
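The two gaps the questions point at can be written down directly (a Python sketch; the octal literals are those used in the comment):

```python
# (1) floating point: the largest safe number vs. the point where the
#     hypothetical hardware first detects overflow
largest_safe = 0o777777770 / 8**9 * 2.0**127
hw_overflow = 0o777777777 / 8**9 * 2.0**127
assert largest_safe < hw_overflow   # results in between are the issue

# (2) fixed point: the safe range vs. the representable range on a
#     two's-complement machine
small = 2.0 ** -31
assert -1.0 < -(1.0 - small)        # -1.0 is representable but lies below
                                    # the smallest safe number
```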
************************************************************************
!section 04.05.07 (07) Brian WICHMANN 870217 8300905
!version 1983
!topic Proposed interpretation of "overflow situation"
The proposed reading of this section dated 870115 seems most unnatural to
me (although feasible). The interpretation effectively reverts to conventional
practice, ignoring the Brown model. The problem is that it would make formal
proofs of correctness of floating-point algorithms effectively impossible, hence
undermining the use of the Brown model in the first place.
*****************************************************************************
!section 04.05.07 (07) M. Woodger 890318 8301274
!version 1983
!topic Undefined model interval and safe interval
3.5.6(4) says "error bounds on the predefined operations for safe
numbers are given by the same rules as for model numbers", and this
is quoted by 4.5.7(8) to establish two versions of every definition
in section 4.5.7, one for model numbers and the other for safe
numbers.
Applying this to 4.5.7(36) gives:
For any basic operation or predefined operator that yields a result of
a real subtype, the required bounds on the result are given by a safe
model interval defined as follows: ...
So the required bounds are given by two definitions. This can only
mean that the first definition applies for results within the range
of model numbers, and the second definition applies for results that
exceed the range of model numbers but not the range of safe numbers.
4.5.7(7) does not fit this plan of two versions of definitions. It says
"The result model interval is undefined if the absolute value of one of
the above mathematical results exceeds the largest safe number of the
result type."
Surely this should have read "largest MODEL number" instead, leaving
the version for safe numbers to follow automatically. This would then
be referred to by the second sentence, which should therefore begin:
Whenever the result SAFE interval is undefined, it is highly ... .
To make this clear, the first two sentences of paragraph (8) should be
brought back to between the above two sentences, beginning a separate
paragraph. The corrected text would read (in part):
The result model interval is undefined if the absolute value of one of
the above mathematical results exceeds the largest model number of the
result type.
The safe numbers of a real type are defined (see 3.5.6) as a superset
of the model numbers, for which error bounds follow the same rules as
for model numbers. Any definition given in this section in terms of
model intervals {is therefore} extended to safe intervals of safe
numbers.
Whenever the result safe interval is undefined, it is highly desirable
... in overflow situations (see 13.7.3). An implementation is not
allowed to raise the exception NUMERIC_ERROR when the result interval
is a safe interval.
*****************************************************************************
!section 04.05.07 (08) C Bendix Nielsen, AdaFD, DDC 860609 8300751
!version 1983
!topic A SAFE interval gives the bounds required on the result?
RM 4.5.7(3) says "For any basic operation or predefined operator
that yields a result of a real subtype, the required bounds on the
result are given by a model interval ..."
RM 4.5.7(2) says "A model interval of a subtype is any interval
whose bounds are model numbers of the subtype."
RM 4.5.7(7) says "The result model interval is undefined if the
absolute value of one of the above mathematical results exceeds
the largest safe number of the result type."
What is the result (model?) interval if the absolute value of one
of these mathematical results exceeds the largest model number
without exceeding the largest safe number?
*****************************************************************************
!section 04.05.07 (08) M Woodger 881105 8301133
!version 1983
!topic Replace "can therefore be extended" by "is therefore extended"
Not meant.
*****************************************************************************
!section 04.05.07 (10) Doug Bryan 870901 8300947
!version 1983
!topic AI174
re: AI174... the wording in the definitions of 'First and 'Last
Consider:
type F is delta 1.0 range 0.0 .. 4.0;
F'Large == 3.0
Can:
F'Last = 2.0 ??
In this case, R (4.0) is outside of the range of model numbers and Last
is within Small of Large. Thus, Last can be 2.0?
doug
*****************************************************************************
!section 04.05.07 (10) J. Goodenough 870806 8300949
!version 1983
!topic Real relational operations should be exact
It's unclear to me that the nondeterminism of relational operations
specified in 4.5.7(10) is really needed. 4.5.7 already allows for inaccuracy
in evaluating the expressions being compared. Relational operations compare
values, not model intervals, and the comparison of values is exact, not
fuzzy. Of course, the values being compared might lie anywhere in some model
interval, and so the comparison operation could yield different results
depending on what values are actually being compared, but this indeterminacy
is due to inaccuracies in evaluating the operands, not inaccuracies in making
comparisons, as is suggested by the current paragraph.
*****************************************************************************
!section 04.05.07 (12) Terry Froggatt 861201 8300880
!version 1983
!topic Unnecessary Loss of Accuracy
There are areas where the Ada language could usefully insist
on more accuracy than it does, without increasing runtime costs.
(1). Ada compilers contain an arbitrary precision rational arithmetic
package which has to be used for calculations which are "universal
real static expressions". This package should be used for any real
calculations which the compiler is allowed to perform accurately;
whether or not they are "universal" or what Ada calls "static".
(2). When literals and other arbitrary precision rational values have to be
converted to the machine representation of some (nonuniversal) real
type, for output to the program image, they should be correctly rounded
(not truncated) to the nearest machine number (not model number).
To me, these two statements are as self-evident for
accuracy as the statement that a compiler should not generate
spurious NOPs is for time and space.
I believe that the Notes in the LRM (not just the Implementor's guide)
should make it clear that these "accuracy optimisations" are permitted
by the language and are desirable in implementations.
It should be made clear that there is certainly no requirement for
the compiler to stoop to machine inaccuracy or model inaccuracy.
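What is being asked for is, in effect, the following (Python's Fraction standing in for the compiler's arbitrary-precision rational package; a sketch, not a claim about any particular compiler):

```python
from fractions import Fraction

# evaluate the whole real expression exactly, whether or not Ada
# would call it "universal" or "static" ...
exact = Fraction(1, 3) * Fraction(3, 7) + Fraction(2, 21)
assert exact == Fraction(5, 21)

# ... then perform one correctly-rounded conversion to the machine
# representation, rather than rounding at every intermediate step
assert float(exact) == 5 / 21
```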
*****************************************************************************
!section 04.06 R P Wehrum, Siemens A.G., Muenchen 830602 8300244
!version 1983
!topic Implicit conversions do not preserve staticness
Implicit Conversions of Numeric Literals in the Context of Static
Expressions.
Let us consider the subsequent example
C1 : constant NATURAL := 1;           -- (1)
-- NATURAL = predefined subtype
C2 : constant NATURAL := INTEGER (1); -- (2)
According to the section 4.6(15) the case (1) implies a conversion of the
literal appearing on the right hand side of the assignment. Thus, case
(1) is semantically equivalent to (2).
Section 4.9 shows that expressions that involve conversions are never
static. This implies that the set of static expressions that can be
deduced from rule (d) in section 4.9(6) is void.
To describe what is intended by section 4.9(6) a rule should be added
saying something like
"An implicit conversion of a static expression is static." or more
precisely
"An expression resulting from an implicit conversion of a numeric
literal, a named number, or an attribute (of type universal_integer or
universal_real) is static."
The above problem is an example where a syntactical definition overrules
(or tries to overrule) a semantical concept.
************************************************************************
!section 04.06 (03) M Woodger 881105 8301134
!version 1983
!topic After "context" insert "(see 8.7)"
Helpful comment.
*****************************************************************************
!section 04.06 (07) Terry Froggatt 861208 8300887
!version 1983
!topic CounterProductive Accuracy of Numeric Conversions
There are areas where the Ada language insists on too much accuracy,
and so violates any rationale based on accuracy/time/store tradeoffs.
The best-known case is that of floating-point exponentiation, but far
worse problems arise when implementing fixed-point arithmetic fully.
These problems have not come to light sooner because the implementation
of arbitrary "small" representation clauses has been made optional in the
language. So as far as I am aware, no compiler yet fully implements them
because of uncertainty as to whether the accuracy requirements could be met.
However, this is putting things the wrong way round. It is important to
implement arbitrary smalls so that the classical fixed-point range-related
scalings can be used. Our customers want us to do this. The only problem is
whether to honour Ada's accuracy requirements or do something more sensible.
In my paper "Fixed-Point Conversion, Multiplication, & Division, in Ada(R)",
to appear shortly in Ada Letters, I show that the operations
required for classical fixed-point working can be implemented to the
accuracy required by the Ada language, using finite-precision arithmetic.
For example, I show that conversion of a fixed-point type to an integer
type of the same length can be achieved by a multiplication by one
constant and division by another, but using one more bit than double-length.
Conversion of Universal Fixed to an integer type is particularly difficult.
(Conversions between fixed and float may also lose accuracy if a
normalisation shift occurs as a result of the scaling multiplication.)
Thus the current accuracy requirements are counterproductive:
on a typical machine having 16 bit arithmetic with 32 bit products
and 32 bit arithmetic with 64 bit products, only 16 bit fixed point
types can be fully implemented by hardware. With relaxed accuracy
requirements, 32 bit fixed point types could be implemented: the
overall effect is to provide greater accuracy in less time.
So, as a matter of some urgency, the accuracy required of scaled
conversions between fixed types and other types should be relaxed,
so that they can be "handled simply by the underlying hardware",
using nothing more than the double-length arithmetic already needed.
(The requirement to always correctly round conversions to
integer types is probably the worst culprit: without this,
multi-length divisions could probably be avoided.)
This relaxation can be made without upsetting any existing Ada users.
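The "multiplication by one constant and division by another" can be sketched in plain integer arithmetic (illustrative only: p/q is the type's small, the names are mine, and a real implementation needs the extra bit discussed above):

```python
def fixed_to_integer(n, p, q):
    # convert the fixed-point value n * (p/q) -- n counts of a small
    # equal to the rational p/q -- to the nearest integer, using only
    # integer multiply and divide; ties round away from zero, which
    # 4.6(7) permits
    num = n * p
    if num >= 0:
        return (2 * num + q) // (2 * q)
    return -((2 * -num + q) // (2 * q))
```

For example, fixed_to_integer(10, 1, 3) converts the value 10/3 and yields 3.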
*****************************************************************************
!section 04.06 (07) Lee Phillips/Naval TSC 861105 8300909
!version 1983
!topic Rounding up or down
4.6(7) says "rounding may be either up or down" for conversion to an integer
type when the operand is halfway between two integers. If rounding can be up
or down, this can cause problems in porting programs from one compiler to
another. This should be "up or down or controllable."
The portability of Ada programs may suffer if this is not tightened up
or made controllable.
*****************************************************************************
!section 04.06 (07) Daniel Stock/R.R. Software 881026 8301029
!version 1983
!topic Accuracy of type conversions from real types to integers
Section 4.6(7) of the LRM says that "for conversions involving real types,
the result is within the accuracy of the specified subtype (see 4.5.7). The
conversion of a real value to an integer type rounds to the nearest integer;
if the operand is halfway between two integers (within the accuracy of the
real subtype) rounding may be up or down."
It is not clear to me what "the accuracy of the real subtype" means when
converting from a real type to an integer type, especially when converting
from a fixed-point type whose small value is not a power of two.
Section 4.5.7 does not really help: paragraphs 1 and 2 just define terms;
paragraphs 3 to 9 are limited to cases where the result of an operation has a
real subtype; and paragraphs 10 to 11 (and the AIs thereon) are limited to
relations and membership tests.
In particular, when performing a type conversion to an integer type from a
value that is a model number of a fixed point type, must the conversion
yield the nearest integer? One might think so, by analogy to the way that
real types are treated. But consider the following example.
NASTY: constant := 0.316666...66667;  -- add as many sixes as you like
MAX_MULT: constant := 2 ** SYSTEM.MAX_MANTISSA - 1;
type FIX_TYPE is delta NASTY range -MAX_MULT*NASTY .. MAX_MULT*NASTY;
for FIX_TYPE'SMALL use NASTY;
SAMPLE: INTEGER;
function IDENT_FIX (ITEM : FIX_TYPE) return FIX_TYPE;  -- unoptimizable
--  function that always returns its argument
...
SAMPLE := INTEGER (IDENT_FIX (30 * FIX_TYPE'(FIX_TYPE'SMALL)));
--  Must this be 10?
--  It is the conversion of a model number of a fixed point type
--  to an integer, where the exact multiplication yields
--  9.50000...00001, with as many zeroes as there were sixes in
--  the declaration of NASTY.
If SAMPLE must be assigned the value ten, then it would appear that
arbitrary precision is needed at run time to perform such a type conversion,
which is absurd. (There are other equally absurd alternatives, such as
rejecting vast numbers of length clauses to avoid the problem or performing
an analysis of the continued fractions of any small values to attempt to
anticipate the problem.) To me, this means that the ARG should specify that
type conversion of model numbers of fixed point types need not yield the
nearest integer. A reasonable interpretation might be that for a conversion
of a value within a model interval I (possibly a single model number) to an
integer type, the result must be within
(0.5 + 0.5 * T'Small)
of some value in I. A more stringent rule might apply to the less
interesting case of conversions of floating point numbers to integers, since
it is generally easy for most machines to do the right thing in that case.
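The arithmetic behind the example can be checked exactly (a Python sketch with twelve sixes in NASTY; Fraction plays the role of unbounded precision):

```python
from fractions import Fraction

NASTY = Fraction(316666666667, 10**12)  # 0.316666666667, exactly

product = 30 * NASTY                    # the exact multiplication
assert product > Fraction(19, 2)        # 9.50000000001 > 9.5, barely
# the mathematically nearest integer is therefore 10 -- but detecting
# that margin at run time, for an arbitrary small, needs unbounded
# precision
assert round(float(product)) == 10
```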
*****************************************************************************
!section 04.06 (07) Daniel Stock/R.R. Software 881026 8301030
!version 1983
!topic Accuracy of conversions from fixed point types to floating point types
The required accuracy of type conversions from fixed point types to floating
point types can lead to surprising results. Consider the following example:
DIVISOR : constant := 7;
STEP : constant := 1.0 / DIVISOR;
MAX_MULT : constant := 2 ** SYSTEM.MAX_MANTISSA - 1;
type FIXED_TYPE is delta STEP range -MAX_MULT*STEP .. MAX_MULT*STEP;
for FIXED_TYPE'SMALL use STEP;
EXAMPLE: FIXED_TYPE;
function IDENT_FIXED (ITEM : FIXED_TYPE) return FIXED_TYPE;
--  unoptimizable function that always returns its argument
...
EXAMPLE := IDENT_FIXED (FIXED_TYPE'(FIXED_TYPE'SMALL));
if 0.0 /= FLOAT'SAFE_LARGE * (1.0 - FLOAT (DIVISOR * EXAMPLE)) then
FAIL;  -- Can this procedure be called? Apparently not.
end if;
In this example, DIVISOR * EXAMPLE is a model number of type FIXED_TYPE,
with the value exactly 1.0. Hence, upon conversion to type FLOAT, it
must retain the exact value 1.0, so that the procedure FAIL cannot be called.
(The multiplication by FLOAT'SAFE_LARGE is just to prevent an implementation
from passing this test by having a "fuzzy" version of equality, which
appears to be illegal by AI-00174/05 anyway.)
At first blush, it might appear that this example also requires arbitrary
precision at run time, since one must essentially get 1.0 from multiplying
1.0/7.0 by 7. Fortunately, arbitrary precision is not needed: one needs
only a few extra bits of precision (which many machines have) when doing the
conversion from a fixed point type to a floating point type, together with a
routine that "fuzzes" the result to the nearest safe number of the floating
point type. The in-house version of the JANUS/Ada compiler does this, and
passes a battery of tests similar to this one. This seemed like a
reasonable approach when we were implementing small values that are not
powers of two. But is this the intent of the LRM? If so, I would like to
see the ARG confirm it (I also wonder how many validated compilers would pass
a test like this). It seems rather strange, in that potentially significant
bits of accuracy in the machine must be explicitly thrown away to satisfy
the numeric requirements of the language.
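The effect can be seen with ordinary IEEE doubles (a hedged sketch; divisor 49 is used instead of 7 because, under round-to-nearest-even on doubles, 7 happens not to expose the naive path's rounding):

```python
from fractions import Fraction

DIVISOR = 49                        # 7 in the example above
naive = (1.0 / DIVISOR) * DIVISOR   # round at every step
exact = float(Fraction(1, DIVISOR) * DIVISOR)  # exact, then one rounding
assert exact == 1.0                 # the model number 1.0 survives
assert naive != 1.0                 # per-step rounding loses it
```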
*****************************************************************************
!section 04.06 (11) J. Goodenough 850729 8300595
!version 1983
!topic NUMERIC_ERROR for array and integer conversions
In the course of discussing test C64103D with an implementer, a dispute
arose over the following case:
DECLARE
TYPE SM_INT IS RANGE 0..2;
TYPE LG_INT IS RANGE SYSTEM.MIN_INT..SYSTEM.MAX_INT;
TYPE AR_SMALL IS ARRAY (SM_INT RANGE <>) OF BOOLEAN;
TYPE AR_LARGE IS ARRAY (LG_INT RANGE <>) OF BOOLEAN;
A0 : AR_LARGE (SYSTEM.MAX_INT - 2 .. SYSTEM.MAX_INT) :=
(SYSTEM.MAX_INT - 2 .. SYSTEM.MAX_INT => TRUE);
PROCEDURE P1 (X : OUT AR_SMALL) IS
BEGIN
FAILED ("EXCEPTION NOT RAISED BEFORE CALL P1 (D)");
END P1;
BEGIN
IF LG_INT (SM_INT'BASE'LAST) < LG_INT'BASE'LAST THEN
P1 (AR_SMALL (A0));
ELSE
COMMENT ("NOT APPLICABLE P1 (D)");
END IF;
EXCEPTION
WHEN NUMERIC_ERROR =>
NULL;
WHEN CONSTRAINT_ERROR =>
FAILED ("CONSTRAINT_ERROR RAISED INSTEAD OF " &
"NUMERIC_ERROR P1 (D)");
WHEN OTHERS =>
FAILED ("WRONG EXCEPTION RAISED P1 (D)");
END;
It was argued that the test ought to allow CONSTRAINT_ERROR to be raised
in the above case. My response (to an inquiry for the Fast Reaction
Team) is extracted below from a mail message on the subject. Others on
the FRT disagreed, and their responses are also shown below.

Date: 24 Jul 1985 22:00:42 EDT
From: John B. Goodenough
The only place an exception can be raised is in the procedure call:
P1 (AR_SMALL (A0));
Here AR_SMALL is an unconstrained array type and 4.6(11) applies, i.e.,
"for each index position, the bounds of the result are obtained by converting
the bounds of the operand to the corresponding index type of the target type."
Note that the RM says "corresponding index TYPE", not corresponding index
subtype, so the required conversion here is, in effect:
SM_INT'BASE (SYSTEM.MAX_INT - 2) and
SM_INT'BASE (SYSTEM.MAX_INT)
When SM_INT'BASE is less than SYSTEM.MAX_INT, NUMERIC_ERROR must be raised
(3.5.4(10).)
It is true that later on in the array conversion process (4.6(13)), the
converted bounds values are checked against the index subtype, but this check
can only occur after the conversion to the base type has occurred. Moreover,
3.5.4(10) implies NUMERIC_ERROR is raised (in preference to CONSTRAINT_ERROR)
even if conversion of the bounds is considered a single conversion to the
index subtype.
In short, I think the test is correct to insist that NUMERIC_ERROR be
raised in this case.

Date: Thu, 25 Jul 85 23:15:48 PDT
From: hilfingr%ucbrenoir@Berkeley (Paul Hilfinger)
InReplyTo: Your message of 24 Jul 1985 22:00:42EDT
I am compelled to disagree with John's analysis of C64103DB. The LRM says
in 4.6(11) that the "bounds of the result are obtained by converting the
bounds of the operand" and 4.6(13) says that "a check is made that the
bounds of the result belong to the corresponding index subtype." Now John's
interpretation is that these statements imply a welldefined sequence of
operations: (1) convert the bounds; (2) then, and only then, check against
the index subtype. The reason, I presume, is that it makes no sense to check
result bounds that do not exist. However, the sense of 11.6(6) seems to be
that one need not raise NUMERIC_ERROR as long as the result is correct (and
in this case, it is -- an exception is raised).
Now I admit this reading is open to question (as are most readings of 11.6).
I do have other motives for reading it as I did. First, it seems to me that
it makes not a particle of difference in practice which exception is raised.
Second, it is evidently a convenience to the implementor to do it as he has.
I am therefore inclined to side with the implementor.

Date: Fri 26 Jul 85 14:35:32EDT
From: PLOEDEREDER@TL20B.ARPA
I disagree with John's interpretation, in particular with the sequentiality
assumed by his rationale:
"It is true that later on in the array conversion process (4.6(13)), the
converted bounds values are checked against the index subtype, but this check
can only occur after the conversion to the base type has occurred. Moreover,
3.5.4(10) implies NUMERIC_ERROR is raised (in preference to CONSTRAINT_ERROR)
even if conversion of the bounds is considered a single conversion to the
index subtype."
There is no such requirement for sequentiality stated in LRM 4.6.
It seems perfectly legal to perform the conversion and required constraint
checks by
if SM_INT'BASE (SYSTEM.MAX_INT - 2) - SM_INT'FIRST < 0 or
SM_INT'BASE (SYSTEM.MAX_INT) - SM_INT'LAST > 0
then raise CONSTRAINT_ERROR;
else
-- do whatever else is necessary for the conversion
end if;
LRM 3.5.4 (10), last sentence, then allows elimination of the NUMERIC_ERROR
exception, since the correct result of the enclosing arithmetic expressions
can be computed.
Hence, either NUMERIC_ERROR or CONSTRAINT_ERROR can be raised for the conversion.

Date: Sat, 27 Jul 85 15:54:26 edt
From: dewar@NYUACF2
I think John may be correct in his analysis, but I certainly hope someone can
find a counter argument. The separation of NUMERIC_ERROR and CONSTRAINT_ERROR
in the language design was an egregious error, severely aggravated by tests
which insist on narrow distinctions between the two!

Date: Sat, 27 Jul 85 15:56:33 edt
From: dewar@NYUACF2
Right!
Paul's analysis is at least as reasonable as John's and we can get rid of
yet another NUMERIC_ERROR vs CONSTRAINT_ERROR test!
************************************************************************
!section 04.06 (11) Peter Belmont 850819 8300615
!version 1983
!topic CONSTRAINT_ERROR vs NUMERIC_ERROR
AI00368 deals with CONSTRAINT_ERROR and NUMERIC_ERROR in relation
to conversion of values outside a numeric type's base range.
I believe that we must distinguish between a type's bounds and the
bounds of the hardware numbers and hardware computations which will
deal with them. Where the RM specifies NUMERIC_ERROR if the
hardware cannot deliver the correct result it implicitly allows
the nonraising of NUMERIC_ERROR in the event that correct results
are obtainable.
Why should not an Ada machine specify several predefined
integer types corresponding, for example, to BYTE/WORD/BIGWORD
storage  and use BIGWORD arithmetic operators on them in all cases?
In this case, NUMERIC_ERROR would NOT be raised in the case presented
in AI00368/00.
NOTE: There is a typo mid page 2:
When SM_INT'BASE
should be
When SM_INT'BASE'LAST
*****************************************************************************
!section 04.06 (11) J. Goodenough 850822 8300619
!version 1983
!topic NUMERIC_ERROR for numeric conversions
!reference 8300595
It is apparent that some people believe that in many situations where
NUMERIC_ERROR should be raised, CONSTRAINT_ERROR can also be raised. In
particular, the latest controversy deals with an issue in converting arrays,
but the issue is also relevant for the conversion of any numeric value.
Also, there is some sentiment arising that it was a mistake to have
NUMERIC_ERROR in the language at all, and the LMC should more or less
consider NUMERIC_ERROR as a synonym for CONSTRAINT_ERROR. So before turning
to the array conversion case, let me consider the question of why
NUMERIC_ERROR exists.
The need for and potential utility of NUMERIC_ERROR arises primarily from the
numeric operations: +, -, *, and /. For the sake of simplicity, let's limit
the discussion to integer operators, i.e., NUMERIC_ERROR is only raised by /
when the divisor is zero, and it is only raised by +, -, and * when overflow
occurs. NUMERIC_ERROR exists to indicate the occurrence of overflow since it
is not easy to check in advance that overflow will occur, and programmers can
sometimes take useful action knowing that an arithmetic operator had caused
an overflow.
Of course, overflow is a machine-oriented concept that had to be translated
into language-oriented concepts. The resulting Ada concept was, in essence,
that overflow occurs when an operation tries to produce a value that lies
outside a base type. Of course, any value that lies outside a base type also
does not satisfy any subtype constraint imposed on the base type, so the
question naturally arose: can CONSTRAINT_ERROR also be raised when a value
lies outside the base type? The general answer, for numeric operators, is
"No, NUMERIC_ERROR is raised in preference to CONSTRAINT_ERROR if any
exception at all is raised." (I will give some extracts from Language Study
Notes later that support this rather vague summary of the intent.)
There are several places where the RM tries to make the distinction between
NUMERIC_ERROR and CONSTRAINT_ERROR clear:
3.5.5(10) says:
The exception NUMERIC_ERROR is raised by the execution of an
operation (in particular an implicit conversion) that cannot
deliver the correct result (that is, if the value corresponding
to the mathematical result is not a value of the integer type).
However, an implementation is not required to raise the exception
NUMERIC_ERROR if the operation is part of a larger expression
whose result can be computed correctly, as described in section
11.6.
4.5(7) says:
The predefined operations on integer types either yield the
mathematically correct result or raise the exception
NUMERIC_ERROR. A predefined operation that delivers a result of
an integer type (other than universal_integer) can only raise the
exception NUMERIC_ERROR if the mathematical result is not a value
of the type.
Finally, 11.6(6) says:
Similarly, additional freedom is left to an implementation for
the evaluation of numeric simple expressions. For the evaluation
of a predefined operation, an implementation is allowed to use
the operation of a type that has a range wider than that of the
base type of the operands, provided that this delivers the exact
result (or a result within the declared accuracy, in the case of
a real type), even if some intermediate results lie outside the
range of the base type. The exception NUMERIC_ERROR need not be
raised in such a case.
In essence, 11.6(6) specifies situations in which NUMERIC_ERROR need not be
raised because no overflow will occur because overlength registers have been
used.
Now consider some examples:
A : INTEGER := INTEGER'LAST + 1;
NUMERIC_ERROR must be raised by the + operator. (I say "must" because this
is the clearest case for raising NUMERIC_ERROR in preference to
CONSTRAINT_ERROR. This means that if someone wants to loosen the
interpretation of other cases to allow CONSTRAINT_ERROR to be raised, it
must be explained why CONSTRAINT_ERROR can be raised in this example.)
Now let's consider numeric conversions. In particular, note that 4.6(4)
says:
A conversion to a subtype consists of a conversion to the target
type followed by a check that the result of the conversion
belongs to the subtype.
Since conversion is a predefined operation, 4.6(4) together with 4.5(7) imply
that a conversion raises NUMERIC_ERROR (not CONSTRAINT_ERROR) if the operand
value lies outside the base type. For example:
Long_Int : Long_Integer := Long_Integer(Integer'Last);
Int_Var  : Integer := Integer (Long_Int + 1);    -- (1)
subtype Sm_Int is Integer range 0..5;
Sm_Var   : Sm_Int := Sm_Int (Long_Int + 1);      -- (2)
In both cases, NUMERIC_ERROR must be raised in preference to
CONSTRAINT_ERROR. Note that no exception is raised by the + operator, since
the operation for Long_Integer is used. Conversion to the base type raises
NUMERIC_ERROR by 3.5.5(10), since the operand value does not belong to the
base type. The conversion to the base type precedes any subtype check, as
specified by 4.6(4).
Now let's consider the array conversion. 4.6(11) says (with respect to array
conversions):
If the type mark denotes an unconstrained array type, then, for
each index position, the bounds of the result are obtained by
converting the bounds of the operand to the corresponding index
type of the target type.
As 4.6(4) says, all conversions start by converting the operand to the target
(base) type, so 4.6(4) and 4.6(11), taken together, imply that each bound is
converted to the corresponding index base type. This conversion will raise
NUMERIC_ERROR if a bound's value does not belong to the base type.
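To make the claim concrete, here is a hypothetical sketch (the type names and
ranges are my own invention, and the outcome depends on how wide the
implementation makes SMALL'BASE):

```ada
with SYSTEM;
procedure Bound_Conversion_Sketch is
   type BIG   is range SYSTEM.MIN_INT .. SYSTEM.MAX_INT;
   type SMALL is range 1 .. 10;
   type BIG_ARR   is array (BIG range <>)   of CHARACTER;
   type SMALL_ARR is array (SMALL range <>) of CHARACTER;
   A : BIG_ARR (BIG'LAST - 2 .. BIG'LAST) := (others => 'x');
begin
   -- By 4.6(4) and 4.6(11), the bounds BIG'LAST - 2 and BIG'LAST are
   -- first converted to SMALL'BASE.  If SMALL'BASE is narrower than
   -- BIG, that conversion overflows and, under the reading argued
   -- above, raises NUMERIC_ERROR before the index subtype check of
   -- 4.6(13) gets a chance to raise CONSTRAINT_ERROR.
   declare
      S : SMALL_ARR (1 .. 3);
   begin
      S := SMALL_ARR (A);
   end;
end Bound_Conversion_Sketch;
```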
Erhard Ploedereder and Paul Hilfinger argue that 4.6 does not imply that
first a conversion to a base type is made and then a subtype check is
performed, but I think that 4.6(4) shows that this sequence of events is, in
general, what occurs when conversions are performed. To believe otherwise is
to introduce a non-uniformity in the semantics of conversion. Paul also
argues that NUMERIC_ERROR need not be raised because 11.6(6) says
NUMERIC_ERROR need not be raised if "the result is correct (and in this case,
it is -- an exception is raised)." But "result" in 11.6(6) means "value
produced", i.e., intermediate results were computed in an over-length
accumulator so values outside the base type were represented accurately. It
is not correct to read 11.6(6)'s use of "result" to include the possibility
of raising an exception. 11.6(6) says NUMERIC_ERROR need not be raised
"provided that (an operation) delivers the exact result." The term
"delivers" implies no exception is raised.
Robert Dewar and Paul both argue that, basically, it is surprising to have
NUMERIC_ERROR raised by an array conversion -- that having this exception
raised causes implementation inefficiency and is not helpful to a programmer.
In essence, they would like to argue that conversion of numeric bounds does
not use the usual semantics for explicit (and implicit) numeric conversions,
but instead uses a special rule implying that CONSTRAINT_ERROR can be raised
for such conversions rather than NUMERIC_ERROR. Or perhaps they would like
to argue that the reordering of operations allowed by 11.6 allows the index
subtype constraint check required by 4.6(13) to be performed before the
bounds are converted. If so, this becomes an issue to be discussed in
conjunction with AI-00315.
My feeling is that the simplest reading of the RM is to use the usual
semantics for conversions to scalar types. This reading has the consequence
that NUMERIC_ERROR must be raised by certain incorrect attempts to convert
arrays. I think we should take the language as it is in this case and not
attempt to fix what some may believe to be an incorrect language design
decision.

I have appended here some extracts from relevant Language Study Notes and
other comments produced during the design.

NOTE051
Subject: On raising NUMERIC_ERROR
Author: JBG
Date: 21 Jul. 81
At the DR meeting, the argument was made that if I, J, and K are of type
INTEGER, then I*J need not raise NUMERIC_ERROR if I*J yields a double
precision result, since such a result does not lie outside the "implemented
range of the type," at least for this operation. Note that under this
interpretation, I*J/K could well yield a result that lies in the range of
INTEGER'FIRST to INTEGER'LAST without raising any exceptions.
I argued that if NUMERIC_ERROR was not raised for I*J, then at least
CONSTRAINT_ERROR must be raised, since "*" for INTEGER is defined to yield an
INTEGER result; returning a value greater than INTEGER'LAST would violate the
constraint associated with the result of this operation, and so should require
that CONSTRAINT_ERROR be raised if NUMERIC_ERROR is not raised. I could also
have argued that if "*" is redefined in some way so CONSTRAINT_ERROR need not
be raised in the above case, then at least it must be raised when "/" is
invoked, since "/" is clearly defined to accept only operands whose values are
in the range INTEGER.
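Written out, the situation under discussion is something like the following
(a sketch of mine, assuming a machine whose multiply produces, and whose
divide consumes, a double-length value):

```ada
procedure Wide_Intermediate_Sketch is
   I : INTEGER := INTEGER'LAST;
   J : INTEGER := 4;
   K : INTEGER := 8;
   R : INTEGER;
begin
   -- Mathematically, I*J = 4 * INTEGER'LAST, far outside INTEGER.
   -- An implementation keeping the product in a double-length
   -- register could deliver R = INTEGER'LAST/2 with no exception;
   -- the position argued above is that I*J must instead raise
   -- NUMERIC_ERROR (or, failing that, CONSTRAINT_ERROR).
   R := I * J / K;
end Wide_Intermediate_Sketch;
```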
If, however, we accept for the moment the idea that the result of I*J can
exceed INTEGER'LAST and yet raise neither NUMERIC_ERROR nor CONSTRAINT_ERROR,
then we must ask in what contexts such a result can be USED without raising
CONSTRAINT_ERROR by virtue of the context. Certainly, the result of I*J will
raise CONSTRAINT_ERROR if it is used as the operand of another arithmetic
operator, in an assignment statement, or in a relational operation (although
one might argue that for I*J IN INTEGER, the RM does not require that the
result of I*J be in the RANGE of INTEGER values; it only requires that the
base type of I*J be the SAME as INTEGER'BASE). Aside from IN, the only other
contexts I can imagine are conversions (e.g., FLOAT(I*J)) and type definitions
(e.g., type T is range I*J .. LONG_INTEGER'LAST). Any implementation that
did not raise CONSTRAINT_ERROR (or NUMERIC_ERROR) in just these contexts would
justifiably be considered bizarre by its users. I would hate to have to
explain why such an implementation is considered valid by the RM.
In short, I see no merit in taking the position that under some
circumstances I*J raises no exception even though the mathematically defined
value exceeds INTEGER'LAST. Nor does it seem reasonable to permit an
implementation to raise either NUMERIC_ERROR or CONSTRAINT_ERROR in such a
situation (depending on whether the implementation has chosen to compute the
result as a double precision quantity). I think the RM should clearly state
that for the arithmetic operators and functions, NUMERIC_ERROR is raised
instead of CONSTRAINT_ERROR if the mathematically defined result exceeds the
range of the base type.
Alternatively, the RM could replace NUMERIC_ERROR with CONSTRAINT_ERROR
and state that CONSTRAINT_ERROR is raised when the result exceeds the range of
the base type. However, I believe numerical analysts will find the
distinction between these exceptions to be potentially useful, so I don't
really recommend combining the exceptions. At the very least, the presence of
NUMERIC_ERROR is "insufficiently wrong" to justify a change.

Language Study Note LSN.213
Subject : Minutes for September DR Meeting
Authors : DC, MW, KL
Date : 7th October, 1981
...
JDI: Summarizes. So we consider that INTEGER is defined as if:
     type %int ...
     function "*"(X : %int range ...;    -- where the
                  Y : %int range ...)    -- ranges may be different
        return %int range ...;           -- and not publicised
     type INTEGER is new %int range A .. B;    -- (say)
JBG: Are both numeric and constraint error needed?
BAW: Numeric analysts need both.
JDI: Suggests we can't simply characterize for the programmer when
to expect numeric error and when constraint error, so we
should abolish numeric error.
GF: If INTEGER is unconstrained, how can it get constraint error?
How would INTEGER assignment get constraint error?
JDI: Propagation from the evaluation of the expression.
BAW: If we merge numeric error and constraint error, then numeric
error will not be reliable.
Consensus: retain both [exceptions]
JDI summarizes
(1) The predefined operators never raise constraint error, only numeric
error.
(2) Assignment of result may raise constraint error.
(3) The intermediate type is hardware dependent and derived from %int.
These operators may accept arguments and give results outside the range of
INTEGER (i.e., in %int). Bounds of %int should not be shown in Appendix F.
RB: The programmer cannot expect this, but the implementor is free
to provide it.
JDI: We should write down this proposal.
RB: A mathematical package design philosophy is that these
operators should raise NUMERIC_ERROR.

NOTE154
Subject: Renaming, 'BASE, and proposed semantics of numerics
Author: PNH
Date: 3 Oct.81
...
... the
proposed new rules for INTEGER (making it a subtype) introduce some
complications. To review, the proposal (made in response to a request from
Brian Wichmann) is to define INTEGER as follows:
type $INTEGER$ is implementation defined;
subtype INTEGER is $INTEGER$ range implementation defined;
function "+"(X,Y: $INTEGER$) return $INTEGER$;
...
This would allow implementations to use all kinds of nifty, convenient, and
efficient tricks for coding integer expressions, while leaving them the freedom
to raise NUMERIC_ERROR when convenient. In particular, it is intended that a
subexpression whose value lies outside INTEGER'FIRST..INTEGER'LAST might or
might not raise NUMERIC_ERROR depending in part on the context. A programmer
is only sure of not getting NUMERIC_ERROR if he is careful to keep his
intermediate results in the range INTEGER'FIRST..INTEGER'LAST.
Under this view, the type INTEGER'BASE is not necessarily implemented as a
built-in type. For example, on an IBM 370, we might have an implementation
with only INTEGER (32 bit) and no double-length integer. If the 370 defines
the type $INTEGER$ to have a range of -(2**63)..2**63-1, and INTEGER to have a
range of -(2**31)..2**31-1, then an INTEGER expression such as
X*Y/Z
can be computed without checking the result of X*Y, because multiplication
produces a double-length result and division requires a double-length
dividend. (Notice, though, that X+Y may well raise NUMERIC_ERROR, even though
the constraint on $INTEGER$ cannot be exceeded by the addition of two
integers.)
As we shall see, it is crucial to understand that the intent of this adjustment
is NOT that programmers be able to take advantage of the extra range of the
type INTEGER, but rather that compilers be allowed to eliminate certain checks
that would otherwise be necessary.
If the user tries to define
type MY_INT is range INTEGER'BASE'FIRST..INTEGER'BASE'LAST;
the compiler is allowed to refuse. Indeed, it is even allowed to claim that
INTEGER'BASE'FIRST is too big a value for the compiler to handle (well, maybe).
...

Language Study Note LSN.225
Subject : Preliminary Review of Chapter 11 - Exceptions
Authors : LDT
Date : 30th October, 1981
...
Editorial : constraint_error versus numeric_error
Numeric_error can be raised by the predefined arithmetic operators and
by ABS. It can also be raised by a pure conversion.
Constraint_error can be raised by assignment, qualification, by the
attributes 'SUCC and 'PRED, and by numeric literals.
Note that a conversion is the succession of a pure conversion (that may
raise numeric_error) and of a qualification (that may raise
constraint_error).
Relations and membership tests never raise an exception. On the other
hand an exception may be raised during the evaluation of their
constituents. For example an exception may be raised by
if X in S range U..V then
if the constituent subtype indication S range U..V is incorrect (that is if
U and/or V are not in the range of S).
...

Language Study Note LSN.241
Subject : Preliminary review of Numeric Topics
Authors : LDT
Date : 12th November, 1981
...
(6) NUMERIC_ERROR and CONSTRAINT_ERROR
Rather than repeat a wording on exceptions for each class of
operators, a clearer wording here will suffice. Hence this paragraph
should be replaced by the following
"The predefined integer operations yield the mathematically correct
result or raise the exception NUMERIC_ERROR (provided this exception
is not suppressed, see 11.7). An operation giving an integer result of
type T can only raise NUMERIC_ERROR if the correct result is outside
the range T'FIRST..T'LAST. The predefined real operations yield
results within the bounds defined in 4.5.8. With such operations,
NUMERIC_ERROR may, but need not, be raised under the conditions given
in 4.5.8."
The remaining problem is one of providing an explanation within the
type model and to explain when CONSTRAINT_ERROR can be raised.
The suggestion that INTEGER should be derived from an unnamed type
should be rejected on the grounds of complexity and because the
properties of the unnamed type are not expressible in the type model.
...
Clarification of NUMERIC_ERROR
Add the following:
"For reliable computation it is highly desirable that NUMERIC_ERROR is
raised whenever the result of an operation is not an approximation to
the mathematical result. Due to the idiosyncrasies of hardware, this
is not required by the language, but the attribute MACHINE_OVERFLOWS
indicates if such a desirable situation arises (see 13.7.1)."
...
(4/a)Accuracy and exceptions in numeric conversions
Replace the last two sentences by:
"A conversion to a numeric type or subtype T consists of a pure
conversion to T'BASE followed by any constraint checking implied by T.
The accuracy of the operation is that of a null operation using the
rules in 4.5.8. The conversion of a real value to an integer type is
by rounding to the nearest integer. If the model interval of the real
value (as in 4.5.8) includes the value half way between two successive
integers, the result can be rounded up or down. The pure conversion
can raise NUMERIC_ERROR and the constraint checking raise
CONSTRAINT_ERROR."
...
NUMERIC_ERROR and CONSTRAINT_ERROR
The following note could be added
Can raise NUMERIC_ERROR
+, -, *, /,
mod, rem, ABS
pure conversion
Can raise CONSTRAINT_ERROR
:=
subtype conversion
qualification
'SUCC 'PRED
Can raise neither exception
=, /=
relational operators
membership (assuming operands correct)
...
"Pure conversion: Process of regarding a numeric value of one type as
being of another type (without changing the value)."

NOTE275
Subject: Note on NUMERIC_ERROR versus CONSTRAINT_ERROR
Author: JDI
Date: 14 Dec.81
Ron,
Consider
generic
   type T is (<>);
package P is
   ...
   T'SUCC(T'LAST)
end;
Clearly if we instantiate P(T => COLOR) we want
COLOR'SUCC(COLOR'LAST)
to raise CONSTRAINT_ERROR. Then should it be different for P(T => INTEGER)
I would prefer not, in order to avoid having to generate different bodies
for integers and colors.
The only possible other interpretation is to say that
T'SUCC(T'LAST)
raises a constraint_error but may actually raise a NUMERIC_ERROR before,
in the attempt of computing the value. This would correspond to the
model where T'SUCC works as follows
function T'SUCC(X : T) return T is
   C : INTEGER := T'POS(X);
begin
   C := C + 1;    -- can raise numeric_error
   if C > T'POS(T'LAST) then
      raise CONSTRAINT_ERROR;
   else
      return T'VAL(C);
   end if;
end;
Actually this interpretation appears legitimate although it may not be
that easy to explain.
Jean
************************************************************************
!section 04.06 (11) G. Fisher 850824 8300620
!version 1983
!topic NUMERIC_ERROR for Array and Integer conversions
!reference AI-00368, 8300595
I agree with John's interpretation of C64103DB and disagree with Paul's
and Erhard's. Appeal to 11.6(6) is not possible here I believe. The
conversion of the bounds, SYSTEM.MAXINT - 2 and SYSTEM.MAXINT, to the
index type SM_INT'BASE may be a part of the larger conversion operation
AR_SMALL(A0), but 11.6(6) doesn't apply to it. This paragraph applies
to the "evaluation of numeric simple expressions". The array conversion
AR_SMALL(A0) is not a numeric expression. 11.6(6) as stated applies
only to simple numeric expressions written in Ada, not to non-numeric
expressions even if these may involve the implicit evaluation of numeric
expressions. Note that 11.6(6) gives the implementation the freedom to
use the "operation of a type that has a range wider than that of the
base type of the operands, provided that this delivers the exact
result". Even assuming 11.6(6) could be applied to the numeric
conversion involved in the array conversion, what possible operation
could be used for SM_INT'BASE(SYSTEM.MAXINT) that has a result that
is correct and belongs to the base type SM_INT'BASE? Even if 11.6(6)
could apply, there is no way to apply it in this case.
Paul claims that raising an exception is the correct result of the array
conversion. But there is no way to view the numeric conversion as an
intermediate result on the way to a correct result. Moreover, the RM
always describes "correct result" as the mathematical value produced by
the corresponding mathematical operation. It makes no sense to call an
exception a correct result.
Finally, 4.6(11) makes it clear that the evaluation of an array
conversion involves computing the bounds of the resulting array. Then
4.6(13) states that those bounds must belong to the target index
subtype. There is no requirement to compute the conversion of the
source array bounds to the target index base type if the mathematical
values of the source bounds satisfy the target index subtype constraint.
If they do not, it would be much simpler if the implementation could
just raise CONSTRAINT_ERROR and be done with it. But 4.6(11) requires
that these bounds be convertible to the target index base type. If they
are not convertible, NUMERIC_ERROR must be raised (unless suppressed).
It may be argued that 4.6(11) was intended only to define what values
the bounds of the target array have, provided they have values, and not
to prescribe how they are computed. Unfortunately, 4.6(11) states how
the bounds are obtained: "by converting the bounds of the operand to the
corresponding index type". On the other hand, 4.6(11) does not say that the
implementation actually has to perform that conversion to get the bounds. An
implementation of the sort suggested by Erhard (where I assume
mathematical values are used) is possible, except that if the constraint
check fails, then an additional check of the source bounds against the
index base type range is necessary. In short, there is no great
implementation difficulty in making this check and no loss of
efficiency. The implementor just didn't want to be bothered with it.
No doubt we would all be just as happy without it. But then we must
rewrite 4.6(11).
*****************************************************************************
!section 04.06 (12) R P Wehrum, Siemens A.G., Muenchen 830602 8300252
!version 1983
!topic Explicit Conversions and Numeric Error
Section 4.6(12) should not simply say, "In the case of conversions of
numeric types, the exception CONSTRAINT_ERROR is raised by the evaluation
of a type conversion if the result of the conversion fails to satisfy a
constraint imposed by the type mark.", but it should also mention that
numeric conversions will raise NUMERIC_ERROR under certain premises. (Cf.
section 3.5.4(10), 3.5.6(6).)
************************************************************************
!section 04.06 (12) D. Eilers 900805 8301388
!version 1983
!topic Allowable exceptions from implicit conversions in static expressions
Is the following allowed to cause an exception to be raised?
type byte is range -128..127;
b: byte := -128;    -- is this safe?
The principle of least surprise would say it is safe, but the RM seems
to imply otherwise.
From 2.4 it can be determined that Ada does not have literals for negative
universal_integers, although a note would have helped.
From 4.9, "-128" is a static expression, since from 4.5.4(1) "-" is
predefined.
From 4.10(4), universal static expressions must be evaluated exactly.
However, from 4.10(1), the expression is universal only if it delivers
a result type of universal_integer.
From 4.4(3), "the type of the expression depends only on the type of its
constituents and on the operators applied."
From 4.10(2), the same operations are predefined for the type
universal_integer as for any integer type, so "-" is available for universal
integers.
However, from 3.2.1(1) the type of the expression in an object declaration
must be that of the object.
From 3.5.4(8), implicit conversions are available to convert universal_integer
values into corresponding values of an integer type.
From 4.6(15) an implicit conversion is applied if and only if necessary for
a legal interpretation. Apparently, such an implicit conversion
must be applied (either before or after the negate).
Also from 4.6(15) the implicit conversion can only be applied to numeric
literals, named numbers, or attributes. This implies that the
implicit conversion is applied before the negate. A note
would have helped draw attention to the rule that implicit
conversions are applied to the leaves of the parse tree.
From 4.6(12) a conversion raises CONSTRAINT_ERROR if the result of the
conversion fails to satisfy a constraint imposed by the type mark.
And 128 fails to satisfy the range constraint of byte.
Although there is no explicitly stated effect for implicit conversions,
apparently 4.6(12) was intended to apply, substituting the
implicit target type for the type mark of an explicit conversion.
However, 3.5.4(10) states that "an implementation is not required to raise
the exception NUMERIC_ERROR if the operation is part of a larger
expression whose result can be computed correctly, as described
in section 11.6."
From all this, one would conclude surprisingly that an implementation is
expected (4.6(12)), but not required (3.5.4(10)) to raise an exception
for the above declaration! This seems very counterintuitive.
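The chain of reasoning above can be restated as a sketch; the expansion in
the comment is mine, not the RM's:

```ada
procedure Negative_Literal_Sketch is
   type byte is range -128 .. 127;
   -- Ada has no negative literals (2.4): the initializer below is the
   -- predefined unary "-" applied to the universal_integer literal 128,
   -- and by 4.6(15) the implicit conversion lands on the literal itself
   -- (a leaf of the parse tree), roughly as if one had written
   --    b : byte := - BYTE_BASE(128);
   -- where BYTE_BASE stands for byte's base type.  Whether converting
   -- 128 must raise an exception when byte'LAST = 127 is exactly the
   -- question posed above.
   b : byte := -128;
begin
   null;
end Negative_Literal_Sketch;
```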
-- Dan Eilers
*****************************************************************************
!section 04.06 (13) Don Clarson 830630 8300009
!version 1983
!topic Explicit type conversion of array types.
Must component {sub}types be the same? The phrase "... a check is made
that any constraint on the component subtype is the same for the
operand array type as for the target array type ..."
Is this determined at compile time? Must these be the same or
only compatible? If so, are the component values checked at run time?
************************************************************************
!section 04.06 (13) M Woodger 881105 8301135
!version 1983
!topic Nonnull bounds belong to the index subtype
!reference AI00313/03
Delete "and if the operand is not a null array", and five words later replace
the end of the sentence by:
"a check is made that the bounds of the result either belong to the
corresponding index subtype or define a null range."
*****************************************************************************
!section 04.06 (15) JD Ichbiah/Alsys 830506 8300153
!version 1983
!topic Paragraph numbers on page 4-23 should be increased by one.
************************************************************************
!section 04.06 (15) Peter Belmont 831024 8300189
!version 1983
!topic implicit conversion rules  surprises
I tried the following example, or something like it, on two
compilers on display in Dallas last week, one of them validated,
with the indicated result, a finding of ambiguity at the line shown.
Example:
procedure MAIN is
   function F(x: boolean) return integer is
   begin return 66; end;
   function "<"(x, y: INTEGER) return integer is
   begin return 66; end;
   package internal_1 is
      x: integer := F(5<5);    -- OK
      -- use F(boolean), built-in/universal "<"
   end internal_1;
   function F(x: integer) return integer is
   begin return 66; end;
   package internal2 is
      x: integer := F(5<5);    -- AMBIGUOUS
      -- F(boolean), built-in/universal "<" looks good
      -- F(integer), user-defined "<" -> integer
      -- also looks good (although requiring
      -- implicit conversion)
   end internal2;
begin
   null;
end MAIN;
Question: Shall the RM be interpreted to say (a) that the line marked
'AMBIGUOUS' is in fact ambiguous; or (b) that there is a single
correct reading, F(boolean) / predefined/universal "<",
determined by the program and the particular complete context;
or (c) otherwise?
In any case, I would hope that an interpretation of 4.6(15)
could be prepared which makes its intention, its meaning, and
its consequences more apparent to the reader of the RM.
We seem caught in a semi-multipass situation, reminiscent of the old
rules of USE visibility. We must try to compile a complete context
without even considering implicit numeric conversions and, if we succeed,
stop, successful ("An implicit conversion of a convertible universal
operand is applied if and only if the innermost complete context (see 8.7)
determines a unique (numeric) target type for the implicit conversion
AND there is no legal interpretation of this context without
the conversion."). If, however, no such interpretation can be found and
yet a definitive target type for the convertible operand is "determined"
(are all of these wonderful terms defined elsewhere?), then the
implicit conversion is applied.
How surprised will the programmer of the example given above
be when, after his having taken such trouble to define
"<"(integer, integer) -> integer, his program compiles without complaint, but
uses the predefined universal "<" and performs a major "GOTCHA"
at run time?
I, for my part, and as a language commentator rather than as
an LRM interpreter, feel that the two functions "F" in the
example deserve to be considered separately, and that since
each possibility can be supported by suitable choices for "<",
should result in a decision of "ambiguous" rather than in a
possibly surprising determination of "OK". There may also be
a case of French wine hiding in and about this example.
************************************************************************
!section 04.06 (15) P. N. Hilfinger 831024 8300190
!version 1983
!topic Implicit conversion rules
!references 8300189
It seems clear that the compilers were wrong. Although there is just one
possible conversion target (since one doesn't convert to universal_integer),
this is certainly not the only legal interpretation. Hence, Belmont's
option (b) is correct.
I must admit, in agreement with Belmont, that I would have preferred a
slightly different wording, if only to indicate that there isn't a
semi-multipass problem. It is supposed to be the case that if we treat
universal quantities as if they were overloaded on all types of the
appropriate class, then the LRM's rule is equivalent to saying that if two
interpretations of a complete expression differ only in that one or more
subexpressions of one interpretation have a universal type, then that
interpretation is chosen, and there is no ambiguity.
************************************************************************
!section 04.06 (15) Peter Belmont 831026 8300191
!version 1983
!topic Implicit Conversion Rules
!reference 8300190
We might look at several issues.
(a) Do we care what the intent of the language was
in case the stated LRM rule is different?
(b) If an easy algorithm does not exist, do we
restudy the matter?
(c) If the LRM has unintentionally pulled the plug
from a cask of Beaujolais, do we revise the LRM
(via interpretation) or just get drunk ...
Let me first consider PNH's claim that "it is supposed to be the case
... LRM's rule is equivalent ... if interpretations differ only
in that one or more subexpressions of one have a universal type..."
An interpretation does assign types to expressions but ALSO assigns
meanings to names. If we assume that "<" denotes a single entity,
namely the built-in "<", then I can see that the many interpretations
of 5<5 (for all the integer types including univ_int)
do, indeed, differ only in the types of the expressions, and I see
how we may easily choose the universal type for the operands and
thereby choose the universal interpretation of the operator.
But, in my example, there were at least two denotables for "<",
the builtin one(s) and the userdefined one. The available
interpretations of the expression F(5<5) are, thus,
    F  bool -> int                 F  int -> int
    <  (any int, any int) -> bool  <  (int, int) -> int
    5  any int                     5  int
    5  any int                     5  int
and it is true of the left-hand interpretation(s) that they differ
only in that one or more subexpressions of one interpretation have
universal type (or whatever the words ought to be...), allowing
the reduction of the interpretation set to
    F  bool -> int       F  int -> int
    <  (UI, UI) -> bool  <  (int, int) -> int
    5  UI                5  int
    5  UI                5  int
but we still have two interpretations, and they differ in MORE than
just the types of subexpressions; they differ in the denotations of
names.
Now, if it is the case that the excellent and well defined rule
of 4.6(15) is not the rule that was intended, should we go with
the LRM or with the intent? Or with something else?
I feel that an easy implementation of the LRM's rule will be
found, and it is too early to feel pity for existing implementations,
even for validated ones, so we could go with the LRM with a clear
conscience as far as implementations go.
Next is the question on consequences for users. I do not like the
idea that adding a USE clause can leave a legal program legal but
change its meaning. But if we change my example to
   function F(x: integer) ...
   function "<"(integer, integer) return integer ...
   F(5<5)    -- is OK
   use P;    -- where P contains F(x: boolean) ...
   F(5<5)    -- suddenly denotes P's F and
             -- the predefined, universal "<"
then a user may be unpleasantly surprised.
Post Script: I understand the intent of the rule that PNH
remembers as a local rather than a global matter: to disambiguate
the tree below a particular operator such as "<" and to
leave universal operands alone where possible (and where
no denotations needed to change). The LRM's rule is, however,
a global rule, which says:
if a no-implicit-conversion reading exists uniquely, use it.
otherwise, if exactly one reading involving implicit
conversions exists, use it.
************************************************************************
!section 04.06 (15) J. Goodenough 831026 8300193
!version 1983
!topic re: implicit conversion rules
!references 8300189
No surprise here. 4.6(15) says that F(5<5) is unambiguous since there is a
legal interpretation without any implicit conversions. Version 1.3 of the test
suite contains tests to check this.
From an implementation viewpoint, multipass resolution is not necessary. One
propagates universal_integer and all implicitly convertible types up the tree,
keeping track of whether implicit conversions affected a result type. For
example, in the upward pass, 5<5 has the type set {boolean, integer(impl)},
where integer(impl) means the result type is integer, but an implicit
conversion was required lower in the tree in order to achieve this result type.
The result type set for F(5<5) is {integer, integer(impl)}. In the complete
context, the integer(impl) result is rejected (because of the preference rule
in 4.6(15)), so the top-down resolution process removes all the alternatives
requiring an implicit conversion. In particular, the set {boolean,
integer(impl)} is reduced to {boolean}, and hence, an unambiguous resolution is
possible.
As to whether or not users will find this surprising, I think the extensive
consideration given to the problem of implicit conversions showed that no
matter what rule is chosen, there is some surprise.

************************************************************************
!section 04.06 (15) R P Wehrum, Siemens A.G., Muenchen 830602 8300249
!version 1983
!topic Illegal Expressions Involving UniversalInteger Values
Consider the subsequent example:
... X, Y : FLOAT := ...;
X := 100 * 100.0 * Y;    -- (1)
Y := 10.0/2;             -- (2)
Are the expressions on the right hand sides of the assignments in (1) and
(2) legal?
According to the RM they are not, though there exists a predefined
operator "*" (case (1) which performs the mapping (cf. section App.
3(11)).
(universal_integer, universal_real) > universal_real.
The second "*" needs to have a left operand of type FLOAT; so an implicit
conversion must be applied; however, an implicit conversion cannot be
performed for the expression (100*100.0) but only for its constituents;
an implicit conversion from universal_integer to a noninteger type is
not allowed.
Cf. also section 4.5.5(2) and 4.6(15).
Is this intended or an oversight of the language designers?
Case (2) can be analysed in an analogous way.
************************************************************************
!section 04.06 (15) J. Storbank Pedersen (DDC) 830526 8300328
!version 1983
!topic Convertible universal operands
Function calls are not mentioned as convertible universal operands.
Hence the following example is illegal:
type ENUM is (A,B);
V : INTEGER := ENUM'POS(A);
ENUM'POS(A) is a function call and thus not a convertible universal
operand. Is it the intention to require an explicit conversion in such
cases, as implied by the current definition?
************************************************************************
!section 04.06 (15) Software Leverage, Inc. 840501 8300374
!version 1983
!topic Is T'POS a convertible universal operand?
This section states, "An implicit conversion ... can only be applied if
the operand is either a numeric literal, a named number, or an attribute;
such an operand is called a convertible universal operand in this section."
Presumably, this does not include function calls, even if the name of the
function is an attribute. For example, the following is illegal:
type T is (RED);
X: INTEGER := T'POS(RED);  -- Illegal.
This is because T'POS(RED) is a function call of type universal_integer,
but X is of type INTEGER, and no implicit conversions are possible. Is
this correct?
************************************************************************
!section 04.06 (15) J. Goodenough 840526 8300377
!version 1983
!topic Re: Is T'POS a convertible universal operand?
!reference 8300374
T'POS is both a function and an attribute. Syntactically it is an attribute,
and so, according to the wording of the Standard, it is allowed as the
operand of an implicit conversion.
(Since the Standard is intending to list syntactic entities, instead of
"named number", it should say "a name declared by a number declaration".)
************************************************************************
!section 04.06 (15) Norman Cohen 880216 8300958
!version 1983
!topic Rules of the form "X must be of type T"
The RM is replete with rules stating that a certain expression "must be"
of a given type. The natural reading of such a rule is that the
programmer must write an expression of the given type, but the intention
in the case of numeric types is generally that 4.6(15) applies, i.e., if
the expression "must be" of some integer/real type then the programmer
may write a universal_integer/universal_real expression instead, and the
words "must be" force an implicit conversion.
In other words, 4.6(15) overrides the plain reading of the words "must
be." Unfortunately, a reader could at least as easily presume that the
words "must be" override the usual rules allowing universal expressions
to appear in places where expressions of some other numeric type are
required.
The operative words in 4.6(15) are:
An implicit conversion of a convertible universal operand is
applied if and only if the innermost complete context (see 8.7)
determines a unique numeric target type for the implicit conversion,
and there is no legal interpretation of this context without this
conversion.
Therefore, in contexts where it is intended to allow implicit conversion,
rules of the form "X must be of type T" should be replaced by rules of
the form "The only legal interpretation of X is as a value of type T."
Example:
3.6.1(2) contains the following rule about the bounds of discrete ranges:
Otherwise [i.e., if at least one bound is not a literal, named
number, or attribute of type universal_integer], both bounds must be
of the same discrete type, other than universal_integer....
This rule has been widely misinterpreted as barring a discrete range of
the form 1 .. Integer'(10) because the two bounds are not both of the
same type; the words "other than universal_integer" can be read as
meaning that each bound must be of a type other than universal_integer,
i.e., forbidding implicit conversion in this context. The following
wording more clearly expresses the intent:
Otherwise, the bounds must not both be of type universal_integer;
the only legal interpretations of the bounds are as expressions
of the same discrete type.
*****************************************************************************
!section 04.06 (15) J. Storbank Pedersen 880218 8300959
!version 1983
!topic Implicit conversions and overload resolution
AI00136/02 deals with implicit conversions and overload resolution. However,
it only treats examples where the interpretations of a complete context has
either some implicit conversions or none at all. More interesting cases arise
when ALL legal interpretations of a complete context require some implicit
conversions. This comment is intended to clarify the implications of section
4.6(15) in such cases.
The essential part of 4.6(15) says: "An implicit conversion of a convertible
universal operand is applied if and only if the innermost complete context
(see 8.7) determines a unique (numeric) target type for the implicit
conversion, and there is no legal interpretation of this context without this
conversion." Notice that it refers to each individual (potential) implicit
conversion (by saying "THIS conversion" and not "ANY conversion").
Below are a few examples illustrating the point.
Example 1:
procedure IMPL_CONV_1 is
procedure P(X : INTEGER; Y : BOOLEAN) is
begin
null;
end;
procedure P(X : BOOLEAN; Y : INTEGER) is
begin
null;
end;
function "<"(X,Y : INTEGER) return INTEGER is
begin
return 1;
end;
begin
P(5<4, 6<7);  -- is overloading resolvable?
end;
There are two possible interpretations of P(5<4, 6<7). Each of them requires
implicit conversion of two of the universal integer literals (either 5 and 4
or 6 and 7). Hence, with reference to 4.6(15), for each of the implicit
conversions it is the case that there is ANOTHER legal interpretation of the
complete context without "this conversion". Therefore NO implicit conversion
can be applied and the call is illegal.
Example 2:
procedure IMPL_CONV_2 is
procedure P(X : INTEGER; Y : INTEGER; Z : BOOLEAN) is
begin
null;
end;
procedure P(X : BOOLEAN; Y : INTEGER; Z : INTEGER) is
begin
null;
end;
function "<"(X,Y : INTEGER) return INTEGER is
begin
return 1;
end;
begin
P(5<4, 6<7, 1<2);  -- is overloading resolvable?
end;
There are two possible interpretations of P(5<4, 6<7, 1<2). In this case 6
and 7 will be implicitly converted to INTEGER, because there are NO legal
interpretations of the complete context without these conversions. For the
remaining potential implicit conversions (5 and 4 or 1 and 2) the reasoning of
example 1 applies. Hence, the call is illegal.
Example 3:
procedure IMPL_CONV_3 is
procedure P(X : INTEGER; Y : BOOLEAN) is
begin
null;
end;
procedure P(X : INTEGER; Y : INTEGER) is
begin
null;
end;
function "<"(X,Y : INTEGER) return INTEGER is
begin
return 1;
end;
begin
P(5<4, 6<7);  -- is overloading resolvable?
end;
Also in this case two interpretations of the procedure call are possible.
The literals 5 and 4 are implicitly converted to INTEGER, because there are
NO legal interpretations of the complete context without these conversions.
The literals 6 and 7 are NOT implicitly converted (and remain universal
integers) because there IS a legal interpretation of the complete context
without these conversions (use the first P and the predefined "<" for
universal integer in 6<7). As a result the call is legal (unambiguous) and
involves implicit conversion of 5 and 4.
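One way to make the reasoning of the three examples mechanical is to describe each legal interpretation by its set of implicitly converted literal operands. The sketch below (Python, invented names; the resolution criterion is my own formalization of the per-conversion reading above, not wording from the RM) reproduces the three conclusions: a conversion is applied iff it occurs in every legal interpretation, and the call resolves iff exactly one interpretation converts exactly the applied set.

```python
# Each interpretation is named and mapped to the frozenset of literal
# operands it implicitly converts (sets taken from the analysis above).

def resolve(interpretations):
    """Return the unique surviving interpretation name, or None.
    'forced' is the set of conversions present in EVERY interpretation;
    by 4.6(15) only those conversions may be applied, so an
    interpretation survives only if it needs exactly the forced set."""
    forced = frozenset.intersection(*interpretations.values())
    survivors = [name for name, conv in interpretations.items()
                 if conv == forced]
    return survivors[0] if len(survivors) == 1 else None

# Example 1: P(INTEGER, BOOLEAN) vs P(BOOLEAN, INTEGER)
ex1 = {"P#1": frozenset({"5", "4"}), "P#2": frozenset({"6", "7"})}
print(resolve(ex1))   # None: no conversion common to all, call illegal

# Example 2: 6 and 7 are converted in both interpretations, yet neither
# interpretation needs only those conversions
ex2 = {"P#1": frozenset({"5", "4", "6", "7"}),
       "P#2": frozenset({"6", "7", "1", "2"})}
print(resolve(ex2))   # None: still illegal

# Example 3: P(INTEGER, BOOLEAN) vs P(INTEGER, INTEGER)
ex3 = {"P#1": frozenset({"5", "4"}),
       "P#2": frozenset({"5", "4", "6", "7"})}
print(resolve(ex3))   # P#1: only 5 and 4 are converted, call is legal
```

This is the "conversion set" view made explicit: the surviving interpretation's set must coincide with the intersection of all the sets.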
The above reasoning agrees with the expected interpretation of expressions
like: X'LENGTH = 2**3, where the "**" operator requires the right operand
to be an INTEGER (causing 3 to be implicitly converted to INTEGER for
ALL legal interpretations of this expression [assuming the user has not
overloaded "**"]). The rule af 4.6(15) is intended to prevent additional
implicit conversions (of 2 and X'LENGTH), so that the "=" operator is the one
for universal integer  not for some 'arbitrary' (ambiguous?) integer type
whose scope includes this expression.
*****************************************************************************
!section 04.06 (15) Chuck Engle 881012 8301022
!version 1983
!topic AI00218/07 mentions nonexistent object
There is a problem with AI00218/07 in that the discussion mentions in
paragraph 2 the following, "Thus, the declarations V1 and V2 in the question
..." yet the question does not contain a V2. V2 is left over from a former
version of this AI and thus the reference to it should just be removed from
the discussion section.
*****************************************************************************
!section 04.06 (15) Keith Enevoldsen/Boeing 881122 8301044
!version 1983
!topic Implicit conversion rules
!references AI00136, AIG(86)8.7.a.7
If there are multiple legal interpretations of a construct, but one
interpretation involves fewer implicit conversions than the others,
is overloading always resolved to that interpretation?
The ACVC Implementers' Guide (version 1, Dec 86) 8.7.a.7 says
that such a counting scheme should be used:
S3: "... it is necessary to keep track of how many implicit
conversions have been performed ..."
S4: "... since one of the operators requires more implicit
conversions ..."
S7: "... since this result type is obtained with fewer implicit
conversions ..."
S8: "Since ... requires fewer implicit conversions in its subtree ..."
AI00136 has 3 examples, all of which would be correctly handled
simply by counting the number of implicit conversions for each
interpretation and choosing the one with the fewest implicit
conversions if there is only one.
But, consider this example:
-- Example 4
procedure P (X : INTEGER; Y : BOOLEAN; Z : BOOLEAN) is ... end P;
procedure P (X : BOOLEAN; Y : INTEGER; Z : INTEGER) is ... end P;
function "<" (X, Y : INTEGER) return INTEGER is ... end "<";
...
P(5 < 4, 6 < 7, 1 < 2);  -- is overloading resolvable?
There are two possible interpretations of P(5 < 4, 6 < 7, 1 < 2).
Consider these two methods of overload resolution:
1. A "counting" scheme would resolve to the first P because the
interpretation using the first P involves 2 implicit conversions
whereas the interpretation using the second P involves 4 implicit
conversions.
2. Using the reasoning given for AI00136 examples 1 through 3:
for each implicit conversion, there is another legal interpretation
without "this conversion," so no conversions may be applied and
the construct is unresolvable.
Which method is correct?
*****************************************************************************
!section 04.06 (15) M Woodger 881105 8301136
!version 1983
!topic Conversion of universal operands of relational operator
!reference AI00039
Delete second sentence of the second paragraph (15). Example 1 of AI00039
shows it is false.
*****************************************************************************
!section 04.06 (15) Ron Brender 890605 8301293
!version 1983
!topic The number of implicit conversions
!reference AI00606/01
I support the conclusion of this AI. The "algorithm" presented in the
IG always struck me as wrong from "day zero". This is reflected in
the following comment on the IG submitted back in 1984:
Section 08.07.a.07(S3S12) Ron Brender 840104
Version G1
Topic "more or less" implicit conversions
The use of a count of the number of implicit conversions in the
model given here seems ill-advised. The ARM certainly says
nothing about "minimizing" the number of implicit conversions;
it only states conditions under which a particular implicit
conversion is applied or not. Adopting a model that counts the
number of implicit conversions invites concern about cases where
there may be a choice of implicit conversions, cases which can
be shown to be just not possible (the argument is lengthy and
will not be dealt with here).
[Ed. In light of the question in this AI, perhaps the argument
was too lengthy to be correct...]
It appears that the appeal to a count is motivated solely by the
examples involving **. But we know that the right operand of **
can never be of a universal type: the predefined ** operators all
require INTEGER and any user-defined overloading of ** must have a
non-universal type in its specification. It follows that in any
syntactic occurrence of the ** operator (whether infix or call
notation) any implicit conversions that MIGHT be applied to
satisfy the required type of the right operand in fact MUST be
applied. Consequently, the presence (never absence) of an
implicit conversion in the right operand cannot make any
difference in the resolution of any larger complete context.
Having disposed of the need to take account of implicit
conversions in the right operand of **, there appears to be no
need to have a count as such for the purposes of the remaining
part of the model: it suffices to replace the current use of the
count with a simple bit indicating whether or not an
interpretation involves any implicit conversion (other than in the
right operand of **). I think everything else goes through
without further difficulty.
-- End of Forwarded Message --
*****************************************************************************
!section 04.06 (15) Hans Hurvig 890704 8301321
!version 1983
!topic Implicit conversion rules
!reference AI00606/01
I agree with the conclusion of AI00606. However, Ron Brender's
comments (which were circulated at the ARG meeting in Madrid)
contain what I believe to be a mistaken conclusion, and since the
area is still surrounded by some confusion I offer the
following analysis.
The rule should be understood in terms of "conversion sets":
the set of (convertible) operands which in a given resolution
undergo implicit conversion.
Now, 4.6(15) says that if a conflict arises, you choose the
resolution (if any) whose conversion set is a proper subset of
the conversion sets of all conflicting resolutions.
The count-approach (representing just the cardinality of the
sets) and the bit-approach (representing just whether the sets
are empty) are attempts to avoid having to represent the sets
in their entirety.
The count-approach is invalidated by the example in AI00606.
The bit-approach cannot be repaired by making special cases for
the "**" operator. Similar cases (as well as more complicated
ones) can be constructed without "**":
function "<" ( L,R: INTEGER ) return INTEGER;
function F ( L: BOOLEAN; R: INTEGER ) return BOOLEAN;  -- F1
function F ( L: INTEGER; R: INTEGER ) return INTEGER;
Now the call F(3<4,5) gives rise to two non-conflicting
resolutions which both contain conversions (of the right
operand), but where further analysis is still allowed to
select F1 if needed:
procedure P ( X: BOOLEAN );  -- P1
procedure P ( X: INTEGER );
...
P ( F(3<4,5) );  -- legal, calls F1 and P1, only 5 converted
Note however, that the count-approach is sufficient to
identify the potential survivor among the conflicting
resolutions: lower cardinality is a necessary, though not
sufficient, condition for being a proper subset.
Finally, on a more pedantic note, the formulation of 4.06(15) is
unfortunate because it is so indirect: it specifies what must be
satisfied, not how to find it. This gives some problems with
circularity, where the legality of X is defined in terms of the
legality of Y, whose legality may be defined in terms of the
legality of X. This actually happens if you look very closely at
the example in AI00606, but of course the (pragmatic!) solution
is that both are illegal.
*****************************************************************************
!section 04.06 (15) Hans Hurvig 890704 8301323
!version 1983
!topic Not COMPLETE context for implicit conversions
The last sentence of 4.6(15) speaks of the innermost COMPLETE
context when deciding whether to apply an implicit conversion.
This is in stark conflict with many other rules in the RM and
is definitely not the way it is handled by implementations, which
apply the rule locally during bottom-up analysis, and not at the
complete-context level.
Consider for example:
function "<" ( L,R: INTEGER ) return INTEGER;
function F ( X: INTEGER ) return INTEGER;  -- F1
function F ( X: BOOLEAN ) return STRING;   -- F2
...
...... LONG_INTEGER ( F(3<4) ) ......
The call F(3<4) is an operand to a type conversion, so its type
must be determinable independently of the context, 4.6(3).
Both F1 and F2 are possibilities, but since F1 requires
implicit conversions while F2 requires none, F2 is chosen.
However, STRING is not a valid type for conversion to
LONG_INTEGER so the construct is illegal.
Now let us examine more closely why we chose F2 over F1. The last
sentence of 4.6(15) says that an implicit conversion is applied
only if there is no legal interpretation, in the COMPLETE
context, without this conversion. But this is exactly the case:
F2 was not legal in the complete context, so we should apply the
implicit conversions after all and call F1.
This clearly conflicts with virtually any rule saying that a type
must be determinable independently of the context.
*****************************************************************************
!section 04.06 (15) D. Eilers 900813 8301392
!version 1983
!topic predefined operations for derived type not always available
!reference ai00148
Consider the following:
package p is
type my_int is new integer;
end p;
with p;  -- no use clause
package q is
i: p.my_int := 1;  -- is this legal? (apparently not)
c: constant := 1;
j: p.my_int := c;  -- no problem
k: p.my_int := p.my_int(1);  -- no problem
end q;
As a consequence of wording in 4.6(15) which is similar to 3.6.1(2), this
seems to be a surprising situation where, like 1..10 in a discrete range
(AI00148), a constant declaration or providing an explicit type can be used
to achieve an effect which is otherwise not directly achievable.
*****************************************************************************
!section 04.06 (15) Tucker Taft 910227 8301408
!version 1983
!topic AI00606 should be classed as "pathology"
I notice that AI00606/03 was approved by the ARG at the September meeting.
This is the AI which states that it is inappropriate to "count" implicit
conversions when trying to perform overload resolution. It then goes on to
include an example where a "counting" implementation would resolve an
expression, but a non-counting one would (correctly?) declare it ambiguous.
Shouldn't this be labeled a "pathology" to prevent ACVCs from testing it,
since presumably, most compilers use the counting strategy recommended in the
AIG?
Also, I am curious whether anyone knows of an implementation which uses the
"proper subset" approach described in the AI. It sounds like a bear to
implement properly. Is there some trick?
Finally, I wonder whether anyone has given thought to a
simpler-to-implement-correctly-once-and-for-all rule appropriate for Ada 9X,
which would be upward-compatible in the non-pathological cases. Ideally the
rule would be based on "local" context rather than global context.
For reference, here are some test cases:
function "<"(Left, Right : Integer) return Integer;
procedure P(A : Integer; B : Integer; C : Boolean);
procedure P(A : Boolean; B : Boolean; C : Integer);
procedure Q(A : Integer; B : Integer; C : Boolean);
procedure Q(A : Integer; B : Boolean; C : Boolean);
. . .
P(1<2, 3<4, 5<6);  -- *not* resolvable according to AI00606/03
--   "counter" would resolve to P#2 (fewer conversions)
--   "proper-subsetter" would correctly(?) reject this
Q(1<2, 3<4, 5<6);  -- *yes* resolvable according to AI00606/03
--   "counter" & "proper-subsetter" would resolve to
--   Q#2 (fewer, proper subset of conversions)
--   "single-bit" approach would not resolve this
--   (assuming I understand it correctly).
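For illustration, the strategies contrasted in these test cases can be sketched as follows (Python as a stand-in; all names invented; each interpretation is written as the set of literals it implicitly converts, taken from the analysis in the comments above):

```python
# "Counter" strategy: pick the unique interpretation with the fewest
# implicit conversions.
def by_count(interps):
    best = min(len(s) for s in interps.values())
    hits = [n for n, s in interps.items() if len(s) == best]
    return hits[0] if len(hits) == 1 else None

# AI-00606-style "proper subset" strategy: pick the interpretation whose
# conversion set is a subset of every other interpretation's set.
def by_proper_subset(interps):
    hits = [n for n, s in interps.items()
            if all(s <= t for t in interps.values())]
    return hits[0] if len(hits) == 1 else None

# P(1<2, 3<4, 5<6): P#1 converts 1,2,3,4; P#2 converts 5,6.
p_call = {"P#1": frozenset({"1", "2", "3", "4"}),
          "P#2": frozenset({"5", "6"})}
# Q(1<2, 3<4, 5<6): Q#1 converts 1,2,3,4; Q#2 converts 1,2.
q_call = {"Q#1": frozenset({"1", "2", "3", "4"}),
          "Q#2": frozenset({"1", "2"})}

print(by_count(p_call), by_proper_subset(p_call))
# counter picks P#2, proper-subsetter rejects: {5,6} is smaller but is
# not a subset of {1,2,3,4}, so the call is ambiguous under AI-00606
print(by_count(q_call), by_proper_subset(q_call))
# both pick Q#2: {1,2} really is a proper subset of {1,2,3,4}
```

A single conversion bit cannot distinguish the Q interpretations, since both involve some conversion, which is why the "single-bit" approach fails to resolve the second call.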
Tuck
*****************************************************************************
!section 04.06 (21) M Woodger 881116 8301137
!version 1983
!topic Helpful example
Add to the examples of implicit conversion of each integer literal:
"STARS(1..15)   -- see 4.1.2"
*****************************************************************************
!section 04.08 Software Leverage, Inc. 831117 8300223
!version 1983
!topic Must allocated objects be in the designated subtype?
Consider the following example:
subtype S is STRING(1..10);
type A is access S;
X:A;
...
X := new STRING(1..4);
The designated subtype of A is S. Section 3.8(3) claims that values
of type A created by allocators designate objects of subtype S: "Each
such access value designates an object of the subtype defined by the
subtype indication of the access type definition; this subtype is
called the designated subtype."
However, section 4.8 "Allocators" does not define any rules that
ensure this. In fact section 4.8(5) contradicts section 3.8(3) by
stating, "If the allocator includes a subtype indication, the created
object is constrained either by the subtype or by the default
discriminant values." Although it doesn't say which, we can guess that
in this case it is constrained by the subtype.
Section 4.8(6) says, "Initializations are then performed as for a
declared object (see 3.2.1) ...." In this case the initialization is
considered implicit. Section 3.2.1 does not say anything about
implicit initialization of array bounds.
Thus, in the example above, section 3.8(3) implies that X designates a
string whose bounds are 1..10, but 4.8(5) implies that X designates a
string whose bounds are 1..4.
Note that the assignment to X does not check that the bounds of the
string are 1..10. Section 5.2(3) says that for an assignment
statement, "A check is then made that the value of the expression
belongs to the subtype of the variable...." The subtype of X is A.
According to section 3.3(4), A is an unconstrained subtype, and
therefore, "imposes no restriction". Therefore, all possible values
of type A will satisfy the check.
Another example that exhibits similar behavior is:
type R(D: INTEGER := 17) is
record
...
end record;
type acc_R is access R(45);
X: acc_R := new R;  -- use the default expression "17".
In this case section 4.8 tells us to allocate an object with the
discriminant D constrained to be 17. Section 3.2.1 says that the
value 17 is checked against the subtype INTEGER, but it does not
require a check of the whole object against the discriminant
constraint.
Apparently, there is a missing check. The manual should require that
an allocator check the newly created object against the designated
subtype. There is some question, however, as to whether this check
should be preceded by an implicit subtype conversion in the case of
arrays. There is also some question as to when the check should be
done: should it be done before or after implicit initialization is
done for the rest of the components?
************************************************************************
!section 04.08 (05) Vittorio Zecca 850122 8300494
!version 1983
!topic Allocators with subtype indication of a scalar type are erroneous
Add "If the allocator includes a subtype indication and the base type is
scalar, or derived directly or indirectly from a scalar type, the value of the
object created by the allocator is the lower bound of the subtype."
Please consider 3.8(6): An access value belongs to a corresponding subtype of
an access type either if the access value is the null value or if the value of
the designated object satisfies the constraint.
3.2.1(16): The initialization of an object checks that the initial value
belongs to the subtype of the object.
5.2(3): A check is then made that the value of
the expression belongs to the subtype of the variable.
By comparing 3.8(6), i.e. the definition of compatibility between an access
value and its subtype, with the definitions of compatibility in object
initializations and in assignment statements, an evaluation of the object
designated by the access value is required, and this means that all programs
like the following are erroneous:
procedure P is
type T is access INTEGER range 110..120;
V : T := new INTEGER;  -- by 3.2.1(16) the (undefined) value designated by
-- the allocator should be >= 110 and <= 120,
-- but this value is undefined and then the program
-- is erroneous
begin
V := new INTEGER;  -- by 5.2(3) the same problem arises
end;
See also Ada Compiler Validation Capability release 1.5
test cases c48004ab.ada block A
containing the assignment statement VA := new TA; and c62003ab.ada block D
containing the statement I := new INTEGER;
In my opinion the ANSI standard should be changed so that it would be possible
to assign to an access variable an allocator of that kind without making the
execution erroneous.
The easiest way to solve this problem is to implicitly initialize the object
created by the allocator, for example to T'FIRST, where T is the designated
subtype.
With this rule the value designated by V in the previous example would be 110.
I also suggest adding an example in 4.8(14) showing explicitly what is
happening.
An alternative could be to add after 3.8(6) "No check is done if the base type
of the designated subtype is a scalar type, or is derived directly or
indirectly from a scalar type."
Anyway this is uncomfortable to me, because the designated value in this case
is a black hole.
Best regards - Vittorio Zecca

************************************************************************
!section 04.08 (05) Peter Belmont (Intermetrics) 85410 8300527
!version 1983
!topic Are constraints to allocated access values to be ignored?
Recommendation
A discriminant or index constraint, when applied to or implied
by the type mark of an access type in an allocator, is ignored.
Discussion
Consider the example:
declare
type A1 is access STRING ;
subtype S1A1 is A1(1..10);
subtype S2A1 is A1(1..5);
type A2 is access A1;
V1A2 : A2 ;
V2A2 : A2 ;
begin
V1A2 := new A1(f..g);           -- [1]
V1A2.ALL := new STRING'("qw");  -- [2]
V2A2 := new S1A1;               -- [3]
V2A2 := new S2A1;               -- [4]
V1A2 := V2A2;                   -- [5]
V2A2.ALL := new STRING'("df");  -- [6]
end;
I assume that the constraint explicitly written at [1] is
elaborated and checked; and that CONSTRAINT_ERROR will be raised
if its values are, for example, (5..10).
I must ask, however, if its constraint, if proper, is to be
recorded with the access object V1A2.ALL, or if its constraint
(having been elaborated and checked) is to be thrown away, ignored.
If to be recorded, then constraints must be recorded also at [3] and
[4], copied at [5], and checked at such assignments as [2] and [6].
To decide this question, I turn to RM 4.8(5).
I believe that its first three sentences mean to discuss only
record and array heap objects, not (for example) access heap objects.
For its fourth sentence begins: "For other types..."
This reading implies that constraint information must be recorded
on the heap for all record and array objects allocated there
and requiring them; and that all other objects allocated on the heap
are constrained "by the subtype indication of the access type
definition." This means to me that the allocation of access objects
on the heap does not imply recording any (index or discriminant)
constraint that may be explicitly (or implicitly) provided, as
at [1] (and at [3]).
If this interpretation is correct, then the example at [1]
is one of the few cases in Ada where an explicit constraint
is subsequently ignored. It joins a list (which should appear
in an RM appendix) of cases where subtypes, though given, are
ignored, for example, subtypes of parameters in renamed
subprograms.
************************************************************************
!section 04.08 (05) Woodger/Alsys 850423 8300536
!version 1983
!topic subtype of created objects of access type
!reference AI00331, 8300527
Peter is right in his reading of this paragraph. It was intended
to be read so that the second and third sentences expand on the
first sentence, and specify what the constraint on the created
object shall be in the case that its type is an array type or a
type with discriminants.
(Try replacing the first period by a colon and the second period
by a semicolon.)
For other types, including access types, the last sentence of
4.8(5) applies: the subtype of the created object is that defined
by the definition of the access type. This means that a subtype
indication for an access type used in an allocator, such as
Peter's example
V1A2 := new A1(f .. g);
will, as he suggests, be ignored in determining the subtype of
the created object.
The consequences of 4.8(6) in this case are that:
(a) The subtype indication A1(f..g) is elaborated, so that the
constraint is checked (3.3.2(b)) and f and g must be
POSITIVE.
(b) "Initializations are performed as for a declared object (see
3.2.1)" and "initializations are implicit". This has the
effect of invoking the second sentence of 3.2.1(6), and
3.2.1(10), so that the created object is initialized with
the null access value.
This is evidently a "ramification" or "confirmation".
*****************************************************************************
!section 04.08 (05) J. Goodenough 851022 8300675
!version 1983
!topic Performing default initializations before subtype checks
Consider the following example:
type REC (D : INTEGER) is
record
C : INTEGER := function_with_side_effect;
end record;
type AC_REC_3 is access REC(3);
VAR : AC_REC_3 := new REC(4);  CONSTRAINT_ERROR raised?
May the default expression be evaluated before CONSTRAINT_ERROR is raised?
Must it be evaluated? Or is its evaluation forbidden?
Note that in the above case, CONSTRAINT_ERROR is raised by the allocator since
the designated object does not satisfy the constraint specified for type
AC_REC_3. AI00150 dealt with a similar situation in which the recommendation
included a statement saying that it was undefined whether the check for
subtype compatibility could be performed before or after a designated object
is created. Since creation of an object normally implies that any default
initializations are performed [4.8(6)] if no explicit value is provided in the
allocator, it would seem reasonable to extend AI00150's remark to say that
default initializations may be performed before the check is made.
In fact, AI00150 gives an example where the default initialization of a
discriminant must be performed before the check can be made.
This comment arose because test C48008A required that no default
initializations be performed, and this seems to be too severe a requirement,
given AI00150.
*****************************************************************************
!section 04.08 (05) M Woodger 881105 8301138
!version 1983
!topic Punctuation
Replace period by colon after the first occurrence of "constrained". The
following text is explaining how it is constrained.
Replace period by semicolon after "values". This separates the treatment of
the two possible forms of allocator. Now the last sentence, beginning "For
other types", is clearly relating to the first part of the paragraph.
*****************************************************************************
!section 04.08 (05) M Woodger 881105 8301139
!version 1983
!topic Subtype check on created object
!reference AI00150
Following "value." insert the sentence:
"The exception CONSTRAINT_ERROR is raised if the created object does not
belong to the designated subtype."
*****************************************************************************
!section 04.08 (06) F.Mazzanti 880704 8300980
!version 1983
!topic Undefined variables created by allocators
In 3.2.1(17) it is stated that the value of a variable is undefined after
elaboration of the corresponding object declaration unless an initial value
is assigned to the variable by an initialization (explicitly or implicitly).
This definition of undefined value is later used for the definition of
a case of erroneous execution in 3.2.1(18).
Current wording does not seem to include the case of objects created by
allocators as illustrated below:
type REF is access INTEGER;
PTR:REF := new INTEGER;
Y:INTEGER := PTR.all;  -- should be erroneous
Notice that the object created by the allocator is not required by 3.2.1(17)
to be undefined, as no corresponding object declaration exists.
Something equivalent to 3.2.1(17) should then be added in 4.8 ( 4.8(6)).
*****************************************************************************
!section 04.08 (06) M Woodger 881105 8301140
!version 1983
!topic Inconsistency
The second and third sentences should be combined thus:
"The new object is then created and initializations are performed as for ...".
This avoids contradicting 3.2.1(6,7), which has object creation after the
obtaining of initial values.
*****************************************************************************
!section 04.08 (06) A. Blakemore 900626 8301382
!version 1983
!topic Must allocators work in the presence of concurrency?
Here is a question for the language lawyers.
First, recall Ada LRM 3.8(3)
"The objects designated by the values of an access type
form a collection implicitly associated with the type."
The question is:
Can multiple tasks allocate objects from the same collection safely,
without having to serialize calls to new and unchecked_deallocation ?
Are implementations responsible for ensuring that the shared variables
(heap allocation state variables etc) implicit in the access type
declaration are correctly handled in the presence of concurrency ?
Or is it the programmer's responsibility to serialize calls to allocators
for fear they may be interrupted during critical sections ?
Or is it unspecified and therefore erroneous if it just happens to work?
I have never found a mention of this in the LRM and have always wrapped
allocate/deallocate tasks around "new" in those cases
(or avoided global access types). Was I being paranoid?
Consider
package global_types is
   type crud is access integer;
end global_types;
...
task body A is
   crudptr : global_types.crud;
begin
   crudptr := new integer;
   ...
   free (crudptr);  -- unchecked deallocation
end A;
task body B is ...  -- the same

-- Alternative -- is this necessary?

package paranoid_global_types is
   type crud is access integer;
   task crud_mgr is
      entry create (bozo : in out crud);
      entry free (bozo : in out crud);
   end crud_mgr;
end paranoid_global_types;
....
task body A is
   crudptr : paranoid_global_types.crud;
begin
   paranoid_global_types.crud_mgr.create (crudptr);
   ...
   paranoid_global_types.crud_mgr.free (crudptr);
end A;
task body B is ...  -- the same
Alex Blakemore Internet: blakemore@software.org
Software Productivity Consortium UUNET: ...!uunet!software!blakemore
2214 Rock Hill Rd, Herndon VA 22070 Bellnet: (703) 742-7125
*****************************************************************************
!section 04.08 (06) R. Eachus 900703 8301383
!version 1983
!topic Response to Blakemore
!reference 8301382
Your question really has two parts, and I'll try to answer both;
but since a quick check shows no AIs on this issue, I am also
forwarding your posting to John Goodenough (in case he missed it) and
adacomment, since there probably should be an AI even if it is only a
confirmation.
First, if you are only calling new for a type, it is my opinion
that you should never need to create your own critical regions; the
compiler should do that for you. Also, if you have calls to
unchecked_deallocation for the same collection in more than one task,
the standard seems to be clear that there will be no problem. But if
you mix the two? Your protections may be a good idea.
Note that when you leave a scope, on many compilers certain
objects and collections are freed. Since the user cannot easily
supply a critical region around this boundary, requiring that he do so
in other cases seems silly.
*****************************************************************************
!section 04.08 (07) R. W. Shore/BoozAllen 871112 8300954
!version 1983
!topic Releasing heap storage associated with task type instances
The Ada compiler we are using does not allow heap space associated with an
instance of a task type to be recovered under any circumstances. (Comment in
a paper appearing in Ada Letters, Sept-Oct 1987, vii.5, 106) [Such
restrictions should be considered in the proposed revision of Ada. J.
Goodenough]
*****************************************************************************
!section 04.08 (11) Software Leverage, Inc. 841010 8300452
!version 1983
!topic Restrictions on Name for Pragma Controlled
Paragraph 4.8(11) includes the statement that "A pragma CONTROLLED for a given
access type is allowed at the same places as a representation clause for the
type (see 13.1)." This is clear, but for pragma Pack we find that "The position
of a PACK pragma, AND [emphasis added] the restrictions on the named type, are
governed by the same rules as for a representation clause...".
Presumably, it was the intent to have the same restrictions on the named type
for pragma Controlled as for pragma Pack. Although the italicized syntax
specifies a type name, so does that for pragma Pack, and the latter allows a
first named subtype as well. Is this correct?
In particular, is it correct that the following is illegal:
type T is access...
subtype S is T;
pragma Controlled(S);  -- not a first named subtype
************************************************************************
!section 04.08 (14) M Woodger 881105 8301141
!version 1983
!topic Helpful example
Add the line:
" new BUFFER  -- default discriminant value used; see 3.7.1"
*****************************************************************************
!section 04.09 Software Leverage, Inc. 830914 8300073
!version 1983
!topic Static integer subtypes
According to the RM, if
type T is range 1..10;
then the subtype T is not a static subtype, T'FIRST, T'LAST, and
T'RANGE are not static, and so on.
This follows because, by 3.5.4(4), the above declaration is equivalent
to
type integer_type is new predefined_integer_type;
subtype T is integer_type
range integer_type(1) .. integer_type(10);
and, by 4.9, type conversions are not static.
Is this really true?
************************************************************************
!section 04.09 Ron Brender 831029 8300202
!version 1983
!topic Static integer subtypes
!reference AI00023
If one accepts the conclusion that in
type T is range 1 .. 10;
that T is not a static subtype, then I submit that one must also
conclude that an implicit conversion of any integer or real literal,
or any named number, is also not static. A consequence is that there
will be almost no static expressions (only universal expressions would
be static) and there will be absolutely no static subtypes or static
ranges left in Ada!
(Aside: This neatly "solves" the problem posed in AI00001: since a
constant (other than a named number) is not of a universal type, it
can't be static; thus, no renaming is ever static...)
This consequence is certainly contrary to all intents and reasonable
expectations. No matter how plausible this consequence might appear
from 3.5.4(4), it simply does not make sense for the LM committee/Ada
Board to ratify it.
Perhaps one interpretation that does not appear to contradict 3.5.4(4)
too badly is to assert that the definition of staticness applies to an
expression "as written" or at least is independent of any implicit
type conversions. But the most direct, and understandable, approach
would be to simply "correct" the RM to the effect that an implicit
type conversion is static if and only if its operand is static.
************************************************************************
!section 04.09 12 Ron Brender 840126 8300286
!version 1983
!topic Staticness and Generic Formal Types
!References AI00023
Consider this example
generic
type T is range <>;
package PACK is
type ARR is array (T) of BOOLEAN;
function F return ARR;
end;
package body PACK is
function F return ARR is
begin
return (10000 => FALSE, 10001 => TRUE);  -- legal?
end;
end;
In an aggregate with more than one named association, each choice must
be static. The expressions clearly satisfy the requirements of
4.9/3-10, that is, (a) through (h). 4.9/2 also requires of a static
expression that it "delivers a value (that is, it does not raise an
exception)". However, there is no way to determine at the point of
the aggregate whether or not the expression raises an exception,
because the "real" base type for the implicit conversion of these
expressions to type T is not known.
It appears that either we need an additional rule in 4.9 and/or 12.1
to the effect that an implicit conversion to a generic formal type is
not static or we need to accept that there is a violation of the
contract model.
It has been suggested in AI00023 that because type conversions are
not static therefore implicit type conversions are not static and
therefore only predefined types/subtypes are static. The consequences
of this reasoning are so onerous that I am confident that such an
interpretation will not be ratified. What this example shows is that
even an interpretation that distinguishes between implicit and
explicit type conversion on the basis that an implicit type conversion
is not a primary of an expression as used in 4.9 is not, of itself, a
sufficient basis for resolution of this issue.
I have conducted an exhaustive search of the entire RM to consider
every occurrence of the word "static" to see whether I could construct
any other kind of example involving the interaction of staticness and
generic formals. For the benefit of others who might also wish to
explore this puzzle, the significant sections are: 3.2.2, 3.5.4,
3.5.7, 3.5.8, 3.6.2, 3.7.3, 4.1.4, 4.3.1, 4.3.2, 4.9, 5.4, 9.8, 12.1,
13.2, 13.3 and 13.5.
It is interesting to note that interaction problems are avoided in
most cases in one of two ways:
1. The type of an expression that must be static is explicitly
required to be of some specific type, such as
universal_integer or predefined INTEGER, or
2. The type of an expression that must be static is allowed to
be of any integer type or any real type, and the only way
to "guide" overloading resolution to a generic formal type
necessarily involves uses of the name of that generic formal
type or its attributes which are disallowed in static
expressions by 12.1.
In the two remaining cases of possible interaction, there are explicit
rules disallowing generic formal types, notably: discriminants used
in variant parts (3.7.3) and case statements (5.4). This suggests
that it was a simple oversight that an essentially similar problem
exists for array aggregates. Therefore, another resolution is to
adopt this rule as a ramification/extrapolation of the RM:
"A named association, except for one with an others choice, is
not allowed in an array aggregate if the type of any choice is a
generic formal type."
************************************************************************
!section 04.09 (02) Norman Cohen/John Goodenough 830610 8300180
!version 1983
!topic membership tests, shortcircuit operators in static expressions
This paragraph applies only to the primaries and operators of expressions.
Thus the Boolean expression
10 in Some_Subtype
is static since 10 is a primary, "in" is not an operator (it is an operation;
see 4.5(1, 2)), and Some_Subtype is neither an operator nor a primary (see
4.4). Note that the expression is static even if Some_Subtype is not static!
The same reasoning holds for expressions containing short circuit operations:
True and then False
is static.
Is this the intent?
************************************************************************
!section 04.09 (02) J Storbank Pedersen(DDC) 830829 8300242
!version 1983
!topic Membership tests and short circuit control forms
The current definition of static expressions implies that membership tests may
be static expressions even if they include a nonstatic subtype or range. The
reason is that in and not in are not operators and no specific rule is given
for membership tests, e.g. requiring the subtype or range to be static.
Proposed solution (new rule):
A membership test is static if and only if the expression is static and
the subtype or the range is static.
Short circuit control forms are not operators (either), so according to 4.9(2)
such expressions are static if and only if both operands are static. They
ought to be mentioned explicitly, and due to the special rules for their
evaluation they should be considered static "whenever possible", that is:
and then: static if the left operand is static and equals false, or if
the left operand is static and equals true and the right
operand is static.
or else: static if the left operand is static and equals true, or if
the left operand is static and equals false and the right
operand is static.
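For illustration, under the proposed rules (F here is an assumed non-static
parameterless BOOLEAN function, invented for this example):
B1 : constant BOOLEAN := FALSE and then F;  -- static: left operand is static FALSE
B2 : constant BOOLEAN := TRUE and then F;  -- not static: right operand is not static
B3 : constant BOOLEAN := TRUE or else F;  -- static: left operand is static TRUE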
************************************************************************
!section 04.09 (02) Ada Group Ltd 840207 8300323
!version 1983
!topic expression with catenation is static
According to this section, the following expression is a static
expression:
'a' & 'b' = 'c' & 'd'
since each primary is an enumeration literal, the expression delivers a
value of a scalar type (BOOLEAN), and "&" and "=" are predefined operators. The
fact that 'a' & 'b' is not a static expression doesn't matter. The
complete expression given above satisfies the definition of a static
expression, even though this is probably not what was intended.
************************************************************************
!section 04.09 (02) J. Goodenough 840524 8300376
!version 1983
!topic Re: Staticness and Generic Formal Types
!reference 8300286
Ron Brender argues that the legality of an aggregate such as (10_000 => FALSE,
10_001 => TRUE) depends on whether the value 10_001 belongs to the index base
type, and if the index base type is a generic formal type, this decision cannot
be made at compile time, since the base type of the generic formal type is not
known. I think that an implementation is always allowed to consider such an
aggregate legal.
Let's first consider a case that is somewhat simpler than Ron's:
Big : constant := Integer'Pos (Integer'Last) + 1;
type Arr is array (Integer range <>) of Boolean;
The aggregate Arr'(Big => True, Big + 1 => False) need never be rejected as
illegal because an implementation is always allowed to let the implicit
conversion of Big to succeed. 3.5.4(10) says, "An implementation is not
required to raise the exception NUMERIC_ERROR [for an implicit conversion] if
the [conversion] is part of a larger expression whose result can be computed
correctly, as described in section 11.6." A choice in an aggregate is always
"part of a larger expression". In general, the choices in such aggregates must
be checked against the index subtype, and this check will raise CONSTRAINT_
ERROR if the choice values do not belong to the base type; failure of this
check does not make the aggregate illegal; after all, the index subtype might
be nonstatic, so the check cannot in general be performed at compile time.
Hence, the base type conversion, which would make the aggregate illegal if
performed at compile time, can always be omitted and replaced with a subtype
check, which will raise CONSTRAINT_ERROR (not NUMERIC_ERROR) at run time.
Sometimes, even the index subtype check can be omitted, as in the case:
X : ARR (1..2) := (Big => True, Big + 1 => False);
Since the choices in this case serve only to say how many components there are
and in what order the components occur, the fact that the choice values do not
belong to the index subtype can be ignored (i.e., no exception need be raised)
and X can be assigned the value (True, False).
Now suppose we have a function:
function F return Arr is
begin
return (Big => True, Big + 1 => False);
end F;
Once again, the implementation can accept the aggregate as legal because
CONSTRAINT_ERROR will be raised at runtime, when the check is made that the
index values belong to the index subtype.
Given that it is never NECESSARY to declare an aggregate with integer choices
illegal just because the implicit conversion of the choices to the index base
type would raise NUMERIC_ERROR, there is no problem when the index subtype is a
generic formal type. In this case, since the base type is not known, the
implicit conversions cannot be evaluated at compile time, and so, all such
aggregates must be considered legal. Later, when the actual index subtype is
known, the check to see if the choices belong to the index subtype will raise
an exception (CONSTRAINT_ERROR, not NUMERIC_ERROR) if the choice values are too
large or too small. If the choice values exceed SYSTEM.MAX_INT, then an
implementation can reject the aggregate on the grounds that it does not support
such integer values. This would be necessary, for example, in the case (Big =>
True, Big + 2 => False). This aggregate is illegal because not all index
values are covered, but an implementation would not be able to detect this if
it does not support compiletime computations on integer values greater than
INTEGER'LAST. Such an aggregate would be rejected even if the index subtype
were a generic formal type.
In short, I think there is no semantic or implementation difficulty with
allowing static expressions as choices when the index subtype is a generic
formal type.
I do suspect that if this problem had been pointed out during language design,
a restriction would have been imposed on such choices, probably the restriction
Ron proposes. But Ron's restriction is not really needed at this point.
If my argument above is not considered convincing, then I think the only
restriction that is motivated by the current wording of the RM is that choices
of an aggregate are considered nonstatic if the index subtype is a generic
formal subtype. This would allow (BIG..BIG+1 => True) even if the choice type
is a generic formal type.
************************************************************************
!section 04.09 (02) P. N. Hilfinger 840524 8300378
!version 1983
!topic Re: Staticness and Generic Formal Types
!reference 8300286
I think that one can indeed get a consistent set of rules along the lines
Goodenough suggests. However, it seems to me that the LRM comes down on
another side. The phrase (4.3.2(1)) ``each choice must specify VALUES of the
index type'' seems quite clear to me. In your example, the value of Big is
not in the type INTEGER, and so the aggregate is illegal. For Ron's problem,
as the manual is currently written, we must determine the validity of these
aggregates at generic instantiation time. I guess the question is whether
this is ``sufficiently wrong.''
************************************************************************
!section 04.09 (02) Ron Brender 840530 8300379
!version 1983
!topic Re: staticness and generic formal types
!reference 8300286, AI00190
Goodenough raises some good points in his comment of 840524. But
there are a few other considerations as well.
A key consideration is that the overall design of array aggregates
carefully divides them into two classes: those with static choices
which are allowed to have more than one choice and those with
nonstatic choices which are restricted to a single choice. This
distinction was made in order to minimize the significant compiletime
complexity and likely runtime overhead of dealing with multiple
nonstatic choices. In particular, the implementation must assure
that multiple ranges both are complete and have no overlaps. It is
natural to expect that an implementation will take advantage of this
distinction in the way that it organizes its compile-time strategies.
For example, it is likely to use data structures that hold the results
of evaluating static expressions in the first case for use in carrying
out the completeness and overlap checks, and completely different data
structures for the nonstatic, single choice case.
If Goodenough's conclusion were accepted, then there would be yet a
third class: aggregates with multiple choices that are not really
static, but which must be considered static according to the text of
4.9. For these aggregates, presumably the implementation would need
to evaluate the expressions at compiletime assuming some type whose
range is at least MIN_INT..MAX_INT and use that assumed value for
overlap checks; the completeness check can't be performed until the
actual bounds of the generic formal index subtype are known at
runtime; the best that could be done is to assume that the union of
the choices is a single contiguous range. Good quality runtime code
would dictate keeping track of which choices produced the minimum and
maximum values for checking against the actual subtype at runtime. I
agree with Goodenough that this is doable, but I think he passes off
too lightly the additional implementation and runtime complexity.
Further, I believe this additional complexity was not intended by the
language designers.
John makes a plausible argument to the effect that an implementation
"is always allowed" to consider aggregates legal that have choices of
a generic formal type. (It is less clear whether he is also arguing
the converse: that an implementation is also "allowed" to consider
them illegal.) The interesting part about his argument is that it
appears to be equally valid if applied to the choices in a case
statement or the choices in a variant record. Yet, the RM explicitly
specifies that case statements and variant parts with choices of a
generic formal type are illegal. It would be most strange for the
opposite to be true about array aggregates.
Goodenough suggests an alternative restriction in his last paragraph,
namely, that the choices of an aggregate be considered nonstatic if
the index subtype is a generic formal subtype. Actually, I like this
rule much better than the rule I proposed in 8300286 because it seems
to better reflect the semantic reality of the situation rather than
being a rather ad hoc restriction. Indeed, this rule makes the ad hoc
restrictions regarding case statements and variant parts unnecessary
because a choice that was both static and of a generic formal type
could not occur.
I proposed the rule making such aggregates illegal precisely because
that was the precedent established by the RM. I also sensed that in
the related issues involving the staticness of integer type
definitions (see AI00023) that there was a reluctance to tinker with
the definition of staticness in 4.9 in relation to implicit type
conversions.
On balance, however, I certainly would not oppose the alternative
rule.
************************************************************************
!section 04.09 (02) Software Leverage, Inc. 841010 8300454
!version 1983
!topic Numeric Literals not Always Static?
There is an apparent gap in 4.9 in the definition of static expressions, in
that it seems that numeric literals should not be static if they are implicitly
converted to a generic formal integer or real type. (Or a type derived from
such, if that were possible.)
Without such a requirement, it would be possible to violate the contract model
for generics as follows:
generic
type T is range <>;  -- implicitly declares "+", "-"
package G;

with System;
package body G is
L: constant := System.Max_Int - 1;
type F is digits G."-"(G."+"(1, L), L);  -- legal?
end G;
The expression is, by taking 4.9(2) literally, only static if no exception is
raised. If there is a predefined integer type longer than Integer, the above
could raise Numeric_Error for some instantiations but not for others. Even if
the implementation could invoke 11.6(6) to avoid raising the exception, the
user couldn't assume this to be the case; and we aren't sure that one could
always escape in this manner, although it sounds plausible. In any case, it
seems likely that it was not intended that implicit conversions to nonstatic
base types be static, since if the above real type declaration had been written
as
type F is digits (T'(1) + T'(L) - T'(L));
it would not have been legal.
Are implicit conversions to nonstatic base types actually static if the
operand is?
************************************************************************
!section 04.09 (02) R. Pierce 881129 8301051
!version 1983
!topic Is a renamed predefined operator predefined?
Is an expression static if it contains an operator which is a renaming of a
predefined operator? A renamed constant is OK in a static
expression (AI00001), as is a renamed enumeration literal (AI00438).
By analogy, it would seem reasonable to allow an operator symbol which
identifies a renaming declaration which renames a predefined operator to be
a valid constituent of a static expression. However, some validated
compilers accept the following example and some reject it.
package P is
type MY_INT is range -2**15 .. 2**15 - 1;
end P;
function "+" (L, R : P.MY_INT) return P.MY_INT renames P."+";  -- avoid use clause
V : constant P.MY_INT := 6;
........
case E is
when V + 1 => ....  -- is the expression static?
VADS version 5.5 rejects this, although version 5.4 and previous accepted
it. DEC Ada accepts it.
Regards,
Ron Pierce
*****************************************************************************
!section 04.09 (02) M. Woodger 890318 8301275
!version 1983
!topic Errors in AI00190/05
!reference AI00190/05
1. In the discussion section of this Commentary, the first paragraph
describes the possibilities for an expression of generic formal
type to be static, and excludes (for example) static qualified
expressions.
The second paragraph says "An examination of the Standard shows that
rules requiring static expressions are given in the following
sections: ... ". But the list given is not complete. For example,
it omits 4.9(9) which concerns qualified expressions.
This list is in fact only a list of the places that are RELEVANT to
the set of possibilities described in the first paragraph.
To correct this, replace "rules requiring" by
"relevant rules requiring", in the second paragraph.
2. At three places in the first paragraph, "formal generic type"
should be replaced by the standard nomenclature (used everywhere else)
which is "generic formal type".
3. In the fifth paragraph of the discussion, item 3., the expression
[4.6(15] should read [4.6(15)].
*****************************************************************************
!section 04.09 (03) R.Tischler,D.Cutler Tandem, 860604 8300761
!version 1983
!topic Functions renaming enumeration literals  static?
Can a static expression contain a function that renames an enumeration
literal? In 4.9(3) the manual only says that enumeration literals
are okay, but we're guessing this should be interpreted to mean
anything that denotes an enumeration literal, such as a function that
renames it.
Compare commentary #1, which said that declarations that rename
constants are okay in static expressions, because they denote
constants, even though 4.9(6) explicitly only allows constants that
were declared with constant declarations.
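For illustration (hypothetical declarations, invented for this example):
type COLOR is (RED, GREEN, BLUE);
function FIRST_COLOR return COLOR renames RED;  -- renames an enumeration literal
C : constant COLOR := FIRST_COLOR;  -- static, if the renaming denotes the literal?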
*****************************************************************************
!section 04.09 (06) J. Goodenough 830701 8300133
!version 1983
!topic are renamed static constants static?
Consider the following example:
C1 : constant INTEGER := 6;  C1 is static
C2 : INTEGER renames C1;  is C2 static?
On the face of it, paragraph 6 says C2 is not static, since C2 is not
"explicitly declared by a constant declaration." On the other hand, 8.5(4)
says, "The properties of the renamed object are not affected by the renaming
declaration". This statement means that C2 is a constant. Is "being static"
a "property" of C1? If so, 8.5 implies C2 is static.
Since "property" is not a technical term, we are free to consider staticness a
property of C1, and indeed, I think this would be the intuitive
interpretation. Although strictly speaking, 8.5 and 4.9 conflict, I think we
can "interpret" 8.5 to be a metastatement that extends the meaning of 4.9.
Of course, we can also say that 4.9 is crystal clear in its statement that C2
is not static, with the consequence that staticness is not a "property"
covered by the rule in 8.5.
From a user viewpoint, I think it is less confusing to say that C2 is static.
I don't see that implementers should care one way or the other, unless, of
course, they've implemented it the "wrong" way.
I think that if we were pressed on this point, we would have to say the RM is
ambiguous, and hence an implementation could not be invalidated because, say,
it treats C2 as nonstatic. I could, however, simply write the tests taking
my recommended view, and wait for someone to protest.
************************************************************************
!section 04.09 (06) P. N. Hilfinger 830701 8300134
!version 1983
!topic are renamed static constants static? (8300133)
I agree with John's position; C2 should retain the staticness property.
************************************************************************
!section 04.09 (06) Ron Brender 831029 8300198
!version 1983
!topic Are renamed static constants static?
!reference AI00001, 8300133
I support the interpretation by Goodenough (8300133) that a renaming
of a static constant ought to be static. To conclude otherwise flies
in the face of all reasonable intuitions that one expects not only
from 8.5(4) but also 8.5(1): "A renaming declaration declares another
name for an entity". I suspect that if we where consider staticness
as a property of a name rather than a property of an entity, that we
will fall into a bottomless pit of other distinctions between names
and denoted entities that will provide fertile ground for keeping the
Language Maintenance committee and the Ada Board as a whole
"lawyering" for a long, long time.
************************************************************************
!section 04.09 (06) Software Leverage, Inc. 840121 8300269
!version 1983
!topic Are renamed static constants static?
!reference 8300133
In 8300133, J. Goodenough claims that 8.5(4) and 4.9(6) conflict. We
disagree with this interpretation. We also believe that there is no
language problem here; renamed constants can be used in static
expressions.
Staticness is a property of expressions, discrete ranges, and
subtypes. It is not a property of objects, so 8.5(4) does not apply.
The example given was:
C1 : constant INTEGER := 6;  -- (1)
C2 : INTEGER renames C1;  -- (2)
The primaries C1 and C2 (in some expression) both refer to the same
constant, which was, in fact, "explicitly declared by a constant
declaration". Therefore, expressions involving C1 and C2 can be
static.
The constant is declared by (1). (2) does NOT declare an object; it
merely declares a new name for an object that already exists. In
4.9(6), the name used to refer to the constant is irrelevant.
************************************************************************
!section 04.09 (06) P. Wehrum, Siemens 830408 8300283
!version 1983
!topic Implicit conversion of numeric literals and static expressions
Let us consider some examples:
type FIX is delta 1.0 range -10.0 .. 10.0;  -- (1)
-- SMALL = 1.0
C1A : constant FIX := 0.5;  -- (1a)
C1B : constant FIX := FIX(0.5);  -- (1b)
-- depending on implementation, chosen model number
-- can be either 1.0 or 0.0
C2A : constant FIX := 0.5 + 0.5;  -- (2a)
C2B : constant FIX := FIX(0.5) + FIX(0.5);  -- (2b)
-- resulting constant value will be either 0.0 or 2.0
C3A : constant NATURAL := 1;  -- (3a)
C3B : constant NATURAL := NATURAL(1);  -- (3b)
According to 04.06 (15) the examples (1a), (2a), (3a)
imply a conversion of the literals appearing on the
rhs of the assignment. Thus, (1a) - (3a) are semantically
equivalent to (1b) - (3b), and a conversion is never static.
This implies that the set of static expressions resulting
from rule (d) in 04.09 is void.
To describe what is intended by 04.09 (06), a rule should
be added saying something like:
"an implicit conversion of a static expression is static".
(By the way, the paragraph number 15 appears twice in 04.06;
renumbering will be necessary.)
************************************************************************
!section 04.09 (06) Jean D. Ichbiah 840301 8300308
!version 1983
!topic No name declared by renaming in static expressions
A name declared by a renaming declaration is not allowed as a primary of a
static expression.
One goal in the writing of the 1983 Ada Manual was to use a formal English:
define each term precisely (thereby attempting to distinguish technical
terms from ordinary terms) and provide a very consistent use of technical
terms. The writing of section 4.9 on static expressions is an example:
First it announces the definition of the qualifier "static". This is the
role of 4.9(1) and it announces
. static expressions
. static discrete ranges
. static subtypes
The meaning of these terms is then defined by the rest of the section. Thus
Static Expression is defined by 4.9(2) through 4.9(10).
Then paragraph 4.9(11) uses the definition of static expression in order to
define:
. static ranges
. static range constraints
. static subtypes
. static discrete ranges
. static index constraints
. static discriminant constraints
Finally section 4.3.2(4) uses the notion of static index constraint in
order to define:
. static "others" choices
Consider now the interpretation of the questions raised in comment (83-133).
Consider the examples:
C1 : constant INTEGER := 6;
C2 : INTEGER renames C1;
The question refers to whether expressions containing the names C1 or C2
are static. The answer is clearly given by 4.9(6)(d). For example:
2*C1   -- is a static expression
2*C2   -- is NOT a static expression
This interpretation follows directly from 4.9: "An expression is static if
and only if every primary is one of those listed in (a) through (h) ...."
Here the primary "2" is a numeric literal, the operator "*" is a predefined
operator and
. C1 is a constant EXPLICITLY declared by a constant declaration,
with the static subtype INTEGER, and initialized with the static
expression "6" (a numeric literal -- case (b)).
. C2 is another name, declared by a renaming declaration (see
8.5(1)): NOT a constant declaration.
My point here is that the intent of (a) through (h) was to define the
syntactic characters of the allowed primaries of a static expression.
I suspect that the question arose from incorrect usage of the Ada
terminology.
Note in particular that the Ada Reference Manual has:
. No "static constants"
. No "static functions"
. No "static attributes"
. No "static literals"
These are not Ada technical terms so that a question such as:
"Are renamed static constants static?"
is meaningless: Ada has no "static constants" and even less "static renamed
constants". (This was my reason for removing the misleading comments in the
declaration of C1 and C2).
Historical Note.
The July 80 definition was more liberal. The 1983 move to a more restricted
formulation was deliberate: several implementers considered that too much
was being required from Ada compilers in terms of static expressions. The
non-scalar static expressions were banned. Similarly, names denoting renamed
entities (whether constants, functions, operators, or attributes) were
deliberately omitted.
Note also that there is hardly any need for renaming a constant: why not
declare C2 as follows:
C2 : constant INTEGER := C1;
(Then the expression "2*C2" would be static.)
Conclusion.
Should the LMC consider that the "syntactic" intent of 4.9 is not
sufficiently clear, let us consider better wording. On the other hand, the
intent of being syntactic was quite clear and there is no ground for
changing the language.
************************************************************************
!section 04.09 (06) M. Woodger [edited] 840403 8300364
!version 1983
!topic Renaming and static expressions
!reference AI00001/03
Since the recommendation recorded in this [Commentary] is contrary to the
design team's intent (as stated in the [Commentary]), ... I am impelled to add
a further comment to the discussion on this topic... . If the LM Committee
makes piecemeal recommendations that deviate from the intent, then we run the
real risk of further inconsistencies in future.
While I support the exclusion of the new name of a renamed constant from the
class of static expressions (and disagree with the Committee's
recommendation), I would be happier to exclude also the new name of a renamed
operator, for consistency. (Then "Size + 1" in the discussion would not be
static.) To achieve this one might append to the first sentence of 4.9(7) the
words indicated below:
... including a function name that is an expanded name {but
excluding the new name in a renaming declaration}; ...
************************************************************************
!section 04.09 (06) Kit Lester/AdaEurope 840711 8300386
!version 1983
!topic Renaming and static expressions
!reference AI00001/04, 8300308
I agree with the conclusion of AI00001/04, that a name declared by a
renaming declaration denoting a constant, the constant having been declared as
initialised with a static expression, is itself static. However, I have two
further arguments for that conclusion.
The first is that, given the conventions of the manual, the manual says
that such names are static. The convention is that the manual confuses
entities with the notations that denote them, and vice versa. 4.9(2) implies
that 4.9(3) to 4.9(10) are to be construed syntactically (and JDI says this
was his intention). So when 4.9(6) says:
"A constant explicitly declared by a constant declaration with
a static subtype, and initialised with a static expression"
we must construe it for some notation denoting those entities called
"constants". The only possible such notation is a name. Therefore 4.9(6)
means
"A name that denotes a constant, provided that the constant is
explicitly declared by a constant declaration ..."
This does not place any restriction on the name: in particular it does not
require that the name be declared by that same constant declaration.
JDI says this was not his intent: however, we have a standard, and it
appears that what JDI wrote means something he did not intend. That is not
relevant: the standard, so long as it is not grossly unreasonable, takes
priority over anyone's intent.
My second argument is as follows. 4.9(7) says:
"A function call whose function name is an operator symbol
that denotes a predefined operator ..."
This allows an operator symbol whose denotation (in this context) was fixed by
a renaming declaration: the unusual occurrence of the word "denotes" very
forcibly draws attention to this permission. Since 4.9(3) and 4.9(7) are the
only two candidates for staticness (out of cases (a) through (h)) that COULD
result from renaming, it would be inconsistent to permit renaming in the one case and not
the other.
************************************************************************
!section 04.09 (06) Jean D Ichbiah/M Woodger 850214 8300504
!version 1983
!topic Renaming and static expressions
We wish to offer further evidence to refute the recommendation
and supporting discussion of this Commentary.
The argument in support is only that section 4.9 already allows
some names declared by a renaming declaration to be used in a
static expression, since an operator symbol that DENOTES a
predefined operator can be used. It concludes that "denotes" was
therefore also to be understood in paragraph (6), in spite of the
careful avoidance of this word (acknowledged to be the design
team's intent).
This is already a weak argument, but it is even weaker if we
consider the parallel situation where care has likewise been
exercised to exclude renaming in defining the semantics of the
pragma SHARED in section 9.11(10).
If we take the same cavalier attitude in that case we have the
absurd situation that we can invalidate the second requirement of
9.11(10) -- for example:
package A is
   V : INTEGER;
   package B is
      R : INTEGER renames V;  -- denotes a variable declared by an
                              -- object declaration ...
      pragma SHARED(R);       -- allowing this would
                              -- side-step the rule that it
                              -- should follow the declaration of V
      ...
   end B;
   ...
end A;
************************************************************************
!section 04.09 (06) Gary Dismukes/TeleSoft 850223 8300508
!version 1983
!topic Renaming and static expressions
!reference 8300504
Comment 8300504 attempts to demonstrate a parallel between the
wording of 4.9(6) concerning constants in static expressions and the
wording of 9.11(10) concerning pragma SHARED. It is contended that
a "cavalier attitude" is being taken towards the wording in 4.9(6)
that could lead to invalidating the second requirement of 9.11(10).
I disagree that there is a useful parallel to be drawn between these
two sets of rules.
Presumably the analogy is being made between 4.9(6) and the first
requirement of 9.11(10). That is, the phrase "a constant declared
explicitly by a constant declaration" is being compared to the phrase
"a variable declared by an object declaration". The claim seems to
be that if the word 'variable' is interpreted to mean "name denoting a
variable" then the second requirement of 9.11(10) can be invalidated.
The second requirement of 9.11(10) states that "the variable declaration
and the pragma must both occur (in this order) immediately within the
same declarative part or package specification." In the example, the
pragma is given within a package B nested inside the package specification
of A which itself contains the declaration of the variable. The renaming
declaration does not declare a variable and thus the pragma will have no
effect since it does not occur immediately within the package A. This is
true independently of the interpretation of the word 'variable' in the
first requirement.
The second requirement of 9.11(10) also does not disallow the use of a
name declared by a renaming declaration as the argument of pragma SHARED.
It only places a restriction on the context and relative positions of
the variable's declaration and the pragma, not on the nature of the name
used within the pragma to denote the variable.
Rather, it is the third requirement, that "the pragma must appear before
any occurrence of the name of the variable, other than in an address
clause", that prohibits use of the renaming. Since the renaming declaration
contains an occurrence of the name of V, the pragma will have no effect.
(It is interesting to note that, were it not for this third requirement,
SHARED could be given for subcomponents of variables declared by object
declarations since the first requirement is not careful to state that the
variable should be one that is EXPLICITLY declared by a variable declaration.
Cf. the wording in 4.9(6).)
The point to be made is that the rules for pragma SHARED are unambiguous
insofar as they disallow the use of a renaming of a variable as the name
in the pragma. In the case of 4.9(6), however, the use of the wording
"constant explicitly declared by a constant declaration" is imprecise
(or at least open to misinterpretation) in that it does not make clear
whether the primary must be a name declared by a constant declaration
or merely a name denoting such a constant. This has already been stated
in the commentary and there seems to be definite disagreement about which
interpretation is to be preferred.
I agree that it is important to take into consideration the original
intent of restricting the use of renamings within static expressions.
However, it is also important for adequate technical justification to
be provided for that intent. It is clear why renamings should not be
allowed as the argument for pragma SHARED. It is not so evident why
they should be disallowed within static expressions. Although the
restriction does not impose a significant loss of functionality, it
may well prove to be a surprise to programmers (and an annoyance to
those who have adopted a style of importation by renaming). Given
that a clarification of the meaning of 4.9(6) needs to be made, it
would seem useful to consider making an interpretation that will
eliminate the surprise.
************************************************************************
!section 04.09 (06) M Woodger 881105 8301142
!version 1983
!topic Replace the first occurrence of "constant" by "name"
Not meant. A constant is not a primary, but a name is -- see 4.9(2).
*****************************************************************************
!section 04.09 (07) J. Goodenough 861030 8300854
!version 1983
!topic PRED, SUCC, POS, and VAL can be used in static expressions
4.9(2) says:
An expression of a scalar type is said to be static if and only
if every primary is one of those listed in (a) through (h)
below, ...
Paragraphs (e) and (f) say:
(e) A function call whose function name is an operator symbol
that denotes a predefined operator, ...
(f) A languagedefined attribute of a static subtype; for an
attribute that is a function, the actual parameter must also be
a static expression.
The term "attribute" is a syntax term and so refers to the syntactic form
defined in 4.1.4(2):
attribute ::= prefix'attribute_designator
attribute_designator ::= simple_name [(universal_static_expression)]
An attribute such as CHARACTER'POS('A') is considered a function call, since
its argument is not a static expression having a universal type. The
function "name" in this call is CHARACTER'POS, and this name does not denote
a predefined operator. Hence, according to paragraph (e), CHARACTER'POS('A')
cannot be used in a static expression.
Clearly the intent, however, is to allow such "function calls" in static
expressions.
*****************************************************************************
!section 04.09 (07) Geoff Mendal 870821 8300950
!version 1983
!topic Nasty Ramifications of AI00438/02
I was surprised, but happy to see that the "holding"
of the LMP/LMC has changed in this matter. (The previous
revs of AI00438 had stated that functions renaming
enumeration literals were NOT static.)
One minor point, before I get into my question. The
example in the "question" of AI00438/02 needs to be made
semantically valid... the renaming declaration of F1
should be:
function F1 return Enum renames E1;
Now onto my question.
One of the ramifications concerning the use of such a renamed function
is as the expression governing a case statement. AI00438 clearly
indicates that a function renaming an enumeration literal is
static. Consider the following code segment:
subtype Lower_Case is Character range 'a' .. 'z';
function Lc_A return Lower_Case renames 'a';
. . .
case Lc_A is                  -- 1.
   when Lower_Case => null;
end case;
case Lc_A is                  -- 2.
   when Character => null;
end case;
Which of the case statements above is legal? A reading of the
ARM 5.4(3) clearly says that the second case statement is legal,
but not the first. AI00438/02 however might be read as saying
that the first statement is legal, but not the second.
I interpret the current rev of AI00438 as trying to make functions
renaming enumeration literals more like constant object
declarations (with the provisions of AI00001). Clearly the
code segment
Lc_B : constant Lower_Case := 'b';
. . .
case Lc_B is
   when Lower_Case => null;
end case;
is legal. Is the intent of AI00438/02 to make functions renaming
enumeration literals more like constant objects of a static subtype?
Is there any need to consider the semantics of the case statement
in this regard? Are there other areas of the language where this
"problem" crops up?
Consider the following code segment:
subtype Non_Static_Lower_Case is Lower_Case
   range Lower_Case'First .. Lower_Case('z');  -- non-static subtype
function Lc_C return Non_Static_Lower_Case renames 'c';
. . .
case Lc_C is
when Lower_Case => null;
end case;
Is the above case statement legal? It wouldn't be legal if
Lc_C was declared as:
Lc_C : constant Non_Static_Lower_Case := 'c';
I would like to see the LMP/LMC at least address this issue. I don't
really care which way it turns out... 6 or half a dozen.
gom
*****************************************************************************
!section 04.09 (08) P. Miller (SofTech) 881027 8301255
!version 1983
!topic Attributes SAFE_LARGE and SAFE_SMALL should be static.
The attributes SAFE_LARGE and SAFE_SMALL are defined by the base type.
The base type must always be static. These attributes should be
considered static even when they are applied to a non-static subtype.
*****************************************************************************
!section 04.09 (11) M Lott/Alsys 830608 8300166
!version 1983
!topic static subtype
There seems to be a gap in the definition of static subtype, since the
possibility of the exception CONSTRAINT_ERROR being raised by the
elaboration of the subtype indication is not mentioned (compare 4.9(2) for
the evaluation of an expression).
RESPONSE
There is no gap. Paragraph 4.9(11) refers to the subtype formed by
imposing a constraint. If CONSTRAINT_ERROR is raised, no subtype is
formed. Paragraph 4.9(2) is defining staticness for a syntactic form (an
expression), so must require a value to be delivered.
************************************************************************
!section 04.09 (11) Jean D. Ichbiah 840301 8300315
!version 1983
!topic Static numeric subtypes
There is no doubt that the intent is that the subtype T declared as follows
be static:
type T is range 1 .. 10;
The question only arises because of the "formal" type conversions used in
the definition by equivalence. Several approaches can be used to correct
the present definition:
(a) Correct the definition of 4.9(11) regarding static subtypes
(second sentence):
"A static subtype is either a scalar base type, other than a
generic formal type; {or a subtype declared by an integer or real
type definition;} or a scalar subtype formed by ..."
(b) Correct the definition of 4.9(9)(g) to include type conversions.
Solution (b) could be more consistent (in spite of the pathology of
conversions such as INTEGER(1.5) which are implementation dependent).
However, at this stage, this would be a radical change -- more than is
conceivable under "language maintenance".
Solution (a) is therefore preferable in spite of the slight inconsistency
between the second and third subsentences.
************************************************************************
!section 04.09 (11) Software Leverage, Inc. 840501 8300371
!version 1983
!topic Are types derived from generic formal types static subtypes?
Paragraph 4.9(11) states, "A static subtype is ... a scalar base type,
other than a generic formal type ...". Paragraph 12.1(3) defines generic
formal type: "The [term] ... generic formal type ... [is] used to refer
to corresponding generic formal parameters." This implies that a type
derived from a generic formal type is not a generic formal type.
Therefore, a scalar base type which is derived from a generic formal type
is a static subtype.
For example,
generic
   type T is range <>;
package P is
   type S is new T;          -- S is a static subtype.
   type F is digits S'LAST;  -- Legal!
end P;
Surely it was the intent of the language designers that the bounds of
static subtypes be knowable at compile time. Therefore, the manual should
be changed to say, "A static subtype is ... a scalar base type, other than
a generic formal type or a type derived (directly or indirectly) from a
generic formal type ...".
************************************************************************
!section 04.09 (11) 12.3 M. Woodger 851024 8300680
!version 1983
!topic Status of identifiers that denote generic actual parameters
Consider:
generic
   type T is (<>);
package SET_OF is
   type SET is array (T) of BOOLEAN;
end SET_OF;
package CHAR_SET is new SET_OF(CHARACTER);
What is the status of the identifier T in the package CHAR_SET?
Is this T a static discrete range?
This is an aspect of the wider question of whether the rules of
4.9 are to be applied to expressions (and discrete ranges) within
packages and subprograms that are generated as instances by the
elaboration of generic instantiations.
It is plainly stated in 3.1(8) that the process of elaboration
happens during program execution, and in particular the raising
of the exception PROGRAM_ERROR upon an unsuccessful attempt to
elaborate a generic instantiation (11.1(7)) is a run-time
activity. Moreover, staticness is supposed to be a compile-time
notion, applying only to (textual) expressions -- 10.6(1) suggests
that static expressions be evaluated by the compiler. One would
thus expect that nothing in a program text after the generation
of a copy (instance) of a generic unit could be static that was
not already static before the instantiation.
In that case, since T is not static in SET_OF, T is not static in
the instantiated package CHAR_SET, despite the fact that this
copied occurrence of T denotes the static subtype CHARACTER.
*****************************************************************************
!section 04.09 (11) 12.3 P. N. Hilfinger 851024 8300681
!version 1983
!topic Status of identifiers that denote generic actual parameters
!reference 8300680
Mike Woodger writes
It is plainly stated in 3.1(8) that the process of elaboration
happens during program execution... Moreover, staticness is supposed
to be a compile-time notion, applying only to (textual) expressions....
One would thus expect that nothing in a program text after the
generation of a copy (instance) of a generic unit could be static that
was not already static before the instantiation.
It is exceedingly dangerous to distinguish compile-time and run-time,
given the current Standard (and in general, some would say). Consider
4.1(9),
The evaluation of a name determines the entity denoted by the
name.
Now evaluation is also a `run-time' activity. Because of the use of
language in the Standard, ambiguous and undefined names are to be
found at compile time. If a compiler determines that a name can never
be evaluated, then, is it free not to report any such errors for it?
The Standard was written with the Algol 68 fiction in mind: Everything
happens at `run-time' (which is not an Algol 68 term, however);
certain errors (known as ``compile-time errors'' in the vulgar) are to
be anticipated by the compiler during translation. Consequently, I
would be loath to base any argument, as Mike has done, on what
officially happens at ``run-time.''
*****************************************************************************
!section 04.09 (11) Art Evans/Tartan Labs 870626 8300928
!version 1983
!topic Need for static attributes of arrays and records
Ada never permits an array type to be static. This fact follows from
the third sentence of 4.9(11).
I see no reason why a type such as
type T is array(1..2) of integer;
should not be considered static. In particular, I find it reasonable
that an array type be considered static if both of the following
requirements are met:
All indices must be static ranges; and
the array element type must be static.
(A similar rule can be developed for record types.) If such a type is
static, it should then follow that 'size of such a type (or an object of
such a type) should be static.
This problem bit us when we tried to use the size of an array in a rep
spec, where a static value is required.
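[Editorial illustration.] A rule along the lines Evans proposes is easy to state computationally; the following Python sketch (the 32-bit INTEGER size and the helper array_size_bits are assumptions of this illustration, not language rules) shows how 'SIZE of such an array type would follow directly from its static bounds:

```python
# Hypothetical rule: an array type is static if every index range is
# static and the element type is static; its size is then computable
# at compile time.
def array_size_bits(index_ranges, element_bits):
    count = 1
    for lo, hi in index_ranges:
        count *= hi - lo + 1  # static bounds, so this folds at compile time
    return count * element_bits

# type T is array(1..2) of INTEGER;  -- assuming a 32-bit INTEGER
print(array_size_bits([(1, 2)], 32))
```

Under such a rule, the printed value would be usable wherever a static expression is required, e.g. in a representation clause.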
*****************************************************************************
!section 04.09 (11) Ron Brender 870721 8300934
!version 1983
!topic Static range constraints
Consider the following, taken from ACVC test B49004A (which is new in
V1.9):
SUBTYPE INT IS INTEGER RANGE 1 .. 5;
...
OBJ1 : INTEGER RANGE 1 .. 6 := 3;
...
CASE OBJ1 IS
   WHEN INT RANGE 1 .. 6 =>   -- ERROR: [?]
      OBJ1 := 4;
   WHEN OTHERS =>
      NULL;
END CASE;
Careful reading of 4.9(11) leads to the conclusion that INT RANGE 1 ..
6 is, indeed, a STATIC range. Further, 5.4(5) does require that the
choices of a case statement must be static. What, then, is the error
expected by the test?
Actually, it appears that there must be an error, but it is not clear
just what basis serves as the justification. Note that 4.9(11) does
not specify that a range constraint is static only if its evaluation
does not raise an exception (analygously to what is done for
expressions in 4.9(2))  is this an oversight? Further, 5.4 does not
require that the choices of a case statement are "evaluated"  which
must lead to raising CONSTRAINT_ERROR in this case  only that "each
value of the (base) type of the expression must be REPRESENTED
[emphasis added] once and only once in the set of choices". One can
suppose that the values "represented" by a choice somehow involves the
evaluation of the choice, but this is never quite explained.
Please clarify.
*****************************************************************************
!section 04.09 (11) M Woodger 881105 8301143
!version 1983
!topic Exclude types derived from generic formal types
!reference AI00025
Before the first semicolon, insert "or a type derived therefrom (directly or
indirectly)".
*****************************************************************************
!section 04.09 (11) M Woodger 881105 8301144
!version 1983
!topic Constraint imposed must be compatible
!reference AI00114/02
At the end of the third sentence, insert "and where in each case the
constraint imposed is compatible (that is, it does not raise an exception --
see 3.3.2)".
*****************************************************************************
!section 04.09 (12) Eberhard Wegner 19830818 8300046
!version 1983
!topic Change "as the value" to "as any of the values".
I presume that evaluation at run time need not always yield the same
value but (depending e.g. on the processor assigned) may yield the
neighbouring upper or lower model number in a nondeterministic way.
Perhaps add a note to say this.
************************************************************************
!section 04.09 (12) M Woodger 881105 8301145
!version 1983
!topic Helpful note
Precede this paragraph by a note:
"An explicit type conversion is not static."
*****************************************************************************
!section 04.09 (12) M Woodger 881105 8301146
!version 1983
!topic "[an] evaluation"
Grammar.
*****************************************************************************
!section 04.09 (13) Don Clarson 830630 8300010
!version 1983
!topic {Generic formal objects are not static.}
************************************************************************
!section 04.10 M Woodger/Alsys 830506 8300154
!version 1983
!topic Accuracy of a relation between two universal real operands
The last sentence of section 4.10(4) says "if a universal expression is
a static expression, then the evaluation must be exact". However,
no corresponding rule exists for the evaluation of a static expression such
as
0.1 = 0.01E1
which is intended always to deliver the result TRUE (see Response 810129
to Query 216 on section 2.4.1). This is because an expression with result
type BOOLEAN is not classed as a universal expression.
The RM currently defines the accuracy of this relational operation in terms
of model numbers of the type (see 4.5.7(10)). According to 3.5.6(5 and 3),
a set of model numbers is associated with the type universal_real;
clearly 4.10(4) intends to ignore these, and so should the missing rule
for static relations having universal real operands. Probably it should
be specified in 3.5.7(7) that the model numbers for the type universal
real have effectively infinite mantissa, so that every real literal
evaluates to a model number of the type.
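[Editorial illustration.] The intended effect (universal_real literals compared with effectively infinite mantissa) can be modelled with exact rational arithmetic; in the following minimal Python sketch, Fraction merely stands in for the unbounded-precision evaluation the comment asks for:

```python
from fractions import Fraction

# Both literals denote exactly 1/10 under unbounded-precision evaluation,
# so the static relation 0.1 = 0.01E1 must yield TRUE.
print(Fraction('0.1') == Fraction('0.01e1'))

# By contrast, a binary floating point representation of 0.1 is not
# exactly 1/10, so finite binary model numbers cannot justify the rule.
print(Fraction(0.1) == Fraction(1, 10))
```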
************************************************************************
!section 04.10 M. Woodger 850918 8300659
!version 1983
!topic Accuracy of a relation between two universal real operands
!reference AI00103/01, 8300154
In retrospect, I believe we have been too hasty in
(a) inserting the word "static" into the topic for this AI, and
(b) approving it as a ramification.
Regarding (a), there is still a gap in the manual as pointed out
in 8300154 for relations (and membership tests) whose operands
are of type universal real but not static. Section 4.10 defines
universal expressions, but entirely omits relations and
membership tests because these deliver a result of type BOOLEAN.
The missing rule should parallel 4.10(4) for universal real
operand(s) and BOOLEAN result.
Regarding (b), the missing argument can only deduce the result
from 4.5.7(10) if it establishes that each universal real value
is a model number, as your present discussion points out. For
static operands I believe it  their "evaluation must be exact".
For nonstatic operands there is a gap, and the Note 4.9(12) only
serves to emphasize it.
AI103 should be resurrected to deal with the original question.
************************************************************************
!section 04.10 (02) Norman Cohen 830610 8300181
!version 1983
!topic legality of 2.0**3
The note in paragraph 4.10(6) suggests that 2.0**3 is a valid expression of
type universal real, but this is not clear from the Standard itself. Since **
is predefined for a left operand of any floating point type and a right
operand of one particular integer type (the predefined type INTEGER), it is
not clear that ** qualifies as either an operation "predefined ... for any
integer type" or an operation "predefined ... for any floating point type."
************************************************************************
!section 04.10 (04) G.A. Riccardi, Florida State U. 830705 8300284
!version 1983
!topic Exact evaluation of static universal real expressions.
A typical floating point evaluation of expression a, below, would
yield the value false  which is incorrect according to
paragraph 4.10(4), which requires that the evaluation of static
universal real expressions be exact. Hence, in order to
implement exact evaluation of universal real expressions, a
compiler must include a rational arithmetic package or some even
more complicated expression representation and manipulation
package.
Such a rational arithmetic package would consume more than 3000
bits to represent the exact value of expression b. Any
limitation on the precision of the arithmetic would result in
expression c evaluating to true, which is incorrect.
An expression with only static universal real operations whose
values are also real may be evaluated with a floating point
precision somewhat greater than that of the target machine with
no apparent loss of precision. The value of the expression
functions as a run time constant of floating point type. After
the evaluation of the static expression, the value must be
converted to floating point type in order to be used by the Ada
program. Only with mixed type operations such as relational
operations and conversion operations does the lack of precision
of floating point evaluation become visible to an Ada program.
The expressions a and c, below, are examples of such mixed type
expressions. The loss of precision in the evaluation is made
visible by the equality operation.
Implementors of Ada compilers need clarification about possible
implementation dependent characteristics which would address
the need for space and time efficiency in the compilation process.
We favor allowing a compiler to employ a limited precision
evaluation strategy and to reject programs which cannot be
evaluated exactly using this strategy. A number of strategies may
be appropriate, including binary floating point, decimal floating
point and limited precision rational arithmetic.
Example static expressions:
a. (1.0/10.0) * 10.0 = 1.0
b. 1.0E1000 + 1.0
c. (1.0E1000 + 1.0) / 1.0E1000 = 1.0
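[Editorial illustration.] The cost and correctness claims above can be checked with exact rational arithmetic; in this small Python sketch, Fraction stands in for a hypothetical exact evaluator, not for any actual compiler strategy:

```python
from fractions import Fraction

# Expression a: exact evaluation delivers TRUE.
a_exact = (Fraction(1) / Fraction(10)) * Fraction(10) == Fraction(1)

# Expression b: the exact value of 1.0E1000 + 1.0 needs more than
# 3000 bits, as the comment claims.
b = Fraction(10) ** 1000 + 1
bits = b.numerator.bit_length()

# Expression c: exactly FALSE; any finite-precision evaluation that
# absorbs the "+ 1.0" would wrongly report TRUE.
c_exact = (Fraction(10) ** 1000 + 1) / (Fraction(10) ** 1000) == Fraction(1)

print(a_exact, bits, c_exact)
```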
************************************************************************
!section 04.10 (04) Software Leverage, Inc. 840130 8300427
!version 1983
!topic Accuracy of Attributes of Generic Formal Types
Since attributes of generic formal types aren't static, the accuracy required
for their evaluation is given by 4.10(4): "The accuracy of the evaluation of a
universal expression of type universal_real is at least as good as that of the
most accurate predefined floating point type supported by the implementation,
apart from universal_real itself."
This makes it very difficult to portably use generic fixed point types. To
illustrate this, assume we wished to completely sidestep issues of accuracy by
converting from the fixed point representation to integer values:
type LONGEST_INTEGER is SYSTEM.MIN_INT..SYSTEM.MAX_INT;
-- Assume that, for any fixed point subtype T,
-- SYSTEM.MIN_INT*T'SMALL <= T'BASE'FIRST
-- and T'BASE'LAST <= SYSTEM.MAX_INT*T'SMALL.
...
function CONVERT(X: T) return LONGEST_INTEGER is
begin
   return LONGEST_INTEGER(X/(T'FIRST*INTEGER'(0) + T'SMALL));
   -- X/T'SMALL is ambiguous, because T'SMALL must be converted to
   -- some fixed point type, and which one isn't determined.
   -- X/T'(T'SMALL) isn't ambiguous, but will raise CONSTRAINT_ERROR
   -- if the range of T doesn't include T'SMALL.
   -- The above is a trick to arrange that only conversions to T'BASE
   -- are done; fortunately the model number interval of the left
   -- summand is just 0.0.
end CONVERT;
(That such contortions are needed is itself a defect in the language, but this
is left as a topic for future comments.)
This doesn't necessarily accomplish what is desired because (e.g., if T has a
length clause for T'SMALL) T'SMALL may not be a model number of the most
precise floating point type. Therefore the implicit conversion of T'SMALL from
universal_real to T'BASE isn't guaranteed to be accurate (it may even yield
zero if T'SMALL is very small).
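The "may even yield zero" case is easy to reproduce. In the Python sketch below (an illustration only), IEEE doubles stand in for the most accurate predefined floating point type, and a length clause is assumed to have fixed T'SMALL to a power of two below the smallest positive double:

```python
from fractions import Fraction

# Suppose a length clause fixes T'SMALL to 2.0**(-1100): a legitimate
# power of two for a fixed point type, but below the smallest positive
# IEEE double (2.0**(-1074)).
t_small = Fraction(1, 2**1100)

# The "implicit conversion" of T'SMALL to the floating type underflows:
approx = float(t_small)
# approx == 0.0, so the conversion loses T'SMALL entirely.
```

Any subsequent use of the converted value (such as the division inside CONVERT above) is then meaningless, which is the portability hazard described here.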
This problem was discovered while considering the implementation of
TEXT_IO.FIXED_IO in Ada.
The four attributes which are troublesome are T'SMALL, T'LARGE, T'SAFE_SMALL,
and T'SAFE_LARGE for fixed point types. (There is only a problem for generic
formal types.)
It isn't clear how to minimally fix Ada to remedy this problem. One might
strengthen 4.10(4) to also require exact evaluation for a convertible universal
operand as defined in 4.6(10) if the operand is explicitly or implicitly
converted to a real type (of course, the conversion itself need not be exact).
The next version of Ada ought to resolve this problem.
************************************************************************
!section 04.10 (04) J. Goodenough 850921 8300665
!version 1983
!topic Accuracy of universal real relations
I agree with Mike that AI00103 should be widened to deal with the
full question of universal real operands of relations and membership
test operations, but not for his reasons.
I think it is clear that if both operands of a relation are static and have
type universal real, the relation must be evaluated exactly, since each
universal real value serves as its own model number. Similarly, if both
operands are nonstatic universal real expressions, it is clear that both
will be evaluated with at least the accuracy of the most accurate predefined
floating point type, and the relation (or membership test) will be evaluated
in terms of the model intervals for such a floating point type.
A problem arises, however, when one operand is static and the other is
nonstatic. In this case, the Standard should be interpreted to mean that
the static operand is evaluated with at least the accuracy of the most
accurate predefined floating point type, since the evaluation must be
performed at runtime, but the current wording does not allow such an
approximation. This can cause a problem in at least the following case.
Let m1, m2, and m3 be three consecutive model numbers of the most accurate
predefined floating point type. Let us suppose the expression to be
evaluated is NS = S, where NS is a nonstatic universal_real expression, and
S is a static universal real expression. Suppose the model interval for NS
is m1..m2, and that S lies in the model interval between m2 and
m3, i.e.,
m1 <= NS <= m2 < S < m3
Now if NS = S is evaluated with the required model intervals (i.e., by
treating S as a model number of type universal_real, and treating NS as a
value lying in the range m1..m2) then NS = S must evaluate to FALSE. But if
for purposes of evaluating this expression, S is approximated by the model
interval m2..m3, then NS = S is allowed to evaluate to TRUE.
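The situation can be reproduced numerically. In the Python sketch below (an illustration only), IEEE doubles play the role of the most accurate predefined floating point type, `math.nextafter` supplies consecutive model numbers m1, m2, m3, and a `fractions.Fraction` plays the exact static value S:

```python
import math
from fractions import Fraction

m1 = 1.0
m2 = math.nextafter(m1, math.inf)   # next double above m1
m3 = math.nextafter(m2, math.inf)   # next double above m2

# S lies strictly between m2 and m3 (closer to m2, so it rounds to m2);
# NS is a value lying in the model interval m1..m2.
S = Fraction(m2) + (Fraction(m3) - Fraction(m2)) / 4
NS = m2

exact = (Fraction(NS) == S)    # False: evaluated exactly, NS = S fails
approx = (NS == float(S))      # True: with S approximated by a double
```

Exact treatment of S forces NS = S to be FALSE, while approximating S by the model interval m2..m3 allows it to be TRUE, which is precisely the discrepancy at issue.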
In short, we want a semantic effect that is equivalent to saying S is
implicitly converted to the most accurate predefined floating point type, and
the relational operations (and membership tests) are performed with respect
to the model intervals of that type.
Nothing in the Standard currently allows such an implicit conversion of S
(i.e., such a runtime approximation to S's true value), and this imposes an
implementation burden and is contrary to the intent of 4.10(4).
The recommendation (as a binding interpretation) should say that when both
operands of a relational operator (and similarly when the operand and both
bounds in a membership test) have the type universal real, and one operand (or
bound) is also not a static expression, the result of each static expression
is implicitly converted to the most accurate predefined floating point type,
and the relational operation is evaluated in terms of that type.
Such an interpretation allows S (in my example) to be evaluated using the
model interval m2..m3, i.e., S's value can be approximated at runtime.
Although this problem only arises when some operands are static and some are
not, I think it is better to deal with all cases of static/nonstatic
operands in a single commentary.
************************************************************************
!section 04.10 (04) P. N. Hilfinger 860412 8300736
!version 1983
!topic Accuracy of universal real relations
References: AI405/1
I was asked to prepare a write-up on AI405. The following is cast in AI form.
I would be willing (for once) for the category to be changed to "ramification."
Please note the first two paragraphs of the discussion, which perhaps
should be addressed in another AI.
======================================================================
!summary
If the operands of a relational operator or membership test have the
type universal_real and one or more of the operands is nonstatic, the
static operands must be evaluated exactly. Doing so, however, does
not impose a runtime overhead.
!question
... [ as before ] ...
!recommendation
In evaluating a relation in which one operand is a static
universal_real expression and the other is a nonstatic universal_real
expression, the static operand is evaluated exactly, the nonstatic
operand is evaluated at the highest machine precision available, and
the results are compared to give the mathematically correct boolean result.
!discussion
There is both a trivial and a less trivial resolution of this problem.
Trivially 4.10(4) refers to "expressions," and in NS = S, for example,
NS and S are not expressions according to the Ada syntax, but rather
are simple_expressions, and possibly terms, factors, or primaries.
Since the only expression involved is NS = S, which is nonstatic,
infinite precision is not required anywhere, by this argument.
However, the intent of the Standard was that the word "expression" in
these contexts should not be interpreted in the strict syntactic
sense, but should be interpreted to refer also to simple_expressions,
relations, terms, factors, and primaries. Furthermore, in such
constructs as
2.0 * PI * X,
the phrase 2.0 * PI is also to be considered an expression for the
purposes of sections 4.9 and 4.10, even though it is not a term, but
only a portion of a term.
Given that established Ada usage does not support the trivial
resolution, there is also a nontrivial resolution that requires no
compromise of the semantics. Suppose NS is a nonstatic universal real
expression, and S is a static universal real expression. Consider the
following relation.
NS relation S
The language (4.10(4)) indicates that S must be evaluated exactly. It
also indicates (4.5.7(10)) that because the values of S and NS, being
of type universal_real, are both model numbers, the relation itself
must be carried out exactly on the computed values of S and NS. It
might at first seem that the expression S must be carried to full
precision (i.e., as a ratio of arbitrarily large integers) at runtime.
This is not, in fact, the case.
Let LONG be the base type for the highest precision floating point
numbers used by a given implementation. By abuse of notation, we
shall also use it to denote the set of all values of type LONG. Let
CEIL(x,LONG) be the least member of LONG greater than or equal to x,
and let FLOOR(x,LONG) be the greatest member of LONG less than or
equal to x. These may be undefined beyond the extrema of LONG. There
are two cases, and the one that applies can be determined at
compilation time.
1. The value of S is a member of LONG. In this case, the
implementation is obvious.
2. S is not a member of LONG. Convert the relational
expression according to the following table.
     Expression              Transformed expression
     ----------              ----------------------
     NS > S,  NS >= S,       NS > FLOOR(S,LONG), if the latter is defined;
     S < NS,  S <= NS        TRUE otherwise
     NS < S,  NS <= S,       NS < CEIL(S,LONG), if the latter is defined;
     S > NS,  S >= NS        TRUE otherwise
     NS = S,  S = NS         FALSE
     NS /= S, S /= NS        TRUE
We thus reduce everything to at worst the case of comparing a static
member of LONG to a nonstatic member of LONG. This all works because
the quantities involved obey the following axioms (interpret NS to be
the actual (machine) value it represents; <, >, etc. to be the true
mathematical predicates; and <=> to be logical equivalence).
S >= NS <=> S > NS
S > NS <=> CEIL(S,LONG) > NS, when CEIL is defined.
S < NS <=> FLOOR(S,LONG) < NS, when FLOOR is defined.
S > NS for S > max(LONG)
S < NS for S < min(LONG)
As a consequence, it is possible to maintain the convenient "semantic
fiction" that S is carried to infinite precision in the comparison
without runtime cost.
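The transformation can be sketched in Python (an illustration only; `float` plays the role of LONG and `fractions.Fraction` the exact static value). FLOOR(S,LONG) and CEIL(S,LONG) are computed once, corresponding to compilation time, after which every comparison involves only members of LONG:

```python
import math
from fractions import Fraction

def floor_long(s):
    """Greatest double <= s, i.e. FLOOR(S,LONG); assumes s in range."""
    f = float(s)                                  # round to nearest
    return f if Fraction(f) <= s else math.nextafter(f, -math.inf)

def ceil_long(s):
    """Least double >= s, i.e. CEIL(S,LONG); assumes s in range."""
    f = float(s)
    return f if Fraction(f) >= s else math.nextafter(f, math.inf)

S = Fraction(1, 3)     # static value, not a member of LONG (case 2)
for NS in (0.25, 0.5, floor_long(S), ceil_long(S)):
    # Each transformed comparison agrees with the exact one:
    assert (Fraction(NS) > S) == (NS > floor_long(S))   # NS > S
    assert (Fraction(NS) < S) == (NS < ceil_long(S))    # NS < S
    assert Fraction(NS) != S                            # NS = S is FALSE
```

No arbitrary-precision arithmetic survives to run time: the loop compares only machine values, yet every result matches the exact comparison, as the axioms above guarantee.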
*****************************************************************************
!section 04.10 (05) Software Leverage, Inc. 840123 8300267
!version 1983
!topic Numeric_Error and Universal Operands
Section 4.10(5) states that "For the evaluation of an operation of a
nonstatic universal expression, an implementation is allowed to raise
the exception NUMERIC_ERROR only if the result of the operation is...
greater than SYSTEM.MAX_INT or less than SYSTEM.MIN_INT."
This unfortunately says nothing about the size of the operands.
Consider the nonstatic universal expression in the following (the left
side of the comparison):
with RANDOM_NUMBERS;
-- Assume a library unit with the obvious functionality
procedure P is
   X: POSITIVE := RANDOM_NUMBERS.RANDOM_POSITIVE;
   Y: BOOLEAN := 10E1000 mod INTEGER'POS(X) = 0;
   -- The equality compares two universal_integer operands
begin
   null;
end P;
Since the result of the "mod" operator is less than INTEGER'LAST, it
would seem that the above must return an exact result. Since we may
obviously replace the 10E1000 with any very large integer literal, the
straightforward implementation of the above requires multiple
precision arithmetic at run time.
It may be possible to use tricks to circumvent some of these
requirements; for example, if the operator were "**", so that the
exponent would be of type INTEGER, then the expression
   STATIC_LARGE_VALUE**EXP
could be computed as
   if EXP = 0 then
      return 1;
   else
      raise NUMERIC_ERROR;
   end if;
but it seems unlikely that the intent was to require such tricks (when
they exist!) for every predefined operator.
There is, of course, a similar problem for real operations. The
general problem can be stated thus: for a binary operator 'op', a
static universal operand E1 which would raise NUMERIC_ERROR if
converted to any appropriate predefined type, and a nonstatic
universal operand E2 of appropriate type, the expression
E1 op E2   -- similarly, E2 op E1
is required to yield an exact result if the result is within the range
of some predefined type.
Does Ada therefore require multiple precision arithmetic at run time?
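What exact evaluation would demand here is easy to see in Python, whose built-in integers are multiple-precision (an illustration only; in Ada the left operand is a static universal_integer and X arrives only at run time):

```python
# 10E1000 mod X, evaluated exactly for a run-time X.  Python's
# arbitrary-precision integers perform at run time exactly the
# computation that a literal reading of 4.10(5) appears to require
# of a nonstatic universal_integer operation.
X = 7                      # stands in for the nonstatic POSITIVE value
Y = (10**1000 % X == 0)    # exact: 10**1000 mod 7 is 4, so Y is False
```

The result is small (it is always less than X), yet obtaining it exactly requires carrying a 1001-digit operand to run time, which is the burden the comment objects to.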
************************************************************************