!section 03.01 (05) M Woodger/Alsys 83-05-06 83-00146
!version 1983
!topic "statement label" -> "label name", for consistency with 5.1(3)
************************************************************************
!section 03.01 (06) M Woodger/Alsys 83-05-06 83-00147
!version 1983
!topic A declaration of an entity is also a declaration of its name
Append to this paragraph the words ", and the declaration of the entity
is also said to declare the name."
Without this addition, paragraphs 3.1(1) and 3.1(5) are inconsistent: the
first says a declared entity can be a block, loop or statement, while
the second speaks instead of declaring the name of a block or loop, or a
statement label.
This proposal merely regularizes normal practice; see for
example 3.3.1(5), 3.5.1(3), 3.5.4(12), 5.1(3, 4), 8.5(1),
10.1.2(8), 11(2), 11.1(1, 3), 13.5(7).
************************************************************************
!section 03.01 (06) M Woodger 88-11-05 83-01060
!version 1983
!topic Missing definition
At the end, insert ", and the declaration of the entity is
also said to declare the name".
*****************************************************************************
!section 03.01 (09) M Saaltink 89-02-08 83-01260
!version 1983
!topic Reversion to not-yet-elaborated state
Declarations can be elaborated more than once. For example, the
declarations appearing in the declarative part of a subprogram body
are elaborated each time the subprogram is called [ARM 6.3(6)].
However, [ARM 3.1(9)] states "The elaboration of any declaration
always has at least the effect of achieving this change of state (from
not yet elaborated to elaborated)." These two statements in
conjunction imply that at some time the declarations in the
declarative part of a subprogram body have their states changed from
elaborated back to not-yet-elaborated (if the subprogram is called
more than once). This does not appear to be described anywhere in the
ARM and leads to some possible differences in interpretation.
For example, consider this procedure:
    procedure Q (x: integer) is
       procedure R;
       function f return integer is
       begin
          if x = 1 then R; end if;
          return 0;
       end f;
       package P is
          c: constant integer := f;
       end P;
       procedure R is begin null; end R;
    begin
       if x < 1 then Q(1); end if;
    end Q;
and consider the call Q(0). The declarative part of Q can be elaborated
without error. Then in the recursive call Q(1), the initialization of c
causes execution of f, which will call R. Now, has the declaration of
the body of R been elaborated or not? In particular, will PROGRAM_ERROR
be raised (by virtue of [ARM 3.9(5)])?
The declaration of the body of R has indeed been elaborated before;
that occurred before the recursive call Q(1). So there should be no
PROGRAM_ERROR. On the other hand, after c is initialized, the
declaration of the body of R will be elaborated again, and will at
that time change state from not-yet-elaborated to elaborated. So
perhaps when c is being initialized, the declaration of the body of R
has already reverted to that not-yet-elaborated state, and there
should be a PROGRAM_ERROR.
[ARM 3.9(3)] suggests that the reversion to the not-yet-elaborated state
should occur at the beginning of the elaboration of the declarative part
containing the declaration. Now consider
    procedure Q2 (x: integer) is
       procedure R;
       function f return integer is
       begin
          if x = 0 then Q2(1); R; end if;
          return 0;
       end f;
       package P is
          c: constant integer := f;
       end P;
       procedure R is begin null; end R;
    begin
       null;
    end Q2;
This time, in the call Q2(0), when c is being initialized there is a
recursive call from f to Q2(1). In the course of that call, the
declaration of the body of R is elaborated. Thus one might expect
that when that call returns, the declaration of the body of R is still
in this elaborated state, even when [ARM 3.9(3)] is taken into account,
and so PROGRAM_ERROR should not be raised.
Some validated Ada compilers do in fact raise PROGRAM_ERROR in executing
these procedures.
The above considerations show that the terms used in describing
elaboration are ill-chosen. Since a single declaration may be
elaborated many times, any reference to "the elaboration" (as a
definite event) is meaningless. Thus, for example, [ARM 3.1(8,9)] and
[ARM 3.9(3-8)] are not sensible.
*****************************************************************************
!section 03.02 (08) Software Leverage, Inc. 84-07-16 83-00391
!version 1983
!topic Is a named number an object?
Is a named number an object?
Usually, the LRM implies that named numbers are not objects, by
explicitly mentioning named numbers in addition to objects when both
are allowed (see, for example, 4.4(3)). However, 3.2(8) confuses the
matter by stating that "a number declaration is a special form of
object declaration...".
The distinction matters in a few places. For example, is an address
clause allowed for a named number?
We believe that a named number is not an object, and that use of a
named number is equivalent to use of a numeric literal. Is this
correct?
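[Editorial note: the distinction at issue can be shown with a minimal pair;
this illustration is ours, not part of the original comment.]
```ada
    MAX_USERS  : constant := 500;          -- number declaration: a named
                                           -- number of a universal type,
                                           -- with no run-time storage
    USER_LIMIT : constant INTEGER := 500;  -- object declaration: a constant
                                           -- object of type INTEGER
    -- An address clause for USER_LIMIT is meaningful only if USER_LIMIT
    -- denotes an object; for MAX_USERS the question does not even arise
    -- under the "equivalent to a numeric literal" reading.
```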
************************************************************************
!section 03.02 (08) Ron Brender 86-10-15 83-00843
!version 1983
!topic Is a named number an object?
The following is stimulated by the Implementers Guide, IG13 G1,
Section 13.5/S13.
This statement is rather too glib. LRM 3.2(1-2) defines an object as
(among other alternatives) an entity "declared by an object
declaration". LRM 3.2(8), as well as 3.2.2(1), describe a number
declaration as a "special form of object [or constant] declaration".
It would appear, therefore, that a named number IS an object!
This is not to say that I think it useful (or desirable) to be able to
apply an address clause to a named number. It is simply that I don't
see any basis for saying that this is absolutely precluded (illegal).
LMP/LMC clarification is sought.
*****************************************************************************
!section 03.02 (09) J. Goodenough 87-07-07 83-00933
!version 1983
!topic Declaring constant arrays with an anonymous type
Consider the following example:
    type ARR is array (INTEGER range <>) of INTEGER;
    C1 : constant ARR := (1, 2, 3);                    -- legal
    C2 : constant array (1..3) of INTEGER := (1,2,3);  -- legal
    C3 : constant array (INTEGER range <>) of INTEGER :=
           (1, 2, 3);                                  -- illegal
Although the declarations of C1 and C3 both mention unconstrained array types,
the declaration of C3 is illegal since the syntax does not allow an
unconstrained array definition in this context. It seems non-uniform to allow
C1 and C2 but not C3.
*****************************************************************************
!section 03.02.01 (04) Norman Cohen 89-12-08 83-01341
!version 1983
!topic Contradiction with 3.6.1(7)
Paragraphs 3.2.1(4-6) should be reworded. They state without
qualification that the subtype of an object is determined before any
initial value is obtained (and the first sentence of 3.2.1(15) confirms
the significance of the order specified by these paragraphs); but
3.6.1(7) makes it clear that the initial value determines the subtype in
the case of a constant declared with a subtype indication that denotes an
unconstrained array type.
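[Editorial note: the case in question can be sketched as follows; the type
and constant names are our own illustration.]
```ada
    type VEC is array (POSITIVE range <>) of INTEGER;
    C : constant VEC := (1, 2, 3);
    -- The subtype indication VEC denotes an unconstrained array type, yet
    -- by 3.6.1(7) the bounds of C (1 .. 3) are taken from the initial
    -- value; here the value appears to determine the subtype, contrary to
    -- the order prescribed by 3.2.1(4-6).
```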
*****************************************************************************
!section 03.02.01 (04) Norman Cohen 90-03-01 83-01358
!version 1983
!topic There is no contradiction
!references AI-00861
AI-00861/0 is incorrect to assert that there is a contradiction between
3.2.1(4..6) and 3.6.1(7).
Although the initial value determines the bounds of the constant, the
subtype of the constant is the unconstrained array subtype established in
step (a), 3.2.1(5). Indeed 3.6.1(7) specifically speaks of the case in
which "the subtype of the constant is unconstrained".
*****************************************************************************
!section 03.02.01 (06) J. Goodenough 85-07-10 83-00582
!version 1983
!topic subcomponents -> components
Paragraph (6) says:
If the object declaration includes an explicit initialization, the
initial value is obtained by evaluating the corresponding expression.
Otherwise any implicit initial values for the object or for its
subcomponents are evaluated.
It is important that paragraph (6) specify only that implicit initial values
of COMPONENTS (rather than subcomponents) are evaluated, since implicit
initial values can be given more than once for the same subcomponent:
    type T is
       record
          A, B : INTEGER := 3;
       end record;
    type U is
       record
          C : T := (4, 5);
       end record;
    X : U;
The initial expressions for both X.C and X.C.A must not both be evaluated.
In fact, paragraph 14 gives the explicit rule that covers this case:
In the case of a component that is itself a composite object and
whose value is defined neither by an explicit initialization nor by a
default expression, any implicit initial values for components of the
composite object are defined by the same rules as for a declared
object.
If paragraph 6 is corrected to read COMPONENT instead of SUBCOMPONENT,
paragraph 14 takes care of the rest.
*****************************************************************************
!section 03.02.01 (06) M Woodger 88-11-05 83-01061
!version 1983
!topic The last word "evaluated" should be "obtained"
Not meant.
*****************************************************************************
!section 03.02.01 (06) M. Woodger 89-06-22 83-01294
!version 1983
!topic Evaluating subcomponent initialization expressions
!reference AI-00364/01
There is no problem to be fixed with 3.2.1(6) (other than a
presentation issue). AI-364/01 should be a ramification. Here is my analysis
of the example given in this Commentary:
    type T is
       record
          A, B : INTEGER := 3;
       end record;
    type U is
       record
          C : T := (A => 4, B => 5);
       end record;
    X : U;
3.2.1(6) does not say that all initial expressions of all subcomponents
must be evaluated. It says
"... any implicit initial values for the object or for its
subcomponents are evaluated."
What are these implicit initial values? 3.2.1(9) says:
"Implicit initial values are defined for objects declared by object
declarations, and for components of such objects, in the following
cases:"
and the relevant case is dealt with by 3.2.1(13), which says:
"If the type of an object is a composite type, the implicit initial
value of each component that has a default expression is obtained
by evaluation of this expression ..."
This does not say "subcomponent". It says, in the example, that the
implicit initial value of the component X.C is obtained by evaluating
the default expression "(A => 4, B => 5)".
We now read on, and find under 3.2.1(14):
"In the case of a component that is itself a composite object and
whose value is defined neither by an explicit initialization nor by
a default expression, any implicit initial values for components of
the composite object are defined by the same rules as for a
declared object."
This does not define any implicit initial values for components of X.C,
because this composite component of X does not satisfy the conditions;
its value IS defined by a default expression, namely the one above.
The purpose of 3.2.1(14) is just to delve into the lower depths of
subcomponents ONLY in the case where no value is provided for the host
component in question.
If, for example, component C had no default expression:
    type U is
       record
          C : T;
       end record;
    X : U;
then the conditions of 3.2.1(14) would be satisfied, and in this case we
would find that "any implicit initial values for components of" X.C
"are defined by the same rules as for a declared object."
This means we apply 3.2.1(9) and (13) again, to components A and B of
X.C, and find default expressions for both, which are to be evaluated
to get implicit initial values.
So in this case the phrase in 3.2.1(6)
"... any implicit initial values for the object or for its
subcomponents are evaluated"
(referring to the object X) covers only the default expressions for the
SUBcomponents A and B. No other implicit initial values are defined.
Clearly, AI-364/01 is wrong to delete "sub" from 3.2.1(6). It could be
changed to answer the question on the above lines, and should be a
ramification.
*****************************************************************************
!section 03.02.01 (06) M. Woodger 89-06-22 83-01295
!version 1983
!topic "evaluated" -> "obtained"
!reference AI-00618/00
3.2.1(6) ends:
"Otherwise any implicit initial values for the object or for its
subcomponents are evaluated."
In view of the fact that so many ARG members have been confused by this
sentence, perhaps it does need enlarging slightly, as a "ramification".
The sentence is trying to abbreviate something like:
"Otherwise any implicit initial values that are defined for the
object or for its subcomponents by the rules in paragraphs 9 to 15
below are obtained."
This formulation uses the word "obtained" like the previous sentence,
and explicitly refers to the notion of "defined initial values"
introduced by 3.2.1(9). It avoids "evaluating expressions to obtain
values" since some values do not involve expressions (as in paragraphs
10 and 11).
*****************************************************************************
!section 03.02.01 (08) Norman Cohen 90-03-01 83-01357
!version 1983
!topic subcomponent{s}
The change would parallel the use of the plural in 3.2.1(6) and
3.2.1(15) and avoid confusion. Several subcomponents with implicit
initial values may be identified in Step (b) (3.2.1(6)).
*****************************************************************************
!section 03.02.01 (14) Norman Cohen 89-12-08 83-01342
!version 1983
!topic Shouldn't this be a bulleted paragraph?
3.2.1(14) is describing the definition of implicit initial values for
subcomponents in one of the possible cases. It should be bulleted like
the preceding four paragraphs, which describe the definition of implicit
initial values in the other cases for which such values are defined.
*****************************************************************************
!section 03.02.01 (15) M Woodger 88-11-05 83-01062
!version 1983
!topic Replace "the previous rule" by "this rule"
Obscure wording.
*****************************************************************************
!section 03.02.01 (15) M. Woodger 89-06-22 83-01296
!version 1983
!topic "the previous" -> "this"
!reference AI-00619/00
3.2.1(15) reads:
"The steps (a) to (d) are performed in the order indicated. For
step (b), if the default expression for a discriminant is evalu-
ated, then this evaluation is performed before that of default
expressions for subcomponents that depend on discriminants, and
also before that of default expressions that include the name of
the discriminant. Apart from THE PREVIOUS RULE, the evaluation of
default expressions is performed in some order that is not defined
by the language."
The last sentence here is only referring to the previous sentence, which
starts with "For step (b),". This is the only rule that prescribes an
order of evaluation of default expressions. (The first sentence is
excluded.)
The point of this presentation issue is that, having just enunciated a
rule, one refers to it just afterwards as "this rule". The words "the
previous rule" would refer to an earlier one. So the reader is
needlessly confused.
It would be even clearer, but would lengthen the sentence considerably,
to use a semicolon in place of the period before "apart from this rule
..".
*****************************************************************************
!section 03.02.01 (15) Norman Cohen 89-12-08 83-01343
!version 1983
!topic Dependence on the given discriminant, or any discriminant?
The second sentence of 3.2.1(15) reads:
For step (b), if the default expression for a discriminant is
evaluated, then this evaluation is performed before that of default
expressions for subcomponents that depend on discriminants, and also
before that of default expressions that include the name of the
discriminant.
This sentence seems to suggest that the default expression for a
discriminant must be evaluated before the default expression for a
subcomponent that depends on ANY discriminant. Presumably, the intent is
only to require that the default expression for a discriminant be
evaluated before the default expression for a subcomponent that depends
on THAT discriminant. This is consistent with the second part of the
sentence, which only requires the default expression for a discriminant
to be evaluated before default expressions that include (consist of,
really) the name of THAT discriminant.
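[Editorial note: the two readings can be contrasted with a sketch such as
the following; F1 and F2 stand for arbitrary functions and are our own
illustration.]
```ada
    type REC (D1 : POSITIVE := F1; D2 : POSITIVE := F2) is
       record
          A : STRING (1 .. D1) := (others => ' ');  -- depends on D1 only
       end record;
    -- Literal reading: both F1 and F2 must be evaluated before A's default
    -- expression, since A "depends on discriminants".
    -- Presumed intent: only F1 need be evaluated first, since A depends
    -- on D1 but not on D2.
```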
*****************************************************************************
!section 03.02.01 (16) 3.3.2(6),3.7.2(5,8) M Woodger/Alsys 86-09-04 83-00798
!version 1983
!topic Discriminant checks for default discriminant values
!reference AI-7/14,AI-14/05,AI-308/05,AI-358/06,83-00788.
I am not convinced that these Commentaries need expanding to deal
with subcomponent declarations as well as object declarations.
This is because the wording of the Standard already takes care of
this point, and should apply equally to the provisions of the
Commentaries. (If the Commentaries were expressed as rephrasing
of the wording of the Standard we would not have this problem.)
Concerning AI-14/05, my opinion is that this is a ramification.
3.7.2(8) tells how discriminant values are defined, and has
indeed overlooked the case where an explicit initialization
expression is given in the object declaration. This however does
not invalidate 3.2.1(6)(b), which is clear that implicit initial
values for the object or for its subcomponents are evaluated only
if the object declaration does NOT include an explicit
initialization. (The clearest statement of that is on page 4 of
LSN.232, the Preliminary Review of Chapter 3 of 1981, which was
adopted for the standards of 1982 and 1983.)
Now as to AI-308/05 (approved by WG9/AJPO). The summary can be
read as also applying to objects that are components of an object
that is "created either by an object declaration or an
allocator", which is what 3.2.1(16) already provides for with the
parenthetic phrase "(the declared object or one of its
subcomponents)". The recommendation is even more explicit: "When
a discriminant is initialized, its value is checked for
compatibility".
The Standard intended to defer compatibility checks of default
discriminant values until object creation time, and to perform no
such checks when a type declaration is elaborated. In the case
quoted by John Goodenough (86-07-24):
     type R (D : INTEGER := -1) is
        record
           COMP : STRING (D .. 10);
        end record;
     type T2 (D : BOOLEAN) is
        record
           case D is
              when TRUE =>
                 C1 : R;  -- No constraint error
              when FALSE =>
                 C2 : INTEGER;
           end case;
        end record;
section 3.3.2(6) does in fact apply; it says:
"If the subtype indication does not include a constraint,
the subtype is the same as that denoted by the type mark."
So the subtype of component C1 is the type R, which is
unconstrained.
The question of when default expressions for discriminants are
evaluated (and checked for compatibility) when a component is
declared to be of an unconstrained record type and has no
explicit default expression (Comment 83-00787) is dealt with by
3.2.1(14), which says -
"In the case of a component that is itself a composite
object and whose value is defined neither by an explicit
initialization nor by a default expression, any implicit
initial values for components of the composite object are
defined by the same rules as for a declared object."
(This is the key paragraph that completes the "recursive"
treatment of subcomponents in 3.2.1.)
The corresponding sentence in 3.7.2 is 3.7.2(8/2) -
"The same requirement exists for the subtype indication of a
component declaration, if the type of the record component
has discriminants ..."
Inasmuch as AI-14 and AI-308 modify 3.7.2(8/1), they must
evidently also adopt this second sentence.
Regarding AI-358/06 and 83-00786, the chief purpose of this
Commentary is to exempt non-existent subcomponents from the
check specified in 3.7.2(5). It would not be an appropriate
place for a discussion of when or how the check is performed.
Possibly the right place would be AI-7, whose main topic is this
check.
*****************************************************************************
!section 03.02.01 (16) M Woodger 88-11-05 83-01063
!version 1983
!topic Discriminant compatibility check missing
!reference AI-00308
Before the semicolon, insert "(and for a discriminant, that
the corresponding constraint is compatible - see 3.7.2)"
*****************************************************************************
!section 03.02.01 (16) Norman Cohen 89-12-08 83-01344
!version 1983
!topic Definition of initialization
The term "initialization" is not defined in the RM, and the meaning of
the term "initial value" is suggested only indirectly, yet 3.2.1(16)
begins:
The initialization of an object (the declared object or one of its
subcomponents) checks that the initial value belongs to the subtype
of the object....
Presumably, this means:
The assignment to an object (the declared object or one of its
subcomponents) in step (d) checks that the value assigned belongs to
the subtype of the object....
"Initialization" could be misconstrued as the entire process of
elaborating an object declaration. "Initial value" could be misconstrued
as meaning the initial contents of an object, even in the case where the
object contains an undefined value, rather than just the explicit initial
value mentioned in 3.2.1(6) or the implicit initial values described in
3.2.1(9-14). With these interpretations, the current wording of 3.2.1(16)
would imply that CONSTRAINT_ERROR could be raised by
     type DIGIT_LIST_TYPE is
        array (POSITIVE range <>) of INTEGER range 0 .. 9;
     type DIGIT_STACK_TYPE is
        record
           TOP : INTEGER range 0 .. 100;
           CONTENTS : DIGIT_LIST_TYPE (1 .. 100);
        end record;
     DS : DIGIT_STACK_TYPE;
since the "initialization" of DS would check that the "initial values" of
subcomponents DS.CONTENTS(1), ..., DS.CONTENTS(100) belong to the subtype
INTEGER range 0 .. 9 .
*****************************************************************************
!section 03.02.01 (18) Sam Kendall 85-08-20 83-00616
!version 1983
!topic Loophole in definition of erroneous undefined value
"The execution of a program is erroneous if it attempts to evaluate a
scalar variable with an undefined value," but the same is not said of
scalar-typed expressions in general. The following fragment evaluates a
scalar with undefined value, but is not erroneous according to 3.2.1(18)
because the scalar is a constant, not a variable:
    type REC is record UNDEFINED: INTEGER; end record;
    R1: REC;                 -- R1.UNDEFINED is undefined
    R2: constant REC := R1;  -- evaluating R1 is okay, since it is
                             -- composite. Now R2.UNDEFINED is
                             -- undefined.
    ... R2.UNDEFINED ...     -- in an expression
Recommendation: replace "scalar variable" with "scalar expression" in
the sentence in 3.2.1(18) quoted above.
The Ada implementer's guide should probably contain advice to those who
want to implement a runtime check for undefined value, telling them that
there are very few circumstances where a scalar non-variable with
undefined value might be evaluated. One such circumstance is if the
scalar is the scalar field of a constant record, as above. There may
be other such circumstances. This advice will help in generating fewer
costly undefined value checks.
*****************************************************************************
!section 03.02.01 (18) M. Woodger 85-08-27 83-00621
!version 1983
!topic Loophole in definition of erroneous Undefined value
!reference 83-00616
The trouble is caused by allowing the evaluation of R1, yielding
a record "value" with an undefined component value. This violates
the intent of 3.7(1) which says "the value of a record object is
a composite value consisting of the values of its components",
and of 3.2.1(6) which says the initial value for R2 "is obtained
by evaluating the corresponding expression".
The point is not that R2 is a constant, but that R1 does not yield
an initial value.
*****************************************************************************
!section 03.02.01 (18) C Bendix Nielsen, AdaFD, DDC 86-06-09 83-00747
!version 1983
!topic Operations on undefined array values
Consider:
    procedure MAIN is
       type ARR_TYPE is array (integer range <>) of boolean;
       type R (D: integer) is record
          A: ARR_TYPE(1..D);
       end record;
       R1: R(1);
       R2: R(2);
    begin
       R2.A := R1.A or R2.A;  -- Erroneous ?
    end MAIN;
3.2.1(18) says: "the execution of a program is erroneous if it
attempts to apply a predefined operator to a variable that has a
scalar subcomponent with an undefined value."
This seems to indicate that the above program is erroneous, but ...
4.5.1(3) says: "The operations on arrays are performed on a
component-by-component basis on matching components, if any ..."
4.5.2(13) (a note!) claims that two arrays of different lengths
have no matching components - wherefore the "or" operation is not
to be performed: there are no matching components. Thus, it
results in raising CONSTRAINT_ERROR (4.5.1(3)).
Is the program erroneous - or must it raise CONSTRAINT_ERROR?
*****************************************************************************
!section 03.02.01 (18) C Bendix_Nielsen, AdaFD, DDC 86-10-09 83-00825
!version 1983
!topic Operations on undefined scalar values.
Consider:
    type S is new SCALAR;  -- SCALAR is some scalar type
    type T is array (INTEGER range <>) of S;
    subtype T1 is T(1..1);
    subtype T2 is T(1..2);
    function ID (X: T) return T is
    begin return X; end ID;
    A1: T1;        -- A1(1) is not initialized and therefore undefined
    A2: T1 := A1;  -- A2(1) is initialized and therefore not undefined?
    A3: T1;
    B: T2;
    C: S;
begin
    B := A1 & A1;          -- (1) erroneous
    B := A2 & A2;          -- (2) not erroneous?
    B := ID(A1) & ID(A1);  -- (3) not erroneous?
    A3 := A1;
    B := A3 & A3;          -- (4) not erroneous?
    C := A1(1);            -- (5) erroneous
    C := ID(A1)(1);        -- (6) not erroneous?
In (1), a predefined operator is applied "to a variable that has a
scalar subcomponent with an undefined value" which makes execution
of the program erroneous.
In (2), A2 has been assigned an initial value by explicit initialization
without evaluation of any scalar variable. Therefore, A2(1) is not
undefined?
In (3), a predefined operator is applied to a VALUE that has an undefined
scalar 'subcomponent'. This is not ruled erroneous by RM.
In (4), A3 has been assigned a value and is, therefore, not undefined?
In (5), a scalar variable with an undefined value is evaluated,
and execution of the program is, therefore, erroneous.
In (6), no scalar variable (but a scalar value) is evaluated, and the
program is not erroneous?
It seems unfortunate that these very similar examples have so different
semantics; and, furthermore, it seems unpleasant to have values that
are neither well-defined nor undefined. It would be more reasonable
if assignment preserved undefinedness of 'subcomponents', and if it was
'erroneous' to use an undefined scalar value.
*****************************************************************************
!section 03.02.01 (18) C Bendix_Nielsen, AdaFD, DDC 86-10-30 83-00855
!version 1983
!topic Operations on an undefined variable of a private type.
Consider:
    package P is
       type T is private;
    private
       type T is new SCALAR;  -- SCALAR is some scalar type
    end P;
    V1: P.T;
    V2: P.T := V1;          -- erroneous?
    B: BOOLEAN := V1 = V2;  -- erroneous?
For the initialization of V2 and B, V1 is evaluated. V1 is undefined,
but it is not a variable of a scalar type: it is a variable of a
private type, so 3.2.1(18) seems to make neither of them erroneous!
Is this the intention, or should a variable of a private type, in this
respect, be treated as if it was of the type of the full declaration of
the private type (as for subprogram parameters [AI-00025/08]).
*****************************************************************************
!section 03.02.01 (18) F.Mazzanti 88-07-06 83-00981
!version 1983
!topic Operations preserving undefinedness of subcomponents
3.2.1(18) specifies that if the operand of a type conversion or
qualified expression is a variable that has scalar subcomponents with
undefined values, then the values of the corresponding subcomponents of
the result are undefined.
Should not a similar rule be stated for other kinds of operations, such as
assignment, construction of a parameter association, construction of an
aggregate, construction of a slice, selection of a record component, and
indexing of an array?
The following are some examples of these operations on undefined objects:
    subtype LINE is STRING(1..80);
    type PAGE is array (1..60) of LINE;
    type DOCUMENT is
       record
          TITLE: LINE;
          TEXT: PAGE;
       end record;
    UNDEF_LINE: LINE;
    UNDEF_PAGE: PAGE;
    UNDEF_DOC: DOCUMENT;

    UNDEF_LINE := UNDEF_LINE;                 -- array assignment
    TEXT_IO.PUT(UNDEF_LINE);                  -- parameter association
    UNDEF_PAGE := (1..60 => UNDEF_LINE);      -- aggregate
    UNDEF_PAGE(11..20) := UNDEF_PAGE(1..10);  -- slice
    UNDEF_PAGE := UNDEF_DOC.TEXT;             -- selection
    UNDEF_LINE := UNDEF_PAGE(15);             -- indexing
*****************************************************************************
!section 03.02.01 (18) F.Mazzanti 90-03-07 83-01359
!version 1983
!topic Type conversion/qualification of undefined scalar values
Suggestion: the execution of a program is not erroneous if a type conversion
or type qualification is applied to an undefined scalar value.
On the contrary, the appropriate range checks are performed and
CONSTRAINT_ERROR is raised if such a check fails.
The result of the type conversion can then be any legal value of the target
type.
--
Motivation: There is no safe way to check the absence of illegal values in the
program without causing the execution to become erroneous. A simple explicit
type conversion or type qualification (forcing a range check) would be
sufficient to solve the problem, provided that the type conversion is
officially stated to behave just as it usually behaves in the case of
undefined values, and provided that the type conversion of an expression of a
given type to the type itself is not optimized away by the compiler.
--
The above clarification would achieve both effects without introducing any
additional complexity in the language and its implementations (all
implementations I know already behave in this way).
--
Example:
    type T is range 1..10;
    type VECT is array (T) of integer;
    T1: T;  -- undefined
    V1: VECT;
    ...
    V1(T(T1)) := 0;   -- safe (checked) use of indexing,
                      -- and execution not erroneous
    ...
    V1(T'(T1)) := 0;  -- safe (checked) use of indexing,
                      -- and execution not erroneous
--
*****************************************************************************
!section 03.02.01 (18) F. Mazzanti 90-06-19 83-01375
!version 1983
!topic Undefined values of a private type
!reference AI-00490/03
_
I do not think that not mentioning the evaluation of an undefined value of a
private type (even if implemented as a scalar type) as a case of erroneous
execution was an oversight.
_
First, values of a private type cannot be used in any critical contexts (e.g.
as an index for an array, an expression for a case, or a value for a
discriminant). Indeed private values have the same degree of safety that
non-scalar values (aggregates) have. Hence it is perfectly useless to require
their evaluation, when undefined, to be erroneous.
_
Second, if the use of an undefined scalar value is considered an error, then
the correctness of the following code:
_
    with OTHER_PACKAGE; use OTHER_PACKAGE;
    procedure P is
       LOCAL: OTHER_PACKAGE.PRIVATE_TYPE;
    begin
       OTHER_PACKAGE.OTHER_SUBPROGRAM(LOCAL);
    end;
would depend on the way in which OTHER_PACKAGE is implemented.
E.g. the transformation of the implementation of PRIVATE_TYPE from a record
with one component to a scalar definition would make P erroneous.
I do not think this would be a nice property of the language (especially if
introduced without any sound reason).
_
Third, most uses of limited private types (implemented by scalar definitions)
would become erroneous. Notice in fact that limited private types defined as
scalar types are not allowed to have default values, and that using such an
undefined value as an in out argument would become erroneous.
E.g.
   MY_FILE : TEXT_IO.FILE_TYPE;
   ...
   TEXT_IO.CREATE(MY_FILE, ...);
   -- would become erroneous if FILE_TYPE is implemented by a scalar type!
   -- (e.g. as an index into an I/O table).
_
*****************************************************************************
!section 03.02.01 (19) M Woodger 88-11-05 83-01064
!version 1983
!topic Illegal example
The bounds 1..N in the array declaration violate 3.6.1(2).
*****************************************************************************
!section 03.02.02 (03) M Woodger 88-11-05 83-01065
!version 1983
!topic Missing example
Add the example:
MAX_LINE_SIZE : constant := 120;
The identifier MAX_LINE_SIZE is used in 3.6(12).
*****************************************************************************
!section 03.03 (05) M. Saaltink 89-02-08 83-01262
!version 1983
!topic Operations of subtypes
[ARM 3.3(5)] and [ARM 3.5.5(2,16)] make it clear that there are operations
for subtypes as well as for types, and that the assignment operation for
a subtype is different from the assignment operation for the base type.
This has some surprising consequences. Consider for example
   procedure P is
      subtype T is Integer range 0 .. 9;
      x : T;
   begin
      x := 0;  -- (*)
   end P;
In the analysis of the marked statement, we note numerous possible
operations denoted by the assignment notation. In particular, we have
(a) the assignment operator for Integer, and (by [ARM 3.5.5(16)])
(b) the assignment operator for subtype T. The overloading rules do
not appear to be able to resolve the ambiguity between these two
interpretations of the assignment operator. ([ARM 8.7] and [ARM 5.2]
do not even rule out the possibility of the interpretation of this as
Boolean assignment! Even if this were ruled out, the overloading
rules do not, and cannot, use subtypes to resolve ambiguities, so
interpretations (a) and (b) remain possible.) Thus, statement (*) is
ambiguous and therefore illegal.
The IG does not include the operations on subtypes in its list of
possible interpretations of the assignment notation [IG 8.7a]; its
version of the language rules therefore allow (*) as legal.
*****************************************************************************
!section 03.03.01 (03) M Woodger 88-11-05 83-01066
!version 1983
!topic Helpful wording
After "private type declaration" insert " - for which it has
already been done".
*****************************************************************************
!section 03.03.01 (04) M. Saaltink 89-02-13 83-01263
!version 1983
!topic Confusion between compile-time and run-time
Various parts of the manual (for example, [ARM 1.6(2,3)] and
[ARM 10.1(4-8)]) imply that there is a distinction between compiling and
running a program, that certain errors must be detected at compilation
time, and that evaluation does not take place until after compilation.
Not all of the conceptual framework used in the ARM is consistent with
this view. For example, rules requiring the types of two names or
expressions to be the same are part of the compile-time checks
[ARM 8.7(8)]. Types, however, are entities [ARM 3.1(1)] created by
the elaboration of type definitions [ARM 3.3.1(4)]. Accordingly,
testing if two types are the same must be done at run time.
[ARM 3.3.1(4)] provides some relief from this dilemma, since it
implies that one can predict at compile time whether the types denoted
by two type marks will be the same at run time (but it is not clear
that this holds in the case of private types, types declared by
incomplete type definitions, or generic formal types). Even so, this
dilemma does indicate a certain carelessness in the ARM's description
of the language.
*****************************************************************************
!section 03.03.02 (06) Bo Lindberg 85-07-22 83-00640
!version 1983
!topic Creating an already existing subtype
The conclusion of the second and third sentences of this paragraph
is that the elaboration of a subtype indication without a
constraint creates the subtype denoted by the type mark. This makes no
sense, since that subtype already exists.
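A minimal sketch of the wording being objected to (the type and subtype names
here are invented for illustration):

```ada
procedure ELAB is
   type T is range 1 .. 10;
   subtype S is T;   -- subtype indication consisting of a type mark only:
                     -- read literally, 3.3.2(6) says its elaboration
                     -- "creates" the subtype denoted by T, even though
                     -- that subtype already exists
begin
   null;
end ELAB;
```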
*****************************************************************************
!section 03.03.02 (06) J. Goodenough 86-07-24 83-00787
!version 1983
!topic Compatibility of default discriminants in type declarations
!reference AI-00308
AI-00308 addresses the issue of checking default discriminant values for
object declarations, but a similar problem arises in type declarations:
   type R (D : INTEGER := -1) is
      record
         COMP : STRING (D .. 10);
      end record;
   type T1 is array (1 .. 5) of R;  -- CONSTRAINT_ERROR?
   type T2 (D : BOOLEAN) is
      record
         case D is
            when TRUE =>
               C1 : R;              -- CONSTRAINT_ERROR?
            when FALSE =>
               C2 : INTEGER;
         end case;
      end record;
The basic question is whether the default discriminant expression for
components of T1 and for component C1 is evaluated and checked when the type
declaration is elaborated or when an object declaration for these types is
elaborated. It seems reasonable that the compatibility check ought to be
made when the type declarations are elaborated, but the Standard doesn't
appear to require that the default expressions be evaluated at this time,
much less that a compatibility check be made.
3.3.2(6-8) define what it means to elaborate a subtype indication that has a
constraint, but the component subtype definition for C1 doesn't have a
constraint (nor is there a constraint in the array type declaration), so
3.3.2(6-8) doesn't apply.
If we look in 3.7.2, we find that 3.7.2(8) defines the initial values of the
discriminants for C1 and T1's components, but imposes no requirement to check
the compatibility of these values. 3.7.2(5) defines what it means for a
discriminant constraint to be compatible with a type, but no discriminant
constraint is given in these declarations.
AI-00308 applies to object declarations. It seems pretty clear that since a
component of an object is an object (3.2(7)), AI-00308 implies a check for
compatibility when an object is created:
   OBJ1 : T1;         -- CONSTRAINT_ERROR by AI-00308?
   OBJ2 : T2(TRUE);   -- CONSTRAINT_ERROR by AI-00308?
   OBJ3 : T2(FALSE);  -- no CONSTRAINT_ERROR?
In short, I don't think the Standard implies a check for compatibility of
default discriminant values when a type declaration is elaborated, although
this seems to be the appropriate time. If the LMC/LMP doesn't make clear
that this is the intent, then AI-00308 takes over and makes the check when an
object is created. Is this correct?
*****************************************************************************
!section 03.03.03 R P Wehrum, Siemens A.G., Muenchen 83-06-02 83-00251
!version 1983
!topic The Notion of Predefined Operations
The RM defines the notions "operations", "basic operations", and
"predefined operators" (mainly) in 3.3.3. However, the notion of
"predefined operations" and especially "predefined numeric operations"
seems to be completely undefined, though it is used (e.g. in section
11.1(6)).
For instance, is the explicit conversion
INT ((some_numeric_expression))
a predefined numeric operation provided that INT stands for a user-
defined integer type?
************************************************************************
!section 03.03.03 Erhard Ploedereder 83-01318
!version MIL-STD-1815A-1983
!topic "=" as a basic operation
!summary
It should be seriously considered whether equality ("=", "/=") should
be reclassified to be a basic operation, with a special rule that, for
limited types, the definition of "=" provides the definition of this
basic operation.
!rationale
Very frequently, the sole cause for a 'use'-clause in Ada programs is to
obtain direct visibility to the equality operation, which for all but
limited types cannot be hidden by a user-provided function definition.
These 'use'-clauses are detrimental to code readability and lead to
potential overload resolution problems (ambiguity rules).
The work-around of renaming equality locally is ugly and often not
applied.
Yet, the rules of the language make it completely obvious that, for
non-limited types, equality can bind only to the predefined operation, so
that direct visibility of the type declaration and its implicitly declared
equality cannot possibly alter the meaning of the operation. (ARM 6.7
(4+5)). For limited types, one could either stay with the current rule of
requiring direct visibility, or one could limit the opportunity to declare
equality to the same declarative part in which the declaration of the
limited type occurs.
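As an illustration of the renaming work-around mentioned above (the package
and type names here are invented), the local renaming looks like:

```ada
with P;   -- hypothetical package declaring a non-limited type T
procedure DEMO is
   -- Without a use clause, the implicitly declared P."=" is not directly
   -- visible; the usual work-around is a local renaming declaration:
   function "=" (L, R : P.T) return BOOLEAN renames P."=";
   X, Y : P.T;
begin
   if X = Y then   -- legal without 'use P;'
      null;
   end if;
end DEMO;
```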
*****************************************************************************
!section 03.03.03 (02) 4.5(6) J. Goodenough 85-07-31 83-00597
!version 1983
!topic Implicit declarations are not all predefined operations
!reference 83-00590
Mike Woodger argues that 4.5(6) implies all implicitly declared operations
are predefined. 4.5(6) says:
For each form of type declaration, certain of the above
operators are PREDEFINED, that is, they are implicitly declared
by the type declaration.
But despite this attempt at a definition, it is clear that not all implicitly
declared operations are predefined. For example, 3.4(12) says
Each operation of the derived type is implicitly declared at
the place of the derived type declaration. The implicit
declarations of any derived subprograms occur last.
Thus the argument (in 83-00590) that an enumeration literal is a predefined
operation because it is implicitly declared is not correct. On the other
hand, all predefined operations are implicitly declared, with the possible
exception of enumeration literals, depending on whether one views an
enumeration literal specification as an explicit declaration or not.
So the real question is whether an enumeration literal specification is an
explicit or implicit declaration of an enumeration literal operation. 3.1(4)
could be interpreted to mean only that an enumeration literal specification
occurs, syntactically, in an explicit declaration. One could go on to say
that 3.3.3(2)'s discussion of enumeration literals in a paragraph discussing
implicitly declared operations is sufficient to imply that an enumeration
literal is an implicitly declared operation.
On the other hand, 3.5.1(3) says an enumeration literal specification "is
equivalent to the declaration of a parameterless function ..." Since we know
that an enumeration literal specification is a form of explicit declaration
(by 3.1(4)), this equivalence implies that enumeration literals are
operations that are explicitly declared by an enumeration literal
specification, i.e., an enumeration literal specification is just a special
form for explicitly declaring functions.
In short, the Standard is not quite consistent in its view of enumeration
literals. If one decides that enumeration literals are not always implicitly
declared operations, then enumeration literals in 3.3.3(2) should be
mentioned in 3.3.3(1), which discusses explicitly declared operations.
Personally, it seems rather more intuitive to consider the explicit
occurrence of an enumeration literal in text as an explicit declaration of
the corresponding operation. One could argue that 3.3.3(2) only happened to
mention enumeration literals because the form of declaration as a function is
implicit.
It was argued in May that the ability to provide a user-defined function that
would be called in place of an enumeration literal was intentional. (One
might want to count the number of uses of a literal, and this is the way to
do it.) Such an ability can only be provided if the user-defined function is
considered to hide an enumeration literal's implicit declaration. (A similar
ability can be provided for constants by replacing a constant declaration
with a function declaration.) The loss of this capability (should I say
"uniformity"?) does not seem large to me. I think it is more likely that a
function declared with the same profile as an enumeration literal is an error
than it is likely to be intentional.
Note that one can say that an enumeration literal is a predefined operation
that is declared explicitly by an enumeration literal specification and
implicitly by derivation. This distinction would allow the current
resolution of AI-00002 and yet forbid duplicate explicit declarations in
AI-00330. (On the other hand, it is a bit peculiar for enumeration literals
to be the only case of an explicitly declared predefined operation.)
*****************************************************************************
!section 03.03.03 (02) M Woodger 88-11-05 83-01067
!version 1983
!topic Subtype declarations also implicitly declare operations
See 3.3(5).
In the first line, "type" should be "type or subtype". In the second line,
after "type definition" add "or subtype indication".
The fourth sentence should read "The operations implicitly declared for a
given type or subtype declaration occur after it and before the next explicit
declaration, if any."
It is not clear where the subtype operations are declared. If a qualification
is declared after the subtype declaration for the corresponding type mark, is
it also considered an operation of the base type or not? Is catenation of two
components defined for the component type or for the array type?
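A hedged sketch of the question being raised (the names here are invented):

```ada
procedure Q is
   type T is range 1 .. 100;
   subtype S is T range 1 .. 10;
   -- Under the proposed wording, qualification with S, written S'( ),
   -- would be an operation implicitly declared for the subtype
   -- declaration. Is it then also an operation of the base type T?
   X : T := S'(5);
begin
   null;
end Q;
```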
*****************************************************************************
!section 03.03.03 (04) G. Morrone TEXEL & Co. 89-09-01 83-01302
!version 1983
!topic short circuit control forms and basic operations
The following code has been rejected by three different compilers,
which all suggest that a "use" clause be employed. Yet the references
below indicate the compilers are incorrect. Is procedure Test a
legal Ada program?
   procedure Test is
      package P is
         type B is new Boolean;
      end P;
      C : P.B := P.True;
      D : P.B := P.False;
   begin -- Test
      if C and then D
      then
         null;
      end if;
   end Test;
According to 3.3.3 (4), a short circuit control form is a basic operation.
3.4 (5) states that for each basic operation of the parent type, there
is a basic operation of the derived type. 8.3 (18) states that "the
notation associated with a basic operation is directly visible within
the entire scope of this operation."
*****************************************************************************
!section 03.03.03 (07) Norman Cohen 88-11-03 83-01031
!version 1983
!topic Of what types is an attribute an operation?
!references AI-00043
Paragraphs 1 through 7 of 3.3.3 read, in part:
1 The set of operations of a type includes the explicitly
declared subprograms that have a parameter or result of the
type....
2 The remaining operations are each implicitly declared for a
given type declaration, immediately after the type definition.
These implicitly declared operations comprise the basic
operations, the predefined operators (see 4.5), and enumeration
literals.... The operations implicitly declared for a given
type declaration occur after the type declaration and before
the next explicit declaration, if any....
3 A basic operation is an operation that is inherent in one of
the following:
...
7 o A numeric literal (for a universal type), the literal
null (for an access type), a string literal, an
aggregate, or an attribute.
Furthermore, paragraphs 1 and 3 of 4.1.4 read, in part:
1 An attribute denotes a basic operation of an entity given by a
prefix.
...
3 ... An attribute can be a basic operation delivering a value;
alternatively, it can be a function, a type, or a range....
This suggests several questions:
1. Is every attribute a basic operation of some type?
2. Can an attribute be a basic operation of more than one type?
3. Of what types is a given attribute a basic operation?
4. When an attribute is renamed as a function or passed as a generic
actual parameter, are the names declared by the renaming declaration
or the generic parameter declaration considered to denote an entity
that is an operation of the parameter type and of the result type?
These questions are inspired by AI-00043. During consideration of
AI-00043 it was natural to think of the attribute X'ADDRESS as a basic
operation of type SYSTEM.ADDRESS, and the conclusion of AI-00043 is
consistent with this view, but this view is not supported by the
Standard.
Here are some of the issues relevant to questions 1-4:
1. Is every attribute a basic operation of some type?
It does not necessarily follow from 3.3.3 that every attribute is a
basic operation. 3.3.3(3) can be read (in conjunction with 3.3.3(7))
as meaning that IF an operation is "inherent in" a given attribute,
THEN that attribute is a basic operation, but that not all attributes
necessarily have operations inherent in them.
A strict reading of 4.1.4(1) suggests that an attribute is an
operation of whatever entity is denoted by its prefix. However, since
the Standard makes no other reference to operations of program units,
operations of labels, operations of entries, or operations of objects,
this paragraph should not be taken literally.
The wording of 4.1.4(3) strongly and, it seems, intentionally suggests
that only attributes delivering values are basic operations. An
attribute yielding a function, a type, or a range is not. Thus the
attributes P'BASE and P'RANGE are not operations. It would follow
from this line of reasoning that the attributes P'IMAGE, P'POS,
P'PRED, P'SUCC, P'VAL, and P'VALUE are not operations.
It might be argued that the attributes P'IMAGE, P'POS, P'PRED, P'SUCC,
P'VAL, and P'VALUE, while not themselves operations, are second-order
"meta-operations," providing functions that are, in turn, operations.
Consider, however, the verbs used to describe the meanings of the four
kinds of attributes: Each attribute delivering a value "yields" that
value. The attribute P'RANGE also "yields" a range (3.6.2(7), A(36)).
The attribute P'BASE "denotes" a type according to A(4), while
3.3.3(9) simply reads, "The base type of T," eliding the verb.
According to 3.5.5, paragraphs 5 through 12, as well as Annex A, the
attributes P'IMAGE, P'POS, P'PRED, P'SUCC, P'VAL, and P'VALUE do not
"yield" functions; rather, each of these attributes "is" a function.
Thus each of these attributes is an operation if and only if the
corresponding function is an operation.
It would be very strange if these functions were not operations.
Except for P'VAL (described as a "special function" because its
parameter can be of any integer type) and P'POS (which returns a
universal_integer result), these functions act much like functions
defined by subprogram declarations. They can be renamed (8.5(9)) and
passed as generic actual parameters (12.3.6(6), a note) and are
considered for these purposes to have parameter and result type
profiles. Indeed, 12.3(11) states, "For a name that denotes a generic
formal subprogram: The corresponding name denotes the subprogram,
enumeration literal, or entry named by the associated generic actual
parameter (the actual subprogram)," and 12.3.6(1) states, "A formal
subprogram is matched by an actual subprogram, enumeration literal, or
entry...," suggesting that a functional attribute is simply a
subprogram. (On the other hand, the semantics of subprogram calls
given in 6.4(1) do not apply to attributes because attributes do not
have function bodies.)
2. Can an attribute be a basic operation of more than one type?
3.3.3(2) states that all operations other than explicitly declared
subprograms are "implicitly declared for a given type declaration,
immediately after the type definition.... The operations implicitly
declared for a given type declaration occur after the type declaration
and before the next explicit declaration, if any." Thus an attribute
(or any other basic operation) can only be declared "for" a single
type declaration.
Is an implicitly declared operation an operation "of" a given type (in
the sense of 3.3.3(1)) if and only if it is declared "for" the
declaration of that type (in the sense of 3.3.3(2))? If so, certain
anomalies arise. For example, an explicitly declared subprogram
   procedure P (X1: in T1; X2: in T2);
is an operation of both T1 and T2, but given the declaration
   type NT1 is new T1;
the derived subprogram
   procedure P (X1: in NT1; X2: in T2);
is an operation of NT1 but not of T2, since it is declared "for" the
derived-type declaration.
3. Of what types is a given attribute a basic operation?
One possible answer is that an attribute is a basic operation only if
its prefix denotes a subtype, in which case it is a basic operation of
the corresponding base type. 4.1.4(1) can be read as supporting this
view. Alternatively, we might also consider an attribute to be a
basic operation if its prefix denotes a value, in which case it is a
basic operation of the type of the value.
Another possible answer, formed by analogy to explicitly declared
subprograms, is that an attribute is an operation of a given type if
it has a prefix of that type, if it has a parameter of that type, or
if it yields a result of that type. (The dimension numbers in the
array attributes are part of the attribute designators, not
parameters, so the A'FIRST(2) attribute, for example, is distinct from
the A'FIRST(1) attribute and is not an operation of type
universal_integer.) While this view is intuitively appealing (and is
probably the basis for thinking of the attribute A'ADDRESS as an
operation of type SYSTEM.ADDRESS), it does not seem to be supported
by the Standard. Indeed, such an interpretation would cause several
attributes to be operations of type universal_integer but not of any
other integer type, contradicting 4.10(2,3), which purports to contain
a comprehensive description of the universal_integer operations.
Perhaps the most convincing answer is that the type of which each
attribute is a basic operation can be inferred from the sections of
the Standard listing the basic operations for each class of types:
- T'BASE is listed as a basic operation of any discrete type T in
3.5.5(3), of any floating-point type T in 3.5.8(3), of any
fixed-point type T in 3.5.10(3), of any array type in 3.6.2(11),
of any record type in 3.7.4(4), of any access type in 3.8.2(4),
and of any private type in 7.4.2(2). It is not listed as an
attribute of a task type in 9.9, but 3.3.3(8) states (and A(4)
suggests) that T'BASE is defined for every type or subtype T.
- The attribute T'SIZE, where T denotes a type or subtype, is listed
as a basic operation of any discrete type T in 3.5.5(3), of any
floating-point type T in 3.5.8(3), of any fixed-point type T in
3.5.10(3), of any array type in 3.6.2(11), of any record type in
3.7.4(4), of any access type in 3.8.2(4), and of any private type
in 7.4.2(2). It is listed in 9.9(4) as being "defined for" any
task type, but 9.9 does not state that this attribute is a basic
operation of a task type.
- The attributes A'ADDRESS and A'SIZE, where A denotes an object,
are listed as basic operations of the type of the object for
discrete types in 3.5.5(14), for floating-point types in
3.5.8(14), for fixed-point types in 3.5.10(13), for array types in
3.6.2(11), for record types in 3.7.4(4), for access types in
3.8.2(4), and for private types in 7.4.2(2). These attributes are
listed in 9.9(4) as being "defined for" values or objects of a
task type, but 9.9 does not state that these attributes are basic
operations of a task type. There is no basis for concluding that
A'ADDRESS is an operation of type SYSTEM.ADDRESS (except in the
special case that A itself is of type SYSTEM.ADDRESS).
- The attributes T'FIRST and T'LAST, where T is a scalar type, are
listed as basic operations of any discrete type T in 3.5.5(3), of
any floating-point type T in 3.5.8(3), and of any fixed-point type
T in 3.5.10(3).
- The attribute T'WIDTH is listed as a basic operation of any
discrete type T in 3.5.5(3,4).
- The functional attributes T'POS, T'VAL, T'SUCC, T'PRED, T'IMAGE,
and T'VALUE are listed in 3.5.5(5..13) as basic operations of any
discrete type T. There is no basis for concluding, for example,
that T'POS and T'VAL are operations of universal_integer or any
other integer type, or that T'IMAGE and T'VALUE are operations of
type STRING, even though T'IMAGE can be renamed as a function with
a STRING result and T'VALUE as a function with a STRING parameter.
Thus, given the declaration
   type NS is new STRING;
we presumably do not derive an overloaded attribute BOOLEAN'IMAGE
with a result of type NS or an overloaded attribute BOOLEAN'VALUE
with a parameter of type NS (even though the parent type, STRING,
is declared in the same package, STANDARD, as the two attributes).
- The attributes T'DIGITS, T'EPSILON, T'EMAX, T'SAFE_EMAX,
T'MACHINE_RADIX, T'MACHINE_MANTISSA, T'MACHINE_EMAX, and
T'MACHINE_EMIN are listed as basic operations of any
floating-point type T in 3.5.8(4 .. 14).
- The attributes T'DELTA, T'FORE, and T'AFT are listed as basic
operations of any fixed-point type in 3.5.10(4,8,9).
- The attributes T'MANTISSA, T'SMALL, T'LARGE, T'SAFE_SMALL,
T'SAFE_LARGE, T'MACHINE_ROUNDS, and T'MACHINE_OVERFLOWS are listed
as basic operations of any floating-point type T in 3.5.8(4..14)
and of any fixed-point type T in 3.5.10(5..13).
- The attributes A'FIRST(N), A'LAST(N), A'RANGE(N), and A'LENGTH(N),
as well as their counterparts in which the dimension is left
implicit, are listed in 3.6.2(2..10) as basic operations of any
constrained array subtype A. Similar attributes in which A denotes
a value are listed as basic operations of the type of A for array
types in 3.6.2(2..10) and for access types designating array types
in 3.8.2(2). Presumably, the operation "inherent in" an attribute
whose prefix denotes an array subtype is the same as the operation
"inherent in" an attribute whose prefix denotes an array value;
otherwise the former would be an operation of a subtype but not an
operation of the corresponding base type, a notion alien to Ada.
There is no basis for concluding that A'FIRST(N) and A'LAST(N) are
operations of the index type of A.
- The attribute T'CONSTRAINED is listed as a basic operation of any
private type T in 7.4.2(8,9).
- The attribute A'CONSTRAINED, where A denotes an object, is listed
as a basic operation of the type of the object for record types
with discriminants in 3.7.4(2,3) and for private types with
discriminants in 7.4.2(2).
- The attributes T'TERMINATED and T'CALLABLE, where T denotes a
value, are listed as basic operations of the type of T for access
types designating task types in 3.8.2(3). These attributes are
listed in 9.9(1,3) as being "defined for" values or objects of a
task type, but 9.9 does not state that these attributes are basic
operations of that task type.
- The attribute T'STORAGE_SIZE is listed as a basic operation of any
access type in 3.8.2(4). It is listed in 9.9(4) as being "defined
for" task types and task objects, but 9.9 does not state that this
attribute is a basic operation of a task type.
This analysis leaves several questions unanswered:
- Is the attribute T'BASE defined (albeit useless) for a task type?
- Was it intended that the attributes described in 9.9 as "defined
for" a task object, task value, or task type (T'CALLABLE,
T'TERMINATED, T'STORAGE_SIZE, T'SIZE, and T'ADDRESS) be basic
operations of a task type? Is the attribute E'COUNT an operation
of the task type to which entry E belongs?
- When the prefix of an 'ADDRESS attribute denotes a program unit,
label, or entry, is the attribute an operation of any type?
- Are the attributes R.C'POSITION, R.C'FIRST_BIT, or R.C'LAST_BIT
operations of any type? If so, they would presumably be
operations of the type of record component R.C, and thus listed as
basic operations for every class of types, but the Standard does
not list them as basic operations for any class of types.
4. When an attribute is renamed as a function or passed as a generic
actual parameter, are the names declared by the renaming declaration
or the generic parameter declaration considered to denote an entity
that is an operation of the parameter type and of the result type?
Assume that T'IMAGE and T'VALUE are not considered operations of type
STRING; otherwise, there is no issue. Given the declaration
   function MY_IMAGE (B: BOOLEAN) return STRING renames BOOLEAN'IMAGE;
is MY_IMAGE an operation of type STRING? Similarly, given the generic
package
   generic
      with function MY_IMAGE (B: BOOLEAN) return STRING;
   package GP is
      ...
   end GP;
and the instantiation
   package P is new GP (MY_IMAGE => BOOLEAN'IMAGE);
does MY_IMAGE denote an operation of type STRING within the generic
unit?
The answer depends on the definition of an "explicitly declared
subprogram." By 3.3.3(1), "The set of operations of a type includes
the explicitly declared subprograms that have a parameter or result of
the type...." A subprogram renaming declaration is an explicit
declaration, but it probably cannot be said to declare the subprogram,
since 8.5(1) states that a renaming declaration "declares another name
for an entity," not that it (re)declares an entity. A generic
parameter declaration is an explicit declaration, but by 12.1.3(1),
the kind of generic parameter declaration considered here "declares a
generic formal subprogram," not a subprogram. Thus, in neither case
above is MY_IMAGE an explicitly declared subprogram. In fact, 8.5(3)
indicates that the name declared by a renaming declaration "denotes"
the renamed entity and 12.3(11) indicates that within a generic
instance a generic formal function "denotes" the corresponding generic
actual parameter. Thus, in each case, MY_IMAGE denotes the attribute
BOOLEAN'IMAGE. Presumably, a given entity either is or is not an
operation of a given type, regardless of which name we use to denote
the entity.
*****************************************************************************
!section 03.03.03 (09) Ron Lieberman/Convex 90-10-30 83-01398
!version 83
!topic Is T'BASE'BASE'FIRST allowed?
I am submitting this request for interpretation on the BASE attribute for
a customer of our Ada compiler. The customer ran into an inconsistency
between our implementation and another vendor's. The BASE attribute is
described as follows in the LRM:
For every type or subtype T, the following attribute is defined:
T'BASE The base type of T. This attribute is allowed only as the prefix of
the name of another attribute: for example, T'BASE'FIRST.
Our Ada compiler implementation disallows expressions of the form:
T'BASE'BASE'FIRST
where any number of BASE attributes may be strung together. Other
implementations of Ada compilers do allow this notation.
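For concreteness, a minimal sketch of the disputed usage (SMALL is a hypothetical type; the declaration of G is the one whose legality is in question):
procedure BASE_CHAIN is
type SMALL is range 1 .. 10;
-- Allowed by 3.3.3(9): 'BASE as the prefix of another attribute.
F : constant INTEGER := INTEGER (SMALL'BASE'FIRST);
-- Disputed: a chain of 'BASE prefixes before the final attribute.
G : constant INTEGER := INTEGER (SMALL'BASE'BASE'FIRST);
begin
null;
end BASE_CHAIN;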
I would like to know whether T'BASE'BASE is legal Ada or not, and then to
have the appropriate test case added to the ACVC test suites.
Thank you.
Ron Lieberman
Convex Computer Corporation
3000 Waterview Parkway
P.O. Box 833851
Richardson, Tx. 75083-3851
(214) 497-4248
lieb@convex.com
*****************************************************************************
!section 03.03.03 (11) Ph. Kruchten, NYU 84-01-16 83-00260
!version 1983
!topic Delete ", 6.1," on line 4
This ", 6.1," is not related to "object", and seems irrelevant for
this section.
************************************************************************
!section 03.04 (06) Norman Cohen 88-05-04 83-00968
!version 1983
!topic "Corresponding" predefined operation of a derived type
Certain predefined operators have an operand that is always of type
INTEGER. These are fixed-point-INTEGER multiplication and division
(4.5.5(7)) and exponentiation (4.5.6(5)).
3.4(6) states, "For each enumeration literal or predefined operator of
the parent type there is a corresponding operation for the derived type."
Please confirm (or bindingly interpret) the following:
1. For the predefined operators described above, the "corresponding
operation" of the derived type has a corresponding operand that is
necessarily of type INTEGER. Thus, given the declaration
type NI is new INTEGER;
the operator corresponding to the parent-type operator
function "**" (LEFT, RIGHT: INTEGER) return INTEGER;
is
function "**" (LEFT: NI; RIGHT: INTEGER) return NI;
and not
function "**" (LEFT, RIGHT: NI) return NI;
(3.4(13), which states, "The specification of a derived subprogram is
obtained implicitly by systematic replacement of the parent type by
the derived type in the specification of the derivable subprogram,"
does not apply here because, by 3.4(11), derivable subprograms must be
either explicitly declared or derived from a derivable subprogram.)
2. Except for the special case
function "**" (LEFT, RIGHT: INTEGER) return INTEGER;
(in which LEFT happens to be of type INTEGER), the predefined
operators described above are not to be considered "predefined
operators of the parent type" in the sense of 3.4(6) when the parent
type is INTEGER. Thus, if there is a predefined operator
function "**"
(LEFT: LONG_INTEGER; RIGHT: INTEGER) return LONG_INTEGER;
the declaration
type NI is new INTEGER;
does not cause the following operation to be derived:
function "**"
(LEFT: LONG_INTEGER; RIGHT: NI) return LONG_INTEGER;
(3.3.3(1) states, "The set of operations of a type includes the
explicitly declared subprograms that have a parameter or result of the
type...," but 3.3.3(2), which states that predefined operators are
also operations, does not explain when a predefined operation is "an
operation of" a given type.)
*****************************************************************************
!section 03.04 (06) Norman Cohen 88-07-11 83-00987
!version 1983
!topic Addendum to comment 83-00968 (AI-00567)
By analogy to point (2) of the comment, the "special" multiplying
operators described in 4.5.5(9), taking two fixed-point operands and
returning a universal_fixed result, are not to be considered "predefined
operators of the parent type" in the sense of 3.4(6) when the parent type
is a fixed-point type. Thus, if there is a predefined operator
function "*"
(LEFT: FPT1; RIGHT: FPT2) return universal_fixed;
(where FPT1 and FPT2 are fixed-point types), the declaration
type NFPT1 is new FPT1;
does not cause the following operation to be derived:
function "*"
(LEFT: NFPT1; RIGHT: FPT2) return universal_fixed;
Such a multiplication operation does indeed exist, but it is declared
presciently in STANDARD (see paragraph 11 of Annex C) rather than
derived.
*****************************************************************************
!section 03.04 (10) Software Leverage, Inc. 83-10-31 83-00195
!version 1983
!topic representation clauses and chains of derivation
The RM seems to indicate that a type derived from a parent type with a
representation clause must use the representation of its parent type, but
that this restriction can be circumvented if one is persistent.
Consider the following chain of derived types:
type A is (A1, A2, A3); -- explicit rep clause
for A use (1, 10, 100);
type B is new A; -- implicit rep clause, by 3.4(10)
type C is new B; -- ?
Paragraph 3.4(10) does not say what happens to a derived type like C, whose
parent type has an implicit representation clause. Does it also inherit
the representation clause (if any) from its parent type? Or does it revert
to the default representation? Can the compiler's default representation
chooser cleverly notice a case like this and choose a representation that
matches the parent type's implicit representation clause?
Paragraph 13.1(3) states that each type can have only one representation
clause, but it does not say whether this restriction applies to implicit
representation clauses, as defined in 3.4(10). Can B's implicit represen-
tation clause be overridden by an explicit one, or would this be illegal?
Would a representation clause for C be legal?
Assuming the straightforward reading that B has an implicit representation
clause but C has none, what was the rationale for making representation
clauses only inherit through derivation for a while, then fade out?
Interesting sidelight: If the straightforward interpretation is taken,
then the following would be legal:
type BYPASS_REP_CLAUSE_RESTRICTIONS is new CHARACTER;
type EBCDIC_CHARACTER is new BYPASS_REP_CLAUSE_RESTRICTIONS;
for EBCDIC_CHARACTER use ( ... );
Now we can conveniently do type conversions between CHARACTER and
EBCDIC_CHARACTER (since they are related by derivation), but we had to go
through horrible contortions to achieve it.
************************************************************************
!section 03.04 (10) Ron Brender 83-10-29 83-00196
!version 1983
!topic Evaluation of derived representation clauses
RM 3.4(10) states:
. If an explicit representation clause exists for the parent type
and if this clause appears before the derived type definition,
then there is a corresponding representation clause (an implicit
one) for the derived type.
This wording suggests a textual copy semantics for the derivation of
representation clauses. However, if a textual copy semantics applies
then there arises the question of where the implicit representation
clause exists and when any expression contained in the representation
clause is evaluated for the derived type. Except for the
specification of storage for a task specification, 13.2(10), and
address clauses, 13.5, the question is moot because such expressions
are required to be static and hence evaluate to the same result and
are free of side-effects no matter how many times in however many
places the expression is evaluated.
Now consider:
function F return INTEGER;
task type TSK is ... end;
for TSK'STORAGE_SIZE use F;
type NEW_TSK is new TSK;
-- for NEW_TSK'STORAGE_SIZE use F; -- Implicit rep spec
-- Is F evaluated again?
In this example, the question of whether the expression F for the task
size of NEW_TSK is evaluated again is not moot.
If it is intended that F be evaluated again in such examples, then
this should be made clear. Further, the place of the implicit
representation specification must be clarified in order to make clear
where/when the evaluation occurs.
Alternatively, it may only have been intended that the VALUE of the
representation clause for TSK should be the value that is used for
NEW_TSK; in particular, that the representation clause is not to be
evaluated again. If so, this must be made clear.
************************************************************************
!section 03.04 (10) Software Leverage, Inc. 84-10-10 83-00448
!version 1983
!topic Derived Task Types
LRM 3.4(10) says "If an explicit representation clause exists for [a] parent
type and if this clause appears before the derived type definition, then there
is a corresponding representation clause (an implicit one) for the derived
type".
Consider the following:
task type T is
entry E;
for E use ...;
end T;
type D is new T;
X: D; -- what is X.E'Address?
The wording of the manual seems to imply that there is no implicit address
clause for X.E, since the explicit clause doesn't "exist for" T but only for E.
Is this correct? What was the intent?
************************************************************************
!section 03.04 (10) Lester p.p. Ada-Europe 85-09-05 83-00652
!version 1983
!topic AI-138: rep clauses for derived types
In essence, we approve of 138/03, but:
(1) in the !question part:
"This wording raises [two]{three} questions:"
(2) We'd prefer a more detailed discussion, especially on the
underlying philosophy. Something like:
"A forcing occurrence is a place you nail down the default
representation as THE representation, if an explicit representation has
not already been given. So a derivation before the first forcing
occurrence yields a derived type whose representation has not yet been
nailed down, AND has a default representation which will be used to
nail down the representation at a forcing occurrence unless there is
first an explicit representation specification."
(3) Perhaps an extension of the issue: WHAT EXACTLY IS AN "IMPLICIT
REPRESENTATION CLAUSE"? Certainly it can arise by derivation from a
parent type with an explicit or implicit representation clause: but CAN
IT ARISE BY ANY OTHER MEANS? E.g., by a forcing occurrence:
type R is record
I : INTEGER;
B : BOOLEAN;
end record;
PROGRESS : R := (135, TRUE);
for R use... -- illegal.
type New_R is new R;
for New_R use... -- illegal?
There is no explicit LRM wording that a forcing occurrence IS or IS NOT
an implicit rep clause, or that a forcing occurrence for a PARENT is
also forcing for a subsequently-derived type, so it seems New_R's rep
clause is NOT illegal (just odd). In short, in this context, DOES
"IMPLICIT" MEAN "INHERITED"?
*****************************************************************************
!section 03.04 (10) Mike Woodger 88-08-19 83-01019
!version 1983
!topic Representation clauses for derived types
!reference AI-138/09
AI-138/09 has received AJPO approval, but some flaws remain.
. The summary and the recommendation disagree regarding SMALL. The
summary allows an explicit clause for the derived type when there is
one for the parent, but the recommendation excepts this case.
. Although the discussion clearly suggests that an implicit
representation clause can be overridden by an explicit one (thus
answering YES to the second question), the recommendation only says it
is legal to have both clauses present. It does not specify the
semantics of this situation. Nor does the summary.
. The first paragraph of the recommendation fails to answer the first
question (the original comment 83-00195). It leaves the problem that
an implicit clause does not generate another one for a subsequent
derived type, so its effect is not inherited beyond that type.
I think we should revise (revisit) this Commentary as a matter of urgency.
The following five points summarize my understanding of the situation:
(1) The representation of a derived type is unaffected by forcing
occurrences or representation clauses for its parent type that occur later
than the derivation. (For a given aspect of representation.)
(2) If neither explicit nor implicit clauses nor forcing occurrences exist
for an aspect of the representation of the parent type prior to a
derivation, then the representation of the derived type can differ from
that of the parent in this respect. (With or without clauses for the
derived type.)
(3) If a forcing occurrence prior to a derivation has forced the
determination of the representation of the parent type, then the
representation of the derived type can be different. (With or without a
representation clause for the derived type.) (Do we want this?)
(4) If there is no explicit clause for the parent type prior to a
derivation, then there is no corresponding implicit clause for the derived
type. (So "explicit" remains in 3.4(10). Do we want this?)
(5) If both explicit and implicit clauses exist for an aspect of the
representation of a type, then the implicit clause takes effect up to the
explicit clause and the explicit clause applies thereafter.
Consequently a representation clause explicitly given for a parent type,
prior to the derivation, generates an implicit one for the derived type, and
this applies also to a third type derived from the second type (in the
absence of another explicit clause).
But now the effect "dies out". Although the representation of the third
type is governed by the original explicit clause, there is no implicit clause
for it, so a further fourth derived type is free in this respect.
This is bizarre to a greater degree than the situation we had before.
Surely the AI is intended to prevent this. This can be done by putting
"determined" in the place of the words "specified by an implicit or explicit
representation clause" in the summary (and recommendation).
This leaves inheritance of representation in every case, so that explicit
specification is necessary in order to change it. The notion of implicit
clauses then becomes redundant (or if preferred can be retained but always
inherited), and so does the final recommendation on evaluation of
expressions.
(See Comment 83-00652.)
Example of existing rules:
declare
type A is (A1, A2, A3);
for A use (1, 2, 4);
type B is new A;
-- implicit: for B use (1, 2, 4);
type C is new B;
-- no new implicit clause, but the old one "applies"
for B use (1, 2, 3);
-- overrides the implicit clause for B
-- but too late to affect C
type D is new C;
-- no clause applies to D. (Should the old one still apply?)
X : B := A3; -- uses code 3; forces SIZE determination for B
Y : C := A3; -- uses code 4; forces SIZE determination for C
Z : D := A3; -- forces all representation determination for D
begin
end;
*****************************************************************************
!section 03.04 (10) Randy Brukardt 91-08-19 83-01423
!version 1983
!topic Comments on version 02
Unfortunately, we didn't receive the ARG meeting invitation until after the
meeting. We had wanted to comment on AI-599.
Just in case there will be further discussion on it, here are our comments on
the current draft.
AI-599(02) comments.
Summary: This AI, as written, eliminates the possibility of one-pass compilers
for Ada, and also renders the entire idea of forcing occurrence moot and
unnecessary. It could not have been the intention of the framers.
AI-599(02) essentially eliminates the utility of forcing occurrence.
Since a forcing occurrence of a derived type is NOT enough to be able to
determine its representation (since a rep. clause of the parent type may
follow later), a compiler cannot begin to generate code for a type when a
forcing occurrence is reached. This is an insane position to take, since
it essentially eliminates any need for the concept of forcing occurrence.
(If the representation cannot be determined at some forcing occurrence, it
might as well not be determined at all of them. It is no more work for a
compiler. Then, there is no need for the concept and the complexity).
One of the primary reasons for defining forcing occurrences is that they
allow a (mostly) one-pass implementation of Ada. Forcing occurrences are defined
such that it is never necessary to generate code for a type before its
representation is determined. This means that no backpatching of
representations is required (and that is a good thing, since that is a very
difficult thing to accomplish).
However, saying that the representation can change after a forcing occurrence
eliminates this nice property. This could have a major impact on many Ada
compilers (it certainly would on ours). [Our implementation of Ada semantics is
essentially one pass, with the syntax pass providing some purely syntactic
'helpers' to implement proper visibility of block names and labels.]
An example of the effect can be seen in M. Woodger's Example 1 in his
comment of 89-06-07. In this case, the rep. clause for A follows the
forcing occurance for type B. The AI says that a compiler must choose
representation 4 for literal A3, yet this is not known until after the
declaration of X (and its code generation, in a one-pass compiler).
This problem with the AI-599(02) can be fixed two ways: First, make a
forcing occurrence on a derived type forcing on the parent type also (if not
already forced). That preserves the properties desired without eliminating
the reason for forcing occurrence. Alternatively, eliminating the connection
between the representation of the parent and the derived type would work
(although it essentially would reverse the AI as it stands.)
If this draft of the AI has been approved, please inform us, so we can contact
WG9 about rejecting it. Thank you.
Randy Brukardt.
P.S. I do have a bunch of other comments which I've been meaning to write up
for a long time. Should I send them to you, or to some other mail address?
*****************************************************************************
!section 03.04 (11) 8.3(17) JLG 83-06-08 83-00138
!version 83
!topic derivation of enumeration literals
package P is
type T is (Red);
type NT is private;
function Red return NT;
private
type NT is new T;
type NNT is new NT;
end P;
For the full declaration of NT, the derived enumeration literal is hidden
by the explicit function Red, but is still derivable according to 3.4(6).
Thus for the declaration of NNT, we get two implicit declarations of
homographs. The problem is that this case is not covered by 8.3(17),
because an enumeration literal is neither a predefined operation nor a
derived subprogram in the sense of 3.4(11).
My answer was that enumeration literals should behave in this case as
predefined operations, and thus be hidden by derived subprograms. In the
above example, the declaration of NNT would be legal.
Treating enumeration literals as subprograms would not explain why they are
always derivable.
************************************************************************
!section 03.04 (11) JLG 83-06-08 83-00139
!version 83
!topic multiple derivations of homographs
package P is
type T is private;
type S is range 1..10;
procedure Q (X: T; Y: S);
procedure Q (X: T; Y: T);
private
type T is new S;
type NT is new T;
end P;
For the full declaration of T, the first Q is derived, but hidden "by a
derivable subprogram of the first kind" and is consequently not derivable.
The declaration of NT is thus legal since only one Q(NT, NT) is obtained.
Right?
************************************************************************
!section 03.04 (11) 8.3(17) 83-00140
!version 83
!topic multiple derivations, etc. (83-00139, 83-00140)
Re: JLG's message of 83-06-08
I agree with Jean-Loup's analysis, and will add his comment on 8.3(17).
More problems with derived types, eh?
************************************************************************
!section 03.04 (11) Ada Europe/Pepperdine 85-07-09 83-00589
!version 1983
!topic derivable subprograms of a generic formal type
What subprograms are derivable when a type is derived from a generic
formal type, or from another type itself derived from a generic formal
type?
Consider the following example:
generic
type T is (<>);
with procedure P(X : T);
package Z is
type N_T is new T;
-- What subprograms are derived for N_T?
procedure Q(Y : N_T);
end Z;
The easy part of this comment deals with P. It is fairly clear that P
cannot be derived, since P does not appear in the visible part of any
package, it being a generic formal parameter.
So let us introduce another derived type in the body. (This could
equally well appear in the private part of the generic package Z).
package body Z is
type N_N_T is new N_T;
-- What subprograms are derived for N_N_T?
end Z;
But it is far less clear what happens to Q when N_N_T is derived. The
simple fact is that N_T does not appear in the visible part of a
package. (It is in the visible part of a generic package. The LRM
makes a very clear distinction between a package and a generic package -
see 12(1-4) & 7(1)). Hence it would appear that Q is not derivable in
this case.
But now let us see what happens if we instantiate Z. Is the same still
true?
package G is
type U is (ALPHA, BETA);
package N_Z is new Z(U);
-- What subprograms are derived for N_Z.N_T?
end G;
Here we have a package N_Z (by 12.3(5)), which is an instantiation of
the generic package Z. Hence N_Z.N_T is not declared in the visible
part of a package. So by 3.4(11), Q is a derivable subprogram of the
first kind.
Hence it seems that Q is derivable only OUTSIDE the generic package and
not in its body - a somewhat arbitrary result.
Further investigation (12.1(5)) implies that the name Z, when used
inside the generic package denotes the current instance (not the generic
package). It could be argued that the name of Q is really Z.Q inside
the generic package, and hence Q is to be found inside the (implied)
package Z. The unfortunate thing is that in deriving subprograms for
the type N_T, the identifier Z is nowhere used syntactically.
What was the intent? I believe that 12.1(5) is meant to imply that
inside the generic package specification or body, the behavior of types
and objects is the same as the equivalent ones in (non-generic)
packages. Thus, the derivation of Q should be allowed for N_N_T. If
that is the case, then the wording in 12.1 should be changed. Perhaps
the description of deriving subprograms should also be made clearer in
this instance.
Next, if this is the intent, it seems somewhat peculiar that the
procedure P can be applied to T, but not to any of the types derived
from T. Is this intentional? Or should there be a further modification
to allow the derivation of generic formal subprograms as though they
were declared in the same visible part of a package?
Finally, the obvious way of clearing this point up would be to prevent
all derivation of types from generic formal types. Why is it allowed
anyway? Is there any good example from real life?
*****************************************************************************
!section 03.04 (11) J. Goodenough 85-09-26 83-00671
!version 1983
!topic Not all operations of a type are derivable
One might think that all subprograms declared for a type in the visible part
of a package are derivable outside the package, but this is not the case. A
pathological case exists in which certain implicitly declared subprograms
appear not to be derivable:
package P1 is
type T1 is new P.T;
-- assume function F is derived here from P.T
package P2 is
type T2 is range 1..10;
procedure G (X : T1; Y : T2);
end P2;
type T3 is new P2.T2;
-- derives G (X : T1; Y : T3)
end P1;
type T4 is new P1.T1;
-- G is not derived for T4
P1.G is an operation of type T1, but it is not derivable. 3.4(11) defines
two kinds of derivable subprograms: one kind must be declared explicitly, and
the other kind is declared implicitly by derivation. G is clearly a
subprogram of the second kind. The specific rule says:
If the parent type is itself a derived type, then any
subprogram that has been derived by this parent type is further
derivable, unless the parent type is declared in the visible
part of a package and the derived subprogram is hidden by a
derivable subprogram of the first kind.
G is not hidden by any declaration, so the part after the "unless" doesn't
apply. The question is whether G is a subprogram "derived by" parent type
P1.T1. The straightforward interpretation is that G is not derived "by" the
declaration of P1.T1, since it isn't implicitly declared after the
declaration of P1.T1. Therefore, G is not a derivable subprogram.
Is this reasoning correct?
*****************************************************************************
!section 03.04 (11) M Woodger 88-11-05 83-01068
!version 1983
!topic Generic packages overlooked
!reference AI-00367/04
In the fourth sentence, after "package" insert "or generic package".
*****************************************************************************
!section 03.04 (11) M Woodger 88-11-05 83-01069
!version 1983
!topic Within the parenthesis, replace "is" by "must be".
This is a requirement, not a description of the situation.
Add within the parenthesis:
"Enumeration literal specifications are not included."
This makes it clear that the parenthesis is not just a comment. See
Ada Answer no.4629 (82-09-15) and 3.1(4).
*****************************************************************************
!section 03.04 (11) N. Cohen 89-08-04 83-01297
!version 1983
!topic Derivability of subprograms declared by renaming
3.4(11) says, "First, if the parent type is declared immediately within
the visible part of a package, then a subprogram that is itself
explicitly declared immediately within the visible part becomes derivable
after the end of the visible part, if it is an operation of the parent
type." Consider the following example:
package P1 is
type T is new INTEGER;
package P2 is
function F return T;
end P2;
function F return T renames P2.F; -- Derivable operation of T?
end P1;
with P1;
package P3 is
type NT is new P1.T;
X: NT := F; -- legal?
end P3;
Is F considered, by virtue of the renaming declaration, to be "explicitly
declared immediately within the visible part" of P1?
*****************************************************************************
!section 03.04 (11) R. Eachus 89-08-07 83-01298
!version 1983
!topic Derivability of subprograms declared by renaming
I sent in a comment during the ANSI canvass (I remember
discussing it with John Goodenough at the Washington International Ada
Conference) to ask whether the effect that an entry can be renamed as
a derivable subprogram was intended:
package P is
type T is new INTEGER;
task TSK is
entry E (TP: in out T);
end TSK;
procedure E (TP: in out T) renames TSK.E;
end P;
with P;
procedure Main is
type NT is new P.T;
X: NT;
begin
E(X); -- legal?
end Main;
I think the answer was one word: "Yes." However the fact that
renamings cannot be used to provide bodies for subprogram declarations
from package specifications means that I have never found a use for
this. (Now if we get this feature in Ada9X, and possibly entries with
return values, I'll use this feature all over the place.)
Robert I. Eachus
*****************************************************************************
!section 03.04 (14) M Woodger 88-11-05 83-01070
!version 1983
!topic Grammar
Replace "in which ... is replaced by" by:
"after replacing each actual parameter that is of the derived type by"
*****************************************************************************
!section 03.04 (15) J. Goodenough 85-09-27 83-00670
!version 1983
!topic When is a numeric type a derived type?
Consider the declaration:
package P is
type T is range 1..10;
type NT is new T; -- legal?
end P;
3.4(15) says:
If a derived or private type is declared immediately within the
visible part of a package, then, within this visible part, this
type must not be used as the parent type of a derived type
definition.
The declaration of NT is illegal if T is considered to be a derived type.
3.4(1) defines the term "derived type":
A derived type definition defines a new (base) type whose
characteristics are derived from those of a PARENT TYPE; the new
type is called a DERIVED TYPE.
This definition seems to say that the only form of type declaration that
declares a derived type is the form that contains a derived type definition.
Since the declaration of T, as it occurs in the source text, does not contain
a derived type definition, it seems that T is not a "derived type," so the
declaration of NT is legal. Is this reasoning correct?
Of course, 3.5.4(5) says that the declaration of T is equivalent to the
declaration of a derived type, but it has already been shown that this
equivalence cannot be taken too seriously. Is it to be ignored in this case
also?
The rule in 3.4(15) was created to take care of certain anomalies that could
arise when the parent type had derivable subprograms (see comments #5212 and
#5250; #5271 says that this restriction applies to numeric types, but this
comment is not an LDT comment). For numeric type definitions, there are no
derivable subprograms (the only implicitly declared subprograms are for the
operators of the type), so such difficulties do not arise. In short, I don't
think there is any harm in following what the RM appears to say, and consider
that T is not a derived type.
*****************************************************************************
!section 03.04 (20) M Woodger 88-11-05 83-01071
!version 1983
!topic Replace "instantiation" by "instance"
Not meant.
*****************************************************************************
!section 03.05 (02) MATS WEBER, DALIN SOFTWARE 86-04-28 83-00745
!version 1983
!topic Semi-constrained subtypes
ONE POINT IS THAT OF THE INTEGER SUBTYPES NATURAL AND POSITIVE WHICH ARE
UNFORTUNATELY CONSTRAINED AND ONE MAY WRITE :
type TEXT (LENGTH : NATURAL := 0) is
record
ST : STRING (1..LENGTH);
end record;
T : TEXT;
WHICH IS CATASTROPHIC BECAUSE THE OBJECT T REQUIRES INTEGER'LAST BYTES
OF MEMORY AND IS CONSTRAINED. IN SUCH A CASE IT SHOULD NOT BE ALLOWED
TO DECLARE AN OBJECT OF TYPE TEXT WITHOUT EXPLICITLY CONSTRAINING IT.
ONE WAY TO ACHIEVE THIS WOULD BE TO INTRODUCE "SEMI-CONSTRAINED
SUBTYPES" WHICH ARE CONSTRAINED ONLY WITH ONE (UPPER OR LOWER) BOUND.
THE SUBTYPES NATURAL AND POSITIVE WOULD BE DECLARED AS FOLLOWS,
SPECIFYING ONLY THE LOWER BOUNDS :
subtype NATURAL is INTEGER range 0.. ;
subtype POSITIVE is INTEGER range 1.. ;
AND THE ATTRIBUTES NATURAL'CONSTRAINED AND POSITIVE'CONSTRAINED WOULD
YIELD THE VALUE FALSE.
*****************************************************************************
!section 03.05 (03) M Woodger/Alsys 83-05-06 83-00148
!version 1983
!topic "TRUE" -> "true", at both occurrences (as on the second line).
************************************************************************
!section 03.05 (03) BA Wichmann 83-04-28 83-00150
!version 1983
!topic FIRST and LAST for real types
In 3.5(3) the attributes FIRST and LAST are defined in terms of the
predefined relational operators <= and <. However 4.5.7 deliberately does
not define these relational operations outside the range
-F'SAFE_LARGE..F'SAFE_LARGE. This is an inconsistency which has some
unsatisfactory consequences in some cases. On a machine using the
IEC(IEEE) floating point standard, values outside the range of safe numbers
can arise. For instance, in projective rounding mode, an unsigned infinity
can be obtained. Such values inhibit the construction of the values FIRST
and LAST as given in 3.5(3). Hence 3.5(3) should be altered (for real
types) to conform to 4.5.7.
************************************************************************
!section 03.05 (05) Brian WICHMANN 87-02-16 83-00904
!version 1983
!topic 'FIRST and 'LAST for real types
Unfortunately, the proposed recommendation (87-01-16) does not work on some
"reasonable" implementations. Consider a machine that stores type FLOAT in one
word, but performs all arithmetic in double-length registers. (We could assume
that the double length increases both the exponent and significand sizes.) The
only reasonable value for 'FIRST/'LAST is the minimum/maximum value that can be
stored in one word. If a compiler for such a machine always stores "objects" in
main memory (i.e. FLOAT objects in one word), then the proposal works. However,
this is a grave injustice to such a machine, because an optimizing compiler will
get "better" results by leaving "objects" in registers. The Brown model used by
Ada (4.5.7) was formulated to overcome just these problems.
One way out of the problem would be to make it a non-binding interpretation.
Cray and CDC should be asked if they can define the attributes to satisfy the
proposal for their machines. (Most conventional machines have no problems.)
*****************************************************************************
!section 03.05 (08, 09) M Woodger/Alsys 83-05-06 83-00149
!version 1983
!topic Definition of T'FIRST and T'LAST for real types
T'LAST yields the upper bound of T. But what is the upper bound of a real
type whose declaration does not have a range constraint? Or a predefined
type such as FLOAT? The problem is that T'LAST need not yield a safe
number (see 3.5.8(19)), and the predefined relational operators <= and <
that are used to define lower bound and upper bound of a range, and
belonging to a range, are undefined outside the range of safe numbers, by
4.5.7(8 and 10), because the operand intervals are there undefined. So if
T'LAST is outside the range of safe numbers (in the ordinary numerical
sense) then it cannot be distinguished from T'SAFE_LARGE, nor from any
larger machine representable value, and is essentially undefined. Section
4.5.2 does not help much; 4.5.2(4) seems to appeal to 4.5.7 only for the
case of nearly equal values, yet 4.5.7(1) is quite general. The confusion
is between the intuitive and well understood ordering of mathematical
values on the one hand and the results obtainable by machine operations on
the other; the former satisfies the axioms of order, the latter often not.
Paragraph 3.5(3) speaks of "the values from L to R inclusive"; this means
in the mathematical sense of order, since "from" and "to" are not otherwise
defined.
Probably the definitions of T'FIRST and T'LAST should not use the terms
"lower bound" and "upper bound", but (as in July 80 Ada) should speak of
minimum value of a type, and a lower bound only of a subtype.
************************************************************************
!section 03.05 (09) J. Goodenough 87-07-22 83-00937
!version 1983
!topic T'LAST can't raise NUMERIC_ERROR
!reference AI-00174
AI-00174 should mention that NUMERIC_ERROR can never be raised for the
evaluation of 'LAST. Consider the declaration:
type F is delta 2**(-15) range -1.0 .. 1.0;
AI-00144 says this is a legal declaration even if F'SIZE is equal to 16 so 1.0
is not a representable value. Although F'(1.0) can raise NUMERIC_ERROR (or
CONSTRAINT_ERROR, given AI-387), F'LAST cannot raise NUMERIC_ERROR. Although
3.5(9) says that F'LAST yields the upper bound of F, this does not mean that
F'LAST is equivalent to F'(1.0), but instead, means that F'LAST yields a value
such that any storable value of F is less than or equal to F'LAST. Certainly
the intent was that F'LAST always yield a defined value.
*****************************************************************************
!section 03.05 (10) Don Clarson 83-06-30 83-00005
!version 1983
!topic {Discriminant and choice rules use values of discrete types.}
************************************************************************
!section 03.05.01 (01) C(12) M. Woodger/Alsys 85-08-29 83-00623
!version 1983
!topic Representation of non-graphic characters
!reference AI-00239/06
Aside from the incomplete definition of T'VALUE noted by this
Commentary, there is a contradiction between the texts of
3.5.2(1), 3.5.1, and C(12) that should be resolved.
For the predefined type CHARACTER given in STANDARD, C(12) asserts:
"Character literals corresponding to control characters are not
identifiers; they are indicated in italics in this definition."
There are two things wrong here. First, a character literal is
defined in 2.5, and never is an identifier. We can fix this up,
and bring it in line with the definition of T'IMAGE in 13.5.5(11),
by replacing the above by:
"The images of control characters are implementation-defined
(3.5.5); they are indicated in italics in this definition."
But then the second problem is that these images MUST be identifiers,
because CHARACTER is an enumeration type, whose syntax (3.5.1) only
admits character literals and identifiers. 3.5.2(1) says:
"The predefined type CHARACTER is a character type ..."
and:
"An enumeration type is said to be a character type if at least
one of its enumeration literals is a character literal."
One of 3.5.2(1) and C(12) will have to be changed, whatever the
outcome of this AI.
*****************************************************************************
!section 03.05.01 (03) Japanese comments on DP8652 85-05-10 83-00560
!version 1983
!topic Overloading subprograms and enumeration literals
It is ambiguous whether the overloading between subprograms and
enumeration literals is allowed or not.
3.5.1 says, "This [enumeration literal] declaration is equivalent to the
declaration of a parameterless function." Does this wording implicitly
define the permission of the overloading between subprograms and
enumeration literals?
*****************************************************************************
!section 03.05.01 (03) 03.01(04) Mike Woodger 85-05-24 83-00590
!version 1983
!topic An enumeration literal is a predefined operation
Introduction:
At the May 85 Paris meeting of the LMC, discussing AI-00002/4 and
AI-00330/01, the question
was raised whether there was a conflict between section 3.3.3(2) of the
RM, which says enumeration literals are implicitly declared operations,
and sections 3.1(4,5) and 3.5.1(3) which suggest the opposite.
We suggest here that a careful reading can reconcile the apparent
conflict.
Analysis:
3.1(1) tells us -
"The language defines several kinds of entities that are
declared, either explicitly or implicitly, by declarations.
Such an entity can be ... an operation (in particular, ...
an enumeration literal; see 3.3.3)."
Then 3.1(4) continues -
"Certain forms of declaration always occur (explicitly) as
part of a basic declaration; these forms are ... enumeration
literal specifications."
and 3.1(5) says -
"The remaining forms of declaration are implicit ... Certain
operations are implicitly declared (see 3.3.3)."
So we refer to 3.3.3 and read -
"The set of operations of a type includes the explicitly
declared subprograms ... The remaining operations are each
implicitly declared for a given type declaration,
immediately after the type definition. These implicitly
declared operations comprise the basic operations, the
predefined operators (see 4.5), and enumeration literals."
(Note that enumeration literals are listed as a separate class from the
predefined operators. Section 4.5 explains "predefined" as meaning
implicitly declared by the type declaration, so we are justified in
calling all three classes "predefined operations".)
The conclusion so far:
(1) An enumeration literal specification is a form of
declaration that occurs explicitly as part of a basic
declaration.
(2) An enumeration literal is an operation implicitly declared
for a given type declaration, immediately after the
enumeration type definition.
Next we turn to 3.5.1(3), which says -
"Each enumeration literal specification is the declaration
of the corresponding enumeration literal: this declaration
is equivalent to the declaration of a parameterless
function, the designator being the enumeration literal, and
the result type being the enumeration type."
Conclusion:
There is no contradiction if we read this as telling us that each
enumeration literal specification that appears in an enumeration type
definition (as part of a basic declaration) has the effect of implicitly
declaring an operation that immediately follows this type definition -
namely the enumeration literal treated as a parameterless function.
*****************************************************************************
!section 03.05.01 (03) J. Goodenough 85-11-15 83-00686
!version 1983
!topic Character literals are implicitly declared as functions
The wording of the paragraph says:
[the declaration of an enumeration literal] is equivalent to
the declaration of a parameterless function, the designator
being the enumeration literal and the result type being the
enumeration type.
The syntax for a "designator" says (6.1) that a designator is either an
identifier or an operator symbol. Neither of these alternatives includes the
syntax for a character literal. The above use of the term designator might
be interpreted to mean that character literals are not implicitly declared as
functions, since the term designator excludes such a form of function name.
On the other hand, the intent was to have all enumeration literals considered
to be declared as functions, so the above wording needs to be modified to
make it clear that a character literal is implicitly declared as a function.
For example, one might say that "the designator of such a function is allowed
to have the form of a character literal."
*****************************************************************************
!section 03.05.01 (03) Ron Brender 86-02-11 83-00709
!version 1983
!topic Enumeration literals
!reference AI-00330/08
The LMC should note that test B83A06B of ACVC V1.7 (and preliminary
V1.8) has been challenged on the basis of AI-00330 as approved in
November. The test declares an enumeration literal and a label with
the same name in the same declarative part and expects an
implementation to report an error. When enumeration literals are
viewed as an implicitly declared predefined operation, this is no
longer illegal. The label, which is not overloadable, hides the
enumeration literal.
There are two points to be made:
1. However innocuous it might seem for a function homograph to
hide an enumeration literal, it seems far more serious and
surprising for a nonoverloadable entity to do so.
2. This test has been in the validation suite in this form since
February 1984. Thus every compiler validated since ACVC V1.4
has detected this situation as an error. Every one of them
must be changed according to this AI. We had previously
assumed that this AI could be resolved independently of the
validation suite interactions, but this is now seen as
clearly not the case.
An equally insidious example came to my attention recently:
package FOO is
type ENUM is (EA, EB, EC, ...);
-- many lines later
package EB is ... end; -- hides enumeral EB!
end;
use FOO;
O : ENUM := EB; -- illegal
Note that outside the package there is no way to name EB at all, even
using selected components!
Finally, it occurs to me that the following is made legal by the AI:
type T is ...;
type E is (S, T, U, V); -- T is hidden "before" it is declared!
And so is:
type E is (A, B, C, D, E); -- Enumeral E is hidden by its own type!
I urge that this AI be reconsidered.
*****************************************************************************
!section 03.05.02 (01) M Woodger 88-11-05 83-01073
!version 1983
!topic Inconsistent use of "character literal"
The term "character literal" defined in 2.5(2) excludes the constants
denoting control characters, yet this term is used in 3.5.2(3) and in
C(12) to include these.
*****************************************************************************
!section 03.05.04 (00) Ivar Walseth 88-08-19 83-01020
!version 1983
!topic Why We Need Unsigned Integers in Ada
*
Sivilingenior Kjell G. Knutsen, A.S.
P.O. Box 95
N-4520 SOR-AUDNEDAL
Norway
19th August 1988
Dave Emery
MITRE
MS A156
Bedford, MA 01730
Why We Need Unsigned Integers in Ada
During the last year I've managed a project where we are implementing
communication protocols in Ada (protocols specified in the CCITT
recommendations X.213, X.214, X.215, X.224, X.225, X.409, X.410,
X.411, X.420).
In the communications world they operate with octets. Each octet
contains 8 bits, and all bit combinations should be available. For
this purpose it is of course possible to define the type:
type octet is range 0..255;
for octet'size use 8; -- optionally
So far so good. The need for a standardized unsigned integer facility
in Ada arises when we are using this octet type for generating
checksums (according to X.224, Appendix I). This algorithm requires
modulo 255 arithmetic. For this purpose we have of course implemented
our own slow machine-independent arithmetic operators.
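Such a machine-independent accumulation might be sketched in Ada 83 as
below. This is only an illustration of the "slow" approach the letter
alludes to (cf. the two running sums of the X.224 Appendix I checksum),
not the actual project code; all names here are invented:

```ada
-- Sketch: mod-255 accumulation over a sequence of octets, using only
-- predefined integer arithmetic.  C0 and C1 are the two running sums
-- kept by the X.224-style checksum.
procedure Accumulate is
   type Octet is range 0 .. 255;
   type Buffer is array (POSITIVE range <>) of Octet;

   C0, C1 : INTEGER := 0;  -- running sums, reduced modulo 255
   Data   : constant Buffer := (16#01#, 16#C1#, 16#00#);
begin
   for I in Data'RANGE loop
      C0 := (C0 + INTEGER (Data (I))) mod 255;
      C1 := (C1 + C0) mod 255;
   end loop;
end Accumulate;
```

The point of the sketch is that without an unsigned or modular type,
every step pays for an explicit "mod 255" in portable code.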
The next problem arises when the protocol specifications say that bit
number 6 has some special meaning (as defined in table 2 in X.409). To
fetch (and store) this value we use logical operators such as "and" and
"or" and shift functions. These are implemented in a
compiler-dependent way. We could of course have used some tricky
records or multiplication and division operators, but we didn't find these
solutions better when it comes to portability and performance.
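The division-based workaround mentioned above might look like the
following in portable Ada 83. The package and subprogram names are
illustrative assumptions only, with bit 1 taken as the least
significant bit:

```ada
-- Sketch: portable (if slow) single-bit fetch using only division and
-- "mod", avoiding compiler-dependent logical operators on integers.
package Bit_Ops is
   type Octet is range 0 .. 255;
   subtype Bit_Position is INTEGER range 1 .. 8;  -- 1 = LSB, 8 = MSB
   function Fetch_Bit (Value : Octet; Bit : Bit_Position) return Octet;
end Bit_Ops;

package body Bit_Ops is
   function Fetch_Bit (Value : Octet; Bit : Bit_Position) return Octet is
   begin
      -- dividing by 2**(Bit-1) shifts the wanted bit down to
      -- position 1; "mod 2" then isolates it
      return (Value / Octet (2 ** (Bit - 1))) mod 2;
   end Fetch_Bit;
end Bit_Ops;
```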
In addition to the use of octets, other parts of the protocols need
bitwise operations on bigger unsigned integers (ref. tag and length
coding of X.409-units). My wish for Ada 9X is therefore either the 3
types unsigned_8, unsigned_16 and unsigned_32, where the number
specifies the number of bits of the variable, or a more general type
unsigned from which my own types might be created. In the latter case
Ada should guarantee at least 32 bits are available in the type.
For the chosen type I'm hoping for the following functions and
operators:
- arithmetic without overflow (appropriate modulo arithmetic)
- bitwise logical operators : and, or, xor, not
- shift-functions without bit rotation (optionally separate
functions with the more rare shift-variants)
- operators for comparisons
- procedures for fetching and storing single bits
- conversion between unsigned and "normal" integer types
The various HW-manufacturers like to number the bits of a byte
differently. In the CCITT world the octet has 8 bits numbered from 1
to 8 where the leftmost bit is MSB and has the number 8. I suggest
the same numbering in Ada.
Best Regards,
per Siv.ing. Kjell G Knutsen A/S
/signed/
Ivar Walseth
*****************************************************************************
!section 03.05.04 (03) Ada Group Ltd 84-04-16 83-00357
!version 1983
!topic Range attribute in integer type definitions.
The syntax of an integer type definition is given as
integer_type_definition ::= range_constraint
The syntax for range_constraint allows the form
range T'RANGE
However, 3.5.4(4-6) only discusses the form
range L .. R
Does this imply that
type MY_INT is range T'RANGE
is illegal?
Similar remarks apply to real types.
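If the T'RANGE form is indeed excluded, the same bounds can presumably
still be written in the explicit L .. R form that 3.5.4(4-6) does
discuss; a sketch (T is an arbitrary illustrative integer type):

```ada
type T is range 1 .. 100;
-- explicit L .. R form, presumably equivalent to "range T'RANGE":
type MY_INT is range T'FIRST .. T'LAST;
```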
************************************************************************
!section 03.05.04 (04) Bryce Bardin 87-02-27 83-00907
!version 1983
!topic Is an explicitly declared integer type a derived type?
Consider the following example:
package P is
type Int1 is range 1 .. 10; -- this declares an integer type
type Int2 is new Int1; -- is derivation legal?
end P;
Then 3.5.4(4-6) says, in part:
"A type declaration of the form:
type T is range L .. R;
is, by definition, equivalent to the following declarations:
type integer_type is new predefined_integer_type;
subtype T is integer_type range integer_type(L) .. integer_type(R);
where ..."
and 3.4(15) says:
"If a derived ... type is declared immediately within the visible part
of a package, then, within this visible part, this type must not be
used as the parent type of a derived type definition."
Is it the intent of the reference manual to interpret the equivalence of
3.5.4(4-6) literally, making Int1 a derived type and thus not derivable until
the end of the visible part of the package? (Some implementations treat Int1
as a derived type and reject the above example on the grounds that it
violates 3.4(15).)
*****************************************************************************
!section 03.05.04 (06) J. Goodenough 85-11-06 83-00684
!version 1983
!topic Unsigned integer types
3.5.4(6) says:
The predefined integer types include the type INTEGER. An
implementation may also have predefined types such as
SHORT_INTEGER and LONG_INTEGER, which have (substantially)
shorter and longer ranges, respectively, than INTEGER. The
range of each of these types must be symmetric about zero,
excepting an extra negative value which may exist in some
implementations. The base type of each of these types is the
type itself.
Does the phrase "each of these types" refer just to the predefined types
INTEGER, SHORT_INTEGER, and LONG_INTEGER? This would be a reasonable
interpretation, since the Standard could otherwise have said, "The range of
each predefined integer type must be ...".
If it is only the three named predefined types that are to have symmetric
ranges, then an implementation could provide an unsigned integer type whose
values do not include any negative numbers. As a consequence, when
evaluating expressions having an unsigned base type, NUMERIC_ERROR could be
raised if any intermediate result was negative [3.5.4(10)]. In addition, the
upper bound of such a type might exceed the upper bound of INTEGER or
LONG_INTEGER, in which case SYSTEM.MAX_INT would have this value. This would mean
that
type T is range SYSTEM.MIN_INT .. SYSTEM.MAX_INT;
could be rejected because there would be no predefined integer type covering
this range. (Such a consequence may be surprising, but is not inconsistent
with the definition of SYSTEM.MAX_INT.)
In short, can a predefined type have a non-symmetric range, and in
particular, a range that only includes non-negative values?
*****************************************************************************
!section 03.05.04 (07) J. Goodenough 86-01-06 83-00700
!version 1983
!topic Asymmetric integer types
During the design phase, Ron Brender wrote a note concerning possible problems
that would arise if predefined integer types were allowed to have nonsymmetric
ranges. I'm including this note here, followed by some of my comments.
-----------------------------------------------------------------------------
BEGINNING OF NOTE
-----------------------------------------------------------------------------
NOTE-249
Subject: Asymmetric integer types, MIN/MAX_INT, and dynamic universal
Author: RFB
Date: 30 Nov.81
A combination of two actions tentatively taken at the November DR
meeting raise some questions and problems. The two actions are:
1. Drop the requirement that all predefined integer types must
have ranges that are symmetric about zero. This is intended,
in particular, to allow unsigned integer types.
2. Introduce the notion of non-static universal integer (and
real) expressions; these are expressions of type universal
integer which are evaluated at run-time. The required range
for this type is not completely clear, but the informal
discussion seemed to suggest that the range would be "the
largest integer range supported at run-time".
The following issues must now be considered. For concreteness,
suppose that an implementation provides (only) two integer types:
INTEGER with a range of -32768..32767, and UNSIGNED_INTEGER with a
range of 0..65535.
Issue 1: What are the values of SYSTEM.MIN_INT and SYSTEM.MAX_INT?
One might suppose that MIN_INT is -32768 and MAX_INT is 65535. If so,
observe that there is now NO integer type in the implementation that
has the range MIN_INT..MAX_INT. This is not necessarily bad, but it
is a change and may have surprising implications in practice.
Issue 2: Which type is to be used at run-time for non-static
universal integer expressions? Note that because the integer types
are not required to be symmetric about zero, there is no longer a
well-ordering among the types and, in particular, no notion of "the
largest" integer type.
For most non-static universal integers, it seems likely that a signed
integer type ought to be used. For 'LENGTH in particular, however, it
seems that the unsigned integer type is highly desirable. It seems
most problematical that all of the following can be necessarily legal:
type ARR is array (1..40000) of BOOLEAN; -- (1a)
type ARR is array (1..UNSIGNED_INTEGER'(40000)) of BOOLEAN; -- (1b)
procedure P (X : ARR) is
begin
... 40000 = ARR'LENGTH ... -- (2)
... -32000 < ARR'LENGTH ... -- (3)
end;
In (1a), the rules for default resolution choose INTEGER as the type
for the index range so 40000 must raise NUMERIC_ERROR at runtime, so
let us suppose our knowledgeable programmer has written (1b) instead.
In (2) let us suppose that a clever implementation considers that the
universal integer type of ARR'LENGTH is "really" UNSIGNED_INTEGER; is
this expression now illegal because 40000 is "really" type INTEGER or
not? Similarly in (3), is this illegal, legal and possibly raising
NUMERIC_ERROR, or guaranteed to evaluate to TRUE?
Issue 3: What operators are defined in STANDARD for the unsigned
predefined types and what are their signatures? In particular, what
about:
function ABS (X : UNSIGNED_INTEGER) return UNSIGNED_INTEGER;
function "-" (X : UNSIGNED_INTEGER) return UNSIGNED_INTEGER;
ABS seems okay, I guess, although note that it is always optimizable
to no operation. Unary minus is even more curious -- it will
necessarily raise NUMERIC_ERROR for any operand other than zero!
Presumably as "an integer type" in the sense of LRM 3.5.4, these
operators must be available as specified in LRM 4.5.
Some of the identities used in the definition of multiplying operators
in LRM 4.5.5 in particular become vacuously true for unsigned integer
types -- this may be worthy of a note.
Issue 4: Are any "guidelines" necessary or appropriate in determining
the choice of parent type in an integer type definition? Consider:
type MY_INT1 is range 1..100;
Does it matter whether INTEGER or UNSIGNED_INTEGER is chosen for this
type?
-----------------------------------------------------------------------------
END OF NOTE
-----------------------------------------------------------------------------
In response to this comment and a related comment by Paul Hilfinger, the rules
for evaluating nonstatic universal integer expressions were phrased in terms
of SYSTEM.MIN_INT and SYSTEM.MAX_INT, i.e., an implementation can only raise
NUMERIC_ERROR if a value lies outside the range MIN_INT .. MAX_INT [4.10(5)].
This suggests that if an implementation decides to support nonsymmetric
integer types, the upper bound for the nonsymmetric type had better not exceed
the upper bound for the largest symmetric type, since it will otherwise be
difficult to support nonstatic universal integer calculations at run-time.
The phrasing in 4.10 also suggests that the intent was to allow some
predefined types to have nonsymmetric ranges.
As for the issue of which predefined type is chosen when the type definition
requires only nonnegative values, it is probably least surprising to a user if
a symmetric type is always chosen unless an implementation defined pragma is
used to specify otherwise.
*****************************************************************************
!section 03.05.04 (07) J. Goodenough 86-09-21 83-00807
!version 1983
!topic Precision and range of predefined numeric types
Are predefined numeric types such as INTEGER and LONG_INTEGER allowed to have
almost exactly the same range? This question has arisen in the validation of
several compilers, as documented by the messages given below:
==================================================
From: Dan Lehman
Date: 16 Sep 1986
FRT Members:
In 2 recent VSRs I noticed that declarations for various numeric types
were equivalent; I issued the following comments to the AVFs. One AVF has
conferred with Brian WICHMANN on the matter (i.p., the 1st of the 2 below),
and Brian said that I've misinterpreted the Standard on these points. I
do not think that I have, though of course ...; but I based my remarks upon
those of the only validation failure to be documented with a VSR--viz., that
of ROLM in 1983 (it failed for more serious reasons, but its declaration
of LONG_INTEGER to have a range but one greater than INTEGER was also cited
in the list of nonconformities). I believe that John participated in the
analysis of the ROLM failure, so I trust the interpretation given there re
the similar predefined declarations to be accurate.
Now that the interpretation has been challenged--esp. by one of the FRT--,
I am submitting the matter here for a formal review. Although the issue
still seems clear to me (the "may also have predefined ..." meaning only that
the types need not be supported--not that they need not be different), I
wonder at how important this passage is: for although it specifies that
the types must differ, INTEGER, e.g., may be 32 bits here but 16 bits there
and 24 bits somewhere else; thus the use of these types frustrates portability?
The second case, at least, seems like an attempt to accept LONG_INTEGER
declarations that are for 32 bits, even though INTEGER has also been defined
to be of 32 bits.
---Dan
-----------------------------------------------------------------------------
[I have included the entire text of each message to the AVFs; much of it is
repetitive. --dl]
Summary:
The VSR for the AAAAABBBBBBCCCCCC validation shows--in the Appendix F
information--that the package STANDARD used by their "ABC Ada Compiler,
Release 2" defines both SHORT_ & BYTE_INTEGER to be exactly equal in range to
INTEGER; this violates the Ada Standard 3.5.4(7), which states that the
respective ranges must be substantially different. Also, SHORT_FLOAT is
defined to be but 1 digit less in accuracy than FLOAT; this violates the Ada
Standard 3.5.7(8), which states that the respective floating-point accuracies
must be substantially different.
Discussion:
The Ada Standard 3.5.4(7) reads
"An implementation may also have predefined types such as SHORT_INTEGER
and LONG_INTEGER, which have (substantially) shorter and longer ranges,
respectively, than INTEGER."
Paragraph 3.5.7(8) reads similarly, with the appropriate differences
pertaining to floating-point types. However, there is no ACVC test that
checks that these requirements are met, although the AVF should have made
this check of the implementation's Appendix F during prevalidation; the
violations were noticed after testing was completed, when the VSR was
reviewed by the AVO.
The effect of defining additional types to be (nearly) identical to the
types INTEGER and FLOAT seems negligible; I cannot imagine what purpose the
implementer felt they served. The presence of other types implies that
distinct, significantly different types exist; and for this compiler such is
not the case. Moreover, the names "SHORT_INTEGER" & "BYTE_INTEGER" further
connote a size that the actual definitions greatly exceed. And the
definition of SHORT_FLOAT is problematic in that it is of less accuracy than
any other validated compiler's SHORT_FLOAT (which have always been "digits
6", with FLOAT being "digits 15"), so the portability of applications using
such a declaration would be frustrated.
The ABC Ada Compiler undeniably violates the Ada Standard.
Recommendation:
The AAAAAABBBBBBBCCC VSR shall conspicuously document the similarities of
the several cited numeric type declarations as a nonconformity; a rationale
for passing the implementation shall be given. The VC may be issued.
In the normal places where information on numeric types is given--viz., in
the Executive Summary and in section 2.3--, the VSR shall state:
"SHORT_INTEGER and BYTE_INTEGER are predefined, but have the same range
as INTEGER; this violates the Ada Standard 3.5.4(7). SHORT_FLOAT is
predefined, but its accuracy is only 1 less than FLOAT's; this violates
3.5.7(8). See section 3.8."
And a special section--"3.8 Anomalies"--shall be added to contain the
explanation of the problem and the rationale for validating the compiler. In
this new section, state:
These anomalies were discovered after the completion of testing:
this implementation's predefined types BYTE_INTEGER and SHORT_INTEGER
have exactly the same range as INTEGER; and the predefined type
SHORT_FLOAT has an accuracy of only 1 digit less than FLOAT. The Ada
Standard 3.5.4(7) states that "An implementation may also have prede-
fined types such as SHORT_INTEGER and LONG_INTEGER, which have (sub-
stantially) shorter and longer ranges, respectively, than INTEGER."
3.5.7(8) reads similarly regarding floating-point types, requiring
that SHORT_FLOAT have "(substantially) less ... accuracy than FLOAT."
However, there is no ACVC test that checks that these requirements are
met, and this implementation's violations were not noticed until after
testing was completed and a draft of this report was reviewed.
Given this nonconformity's late detection and superficial nature,
the AVO does not deny validation to this implementation. However,
it is recommended that the package STANDARD be corrected to exclude
the declarations of the additional predefined types and the associated
subprograms. With such a corrected version of STANDARD, 9 tests that
passed during testing--viz., C34001D & F, C35702A, B52004E, C55B07B,
B55B09D, and B86001CP, CR, & DT--become inapplicable, for they contain
declarations for objects of the types above which must then be rejected.
---------------------------------------------------------------------------
Summary:
The VSR for the XXXXXYYYYZZZZ validation shows--in the Appendix F infor-
mation--that the package STANDARD used by "XXXXX Ada" defines LONG_INTEGER to
be exactly equal in range to INTEGER; this violates the Ada Standard
3.5.4(7), which states that the respective ranges must be substantially dif-
ferent.
Discussion:
The Ada Standard 3.5.4(7) reads "An implementation may also have prede-
fined types such as SHORT_INTEGER and LONG_INTEGER, which have
(substantially) shorter and longer ranges, respectively, than INTEGER."
However, there is no ACVC test that checks that this requirement is met,
although the AVF should have made this check of the implementation's Appendix F
during prevalidation; the violation was noticed after testing was completed,
when the VSR was reviewed by the AVO.
The effect of defining LONG_INTEGER to be identical in range to INTEGER
seems fairly negligible, since the Ada Standard does not require any
particular integer declaration to be of any particular range. The presence
of a type named "LONG_INTEGER" does imply that a distinct, significantly
greater-than-INTEGER type exists, and for this compiler such is not the case.
(On the positive side, having such a type allows the implementation to accept
such declarations in programs written on other systems, where INTEGER &
LONG_INTEGER are e.g. of 16 & 32 bits, resp., without modifying the code.)
XYZZ Ada undeniably, yet trivially, violates the Ada Standard.
Recommendation:
The XYZZ Ada VSR shall conspicuously document the similarity of the
INTEGER and LONG_INTEGER declarations as a nonconformity; a rationale for
passing the implementation shall be given. The VC may be issued.
In the normal places where information on numeric types is given--viz., in
the Executive Summary and in section 2.3--, the VSR shall state:
"LONG_INTEGER is predefined but has the same range as INTEGER; this vio-
lates the Ada Standard 3.5.4(7). See section 3.8."
And a special section--"3.8 Anomalies"--shall be added to contain the
explanation of the problem and the rationale for validating the compiler.
In this new section, state:
One anomaly was discovered after the completion of testing: this
implementation's predefined type LONG_INTEGER has exactly the same
range as INTEGER. The Ada Standard 3.5.4(7) states that "An imple-
mentation may also have predefined types such as SHORT_INTEGER and
LONG_INTEGER, which have (substantially) shorter and longer ranges,
respectively, than INTEGER." However, there is no ACVC test that
checks that this requirement is met, and this implementation's vio-
lation was not noticed until after testing was completed and a draft
of this report was reviewed.
Given this nonconformity's late detection and superficial nature,
the AVO does not deny validation to this implementation. However,
it is recommended that the package STANDARD be corrected to exclude
the declarations of LONG_INTEGER and associated subprograms. With
such a corrected version of STANDARD, 5 tests that passed during testing
--viz., C34001E, B52004D, B55B09C, B86001CS, & C55B07A--become
inapplicable, for they contain declarations for objects of type
LONG_INTEGER which must then be rejected.
==================================================
From: Paul Hilfinger
Date: Tue, 16 Sep 86 22:41:19 PDT
I see, in rereading the standard, that it is, indeed, ambiguous on
this point. The sentence in question is this.
"An implementation may also have predefined types such as
SHORT_INTEGER and LONG_INTEGER, which have (substantially)
shorter and longer ranges, respectively, than INTEGER."
The word "respectively" indicates that the modifying clause
"(substantially) shorter and longer ranges" applies to SHORT_INTEGER
and LONG_INTEGER, and not necessarily to other possible "predefined
types." I say "not necessarily" because that hinges on whether "such
as" is to be read as "along the lines of" or "for example." That is,
does this mean that there can be other predefined (presumably integer)
types and all must have substantially different ranges, or does it
mean that there can be other predefined (presumably integer) types, of
which SHORT_INTEGER and LONG_INTEGER, when defined, must have
substantially different ranges? I suspect that the first
interpretation was the intended one, so that you are right in
objecting. However, the issue is sufficiently vague to require LMP
consideration. Furthermore, I would be very reluctant to have a
compiler fail validation solely or principally for this reason. It
would help immensely to know why the implementor did this.
The situation for FLOAT may be different, depending on the machine. I
strongly advise against making any negative determination or report
against a vendor having the problems cited without getting some
indication of their rationale. Ada has some unfortunate rules
concerning floating point that lead to similar problems. For example,
if an implementor were to use D format on the VAX to represent double
precision, the language rules require that the maximum allowed digits
value be 9, even though the significand contains 56 binary digits.
(This is because the maximum B is constrained to be 1/4 of the maximum
exponent, which is 127 for VAX D format.) Give us some more data
about the target, please.
Paul
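[Editorial aside, not part of the original message: Paul's digits-9 figure
can be checked numerically under the Ada 83 model-number rules (LRM 3.5.7),
which give a binary mantissa of B = ceil(D * log2(10)) + 1 bits for DIGITS = D
and a model exponent range of -4*B .. 4*B. The snippet below is only a sketch
of that arithmetic; the VAX D-format maximum exponent of 127 is taken from
Paul's text.]

```python
import math

# Ada 83 model numbers (LRM 3.5.7): for DIGITS = D the binary mantissa
# has B = ceil(D * log2(10)) + 1 bits, and the model exponent range
# extends to 4 * B.
def mantissa_bits(d):
    return math.ceil(d * math.log2(10)) + 1

# VAX D format allows a maximum exponent of 127, so 4 * B <= 127,
# i.e. B <= 31.
max_b = 127 // 4

largest_digits = max(d for d in range(1, 20) if mantissa_bits(d) <= max_b)
print(largest_digits)  # 9, despite the 56-bit D-format significand
```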
==================================================
From: John B. Goodenough
Date: 17 Sep 1986 13:19:05 EDT
Although one can read some ambiguity into the wording of 3.5.4(7) if you
read closely enough, I think the intent is clear -- the predefined integer
types are not supposed to have identical (or near-identical) ranges. Of
course, as Paul points out, it would be a shame to invalidate a compiler just
for this reason, but that is the whole purpose of the pre-validation process
-- to get these minor deviations fixed so the validation can go forward.
As Paul points out, the floating point case is more complex. A strict
reading would say that when the Standard uses the term, it means accuracy in
terms of the model numbers, so a difference of one in DIGITS is not a
substantial difference in accuracy. On the other hand, the actual accuracy
for two predefined floating point types might be substantially different even
though the digits values are the same, or nearly the same, since the Ada
model links the exponent range with the number of bits of significance.
This might be a defensible reason for allowing such predefined types. (How
does DEC Ada handle the different floating point representations?)
==================================================
From: dewar@NYU-ACF2.ARPA
Date: Fri, 19 Sep 86 10:06 EDT
I am much more creative in reading the sentence on LONG_INTEGER. Here is
my reading:
An implementation may also have [additional] predefined types
That's the general rule
such as SHORT_INTEGER and LONG_INTEGER which have respectively substantially
shorter and longer ranges than INTEGER.
That's one particular example of the rule
"such as" is a pretty vague term, inviting us to guess at the principles for
constructing additional examples. I choose to take a very liberal guess here,
additional possible examples for me would include:
LONG_INTEGER which has the same precision as INTEGER (where the
implementation provides an INTEGER type which is LONG)
JUNK_INTEGER, which could have any precision you liked.
Why do I choose a liberal interpretation? Well, because it leads to a sensible
result. Consider the three implementations following:
Implementation A INTEGER 32 bits, LONG_INTEGER 32 bits
Implementation B INTEGER 16 bits, LONG_INTEGER 32 bits
Implementation C INTEGER 32 bits, no LONG_INTEGER
Now as a user, I would clearly prefer implementation A (assuming that I was
on a 32 bit machine with a large store so there is no penalty in INTEGER
being 32 bits - it might even cost substantially to try to make INTEGER
smaller; imagine trying to make INTEGERs 16 bits long on a Cyber!)
Why do I prefer A? Simple, it runs all the programs that implementation B
or C can run, and a lot of others besides.
Yet in our wisdom we threaten to tell an implementor that A is not allowed,
and he must damage his implementation by using B or C instead. This is
plain absurd!
Now if I felt the RM was clear on this matter, I would have to merely chalk
this silly result up to another design mistake. Since I believe the RM can
be read in a permissive manner here, I think we should at least agree that
it is a discussable issue, send it off to the LMP, and meanwhile not bother
implementors with misguided concerns. Since the issue would then be under
discussion, there is also no need to get overly concerned about flagging
this situation in the RM.
Incidentally, exactly the same reasoning can be applied to FLOAT.
==================================================
From: dewar@nyu-acf2.arpa
Date: Fri, 19 Sep 86 10:19:59 edt
Of course AI-325 considerations apply to the choice of INTEGER and other
predefined types. Suppose that we had an implementation on an 8088 where
the predefined INTEGER type had range -128 to +127. I think we should
reject it on the grounds of ludicrous choice of length for integer
(to see that this is not an issue of principle but one of AI-325
considerations, note that no one would disagree with ruling out
an implementation which used a range of 0..0).
Similarly I would consider that an implementation on an 8088 which
used 16 bits for INTEGER and 16 bits for LONG_INTEGER was suspect.
However, a 68000 implementation which used 32 bits for both is probably
quite reasonable.
My underlying thinking in making these judgements is something like:
INTEGER ought to correspond to the natural efficient integer size
on the machine and be at least 16 bits.
LONG_INTEGER should have as much precision as possible, consistent
with reasonably efficient code, preferably at least 32 bits.
Sure these are heuristic considerations which cannot be derived from
the RM, but that's what AI-325 is all about!
Another way of stating it is that an implementation can refuse to implement
any part of the language on AI-325 grounds. Of course such arguments are
hard to make (in legal terms there is a prima facie case ruling against
such implementers). Even if you take the strict ruling of the RM statement
about LONG_INTEGER (as you know I argue for a weaker reading), an implementor
can argue that on his hardware it is simply impractical to differentiate
between the two types. Such an argument should be analyzed in AI-325 terms.
It is much more desirable to be lenient in allowing the identical
implementation of the two types than to demand that one be removed, damaging
the usability of the compiler from a portability point of view.
In other words, if a vendor comes along and argues on AI-325 basis that
it is reasonable to use the same precision, we should respond in one of
two ways:
1) OK, we agree with you, it's allowed
2) Wrong, we insist that LONG_INTEGER have more precision (or INTEGER less)
Of course, due to the characteristic over-permissiveness of the RM in allowing
subsetting, the implementor can respond to 2) by omitting LONG_INTEGER
completely and we can't do a darned thing about it, but this minor travesty
should be at the implementer's choice, not at the validator's demand!
For the next version of the language I will argue that the implementation of
SHORT_INTEGER and LONG_INTEGER be mandatory, with possibly a note that the
intention is that where appropriate these types have substantially different
precisions. I will then argue that the validation use a guideline of
at least 8,16,32 bits for the three types (I mean we are interested in
portability -- right? right?)
*****************************************************************************
!section 03.05.04 (07) Dan Lehman/IDA 86-10-02 83-00818
!version 1983
!topic Ranges of predefined integer types
!reference 83-00807
I find the counter arguments to my interpretation of 3.5.4(7) & 3.5.7(8)
unconvincing. One must work hard--and unnaturally--to read into these
paragraphs meanings other than the obvious one--viz., that implementations are
permitted ("may") to provide more than one IMPLEMENTATION of integer and of
floating-point types. I emphasize "implementation" to stress that having,
e.g., LONG_INTEGER & INTEGER defined to be both integers of 32 bits is to
have but one integer implementation with two names. I find support for the
straightforward reading in 3.5.4(5&6), where "...the predefined integer type
is implicitly selected by the implementation, so as to contain [the values
of the declared type.]" Thus, just as we may--in Ada code--ask "What is
MY_INTEGER'BASE'LAST?", so may we--implicitly/theoretically--ask "What is
MY_INTEGER'BASE?" And we would expect to be given one name, indicating one
integer length, not more.
The Rationale further supports the straightforward reading of these sections.
In the Rationale for the Design of the Ada Programming Language, I read:
"If type LONG_INTEGER is also IMPLEMENTED, then this [type] has the same
operations as above [for INTEGER], but it is NECESSARY to overload the
operations to obtain the required semantics. This step corresponds to the
NEED for the implementation to generate code for such EXTENDED integers."
[my emphases]
The Rationale continues
"The type SHORT_INTEGER may also be implemented with the above semantics.
Note, however, that THIS TYPE can readily be defined by the user with a
type declaration ..." [my emphasis]
unlike LONG_INTEGER, you see, for one is a restriction of INTEGER's range,
the other an extension--that is the unmistakable message here. Were it
merely a matter of naming types, the user could "define" both types.
And the Rationale further offers
"type MY_INTEGER is range -100_000..100_000;
This type would be implemented with the MACHINE TYPE LONG_INTEGER on a
typical 16-bit minicomputer, but with ordinary integers (i.e. INTEGER)
on a larger word[-]length machine." [my emphasis]
Now I realize that this draft document, the Rationale, is not an official
interpretation of the Standard. But I take it as a faithful interpretation
of the distinguished authors.
I think that it is unfortunate that the Standard gives even the illusion
that predefined types are portable. It would have been better had the
recommended names for additional types taken a more descriptive form, e.g.:
INTEGER_8, FLOAT_15, INTEGER_64 vs. SHORT_INTEGER, LONG_FLOAT, &, one would
presume, LONG_LONG_(LONG_)INTEGER.
* * * * * * * * * *
Brian focuses on "may" as being suspect; and he attaches "may" not to
"have [these additional types]..." but to "have (substantially) shorter and
longer ranges, respectively, than INTEGER". I believe that he is right
about "may"'s meaning, but wrong about its object: the 2nd sentence of
3.5.4(7) is broken with a comma before the specification of ranges; it is
the presence of additional types that is optional; the specification of
distinct ranges is required if the types are present. The 3rd sentence
continues the specification, requiring symmetry about zero; there is nothing
uncertain about its meaning. (And actually here I would ask: Why prohibit
an implementation from predefining an unsigned integer? --a 16-bit
positive-valued integer, e.g.? --great for indexing, looping,...!?)
Brian admits that, on his interpretation, the 2nd sentence of 3.5.4(7) is
senseless. Surely, in trying to ascertain the intent of the Standard, we
don't believe that text was intended to be vacuous--reductio ad absurdum is
a means to rejecting premises, not confirming them!
* * * * * * * * * *
As for Robert Dewar's concerns about portability, given that the Standard
does not specify any particular range or accuracy for the predefined numeric
types, I would expect their use to be confined to applications where such
considerations were of no importance. And where range or accuracy affects
an application, I expect to see application-specific user type declarations
explicitly specifying " ... range PROJECT_MIN..PROJECT_MAX", so that,
regardless of what predefined type is used as the base type on a particular
implementation, that application will be assured of having the necessary
range or accuracy.
Trying to increase the portability of Ada programs by giving a "permissive"
reading to 3.5.4(7) & 3.5.7(8) seems like trying to, by some sleight of hand,
get milk from a bull: the effort is doomed to fail, for you've got the wrong
beast for the task. (--and the experience will be painful!)
* * * * * * * * * *
Finally, I am, like Paul, curious about the implementer's purpose in defining
predefined types to be identical. In the case of L_I = INTEGER, I suspect that
it is to have an implementation like Dewar's "Impl. A" that accepts other
common predefined types of 32 or fewer bits. But why would one name a 32-bit
type "BYTE_INTEGER"? I've asked for that implementer's rationale, but have yet
to receive any reply.
I would not want implementers to feel that they could pass validation
testing with several predefined types of equal range or accuracy, and then
later actually implement different machine types to assume those additional
type names--implement a "byte integer" later, e.g.
*****************************************************************************
!section 03.05.04 (07) Brian A. Wichmann 86-09-24 83-00856
!version 1983
!topic Precision and range of predefined numeric types
!reference 83-00807
I believe that the LRM permits INTEGER and LONG_INTEGER to have
the same representation. I say this because the wording in 3.5.4(7)
uses the word 'may'. In British Standards, one is not allowed to
use 'may'. This is covered by BS0, the Standard for Standards.
Specifically, paragraph 8.3.2 of BS0 reads:
"Auxiliaries such as 'should' and 'may' are appropriate only
outside the requirements, in recommendations and statements
respectively."
Now BS0 is modelled on the ISO directives, which I believe take a
similar view. The question therefore arises as to the guidelines
in use for writing the Ada Standard. Does ANSI have a similar
document to BS0? What does it say on such issues?
On the specific case, my reading of the English 'may' is that it
provides explicit permission but says nothing about the converse.
In other words, LONG_INTEGER could be shorter than INTEGER! Hence
the statement in 3.5.4(7) is virtually without meaning. I have not
scanned the LRM for further occurrences of 'may'.
Robert Dewar's comments are all very reasonable, but the issue in
hand is not language design but interpreting the LRM which is
written in English. It is up to the implementor to decide the
characteristics of his system and therefore we need to be very
careful about issues such as a minimum value for INTEGER'LAST
on which the LRM is silent.
As a supporting remark to Dewar's position, the current view of
this issue for the Modula-2 definition that BSI is producing
is that three lengths of integers will always be provided (with
defined names), but that they may have the same representation.
Brian 24th Sept 86
--------------------------------------
*****************************************************************************
!section 03.05.04 (07) Bryce Bardin 87-06-15 83-00974
!version 1983
!topic Unsigned arithmetic
!reference AI-00402
[Draft] Implementation of Unsigned Integers in Ada
INTRODUCTION
It is suggested that the minimal goals for unsigned integers should include at
least:
1) providing an extended maximum non-negative integer range which fully
exploits the available hardware (and which allows full range address
arithmetic when appropriate),
2) providing straightforward and efficiently-implementable logical operations
(including shifts, rotates, and masks) on all bits of unsigned types,
3) providing numeric literals in arbitrary bases (so that representations
appropriate to a given architecture may be chosen for bit-level
operations), and
4) providing efficient support for modular arithmetic of arbitrary range
(which allows checksums, hash functions, and pseudo-random number
generators which generate all possible bit patterns in closed cycles
to be cleanly written in Ada).
Note that the goals stated above imply that both range-checked and modular
arithmetic ought to be supported for unsigned integers.
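[Editorial aside: goal 4's claim--that modular arithmetic lets generators
visit all bit patterns in closed cycles--can be illustrated with a linear
congruential generator. The constants 5 and 1 below are illustrative choices
satisfying the Hull-Dobell full-period conditions for modulus 2**16; they are
not part of the proposal.]

```python
# With true modular (wrapping) arithmetic, x -> (5*x + 1) mod 2**16 visits
# every 16-bit pattern exactly once before repeating: 1 is coprime to the
# modulus, and 5 - 1 = 4 is divisible by 4 (Hull-Dobell conditions).
MOD = 2 ** 16

def next_state(x):
    return (5 * x + 1) % MOD  # wraps instead of raising an overflow

seen = set()
x = 0
for _ in range(MOD):
    x = next_state(x)
    seen.add(x)
print(len(seen))  # 65536: every 16-bit pattern appears exactly once
```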
Exemption of the unsigned integer types from the requirement to have symmetric
ranges solves the main problem with the functionality of the language. As
suggested by Robert Dewar and Paul Hilfinger, sentence 3 of LRM section
3.5.4(7) may be interpreted to allow the introduction of implementation-defined
integer types in package SYSTEM or elsewhere. Such types are not predefined
integer types and thus may not be used by the implementation in the derivations
of 3.5.4(4-6), although other types may be derived from them and subtypes may
be defined based on them.
The view taken here is that unsigned integer types are integer types in every
other respect than not participating in implicit derivations. In particular,
they have as a subset of their predefined operations the same operations
provided for the predefined integer types, although the meanings of those
operations are different. A primary of a non-static universal expression can
be implicitly converted to an unsigned integer type. In addition, they match
generic formal discrete and integer types. (E.g., TEXT_IO.INTEGER_IO can be
instantiated for these types.) Finally, it is proposed that the declaration
of subtypes of unsigned types and types derived from unsigned types will
behave the same as for predefined and user-defined integer types. In
particular, it is intended that LRM 3.5.5(12) apply after modification by
replacing "predefined" by "implementation-defined" at its second occurrence:
"..., the predefined operators of an integer type deliver results whose
range is defined by the parent [implementation-defined] type; such a result
need not belong to the declared subtype, in which case an attempt to assign
the result to a variable of the integer subtype raises the exception
CONSTRAINT_ERROR."
The additional operations on these types include bit-wise logical operators.
The following is a strawman proposal (draft of a draft) to meet the goals
stated above. Note that the choice between UNSIGNED and CARDINAL in the names
below is arbitrary, although there is some precedent from Modula-2 for the
choice made here. Alternative names which might be considered as a substitute
for CARDINAL are MODULAR and CYCLIC.
In what follows, asides and comments are enclosed in square brackets ([]).
*START OF PROPOSAL*
DRAFT PROPOSAL ON UNSIGNED INTEGER TYPES
An implementation-defined unsigned integer type definition defines an integer
type whose set of values includes exactly the specified range, where the lower
bound is zero and the upper bound is 2**n - 1 for some positive integer n. The
base type of such a type is the type itself.
Operations on implementation-defined unsigned integer types include all of the
operations on integer types plus the predefined logical operators and the
highest precedence operator "not". The logical operators have their
conventional meaning as applied to unsigned integers viewed as arrays of 1-bit
numeric values which represent boolean values (with 0 corresponding to FALSE
and 1 corresponding to TRUE). (Note: based integer literals are available for
defining values in conventional formats, e.g., hexadecimal.) Additional
operations for bit-wise arithmetic and logical shifts and rotations are defined
in the package specification given below.
For every integer type or subtype T, the following (implementation-defined)
attributes are defined:
T'UNSIGNED Yields the value TRUE if T is an unsigned integer type; yields the
value FALSE otherwise. The value of this attribute is of the
predefined type BOOLEAN.
T'MODULAR Yields the value TRUE if T is an unsigned integer type with
modular arithmetic; yields the value FALSE otherwise. The value
of this attribute is of the predefined type BOOLEAN.
[These attributes facilitate the usage of unsigned and modular types in
generic units.]
Every implementation should provide at least one pair of unsigned integer
types with identical ranges: an unsigned integer type with modular arithmetic
and an unsigned integer type with range-checked arithmetic.
[Exactly where these are declared must be defined. Candidates are package
SYSTEM and package X, where X = {your favorite name}. E.g., packages
UNSIGNED_NUMBERS and CARDINAL_NUMBERS.]
The implementation-defined unsigned types using range-checked arithmetic
should include the type UNSIGNED_nn, where nn represents an integer value
equal to both UNSIGNED_nn'SIZE and INTEGER'SIZE. An implementation may also
have other implementation-defined unsigned integer types using range-checked
arithmetic with names of the same form which have different sizes.
[It would be symmetric to have a predefined integer type named INTEGER_nn
instead of INTEGER here. It would be less universal than INTEGER, but directly
comparable to UNSIGNED_nn and CARDINAL_nn, and directly tied to a specific
hardware representation by a standard name.]
The implementation-defined unsigned types using modular arithmetic should
include the type CARDINAL_nn, where nn represents an integer value equal to
both CARDINAL_nn'SIZE and INTEGER'SIZE. An implementation may also have other
implementation-defined unsigned integer types using modular arithmetic with
names of the same form which have different sizes. The arithmetic operations
on these types are performed modulo 2**nn.
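[Editorial aside: the contrast between the range-checked UNSIGNED_nn and the
modular CARDINAL_nn arithmetic can be sketched as follows. This is a model of
the proposal's intent for nn = 16, not of any actual Ada implementation; the
function names are invented for illustration.]

```python
# Two arithmetic styles for 16-bit unsigned values.
NN = 16
MOD = 2 ** NN

def unsigned_add(a, b):
    # UNSIGNED_nn style: an out-of-range result is an error
    # (CONSTRAINT_ERROR in Ada terms).
    r = a + b
    if r >= MOD:
        raise OverflowError("CONSTRAINT_ERROR")
    return r

def cardinal_add(a, b):
    # CARDINAL_nn style: operations are performed modulo 2**nn.
    return (a + b) % MOD

print(cardinal_add(65535, 1))  # 0: the sum wraps around
```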
[The explicit declaration of unsigned types cannot be given in Ada because their
base types are not implicitly derived from predefined integer types as required
by 3.5.4(4-6). Because of that, their declarations are given here in the style
of package STANDARD (see Annex C).]
The following outlines the specification of the package X containing all
implementation-defined unsigned integer type declarations. The corresponding
package body is implementation-defined and is not shown.
The operators that are predefined for the types declared in the package X are
given in comments since they are implicitly declared. Italics are used [will
be used] for pseudo-names of anonymous types and for undefined information.
package X is
type UNSIGNED_nn is implementation-defined;
-- Note:
-- UNSIGNED_nn'FIRST = 0
-- UNSIGNED_nn'LAST = 2**nn - 1
-- UNSIGNED_nn'BASE'FIRST = UNSIGNED_nn'FIRST
-- UNSIGNED_nn'BASE'LAST = UNSIGNED_nn'LAST
-- UNSIGNED_nn'PRED(UNSIGNED_nn'FIRST) raises CONSTRAINT_ERROR
-- UNSIGNED_nn'SUCC(UNSIGNED_nn'LAST) raises CONSTRAINT_ERROR
for UNSIGNED_nn'SIZE use nn;
-- The predefined operators for this type are as follows:
-- function "=" (LEFT, RIGHT : UNSIGNED_nn) return BOOLEAN;
-- function "/=" (LEFT, RIGHT : UNSIGNED_nn) return BOOLEAN;
-- function "<" (LEFT, RIGHT : UNSIGNED_nn) return BOOLEAN;
-- function "<=" (LEFT, RIGHT : UNSIGNED_nn) return BOOLEAN;
-- function ">" (LEFT, RIGHT : UNSIGNED_nn) return BOOLEAN;
-- function ">=" (LEFT, RIGHT : UNSIGNED_nn) return BOOLEAN;
-- function "+" (RIGHT : UNSIGNED_nn) return UNSIGNED_nn;
-- function "-" (RIGHT : UNSIGNED_nn) return UNSIGNED_nn;
-- "-" raises CONSTRAINT_ERROR unless RIGHT = 0.
-- function "abs" (RIGHT : UNSIGNED_nn) return UNSIGNED_nn;
-- function "+" (LEFT, RIGHT : UNSIGNED_nn) return UNSIGNED_nn;
-- function "-" (LEFT, RIGHT : UNSIGNED_nn) return UNSIGNED_nn;
-- function "*" (LEFT, RIGHT : UNSIGNED_nn) return UNSIGNED_nn;
-- function "/" (LEFT, RIGHT : UNSIGNED_nn) return UNSIGNED_nn;
-- function "rem" (LEFT, RIGHT : UNSIGNED_nn) return UNSIGNED_nn;
-- function "mod" (LEFT, RIGHT : UNSIGNED_nn) return UNSIGNED_nn;
-- function "**" (LEFT : UNSIGNED_nn;
--                 RIGHT : INTEGER) return UNSIGNED_nn;
-- function "and" (LEFT, RIGHT : UNSIGNED_nn) return UNSIGNED_nn;
-- function "or" (LEFT, RIGHT : UNSIGNED_nn) return UNSIGNED_nn;
-- function "xor" (LEFT, RIGHT : UNSIGNED_nn) return UNSIGNED_nn;
-- function "not" (RIGHT : UNSIGNED_nn) return UNSIGNED_nn;
-- function ARITHMETIC_SHIFT (ITEM : UNSIGNED_nn;
-- BITS : INTEGER) return UNSIGNED_nn;
-- If BITS >= 0 or BITS < 0, returns ITEM left or right arithmetically
-- shifted (with zero fill) by abs(BITS) bits, respectively.
-- Raises NUMERIC_ERROR if a bit would be shifted off the left end.
-- function LOGICAL_SHIFT (ITEM : UNSIGNED_nn;
-- BITS : INTEGER) return UNSIGNED_nn;
-- If BITS >= 0 or BITS < 0, returns ITEM left or right logically shifted
-- (end off) by abs(BITS) bits, respectively.
-- function ROTATE (ITEM : UNSIGNED_nn;
-- BITS : INTEGER) return UNSIGNED_nn;
-- If BITS >= 0 or BITS < 0, returns ITEM left or right rotated (end
-- around) by abs(BITS) bits, respectively.
[Issue: Is this the right set of operations on range-checked unsigned
numbers? For instance, should there be a "FIND_FIRST_BIT" function? Many
computers have efficient instructions for this function and it would be
unlikely that a user-defined function would be able to utilize it.]
[Issue: Should subtypes of UNSIGNED_nn be declared here? Probably not,
since the only set of subtypes that would seem to make sense to predefine
are those requiring exactly one bit, two bits, etc., for storage, and they may
easily be defined by the user if desired.]
Similarly, for modular arithmetic:
type CARDINAL_nn is implementation-defined;
-- Note:
-- CARDINAL_nn'FIRST = 0
-- CARDINAL_nn'LAST = 2**nn - 1
-- CARDINAL_nn'BASE'FIRST = CARDINAL_nn'FIRST = 0
-- CARDINAL_nn'BASE'LAST = CARDINAL_nn'LAST
-- CARDINAL_nn'PRED(CARDINAL_nn'FIRST) = CARDINAL_nn'LAST
-- CARDINAL_nn'SUCC(CARDINAL_nn'LAST) = CARDINAL_nn'FIRST
for CARDINAL_nn'SIZE use nn;
-- The predefined operators for this type are as follows:
-- function "=" (LEFT, RIGHT : CARDINAL_nn) return BOOLEAN;
-- function "/=" (LEFT, RIGHT : CARDINAL_nn) return BOOLEAN;
-- function "<" (LEFT, RIGHT : CARDINAL_nn) return BOOLEAN;
-- function "<=" (LEFT, RIGHT : CARDINAL_nn) return BOOLEAN;
-- function ">" (LEFT, RIGHT : CARDINAL_nn) return BOOLEAN;
-- function ">=" (LEFT, RIGHT : CARDINAL_nn) return BOOLEAN;
-- function "+" (RIGHT : CARDINAL_nn) return CARDINAL_nn;
-- function "-" (RIGHT : CARDINAL_nn) return CARDINAL_nn;
-- function "abs" (RIGHT : CARDINAL_nn) return CARDINAL_nn;
-- function "+" (LEFT, RIGHT : CARDINAL_nn) return CARDINAL_nn;
-- function "-" (LEFT, RIGHT : CARDINAL_nn) return CARDINAL_nn;
-- function "*" (LEFT, RIGHT : CARDINAL_nn) return CARDINAL_nn;
-- function "/" (LEFT, RIGHT : CARDINAL_nn) return CARDINAL_nn;
-- function "rem" (LEFT, RIGHT : CARDINAL_nn) return CARDINAL_nn;
-- function "mod" (LEFT, RIGHT : CARDINAL_nn) return CARDINAL_nn;
-- function "**" (LEFT : CARDINAL_nn;
-- RIGHT : INTEGER) return CARDINAL_nn;
-- function "and" (LEFT, RIGHT : CARDINAL_nn) return CARDINAL_nn;
-- function "or" (LEFT, RIGHT : CARDINAL_nn) return CARDINAL_nn;
-- function "xor" (LEFT, RIGHT : CARDINAL_nn) return CARDINAL_nn;
-- function "not"  (RIGHT : CARDINAL_nn) return CARDINAL_nn;
-- function ARITHMETIC_SHIFT (ITEM : CARDINAL_nn;
-- BITS : INTEGER) return CARDINAL_nn;
-- If BITS >= 0 or BITS < 0, returns ITEM left or right arithmetically
-- shifted (with zero fill) by abs(BITS) bits, respectively.
-- Raises NUMERIC_ERROR if a bit would be shifted off the left end.
-- function LOGICAL_SHIFT (ITEM : CARDINAL_nn;
-- BITS : INTEGER) return CARDINAL_nn;
-- If BITS >= 0 or BITS < 0, returns ITEM left or right logically shifted
-- (end off) by abs(BITS) bits, respectively.
-- function ROTATE (ITEM : CARDINAL_nn; BITS : INTEGER) return CARDINAL_nn;
-- If BITS >= 0 or BITS < 0, returns ITEM left or right rotated (end
-- around) by abs(BITS) bits, respectively.
[The following two functions, REM_OF_SUM and REM_OF_PRODUCT, are needed to
allow the construction of types derived from CARDINAL_nn, but with smaller
ranges, having the desired semantics. One possible package for this purpose
is discussed below.]
-- function REM_OF_SUM (ADDEND,
-- AUGEND,
-- DIVISOR : CARDINAL_nn) return CARDINAL_nn;
-- Returns CARDINAL_nn((Anonymous(ADDEND) + Anonymous(AUGEND)) rem
-- Anonymous(DIVISOR)),
-- where Anonymous is some integer type for which 2 * CARDINAL_nn'LAST
-- is within the range of values.
-- function REM_OF_PRODUCT (MULTIPLIER,
-- MULTIPLICAND,
-- DIVISOR : CARDINAL_nn) return CARDINAL_nn;
-- Returns CARDINAL_nn((Anonymous(MULTIPLIER) * Anonymous(MULTIPLICAND)) rem
-- Anonymous(DIVISOR)),
-- where Anonymous is some integer type for which CARDINAL_nn'LAST ** 2
-- is within the range of values.
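[Editorial aside: REM_OF_SUM and REM_OF_PRODUCT can be sketched as below,
reading the names as "compute in a wide enough anonymous type, then reduce
with rem". Python's unbounded integers stand in for the Anonymous type; the
sketch is illustrative, not the proposal's definition.]

```python
# The wide intermediate never overflows here, so the point of the pair of
# functions -- exact reduction of an out-of-range sum or product -- holds.
def rem_of_sum(addend, augend, divisor):
    return (addend + augend) % divisor

def rem_of_product(multiplier, multiplicand, divisor):
    return (multiplier * multiplicand) % divisor

# Even when addend + augend exceeds CARDINAL_nn'LAST (65535 for nn = 16),
# the reduced result fits the type again.
print(rem_of_sum(60000, 50000, 65521))  # 44479
```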
[Issue: Is this the right set of operations on modular unsigned numbers?
For instance, should there be a "FIND_FIRST_BIT" function? Many computers
have efficient instructions for this function and it would be unlikely that a
user-defined function would be able to utilize it.]
[Issue: Should subtypes of CARDINAL_nn be allowed? They seem to have no
application, but orthogonality would seem to require them.]
-- The following types facilitate the declaration of unsigned types of
-- maximum range:
type LARGEST_UNSIGNED_TYPE is
new implementation_defined_unsigned_type;
type LARGEST_CARDINAL_TYPE is
new implementation_defined_cardinal_type;
end X;
[In analogy with:
type T is range SYSTEM.MIN_INT .. SYSTEM.MAX_INT;
for symmetric integer types, unsigned integer types of maximum range can be
declared (in portable syntax) by:
type T is new LARGEST_UNSIGNED_TYPE;
type T is new LARGEST_CARDINAL_TYPE;]
*END OF PROPOSAL*
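[Editorial aside: the LOGICAL_SHIFT and ROTATE semantics specified in the
package comments above can be sketched as follows, modeled on 16-bit values.
The function names follow the proposal; the Python rendering is an
illustration, not a definition.]

```python
# 16-bit logical shift and end-around rotate, per the comment specifications.
NN = 16
MASK = (1 << NN) - 1

def logical_shift(item, bits):
    # Left shift for bits >= 0, right shift for bits < 0; shifted-out bits
    # fall off the end, vacated bits are zero-filled.
    if bits >= 0:
        return (item << bits) & MASK
    return item >> -bits

def rotate(item, bits):
    # End-around rotation: left for bits >= 0, right for bits < 0.
    bits %= NN
    return ((item << bits) | (item >> (NN - bits))) & MASK

print(hex(rotate(0x8001, 1)))         # 0x3: the high bit wraps to the bottom
print(hex(logical_shift(0x8001, 1)))  # 0x2: the high bit is lost
```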
*DISCUSSION*
OPERATIONAL SEMANTICS FOR UNSIGNED TYPES
The following two packages demonstrate the semantics desired for the
implementation-dependent unsigned types defined in the draft proposal.
-- Unsigned_Numbers demonstrates (most of) the desired properties of
-- implementation-defined unsigned numbers having range-checked
-- arithmetic.
-- In particular, it shows the behavior required for the arithmetic,
-- relational, and logical operations on type Unsigned. In addition,
-- it provides analogs for the 'Pred and 'Succ attributes.
-- It was considered unnecessary to simulate the other attributes and
-- the Shift and Rotate operations since they are expected to reflect the
-- usual properties of the underlying hardware operations.
-- N. B.: This simulation is also not realistic with regard to the
-- representation of the type Unsigned.
generic
type Basis is range <>; -- Any predefined or user-defined integer
-- type or subtype
Last : in Basis; -- Any positive value less than or equal to
-- Basis'Last/2
type Logical is range <>; -- Any integer type with a range at least
-- 0 .. Last for which bit-level
-- logical operations are available.
with function "and" (Left, Right : Logical) return Logical is <>;
with function "or" (Left, Right : Logical) return Logical is <>;
with function "xor" (Left, Right : Logical) return Logical is <>;
with function "not" (Right : Logical) return Logical is <>;
package Unsigned_Numbers is
type Unsigned is new Basis range 0 .. Last;
-- Note:
-- Unsigned'First = 0
-- Unsigned'Last = Last
-- Unsigned'Base'First = Unsigned'First = 0
-- (Not true in this simulation)
-- Unsigned'Base'Last = Unsigned'Last
-- (Not true in this simulation)
-- Unsigned'Pred(Unsigned'First) => Constraint_Error
-- (Not true in this simulation)
-- Unsigned'Succ(Unsigned'Last) => Constraint_Error
-- (Not true in this simulation)
-- The predefined operators for this type are as follows:
-- function "=" (Left, Right : Unsigned) return Boolean;
-- function "/=" (Left, Right : Unsigned) return Boolean;
-- function "<" (Left, Right : Unsigned) return Boolean;
-- function "<=" (Left, Right : Unsigned) return Boolean;
-- function ">" (Left, Right : Unsigned) return Boolean;
-- function ">=" (Left, Right : Unsigned) return Boolean;
-- function "+" (Right : Unsigned) return Unsigned;
function "-" (Right : Unsigned) return Unsigned;
-- function "abs" (Right : Unsigned) return Unsigned;
function "+" (Left, Right : Unsigned) return Unsigned;
function "-" (Left, Right : Unsigned) return Unsigned;
function "*" (Left, Right : Unsigned) return Unsigned;
-- function "/" (Left, Right : Unsigned) return Unsigned;
-- function "rem" (Left, Right : Unsigned) return Unsigned;
-- function "mod" (Left, Right : Unsigned) return Unsigned;
function "**" (Left : Unsigned;
Right : Integer) return Unsigned;
function "and" (Left, Right : Unsigned) return Unsigned;
function "or" (Left, Right : Unsigned) return Unsigned;
function "xor" (Left, Right : Unsigned) return Unsigned;
function "not" (Right : Unsigned) return Unsigned;
-- The following functions are substitutes for the
-- attributes Unsigned'Pred and Unsigned'Succ, respectively.
function Pred (Right : Unsigned) return Unsigned;
function Succ (Right : Unsigned) return Unsigned;
-- The following functions are not implemented because they use the
-- underlying hardware operations directly and need no simulation:
-- function Arithmetic_Shift (Item : Unsigned_nn;
-- Bits : Integer) return Unsigned_nn;
-- If Bits >= 0 or Bits < 0, returns Item left or right
-- arithmetically shifted (with zero fill) by abs(Bits)
-- bits, respectively.
-- function Logical_Shift (Item : Unsigned_nn;
-- Bits : Integer) return Unsigned_nn;
-- If Bits >= 0 or Bits < 0, returns Item left or right logically
-- shifted (end off) by abs(Bits) bits, respectively.
-- function Rotate (Item : Unsigned_nn;
-- Bits : Integer) return Unsigned_nn;
-- If Bits >= 0 or Bits < 0, returns Item left or right rotated
-- (end around) by abs(Bits) bits, respectively.
end Unsigned_Numbers;
with System;
package body Unsigned_Numbers is
function "-" (Right : Unsigned) return Unsigned is
begin
if Right = 0 then
return 0;
else
raise Numeric_Error;
end if;
end "-";
function "+" (Left, Right : Unsigned) return Unsigned is
begin
return Unsigned(Basis(Left) + Basis(Right));
exception
when Constraint_Error =>
raise Numeric_Error;
end "+";
function "-" (Left, Right : Unsigned) return Unsigned is
begin
return Unsigned(Basis(Left) - Basis(Right));
exception
when Constraint_Error =>
raise Numeric_Error;
end "-";
function "*" (Left, Right : Unsigned) return Unsigned is
begin
return Unsigned(Basis(Left) * Basis(Right));
exception
when Constraint_Error =>
raise Numeric_Error;
end "*";
function "**" (Left : Unsigned; Right : Integer) return Unsigned is
begin
return Unsigned(Basis(Left) ** Right);
exception
when Constraint_Error =>
raise Numeric_Error;
end "**";
function "and" (Left, Right : Unsigned) return Unsigned is
begin
return Unsigned(Logical(Left) and Logical(Right));
end "and";
function "or" (Left, Right : Unsigned) return Unsigned is
begin
return Unsigned(Logical(Left) or Logical(Right));
end "or";
function "xor" (Left, Right : Unsigned) return Unsigned is
begin
return Unsigned(Logical(Left) xor Logical(Right));
end "xor";
package Make is
function Mask return Logical;
end Make;
package body Make is
Mask_Constant : Logical;
function Mask return Logical is
begin
return Mask_Constant;
end Mask;
begin
Mask_Constant := 0;
for C in Unsigned loop
Mask_Constant := Mask_Constant or Logical(C);
end loop;
end Make;
function "not" (Right : Unsigned) return Unsigned is
begin
return Unsigned(not Logical(Right) and Make.Mask);
end "not";
function Pred (Right : Unsigned) return Unsigned is
begin
if Right = 0 then
raise Constraint_Error;
else
return Unsigned(Basis'Pred(Basis(Right)));
end if;
end Pred;
function Succ (Right : Unsigned) return Unsigned is
begin
if Right = Unsigned(Last) then
raise Constraint_Error;
else
return Unsigned(Basis'Succ(Basis(Right)));
end if;
end Succ;
begin
declare
type Large is range System.Min_Int .. System.Max_Int;
begin
if Last > Basis'Last/2 or else
Logical'First > 0 or else
Large(Logical'Last) < Large(Last) then
raise Program_Error;
end if;
end;
end Unsigned_Numbers;
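As an illustration only (the names Small, Range_Checked_Demo, and Unsigned_100 are hypothetical, and the operator bodies are omitted), Unsigned_Numbers might be instantiated as follows:

```ada
with Unsigned_Numbers;
package Range_Checked_Demo is

   type Small is range 0 .. 255;

   -- Hypothetical bit-level operations on Small; a real instantiation
   -- would use operations supplied by the implementation (bodies omitted).
   function "and" (Left, Right : Small) return Small;
   function "or"  (Left, Right : Small) return Small;
   function "xor" (Left, Right : Small) return Small;
   function "not" (Right : Small) return Small;

   -- Last = 100 satisfies Last <= Small'Last/2, and Small itself serves
   -- as the Logical type; the "and"/"or"/"xor"/"not" formals default
   -- (via "is <>") to the directly visible operations above.
   package Unsigned_100 is new Unsigned_Numbers
     (Basis   => Small,
      Last    => 100,
      Logical => Small);

   -- Unsigned_100."+"(60, 60) raises Numeric_Error, since 120 > 100.

end Range_Checked_Demo;
```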
-- Cardinal_Numbers demonstrates (most of) the desired properties of
-- implementation-defined unsigned numbers having modular arithmetic.
-- In particular, it shows the behavior required for the arithmetic,
-- relational, and logical operations on type Cardinal. In addition,
-- it provides analogs for the 'Pred and 'Succ attributes.
-- It was considered unnecessary to simulate the other attributes and
-- the Shift and Rotate operations since they are expected to reflect the
-- usual properties of the underlying hardware operations.
-- N. B.: This simulation is also not realistic with regard to the
-- representation of the type Cardinal.
generic
type Basis is range <>; -- Any predefined or user-defined integer
-- type or subtype
Modulus : in Basis; -- Any positive value less than
-- Basis'Last/2
type Extended is range <>; -- Any integer type with a range such that
-- Extended'Last >= (Modulus - 1)**2
type Logical is range <>; -- Any integer type with a range at least
-- 0 .. Modulus - 1 for which bit-level
-- logical operations are available.
with function "and" (Left, Right : Logical) return Logical is <>;
with function "or" (Left, Right : Logical) return Logical is <>;
with function "xor" (Left, Right : Logical) return Logical is <>;
with function "not" (Right : Logical) return Logical is <>;
package Cardinal_Numbers is
type Cardinal is new Basis range 0 .. Modulus - 1;
-- Note:
-- Cardinal'First = 0
-- Cardinal'Last = Modulus - 1
-- Cardinal'Base'First = Cardinal'First = 0
-- (Not true in this simulation)
-- Cardinal'Base'Last = Cardinal'Last
-- (Not true in this simulation)
-- Cardinal'Pred(Cardinal'First) = Cardinal'Last
-- (Not true in this simulation)
-- Cardinal'Succ(Cardinal'Last) = Cardinal'First
-- (Not true in this simulation)
-- The predefined operators for this type are as follows:
-- function "=" (Left, Right : Cardinal) return Boolean;
-- function "/=" (Left, Right : Cardinal) return Boolean;
-- function "<" (Left, Right : Cardinal) return Boolean;
-- function "<=" (Left, Right : Cardinal) return Boolean;
-- function ">" (Left, Right : Cardinal) return Boolean;
-- function ">=" (Left, Right : Cardinal) return Boolean;
-- function "+" (Right : Cardinal) return Cardinal;
function "-" (Right : Cardinal) return Cardinal;
-- function "abs" (Right : Cardinal) return Cardinal;
function "+" (Left, Right : Cardinal) return Cardinal;
function "-" (Left, Right : Cardinal) return Cardinal;
function "*" (Left, Right : Cardinal) return Cardinal;
-- function "/" (Left, Right : Cardinal) return Cardinal;
-- function "rem" (Left, Right : Cardinal) return Cardinal;
-- function "mod" (Left, Right : Cardinal) return Cardinal;
function "**" (Left : Cardinal; Right : Integer)
return Cardinal;
function "and" (Left, Right : Cardinal) return Cardinal;
function "or" (Left, Right : Cardinal) return Cardinal;
function "xor" (Left, Right : Cardinal) return Cardinal;
function "not" (Right : Cardinal) return Cardinal;
-- The following functions are cyclic substitutes for the
-- attributes Cardinal'Pred and Cardinal'Succ, respectively.
function Pred (Right : Cardinal) return Cardinal;
function Succ (Right : Cardinal) return Cardinal;
-- The following functions are not implemented:
-- function Arithmetic_Shift (Item : Cardinal_nn;
-- Bits : Integer) return Cardinal_nn;
-- If Bits >= 0 or Bits < 0, returns Item left or right
-- arithmetically shifted (with zero fill) by abs(Bits)
-- bits, respectively.
-- function Logical_Shift (Item : Cardinal_nn;
-- Bits : Integer) return Cardinal_nn;
-- If Bits >= 0 or Bits < 0, returns Item left or right logically
-- shifted (end off) by abs(Bits) bits, respectively.
-- function Rotate (Item : Cardinal_nn;
-- Bits : Integer) return Cardinal_nn;
-- If Bits >= 0 or Bits < 0, returns Item left or right rotated
-- (end around) by abs(Bits) bits, respectively.
function Rem_of_Sum (Addend,
Augend,
Divisor : Cardinal) return Cardinal;
-- Returns Cardinal((Extended(Addend) + Extended(Augend))
-- rem Extended(Divisor))
function Rem_of_Product (Multiplier,
Multiplicand,
Divisor : Cardinal) return Cardinal;
-- Returns Cardinal((Extended(Multiplier) * Extended(Multiplicand))
-- rem Extended(Divisor))
end Cardinal_Numbers;
package body Cardinal_Numbers is
function "-" (Right : Cardinal) return Cardinal is
begin
if Modulus = 1 then
return 0;
else
return (Cardinal'Last - Right) + 1;
end if;
end "-";
function "+" (Left, Right : Cardinal) return Cardinal is
begin
return Cardinal((Extended(Left) + Extended(Right))
mod Extended(Modulus));
end "+";
function "-" (Left, Right : Cardinal) return Cardinal is
begin
if Right > Left then
return ((Cardinal'Last - Right) + Left) + 1;
else
return Cardinal(Basis(Left) - Basis(Right));
end if;
end "-";
function "*" (Left, Right : Cardinal) return Cardinal is
begin
return Cardinal((Extended(Left) * Extended(Right))
mod Extended(Modulus));
end "*";
function "**" (Left : Cardinal;
Right : Integer) return Cardinal is
T : Cardinal;
begin
if Right < 0 then
raise Constraint_Error; -- as for the predefined "**"
elsif Modulus = 1 then
return 0;
else
T := 1;
for N in 1 .. Right loop
T := T * Left;
end loop;
return T;
end if;
end "**";
function "and" (Left, Right : Cardinal) return Cardinal is
begin
return Cardinal(Logical(Left) and Logical(Right));
end "and";
function "or" (Left, Right : Cardinal) return Cardinal is
begin
return Cardinal(Logical(Left) or Logical(Right));
end "or";
function "xor" (Left, Right : Cardinal) return Cardinal is
begin
return Cardinal(Logical(Left) xor Logical(Right));
end "xor";
package Make is
function Mask return Logical;
end Make;
package body Make is
Mask_Constant : Logical;
function Mask return Logical is
begin
return Mask_Constant;
end Mask;
begin
Mask_Constant := 0;
for C in Cardinal loop
Mask_Constant := Mask_Constant or Logical(C);
end loop;
end Make;
function "not" (Right : Cardinal) return Cardinal is
begin
return Cardinal(not Logical(Right) and Make.Mask);
end "not";
function Pred (Right : Cardinal) return Cardinal is
begin
if Right = 0 then
return Cardinal(Modulus - 1);
else
return Cardinal(Basis'Pred(Basis(Right)));
end if;
end Pred;
function Succ (Right : Cardinal) return Cardinal is
begin
if Right = Cardinal(Modulus - 1) then
return 0;
else
return Cardinal(Basis'Succ(Basis(Right)));
end if;
end Succ;
function Rem_of_Sum (Addend, Augend, Divisor : Cardinal)
return Cardinal is
begin
return Cardinal(
(Extended(Addend) + Extended(Augend)) rem Extended(Divisor));
end Rem_of_Sum;
function Rem_of_Product (Multiplier,
Multiplicand,
Divisor : Cardinal) return Cardinal is
begin
return Cardinal(
(Extended(Multiplier) * Extended(Multiplicand)) rem
Extended(Divisor));
end Rem_of_Product;
begin
-- Check Extended'Last >= (Modulus - 1)**2, as required of the actual
-- for Extended; the division form avoids overflow in the check itself.
if Modulus > 1 and then
Extended'Last / Extended(Modulus - 1) < Extended(Modulus - 1) then
raise Program_Error;
end if;
end Cardinal_Numbers;
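Again as a hypothetical sketch (the names and the operator bodies below are not part of the proposal), Cardinal_Numbers might be instantiated for arithmetic modulo 16:

```ada
with Cardinal_Numbers;
package Modular_Demo is

   type Small is range 0 .. 255;     -- Basis; 16 < Small'Last/2
   type Wide  is range 0 .. 65_535;  -- Extended; Wide'Last >= 15**2

   -- Hypothetical bit-level operations on Small (bodies omitted).
   function "and" (Left, Right : Small) return Small;
   function "or"  (Left, Right : Small) return Small;
   function "xor" (Left, Right : Small) return Small;
   function "not" (Right : Small) return Small;

   package Mod_16 is new Cardinal_Numbers
     (Basis    => Small,
      Modulus  => 16,
      Extended => Wide,
      Logical  => Small);

   -- Mod_16."+"(15, 1) = 0 and Mod_16.Pred(0) = 15: the arithmetic wraps.

end Modular_Demo;
```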
UNSIGNED TYPES OF ARBITRARY RANGE
The following two packages can be used to provide range-checked and modular
unsigned types of arbitrary range by derivation from the unsigned types in
package X.
generic
type Basis is range <>;
Last : in Basis;
with function "and" (Left, Right : Basis) return Basis is <>;
with function "or" (Left, Right : Basis) return Basis is <>;
with function "xor" (Left, Right : Basis) return Basis is <>;
with function "not" (Right : Basis) return Basis is <>;
with function Pred (Item : Basis) return Basis is Basis'Pred;
with function Succ (Item : Basis) return Basis is Basis'Succ;
package Derived_Unsigned is
type Unsigned is new Basis range 0 .. Last;
-- function "<" (Left, Right : Unsigned) return Boolean;
-- function "<=" (Left, Right : Unsigned) return Boolean;
-- function ">" (Left, Right : Unsigned) return Boolean;
-- function ">=" (Left, Right : Unsigned) return Boolean;
-- function "+" (Right : Unsigned) return Unsigned;
function "-" (Right : Unsigned) return Unsigned;
-- function "abs" (Right : Unsigned) return Unsigned;
function "+" (Left, Right : Unsigned) return Unsigned;
function "-" (Left, Right : Unsigned) return Unsigned;
function "*" (Left, Right : Unsigned) return Unsigned;
-- function "/" (Left, Right : Unsigned) return Unsigned;
-- function "rem" (Left, Right : Unsigned) return Unsigned;
-- function "mod" (Left, Right : Unsigned) return Unsigned;
function "**" (Left : Unsigned; Right : Integer)
return Unsigned;
function "and" (Left, Right : Unsigned) return Unsigned;
function "or" (Left, Right : Unsigned) return Unsigned;
function "xor" (Left, Right : Unsigned) return Unsigned;
function "not" (Right : Unsigned) return Unsigned;
function Pred (Item : Unsigned) return Unsigned;
function Succ (Item : Unsigned) return Unsigned;
end Derived_Unsigned;
package body Derived_Unsigned is
function "-" (Right : Unsigned) return Unsigned is
begin
return Unsigned(-Basis(Right));
exception
when Constraint_Error =>
raise Numeric_Error;
end "-";
function "+" (Left, Right : Unsigned) return Unsigned is
Temp : Unsigned;
begin
Temp := Unsigned(Basis(Left) + Basis(Right));
if Basis(Temp) <= Last then
return Temp;
else
raise Numeric_Error;
end if;
exception
when Constraint_Error =>
raise Numeric_Error;
end "+";
function "-" (Left, Right : Unsigned) return Unsigned is
Temp : Unsigned;
begin
Temp := Unsigned(Basis(Left) - Basis(Right));
if Basis(Temp) <= Last then
return Temp;
else
raise Numeric_Error;
end if;
exception
when Constraint_Error =>
raise Numeric_Error;
end "-";
function "*" (Left, Right : Unsigned) return Unsigned is
begin
return Unsigned(Basis(Left) * Basis(Right));
exception
when Constraint_Error =>
raise Numeric_Error;
end "*";
function "**" (Left : Unsigned; Right : Integer) return Unsigned is
Temp : Unsigned;
begin
Temp := Unsigned(Basis(Left) ** Right);
if Basis(Temp) <= Last then
return Temp;
else
raise Numeric_Error;
end if;
exception
when Constraint_Error =>
raise Numeric_Error;
end "**";
function "and" (Left, Right : Unsigned) return Unsigned is
begin
return Unsigned(Basis(Left) and Basis(Right));
end "and";
function "or" (Left, Right : Unsigned) return Unsigned is
begin
return Unsigned(Basis(Left) or Basis(Right));
end "or";
function "xor" (Left, Right : Unsigned) return Unsigned is
begin
return Unsigned(Basis(Left) xor Basis(Right));
end "xor";
package Make is
function Mask return Basis;
end Make;
package body Make is
Mask_Constant : Basis;
function Mask return Basis is
begin
return Mask_Constant;
end Mask;
begin
Mask_Constant := 0;
for C in Unsigned loop
Mask_Constant := Mask_Constant or Basis(C);
end loop;
end Make;
function "not" (Right : Unsigned) return Unsigned is
begin
return Unsigned(not Basis(Right) and Make.Mask);
end "not";
function Pred (Item : Unsigned) return Unsigned is
begin
return Unsigned(Pred(Basis(Item)));
end Pred;
function Succ (Item : Unsigned) return Unsigned is
begin
if Item = Unsigned'Last then
raise Constraint_Error;
else
return Unsigned(Succ(Basis(Item)));
end if;
end Succ;
end Derived_Unsigned;
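For example, assuming package X declares an integer type Unsigned_16 whose logical operators are directly visible (an assumption about the eventual contents of X), a 12-bit range-checked type could then be obtained using only the generic defaults:

```ada
with X; use X;           -- Unsigned_16 and its operators assumed visible
with Derived_Unsigned;
package Twelve_Bit_Unsigned is new Derived_Unsigned
  (Basis => Unsigned_16,
   Last  => 4095);
-- The "and"/"or"/"xor"/"not" formals default to the visible operators
-- of Unsigned_16, and Pred/Succ default to Unsigned_16'Pred/'Succ.
```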
generic
type Basis is range <>;
Modulus : in Basis;
with function Rem_of_Sum (Addend, Augend, Divisor : Basis)
return Basis is <>;
with function Rem_of_Product (Multiplier,
Multiplicand,
Divisor : Basis) return Basis is <>;
with function "and" (Left, Right : Basis) return Basis is <>;
with function "or" (Left, Right : Basis) return Basis is <>;
with function "xor" (Left, Right : Basis) return Basis is <>;
with function "not" (Right : Basis) return Basis is <>;
with function Pred (Item : Basis) return Basis is Basis'Pred;
with function Succ (Item : Basis) return Basis is Basis'Succ;
package Derived_Cardinal is
type Cardinal is new Basis range 0 .. Modulus - 1;
-- function "<" (Left, Right : Cardinal) return Boolean;
-- function "<=" (Left, Right : Cardinal) return Boolean;
-- function ">" (Left, Right : Cardinal) return Boolean;
-- function ">=" (Left, Right : Cardinal) return Boolean;
-- function "+" (Right : Cardinal) return Cardinal;
function "-" (Right : Cardinal) return Cardinal;
-- function "abs" (Right : Cardinal) return Cardinal;
function "+" (Left, Right : Cardinal) return Cardinal;
function "-" (Left, Right : Cardinal) return Cardinal;
function "*" (Left, Right : Cardinal) return Cardinal;
-- function "/" (Left, Right : Cardinal) return Cardinal;
-- function "rem" (Left, Right : Cardinal) return Cardinal;
-- function "mod" (Left, Right : Cardinal) return Cardinal;
function "**" (Left : Cardinal; Right : Integer)
return Cardinal;
function "and" (Left, Right : Cardinal) return Cardinal;
function "or" (Left, Right : Cardinal) return Cardinal;
function "xor" (Left, Right : Cardinal) return Cardinal;
function "not" (Right : Cardinal) return Cardinal;
function Pred (Item : Cardinal) return Cardinal;
function Succ (Item : Cardinal) return Cardinal;
end Derived_Cardinal;
package body Derived_Cardinal is
function "-" (Right : Cardinal) return Cardinal is
begin
if Modulus = 1 then
return 0;
else
return (Cardinal'Last - Right) + 1;
end if;
end "-";
function "+" (Left, Right : Cardinal) return Cardinal is
begin
return Cardinal(Rem_of_Sum(Basis(Left), Basis(Right), Modulus));
end "+";
function "-" (Left, Right : Cardinal) return Cardinal is
begin
if Right > Left then
return ((Cardinal'Last - Right) + Left) + 1;
else
return Cardinal(Basis(Left) - Basis(Right));
end if;
end "-";
function "*" (Left, Right : Cardinal) return Cardinal is
begin
return Cardinal(Rem_of_Product(Basis(Left), Basis(Right), Modulus));
end "*";
function "**" (Left : Cardinal; Right : Integer) return Cardinal is
Temp : Cardinal;
begin
if Right < 0 then
raise Constraint_Error; -- as for the predefined "**"
elsif Modulus = 1 then
return 0;
else
Temp := 1;
for N in 1 .. Right loop
Temp := Temp * Left;
end loop;
return Temp;
end if;
function "and" (Left, Right : Cardinal) return Cardinal is
begin
return Cardinal(Basis(Left) and Basis(Right));
end "and";
function "or" (Left, Right : Cardinal) return Cardinal is
begin
return Cardinal(Basis(Left) or Basis(Right));
end "or";
function "xor" (Left, Right : Cardinal) return Cardinal is
begin
return Cardinal(Basis(Left) xor Basis(Right));
end "xor";
package Make is
function Mask return Basis;
end Make;
package body Make is
Mask_Constant : Basis;
function Mask return Basis is
begin
return Mask_Constant;
end Mask;
begin
Mask_Constant := 0;
for C in Cardinal loop
Mask_Constant := Mask_Constant or Basis(C);
end loop;
end Make;
function "not" (Right : Cardinal) return Cardinal is
begin
return Cardinal(not Basis(Right) and Make.Mask);
end "not";
function Pred (Item : Cardinal) return Cardinal is
begin
if Item = 0 then
return Cardinal'Last;
else
return Cardinal(Pred(Basis(Item)));
end if;
end Pred;
function Succ (Item : Cardinal) return Cardinal is
begin
if Item = Cardinal'Last then
return 0;
else
return Cardinal(Succ(Basis(Item)));
end if;
end Succ;
end Derived_Cardinal;
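Similarly, assuming package X declares Cardinal_16 together with matching Rem_of_Sum and Rem_of_Product functions (again an assumption about X), a modulo-1000 type could be derived:

```ada
with X; use X;    -- Cardinal_16, Rem_of_Sum, Rem_of_Product assumed visible
with Derived_Cardinal;
package Mod_1000_Cardinal is new Derived_Cardinal
  (Basis   => Cardinal_16,
   Modulus => 1000);
-- All other formals are satisfied by the generic defaults: the logical
-- operators of Cardinal_16 and Cardinal_16'Pred/'Succ.
```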
EXAMPLES OF THE BEHAVIOR OF UNSIGNED TYPES
The following are examples of the desired behavior of range-checked and
modular unsigned types:
Range-checked types:
---------------------------------------
Last is 2 (type range is 0 .. 2)
+ 0 = 0 (+ 2#0# = 2#0#)
+ 1 = 1 (+ 2#1# = 2#1#)
+ 2 = 2 (+ 2#10# = 2#10#)
- 0 = 0 (- 2#0# = 2#0#)
- 1 = Numeric_Error
- 2 = Numeric_Error
0 + 0 = 0 (2#0# + 2#0# = 2#0#)
0 + 1 = 1 (2#0# + 2#1# = 2#1#)
0 + 2 = 2 (2#0# + 2#10# = 2#10#)
1 + 0 = 1 (2#1# + 2#0# = 2#1#)
1 + 1 = 2 (2#1# + 2#1# = 2#10#)
1 + 2 = Numeric_Error
2 + 0 = 2 (2#10# + 2#0# = 2#10#)
2 + 1 = Numeric_Error
2 + 2 = Numeric_Error
0 - 0 = 0 (2#0# - 2#0# = 2#0#)
0 - 1 = Numeric_Error
0 - 2 = Numeric_Error
1 - 0 = 1 (2#1# - 2#0# = 2#1#)
1 - 1 = 0 (2#1# - 2#1# = 2#0#)
1 - 2 = Numeric_Error
2 - 0 = 2 (2#10# - 2#0# = 2#10#)
2 - 1 = 1 (2#10# - 2#1# = 2#1#)
2 - 2 = 0 (2#10# - 2#10# = 2#0#)
0 * 0 = 0 (2#0# * 2#0# = 2#0#)
0 * 1 = 0 (2#0# * 2#1# = 2#0#)
0 * 2 = 0 (2#0# * 2#10# = 2#0#)
1 * 0 = 0 (2#1# * 2#0# = 2#0#)
1 * 1 = 1 (2#1# * 2#1# = 2#1#)
1 * 2 = 2 (2#1# * 2#10# = 2#10#)
2 * 0 = 0 (2#10# * 2#0# = 2#0#)
2 * 1 = 2 (2#10# * 2#1# = 2#10#)
2 * 2 = Numeric_Error
0 / 0 = Numeric_Error
0 / 1 = 0 (2#0# / 2#1# = 2#0#)
0 / 2 = 0 (2#0# / 2#10# = 2#0#)
1 / 0 = Numeric_Error
1 / 1 = 1 (2#1# / 2#1# = 2#1#)
1 / 2 = 0 (2#1# / 2#10# = 2#0#)
2 / 0 = Numeric_Error
2 / 1 = 2 (2#10# / 2#1# = 2#10#)
2 / 2 = 1 (2#10# / 2#10# = 2#1#)
0 rem 0 = Numeric_Error
0 rem 1 = 0 (2#0# rem 2#1# = 2#0#)
0 rem 2 = 0 (2#0# rem 2#10# = 2#0#)
1 rem 0 = Numeric_Error
1 rem 1 = 0 (2#1# rem 2#1# = 2#0#)
1 rem 2 = 1 (2#1# rem 2#10# = 2#1#)
2 rem 0 = Numeric_Error
2 rem 1 = 0 (2#10# rem 2#1# = 2#0#)
2 rem 2 = 0 (2#10# rem 2#10# = 2#0#)
0 mod 0 = Numeric_Error
0 mod 1 = 0 (2#0# mod 2#1# = 2#0#)
0 mod 2 = 0 (2#0# mod 2#10# = 2#0#)
1 mod 0 = Numeric_Error
1 mod 1 = 0 (2#1# mod 2#1# = 2#0#)
1 mod 2 = 1 (2#1# mod 2#10# = 2#1#)
2 mod 0 = Numeric_Error
2 mod 1 = 0 (2#10# mod 2#1# = 2#0#)
2 mod 2 = 0 (2#10# mod 2#10# = 2#0#)
0 ** 0 = 1 (2#0# ** 2#0# = 2#1#)
0 ** 1 = 0 (2#0# ** 2#1# = 2#0#)
0 ** 2 = 0 (2#0# ** 2#10# = 2#0#)
1 ** 0 = 1 (2#1# ** 2#0# = 2#1#)
1 ** 1 = 1 (2#1# ** 2#1# = 2#1#)
1 ** 2 = 1 (2#1# ** 2#10# = 2#1#)
2 ** 0 = 1 (2#10# ** 2#0# = 2#1#)
2 ** 1 = 2 (2#10# ** 2#1# = 2#10#)
2 ** 2 = Numeric_Error
0 and 0 = 0 (2#0# and 2#0# = 2#0#)
0 and 1 = 0 (2#0# and 2#1# = 2#0#)
0 and 2 = 0 (2#0# and 2#10# = 2#0#)
1 and 0 = 0 (2#1# and 2#0# = 2#0#)
1 and 1 = 1 (2#1# and 2#1# = 2#1#)
1 and 2 = 0 (2#1# and 2#10# = 2#0#)
2 and 0 = 0 (2#10# and 2#0# = 2#0#)
2 and 1 = 0 (2#10# and 2#1# = 2#0#)
2 and 2 = 2 (2#10# and 2#10# = 2#10#)
0 or 0 = 0 (2#0# or 2#0# = 2#0#)
0 or 1 = 1 (2#0# or 2#1# = 2#1#)
0 or 2 = 2 (2#0# or 2#10# = 2#10#)
1 or 0 = 1 (2#1# or 2#0# = 2#1#)
1 or 1 = 1 (2#1# or 2#1# = 2#1#)
1 or 2 = Constraint_Error
2 or 0 = 2 (2#10# or 2#0# = 2#10#)
2 or 1 = Constraint_Error
2 or 2 = 2 (2#10# or 2#10# = 2#10#)
0 xor 0 = 0 (2#0# xor 2#0# = 2#0#)
0 xor 1 = 1 (2#0# xor 2#1# = 2#1#)
0 xor 2 = 2 (2#0# xor 2#10# = 2#10#)
1 xor 0 = 1 (2#1# xor 2#0# = 2#1#)
1 xor 1 = 0 (2#1# xor 2#1# = 2#0#)
1 xor 2 = Constraint_Error
2 xor 0 = 2 (2#10# xor 2#0# = 2#10#)
2 xor 1 = Constraint_Error
2 xor 2 = 0 (2#10# xor 2#10# = 2#0#)
not 0 = Constraint_Error
not 1 = 2 (not 2#1# = 2#10#)
not 2 = 1 (not 2#10# = 2#1#)
Pred(0) = Constraint_Error
Pred(1) = 0 (Pred(2#1#) = 2#0#)
Pred(2) = 1 (Pred(2#10#) = 2#1#)
Succ(0) = 1 (Succ(2#0#) = 2#1#)
Succ(1) = 2 (Succ(2#1#) = 2#10#)
Succ(2) = Constraint_Error
---------------------------------------
Last is 3 (type range is 0 .. 3)
+ 0 = 0 (+ 2#0# = 2#0#)
+ 1 = 1 (+ 2#1# = 2#1#)
+ 2 = 2 (+ 2#10# = 2#10#)
+ 3 = 3 (+ 2#11# = 2#11#)
- 0 = 0 (- 2#0# = 2#0#)
- 1 = Numeric_Error
- 2 = Numeric_Error
- 3 = Numeric_Error
0 + 0 = 0 (2#0# + 2#0# = 2#0#)
0 + 1 = 1 (2#0# + 2#1# = 2#1#)
0 + 2 = 2 (2#0# + 2#10# = 2#10#)
0 + 3 = 3 (2#0# + 2#11# = 2#11#)
1 + 0 = 1 (2#1# + 2#0# = 2#1#)
1 + 1 = 2 (2#1# + 2#1# = 2#10#)
1 + 2 = 3 (2#1# + 2#10# = 2#11#)
1 + 3 = Numeric_Error
2 + 0 = 2 (2#10# + 2#0# = 2#10#)
2 + 1 = 3 (2#10# + 2#1# = 2#11#)
2 + 2 = Numeric_Error
2 + 3 = Numeric_Error
3 + 0 = 3 (2#11# + 2#0# = 2#11#)
3 + 1 = Numeric_Error
3 + 2 = Numeric_Error
3 + 3 = Numeric_Error
0 - 0 = 0 (2#0# - 2#0# = 2#0#)
0 - 1 = Numeric_Error
0 - 2 = Numeric_Error
0 - 3 = Numeric_Error
1 - 0 = 1 (2#1# - 2#0# = 2#1#)
1 - 1 = 0 (2#1# - 2#1# = 2#0#)
1 - 2 = Numeric_Error
1 - 3 = Numeric_Error
2 - 0 = 2 (2#10# - 2#0# = 2#10#)
2 - 1 = 1 (2#10# - 2#1# = 2#1#)
2 - 2 = 0 (2#10# - 2#10# = 2#0#)
2 - 3 = Numeric_Error
3 - 0 = 3 (2#11# - 2#0# = 2#11#)
3 - 1 = 2 (2#11# - 2#1# = 2#10#)
3 - 2 = 1 (2#11# - 2#10# = 2#1#)
3 - 3 = 0 (2#11# - 2#11# = 2#0#)
0 * 0 = 0 (2#0# * 2#0# = 2#0#)
0 * 1 = 0 (2#0# * 2#1# = 2#0#)
0 * 2 = 0 (2#0# * 2#10# = 2#0#)
0 * 3 = 0 (2#0# * 2#11# = 2#0#)
1 * 0 = 0 (2#1# * 2#0# = 2#0#)
1 * 1 = 1 (2#1# * 2#1# = 2#1#)
1 * 2 = 2 (2#1# * 2#10# = 2#10#)
1 * 3 = 3 (2#1# * 2#11# = 2#11#)
2 * 0 = 0 (2#10# * 2#0# = 2#0#)
2 * 1 = 2 (2#10# * 2#1# = 2#10#)
2 * 2 = Numeric_Error
2 * 3 = Numeric_Error
3 * 0 = 0 (2#11# * 2#0# = 2#0#)
3 * 1 = 3 (2#11# * 2#1# = 2#11#)
3 * 2 = Numeric_Error
3 * 3 = Numeric_Error
0 / 0 = Numeric_Error
0 / 1 = 0 (2#0# / 2#1# = 2#0#)
0 / 2 = 0 (2#0# / 2#10# = 2#0#)
0 / 3 = 0 (2#0# / 2#11# = 2#0#)
1 / 0 = Numeric_Error
1 / 1 = 1 (2#1# / 2#1# = 2#1#)
1 / 2 = 0 (2#1# / 2#10# = 2#0#)
1 / 3 = 0 (2#1# / 2#11# = 2#0#)
2 / 0 = Numeric_Error
2 / 1 = 2 (2#10# / 2#1# = 2#10#)
2 / 2 = 1 (2#10# / 2#10# = 2#1#)
2 / 3 = 0 (2#10# / 2#11# = 2#0#)
3 / 0 = Numeric_Error
3 / 1 = 3 (2#11# / 2#1# = 2#11#)
3 / 2 = 1 (2#11# / 2#10# = 2#1#)
3 / 3 = 1 (2#11# / 2#11# = 2#1#)
0 rem 0 = Numeric_Error
0 rem 1 = 0 (2#0# rem 2#1# = 2#0#)
0 rem 2 = 0 (2#0# rem 2#10# = 2#0#)
0 rem 3 = 0 (2#0# rem 2#11# = 2#0#)
1 rem 0 = Numeric_Error
1 rem 1 = 0 (2#1# rem 2#1# = 2#0#)
1 rem 2 = 1 (2#1# rem 2#10# = 2#1#)
1 rem 3 = 1 (2#1# rem 2#11# = 2#1#)
2 rem 0 = Numeric_Error
2 rem 1 = 0 (2#10# rem 2#1# = 2#0#)
2 rem 2 = 0 (2#10# rem 2#10# = 2#0#)
2 rem 3 = 2 (2#10# rem 2#11# = 2#10#)
3 rem 0 = Numeric_Error
3 rem 1 = 0 (2#11# rem 2#1# = 2#0#)
3 rem 2 = 1 (2#11# rem 2#10# = 2#1#)
3 rem 3 = 0 (2#11# rem 2#11# = 2#0#)
0 mod 0 = Numeric_Error
0 mod 1 = 0 (2#0# mod 2#1# = 2#0#)
0 mod 2 = 0 (2#0# mod 2#10# = 2#0#)
0 mod 3 = 0 (2#0# mod 2#11# = 2#0#)
1 mod 0 = Numeric_Error
1 mod 1 = 0 (2#1# mod 2#1# = 2#0#)
1 mod 2 = 1 (2#1# mod 2#10# = 2#1#)
1 mod 3 = 1 (2#1# mod 2#11# = 2#1#)
2 mod 0 = Numeric_Error
2 mod 1 = 0 (2#10# mod 2#1# = 2#0#)
2 mod 2 = 0 (2#10# mod 2#10# = 2#0#)
2 mod 3 = 2 (2#10# mod 2#11# = 2#10#)
3 mod 0 = Numeric_Error
3 mod 1 = 0 (2#11# mod 2#1# = 2#0#)
3 mod 2 = 1 (2#11# mod 2#10# = 2#1#)
3 mod 3 = 0 (2#11# mod 2#11# = 2#0#)
0 ** 0 = 1 (2#0# ** 2#0# = 2#1#)
0 ** 1 = 0 (2#0# ** 2#1# = 2#0#)
0 ** 2 = 0 (2#0# ** 2#10# = 2#0#)
0 ** 3 = 0 (2#0# ** 2#11# = 2#0#)
1 ** 0 = 1 (2#1# ** 2#0# = 2#1#)
1 ** 1 = 1 (2#1# ** 2#1# = 2#1#)
1 ** 2 = 1 (2#1# ** 2#10# = 2#1#)
1 ** 3 = 1 (2#1# ** 2#11# = 2#1#)
2 ** 0 = 1 (2#10# ** 2#0# = 2#1#)
2 ** 1 = 2 (2#10# ** 2#1# = 2#10#)
2 ** 2 = Numeric_Error
3 ** 0 = 1 (2#11# ** 2#0# = 2#1#)
3 ** 1 = 3 (2#11# ** 2#1# = 2#11#)
3 ** 2 = Numeric_Error
0 and 0 = 0 (2#0# and 2#0# = 2#0#)
0 and 1 = 0 (2#0# and 2#1# = 2#0#)
0 and 2 = 0 (2#0# and 2#10# = 2#0#)
0 and 3 = 0 (2#0# and 2#11# = 2#0#)
1 and 0 = 0 (2#1# and 2#0# = 2#0#)
1 and 1 = 1 (2#1# and 2#1# = 2#1#)
1 and 2 = 0 (2#1# and 2#10# = 2#0#)
1 and 3 = 1 (2#1# and 2#11# = 2#1#)
2 and 0 = 0 (2#10# and 2#0# = 2#0#)
2 and 1 = 0 (2#10# and 2#1# = 2#0#)
2 and 2 = 2 (2#10# and 2#10# = 2#10#)
2 and 3 = 2 (2#10# and 2#11# = 2#10#)
3 and 0 = 0 (2#11# and 2#0# = 2#0#)
3 and 1 = 1 (2#11# and 2#1# = 2#1#)
3 and 2 = 2 (2#11# and 2#10# = 2#10#)
3 and 3 = 3 (2#11# and 2#11# = 2#11#)
0 or 0 = 0 (2#0# or 2#0# = 2#0#)
0 or 1 = 1 (2#0# or 2#1# = 2#1#)
0 or 2 = 2 (2#0# or 2#10# = 2#10#)
0 or 3 = 3 (2#0# or 2#11# = 2#11#)
1 or 0 = 1 (2#1# or 2#0# = 2#1#)
1 or 1 = 1 (2#1# or 2#1# = 2#1#)
1 or 2 = 3 (2#1# or 2#10# = 2#11#)
1 or 3 = 3 (2#1# or 2#11# = 2#11#)
2 or 0 = 2 (2#10# or 2#0# = 2#10#)
2 or 1 = 3 (2#10# or 2#1# = 2#11#)
2 or 2 = 2 (2#10# or 2#10# = 2#10#)
2 or 3 = 3 (2#10# or 2#11# = 2#11#)
3 or 0 = 3 (2#11# or 2#0# = 2#11#)
3 or 1 = 3 (2#11# or 2#1# = 2#11#)
3 or 2 = 3 (2#11# or 2#10# = 2#11#)
3 or 3 = 3 (2#11# or 2#11# = 2#11#)
0 xor 0 = 0 (2#0# xor 2#0# = 2#0#)
0 xor 1 = 1 (2#0# xor 2#1# = 2#1#)
0 xor 2 = 2 (2#0# xor 2#10# = 2#10#)
0 xor 3 = 3 (2#0# xor 2#11# = 2#11#)
1 xor 0 = 1 (2#1# xor 2#0# = 2#1#)
1 xor 1 = 0 (2#1# xor 2#1# = 2#0#)
1 xor 2 = 3 (2#1# xor 2#10# = 2#11#)
1 xor 3 = 2 (2#1# xor 2#11# = 2#10#)
2 xor 0 = 2 (2#10# xor 2#0# = 2#10#)
2 xor 1 = 3 (2#10# xor 2#1# = 2#11#)
2 xor 2 = 0 (2#10# xor 2#10# = 2#0#)
2 xor 3 = 1 (2#10# xor 2#11# = 2#1#)
3 xor 0 = 3 (2#11# xor 2#0# = 2#11#)
3 xor 1 = 2 (2#11# xor 2#1# = 2#10#)
3 xor 2 = 1 (2#11# xor 2#10# = 2#1#)
3 xor 3 = 0 (2#11# xor 2#11# = 2#0#)
not 0 = 3 (not 2#0# = 2#11#)
not 1 = 2 (not 2#1# = 2#10#)
not 2 = 1 (not 2#10# = 2#1#)
not 3 = 0 (not 2#11# = 2#0#)
Pred(0) = Constraint_Error
Pred(1) = 0 (Pred(2#1#) = 2#0#)
Pred(2) = 1 (Pred(2#10#) = 2#1#)
Pred(3) = 2 (Pred(2#11#) = 2#10#)
Succ(0) = 1 (Succ(2#0#) = 2#1#)
Succ(1) = 2 (Succ(2#1#) = 2#10#)
Succ(2) = 3 (Succ(2#10#) = 2#11#)
Succ(3) = Constraint_Error
=======================================
Modular types:
---------------------------------------
The modulus is 3
+ 0 = 0 (+ 2#0# = 2#0#)
+ 1 = 1 (+ 2#1# = 2#1#)
+ 2 = 2 (+ 2#10# = 2#10#)
- 0 = 0 (- 2#0# = 2#0#)
- 1 = 2 (- 2#1# = 2#10#)
- 2 = 1 (- 2#10# = 2#1#)
0 + 0 = 0 (2#0# + 2#0# = 2#0#)
0 + 1 = 1 (2#0# + 2#1# = 2#1#)
0 + 2 = 2 (2#0# + 2#10# = 2#10#)
1 + 0 = 1 (2#1# + 2#0# = 2#1#)
1 + 1 = 2 (2#1# + 2#1# = 2#10#)
1 + 2 = 0 (2#1# + 2#10# = 2#0#)
2 + 0 = 2 (2#10# + 2#0# = 2#10#)
2 + 1 = 0 (2#10# + 2#1# = 2#0#)
2 + 2 = 1 (2#10# + 2#10# = 2#1#)
0 - 0 = 0 (2#0# - 2#0# = 2#0#)
0 - 1 = 2 (2#0# - 2#1# = 2#10#)
0 - 2 = 1 (2#0# - 2#10# = 2#1#)
1 - 0 = 1 (2#1# - 2#0# = 2#1#)
1 - 1 = 0 (2#1# - 2#1# = 2#0#)
1 - 2 = 2 (2#1# - 2#10# = 2#10#)
2 - 0 = 2 (2#10# - 2#0# = 2#10#)
2 - 1 = 1 (2#10# - 2#1# = 2#1#)
2 - 2 = 0 (2#10# - 2#10# = 2#0#)
0 * 0 = 0 (2#0# * 2#0# = 2#0#)
0 * 1 = 0 (2#0# * 2#1# = 2#0#)
0 * 2 = 0 (2#0# * 2#10# = 2#0#)
1 * 0 = 0 (2#1# * 2#0# = 2#0#)
1 * 1 = 1 (2#1# * 2#1# = 2#1#)
1 * 2 = 2 (2#1# * 2#10# = 2#10#)
2 * 0 = 0 (2#10# * 2#0# = 2#0#)
2 * 1 = 2 (2#10# * 2#1# = 2#10#)
2 * 2 = 1 (2#10# * 2#10# = 2#1#)
0 / 0 = Numeric_Error
0 / 1 = 0 (2#0# / 2#1# = 2#0#)
0 / 2 = 0 (2#0# / 2#10# = 2#0#)
1 / 0 = Numeric_Error
1 / 1 = 1 (2#1# / 2#1# = 2#1#)
1 / 2 = 0 (2#1# / 2#10# = 2#0#)
2 / 0 = Numeric_Error
2 / 1 = 2 (2#10# / 2#1# = 2#10#)
2 / 2 = 1 (2#10# / 2#10# = 2#1#)
0 rem 0 = Numeric_Error
0 rem 1 = 0 (2#0# rem 2#1# = 2#0#)
0 rem 2 = 0 (2#0# rem 2#10# = 2#0#)
1 rem 0 = Numeric_Error
1 rem 1 = 0 (2#1# rem 2#1# = 2#0#)
1 rem 2 = 1 (2#1# rem 2#10# = 2#1#)
2 rem 0 = Numeric_Error
2 rem 1 = 0 (2#10# rem 2#1# = 2#0#)
2 rem 2 = 0 (2#10# rem 2#10# = 2#0#)
0 mod 0 = Numeric_Error
0 mod 1 = 0 (2#0# mod 2#1# = 2#0#)
0 mod 2 = 0 (2#0# mod 2#10# = 2#0#)
1 mod 0 = Numeric_Error
1 mod 1 = 0 (2#1# mod 2#1# = 2#0#)
1 mod 2 = 1 (2#1# mod 2#10# = 2#1#)
2 mod 0 = Numeric_Error
2 mod 1 = 0 (2#10# mod 2#1# = 2#0#)
2 mod 2 = 0 (2#10# mod 2#10# = 2#0#)
0 ** 0 = 1 (2#0# ** 2#0# = 2#1#)
0 ** 1 = 0 (2#0# ** 2#1# = 2#0#)
0 ** 2 = 0 (2#0# ** 2#10# = 2#0#)
1 ** 0 = 1 (2#1# ** 2#0# = 2#1#)
1 ** 1 = 1 (2#1# ** 2#1# = 2#1#)
1 ** 2 = 1 (2#1# ** 2#10# = 2#1#)
2 ** 0 = 1 (2#10# ** 2#0# = 2#1#)
2 ** 1 = 2 (2#10# ** 2#1# = 2#10#)
2 ** 2 = 1 (2#10# ** 2#10# = 2#1#)
0 and 0 = 0 (2#0# and 2#0# = 2#0#)
0 and 1 = 0 (2#0# and 2#1# = 2#0#)
0 and 2 = 0 (2#0# and 2#10# = 2#0#)
1 and 0 = 0 (2#1# and 2#0# = 2#0#)
1 and 1 = 1 (2#1# and 2#1# = 2#1#)
1 and 2 = 0 (2#1# and 2#10# = 2#0#)
2 and 0 = 0 (2#10# and 2#0# = 2#0#)
2 and 1 = 0 (2#10# and 2#1# = 2#0#)
2 and 2 = 2 (2#10# and 2#10# = 2#10#)
0 or 0 = 0 (2#0# or 2#0# = 2#0#)
0 or 1 = 1 (2#0# or 2#1# = 2#1#)
0 or 2 = 2 (2#0# or 2#10# = 2#10#)
1 or 0 = 1 (2#1# or 2#0# = 2#1#)
1 or 1 = 1 (2#1# or 2#1# = 2#1#)
1 or 2 = Constraint_Error
2 or 0 = 2 (2#10# or 2#0# = 2#10#)
2 or 1 = Constraint_Error
2 or 2 = 2 (2#10# or 2#10# = 2#10#)
0 xor 0 = 0 (2#0# xor 2#0# = 2#0#)
0 xor 1 = 1 (2#0# xor 2#1# = 2#1#)
0 xor 2 = 2 (2#0# xor 2#10# = 2#10#)
1 xor 0 = 1 (2#1# xor 2#0# = 2#1#)
1 xor 1 = 0 (2#1# xor 2#1# = 2#0#)
1 xor 2 = Constraint_Error
2 xor 0 = 2 (2#10# xor 2#0# = 2#10#)
2 xor 1 = Constraint_Error
2 xor 2 = 0 (2#10# xor 2#10# = 2#0#)
not 0 = Constraint_Error
not 1 = 2 (not 2#1# = 2#10#)
not 2 = 1 (not 2#10# = 2#1#)
Pred(0) = 2 (Pred(2#0#) = 2#10#)
Pred(1) = 0 (Pred(2#1#) = 2#0#)
Pred(2) = 1 (Pred(2#10#) = 2#1#)
Succ(0) = 1 (Succ(2#0#) = 2#1#)
Succ(1) = 2 (Succ(2#1#) = 2#10#)
Succ(2) = 0 (Succ(2#10#) = 2#0#)
---------------------------------------
The modulus is 4
+ 0 = 0 (+ 2#0# = 2#0#)
+ 1 = 1 (+ 2#1# = 2#1#)
+ 2 = 2 (+ 2#10# = 2#10#)
+ 3 = 3 (+ 2#11# = 2#11#)
- 0 = 0 (- 2#0# = 2#0#)
- 1 = 3 (- 2#1# = 2#11#)
- 2 = 2 (- 2#10# = 2#10#)
- 3 = 1 (- 2#11# = 2#1#)
0 + 0 = 0 (2#0# + 2#0# = 2#0#)
0 + 1 = 1 (2#0# + 2#1# = 2#1#)
0 + 2 = 2 (2#0# + 2#10# = 2#10#)
0 + 3 = 3 (2#0# + 2#11# = 2#11#)
1 + 0 = 1 (2#1# + 2#0# = 2#1#)
1 + 1 = 2 (2#1# + 2#1# = 2#10#)
1 + 2 = 3 (2#1# + 2#10# = 2#11#)
1 + 3 = 0 (2#1# + 2#11# = 2#0#)
2 + 0 = 2 (2#10# + 2#0# = 2#10#)
2 + 1 = 3 (2#10# + 2#1# = 2#11#)
2 + 2 = 0 (2#10# + 2#10# = 2#0#)
2 + 3 = 1 (2#10# + 2#11# = 2#1#)
3 + 0 = 3 (2#11# + 2#0# = 2#11#)
3 + 1 = 0 (2#11# + 2#1# = 2#0#)
3 + 2 = 1 (2#11# + 2#10# = 2#1#)
3 + 3 = 2 (2#11# + 2#11# = 2#10#)
0 - 0 = 0 (2#0# - 2#0# = 2#0#)
0 - 1 = 3 (2#0# - 2#1# = 2#11#)
0 - 2 = 2 (2#0# - 2#10# = 2#10#)
0 - 3 = 1 (2#0# - 2#11# = 2#1#)
1 - 0 = 1 (2#1# - 2#0# = 2#1#)
1 - 1 = 0 (2#1# - 2#1# = 2#0#)
1 - 2 = 3 (2#1# - 2#10# = 2#11#)
1 - 3 = 2 (2#1# - 2#11# = 2#10#)
2 - 0 = 2 (2#10# - 2#0# = 2#10#)
2 - 1 = 1 (2#10# - 2#1# = 2#1#)
2 - 2 = 0 (2#10# - 2#10# = 2#0#)
2 - 3 = 3 (2#10# - 2#11# = 2#11#)
3 - 0 = 3 (2#11# - 2#0# = 2#11#)
3 - 1 = 2 (2#11# - 2#1# = 2#10#)
3 - 2 = 1 (2#11# - 2#10# = 2#1#)
3 - 3 = 0 (2#11# - 2#11# = 2#0#)
0 * 0 = 0 (2#0# * 2#0# = 2#0#)
0 * 1 = 0 (2#0# * 2#1# = 2#0#)
0 * 2 = 0 (2#0# * 2#10# = 2#0#)
0 * 3 = 0 (2#0# * 2#11# = 2#0#)
1 * 0 = 0 (2#1# * 2#0# = 2#0#)
1 * 1 = 1 (2#1# * 2#1# = 2#1#)
1 * 2 = 2 (2#1# * 2#10# = 2#10#)
1 * 3 = 3 (2#1# * 2#11# = 2#11#)
2 * 0 = 0 (2#10# * 2#0# = 2#0#)
2 * 1 = 2 (2#10# * 2#1# = 2#10#)
2 * 2 = 0 (2#10# * 2#10# = 2#0#)
2 * 3 = 2 (2#10# * 2#11# = 2#10#)
3 * 0 = 0 (2#11# * 2#0# = 2#0#)
3 * 1 = 3 (2#11# * 2#1# = 2#11#)
3 * 2 = 2 (2#11# * 2#10# = 2#10#)
3 * 3 = 1 (2#11# * 2#11# = 2#1#)
0 / 0 = Numeric_Error
0 / 1 = 0 (2#0# / 2#1# = 2#0#)
0 / 2 = 0 (2#0# / 2#10# = 2#0#)
0 / 3 = 0 (2#0# / 2#11# = 2#0#)
1 / 0 = Numeric_Error
1 / 1 = 1 (2#1# / 2#1# = 2#1#)
1 / 2 = 0 (2#1# / 2#10# = 2#0#)
1 / 3 = 0 (2#1# / 2#11# = 2#0#)
2 / 0 = Numeric_Error
2 / 1 = 2 (2#10# / 2#1# = 2#10#)
2 / 2 = 1 (2#10# / 2#10# = 2#1#)
2 / 3 = 0 (2#10# / 2#11# = 2#0#)
3 / 0 = Numeric_Error
3 / 1 = 3 (2#11# / 2#1# = 2#11#)
3 / 2 = 1 (2#11# / 2#10# = 2#1#)
3 / 3 = 1 (2#11# / 2#11# = 2#1#)
0 rem 0 = Numeric_Error
0 rem 1 = 0 (2#0# rem 2#1# = 2#0#)
0 rem 2 = 0 (2#0# rem 2#10# = 2#0#)
0 rem 3 = 0 (2#0# rem 2#11# = 2#0#)
1 rem 0 = Numeric_Error
1 rem 1 = 0 (2#1# rem 2#1# = 2#0#)
1 rem 2 = 1 (2#1# rem 2#10# = 2#1#)
1 rem 3 = 1 (2#1# rem 2#11# = 2#1#)
2 rem 0 = Numeric_Error
2 rem 1 = 0 (2#10# rem 2#1# = 2#0#)
2 rem 2 = 0 (2#10# rem 2#10# = 2#0#)
2 rem 3 = 2 (2#10# rem 2#11# = 2#10#)
3 rem 0 = Numeric_Error
3 rem 1 = 0 (2#11# rem 2#1# = 2#0#)
3 rem 2 = 1 (2#11# rem 2#10# = 2#1#)
3 rem 3 = 0 (2#11# rem 2#11# = 2#0#)
0 mod 0 = Numeric_Error
0 mod 1 = 0 (2#0# mod 2#1# = 2#0#)
0 mod 2 = 0 (2#0# mod 2#10# = 2#0#)
0 mod 3 = 0 (2#0# mod 2#11# = 2#0#)
1 mod 0 = Numeric_Error
1 mod 1 = 0 (2#1# mod 2#1# = 2#0#)
1 mod 2 = 1 (2#1# mod 2#10# = 2#1#)
1 mod 3 = 1 (2#1# mod 2#11# = 2#1#)
2 mod 0 = Numeric_Error
2 mod 1 = 0 (2#10# mod 2#1# = 2#0#)
2 mod 2 = 0 (2#10# mod 2#10# = 2#0#)
2 mod 3 = 2 (2#10# mod 2#11# = 2#10#)
3 mod 0 = Numeric_Error
3 mod 1 = 0 (2#11# mod 2#1# = 2#0#)
3 mod 2 = 1 (2#11# mod 2#10# = 2#1#)
3 mod 3 = 0 (2#11# mod 2#11# = 2#0#)
0 ** 0 = 1 (2#0# ** 2#0# = 2#1#)
0 ** 1 = 0 (2#0# ** 2#1# = 2#0#)
0 ** 2 = 0 (2#0# ** 2#10# = 2#0#)
0 ** 3 = 0 (2#0# ** 2#11# = 2#0#)
1 ** 0 = 1 (2#1# ** 2#0# = 2#1#)
1 ** 1 = 1 (2#1# ** 2#1# = 2#1#)
1 ** 2 = 1 (2#1# ** 2#10# = 2#1#)
1 ** 3 = 1 (2#1# ** 2#11# = 2#1#)
2 ** 0 = 1 (2#10# ** 2#0# = 2#1#)
2 ** 1 = 2 (2#10# ** 2#1# = 2#10#)
2 ** 2 = 0 (2#10# ** 2#10# = 2#0#)
2 ** 3 = 0 (2#10# ** 2#11# = 2#0#)
3 ** 0 = 1 (2#11# ** 2#0# = 2#1#)
3 ** 1 = 3 (2#11# ** 2#1# = 2#11#)
3 ** 2 = 1 (2#11# ** 2#10# = 2#1#)
3 ** 3 = 3 (2#11# ** 2#11# = 2#11#)
0 and 0 = 0 (2#0# and 2#0# = 2#0#)
0 and 1 = 0 (2#0# and 2#1# = 2#0#)
0 and 2 = 0 (2#0# and 2#10# = 2#0#)
0 and 3 = 0 (2#0# and 2#11# = 2#0#)
1 and 0 = 0 (2#1# and 2#0# = 2#0#)
1 and 1 = 1 (2#1# and 2#1# = 2#1#)
1 and 2 = 0 (2#1# and 2#10# = 2#0#)
1 and 3 = 1 (2#1# and 2#11# = 2#1#)
2 and 0 = 0 (2#10# and 2#0# = 2#0#)
2 and 1 = 0 (2#10# and 2#1# = 2#0#)
2 and 2 = 2 (2#10# and 2#10# = 2#10#)
2 and 3 = 2 (2#10# and 2#11# = 2#10#)
3 and 0 = 0 (2#11# and 2#0# = 2#0#)
3 and 1 = 1 (2#11# and 2#1# = 2#1#)
3 and 2 = 2 (2#11# and 2#10# = 2#10#)
3 and 3 = 3 (2#11# and 2#11# = 2#11#)
0 or 0 = 0 (2#0# or 2#0# = 2#0#)
0 or 1 = 1 (2#0# or 2#1# = 2#1#)
0 or 2 = 2 (2#0# or 2#10# = 2#10#)
0 or 3 = 3 (2#0# or 2#11# = 2#11#)
1 or 0 = 1 (2#1# or 2#0# = 2#1#)
1 or 1 = 1 (2#1# or 2#1# = 2#1#)
1 or 2 = 3 (2#1# or 2#10# = 2#11#)
1 or 3 = 3 (2#1# or 2#11# = 2#11#)
2 or 0 = 2 (2#10# or 2#0# = 2#10#)
2 or 1 = 3 (2#10# or 2#1# = 2#11#)
2 or 2 = 2 (2#10# or 2#10# = 2#10#)
2 or 3 = 3 (2#10# or 2#11# = 2#11#)
3 or 0 = 3 (2#11# or 2#0# = 2#11#)
3 or 1 = 3 (2#11# or 2#1# = 2#11#)
3 or 2 = 3 (2#11# or 2#10# = 2#11#)
3 or 3 = 3 (2#11# or 2#11# = 2#11#)
0 xor 0 = 0 (2#0# xor 2#0# = 2#0#)
0 xor 1 = 1 (2#0# xor 2#1# = 2#1#)
0 xor 2 = 2 (2#0# xor 2#10# = 2#10#)
0 xor 3 = 3 (2#0# xor 2#11# = 2#11#)
1 xor 0 = 1 (2#1# xor 2#0# = 2#1#)
1 xor 1 = 0 (2#1# xor 2#1# = 2#0#)
1 xor 2 = 3 (2#1# xor 2#10# = 2#11#)
1 xor 3 = 2 (2#1# xor 2#11# = 2#10#)
2 xor 0 = 2 (2#10# xor 2#0# = 2#10#)
2 xor 1 = 3 (2#10# xor 2#1# = 2#11#)
2 xor 2 = 0 (2#10# xor 2#10# = 2#0#)
2 xor 3 = 1 (2#10# xor 2#11# = 2#1#)
3 xor 0 = 3 (2#11# xor 2#0# = 2#11#)
3 xor 1 = 2 (2#11# xor 2#1# = 2#10#)
3 xor 2 = 1 (2#11# xor 2#10# = 2#1#)
3 xor 3 = 0 (2#11# xor 2#11# = 2#0#)
not 0 = 3 (not 2#0# = 2#11#)
not 1 = 2 (not 2#1# = 2#10#)
not 2 = 1 (not 2#10# = 2#1#)
not 3 = 0 (not 2#11# = 2#0#)
Pred(0) = 3 (Pred(2#0#) = 2#11#)
Pred(1) = 0 (Pred(2#1#) = 2#0#)
Pred(2) = 1 (Pred(2#10#) = 2#1#)
Pred(3) = 2 (Pred(2#11#) = 2#10#)
Succ(0) = 1 (Succ(2#0#) = 2#1#)
Succ(1) = 2 (Succ(2#1#) = 2#10#)
Succ(2) = 3 (Succ(2#10#) = 2#11#)
Succ(3) = 0 (Succ(2#11#) = 2#0#)
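The modulus-4 table above can be reproduced with a small sketch (Python integers standing in for the proposed modular type; the names wrap and bnot are illustrative, not part of any proposal):

```python
M = 4  # the modulus

def wrap(x):
    """Reduce a result into the cyclic range 0 .. M-1."""
    return x % M

def bnot(x):
    """The 'not' operator: ones complement within the modulus."""
    return (M - 1) - x

# "+", "-", "*" and "**" wrap around, matching the table:
assert wrap(3 + 1) == 0        # 3 + 3#1# wraps to zero
assert wrap(0 - 1) == 3        # 0 - 1 = 3, as does Pred(0)
assert wrap(3 * 3) == 1        # 9 mod 4
assert pow(3, 2, M) == 1       # 3 ** 2 = 1

# "not", Pred, and Succ are likewise cyclic:
assert bnot(0) == 3 and bnot(2) == 1
assert wrap(3 + 1) == 0        # Succ(3) = 0
```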
COMMENTS IN REGARD TO CURRENT IMPLEMENTATIONS -- DEC
DEC defines its unsigned types in package SYSTEM. There are two "genuine"
unsigned integer types called UNSIGNED_BYTE and UNSIGNED_WORD plus a signed
integer type called UNSIGNED_LONGWORD and a record type called
UNSIGNED_QUADWORD which has two components of type UNSIGNED_LONGWORD.
Static subtypes of the (signed) type UNSIGNED_LONGWORD are provided. (See
below under "unresolved problems" for further discussion of UNSIGNED_LONGWORD.)
No discussion of further derivability of these types is provided. By the
absence of explicit statements, it may be inferred that the arithmetic on these
unsigned types is not modular.
For each unsigned integer type, the DEC implementation provides logical
operators and a corresponding constrained subtype of a common unconstrained
packed BOOLEAN array type and conversions to and from that subtype and the
unsigned types. For example:
type BIT_ARRAY is array (INTEGER range <>) of BOOLEAN;
pragma PACK (BIT_ARRAY); -- There must be exactly 1 bit for each BOOLEAN
-- component, which DEC kindly provides (but
-- the language does not require) for packed
-- unconstrained arrays of BOOLEANs.
Question: What is the weight of bit i? Is it 2**i or 2**(nn - 1 - i)?
Answer: It is implementation-dependent.
BIT_ARRAY should probably not be a standard part of the definition of
unsigned types, because the added functionality (indexing and catenation) is
of marginal value, the order of bits is implementation-dependent, and users
can implement it themselves if they so desire (using UNCHECKED_CONVERSION).
Implementers may decide to provide it anyway, of course. The functional
capabilities of BIT_ARRAY, other than catenation, are available for the
unsigned types proposed above, and the literals are much nicer. For instance
(taking a "little-endian" view):
if (Logical_Shift(Cardinal_Value, -4) and 1) = 1 then ... -- bit 4 is on
or, perhaps more efficiently and more clearly:
Bit_4 : constant := 2#10000#;
if (Cardinal_Value and Bit_4) = Bit_4 then ... -- bit 4 is on.
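The two bit-testing idioms above are equivalent under the little-endian weighting (bit i has weight 2**i); a small Python sketch, with plain integers standing in for Cardinal_Value and the function names invented for illustration:

```python
def bit_on_by_shift(value, i):
    # The Logical_Shift idiom: shift right by i, test the low-order bit.
    return (value >> i) & 1 == 1

def bit_on_by_mask(value, i):
    # The mask idiom: and with a constant having only bit i set;
    # for i = 4 the mask is 2#10000# = 16, i.e. Bit_4 above.
    mask = 1 << i
    return value & mask == mask

# The two tests agree on every value:
for v in range(64):
    assert bit_on_by_shift(v, 4) == bit_on_by_mask(v, 4)
```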
COMMENTS IN REGARD TO CURRENT IMPLEMENTATIONS -- ALSYS
Alsys defines its unsigned types in package UNSIGNED (available only for the
PC compiler). It defines two unsigned integer types, type BYTE
(range 0 .. 255) and type WORD (range 0 .. 65535).
Their representations in memory are context dependent. As record and array
components, they occupy 8 and 16 bits, respectively; as scalar objects, they
both occupy 16 bits.
Their arithmetic is modular and their subtypes have range-checking, but they
are not "properly" derivable, that is to say, types derived from them will be
ordinary integer types, because it is asserted that their base types are
predefined integer types, rather than the types themselves.
Instantiations of UNCHECKED_CONVERSION are provided to and from other integer
types, although one would expect explicit conversions to be sufficient.
Unchecked conversion from CHARACTER to BYTE is provided, but (curiously) no
function is provided to convert BYTE to CHARACTER (which ought to raise
CONSTRAINT_ERROR if the high bit is on). A user could (apparently) instantiate
UNCHECKED_CONVERSION on these types himself to remedy the oversight. Functions
LSB, MSB, and MAKE_WORD which extract the least or most significant BYTE of a
WORD and construct a WORD from two BYTES are provided; they could have been
written in Ada.
COMMENTS IN REGARD TO CURRENT IMPLEMENTATIONS -- VERDIX
Verdix provides no unsigned types, but they do provide bit-wise logical
operations on type Integer in package Iface_Bits. Contrary to what might be
expected, they are named Bit_And, Bit_Or, Bit_Xor, and Bit_Neg, rather than
the usual operator symbols.
ADA 9X ISSUES
The revision of the language needs to address the syntax and semantics for
declaring unsigned integer types in a more general fashion, similar to the
current model for signed integer types. The changes ought to be upward
compatible since the types would be declared in STANDARD. It is important to
discuss the potential revisions here in order to see the interactions that may
occur. The following illustrates some of the possibilities when no new
reserved words are used (THIS IS *NOT* A PART OF THE PROPOSAL):
In STANDARD, the (minimal) required declarations are:
type UNSIGNED_nn is implementation_defined;
type CARDINAL_nn is implementation_defined;
Other optional types could be supplied by the implementation.
A user defined unsigned integer with range-checked arithmetic might be declared
by:
type MY_UNSIGNED is range L;
where L specifies MY_UNSIGNED'LAST.
A user-defined unsigned integer with modular arithmetic might be declared by:
type MY_CARDINAL is mod M;
where M is the modulus of the arithmetic and MY_CARDINAL'LAST is M - 1.
A portable syntax for declaring the largest possible distinct unsigned types
would be available if the following are declared:
type LARGEST_UNSIGNED_TYPE is new implementation_defined_unsigned_type;
type LARGEST_CARDINAL_TYPE is new implementation_defined_cardinal_type;
(These could be defined more directly if true type renaming were allowed in the
revision:
type LARGEST_UNSIGNED_TYPE renames implementation_defined_unsigned_type;
type LARGEST_CARDINAL_TYPE renames implementation_defined_cardinal_type;)
Then the following declarations may be made:
type MY_UNSIGNED is new LARGEST_UNSIGNED_TYPE;
type MY_CARDINAL is new LARGEST_CARDINAL_TYPE;
As an alternative, if SYSTEM contains:
MAX_UNSIGNED : constant implementation_defined;
MAX_MODULUS : constant implementation_defined;
then the following declarations are possible:
type UNSIGNED is range SYSTEM.MAX_UNSIGNED;
type CARDINAL is mod SYSTEM.MAX_MODULUS;
(Note: MAX_MODULUS = MAX_UNSIGNED + 1.)
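The distinction between the two proposed flavors can be sketched in Python (a hypothetical illustration only; cardinal_add and unsigned_add are invented names, and M = 2**16 merely stands in for SYSTEM.MAX_MODULUS):

```python
M = 2 ** 16                  # stand-in for MAX_MODULUS
MAX_UNSIGNED = M - 1         # note MAX_MODULUS = MAX_UNSIGNED + 1

def cardinal_add(x, y):
    """MY_CARDINAL-style addition: modular, never raises."""
    return (x + y) % M

def unsigned_add(x, y):
    """MY_UNSIGNED-style addition: range-checked, raises on overflow."""
    r = x + y
    if not 0 <= r <= MAX_UNSIGNED:
        raise OverflowError("Constraint_Error")  # stand-in for the Ada exception
    return r

assert cardinal_add(MAX_UNSIGNED, 1) == 0   # wraps around
try:
    unsigned_add(MAX_UNSIGNED, 1)           # range check fires
except OverflowError:
    pass
```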
*****************************************************************************
!section 03.05.04 (07) 88-09-28 83-01021
!version 1983
!topic Unsigned Arithmetic
!reference AI-0597, 83-00974
Bryce lists the following goals for unsigned numbers in Ada: (my summary)
1. Non-negative integer range that exploits available hardware
(and that supports full range address arithmetic)
2. Numeric literals in arbitrary bases, to the full range of
the unsigned type.
3. Provide efficient support for modular arithmetic.
4. Provide straightforward and efficient logical operators
(including shifts, rotates and masks) on bits of unsigned types.
I fully support the first three goals, but I cannot support the
fourth. Unsigned integers are integers, and there is no arithmetic
definition of shifts, rotates and masks on integers (signed or
unsigned). On the other hand, I think it is probably necessary to
support conversion between integers (unsigned and signed) and some
representation of bit arrays on which shifts, rotates and masks can be
supported. (this conversion should be more than Unchecked_Conversion,
because there is no guarantee that U.C. does what you want.) I guess
there needs to be a function that converts between integer types and
appropriately sized bitmaps, something like:
type bitmap is array (positive range <>) of boolean;
pragma pack (bitmap); -- NOTE: this should work (i.e
-- bitmap(1..32)'size should be 32)
function to_bitmap (i : some_integer_type)
return bitmap; -- raises constraint_error if i'size >
-- bitmap'size. does something TBD
-- (maybe zero fill) if i'size < bitmap'size
function to_integer (b : bitmap)
return some_integer_type; -- can raise constraint_error if
-- conversion would result in a
-- value out of the range of
-- some_integer
-- define appropriate shift, rotate and mask operations on
-- the bitmap type
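The conversion functions sketched above can be illustrated in Python (a hypothetical sketch: a list of booleans stands in for the packed bitmap, and index 0 is taken as most significant here; the actual bit ordering is part of the TBD definition):

```python
def to_bitmap(i, width):
    """Convert a non-negative integer to a width-bit boolean list."""
    if i < 0 or i >= 2 ** width:
        raise ValueError("constraint_error")  # value does not fit in width bits
    return [bool((i >> (width - 1 - k)) & 1) for k in range(width)]

def to_integer(bits):
    """Convert a boolean list back to the integer it represents."""
    v = 0
    for b in bits:
        v = (v << 1) | int(b)
    return v

# Round trip, and a worked example: 5 = 2#0101#
assert to_integer(to_bitmap(42, 8)) == 42
assert to_bitmap(5, 4) == [False, True, False, True]
```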
There is no reason to restrict this conversion to unsigned integers.
I think I like Bryce's distinction between UNSIGNED_INTEGER and
CARDINAL_INTEGER. My feeling is that these should be added to
Standard, and that implementations should provide types like
UNSIGNED_32 and CARDINAL_32, with efficient implementations of
appropriate operations, in package SYSTEM. Such types are clearly
system-dependent.
I guess the goals I'd set for unsigneds include the first 3 goals
provided by Bryce, plus a fourth goal:
4. (emery's goal) Types derived from unsigned types
behave correctly and reasonably.
Here's a place where this would be important: Steve Litvintchouk has
been looking at the Joint Frequency Hopping Specification. It has
lots of bit-packed fields, like X is a 3 bit integer range 0..7. So
I'd like to be able to do the following:
type X is new unsigned_integer range 0..7;
for X'size use 3;
and get all appropriate operations, including (in this case) modular
arithmetic.
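The behavior asked for here can be sketched in Python (hypothetical names, not Ada semantics): a derived 3-bit type whose arithmetic wraps modulo 8, plus the shift-and-mask extraction that bit-packed specification fields require.

```python
SIZE = 3
MODULUS = 2 ** SIZE          # 8, so the range is 0 .. 7

def x_add(a, b):
    """Addition on the 3-bit type X: wraps modulo 8 instead of raising."""
    return (a + b) % MODULUS

def get_field(word, offset, size=SIZE):
    """Extract a size-bit field starting at bit offset of a packed word."""
    return (word >> offset) & ((1 << size) - 1)

assert x_add(7, 1) == 0                   # modular arithmetic: wraps to 0
assert get_field(0b101110, 3) == 0b101    # the field in bits 3..5
```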
dave emery
emery@mitre-bedford.arpa
*****************************************************************************
!section 03.05.04 (07) Geoff Mendal 88-10-20 83-01025
!version 1983
!topic Predefined integer type range required by CALENDAR
!reference AI-00459/03, 83-00991
Keith Enevoldsen's argument in the appendix is wrong, I think.
There is nothing in Ada83 which requires that the declaration
of Calendar.Year_Number elaborate without exception. If an
unfriendly implementation chooses to represent the type
Standard.Integer by the declaration,
type Integer is range -128 .. 128;
then any elaboration of the predefined package Calendar
will cause Constraint_Error to be raised. This is
obviously unfriendly but it doesn't violate any language
rules. Indeed it would make package Calendar a special-case
which it is not. Perhaps the appendix would make a more appropriate
URG issue. (See AI-00325/05 and UI-0008 which seem to have
set the standard for this kind of stuff.)
gom
*****************************************************************************
!section 03.05.04 (07) Dan Lehman/IDA 89-09-25 83-01330
!version 1983
!topic Objections to AI-00459/07
Let me again state my feelings re the direction the URG & ARG are taking
on predefined integer types.
AI-00459 contains a comment that there is no harm in its permissive inter-
pretation; but I submit that the language IS harmed when it allows users to
reason as did Dewar in the following remarks (which apparently became AI-459's
rationale):
> My underlying thinking in making these judgements is something like:
>
> INTEGER ought to correspond to the natural efficient integer size
> on the machine and be at least 16 bits.
>
> LONG_INTEGER should have as much precision as possible, consistent
> with reasonably efficient code, preferably at least 32 bits.
What an incredible assumption! The Ada language provides for the sort of
"as much precision as possible" (regardless of precise range) typing that is
desired above--and in a perspicuously coded form:
type Largest_Integer is range SYSTEM.MIN_INT .. SYSTEM.MAX_INT;
What would motivate a programmer to declare objects to be of type LONG_INT.?
If Dewar's reasoning above is the motivation, then it is clear that there is a
perfectly correct way of making that declaration which wants of no alternative.
What would motivate a programmer to declare objects to be of type BYTE_INT.? I
suspect that whatever it is, the resulting program would be highly suspect when
the type received not one byte--but four of them--from some implementation!
Why should we ever wish to bend the rules to encourage programmers to make some
sloppy, unjustified *assumptions* about what the language allows one to state
precisely and explicitly (providing clear documentation via the code)? I see
no reason for this, and thus find Dewar's thoughts re Ada 9X equally amazing
(in the era when the term "software engineering" is used with reverence).
> For the next version of the language I will argue that the implementation of
> SHORT_INTEGER and LONG_INTEGER be mandatory, with possibly a note that the
> intention is that where appropriate these types have substantially different
> precisions. I will then argue that the validation use a guideline of
> at least 8,16,32 bits for the three types (I mean we are interested in
> portability -- right? right?)
The text of the Commentary AI-459 includes Dewar's (I presume, the ARG's)
reasoning below; I remain unpersuaded.
> Why do I choose a liberal interpretation, well because it leads to a sensible
> result. Consider the three implementations following:
>
> Implementation A INTEGER 32 bits, LONG_INTEGER 32 bits
> Implementation B INTEGER 16 bits, LONG_INTEGER 32 bits
> Implementation C INTEGER 32 bits, no LONG_INTEGER
>
> Now as a user, I would clearly prefer implementation A (assuming that I was
> on a 32 bit machine with a large store so there is no penalty in INTEGER
> being 32 bits - it might even cost substantially to try to make INTEGER
> smaller, imagine trying to have INTEGER's 16 bits long on a Cyber!)
>
> Why do I prefer A? Simple, it runs all the programs that implementation B
> or C can run, and a lot of others besides.
Is it really so "simple"? --or is the above argument simplistic? It is, I
argue, at least specious; it hides some very real harm that can follow from
the sort of programming strategy that seems to motivate it.
Consider some real examples, from validated compilers--the only two that I
know to have violated the Standard (in using the AI-00459 allowance), and two
others.
[B_I = BYTE_INT.; S_I = SHORT_INT.; INT = INTEGER; L_I = LONG_INT., etc.]
  Impl.K   Impl.L   Impl.X       Impl.Y             size in bits
  ------   ------   ------       ------             ------------
  S_I      B_I      B_I & S_I    ~                        8
  INT      S_I      INT          ~                       16
  L_I      INT      L_I          B_I & S_I & INT         32
  L_L_I    L_I      ~            ~                       64
Note that none of the above implementations supports a predefined byte-sized
type named "TINY_INTEGER", yet this is a common type because (nearly) all of
the Verdix implementations implement it: should you urge that it be added to the
growing number of byte_sized types on Impl.X (also, urge that S_S_I be added)?
Should Impl.Y increase its collection of names for its 32-bit integer base by
including T_I, S_S_I, L_I, & L_L_I (and then it could boast as being the most
accommodating implementation in the world--"Runs All the Programs"!)?
Is it really a portability *benefit* in encouraging Ada implementations to
support a certain URG-urged set of predefined "types" (that are just different
names for the same base type) such as was done by Impl.Y? Is there really "no
harm" done by allowing a program that was designed for Impl.K using S_I ('LAST
=2**7-1) to be ported *successfully* to Impl.Y where S_I'LAST = 2**31-1? Will
Impl.Y really "run all the programs that Impl.L can run"?? (Think of yourself
relying on avionics at some lofty altitude, and now ponder what effects the
difference between even Impl.L's S_I (16 bits) and K's (8 bits) can make--an
8-bit difference!? Are you blessing "portability" then?)
I have called Dewar's examples A, B, & C specious, for one is likely to look
at 16 and 32 bits as comfortably large and thus fairly interchangeable sizes
for integers; yet the difference at the smaller sizes is likely to be much more
pronounced. Of course, the difference between 8 and 32 bits is of orders of
magnitude. But Dewar actually wants to make Ada 9X REQUIRE implementations to
have three named types regardless of whether there are three ranges to match!!?
(And this is what Brian said was to be done for BSI's Modula-2 standard!) I
find that incredible. (Agreed that Robert would urge that the types NOT be the
same range unless AI-325 justification were provided, but still ... !)
Which programs are likely to be more (genuinely) portable: ones that are
written with predefined types on implementations that depend on AI-00459 to get
validated, or ones that use NO PREDEFINED TYPES, BUT USE "... IS RANGE <...>"?
Which sort of programming does the URG (and Ada advocates in general) desire to
promote? Is there harm in allowing and even promoting the other (via AI-459)?
Again, the portability that is gained by AI-00459 is the porting of old and bad
programming practices from old languages to Ada--it is harmful.
---------------------------------------------------------------
I see one possible course for the URG to take that I would favor: urge that
implementations name ALL of their predefined integer types descriptively in the
uniform form of
type INTEGER_nn is range -(2**(nn-1)) .. 2**(nn-1) - 1;
e.g., INTEGER_8, INTEGER_16, INTEGER_32, INTEGER_64
THEN use AI-00459 (solely) to allow required type INTEGER to match ONE of those
other types in range. Clearly, what I recommend here is that implementations
assume the task of defining the ranges that programmers would otherwise have
to do via "...is range ...". This recommendation would affect all validated
implementations equally in the sense that none of them currently supports any
such named predefined integer type. But the recommendation seems vastly more
beneficial than allowing and even encouraging implementations to add all names
that might be used by programs (TINY_, BYTE_, SHORT_SHORT_, SHORT_) in the
name (destined to become notorious!) of "portability".
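The recommended naming scheme determines each type's range mechanically from nn; a small Python sketch (the function name is illustrative):

```python
def integer_range(nn):
    """The range denoted by INTEGER_nn: -(2**(nn-1)) .. 2**(nn-1) - 1."""
    return (-(2 ** (nn - 1)), 2 ** (nn - 1) - 1)

# e.g. INTEGER_8, INTEGER_16, INTEGER_32:
assert integer_range(8) == (-128, 127)
assert integer_range(16) == (-32768, 32767)
assert integer_range(32) == (-2147483648, 2147483647)
```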
---Dan
*****************************************************************************
!section 03.05.04 (07) Ron Brender 90-06-04 83-01372
!version 1983
!topic Unsigned integer
!references AI-00579/02 etc
On page 3, insert something like "entities within" before "TEXT_IO for
such a file", which ends the first full paragraph on page 3. TEXT_IO
itself is not a generic unit.
The second full paragraph is much too vague and weak. It reads like:
just for the fun of it, the ARG has stuck in some random thoughts on
how an unsigned integer type might be defined. Instead, the tone
should be much more of a strong recommendation that if an
implementation does provide an unsigned integer type, its properties
should be precisely what follows. (I know this is "merely" a
permissive AI and can't force these and only these properties -- but
there should be no doubt that that is what is intended.)
The paragraph beginning "3.5(7-9) says" and ending "have the FIRST and
LAST attributes" belongs somewhere near the middle of page 2.
On page 4 (line 7 et al), it probably is overkill, but if you want to
be really fussy, the correct statement is
T'POS(X+Y) = ...
T'POS(X-Y) = ...
For the final paragraph (on page 5), the alternative suggesting
nonstatic universal expressions must be supported even in the unsigned
integer range seems definitely a mistake. This would seem to require
some kind of runtime bookkeeping as to whether a value is signed or
unsigned in order to correctly range check the inevitable consuming
use (and type conversion). Such run-time type variability has no
precedent in Ada -- and no obvious important benefit either.
Finally, if you are going to provide modular types (which I do not
oppose), it is unfortunate that the base types are limited to powers
of two (or their ones complement equivalent). For example, a 60
second cyclic counter cannot be conveniently declared. This at least
desires to be made explicit.
*****************************************************************************
!section 03.05.04 (10) Daniel L. Stock/R.R. Software 89-10-19 83-01305
!version 1983
!topic AI-00387/05
I think it would be unwise to make this non-binding interpretation binding. I
doubt that JANUS/Ada (the compiler my company makes) is the only compiler that
still distinguishes between NUMERIC_ERROR and CONSTRAINT_ERROR. We have not
changed this, despite AI-387, because we perceive that our users have a large
amount of software that takes advantage of this frequently useful (albeit
system-dependent) distinction. While it would not be difficult to make
JANUS/Ada comply with the AI, I do not think that this should be forced upon
our users (or users of other non-compliant compilers) until Ada9X becomes a
reality.
*****************************************************************************
!section 03.05.04 (12) Keith Enevoldsen 88-07-20 83-00991
!version 1983
!topic Insert a note specifying minimum range of INTEGER
Type INTEGER must at least contain the range -2099 .. 2099
because it must support the following declaration in
package CALENDAR [RM 9.6]:
subtype YEAR_NUMBER is INTEGER range 1901 .. 2099;
An explicit statement of the minimum range of INTEGER in 3.5.4
would help the reader know what values of INTEGER are transportable.
*****************************************************************************
!section 03.05.04 (13) Terry Froggatt 86-12-14 83-00893
!version 1983
!topic SYSTEM.MAX/MIN_INT and Universal_Integer
The definition of SYSTEM.MAX_INT and SYSTEM.MIN_INT appears to include
Universal_Integer, which is unbounded in good implementations.
Presumably Universal_Integer should be excluded from these definitions,
whether or not it is bounded.
*****************************************************************************
!section 03.05.05 (01) J. Goodenough 85-11-15 83-00687
!version 1983
!topic Enumeration types have an explicit conversion operation
This paragraph discusses conversion operations for integer types but fails to
say that a type conversion operation is declared for each enumeration type.
(4.6(4) states that conversion of an operand to its own type is always
allowed, so such an operation ought to be mentioned as being declared.)
*****************************************************************************
!section 03.05.05 (03) M Woodger 88-11-05 83-01074
!version 1983
!topic Replace "3.3.2" by "3.3.3"
Typo.
*****************************************************************************
!section 03.05.05 (07) J. Goodenough 83-04-27 83-00117
!version 1983
!topic Is 'VAL renameable?
Consider the following example:
generic
type T is range <>;
with function POS (X : T) return INTEGER is T'POS; -- illegal
with function VAL (X : INTEGER) return T is T'VAL; -- legal?
package P ... end P;
The declaration of POS is illegal because T'POS returns universal integer, and
so does not match INTEGER. The second case is unclear since 'VAL is defined to
be "special function" taking a single parameter "of any integer type". Does
"special" mean that the type of 'POS's parameter is not nameable? The RM does
not say in what sense 'VAL is special. I think the intent is to have the above
declaration of VAL be illegal. Does anyone disagree?
************************************************************************
!section 03.05.05 (07) P. Hilfinger 83-04-27 83-00118
!version 1983
!topic Is 'VAL renameable (83-00117)
It would pain me to say that it was the "intent" of the LRM to make
a perfectly reasonable and essentially cost-less construction illegal.
The problem here is that "special" is undefined. Let's instead say
(a la R. Dewar)
"The term `special' in 3.5.5(7) is to be interpreted to
mean `overloaded.' "
In other words, 'VAL is special in the sense that at each point it is
used, it is overloaded on all integer types in scope. Although this
introduces a new concept, perhaps, (the overloaded attribute), the
alternative is to introduce another new concept (the special function.)
************************************************************************
!section 03.05.05 (07) Ron Brender 83-10-29 83-00200
!version 1983
!topic Is 'VAL renamable?
!reference AI-00013, 83-00119, 83-00118
I strongly support JLG's comment 83-00119 and differ with Hilfinger's
comment 83-00118. T'VAL is not renamable because the RM says so. The
"special" nature of T'VAL is that it takes an argument of any integer
type, in particular, even an integer type that is declared after the
declaration of T.
The suggested notion of an "overloaded" attribute is indeed a new
notion in the RM but not a new notion in the deliberations of the LDT
and DRs -- this notion was considered in a number of forms over a long
period and ultimately rejected. To re-introduce it now is a very bad
idea.
On the other hand, the notion of "special functions" is not new in the
RM, contrary to the claim in 83-00118. Multiplication and division
for fixed point types are special functions in almost exactly the same
sense and for the same reasons as T'VAL.
The only action that seems warranted is at most to add a note, or
expand on the existing note, to the effect that T'VAL is special as
stated above and, in particular, is not overloaded in any sense.
************************************************************************
!section 03.05.05 (11) Software Leverage, Inc. 84-03-15 83-00347
!version 1983
!topic 'Image of Enumeration Values
LRM 3.5.5(10) says that "The lower bound of the image is one" for 'Image of
integer values. But 3.5.5(11) doesn't say this for enumeration values.
Note that for the most obvious implementation of 'Image for enumeration
types (namely a slice of a string consisting of the catenated values), the
lower bound will not be one in general unless "sliding" is forced in some
manner. Thus it is plausible that the omission is deliberate.
Is it the intent that the lower bound be one for all discrete types? Or is
it intended that this not be required?
************************************************************************
!section 03.05.05 (11) Daniel L. Stock/R.R. Software 89-10-19 83-01304
!version 1983
!topic AI-00239/11
I think it would be unwise to make this non-binding interpretation binding. The
particular problem I have is with explicitly defining the IMAGE of the
non-graphic values of type CHARACTER (and similarly defining the effect of an
appropriate instance of ENUMERATION_IO on such characters). First, of course,
such a definition would explicitly overturn 3.5.5(11). Worse, on compilers that
have small target systems, this would frequently cause the use of several
hundred extra bytes at run time to store the necessary name information.
Although Ada almost always has several thousand bytes of run-time overhead,
this seems like a lot to add for little gain.
*****************************************************************************
!section 03.05.05 (13) M. Woodger/Alsys 85-08-29 83-00622
!version 1983
!topic T'VALUE for non-graphic characters
!reference AI-00239/06
I agree with Gerry Fisher that the definition of T'VALUE in 3.5.5(13)
needs extending to match the invertibility requirement of T'IMAGE in 3.5.5(11),
but the wording had better be a bit closer to the present text.
I suggest the following sentence be inserted to follow the first
sentence of 3.5.5(13)
For the type CHARACTER, if the sequence of characters is
the image of a character other than a graphic character,
the result is the corresponding enumeration value.
*****************************************************************************
!section 03.05.05 (13) M Woodger 88-11-05 83-01075
!version 1983
!topic Incomplete definition of T'VALUE
!reference AI-00239
After the first sentence, insert:
"For the type CHARACTER, if the sequence of characters is the image of
a character other than a graphic character, the result is the
corresponding enumeration value."
*****************************************************************************
!section 03.05.05 (13) M Woodger 88-11-05 83-01076
!version 1983
!topic Replace "a plus or minus sign" by "a plus or a minus sign"
Ambiguity.
*****************************************************************************
!section 03.05.05 (18) M Woodger 88-11-05 83-01077
!version 1983
!topic Example of 'WIDTH
Append:
" -- COLOR'WIDTH = 7 "
*****************************************************************************
!section 03.05.05 (18) Karl Nyberg 91-05-05 83-01410
!version 1983
!topic COLOR'WIDTH = 6, not 7
Credit this one to Ray Dreschler at the FAA in OK City, who thought it was a
typo in the AARM...
-- Karl --
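Assuming COLOR is the enumeration type of the LRM 3.5.1 example, (WHITE, RED, YELLOW, GREEN, BLUE, BROWN, BLACK), the value can be checked with a short sketch (Python; the literal list is written out as an assumption):

```python
# Sketch of how COLOR'WIDTH is obtained: it is the maximum image length
# over the values of the subtype.  Images of enumeration literals are
# in upper case.
COLOR_IMAGES = ["WHITE", "RED", "YELLOW", "GREEN", "BLUE", "BROWN", "BLACK"]

def width(images):
    # T'WIDTH = maximum of T'IMAGE'LENGTH over all values of T
    return max(len(image) for image in images)

print(width(COLOR_IMAGES))   # 6, from "YELLOW": COLOR'WIDTH = 6, not 7
```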
*****************************************************************************
!section 03.05.05 (19) Eberhard Wegner 1983-08-18 83-00039
!version 1983
!topic Add a reference: lower bound 3.6.2,
to explain 3.5.5(10) last sentence.
************************************************************************
!section 03.05.06 (04) M Woodger 88-11-05 83-01078
!version 1983
!topic Incomplete statement
In the last line, after "on the real type definition" insert
"(and on small if specified)".
*****************************************************************************
!section 03.05.06 (06) Eberhard Wegner 1983-08-18 83-00040
!version 1983
!topic Append "even if the attribute T'MACHINE_OVERFLOW holds".
Otherwise the last sentence follows trivially from the first
instead of giving additional meaning.
************************************************************************
!section 03.05.06 (07) M Woodger 88-11-05 83-01079
!version 1983
!topic Add at the end "and subtype".
Incomplete statement.
(In fact, real types are only defined by constraints, so they might
better be called real subtypes; but for other reasons it is not
appropriate to change the name now.)
*****************************************************************************
!section 03.05.07 (06) Bobby J. Bethune, Singer Link 83-08-12 83-00281
!version 1983
!topic Floating point types
Current wording on page 3-20, 3.5.7 (6) is: "the number B is the integer
next above (D*log(10)/log(2)) + 1.)" This should read: "the number B is
the integer next below (D*log(10)/log(2)) + 1.)"
Current wording on page 3-22, 3.5.7 (19) is: "the largest model number
for the type MASS is approximately 1.27E30 and..." This should read:
"the largest model number for the type MASS is approximately 7.92E28
and..."
The reason for both changes is that the computation for number B is
incorrect as shown, consequently the value given for MASS LARGE is
also incorrect.
Derivation of B: given that 1.0 - 10**(-D) <= 1.0 - 2**(-B) is true:
    1.0 - 10**(-D) <= 1.0 - 2**(-B)
        - 10**(-D) <=     - 2**(-B)
          10**(-D) >=       2**(-B)
          10**D    <=       2**B
so: log(10**D) <= log(2**B)
    D*log(10) <= B*log(2)
    B >= D*log(10)/log(2)
    B = ceiling(D*log(10)/log(2))        -- ceiling function
or  B = floor(D*log(10)/log(2)) + 1      -- floor function
************************************************************************
!section 03.05.07 (06) Paul N. Hilfinger 85-01-17 83-00493
!version 1983
!topic Floating point types
!reference 83-00281
The referenced comment claims that in 3.5.7(6), the phrase,
"The number B is the integer next above (D*log(10)/log(2)) + 1.)"
should read,
"The number B is the integer next below (D*log(10)/log(2)) + 1.)"
They reach this conclusion by first translating the requirement that the
binary significand length (B) be sufficient to give a precision at least as
good as that given by the decimal significand length (D) into the inequality
2**(-B) <= 10**(-D) (1)
whence one can derive
B >= D*log(10)/log(2)
B = ceil(D*log(10)/log(2)) (since B is an integer)
B = floor(D*log(10)/log(2)+1)
(Note: the last equality is true only because D*log(10)/log(2) is never
integral for D a positive integer.)
However, their comment is incorrect, because their assumption (1), although
intuitively appealing, is an incorrect translation of the requirement.
An example is probably the best way to see the problem. Consider D=2, so
that by the proposed rule, B = 7, whereas by the current rule, B = 8. Now
consider numbers in the vicinity of 8, whose representation is 0.80E1 in
decimal and 2#0.1000000#E4 if carried to 7 bits of precision in binary. Now
the difference between 8 and the number next above it in the decimal
representation is, of course, .1, whereas the difference in the binary
representation is 2#0.0000001#E4 or 0.125 (1/8). In other words, the
resolution of the 7-bit binary representation, measured as the distance
between adjacent model numbers, varies over its range, and is sometimes not
as good as that of the 2-digit decimal representation. Intuitively, the
resolution of any floating point number declines in steps, one step at each
decade (for decimal) or binade (for binary). Because the points at which
these steps occur are not the same for both representations, there are
places where each gets its turn at being better than the other.
The derivation of the rule in 3.5.7(6) is as follows. It is sufficient to
prove that, for x>0
next(x,10,D) - previous(x,10,D) >= next(x,2,B) - previous(x,2,B)
where next(x,r,n) is the next model number > x (assuming unlimited exponent
range) in radix r with significand length n, and previous(x,r,n) is the
next model number <= x for radix r and significand length n. Now, for x>0,
x*r**(-n) < next(x,r,n) - previous(x,r,n) <= x*r**(-n+1)
(The lower bound comes from considering the best case significand,
1-r**(-n), and the upper bound from considering the worst case, r**(-1)).
Hence, it is sufficient to ensure that
x*10**(-D) >= x*2**(-B+1)
(i.e., the minimum granularity in decimal is no less than the maximum
granularity in binary.) From this, we get
10**D <= 2**(B-1)
log(10)*D <= log(2)*(B-1)
log(10)/log(2) * D + 1 <= B
B = ceil(log(10)/log(2)*D+1) (since B is integral).
Q.E.D.
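The argument above is easy to check numerically. The sketch below (Python; `ulp` is an illustrative model-number-spacing helper, not Ada semantics) reproduces both the D=2 counterexample near 8.0 and the sufficiency of the 3.5.7(6) rule over a sampled range:

```python
import math

def b_for_digits(d):
    # 3.5.7(6): B is the integer next above d*log(10)/log(2) + 1
    return math.ceil(d * math.log(10) / math.log(2) + 1)

def ulp(x, radix, n):
    # spacing of adjacent model numbers of the given radix and
    # significand length n, at x > 0 (unlimited exponent range)
    logf = math.log2 if radix == 2 else math.log10
    e = math.floor(logf(x)) + 1   # exponent with significand in [1/radix, 1)
    return float(radix) ** (e - n)

# The counterexample: with D = 2 the proposed rule gives B = 7, but near
# 8.0 the 7-bit binary spacing (0.125) is coarser than the 2-digit
# decimal spacing (0.1).
assert ulp(8.0, 10, 2) == 0.1 and ulp(8.0, 2, 7) == 0.125

# The LRM rule gives B = 8 for D = 2, and then the binary spacing never
# exceeds the decimal spacing anywhere in a sampled range.
assert b_for_digits(2) == 8
for k in range(1, 3200):
    x = k / 16.0
    assert ulp(x, 2, 8) <= ulp(x, 10, 2)
```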
************************************************************************
!section 03.05.07 (09) J. Storbank Pedersen (DDC) 83-05-25 83-00326
!version 1983
!topic Subtypes and safe numbers
Should the safe numbers of a subtype not be a subset of those of the
type? (as the model numbers are; see 3.5.7(15))
************************************************************************
!section 03.05.07 (09) Peter Belmont 84-11-14 83-00473
!version 1983
!topic What are the safe numbers for IBM-370 FloatingPoint
!reference AI-00021/02
I confess to some confusion on Ada numerics as they apply to
real machines. The requirements for raising NUMERIC_ERROR
when MACHINE_OVERFLOWS is true (See AI-00021/02) got me
thinking about it.
What we're after, as far as I can see, is to say that numeric
values will be at least as correct as the model numbers provide
and that NUMERIC_ERROR is raised only when this will not happen.
My problem is that both model numbers and safe numbers are defined
with SYMMETRIC binary exponents and the IBM-370 has ASYMMETRIC
hex exponents (implying asymmetric "equivalent" binary exponents).
The "base type" for each predefined type is the type itself, an
Ada-ish rather than a hardware-ish remark. I take it to mean
that each predefined type has model numbers with
B binary digits, an exponent in -4*B..4*B. I take it that the
safe numbers for such a predefined type have (also) B binary
digits and an exponent in the range -E..E with E >= 4*B.
In this case, IBM-370 hardware does not correspond very well to
model or to safe numbers. The 370 has floatingpoint numbers
of the form
16**N for -64 <= N <= 63
times .HHHHHH
where "H" is a hex digit from 0..15.
The leading "H" may have leading zero binary digits, so the apparent
least exponent becomes (in binary terms)
[(16**(-64)), less 3 bits] ==> 2**(-256 - 3)
                           ==> 2**(-259)
whereas the greatest exponent becomes (in binary terms)
[16**63] ==> 2 ** 252
Summarizing, IBM-370 has (if my calculations are correct)
floatingpoint numbers with effective binary exponents
in the range -259 .. 252
and with 21 (not 24) binary digits of mantissa, this in the
normalized 32-bit floatingpoint format.
I would like to believe that the "base type" for this kind
of number is hardwarish, not a LRM-ish symmetrically-exponented
abstraction.
Since, as luck would have it, the upper-bound is the smaller,
I suppose we may define "safe" numbers as having exponents
in the range -252 .. 252 .
Is there a problem, due to this asymmetry? Would there be a
problem if the exponent range were the other way,
-252 .. 259
Where, for that matter, does someone go who wishes to implement
Ada on a decimal machine? The binary-defined model numbers are
not exactly representable there (if floatingpoint is decimal).
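The figures quoted above can be reproduced from the 370 short floating-point format (a sketch under the comment's own assumptions of a 6-hex-digit normalized fraction and exponent range -64 .. 63):

```python
# value = f * 16**n, with f a 6-hex-digit fraction in [1/16, 1)
# and -64 <= n <= 63.
HEX_DIGITS = 6
EXP_MIN, EXP_MAX = -64, 63

# Smallest normalized value: (1/16) * 16**(-64) = 2**(-260), which is
# 0.5 * 2**(-259), so the effective binary exponent is -259.
min_binary_exp = 4 * EXP_MIN - 4 + 1     # -259

# Largest value lies just below 16**63 = 2**252.
max_binary_exp = 4 * EXP_MAX             # 252

# A normalized hex fraction may carry up to 3 leading zero bits, so
# only 4*6 - 3 = 21 binary digits of mantissa are guaranteed.
guaranteed_bits = 4 * HEX_DIGITS - 3

print(min_binary_exp, max_binary_exp, guaranteed_bits)   # -259 252 21
```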
************************************************************************
!section 03.05.07 (09) M. Woodger 89-03-18 83-01268
!version 1983
!topic "operations with model..." -> "operations in terms of model..."
Section 4.5.7 defines accuracy of operations with real operands, not
only with model and safe numbers. 4.5.7(1) says ".. the accuracy
required from any basic or predefined operation giving a real result
[is]... defined IN TERMS OF these model numbers".
So the penultimate sentence of 3.5.7(9) should read:
The rules defining the accuracy of operations in terms of model and
safe numbers are given in section 4.5.7.
*****************************************************************************
!section 03.05.07 (10-12) 03.05.09(8-10) Ron Brender 86-08-23 83-00795
!version 1983
!topic Real type definitions cannot contain a RANGE attribute
AI-00240/05 points out that a range attribute cannot be used as the
range in an integer type definition. Presumably, essentially the same
rational leads to the conclusion that that a range attribute cannot be
used as the range in a floating point or fixed point type definition
either.
This should be confirmed by a new AI.
*****************************************************************************
!section 03.05.07 (12) J. Storbank Pedersen (DDC) 83-05-25 83-00327
!version 1983
!topic Range of safe numbers
What is "the range of safe numbers?" and what does it mean that L and R
"belong to" such a range? (T'FIRST and T'LAST need not yield model or
safe numbers, 3.5.8(19)).
************************************************************************
!section 03.05.07 (17) P. N. Hilfinger 85-08-16 83-00611
!version 1983
!topic Restricting the allowed values of a floating point subtype
The Note 3.5.7(17) reads in part:
"The imposition of a floating point constraint on a type mark in a
subtype indication cannot reduce the allowed range of values unless
it includes a range constraint (the range of model numbers that
correspond to the specified number of digits can be smaller than
the range of numbers of the type mark)."
How is this to be deduced from the rest of the section? What is the
"allowed range of values"? It is true that the safe numbers of a subtype
are the same as those of the base type. However, this does not imply the
Note, since there are also numbers beyond the safe numbers that may be
(depending on an implementation) "allowed" in the sense that they behave
reliably and do not cause exceptions when assigned to variables or used in
arithmetic operations that do not exceed the machine bounds of
the base type. Why can't an implementation handle these differently for
different subtypes? There are, in fact, reasons for wanting to do just this.
I believe that the Note is incorrect. It cannot be deduced from the rest of
the section, nor is it desirable that anything be changed so that it can be
deduced.
*****************************************************************************
!section 03.05.07 (17) M Woodger 88-11-05 83-01080
!version 1983
!topic Helpful notes
Append to the last sentence:
", and conversely, a model number of the subtype (that is, defined by
the specified number of digits) need not belong to the subtype; it
could be outside the range defined by the subtype".
*****************************************************************************
!section 03.05.08 Bob J Bethune, Singer Link 83-08-12 83-00280
!version 1983
!topic Operations of floating point types
Currently 3.5.8 (3) reads: "the attributes of this group are the
attribute BASE (see 3.3.2)..." It should read: "the attributes of
this group are the attribute BASE (see 3.3.3)..." The reason
for this change is an incorrect paragraph reference.
************************************************************************
!section 03.05.08 (19) M Woodger 88-11-05 83-01081
!version 1983
!topic Helpful notes
Add:
"T'LARGE is a model number of the subtype T, but may not belong to
this subtype, depending on the range."
*****************************************************************************
!section 03.05.09 (01) M Woodger 88-11-05 83-01082
!version 1983
!topic Error bound specified by small
Add:
"Alternatively, the number small can be specified."
*****************************************************************************
!section 03.05.09 (02) Terry Froggatt 86-12-11 83-00890
!version 1983
!topic Fixed and Floating type Declarations needlessly Different
If there is ever a major revision of Ada, the declaration of fixed and
floating types should be unified: in both cases the programmer wants to
give a fairly coarse indication of the minimum accuracy required.
This is discussed in some more detail in my paper,
"Fixed-Point Conversion, Multiplication, & Division, in Ada(R)",
to appear shortly in Ada Letters.
Clearly, fixed-point types need a range, whereas this is optional for
subtypes and floating-point types. The range should be used to determine
the scale of the fixed-point type, such that (at least) one bound is
(almost) represented by the endpoints of the underlying integer type.
The other thing that a fixed-point declaration has to do is to enable the
implementation to decide which underlying integer type best meets the user's
minimum accuracy requirements. This can be done by a "digits" clause just as
for floating-point, and the number given should specify the number of digits
required for the whole mantissa, not just the fractional part.
Thus, fixed and floating point type declarations for the same range and
accuracy could be identical, even though fixed types give absolute accuracy
whereas floating types give relative accuracy. Of course, some means of
distinguishing a fixed type declaration from a floating one is needed.
This could be done in true Ada style, by overloading an existing keyword:
fixed_accuracy_definition ::= ABS DIGITS static_simple_expression.
*****************************************************************************
!section 03.05.09 (04) R. Peter Wehrum 87-07-17 83-00948
!version 1983
!topic Null-range and "singleton" fixed-point types
The tests C35A03E and C35A03R should be withdrawn from the
official ACVC test suite version 1.9 for the following two
reasons:
First, the set of model numbers for a null-range type is not
the empty set contrary to what one might have expected. In
consequence of RM3.5.9(6), zero always belongs to the set of
model numbers and there must be model numbers in the "immediate"
neighbourhood of the specified lower and upper bounds of the
corresponding range constraint, and there must be safe intervals
to hold the values for FIRST and LAST. So the set of model
numbers of a null-range type depends upon the specified bounds
and, generally, different values for FIRST and LAST. In general,
the attribute MANTISSA does not yield the value zero for
null-range fixed-point types. This follows from the general
rules, given the fact that the RM does not mention null-range
fixed-point types specifically.
Second, each of the above-mentioned tests contains the
"singleton" type
    type BIG_DELTA_MO is delta 8.0 range -2.0 .. 2.0;
and the test supposes that MANTISSA is zero for this type.
Intuitively, this appears to be correct (e.g., because no storage
is needed at all for just one value) and, furthermore, seems to
follow directly from RM3.5.9(6).
However, I think one cannot deduce from the RM that the attribute
MANTISSA yields the value zero for "singleton" fixed-point types,
but only that MANTISSA is undefined in this case. Moreover,
assuming that this case is actually undefined, I would propose to
lift this "undefinedness" by giving MANTISSA the value of 1 -
instead of 0 - for the reasons explained below.
According to RM3.5.9(4) a canonical form is defined only for
non-zero fixed-point values, thus mantissa (in the canonical
form) always denotes a positive integer, and thus B >= 1. If no
canonical form is defined / exists, then mantissa does not exist,
and B is undefined.
If the mantissa (in the canonical form) is used to represent an
(unbiased) positive integer and the value for zero, then B binary
digits just cover the range
0.. 2**B - 1,
such that the set of model numbers is
M = { +- j * small | j = 0,1 ... 2**B - 1 } (1)
If the requirements of the first sentence of RM3.5.9(6) are taken
seriously, it follows that each of the 2**B different binary
patterns must be used to represent a positive integer. This
corresponds to changing the encoding function by utilizing a bias
of 1. Then B binary digits cover the range
1 ..2**B,
such that the set of model numbers becomes
M' = { +- j * small | j = 0,1,...2**B }. (2)
The interpretation (1) is evidently the one intended by the
language definition; cf. the notes of RM 3.5.10(16) which provide
the formula:
T'LARGE = (2** T'MANTISSA - 1) * T'SMALL .
So, in fact, mantissa is used to store the model number zero
(i.e., one bit pattern is utilized for zero), and this should
uniformly hold including null-range fixed-point types.
For example, consider the type declaration
type FIX is delta 1.0 range 0.0.. 3.0;
If (1) is adopted, then B = 2. If (2) is assumed, then B = 1.
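The two interpretations can be compared mechanically (a Python sketch; `least_mantissa` is an illustrative helper following the RM3.5.9(6) rule that each bound must be a model number or lie at most small from one, not LRM wording):

```python
# Compare the two encodings for
#   type FIX is delta 1.0 range 0.0 .. 3.0;
small = 1.0
low, high = 0.0, 3.0

def least_mantissa(max_index):
    # max_index(b) = largest j such that +-j*small is a model number
    # for mantissa length b; search for the least sufficient b.
    b = 1
    while not all(abs(x) <= max_index(b) * small + small
                  for x in (low, high)):
        b += 1
    return b

b1 = least_mantissa(lambda b: 2**b - 1)  # interpretation (1): j in 0 .. 2**B - 1
b2 = least_mantissa(lambda b: 2**b)      # interpretation (2): j in 0 .. 2**B
print(b1, b2)                            # 2 1, matching the text
```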
*****************************************************************************
!section 03.05.09 (05) Terry Froggatt 86-12-07 83-00886
!version 1983
!topic "Small" should be a power of two TIMES THE RANGE
Ada's default power of two scaling of "small" was a mistake.
Power of two scaling is more of a distraction than an abstraction:
it seems to be of very limited use. For serious embedded applications,
range-related scalings are necessary: and in their absence programmers
will sensibly use pure fractions.
With range-related scalings, we get maximum accuracy, we get range
checks at minimum cost, and we avoid spurious scaling operations.
Thus we get a cheap floating point that is both cheaper and better
than with power of two smalls. And we get the scaled fractions of
classical fixed-point working for use where this is appropriate
whether or not we have floating hardware, such as angles with
natural scalings and sensors with given scalings.
If there is ever a major revision of Ada, the right solution would
be to make range-related scalings the default. There would then be no
need for small representation clauses at all. (Anyone then wanting the
current default scaling or a true delta-related scaling could achieve
this by declaring a type with an appropriately expanded range followed
by a subtype of it with the required range: no extra facility is needed).
Implementation of range-related scaling is in itself straightforward,
but for Ada's counter-productive accuracy requirements.
These matters are discussed in some more detail in my paper,
"Fixed-Point Conversion, Multiplication, & Division, in Ada(R)",
to appear shortly in Ada Letters.
*****************************************************************************
!section 03.05.09 (05) M Woodger 88-11-05 83-01083
!version 1983
!topic Small overlooked
The last sentence should read:
"The guaranteed minimum accuracy ... is defined in terms of either the
model numbers of the fixed point constraint ... (see 4.5.7) or the
model numbers defined by the specified value of small."
The last sentence of paragraph 6 should end:
"... the model numbers are defined by the delta of the fixed accuracy
definition, or by the length clause specifying small if there is one,
and by the range of the subtype denoted by the type mark."
*****************************************************************************
!section 03.05.09 (06) B. Spinney 83-10-27 83-00234
!version 1983
!topic model numbers for delta 1.0 range -127.0 .. 128.0
If we write
type F is delta 1.0 range -127.0 .. 128.0;
B, the mantissa length, for F should be 8, even though 128 may not be a
representable number in F's base type, if F'BASE'MANTISSA is 8. Since the
wording requires that the SMALLEST integer number for which each bound of the
specified range is either a model number or lies at most small distant from a
model number, F'MANTISSA is required to be 8 -- the range -127.0 .. 127.0
fits in 8 bits, and 128 is "at most" 1.0 from a model number, namely, 127.
Is this analysis correct?
************************************************************************
!section 03.05.09 (06) R.P.Wehrum/Siemens AG Munich 85-01-20 83-00556
!version 1983
!topic Null range fixed-point types.
Let null_fix denote a null-range fixed-point type. Is the corresponding
set of model numbers void? What are the values of the attributes
of null_fix? Especially, what is null_fix'mantissa, what is null_fix'first,
what is null_fix'last?
If 03.05.09(6) is taken literally, then the set of model numbers
is never void; it always has at least one element, i.e., zero,
and moreover, if B>0, -small and small are model numbers too.
If 03.05.09(6) is taken literally, then two different null-range
fixed-point types can assume different values for first and last:
    type null_fix_1 is delta 1.0 range 1000.0 .. -1000.0;
would have many more model numbers than, say,
    type null_fix_2 is delta 1.0 range 1.0 .. -1.0;
and null_fix_1'last would be -1000.0 whereas null_fix_2'last
would be -1.0. Is this intended?
At any rate, the LRM does not precisely define what is meant by
null-range fixed-point types.
*****************************************************************************
!section 03.05.09 (07) R.P. Wehrum/Siemens AG Munich 85-01-17 83-00552
!version 1983
!topic Attributes of anonymous predefined fixed-point types.
LRM 03.05.09(7) states that "The model numbers of each predefined fixed
point type comprise...numbers for which mantissa...has the number of
binary digits returned by the attribute MANTISSA, and for which the number
small has the value returned by the attribute SMALL."
However, the predefined fixed-point types are anonymous and it is
therefore impossible to employ these attributes directly. What is
(probably) meant is the following:
Let F be any fixed-point subtype. Then one can get the values for the attributes
MANTISSA and SMALL of the underlying predefined type by using
F'BASE'MANTISSA and F'BASE'SMALL, resp. If this interpretation is
correct, then it should be expressed like this in the LRM.
__
If 03.05.09 (9) is taken literally - which is not self-evident,
remember the discussions on the staticness of subtypes - then one can
deduce that the underlying predefined type must always have the same
value for small as the subtype under consideration (provided that a
subtype that does not specify a new delta inherits the delta from its parent).
However, according to 03.05.09 (10) one can get the impression that
the predefined type may possess a finer small (which would not be
unreasonable to allow). A clarification is needed.
*****************************************************************************
!section 03.05.09 (07) Terry Froggatt 86-12-13 83-00892
!version 1983
!topic What are Fixed Point BASE types and Predefined types?
Is an implementation permitted to have anonymous predefined
floating or integer types?
Is an implementation permitted to have named predefined
fixed-point types in package STANDARD (in addition to DURATION)?
The description of "predefined" fixed point types seems very misleading.
Each of the underlying hardware integer types which an implementation uses
to map fixed point types onto could be used for infinitely many fixed types
with differing scales (even if scales are limited to powers of two). It
might have been better to talk of a "predefined family" of fixed types, or
to say that each fixed type declaration "introduced" a distinct parent type.
It is not clear what properties the fixed point base type (or parent type)
of a declared first named subtype is expected to have: apart from having a
symmetric range with a bound of up to twice that of the declared subtype's.
It could offer exactly this and nothing more: but I imagine the intention
was that 'BASE could be used to obtain the attributes represented by the
whole of the underlying hardware type. In this case 'BASE'MANTISSA would
typically be 16 or 32, rather than equalling the declared type's 'MANTISSA.
So if fixed point numbers are right-justified, the base type could have a
much wider range than the declared type, whereas if they are (sensibly)
left-justified, it could have a much finer accuracy than the declared type.
A strict reading of 3.5.9(9) contradicts this: it implies that T'BASE'SMALL
has to return T'SMALL even when left-justification is used. But 3.5.9(10)
implies, and AI-341 says, that T'BASE'SMALL can be less than T'SMALL.
Note that the "!topic" of AI-341 is misleading: "Can fixed point types be
represented with extra precision". The legality of providing extra precision
(as offered by left-justification) is not in doubt here: the issue is
whether the user can find out how good it is, via the base type.
If my understanding of the objectives of fixed-point base types is correct,
it might be better to state clearly, using wording similar to that for
float and integer types, that "an implementation shall provide at least one
family of fixed point types with the same mantissa length but different
scalings; it may provide several such families each with (substantially)
different mantissae".
*****************************************************************************
!section 03.05.09 (08) R P Wehrum, Siemens A.G., Muenchen 83-06-02 83-00250
!version 1983
!topic The Non-Existence of Static Fixed_Point_Types
Let us consider the fixed-point declaration
type F is delta 0.1 range -1.0 .. 1.0; -- (1)
Does F denote a static subtype in the sense of section 04.09?
It does not.
Because of section 3.5.9(8,9) the declaration (1) is equivalent to the
sequence of declarations:
type (anonymous_fixed_point_type_X) is
    new (predefined_fixed_point_type_X);                     -- (2a)
subtype F is (anonymous_fixed_point_type_X) range
    (anonymous_fixed_point_type_X)(-1.0) ..
    (anonymous_fixed_point_type_X)(1.0);                     -- (2b)
Because of the explicit conversions in the range constraint in (2b) the
range is not static. Therefore, F is not a static subtype according to
section 4.9(11).
Moreover, this entails that there exist no user defined static subtypes
at all.
It remains an open question whether the predefined fixed-point types,
which are all anonymous - apart from the type DURATION -, may be
considered as being static. The type DURATION itself is not static since
the above logic, i.e. the "equivalence rule", holds again. We come to the
conclusion that user defined static fixed-point types do not exist
(though every attribute may be required to be static as in type
declarations of the form (1)) and that the predefined anonymous fixed-
point types may turn out to be not static either.
A similar analysis can be given for integer types and floating-point
types. The corresponding predefined types have names and should be
considered as being static (hopefully, the "equivalence rule" must not be
invoked). But again all user-defined types are not static.
We conclude that at least all numeric types which can be defined by the
user are non-static subtypes.
It is questionable whether this was intended.
************************************************************************
!section 03.05.09 (09) B. Spinney 83-10-27 83-00235
!version 1983
!topic Can a fixed point type declaration raise NUMERIC_ERROR?
Consider the declaration:
type F is delta 1.0 range -128.0 .. 128.0;
This is stated to be equivalent to:
type %FP is new %PDF;
subtype F is %FP range -128.0 .. 128.0;
An implementation is allowed to choose the base type such that all and only the
model numbers of F are representable, e.g., %PDF might occupy just 8 bits.
Since 128.0 is not representable in 8 bits, the conversion %FP(128.0) is
allowed to raise NUMERIC_ERROR. Hence, if we believe that the original
declaration is really equivalent to the pair of declarations given above, then
it is possible that the type declaration will be accepted at compile time and
raise NUMERIC_ERROR at run-time.
An alternative explanation might be that the equivalence is for expository
purposes, and hence the type declaration should never raise NUMERIC_ERROR. Of
course, even if the type declaration does not raise NUMERIC_ERROR, it would be
acceptable to raise NUMERIC_ERROR for F'(128.0).
Which interpretation of the equivalence is correct?
************************************************************************
!section 03.05.09 (09) M. Woodger 86-09-16 83-00804
!version 1983
!topic representation of base type of a fixed point type
!reference AI-00341/05
The discussion section of this commentary has overlooked the
possibility that T'SMALL can be specified by a length clause;
this is explicitly mentioned in 3.5.9(14). In such a case the
delta given in the fixed point constraint is not used to define
the model numbers of T, so the argument breaks down. Use of the
word "include" in 3.5.9(10) can now be explained as covering
precisely this case where T'SMALL is specified.
Perhaps the following should be appended to the summary:
"This only happens when T'SMALL is specified by a length
clause."
*****************************************************************************
!section 03.05.09 (09) Ron Brender 86-10-06 83-00822
!version 1983
!topic Correction to AI-00144/08 examples
!reference AI-00144/08
In the examples (seven places)
... delta 2**(-15) ...
should be
... delta 2.0**(-15) ...
*****************************************************************************
!section 03.05.09 (09) Terry Froggatt 88-05-25 83-00977
!version 1983
!topic Comment on AI-00341
I am worried that we are moving away from viewing fixed point numbers in Ada
as "real" towards "dollars and cents".
The way the language is at present, SMALL is a power of two by default, and
the only way to get range-related scales is to use a 'SMALL rep. spec.
It's a bad idea to give 'SMALL rep.specs the additional semantics of forcing
the compiler to remove any accuracy which it can offer at no cost in time and
space. AI-00341 presently requires the compiler to remove extra accuracy that
can be provided.
(It won't be long before someone suggests that real arithmetic in the compiler
should emulate the inaccuracies of the target!)
I would strongly recommend that this change to 'SMALL be deferred, UNTIL the
language offers a better way to get range-related scalings.
Whilst it's probably too late to change the default powers-of-2, perhaps Ada
9X could offer a simple "for FIXED_TYPE use range;"?
*****************************************************************************
!section 03.05.09 (09) Mike Woodger 89-01-23 83-01258
!version 1983
!topic Further correction to AI-00144
AI-00471 corrected AI-00144 by replacing 2**(-15) in the deltas of examples
with 2.0**(-15). However, the corrected declaration in the discussion reads:
type F is delta 2.0**(-15) range -1.0 .. 1.0-2**(-15);
This is still incorrect since the final 2**(-15) should read 2.0**(-15).
*****************************************************************************
!section 03.05.09 (10) Japanese comments on DP8652 85-05-10 83-00558
!version 1983
!topic Decimal fixed point representations
Is freedom left to the implementation to represent a fixed point number (with
a length clause) in either binary or decimal code, provided the accuracy
defined by the model numbers is satisfied?
*****************************************************************************
!section 03.05.09 (10) J. Goodenough 85-06-17 83-00565
!version 1983
!topic re: Decimal fixed point representations
!reference 83-00558
Fixed point numbers can be represented using decimal or binary integers as
long as the full set of model numbers is represented. For example, given the
declaration:
type FIX is delta 0.1 range -99.9 .. 99.9;
If no length clause is given, 0.25 is a model number, since FIX'SMALL is
required to be a power of 2, and will in fact be equal to 0.0625. Equally
well, 0.1875 is a model number, and 8 * 0.1875 must equal 1.5. Finally, the
range of the model numbers is -128.0 + 0.0625 .. 128.0 - 0.0625. This means the
sum of 99.9 and 0.3 is not allowed to overflow. In short, whether or not
decimal representation is used, the representation must accommodate mantissa
values in the range -2047 .. 2047. If decimal representation is used, this
means four digits are required, and even the model number closest to 99.9
(whose mantissa is 1598) requires four decimal digits to represent with the
correct accuracy.
Now if a length clause specifying FIX'SMALL is given, the situation is
somewhat different:
for FIX'SMALL use 0.1;
Now a decimal representation can also be used. The model numbers in this case
cover the range -(1024 * 0.1) + 0.1 .. 102.4 - 0.1, so it is still required
that 99.9 + 0.3 be evaluated without raising overflow. This means
computations using the predefined arithmetic operations must generally use
four decimal digits to ensure NUMERIC_ERROR is not raised incorrectly. On the
other hand, three decimal digits will suffice to hold stored values of type
FIX since these values can never lie outside the range -99.9 .. 99.9.
If binary representation is used, the set of model numbers is the same, of
course. The value 1023 requires 10 bits and the value 999 also requires 10
bits, so there is no difference in size between stored values of type FIX and
values of the base type.
In short, the answer to the question is, "Yes, decimal representation can be
used for the model numbers if accuracy AND RANGE requirements are satisfied."
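The figures in this reply can be reproduced mechanically. The following Python sketch (the helper names are mine, not from any Ada implementation; exact rationals avoid binary rounding error) computes the default SMALL, the mantissa size, and the model-number bounds for FIX with and without the length clause:

```python
from fractions import Fraction

def default_small(delta):
    # By default (RM 3.5.9), SMALL is the largest power of two
    # not greater than the declared delta.
    small = Fraction(1)
    while small > delta:
        small /= 2
    while small * 2 <= delta:
        small *= 2
    return small

def mantissa_bits(small, bound):
    # Smallest B such that the model numbers +/- small * (2**B - 1)
    # cover the declared range bound.
    b = 1
    while small * (2 ** b - 1) < bound:
        b += 1
    return b

# type FIX is delta 0.1 range -99.9 .. 99.9;   -- no length clause
delta, bound = Fraction(1, 10), Fraction(999, 10)
small = default_small(delta)          # 1/16, i.e. 0.0625
b = mantissa_bits(small, bound)       # 11, so mantissas -2047 .. 2047
largest = small * (2 ** b - 1)        # 128.0 - 0.0625 = 127.9375

# for FIX'SMALL use 0.1;               -- length clause specifying SMALL
b10 = mantissa_bits(delta, bound)     # 10, so mantissas -1023 .. 1023
largest10 = delta * (2 ** b10 - 1)    # 102.4 - 0.1 = 102.3
```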
*****************************************************************************
!section 03.05.09 (11) C Bendix Nielsen, AdaFD, DDC 86-06-09 83-00750
!version 1983
!topic For a fixed point type, safe and model numbers are the same?
3.5.9(11) says: "The safe numbers of a fixed point TYPE are the
model numbers of its base type."
3.3(4) says: "The base type of a type is the type itself."
Conclusion: The safe numbers of a fixed point type are the model
numbers of the type itself - or did 3.5.9(14) intend to define the
safe numbers of a fixed point SUBTYPE?
*****************************************************************************
!section 03.05.09 (14) B. Spinney 83-10-27 83-00233
!version 1983
!topic 'MANTISSA for fixed point subtypes
The model numbers of a fixed point type depend on the delta and range in a
fixed point constraint [3.5.9(6)]. Consider the following declaration:
type F is delta 1.0 range -127.0 .. 127.0;
subtype S is F delta 1.0 range L..R;
The model numbers of S are determined by the values of L and R, and therefore,
S'MANTISSA must, in general, be computed at run time, since L and R need not be
static. Is this the intent?
************************************************************************
!section 03.05.09 (14) R.P. Wehrum/Siemens AG Munich 85-01-17 83-00551
!version 1983
!topic Model numbers for certain fixed-point subtypes undefined.
If a subtype indication involves a range-constraint but no delta,
then the set of model numbers of the corresponding fixed-point subtype
(anonymous or non-anonymous subtype or derived type) seems to be
undefined. LRM 03.05.09 only describes those cases which contain a fixed
accuracy definition. So there seems to be a gap. It is obvious how this gap
can be filled.
*****************************************************************************
!section 03.05.09 (14) Terry Froggatt 86-12-05 83-00884
!version 1983
!topic Fixed Point Subtypes inheriting Small
3.5.9 (14) states, and 3.5.9 (16) clarifies, that a fixed point subtype S
of a fixed point type T, inherits the Small of T if and only if that Small
was specified by a length clause.
However, I imagine that most fixed point types will have a small length
clause to ensure that Small is Delta rather than a power of 2. I cannot
see small length clauses being used to produce a much smaller small than
the delta, since this can be achieved by using a smaller delta.
A subtype of T which inherits T's small can always be obtained by
using a range constraint rather than a fixed-point constraint;
so if a fixed-point constraint is given it should be honoured.
Thus, the Small of S should be the largest power of 2 times the Small
of T that is not more than the Delta of the fixed-point constraint;
regardless of whether T's Small was specified or defaulted.
*****************************************************************************
!section 03.05.09 (14) M Woodger 88-11-05 83-01084
!version 1983
!topic "if there is one {(for the original type or subtype)}.
!reference AI-00146
Ambiguity.
*****************************************************************************
!section 03.05.09 (16) B. Spinney 83-11-04 83-00238
!version 1983
!topic model numbers for a fixed point subtype with length clause
The note does not seem to be derivable from the text. 3.5.9(14) specifies how
the model numbers for a fixed point subtype are determined:
type F is delta 0.1 range -15.0 .. 15.0;
for F'SMALL use 0.1;
subtype FS is F delta 1.0;
Is FS'MANTISSA < F'MANTISSA? The note in paragraph 16 says that FS'SMALL =
F'SMALL. But paragraph 14 says the model numbers for a subtype are "defined
by the corresponding fixed point constraint and also by the length clause
specifying small, if there is one." There is no length clause specifying
SMALL for FS; presumably for this example, paragraph 14 is referring to the
length clause given for F. If so, the wording could be clearer.
************************************************************************
!section 03.05.09 (16) M. Woodger 86-09-16 83-00805
!version 1983
!topic model numbers for a fixed point subtype with length clause
!reference AI-00146/05
The discussion section of this commentary omits to make the point
that is specified in 3.5.9(5): The model numbers defined by a
fixed point constraint ignore the specified delta if SMALL has
been specified by a length clause. This is the reason why
FS'SMALL is inherited from F'SMALL instead of using the delta
value 0.8. The argument about FS2'SMALL is not convincing.
The second sentence of the quoted Note is not a consequence of
the first sentence of the Note. Therefore in line 2 of the
discussion the words "In particular" and the preceding period
should be replaced by "and that".
*****************************************************************************
!section 03.05.09 (16) Terry Froggatt 86-12-02 83-00881
!version 1983
!topic Left-Justification of Fixed-Point
Ada's fixed-point arithmetic is intended as an approximation to real
arithmetic: consequently, there are never "spare bits", just better and
worse approximations. So numbers should always be as far left-justified
within the chosen wordlength as the range permits, subject to Ada's
constraint that the "small" (whether a default power of two or that
given by a small representation clause) is mapped exactly onto some bit.
To me, this statement is as self-evident for
accuracy as the statement that a compiler should not generate
spurious NOPs is for time and space.
This matter is discussed in some more detail in my paper,
"Fixed-Point Conversion, Multiplication, & Division, in Ada(R)",
to appear shortly in Ada Letters.
Unfortunately, the majority of people who read 3.5.9 seem to jump to
the conclusion that Ada expects "small" to be mapped onto unity:
i.e. Right Justification. In fact "small" only determines what
the model numbers and minimum legal accuracy are.
I believe that the Notes in the LRM (not just the Implementor's guide)
should make it clear that "Left Justification" is permitted
by the language and is the most desirable form of implementation.
*****************************************************************************
!section 03.05.09 (16) M Woodger 88-11-05 83-01085
!version 1983
!topic Helpful note
Insert a new paragraph before this one as follows:
"The model numbers of the predefined fixed point type selected to satisfy
the declaration of a fixed point type T can be determined from
T'BASE'MANTISSA and T'BASE'SMALL."
*****************************************************************************
!section 03.05.09 (18) J.P. Rosen NYU, 83-11-07 83-00232
!version 1983
!topic Error in example
The type FRACTION does not fit into one machine word.
The mantissa part of -1.0 is 1.0, thus B must be chosen such that 1.0
can be represented. Fixed point types do not allow an extra value for
negative numbers. The example should read:
type FRACTION is delta DEL range -1.0 + DEL .. 1.0 - DEL;
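The arithmetic behind this correction can be checked by hand; a Python sketch (illustrative only, assuming B = 15 mantissa bits in a 16-bit word and exact rationals to keep the model numbers exact):

```python
from fractions import Fraction

B = 15                              # mantissa bits in a 16-bit word (sign + 15)
DEL = Fraction(1, 2 ** 15)          # delta 2.0**(-15)
largest_model = (2 ** B - 1) * DEL  # 1.0 - DEL; the model numbers are
smallest_model = -largest_model     # symmetric, so -1.0 is not among them
```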
************************************************************************
!section 03.05.09 (18) M. Woodger 89-03-18 83-01269
!version 1983
!topic Helpful comment
After the line
type fraction is delta DEL range -1.0 .. 1.0 - DEL;
insert the lines
-- -1.0 is not a model number
(For examples of length clauses see 13.2).
*****************************************************************************
!section 03.05.10 (03) M Woodger 88-11-05 83-01086
!version 1983
!topic Replace "3.3.2" by "3.3.3"
Typo.
*****************************************************************************
!section 03.05.10 (08) Software Leverage, Inc. 84-01-23 83-00265
!version 1983
!topic Attribute 'FORE Not Defined as Intended?
Consider this subprogram:
with TEXT_IO;
procedure P is
type T is delta 0.01 range 0.0..9.99; -- T'AFT is 2, T'FORE is 2
for T'SMALL use 0.01; -- Make things simple
package Q is new TEXT_IO.FIXED_IO(T);
begin
Q.PUT(T'LAST, AFT => 1);
-- Prints as " 10.0" (more than T'FORE digits before decimal point)
end P;
We believe the intent of using 'FORE as the default for the FORE
parameter to PUT in FIXED_IO was to make all possible values of the
item being output print in the same number of columns; otherwise the
FORE default could just be zero. Our concern is that 'FORE doesn't
always do this.
The above phenomenon doesn't only happen when the defaults are
overridden. Consider the following:
with TEXT_IO;
procedure P is
type T is delta 0.01 range 0.0..9.999; -- T'AFT is 2, T'FORE is 2
for T'SMALL use 0.009999; -- 1/1000 of T'LAST
package Q is new TEXT_IO.FIXED_IO(T);
begin
Q.PUT(T'LAST); -- Use only default values
-- Prints as " 10.00" (the same problem occurs)
end P;
The problem is that the "number of characters needed for the integer
part of the decimal representation" of a real value may not be
sufficient to hold the number of digits in a rounded value.
Let fore(x, n) be the number of digits before the decimal point if x
is rounded to n digits. (Here x is a mathematical real, and n a
positive mathematical integer.)
If a fixed point subtype T denotes a null range, then T'FORE = 2. If T isn't
null, let u = max(abs(T'FIRST), abs(T'LAST)); the three "reasonable"
guesses at what the LRM intended are
1. T'FORE = fore(u, 1)
2. T'FORE = fore(u, T'AFT)
3. T'FORE = limit of fore(u, n) as n -> infinity.
We believe that the third of these is what is implied by the LRM.
We think this may not be what was intended. The first of the three
definitions is the only one that will "always work" when actually used
in the TEXT_IO package in the sense that no possibly unexpected output
will arise. The second is justifiable on the grounds that it will
work if the user overrides neither of the defaults; one can
reasonably argue that in doing so he or she should be required to
allow for the consequences.
Assuming our belief is correct, the manual should state explicitly
that the "decimal representation" is an exact one. (If not, it should
state either that rounding to T'AFT places is assumed, or that a "most
pessimistic" value is to be used which will work for any rounding
accuracy. The latter is, of course, equivalent to rounding to 1
place.)
It would also be useful to add a note in chapter 14 to the effect that
the value given for DEFAULT_FORE will not suffice for certain
instantiations of TEXT_IO.FIXED_IO with (hopefully infrequently
encountered) fixed point subtypes.
Which interpretation is correct?
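The behaviour being questioned is easy to reproduce with the submitter's fore(x, n). A Python sketch (fore is the submitter's mathematical helper, not an Ada attribute; the implementation here is mine) shows the three candidate definitions disagreeing for u = 9.99:

```python
from decimal import Decimal, ROUND_HALF_UP

def fore(x, n):
    # The submitter's fore(x, n): the number of digits before the
    # decimal point after x is rounded to n digits after the point.
    q = Decimal(str(abs(x))).quantize(
        Decimal(1).scaleb(-n), rounding=ROUND_HALF_UP)
    return len(str(int(q))) if q >= 1 else 1

u = 9.99                  # max(abs(T'FIRST), abs(T'LAST)) for the first example
digits_1 = fore(u, 1)     # rounding to 1 place gives 10.0  -> 2 digits
digits_2 = fore(u, 2)     # rounding to T'AFT = 2 places gives 9.99 -> 1 digit
digits_inf = fore(u, 10)  # the exact value also has 1 digit
```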
************************************************************************
!section 03.05.10 (08) Lester p.p. Ada-Europe 85-09-05 83-00654
!version 1983
!topic AI-179: 'FORE not defined as intended.
We feel that the manual is quite clear: using the default of 'FORE for
generic parameter FORE will lead to program output in which the user
will get ragged columns where he expected nicely-aligned ones. Sad, but
true: the definer of 'FORE overlooked the cases cited in the question.
Intense dislike of this seems to us insufficient to over-turn a clear
statement in the standard, given that the standard appears to have no
contradicting or overriding statement, or even a statement which would
confuse the issue.
*****************************************************************************
!section 03.05.10 (08) J. Goodenough 86-09-07 83-00803
!version 1983
!topic Correction to AI-00179/06
!reference AI-00179/06
The discussion section of this AI contains an error. The following example
is given:
type G is delta 0.01 range 1.00 .. 10.00;
for G'SMALL use 0.01;
subtype SG is F delta 0.1 range 1.0 .. 9.95;
The discussion says:
For the subtype SG, 9.9 and 10.0 are consecutive model numbers
(3.5.9(14)). It is implementation dependent whether the upper
bound of SG is represented as the model number 9.9 or the model
number 10.0. Depending on the implementation's choice, the
value returned by SG'FORE will be either 2 or 3. In addition,
note that the bounds of SG need not be given by static
expressions. If the upper bound is non-static and has a value
lying in the model interval 9.9 to 10.0, SG'FORE's value will
be implementation dependent (and must be computed at run-time).
The fact that 'FORE may return implementation dependent values
should be taken into consideration by programmers.
The discussion is incorrect; the model numbers for SG are the same as the
model numbers for type G because SMALL is explicitly specified with a length
clause. For the discussion to be correct, the example should be changed so
the upper bound of SG is 9.995. The model interval in the discussion will
then range from 9.99 to 10.00. After these corrections are made, the example
and discussion should read as follows:
type G is delta 0.01 range 1.00 .. 10.00;
for G'SMALL use 0.01;
subtype SG is F delta 0.01 range 1.00 .. 9.995;
For the subtype SG, 9.99 and 10.00 are consecutive model
numbers (3.5.9(14)). It is implementation dependent whether
the upper bound of SG is represented as the model number 9.99
or the model number 10.0. Depending on the implementation's
choice, the value returned by SG'FORE will be either 2 or 3.
In addition, note that the bounds of SG need not be given by
static expressions. If the upper bound is non-static and has a
value lying in the model interval 9.99 to 10.00, SG'FORE's
value will be implementation dependent (and must be computed at
run-time). The fact that 'FORE may return implementation
dependent values should be taken into consideration by
programmers.
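The corrected model interval can be verified directly. A Python check (illustrative only, using exact rationals) with SMALL = 0.01 confirms that 9.995 lies strictly between the consecutive model numbers 9.99 and 10.00:

```python
from fractions import Fraction

small = Fraction(1, 100)       # for G'SMALL use 0.01;
ub = Fraction(9995, 1000)      # upper bound of SG, 9.995
below = (ub // small) * small  # 9.99, the model number just below
above = below + small          # 10.00, the next model number
# 9.995 is not itself a model number, so the bound lies in the
# model interval 9.99 .. 10.00 and SG'FORE depends on the choice.
```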
*****************************************************************************
!section 03.05.10 (08) J. Kelly 88-08-30 83-01015
!version 1983
!topic Correction to AI-00179/08
!reference AI-00179, AI-00467
The corrections in AI-00467/04 to AI-00179 are not sufficient. The response
section of AI-00179/08 now reads:
type F is delta 0.1 range 0.0 .. 9.96;
for F'SMALL use 0.01;
...
type G is delta 0.01 range 1.00 .. 10.00;
for G'SMALL use 0.01;
subtype SG is F delta 0.01 range 1.00 .. 9.995;
For the subtype SG, 9.99 and 10.00 are consecutive model numbers
(3.5.9(14)).
SG should be a subtype of G, not of F, especially since both the delta and the
upper bound of SG are not compatible with those of F.
Furthermore, in a previous version of AI-00179, the delta of SG was 0.1. It
could still be 0.1 since the representation clause for G forces SG'SMALL to
be the same as G'SMALL.
Thus, SG should be declared as:
subtype SG is G delta 0.1 range 1.00 .. 9.995;
*****************************************************************************
!section 03.06 (01) Don Clarson 83-06-30 83-00006
!version 1983
!topic ...consisting of components that have the same {constrained} subtype.
************************************************************************
!section 03.06 (01) Rockwell International 86-03-20 83-00728
!version 1983
!topic Allowed range of index subtypes
The Rockwell International - Computer Support Systems group is designing
an Ada compiler for a proprietary 16-bit microprocessor.
The type INTEGER will range from -32_768 to +32_767.
The type LONG_INTEGER will range from -2_147_483_648 to +2_147_483_647.
This microprocessor can have multiple 64K data areas. Any object, such as an
array, can occupy an entire 64K area but cannot cross over into the next 64K
area.
If LONG_INTEGER is supported an array such as
type EXAMPLE is array ( LONG_INTEGER range 0 .. 100_000 ) of INTEGER;
would exceed the 64K data area.
QUESTIONS:
a) Does the phrase "specified discrete types" imply all discrete types or
is it left to the compiler implementer to decide which discrete types
will be allowed as array indexes?
b) Can we implement all the LONG_INTEGER numeric operations but limit arrays
with LONG_INTEGER indexes to have ranges less than 64K and still have a
validatable Ada compiler?
Thank you for your assistance.
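The storage arithmetic behind the question can be made concrete; a Python sketch (assuming 2-byte, i.e. 16-bit, INTEGER components, as the quoted ranges suggest):

```python
component_bytes = 2                  # 16-bit INTEGER component
components = 100_000 - 0 + 1         # index range 0 .. 100_000
size = components * component_bytes  # 200_002 bytes for type EXAMPLE
segment = 64 * 1024                  # one 64K data area
exceeds = size > segment             # EXAMPLE cannot fit in a single area
```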
*****************************************************************************
!section 03.06 (01) J. Goodenough 86-03-24 83-00729
!version 1983
!topic Allowed range of index subtypes
!reference 83-00728
In my opinion, your implementation could raise STORAGE_ERROR when attempting
to allocate an array of type EXAMPLE, since there is insufficient storage
available to hold an array of this size. The wording of 11.1(8) would
seem to support this. You would have to allow the type declaration,
although you might give a compile-time warning.
*****************************************************************************
!section 03.06 (04) M. Woodger 89-03-18 83-01270
!version 1983
!topic Not meant.
The second sentence reads:
A multidimensional array has a distinct component for each possible
sequence of index values that can be formed by selecting one value
for each index position (in the given order).
But this is not what is meant. The values selected must not lie outside the
index range. The correct description uses the phrase that was employed to
describe a one-dimensional array, which was
possible index value.
The sentence should read:
A multidimensional array has a distinct component for each sequence
of index values that can be formed by selecting one possible value
for each index position (in the given order).
*****************************************************************************
!section 03.06 (05) Software Leverage, Inc. 84-05-01 83-00373
!version 1983
!topic Index types should be required to be discrete.
The manual does not explicitly forbid the following:
type T is array(FLOAT range <>) of INTEGER;
Since it was obviously the intent that index types be discrete, the manual
should be amended to state, "The type mark given in an index subtype
definition must be a discrete subtype".
************************************************************************
!section 03.06 (05) M Woodger 88-11-05 83-01087
!version 1983
!topic Helpful wording
At the end of the third sentence, after "index subtype definition", add "given
after the word array". This makes it clear that a syntactic category is
meant, not some definition of an index subtype mentioned elsewhere.
*****************************************************************************
!section 03.06 (11) Eberhard Wegner 1983-08-18 83-00041
!version 1983
!topic Replace VECTOR by TUPLE everywhere in the Reference Manual.
A tuple may but need not represent a vector. In Linear Algebra, a
vector relates to a tuple just as a linear mapping relates to a
matrix: A vector is invariant to changes of the coordinate system (as
is a linear mapping), the tuple depends on the coordinate system
chosen (as does a matrix representing a linear mapping). Your SIGMA in
12.1(9) suggests interpretation as the sum of the items, but if your
tuple VECTOR does represent a vector, the sum of the coordinates is
meaningless because it depends on the coordinate system while the
vector does not. (If it helps you: William A. Wulf agrees with my
point.)
************************************************************************
!section 03.06.01 (02) M Woodger/Alsys 83-05-17 83-00161
!version 1983
!topic "iteration rule" -> "iteration scheme"
************************************************************************
!section 03.06.01 (02) J. Goodenough 83-11-18 83-00211
!version 1983
!topic legality of -1..10
The rule in this paragraph makes
for I in -1..10 loop
illegal when an implementation has more than one integer type declared in
STANDARD and legal when an implementation has decided to support only one
predefined integer type.
To see this, consider the case where INTEGER is the only type declared in
STANDARD. 1 (and 10) are either of type universal integer or can be implicitly
converted to any integer type in scope, namely INTEGER and COUNT (which is
declared in TEXT_IO) (the implicit conversion to COUNT is a red herring, so
don't worry too much about it). We now ask what unary "-" operators are
visible that take operands of type universal integer, INTEGER, and COUNT, and
find that only the operators for universal integer and INTEGER are visible.
*****************************************************************************
!section 03.06.01 (02) Hans Hurvig 89-07-04 83-01322
!version 1983
!topic The illegality of the discrete range -1..10
!references AI-00140. AI-00148
AI-00148 and various parts of AI-00140 argue that -1..10 is
illegal as a discrete range. I agree, but believe that the
analysis is flawed.
It is not the case that both operands are convertible, so the
first part of 3.6.1(2) does not apply. Therefore we must
determine a unique discrete type using only the fact that the
operand types are the same.
There are at least two possibilities, universal_integer and
INTEGER, plus one for each additional integer type in scope
with a visible "-".
From these possibilities we simply choose universal_integer
because it needs no implicit conversions, while all the
other ones require two. However universal_integer is
disallowed by 3.6.1(2) so the discrete range is illegal.
I believe it is justified to invoke 4.6(15) like this even though
it is not mentioned in 3.6.1(2), because it never is anyway; it
is always invoked as a tie-breaker whenever there is a need.
Strictly speaking we shouldn't even attempt to apply the implicit
conversions in the first place, because there is a perfectly
reasonable interpretation without them, albeit one that turns out
to be illegal; thus there isn't even a tie to break (except in
the resolution-propagation approach to overload resolution, but
that is not part of the standard proper).
*****************************************************************************
!section 03.07.02 (15) Mats Weber 89-10-03 83-01337
!version 1983
!topic Discriminants need not have defined values
Consider the following piece of code:
type T (A : Natural := 0) is ...;
Error : exception;
procedure P (X : out T) is
begin
X := (A => 5, ...);
raise Error;
end P;
...
declare
Y : T := (A => 1, ...);
begin
P(Y);
exception
when Error =>
-- *
end;
The value of Y.A (which is a discriminant) at the *-mark is not defined.
It depends on which parameter passing mechanism is used for P.X, and a
program whose effect depends on the parameter passing mechanism is erroneous.
LRM 3.7.2(15) should be removed or rephrased.
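The dependence on the passing mechanism can be sketched outside Ada. The following Python sketch (illustrative only; the function and variable names are mine) contrasts the two mechanisms for an out parameter when an exception propagates:

```python
# Two parameter-passing strategies for P's out parameter X, sketched
# with a dict standing in for a record with discriminant A.
def p_by_reference(x):
    x["A"] = 5           # the update is visible to the caller at once
    raise RuntimeError   # ... and survives the exception

def p_copy_out(x):
    local = dict(x)      # copy-in to a local temporary
    local["A"] = 5
    raise RuntimeError   # copy-back (x.update(local)) would only
                         # happen on normal return, so never occurs

y = {"A": 1}
try:
    p_by_reference(y)
except RuntimeError:
    ref_value = y["A"]   # discriminant already overwritten

y = {"A": 1}
try:
    p_copy_out(y)
except RuntimeError:
    copy_value = y["A"]  # discriminant retains its old value
```

The two handlers observe different values for Y.A, which is the submitter's point.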
*****************************************************************************
!section 03.08 Hans Hurvig 89-07-04 83-01325
!version 1983
!topic When is an access type constrained?
!reference 03.03(4), 12.03.04(4), AI-00301
It is not particularly clear exactly when an access type is
constrained. This has to be decided for the generic matching
rules, eg. element types of formal arrays, 12.3.4(4).
Consider:
subtype STR is STRING(1..10);
type A1 is access STRING;
type A2 is access STR;
subtype A3 is A1(1..10);
The basic rule is that a type is constrained if it has had
a constraint applied to it. Thus A3 is constrained, and A1 is
not, but what about A2? In most respects it behaves like A3,
but it has not been constrained explicitly.
An addition to the rule would thus be that an access type is as
constrained as its designated type, but then what about scalar
designated types? Is INTEGER constrained? What about access to
access types? Could such considerations be short-circuited by
saying that an access type with non-composite designated type is
always/never constrained because no constraint could ever be
applied anyway? Clarification is sought.
AI-00301 says that whatever constraint there is is irrelevant
for deciding staticness, and so avoids the issue altogether.
*****************************************************************************
!section 03.08 (00) F.Mazzanti 90-03-07 83-01360
!version 1983
!topic Operational meaning of access values
--
There seems to be no requirement in the Standard forcing the interpretation
of an access value as a real memory address in the storage.
As a consequence, it seems that an implementation is allowed, for example,
to implement a collection as a large array, using the indexes as the
corresponding values of the access type.
In this case the correspondence between the access values and the real
storage addresses is clearly lost.
Unfortunately, it seems to be a widespread practice to perform unchecked
conversions between SYSTEM.ADDRESS values and access values to access objects
stored in arbitrary parts of the memory. This kind of practice (which I believe
is contrary to the programming style encouraged by Ada) might appear to be even
justified by the note in 3.8(10) asserting that access values are called
pointers or references in some other languages (and which in other languages
usually correspond to real memory addresses).
--
Conclusion: It seems that either access values are required to correspond to
what in other languages are usually called pointers, i.e. to memory addresses,
explicitly disallowing in this case implementations doing something different,
or an implementation is really left free to use, for example, simple array
indexes as access values, and in this case a note pointing out this potential
difference should be added to the notes of section 3.8, just to make more
evident the high danger of the use of unchecked conversion between ADDRESS
values and access values.
--
*****************************************************************************
!section 03.08 (06) Norman Cohen 89-12-08 83-01345
!version 1983
!topic When is an access type constrained?
!reference AI-00839/00
3.3(4) states:
A type is a subtype of itself; such a subtype is said to be
_unconstrained_; it corresponds to a condition that imposes no
restriction.
Furthermore, by 3.3.2(6), in a subtype declaration,
If the subtype indication does not include a constraint, the subtype
is the same as that denoted by the type mark.
I resort to 1.5(3) to conclude that a subtype with a constraint imposed
upon it is constrained:
All other terms are in the English language and bear their natural
meaning, as defined in Webster's Third New International Dictionary of
the English Language.
The imposition of a constraint upon a subtype occurs only through the
elaboration of a subtype indication, as described in 3.3.2(6-9).
To answer the questions in comment 83-01325, STR and A3 are constrained;
A1, A2, INTEGER, and any base access type are unconstrained.
*****************************************************************************
!section 03.04 (10) M. Woodger 89-06-07 83-01282
!version 1983
!topic Rep clauses preceding type derivations
!reference AI-00138/09
AI-138/09 has not abandoned the requirement that a representation
clause for the parent type must precede the derivation in order
to influence the derived type. So it still supports (and clarifies)
3.4(10).
But your final recommendation in AI-599/01 and your comment of
89-01-22 do abandon this, and reject 3.4(10).
Consider the following examples.
Example 1
type A is (A1, A2, A3);
type B is new A;
X: B := A3;
-- forcing occurrence for type B:
-- compiler can choose size 2 and codes (1,2,3)
for A use (1, 2, 4);
-- determines codes for A
-- size must exceed 2
Here the representation clause for type A is given later than a
forcing occurrence for the derived type B. The clause for A
should not influence the choice of representation for B.
Example 2
type A is (A1, A2, A3);
type B is new A;
X: B := A3;
-- forcing occurrence for type B
W: A := A3;
-- forcing occurrence for type A
-- compiler need not choose same representation as B
In this case there is no representation clause for A. The
representation of B is forced to be chosen at the declaration
of X, and the representation of A is forced to be chosen at
the later declaration of W.
These representations need not be the same: the choice for B
should be independent of the (later) choice for A.
I see two possible ways to go:
(A) Maintain "linear elaboration" and retain 3.4(10).
This is closest to AI-138/10.
(B) Require "look ahead" for the parent type in order to
determine representation of the derived type.
This abandons 3.4(10) and changes the effect of AI-138/10.
The following draft recommendations would suffice for each case,
and would also answer the questions raised in AI-138/10:
A1 Append to the first sentence of the second paragraph of
the summary of AI-138/10:
"The effect of any corresponding implicit clause for this
aspect of the derived type is superseded at the place of
the explicit clause." (Answers second question.)
A2 Replace the first paragraph of the summary by:
"If an implicit representation clause exists for a
parent type, and has not been superseded by an explicit
representation clause prior to the derived type definition,
then there is a corresponding implicit representation
clause for the derived type." (Answers first question.)
A3 Add:
"There is an implicit representation clause for the derived
type corresponding to each aspect of the representation of
the parent type that has been determined by default prior
to the derived type definition."
B1 Append to the first sentence of the second paragraph of
the summary of AI-138/10:
"This explicit clause supersedes any corresponding implicit
clause for the same aspect of the derived type."
(Answers second question.)
B2 Replace the first paragraph of the summary by:
"There is an implicit representation clause for the derived
type corresponding to each aspect of the representation of
the parent type that is determined, whether this determination
is by default or by an explicit or implicit representation
clause, and whether determined before or after the derived
type definition." (Answers first question.)
In case (A), the only inherited representation requirements are
those established prior to derivation.
In case (B), whatever representation is ultimately decided for a
parent type is always inherited, but can be changed by explicit
representation clauses for the derived type.
Also in case (B), a forcing occurrence of the derived type is bound
by any choice ultimately made at a forcing occurrence of the parent
type.
My support is strongly for route (A).
*****************************************************************************