[Ada Information Clearinghouse]

Ada '83 Quality and Style:

Guidelines for Professional Programmers

Copyright 1989, 1991, 1992 Software Productivity Consortium, Inc., Herndon, Virginia.

CHAPTER 6: Concurrency

6.1 Tasking

Many problems map naturally to a concurrent programming solution. By understanding and correctly using the Ada language tasking features, you can produce solutions that are independent of the target implementation. Tasks provide a means, within the Ada language, of expressing concurrent, asynchronous threads of control, relieving programmers of the burden of explicitly controlling multiple concurrent activities.

Tasks cooperate to perform the required activities of the software. Synchronization is required between individual tasks. The Ada rendezvous provides a powerful mechanism for this synchronization.
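As a minimal sketch of the rendezvous mechanism (the task and entry names here are illustrative, not taken from the examples later in this chapter), an entry call in one task is paired with an accept statement in another:

```ada
task Logger is
   entry Put (Message : in String);
end Logger;

task body Logger is
begin
   loop
      -- The rendezvous: Logger suspends here until some caller
      -- reaches Logger.Put; then both tasks proceed together.
      accept Put (Message : in String) do
         null;  -- process Message while the caller waits
      end Put;
   end loop;
end Logger;

-- Elsewhere, a caller synchronizes with Logger by calling the entry:
--    Logger.Put ("Engine started");
```

During the accept body the caller remains suspended; when the accept body completes, both tasks continue independently.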

In this section...
6.1.1 Tasks
6.1.2 Anonymous Task Types
6.1.3 Dynamic Tasks
6.1.4 Priorities
6.1.5 Delay Statements
Summary of Guidelines from this section


6.1.1 Tasks

guideline

- Use tasks to model asynchronous entities within the problem domain.
- Use tasks to control and synchronize access to shared resources and devices.
- Use tasks to exploit real concurrency on multiprocessor targets.
- Use tasks to perform cyclic or high-priority activities.

example

Asynchronous entities are the naturally concurrent objects within the problem domain. These tend to be objects in the problem space that have state, such as elevators in an elevator control system or satellites in a global positioning system. The following is an example for an elevator control system:
------------------------------------------------------------------------ 
package Elevator_Objects is

   ... 
   type Elevator_States is (Moving, Idle, Stopped, At_Floor); 
   type Up_Down         is (Up, Down);
   
   --------------------------------------------------------------------- 
   task type Elevators is
   
      entry Initialize; 
      entry Close_Door; 
      entry Open_Door; 
      entry Stop; 
      entry Idle; 
      entry Start         (Direction        : in     Up_Down); 
      entry Current_State (My_State         :    out Elevator_States; 
                           Current_Location :    out Float);
                           
   end Elevators; 
   ---------------------------------------------------------------------
   
   ...
   
end Elevator_Objects; 
------------------------------------------------------------------------
A task that manages updates from multiple concurrent user tasks to a graphic display is an example of a control and synchronization task.

Multiple tasks that implement the decomposition of a large matrix multiplication algorithm are an example of an opportunity for real concurrency in a multiprocessor target environment. In a single-processor target environment, this approach may not be justified.

A task that updates a radar display every 30 milliseconds is an example of a cyclic activity supported by a task.

A task that detects an over-temperature condition in a nuclear reactor and performs an emergency shutdown of the systems is an example of a task to support a high priority activity.

rationale

These guidelines reflect the intended uses of tasks. They all revolve around the fact that a task has its own thread of control separate from the main subprogram. The conceptual model for a task is a separate program with its own virtual processor. This provides the opportunity to model entities from the problem domain in terms more closely resembling those entities, and the opportunity to handle physical devices as a separate concern from the main algorithm of the application. Tasks also allow naturally concurrent activities which can be mapped to multiple processors when available.

Resources shared between multiple tasks, such as devices and abstract data structures, require control and synchronization since their operations are not atomic. Drawing a circle on a display may require that many low level operations be performed without interruption by another task. A display manager would ensure that no other task accesses the display until all these operations are complete.
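A display manager of the kind described above might be sketched as follows (the entry names and parameters are illustrative). Because each rendezvous completes before the next is accepted, the low-level display operations of one caller are never interleaved with those of another:

```ada
task Display_Manager is
   entry Draw_Circle (X, Y : in Float; Radius : in Float);
   entry Draw_Line   (X1, Y1, X2, Y2 : in Float);
end Display_Manager;

task body Display_Manager is
begin
   loop
      select
         accept Draw_Circle (X, Y : in Float; Radius : in Float) do
            null;  -- many low-level display operations, uninterrupted
         end Draw_Circle;
      or
         accept Draw_Line (X1, Y1, X2, Y2 : in Float) do
            null;  -- likewise atomic with respect to other callers
         end Draw_Line;
      or
         terminate;
      end select;
   end loop;
end Display_Manager;
```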

Language Ref Manual references: 9.1 Task Specifications and Task Bodies, 9.2 Task Types and Task Objects


6.1.2 Anonymous Task Types

guideline

- Declare a single task of an anonymous type when only one task of that kind is needed.
- Declare a task type when multiple identical tasks are needed or when tasks must be created dynamically.

example

The example below illustrates the syntactic differences between the kinds of tasks discussed here. Buffer is static and has a name, but its type is anonymous. Because it is declared explicitly, the task type Buffer_Manager is not anonymous. Channel is static and has a name, and its type is not anonymous. Like all dynamic objects, Encrypted_Packet_Queue.all is essentially anonymous, but its type is not.
task      Buffer; 
task type Buffer_Manager; 
type Replaceable_Buffer is access Buffer_Manager;

... 
Encrypted_Packet_Queue : Replaceable_Buffer; 
Channel                : Buffer_Manager;

... 
Encrypted_Packet_Queue := new Buffer_Manager; 
...

rationale

The use of named tasks of anonymous type avoids a proliferation of task types that are only used once, and the practice communicates to maintainers that there are no other task objects of that type. If the need arises later to have additional tasks of the same type, then the work required to convert a named task to a task type is minimal.
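The conversion mentioned above might look like the following sketch (the entry Put and its parameter are assumed for illustration). Only the declarations change; an entry call such as Buffer.Put is unaffected as long as the object keeps its original name:

```ada
-- Before: a single task of anonymous type
task Buffer is
   entry Put (Item : in Integer);
end Buffer;

-- After: a named task type, with the original task re-created
-- as an object of that type (these replace the declaration above)
task type Buffer_Manager is
   entry Put (Item : in Integer);
end Buffer_Manager;

Buffer : Buffer_Manager;
```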

The consistent and logical use of task types, when necessary, contributes to understandability. Identical tasks can be derived from a common task type. Dynamically allocated task structures are necessary when you must create and destroy tasks dynamically or when you must reference them by different names.

note

Though changing the task from an anonymous type to a task type is trivial, structural changes to the software architecture may not be trivial. Introduction of multiple tasks of the task type may require the scope of the task type to change and may change the behavior of the network of synchronizing tasks.

Language Ref Manual references: 9.2 Task Types and Task Objects


6.1.3 Dynamic Tasks

guideline

- Use dynamically allocated task objects only when the number of tasks cannot be determined statically or when tasks must be created and destroyed during execution.
- Keep an access variable designating each dynamically allocated task so that the task can still be referenced or aborted.

example

The approach used in the following example is not recommended. The example shows why caution is required with dynamically allocated task objects: it illustrates how a dynamic task can be disassociated from its name.
task type Radar_Track; 
type      Radar_Track_Pointer is access Radar_Track;

Current_Track : Radar_Track_Pointer;

--------------------------------------------------------------------- 
task body Radar_Track is 
begin 
   loop 
      -- update tracking information 
      ... 
      -- exit when out of range 
      delay 1.0; 
   end loop;
   
... 
end Radar_Track; 
---------------------------------------------------------------------

... 
loop 
   ...
   
   -- Unless some code deals with non-null values of Current_Track, 
   -- (such as an array of existing tasks) 
   -- this assignment leaves the existing Radar_Track task running with 
   -- no way to signal it to abort or to instruct the system to 
   -- reclaim its resources. 
   Current_Track := new Radar_Track;
   
   ... 
end loop;

rationale

A dynamically allocated task object is a task object created by the execution of an allocator. Allocators can be used to avoid limiting the number of tasks. The decision to use dynamic tasks affects memory and timing requirements, for better or worse. Creation and deletion of dynamic tasks adversely affect performance, as does the scheduling of dormant static tasks. Dormant static tasks incur memory overhead that can be avoided with dynamic tasks, but creation and deletion of dynamic tasks are typically more expensive, in terms of CPU time, than scheduling overhead.

Allocated task objects referenced by access variables allow you to generate aliases: multiple references to the same object. Anomalous behavior can arise when you reference an aborted task by another name.

A dynamically allocated task that is not associated with a name (a "dropped pointer") cannot be referenced for the purpose of making entry calls, nor can it be the direct target of an abort statement (see Guideline 5.4.3).
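One way to avoid dropping the pointer in the Radar_Track example is to retain a second access variable for the outgoing task so that it can still be the target of an abort statement. This is only a sketch; whether to abort the old task or signal it to exit is an application-specific policy decision:

```ada
Old_Track : Radar_Track_Pointer;

...
Old_Track     := Current_Track;     -- keep a name for the old task
Current_Track := new Radar_Track;   -- allocate the replacement
if Old_Track /= null then
   abort Old_Track.all;             -- the old task is still reachable
end if;
```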

Language Ref Manual references: 3.8 Access Types, 4.8 Allocators, 9.3 Task Execution - Task Activation


6.1.4 Priorities

guideline

- Use pragma Priority to distinguish general levels of importance among tasks.
- Do not rely on task priorities to achieve mutual exclusion or synchronization.
- Remember that entry queues are serviced in FIFO order, not in order of priority.

example

For example, let the tasks have the following priorities:
task T1 ... pragma Priority (High)   ... Server.Operation ... 
task T2 ... pragma Priority (Medium) ... Server.Operation ... 
task Server                          ... accept Operation ...
At some point in its execution, T1 is blocked. Otherwise, T2 and Server may never execute. If T1 is blocked, it is possible for T2 to reach its call to Server's entry (Operation) before T1. Suppose this has happened and that T1 now makes its entry call before Server has a chance to accept T2's call.

This is the timeline of events so far:
T1 blocks 
T2 calls Server.Operation 
T1 unblocks 
T1 calls Server.Operation

Does Server accept the call from T1 or from T2?
Some people might expect that, due to its higher priority, T1's call would be accepted by Server before that of T2. However, entry calls are queued in first-in-first-out (FIFO) order and not queued in order of priority. Therefore, the synchronization between T1 and Server is not affected by T1's priority. As a result, the call from T2 is accepted first. This is a form of priority inversion.

A solution might be to provide an entry for a High priority user and an entry for a Medium priority user.
--------------------------------------------------------------------- 
task Server is 
   entry Operation_High_Priority; 
   entry Operation_Medium_Priority; 
   ... 
end Server;

--------------------------------------------------------------------- 
task body Server is 
begin

   loop 
      select 
         accept Operation_High_Priority do 
            Operation; 
         end Operation_High_Priority; 
      else  -- accept any priority
      
         select 
            accept Operation_High_Priority do 
               Operation; 
            end Operation_High_Priority; 
         or 
            accept Operation_Medium_Priority do 
               Operation; 
            end Operation_Medium_Priority; 
         or 
            terminate; 
         end select; 
      end select;
      
   end loop;
   
... 
end Server; 
---------------------------------------------------------------------
However, in this approach T1 still waits for one execution of Operation when T2 has already gained control of the task Server. In addition, the approach increases the communication complexity (see Guideline 6.2.6).

rationale

The pragma Priority allows relative priorities to be placed on tasks to accomplish scheduling. Precision becomes a critical issue with hard-deadline scheduling. However, there are certain problems associated with using priorities that warrant caution.

Priority inversion occurs when lower priority tasks are given service while higher priority tasks remain blocked. In the above example, this occurred because entry queues are serviced in FIFO order, not by priority. There is a related situation referred to as a race condition. A program like the one in the first example may often behave as expected, as long as T1 calls Server.Operation only when T2 is not already using Server.Operation or waiting for it. You cannot rely on T1 always winning the race, since that behavior would be due more to chance than to the programmed priorities. The outcome of a race can change when code is added to an unrelated task or when the program is ported to a new target. Task priorities are not a means of achieving mutual exclusion.

Arranging task bodies in order of priority will elaborate the higher priority tasks first.

exceptions

When there are dependencies between tasks, the dependencies will influence the order in which the tasks should be elaborated. In these cases, the dependencies in conjunction with the task priorities should be used to order the task bodies.

note

Work is being done to minimize these problems, including the introduction of a scheduling algorithm known as the priority ceiling protocol (Goodenough and Sha 1988). The priority ceiling protocol reduces the blocking time that causes priority inversion to only one critical region (defined by the entries in a task). The protocol also eliminates deadlock unless a task recursively tries to access a critical region. This protocol is based on priority inheritance and thus deviates from the standard Ada tasking paradigm.

Priorities are used to control when tasks run relative to one another. Among tasks that are not blocked waiting at an entry, the highest priority task is given precedence. However, the most critical tasks in an application do not always have the highest priority. For example, support tasks or tasks with small periods may have higher priorities because they need to run frequently.

Language Ref Manual references: 9.1 Task Specifications and Task Bodies, 9.3 Task Execution - Task Activation, 9.5 Entries, Entry Calls, and Accept Statements, 9.8 Priorities, B Predefined Language Pragmas


6.1.5 Delay Statements

guideline

- Do not depend on a delay statement suspending a task for exactly the interval specified; the interval is only a minimum.
- Compensate for drift in periodic tasks by computing each activation time from the intended time of the previous activation.
- Avoid busy waiting; a task with nothing to do should be blocked at an accept or select statement.

example

The phase of a periodic task is the fraction of a complete cycle elapsed as measured from a specified reference point. In the following example an inaccurate delay causes the phase of the periodic task to drift over time (i.e., the task starts later and later in the cycle):
Periodic: 
   loop 
      delay Interval; 
      ... 
   end loop Periodic;
The following example shows how to compensate for the inaccuracy of the delay statement; this approach works well when the periodic requirement can be satisfied with an average period. Drift is prevented by computing each activation time from the intended time of the previous activation (Next_Time + Interval) rather than from the time at which the task actually resumed:
No_Drift: 
   declare 
      use Calendar;
      
      -- Interval is a global constant of type Duration 
      Next_Time : Calendar.Time := Calendar.Clock + Interval;
      
   begin  -- No_Drift 
      Stable_Periodic: 
         loop 
            delay Next_Time - Clock; 
            ...
            
            Next_Time := Next_Time + Interval; 
         end loop Stable_Periodic; 
   end No_Drift;

rationale

The Ada language definition only guarantees that the delay time is a minimum. The meaning of a delay statement is that the task is not scheduled for execution before the interval has expired; the task becomes eligible to resume execution as soon as the specified amount of time has passed. However, there is no guarantee of when (or if) it is scheduled after that time.

A busy wait can interfere with processing by other tasks. It can consume the very processor resource necessary for completion of the activity for which it is waiting. Even a loop with a delay can have the effect of a busy wait if the planned wait is significantly longer than the delay interval. If a task has nothing to do, it should be blocked at an accept or select statement.
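The difference might be sketched as follows, assuming a server task body with an entry named Request. The first form polls and consumes the processor even when idle; the second suspends the task until a caller arrives:

```ada
-- Busy wait (not recommended): the else part runs continuously
loop
   select
      accept Request;
   else
      null;  -- nothing to do, but the loop spins anyway
   end select;
end loop;

-- Blocking wait: the task is suspended until a rendezvous occurs,
-- and may terminate when no callers can remain
loop
   select
      accept Request;
   or
      terminate;
   end select;
end loop;
```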

Using knowledge of the execution pattern of tasks to achieve timing requirements is nonportable. Ada does not specify the underlying scheduling algorithm.

Language Ref Manual references: 9.3 Task Execution - Task Activation, 9.6 Delay Statements, Duration, and Time, 9.7.3 Timed Entry Calls, C Predefined Language Environment

