ASTRA is an implementation of the AgentSpeak(L) programming language plus a number of extensions. In order to learn how to program with ASTRA, you first need to get a basic understanding of AgentSpeak(L). This guide provides an overview of AgentSpeak(L). We recommend that you read it before attempting any of the practical guides.

What is AgentSpeak(L)?

AgentSpeak(L) is an agent programming language that is based on Belief-Desire-Intention (BDI) theory. This theory models rational decision making as a process of state manipulation. The idea is that the current state of the world can be represented as a set of beliefs (basically, facts about the state of the world) and the ideal state of the world can be represented as a set of desires. The theory then maps out how a rational decision maker achieves its desires – that is, how it changes the world so that its desires become its beliefs. For instance, a decision maker may believe that it is in Ireland, but it may also have a desire to be in China. BDI theory attempts to explain how the decision maker selects some course of action so that it eventually believes that it is in China, thus satisfying its desire to be in China.

The way in which BDI theory achieves this is by adding a third state – intentions – defined as the subset of desires that the decision maker is committed to achieving. Why a subset? Basically, in BDI theory, it is considered acceptable for desires to be mutually inconsistent. That is, an agent can have two desires that cannot be realised at the same time. For example, in addition to desiring to be in China, our example decision maker may also desire to be in France. The problem is that there is a physical constraint on the achievement of these desires – the decision maker cannot be in two places at the same time – so it can only satisfy one of its desires at a time. This issue can be generalised to the idea that decision makers are resource-bounded entities that may not be able to achieve all of their desires due to a lack of sufficient resources. As a result, they must select a subset of those desires that they “believe” they are capable of realising given their resource constraints – these are their intentions. Once selected, the decision maker attempts to make its intentions into beliefs by identifying and following an appropriate course of action. The identification of this course of action, known as a plan, can be based on selecting the plan from a library of pre-written plans or on using a planner that constructs the plan on the fly (the beliefs are the start state and the intentions are the end/goal state). AgentSpeak(L) adopts the former approach (a plan library).

There are two further refinements to BDI theory. First, the concept of a goal is often introduced as a replacement for desires. Goals are defined as a mutually consistent set of desires (so the decision maker could desire to be in both China and France, but could only have a goal to be in one of those places). AgentSpeak(L) adopts goals as the representation of future state. The second refinement concerns how to represent intentions. In the pure model, intentions are a subset of desires, but intentions are associated with commitment. This implies that a decision maker has some “plan of action” for achieving its intentions. As such, it is possible to represent intentions either as state (intention-to-achieve) or as the plan that will bring about that state (intention-to-do). AgentSpeak(L) adopts the latter model of intention (intention-to-do).

AgentSpeak(L) defines a set of programming constructs, encoded using a specific syntax and supported by a corresponding interpreter algorithm that is based on a Belief-Goal-Intention(to-do) model of rational decision making.  The core constructs provided are:

  • Beliefs: predicate formulae representing facts about the state of the agent's environment. Together, the set of beliefs held by an agent is equivalent to the state of an object.
  • Goals: predicate formulae (prefixed with a bang operator (!)) that identify what the agent wants to do. Goals are not stored explicitly in the agent state; instead, they are declared as required and mapped contextually to a behaviour that will realise the goal. Goals are equivalent to method calls in object-oriented programming. The mapping is achieved through the use of events and associated event handlers, known as plan rules.
  • Events: Events drive the behaviour of an agent. Internally, the agent contains an event queue. On each iteration of the interpreter, one event is selected from the event queue and processed through contextual mapping of the event to an event handler (plan rule). AgentSpeak(L) includes events for: the adoption of new beliefs, the retraction of existing beliefs, and the adoption of goals. There is no direct analogue of events in object-oriented programming – perhaps the closest concept they map onto is the message that is received by an object (which is matched against one of the methods supported by the object).
  • Plan Rules: Plan rules are the heart of an agent program; they define the core behaviours of the agent, contextually mapping those behaviours to the events that trigger them. Behaviours are specified as a sequence of plan operators, which support the following functionality: belief adoption/retraction, subgoal adoption, belief querying, and private actions. Plan rules are equivalent to methods in object-oriented programming, where the triggering event is equivalent to the method signature and the behaviour is equivalent to the method implementation. A small annotated sketch combining these constructs is shown below.
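
To make these constructs concrete, the sketch below combines all four in a single fragment, written in the style of the examples later in this guide. It is illustrative only: the // comments are annotations (comment syntax varies between AgentSpeak(L) implementations), and the quoted string argument is an assumption about how the println action is invoked.

01 light(on).         // a belief: a fact about the environment
02 
03 !init().           // an initial goal: generates a +!init() goal adoption event
04 
05 +!init() <-        // a plan rule triggered by the +!init() event
06     println("turning the light off");    // a private action
07     -light(on);    // belief retraction
08     +light(off).   // belief adoption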

The core interpreter cycle for AgentSpeak(L) can be reduced to the following steps:

1. select an event, e, from the agent's event queue

2. match the event to a plan rule, p, whose triggering event matches e and whose context is satisfied.

3. if the event is a belief adoption/retraction event or an initial goal adoption event, then create
   a new intention to process the behaviour specified in p; otherwise (the event was generated by a
   subgoal) update the intention that generated e to also process the behaviour specified in p.

4. select one intention, i, and execute its next step.

5. return to step 1.

“Hello World” with AgentSpeak(L)

As a first example of AgentSpeak(L), we present the basic hello world program. This program consists of two statements: an initial goal (line 01) and a plan rule (lines 03-04). As can be seen, statements are terminated by a period (.). The first statement declares a goal, !init(). This goal results in a goal adoption event being added to the agent's event queue. This is done only once, before the first iteration of the interpreter. The second statement is a plan rule. This rule is designed to handle the goal adoption event generated by the first statement. To specify a goal adoption event, the goal is simply prefixed by a + operator. The arrow (<-) operator is used to separate the triggering event from the behaviour implementation (which is on line 04). The behaviour contains a single plan operator – a private action that prints out its argument to the console.

01 !init().
02 
03 +!init() <-
04     println("hello world").

In terms of execution: on the first iteration, the interpreter selects the +!init() event, matches this event to the rule, and creates an intention to execute the behaviour associated with the rule. Next, the interpreter selects this newly created intention and executes its next step, which in this case involves “hello world” being printed out (this is an example of a private action). Upon completion of the step, the intention is marked as completed and dropped. The agent continues to execute, but it never generates another event. This means that it never adopts another intention, which in turn means that the program does nothing more.

Declaring and Handling Subgoals

The second example program illustrates the use of goals (and in particular, subgoals) in AgentSpeak(L) programs. The program itself is a slightly modified version of the Hello World program that moves the code to print out “Hello World” into a subgoal.

01 !init().
02 
03 +!init() <-
04     !printHello().
05
06 +!printHello() <-
07     println("hello world").

This program is a slight modification of the previous program in which the print action is moved to a separate rule that handles the adoption of the !printHello() goal (lines 06-07). This goal is invoked as a subgoal on line 04 of the program (in the previous program, this line contained the actual print action).

In terms of execution, the following happens: on iteration 1, the agent removes the +!init() goal adoption event from the event queue and matches it to the first rule (lines 03-04), causing an intention to be created. This intention is then selected by the agent and the first step is executed. This step is a subgoal plan operator, which has the effect of creating a +!printHello() goal adoption event. Because it is a subgoal, the intention is also suspended (this means that the intention cannot be selected for execution). The goal adoption event also includes a reference to this intention, indicating that the event corresponds to a subgoal.

On iteration 2, the agent removes the +!printHello() goal adoption event from the event queue and matches it to the second rule (lines 06-07). Because the event was generated by a subgoal plan operator, the agent appends the plan part of the rule to the intention from which the subgoal was invoked, and resumes that intention. The result of this is that the agent has a single intention that combines the first and the second rules. This is achieved by making an intention a stack. Each element of the stack contains a plan body and a program counter that indicates which step of that plan body is next. In this example, after the second event is handled, the intention contains two elements: an entry that represents the body of the !init() rule (lines 03-04) with a program counter indicating that the first step has been completed, and a second entry that represents the body of the !printHello() rule (lines 06-07) indicating that no steps have been completed. The second entry is at the top of the stack. This intention is then selected by the agent, and the next step is executed. In this case, the agent peeks at the top of the stack and executes the first step of the second entry (which calls the print action).

On iteration 3, the agent has no new events to process, so it simply selects the intention and executes the next step. When it peeks at the top entry in the intention, it notes that the entry is completed, so it removes that entry and then peeks at the new top entry. Again, the agent notices that this entry is also complete, so it removes the second entry, leaving the stack empty. This indicates to the agent that the intention has been completed, so it is dropped.
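
To show how information can be passed into a subgoal, here is a hedged variant of this program; the parameterised goal !say(X), its Prolog-style variable X, and the quoted string arguments are illustrative assumptions written in the style of the examples above.

01 !init().
02 
03 +!init() <-
04     !say("hello");
05     !say("world").
06 
07 +!say(X) <-
08     println(X).

When executed, the intention created for !init() is suspended at line 04 while !say("hello") is handled, resumes, and is suspended again at line 05, so the agent prints “hello” and then “world” using a single intention stack.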

Managing Beliefs with AgentSpeak(L)

This final example illustrates how AgentSpeak(L) permits the modification of the agent's internal state through the belief update plan operators. One operator is provided to support the addition of new beliefs and a second operator is provided to support the removal of existing beliefs. No operator is provided for the modification of an existing belief (this is achieved through the retraction of the existing belief and the subsequent adoption of the new belief).

01 light(on).
02
03 +light(on) <-
04     println("the light is on, turn it off!");
05     -light(on);
06     +light(off).
07     
08 +light(off) <-
09     println("the light is off, turn it on!");
10     -light(off);
11     +light(on).

This program includes an initial belief, representing the fact that a light is on, and two rules. The triggering event of the first rule (lines 03-06) is the event that the agent adopts a belief that the light is on (like the initial belief). The body of the rule consists of a sequence of three actions: (1) it prints out a message to the console, (2) it retracts the belief that the light is on, and (3) it adopts a belief that the light is off. The second rule (lines 08-11) does the opposite of this – it retracts the belief that the light is off and adopts the belief that the light is on. In terms of behaviour, this program implements an infinite loop in which the two rules trigger each other alternately. In fact, the last operation of each rule generates the event that triggers the next rule.

In terms of execution, the following happens: on iteration 1, the agent adopts the belief that the light is on and adds the associated belief adoption event to the event queue. The agent then selects that event from the event queue and handles it by matching the event with the first rule (lines 03-06) and adopting a new intention that contains the associated plan. The agent then selects this intention for execution and executes the first step of the plan, which prints out “the light is on, turn it off!”.

On iteration 2, the agent does not select an event because the event queue is empty. It does, however, select the intention again, this time executing the second step of the plan, which causes the belief light(on) to be dropped. This action has the side effect of generating a belief retraction event that is added to the agent's event queue.

On iteration 3, the agent selects the belief retraction event from the event queue and attempts to match it against a rule. No matching rule exists, so this event is ignored (in some implementations the event queue is filtered so that this type of event is never added, as it can never affect the behaviour of the agent). The intention is selected for a third time, and the last step is executed, resulting in the belief light(off) being adopted. Again, this has a side effect – namely the generation of a belief adoption event, which is added to the event queue. At the end of this iteration, the intention is marked as completed and dropped.

On iteration 4, the agent selects the belief adoption event and matches it to the second rule (lines 08-11). This results in the adoption of a new intention that contains the associated plan. The agent selects this intention and executes the first step, which results in the following message being printed to the console: “the light is off, turn it on!”. Over the next two iterations, the agent drops the belief light(off) and adopts the belief light(on), resulting in the first rule being triggered again and the behaviour described for iterations 1-3 being repeated. This finishes on iteration 9, resulting in the behaviour of iterations 4-6 being repeated, and so on. In fact, the overall behaviour of the agent is an infinite loop in which the agent repeatedly prints the statement that the “light is on…” followed by the statement that the “light is off…”.
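
The belief retraction events in this trace are ignored only because the program contains no rule for them. As a hedged sketch, a rule that reacts to a belief retraction event prefixes the belief with a - operator in its triggering event (this follows the usual AgentSpeak(L) convention; the message text is illustrative):

01 -light(on) <-
02     println("the light is no longer on").

If this rule were added to the program above, the retraction event generated on iteration 2 would match it on iteration 3, creating a second, independent intention.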

Summary

Congratulations! You have completed the Introduction to AgentSpeak(L). This should give you a basic awareness of how AgentSpeak(L), and by association ASTRA, works. As a recap, AgentSpeak(L) is an event-driven language. Two main types of event exist: belief events and goal events.

Belief events are added whenever the beliefs (state) of the agent change. Belief events are generated for the addition of new beliefs or the dropping of existing beliefs. Goal events are added when the agent is created (initial goals) or when the agent reaches a decision point. The idea of a goal is to indicate what you want to happen next without having to specify how it will be done.

Agents make decisions by processing events. Events are processed in the order they arrive. Processing an event involves matching the event to a plan that is applicable in the current context. A plan's applicability is determined by a context condition, which is a bit like a guard in an if statement. If the context is true with respect to the current beliefs (state) of the agent, then the rule is applicable; otherwise, it is not. When processing an event, the agent identifies all plans whose triggering event matches the event being processed. Those plans are then filtered for applicability (we remove any whose context is not true) and a single plan is chosen from those that remain. Typically, this is determined based on the order in which the plans were written in the agent program (nearer the start of the program = higher priority).
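
None of the example programs in this guide needed a context condition, so the sketch below illustrates one using the common AgentSpeak(L) convention of writing the context between a colon and the <- operator (the exact ASTRA syntax may differ, so treat this as illustrative):

01 +!toggle() : light(on) <-
02     -light(on);
03     +light(off).
04 
05 +!toggle() : light(off) <-
06     -light(off);
07     +light(on).

Both rules handle the same +!toggle() goal adoption event; the agent's current belief about the light determines which rule is applicable, so only one of them is chosen.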

Once a plan is selected, the agent must execute it. This is done by adopting a new intention or by refining an existing intention. In the case where the event is a belief event or an initial goal event, a new intention is created. In the case where the event is a subgoal event, the plan is added to the existing intention. At any point in time, an agent can have multiple intentions. Intentions are executed in parallel: on each iteration of the agent interpreter, a single intention is selected (if one exists) and the next step of that intention is executed. A sketch of an agent with two intentions is shown below.
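
As a final hedged sketch, the program below declares two initial goals; the goal names are hypothetical. Each initial goal generates its own goal adoption event, so the agent ends up with two separate intentions:

01 !ping().
02 !pong().
03 
04 +!ping() <-
05     println("ping").
06 
07 +!pong() <-
08     println("pong").

Because each plan body contains a single step, each intention completes after one execution; with longer bodies, the interpreter could interleave their steps (the exact intention selection strategy is implementation-specific).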

See Also

As a next step, we advise you to start with the Building ASTRA Projects with Maven guide. This will take you through the ASTRA equivalent of the “Hello World” agent that was described above. After this, you will be ready to explore one of the other guides listed below:

  • t.b.c.