Introduction to AgentSpeak(L)

AgentSpeak(L) is an agent programming language based on Belief-Desire-Intention (BDI) theory. This theory models rational decision making as a process of state manipulation. The idea is that the current state of the world can be represented as a set of beliefs (essentially, facts about the state of the world) and the ideal state of the world can be represented as a set of desires. The theory then maps out how a rational decision maker achieves its desires – that is, how it changes the world so that its desires become its beliefs. For instance, a decision maker may believe that it is in Ireland, but it may also have a desire to be in China. BDI theory attempts to explain how the decision maker selects some course of action so that it eventually believes that it is in China, thus satisfying its desire to be in China.

The way in which BDI theory achieves this is by adding a third state – intentions – defined as the subset of desires that the decision maker is committed to achieving. Why a subset? Basically, in BDI theory, it is considered acceptable for desires to be mutually inconsistent. That is, an agent can have two desires that cannot be realised at the same time. For example, in addition to desiring to be in China, our example decision maker may also desire to be in France. The problem is that there is a physical constraint on the achievement of these desires – the decision maker cannot be in two places at the same time – so it can only satisfy one of them at a time. This issue generalises to the idea that decision makers are resource-bounded entities and may not be able to achieve all their desires due to a lack of sufficient resources. As a result, they must select a subset of those desires that they “believe” they are capable of realising given their resource constraints – these are their intentions. Once selected, the decision maker attempts to turn its intentions into beliefs by identifying and following an appropriate course of action. The identification of this course of action, known as a plan, can be based either on selecting the plan from a library of pre-written plans or on using a planner that constructs the plan on the fly (the beliefs form the start state and the intentions form the end/goal state). AgentSpeak(L) adopts the former approach (a plan library).

There are two further refinements to BDI theory. First, the concept of a goal is often introduced as a replacement for desires. Goals are defined as a mutually consistent set of desires (so the decision maker could desire to be in both China and France, but could only have a goal to be in one of those places). AgentSpeak(L) adopts goals as the representation of future state. The second refinement concerns how to represent intentions. In the pure model, intentions are a subset of desires, but intentions are associated with commitment, which implies that a decision maker has some “plan of action” for achieving its intentions. As such, it is possible to represent intentions either as the state to be achieved (intention-to-achieve) or as the plan that will bring about that state (intention-to-do). AgentSpeak(L) adopts the latter model of intention (intention-to-do).

AgentSpeak(L) defines a set of programming constructs, encoded using a specific syntax and supported by a corresponding interpreter algorithm that is based on a Belief-Goal-Intention(to-do) model of rational decision making.  The core constructs provided are:

  • Beliefs: predicate formulae representing facts about the state of the agent's environment. Taken together, the beliefs held by an agent are equivalent to the state of an object.
  • Goals: predicate formulae, prefixed with the bang operator (!), that identify what the agent wants to do. Goals are not stored explicitly in the agent's state; instead, they are declared as required and mapped contextually to a behaviour that will realise them. Goals are equivalent to method calls in object-oriented programming. The mapping is achieved through the use of events and associated event handlers, known as plan rules.
  • Events: Events drive the behaviour of an agent. Internally, the agent maintains an event queue; on each iteration of the interpreter, one event is selected from the queue and processed by contextually mapping it to an event handler (plan rule). AgentSpeak(L) includes events for: the adoption of new beliefs, the retraction of existing beliefs, and the adoption of goals. There is no direct analogue of events in object-oriented programming – perhaps the closest concept is the message received by an object, which is matched against one of the methods the object supports.
  • Plan Rules: Plan rules are the heart of an agent program; they define the core behaviours of the agent, contextually mapping those behaviours to the events that trigger them. Behaviours are specified as a sequence of plan operators, which support the following functionality: belief adoption/retraction, sub-goal adoption, belief querying, and private actions. Plan rules are equivalent to methods in object-oriented programming, where the triggering event corresponds to the method signature and the behaviour to the method implementation. A short example program illustrating each of these constructs is given after this list.
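
As a concrete illustration, the fragment below sketches a small agent program for the travel example used earlier, written in AgentSpeak(L) notation as used by the Jason interpreter (the best-known implementation of the language): +b and -b denote belief adoption and retraction, +!g denotes a goal adoption event, ?b denotes a belief query, the context follows the ':' and the behaviour follows the '<-'. The predicate and action names (location, visa, travel, board_flight, check_in, board, celebrate) are invented purely for illustration.

    // Initial beliefs: facts about the current state of the environment
    location(ireland).
    visa(china, granted).

    // Initial goal: something the agent wants to do
    !travel(china).

    // Plan rule: triggered by the goal adoption event +!travel(Dest);
    // the context (after ':') must be satisfied by the current beliefs
    // before the behaviour (after '<-') is adopted.
    +!travel(Dest) : location(Here)
       <- ?visa(Dest, granted);        // belief query
          !board_flight(Here, Dest);   // sub-goal adoption
          -location(Here);             // belief retraction
          +location(Dest).             // belief adoption

    // Plan rule handling the sub-goal posted above
    +!board_flight(From, To) : true
       <- check_in(From, To);          // private action
          board(To).                   // private action

    // Plan rule: triggered by the belief adoption event +location(china)
    +location(china) : true
       <- celebrate.                   // private action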

The core interpreter cycle for AgentSpeak(L) can be reduced to the following steps:

1. select an event, e, from the agent's event queue

2. match the event to a plan rule, p, whose triggering event matches e and whose context is satisfied.

3. if the event is a belief adoption/retraction event, then create a new intention to process the
   behaviour specified in p; otherwise, update the intention that generated e so that it also
   processes the behaviour specified in p.

4. select one intention, i, and execute its next step.

5. return to step 1.
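
As a worked illustration of these steps, the trace below sketches one possible execution of the example program given earlier. Note that AgentSpeak(L) leaves the selection functions open (which event, which applicable plan, and which intention to pick), so different interpreters may interleave the steps differently.

    // - The goal adoption event +!travel(china) is selected (step 1); it
    //   matches the plan +!travel(Dest) : location(Here) <- ... with
    //   Dest = china and Here = ireland, and the context is satisfied by
    //   the belief base (step 2). The plan's behaviour is attached to an
    //   intention (step 3), and its first operator, the belief query
    //   ?visa(china, granted), is executed when that intention is
    //   selected (step 4).
    //
    // - Executing the next operator, !board_flight(ireland, china), posts
    //   a new goal adoption event. Because this event was generated by an
    //   existing intention, the matching plan is pushed onto the SAME
    //   intention when the event is processed (step 3), and its operators
    //   (check_in, board) are executed over subsequent cycles.
    //
    // - Executing +location(china) posts a belief adoption event. When it
    //   is selected, a NEW intention is created (step 3) to execute the
    //   body of the +location(china) plan, i.e. the private action
    //   celebrate.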