Intelligent Agents
Franco GUIDI POLANCO
Politecnico di Torino / CIM Group
http://www.cim.polito.it
[email protected]
09-APR-2003
Agenda
Introduction
Abstract Architectures for Autonomous Agents
Concrete Architectures for Intelligent Agents
Multi-Agent Systems
Summary
Introduction
What agents are
“One who is authorised to act for or in place of
another as a : a representative, emissary, or official
of a government <crown agent> <federal agent> b :
one engaged in undercover activities (as espionage)
: SPY <secret agent> c : a business representative
(as of an athlete or entertainer) <a theatrical agent>”
What agents are
"An agent is anything that can be viewed as
perceiving its environment through sensors
and acting upon that environment through
effectors."
Russell & Norvig
What agents are
"Autonomous agents are computational
systems that inhabit some complex dynamic
environment, sense and act autonomously
in this environment, and by doing so realize a
set of goals or tasks for which they are
designed."
Pattie Maes
What agents are
“Intelligent agents continuously perform three
functions: perception of dynamic conditions in
the environment; action to affect conditions in
the environment; and reasoning to interpret
perceptions, solve problems, draw
inferences, and determine actions.”
Barbara Hayes-Roth
What agents are
"Intelligent agents are software entities that
carry out some set of operations on behalf of
a user or another program with some degree
of independence or autonomy, and in so
doing, employ some knowledge or
representation of the user's goals or desires."
IBM's Intelligent Agent Strategy white paper
What agents are
Definition that refers to “agents” (and not
“intelligent agents”):
“An agent is a computer system that is
situated in some environment, and that is
capable of autonomous action in this
environment in order to meet its design
objectives.”
Wooldridge & Jennings
Agents & Environments
The agent takes sensory input from its environment,
and produces as output actions that affect it.
Agents & Environments (cont.)
In complex environments:
• An agent does not have complete control over its environment; it has only partial control.
• Partial control means that the agent can influence the environment with its actions.
• An action performed by an agent may fail to have the desired effect.
Conclusion: environments are, in general, nondeterministic, and agents must be prepared for the possibility of failure.
Agents & Environments (cont.)
Effectoric capability: the agent's ability to modify its environment.
Actions have pre-conditions
Key problem for an agent: deciding which of
its actions it should perform in order to best
satisfy its design objectives.
Examples of agents
• Control systems (e.g. a thermostat)
• Software daemons (e.g. a mail client)
But… are they known as Intelligent Agents?
What is “intelligence”?
What intelligent agents are
“An intelligent agent is one that is capable of flexible autonomous action in order to meet its design objectives, where by flexible I mean three things:
• reactivity: intelligent agents are able to perceive their environment, and respond in a timely fashion to changes that occur in it in order to satisfy their design objectives;
• pro-activeness: intelligent agents are able to exhibit goal-directed behaviour by taking the initiative in order to satisfy their design objectives;
• social ability: intelligent agents are capable of interacting with other agents (and possibly humans) in order to satisfy their design objectives.”
Wooldridge & Jennings
Agent characteristics
Weak notion of agent:
• Autonomy
• Proactiveness (goal-oriented)
• Reactivity
• Social ability (a.k.a. communicativeness)
Strong notion of agent: the weak notion plus
• Mobility
• Veracity
• Benevolence
• Rationality
An agent has the weak agent characteristics; it may also have the strong agent characteristics. (Amund Tveit)
Objects & Agents
A method call such as sayHelloToThePeople() makes an object produce the behaviour (“Hello People!”) directly, whereas an agent asked to say hello to the people decides for itself whether to comply.
• Objects (classes) control their state.
• Agents control both their state and their behaviour.
“Objects do it for free; agents do it for money”
Objects & Agents (cont.)
Distinctions:
• Agents embody a stronger notion of autonomy than objects.
• Agents are capable of flexible (reactive, pro-active, social) behaviour.
• A multi-agent system is inherently multi-threaded.
Abstract Architectures for
Autonomous Agents
Formalization
Agents:
• Standard agents
• Purely reactive agents
• Agents with state
Environments
History
Perception
Agents & Environments
The agent's environment states are characterised by a set
    S = { s1, s2, … }
The effectoric capability of the agent is characterised by a set of actions
    A = { a1, a2, … }
Standard agents
A standard agent decides what action to perform on the basis of its history (its experience so far).
A standard agent can be viewed as a function
    action: S* → A
where S* is the set of all finite sequences of elements of S.
Environments
Environments can be modelled as a function
    env: S × A → P(S)
where P(S) is the powerset of S.
This function takes the current state of the environment s ∈ S and an action a ∈ A (performed by the agent), and maps them to a set of possible resulting environment states env(s, a).
• Deterministic environment: every set in the range of env is a singleton.
• Non-deterministic environment: otherwise.
(A code sketch follows.)
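A small sketch of the env function and the determinism test described above (my own illustration, not from the slides; the states, actions and transitions are hypothetical):

from itertools import product

S = ["s1", "s2"]                      # hypothetical environment states
A = ["a1", "a2"]                      # hypothetical actions

def env(s, a):
    """env: S x A -> P(S): the set of possible successor states."""
    if s == "s1" and a == "a1":
        return {"s1", "s2"}           # this transition is non-deterministic
    return {"s2"}

def is_deterministic(env, S, A):
    """The environment is deterministic iff every set in the range of env is a singleton."""
    return all(len(env(s, a)) == 1 for s, a in product(S, A))

print(is_deterministic(env, S, A))    # False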
History
A history represents the interaction between an agent and its environment. A history is a sequence
    h: s0 --a0--> s1 --a1--> s2 --a2--> … --a(u-1)--> su --au--> …
where:
• s0 is the initial state of the environment
• au is the u-th action that the agent chose to perform
• su is the u-th environment state
Purely reactive agents
A purely reactive agent decides what to do without reference to its history (no reference to the past).
It can be represented by a function
    action: S → A
Example: a thermostat, with environment states "temperature OK" and "too cold":
    action(s) = heater off   if s = temperature OK
                heater on    otherwise
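The thermostat as executable code, following the states and actions given above (a minimal sketch):

def action(s):
    """Purely reactive agent: action: S -> A, no reference to history."""
    return "heater off" if s == "temperature OK" else "heater on"

print(action("temperature OK"))   # heater off
print(action("too cold"))         # heater on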
Perception
see and action functions:
Perception (cont.)
Perception is the result of the function
    see: S → P
where P is a (non-empty) set of percepts (perceptual inputs).
The action-selection function then becomes
    action: P* → A
which maps sequences of percepts to actions.
Perception ability
Perceptual ability ranges from non-existent (MIN: |E| = 1, all states produce the same percept) to omniscient (MAX: |E| = |S|, every state can be distinguished), where E is the set of different perceived states.
Two different states s1 ∈ S and s2 ∈ S (with s1 ≠ s2) are indistinguishable if see(s1) = see(s2).
Perception ability (cont.)
Example:
    x = "The room temperature is OK"
    y = "There is no war at this moment"
then
    S = { (x, y), (x, ¬y), (¬x, y), (¬x, ¬y) }
          s1       s2        s3        s4
but for the thermostat:
    see(s) = p1   if s = s1 or s = s2
             p2   if s = s3 or s = s4
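The same see function as code (a sketch; encoding the states as (x, y) truth-value pairs is my own choice, not the slides'):

def see(s):
    """see: S -> P for the thermostat: only the temperature component is perceived."""
    temperature_ok, _no_war = s       # the second component is invisible to the thermostat
    return "p1" if temperature_ok else "p2"

# s1 = (x, y) and s2 = (x, not y) are indistinguishable: both map to p1
print(see((True, True)), see((True, False)), see((False, True)), see((False, False)))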
Agents with state
see, next and action functions
Agents with state (cont.)
The perception function is unchanged:
    see: S → P
The action-selection function is now:
    action: I → A
where I is the set of all internal states of the agent.
An additional state-update function is introduced:
    next: I × P → I
Agents with state (cont.)
Behaviour (a code sketch follows this list):
• The agent starts in some initial internal state i0.
• It then observes its environment state s.
• The internal state of the agent is updated to next(i0, see(s)).
• The action selected by the agent becomes action(next(i0, see(s))), and is performed.
• The agent repeats the cycle, observing the environment again.
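A minimal control loop for an agent with state, assuming the see, next and action functions and an environment step function are supplied by the caller (my own sketch, not from the slides):

def run_agent(i0, s0, see, next_fn, action, step, cycles=10):
    """Perception-action loop for an agent with internal state.

    see:     S -> P        perception
    next_fn: I x P -> I    internal-state update ("next" in the slides)
    action:  I -> A        action selection
    step:    S x A -> S    one environment transition (assumed deterministic here)
    """
    i, s = i0, s0
    for _ in range(cycles):
        i = next_fn(i, see(s))    # update the internal state from the current percept
        a = action(i)             # select an action on the basis of the internal state
        s = step(s, a)            # the environment reacts to the action
    return i, s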
Concrete Architectures for
Intelligent Agents
Classes of agents
Logic-based agents
Reactive agents
Belief-desire-intention agents
Layered architectures
Logic-based architectures
The "traditional" approach to building artificially intelligent systems:
• Logical formulas: a symbolic representation of the environment and of the desired behaviour.
• Logical deduction (theorem proving): syntactic manipulation of this representation.
Example formulas: grasp(x), Kill(Marco, Caesar), Pressure(tank1, 220)
Logic-based architectures:
example
A cleaning robot:
• In(x, y): the agent is at (x, y)
• Dirt(x, y): there is dirt at (x, y)
• Facing(d): the agent is facing direction d
Logic-based architectures:
abstraction
Let L be the set of sentences of classical first-order logic.
Let D = P(L) be the set of L databases (the internal state of the agent is an element of D), and let Δ1, Δ2, … be members of D.
The agent's decision-making rules are modelled through a set of deduction rules ρ.
Δ ⊢ρ φ means that formula φ can be proved from the database Δ using only the deduction rules ρ.
Logic-based architectures:
abstraction (cont.)
The perception function remains unchanged:
    see: S → P
The next function is now:
    next: D × P → D
The action function becomes:
    action: D → A
Logic-based architectures:
abstraction (cont.)
The pseudo-code of the function action is:

    begin function action
        for each a ∈ A do
            if Δ ⊢ρ Do(a) then return a
        for each a ∈ A do
            if Δ ⊬ρ ¬Do(a) then return a
        return null
    end function action
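A toy version of this selection rule in Python (my own illustration; the proves stub stands in for a real theorem prover, and formulas are plain strings):

def proves(db, formula):
    """Hypothetical stand-in for deduction: here, simply membership in the database."""
    return formula in db

def select_action(db, actions):
    """First try an action the database explicitly prescribes;
    then fall back to any action that is not explicitly forbidden."""
    for a in actions:
        if proves(db, f"Do({a})"):
            return a
    for a in actions:
        if not proves(db, f"not Do({a})"):
            return a
    return None

db = {"Do(clean)", "not Do(idle)"}
print(select_action(db, ["idle", "clean"]))   # clean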
Reactive architectures
Driving forces (a code sketch follows this list):
• Rejection of symbolic representations.
• Rational behaviour is seen as innately linked to the environment the agent occupies.
• Intelligent behaviour emerges from the interaction of various simpler behaviours.
Behaviours map situations directly to actions: situation → action
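A small sketch of situation-to-action rules with fixed priorities (my own illustration; the behaviours loosely follow the mobile-robot example on the next slide and are not from the slides):

def avoid_obstacle(percept):
    """Highest priority: turn if an obstacle is closer than a threshold."""
    if percept["obstacle_distance"] < 1.0:
        return "turn left"
    return None

def go_to_goal(percept):
    """Lower priority: head towards the goal position."""
    return "move towards goal"

BEHAVIOURS = [avoid_obstacle, go_to_goal]     # ordered by priority

def action(percept):
    """Fire the first behaviour whose situation matches; no symbolic reasoning is involved."""
    for behaviour in BEHAVIOURS:
        a = behaviour(percept)
        if a is not None:
            return a
    return "do nothing"

print(action({"obstacle_distance": 0.5}))     # turn left
print(action({"obstacle_distance": 5.0}))     # move towards goal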
Reactive architectures:
example
A mobile robot that avoids obstacles:
• ActionGoTo(x, y): moves to position (x, y)
• ActionAvoidFront(z): turns left or right if there is an obstacle at a distance of less than z units
Belief-Desire-Intention (BDI)
architectures
BDI architectures have their roots in the study of practical reasoning, which involves two processes:
• Deliberation: deciding what goals we want to achieve.
• Means-ends reasoning: deciding how we are going to achieve these goals.
BDI architectures (cont.)
First, try to understand what options are available. Then choose between them, and commit to some.
These chosen options become intentions, which then determine the agent's actions.
BDI architectures (cont.)
Intentions are important in practical reasoning:
• Intentions drive means-ends reasoning.
• Intentions constrain future deliberation.
• Intentions persist.
• Intentions influence the beliefs upon which future reasoning is based.
BDI architectures:
reconsideration of intentions
Example (taken from Cisneros et al.):
Time t = 0
• Desire: kill the alien
• Intention: reach point P
• Belief: the alien is at P
BDI architectures:
reconsideration of intentions
Time t = 1 (the alien has moved from P to Q)
• Desire: kill the alien
• Intention: reach point P
• Belief: the alien is at P (now wrong!)
BDI architectures:
reconsideration of intentions
Dilemma:
• If intentions are not reconsidered sufficiently often, the agent can continue to pursue an unreachable or no longer valid goal (bold agents).
• If intentions are constantly reconsidered, the agent can fail to devote sufficient effort to achieving any goal (cautious agents).
Some experimental results:
• Environments with a low rate of change: bold agents do better than cautious ones.
• Environments with a high rate of change: the opposite.
Layered architectures
Layered architectures satisfy the requirement of integrating reactive and proactive behaviour.
Two types of control flow (a code sketch of the horizontal case follows):
• Horizontal layering: the software layers are each directly connected to the sensory input and the action output.
• Vertical layering: sensory input and action output are each dealt with by at most one layer.
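A minimal sketch of horizontal layering with a mediator function (my own illustration; the two layers and the "most urgent proposal wins" mediation rule are assumptions, not from the slides):

def reactive_layer(percept):
    if percept.get("obstacle"):
        return ("avoid", 0)           # (proposed action, urgency: lower = more urgent)
    return None

def planning_layer(percept):
    return ("follow plan", 1)

LAYERS = [reactive_layer, planning_layer]     # every layer sees the same percept

def mediator(percept):
    """Collect one proposal per layer and keep the overall behaviour coherent
    by picking the most urgent proposal."""
    proposals = [p for layer in LAYERS if (p := layer(percept)) is not None]
    return min(proposals, key=lambda p: p[1])[0]

print(mediator({"obstacle": True}))    # avoid
print(mediator({"obstacle": False}))   # follow plan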
Layered architectures:
horizontal layering
Advantage: conceptual simplicity (to implement n behaviours we implement n layers).
Problem: a mediator function is required to ensure the coherence of the overall behaviour.
Layered architectures:
vertical layering
Vertical layering is subdivided into:
• One-pass architectures: control flows sequentially up through the layers, from perceptual input at the bottom layer to action output at the top layer.
• Two-pass architectures: information flows up through the layers and control then flows back down, so the bottom layer handles both perceptual input and action output.
Layered architectures:
TOURINGMACHINES
Proposed by Innes Ferguson. A horizontally layered architecture: a perception subsystem (sensor input) and an action subsystem (action output) are connected to three layers, a reactive layer, a planning layer and a modelling layer, whose interaction is mediated by a control subsystem.
Layered architectures:
INTERRAP
Proposed by Jörg Müller. A vertically layered architecture in which each layer is paired with a knowledge base: a cooperation layer (social knowledge), a plan layer (planning knowledge) and a behaviour layer (world model), resting on a world interface that handles sensor input and action output.
Multi-Agent Systems (MAS)
Main idea
A cooperative working environment comprising synergistic software components can cope with complex problems.
Cooperation
Three main approaches:
• Cooperative interaction
• Contract-based co-operation
• Negotiated cooperation
Rationality
Principle of social rationality (Hogg et al.): "Within an agent-based society, if a socially rational agent can perform an action whereby the agents' joint benefit is greater than their joint loss, then it may select that action."
    EU(a) = f( IU(a), SU(a) )
where (a code sketch follows):
• EU(a): expected utility of action a
• IU(a): individual utility of a
• SU(a): social utility of a
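A toy sketch of choosing an action by combining individual and social utility (the weighted-sum form of f is my assumption; the slides only state that EU is some function of IU and SU):

def expected_utility(iu, su, social_weight=0.5):
    """EU(a) = f(IU(a), SU(a)); here f is assumed to be a weighted sum."""
    return (1 - social_weight) * iu + social_weight * su

# Hypothetical candidate actions with (individual utility, social utility)
candidates = {"hoard resource": (5.0, -3.0), "share resource": (2.0, 4.0)}

best = max(candidates, key=lambda a: expected_utility(*candidates[a]))
print(best)   # share resource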
Communication
Agents communicate through Agent Communication Languages (ACLs). Different ACLs exist:
• FIPA (Foundation for Intelligent Physical Agents) ACL
• etc.
A shared ontology is also needed so that agents interpret messages in the same way.
MAS Tools and Techniques
Some products identified by AgentLink:
ADK, AgentSheets, AgentTool, Bee-gent, CABLE, Comet Way JAK, CORMAS, Cougaar, DECAF, Excalibur Agent, FIPA-OS, Grasshopper, IDOL, IMPACT, JACK, JADE, JADE/LEAP, JAFMAS/JIVE, JATLiteBean, JESS, Kaariboga, LEE, Living Markets, MAML, MAP/CSM, Massyve Kit, NARVAL, RePast, RETSINA, SEMOA, SIM_AGENT, StarLogo, TuCSoN, VOYAGER, Xraptor, ZEUS
Summary
Summary
Agents exhibit autonomy, responsiveness, proactiveness and social ability. They may also exhibit mobility, veracity, benevolence, rationality and cooperation.
Frameworks for agent development see agents as intentional systems. Some invoke a possible-worlds semantics; others distinguish between explicit and implicit belief.
Summary (cont.)
Agent architectures may be fundamentally deliberative or reactive, or may combine both approaches in a hybrid architecture.
Rationality in MAS involves considering both the social and the individual utility of an action.
Effective communication between agents requires a common language and a shared ontology.
References
Cisneros J., Huerta D. and Mandujano S. "Arquitectura BDI - Sistemas multiagente" (BDI Architecture - Multi-agent Systems).
Franklin S. et al. "Is it an Agent, or just a Program?: A Taxonomy for Autonomous Agents", in Proceedings of the Third International Workshop on Agent Theories, Architectures, and Languages. Springer-Verlag, 1996.
Maes P. "Software Agents". Available at http://www.media.mit.edu
Mangina E. "Review of software products for multi-agent systems". Available at http://www.agentlink.com
Wooldridge M. "An Introduction to Multiagent Systems". John Wiley & Sons, Chichester, February 2002.