Preferences in Game Logics

Abstract

We introduce a Game Logic with Preferences (GLP), which makes it possible to reason about how information or assumptions about the preferences of other players can be used by agents in order to realize their own preferences. GLP can be applied to the analysis of social protocols such as voting or fair division problems; we illustrate this use of GLP with a number of worked examples. We then prove that the model checking problem for GLP is tractable, and describe an implemented model checker for the logic. By using the model checker, it is possible to automate the analysis and verification of social protocols.
1. Introduction
We are interested in the formal specification, analysis, and verification of mechanisms for social interaction [15]. Examples of such mechanisms include voting and election procedures [5], auction and trading protocols, and solutions for fair division and dispute resolution [3]. As a simple example, consider the problem of constructing a voting procedure for Alice, Bob, and Caroline to decide between three holiday destinations: Xanadu, Yemen, and Zambia. An example requirement for this social choice mechanism might be that if two people prefer the same choice, then that choice will be the outcome.

One can model many mechanisms for social interaction as extensive games [14, 2], and then use game theoretic concepts to characterise properties of the protocol. Unfortunately, a game is not just a model for a protocol: it is a model for the protocol and the preferences of the agents. One would like to model procedures as complete and perfect information games, because this is a simple class of games; but then all agents have complete knowledge of each other's preferences, which is an unrealistic assumption. In fact, for the holiday example above, each agent can order the three countries in six possible ways, and therefore one would need to consider at least 216 different games, even excluding the possibility that an agent might value options equally. Models of social procedures that do not require one to completely specify the preferences of agents are thus highly desirable. Gal and Pfeffer [9, 10] have also recognized this problem, and use Networks of Influence Diagrams to model game situations in which agents may have different beliefs about which game they are playing. We take a different approach, starting from one common game form and using logical assumptions to represent different preferences.

One can also model social procedures as distributed or interpreted systems [8]. These systems are models for the options and knowledge that agents have: important properties such as 'Alice, Bob and Caroline will eventually agree on an outcome' can be verified using temporal logics [4]; temporal epistemic logics can be used for knowledge-based requirements [8], and coalition (epistemic) logics [17] can be used for properties such as 'Alice knows that on her own she has no strategy to block Zambia as an outcome'. Unfortunately, these approaches do not deal with preferences at all: they assume that agents know nothing about other agents' preferences, which is also unrealistic. In short, this paper presents a logic that is intended to overcome these limitations. This logic allows us to reason about interactions between agents with only partial knowledge about their preferences. Specifically, we are interested in the question of how the addition of information about other agents' preferences influences the strategies of agents and coalitions of agents.

The key assumption we make is that agents reason about the preferences of other agents, and then make certain strategic decisions. We do not specify how they come to know these preferences (it might be by means of communication, espionage, or by studying previous histories of the game); the information about the preferences might even be hypothetically assumed. Reasoning about the preferences of (other) agents is interpreted as reasoning about their strategies, because for a rational agent, having a preference implies choosing a strategy that leads to this preference. The knowledge that agents have, based upon their knowledge of other agents' preferences, is called strategic knowledge [7]. Although our aim is to study strategic knowledge in this paper, we do this implicitly, since we do not introduce strategies as first-class citizens of our logic, but rather refer only to preferences. We develop a logic GLP (Game Logic with Preferences), which allows us to reason about the consequences of the announcement of information about preferences. Initially, agents know the protocol but do not know other agents' preferences. The logic enables us to make assumptions about their preferences, and subsequently to express the consequences of such assumptions. Note that some simplifying assumptions are made. We assume agents have perfect information about the current game state, so we do not consider imperfect information games. We also assume that all agents have the same information about preferences.

The remainder of the paper is structured as follows. Section 2 defines some game theoretic notions that are used throughout the paper. The logic GLP for expressing properties of protocols is defined in Section 3, together with its semantics. The use of GLP is illustrated through some example protocols in Section 4. In Section 5, an efficient algorithm for evaluating GLP properties is given, and an implementation of this algorithm in a Java model checker for GLP is described. We present some conclusions in Section 6.
2. Extensive Games and Game Forms

Extensive games are defined in several different but essentially equivalent ways in the literature: for example as a tree [13], or as a set of runs [14]. We use a definition as a tree, but stay close to the notation of [14], and we will simply refer to 'game' when we mean 'extensive game'.

Informally, a game consists of two components. The game form F specifies the available actions for each agent, and the preference relations specify how the agents value the possible outcomes. For our formal approach we also need an interpretation, π, which allows us to use propositions for describing end states. We do not always need all these components at the same time, and therefore we distinguish the following notions.

Notation. Throughout this paper we follow coalition logic conventions [17] by using Σ to denote a set of agents, Γ for a coalition of agents (so Γ ⊆ Σ), i ∈ Σ for an arbitrary agent, P for a set of propositions, and π for an interpretation.

Trees. The symbol N is used for a finite set of nodes, and T ⊆ (N × N) is a binary relation between them, yielding a tree t = ⟨N, T, n0⟩ with root n0 ∈ N. Given a tree t, we define the successor function Succ : N → 2^N, yielding for every n ∈ N its set of successors {n′ ∈ N | (n, n′) ∈ T}. The function Z(T) = {n ∈ N | Succ(n) = ∅} defines the set of all terminal nodes of the tree; we refer to them as states.

Game Form. A game form is a tuple F = ⟨Σ, Ag, N, T, n0⟩, where Σ is a set of agents or players and ⟨N, T, n0⟩ is a tree. The function Ag : N \ Z(T) → Σ assigns an agent to each non-terminal node: this agent is thought of as the agent that decides the next action, or, equivalently, what the next node will be. Conversely, the function Ow returns, for every agent i, the set of nodes "owned" by that agent.

Game. A game is a tuple (F, ⪰), where F is a game form and ⪰ : Σ → 2^(Z(T)×Z(T)) is a function that assigns to each agent i a preference relation ⪰_i over states. Preference relations are assumed to be transitive and reflexive; n ⪰_i n′ means that agent i thinks that n is at least as good as n′. We write n ≻_i n′ when n ⪰_i n′ but not n′ ⪰_i n.

Game Form Interpretation. A game form interpretation is a tuple (F, P, π) where F is a game form ⟨Σ, Ag, N, T, n0⟩, P is a finite set of propositions, and π : Z(T) → 2^P assigns to each terminal node the set of propositions that are true in it.

Game Interpretation. A game interpretation is a tuple (F, ⪰, P, π) such that (F, P, π) is a game form interpretation and (F, ⪰) is a game.

Preference. We say that agent i has a preference for a set of outcomes S ⊆ Z(T) in game (F, ⪰) if for all s ∈ S and t ∈ Z(T) \ S we have that s ≻_i t. In this definition, a set S is thus preferred over its complement. Note that this is a strong notion of preference: we can only say that a person prefers to have coffee if the worst situation in which he has coffee is better than the best situation in which he does not have coffee. This strong definition is convenient in a logical approach, because it allows one to draw definite conclusions. A notion of preference in which the average value of the elements s ∈ S is higher than the average value of the elements t ∉ S would only be applicable within a probabilistic framework. A coalition of agents Γ prefers a set S if each member i ∈ Γ prefers S.
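These definitions translate directly into data structures. The following is a minimal sketch in Java (the implementation language of the model checker described in Section 5); the class and method names, such as GameForm and coalitionPrefers, are our own illustration, not those of the actual implementation.

import java.util.*;

/**
 * A game form interpretation as a data structure (Section 2): a finite tree,
 * an owner Ag(n) for every internal node, and a labelling pi of end states.
 */
class GameForm {
    final Map<String, List<String>> succ = new HashMap<>(); // T as adjacency lists
    final Map<String, String> owner = new HashMap<>();      // Ag: internal node -> agent
    final Map<String, Set<String>> label = new HashMap<>(); // pi: end state -> true propositions
    final String root;                                      // n0

    GameForm(String root) { this.root = root; }

    boolean isTerminal(String n) { return succ.getOrDefault(n, List.of()).isEmpty(); }

    /** Z(T): the end states reachable from the root. */
    Set<String> states() { Set<String> out = new HashSet<>(); collect(root, out); return out; }

    private void collect(String n, Set<String> out) {
        if (isTerminal(n)) { out.add(n); return; }
        for (String m : succ.get(n)) collect(m, out);
    }
}

/** The strong notion of preference: S is preferred iff every s in S is
 *  strictly better than every end state outside S. */
class Preferences {
    // better.get(i) contains the pair (s, t) iff s is strictly better than t for agent i.
    final Map<String, Set<List<String>>> better = new HashMap<>();

    boolean prefers(String agent, Set<String> s, Set<String> allStates) {
        Set<List<String>> rel = better.getOrDefault(agent, Set.of());
        for (String x : s)
            for (String t : allStates)
                if (!s.contains(t) && !rel.contains(List.of(x, t))) return false;
        return true;
    }

    /** A coalition prefers S iff each member prefers S. */
    boolean coalitionPrefers(Set<String> coalition, Set<String> s, Set<String> all) {
        return coalition.stream().allMatch(i -> prefers(i, s, all));
    }
}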
Strategies. A pure strategy for agent i in a game form F is a function PS_i : Ow(i) → N such that PS_i(n) ∈ Succ(n). This function prescribes a decision for agent i at each node owned by i. A mixed strategy is a function MS_i such that MS_i(n) ⊆ Succ(n); a mixed strategy prescribes a subset of alternatives at each node owned by i. A strategy for a coalition Γ, either mixed or pure, is a set of strategies, one for each agent in Γ. If S_Γ is a strategy for Γ and n ∈ N with Ag(n) ∈ Γ, then S_Γ(n) = S_{Ag(n)}(n). We use both the notation n′ ∈ S_Γ(n) and S_Γ(n) = n′ when not confusing. For the grand coalition Σ, S_Σ is called a strategy profile. Such a profile s represents a Nash equilibrium if no agent in Σ can unilaterally deviate from s and improve his outcome. It is well known that every two-person zero-sum game has such an equilibrium, which can be obtained using backward induction [2].

Strategy Updates. A game is often thought of as a set of possible computations or runs; each path in the game tree then corresponds to a run. Formally, a run of the game form ⟨Σ, Ag, N, T, n0⟩ is a sequence n0, n1, ..., nz where n0 is the root of T, nz ∈ Z(T) is a terminal node, and for each k it is the case that (nk, nk+1) ∈ T. If an agent or a coalition of agents is committed to a certain strategy S, certain runs can be excluded from consideration because these runs are not compatible with that strategy. A run n0, n1, ..., nz is compatible with a mixed strategy S_Γ if for any pair (nk, nk+1) with Ag(nk) ∈ Γ it is the case that nk+1 ∈ S_Γ(nk). This means that every choice made by an agent from coalition Γ must be made according to S_Γ.

We define an update function u for any game form F = ⟨Σ, Ag, N, T, n0⟩ and strategy S_Γ, such that u(F, S_Γ) = ⟨Σ, Ag′, N′, T′, n0⟩ is the game form in which every run is compatible with S_Γ. To achieve this, we stipulate that Ag′ is the restriction of Ag to the set N′ ⊆ N which is generated from N by the new relation T′, defined as

T′ = {(n, n′) ∈ T | Ag(n) ∈ Γ ⇒ n′ ∈ S_Γ(n)}

Note that in this updated model, agents not in Γ still have all of their options open. Only agents that are part of Γ are now limited to making choices that are compatible with S_Γ.
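The restriction defining u(F, S_Γ) can be read off directly. Below is a minimal sketch, again in Java with names of our own choosing; the strategy is represented as a map from a coalition member's node to the set of successors it allows.

import java.util.*;

/**
 * Sketch of the update u(F, S_Gamma): keep a transition (n, n') only if
 * either n is not owned by a coalition member, or n' is allowed by the
 * (mixed) strategy S_Gamma(n). Nodes that become unreachable are dropped.
 */
class StrategyUpdate {
    static Map<String, List<String>> update(
            Map<String, List<String>> succ,        // T, as adjacency lists
            Map<String, String> owner,             // Ag, defined on internal nodes
            Set<String> coalition,                 // Gamma
            Map<String, Set<String>> strategy,     // S_Gamma: node -> allowed successors
            String root) {
        Map<String, List<String>> restricted = new HashMap<>();
        prune(root, succ, owner, coalition, strategy, restricted);
        return restricted;
    }

    private static void prune(String n, Map<String, List<String>> succ,
                              Map<String, String> owner, Set<String> coalition,
                              Map<String, Set<String>> strategy,
                              Map<String, List<String>> out) {
        List<String> kids = succ.getOrDefault(n, List.of());
        if (kids.isEmpty()) return;                // terminal node: nothing to restrict
        List<String> kept = new ArrayList<>();
        for (String m : kids) {
            // T' = {(n, m) in T | Ag(n) in Gamma  =>  m in S_Gamma(n)}
            if (!coalition.contains(owner.get(n)) || strategy.get(n).contains(m))
                kept.add(m);
        }
        out.put(n, kept);
        for (String m : kept) prune(m, succ, owner, coalition, strategy, out);
    }
}

As in the text, agents outside Γ keep all of their options; only the moves of coalition members are filtered.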
3. A Logic for Preferences

In this section we define the syntax and semantics of the logic GLP.

3.1. Syntax of GLP

We find it convenient to define the syntax of GLP in several stages, beginning with (classical) propositional logic.

Definition 1. The language PL of propositional logic over a set of propositions P is the smallest set L ⊇ P such that for any ϕ ∈ L and ψ ∈ L we have that ϕ ∨ ψ ∈ L and ¬ϕ ∈ L.

We use propositional logic to express properties of the outcomes or results of games. An example inspired by football is the statement ¬winA ∧ ¬winB, which expresses that neither team A nor team B wins. It is important to realize that propositional logic is only used for properties of terminal states, not for intermediate states of a game or protocol.

The logic GLP contains all connectives of propositional logic and two additional operators. One can be used to introduce formulas of propositional logic into GLP, and the other is used to express consequences of preferences.

Definition 2. Let P be a set of propositions, and Σ a group of agents. Let PL be the language of propositional logic over P. The language for GLP is the smallest language L such that for any formula ϕ0 ∈ PL and ϕ, ψ ∈ L, it is the case that □ϕ0 ∈ L, [Γ : ϕ0]ϕ ∈ L, ¬ϕ ∈ L, and ϕ ∨ ψ ∈ L.

The formula [Γ : ϕ0]ψ should be read as 'in the game in which coalition Γ prefers ϕ0, ψ will hold'. The operator □ takes a propositional logic formula ϕ0, which can be interpreted in a specific state, and converts it into a GLP formula: □ϕ0 means that ϕ0 holds for every possible outcome. A useful shorthand is the sometimes operator ♦ϕ0 = ¬□¬ϕ0, which means that ϕ0 holds for some outcome. Where possible, we omit brackets and commas in the notation of a set Γ of agents, for example writing [ABC : ϕ0]ψ instead of [{A, B, C} : ϕ0]ψ.

Sample formulas of this logic, with their reading in the holiday example from Section 1, are given below. The outcomes Xanadu, Yemen and Zambia are indicated by propositions x, y and z, while Alice, Bob and Caroline are called A, B and C, respectively.

♦x   Xanadu is a possible outcome.
[AB : y]□y   If Alice and Bob prefer Yemen, then Yemen is the outcome.
[A : x ∨ y][B : x]□¬z   If Alice prefers Xanadu or Yemen, and Bob prefers Xanadu, then they will not go to Zambia.
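Definition 2 corresponds to a small abstract syntax tree. A sketch of it in Java is given below (our own rendering, using records, which require Java 16 or later; none of these type names come from the paper's implementation).

import java.util.Set;

/** Abstract syntax of GLP, following Definition 2. */
interface PLFormula {}                                   // the propositional layer
record Prop(String name) implements PLFormula {}
record PLNot(PLFormula f) implements PLFormula {}
record PLOr(PLFormula left, PLFormula right) implements PLFormula {}

interface GLPFormula {}                                  // the GLP layer
record Always(PLFormula f) implements GLPFormula {}      // box phi0: phi0 in every outcome
record Assume(Set<String> coalition, PLFormula goal,     // [Gamma : phi0] psi
              GLPFormula body) implements GLPFormula {}
record Not(GLPFormula f) implements GLPFormula {}
record Or(GLPFormula left, GLPFormula right) implements GLPFormula {}

class Formulas {
    /** The "sometimes" shorthand: diamond phi0 = not box not phi0. */
    static GLPFormula sometimes(PLFormula f) { return new Not(new Always(new PLNot(f))); }

    /** [AB : y] box y: if Alice and Bob prefer Yemen, Yemen is the outcome. */
    static GLPFormula example() {
        return new Assume(Set.of("A", "B"), new Prop("y"), new Always(new Prop("y")));
    }
}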
3.2. A Game-theoretic Semantics

We now give the formal semantics of GLP. We begin by showing how propositional logic can be used for describing the outcomes of game form interpretations. We then show how GLP can be evaluated over game form interpretations by using an update function for preferences, and to complete our definition we define the update function. The first and easiest step is defining the interpretation of propositions in end states: π, s |= p iff p ∈ π(s), for p ∈ P and s ∈ Z(T). In the interpretation of GLP, the logical connectives are defined similarly. The operator □ϕ0 means 'ϕ0 holds in all end states', and the assumption operator [Γ : ϕ0]ψ is defined on game form interpretations M as follows: we calculate a restricted game form interpretation M′ = Up(M, Γ, ϕ0), in which Γ acts so as to guarantee ϕ0, and we evaluate whether M′ fulfills ψ. Let M = ⟨⟨Σ, Ag, N, T, n0⟩, P, π⟩ be a game form interpretation. Then:

M |= □ϕ0 iff π, s |= ϕ0 for all s ∈ Z(T)
M |= [Γ : ϕ0]ψ iff Up(M, Γ, ϕ0) |= ψ
Figure 1. Alice and Bob eat cake
In Section 2, we defined what it means for a coalition to prefer a set of outcomes S. We now extend this definition to propositional logic formulas. We say that Γ has a preference for ϕ in game interpretation (F, ⪰, P, π) if Γ prefers the set S = {s | π, s |= ϕ}. Thus, a propositional formula is preferred if the set of end nodes in which it is true is preferred.

The update function Up(M, Γ, ϕ0) yields a model in which the actions of agents in Γ are restricted to actions that help them achieve a preferred outcome. We use the notion of a subgame perfect strategy [16]: the behaviour of agents should be rational in any subgame form F′ of the complete game form F. Moreover, the agents should not assume any helpful behaviour from agents outside Γ. Finally, the model Up(M, Γ, ϕ0) should not be unnecessarily restricted: any actions that are rational should be available to agents in Γ. Below we formalize when a strategy guarantees something, when it is subgame perfect, and when a strategy is more general than another strategy. These notions are used to define the update function.

A mixed strategy MS_Γ for coalition Γ contains a pure strategy PS_Γ of coalition Γ iff for all nodes n with Ag(n) ∈ Γ we have that PS_Γ(n) ∈ MS_Γ(n). A mixed strategy is completely described by the pure strategies it contains. We can think of a mixed strategy as giving some information about which pure strategy is used; a mixed strategy seen in this way is a description of the information other agents may have about the strategy used [1]. One can compare mixed strategies using an inclusion operator. For two mixed strategies S1 and S2, both for coalition Γ, we say that S1 is contained in S2, or that S2 is more general than S1, and write S1 ⊆ S2, if and only if for all nodes n with Ag(n) ∈ Γ we have that S1(n) ⊆ S2(n). This provides a partial ordering on strategies.

Definition 3. A strategy S_Γ guarantees ϕ0 in game form F iff every outcome s of game form u(F, S_Γ) fulfills π, s |= ϕ0. We say that Γ can guarantee ϕ0 in F if there exists a strategy S_Γ such that S_Γ guarantees ϕ0. We say that S_Γ is subgame perfect for ϕ0 in F if for every subgame F′ of F such that Γ can guarantee ϕ0 in F′, it is the case that S_Γ guarantees ϕ0 in F′. The strategy S* = S*(F, Γ, ϕ0) is the most general subgame perfect strategy for Γ and ϕ0 in F. The update function is then defined as

Up(M, Γ, ϕ0) = ⟨u(F, S*(F, Γ, ϕ0)), P, π′⟩

where π′ is the restriction of π to the new set of nodes.
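The strategy S* can be computed by a backward-induction pass over the tree. The sketch below (our own names, under the same encoding as the earlier sketches) answers "can Γ guarantee ϕ0 from node n?" with the pessimistic reading, and builds S* from the answers.

import java.util.*;
import java.util.function.Predicate;

/**
 * The most general subgame perfect strategy S*(F, Gamma, phi0), sketched in
 * Java. At nodes of agents outside Gamma, every successor must already
 * guarantee phi0 (no helpful behaviour is assumed).
 */
class MostGeneralStrategy {
    final Map<String, List<String>> succ;   // T
    final Map<String, String> owner;        // Ag
    final Set<String> gamma;                // the coalition
    final Predicate<String> goal;           // s -> (pi, s |= phi0), end states only

    MostGeneralStrategy(Map<String, List<String>> succ, Map<String, String> owner,
                        Set<String> gamma, Predicate<String> goal) {
        this.succ = succ; this.owner = owner; this.gamma = gamma; this.goal = goal;
    }

    boolean canGuarantee(String n) {
        List<String> kids = succ.getOrDefault(n, List.of());
        if (kids.isEmpty()) return goal.test(n);                 // end state: check phi0
        if (gamma.contains(owner.get(n)))
            return kids.stream().anyMatch(this::canGuarantee);   // Gamma picks a move
        return kids.stream().allMatch(this::canGuarantee);       // others may pick any move
    }

    /** S*: at a Gamma-node where phi0 can be guaranteed, allow exactly the
     *  successors that guarantee it; everywhere else allow every successor. */
    Map<String, Set<String>> strategy() {
        Map<String, Set<String>> s = new HashMap<>();
        for (String n : succ.keySet()) {
            String ag = owner.get(n);
            if (ag == null || !gamma.contains(ag)) continue;     // only Gamma's nodes
            Set<String> allowed = new HashSet<>(succ.get(n));
            if (succ.get(n).stream().anyMatch(this::canGuarantee))
                allowed.removeIf(k -> !canGuarantee(k));
            s.put(n, allowed);
        }
        return s;
    }
}

Feeding the resulting strategy to the restriction u(F, S_Γ) sketched in Section 2 then yields Up(M, Γ, ϕ0), after unreachable nodes are removed.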
4. Examples
This section provides some examples that illustrate how GLP can be used in the specification and analysis of social protocols.

Alice and Bob eat cake

Alice and Bob have a cake, and they have agreed to divide it by means of a "cut-and-choose" protocol [3]. Alice has cut the cake, and unfortunately one of the pieces is bigger than the other. Bob can now choose from three options: he can select the big piece, select the small piece, or he can say to Alice 'No, you choose'. If he lets Alice choose, she can either choose the big piece or the small piece. Both agents have common knowledge of this protocol, and they know that they do not know each other's preferences (see Figure 1). Proposition a means that Alice gets the biggest piece, b that Bob gets the biggest piece, and e means that something embarrassing has happened, namely that either Alice or Bob has chosen the biggest piece. In many cultures this is not polite, and in several of the example formulas we assume that everyone is embarrassed if this happens. Using GLP one can express relevant properties of this protocol. First we provide several GLP formulas (A stands for Alice, B stands for Bob).

• [B : ¬e]□a   If B does not want either of them to be impolite, he must take the smallest piece. Our semantics takes a pessimistic view, so Bob cannot take the risk of letting A choose (see the sketch following these examples). Figure 2 shows the model Up(M, {B}, ¬e).

• [B : ¬e][A : ¬e]□a   This formula is a consequence of the previous example. It expresses that if B does not want embarrassment and A does not want embarrassment, then A gets the biggest piece. This may seem strange, since there is an outcome in which ¬e and b are true. However, the order of assumptions is important. The formula expresses that B wishes to guarantee the absence of embarrassment, independently of what A does. Two possible readings of the formula are that he commits himself to his strategy before he learns that A has the same preference, or that he thinks that this goal is so important that he does not wish to rely on A for this property.

• [AB : ¬e][B : b]□b   In this example, A and B commonly want to avoid embarrassment, and B also prefers b. If this is the case, B can let A choose, and then A will take the smallest piece. Figure 3 shows the updated model Up(M, {A, B}, ¬e). Note that, strictly speaking, B cannot prefer ¬e and b simultaneously, since that would mean that the situation with ¬e ∧ ¬b is preferred over the situation with e ∧ b and vice versa. However, the preference for b appears in the scope of a preference for ¬e, so one could say that the agent prefers first ¬e ∧ b, then ¬e, and then ¬e ∨ b. This does correspond to a single preference relation.

• [A : ¬e][B : ¬e][B : b]□b   This formula expresses that if A does not want embarrassment, B does not want embarrassment, and B prefers the biggest piece, then B gets the biggest piece. The behaviour of B is influenced by the fact that he knows that A prefers to avoid embarrassment. In this scenario A should try to hide the fact that she has good manners, because it is not to her advantage if B knows this.

Figure 2. Model Up(M, {B}, ¬e)

Figure 3. Model Up(M, {A, B}, ¬e)

These examples illustrate that, by using GLP, one can express the consequences of ordering goals in a certain way. Similar to the concept of common knowledge in epistemic logic [8], we have the idea of a common goal, which is different from each agent separately having that goal.
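To make the pessimistic reading of [B : ¬e]□a concrete, the following self-contained snippet enumerates Bob's three options under a toy encoding of the tree of Figure 1 (the encoding is ours), and keeps only the moves whose worst reachable outcome avoids e.

import java.util.*;

/** Toy encoding of the cut-and-choose protocol of Figure 1. */
class CakeExample {
    public static void main(String[] args) {
        // Each of Bob's moves leads to a set of possible end states, where an
        // end state is the set of propositions true there.
        Map<String, List<Set<String>>> moves = new LinkedHashMap<>();
        moves.put("take big piece",   List.of(Set.of("b", "e")));
        moves.put("take small piece", List.of(Set.of("a")));
        moves.put("let Alice choose", List.of(Set.of("a", "e"),   // Alice takes big
                                              Set.of("b")));      // Alice takes small
        for (var entry : moves.entrySet()) {
            // Pessimistic view: the move is kept only if *no* reachable
            // outcome contains e, since Alice's behaviour is unconstrained.
            boolean safe = entry.getValue().stream().noneMatch(o -> o.contains("e"));
            System.out.println(entry.getKey() + " -> " + (safe ? "kept" : "discarded"));
        }
        // Only "take small piece" survives; its single outcome satisfies a,
        // so [B : not e] box a holds, as claimed in the text.
    }
}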
Voting Protocol

In the introduction, we already mentioned the problem of defining a fair voting protocol. The problem is to design a voting protocol for three agents (A, B, and C) over three alternatives (x, y, and z). We use this problem to show how GLP can be used for expressing the requirements of such protocols. Next, we present two example solutions and show how GLP can express the differences between them.

We require a voting protocol P in which exactly one option is chosen and any group of two agents can decide on any outcome. This is formalized in the next formulas, where u and v are variables over the options O = {x, y, z} and Γ ⊆ {A, B, C} ranges over the two-agent coalitions:

P |= □(x ∨ y ∨ z)   (some option is chosen)
P |= □¬(u ∧ v) for all u ≠ v   (at most one option is chosen)
P |= [Γ : u]□u for all u ∈ O and all Γ with |Γ| = 2   (any two agents can decide on any outcome)
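Since Γ and u range over finitely many values, the last requirement expands into nine concrete model-checking obligations, which can be generated mechanically. A minimal sketch follows; a real run would pass each obligation, together with a candidate protocol, to a GLP model checker such as the one of Section 5, so the call is only indicated in a comment.

import java.util.List;

/** Enumerate the nine obligations "[Gamma : u] box u" for the two-agent coalitions. */
class VotingRequirements {
    public static void main(String[] args) {
        List<String> coalitions = List.of("AB", "AC", "BC");
        List<String> options = List.of("x", "y", "z");
        for (String gamma : coalitions)
            for (String u : options)
                // check(protocol, formula) would invoke the model checker;
                // here we only print the obligations to be discharged.
                System.out.println("P |= [" + gamma + " : " + u + "] box " + u);
    }
}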
In Figure 4, a protocol P1 is depicted which satisfies these requirements. It is in fact a smallest protocol that satisfies the requirements, as one can verify by testing all smaller trees. It works in two steps. First, A can say whether B or C may make a choice. Then, either B or C indicates which of the three destinations is chosen. The protocol may seem strange, because A cannot directly support a certain outcome: what the best action for A is depends on what B and C will do. In this protocol it is thus important for A to know what the others do, while B and C need no information about the other agents. The next GLP formulas illustrate this peculiarity of protocol P1.

P1 |= [AB : x]□x
P1 |= [B : x][A : x]□x
P1 ⊭ [A : x][B : x]□x

Figure 4. A voting protocol P1

There is of course more than one protocol that meets the given requirements: Figure 5 shows another solution. In this protocol, A can vote for any alternative, and then B votes for one of the alternatives. If they vote for the same option, then this option is chosen. If they disagree, agent C can choose between the two options that A and B voted for; so agent C has only two options. Nine extra propositions are introduced to indicate which agent has cast which vote: proposition ax means that agent A has chosen option x, and similarly for ay, az, bx, by, bz, cx, cy and cz. In the figure, only the left subtree is drawn in full.

Figure 5. A second voting protocol P2

This protocol also satisfies the requirements. A nice property of this protocol is that A can directly indicate a preference. The extra propositions (ax, bx, ...) show how one can use propositions in end states to indicate that a certain vote has been cast. This demonstrates that in finite trees, one can use propositions in end states for events that one would otherwise describe using intermediate propositions or propositions on transitions.

The extra propositions allow one to formulate questions about strategic voting. An agent votes strategically if it does not vote for the options it prefers most [11]. In an ideal voting protocol, agents do not gain anything by strategic voting. Unfortunately, the Gibbard-Satterthwaite Theorem [11] states that in any voting scheme not completely determined by a single agent it is possible to benefit from strategic voting. With GLP, one can determine for any protocol under which circumstances strategic voting might occur.

First, we list some related properties about the options. We see that agent A is completely free to vote for x if she wants to, while C is not. On the other hand, C's vote is always decisive, which is not true for A's vote:

P2 |= □((cx → x) ∧ (cy → y) ∧ (cz → z))
P2 ⊭ □((ax → x) ∧ (ay → y) ∧ (az → z))

An interesting result is that C, under our rationality condition, should never vote for an option it does not prefer. Unfortunately, this result cannot be shown for A: if A prefers x, but A knows that B and C prefer y, then it is equally rational for A not to vote for x, since A knows it will not be chosen anyway.

P2 |= [B : x][A : x]□x
P2 ⊭ [A : x][B : x]□x

Suppose we know what the preferences of our agents are: can one then use GLP to derive which outcome will be reached? In the next table, the preference relation of each agent is given, and all formulas that are preferred by some coalition are listed. One can see that there are not many coalition preferences for this preference relation. The reason is that our definition of preference is quite strong. One can show that Yemen is a likely outcome of the holiday election according to protocol P1:

P1 |= [A : x][C : y][AC : x ∨ y]□y

This formula expresses that, given the preferences of A and C and the protocol, A will say that C may decide, and C will choose option y. This outcome is a subgame perfect Nash equilibrium of the game (P1, ⪰). For the second protocol, the same outcome can be rationalized by:

P2 |= [A : x][C : y][AC : x ∨ y]□y

However, other outcomes can be rationalized by making different assumptions:

P2 |= [B : x ∨ z][C : x ∨ y][A : x]□x

In this case, A votes for x. From the assumption that B prefers x or z, everyone can conclude that B will not vote y. If B votes x, then x is the outcome; otherwise C will choose x over z, and x is also the outcome.

From these examples, it should be clear that it is not only important who has which preferences, but also which preferences one considers. If different preferences are announced, signaled, or otherwise communicated in some order, different conclusions about the outcome can be drawn.
5. Model Checking GLP
Model checking is the problem of deciding whether a given formula is true on a given model [4]. A model checking algorithm for GLP can be used for the automatic verification of multi-agent interaction protocols, such as verifying that the protocols from Section 4 satisfy the stated requirements. In this section, we prove that the model checking problem for GLP is tractable. Our proof is constructive: Figures 6 and 7 give a polynomial time model checking algorithm for GLP. Formally, we assume that the size of Σ is bounded by some constant. The size of a model (|M|) is defined as the number of nodes it contains, while the size of a formula (|ϕ|) is the number of connectives, operators, and propositions. Let c(X) denote the number of operations needed to compute the value of expression X. The notation c(X) = O(f(|X|)) means that there is a size s and a constant g such that, for any input X with |X| > s, c(X) < g · f(|X|) [6]. Now:

Theorem 1. The problem of determining whether a GLP formula holds on a game form interpretation can be solved in time polynomial in the size of the game form and the size of the formula:

c(M |= ϕ) = O(|M| · |ϕ|)

Outline proof: In order to see that this theorem is true, we need to establish both that the algorithms in Figures 6 and 7 are correct and that they have the stated efficiency. The eval algorithm is basically an algorithmic representation of the semantics of GLP, and its correctness proof follows from induction on the structure of formulas. The update2 function of Figure 7 is a modified version of Zermelo's algorithm, or backward induction [2]: instead of only returning the value of the game (either −1 or 1), it also returns a set of transitions that one should avoid in order to realize that value. The correctness of Zermelo's algorithm, we claim, carries over to our algorithm. For the complexity bound, we use the fact that checking a propositional formula in an end node takes time proportional to the size of the formula: c(π, s |= ϕ0) = O(|ϕ0|). In order to prove the main theorem, we claim the following about the update function: c(Up(M, Γ, ϕ0)) = O(|M| · |ϕ0|). Zermelo's algorithm traverses the whole game tree without visiting a node twice, which takes time proportional to the size of M. In each end node it evaluates a propositional logic formula, which takes time proportional to the size of ϕ0. Since the number of end nodes must be less than |M|, we obtain our intermediate claim. To complete the proof of the main theorem, we note that the evaluation function either evaluates its subformulas ϕsub (O(|ϕsub|)), or propositional logic formulas ϕprop in end nodes (O(|M| · |ϕprop|)), or it updates with a subformula ϕsub (O(|M| · |ϕsub|)). Finally, note that every part of the formula ϕ is processed only once.

The update function uses two auxiliary functions to clean up the tree: c1(T, x) = {x} ∪ ⋃{c1(T, y) | (x, y) ∈ T} and c2(N, T) = T ∩ N². These functions remove spurious transitions and nodes, such that ⟨c1(T, x), c2(c1(T, x), T), x⟩ is always a proper tree.

Figure 6. The GLP model checking algorithm

Figure 7. The GLP update function

We have implemented a model checking system for GLP which uses this algorithm. The three example protocols described in this paper are part of the distribution of the model checker. The program can be found at
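The following Java sketch shows the shape of the evaluation and update routines, following the semantics of Section 3.2; it is our own rendering, not the code of Figures 6 and 7. Negation and disjunction at the GLP level are handled analogously to the propositional layer and are omitted for brevity.

import java.util.*;

class GLPChecker {
    Map<String, List<String>> succ = new HashMap<>();   // T
    Map<String, String> owner = new HashMap<>();        // Ag, defined on internal nodes
    Map<String, Set<String>> label = new HashMap<>();   // pi
    String root;                                        // n0

    boolean terminal(String n) { return succ.getOrDefault(n, List.of()).isEmpty(); }

    List<String> statesFrom(String n) {
        if (terminal(n)) return List.of(n);
        List<String> out = new ArrayList<>();
        for (String m : succ.get(n)) out.addAll(statesFrom(m));
        return out;
    }

    interface PL { boolean at(Set<String> labels); }    // propositional formulas
    interface GLP { boolean on(GLPChecker model); }     // GLP formulas

    static PL prop(String p) { return ls -> ls.contains(p); }
    static PL not(PL f) { return ls -> !f.at(ls); }
    static PL or(PL f, PL g) { return ls -> f.at(ls) || g.at(ls); }

    /** box phi0: phi0 holds in every end state of the current model. */
    static GLP box(PL f) {
        return m -> m.statesFrom(m.root).stream().allMatch(s -> f.at(m.label.get(s)));
    }

    /** [Gamma : phi0] psi: evaluate psi on the updated model Up(M, Gamma, phi0). */
    static GLP assume(Set<String> gamma, PL goal, GLP body) {
        return m -> body.on(m.update(gamma, goal));
    }

    /** Up(M, Gamma, phi0): keep, at Gamma's nodes, only children from which
     *  phi0 can be guaranteed, whenever such a child exists (cf. Definition 3). */
    GLPChecker update(Set<String> gamma, PL goal) {
        GLPChecker m = new GLPChecker();
        m.owner = owner; m.label = label; m.root = root;
        copy(root, gamma, goal, m.succ);
        return m;
    }

    private void copy(String n, Set<String> gamma, PL goal, Map<String, List<String>> out) {
        if (terminal(n)) return;
        List<String> kids = succ.get(n), kept = kids;
        if (gamma.contains(owner.get(n)) && kids.stream().anyMatch(k -> can(k, gamma, goal))) {
            kept = new ArrayList<>();
            for (String k : kids) if (can(k, gamma, goal)) kept.add(k);
        }
        out.put(n, kept);
        for (String k : kept) copy(k, gamma, goal, out);
    }

    /** Zermelo-style recursion; memoizing it per node gives the bound of Theorem 1. */
    private boolean can(String n, Set<String> gamma, PL goal) {
        if (terminal(n)) return goal.at(label.get(n));
        return gamma.contains(owner.get(n))
                ? succ.get(n).stream().anyMatch(k -> can(k, gamma, goal))
                : succ.get(n).stream().allMatch(k -> can(k, gamma, goal));
    }
}

A formula such as [B : ¬e]□a is then checked by building assume(Set.of("B"), not(prop("e")), box(prop("a"))) and evaluating it on the model of the cake protocol.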
6. Conclusion
What agents do depends not only on their own preferences, but also on their knowledge of other agents' preferences. The logic GLP allows one to analyze multi-agent protocols under various assumptions about what is known about agents' preferences. Besides defining this logic and a suitable interpretation over extensive game forms, we showed that the model checking problem for this logic has a low computational complexity. We demonstrated how one can specify interesting social problems and protocols as extensive game forms, and how GLP is useful for understanding them.

In GLP one can do semantic updates of a specific kind: we assign a preference to a coalition, and since agents act rationally with respect to the known preferences, this constrains the coalition's behaviour. These updates are similar to public announcements in dynamic epistemic logic [18], in the sense that all agents learn commonly from the assumption. We believe the assumption that agents have partial knowledge about each other's preferences is an important one. The issue of how the strategic knowledge that arises from this assumption influences agents needs more investigation, which this paper starts by looking at strategic knowledge in finite extensive games of perfect information.

The difference between GLP and a coalition logic like ATEL [17] is that GLP assumes that once a coalition adopts a certain strategy, this strategy is known to all agents. This assumption makes sense from a game theoretic and security viewpoint, and we believe this difference makes GLP suitable for the analysis of adversarial multi-agent systems. Harrenstein et al. [12] also use preferences over extensive game trees, in this case to find modal formulas over such trees, conceived of as Kripke models, that characterize game theoretic notions such as Nash equilibrium. Unlike GLP, [12] does not provide a notion of hypothetical reasoning about preferences, nor an account of why players would play such an equilibrium.

An extension of GLP would be a framework in which coalitions not only have preferences over outcomes, but over complete games. Another extension would be to have parallel updates. A further direction is to develop a similar logic for extensive games with imperfect information; this would allow us to study a wider class of protocols and problems in which information is important.
References
[1] R. Aumann and A. Brandenburger. Epistemic conditions for Nash equilibrium. Econometrica, 63:1161–1180, 1995.
[2] K. Binmore. Fun and Games: A Text on Game Theory. D. C. Heath and Company, Lexington, MA, 1992.
[3] S. Brams and A. Taylor. Fair Division: From Cake-Cutting to Dispute Resolution. Cambridge University Press, 1996.
[4] E. M. Clarke, O. Grumberg, and D. A. Peled. Model Checking. The MIT Press, Cambridge, MA, 2000.
[5] V. Conitzer and T. Sandholm. Complexity of mechanism design. In Proceedings of the Uncertainty in Artificial Intelligence Conference (UAI), Edmonton, Canada, 2002.
[6] T. H. Cormen, C. E. Leiserson, and R. L. Rivest. Introduction to Algorithms. The MIT Press, Cambridge, MA, 1990.
[7] S. Druiven. Knowledge Development in Games of Imperfect Information. Master's thesis, University Maastricht, 2002.
[8] R. Fagin, J. Halpern, Y. Moses, and M. Vardi. Reasoning about Knowledge. The MIT Press, Cambridge, MA, 1995.
[9] Y. Gal and A. Pfeffer. A language for modeling agents' decision making processes in games. In Autonomous Agents and Multi Agent Systems 2003, Melbourne, July 2003.
[10] Y. Gal and A. Pfeffer. A language for opponent modeling in repeated games. In AAMAS Workshop on Decision-Theoretic and Game-Theoretic Agents, Melbourne, July 2003.
[11] A. Gibbard. Manipulation of voting schemes: a general result. Econometrica, 41(4), 1973.
[12] B. Harrenstein, W. van der Hoek, J.-J. C. Meyer, and C. Witteveen. On modal logic interpretations of games. In Proceedings of ECAI 2002, volume 77, pages 28–32, Amsterdam, July 2002.
[13] H. Kuhn. Extensive games and the problem of information. Contributions to the Theory of Games, II:193–216, 1953.
[14] M. J. Osborne and A. Rubinstein. A Course in Game Theory. The MIT Press, Cambridge, MA, 1994.
[15] M. Pauly. Logic for Social Software. PhD thesis, University of Amsterdam, 2001. ILLC Dissertation Series 2001-10.
[16] R. Selten. Reexamination of the perfectness concept for equilibrium points in extensive games. International Journal of Game Theory, 4:25–55, 1975.
[17] W. van der Hoek and M. Wooldridge. Cooperation, knowledge, and time: Alternating-time temporal epistemic logic and its applications. Studia Logica, 75(4):125–157, 2003.
[18] H. P. van Ditmarsch. Knowledge Games. PhD thesis, University of Groningen, Groningen, 2000.