arXiv:2006.09490v2 [math.OC] 4 May 2023
NASH EQUILIBRIUM PROBLEMS OF POLYNOMIALS
JIAWANG NIE AND XINDONG TANG
Abstract. This paper studies Nash equilibrium problems that are given by
polynomial functions. We formulate efficient polynomial optimization prob-
lems for computing Nash equilibria. The Moment-SOS relaxations are used
to solve them. Under generic assumptions, the method can find a Nash equi-
librium if there is one. Moreover, it can find all Nash equilibri a if there are
finitely many ones of them. The method can also detect nonexistence if there
is no Nash equilibrium.
1. Introduction
The Nash equilibrium problem (NEP) is a kind of games for finding s trategies
for a group of players such that each player’s objective is o ptimized, for given other
players’ strategies. Suppose there are N players, and the ith player’s str ategy is
the variable x
i
R
n
i
(the n
i
-dimensional real Euclidean space). We denote that
x
i
:
= (x
i,1
, . . . , x
i,n
i
), x
:
= (x
1
, . . . , x
N
).
The total dimension of all players’ strategies is
n : = n
1
+ · · · + n
N
.
When the ith player’s strategy x
i
is being optimized, we use x
i
to denote the
subve ctor of all players’ strategies except x
i
, i.e.,
x
i
:
= (x
1
, . . . , x
i1
, x
i+1
, . . . , x
N
),
and write x = (x
i
, x
i
) accordingly. When the writing x
i
appears, the ith player’s
strategy is being considered for optimization, while the vector of all other players’
strategies is fixed to be x
i
. In an NEP, the ith player’s best strategy x
i
is the
minimizer for the optimization problem
(1.1) F
i
(x
i
) :
min
x
i
R
n
i
f
i
(x
i
, x
i
)
s.t. g
i,j
(x
i
) = 0 (j E
i
),
g
i,j
(x
i
) 0 (j I
i
),
for the given other players’ strategie s x
i
. In the a bove, f
i
is the ith player’s
objective function, and g
i,j
are co nstraining functions in x
i
. The E
i
and I
i
are
disjoint labeling sets of finite cardinalities (possibly empty). The feasible set of the
optimization F
i
(x
i
) in (1.1) is
(1.2) X
i
:
= {x
i
R
n
i
: g
i,j
(x
i
) = 0 (j E
i
), g
i,j
(x
i
) 0 (j I
i
)}.
2000 Mathematics Subject Classification. 90C23,90C33,91A10,65K05.
Key words and phrases. Nash equilibrium, Polynomial Optimization, Moment-SOS relaxation,
Lagrange multiplier expression, Tight relaxation.
1
2 JIAWANG NIE AND XINDONG TANG
For NEPs, each set X
i
does not depend on x
i
. This is different from generalized
Nash equilibrium problems (GNEPs), where each player’s feasible set depends on
other players’ strategies. We say the strategy vector x is feasible if
x = (x
1
, . . . , x
N
) X
:
= X
1
× · · · × X
n
.
That is, each x
i
X
i
. The NEP can be formulated as
(1.3) find x
R
n
such that each x
i
is a minimizer o f F
i
(x
i
),
where x
= (x
1
, . . . , x
N
). A solution of (1.3) is called a Nash equilibrium (NE)
1
.
When the defining functions f
i
and g
i,j
are continuous, then the NEP is called a
continuous Nash equilibrium problem. In this paper, we conside r case s that each f
i
is a polynomial in x and g
i,j
’s ar e polynomia ls in x
i
. Such an NEP is called a Nash
equilibrium problem of polynomials (NEPP). The following is an example.
Example 1.1. Consider the 2-player NEP with the individual optimization
1st player:
(
min
x
1
R
2
x
1,1
(x
1,1
+ x
2,1
+ 4x
2,2
) + 2x
2
1,2
s.t. 1 (x
1,1
)
2
(x
1,2
)
2
0,
2nd player:
(
min
x
2
R
2
x
2,1
(x
1,1
+ 2x
1,2
+ x
2,1
) + x
2,2
(2x
1,1
+ x
1,2
+ x
2,2
)
s.t. 1 (x
2,1
)
2
(x
2,2
)
2
0.
In this NEP, each player’s objective is strictly convex with respect to its strateg y,
because their Hessian matrices with respect to their own strategies are positive
definite. This NEP has only 3 NEs (see Section 3.3), which are
1st NE: x
1
= (0, 0), x
2
= (0, 0);
2nd NE: x
1
= (1, 0), x
2
=
1
5
(1, 2);
3rd NE: x
1
= (1, 0), x
2
=
1
5
(1, 2).
NEPs are challenging problems to solve. Even for the sp ecial case s where each
player’s objective function is multilinear in (x
1
, . . . , x
N
), and each feasible set is a
simplex, finding an NE is PPAD-complete [8]. The problem becomes more diffi-
cult when players’ optimization problems are nonconvex. This is because an NE
x
= (x
1
, . . . , x
N
) requires that each x
i
is a global minimizer of F
i
(x
i
). Indeed,
finding a global minimizer of a single polynomial optimiza tion problem is already
NP-hard [28]. For polynomial optimization problems, global optimizers can be
computed e fficie ntly by the Moment-SOS hierarchy of se midefinite relaxations (see
[26, 28, 30] for related work). Moreover, for some NEP s, there may not exist any
NE. Such NEPs are a lso interesting and have impor tant applications (e.g., NEPs in
generative adversarial networks [12]). If an NE does not exist, how can we detect
its nonexistence? This question is mostly open for general NEP s, to the best of
the author’s knowledge. However, under certain nonsingularity conditions, nonex-
istence of NEs for NEPPs can be certified by the infeasibility of some semidefinite
programs. For the above reasons, this paper focus es on NEPPs.
NEPs are generalizations of finite games [33], where each X
i
is a finite set, i.e.,
|X
i
| < . In recent years, there has been an increasing number of applications
of NEPs in various fields, such as economics, environmental protection, p olitics,
1
In some literature, this is also called a pure strategy Nash equilibrium, in contrast to mixed
strategy Nash equilibria, which are probability measures supported on the feasible strategy sets.
We refer to Section 6 for more details on mixed strategy NEs; als o see [10, 23, 33, 49, 56, 63].
NASH EQUILIBRIUM PROBLEMS OF POLYNOMIALS 3
supply chain ma nagement, machine learning, etc. We refer to [4, 6, 12, 14, 32, 53]
for some recent applications of NEPs. In Section 5, we present some concrete
applications of NEPs in environmental pollution control a nd the electricity market.
Moreover, we refer to surveys [3, 63] for more general work on NEPs.
In this paper, our primary goal is to find NEs for NEPs. In the following,
we review some previous work on solving NEPs. The NEP is called a zero-su m
game if the sum of objective functions is identically equal to a constant. Two-
player zero-sum games are equivalent to saddle point problems. We refer to [5,
34] for algorithms of solving saddle point problems under convexity assumptions,
and [47] for the method of s olving nonconvex polynomial saddle point problems.
For finite games, finding mixed strategy solutions is a special case of NEPs of
polynomials; see [2, 9, 21, 63] for some related approaches. There ex ists work on
mixed strategy solutions for continuous games, see [10, 49, 56] for mixed strategy
solutions to polynomia l games, and [1, 23] for the recently developed multiple oracle
algorithms. For finding pure strategy solutions for gener al continuous NEPs, we
refer to techniques such as variational inequalities [1 5, 24], Nikaido-Isoda functions
[22, 58], and manifold optimization tools [52]. In most earlier work, convexity is
often assumed for each player’s optimization. Moreover, NEPs a re special cases
of GNEPs [11], where each player’s feasible set is dependent on other players’
strategies. For GNEPs given by polynomia l functions, the work [7] introduces a
parametric SOS relaxation approach, and the Gauss-Seidel method using Moment-
SOS relaxations is studied in [46]. When the GNEPs are further assumed to be
convex, the semidefinite relaxation method is introduced in [45]. At the moment,
it is mostly an ope n question to solve general NEPs, espe cially when the players’
optimization problems are nonconvex.
Contributions. This paper foc uses on Nash equilibrium problems tha t are given
by polynomials. We formulate efficient p olynomial optimization for computing one
or more Nash e quilibria. The Moment-SOS hierar chy of semidefinite relaxations is
used to solve the appea ring polynomial optimization problems. Our major results
are:
Under some genericity assumptions, we prove that our method can compute
a Nas h equilibrium if there exists one, or it can detect nonexis tence of NEs.
Moreover, if there are only finitely many NEs, we show how to find all of
them. I n the prior existing work, there do not exist similar methods that
can achieve such computational goals.
When the objective and constr aining polynomials are generic (i.e., they
have generic coefficients), we show that the NEPP has only finitely many
KKT points. For such ge ne ric NEPPs, our method can compute all NEs,
if they exist, o r can detect their nonexistence.
When the objective and constraining polynomia ls are not g eneric, our
method can still be applied to compute one or more NEs, or to detect
their nonexistence. Even if there are infinitely many NEs, our method may
still be able to get an NE. In computatio nal pr actice, there is no nee d to
check if the NEP is generic or not to implement our algor ithms. In fact, our
method is self-verifying, that in the actual implementation, the algor ithm
can check whether the computed point is an NE, and check if the computed
solution set is complete or not.
4 JIAWANG NIE AND XINDONG TANG
The paper is organized as fo llows. Some preliminaries about po lynomial opti-
mization are given in Section 2. We give efficient polynomial optimization formu-
lations in Section 3. We show how to solve polynomial optimization problems by
the Moment-SOS hierarchy in Section 4. Numerical experiments and applications
are given in Section 5. Conclusions and discussions are proposed in Se ction 6. The
finiteness of the KKT set for generic NEPs is showed in Appendix.
2. Preliminaries
Notation. The symbol N (resp., R, C) stands for the set of nonnegative integers
(resp., real numbers , complex numbe rs). For a positive integer k, denote the se t
[k]
:
= {1, . . . , k}. For a real number t, t (resp., t) denotes the smallest integer
not smalle r than t (resp., the biggest integer not bigger than t). For the ith player’s
strategy variable x
i
R
n
i
, the x
i,j
denotes the jth entry of x
i
, j = 1, . . . , n
i
.
The R[x] (re sp., C[x]) denotes the ring of polynomia ls with real (resp., complex)
coefficients in x. The R[x]
d
(resp., C[x]
d
) denotes its s ubset of polynomials whose
degrees are not greater than d. For the ith player’s strategy vector x
i
, the nota tion
R[x
i
], C[x
i
], R[x
i
]
d
, C[x
i
]
d
are defined in the same way. For i th player’s objective
f
i
(x
i
, x
i
), the notation
x
i
f
i
,
2
x
i
f
i
respectively denote its gradient and Hessian
with respe ct to x
i
.
In the following, we use the letter z to represent either x or x
i
for the convenience
of discussion. Suppose z
:
= (z
1
, . . . , z
l
) and α
:
= (α
1
, . . . , α
l
) N
l
, denote
z
α
:
= z
α
1
1
· · · z
α
l
l
, |α|
:
= α
1
+ . . . + α
l
.
For an integer d > 0, denote the monomial power set
N
l
d
:
= {α N
l
: |α| d}.
We use [z]
d
to de note the vector of all mono mials in z and whose de gree is at most
d, ordered in the graded alphabetica l ordering. Fo r example, if z = (z
1
, z
2
), then
[z]
3
= (1, z
1
, z
2
, z
2
1
, z
1
z
2
, z
2
2
, z
3
1
, z
2
1
z
2
, z
1
z
2
2
, z
3
2
).
Throughout the paper, the word generic” is used for a property if it holds for all
points outside a set of Lebesgue measure z ero in the space of input data. For a given
multi-degree (d
1
, . . . , d
N
) (resp., a degree d) in the variable x = (x
1
, . . . , x
N
) (resp.,
in variable x
i
), we say a polynomial p(x) (resp., q(x
i
)) is generic if the coefficient vec-
tor of p (resp., q) is generic in the spac e of coefficients. For multi-degrees a
1
, . . . , a
N
and degrees b
1,1
, b
1,2
, . . . , b
1,m
1
, b
2,1
, . . . , b
N,m
N
, we say the NEPP is generic if for
each i and j, the f
i
(x
1
, . . . , x
N
) is a generic polynomial with multi-degree a
i
, and
the g
i,j
(x
i
) is a generic polynomial whos e degree is b
i,j
.
2.1. Ideals and positive polynomials. Let F = R or C. For a polynomial
p F[z] and subsets I, J F[z], define the product a nd Minkowski sum
p · I
:
= { pq : q I}, I + J
:
= { a + b : a I, b J}.
The subset I is an ideal if p · I I for all p F[z] and I + I I. For a tuple of
polynomials q = (q
1
, . . . , q
m
), the set
Ideal[q]
:
= q
1
· F[z] + . . . + q
m
· F[z]
is the ideal generated by q, which is the smallest ideal containing each q
i
.
We review basic concepts in polynomial optimization. A polynomial σ R[z] is
said to be a sum of squares (SOS) if σ = s
2
1
+ s
2
2
+ . . . + s
2
t
for some polynomials
NASH EQUILIBRIUM PROBLEMS OF POLYNOMIALS 5
s
1
, . . . , s
t
R[z]. The set of all SOS polynomials in z is denoted as Σ[z]. For a
degree k, we denote the truncation
Σ[z]
2k
:
= Σ[z] R[z]
2k
.
For a tuple g = (g
1
, . . . , g
t
) of polynomials in z, its quadratic module is the set
Qmod[g]
:
= Σ[z] + g
1
· Σ[z] + . . . + g
t
· Σ[z].
Similarly, we denote the truncation of Qmod(g)
Qmod[g]
2k
:
= Σ[z]
2k
+ g
1
· Σ[z]
2kdeg(g
1
)
+ · · · + g
t
· Σ[z]
2kdeg(g
t
)
.
The tuple g determines the basic closed semi-alg ebraic s et
(2.1) S(g)
:
= {z R
l
: g(z) 0}.
For a tuple h = (h
1
, . . . , h
s
) of polynomials in R[z], its real zero set is
Z(h)
:
= {u R
l
: h
1
(u) = · · · = h
s
(u) = 0}.
The set Ideal[h] + Qmod[g] is said to be archimedean if there exists ρ Ideal[h] +
Qmod[g] such that the set S(ρ) is compact. If Ideal[h] + Qmod[g] is archimedean,
then Z(h) S(g) must be compact. Conversely, if Z(h) S(g) is compact, say,
Z(h) S(g) is contained in the ball R kzk
2
0, then Ide al(h) + Qmod(g, R
kzk
2
) is archimedean and Z(h) S(g) = Z(h) S(g, R kzk
2
). Clearly, if f
Ideal[h] + Qmod[g], then f 0 on Z(h) S(g). The reverse is not necessarily true.
However, when Ideal[h]+Qmod[g] is archimedean, if f > 0 on Z(h)S(g), then f
Ideal[h] + Qmod[g]. This conclusion is refer enced as Putinar’s Positivestellensatz
[50]. Interestingly, if f 0 on Z(h) S(g), we also have f Ideal[h] + Qmod[g],
under some standard optimality conditions [40].
2.2. Localizing and moment matrices. Let R
N
l
2k
denote the space of all real
vectors that are labeled by α N
l
2k
. Each y R
N
l
2k
is labeled as
y = (y
α
)
αN
l
2k
.
Such y is called a truncated multi-sequence (tms) of degre e 2k. For a p olynomial
f =
P
αN
l
2k
f
α
z
α
R[z]
2k
, define the operation
(2.2) hf, yi =
X
αN
l
2k
f
α
y
α
.
The operation hf, yi is a bilinear function in (f, y). For a polynomial q R[z] with
deg(q) 2k and the integer
t = k deg(q)/2,
the o uter product q · [z]
t
([z]
t
)
T
is a symmetric matrix po lynomial in z, with le ngth
n+t
t
. We write the expansion as
q · [z]
t
([z]
t
)
T
=
X
αN
l
2k
z
α
Q
α
,
for some symmetric matrices Q
α
. Then we define the matrix function
(2.3) L
(k)
q
[y]
:
=
X
αN
l
2k
y
α
Q
α
.
6 JIAWANG NIE AND XINDONG TANG
It is called the kth localizing matrix o f q and generated by y. For given q, L
(d)
q
[y] is
linear in y. Clearly, if q(u) 0 and y = [u]
2k
, then
L
(k)
q
[y] = q(u)[u]
t
[u]
T
t
0.
For instance, if l = k = 2 and q(z) = 1 z
1
z
1
z
2
, then
L
(2)
q
[y] =
y
00
y
10
y
11
y
10
y
20
y
21
y
01
y
11
y
12
y
10
y
20
y
21
y
20
y
30
y
31
y
11
y
21
y
22
y
01
y
11
y
12
y
11
y
21
y
22
y
02
y
12
y
13
.
When q is the constant one polynomial, the localizing ma trix L
(k)
1
[y] reduces to a
moment matrix, which we de note a s
(2.4) M
k
[y]
:
= L
(k)
1
[y].
For instance, for n = 2 and y R
N
2
4
, we have M
0
[y] = [y
00
],
M
1
[y] =
y
00
y
10
y
01
y
10
y
20
y
11
y
01
y
11
y
02
, M
2
[y] =
y
00
y
10
y
01
y
20
y
11
y
02
y
10
y
20
y
11
y
30
y
21
y
12
y
01
y
11
y
02
y
21
y
12
y
03
y
20
y
30
y
21
y
40
y
31
y
22
y
11
y
21
y
12
y
31
y
22
y
13
y
02
y
12
y
03
y
22
y
13
y
04
.
Localizing and moment matrices are basic tools to formulate semidefinite relax-
ations for polynomial optimization problems. They are important tools for solving
polynomial, matrix, and tensor optimization problems [20, 35, 36, 41, 48].
2.3. Optimality conditions for NEPs. Consider the ith player’s indiv idual op-
timization problem F
i
(x
i
) in (1.1), for given x
i
. Suppose E
i
I
i
= [m
i
] for some
m
i
N. For convenience, we write the constraining functions as
g
i
(x
i
)
:
= (g
i,1
(x
i
), . . . , g
i,m
i
(x
i
)).
Suppose x = (x
1
, . . . , x
N
) is an NE. Under linear independence constraint quali-
fication condition (LICQC) at x
i
, i.e., the set of gradients for active constraining
functions a re linearly independent, there exist Lagrange multipliers λ
i,j
such that
(2.5)
P
m
i
j=1
λ
i,j
x
i
g
i,j
(x
i
) =
x
i
f
i
(x),
0 λ
i,j
g
i,j
(x
i
) 0 (j I
i
).
In the above, λ
i,j
g
i,j
(x
i
) means that λ
i,j
· g
i,j
(x
i
) = 0. This is called the KKT
condition for the optimization F
i
(x
i
). We say a point x R
n
is a KKT point if
there exis t vectors of Lagrange multipliers λ
1
, . . . , λ
N
such that (2.5) holds. For
the NE x, if the LICQC of F
i
(x
i
) holds at x
i
for every i [N], then x must be
a KKT point. Moreover, if each player’s optimization problem is convex, i.e., the
f
i
(x
i
, x
i
) is convex in x
i
for all x
i
X
1
× · · · × X
i1
× X
i+1
× · · · × X
N
, and
every X
i
is a convex set, then all KKT points are NEs [11, Theorem 4.6].
Example 2.1. Consider the 2-player NEP in Example 1.1. Each individual opti-
mization is strictly convex, because Hessian matrices
2
x
1
f
1
and
2
x
2
f
2
are positive
NASH EQUILIBRIUM PROBLEMS OF POLYNOMIALS 7
definite. The constraints ar e the convex ball conditions. The KKT system is
(2.6)
2x
1,1
+ x
2,1
+ 4x
2,2
= 2λ
1
x
1,1
, 4x
1,2
= 2λ
1
x
1,2
,
x
1,1
+ 2x
1,2
+ 2x
2,1
= 2λ
2
x
2,1
, 2x
1,1
+ x
1,2
+ 2x
2,2
= 2λ
2
x
2,2
,
λ
1
(1 (x
1,1
)
2
(x
1,2
)
2
) = 0, λ
2
(1 (x
2,1
)
2
(x
2,2
)
2
) = 0,
1 (x
1,1
)
2
(x
1,2
)
2
0 , 1 (x
2,1
)
2
(x
2,2
)
2
0 ,
λ
1
0 , λ
2
0 .
By solving the ab ove directly, one can show that this NEP has only 3 KKT points,
together with Lagrange multipliers as follows
Nash equilibrium
Lagra nge multiplier
x
1
= (0, 0), x
2
= (0, 0), λ
1
= λ
2
= 0;
x
1
= (1, 0), x
2
=
1
5
(1, 2),
λ
1
=
9
5
10
1, λ
2
=
5
2
1;
x
1
= (1, 0), x
2
=
1
5
(1, 2),
λ
1
=
9
5
10
1, λ
2
=
5
2
1.
All these KKT points are NEs since the NEP is convex. Furthermore, sinc e for each
i = 1, 2, the LICQC of F
i
(x
i
) holds for all x X, these NEs are all solutions to
the NEP. This is very different from a single convex optimization problem, where
the set of minimizers, if it is none mpty, must be a singleton or have a n infinite
cardinality if the objective function is convex, and the minimizer has to be unique
if the objective function is further assumed to be strictly c onvex.
However, the KKT point may not be an NE of the NEP when there is no con-
vexity assumed. This is beca use the KKT condition (2.5) is typically not sufficient
for x
i
to be a minimizer of F
i
(x
i
), which makes nonconvex NEPs quite difficult
to solve. In this paper, we mainly focus on finding NEs fo r nonconvex NE Ps of
polynomials.
3. Polynomial optimization formulations
In this section, we s how how to formulate efficient polynomial optimization prob-
lems for solving the NEPP (1.3). We first introduce the polynomial expressions for
Lagra nge multiplier expressions in Section 3.1. Then, in Section 3.2, polynomial
optimization problems are formulated for finding NEs, and an algorithm to solve
nonconvex NE Ps is proposed. Convex NEPs of poly nomials are studied in Sec-
tion 3.3. L ast, we further extend our a pproach to find more NEs in Section 3.4.
3.1. Optimality conditions and Lagrange multiplier expressions. For the
NEP (1.3), if x is an NE where the LICQC is satisfied, then it must be a KKT
point, i.e., x s atisfies (2.5) for all i [N]. Therefo re, every NE must satisfy the
following equation system:
(3.1)
x
i
g
i,1
(x
i
)
x
i
g
i,2
(x
i
) · · ·
x
i
g
i,m
i
(x
i
)
g
i,1
(x
i
) 0 · · · 0
0 g
i,2
(x
i
) · · · 0
.
.
.
.
.
.
.
.
.
.
.
.
0 0 · · · g
i,m
i
(x
i
)
|
{z }
G
i
(x
i
)
λ
i,1
λ
i,2
.
.
.
λ
i,m
i
|
{z }
λ
i
=
x
i
f
i
(x)
0
.
.
.
0
|
{z }
ˆ
f
i
(x)
.
If there exists a matrix po lynomial H
i
(x
i
) such that
(3.2) H
i
(x
i
)G
i
(x
i
) = I
m
i
,
8 JIAWANG NIE AND XINDONG TANG
then we can express λ
i
as
λ
i
= H
i
(x
i
)G
i
(x
i
)λ
i
= H
i
(x
i
)
ˆ
f
i
(x).
Interestingly, the matrix polynomial H
i
(x
i
) satisfying (3.2) exists under the non-
singularity condition on g
i
. The po lynomial tuple g
i
is said to be nonsingular if
G
i
(x
i
) has full c olumn rank for all x
i
C
n
i
[42]. It is a ge ne ric condition [44,
Proposition 2.1]. We remark that if g
i
is nonsingular, then the LICQC holds at
every minimizer of (1.1), so there must exist λ
i,j
satisfying (2.5) and we can express
λ
i,j
as
(3.3) λ
i,j
= λ
i,j
(x)
:
=
H
i
(x
i
)
ˆ
f
i
(x)
j
for all NEs. For example, we consider the following two cases:
For the constraint {x
i
R
n
i
:
P
n
i
j=1
x
i,j
1, x
i
0}, the constr aining
polynomials are
g
i,0
= 1
n
i
X
j=1
x
i,j
, g
i,1
= x
i,1
, . . . , g
i,n
i
= x
i,n
i
.
If we let
H
i
(x
i
) =
1 x
i,1
x
i,2
. . . x
i,n
i
1 . . . 1
x
i,1
1 x
i,2
. . . x
i,n
i
1 . . . 1
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
x
i,1
x
i,2
. . . 1 x
i,n
i
1 . . . 1
x
i,1
x
i,2
. . . x
i,n
i
1 . . . 1
,
then one may check that the (3.2) holds. The Lagrange multipliers λ
i,j
can
be accordingly represented as
(3.4) λ
i,0
= x
T
i
x
i
f
i
, λ
i,j
=
f
i
x
i,j
x
T
i
x
i
f
i
, j = 1, . . . , n
i
.
For the sphere constraint 1 x
T
i
x
i
= 0 or the ball constraint 1 x
T
i
x
i
0 ,
the constraining polynomial is g
i,1
= 1 x
T
i
x
i
. If we let
H
i
(x
i
) =
1
2
x
i,1
1
2
x
i,2
. . .
1
2
x
i,n
i
1
,
then one may check that the (3.2) ho lds. The Lagrange multiplier can be
according ly expressed as
(3.5) λ
i,1
=
1
2
x
T
i
x
i
f
i
.
For genera l nonsingular constraining tuple, one may find H
i
(x
i
) satisfying (3.2 ) by
solving linear equations. We refer to [42] for more deta ils on g etting the polynomial
expressions of Lagrange multipliers.
Throughout the paper, we assume that every constraining polynomial tuple g
i
is nonsingular. This is a generic assumption. So all λ
i,j
can be expressed as poly-
nomials as in (3.3). Then, each Nash equilibrium satisfies the following polynomial
system
(3.6)
x
i
f
i
(x)
P
m
i
j=1
λ
i,j
(x)
x
i
g
i,j
(x
i
) = 0 (i [N ]),
g
i,j
(x
i
) = 0 (i [N], j E
i
), λ
i,j
(x)g
i,j
(x
i
) = 0 (i [N], j I
i
),
g
i,j
(x
i
) 0 (j I
i
), λ
i,j
(x) 0 (i [N ], j I
i
).
NASH EQUILIBRIUM PROBLEMS OF POLYNOMIALS 9
3.2. An alg orithm for finding an NE. For the NEP of polynomials (1.3), let
λ
i,j
(x) be polynomial Lagrange multiplier expr essions as in (3.3) for each i [N]
and j [m
i
]. Then every NE must satis fy the polynomial system (3.6). Choose a
generic positive definite matrix
Θ R
(n+1)×(n+1)
.
Then all NEs are feasible points for the following optimization problem
(3.7)
min
x
[x]
T
1
· Θ · [x]
1
s.t.
x
i
f
i
(x)
P
m
i
j=1
λ
i,j
(x)
x
i
g
i,j
(x
i
) = 0 (i [N ]),
g
i,j
(x
i
) = 0 (j E
i
, i [N]),
λ
i,j
(x)g
i,j
(x
i
) = 0 (j I
i
, i [N]),
g
i,j
(x
i
) 0 (j I
i
, i [N]),
λ
i,j
(x) 0 (j I
i
, i [N]).
In the above, the vector [x]
1
:
= (1, x
1
, x
2
, . . . , x
n
)
T
R
n+1
. Note that x R
n
is a KK T point for the NEP if and only if it is feasible for (3.7). It is important
to observe that if (3.7) is infeasible, then there are no NEs. If (3.7) is feasible,
then it must have a minimizer, bec ause its objective is a positive definite quadratic
function. Moreover, for a gene ric Θ R
(n+1)×(n+1)
, the minimizer of (3.7) is unique
(see Theorem 4.2). The (3.7) is a polynomial optimization pr oblem, which can be
solved by the Moment-SOS se midefinite relaxations (s ee Sectio n 4).
Assume that u
:
= (u
1
, . . . , u
N
) is an optimizer of (3.7). Then u is an NE if and
only if each u
i
is a minimizer of F
i
(u
i
). To this end, for each player, consider the
optimization problem:
(3.8)
ω
i
:
= min f
i
(x
i
, u
i
) f
i
(u
i
, u
i
)
s.t. g
i,j
(x
i
) = 0 (j E
i
),
g
i,j
(x
i
) 0 (j I
i
).
If all the optimal value s ω
i
0, then u is a Nash equilibrium. If one of them is
negative, say, ω
i
< 0, then u is not an NE. For such a case, let U
i
be a set of some
optimizers o f (3.8), then u violates the following inequalities
(3.9) f
i
(x
i
, x
i
) f
i
(v, x
i
) (v U
i
).
However, every Nas h equilibrium must satisfy (3.9).
When u is not an NE, we aim at finding a new candidate by posing the inequal-
ities in (3.9). Therefore, we consider the following optimization problem:
(3.10)
min
x
[x]
T
1
· Θ · [x]
1
s.t.
x
i
f
i
(x)
P
m
i
j=1
λ
i,j
(x)
x
i
g
i,j
(x
i
) = 0 (i [N ]),
g
i,j
(x
i
) = 0 (j E
i
, i [N]),
λ
i,j
(x)g
i,j
(x
i
) = 0 (j I
i
, i [N]),
g
i,j
(x
i
) 0 (j I
i
, i [N]),
λ
i,j
(x) 0 (j I
i
, i [N]),
f
i
(v, x
i
) f
i
(x
i
, x
i
) 0 (v K
i
, i [N]).
In the above, each K
i
is a set of some optimizers of (3.8). We solve (3.10) again
for a minimizer, say, ˆu. If ˆu is verified to be an NE, then we are done. If it is not,
we can add more inequalities like (3.9) to exclude both u and ˆu. Repeating this
procedure, we get the following algorithm for c omputing an NE.
Algorithm 3.1. For t he N EP given as in (1.1) and (1.3), do the following
10 JIAWANG NIE AND XINDONG TANG
Step 0 Initialize K
i
:
= for all i and
:
= 0. Choose a generic positive definite
matrix Θ of length n + 1.
Step 1 Solve the polynomial optimization problem (3.10). If it is infeasible, then
output that there is no NE and stop; otherwise, solve it for an optimizer u.
Step 2 For each i = 1, . . . , N, solve the optimization (3.8). If all ω
i
0, then
output the NE u and stop. If one of ω
i
is negative, then go to the next step.
Step 3 For each i with ω
i
< 0 , obtain a set U
i
of some (may not all) optimizers of
(3.8); then update the set K
i
:
= K
i
U
i
. Let
:
= + 1, then go to Step 1.
In the Step 0, we can set Θ = R
T
R for a randomly generated matrix R of
length n + 1. The objective in (3.10) is a positive definite quadratic function, so
it ha s a minimizer if (3.10) is feasible. The case is slightly different for (3.8). If
the feasible set X
i
is compac t or f
i
(x
i
, u
i
) is coercive for the given u
i
, then (3.8)
has a minimizer. If X
i
is unbounded and f
i
(x
i
, u
i
) is not coercive, it may be
difficult to co mpute the optimal value ω
i
. In applications, we are mostly interested
in cases that (3.8) has a minimizer, for the existence of an NE. We discuss how to
solve the optimization problems in Algorithm 3.1 by the Moment-SOS hierarchy of
semidefinite r elaxations in Sectio n 4.
The following is the convergence theo rem for Algorithm 3.1.
Theorem 3.2. Assume each constraining polynomial tu ple g
i
is nonsingular and
let λ
i,j
(x) be polynomial expressions of Lagrange multipliers as in (3.3). Let G be
the feasible set of (3.7 ) and G
be the set of all NEs. If the complement G\G
is a
finite set, i.e., the cardinality
:
= |G\G
| < , then Algorithm 3.1 must t erminate
within at most
loops.
Proof. Under the nonsing ularity assumption of polynomial tuples g
i
, the Lagrange
multipliers λ
i,j
can be expressed as polynomials λ
i,j
(x) as in (3.3). For each u that
is a fea sible point of (3 .7), every NE must satisfy the constraint
f
i
(u
i
, x
i
) f
i
(x
i
, x
i
) 0.
Therefore, every NE must also be a feasible point of (3.10). Since the matrix Θ
is positive definite, the optimization (3.10) must have a minimizer, unless it is
infeasible. When Algo rithm 3.1 goes to a newer loop, say, from the th to the
( + 1)th, the optimizer u for (3.10) in the th loop is no longer fea sible for (3 .10)
in the ( + 1)th loop. This means that the feasible set of (3.1 0) must loose at least
one point after each loop, unless an NE is met. Also note that the feasible set of
(3.10) is contained in G. If G\G
is a finite set, Algorithm 3.1 must terminate after
some loops. The number of loops is at most
.
As shown in the appendix, when the NEP is given by generic polynomials, the
NEP has finitely many KKT points (see Theorem A.1). For such cases, |G\G
|
|G| < and finite termination of Algorithm 3.1 is guaranteed.
Theorem 3.3. Let d
i,j
> 0, a
i,j
> 0 be degrees, for all i [N] , j [m
i
]. If each
g
i,j
is a generic polynomial in x
i
of degree d
i,j
and each f
i
is a generic polynomial
in x whose degree in x
j
is a
i,j
, then Algorithm 3.1 t erminates within finitely many
loops, i.e., it either finds an NE if there ex ists any, or detect nonexistence of N Es.
Proof. The conclusion follows directly from Theorems 3.2 and A.1.
When there exist infinitely many KKT points that are not NEs, Algor ithm 3.1
can still be applied to compute an NE if there exists one , or detect nonexistence of
NASH EQUILIBRIUM PROBLEMS OF POLYNOMIALS 11
NEs if they do not exist. See Example 5.2(ii) for such a case. However, for such
NEPPs, the convergence property of Algorithm 3.1 is not fully understood.
3.3. Convex NEPs. The NEP is said to be convex if for every i [N], the
f
i
(x
i
, x
i
) is c onvex in x
i
for all x
i
X
i
:
=
Q
j[N ]\{i}
X
j
, the g
i,j
(x
i
) is linear
for each j E
i
, and is concave for every j I
i
. For convex NEPs, every K KT
point must be an NE, since the KKT conditions are sufficient for global optimality.
Moreover, for c onvex NEPPs, when every constraining tuple g
i
is nonsingular,
the LICQC holds for all x X, and a point is an NE if and only if it satisfies the
KKT conditions. Note that the Lagrange multipliers can be expressed by polyno-
mials as in (3.3) when nonsingularity is assumed. For such cases, the s olution set for
(2.5) is exactly the set of NEs. Ther efore, if we solve the polynomial optimization
problem (3.10) with K
i
= for all i [N] (i.e., the polynomial optimization (3.7)),
then every minimizer, if the feasible set is nonempty, must be an NE. On the other
hand, if (3.10) is infeasible, then we immediately know the NEs do not exist. This
shows that, for convex NEPPs, Algorithm 3.1 must terminate at the initial loop.
Corollary 3.4. Assume each g
i
is a nonsingular tuple of polynomials. Suppose each
g
i,j
(x
i
) (j E
i
) is linear, each g
i,j
(x
i
) (j I
i
) is concave, and each f
i
(x
i
, x
i
) is
convex in x
i
for all x
i
X
i
. Then Algorithm 3.1 must terminate at the first
loop with = 0, returning an NE or reporting that there is no NE.
Example 3.5. Consider the convex NEP in Example 1.1. In this NEP, both
players have ball constraints, so their Lagrange multipliers can be expressed by
polynomials as in (3 .5). We ran Algorithm 3.1 for solving this NEP
2
, and found
the NE x
= (x
1
, x
2
) with
x
1
= (1.0000 , 0.0000), x
2
= (0.4472, 0 .8944)
in the initial loop. It took around 0 .88 second.
3.4. More Nash equilibri a. Algorithm 3.1 aims at finding a single NE. In some
applications, peo ple may be interested in more NEs. Moreover, when there is a
unique NE, people are also interested in a certificate for uniqueness.
In this subsection, we study how to find more NEs or check the completeness of
solution sets. Assume that x
is a Nas h equilibrium produced by Algorithm 3.1,
i.e., x
is also a minimizer o f (3.10). Then all K KT points x satisfying [x]
T
1
Θ[x]
1
<
[x
]
T
1
Θ[x
]
1
are excluded from the feasible set of (3.10) by the constraints
f
i
(u
i
, x
i
) f
i
(x
i
, x
i
) 0 ( u K
i
, i [N ]).
If x
is an isolated NE (e.g ., this is the case if there are finitely many NEs), there
exists a scalar δ > 0 such that
(3.11) [x]
T
1
Θ[x]
1
[x
]
T
1
Θ[x
]
1
+ δ
2
See Section 4 for how to solve polynomial optimization problems, and Section 5 for compu-
tational information.
12 JIAWANG NIE AND XINDONG TANG
for all other NEs x. For such a δ, we can try to find a different NE by solving the
following optimization problem
(3.12)
min
x
[x]
T
1
Θ[x]
1
s.t.
x
i
f
i
(x)
P
m
i
j=1
λ
i,j
(x)
x
i
g
i,j
(x
i
) = 0 (i [N ]),
g
i,j
(x
i
) = 0 (j E
i
, i [N]),
λ
i,j
(x)g
i,j
(x
i
) = 0 (j I
i
, i [N]),
g
i,j
(x
i
) 0 (j I
i
, i [N]),
λ
i,j
(x) 0 (j I
i
, i [N]),
f
i
(v, x
i
) f
i
(x
i
, x
i
) 0 (v K
i
, i [N]),
[x]
T
1
Θ[x]
1
[x
]
T
1
Θ[x
]
1
+ δ.
When an optimizer of (3.12) is computed, we can check if it is an NE or not by
solving (3.8) for all i [N]. If it is, we get a new NE that is different from x
. If
it is not, we update the set K
i
as in Step 3 of Algorithm 3.1. Repeating the above
process, we are able to get more Nash equilibria.
A concern in computation is how to choose the constant δ > 0 for (3.12). We
want a value δ > 0 such that (3.11) holds for all unknown NE s. To this end, we
consider the following maximization problem
(3.13)
max
x
[x]
T
1
Θ[x]
1
s.t.
x
i
f
i
(x)
P
m
i
j=1
λ
i,j
(x)
x
i
g
i,j
(x
i
) = 0 (i [N ]),
g
i,j
(x
i
) = 0 (j E
i
, i [N]),
λ
i,j
(x)g
i,j
(x
i
) = 0 (j I
i
, i [N]),
g
i,j
(x
i
) 0 (j I
i
, i [N]),
λ
i,j
(x) 0 (j I
i
, i [N]),
f
i
(v, x
i
) f
i
(x
i
, x
i
) 0 (v K
i
, i [N]),
[x]
T
1
Θ[x]
1
[x
]
T
1
Θ[x
]
1
+ δ.
Interestingly, if x
is also a maximizer of (3.13), i.e., the maximum of (3.13) eq uals
[x
]
T
Θ[x
]
1
, then the feasible set of (3.12) contains all NEs exc e pt x
, under some
general assumptions.
Propositi on 3.6. Ass ume Θ is a generic positive definite matrix, and x
is a
minimizer of (3.10).
(i) If x
is also a m aximizer of (3.13), then there is no other Nash equilibrium
u satisfying [u]
T
1
Θ[u]
1
[x
]
T
1
Θ[x
]
1
+ δ.
(ii) If x
is an isolated KKT point, then there exists δ > 0 such that x
is also
a maximizer of (3.13).
Proof. Note that every NE is a feasible p oint of (3.10).
(i) If x
is also a maximizer of (3.13), then the objective [x]
T
1
Θ[x]
1
achieves
a constant value in the following set of (3.13). If u is a Nash equilibrium with
[u]
T
1
Θ[u]
1
[x
]
T
1
Θ[x
]
1
+ δ, then
[u]
T
1
Θ[u]
1
= [x
]
T
1
Θ[x
]
1
.
This means that u is also a minimizer of (3.10). When Θ is a generic positive
definite matrix, the optimization (3.10) has a unique optimizer, so u = x
.
(ii) Since Θ is positive definite, there exists ǫ > 0 such that
[x]
T
1
Θ[x]
1
ǫ(1 + kxk)
2
NASH EQUILIBRIUM PROBLEMS OF POLYNOMIALS 13
for all x. Let C =
q
[x
]
T
1
Θ[x
]
1
, then the following s e t
T
:
=
y = [x]
2
x
i
f
i
(x)
P
m
i
j=1
λ
i,j
(x)
x
i
g
i,j
(x
i
) = 0 (i [N]),
g
i,j
(x
i
) = 0 (j E
i
, i [N]),
λ
i,j
(x)g
i,j
(x
i
) = 0 (j I
i
, i [N]),
g
i,j
(x
i
) 0 (j I
i
, i [N]),
λ
i,j
(x) 0 (j I
i
, i [N]),
f
i
(v, x
i
) f
i
(x
i
, x
i
) 0 (v K
i
, i [N]),
kxk C
is compact. No te that [x
]
2
T . Let θ be the vector such that
[x]
T
1
Θ[x]
1
= θ
T
y
for all y = [x]
2
. Since x
is an isolated KKT point, the y
:
= [x
]
2
is also an isolated
point of T . T he n its subset
T
1
:
= T \{y
}
is also a compact set. Since x
is a minimizer of (3.10), the hyperplane H
:
=
{θ
T
y = θ
T
y
} is a supporting hyperplane for the set T . Since Θ is generic, the
optimization (3.10) has a unique minimizer, which implies that y
is the unique
minimizer of the linear function θ
T
y on T . So, H does not intersect T
1
, and their
distance is positive. There exists a scalar τ > 0 such that
[x]
T
1
Θ[x]
1
= θ
T
y θ
T
y
+ τ = [x
]
T
1
Θ[x
]
1
+ τ
for all y = [x]
2
T
1
. Then, for the choice δ
:
= τ/2, the point x
is the only feasible
point for (3.13). Hence, x
is also a maximizer of (3.13).
Proposition 3.6 shows the existence of δ > 0 such that (3.10) and (3.13) have
the same optimal value. However, it does not give a concrete lower bound for δ. In
computational practice, we can first give a priori value for δ. If it does not work,
we can decrease δ to a smaller value (e.g., let δ
:
= δ/5). By rep eating this, the
optimization (3.13) will eventually have x
as a maximizer. The following is the
algorithm for finding an NE that is different from x
.
Algorithm 3.7. For the given NEP (1.3) and a computed NE x
, let Θ be the
positive definite matrix for computing x
.
Step 0 Give an initial value for δ (say, 0.1 ) .
Step 1 Solve the maximization problem (3.13). If its optimal value η equals υ
:
=
[x
]
T
1
Θ[x
]
1
, then go to Step 2. If η is bigger than υ, then let δ
:
= δ/5 and
repeat this step.
Step 2 Solve the optimization problem (3.12). If it is infeasible, then out pu t there
are no additional NEs and stop; otherwise, solve (3.12) for a minimizer u.
Step 3 For each i = 1, . . . , N, solve the optimization (3.8) for the optimal value
ω
i
. If all ω
i
0, stop and outpu t the new NE u. If one of ω
i
is negative,
then go to Step 4.
Step 4 For each i [N ], update the set K
i
:
= K
i
U
i
, and then go back to Step 2.
When x
is not an isolated KKT point, there may not exist a satisfactor y δ > 0
for Step 1. For such a case, more investigation is r e quired to verify the completeness
of the solution set or to find other NE s. However, for generic NEPs, there are
finitely many KKT points (see Theorem A.1 in the appendix ). The following is the
convergence re sult for Algorithm 3.7.
14 JIAWANG NIE AND XINDONG TANG
Theorem 3.8 . Under the same assumptions in Theorem 3.2, if Θ is a generic
positive definite matrix and x
is an isolated KKT point, then Algorithm 3.7 must
terminate after finitely many steps, either returning an NE that is different from
x
or reporting the nonexistence of other NEs.
Proof. Under the given assumptions, Proposition 3.6(ii) shows the existence of
δ > 0 satisfactory for the Step 1 of Algorithm 3.7. Again, by Proposition 3.6(i),
the feasible set of (3.12) contains all NEs except x
. The finite termination of
Algorithm 3.7 can be proved in the same way as for Theorem 3.2.
Once a new NE is obtained, we can repeatedly apply Algorithm 3.7, to compute
more NEs, if they exist. In particular, if there are finitely many NEs, then we
enumerate them as
x
(1)
, . . . , x
(s)
.
Without loss of generality, we assume
[x
(1)
]
T
1
Θ[x
(1)
]
1
< · · · < [x
(s)
]
T
1
Θ[x
(s)
]
1
,
since Θ is generic. If the first r NEs, say, x
(1)
, . . . , x
(r)
, are obtained, there exists
δ > 0 such that
[x
(j)
]
T
1
Θ[x
(j)
]
1
> [x
(r)
]
T
1
Θ[x
(r)
]
1
+ δ
for all j = r + 1, . . . , s. Therefore, if we apply Algorithm 3 .7 with x
= x
(r)
, the
next Nash equilibrium x
(r+1)
can be obtained, if it exists. Therefore , we have the
following conclusion.
Corollary 3. 9. Under the assumptions of Theorem 3.8, if there are finitely many
Nash equilibria, then all of them can be found by applying Algorithm 3.7 repeatedly.
Remark 3.10. Under the assumption of Theorem 3.3, the NEP has finitely many
KKT points. For such cases, Algorithm 3.7 can find all NEs and certify the com-
pleteness of solutions set within finitely many steps, by Corollary 3.9.
4. Solve polynomial optimization problems
In this sec tion, we discuss how to solve occur ring polynomial optimization prob-
lems in Algorithms 3.1 and 3.7. For the NEP, we assume the constr aining poly-
nomial tuples g
i
are all nonsingular. Therefore, the Lagrange multipliers λ
i,j
can
be expressed as polynomial functions λ
i,j
(x) as in (3.3) for all Nash equilibria. We
apply the Moment-SOS hierarchy of semidefinite relaxations [17, 26, 28, 30] for solv-
ing these polynomial optimization problems. New convergence results for solving
these polynomial optimization problems are given due to the usage of polynomial
expressions for Lagrange multipliers.
For the variable z such that z = x or z = x
i
for some i [N ], denote by l the
dimension of z. Consider the polynomial optimization problem in the variable z:
(4.1)
ϑ
:
= min
zR
l
θ(z)
s.t. p(z) = 0 ( p Φ),
q(z) 0 ( q Ψ).
In the above, Φ and Ψ are sets o f equality and inequality cons training polynomials,
respectively. Denote the degree
(4.2) d
0
:
= ma x{⌈deg(p)/2 : p {θ} Φ Ψ}.
NASH EQUILIBRIUM PROBLEMS OF POLYNOMIALS 15
For a degree k d
0
, recall that the set Ideal[Φ]
2k
+ Qmod[Ψ]
2k
is introduced in
Section 2.1. The kth order SOS relaxation for (4.1) is
(4.3)
ϑ
(k)
sos
:
= max γ
s.t. θ γ Ideal[Φ]
2k
+ Qmod[Ψ]
2k
.
The dual problem of (4.3) is the kth order moment relaxation
(4.4)
ϑ
(k)
mom
:
= min
y
hθ, yi
s.t. y
0
= 1 , L
(k)
p
[y] = 0 (p Φ),
M
d
[y] 0, L
(k)
q
[y] 0 (q Ψ),
y R
N
l
2k
,
where the moment matrix M
k
[y] and localizing matrices L
(k)
p
[y], L
(k)
q
[y] are given
by (2.3) a nd (2.4). Both (4.3) and (4.4) are semidefinite programs, and the primal-
dual pair is called the Moment-SOS relaxations for the polynomial optimizatio n
problem (4.1). If z R
l
is a feasible point of (4.1), then [z]
k
R
l
2k
must be a
feasible point of (4.4). Thus (4.1) has an empty feasible set if (4.4) is infeasible.
When (4.4) has a nonempty feasible set, it is clear that ϑ
(k)
mom
ϑ
(k)
sos
ϑ
for
all k, and both ϑ
(k)
mom
and ϑ
(k)
sos
are monotonica lly increasing. The following is the
Moment-SOS a lgorithm for solving (4.1).
Algorithm 4.1. For the polynomial optimization problem (4.1), let d
0
be the degree
given by (4.2).
Step 0 Initialize k
:
= d
0
.
Step 1 Solve the moment relaxation (4.4). If it is infeasible, then the polynomial
optimization problem (4.1) is infeasible and stop; otherwise, solve (4.4) for
the minimum value ϑ
(k)
mom
and a minimizer y
(k)
.
Step 2 Let t
:
= d
0
. If y
satisfies the rank condition
(4.5) rank M
t
[y
] = rank M
td
0
[y
],
then extract a set U
i
of r
:
= r ank M
t
(y
) minimizers for (4.1) and stop.
Step 3 If (4.5) fails t o hold and t < k, let t
:
= t+1 and then go to Step 2; otherwise,
let k
:
= k + 1 and go to Step 1.
Algorithm 4.1 is known as the Moment-SOS hierarchy of semidefinite r elaxations
[26]. We say the Moment-SOS hierarchy has asymptotic convergence if ϑ
(k)
sos
ϑ
as k , and we say it has finite convergence if ϑ
(k)
sos
= ϑ
for all k that is large
enough. For a general polynomial optimization problem, if Ideal[Φ] + Qmod[Ψ] is
archimedean, then ϑ
(k)
mom
ϑ
as k [2 6]. In Step 2, the rank condition (4.5)
is called flat truncation [37]. It is a sufficient (and almost necessary ) condition to
check the finite convergence of moment relaxations. When (4.5) holds, the method
in [18] can be used to extract r minimizers fo r (4.1). This method and Algorithm 4.1
are implemented in the software GloptiPoly 3 [19]. In the following subsections,
we study the convergence result of Algorithm 4.1 when it is applied for solving
(3.8), (3.10), (3.12) and (3.13).
4.1. The optimization for all players. We discuss the convergence of Algo-
rithm 4.1 for solving (3.10), (3.12) and (3.13).
16 JIAWANG NIE AND XINDONG TANG
First, we consider (3.10). Let
(4.6) z
:
= x, θ(x)
:
= [x]
T
1
Θ[x]
1
,
and we denote the polynomial tuples
(4.7)
Φ
i
:
=
n
x
i
f
i
(x)
P
m
i
j=1
λ
i,j
(x)
x
i
g
i,j
(x
i
)
o
n
g
i,j
(x
i
) : j E
i
o
n
λ
i,j
(x) · g
i,j
(x
i
) : j I
i
o
,
(4.8)
Ψ
i
:
=
n
g
i,j
(x
i
) : j I
i
o
n
λ
i,j
(x) : j I
i
o
n
f
i
(v, x
i
) f
i
(x
i
, x
i
) : v K
i
o
.
In the a bove, for a vector p = (p
1
, . . . , p
s
) of polynomials, the set {p} stands for
{p
1
, . . . , p
s
}, for notational convenience. Denote the unions
(4.9) Φ
:
=
N
[
i=1
Φ
i
, Ψ
:
=
N
[
i=1
Ψ
i
.
They are both finite sets of polynomials. Then, the optimization (3.10) can be
written as (4.1), and we may apply Algorithm 4.1 for solving it. Recall that e
i
is
the vector in R
n
such that its ith entry is 1 and all other entries are zero. For a tms
y R
N
n
2k
, the y
e
i
means the entry of y labelled by e
i
. For example, when n = 4,
y
e
2
= y
0100
. Let y
(k)
be a minimizer of the kth order moment relaxation (4.4) for
(3.10), and denote
(4.10) u
(k)
:
= (y
(k)
e
1
, y
(k)
e
2
, . . . , y
(k)
e
n
).
Then, u
(k)
is a minimizer of (3.10) if u
(k)
is feasible fo r (3.10) and
θ, y
(k)
= θ(u
(k)
).
Moreover, we have the following convergence result for solving (3.10):
Theorem 4.2 . For the polynomial optimization problem (3.10), assume Θ is a
generic positive definite matrix. Let z
:
= x, and let θ, Ψ, Φ be given as in (4.6)-
(4.9). Suppose Ideal[Φ] + Qmod[Ψ] is archimedean.
(i) If the optimization (3.10) is infeasible, then the moment relaxation (4.4)
must be infeasible when the order k is big enough.
(ii) Suppose the optimization (3.10) is feasible. Let u
(k)
be given as in (4.10).
Then u
(k)
converges to the unique minimizer of (3.10). In particular, if the
real zero set of Φ is finite, t hen u
(k)
is the unique minimizer of (3.10) and
(4.5) holds at y
(k)
with the rank equals 1 when k is sufficiently large.
Proof. (i) If (3.10) is infeasible, the constant polyno mial 1 can be viewed as
a positive polynomial on the feasible set of (3.10). Since Ideal[Φ] + Qmod[Ψ] is
archimedean, we have 1 Ideal[Φ]
2k
+ Qmod[Ψ]
2k
, for k big enough, by Putinar’s
Positivstellensatz [50]. For such a big k, the SOS relaxation (4.3) is unbounded
from above, hence the moment relaxation (4.4) must be infeasible.
(ii) When the optimization (3.10) is feasible, it must have minimizers. Let K be
the feasible set of (3 .10), and
R
2
(K)
:
= cone({[u]
2
: u K}).
NASH EQUILIBRIUM PROBLEMS OF POLYNOMIALS 17
In the a bove, the cone means the conic hull. Consider the moment optimization
problem
(4.11)
(
min
w
hθ, wi
s.t. y
0
= 1 , w R
2
(K).
If the matrix Θ is a generic positive definite matrix, then the function θ is generic
in Σ
n,2
. By [39, Proposition 5.2], the moment optimization problem (4.11) has
a unique minimizer. When (4.11) has minimizers, its minimum value equals ϑ
.
Suppose (3.10) has two distinct minimizers, say, x
(1)
and x
(2)
. Then, [x
(1)
]
2
and
[x
(2)
]
2
are two distinct minimizers of (4.11), a contradiction to the uniqueness of
the minimizer for (4.11). Therefore, (3.10) must have a unique minimizer x
when
Θ is generic.
The convergence of u
(k)
to x
is shown in [54] or [37, Theorem 3.3]. For the
special case that Φ(x) = 0 has finitely many real solutio ns, the point u
(k)
must
equal x
, when k is large enough. This is shown in [29] (also see [38]).
The archimedeanness of Ideal[Φ] + Qmod[Ψ] is essentially requiring that the
feasible set of (3.10) is compact. If the real zero set of Φ is compact, then Ideal[Φ]+
Qmod[Ψ] must be archimedean. In particular, if the NEPP has finitely many
real KKT points, then Ideal[Φ] + Qmod[Ψ] is archimedean. Interestingly, when
the objective and constraining polynomials are generic, there are finitely many
KKT points. See Theor em A.1 in the appendix. In fact, as shown in the pr oof of
Theorem A.1, the zero set of Φ is finite for generic NEPPs, and hence Algorithm 4.1
has finite convergence. Moreover, by Theorem 4.2, when Θ is generic and the
minimizer y
(k)
for (4.4) is obtained, one may let u
(k)
be given as in (4.10) and
directly check if u
(k)
is the unique minimizer or not, instead of checking the flat
truncation (4.5).
The other minimization problem (3.12) can be solved in the same way by Algo-
rithm 4.1. The convergence prope rty is the s ame. For the cleanness of the paper,
we omit the details.
For the maximization (3.13), we let z
:
= x and
(4.12) θ(x)
:
= [x]
T
1
Θ[x]
1
.
Recall that the polynomial tuples Φ
i
and Ψ
i
are given by (4.7-4.8). Denote the set
of polynomials
(4.13) Φ
:
=
N
[
i=1
Φ
i
, Ψ
:
=
N
[
i=1
Ψ
i
n
[x
]
T
1
Θ[x
]
1
+ δ [x]
T
1
Θ[x]
1
o
.
Then (3.13) can be equivalently written as (4.1). Similarly, Algorithm 4.1 can be
used to solve (3.13). The optimization (3.13) is always feasible because x
is a
feasible point. Therefore, the moment relaxation (4.4) is also feasible, and there
is no need to check its feasibility in Step 1 of Algorithm 4.1. Since the minimum
value ϑ
(k)
mom
is a lower bound of ϑ
, if ϑ
(k)
mom
[x
]
T
1
Θ[x
]
1
, then
ϑ
(k)
mom
= ϑ
= [x
]
T
1
Θ[x
]
1
,
and x
is a maximize r of (3.13). When ϑ
k
< [x
]
T
1
Θ[x
]
1
, the flat truncation
condition (4.5) can be applied for checking the finite convergence of the Moment-
SOS hierarchy. Under some classica l optimality conditions, we have ϑ
(k)
mom
= ϑ
when k is large enough [40]. Moreover, if the real zero set of Φ is finite, then the
18 JIAWANG NIE AND XINDONG TANG
Moment-SOS hierarchy has finite convergence and (4.5) holds [38]. We would like
to remark that when the NEP is given by generic polynomials, the c omplex zero
set of Φ is finite (see Theorem A.1), thus Algorithm 4.1 ha s finite convergence.
4.2. Checking Nash equilibria. Suppose u is a minimizer of (3.10). To check if
u = (u
i
, u
i
) is an NE or not, we need to solve the individual optimization (3.8)
for all i [N ].
For the given u R
n
and i [N], (3.8) is a polynomial optimization problem in
the variable x
i
. If (3.8) is unbounded fro m below, then u cannot be an NE, and the
point v for precluding u can be obtained by adding a suitable extra ball constr aint.
In the following, we suppose that the minimum of (3.8) is attainable. Since we
assume that the polynomial tuple g
i
(x
i
) is nonsingular, polynomial expressions for
Lagra nge multiplier expressions exist and can be applied to solve (3.8). Let λ
i
(x)
be the Lagr ange multiplier expressions in (3.3). Note that the nonsingularity of
g
i
implies the LIC QC holds at every x
i
X
i
. Every minimizer of (3.8) must be
a KKT point of (3.8). Therefore, (3.8) is equivalent to the following polynomial
optimization problem:
(4.14)
ω
i
:
= min
x
i
R
n
i
f
i
(x
i
, u
i
) f
i
(u
i
, u
i
)
s.t.
x
i
f
i
(x
i
, u
i
)
m
i
P
j=1
λ
i,j
(x
i
, u
i
)
x
i
g
i,j
(x
i
) = 0,
g
i,j
(x
i
, u
i
) = 0 (j E
i
),
g
i,j
(x
i
, u
i
)λ
i,j
(x
i
, u
i
) = 0 (j I
i
),
g
i,j
(x
i
, u
i
) 0 (j I
i
),
λ
i,j
(x
i
, u
i
) 0 (j I
i
).
We introduce the convergence result of Algorithm 4.1 for solving (4.14). Let
(4.15) z
:
= x
i
, θ(x
i
)
:
= f
i
(x
i
, u
i
) f
i
(u
i
, u
i
),
(4.16)
Φ
:
=
g
i,j
(x
i
) : j E
i
λ
i,j
(x
i
, u
i
) · g
i,j
(x
i
) : j I
i
x
i
f
i
(x
i
, u
i
)
X
m
i
j=1
λ
i,j
(x
i
, u
i
)
x
i
g
i,j
(x
i
)
,
(4.17) Ψ
:
=
g
i,j
(x
i
) : j I
i
λ
i,j
(x
i
, u
i
) : j I
i
.
Like earlier cases, the set {p} stands for {p
1
, . . . , p
s
}, when p = (p
1
, . . . , p
s
) is a
vector of polynomials. The n, the (4.14) can be rewritten as (4.1), and the Moment-
SOS relaxations of (4.14) are given by (4.3) and (4.4). We would like to rema rk
that the optimization (4.14) is always feasible since u
i
is in its feasible set. Thus the
moment relaxation (4.4) for (4.14) is also feasible, and there is no need to check the
feasibility for (4.4) in the firs t step of Algorithm 4.1. Moreover, the minimum ϑ
(k)
mom
of (4.4) is a lower bound for ω
i
, and ω
i
0 . If ϑ
(k)
mom
0 for some k d
0
, then ω
i
must be 0, and we can stop Algorithm 4.1 immediately because this implies that
u
i
is the minimizer fo r F
i
(u
i
). If ϑ
(k)
mom
< 0 , we need to apply the flat truncation
(4.5) to certify if the finite convergence for the Moment-SOS hierarchy is achieved
or not. The following theorem concerns the finite convergence of Algorithm 4.1 for
solving (4.14). Its proof follows from [4 7, Theorem 4.4].
Theorem 4.3. Assume the ith player’s constraining polynomial tuple g
i
is nons-
ingluar and its optimization (3.8) has a minimizer for the given u
i
. Let z
:
= x
i
,
NASH EQUILIBRIUM PROBLEMS OF POLYNOMIALS 19
and let θ, Ψ, Φ be given as in (4.15)-(4.17) . Assume either one of the following
conditions hold:
(i) The set Ideal[Φ] + Qmod[Ψ] is archimedean,
(ii) The real zero set of polynomials in H
i
(u) is finite.
If each minimizer of (4.14) is an isolated critical point, then all minimizers of (4.4)
must satisfy the flat truncation (4.5), for all k big enough. Therefore, Algorithm 4.1
must terminate within finitely many loops.
We remark that if Ideal[g
i,j
: j E
i
] + Qmod[g
i,j
: j I
i
] is archimedean, then
Ideal[Φ] + Qmod[Ψ] is also archimedean. Therefore, if the a rchimedeanness holds
for the ith player’s optimization (1.1), then the condition (i) in Theorem 4.3 is
satisfied.
5. Numerical Experiments
This section reports numerical experiments fo r solving NEPs by Algorithms 3.1
and 3.7. For all polynomial optimization problems appearing in the algorithms, we
apply the software GloptiPoly 3 [19] to formulate Moment-SOS semidefinite relax -
ations, and use SeDuMi [5 7] for solving these semidefinite programs. The computa-
tion is implemented in an Alienware Aurora R8 desktop, with an Intel
®
Core(TM)
i7-9700 CPU at 3.00GHz×8 and 16GB of RAM, in a Windows 10 operating system.
For ball and simplex constraints, the expressions are given by (3.4) and (3.5)
respectively. Polynomial expressions of Lagrange multipliers for other types of
constraints are given in the descriptio ns of each example. In Step 2 of Algorithm 3.1
and Step 3 of Algorithm 3.7, if the optimal value ω
i
0 for all players, then the
point u is an NE. In numerical computation, we cannot have ω
i
0 ex actly, due
to round-o errors. Therefore, we use the parameter
ω
:
= min
i=1,...,N
ω
i
to measure the accuracy of the computed NE. Typically, if ω
is small, say, ω
10
6
, then we regard the computed solution as an NE.
Example 5.1. For the convex NEP in Example 1.1, Algorithm 3 .1 found the NE
x
1
= (1.0000 , 0.0000), x
2
= (0.4472, 0 .8944)
in the first loop, as shown in Example 3.5. T he accuracy parameter is ω
= 7.9 793·
10
9
. Then, we ran Algorithm 3.7 and found two more NEs, which are
x
1
= (0.0000 , 0.0000), x
2
= (0.0000, 0 .0000), ω
= 1.4147 · 10
10
;
x
1
= (1.0000, 0.0000), x
2
= (0.4472 , 0.8944), ω
= 1.7829 · 10
8
.
Moreover, Algorithm 3.7 certified that these three NEs are all solutions to this
NEP. It took around 1.40 seconds to find these two additional NEs and certify the
completeness o f the solution s et.
In the following example, we show that our algorithm can find NEs for NEPs
which have infinitely many KKT points.
Example 5.2. (i) Consider the convex NEP
1st player:
(
min
x
1
R
2
(x
1,1
+ x
1,2
x
2,1
x
2,2
)
2
,
s.t. 1 (x
1,1
)
2
(x
1,2
)
2
0 ,
20 JIAWANG NIE AND XINDONG TANG
2nd player:
(
min
x
2
R
2
(x
2,1
x
1,1
)
2
+ (1 x
2,2
)
2
s.t. 1 x
2,1
x
2,2
0 , x
2,1
0, x
2,2
0 ,
then one may check that fo r each α [0, 1/2], x
1
= (2α, 1 2α), x
2
= (α, 1 α) is
an NE. Applying Algorithm 3 .1, we got the NE:
x
1
= (0.9247, 0 .0753), x
2
= (0.4624, 0.53 76), ω
= 2.1 940 · 10
8
.
The computation to ok about 0.19 second.
(ii) Consider the NEP
1st player:
(
min
x
1
R
2
x
2,1
(x
1,1
)
2
x
2,2
x
1,1
+ (x
2,2
1
2
)x
1,2
s.t. 1 (x
1,1
)
2
(x
1,2
)
2
0 ,
2nd player:
(
min
x
2
R
2
x
1,2
x
2,1
+ (x
2,2
1
2
)
2
s.t. 1 x
2,1
x
2,2
0 , x
2,1
0, x
2,2
0 .
One may check that for each α [
1
4
, 0), the x
1
= (α, 0), x
2
= (
1
4α
,
1
2
) is a KKT
point that is not a n NE. Applying Algorithm 3.1, we got the NE
x
1
= (1.0000, 0.0 000), x
2
= (0.4259, 0 .5000), ω
= 6.2 187 · 10
9
.
The computation to ok about 0.33 second.
Example 5.3. In this example, we c onsider NEPs with box constraints such that
every x
i
R
1
. For each i [N ], the ith player’s feasible set is given by
1 + x
i
0 , 1 x
i
0.
Then, the associated Lagrange multipliers can be expressed as
λ
i,1
=
1
2
f
i
x
i
· (1 x
i
), λ
i,2
= λ
i,1
f
i
x
i
.
(i) Consider the two-player zero-sum game with box constraints in [49, Exam-
ple 3.1] (s ee a lso [23, Example 1]), where the objective functions are
f
1
(x
1
, x
2
) = (x
1
)
2
2x
1
(x
2
)
2
+ x
2
, f
2
(x
1
, x
2
) = f
2
(x
1
, x
2
).
Applying Algorithm 3.1, we got the NE:
x
1
= 0 .3969, x
2
= 0.63 00, ω
= 2.9179 · 10
11
in the initial loop. It took around 0 .54 second.
(ii) Cons ide r the two-player game with box constraints in [56, Example 2.3] (see
also [23, Example 2]), where the objective functions are
f
1
(x
1
, x
2
) = 2(x
1
)
3
+ 3(x
1
x
2
)
2
2x
1
x
2
+ x
1
3(x
2
)
3
,
f
2
(x
1
, x
2
) = 4(x
2
)
3
2(x
1
x
2
)
2
+ (x
1
)
2
(x
1
)
2
x
2
4x
2
.
Applying Algorithm 3.1, we detected no ne xistence of NEs in the third loo p
3
. It
took around 0.85 second.
3
We remark that for this NEP, as well as the NEP in Example 5.3(iii), though a (pure strategy)
NE does not exist, there exist mixed strategy solutions. See [23, 56] for more details.
NASH EQUILIBRIUM PROBLEMS OF POLYNOMIALS 21
(iii) Consider the generalization of separable network games in [23, Example 5].
The objective functions are
f
1
(x
1
, x
2
, x
3
) = 2(x
1
)
2
+ 2x
1
(x
2
)
2
5x
1
x
2
+ 4x
1
x
3
+ x
2
+ 2x
3
,
f
2
(x
1
, x
2
, x
3
) = 2(x
2
)
2
2x
1
(x
2
)
2
+ 5x
1
x
2
5x
2
x
3
+ 2x
2
(x
3
)
2
x
2
+ 2(x
1
)
2
,
f
3
(x
1
, x
2
, x
3
) = 2x
2
(x
3
)
2
4x
1
x
3
+ 5x
2
x
3
2x
3
4(x
1
)
2
2(x
2
)
2
.
Applying Algorithm 3.1, we detected none xistence of NEs in the second loop. It
took around 0.90 second.
For all NEPs in the following examples except Example 5.7, our method found
all NEs with certified completeness of solution sets. In the following, we only report
the numerical result of finding all solutions, unless spe c ific ally mentioned, for the
neatness of this paper.
Example 5.4. Consider the 2-player NEP
1st player:
(
min
x
1
R
3
P
3
j=1
x
1,j
(x
1,j
j · x
2,j
)
s.t. 1 x
1,1
x
1,2
0, 1 x
1,2
x
1,3
0 , x
1,1
0 ,
2nd player:
min
x
2
R
3
Q
3
j=1
x
2,j
+
P
1i<j3
1k3
x
1,i
x
1,j
x
2,k
+
P
1i3
1j<k3
x
1,i
x
2,j
x
2,k
s.t. 1 (x
2,1
)
2
(x
2,2
)
2
= 0.
The first player’s optimization is non-convex, with an unbo unded feasible set. The
Lagra nge multipliers for the first player’s optimization are
λ
1,1
= (1 x
1,1
x
1,2
)
f
1
x
1,1
, λ
1,2
= x
1,1
f
1
x
1,2
, λ
1,3
= x
1,1
f
1
x
1,1
x
1,2
f
1
x
1,2
.
Applying Algorithm 3.7, we got four NEs:
x
1
= (0.3198, 0 .6396, 0.6396), x
2
= (0.6396, 0.63 96, 0.4264);
x
1
= (0.0000, 0 .3895, 0.5842), x
2
= (0.8346, 0.3 895, 0.3895);
x
1
= (0.2934, 0.5578, 0.8803), x
2
= (0.5869, 0.5 578, 0.5869);
x
1
= (0.0000, 0.5774, 0.8660), x
2
= (0.5774, 0 .5774, 0.5774).
Their accuracy parameters are respectively
7.1879 · 10
8
, 3.5040 · 10
7
, 4.3732 · 10
7
, 6.4360 · 10
7
.
It took about 30 seconds.
However, if the second player's objective becomes
Π_{j=1}^{3} x_{2,j} + Σ_{1≤i≤3, 1≤j<k≤3} x_{1,i}x_{2,j}x_{2,k} - Σ_{1≤i<j≤3, 1≤k≤3} x_{1,i}x_{1,j}x_{2,k},
then there is no NE, which was detected by Algorithm 3.1. It took around 16 seconds.
Example 5.5. Consider the 3-player NEP

1st player:  min_{x_1 ∈ R^2}  (2x_{1,1}x_{1,2} + 3) · x_{1,1}x_{2,1} + [(2x_{1,2})^2 + (x_{3,2})^2] · x_{1,2}
             s.t.  1 - x_1^T x_1 ≥ 0,
2nd player:  min_{x_2 ∈ R^2}  [(x_{2,1})^2 - x_{1,2}] · x_{2,1} + [(x_{2,2})^2 + 2x_{3,2} + x_{1,2}x_{3,1}] · x_{2,2}
             s.t.  x_2^T x_2 - 1 = 0,  x_{2,1} ≥ 0,  x_{2,2} ≥ 0,
3rd player:  min_{x_3 ∈ R^2}  (x_{1,1}x_{1,2} - 1) · x_{3,1} - [3(x_{3,2})^2 + 1] · x_{3,2} + 2[x_{3,1} + x_{3,2}] · x_{3,1}x_{3,2}
             s.t.  1 - (x_{3,1})^2 ≥ 0,  1 - (x_{3,2})^2 ≥ 0.
The Lagrange multipliers can be represented as
λ_{2,1} = (1/2) · (x_2^T ∇_{x_2} f_2),  λ_{2,2} = ∂f_2/∂x_{2,1} - 2x_{2,1}λ_{2,1},  λ_{2,3} = ∂f_2/∂x_{2,2} - 2x_{2,2}λ_{2,1},
λ_{3,1} = -(x_{3,1}/2) · ∂f_3/∂x_{3,1},  λ_{3,2} = -(x_{3,2}/2) · ∂f_3/∂x_{3,2}.
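As a brief justification of the first expression: multiplying the second player's stationarity condition ∇_{x_2}f_2 = 2λ_{2,1}x_2 + λ_{2,2}e_1 + λ_{2,3}e_2 (where e_1, e_2 are the unit coordinate vectors) on the left by x_2^T, and using x_2^T x_2 = 1 together with the complementarity conditions λ_{2,2}x_{2,1} = 0 and λ_{2,3}x_{2,2} = 0, gives x_2^T ∇_{x_2}f_2 = 2λ_{2,1}. The expressions for λ_{2,2}, λ_{2,3} are the componentwise stationarity conditions, and λ_{3,1}, λ_{3,2} follow similarly from the third player's complementarity conditions.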
Applying Algorithm 3.7, we got the unique NE
x_1^* = (0.3558, 0.9346),  x_2^* = (1.0000, 0.0000),  x_3^* = (0.3331, 1.0000).
The accuracy parameter is 9.2310 · 10^{-9}. It took around 9 seconds.
Nonetheless, if the third player's objective becomes -f_1(x) - f_2(x), then the NEP becomes a zero-sum game and there is no NE, which was detected by Algorithm 3.1. It took around 3 seconds.
Example 5.6. Consider the 2-player NEP

1st player:  min_{x_1 ∈ R^2}  2x_{1,1}x_{1,2} + 3x_{1,1}(x_{2,1})^2 + 3(x_{1,2})^2 x_{2,2}
             s.t.  (x_{1,1})^2 + (x_{1,2})^2 - 1 ≥ 0,
                   2 - (x_{1,1})^2 - (x_{1,2})^2 ≥ 0,

2nd player:  min_{x_2 ∈ R^2}  (x_{2,1})^3 + (x_{2,2})^3 + x_{1,1}(x_{2,1})^2 + x_{1,2}(x_{2,2})^2 + x_{1,1}x_{1,2}(x_{2,1} + x_{2,2})
             s.t.  (x_{2,1})^2 + (x_{2,2})^2 - 1 ≥ 0,
                   2 - (x_{2,1})^2 - (x_{2,2})^2 ≥ 0.
The Lagrange multipliers can be represented as (i = 1, 2):
λ_{i,1} = (1/2) · (∇_{x_i}f_i)^T x_i · (2 - x_i^T x_i),  λ_{i,2} = (1/4) · (∇_{x_i}f_i)^T x_i · (1 - x_i^T x_i).
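These can be verified directly from the KKT conditions: with g_{i,1} = x_i^T x_i - 1 and g_{i,2} = 2 - x_i^T x_i, left-multiplying the stationarity condition ∇_{x_i}f_i = 2λ_{i,1}x_i - 2λ_{i,2}x_i by x_i^T and using the complementarity conditions λ_{i,1}(x_i^T x_i - 1) = 0 and λ_{i,2}(2 - x_i^T x_i) = 0 shows that both displayed expressions agree with the true multipliers whenever either constraint is active, and both vanish when neither is active.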
By Algorithm 3.7, we got the unique NE
x_1^* = (1.3339, 0.4698),  x_2^* = (1.4118, 0.0820),
with the accuracy parameter 3.5186 · 10^{-8}. It took around 5 seconds.
Example 5.7. Consider the NEP

1st player:  min_{x_1 ∈ R^{n_1}}  Σ_{1≤i≤j≤n_1} x_{1,i}x_{1,j}(x_{2,i} + x_{2,j})
             s.t.  1 - (x_{1,1}^2 + ··· + x_{1,n_1}^2) = 0,

2nd player:  min_{x_2 ∈ R^{n_2}}  Σ_{1≤i≤j≤n_2} x_{2,i}x_{2,j}(x_{1,i} + x_{1,j})
             s.t.  1 - (x_{2,1}^2 + ··· + x_{2,n_2}^2) = 0,
where n_1 = n_2. We ran Algorithm 3.7 for the cases n_1 = n_2 = 3, 4, 5, 6. The computational results are shown in Table 1. In the table, n_1 is the dimension of the variables x_1 and x_2, the column 'NE' shows the computed solutions to the NEP, and ω^* is the accuracy parameter. All time consumptions are displayed in seconds. Because of the relatively large amount of computational time, we only compute one NE for each case.
Table 1. Computational results for Example 5.7

n_1   NE                                                           ω^*              time
3     x_1^* = (0.5774, 0.5774, 0.5774)                             1.0689 · 10^{-7}   1.31
      x_2^* = (0.5774, 0.5774, 0.5774)
4     x_1^* = (0.8381, 0.5024, 0.0328, 0.2098)                     1.4459 · 10^{-9}   62.85
      x_2^* = (0.1791, 0.0683, 0.4066, 0.8933)
5     x_1^* = (0.8466, 0.4407, 0.1744, 0.0101, 0.2418)             2.7551 · 10^{-9}   682.67
      x_2^* = (0.1944, 0.0512, 0.1238, 0.3370, 0.9114)
6     x_1^* = (0.8026, 0.4724, 0.1799, 0.1799, 0.0637, 0.2527)     7.0354 · 10^{-9}   18079.99
      x_2^* = (0.1979, 0.0772, 0.1091, 0.1091, 0.4040, 0.8762)
We would like to remark that our method can also be applied to solve unconstrained NEPs where all individual optimization problems have no constraints, or equivalently, the feasible set X_i for (1.1) is the entire space R^{n_i}. For unconstrained NEPs, the KKT system (2.5) becomes
∇_{x_i} f_i(x) = 0,  i = 1, . . . , N,
and Algorithms 3.1 and 3.7 can be implemented in the same way.
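For an unconstrained NEP this KKT system is just a square polynomial system in x, and for very small instances it can be solved directly by computer algebra. The following is a minimal sketch with sympy for a hypothetical two-player instance (the objectives f_1 = x_1^4 + x_1x_2 and f_2 = x_2^4 - x_1x_2 are illustrative and are not one of the examples above); since each objective here happens to be convex in the player's own variable, every solution of the system is an NE, whereas in general the Moment-SOS machinery of Algorithms 3.1 and 3.7 is needed to pick out the solutions that are actually NEs.

    import sympy as sp

    x1, x2 = sp.symbols('x1 x2', real=True)

    # Hypothetical unconstrained 2-player NEP (illustration only)
    f1 = x1**4 + x1*x2      # player 1 minimizes over x1
    f2 = x2**4 - x1*x2      # player 2 minimizes over x2

    # KKT system for unconstrained NEPs: grad_{x_i} f_i(x) = 0 for all i
    kkt = [sp.diff(f1, x1), sp.diff(f2, x2)]
    solutions = sp.solve(kkt, [x1, x2], dict=True)
    print(solutions)        # the only real solution is x1 = x2 = 0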
Example 5.8. Consider the unconstrained NEP

1st player:  min  Σ_{i=1}^{n_1} (x_{1,i})^4 + ( Σ_{0≤i≤j≤k≤n_1} x_{1,i}x_{1,j}(x_{1,k} + x_{2,i} + x_{3,j}) ) / (n_1)^2
             s.t.  x_1 ∈ R^{n_1},

2nd player:  min  Σ_{i=1}^{n_2} (x_{2,i})^4 + ( Σ_{0≤i≤j≤k≤n_2} x_{2,i}x_{2,j}(x_{2,k} + x_{3,i} + x_{1,j}) ) / (n_2)^2
             s.t.  x_2 ∈ R^{n_2},

3rd player:  min  Σ_{i=1}^{n_3} (x_{3,i})^4 + ( Σ_{0≤i≤j≤k≤n_3} x_{3,i}x_{3,j}(x_{3,k} + x_{1,i} + x_{2,j}) ) / (n_3)^2
             s.t.  x_3 ∈ R^{n_3},

where x_{1,0} = x_{2,0} = x_{3,0} = 1, and n_1 = n_2 = n_3. We implement Algorithm 3.7 for the cases n_1 = n_2 = n_3 = 2, 3, 4, 5, 6. For all cases, we computed an NE successfully and obtained that x_1^* = x_2^* = x_3^* (up to round-off errors); there is a unique NE for each case. The computational results are reported in Table 2, where the time is displayed in seconds.
Table 2. The computational results for Example 5.8.

n_1   x_1^* = x_2^* = x_3^*                                  ω^*              time
2     (0.8410, 0.7125)                                       8.8291 · 10^{-9}   0.34
3     (0.6743, 0.6157, 0.5236)                               6.6507 · 10^{-9}   1.58
4     (0.5950, 0.5606, 0.5097, 0.4363)                       1.0577 · 10^{-9}   16.86
5     (0.5476, 0.5247, 0.4919, 0.4472, 0.3860)               4.4438 · 10^{-9}   177.63
6     (0.5157, 0.4992, 0.4762, 0.4457, 0.4060, 0.3534)       3.7536 · 10^{-9}   1379.27

The following are some examples of NEPs from applications.

Example 5.9. Consider the environmental pollution control problem for three countries for the case of autarky [4]. Let x_{i,1} (i = 1, 2, 3) denote the (gross) emissions from the ith country. The revenue of the ith country depends on x_{i,1}; e.g., a typical one is x_{i,1}(b_i - (1/2)x_{i,1}). The variable x_{i,2} represents the investment by the ith country in local environmental projects. The net emission in country i is x_{i,1} - γ_i x_{i,2}, which is always nonnegative and must be kept below or equal to a certain prescribed level E_i > 0 under an environmental constraint. The damage cost of the ith country is assumed to be d_i(x_{i,1} - γ_i x_{i,2}) + Σ_{j≠i} c_{i,j} x_{i,2} x_{j,1}. For given
parameters b_i, c_{i,j}, d_i, γ_i, E_i, the ith (i = 1, 2, 3) country's optimization problem is

min_{x_i ∈ R^2}  -x_{i,1}(b_i - (1/2)x_{i,1}) + (x_{i,2})^2/2 + d_i(x_{i,1} - γ_i x_{i,2}) + Σ_{j≠i} c_{i,j} x_{i,2} x_{j,1}
s.t.  x_{i,2} ≥ 0,  x_{i,1} ≤ b_i,
      0 ≤ x_{i,1} - γ_i x_{i,2} ≤ E_i.
We consider the general case that b_i ≠ E_i. The Lagrange multipliers can be expressed as
λ_{i,4} = (1/((b_i - E_i)E_i)) · [ (∂f_i/∂x_{i,2}) · x_{i,2}(x_{i,1} - γ_i x_{i,2}) - (∂f_i/∂x_{i,1}) · (b_i - x_{i,1})(x_{i,1} - γ_i x_{i,2}) ],
λ_{i,3} = (1/b_i) · [ (b_i - x_{i,1})(∂f_i/∂x_{i,1} + λ_{i,4}) - x_{i,2}(∂f_i/∂x_{i,2} - γ_i λ_{i,4}) ],
λ_{i,2} = λ_{i,3} - λ_{i,4} - ∂f_i/∂x_{i,1},
λ_{i,1} = ∂f_i/∂x_{i,2} + γ_i λ_{i,3} - γ_i λ_{i,4}.
We solve the NEP for the following typical parameters:
b_1 = 1.5,  b_2 = 2,  b_3 = 1.8,  c_{1,2} = 0.2,  c_{1,3} = 0.3,  c_{2,1} = 0.4,
c_{2,3} = 0.2,  c_{3,1} = 0.5,  c_{3,2} = 0.1,  d_1 = 0.8,  d_2 = 1.2,  d_3 = 1.0,
E_1 = 3,  E_2 = 4,  E_3 = 2,  γ_1 = 0.7,  γ_2 = 0.5,  γ_3 = 0.9.
By Algorithm 3.7, we got the unique NE
x_1^* = (0.7000, 0.1600),  x_2^* = (0.8000, 0.1600),  x_3^* = (0.8000, 0.4700),
with the accuracy parameter 1.1059 · 10^{-9}. It took about 10 seconds.
Example 5.10. Consider the NEP of the electricity market problem [6]. There are three generating companies, and the ith company possesses s_i generating units. For the ith company, the power generation of its jth generating unit is denoted by x_{i,j}. Assume 0 ≤ x_{i,j} ≤ E_{i,j}, where the nonzero parameter E_{i,j} represents its maximum capacity, and the cost of this generating unit is (1/2)c_{i,j}(x_{i,j})^2 + d_{i,j}x_{i,j}, where c_{i,j}, d_{i,j} are parameters. The electricity price is given by
φ(x) := b - a · ( Σ_{i=1}^{3} Σ_{j=1}^{s_i} x_{i,j} ).
The aim of each company is to maximize its profit, that is, to solve the following optimization problem:

ith player:  min_{x_i ∈ R^{s_i}}  Σ_{j=1}^{s_i} ( (1/2)c_{i,j}(x_{i,j})^2 + d_{i,j}x_{i,j} ) - φ(x) · ( Σ_{j=1}^{s_i} x_{i,j} )
             s.t.  0 ≤ x_{i,j} ≤ E_{i,j}  (j ∈ [s_i]).
The Lagrange multipliers associated to the constraints g_{i,2j-1} := E_{i,j} - x_{i,j} ≥ 0, g_{i,2j} := x_{i,j} ≥ 0 can be represented as
λ_{i,2j-1} = -(∂f_i/∂x_{i,j}) · x_{i,j}/E_{i,j},  λ_{i,2j} = ∂f_i/∂x_{i,j} + λ_{i,2j-1}  (j ∈ [s_i]).
For the following parameters
s_i = i,  a = 1,  b = 10,
c_{1,1} = 0.4,  c_{2,1} = 0.35,  c_{2,2} = 0.35,  c_{3,1} = 0.46,  c_{3,2} = 0.5,  c_{3,3} = 0.5,
d_{1,1} = 2,  d_{2,1} = 1.75,  d_{2,2} = 1,  d_{3,1} = 2.25,  d_{3,2} = 3,  d_{3,3} = 3,
E_{1,1} = 2,  E_{2,1} = 2.5,  E_{2,2} = 0.67,  E_{3,1} = 1.2,  E_{3,2} = 1.8,  E_{3,3} = 1.6,
we ran Algorithm 3.7 and found the unique NE
x_1^* = 1.7184,  x_2^* = (1.8413, 0.6700),  x_3^* = (1.2000, 0.0823, 0.0823).
The accuracy parameter is 5.1183 · 10^{-7}. It took about 8 seconds.
6. Conclusions and Discussions

This paper studies Nash equilibrium problems that are given by polynomial functions. Algorithms 3.1 and 3.7 are proposed for computing one or all NEs. The Moment-SOS hierarchy of semidefinite relaxations is used to solve the appearing polynomial optimization problems. Under generic assumptions, we can compute a Nash equilibrium if it exists, and detect its nonexistence if there is none. Moreover, we can get all Nash equilibria if there are finitely many of them.

In [45], a semidefinite relaxation method using rational and parametric Lagrange multiplier expressions is proposed for solving convex GNEPs. Under some general conditions, the method in [45] is guaranteed to find one GNE or detect nonexistence of GNEs. The NEPs considered in this work are special cases of GNEPs, since they can be viewed as GNEPs where every player's feasible set is independent of other players' strategies. Moreover, for convex NEPs, Algorithm 3.1 reduces to [45, Algorithm 5.3] and terminates at Step 2 in the first loop, as shown in Corollary 3.4. In contrast, this paper mainly focuses on solving nonconvex NEPs, and the main difficulty of problems in the scope of this paper is brought by nonconvexity. The major differences between the contributions of this paper and those in [45] are as follows:
• In this paper, we primarily focus on nonconvex NEPs of polynomials. One of our main contributions is an algorithm that finds NEs for nonconvex NEPs, if they exist. Note that when no convexity is assumed, every block x_i^* of the NE x^* is the global minimizer of F_i(x_{-i}^*), which is usually nonconvex. For nonconvex NEPs, the KKT conditions are typically not sufficient for global optimality, thus the updating scheme K_i := K_i ∪ U_i in Step 3 of Algorithm 3.1 is applied to preclude KKT points that are not NEs. Therefore, we usually need to solve a sequence of polynomial optimization problems to get NEs. In comparison, [45] concerns GNEPs where every player solves a convex optimization problem. There, once a KKT point is obtained with some constraint qualification conditions being satisfied, this KKT point must be a GNE. So there is no need to preclude any KKT point, and usually only one polynomial optimization problem needs to be solved for a GNE. Indeed, convex NEPs are studied in Section 3.3, which is the intersection of the problems considered in this work and in [45]. One can easily see from our discussion in Sections 3.2 and 3.3 that it is much more difficult to solve NEPs without any convexity assumption.
• The goal of the method in [45] is to find just one GNE, and it cannot check whether the computed GNE is unique or not. In comparison, Algorithm 3.7 proposed in Section 3.4 aims to find more NEs. Furthermore, when there are finitely many NEs, Algorithm 3.7 can find all of them and certify the completeness of the computed solution set, under some general conditions. To the best of the authors' knowledge, there is no other numerical method that can achieve such computational goals for general NEPs given by polynomials.
• Algorithms 3.1 and 3.7 assume that all constraining polynomial tuples g_i are nonsingular, so that there exist polynomial expressions for the Lagrange multipliers. When the NEP is given by generic polynomials, nonsingularity is satisfied for all i ∈ [N]. However, polynomial Lagrange multiplier expressions typically do not exist for GNEPs. For such cases, one may consider the corresponding Lagrange multipliers as new variables, but this is often computationally expensive, especially when there are many constraints. In [45], rational and parametric Lagrange multiplier expressions are studied for solving convex GNEPs. For NEPs, when constraints are singular, rational and parametric Lagrange multiplier expressions can also be applied to find NEs. Nonetheless, the convergence results in Theorem 3.2 and Corollary 3.9 may no longer hold, since there may exist NEs that are not KKT points when polynomial expressions for Lagrange multipliers do not exist.
There is much interesting future work to do. If there are only finitely many KKT points that are not NEs, Algorithm 3.1 must terminate within finitely many loops. This is shown in Theorem 3.2. For generic NEPPs, the finiteness of KKT points is shown in Theorem A.1. However, the convergence property of Algorithm 3.1 is not known when there are infinitely many KKT points. In Example 5.2(ii), there are infinitely many KKT points that are not NEs, but Algorithm 3.1 is still able to get an NE in a few loops. If there are infinitely many KKT points that are not NEs, does Algorithm 3.1 still converge to find an NE? This question is mostly open to the authors.
It is important to compute NEs efficiently for large-scale NEPs. Even for unconstrained NEPs, the kth order moment relaxation for (3.7) is a semidefinite program with O(n^{2k}) variables. Algorithm 3.1 may not be computationally practical for solving large-scale NEPs. Sparse polynomial optimization problems are studied in [27, 43, 59, 60, 61, 62]. Recently, the software TSSOS [31], which implements the term and correlative sparse Moment-SOS relaxations, was developed. In Algorithms 3.1 and 3.7, polynomial optimization problems are formulated to find NEs, and one may implement sparse Moment-SOS relaxations for solving these polynomial optimization problems. However, even if each player's optimization problem F_i(x_{-i}) in an NEPP is sparse, the polynomial optimization problem (3.7) may not be sparse. This is because both the polynomial expressions of Lagrange multipliers and the KKT system may consist of dense polynomials (see [51] for more details). Therefore, how to exploit sparsity to find NEs efficiently for large-scale NEPs is an important direction for future work.
Nonconvex NEPs may or may not have NEs, even if all feasible sets are compact. For each i ∈ [N], let B_i be the set of Borel probability measures supported in X_i. Define the measure function
Γ_i(μ_1, . . . , μ_N) := ∫_{X_1} ··· ∫_{X_N} f_i(x_1, . . . , x_N) dμ_1 ··· dμ_N.
The mixed strategy extension for the NEP (1.3) is to find (μ_1^*, . . . , μ_N^*) ∈ B_1 × ··· × B_N such that
(6.1)  Γ_i(μ_1^*, . . . , μ_{i-1}^*, μ_i^*, μ_{i+1}^*, . . . , μ_N^*) ≤ Γ_i(μ_1^*, . . . , μ_{i-1}^*, μ_i, μ_{i+1}^*, . . . , μ_N^*)
holds for all i ∈ [N] and for all μ_i ∈ B_i. Such a (μ_1^*, . . . , μ_N^*) is called a mixed strategy solution and it always exists [13]. Mixed strategy solutions to finite games are studied in [2, 3, 8, 21, 33, 63]. The mixed strategy extensions of general continuous NEPs are typically difficult to solve because it is a computational challenge to do operations with measures. However, when the functions are polynomials, the mixed strategy extension can be equivalently expressed in terms of moment variables. We discuss how this can be done in the following.
For the NEPP (1.3), let a_{i,j} be the degree of f_i in x_j and let
b_j = max{a_{1,j}, . . . , a_{N,j}}.
Let T^{(i)} be the Nth order tensor such that for all u_j = [x_j]_{b_j} and j ∈ [N],
f_i(x) = T^{(i)}(u_1, . . . , u_N) := Σ_{k_1,...,k_N} T^{(i)}_{k_1,...,k_N} (u_1)_{k_1} ··· (u_N)_{k_N}.
Denote the set X̂_i := { [x_i]_{b_i} : x_i ∈ X_i }, and let conv(X̂_i) be the convex hull of X̂_i. For a probability measure μ_i ∈ B_i, if u_i = ∫_{X_i} [x_i]_{b_i} dμ_i, then we have u_i ∈ conv(X̂_i) (see [17, 28, 30]). Since f_i is a polynomial, for every (μ_1, . . . , μ_N) ∈ B_1 × ··· × B_N, there exists (u_1, . . . , u_N) ∈ conv(X̂_1) × ··· × conv(X̂_N) such that
(6.2)  ∫_{X_1} ··· ∫_{X_N} f_i(x_1, . . . , x_N) dμ_1 ··· dμ_N = T^{(i)}(u_1, . . . , u_N).
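As a toy illustration (not one of the examples above), consider a two-player game with univariate strategies and f_1(x_1, x_2) = x_1x_2. Then b_1 = b_2 = 1, u_i = (1, ∫ x_i dμ_i), and
Γ_1(μ_1, μ_2) = ∫∫ x_1x_2 dμ_1 dμ_2 = (∫ x_1 dμ_1)(∫ x_2 dμ_2),
so the mixed strategy value depends on the measures only through their first moments, exactly as asserted by (6.2).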
Conversely, for each (u_1, . . . , u_N) ∈ conv(X̂_1) × ··· × conv(X̂_N), there exist probability measures μ_1, . . . , μ_N such that each μ_i ∈ B_i and (6.2) holds. Therefore, the mixed strategy extension of the NEPP (1.3) is equivalent to its convex moment relaxation: find a tuple
(u_1^*, . . . , u_N^*) ∈ conv(X̂_1) × ··· × conv(X̂_N)
such that for each i = 1, . . . , N,
T^{(i)}(u_1^*, . . . , u_{i-1}^*, u_i, u_{i+1}^*, . . . , u_N^*) ≥ T^{(i)}(u_1^*, . . . , u_N^*)
for all u_i ∈ conv(X̂_i). Moreover, if each u_i^* is an extreme point of conv(X̂_i), then one can get an NE for the original NEPP from (u_1^*, . . . , u_N^*). We refer to [25] for moment game problems, and to [10, 49, 56] for more details on mixed-strategy solutions to polynomial games.
Acknowledgements. The authors would like to thank the editors and anonymous referees for fruitful suggestions. The first author is partially supported by the NSF grant DMS-2110780.
Appendix

Appendix A. Finiteness of KKT points for generic NEPPs

The finiteness of KKT points implies that Algorithms 3.1 and 3.7 have finite termination. In the following, we discuss the finiteness of KKT points for generic NEPPs.
After enumerating all possibilities of active inequality constraints, we can generally consider the case that (1.1) only has equality constraints. Consequently, the length m_i of the ith player's constraining polynomial tuple can be assumed to be less than or equal to n_i, the dimension of its strategy x_i. To prove the finiteness, we can ignore the sign conditions λ_{i,j} ≥ 0 for the Lagrange multipliers. Then the KKT system for all players is
for all players is
(A.1)
P
m
i
j=1
λ
i,j
x
i
g
i,j
(x
i
) =
x
i
f
i
(x) (i [N ]),
g
i,j
(x
i
) = 0 (i [N], j [m
i
]).
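Note that (A.1) is a square polynomial system: counting the n_i stationarity equations for each player and the m_i constraint equations, it consists of n_1 + ··· + n_N + m_1 + ··· + m_N equations in the same number of unknowns (x, λ); the theorem below confirms that, for generic data, its solution set is indeed finite.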
When the objectives f_i are generic polynomials in x and each g_{i,j} is a generic polynomial in x_i, we show that (A.1) has finitely many complex solutions.
Theorem A.1. Let d_{i,j} > 0, a_{i,j} > 0 be degrees for all i ∈ [N] and j ∈ [m_i]. If each g_{i,j} is a generic polynomial in x_i of degree d_{i,j}, and each f_i is a generic polynomial in x, whose degree in x_j is a_{i,j}, then the KKT system (A.1) has finitely many complex solutions and hence the NEP has finitely many KKT points.
Proof. For each player i = 1, . . . , N, denote
b_i := a_{i,i} - 1 + d_{i,1} + ··· + d_{i,m_i} - m_i,
x̃_i := (x_{i,0}, x_{i,1}, . . . , x_{i,n_i}),  x̃ := (x̃_1, . . . , x̃_N).
The homogenization of g_{i,j} is g̃_{i,j}, a form in x̃_i. Let P^{n_i} be the n_i-dimensional projective space over the complex field. Consider the projective varieties
U_i := { (x̃_1, . . . , x̃_N) ∈ P^{n_1} × ··· × P^{n_N} : g̃_i(x̃_i) = 0 },  i = 1, . . . , N,
U := U_1 ∩ ··· ∩ U_N.
When all g_{i,j} are generic polynomials in x_i, the codimension of U_i is m_i (see [16]), so U has codimension m_1 + ··· + m_N.
The ith player's objective f_i is a polynomial in x = (x_1, . . . , x_N); we denote the multi-homogenization of f_i(x_i, x_{-i}) as
f̃_i(x̃_i, x̃_{-i}) := f_i(x_1/x_{1,0}, . . . , x_N/x_{N,0}) · ( Π_{j=1}^{N} (x_{j,0})^{a_{i,j}} ).
It is a multi-homogeneous polynomial in x̃. For each i, consider the determinantal variety (here ∇_{x_i} denotes the gradient with respect to x_i)
W_i := { x ∈ C^n : rank[ ∇_{x_i}f_i(x)  ∇_{x_i}g_{i,1}(x_i)  ···  ∇_{x_i}g_{i,m_i}(x_i) ] ≤ m_i }.
Its multi-homogenization is
W̃_i := { x̃ : rank[ ∇_{x_i}f̃_i(x̃)  ∇_{x_i}g̃_{i,1}(x̃_i)  ···  ∇_{x_i}g̃_{i,m_i}(x̃_i) ] ≤ m_i }.
The matrix in the above can be explicitly written as
J_i(x̃_i, x̃_{-i}) :=
[ ∂f̃_i/∂x_{i,1}     ∂g̃_{i,1}/∂x_{i,1}     ···   ∂g̃_{i,m_i}/∂x_{i,1}   ]
[ ∂f̃_i/∂x_{i,2}     ∂g̃_{i,1}/∂x_{i,2}     ···   ∂g̃_{i,m_i}/∂x_{i,2}   ]
[       ⋮                   ⋮               ⋱            ⋮             ]
[ ∂f̃_i/∂x_{i,n_i}   ∂g̃_{i,1}/∂x_{i,n_i}   ···   ∂g̃_{i,m_i}/∂x_{i,n_i} ].
The (m_i + 1)-by-(m_i + 1) minors of the matrix J_i are homogeneous in x̃_i of degree b_i. They are homogeneous in x̃_j of degree a_{i,j}, for j ≠ i. By [44, Proposition 2.1], when the g_{i,j} are generic polynomials in x_i, the right m_i columns of J_i are linearly independent for all x̃ ∈ U_i. That is, for every x̃ ∈ U_i, there must exist a nonzero m_i-by-m_i minor from the right m_i columns of J_i. In the following, we consider fixed generic polynomials g_{i,j}.
First, we show that U ∩ W̃_1 has codimension n_1 + m_2 + ··· + m_N. Let V be the projective variety consisting of all equivalence classes of the vectors
(A.2)  m_1(x̃) := [x̃_1]^{hom}_{b_1} ⊗ [x̃_2]^{hom}_{a_{1,2}} ⊗ ··· ⊗ [x̃_N]^{hom}_{a_{1,N}},
for equivalence classes of x̃ ∈ U. In the above, ⊗ denotes the Kronecker product, and [u]^{hom}_d denotes the vector of all monomials in u of degree equal to d. In other words, [u]^{hom}_d is the subvector of [u]_d consisting of the monomials of the highest degree d. Note that U is birational to V (consider the natural embedding φ : U ↪ V such that φ(x̃) = m_1(x̃)). So U and V have the same codimension [55]. For each subset I ⊆ [n_1] of cardinality m_1, we use det_I J_1 to denote the m_1-by-m_1 minor of J_1 for the submatrix whose row indices are in I and whose columns are the right-hand-side m_1 columns. Then
W̃_1 = ⋃_{I⊆[n_1], |I|=m_1} X_I,  where  X_I := { x̃ : rank J_1(x̃) ≤ m_1, det_I J_1(x̃) ≠ 0 }.
For each I, we have x̃ ∈ X_I if and only if the (m_1 + 1)-by-(m_1 + 1) minors of J_1 corresponding to the row indices I ∪ {ℓ}, with ℓ ∈ [n_1]\I, are equal to zero. There are in total n_1 - m_1 such minors. Vanishing of these (m_1 + 1)-by-(m_1 + 1) minors of J_1 gives n_1 - m_1 linear equations in the vector m_1(x̃) as in (A.2). The coefficients of these linear equations are linearly parameterized by the coefficients of f_1. Therefore, when f_1 has generic coefficients, the set
Y_I := { m_1(x̃) : x̃ ∈ X_I ∩ U }
is the intersection of V with hyperplanes given by n_1 - m_1 generic linear equations. Since X_I ∩ U is birational to Y_I, they have the same codimension, so the codimension of X_I ∩ U is n_1 + m_2 + ··· + m_N. This conclusion is true for all the above subsets I. Since
U ∩ W̃_1 = ⋃_{I⊆[n_1], |I|=m_1} X_I ∩ U,
the codimension of U ∩ W̃_1 is equal to n_1 + m_2 + ··· + m_N.
Second, we repeat the above argument to show that
(U ∩ W̃_1) ∩ W̃_2
has codimension n_1 + n_2 + m_3 + ··· + m_N. Let V' be the projective variety consisting of all equivalence classes of the vectors
(A.3)  m_2(x̃) := [x̃_1]^{hom}_{a_{2,1}} ⊗ [x̃_2]^{hom}_{b_2} ⊗ [x̃_3]^{hom}_{a_{2,3}} ⊗ ··· ⊗ [x̃_N]^{hom}_{a_{2,N}}
for equivalence classes of x̃ ∈ U ∩ W̃_1. Note that U ∩ W̃_1 is birational to V', so they have the same codimension. Similarly, we have
W̃_2 = ⋃_{I⊆[n_2], |I|=m_2} X_I,  where  X_I := { x̃ : rank J_2(x̃) ≤ m_2, det_I J_2(x̃) ≠ 0 }.
When f_2 has generic coefficients, the set
Y_I := { m_2(x̃) : x̃ ∈ X_I ∩ U ∩ W̃_1 }
is the intersection of V' with n_2 - m_2 generic hyperplanes of codimension one. Since X_I ∩ U ∩ W̃_1 is birational to Y_I, they have the same dimension, so the codimension of X_I ∩ U ∩ W̃_1 is n_1 + n_2 + m_3 + ··· + m_N. This conclusion is true for all Y_I. Last, because
U ∩ W̃_1 ∩ W̃_2 = ⋃_{I⊆[n_2], |I|=m_2} X_I ∩ U ∩ W̃_1,
we know U ∩ W̃_1 ∩ W̃_2 has codimension n_1 + n_2 + m_3 + ··· + m_N.
Similarly, by repeating the above, we can eventually show that
U ∩ W̃_1 ∩ W̃_2 ∩ ··· ∩ W̃_N
has codimension n_1 + n_2 + ··· + n_N. This implies that the KKT system (A.1) has codimension n_1 + n_2 + ··· + n_N, i.e., the dimension of the solution set of (A.1) is zero. So, there are finitely many complex KKT points. □
References
[1] L. Adam, R. Horčík, T. Kasl, and T. Kroupa. Double oracle algorithm for computing equilibria in continuous games. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 5070–5077, 2021.
[2] A. A. Ahmadi and J. Zhang. Semidefinite programming and Nash equilibria in bimatrix
games. INFORMS Journal on Computing, 33(2):607–628, 2021.
[3] J.-P. Aubin. Optima and equilibria: an introduction to nonlinear analysis, volume 140. Springer Science & Business Media, 2002.
[4] M. Breton, G. Zaccour, and M. Zahaf. A game-theoretic formulation of joint implementation
of environmental projects. European Journal of Operational Research, 168(1):221–239, 2006.
[5] Y. Chen, G. Lan, and Y. Ouyang. Optimal primal-dual methods for a class of saddle point
problems. SIAM Journal on Optimization, 24(4):1779–1814, 2014.
[6] J. Contreras, M. Klusch, and J. B. Krawczyk. Numerical solutions to Nash-Cournot equilibria
in coupled constraint electricity markets. IEEE Transactions on Power Systems, 19(1):195–
206, 2004.
[7] E. Couzoudis and P. Renner. Computing generalized Nash equilibria by polynomial program-
ming. Mathematical Methods of Operations Research, 77(3):459–472, 2013.
[8] C. Daskalakis, P. W. Goldberg, and C. H. Papadimitriou. The complexity of computing a
Nash equilibrium. SIAM Journal on Computing, 39(1):195–259, 2009.
[9] R. S. Datta. Finding all Nash equilibria of a finite game using polynomial algebra. Economic
Theory, 42(1):55–96, 2010.
[10] M. Dresher, S. Karlin, and L. Shapley. Polynomial games. Contributions to the Theory of Games I, 24:161–180, 2016.
[11] F. Facchinei and C. Kanzow. Generalized Nash equilibrium problems. Annals of Operations
Research, 175(1):177–211, 2010.
[12] F. Farnia and A. Ozdaglar. Do GANs always have Nash equilibria? In International Confer-
ence on Machine Learning, pages 3029–3039. PMLR, 2020.
[13] I. L. Glicksberg. A further generalization of the Kakutani fixed point theorem, with ap-
plication to Nash equilibrium points. Proceedings of the American Mathematical Society,
3(1):170–174, 1952.
[14] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial networks. Communications of the ACM, 63(11):139–144, 2020.
[15] G. Gürkan and J.-S. Pang. Approximations of Nash equilibria. Mathematical Programming, 117(1):223–253, 2009.
[16] J. Harris. Algebraic geometry: a first course, volume 133. Springer Science & Business Media, 2013.
[17] D. Henrion, M. Korda, and J. B. Lasserre. Moment-SOS Hierarchy, The: Lectures In Prob-
ability, Statistics, Computational Geometry, Control And Nonlinear Pdes, volume 4. World
Scientific, 2020.
[18] D. Henrion and J.-B. Lasserre. Detecting global optimality and extracting solutions in Glop-
tipoly. In Positive polynomials in control, pages 293–310. Springer, 2005.
[19] D. Henrion, J.-B. Lasserre, and J. Löfberg. Gloptipoly 3: moments, optimization and semidefinite programming. Optimization Methods & Software, 24(4-5):761–779, 2009.
[20] C. Hillar and J. Nie. An elementary and constructive solution to Hilbert's 17th problem for matrices. Proceedings of the American Mathematical Society, 136(1):73–76, 2008.
[21] S. C. Kontogiannis, P. N. Panagopoulou, and P. G. Spirakis. Polynomial algorithms for
approximating Nash equilibria of bimatrix games. In International Workshop on Internet
and Network Economics, pages 286–296. Springer, 2006.
[22] J. B. Krawczyk and S. Uryasev. Relaxation algorithms to find Nash equilibria with economic
applications. Environmental Modeling & Assessment, 5(1):63–73, 2000.
[23] T. Kroupa and T. Votroubek. Multiple oracle algorithm to solve continuous games. In Decision and Game Theory for Security: 13th International Conference, GameSec 2022, Pittsburgh, PA, USA, October 26–28, 2022, Proceedings, pages 149–167. Springer, 2023.
[24] A. A. Kulkarni and U. V. Shanbhag. On the variational equilibrium as a refinement of the generalized Nash equilibrium. Automatica, 48(1):45–55, 2012.
[25] R. Laraki and J. B. Lasserre. Semidefinite programming for min–max problems and games.
Mathematical Programming, 131(1):305–332, 2012.
[26] J. B. Lasserre. Global optimization with polynomials and the problem of moments. SIAM
Journal on optimization, 11(3):796–817, 2001.
[27] J. B. Lasserre. Convergent SDP-relaxations in polynomial optimization with sparsity. SIAM
Journal on Optimization, 17(3):822–843, 2006.
[28] J. B. Lasserre. An introduction to polynomial and semi-algebraic optimization, volume 52.
Cambridge University Press, 2015.
[29] J. B. Lasserre, M. Laurent, and P. Rostalski. Semidefinite characterization and computation of zero-dimensional real radical ideals. Foundations of Computational Mathematics, 8(5):607–647, 2008.
[30] M. Laurent. Sums of squares, moment matrices and optimization over polynomials. In Emerging applications of algebraic geometry, pages 157–270. Springer, 2009.
[31] V. Magron and J. Wang. TSSOS: a Julia library to exploit sparsity for large-scale polynomial
optimization. arXiv preprint arXiv:2103.00915, 2021.
[32] E. Maskin. Nash equilibrium and welfare optimality. The Review of Economic Studies, 66(1):23–38, 1999.
[33] J. Nash. Non-cooperative games. Annals of mathematics, pages 286–295, 1951.
[34] A. Nedić and A. Ozdaglar. Subgradient methods for saddle-point problems. Journal of Optimization Theory and Applications, 142(1):205–228, 2009.
[35] J. Nie. Polynomial matrix inequality and semidefinite representation. Mathematics of opera-
tions research, 36(3):398–415, 2011.
[36] J. Nie. Sum of squares methods for minimizing polynomial forms over spheres and hypersurfaces. Frontiers of Mathematics in China, 7(2):321–346, 2012.
[37] J. Nie. Certifying convergence of Lasserre’s hierarchy via flat truncation. Mathematical Pro-
gramming, 142(1):485–510, 2013.
[38] J. Nie. Polynomial optimization with real varieties. SIAM Journal On Optimization,
23(3):1634–1646, 2013.
[39] J. Nie. The A-truncated K-moment problem. Foundations of Computational Mathematics,
14(6):1243–1276, 2014.
[40] J. Nie. Optimality conditions and finite convergence of Lasserre’s hierarchy. Mathematical
Programming, 146(1):97–121, 2014.
[41] J. Nie. Symmetric tensor nuclear norms. SIAM Journal on Applied Algebra and Geometry,
1(1):599–625, 2017.
[42] J. Nie. Tight relaxations for polynomial optimization and Lagrange multiplier expressions.
Mathematical Programming, 178(1):1–37, 2019.
[43] J. Nie and J. Demmel. Sparse SOS relaxations for minimizing functions that are summations
of small polynomials. SIAM Journal on Optimization, 19(4):1534–1558, 2009.
[44] J. Nie and K. Ranestad. Algebraic degree of polynomial optimization. SIAM Journal on
Optimization, 20(1):485–502, 2009.
[45] J. Nie and X. Tang. Convex generalized Nash equilibrium problems and polynomial optimiza-
tion. Mathematical Programming, 198:1485–1518, 2023.
[46] J. Nie, X. Tang, and L. Xu. The Gauss–Seidel method for generalized Nash equilibrium problems of polynomials. Computational Optimization and Applications, 78(2):529–557, 2021.
[47] J. Nie, Z. Yang, and G. Zhou. The saddle point problem of polynomials. Foundations of Computational Mathematics, 22(4):1133–1169, 2022.
[48] J. Nie and X. Zhang. Real eigenvalues of nonsymmetric tensors. Computational Optimization
and Applications, 70(1):1–32, 2018.
[49] P. A. Parrilo. Polynomial games and sum of squares optimization. In Proceedings of the 45th
IEEE Conference on Decision and Control, pages 2855–2860. IEEE, 2006.
[50] M. Putinar. Positive polynomials on compact semi-algebraic sets. Indiana University Math-
ematics Journal, 42(3):969–984, 1993.
[51] Z. Qu and X. Tang. A correlative sparse Lagrange multiplier expression relaxation for polynomial optimization. arXiv preprint arXiv:2208.03979, 2022.
[52] L. J. Ratliff, S. A. Burden, and S. S. Sastry. Characterization and computation of local Nash equilibria in continuous games. In 2013 51st Annual Allerton Conference on Communication, Control, and Computing (Allerton), pages 917–924. IEEE, 2013.
[53] N. Schofield and I. Sened. Local Nash equilibrium in multiparty politics. Annals of Operations
Research, 109(1):193–211, 2002.
[54] M. Schweighofer. Optimization of polynomials on compact semialgebraic sets. SIAM Journal
on Optimization, 15(3):805–825, 2005.
[55] I. R. Shafarevich. Basic Algebraic Geometry 1: Varieties in Projective Space. Springer Science
& Business Media, 2013.
[56] N. D. Stein, A. Ozdaglar, and P. A. Parrilo. Separable and low-rank continuous games.
International Journal of Game Theory, 37(4):475–504, 2008.
[57] J. F. Sturm. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods and Software, 11(1-4):625–653, 1999.
[58] S. Uryas’ev and R. Y. Rubinstein. On relaxation algorithms in computation of noncooperative
equilibria. IEEE Transactions on Automatic Control, 39(6):1263–1267, 1994.
[59] H. Waki, S. Kim, M. Kojima, and M. Muramatsu. Sums of squares and semidefinite program
relaxations for polynomial optimization problems with structured sparsity. SIAM Journal on
Optimization, 17(1):218–242, 2006.
[60] J. Wang, V. Magron, and J.-B. Lasserre. Chordal-TSSOS: a moment-SOS hierarchy that exploits term sparsity with chordal extension. SIAM Journal on Optimization, 31(1):114–141, 2021.
[61] J. Wang, V. Magron, and J.-B. Lasserre. TSSOS: A moment-SOS hierarchy that exploits term sparsity. SIAM Journal on Optimization, 31(1):30–58, 2021.
[62] J. Wang, V. Magron, J. B. Lasserre, and N. H. A. Mai. CS-TSSOS: Correlative and term sparsity for large-scale polynomial optimization. ACM Transactions on Mathematical Software, 48(4):1–26, 2022.
[63] P. Young and S. Zamir. Handbook of game theory. Elsevier, 2014.
Jiawang Nie, Department of Mathematics, University of California San Diego, 9500
Gilman Drive, La Jolla, CA, USA, 92093.
Email address: njw@math.ucsd.edu
Xindong Tang, Department of Applied Mathematics, The Hong Kong Polytechnic
University, Hung Hom, Kowloon, Hong Kong.
Email address: xindong.tang@polyu.edu.hk