Abstract: 
We study infinite stochastic games played by n players on a finite graph with goals specified by sets of infinite traces. The games are concurrent (each player simultaneously and independently chooses an action at each round), stochastic (the next state is determined by a probability distribution depending on the current state and the chosen actions), infinite (the game continues for an infinite number of rounds), nonzero-sum (the players' goals are not necessarily conflicting), and undiscounted. We show that if each player has an upward-closed objective, then there exists an ε-Nash equilibrium in memoryless strategies, for every ε > 0; exact Nash equilibria need not exist. Upward-closure of an objective means that if a set Z of infinitely repeating states is winning, then all supersets of Z of infinitely repeating states are also winning. Memoryless strategies are strategies that are independent of the history of plays and depend only on the current state. We also study the complexity of finding the values (payoff profile) of an ε-Nash equilibrium. We show that the values of an ε-Nash equilibrium in nonzero-sum concurrent games with upward-closed objectives for all players can be computed by a polynomial procedure together with the computation of ε-Nash equilibrium values of nonzero-sum concurrent games with reachability objectives for all players. As a consequence we establish that the values of an ε-Nash equilibrium can be computed in TFNP (total functional NP), and hence in EXPTIME.
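The notion of upward-closure defined above can be made concrete with a small sketch (not from the paper; all names here are illustrative): an objective over a finite state set is represented by the family of sets Z of infinitely repeating states deemed winning, and upward-closure requires every superset of a winning set to be winning as well.

```python
# Sketch: checking upward-closure of an objective over a finite state set.
# An objective is a family of "winning" sets Z of infinitely repeating states;
# it is upward-closed if every superset of a winning set is also winning.
from itertools import chain, combinations

def powerset(states):
    s = list(states)
    return [frozenset(c)
            for c in chain.from_iterable(combinations(s, r)
                                         for r in range(len(s) + 1))]

def is_upward_closed(winning, states):
    # winning: a set of frozensets of states deemed winning
    return all(sup in winning
               for z in winning
               for sup in powerset(states)
               if z <= sup)

states = {0, 1, 2}

# A Buechi-style objective ("state 0 repeats infinitely often") is upward-closed:
buechi = {z for z in powerset(states) if 0 in z}
print(is_upward_closed(buechi, states))   # True

# Requiring the set of repeating states to be exactly {0} is not upward-closed:
exact = {frozenset({0})}
print(is_upward_closed(exact, states))    # False
```

This brute-force check is exponential in the number of states and is intended only to illustrate the definition, not as an algorithm from the paper.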
