This work introduces a unified framework for a finer-grained analysis of games. In the existing literature, players' strategies are typically assigned scalar values, and the concept of Nash equilibrium is used to identify compatible strategies. However, this approach does not capture the internal structure of a player and therefore fails to accurately model observed behavior.
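For reference, the compatibility requirement alluded to here is the usual Nash condition, stated in standard notation rather than in this work's framework: a strategy profile $(\sigma_1^*,\dots,\sigma_n^*)$ is an equilibrium when no player can improve their scalar payoff $u_i$ by a unilateral deviation,
\[
  u_i(\sigma_i^*, \sigma_{-i}^*) \;\ge\; u_i(\sigma_i, \sigma_{-i}^*) \qquad \text{for every strategy } \sigma_i \text{ and every player } i .
\]
This condition constrains strategies only through their values and says nothing about how a player is built or how it learns.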
To address this limitation, we propose an abstract definition of a player. This definition allows for a more nuanced understanding of players and shifts the focus to the challenge of learning that players face. Unlike Markov decision processes, which formalize control problems but not the design of the agent, our framework subsumes standard reinforcement learning structures. It thus offers a language that enables a deeper connection between games and learning. To illustrate the need for such generality, we study a simple two-player game and show that, even in this most basic setting, a sophisticated player may adopt dynamic strategies that cannot be captured by simpler designs or by compatibility analysis alone.
In the discrete setting, we consider a player whose structure incorporates standard estimates from the literature. We explore connections to correlated equilibrium and highlight that dynamic programming naturally applies to all of these estimates (a standard instance is recalled below). In the mean-field setting, we exploit symmetry to construct explicit examples of equilibria. Finally, we examine connections to reinforcement learning and bandit problems, demonstrating the broad applicability of the framework.
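As standard background rather than this paper's construction, the dynamic-programming principle for a discounted value estimate is the familiar Bellman recursion,
\[
  V(s) \;=\; \max_{a \in A} \Big( r(s,a) + \gamma \sum_{s' \in S} P(s' \mid s, a)\, V(s') \Big), \qquad 0 \le \gamma < 1,
\]
so the estimate at a state is obtained recursively from estimates at successor states; the discrete setting above is where recursions of this kind are expected to apply to each of the player's estimates.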