International Society of Dynamic Games

  • DGA Seminar: Roland Malhamé

    Roland P. Malhamé
    Department of Electrical Engineering
    Polytechnique Montréal
    Canada

    Dynamic Games and Applications Seminar

    A bottom-up approach to the construction of socially optimal discrete choices under congestion

    February 29, 2024 11:00 AM — 12:00 PM (Montreal time)

    Zoom webinar link

We consider the problem of N agents having a limited time to decide on a destination choice among a finite number D of alternatives. The agents attempt to minimize collective energy expenditure while favoring motion strategies that limit crowding along their paths in the state space. This can correspond to a crowd-evacuation situation or to a group of micro-robots distributing themselves over tasks associated with distinct geographic locations. We formulate the problem as a min linear quadratic optimal control problem with non-positive-definite Q matrices accounting for the negative costs accruing from decreased crowding. The solution proceeds in three stages, each improving on the performance of the previous one: (i) mapping optimal paths for an arbitrary agent-destination assignment; (ii) mapping optimal paths for fixed fractions of agents assigned to each destination; (iii) identifying the optimal fraction of agent assignments to each destination. The cost function associated with stage (iii) is proven to be convex as N goes to infinity, which simplifies computations and yields epsilon-optimal decentralized control policies when applied for large N.

    (with Noureddine Toumi and Jérôme Le Ny).
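A toy numerical sketch of the stage (iii) idea, under illustrative assumptions (two destinations, linear travel costs plus quadratic congestion penalties chosen for illustration, not the cost structure of the talk): convexity of the limiting cost in the assignment fractions makes the optimal split easy to find.

```python
# Toy sketch of stage (iii): choosing the fraction of agents per destination.
# The travel costs `c` and congestion weights `kappa` are illustrative
# assumptions, not parameters from the talk.

def total_cost(lam, c=(1.0, 1.5), kappa=(2.0, 2.0)):
    """Limiting (N -> infinity) cost of sending a fraction `lam` of agents
    to destination 1 and (1 - lam) to destination 2: linear travel cost
    plus a quadratic congestion term per destination (convex in lam)."""
    l1, l2 = lam, 1.0 - lam
    travel = c[0] * l1 + c[1] * l2
    congestion = kappa[0] * l1 ** 2 + kappa[1] * l2 ** 2
    return travel + congestion

# Convexity makes a simple grid scan over [0, 1] reliable.
best = min((total_cost(i / 1000), i / 1000) for i in range(1001))
print(best)  # (minimal limiting cost, optimal fraction to destination 1)
```

In this sketch the optimal fraction balances the cheaper destination against the congestion it attracts, which is the qualitative trade-off the abstract describes.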

  • DGA Seminar: Franz Wirl

    Franz Wirl
    University of Vienna
    Austria

    Dynamic Games and Applications Seminar

    On the Non-Uniqueness of Linear Markov Perfect Equilibria in Linear-Quadratic Differential Games: A Geometric Approach

    February 22, 2024 11:00 AM — 12:00 PM (Montreal time)

    Zoom webinar link

Although the possibility of multiple nonlinear equilibria in linear-quadratic differential games is extensively discussed, the literature on models with multiple linear Markov perfect equilibria (LMPEs) is scarce. Indeed, almost all papers confined to a single state (a very large majority of the applications of differential games to economic problems) find a unique LMPE. This paper explains this finding and derives conditions for multiplicity based on an analysis of the phase plane in the state and the derivative of the value function. The resulting condition is applied to derive additional examples using pathways different from the (two) known ones. All these examples, or more precisely their underlying pathways, contradict usual assumptions in economic models. However, by extending the state space, we provide an economic setting (learning by doing) that leads to multiple LMPEs.
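A minimal sketch of the kind of screening at work, in a toy symmetric two-player scalar LQ game (an illustrative model, not one from the talk): candidate linear MPEs correspond to roots of a polynomial in the value-function coefficient, and single-state models typically leave only one admissible root.

```python
import math

# Toy symmetric scalar LQ game (illustrative assumptions): dynamics
# x' = a*x + u1 + u2, each player minimizes the discounted integral of
# x^2 + u_i^2. Guessing V_i(x) = v*x^2 and u_i = -v*x, the HJB equation
# reduces to the quadratic 3*v^2 + (rho - 2a)*v - 1 = 0 in v.

def linear_mpe_candidates(a=0.2, rho=0.5):
    """Return the real roots of the candidate equation, each flagged by
    whether it passes the screening v >= 0 (nonnegative value function)
    and a - 2v < 0 (stable symmetric closed loop)."""
    b, c = rho - 2 * a, -1.0
    disc = b * b - 4 * 3 * c          # = (rho - 2a)^2 + 12 > 0: two real roots
    roots = [(-b + s * math.sqrt(disc)) / 6 for s in (+1, -1)]
    return [(v, v >= 0 and a - 2 * v < 0) for v in roots]

candidates = linear_mpe_candidates()
admissible = [v for v, ok in candidates if ok]
# Exactly one of the two real roots survives the screening here, consistent
# with the typical uniqueness of LMPEs in single-state models.
print(candidates, admissible)
```

The talk's point is that multiplicity requires escaping such screening, e.g., through pathways that violate usual economic assumptions or through a larger state space.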

  • DGA Seminar: Luca Colombo

    Luca Colombo
    Rennes School of Business
    France

    Dynamic Games and Applications Seminar

    A Dynamic Analysis of Criminal Networks

    February 15, 2024 11:00 AM — 12:00 PM (Montreal time)

    Zoom webinar link

We take a novel approach, based on differential games, to the study of criminal networks. We extend the static crime network game (Ballester et al., 2006, 2010) to a dynamic setting in which criminal activities negatively affect the accumulation of total wealth in the economy. We derive a Markov Perfect Equilibrium, which is unique within the class of strategies considered, and show that, unlike in the static crime network game, the vector of equilibrium crime efforts is not necessarily proportional to the vector of Bonacich centralities. Next, we conduct a comparative dynamic analysis with respect to network size, network density, and the marginal expected punishment, finding results that contrast with those arising in the static crime network game. We also shed light on a novel issue in the network-theory literature, namely the existence of a voracity effect. Finally, we study the problem of identifying the optimal target in the population of criminals when the planner's objective is to minimize aggregate crime at each point in time. Our analysis shows that the key player may differ between the dynamic and the static settings, and that the key player in the dynamic setting may change over time.

    (with Paola Labrecciosa and Agnieszka Rusinowska)
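For reference, the static benchmark the talk departs from ties equilibrium efforts to Bonacich centralities b(G, φ) = (I − φG)⁻¹·1, well defined for φ below the reciprocal of the largest eigenvalue of G. A short sketch on an illustrative network (the adjacency matrix and decay factor are assumptions, not from the talk):

```python
import numpy as np

# Static benchmark (Ballester et al., 2006): in the one-shot crime network
# game, equilibrium efforts are proportional to Bonacich centralities
# b(G, phi) = (I - phi*G)^{-1} * 1, valid for phi < 1 / lambda_max(G).

def bonacich(G, phi):
    """Bonacich centrality vector of adjacency matrix G with decay phi."""
    n = G.shape[0]
    lam_max = max(abs(np.linalg.eigvals(G)))
    assert phi * lam_max < 1, "centrality undefined: phi too large"
    return np.linalg.solve(np.eye(n) - phi * G, np.ones(n))

# An illustrative 4-criminal line network: 0 - 1 - 2 - 3
G = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
b = bonacich(G, phi=0.3)
print(b)  # interior nodes are more central, hence exert more effort
```

In the dynamic setting of the talk, equilibrium efforts need not be proportional to this vector, and the key player identified from it may change over time.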