Rates of convergence for the policy iteration method for Mean Field Games systems
Camilli F.
2022-01-01
Abstract
Convergence of the policy iteration method for discrete and continuous optimal control problems holds under general assumptions. Moreover, in some circumstances, it is also possible to show a quadratic rate of convergence for the algorithm. For Mean Field Games, convergence of the policy iteration method has been recently proved in [9]. Here, we provide an estimate of its rate of convergence.
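To illustrate the algorithm studied in the paper, the following is a minimal sketch of policy iteration on a discrete control problem (a finite Markov decision process). The two-state MDP data, the discount factor, and all names below are made up for demonstration and are not taken from the paper; the Mean Field Games setting couples this scheme with a distribution of players, which is not modeled here.

```python
# Policy iteration on a tiny 2-state, 2-action MDP (illustrative sketch only;
# the transition/reward data below is invented for demonstration).

GAMMA = 0.9  # discount factor (assumed)
# P[s][a] = list of (next_state, probability); R[s][a] = immediate reward
P = {0: {0: [(0, 0.7), (1, 0.3)], 1: [(1, 1.0)]},
     1: {0: [(0, 1.0)],           1: [(0, 0.4), (1, 0.6)]}}
R = {0: {0: 1.0, 1: 0.0}, 1: {0: 2.0, 1: 0.5}}

def evaluate(policy, tol=1e-12):
    """Policy evaluation: fixed-point iteration for V = R_pi + gamma * P_pi V."""
    V = [0.0, 0.0]
    while True:
        new_V = [R[s][policy[s]]
                 + GAMMA * sum(p * V[t] for t, p in P[s][policy[s]])
                 for s in (0, 1)]
        if max(abs(new_V[s] - V[s]) for s in (0, 1)) < tol:
            return new_V
        V = new_V

def improve(V):
    """Policy improvement: greedy action w.r.t. the current value function."""
    return [max((0, 1),
                key=lambda a: R[s][a] + GAMMA * sum(p * V[t] for t, p in P[s][a]))
            for s in (0, 1)]

policy = [0, 0]
for sweep in range(100):
    V = evaluate(policy)
    new_policy = improve(V)
    if new_policy == policy:  # policy stable -> optimal; iteration terminates
        break
    policy = new_policy

print("optimal policy:", policy)
```

On a finite MDP the iteration terminates in finitely many sweeps because each improvement step weakly increases the value function and there are finitely many policies; the rate estimates discussed in the abstract concern the continuous, mean-field analogue of this loop.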
Files in this record:

| File | Description | Type | Access | Size | Format |
|---|---|---|---|---|---|
| JMAA_3.pdf | Camilli_Tang_JMAA | Publisher's PDF | Open access | 386.27 kB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.


