SAI: A Sensible Artificial Intelligence that plays Go
Gianluca Amato; Maurizio Parton
2019-01-01
Abstract
We propose a multiple-komi modification of the AlphaGo Zero/Leela Zero paradigm. The winrate as a function of the komi is modeled with a two-parameter sigmoid function, so the winrate for all komi values is obtained at the price of predicting just one more variable. A second novel feature is that training is based on self-play games that occasionally branch, with changed komi, when the position is uneven. With this setting, reinforcement learning is shown to work on 7×7 Go, yielding very strong playing agents. As a useful byproduct, the sigmoid parameters given by the network allow one to estimate the score difference on the board and to evaluate how much the game is decided. Finally, we introduce a family of agents that target winning moves with a higher score difference.
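To illustrate the idea of a two-parameter sigmoid winrate, the sketch below uses one plausible parameterization: `alpha` as the estimated score difference on the board (the komi value at which the game is even) and `beta` as a steepness parameter measuring how much the game is decided. The function names and the exact form are illustrative assumptions, not taken from the paper itself.

```python
import math

def winrate(komi, alpha, beta):
    """Black's winrate as a two-parameter sigmoid of the komi.

    alpha: estimated score difference on the board; at komi == alpha
           the game is even and the winrate is 0.5.
    beta:  steepness; a large beta means small komi changes flip the
           outcome, i.e. the game is essentially decided.
    (Illustrative parameterization, assumed for this sketch.)
    """
    return 1.0 / (1.0 + math.exp(beta * (komi - alpha)))

# At komi == alpha the position is even:
print(winrate(7.0, 7.0, 0.5))  # 0.5
# Raising the komi (more compensation for White) lowers Black's winrate:
print(winrate(9.0, 7.0, 0.5) < 0.5)  # True
```

Predicting `alpha` and `beta` instead of a single scalar winrate is what yields the winrate at every komi for the cost of one extra output variable.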
File | Access | Type | Size | Format
---|---|---|---|---
ijcnn2019.pdf | open access | Post-print | 1.17 MB | Adobe PDF
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.