Evaluation environment for AI players in Checkers

The ultimate way to test the quality of an AI player is to let it play against other implementations and/or humans and inspect its performance. If the relevant information from all the games is collected, it can later be used to find the player's weaknesses, which may lead to further improvement of the AI logic.

In this project we implemented a complete environment that allows executing matches between different computer players, and between computer players and humans, under various time constraints. The environment maintains a database of game history. This database was used to present performance metrics of the various implementations, as well as to replay specific games in order to identify positive and negative behavioral patterns. In addition to the evaluation environment, we implemented an AI player based on the alpha-beta decision algorithm.
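The project's own player is not reproduced here; as a minimal sketch of the alpha-beta technique it relies on, the search core could look like the following. The `moves`, `apply_move`, and `evaluate` callables are hypothetical placeholders standing in for the real Checkers rules and evaluation function, which this sketch does not implement.

```python
def alphabeta(state, depth, alpha, beta, maximizing, moves, apply_move, evaluate):
    """Alpha-beta pruned minimax over an abstract game tree.

    moves(state)        -> iterable of legal moves (placeholder)
    apply_move(state,m) -> successor state (placeholder)
    evaluate(state)     -> score from the maximizing player's viewpoint
    """
    legal = list(moves(state))
    if depth == 0 or not legal:          # leaf or horizon: score the position
        return evaluate(state)
    if maximizing:
        value = float('-inf')
        for m in legal:
            value = max(value, alphabeta(apply_move(state, m), depth - 1,
                                         alpha, beta, False,
                                         moves, apply_move, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:            # beta cutoff: opponent avoids this branch
                break
        return value
    else:
        value = float('inf')
        for m in legal:
            value = min(value, alphabeta(apply_move(state, m), depth - 1,
                                         alpha, beta, True,
                                         moves, apply_move, evaluate))
            beta = min(beta, value)
            if beta <= alpha:            # alpha cutoff: we avoid this branch
                break
        return value


# Usage on a toy explicit game tree (not Checkers): internal nodes list
# their children, leaves carry a static score.
tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G']}
leaf_scores = {'D': 3, 'E': 5, 'F': 2, 'G': 9}

best = alphabeta('A', 2,
                 float('-inf'), float('inf'), True,
                 moves=lambda s: tree.get(s, []),
                 apply_move=lambda s, m: m,
                 evaluate=lambda s: leaf_scores[s])
# best == 3: the maximizer picks B, where the minimizer is limited to 3.
```

The cutoffs are what distinguish alpha-beta from plain minimax: whole subtrees are skipped once it is known the opponent would never allow them, which lets the same time budget reach greater search depth.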