The project was a preparation and verification phase to test the hypothesis that reinforcement learning can make the existing behavior patterns of game bots more natural and human-like, specifically in an online tactical team game for mobile devices. The project was done for Panzerdog, a company that develops computer games.
The main goal was to teach a specific type of game bot (cars) to navigate the map so that it reacted adequately to dynamic changes in the game, such as artificial obstacles appearing on its route. In other words, if the bot detects an obstacle on its scripted route and the obstacle can be bypassed, it must go around it, deviating from the previous route while still pursuing its goal.
The Customer provided us with a game scene developed in C# for the Unity 3D framework, a map for testing, a game bot, and an example of ML-Agents library usage. We also received configuration files, scripts, the source code of the server-side part of ML-Agents, the specific Unity 3D version, and the ML-Agents package the algorithm had to use.
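The project's actual configuration files are not reproduced here, but for readers unfamiliar with ML-Agents, a trainer configuration for a PPO-trained navigation agent is typically a YAML file along the lines of the sketch below. The behavior name and all hyperparameter values are illustrative assumptions, not the project's real settings, and the exact file layout depends on the ML-Agents version (older releases used a flat format without the `behaviors` key):

```yaml
behaviors:
  BotNavigation:            # hypothetical behavior name; must match the agent's Behavior Name in Unity
    trainer_type: ppo       # PPO is the default ML-Agents trainer
    hyperparameters:
      batch_size: 1024
      buffer_size: 10240
      learning_rate: 3.0e-4
    network_settings:
      hidden_units: 128     # illustrative network size
      num_layers: 2
    reward_signals:
      extrinsic:            # reward coming from the environment (e.g., reaching the destination)
        gamma: 0.99
        strength: 1.0
    max_steps: 500000       # total training steps before the run stops
    time_horizon: 64
```

A file like this would be passed to the `mlagents-learn` command, which connects to the running Unity scene and trains the agent's policy.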
As a result, the bot's behavior on the demo game map became more logical and closer to human-like. While following a route, the bot could change it after detecting objects on its way (objects that had previously been outside its field of vision) and could successfully go around moving objects (imitating player movements) while still reaching the destination point.
At the moment, the Customer's team is testing the model to determine whether it can be used on other maps.
The main business task of the project was to prove that the existing gameplay could be improved using Machine Learning.
We plan to discuss new Machine Learning tasks related to improving gameplay and to balance control in the game.