A decision tree is the structure in which the choices of a zero-sum game are stored, so it is important to have some understanding of what a decision tree is. It is a tree-like model that maps out decisions and their possible outcomes. Advanced players usually have a great understanding of the resources available to themselves and to the opponent, as well as the implications of each move. This is exactly how chess players can play blindfolded: at the very minimum, they are able to form some type of decision tree in their head.
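A minimal sketch of such a tree, assuming a simple node type of my own invention (the names `Node`, `state`, and `children` are illustrative, not from any particular library):

```python
# A minimal decision tree for a game: each node stores a game state,
# and its children are the states reachable in one move.

class Node:
    def __init__(self, state):
        self.state = state        # the position at this point in the game
        self.children = []        # outcomes reachable in one move

    def add_child(self, child):
        self.children.append(child)
        return child

# Build a tiny two-ply tree: a starting position with two replies,
# one of which has a follow-up.
root = Node("start")
a = root.add_child(Node("move A"))
b = root.add_child(Node("move B"))
a.add_child(Node("move A, then reply X"))

print(len(root.children))   # 2
print(len(a.children))      # 1
```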
Before we talk about minimax, we have to understand the method by which we find the minimax solution. I will not lie, alpha-beta pruning does not actually do all of the heavy lifting. In fact, it simply gets rid of 'branches' that we should not expend energy computing (or, in our case, rubbing brain cells together on): branches that cannot possibly change the final decision. You are probably imagining the sheer number of branches there are, given the number of choices available at any given time. It seems impossible to prune them all down to something searchable, doesn't it? Well, not if we also apply a heuristic evaluation function to score positions without searching all the way to the end. In fact, this is what a lot of AI engines do despite having significantly more compute resources than our feeble brains.
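Here is a sketch of minimax with alpha-beta pruning on a small hand-built tree (the tree shape and leaf values are arbitrary assumptions for illustration; leaves stand in for the heuristic evaluations):

```python
import math

# Minimax with alpha-beta pruning. A branch is cut off ("pruned") as
# soon as beta <= alpha, because nothing in it can change the choice
# already available higher up the tree.

def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, (int, float)):    # leaf: a static evaluation
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if beta <= alpha:             # prune the remaining siblings
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:
                break
        return value

# A two-ply tree: the maximizer picks a branch, the minimizer picks a leaf.
# The minimax value is 3, and the leaves 9 and the final 2 are never
# evaluated thanks to pruning.
tree = [[3, 5], [2, 9], [1, 2]]
print(alphabeta(tree, -math.inf, math.inf, True))   # 3
```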
Minimax is a value that describes the smallest value that the opponent can force upon the player (equivalently, the largest value the player can guarantee for themselves). There are many great algorithms for computing it, with some even doing heuristic evaluation. Negascout, Best Node Search, and MTD(f) are great and all, but we will look at the Monte Carlo tree search. MCTS is quite different from all of these other algorithms in that it does not use alpha-beta pruning to shrink the search space. Instead, it tries to identify the best branch. The main objective of the tree search is not to rule outcomes out, but simply to go down the branch with the seemingly best outcome, refining that estimate with random playouts. Often it is supplemented with reinforcement learning, and this proves to be very effective (e.g. AlphaGo, a Go program using MCTS, deep learning, and reinforcement learning).
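To make the four MCTS phases concrete, here is a compact sketch applied to a toy "race to 10" game of my own invention (players alternately add 1 or 2 to a running total; whoever reaches 10 or more wins). The game, the iteration count, and the exploration constant are all assumptions for illustration; the select/expand/simulate/backpropagate loop follows the standard recipe with UCB1 selection:

```python
import math
import random

TARGET = 10   # toy game: reach 10 (or more) and you win

class Node:
    def __init__(self, total, parent=None):
        self.total = total        # running count after the move into this node
        self.parent = parent
        self.children = []
        self.visits = 0
        self.wins = 0.0           # wins for the player who moved into this node

    def ucb1(self, c=1.4):
        # Unvisited children are tried first; otherwise balance the
        # win rate (exploitation) against the visit count (exploration).
        if self.visits == 0:
            return math.inf
        return (self.wins / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def rollout(total):
    """Random playout; returns 1 if the player who just moved to `total` wins."""
    if total >= TARGET:
        return 1
    node_player_moving = False    # the opponent moves next
    while True:
        total += random.choice([1, 2])
        if total >= TARGET:
            return 1 if node_player_moving else 0
        node_player_moving = not node_player_moving

def mcts(root_total, iters=2000):
    root = Node(root_total)
    for _ in range(iters):
        node = root
        # 1. Select: descend through fully expanded, non-terminal nodes.
        while node.total < TARGET and len(node.children) == 2:
            node = max(node.children, key=lambda c: c.ucb1())
        # 2. Expand: add one untried child if the node is non-terminal.
        if node.total < TARGET:
            tried = {c.total for c in node.children}
            nxt = next(node.total + m for m in (1, 2)
                       if node.total + m not in tried)
            node.children.append(Node(nxt, parent=node))
            node = node.children[-1]
        # 3. Simulate a random playout from this node.
        result = rollout(node.total)
        # 4. Backpropagate, flipping the winner's perspective each level.
        while node is not None:
            node.visits += 1
            node.wins += result
            result = 1 - result
            node = node.parent
    best = max(root.children, key=lambda c: c.visits)
    return best.total - root_total    # the chosen move (1 or 2)

random.seed(0)
print(mcts(6))   # 1 (moving to 7 leaves the opponent with only losing replies)
```

Note that, unlike alpha-beta, nothing here proves a branch irrelevant; the search simply pours most of its iterations into the branch whose playouts look best so far.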