Happy New Year!
It has been 20 years since the New Year of 1996--a time marker that holds special significance for me. I'd like to thank Zen, Steve, Tyson, and especially Laura for their parts. Happy New Year.
Surprisingly, the game went together in about a day, with about a quarter of that time spent getting used to the ncurses library. I started by implementing the user interface--something to display the playing field and allow a user to make moves. This evolved into players alternating moves until the game had no moves left.
Evaluating what moves are available is an interesting problem. The playing field consists of 7x7 cells. Each player starts with 2 cells. A player's cell may move to any cell within two squares of itself. If it moves just a single square, the cell duplicates. If the destination is two squares away, the piece jumps to the new location, vacating the original position. Naturally, moving a single square results in a net gain. When moving, any cell owned by an opponent will be taken over. The goal is to take over as many cells on the board as possible. When the board is full, or no more moves can be made by a player, the game is over. Whichever player controls the most cells wins the game. Now for the AI considerations.
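A minimal sketch of these rules, assuming a dict-based board keyed by (row, column). The function name `apply_move` is my own, and I'm reading "taken over" as converting opponent cells adjacent to the destination (the usual Ataxx-style rule); the post doesn't spell that detail out.

```python
def chebyshev(a, b):
    """Board distance: the larger of the row and column offsets."""
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def apply_move(board, src, dst, player):
    """Move player's piece from src into the empty cell dst.

    Distance 1 duplicates (src stays); distance 2 jumps (src is vacated).
    Bounds checking is omitted for brevity.
    """
    d = chebyshev(src, dst)
    assert d in (1, 2) and dst not in board
    if d == 2:
        del board[src]          # jumping vacates the origin
    board[dst] = player
    # Assumed capture rule: opponent cells adjacent to dst are taken over.
    for cell in list(board):
        if chebyshev(cell, dst) == 1 and board[cell] != player:
            board[cell] = player
```

A duplication therefore nets at least one cell, plus one more for every adjacent opponent converted, which is why single-square moves are the profitable ones.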
When determining a move, one can look at it from the standpoint of player-controlled cells. However, much of the time several player cells can move into the same empty cell. For a complete game tree, each option would need to be evaluated. However, if the starting cells are a single square away, the results are identical, so that section of the game tree only needs to be evaluated once. For cells that are two squares away, multiple sources still need to be considered, because the originating piece moves to make this happen. However, if a cell can be occupied both from a cell one square away and from a cell two squares away, the cells two squares away can be ignored. Why jump and lose a piece when you can duplicate and obtain the same result at no loss? So the game tree can be further pruned with this in mind.
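The pruning rule above can be sketched as a small filter over the candidate source cells for one target. The name `prune_sources` is illustrative, not the game's actual code:

```python
def prune_sources(target, sources):
    """Collapse equivalent sources that can reach one empty target cell.

    All duplications (distance 1) produce the same board, so keep one
    representative; and if any duplication exists, skip jumps entirely.
    """
    cheb = lambda a, b: max(abs(a[0] - b[0]), abs(a[1] - b[1]))
    dup = [s for s in sources if cheb(s, target) == 1]
    if dup:
        return dup[:1]   # one duplication stands in for all of them
    # Jumps vacate their origin, so each distinct jump source matters.
    return [s for s in sources if cheb(s, target) == 2]
```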
That leaves the question: how to make a list of moves that need to be explored. For this, I keep a construct I call a Move, and I have a Move for each cell in the playing field (7x7). Each Move consists of a list of cells that can move into it. Any cell has only 24 possible ways into it, but since we prune this based on jumps and duplications, the worst case is the highly unlikely event that the cell can only be reached by cells two squares away, of which there are 16. When a move is to be evaluated, a map of what cells can be occupied is generated, and each occupied cell contains a list of parent cells due to the occupying. Each one of these occupations needs to be evaluated. Impossible worst case: 7x7x16, or 784. However, each of these moves is then evaluated further. So 784^n, where n is the number of moves to think ahead, is the absolute worst-case complexity.
For evaluation, each of the moves in the Move table is evaluated for its net gain. If this gain is the same as the last move's, the move is stored in a list. If this move has a higher gain than the last, the list is reset and the move stored. If this move has a lower gain, it is ignored. The goal is to find the moves with the highest gain. Now, to think ahead, the gain from the next round is subtracted from this round's gain. The next round is evaluated the same way, but as the other player. We assume the other player is going to try and maximize their gains.
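That subtract-the-reply scheme is essentially a negamax recursion on net gain. A sketch under that reading, with `legal_moves`, `net_gain`, and `play` as hypothetical stand-ins for the game's own routines (here passed in as parameters to keep the sketch self-contained):

```python
def best_score(state, player, depth, legal_moves, net_gain, play):
    """Best achievable net gain looking `depth` moves ahead.

    A move is worth its immediate gain minus the best gain the
    opponent can achieve in reply, assuming they play to maximize.
    """
    moves = legal_moves(state, player)
    if depth == 0 or not moves:
        return 0
    best = None
    for move in moves:
        gain = net_gain(state, move, player)
        # The opponent's best reply counts against us.
        gain -= best_score(play(state, move, player), 1 - player,
                           depth - 1, legal_moves, net_gain, play)
        if best is None or gain > best:
            best = gain
    return best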