Better eval and search (or, more precisely, their combination) :) The reason why it turned out this way is not precisely known, but the common belief is that the Komodo developers ran out of ideas and that their code got kind of muddy (AFAIK Komodo's evaluation is about 3500 lines of code, while Stockfish's is around 700; even including the piece-square tables and pawn code, you wouldn't get more than 1500). Stockfish has done a lot of work simplifying away parts of the code that are no longer useful (even at a slight Elo loss) to free up space for new ideas. Stockfish also pioneered a lot of search techniques: countermove history statistics are a Stockfish invention, and capture history is a thing because it was created by Stockfish developers. It also helps that Komodo has roughly 150 cores in its testing framework, while Stockfish had about 400 until around Stockfish 9, and after noobpwnftw joined the project it has 1100 or so. So, to summarize: clean code, popularity that attracts developers, an open-source model that allows anyone to participate, a big hardware pool to test things fast, and a great testing methodology.

The tactical exercises in our new Tactics Frenzy app are collected from a variety of sources, both inside and outside our core team. Some of them are hand-picked by humans; others are generated by computer programs. This blog post is the first in a series of two posts describing how we use the Stockfish chess engine and the Chess.jl chess programming library to produce our own computer-generated puzzles. Part I (this part) explains how we find candidate puzzle positions. Part II will explain how we generate solution trees for the exercises.

In order for a position to be suitable as a tactical puzzle, there should be only a single solution move, at least in the initial position of the puzzle (we sometimes have to allow multiple correct moves deeper into the solution tree, or we would have to exclude too many otherwise good puzzles). Furthermore, the solution move shouldn't be too obvious.

We also want our puzzles to be practical and representative of the kind of mistakes and missed opportunities that occur in the games of chess players of all levels. Many exercises, both in books and in some other tactics training apps and websites, are biased towards flashy sacrificial combinations. Like all chess lovers, we love attractive combinations, but in practical games they are rare. We therefore also want to include the more mundane tactics that decide most real-world games.

We collect candidate puzzle positions by scanning through a database of human games, analysing every position in every game with a computer chess engine, and collecting the positions that satisfy the following requirements:

- The side to move is not in check. Positions where the side to move is in check tend to have too few legal moves to be interesting as puzzles.
- There is exactly one winning move.
- The winning move is not a mate in 1, not a promotion to a queen, and not a simple capture of a hanging piece. This is an attempt to eliminate puzzles that would be too easy.
- The winning move is a sacrifice, or was missed by at least one of the two players. This should usually ensure that the win is not entirely trivial to spot.

The "exactly one winning move" condition is a little problematic, because it is often difficult to be 100% sure that a move is winning and that other moves are not. Our initial search may find only a single winning move, but it is possible that if we let the chess engine think a little longer, it would discover that the move isn't winning after all, or that some other move is also winning. We therefore search the position several times at gradually longer thinking times, making sure the result is always the same.

Even positions that satisfy all the above requirements will not always be accepted as puzzles. There is a final step, generating a solution tree, which will be discussed in Part II. Sometimes generating the solution tree will fail, or the tree will end up being too big or complex.
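To make the uniqueness check concrete, here is a minimal Python sketch of the "re-search at gradually longer thinking times" idea. This is not the actual pipeline (which is written in Julia with Chess.jl driving Stockfish); the `analyse` callback, the threshold values, and the function name are all hypothetical. `analyse` is assumed to return a MultiPV-style list of `(move, centipawn_score)` pairs, best line first.

```python
# Hedged sketch, NOT the authors' implementation. `analyse(position, movetime_ms)`
# is a hypothetical engine wrapper returning [(move, score), ...], best first.

WIN_THRESHOLD = 200    # assumed: best move must score at least this to count as "winning"
ALTERNATIVE_CAP = 50   # assumed: second-best move must score no more than this

def has_unique_winning_move(position, analyse, think_times=(100, 400, 1600)):
    """Search the position several times at gradually longer thinking times (ms)
    and accept it only if every search agrees on a single winning move."""
    best_move = None
    for movetime in think_times:
        lines = analyse(position, movetime)
        if len(lines) < 2:
            return False                       # need a second line to judge uniqueness
        (move, score), (_, second_score) = lines[0], lines[1]
        if score < WIN_THRESHOLD or second_score > ALTERNATIVE_CAP:
            return False                       # not clearly winning, or not unique
        if best_move is not None and move != best_move:
            return False                       # a deeper search changed its mind
        best_move = move
    return True
```

With a stub engine that always reports one clearly winning move, the position is accepted; if the second line is also winning, or the preferred move changes at longer thinking times, it is rejected.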