Although it is technically too late, best wishes to you all.
I don't know if you already know this, but in Russia Programmers' Day is an official professional holiday, celebrated on the 256th day of each year (September 13 in common years and September 12 in leap years). So the annual Dutch Open computer tournament was held on appropriate days these last two years.
Maximus is improving but not yet ready for the strongest opponents. Here is something about my status and plans, which will probably take up the rest of this year.
I managed to get YBWC alphabeta running by the end of last year. I started from scratch with parallel perft, then parallel minimax, then parallel alphabeta, all with a hash table that can be switched off. I can recommend these steps to anyone writing a new program (or testing an existing one), since each step and option can be verified separately and in combination, saving time in debugging.
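The first of those steps can be sketched with Java's fork/join framework. This is a minimal illustration, not Maximus code: the Position interface is an assumed stand-in for whatever move generator the engine already has.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.RecursiveTask;

// Minimal parallel perft: count all leaf nodes at a fixed depth,
// splitting the subtrees over worker threads.
interface Position {
    List<? extends Position> successors();
}

class PerftTask extends RecursiveTask<Long> {
    private final Position pos;
    private final int depth;

    PerftTask(Position pos, int depth) {
        this.pos = pos;
        this.depth = depth;
    }

    @Override
    protected Long compute() {
        if (depth == 0) return 1L;          // count leaf nodes
        List<PerftTask> tasks = new ArrayList<>();
        for (Position child : pos.successors()) {
            PerftTask t = new PerftTask(child, depth - 1);
            t.fork();                       // explore subtrees in parallel
            tasks.add(t);
        }
        long nodes = 0;
        for (PerftTask t : tasks) nodes += t.join();
        return nodes;
    }
}
```

Since every subtree's node count is independent, any discrepancy between the serial and parallel counts points directly at a bug in the parallel plumbing rather than in the move generator.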
In my experience (too), fixed-depth parallel searches only produce different results in combination with a hash table, due to the GHI (graph history interaction) problem, which I haven't solved yet.
After finding out that Java object overhead is 24 bytes per object (so in my case, per hash table entry) I wanted to change the implementation. With 32 bytes of data per entry this is way too much overhead! Since Java doesn't support structs, I flattened the table to an array of 64-bit longs. This is one of the few times I felt I had to do something awkward in Java compared to C++.
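The flattening can be sketched like this; the field names and bit widths below are illustrative assumptions, not Maximus's actual layout. Each entry occupies two adjacent longs in one big array, so there is a single object header for the whole table instead of one per entry.

```java
// A hash table flattened into a long[]: two longs per entry
// (hash key + packed data), no per-entry object overhead.
public class HashTable {
    private static final int ENTRY_LONGS = 2;
    private final long[] table;
    private final int mask;

    public HashTable(int log2Entries) {
        int entries = 1 << log2Entries;
        table = new long[entries * ENTRY_LONGS];
        mask = entries - 1;
    }

    // Pack score/depth/flag/move into one long (widths are assumptions).
    static long pack(int score, int depth, int flag, int move) {
        return ((long) (score & 0xFFFF))
             | ((long) (depth & 0xFF)   << 16)
             | ((long) (flag  & 0x3)    << 24)
             | ((long) (move  & 0xFFFF) << 26);
    }

    static int score(long data) { return (short) (data & 0xFFFF); } // sign-extend
    static int depth(long data) { return (int) ((data >>> 16) & 0xFF); }

    public void store(long key, long data) {
        int i = (int) (key & mask) * ENTRY_LONGS;
        table[i] = key;
        table[i + 1] = data;
    }

    public long probe(long key) {
        int i = (int) (key & mask) * ENTRY_LONGS;
        return table[i] == key ? table[i + 1] : 0L; // 0 = miss
    }
}
```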
Then I also had to change from entry locking (whose overhead was not too bad) to Hyatt's idea of a lockless hash table.
Finally, I added the History Table again and made a few changes to the evaluation function. The evaluation is still not very advanced but at least it's fast. (I did not consider duplicating code for white and black.)
It does, however, use a 160 MB breakthrough table for all possible passed men on the last 5 ranks (excluding the king row).
Performance on the i7-920 quad-core 2.8 GHz is almost 19.5 Mnodes/sec using 8 threads (hyperthreading). Quite a monster. Parallel scaling is a factor of 4.23 for nodes/sec and a factor of <3.4 for time-to-depth.
No egdb yet. I plan to use (only) the 6-piece database from Michel and Harm, as I did in ABCdam. I must see how Java memory management holds up with this.
How bad is missing the 7-piece db?
I played two DXP matches, each consisting of (the first) 64 ballot games, against the classic opponents.
Maximus 8CPU (10 sec/move) vs. Dam2.2 (75 in 10 min): +11 –0 =53 (59%)
Maximus 6CPU (15 sec/move) vs. Flits (75 in 10 min, with pondering): +3 –13 =48 (34%)
At least this was a good performance and "correctness" test.
By the way, I agree that it is too bad pondering cannot be switched off in Flits.
As I am also interested in learning to play a decent game myself, a long-held wish is to write a better evaluation function. I also enjoy corresponding with an experienced draughts player who has started writing his own program. The losses to Flits I can (now) easily spot as gaps in the Maximus evaluation function.
At the moment I am struggling with MTD-f (failing low at the root) and time management. Other plans include things that are new to me, like fixing GHI, pruning, and search extensions, as well as improving on the ABCdam opening book and pondering. Maybe I'll have a look at GUIDE as well.
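For reference, the MTD(f) driver I'm wrestling with can be sketched as follows (Plaat's formulation; the search parameter is a stand-in for the engine's fail-soft zero-window alphabeta at a fixed depth):

```java
import java.util.function.IntBinaryOperator;

// MTD(f): converge on the minimax value through repeated
// zero-window alpha-beta probes that narrow a [lower, upper] bound.
class Mtdf {
    static final int INF = 1 << 20;

    static int mtdf(IntBinaryOperator search, int firstGuess) {
        int g = firstGuess, lower = -INF, upper = INF;
        while (lower < upper) {
            int beta = (g == lower) ? g + 1 : g;
            g = search.applyAsInt(beta - 1, beta); // zero-window probe at beta
            if (g < beta) upper = g;               // failed low
            else          lower = g;               // failed high
        }
        return g;
    }
}
```

The root fail-low case is exactly the `g < beta` branch: the probe only proves an upper bound, so another (more expensive) re-search below it is needed, which is where the time-management pain comes in.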
I learn a lot from this forum and its regular contributors. If I can help with any questions or (limited) curiosity, please let me know.
Jan-Jaap