
Large enough to fit my 7P DB.
Hope I have time during this or next weekend to install it in my PC.
I will run Ed's benchmark to see the difference with my normal HD.
Keep you posted.....
Bert
Hi Ed,

Ed Gilbert wrote:This forum has been awfully quiet lately! Where did everyone go? Watching the World Cup, I imagine.
Bert, did you get to try your SSD drive? I'm interested to hear about it.
Gerard, what are you up to? Probably riding your bike in the Alps and getting ready to follow the Tour. Have you made any progress solving the parallel search problem in Damy?
Harm, how is that new 64-bit bitboard engine working?
Rein I hear has a search working in his new engine. I am looking forward to seeing it.
I have recovered from my broken foot and have been doing a lot of bicycling. Also busy giving the kingsrow user interface a face lift.
-- Ed
I found the Stockfish code helpful when I was working on the parallel search, but otherwise I did not learn much from it. Of course it is PVS where my search is MTD, so there are those differences. But chess has so much more going on. They have a lot more information that they can use for move ordering, and that spills over into the ways they do extensions and truncations. Most of it seemed not exactly applicable to draughts.

My search function is slowly coming together. I've been browsing the Stockfish source code quite a bit, as well as the BB+ report on the Rybka/Ippolit similarities. The Stockfish code is hard to digest though, with lots of 200+ line functions that almost but not quite do the same thing. I'm working on abstracting it into small-scale function-template building blocks that can more easily be combined, a la my move generator. I've also been testing my hash table quite a bit; more on that later this week...
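For readers unfamiliar with the MTD framework mentioned above, here is a minimal MTD(f) driver over a toy game tree. This is the generic algorithm from the literature, not Kingsrow's actual code; the `Node` type and helper names are invented for illustration. A real implementation relies on a transposition table so the repeated zero-window searches are cheap; that part is omitted.

```cpp
#include <algorithm>
#include <climits>
#include <vector>

// Toy game tree: a node is a leaf with a value, or has children.
struct Node {
    int value = 0;               // used only at leaves
    std::vector<Node> children;  // empty => leaf
};

// Helpers for building toy trees.
Node leaf(int v) { Node n; n.value = v; return n; }
Node node(std::vector<Node> cs) { Node n; n.children = std::move(cs); return n; }

// Plain fail-soft alpha-beta, standing in for a real engine search.
int alphabeta(const Node& n, int alpha, int beta, bool maximizing) {
    if (n.children.empty()) return n.value;
    int best = maximizing ? INT_MIN : INT_MAX;
    for (const Node& c : n.children) {
        int v = alphabeta(c, alpha, beta, !maximizing);
        if (maximizing) { best = std::max(best, v); alpha = std::max(alpha, v); }
        else            { best = std::min(best, v); beta  = std::min(beta,  v); }
        if (alpha >= beta) break;  // cutoff
    }
    return best;
}

// MTD(f): home in on the minimax value with repeated zero-window searches,
// narrowing a [lower, upper] bounding interval until it closes.
int mtdf(const Node& root, int guess) {
    int g = guess, lower = INT_MIN, upper = INT_MAX;
    while (lower < upper) {
        int beta = (g == lower) ? g + 1 : g;      // zero-window [beta-1, beta]
        g = alphabeta(root, beta - 1, beta, true);
        if (g < beta) upper = g;                  // failed low:  value <  beta
        else          lower = g;                  // failed high: value >= beta
    }
    return g;
}
```

Each iteration only answers "is the value above or below beta?", which is why move ordering and a good first guess matter so much more for MTD than for PVS.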
Interesting that you think that chess has more information to base truncations on.

Ed Gilbert wrote:I found the Stockfish code helpful when I was working on the parallel search, but otherwise I did not learn much from it. Of course it is PVS where my search is MTD, so there are those differences. But chess has so much more going on. They have a lot more information that they can use for move ordering, and that spills over into the ways they do extensions and truncations. Most of it seemed not exactly applicable to draughts.

My search function is slowly coming together. I've been browsing the Stockfish source code quite a bit, as well as the BB+ report on the Rybka/Ippolit similarities. The Stockfish code is hard to digest though, with lots of 200+ line functions that almost but not quite do the same thing. I'm working on abstracting it into small-scale function-template building blocks that can more easily be combined, a la my move generator. I've also been testing my hash table quite a bit; more on that later this week...
-- Ed
Actually when I said Stockfish I meant Glaurung, which is the predecessor of Stockfish and which is the source code that I found helpful when I was implementing a parallel search. I have looked at Stockfish also, but I am more familiar with Glaurung.

Interesting that you think that chess has more information to base truncations on.
Of course, draughts does not have pawn structure, king safety, discovered checks or pinned pieces, but for most other concepts there are close draughts analogues. E.g. the pawn structure in chess models a long-term eval term that rarely changes during searches of moderate depth. In draughts, this could be captured by the structure of black and white pieces on the middle two rows. You could mimic this with a dedicated "pawn hash" table. King safety has no analogue of course, but breakthrough extensions should work almost the same.
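The dedicated "pawn hash" analogue could be sketched as below: a small direct-mapped cache keyed by a hash of only the slowly-changing structure (e.g. the men on the middle two rows). The layout, sizes, and names are illustrative, and the 64-bit key is assumed to come from a Zobrist hash of that structure.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

// One cached structural evaluation. The full key is stored so that two
// different structures mapping to the same slot are not confused.
struct StructEntry {
    uint64_t key = 0;    // full hash key, to detect collisions
    int      score = 0;  // cached structural evaluation
    bool     valid = false;
};

class StructCache {
    static constexpr std::size_t SIZE = 1 << 12;  // 4096 entries (illustrative)
    std::array<StructEntry, SIZE> table{};
public:
    void store(uint64_t key, int score) {
        StructEntry& e = table[key & (SIZE - 1)];  // direct-mapped slot
        e.key = key; e.score = score; e.valid = true;
    }
    // Returns true and fills 'score' on a hit; on a miss the caller computes
    // the (expensive) structural eval and stores it for next time.
    bool probe(uint64_t key, int& score) const {
        const StructEntry& e = table[key & (SIZE - 1)];
        if (e.valid && e.key == key) { score = e.score; return true; }
        return false;
    }
};
```

Because the keyed structure changes on only a small fraction of moves, hit rates are very high and the structural part of the eval becomes nearly free, which is exactly why the pawn hash pays off in chess engines.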
There are also draughts-specific features that have no chess analogue. E.g. the amount of actually and potentially controlled terrain (see the long discussion on the French forum by Gerard) in relation to the advancement of the pieces is very important in draughts. Also, accurately estimating the number of defenders of outposts on the 6th row requires special care. This is somewhat similar to the SEE capture algorithm in chess.
http://www.talkchess.com/forum/viewtopi ... 03&t=29176

Ed Gilbert wrote:Hi Rein,
Actually when I said Stockfish I meant Glaurung, which is the predecessor of Stockfish and which is the source code that I found helpful when I was implementing a parallel search. I have looked at Stockfish also, but I am more familiar with Glaurung.
From reading the discussions among the chess programmers, one of the biggest elo gains in the last 15 or so years occurred due to LMR. For those that are not familiar, LMR is short for late move reductions. The idea is that in an alpha-beta search you of course order the moves by searching the expected strongest ones first, and if you still do not have a beta cutoff after the first few moves, then you can reduce the search depth for the remaining moves, thus saving search time and allowing a deeper search in the same time for the better moves that were ordered near the top of the list.

In chess you have some additional info to use for ordering moves that does not exist in draughts -- checks, discovered checks, passed pawns, and the exchange evaluator which considers the worth of the exchanged pieces. In common we have hashtable moves, killer moves, and in some cases history table moves, although I see that Crafty no longer uses those, which is also interesting as the history table is one of the move ordering terms in all versions of kingsrow.

Some of the other terms that you mentioned for draughts are terms that I use in position evaluation but have not considered using for move ordering. Maybe I should! I think at one time I tried using the static evaluation in the 8x8 checkers kingsrow for move ordering. It was quite a while ago and I cannot remember the results of the experiment, but since I'm not using that now I have to assume it was not a positive result. The structure of the center pieces in the middle rows, breakthroughs, and the evaluation of outposts are certainly important eval terms, as are tempo and perhaps controlled terrain, or at least reserved moves in a closed game, but I have not been using these for move ordering. My point was that I tried LMR for draughts and it did not work nearly as well as the scheme that I was already using for reductions.
It's always difficult to know for sure why something 'works' or doesn't, but my suspicion about LMR was that it works better in chess due to the better move ordering info in that game.
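The mechanism described above can be sketched in a few lines. The thresholds below (first three moves at full depth, minimum depth 3, a bigger reduction after move 10) are illustrative numbers only, not Kingsrow's or any real engine's tuning:

```cpp
// Late move reductions: moves ordered near the top of the list get full
// depth; moves searched later get progressively shallower searches.
int lmr_reduction(int depth, int move_number, bool is_tactical) {
    if (is_tactical) return 0;        // never reduce captures/promotions
    if (depth < 3) return 0;          // too shallow to reduce safely
    if (move_number < 4) return 0;    // best-ordered moves: full depth
    return move_number < 10 ? 1 : 2;  // reduce later moves more
}

// Inside the move loop one would then search with roughly:
//   score = -search(child, depth - 1 - lmr_reduction(depth, i, tactical), ...);
// and re-search at full depth if the reduced search unexpectedly beats alpha.
```

This makes concrete why move ordering quality matters so much for LMR: the reduction is justified only if the late moves really are the weak ones.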
I never tried a null move in draughts and just assumed that it could not work because of the nature of the game. Do you have a reference for the double move trick by Vincent D.?
-- Ed
Hi Ed.

Ed Gilbert wrote:This forum has been awfully quiet lately! Where did everyone go? Watching the World Cup, I imagine.
Gerard, what are you up to? Probably riding your bike in the Alps and getting ready to follow the Tour. Have you made any progress solving the parallel search problem in Damy?
-- Ed
Is that 4 hours for the complete (3v3, 4v2, 5v1) databases? That's quite impressive!

TAILLE wrote: Hi Ed.
Yes Ed. I am still working hard on my parallel search algorithm and also on a parallel generation process in order to be able to begin the 8-piece database generation. Now I am able to generate the 6-piece database in about 4 hours with two quad processors.
Yes Michel, I need 4 hours for generating the complete (3v3, 4v2, 5v1) db. This includes the compression phase but not the verification process. Verification is another problem; I need far more time for verification than for building the db. For the 8-piece db I intend to limit the verification process a bit more.

MichelG wrote:Is that 4 hours for the complete (3v3, 4v2, 5v1) databases? That's quite impressive!

TAILLE wrote: Hi Ed.
Yes Ed. I am still working hard on my parallel search algorithm and also on a parallel generation process in order to be able to begin the 8-piece database generation. Now I am able to generate the 6-piece database in about 4 hours with two quad processors.
I am also working on creating the 8-piece databases. The code runs in parallel, but it needs some work to run faster. Did you make algorithmic improvements, or did you just optimize your existing code?
Michel
Why don't you build the databases twice and check whether they give the same results? That, together with comparison against the results already published by Ed and others, should make sure your results are correct.

TAILLE wrote:Yes Michel, I need 4 hours for generating the complete (3v3, 4v2, 5v1) db. This includes the compression phase but not the verification process. Verification is another problem; I need far more time for verification than for building the db. For the 8-piece db I intend to limit the verification process a bit more.

MichelG wrote:Is that 4 hours for the complete (3v3, 4v2, 5v1) databases? That's quite impressive!

TAILLE wrote: Hi Ed.
Yes Ed. I am still working hard on my parallel search algorithm and also on a parallel generation process in order to be able to begin the 8-piece database generation. Now I am able to generate the 6-piece database in about 4 hours with two quad processors.
I am also working on creating the 8-piece databases. The code runs in parallel, but it needs some work to run faster. Did you make algorithmic improvements, or did you just optimize your existing code?
Michel
What is your approach for this verification phase?
That's not so simple. When I rebuilt my 2-, 3-, 4-, 5- and 6-piece dbs for the first time with my new multithreaded algorithm, I was not able to discover the inconsistencies in my db: after each new db was built, the verification process was OK and all the results seemed identical to my "old" db and to the results already published.

Rein Halbersma wrote: Why don't you build the databases twice and check whether they give the same results? That, together with comparison against the results already published by Ed and others, should make sure your results are correct.
I already did a (partial) calculation of these. A slight complication is that some of the positions take 256 ply to solve, which means you can't fit a position in a byte. Dragon now uses 10 bits/position to handle that.

I also heard from Gijsbert Wiesenekker recently that he is building the distance-to-win database for 7 pieces.
Like Dragon, Damy runs as a single multithreaded application, with all cores working on the same database. The subdivisions for Damy are different. For a db without men, Damy creates 100 subdivisions based on the position indexes. For a db with men, the subdivisions are based on the men configuration of one side.

MichelG wrote:Dragon runs as a single, multithreaded application. I try to have it work on the biggest databases that still fit in memory. Currently the databases are split according to the number of pieces, like 1210 for a database with 1 man, 2 crowns and 1 black man.
For databases with 7 pieces, these are subdivided according to the rank of the most advanced man (splitting into at most 9*9=81 subslices), and with 8 pieces I subdivide according to the position of the most advanced piece (45*45 slices).
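A sketch of how such a subdivision could map a position to a slice index. The rank numbering and function names are invented for illustration (1 = back rank, 9 = one step from promotion), and it assumes both sides have at least one man:

```cpp
#include <algorithm>
#include <vector>

// Rank of a side's most advanced man; ranks are given as integers 1..9.
int most_advanced_rank(const std::vector<int>& man_ranks) {
    return *std::max_element(man_ranks.begin(), man_ranks.end());
}

// Combine both sides' leading ranks into one slice index in [0, 81).
// Positions in the same slice stay there until a leading man advances,
// so each subslice can be generated (and retired) as a unit.
int subslice_index(const std::vector<int>& white_man_ranks,
                   const std::vector<int>& black_man_ranks) {
    int w = most_advanced_rank(white_man_ranks);  // 1..9
    int b = most_advanced_rank(black_man_ranks);  // 1..9
    return (w - 1) * 9 + (b - 1);
}
```

The attraction of this particular split is the dependency order: a man's most advanced rank never decreases, so slices can be processed from most-advanced to least-advanced without revisiting.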
At each iteration all cores work on the same database. I think that works quite well, because each of the passes through the database can be multithreaded quite efficiently.

The only disadvantage is that you can't easily run it concurrently on several computers, but it is possible manually, by generating 5 vs 3 in parallel with 4 vs 4 pieces.
I didn't parallelize the compression phase though, as it still runs fast enough not to be much of a problem.
One problem that Dragon still has is that its algorithm is intended for computing distances to win instead of only win/draw/loss. I think that slows it down a bit, and I am not quite happy about the speed it runs at (a 5-piece database build takes 45 minutes on a Q6600, or about 30 minutes on an i860).
I already did a (partial) calculation of these. A slight complication is that some of the positions take 256 ply to solve, which means you can't fit a position in a byte. Dragon now uses 10 bits/position to handle that.

I also heard from Gijsbert Wiesenekker recently that he is building the distance-to-win database for 7 pieces.
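The 10-bits-per-position storage could look something like the sketch below (values 0..1023). The exact layout is invented for illustration and is not Dragon's actual format:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Buffer sized so any 3-byte window read below stays in bounds.
std::vector<uint8_t> make_dtw_buffer(std::size_t n_positions) {
    return std::vector<uint8_t>(n_positions * 10 / 8 + 3, 0);
}

// Write the 10-bit value for position i.
void put10(std::vector<uint8_t>& buf, std::size_t i, uint16_t value) {
    std::size_t bit = i * 10, byte = bit / 8, shift = bit % 8;
    // Assemble a 24-bit window that fully covers the 10-bit field.
    uint32_t w = buf[byte] | uint32_t(buf[byte + 1]) << 8
                           | uint32_t(buf[byte + 2]) << 16;
    w &= ~(uint32_t(0x3FF) << shift);       // clear the old value
    w |= uint32_t(value & 0x3FF) << shift;  // insert the new one
    buf[byte]     = uint8_t(w);
    buf[byte + 1] = uint8_t(w >> 8);
    buf[byte + 2] = uint8_t(w >> 16);
}

// Read the 10-bit value for position i.
uint16_t get10(const std::vector<uint8_t>& buf, std::size_t i) {
    std::size_t bit = i * 10, byte = bit / 8, shift = bit % 8;
    uint32_t w = buf[byte] | uint32_t(buf[byte + 1]) << 8
                           | uint32_t(buf[byte + 2]) << 16;
    return (w >> shift) & 0x3FF;
}
```

This costs 25% more space than a byte per position but extends the representable distance to 1023 ply, which covers the long 7-piece wins mentioned above.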
You say that verifying takes more time than the original computation. For me, I just check for consistency within the database, and that takes about half the time of the build. It isn't a true verification, because it wouldn't find errors in, for example, the move generator, but it does find (virtually all) hardware errors. Other than that, running the program twice would be the only option to make absolutely sure there are no hardware errors.
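The kind of internal consistency check described here could be sketched as follows for a win/draw/loss table. The indexing and names are illustrative; as noted above, this only checks that the table agrees with itself under the successor relation, so a systematically wrong move generator would still pass:

```cpp
#include <cstddef>
#include <vector>

enum Value { LOSS = 0, DRAW = 1, WIN = 2 };

// Side-to-move convention: a position is a WIN iff some successor is a LOSS
// (for the opponent), a LOSS iff every successor is a WIN, else a DRAW.
// succ[i] lists the successor indices of position i; a position with no
// successors counts as a loss (the side to move cannot move).
bool consistent(const std::vector<Value>& db,
                const std::vector<std::vector<int>>& succ) {
    for (std::size_t i = 0; i < db.size(); ++i) {
        bool any_loss = false, all_win = true;
        for (int s : succ[i]) {
            if (db[s] == LOSS) any_loss = true;
            if (db[s] != WIN)  all_win = false;
        }
        Value expect = any_loss ? WIN : (all_win ? LOSS : DRAW);
        if (db[i] != expect) return false;  // inconsistent entry found
    }
    return true;
}
```

A single flipped bit (e.g. from a hardware error) almost always breaks this relation either at the damaged position or at one of its predecessors, which is why the check catches virtually all such errors despite being much cheaper than a full rebuild.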
Michel