> And with 8 pieces I subdivide according to the position of the most advanced piece (45*45 slices).

That is the same indexing that I used to build the 9-piece db for 8x8 checkers. It was nice because the indexing function was simple and I only used one computer. For 10 pieces I had to change to a more complex indexing scheme, to make the subdivisions smaller and to allow more subdivisions to be built in parallel.
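If I understand the 45*45 scheme correctly, something like the sketch below captures it. This assumes the usual numbering of the 50 dark squares on the 10x10 board, with each side's men confined to the 45 squares short of their promotion row; the function name and numbering are mine, not anyone's actual code:

    # Sketch of the 45*45 slicing by the most advanced man of each side.
    # Hypothetical numbering: dark squares 1..50, White men advancing
    # toward square 1, Black men toward square 50, so each side's men
    # rest on 45 non-promotion squares.
    def slice_id(white_men, black_men):
        """white_men / black_men: lists of square numbers of the men."""
        leading_white = min(white_men)  # White's most advanced man
        leading_black = max(black_men)  # Black's most advanced man
        return (leading_white, leading_black)  # one of 45*45 slices

A nice property of slicing this way, I'd think, is that a man can only advance, so non-capture successors stay in the same slice or fall in one with a more advanced leader, and the slices can be built in a fixed order.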
> At each iteration all cores work on the same database. I think that works quite well, because each of the passes through the database can be multithreaded quite efficiently.

I assume that on each pass you break the index range up into N subranges that are worked on in parallel. Do you find that it becomes very I/O bound during the passes where you resolve captures and conversion moves, when all 4 (or 8?) cores are reading the database values of other subdivisions? Maybe if you have sufficient RAM to cache enough of those other subdivisions then it's not a problem.
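For what it's worth, this is the shape I have in mind for a multithreaded pass, with the index range cut into one contiguous chunk per core. resolve_position() is a hypothetical stand-in for the real per-position work, and Python is only illustrative here since the real builder would use native threads:

    from concurrent.futures import ThreadPoolExecutor

    def run_pass(num_positions, num_cores, resolve_position):
        """One pass over the db, the index range split into one
        chunk per core. resolve_position(idx) is a hypothetical
        worker returning 1 if it changed the value at idx, else 0."""
        chunk = (num_positions + num_cores - 1) // num_cores
        def work(start):
            changed = 0
            for idx in range(start, min(start + chunk, num_positions)):
                changed += resolve_position(idx)
            return changed
        with ThreadPoolExecutor(max_workers=num_cores) as pool:
            # the caller repeats passes until this returns 0 (fixed point)
            return sum(pool.map(work, range(0, num_positions, chunk)))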
> You say that verifying takes more time than the original computing. For me, I just check for consistency within the database, and that takes about half the time of the build.

I verify the same way. For the 8pc db the verify time was just about the same on average as the build time. I am verifying the compressed dbs that do not have capture positions; I assume that's what you do also? When I look up the values of successor positions during the verify, any successor that is a capture requires a search to find its value, and that's probably what makes the verify slow.
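My understanding of that consistency check, as a sketch: each stored value must equal the negamax of its successors' values, where capture successors are not in the compressed db and have to be resolved by a small search. This assumes plain win/loss/draw values with no distance information, and lookup(), successors(), is_capture() and search_capture_value() are hypothetical stand-ins for the real db routines:

    def verify(positions, lookup, successors, is_capture, search_capture_value):
        """Values: 1 = win, 0 = draw, -1 = loss for the side to move."""
        for pos in positions:
            best = -1  # no moves available means a loss
            for succ in successors(pos):
                # capture positions are not stored in the compressed db,
                # so a small search resolves them -- the slow part
                if is_capture(succ):
                    v = search_capture_value(succ)
                else:
                    v = lookup(succ)
                best = max(best, -v)  # negamax over the successors
            if lookup(pos) != best:
                return False  # inconsistency found
        return True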
-- Ed