Ed Gilbert wrote:The main benefit of the 8pc and 9pc db is in analyzing games. It probably does not make a big difference in Elo. When there are, for example, 14 pieces on the board and you want to know conclusively whether the position is a win or a draw, you need to probe the databases deep into the search tree. A strong program will usually know the correct move to make, but if it only probes the large db at the root it will not be able to give a conclusive game result. IIRC there are some examples in the thread "Help from the 8 pieces endgame database" where the root position had more than 8 or 9 pieces, the program using a 7pc db could not get a conclusive result, but a conclusive result was obtained using an 8pc db and probing throughout the search.
-- Ed
Yes Ed, I agree with you: when analysing, you should probe the db deep into the tree in order to give a conclusive game result. In the examples given in the thread "Help from the 8 pieces endgame database", you have to note that, though Damy was not able to give the conclusive game result, Damy was able to find the good moves in practically all cases. As a consequence, the gain brought by the 8-piece db is a little difficult to see in a real game.
That is the reason why I plan a lot of tests to see how the 8-piece db can help in a real game. I am sure I will discover some means to improve the program based on the 8-piece db, but I guess the Elo gain will not be very significant.
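To make the probing idea concrete, here is a minimal sketch, with illustrative types and names (not any program's actual code), of probing the endgame db at every node of the search instead of only at the root:

```cpp
#include <optional>
#include <vector>

// Illustrative stand-ins for an engine's real types.
enum class DbValue { Win, Draw, Loss };

struct Position {
    int pieces = 14;                                // men + kings on the board
    std::vector<Position> successors() const { return {}; }
    int eval() const { return 0; }                  // heuristic score
};

constexpr int PROVEN_WIN = 30000;                   // outside heuristic range

// The db only covers positions with few enough pieces.
std::optional<DbValue> probe_db(const Position& pos, int max_db_pieces) {
    if (pos.pieces > max_db_pieces) return std::nullopt;
    return DbValue::Draw;                           // placeholder lookup
}

// Probing at every node, not just the root: once the search trades down
// to <= max_db_pieces pieces, that subtree is resolved exactly, so a
// proven win/draw/loss can propagate back to a 14-piece root position.
int search(const Position& pos, int alpha, int beta, int depth,
           int max_db_pieces) {
    if (auto v = probe_db(pos, max_db_pieces)) {    // exact, not heuristic
        if (*v == DbValue::Win)  return  PROVEN_WIN;
        if (*v == DbValue::Loss) return -PROVEN_WIN;
        return 0;                                   // proven draw
    }
    if (depth == 0) return pos.eval();
    for (const Position& succ : pos.successors()) {
        int score = -search(succ, -beta, -alpha, depth - 1, max_db_pieces);
        if (score >= beta) return beta;
        if (score > alpha) alpha = score;
    }
    return alpha;
}
```

With root-only probing, a 14-piece position simply misses the db; with in-tree probing, every line that trades down into db territory returns an exact value, which is what lets the search prove a win or draw at the root.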
Ed Gilbert wrote:Rein, I have ECC memory in both my build machines, but I agree with Michel that this is no cure-all, as it does not catch many kinds of errors. It did not prevent two statistics errors from occurring in my counts. Some kind of verify is necessary for any computation that takes as long as building these databases. Even the verify that I do has some holes in it. The verify pass is run immediately after the db is built and compressed, and it shares the same cache of successor positions that the build uses. An error in the cache could go undetected.
-- Ed
I think I have largely solved the issue of unreliable RAM and hard disks. Consider that most databases that are in memory are not changed after they are loaded.
Dragon now writes a checksum for each database it writes. At some point this database will be loaded into memory, and at some point it will be removed from memory again. Dragon checks the checksum when a database is removed from memory; if there was no error, the computation based its results on verified RAM.
If an error is detected, you have a problem, because you don't know which databases are based on the corrupt data, and you will have to restart the computation.
Because 80+% of all assigned memory is in the form of read-only databases, this algorithm should catch 80+% of all RAM failures, and nearly 100% of hard-disk failures.
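A minimal sketch of that verify-on-eviction discipline (illustrative names and checksum, not Dragon's actual code): because the block is only verified when it leaves memory, every read made while it was resident is retroactively validated:

```cpp
#include <cstdint>
#include <stdexcept>
#include <utility>
#include <vector>

// Any checksum works for the sketch; a real build would likely use
// CRC-32 or similar. This is 64-bit FNV-1a.
std::uint64_t checksum(const std::vector<std::uint8_t>& data) {
    std::uint64_t h = 1469598103934665603ull;
    for (std::uint8_t b : data) { h ^= b; h *= 1099511628211ull; }
    return h;
}

// A read-only db resident in memory. The expected checksum was written
// alongside the db file when it was generated.
class ResidentDb {
    std::vector<std::uint8_t> data_;
    std::uint64_t expected_;
public:
    ResidentDb(std::vector<std::uint8_t> bytes, std::uint64_t expected)
        : data_(std::move(bytes)), expected_(expected) {}

    const std::vector<std::uint8_t>& data() const { return data_; }

    // Called when the db is removed from memory. If this passes, every
    // result computed from this block since load was based on verified
    // RAM; if it fails, all results derived from it are suspect and the
    // computation must be restarted.
    void verify_on_evict() const {
        if (checksum(data_) != expected_)
            throw std::runtime_error("db checksum mismatch after residency");
    }
};
```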
This doesn't work for me because I cache successor positions in 4k blocks. I have CRCs for each subdivision file, and there are several hundred thousand of these files for 8 pieces, but not for each 4k block.
Ed Gilbert wrote:
This doesn't work for me because I cache successor positions in 4k blocks. I have CRCs for each subdivision file, and there are several hundred thousand of these files for 8 pieces, but not for each 4k block.
It would still work if you add a checksum to each block (you could even add the checksum after load rather than writing it into the files, and still be protected from RAM errors).
Did you ever experience a CRC failure during the computation?
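A sketch of that per-block variant (hypothetical code, assuming nothing about the real cache layout): the checksum is computed once, right after the 4k block is read from disk, so it cannot catch a bad disk read, but it does catch RAM corruption during the block's lifetime in the cache:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <stdexcept>

// A 4k block of cached successor positions, guarded by a checksum that
// is computed after load rather than stored in the file.
struct CachedBlock {
    static constexpr std::size_t kSize = 4096;
    std::array<std::uint8_t, kSize> bytes{};
    std::uint64_t sum = 0;

    // Call once, immediately after the block is read from disk.
    void seal_after_load() { sum = fnv1a(); }

    // Call when the block is evicted (or on any periodic sweep).
    void check() const {
        if (fnv1a() != sum)
            throw std::runtime_error("4k cache block corrupted in RAM");
    }

private:
    std::uint64_t fnv1a() const {
        std::uint64_t h = 1469598103934665603ull;
        for (std::uint8_t b : bytes) { h ^= b; h *= 1099511628211ull; }
        return h;
    }
};
```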
It would still work if you add a checksum to each block (you could even add the checksum after load rather than writing it into the files, and still be protected from RAM errors).
True, although with ECC RAM I should already be protected from RAM errors. I only check the file CRCs when a db file gets copied from one machine to another, and about once every week or two I did a global check of all the file CRCs.
Did you ever experience a CRC failure during the computation?
I have seen a couple of cases of file CRC errors after a db file was copied to another machine over the LAN after it was built. I also saw a couple of verify errors during the 8-piece build; those two occurred at the same time on two different instances of the build program, so I assume it was some glitch in the computer. I have never seen a program crash or anything that looked like a possible RAM error.
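The periodic global check mentioned above amounts to a sweep that recomputes every subdivision file's checksum and compares it with the stored value. A sketch under assumed conventions (a "db" directory and a ".crc" sidecar per file are hypothetical; real layouts will differ):

```cpp
#include <cstdint>
#include <filesystem>
#include <fstream>
#include <iostream>

// Recompute a file's checksum (FNV-1a here; a real setup would use CRC-32).
std::uint64_t file_checksum(const std::filesystem::path& p) {
    std::ifstream in(p, std::ios::binary);
    std::uint64_t h = 1469598103934665603ull;
    char c;
    while (in.get(c)) { h ^= static_cast<std::uint8_t>(c); h *= 1099511628211ull; }
    return h;
}

int main() {
    namespace fs = std::filesystem;
    // Assumed layout: each subdivision file "x.db" has a sidecar "x.crc"
    // holding its expected checksum as a decimal number.
    for (const auto& entry : fs::directory_iterator("db")) {
        if (entry.path().extension() != ".db") continue;
        fs::path sidecar = entry.path();
        sidecar.replace_extension(".crc");
        std::uint64_t expected = 0;
        std::ifstream(sidecar) >> expected;
        if (file_checksum(entry.path()) != expected)
            std::cerr << "CRC mismatch: " << entry.path() << '\n';
    }
}
```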
It may just be that my computer has some hardware issue. Did you finish your computation of 1412 yet?
No, I stopped it as soon as you reported that you had found the error and that we were then in agreement. As I recall it was about half complete, and that part of it agreed with my original result. Unfortunately I discarded the partial rebuild data and removed all the build files from that computer because I needed the disk space for something else. Since you are still getting inconsistent results, maybe I should start it up again.
I have just finished the generation of the 4 pieces against 4 pieces database, and all my counters are identical to yours.
It took Damy a little more than 2 months, which is what I planned.
I have just begun the 5 pieces against 3 pieces generation; the 5K against 3K is now being generated.
I hope to complete the full 5 pieces against 3 pieces db in 3 months.
Good to hear.
Dragon is also still working hard to solve endgames. I had some delay on 5v3 due to insufficient disk space, but it is now halfway done, with all the numbers matching so far.
I also computed 6v1; maybe we can compare those results as well.