EndGame Databases

Discussion about the development of draughts in the age of computers and the Internet.
BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Post by BertTuyt » Mon Jul 20, 2009 12:52

Gerard thanks.

Do you, like Ed, use a small search (until a static position is reached) to find the result of a capture position?

I always thought that 12 GB should be sufficient.
Nevertheless I run into memory problems.
I have to dig into this.

Maybe I don't free databases which are no longer required in between.
I use 2 bits per position, so I would guess memory shouldn't be a problem.

In the meantime I managed to reduce the time for the 4K - 2K and 2K - 4K to 2046.8 seconds.

Bert

Ed Gilbert
Posts: 859
Joined: Sat Apr 28, 2007 14:53
Real name: Ed Gilbert
Location: Morristown, NJ USA
Contact:

Post by Ed Gilbert » Mon Jul 20, 2009 13:53

Bert, IIRC the Grimminck builder uses memory mapped files to read successor values during the first pass. This gives you very little control over the caching, and I suspect you are running out of virtual memory because too many successor dbs are being loaded. If you need the value of one successor position from another slice you have to create a mapping of the entire file holding that position. This eats up your virtual memory very quickly. As a quick fix you might try increasing your virtual memory setting, but IMO you need to get rid of the memory mapped files and do your own caching. I use a simple LRU caching scheme with 4 KB blocks, and this way I can control exactly how much memory is used for caching. A few hundred MB of cache is plenty to do the capture passes for the 8pc db.
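The scheme I mean looks roughly like this (a simplified sketch only, not my actual builder code; read_block stands in for whatever file I/O you use):

Code: Select all

// Minimal LRU cache of fixed-size db blocks (sketch).
#include <cstdint>
#include <cstring>
#include <functional>
#include <list>
#include <unordered_map>
#include <vector>

constexpr size_t BLOCK_SIZE = 4096;            // 4 KB cache blocks

struct Key { int file; int64_t block; };
inline bool operator==(const Key& a, const Key& b)
{ return a.file == b.file && a.block == b.block; }

struct KeyHash {
    size_t operator()(const Key& k) const
    { return std::hash<int64_t>()(((int64_t)k.file << 40) ^ k.block); }
};

class LruBlockCache {
public:
    explicit LruBlockCache(size_t max_blocks) : max_blocks_(max_blocks) {}

    // Return a pointer to the cached block, loading it on a miss and
    // evicting the least recently used block when the cache is full.
    const uint8_t* get(int file, int64_t block) {
        Key key{file, block};
        auto it = map_.find(key);
        if (it != map_.end()) {                 // hit: move to front
            lru_.splice(lru_.begin(), lru_, it->second);
            return it->second->data.data();
        }
        if (lru_.size() == max_blocks_) {       // full: evict LRU entry
            map_.erase(lru_.back().key);
            lru_.pop_back();
        }
        lru_.push_front(Entry{key, std::vector<uint8_t>(BLOCK_SIZE)});
        read_block(file, block, lru_.front().data.data());
        map_[key] = lru_.begin();
        return lru_.front().data.data();
    }

private:
    struct Entry { Key key; std::vector<uint8_t> data; };

    // Placeholder: read one 4 KB block from a db file (pread, ReadFile, ...).
    static void read_block(int file, int64_t block, uint8_t* dst)
    { (void)file; (void)block; std::memset(dst, 0, BLOCK_SIZE); }

    size_t max_blocks_;
    std::list<Entry> lru_;
    std::unordered_map<Key, std::list<Entry>::iterator, KeyHash> map_;
};

With 4 KB blocks, a 256 MB cache is then simply LruBlockCache cache(256 * 1024 * 1024 / BLOCK_SIZE), so the memory used for caching is known exactly.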

-- Ed

TAILLE
Posts: 968
Joined: Thu Apr 26, 2007 18:51
Location: FRANCE

Post by TAILLE » Mon Jul 20, 2009 14:04

BertTuyt wrote:Do you, like Ed, use a small search (until a static position is reached) to find the result of a capture position?
Yes Bert, I use a small search to reach a static position. In order not to generate too many positions during this small search, I use an alpha-beta search with the initial alpha-beta window set according to the objective (to find a win or to find a draw).
Using 1 or 2 bits per position is another important choice which depends on your objective. I think you are right to choose the 2-bit approach, which is easier and more efficient.
The only case where you might choose 1 bit per position is when there are big problems with memory and CPU time. I mention CPU time because with an algorithm based on only 1 bit per position it is also possible to run a lot of threads in parallel on the same db. I do not think this one-bit approach is necessary for the 8-piece db generation, so we can forget such an approach.
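In rough form the small search looks like this (a sketch only; Position, Move, gen_captures, db_probe, make and undo are stand-ins for the builder's own routines, not my actual code):

Code: Select all

// Follow forced captures until a quiet position, probing the db at the
// leaves; the alpha-beta window is preset to the question being asked.
#include <vector>

enum Value { LOSS = -1, DRAW = 0, WIN = 1 };

struct Position { /* builder's own position type */ };
struct Move     { /* builder's own move type */ };

// Stand-ins for the builder's own routines:
std::vector<Move> gen_captures(const Position&) { return {}; }
Value db_probe(const Position&)                 { return DRAW; }
void  make(Position&, const Move&)              {}
void  undo(Position&, const Move&)              {}

// Negamax over forced capture sequences only.
int capture_search(Position& p, int alpha, int beta) {
    std::vector<Move> captures = gen_captures(p);
    if (captures.empty())
        return db_probe(p);          // quiet position: value is in the db
    int best = LOSS;
    for (const Move& m : captures) {
        make(p, m);
        int v = -capture_search(p, -beta, -alpha);
        undo(p, m);
        if (v > best) best = v;
        if (best > alpha) alpha = best;
        if (alpha >= beta) break;    // objective already decided: stop
    }
    return best;
}

Asking only "can the side to move win?" then uses the minimal window capture_search(p, DRAW, WIN) == WIN, and "can it hold a draw?" the window (LOSS, DRAW), which is why far fewer positions are generated than with a full-width search.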
Gérard

wellnesswrotter
Posts: 323
Joined: Mon May 21, 2007 15:10
Location: www.snukenkuzco.nl
Contact:

Lallement

Post by wellnesswrotter » Mon Jul 20, 2009 15:07

From the book "Drie tegen één is gemeen" (3 against 1 is mean)

A math book about the reason why 3 against 1 is always a draw ONLY in 10x10 draughts.

If you translate it to a bitboard, it looks like this:

Code: Select all

.. .. .. .. 05 00 .. .. .. ..
.. .. .. 16 11 06 01 .. .. ..
.. .. 27 22 17 12 07 02 .. ..
.. 38 33 28 23 18 13 08 03 ..
49 44 39 34 29 24 19 14 09 04
.. 50 45 40 35 30 25 20 15 ..
.. .. 51 46 41 36 31 26 .. ..
.. .. .. 52 47 42 37 .. .. ..
.. .. .. .. 53 48 .. .. .. ..
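In this numbering the four gaps are 10, 21, 32 and 43 (left unused as padding), and all four diagonal steps become the uniform offsets +/-5 and +/-6, so neighbor generation is just shifts and a mask. A sketch of my reading of the diagram:

Code: Select all

#include <cstdint>

// Squares are numbered 0..53; 10, 21, 32 and 43 are "ghost" squares.
constexpr uint64_t GHOSTS =
    (1ULL << 10) | (1ULL << 21) | (1ULL << 32) | (1ULL << 43);
constexpr uint64_t VALID = ((1ULL << 54) - 1) & ~GHOSTS;

// All squares one diagonal step away from the pieces in 'b'.
inline uint64_t neighbors(uint64_t b) {
    return ((b << 5) | (b << 6) | (b >> 5) | (b >> 6)) & VALID;
}

// e.g. neighbors(1ULL << 23) has exactly bits 17, 18, 28 and 29 set.

The ghost squares absorb the shifts that would otherwise wrap around the board edge, which is the whole point of this layout.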
Attachments
lallement_small.jpg (91.1 KiB)

TAILLE
Posts: 968
Joined: Thu Apr 26, 2007 18:51
Location: FRANCE

Post by TAILLE » Wed Jul 22, 2009 14:01

BertTuyt wrote:Gerard thanks.

Do you, like Ed, use a small search (until a static position is reached) to find the result of a capture position?

I always thought that 12 GB should be sufficient.
Nevertheless I run into memory problems.
I have to dig into this.

Maybe I don't free databases which are no longer required in between.
I use 2 bits per position, so I would guess memory shouldn't be a problem.

In the meantime I managed to reduce the time for the 4K - 2K and 2K - 4K to 2046.8 seconds.

Bert
Hi Bert,

Just two questions in order to understand what you calculate:
1) Do you build the DTC table at the same time, or only the WLD table?
2) Does the calculated time include the time necessary to compress the results?

With the experience acquired, and for your information, I will experiment with a new approach before beginning the 8-piece db generation. It is based on 4 bits per position on disk and 0.5 bits per position in memory. This approach should allow me to build the DTC table at the same time, and should also allow a multithreaded calculation, even with no men on the board.
Gérard

Ed Gilbert
Posts: 859
Joined: Sat Apr 28, 2007 14:53
Real name: Ed Gilbert
Location: Morristown, NJ USA
Contact:

Post by Ed Gilbert » Thu Jul 23, 2009 00:52

With the experience acquired, and for your information, I will experiment with a new approach before beginning the 8-piece db generation. It is based on 4 bits per position on disk and 0.5 bits per position in memory.
Gerard, I don't understand this. If your 8-piece db is going to consume 4 bits per position on disk, or 2 positions per byte, then you will need more than 8 TB of disk space!

I have a couple of questions about your 2pc - 7pc db.
1) How large is it on disk? For me it is 24.7 GB.
2) Are you still using the trie format, storing only the smallest two of the W, L, and D sets, each in a separate trie, or have you changed to a different format?

-- Ed

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Post by BertTuyt » Thu Jul 23, 2009 01:37

Ed/Gerard.

I based my builder on the initial work of Michel.
But I have modified it for a 64-bit environment, and the builder now also uses a bitfield-based move generator.
All the files I need in the first stage (captures, conversion to king, or conversion of a man to a higher slice) are preloaded into memory.

I don't use a caching system so far, but I think that for the 8P I will most likely have to; for the 7P I rely on my 12 GB (Gerard, do you also do caching like Ed?).

I don't save DTC information (so only WLD), and as of today I also don't use compressed databases or do a compression step after the new databases are generated.

It is evident that in the Damage program I will eventually use compressed files (but so far not during the build).

It is also clear that this method is maybe somewhat faster, but very, very memory-consuming!

So Ed, you are right, it is not completely an apples-to-apples comparison.

Bert

TAILLE
Posts: 968
Joined: Thu Apr 26, 2007 18:51
Location: FRANCE

Post by TAILLE » Thu Jul 23, 2009 09:26

Ed Gilbert wrote: Gerard, I don't understand this. If your 8-piece db is going to consume 4 bits per position on disk, or 2 positions per byte, then you will need more than 8 TB of disk space!
I was not talking about the whole db but only about the db being built.
Let's take an example. Suppose I want to build the 5K - 2K1M db. For any position of the man I have about (counting some inconsistent positions)
comb(50,5) * comb(50,2) = 2,118,760 * 1,225 = 2,595,481,000 positions with white to move and 2,595,481,000 positions with black to move.
For generating this specific db I need a 2,595,481,000-byte buffer (4 bits per position, covering both sides to move), which can stay on disk with no harm because it is only accessed sequentially, and I need only a 324,435,125-byte (2,595,481,000 / 8) buffer really in memory, because random access to that buffer is necessary.
This saves a lot of memory, which can be used as cache for access to the dbs already generated.
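Spelled out (a quick back-of-the-envelope check of these sizes, not builder code):

Code: Select all

#include <cstdio>

unsigned long long comb(int n, int k) {       // binomial coefficient C(n,k)
    unsigned long long r = 1;
    for (int i = 1; i <= k; ++i)
        r = r * (n - k + i) / i;              // exact at each step
    return r;
}

int main() {
    unsigned long long per_side = comb(50, 5) * comb(50, 2);
    printf("positions per side to move: %llu\n", per_side);  // 2,595,481,000
    // disk buffer: 4 bits per position for each of the two sides to
    // move, i.e. 2 * 4 bits = 1 byte per board configuration
    printf("disk buffer (bytes):   %llu\n", per_side);
    // memory buffer: 0.5 bits per position for each side = 1/8 byte
    printf("memory buffer (bytes): %llu\n", per_side / 8);   // 324,435,125
}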
Ed Gilbert wrote:
With the experience acquired, and for your information, I will experiment with a new approach before beginning the 8-piece db generation. It is based on 4 bits per position on disk and 0.5 bits per position in memory.
Gerard, I don't understand this. If your 8-piece db is going to consume 4 bits per position on disk, or 2 positions per byte, then you will need more than 8 TB of disk space!

I have a couple of questions about your 2pc - 7pc db.
1) How large is it on disk? For me it is 24.7 GB.
2) Are you still using the trie format, storing only the smallest two of the W, L, and D sets, each in a separate trie, or have you changed to a different format?

-- Ed
1) My 2-7pc db takes 20.6 GB on disk in the trie format.
2) Before beginning the 8pc generation I will take time to test other formats, but for the time being I use only the trie format.
Gérard

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Post by BertTuyt » Thu Jul 23, 2009 11:12

Ed/Gerard, just for clarification.

I assume you have both built (only) the most relevant parts of the 7p databases, which are the 5p - 2p and 4p - 3p parts. Or did you include the 6p - 1p?

Ed, going to 8 pieces, basically the same question: 5p - 3p and 4p - 4p?
I don't think (?) the other subparts, 6p - 2p and 7p - 1p, are really interesting?

Bert

TAILLE
Posts: 968
Joined: Thu Apr 26, 2007 18:51
Location: FRANCE

Post by TAILLE » Thu Jul 23, 2009 11:29

BertTuyt wrote:Ed/Gerard, just for clarification.

I assume you have both built (only) the most relevant parts of the 7p databases, which are the 5p - 2p and 4p - 3p parts. Or did you include the 6p - 1p?

Ed, going to 8 pieces, basically the same question: 5p - 3p and 4p - 4p?
I don't think (?) the other subparts, 6p - 2p and 7p - 1p, are really interesting?

Bert
You are right Bert, I did not build the 6p - 1p db and I do not intend to build the 6p - 2p db.
These dbs will be necessary for building the 6p - 3p db, but that is another story.
Gérard

Ed Gilbert
Posts: 859
Joined: Sat Apr 28, 2007 14:53
Real name: Ed Gilbert
Location: Morristown, NJ USA
Contact:

Post by Ed Gilbert » Thu Jul 23, 2009 14:09

Ed, going to 8 pieces, basically the same question: 5p - 3p and 4p - 4p?
Yes, only 4x4 and 5x3 positions, ~16.5e12 positions.

-- Ed

Ed Gilbert
Posts: 859
Joined: Sat Apr 28, 2007 14:53
Real name: Ed Gilbert
Location: Morristown, NJ USA
Contact:

Post by Ed Gilbert » Thu Jul 23, 2009 14:23

I was not talking about the whole db but only about the db being built.
Ok.
Let's take an example. Suppose I want to build the 5K - 2K1M db. For any position of the man I have about (counting some inconsistent positions)
comb(50,5) * comb(50,2) = 2,118,760 * 1,225 = 2,595,481,000 positions with white to move and 2,595,481,000 positions with black to move.
Actually it is only comb(49,5) * comb(44,2) = 1,803,912,264 positions for one side to move (5 kings on the 49 squares left free by the man, then 2 kings on the remaining 44). Maybe that is what you meant by inconsistent positions? But I understand your point.
1) My 2-7pc db takes 20.6 GB on disk in the trie format.
With the trie format, when you need to look up the value of a position, do you have to load the entire file of the slice containing the position, or can you load just a small part of it? I would think it must be difficult to load just a part of the file, since the information for retrieving a position's value is distributed over what must typically be a large area of the file as you traverse the trie from the root node to a leaf.

-- Ed

TAILLE
Posts: 968
Joined: Thu Apr 26, 2007 18:51
Location: FRANCE

Post by TAILLE » Thu Jul 23, 2009 15:02

Ed Gilbert wrote:Actually it is only comb(49,5) * comb(44,2) = 1,803,912,264 positions for one side to move (5 kings on the 49 squares left free by the man, then 2 kings on the remaining 44). Maybe that is what you meant by inconsistent positions? But I understand your point.
Yes
Ed Gilbert wrote: With the trie format, when you need to look up the value of a position, do you have to load the entire file of the slice containing the position, or can you load just a small part of it? I would think it must be difficult to load just a part of the file, since the information for retrieving a position's value is distributed over what must typically be a large area of the file as you traverse the trie from the root node to a leaf.
What do you mean by slice?
I use the following definition: a slice is made of all positions obtained by moving only the kings. In other words, a slice is defined by the position of the men, the number of white kings, and the number of black kings.
In my implementation I always put a slice in one block. As a consequence a block may be larger than 4 KB if the number of kings is high.
Several small slices may be put together in the same block in order to reach a good block size.
The number of tries is equal to the number of blocks; a trie is only a way to search for a position within a given block.
Gérard

Ed Gilbert
Posts: 859
Joined: Sat Apr 28, 2007 14:53
Real name: Ed Gilbert
Location: Morristown, NJ USA
Contact:

Post by Ed Gilbert » Thu Jul 23, 2009 18:38

Gerard, I see now how you organize it. I thought you had reindexed the data into a different format for engine lookups, but you still keep each unique configuration of men as a different subdivision with its own trie. In my case I call a slice a unique combination of number of bm, bk, wm, and wk, so slices are very large. For example, my slice for (1bm 3bk 2wm 2wk) contains ~1.3e12 positions. The data for this slice is stored in one large file, and the index numbers range from 0 to ~1.3e12.

To look up a position, given its index within a slice, I have to first find which 4 KB cache block contains the position. This requires a binary search, as I have the starting index of each 4 KB block stored in memory. Then I do a binary search to find which 64-byte block contains the position, as I have the start index of each 64-byte block available for cache blocks that have been loaded. Once the position is located within a 64-byte block, the exact byte is found by linear search.
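In outline, the lookup chain is something like this (indicative only; the start-index tables are assumed to be already built, and the real code differs in details):

Code: Select all

#include <algorithm>
#include <cstdint>
#include <vector>

struct SliceIndex {
    // block_start[i] = index of the first position stored in 4 KB
    // block i; kept in memory for the whole slice.
    std::vector<int64_t> block_start;
};

struct CacheBlock {
    // start index of each of the 64 sub-blocks (64 bytes each) of a
    // loaded 4 KB block, available once the block is in cache.
    int64_t sub_start[64];
    uint8_t bytes[4096];
};

// Binary search: last 4 KB block whose starting index is <= idx
// (assumes idx >= block_start[0]).
int find_block(const SliceIndex& si, int64_t idx) {
    auto it = std::upper_bound(si.block_start.begin(),
                               si.block_start.end(), idx);
    return (int)(it - si.block_start.begin()) - 1;
}

// Same idea inside the 4 KB block, over the 64-byte sub-blocks.
int find_subblock(const CacheBlock& cb, int64_t idx) {
    const int64_t* it = std::upper_bound(cb.sub_start,
                                         cb.sub_start + 64, idx);
    return (int)(it - cb.sub_start) - 1;
    // ...a linear scan of those 64 bytes then pins down the exact
    // byte holding the value for position 'idx'.
}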

-- Ed

TAILLE
Posts: 968
Joined: Thu Apr 26, 2007 18:51
Location: FRANCE

Post by TAILLE » Thu Jul 23, 2009 20:42

Ed Gilbert wrote:To look up a position, given its index within a slice, I have to first find which 4 KB cache block contains the position. This requires a binary search, as I have the starting index of each 4 KB block stored in memory.
-- Ed
I do not understand clearly how you run this binary search. My intention here is to try a hash access, with the idea that a given block can be loaded in only 2 or 3 different places. Have you also tried such an approach?
Do you have to reorganise this list of starting indexes each time you replace a block by another one? Maybe you can consider that the time required for rearranging this list is not significant compared to loading the block into memory?

Other questions: in case you do not find the position in the cache, how do you find the identity of the block you have to load from disk? Is it also a binary search, based on a table of all starting indexes of all blocks of a given slice (with your definition)? If yes, is this table in memory or in another cache?
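To be concrete, the hash access I have in mind is a set-associative placement, something like this (a sketch of the idea only, not something I have implemented):

Code: Select all

#include <cstdint>

constexpr int WAYS = 2;            // a given block can live in 2 slots
constexpr int SETS = 1 << 16;      // capacity = SETS * WAYS blocks

struct Slot { int file = -1; int64_t block = -1; /* plus 4 KB of data */ };
Slot cache[SETS][WAYS];

// Lookup is one hash plus at most WAYS comparisons; no sorted list of
// starting indexes has to be maintained when a block is replaced.
Slot* lookup(int file, int64_t block) {
    uint64_t h = (uint64_t)block * 0x9E3779B97F4A7C15ULL ^ (unsigned)file;
    Slot* set = cache[h % SETS];
    for (int w = 0; w < WAYS; ++w)
        if (set[w].file == file && set[w].block == block)
            return &set[w];        // hit
    return nullptr;                // miss: load into one of the WAYS slots
}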
Gérard
