Search Algorithm

Discussion about the development of draughts in the era of computers and the Internet.
Rein Halbersma
Posts: 1721
Joined: Wed Apr 14, 2004 16:04

Re: Search Algorithm

Post by Rein Halbersma » Tue Jun 11, 2013 13:18

Walter wrote:
Rein Halbersma wrote:
BertTuyt wrote: Rein, with 31.536.000 seconds a year ( 365 * 24 * 60 * 60 ), I think it should be around 64 core-years for every search-second, based on 2.000.000.000 positions.
Em, yes, off by a factor of two here. I wrote 2^30 / (365 * 24 * 60 * 60) in my calculator, instead of 2^31.
Instead of a fixed x-seconds (or x-depth) search you could attempt to make this dynamic based on hit rate. You could start with calculating the static evaluation value for each of the 2.000.000.000 positions. After that, every position that actually gets hit might be flagged as work for a 1-second search to improve the evaluation accuracy. If a position has been hit 100 times, then it might be flagged as work for a 10-second search to improve the evaluation score further.
I like this solution. It's iterative so you always have an intermediate solution that is usable. You can also combine this with DOE where you seed the DOE queue with the frequencies from TurboDambase or random play.
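
A minimal sketch of this hit-rate driven refinement (illustrative only: the table layout, thresholds and function names are assumptions, not taken from any existing program). Each probe bumps a hit counter, and crossing a threshold queues the position for a deeper re-search whose result overwrites the stored score.

Code:

# Hypothetical hit-rate driven refinement of a static evaluation table.
from dataclasses import dataclass
from collections import deque

# hit-count thresholds -> search budget in seconds (illustrative values)
BUDGETS = [(100, 10.0), (1, 1.0)]

@dataclass
class Entry:
    score: int                 # starts out as the static evaluation
    hits: int = 0
    budget_used: float = 0.0   # seconds already spent refining this position

table: dict[int, Entry] = {}                       # key = hash of the position
refine_queue: deque[tuple[int, float]] = deque()   # (key, target budget)

def probe(key: int, static_eval) -> int:
    """Look up a position; schedule deeper analysis as its hit count grows."""
    entry = table.get(key)
    if entry is None:
        entry = table[key] = Entry(score=static_eval(key))
    entry.hits += 1
    for min_hits, budget in BUDGETS:
        if entry.hits >= min_hits and entry.budget_used < budget:
            refine_queue.append((key, budget))
            break
    return entry.score

def refine(search) -> None:
    """Background worker: spend the queued time budgets on flagged positions."""
    while refine_queue:
        key, budget = refine_queue.popleft()
        entry = table[key]
        if entry.budget_used < budget:
            entry.score = search(key, budget - entry.budget_used)
            entry.budget_used = budget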

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Tue Jun 11, 2013 14:20

MichelG wrote:
Walter wrote: Instead of a fixed x-seconds (or x-depth) search you could attempt to make this dynamic based on hit rate. You could start with calculating the static evaluation value for each of the 2.000.000.000 positions. After that, every position that actually gets hit might be flagged as work for a 1-second search to improve the evaluation accuracy. If a position has been hit 100 times, then it might be flagged as work for a 10-second search to improve the evaluation score further.
Finding the required hashtable size proves harder than I thought. The problem is that the samples from the random games are not as random as I would like. Other methods point to about 50-100 billion positions for 15vs15. I'll try to investigate more.

Anyway, with estimates between 4 and 100 billion (and that's 15v15 only!), 1-second searches are not going to be realistic unless you borrow Bert's 1-million-core computer for a while :-) Maybe you can set your GPU to work though.

Michel
Michel, the base assumption for predicting the number of positions in a specific position group (like 15 men vs 15 men) is a uniform probability for the individual positions.

This is most likely not fully the case for those positions which contain many pieces in direct contact with each other. I have carried out multiple simulations from 20x20 down to 3x3, and also found that for the 15x15 group the estimated population count was far too small (compared with previously obtained results). For a smaller piece count (like 3x3, 4x4, ...), random play seems to provide better predictions.

I will post an overview of these results in the afternoon...


Bert

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Tue Jun 11, 2013 21:07

Herewith the .xls file with all random simulations.
For every X-X (20-20 --> 3-3) position group I have done 100M random simulations/games.
As an example, the graph of the 15-15 simulation is shown below.
r15.png (10.8 KiB)
On the horizontal axis: the number of simulations, in multiples of 1M.
On the vertical axis: the number of positions.

The red line (series 2) is the number of total X-X positions (in this case ( 15-15) ).
The blue line (series 1) is the number of unique positions.
The green line (series 3) is a measure of the "overlap": Factor * (Total Positions - Unique Positions) / Total Positions.
The green line should provide an indication of the overall 15-15 set size, if all positions had a uniform distribution.

This is unfortunately not the case, I think, as I expect the 15-15 set to be the biggest set within the X-X population.
The overlap is due more to a non-uniform distribution of positions, and is therefore not an indication of the set size.

One sees that the "cache hits" increase again for small X (4 --> 3).
Basically this overlap (or cache-hit count) is an (indirect) measure of the total population size (again based on the assumption of an equal distribution).

Another thing one can observe in the graphs is that the number of X-X positions extracted from these simulations decreases.
The logical explanation is that random games contain weird moves (and captures), so an overall material balance becomes less likely as the game progresses...
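
For what it is worth, a small sketch of how this overlap between total and unique sampled positions can be turned into a set-size estimate under the uniform assumption (a rough reconstruction, not the exact spreadsheet formula):

Code:

# Estimate the set size N from P random draws of which U were unique,
# assuming every position is equally likely (which, as noted, it is not).
import math

def estimate_set_size(total_draws: int, unique: int) -> float:
    """Invert E = N * (1 - ((N-1)/N)**P) for N, given P draws of which E were unique."""
    duplicates = total_draws - unique
    if duplicates == 0:
        return float("inf")        # no overlap seen yet: only a lower bound exists
    # birthday-style starting point for small sampling fractions: N ~ P^2 / (2*D)
    n = total_draws * total_draws / (2.0 * duplicates)
    for _ in range(100):           # fixed-point refinement of the exact expectation
        n = max(n, unique + 1.0)   # the true N can never be below the uniques seen
        expected = n * (1.0 - math.exp(total_draws * math.log1p(-1.0 / n)))
        n *= unique / expected
    return n

# e.g. 100M random samples of which 99M were unique -> roughly 5e9 positions
print(f"{estimate_set_size(100_000_000, 99_000_000):.2e}")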

Bert
Attachments
Random.xls
(354.5 KiB) Downloaded 231 times

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Tue Jun 11, 2013 21:39

To be complete: after 100M simulations the program "discovered" 3918 unique 1-1 positions, which is 98.7% of the total of 3970 ( 2 * 1985 ).
Of these 3918, 1957 are white to move and 1961 black to move.
I would expect that if you take a newly found 1-1 position and apply a perft-like algorithm from that point onwards, you would "find" all 3970 positions.
Maybe I need to do that, to really prove that, due to the initial symmetry breaking, all theoretical 1-1 positions are also possible in "practice".
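
The "perft-like" check could look roughly like this (a game-agnostic sketch; the successors and keep functions are stand-ins to be supplied by a real move generator, not a draughts engine):

Code:

# Breadth-first enumeration of every position reachable from a start position,
# collecting only those that satisfy a filter (e.g. exactly 1 man vs 1 man).
from collections import deque

def reachable(start, successors, keep):
    """successors(pos) yields the positions after each legal move;
    keep(pos) says whether the position belongs to the set being counted."""
    seen = {start}
    collected = {start} if keep(start) else set()
    frontier = deque([start])
    while frontier:
        pos = frontier.popleft()
        for nxt in successors(pos):
            if nxt not in seen:
                seen.add(nxt)
                if keep(nxt):
                    collected.add(nxt)
                frontier.append(nxt)
    return collected

# Starting this from any reached 1-1 position and comparing len(collected)
# against 3970 would show whether all theoretical 1-1 positions are reachable.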

As mentioned earlier, I guess that the "overlap" in the previous X-X simulations (for low X, like 6, 5, 4, 3) provides some initial clues about the X-X population size.

Bert

MichelG
Posts: 244
Joined: Sun Dec 28, 2003 20:24

Re: Search Algorithm

Post by MichelG » Wed Jun 12, 2013 13:20

BertTuyt wrote: The green line should provide an indication of the overall 15-15 set size, if all positions had a uniform distribution.

This is unfortunately not the case, I think, as I expect the 15-15 set to be the biggest set within the X-X population.
The overlap is due more to a non-uniform distribution of positions, and is therefore not an indication of the set size.

One sees that the "cache hits" increase again for small X (4 --> 3).
Basically this overlap (or cache-hit count) is an (indirect) measure of the total population size (again based on the assumption of an equal distribution).

Another thing one can observe in the graphs is that the number of X-X positions extracted from these simulations decreases.
The logical explanation is that random games contain weird moves (and captures), so an overall material balance becomes less likely as the game progresses...

Bert
The 'gap' in your graph seems to be increasing only very slowly. This points to a large number of required positions.

I have further worked out a lower bound for the population size.

Dragon played about 10 million games that started with a random position and then played to the end. These games were played at 4 plies. (that took a while... :-))

From these games, I extracted about 3 billion positions. That's not enough, so I expanded the dataset by allowing some variation of the pieces on fields 1-5 and 46-50. This gives me a table of about 100 billion unique positions.

My question: how big a hashtable do I need to have a 5% hitrate in this table?

This is what I get:

Code: Select all

piececount  hashtable size at 5% hitrate (gp)
18vs18      0.05
16vs16      0.6
14vs14      3.3
12vs12      6.5
10vs10      4.5
8vs8        1.2
7vs7        0.5
6vs6        0.1
1 gp = 1 billion positions.

You could extrapolate this to a higher percentage if the positions were uniformly distributed. However, they are not uniformly distributed; but if we multiply by 20, you can use the following table as a lower bound on the required size:

Code: Select all

piececount  hashtable size at 63% hitrate (gp)
18vs18        1
16vs16       12
15vs15       31
14vs14       64
12vs12      129
10vs10       89
8vs8         24
7vs7          9
6vs6          3
5vs5          1
I suspect that the real numbers are a couple of times higher than this. Add it all up, and you are looking at a 1-2 trillion position hashtable. It is not impossible, but it is not for today.
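
A small sketch of the extrapolation behind that second table: under the uniform-distribution assumption the hitrate scales linearly with table size (which would give a factor of about 0.63/0.05 ≈ 12.6), while the factor 20 used above is the lower-bound choice for the non-uniform reality. The numbers are copied from the first table.

Code:

# Scale the measured 5%-hitrate table sizes up to a ~63% hitrate lower bound.
measured_gp = {   # piececount -> table size (giga-positions) at 5% hitrate
    "18vs18": 0.05, "16vs16": 0.6, "14vs14": 3.3, "12vs12": 6.5,
    "10vs10": 4.5, "8vs8": 1.2, "7vs7": 0.5, "6vs6": 0.1,
}

def uniform_size(size_gp: float, measured_rate: float, target_rate: float) -> float:
    """Required size if hitrate scaled linearly with table size (uniform case)."""
    return size_gp * target_rate / measured_rate

FACTOR = 20   # lower-bound multiplier used above for the non-uniform reality

for pieces, gp in measured_gp.items():
    print(f"{pieces}: uniform {uniform_size(gp, 0.05, 0.632):6.1f} gp,"
          f" lower bound {gp * FACTOR:6.0f} gp")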

Michel

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Wed Jun 12, 2013 22:55

Michel, thanks for your post; some questions related to your findings:
Dragon played about 10 million games that started with a random position and then played to the end. These games were played at 4 plies. (that took a while... :-))
In what way did you choose/determine the random position (how many pieces did the position contain....)?
You make me curious: how much time did it take (did you use all 6 cores)?

Your peak in position count is around 12, in my calculations 15, interesting but strange.

To what extent does a program already perform better if (for example) the search is assisted (so a dynamic hash table plus a static one with pre-cooked positions) with 7P positions (7x7) which also have a reasonable search depth?

If we use a stable DOE table for the first 10-15 moves, then we might already see the first searches with terminal nodes with a low material count.
In that case we skip the dangerous position-explosion zone.
Whether this is successful also depends on the point in the game where programs tend to make the (vital) mistake and thereafter lose.

An interesting "mind" experiment:
I guess it would be doable to store all nodes during one game (capture, non-capture, non-terminal, X-Y material-imbalanced positions, ..., whatever).
We would then play a DXP match against a strong opponent (like Kingsrow).
If the game is a draw, the position table is cleared.
But if we lose, would it be possible, based on further analysis of the total game tree, to find and recalibrate the positions which basically caused the loss and/or were based on completely wrong evaluation scores?
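
A hedged sketch of what the bookkeeping for such an experiment might look like (the record layout and the score-drop threshold are illustrative, not Damage's actual code):

Code:

# Record the root position and search score for every own move of a game;
# after a loss, look for sharp score drops as candidates for positions
# whose evaluation was badly wrong and should be recalibrated.
from dataclasses import dataclass

@dataclass
class MoveRecord:
    position_key: int    # e.g. a Zobrist-style hash of the root position
    search_score: int    # the score the engine reported for its move

def blunder_candidates(game: list[MoveRecord], drop: int = 150) -> list[MoveRecord]:
    """Positions just before the score fell by more than `drop` between
    consecutive own moves -- rough candidates for the decisive mistake."""
    return [game[i - 1]
            for i in range(1, len(game))
            if game[i - 1].search_score - game[i].search_score > drop]

# The flagged positions could then be re-searched with a much larger time
# budget, and the corrected scores stored in the static table/book.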

Bert

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Thu Jun 13, 2013 00:01

Herewith a picture to share my basic thoughts.
The idea is to create a layer between the Opening (DOE-Book) and the Endgame (Endgame Databases, EGDB).
I call this Book Assisted Search (BAS) or Table Assisted Search (TAS).
As the calculations from Michel indicate, the 7P set seems to have a "manageable size".
The question remains whether the damage :D is not already done "somewhere else", so that probing this Table/Book might be too late in many cases...

To be continued.....

Bert
EMO.png (57.63 KiB)

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Thu Jun 13, 2013 00:21

Herewith the .xls file with the short-learn DXP games, as already reported.

Bert
Attachments
Learn20_20s.xls
(121 KiB) Downloaded 253 times

MichelG
Posts: 244
Joined: Sun Dec 28, 2003 20:24

Re: Search Algorithm

Post by MichelG » Thu Jun 13, 2013 08:57

BertTuyt wrote: In what way did you choose/determine the random position (how many pieces did the position contain....)?
You make me curious: how much time did it take (did you use all 6 cores)?

Your peak in position count is around 12, in my calculations 15, interesting but strange.
For the first 14 ply, I let the program play a random move. If after this the game eval was balanced, I let the game continue with search. I don't know the exact time it took (I did this a while ago), but Dragon can do about 40.000 games/hour in 4-ply self-play.

It's not so strange that we have different peaks, as we measure different things. Your method mostly measures how often the X-piece positions are visited. Because there is no saturation (yet), this is not an indication of the total number of possible positions.

Just imagine a sample of only 1 game: the 20-piece positions will occur about as often as the 12-piece positions. This effect will persist until you run into saturation.
BertTuyt wrote: To what extent does a program already perform better if (for example) the search is assisted (so a dynamic hash table plus a static one with pre-cooked positions) with 7P positions (7x7) which also have a reasonable search depth?

...

Herewith a picture to share my basic thoughts.
The idea is to create a layer between the Opening (DOE-Book) and the Endgame (Endgame Databases, EGDB).
I call this Book Assisted Search (BAS) or Table Assisted Search (TAS).
As the calculations from Michel indicate, the 7P set seems to have a "manageable size".
The question remains whether the damage :D is not already done "somewhere else", so that probing this Table/Book might be too late in many cases...
Using a big table of endgame positions would give some improvement, I think, but not as much as in the middlegame.

I am thinking more of the DOE-generator.

Dragon has been creating its book for more than a month now; in this time, it visited 30 trillion positions (30 terapos), mostly within the 12-18 (white) pieces range. This means it must be searching the same positions over and over again, and that seems rather inefficient.

A simple hash-table is not enough; a position may occur on one thread on a particular machine, and reoccur on a different thread (or different machine) several days later.
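
For illustration only, a minimal sketch of such a persistent, shared store. SQLite is used here purely because it ships with Python; it covers multiple processes on one machine, while a real multi-machine setup would need a networked store instead.

Code:

# A persistent position store keyed on a position hash: results survive
# across threads, runs and days, so the same position is not re-searched.
import sqlite3

class PositionStore:
    def __init__(self, path: str = "doe_positions.db"):
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS pos ("
                        " key INTEGER PRIMARY KEY, score INTEGER, depth INTEGER)")
        self.db.commit()

    def probe(self, key: int, min_depth: int):
        """Return a stored score if it was searched at least min_depth deep."""
        row = self.db.execute("SELECT score, depth FROM pos WHERE key = ?",
                              (key,)).fetchone()
        return row[0] if row and row[1] >= min_depth else None

    def store(self, key: int, score: int, depth: int) -> None:
        """Keep only the deepest result seen for each position."""
        self.db.execute(
            "INSERT INTO pos (key, score, depth) VALUES (?, ?, ?) "
            "ON CONFLICT(key) DO UPDATE SET score = excluded.score, "
            "depth = excluded.depth WHERE excluded.depth > pos.depth",
            (key, score, depth))
        self.db.commit()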

Michel

MichelG
Posts: 244
Joined: Sun Dec 28, 2003 20:24

Re: Search Algorithm

Post by MichelG » Mon Jun 17, 2013 12:58

I did some more research into this and have some more data; it seems that the hitrate is highly dependent on the size of the hashtable, scaling approximately with the square root of the hashtable size.

So if your hashtable is 4 times as big, the hitrate will only double. Getting a high hitrate is incredibly difficult.

Some data points from 12v12 noncapture positions, computed with a somewhat different method than before:

Code: Select all

hitrate   required hashtable (giga-positions)
0.011       2.2
0.018       5.4
0.026       9.9
0.035      16.5
0.060      42
0.104     115
0.175     303
0.383    1590
0.798   16265
NB: These position-count estimates have a large margin of error.
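
As a quick sanity check, a log-log fit of these data points (a rough sketch on the numbers above, nothing more) gives an exponent close to 0.5:

Code:

# Fit hitrate ~ c * size^b to the 12v12 data points; b comes out near 0.5.
import math

sizes = [2.2, 5.4, 9.9, 16.5, 42, 115, 303, 1590, 16265]          # giga-positions
rates = [0.011, 0.018, 0.026, 0.035, 0.060, 0.104, 0.175, 0.383, 0.798]

xs = [math.log(s) for s in sizes]
ys = [math.log(r) for r in rates]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
     / sum((x - mx) ** 2 for x in xs))
c = math.exp(my - b * mx)
print(f"hitrate ~ {c:.4f} * size^{b:.2f}")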

Michel

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Mon Jun 17, 2013 16:54

Michel, for my understanding: is a hitrate of 1 in your table 1% or 100%?
And last but not least, what is your estimate (as of today) for the 12m-12m set size?

If all positions had the same probability, then the expected number of unique positions E, out of a set of N positions, after P "picks" is:

E = N * ( 1 - ( (N-1)/N)^P )

For P = N: E = N * ( 1 - ( (N-1)/N)^N )

If N goes to infinity (or is very large), this yields the expression you also mentioned:

E/N = ( 1 - 1/e ) (something around 63%).
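
A short numerical check of this formula and its P = N limit (just the expression above, computed via logs so it stays stable for large N):

Code:

# E = N * (1 - ((N-1)/N)^P), and the P = N limit tending to 1 - 1/e.
import math

def expected_unique(n: int, p: int) -> float:
    return n * (1.0 - math.exp(p * math.log1p(-1.0 / n)))

n = 10**9
print(expected_unique(n, n) / n)   # ~0.6321...
print(1 - 1 / math.e)              # 0.6321..., i.e. 1 - 1/e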

For the 3m-3m set I "simulated" almost 1G random games, and you see that it is almost impossible to "get" a proper curve fit, which implies (if the formula is correct) that at least the boundary condition of equal probability is not realistic.

I still believe, however, that "in the end" 10^12 positions might be "sufficient" for a middlegame book/table.
But so far (other than for 20-20, 1-1, 2-2 and 3-3) I don't have good predictions for the other X-X set sizes (including the question whether the peak is around 15-15 or 12-12 :) ) to justify this claim...

Bert

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Mon Jun 17, 2013 19:02

See below the next processor for Computer Draughts programmers :)
Haswell-E will be Intel's last and best offering using the 22 nm fabrication process. It will come in two versions core-count-wise: 8-core parts as well as 6-core parts, with Hyper-Threading enabled, therefore boasting no less than 16 execution threads for the 8-core chips and 12 execution threads for the 6-core versions. Judging by that alone, Haswell-E should constitute a far superior upgrade over Ivy Bridge-E compared to what the latter will be in relation to Sandy Bridge-E, with Haswell-E offering two additional physical cores that translate into four additional execution threads. The new chips will boast 2.5 MB of L3 cache per core, summing up to 20 MB total L3 cache for the 8-core parts. TDP will remain in the same neighborhood as its predecessors, around 130-140 W.

Haswell-E will of course be accompanied by a new platform, dubbed Wellsburg. The X99 chipset will bring a host of new features, the most important one being quad-channel DDR4 support, with four basic frequency settings, starting at 1333 MHz and moving up in increments of 266 MHz to a maximum of 2133 MHz, at which point overclocking should be employed to attain superior clocks. However, we sincerely doubt that any DDR4 modules/kits clocked below 2133 MHz will be made available for this platform. Modestly clocked (1333 MHz), low-voltage (1.2 V) kits are supported by the new platform as well. The DIMM connector was also modified to support Non-Volatile DIMMs, receiving four more pins for the purpose (288 instead of 284), a modification that will not negatively impact compatibility with 284-pin modules in any way.

Other points of interest regarding the X99 chipset are:

Up to six USB 3.0 ports
Up to eight USB 2.0 ports
Up to ten SATA 6 Gbps ports
Integrated Clock support
TDP of 6.5W

The LGA 2011 socket will be updated too. Dubbed LGA 2011-3, the socket will see the pin layout change while remaining numerically the same; the change, going by Intel's own slides, claims superior efficiency. The chip's IHS received a makeover as well, looking very different from current Intel offerings.
Bert

MichelG
Posts: 244
Joined: Sun Dec 28, 2003 20:24

Re: Search Algorithm

Post by MichelG » Mon Jun 17, 2013 19:35

BertTuyt wrote: Michel, for my understanding: is a hitrate of 1 in your table 1% or 100%?
And last but not least, what is your estimate (as of today) for the 12m-12m set size?

If all positions had the same probability, then the expected number of unique positions E, out of a set of N positions, after P "picks" is:

E = N * ( 1 - ( (N-1)/N)^P )
0.01 means 1% in the table.

I don't think you can speak of 'a' set size. For a hashtable of size 3x10^12, you would get a 50% hitrate (in 12v12). So a well-picked set of 3x10^12 positions will include 50% of all positions you will find in a game tree.

But because the positions are not uniformly distributed, you can't expect a 6x10^12 hashtable to hit 75% (i.e. 100% - 50% * 50% = 75%). Some positions will occur much more often than others, and as P increases the set of unique positions will keep growing towards some really huge number, as more and more infrequent positions are encountered.

Michel

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Wed Jun 19, 2013 17:26

I have (re)started some Damage architecture "optimizations" (aiming for a 20% speed-increase, at least that is my target :) ).
To check that I didn't introduce major bugs and/or crashes, I played a standard 10-min match against Kingsrow.
Results from the perspective of Damage: 4 wins, 5 losses and 149 draws.
As an indication for future matches, the 56 consecutive draws (games 18-73) are remarkable...

Attached the match .pdn file for those interested.

A question for the other programmers: as I don't have reference scores of other programs (like Dragon, Maximus, Damy), I still don't know whether this result is comparable with similar matches...

Bert
Attachments
dxpgames 19-Jun-2013.pdn
(157.52 KiB) Downloaded 261 times

MichelG
Posts: 244
Joined: Sun Dec 28, 2003 20:24

Re: Search Algorithm

Post by MichelG » Wed Jun 19, 2013 20:22

BertTuyt wrote: I have (re)started some Damage architecture "optimizations" (aiming for a 20% speed-increase, at least that is my target :) ).
To check that I didn't introduce major bugs and/or crashes, I played a standard 10-min match against Kingsrow.
Results from the perspective of Damage: 4 wins, 5 losses and 149 draws.
As an indication for future matches, the 56 consecutive draws (games 18-73) are remarkable...

Attached the match .pdn file for those interested.

A question for the other programmers: as I don't have reference scores of other programs (like Dragon, Maximus, Damy), I still don't know whether this result is comparable with similar matches...

Bert
Dragon's strength has been creeping up slowly over the last couple of months. The last match of Dragon vs Kingsrow ended in:
25 wins, 35 losses, 170 draws and 15 unknowns, as seen from Dragon's perspective (a 48% score). This is at 90 moves/1 minute, on 2 cores with 128 MB hash and a limited set of endgame databases. The first 245 games used 3-move-ballot openings.

I don't play Dragon at 10 minutes a game, so it is a bit hard to compare. 10 minutes per game won't yield any usable statistics, because I need my laptop every day for other uses.

Good luck getting that 20% :-) It must be getting harder and harder to find improvements.

My plan: release a new version in a few weeks, and then focus on improving the tree search over the next couple of months. Dragon still uses plain old alpha-beta search with limited selectivity...

Michel
