Search Algorithm

Discussion about development of draughts in the time of computer and Internet.
MichelG
Posts: 244
Joined: Sun Dec 28, 2003 20:24
Contact:

Re: Search Algorithm

Post by MichelG » Mon May 13, 2013 12:38

BertTuyt wrote:Next to the DOE approach to generate an Opening Book, I have started a parallel path.

A collaborative effort across multiple computers could also help here.

I'm open to suggestions, sharing, whatever...

I'll keep you posted.

Bert
My current opening book consists of a (small) database of human games, but I am now moving over to the DOE approach.

For the human-book approach, I built a tree from the small game set and then evaluated the leaf nodes. Combined with some blunder checking, this gives a small but reasonable book.

However, my computer is now working hard on the DOE approach; it has been running for 10 days now on 12 threads and generates some 3000 positions per day. Each thread is handling a different position for maximum efficiency.

Won't generating this new book compete for CPU-time with your DOE generator? How far have you come with it?

Michel

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Mon May 13, 2013 21:10

Michel, thanks for your post.

I'm still testing several implementation ideas before I go into stealth mode and start a DOE calculation period.
In the end I want to use a hybrid approach: a book based on actual game situations, extended with DOE.
That way I keep the option to later add positions, fragments of games (where Damage, for example, lost), or interesting new games with (according to experts) new opening theory.

So far I use a DOE-ish implementation. Where I differ is that I don't distinguish between the book player and the non-book player, so my priority function is the same everywhere.
This differs from DOE as described in the Lincke paper. I also don't know yet what the optimal value is for the parameter w (which determines drop-out behavior). I think I will use 1 (which in my case is one centi-man; the value of a man is 100).

How many positions do you want to reach in the end? And how deep (move sequence) do you go with variations?
I also have not decided on a max depth, as it leads to all kinds of cycle problems (if both sides have a king). Although I think I solved it, I'm not sure it is 100% foolproof.

I'm also not sure to what extent books add to the strength of the program. But as I think you also mentioned, things could be different if books grow to huge numbers (> 10^9 or so).

Bert

Ed Gilbert
Posts: 859
Joined: Sat Apr 28, 2007 14:53
Real name: Ed Gilbert
Location: Morristown, NJ USA
Contact:

Re: Search Algorithm

Post by Ed Gilbert » Tue May 14, 2013 00:27

Where I differ is that I don't distinguish between the book player and the non-book player, so my priority function is the same everywhere.
The advantage of using the concept of a book player and non-book player is that it allows you to build a book that goes deeper into the game tree (for the same number of book positions). It means your engine gets to play on average more book moves before it drops out of its book.

Let me try some very coarse calculations. Suppose for example that at every position there are about 5 plausible moves that will need to be expanded by DOE, because the propagated scores of those positions are either the same as the score of the best move, or very close to that. Some positions have captures so they will have less than 5, but others will have more than 5. I don't think 5 is an unreasonable estimate for international draughts.

If you ignore the difference between the book player and non-book player, then the number of positions that you have to expand at each depth (d) into the game tree is 5^d. To get 10 plies into the tree, you need to search 5^10 = 9.8 million positions. This will take a very long time, even with multi-core PCs, and then you will still drop out of book after only 10 plies, 5 moves by each player, which isn't very far.

If instead you decide that the book player (your engine) will only play one best move at each position, then you don't need nearly that many book positions to go 10 plies into the game. To roughly estimate how many positions are needed, you need to consider separately the case where the book player plays first and then when he plays second. Starting from the root, if the non-book player plays first, after 1 ply there are 5 book positions, and after 2 plies there are 10. After 10 plies, the number of book positions is (5x2 + 25x2 + 125x2 + 625x2 + 3125x2) = 7810 positions. Of course you also have to allow for the case where the book player plays first, so that doubles it to 15620 positions. Quite a difference from 9.8M! This ignores transpositions, and you probably want to allow the book player to play more than one best move so your engine's play is not completely predictable, but still you can see that it is a huge optimization to use an algorithm that restricts the number of book player moves.
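Ed's back-of-the-envelope numbers can be checked with a short script. This is a sketch under his stated assumptions (a uniform branching factor of 5 for the non-book player, a single best reply for the book player); `restricted_count` is just a name for this illustration:

```python
def restricted_count(b, plies):
    """Cumulative book positions when the book player always answers with a
    single best move while the non-book player has b alternatives, with the
    non-book player moving first."""
    total, width = 0, 1
    for ply in range(1, plies + 1):
        if ply % 2 == 1:       # non-book player to move: b choices per line
            width *= b
        total += width         # each line adds one position per ply
    return total

b, plies = 5, 10
print(b ** plies)                      # unrestricted positions at depth 10: 9765625
print(2 * restricted_count(b, plies))  # x2 for the book-player-moves-first case: 15620
```

This reproduces Ed's 7810-per-case figure (5x2 + 25x2 + 125x2 + 625x2 + 3125x2) and the 15620 total.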

-- Ed

Rein Halbersma
Posts: 1722
Joined: Wed Apr 14, 2004 16:04
Contact:

Re: Search Algorithm

Post by Rein Halbersma » Tue May 14, 2013 08:10

Ed Gilbert wrote:
Where I differ is that I don't distinguish between the book player and the non-book player, so my priority function is the same everywhere.
The advantage of using the concept of a book player and non-book player is that it allows you to build a book that goes deeper into the game tree (for the same number of book positions). It means your engine gets to play on average more book moves before it drops out of its book.

Let me try some very coarse calculations. Suppose for example that at every position there are about 5 plausible moves that will need to be expanded by DOE, because the propagated scores of those positions are either the same as the score of the best move, or very close to that. Some positions have captures so they will have less than 5, but others will have more than 5. I don't think 5 is an unreasonable estimate for international draughts.

If you ignore the difference between the book player and non-book player, then the number of positions that you have to expand at each depth (d) into the game tree is 5^d. To get 10 plies into the tree, you need to search 5^10 = 9.8 million positions. This will take a very long time, even with multi-core PCs, and then you will still drop out of book after only 10 plies, 5 moves by each player, which isn't very far.

If instead you decide that the book player (your engine) will only play one best move at each position, then you don't need nearly that many book positions to go 10 plies into the game. To roughly estimate how many positions are needed, you need to consider separately the case where the book player plays first and then when he plays second. Starting from the root, if the non-book player plays first, after 1 ply there are 5 book positions, and after 2 plies there are 10. After 10 plies, the number of book positions is (5x2 + 25x2 + 125x2 + 625x2 + 3125x2) = 7810 positions. Of course you also have to allow for the case where the book player plays first, so that doubles it to 15620 positions. Quite a difference from 9.8M! This ignores transpositions, and you probably want to allow the book player to play more than one best move so your engine's play is not completely predictable, but still you can see that it is a huge optimization to use an algorithm that restricts the number of book player moves.

-- Ed
You could even use a book1 vs book2 algorithm, where book1 is the book you want to drop-expand, and book2 is a collection of GM-games (say of all positions that have occurred more than 10 times in top-level games). This will generate a book1 that computes all deviations from published play. Did anyone ever try this?

Rein

MichelG
Posts: 244
Joined: Sun Dec 28, 2003 20:24
Contact:

Re: Search Algorithm

Post by MichelG » Wed May 15, 2013 12:49

BertTuyt wrote:Michel, thanks for your post.
So far I use a DOE-ish implementation. Where I differ is that I don't distinguish between the book player and the non-book player, so my priority function is the same everywhere.
This differs from DOE as described in the Lincke paper. I also don't know yet what the optimal value is for the parameter w (which determines drop-out behavior). I think I will use 1 (which in my case is one centi-man; the value of a man is 100).

How many positions do you want to reach in the end? And how deep (move sequence) do you go with variations?
I also have not decided on a max depth, as it leads to all kinds of cycle problems (if both sides have a king). Although I think I solved it, I'm not sure it is 100% foolproof.

I'm also not sure to what extent books add to the strength of the program. But as I think you also mentioned, things could be different if books grow to huge numbers (> 10^9 or so).

Bert
I'll go for about 200k positions for now.

As for the value of W, I used the following strategy:
1) Take a random value of W (e.g. 1)
2) Quickly create a 1000k book by doing shallow searches (e.g. 6 ply) [actually, I just extrapolated from a 100k book]
3) Compute the average depth of the book, and try it out.
4) Adjust W and try again until you find some compromise between the depth and width of the book.

I think the most important reason for the book at the moment is to save time. If you can avoid having to think about the first few ply, you can spend more computing time on the rest of the game.

What would be the advantage of 10^9 positions in the book? If you want to compute it in your lifetime, you would have to use very shallow searches, unless you have access to some huge cluster of computers.

Michel

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Wed May 15, 2013 23:52

Michel,

do you (as Ed suggested) use a different w-value for the book and non-book player?
I'm not yet sure I will go that way; I need some more testing.
I recognize that Ed's approach is more in line with the original DOE-principle.

However, there are several reasons why I (maybe) don't want to restrict the book player to the single best reply:
- Sometimes several moves share the same "best" value, and I want to evaluate those lines.
- I want to introduce some randomness in the book play, so I also allow lines with a slightly lower score (on the order of a centi-man).
- Book lines with a somewhat lower initial score can become more promising when searched deeper. With a proper DOE value the book player will also examine these lines, and due to the negative initial impact on the priority function, less effort will be spent on them anyway.

In my case (with a w-value of 1, so far for both the book and non-book player), lines with an initial score difference of 5 centi-man will only be examined once the main line is 5 ply deeper.

So (although the jury is still out), I may end up using different w-values (book player 5, non-book player 1) to avoid the explosion trap Ed rightly mentioned.
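For what it's worth, the drop-out behaviour Bert describes can be written as a tiny priority function. This is my reading of the mechanism, not the exact formula from Lincke's paper; lower priority means expand sooner:

```python
def expansion_priority(depth, score_deficit, w):
    """Drop-out-expansion-style leaf priority (lower = expand sooner).
    A line whose score is score_deficit below the best line is treated
    as if it were score_deficit / w plies deeper."""
    return depth + score_deficit / w

# With w = 1 centi-man, a line 5 centi-man worse than the main line only
# reaches the same priority once the main line is 5 plies deeper:
print(expansion_priority(10, 5, 1))   # 15.0
print(expansion_priority(15, 0, 1))   # 15.0
```

A larger w for the book player (e.g. 5) shrinks the depth penalty per centi-man, so near-best alternatives for the book player are expanded sooner.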

How do you handle this in Dragon so far, and what is your experience?

Bert

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Thu May 16, 2013 00:24

MichelG wrote:
BertTuyt wrote:Michel, thanks for your post.
So far I use a DOE-ish implementation. Where I differ is that I don't distinguish between the book player and the non-book player, so my priority function is the same everywhere.
This differs from DOE as described in the Lincke paper. I also don't know yet what the optimal value is for the parameter w (which determines drop-out behavior). I think I will use 1 (which in my case is one centi-man; the value of a man is 100).

How many positions do you want to reach in the end? And how deep (move sequence) do you go with variations?
I also have not decided on a max depth, as it leads to all kinds of cycle problems (if both sides have a king). Although I think I solved it, I'm not sure it is 100% foolproof.

I'm also not sure to what extent books add to the strength of the program. But as I think you also mentioned, things could be different if books grow to huge numbers (> 10^9 or so).

Bert
I'll go for about 200k positions for now.

As for the value of W, I used the following strategy:
1) Take a random value of W (e.g. 1)
2) Quickly create a 1000k book by doing shallow searches (e.g. 6 ply) [actually, I just extrapolated from a 100k book]
3) Compute the average depth of the book, and try it out.
4) Adjust W and try again until you find some compromise between the depth and width of the book.

I think the most important reason for the book at the moment is to save time. If you can avoid having to think about the first few ply, you can spend more computing time on the rest of the game.

What would be the advantage of 10^9 positions in the book? If you want to compute it in your lifetime, you would have to use very shallow searches, unless you have access to some huge cluster of computers.

Michel
Michel, I agree with you that without huge computer capacity a 10^9 book is a lifetime activity (then again, I hope to spend the rest of my life on this computer draughts subject anyway).
My open question (and I don't have an answer either) is whether it would be possible to (significantly) improve the program's performance by injecting all kinds of non-opening (critical) positions, and DOE-like expansions of those positions, into the book.

Maybe I can formulate my question in a different way. In music encoding (MP3) one can throw away much information that is apparently redundant to the human ear, or that our ears can no longer differentiate.

In endgame DBs we tend to compress, but the information is not changed or reduced.
So would it be possible (for example) to play "perfectly" (in the sense that a theoretically won game is actually won) in specific endgame situations while only having access to a limited number of positions?
As in the 4K vs 1K case, where only a few standard positions need to be learned to reach the goal (in combination with a shallow search).

For example, I applied DOE search to the Woldouby position, and the book lines the DOE approach generated were similar to known theory.

We also sometimes see positions where a detailed analysis (based on plain search) does not reveal a winning sequence, although one is there.
Maybe analyzing these positions in a DOE way could provide more clues. And once they are added to the book, at least the program's horizon is extended for these cases.

Last but not least, I also don't know whether large books (covering the opening, middle, and end phases) could be used together with the hash table, and whether this would work (in some critical cases).
Especially when playing against humans/grandmasters who play known openings (though it could be that this approach will be abandoned, as it is not in line with an anti-computer approach).

Consider these all random (not scientifically sound) thoughts. I'm sure that we will eventually surpass human performance in draughts, but it might be that we need to develop some new methods, not yet proven or implemented, to boldly go where no one has gone before...

Bert

MichelG
Posts: 244
Joined: Sun Dec 28, 2003 20:24
Contact:

Re: Search Algorithm

Post by MichelG » Fri May 17, 2013 13:23

BertTuyt wrote:Michel,

do you (as Ed suggested) use a different w-value for the book and non-book player?
I'm not yet sure I will go that way; I need some more testing.
I recognize that Ed's approach is more in line with the original DOE-principle.

However, there are several reasons why I (maybe) don't want to restrict the book player to the single best reply:
- Sometimes several moves share the same "best" value, and I want to evaluate those lines.
- I want to introduce some randomness in the book play, so I also allow lines with a slightly lower score (on the order of a centi-man).
- Book lines with a somewhat lower initial score can become more promising when searched deeper. With a proper DOE value the book player will also examine these lines, and due to the negative initial impact on the priority function, less effort will be spent on them anyway.

In my case (with a w-value of 1, so far for both the book and non-book player), lines with an initial score difference of 5 centi-man will only be examined once the main line is 5 ply deeper.

So (although the jury is still out), I may end up using different w-values (book player 5, non-book player 1) to avoid the explosion trap Ed rightly mentioned.

How do you handle this in Dragon so far, and what is your experience?

Bert
In principle, for the book player, I do restrict it to the best move only.

However, I agree there should also be some randomness in the book, and rather than having some W-value set for the non-book player, I use another method. My book will be a combination of 2 methods:

1) I want the randomness in play to be in the first few moves of the game. There is little point in creating a variation at move 20, since you will probably never get there anyway. So at the root of the tree I allow (nearly) all moves, while deeper in the tree it is restricted to 1.

2) A pure book, with the book player restricted to the best move only, while the non-book player has some value of W.

By interleaving the two methods you get a combination of a wide but shallow book and a deep but restricted book.
BertTuyt wrote:
Michel, I agree with you that without huge computer capacity a 10^9 book is a lifetime activity (then again, I hope to spend the rest of my life on this computer draughts subject anyway).
My open question (and I don't have an answer either) is whether it would be possible to (significantly) improve the program's performance by injecting all kinds of non-opening (critical) positions, and DOE-like expansions of those positions, into the book.

Maybe I can formulate my question in a different way. In music encoding (MP3) one can throw away much information that is apparently redundant to the human ear, or that our ears can no longer differentiate.

In endgame DBs we tend to compress, but the information is not changed or reduced.
So would it be possible (for example) to play "perfectly" (in the sense that a theoretically won game is actually won) in specific endgame situations while only having access to a limited number of positions?
As in the 4K vs 1K case, where only a few standard positions need to be learned to reach the goal (in combination with a shallow search).

For example, I applied DOE search to the Woldouby position, and the book lines the DOE approach generated were similar to known theory.

We also sometimes see positions where a detailed analysis (based on plain search) does not reveal a winning sequence, although one is there.
Maybe analyzing these positions in a DOE way could provide more clues. And once they are added to the book, at least the program's horizon is extended for these cases.

Last but not least, I also don't know whether large books (covering the opening, middle, and end phases) could be used together with the hash table, and whether this would work (in some critical cases).
Especially when playing against humans/grandmasters who play known openings (though it could be that this approach will be abandoned, as it is not in line with an anti-computer approach).

Consider these all random (not scientifically sound) thoughts. I'm sure that we will eventually surpass human performance in draughts, but it might be that we need to develop some new methods, not yet proven or implemented, to boldly go where no one has gone before...

Bert
I think this method will eventually work, far in the future. I have looked into these kinds of possibilities before, but I have come to the conclusion that current computer hardware is insufficient to make any significant impact for now. You need on the order of a trillion (10^12) positions to do any good, and then there is the problem of generating such a large precomputed table.

As for surpassing humans, maybe some day someone will find a way to make MCTS or some other new algorithm work for draughts :-)

Michel

Rein Halbersma
Posts: 1722
Joined: Wed Apr 14, 2004 16:04
Contact:

Re: Search Algorithm

Post by Rein Halbersma » Fri May 17, 2013 14:18

BertTuyt wrote: For example, I applied DOE search to the Woldouby position, and the book lines the DOE approach generated were similar to known theory.

We also sometimes see positions where a detailed analysis (based on plain search) does not reveal a winning sequence, although one is there.
Maybe analyzing these positions in a DOE way could provide more clues. And once they are added to the book, at least the program's horizon is extended for these cases.
I think a "middle game book" of Woldouby-type positions would be quite valuable. One of the nice things about TurboDambase is that you can turn on the "book window", which shows how often a position has occurred in its databases. When replaying a game, you often drop out of book somewhere between moves 10-20, but in the late middle game (11 vs 11, or 10 vs 10) you find certain classical positions over and over again. E.g. the DeHaas-Fabre and Woldouby types of positions have each occurred > 100 times. That means it is also very likely that these positions will enter the search in late classical middle-game play. So I would like to systematically extract every position from TurboDambase that has occurred more than N times (N = 10, or whatever) and apply DOE to that.

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Sat May 18, 2013 12:36

In principle, for the book player, I do restrict it to the best move only.

However, I agree there should also be some randomness in the book, and rather than having some W-value set for the non-book player, I use another method. My book will be a combination of 2 methods:

1) I want the randomness in play to be in the first few moves of the game. There is little point in creating a variation at move 20, since you will probably never get there anyway. So at the root of the tree I allow (nearly) all moves, while deeper in the tree it is restricted to 1.

2) A pure book, with the book player restricted to the best move only, while the non-book player has some value of W.

By interleaving the two methods you get a combination of a wide but shallow book and a deep but restricted book.
Interesting approach. I hadn't thought of it, but basically one could introduce a W-value for the book player in the first X ply (from the root) and then switch to best move only, whereas the W-value for the non-book player stays constant.
I think this method will eventually work, far in the future. I have looked into these kinds of possibilities before, but I have come to the conclusion that current computer hardware is insufficient to make any significant impact for now. You need on the order of a trillion (10^12) positions to do any good, and then there is the problem of generating such a large precomputed table.

As for surpassing humans, maybe some day someone will find a way to make MCTS or some other new algorithm work for draughts :-)
Michel, how did you derive the value of 10^12? Are there some underlying assumptions that justify this claim?
I'm now working on extracting positions from the Turbodambase ( :) I know it can be done via PDN export, but direct access is more fun).
I will soon post (if I manage to "understand" the new TB 2.1 format) some numbers, which might provide some clues about unique and/or practical positions in relation to a theoretical limit based on combinatorial calculations.

I also studied MCTS, and I found a reference somewhere on the internet that it apparently (?) seemed to work for detecting capture combinations in checkers. It certainly introduced a breakthrough in Go, but for whatever reason it was not successful in chess (and as draughts belongs to the chess class of games, I think a similar conclusion applies so far).
It is on my list of things to study when I retire, but in the short term I have other things I want to work out in computer draughts.
Anyway never a dull moment...

Bert

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Sat May 18, 2013 13:20

In the past I wrote a .doc in which I "explained" the number of theoretical positions. I'm not sure whether it is self-explanatory, or whether it is free of flaws.
Somewhere in my archive I have a "large-number" program which calculates (and outputs) the exact number. Maybe I can find it one of these days and post some results.

Bert
Attachments
Draughts Position Count.doc (82.5 KiB)

ildjarn
Posts: 1537
Joined: Tue Aug 22, 2006 15:38
Real name: Joost de Heer

Re: Search Algorithm

Post by ildjarn » Mon May 20, 2013 11:33

BertTuyt wrote:In the past I wrote a .doc in which I "explained" the number of theoretical positions. I'm not sure whether it is self-explanatory, or whether it is free of flaws.
Somewhere in my archive I have a "large-number" program which calculates (and outputs) the exact number. Maybe I can find it one of these days and post some results.

Bert
There are some small flaws: in the 'draughts man position' table, the second area should be 6-45 and the third area should be 46-50. And the general case is much too high for more than 5 pieces, because areas 1 and 3 can each contain at most 5 pieces.

Does 'side to move' belong to 'position'?
Let the machines starve, you fools...
Let them perish!
Strike them dead -- the machines!

MichelG
Posts: 244
Joined: Sun Dec 28, 2003 20:24
Contact:

Re: Search Algorithm

Post by MichelG » Tue May 21, 2013 13:25

BertTuyt wrote:
Michel, how did you derive the value of 10^12? Are there some underlying assumptions that justify this claim?
I'm now working on extracting positions from the Turbodambase ( :) I know it can be done via PDN export, but direct access is more fun).
I will soon post (if I manage to "understand" the new TB 2.1 format) some numbers, which might provide some clues about unique and/or practical positions in relation to a theoretical limit based on combinatorial calculations.
Here is the calculation for using a huge precomputed in-game hash table:

First I took a random sample of 40 million leaf-node positions and put them in a hash table. Then I let the program count how many times this hash table is hit; my hit rate is about 6x10^-4. It depends a bit on the number of pieces on the board.

In other words: if you have a precomputed hash table of 40 million random positions, it will be hit once every ~1600 evaluations. You can attempt to extrapolate from this: if you assume the hit rate increases linearly with the table's size, you would need a table of 0.5 * 1600 * 40 million = 32 billion positions to get a 50% hit rate.
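The extrapolation above, spelled out in a few lines (the linear-growth assumption is the stated simplification; the variable names are mine):

```python
sample_size = 40_000_000    # random leaf positions stored in the test table
evals_per_hit = 1600        # observed: roughly one table hit per 1600 evaluations
target_hit_rate = 0.5

# Linear extrapolation of hit rate with table size (optimistic: frequent
# positions are over-represented even in a small sample, so the real
# requirement is larger).
needed = target_hit_rate * evals_per_hit * sample_size
print(f"{needed:.1e} positions")   # 3.2e+10 positions, i.e. 32 billion
```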

Of course, the hit rate will not increase linearly, as some positions are more frequent than others, and my small statistical sample is more likely to contain the likely positions.

My guess would be that a 32-billion-position hash table would be the bare minimum to have some impact on the search, and you probably need much more. Such huge tables are conceivable though; memory sizes are increasing and SSDs might help.

Michel

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Fri May 24, 2013 14:09

Thanks for the remarks I received regarding the doc I posted (Draughts position count).
There were indeed some small flaws/mistakes in the initial paper.
Based on the formulas I have made an Excel calculation to visualize the position count as a function of the number of men X (X white men vs X black men).
The vertical axis represents the LOG10 of the number of positions.

See below graph:
Comb.png (12.91 KiB)
The blue line (series 1) represents the simple position-count approach: count = 2 * COMB(45, X)^2.
I have included the factor 2 (for white to move and black to move, although for a database based on mirroring only the WTM positions are required).
The red line represents the more complex/actual calculation (the math is included in the attached Excel file; I'm not 100% sure this one is free of flaws :) ).
The green line is the number of unique positions for N white men x N black men based on the MEGA1997.pd Turbodambase (approximately 168K games).

Due to the initial internal symmetry, the actual number of 20 x 20 positions (men only, no kings) is an order of magnitude smaller. Most likely captures break this internal symmetry, so that the actual number of positions moves "closer" to the theoretical maximum.
I expect that the number of "practical" positions is again an order of magnitude smaller than the theoretical value.
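The blue-line formula is easy to evaluate with Python's `math.comb`. This is a sketch of the simple upper bound only; it does not include the overlap corrections of the red-line calculation:

```python
from math import comb, log10

def simple_count(x):
    """Blue-line upper bound: x white and x black men, each side choosing
    from its 45 allowed squares (own promotion row excluded), times 2 for
    the side to move. Overlap between the two sides' squares is ignored,
    so this overcounts."""
    return 2 * comb(45, x) ** 2

# Log10 scale, as on the graph's vertical axis:
for x in (5, 10, 15, 20):
    print(x, round(log10(simple_count(x)), 1))
```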

Some more background will follow in a next post.

Bert
Attachments
Comb.xls (52.5 KiB)
Last edited by BertTuyt on Fri May 24, 2013 15:20, edited 2 times in total.

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Fri May 24, 2013 15:09

To test the hypothesis that the actual number of 20 x 20 positions (men only) is significantly smaller than the theoretical maximum (due to the initial internal symmetry), I wrote a program to calculate the unique 20x20 positions.
Although the results have not yet been independently verified, I wanted to share them already (see the output below).
In general, if no flaws are detected, the number of unique 20x20 positions is 3,038,453,122.
I still expect that the "practical" value is a factor of 100 lower, but I recognize that I need "more evidence" to justify and validate this claim.

Below is the actual program output:
D = depth, N = total number of (unique) positions so far, A = number of positions added at this depth.

Code:

0.00 Perft, Memory Empty
 0.00 Perft, Memory allocated = 420000000
 0.00 Perft, Init Start
 2.56 Perft, Init End
 2.56 Perft, Add Nodes Start
 2.56 Perft, DNA = 0 1 1
 5.66 Perft, DNA = 1 10 9
 8.77 Perft, DNA = 2 91 81
11.97 Perft, DNA = 3 523 432
15.08 Perft, DNA = 4 2327 1804
18.20 Perft, DNA = 5 8666 6339
21.28 Perft, DNA = 6 29099 20433
24.42 Perft, DNA = 7 86625 57526
27.70 Perft, DNA = 8 234627 148002
31.08 Perft, DNA = 9 586903 352276
34.67 Perft, DNA = 10 1365441 778538
38.72 Perft, DNA = 11 2949357 1583916
43.58 Perft, DNA = 12 5976283 3026926
49.94 Perft, DNA = 13 11457428 5481145
58.56 Perft, DNA = 14 20867763 9410335
70.88 Perft, DNA = 15 36053881 15186118
88.10 Perft, DNA = 16 59309084 23255203
112.69 Perft, DNA = 17 93426298 34117214
146.14 Perft, DNA = 18 141245493 47819195
191.88 Perft, DNA = 19 205413118 64167625
250.88 Perft, DNA = 20 288301454 82888336
326.67 Perft, DNA = 21 391962122 103660668
419.22 Perft, DNA = 22 517046316 125084194
531.30 Perft, DNA = 23 663285274 146238958
659.68 Perft, DNA = 24 827051538 163766264
805.75 Perft, DNA = 25 1005553035 178501497
966.39 Perft, DNA = 26 1195033147 189480112
1143.61 Perft, DNA = 27 1390895634 195862487
1327.97 Perft, DNA = 28 1587227872 196332238
1509.81 Perft, DNA = 29 1779755370 192527498
1678.76 Perft, DNA = 30 1964150744 184395374
1836.04 Perft, DNA = 31 2137222512 173071768
1977.41 Perft, DNA = 32 2294019435 156796923
2104.15 Perft, DNA = 33 2433573374 139553939
2213.69 Perft, DNA = 34 2553897630 120324256
2308.32 Perft, DNA = 35 2656214933 102317303
2387.08 Perft, DNA = 36 2740714598 84499665
2452.38 Perft, DNA = 37 2809485368 68770770
2505.15 Perft, DNA = 38 2864260310 54774942
2547.84 Perft, DNA = 39 2907495242 43234932
2581.66 Perft, DNA = 40 2940851183 33355941
2608.53 Perft, DNA = 41 2965981021 25129838
2629.29 Perft, DNA = 42 2984734794 18753773
2645.79 Perft, DNA = 43 2998858755 14123961
2658.88 Perft, DNA = 44 3009237308 10378553
2669.43 Perft, DNA = 45 3016863761 7626453
2677.98 Perft, DNA = 46 3022380401 5516640
2685.15 Perft, DNA = 47 3026467460 4087059
2691.27 Perft, DNA = 48 3029531829 3064369
2696.68 Perft, DNA = 49 3031872845 2341016
2701.54 Perft, DNA = 50 3033633906 1761061
2706.04 Perft, DNA = 51 3034954404 1320498
2710.19 Perft, DNA = 52 3035955833 1001429
2714.11 Perft, DNA = 53 3036692318 736485
2717.82 Perft, DNA = 54 3037210733 518415
2721.39 Perft, DNA = 55 3037568791 358058
2724.83 Perft, DNA = 56 3037818895 250104
2728.20 Perft, DNA = 57 3037998291 179396
2731.49 Perft, DNA = 58 3038130553 132262
2734.74 Perft, DNA = 59 3038232745 102192
2737.95 Perft, DNA = 60 3038307057 74312
2741.20 Perft, DNA = 61 3038358104 51047
2744.42 Perft, DNA = 62 3038391385 33281
2747.59 Perft, DNA = 63 3038413596 22211
2750.75 Perft, DNA = 64 3038427684 14088
2753.99 Perft, DNA = 65 3038436471 8787
2757.16 Perft, DNA = 66 3038442556 6085
2760.35 Perft, DNA = 67 3038446697 4141
2763.52 Perft, DNA = 68 3038449491 2794
2766.68 Perft, DNA = 69 3038451073 1582
2769.78 Perft, DNA = 70 3038451912 839
2772.95 Perft, DNA = 71 3038452407 495
2776.14 Perft, DNA = 72 3038452686 279
2779.30 Perft, DNA = 73 3038452875 189
2782.45 Perft, DNA = 74 3038453017 142
2785.59 Perft, DNA = 75 3038453088 71
2788.74 Perft, DNA = 76 3038453116 28
2791.87 Perft, DNA = 77 3038453122 6
2795.04 Perft, DNA = 78 3038453122 0



Bert
