Search algorithm

Discussion about the development of draughts in the age of computers and the Internet.
Post Reply
TAILLE
Posts: 968
Joined: Thu Apr 26, 2007 18:51
Location: FRANCE

Search algorithm

Post by TAILLE » Thu Mar 01, 2012 12:11

Hi,

Having noted that my answer to Bert below had no relation to the 3-move ballots topic, I prefer to start a new topic here.
Sorry for the duplication.

What kind of algorithm do you use yourself?
TAILLE wrote:
BertTuyt wrote:Gerard regarding your remark:
Though I do not like alpha-beta for the draughts game
Which algorithm do you like best?
And do you have a quantitative comparison with alpha-beta...

Bert
Hi Bert,

After much hesitation between "MTD-f" and "MTD-f best" I eventually reached the following conclusion:
1) For playing a game, "MTD-f best" has my preference (it is definitely my choice for Damy)
2) For solving a problem (i.e. for proving a win), the plain "MTD-f" algorithm might be better

I do not have any comparison with alpha-beta.

I used the alpha-beta procedure about 7-8 years ago, but I changed my mind after studying the MTD-f algorithm and discovering that MTD-f seems to be simply the natural algorithm used by humans!
If a human considers that he has an advantageous position (e.g. an evaluation of +30 for a computer), then when analysing the different continuations he will naturally eliminate all continuations leading to a smaller advantage, and so on, which is exactly the MTD-f algorithm!
To go further, I even think that a human basically uses the MTD-f best algorithm.
Alpha-beta seems a pure computer approach, never used by a human. The theoretical basis of the alpha-beta algorithm is very sound, but it leads to a lot of calculation and thus cannot be efficient. Great progress was made by the use of zero-width windows, which is a big step towards the MTD-f algorithm, isn't it? Even with this progress, humans still don't use that algorithm.
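
For reference, the "only zero-width windows" idea behind MTD-f can be written as a small driver around alpha-beta. This is a minimal C++ sketch; Position, alphaBeta and the score bounds are placeholder names, not Damy's code.

#include <climits>

struct Position { /* engine-specific board state (placeholder) */ };

// Assumed to exist: a (transposition-table backed) alpha-beta search that
// returns a bound on the minimax value of pos at the given depth.
int alphaBeta(Position& pos, int alpha, int beta, int depth);

// MTD(f): repeat zero-width searches around the current guess g until the
// lower and upper bounds on the value meet.
int mtdf(Position& pos, int firstGuess, int depth) {
    int g = firstGuess;
    int lower = INT_MIN / 2, upper = INT_MAX / 2;
    while (lower < upper) {
        int beta = (g == lower) ? g + 1 : g;       // zero-width window (beta-1, beta)
        g = alphaBeta(pos, beta - 1, beta, depth);
        if (g < beta) upper = g;                   // fail-low: value <= g
        else          lower = g;                   // fail-high: value >= g
    }
    return g;                                      // bounds have met: the minimax value
}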
I cannot prove that alpha-beta is less efficient than the MTD-f best approach (and it is not my goal to prove it!), but my feeling is that the human approach should be the best one.
I always try to imitate the human approach, and in this context I added some major improvements (?) to the MTD-f best algorithm itself, like the "virtual depth" I mentioned earlier: along a given variation a human will typically spend a lot of time on the presumed best move and far less time on the other moves. That is exactly what I do in Damy with my "virtual depth".
Gérard

Rein Halbersma
Posts: 1722
Joined: Wed Apr 14, 2004 16:04
Contact:

Re: Search algorithm

Post by Rein Halbersma » Thu Mar 01, 2012 12:45

TAILLE wrote:Hi,

Having noted that my answer to Bert below had no relation to the 3-move ballots topic, I prefer to start a new topic here.
Sorry for the duplication.

What kind of algorithm do you use yourself?
TAILLE wrote: I used the alpha-beta procedure about 7-8 years ago, but I changed my mind after studying the MTD-f algorithm and discovering that MTD-f seems to be simply the natural algorithm used by humans!
Hi Gérard,

I currently use PVS but I have been experimenting with how to call the zero-window at the root. I don't agree that MTD-f is more natural or human than MTD-best or PVS.

Both methods can be looked at through a Bayesian framework. E.g., the previous iteration for both MTD and PVS gives you information about which move is best and what game value it can realize. The difference between PVS and MTD is how you update your beliefs in the next iteration after you get a fail-low on the previously best move (so the previously best move and the previously best value no longer match). With PVS you continue to believe the move is best, but with a lower value (so you re-search the same move with a lowered or widened search window). With MTD-f, you continue to believe that the value is still correct, but that there should be a different best move that realizes this value (so you search another move with the same search window).

Bayesian decision theory implies that you should somehow balance the uncertainty about which move is the best and the uncertainty about what the game-value is. E.g. if the fail-low is very hard (e.g. the score of the best move drops more than 0.20) then you could search a new move, but if the fail-low is very soft you adjust the search window and re-search the same move. I don't have an optimal algorithm for this, but I think that either extreme (always PVS or always MTD-f) is not optimal.
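
As an illustration only, that decision rule could look something like the following at the root; the type names, searchMove and the 20-point threshold are placeholders, not any engine's actual code.

#include <vector>

struct Position { /* engine-specific board state (placeholder) */ };
struct Move     { int from, to; };  // placeholder move representation

// Assumed to exist: a search of one root move with window (alpha, beta).
int searchMove(Position& pos, const Move& m, int alpha, int beta, int depth);

const int HARD_FAIL = 20;  // "drops more than 0.20" in the engine's units (assumption)

// After a fail-low on the previously best root move:
// - a hard fail-low keeps the value and tries other moves (the MTD-f reaction);
// - a soft fail-low keeps the move and lowers the window (the PVS reaction).
void onRootFailLow(Position& pos, const Move& prevBest, int prevValue, int newBound,
                   const std::vector<Move>& others, int depth) {
    if (prevValue - newBound > HARD_FAIL) {
        // Hard fail-low: keep the zero-width window at prevValue and look for
        // another move that still realizes that value.
        for (const Move& m : others)
            searchMove(pos, m, prevValue - 1, prevValue, depth);
    } else {
        // Soft fail-low: keep believing in the move and re-search it with a
        // lowered, widened window.
        searchMove(pos, prevBest, newBound - HARD_FAIL, prevValue, depth);
    }
}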

Rein

Rein Halbersma
Posts: 1722
Joined: Wed Apr 14, 2004 16:04
Contact:

Re: Search algorithm

Post by Rein Halbersma » Thu Mar 01, 2012 12:48

TAILLE wrote: I always try to imitate the human approach, and in this context I added some major improvements (?) to the MTD-f best algorithm itself, like the "virtual depth" I mentioned earlier: along a given variation a human will typically spend a lot of time on the presumed best move and far less time on the other moves. That is exactly what I do in Damy with my "virtual depth".
In which way does this differ from LMR? Extending the best move is not very different from reducing all other moves, is it?

TAILLE
Posts: 968
Joined: Thu Apr 26, 2007 18:51
Location: FRANCE

Re: Search algorithm

Post by TAILLE » Thu Mar 01, 2012 15:14

Hi Rein,
Rein Halbersma wrote:Hi Gérard,

I currently use PVS but I have been experimenting with how to call the zero-window at the root. I don't agree that MTD-f is more natural or human than MTD-best or PVS.
Rein
Yes, I agree with you. For me MTD-f-best is the most "natural" approach. PVS seems less natural because I think a human uses more or less a zero-width window even along the PV. Of course you can easily disagree with this, and I guess humans are all different!
Rein Halbersma wrote:Both methods can be looked at through a Bayesian framework. E.g., the previous iteration for both MTD and PVS gives you information about which move is best and what game value it can realize. The difference between PVS and MTD is how you update your beliefs in the next iteration after you get a fail-low on the previously best move (so the previously best move and the previously best value no longer match). With PVS you continue to believe the move is best, but with a lower value (so you re-search the same move with a lowered or widened search window). With MTD-f, you continue to believe that the value is still correct, but that there should be a different best move that realizes this value (so you search another move with the same search window).

Bayesian decision theory implies that you should somehow balance the uncertainty about which move is the best and the uncertainty about what the game-value is. E.g. if the fail-low is very hard (e.g. the score of the best move drops more than 0.20) then you could search a new move, but if the fail-low is very soft you adjust the search window and re-search the same move. I don't have an optimal algorithm for this, but I think that either extreme (always PVS or always MTD-f) is not optimal.

Rein
Here again I agree with you. Damy's implementation is the following:
After I get a fail-low on the test value V for the previous best move, I immediately try the value V-20 on the same move. If this leads again to a fail-low, then I explore the other moves with this V-20 test value. During all this process the previous best move remains the default best move.
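
A minimal sketch of that root scheme in C++, with illustrative names rather than Damy's source:

#include <vector>

struct Position { /* engine-specific board state (placeholder) */ };
struct Move     { int from, to; };  // placeholder move representation

// Assumed to exist: a zero-width test of one root move; a return value
// >= testValue means fail-high, a smaller value means fail-low.
int zeroWindowTest(Position& pos, const Move& m, int testValue, int depth);

// On a fail-low at V the same move is re-tested at V-20 before any other move
// is considered, and the previous best move stays the default answer throughout.
Move searchRoot(Position& pos, const Move& prevBest, const std::vector<Move>& others,
                int V, int depth) {
    Move best = prevBest;                                // default best move
    if (zeroWindowTest(pos, prevBest, V, depth) >= V)
        return best;                                     // no fail-low at V
    int V2 = V - 20;                                     // lower the test value by 20
    if (zeroWindowTest(pos, prevBest, V2, depth) >= V2)
        return best;                                     // previous best still reaches V-20
    for (const Move& m : others)                         // only now try the other moves at V-20
        if (zeroWindowTest(pos, m, V2, depth) >= V2) { best = m; break; }
    return best;                                         // fall back to prevBest otherwise
}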
Gérard

TAILLE
Posts: 968
Joined: Thu Apr 26, 2007 18:51
Location: FRANCE

Re: Search algorithm

Post by TAILLE » Thu Mar 01, 2012 15:31

Rein Halbersma wrote:
TAILLE wrote: I always try to imitate the human approach, and in this context I added some major improvements (?) to the MTD-f best algorithm itself, like the "virtual depth" I mentioned earlier: along a given variation a human will typically spend a lot of time on the presumed best move and far less time on the other moves. That is exactly what I do in Damy with my "virtual depth".
In which way does this differ from LMR? Extending the best move is not very different from reducing all other moves, is it?
Yes Rein, I do not see any difference. In Damy I only have to decide how to reduce the "virtual depth" for every successor. In that sense I use only reductions.
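
For illustration, a reduction-only "virtual depth" can be as simple as the function below; the schedule is invented for the example and is not Damy's actual rule.

// The first (presumed best) successor keeps almost the full depth; later moves
// in the ordering get a progressively smaller virtual depth, as in LMR.
int virtualDepth(int parentDepth, int moveIndexInOrdering) {
    int reduction = 1;                              // presumed best move: smallest reduction
    if (moveIndexInOrdering >= 1) reduction = 2;    // other moves reduced more
    if (moveIndexInOrdering >= 4) reduction = 3;    // late moves reduced even more
    return parentDepth - reduction;
}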
Gérard

naveed01
Posts: 1
Joined: Fri Nov 14, 2014 11:38
Real name: sultan shah

Re: Search algorithm

Post by naveed01 » Fri Nov 14, 2014 11:54

Ed would willingly play more such tournaments, but there is a big problem: the lack of programs, and people do not want to give their programs for testing.
Kingsrow 1.51b - 10 points, second place - Aurora Borealis professional 3.0.8 - 9 points, third place - Damira - 3 points, last place - Plus 500 6.39.g - 2 points. Great congratulations Ed.
Evil

Krzysztof Grzelak
Posts: 1368
Joined: Thu Jun 20, 2013 17:16
Real name: Krzysztof Grzelak

Re: Search algorithm

Post by Krzysztof Grzelak » Fri Nov 14, 2014 13:01

naveed01 wrote:Ed would willingly play more such tournaments, but there is a big problem: the lack of programs, and people do not want to give their programs for testing.
Kingsrow 1.51b - 10 points, second place - Aurora Borealis professional 3.0.8 - 9 points, third place - Damira - 3 points, last place - Plus 500 6.39.g - 2 points. Great congratulations Ed.
Sorry to ask, but which version of Damira do you have?

Post Reply