Search Algorithm

Discussion about the development of draughts in the age of computers and the Internet.
BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Wed Jun 19, 2013 23:33

Michel, interesting that Dragon is also moving forward.
I guess that Damage is sometimes too optimistic with respect to the selective search, but so far it seems okish :).
If you have some new interesting thoughts, I'm looking forward to discussing them with you.

This weekend I downloaded a trial version of Visual Studio 2012, which motivated me to (re)do some profiling tests.
This gave me some clues about how and where to optimize.
One of the things I want to change is to migrate from a move-based MoveGenerator towards a position-based routine.
I think Ed is doing the same in KingsRow.

As a side effect, you then need "non-trivial" calculations for an incremental update of a Zobrist-based hash function.
For this reason I have now implemented an "on-the-fly" hash function based on CityHash.
I think Ed uses a Jenkins-based hash (like lookup2, lookup3, or SpookyHash).
Another idea, which I haven't worked out so far, is to develop a hash based on the internal Intel CRC32 instruction (which seems to be fast and could be used as a fast inline hash function).
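To make the CRC32 idea concrete, here is a minimal, untested sketch of how such a hash over the bitboards could look, assuming SSE4.2 and the _mm_crc32_u64 intrinsic from <nmmintrin.h>. The seed constants and the way the two CRC chains are combined are arbitrary choices of mine, not a vetted hash design.

Code: Select all

// Untested sketch: a 64-bit position hash built from the SSE4.2 CRC32 instruction.
// The seeds and the final combination are arbitrary mixing choices.
#include <nmmintrin.h>   // _mm_crc32_u64 (requires SSE4.2)
#include <cstdint>

inline uint64_t HashCrc32(uint64_t bbWhiteMan, uint64_t bbBlackMan,
                          uint64_t bbWhiteKing, uint64_t bbBlackKing)
{
    // _mm_crc32_u64 yields a 32-bit CRC (in a 64-bit register), so two
    // independent CRC chains are combined into one 64-bit hash value.
    uint64_t lo = _mm_crc32_u64(0x12345678ULL, bbWhiteMan);
    lo = _mm_crc32_u64(lo, bbWhiteKing);

    uint64_t hi = _mm_crc32_u64(0x9ABCDEF0ULL, bbBlackMan);
    hi = _mm_crc32_u64(hi, bbBlackKing);

    return (hi << 32) | lo;
}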

Next thing to do is to re-engineer the generator.
To avoid huge bugs, I intend to run a 158-game match after every major iteration.

For me, the 20% increase in speed was measured for the initial position and a 22-ply search. My goal is to reach 3 MNodes/sec (1 core; total search time will be around 25 seconds).
As usual I will start with the non-parallel version, as single-core bugs seem to be more reproducible.

The profile also indicated one specific evaluation-function call, related to a specific pattern class, which takes far more time than I initially anticipated...
So work to do, and never a dull moment :)

Bert

MichelG
Posts: 244
Joined: Sun Dec 28, 2003 20:24

Re: Search Algorithm

Post by MichelG » Thu Jun 20, 2013 12:45

BertTuyt wrote:Michel, interesting that Dragon is also moving forward.
I guess that Damage is sometimes too optimistic with respect to the selective search, but so far it seems okish :).
If you have some new interesting thoughts, I'm looking forward to discussing them with you.

This weekend I downloaded a trial version of Visual Studio 2012, which motivated me to (re)do some profiling tests.
This gave me some clues about how and where to optimize.
One of the things I want to change is to migrate from a move-based MoveGenerator towards a position-based routine.
I think Ed is doing the same in KingsRow.

As a side effect, you then need "non-trivial" calculations for an incremental update of a Zobrist-based hash function.
For this reason I have now implemented an "on-the-fly" hash function based on CityHash.
I think Ed uses a Jenkins-based hash (like lookup2, lookup3, or SpookyHash).
Another idea, which I haven't worked out so far, is to develop a hash based on the internal Intel CRC32 instruction (which seems to be fast and could be used as a fast inline hash function).

Bert
Dragon is now fully running on a position-based generator (hence the recent strength improvement).

It still uses the good old Zobrist hash. I update it incrementally though, by reconstructing the change in position (beforemove.white XOR aftermove.white). Let me know if the CRC32 instruction works faster :-).
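For those who haven't seen the trick, a minimal sketch of the idea (not my actual code; the table layout zobristKey[piece][square] and the bit-scan intrinsic are just one way to do it):

Code: Select all

// Sketch only: incremental Zobrist update by XOR-ing the keys of every
// square whose content changed between the position before and after the move.
#include <cstdint>

static uint64_t zobristKey[4][64];   // assumed: [piece type][square], filled with random 64-bit keys at startup

inline uint64_t UpdateZobrist(uint64_t hash, uint64_t before, uint64_t after, int pieceType)
{
    uint64_t changed = before ^ after;        // squares where this bitboard changed
    while (changed) {
        int sq = __builtin_ctzll(changed);    // lowest changed square (GCC/Clang; use _BitScanForward64 on MSVC)
        hash ^= zobristKey[pieceType][sq];    // toggle that square's key in or out of the hash
        changed &= changed - 1;               // clear the lowest set bit
    }
    return hash;
}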

How do you measure node speed? Do you include the internal nodes as well? Dragon runs at 1.5-1.6 Mnodes/sec single-core, counting only leaf nodes.

Michel

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Thu Jun 20, 2013 14:10

MichelG wrote:Dragon is now fully running on a position-based generator (hence the recent strength improvement).

It still uses the good old Zobrist hash. I update it incrementally though, by reconstructing the change in position (beforemove.white XOR aftermove.white). Let me know if the CRC32 instruction works faster.

How do you measure node speed? Do you include the internal nodes as well? Dragon runs at 1.5-1.6 Mnodes/sec single-core, counting only leaf nodes.

Michel


Michel, in the end all programs will migrate to a "similar" architecture.
Do you use the 3-bitboard approach (so WhitePiece, BlackPiece and Kings), or is your architecture based on 4 bitboards?

In the case of Damage I count a node as an entry into the SearchN routine.
So whether this node is a leaf or not is not relevant. I do, however, also count the leaf nodes, as I count the calls to the evaluation function. If I remember well, the ratio between total nodes and leaf nodes is a factor of 2, but I will check later today.

I'm now testing to what extent I see strange behavior with the new inline hash function.
I will also try to find a suitable CRC32-based hash function and see if this makes a difference.
Below is the code I used.

Basically I calculate a 64-bit hash for the 2 man bitboards, and a 64-bit hash for the 2 king bitboards (in the current Damage architecture I still use 4 + 1 (for empty) bitboards).
I don't use Uint128Low64(x) and Uint128High64(x) (so in my case the 2 bitboards are not packed into one 128-bit word), but use the 2 separate bitboards.
In the end, I XOR both 64-bit hash values.

Code: Select all

// Hash 128 input bits down to 64 bits of output.
// This is intended to be a reasonably good hash function.
inline uint64 Hash128to64(const uint128& x) {
    // Murmur-inspired hashing.
    const uint64 kMul = 0x9ddfea08eb382d69ULL;

    uint64 a = (Uint128Low64(x) ^ Uint128High64(x)) * kMul;
    a ^= (a >> 47);

    uint64 b = (Uint128High64(x) ^ a) * kMul;
    b ^= (b >> 47);

    b *= kMul;
    return b;
}
I haven't done timing tests and/or statistical tests so far; I just wanted to see if this works...
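For clarity, this is roughly how the description above translates into code — only a sketch, with a hypothetical two-argument variant of the function instead of the uint128 wrapper (same uint64 typedef as in the snippet above):

Code: Select all

// Sketch of the call pattern described above (hypothetical two-argument
// variant of Hash128to64; the real CityHash code packs both words into a uint128).
inline uint64 Hash2x64(uint64 lo, uint64 hi) {
    // same Murmur-inspired mix as Hash128to64, on two plain 64-bit words
    const uint64 kMul = 0x9ddfea08eb382d69ULL;
    uint64 a = (lo ^ hi) * kMul;
    a ^= (a >> 47);
    uint64 b = (hi ^ a) * kMul;
    b ^= (b >> 47);
    return b * kMul;
}

// Position hash: one 64-bit hash over the two man bitboards, one over the
// two king bitboards, XOR-ed together.
inline uint64 HashPosition(uint64 bbWhiteMan, uint64 bbBlackMan,
                           uint64 bbWhiteKing, uint64 bbBlackKing) {
    return Hash2x64(bbWhiteMan, bbBlackMan) ^ Hash2x64(bbWhiteKing, bbBlackKing);
}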

Did you consider using the (new) Intel 128-bit and 256-bit instructions?
So far I use plain 64-bit, and so far I have no idea whether these larger bitboards could yield an advantage.

Bert

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Thu Jun 20, 2013 14:19

MichelG wrote:My plan: release a new version of Dragon in a few weeks, and then focus on improving the tree search in the next couple of months. Dragon still uses plain old alpha-beta search with limited selectivity...
Michel, do you consider MTD-f?
For Damage I will stick (for now :) ) with alpha-beta.
But, as I have shared in this forum, with extensions like ETC, IID, MCP, FHR and LMR.
My challenge is not to find more additions, but to fine-tune these optimizations better, as sometimes the baby is thrown out with the bathwater ;).
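As an aside, for readers who don't know these abbreviations: LMR (late move reductions) is probably the easiest one to show in a few lines. Below is a bare-bones sketch of the idea only, not the Damage implementation; the thresholds and the helper types (Position, Move, MoveList, GenerateMoves, Evaluate) are placeholders.

Code: Select all

// Bare-bones LMR sketch (illustrative only): late quiet moves at sufficient
// depth are first searched with a reduced depth and a zero window; only if
// they unexpectedly beat alpha are they re-searched at full depth.
int SearchN(Position& pos, int depth, int alpha, int beta)
{
    if (depth <= 0)
        return Evaluate(pos);                       // assumed leaf evaluation

    MoveList moves = GenerateMoves(pos);            // assumed move generator
    int moveCount = 0;

    for (const Move& m : moves) {
        Position next = pos;
        next.DoMove(m);                             // assumed make-move helper

        int score;
        bool reduce = (moveCount >= 3) && (depth >= 3) && !m.isCapture;
        if (reduce) {
            // late quiet move: reduced-depth, zero-window probe first
            score = -SearchN(next, depth - 2, -alpha - 1, -alpha);
            if (score > alpha)                      // probe failed high: full re-search
                score = -SearchN(next, depth - 1, -beta, -alpha);
        } else {
            score = -SearchN(next, depth - 1, -beta, -alpha);
        }

        if (score >= beta)
            return score;                           // fail-high cutoff
        if (score > alpha)
            alpha = score;
        ++moveCount;
    }
    return alpha;
}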

What is the "selectivity" you use in Dragon, and which extensions are you considering?

Bert

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Thu Jun 20, 2013 20:59

MichelG wrote:Dragon runs at 1.5-1.6 Mnodes/sec single-core, counting only leaf nodes.
Damage runs at around 1.2 MNodes/sec (only leaf nodes) on an Intel i940 running at 2.93 GHz.

Bert

MichelG
Posts: 244
Joined: Sun Dec 28, 2003 20:24

Re: Search Algorithm

Post by MichelG » Thu Jun 20, 2013 23:39

BertTuyt wrote:Did you consider using the (new) Intel 128-bit and 256-bit instructions?
So far I use plain 64-bit, and so far I have no idea whether these larger bitboards could yield an advantage.
...
Michel, do you consider MTD-f?
For Damage I will stick (for now :) ) with alpha-beta.
But, as I have shared in this forum, with extensions like ETC, IID, MCP, FHR and LMR.
My challenge is not to find more additions, but to fine-tune these optimizations better, as sometimes the baby is thrown out with the bathwater ;).

What is the "selectivity" you use in Dragon, and which extensions are you considering?

Bert
I might use the 128-bit instructions, but I have some reservations about using too many intrinsics, because they make the program harder to port to other architectures in the future. At the moment I just use bitscan and popcount.

I tried MTD-f and PVS search and several other enhancements. These reduce the tree size, but strangely they don't improve the performance (except ETC). My one golden rule is that any 'improvement' must be proven by winning a match with a 3-sigma statistical advantage, and none of the ideas I tried beat the current algorithm.
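For concreteness, this is roughly how such a 3-sigma check can be done — a sketch under simple assumptions (score each game 1, 0.5 or 0 and require the mean to differ from 0.5 by more than three standard errors), not necessarily the exact procedure I run; the example numbers are made up.

Code: Select all

// Sketch of a 3-sigma match test: under the null hypothesis of equal strength
// the expected mean score per game is 0.5; the z-value is the observed
// deviation divided by the standard error of the mean.
#include <cmath>
#include <cstdio>

double MatchZScore(int wins, int draws, int losses)
{
    const double n    = wins + draws + losses;
    const double mean = (wins + 0.5 * draws) / n;
    // sample variance of the per-game score around the observed mean
    const double var  = (wins   * (1.0 - mean) * (1.0 - mean) +
                         draws  * (0.5 - mean) * (0.5 - mean) +
                         losses * (0.0 - mean) * (0.0 - mean)) / (n - 1.0);
    return (mean - 0.5) / std::sqrt(var / n);
}

int main()
{
    // made-up example: 30 wins, 110 draws, 20 losses in a 160-game match
    std::printf("z = %.2f (accept only if z > 3)\n", MatchZScore(30, 110, 20));
    return 0;
}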

My guess is that the current selectivity is actually quite good already, and it is going to take some effort to improve on it. But I will re-examine the optimizations in more detail in the following months.

One thing I am working on now is cluster-based testing. Running tests on 2-3 computers simultaneously will help fine-tune new ideas.

Michel

Krzysztof Grzelak
Posts: 1368
Joined: Thu Jun 20, 2013 17:16
Real name: Krzysztof Grzelak

Re: Search Algorithm

Post by Krzysztof Grzelak » Sun Jun 23, 2013 15:28

Hi Michel.

When will the latest version of the program Dragon be released?

-- Krzysztof

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Wed Jun 26, 2013 19:04

Herewith another DXP file.
60 games played: 3 Damage wins, 2 Kingsrow wins and 55 draws (the "unknown" results in the file are draws).

The only purpose of these (short) matches is to check whether I introduced fatal bugs with the implemented architecture changes...
So there is no change in the eval, and as a result some typical Damage errors re-occur :)

Bert
Attachments
dxpgames.pdn
(60.28 KiB) Downloaded 358 times

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Wed Jul 03, 2013 23:24

As I have posted here before, one of the bugs I tried to resolve was related to the outpost evaluation function.
This part of the program is relatively old (as I started with this challenge somewhere in the seventies).
As computers were rather slow back then, I tried to capture some of the dynamics in a "static" function.
Especially those cases where there are more attackers than defenders, but a brilliant ( :( ) defence strategy is available ... :?

With the result that the software is hardly readable anymore (> 1000 lines of code, and as a consequence some recurring losses against Kingsrow).
And a bug which is impossible to trace through the jungle of if-then-else...
I think that Gerard, with Damy, is more consistent about the split between the dynamics (which should be covered by the search) and the "static" part.

So I'm now rebuilding the outpost evaluation from scratch.
I started with the bare minimum, as depicted in the Damage code below.
Rather primitive, and I'm sure one has to add more code (for example, one should also test for the attack-defend "race conditions", and some sub-patterns, and ..., and ..., and ...).

Anyway, after around 30 games against Kingsrow, Damage only lost 1 game which was most likely outpost related (so maybe an Elo drop of 10-15 max).
This is not unexpected, as nowadays the search is able to deal with many of the general outpost details, and the exceptions, and the exceptions on exceptions, ...

I'm curious to learn whether other programmers observe the same: that much of the evaluation code can be removed (especially the parts dealing with all kinds of dynamics, introduced and implemented in an era when a 6-ply search was really the maximum, but now questionable or even counterproductive) without a huge Elo and/or performance penalty.

Anyway, my approach is now to add additional code only if, based on actual game-play, the current (bare-minimum) implementation is too simplistic and yields critical positions.

Bert

Code: Select all

int CEvaluation64::Eval64_OutPost_White24(PTSTACK pTDepth) {

	int iXAttack, iXDefend ;
	int iScore ;
	BITBOARD bbWhiteMan, bbBlackMan, bbEmpty ;
		
	
	iScore = 0 ;

	bbWhiteMan	= pTDepth->bbField[BB_WHITEMAN] ;
	bbBlackMan	= pTDepth->bbField[BB_BLACKMAN] ;
	bbEmpty		= pTDepth->bbField[BB_EMPTY] ;

	if ( bbWhiteMan & DFLD24 ) {		// Position 24 WHITE

		iXAttack	= (int)POPCNT64( bbBlackMan & ( DFLD3 | DFLD4 | DFLD5 | DFLD9 | DFLD10 | DFLD14 ) ) ;
		iXDefend	= (int)POPCNT64( bbWhiteMan & ( DFLD35 | DFLD40 | DFLD44 | DFLD45 | DFLD49 | DFLD50 ) ) ;

		if ( iXAttack > iXDefend )		// Critical Outpost !
			iScore = -30 ;

		else {

			if ( ALLPIECE( bbBlackMan, ( DFLD2 | DFLD8 | DFLD13 ) ) )	// Exchange Outpost
				iScore = 10 ;
			else
				iScore = 20 ;
		}
	}

	return ( iScore ) ;
}

TAILLE
Posts: 968
Joined: Thu Apr 26, 2007 18:51
Location: FRANCE

Re: Search Algorithm

Post by TAILLE » Sat Jul 06, 2013 10:47

BertTuyt wrote:As I have posted here before, one of the bugs I tried to resolve was related to the outpost evaluation function.
This part of the program is relatively old (as I started with this challenge somewhere in the seventies).
As computers were rather slow back then, I tried to capture some of the dynamics in a "static" function.
Especially those cases where there are more attackers than defenders, but a brilliant ( :( ) defence strategy is available ... :?

With the result that the software is hardly readable anymore (> 1000 lines of code, and as a consequence some recurring losses against Kingsrow).
And a bug which is impossible to trace through the jungle of if-then-else...
I think that Gerard, with Damy, is more consistent about the split between the dynamics (which should be covered by the search) and the "static" part.

So I'm now rebuilding the outpost evaluation from scratch.
I started with the bare minimum, as depicted in the Damage code below.
Rather primitive, and I'm sure one has to add more code (for example, one should also test for the attack-defend "race conditions", and some sub-patterns, and ..., and ..., and ...).

Anyway, after around 30 games against Kingsrow, Damage only lost 1 game which was most likely outpost related (so maybe an Elo drop of 10-15 max).
This is not unexpected, as nowadays the search is able to deal with many of the general outpost details, and the exceptions, and the exceptions on exceptions, ...

I'm curious to learn whether other programmers observe the same: that much of the evaluation code can be removed (especially the parts dealing with all kinds of dynamics, introduced and implemented in an era when a 6-ply search was really the maximum, but now questionable or even counterproductive) without a huge Elo and/or performance penalty.

Anyway, my approach is now to add additional code only if, based on actual game-play, the current (bare-minimum) implementation is too simplistic and yields critical positions.

Bert
Hi Bert,

I can see that you do not hesitate to rebuild a big part of your program from scratch. I like doing this myself, and I find it fun as a programmer.
Concerning what you can find in an evaluation function, my view has changed (improved?) during the last few years.
As you mentioned, I think you must clearly separate a "dynamic" evaluation from a "static" evaluation, otherwise you create a useless redundancy between the evaluation function and the search.
To go further, you have to define exactly what you expect from an evaluation function, and you will discover different needs and so different evaluation functions (or different evaluation options). As examples:
1) the evaluation you propagate to the root of the tree
2) the evaluation you need to help order the moves
3) the evaluation you need to prune a variant
4) the evaluation you need to handle extensions/reductions
5) any other specific need
In addition, what kind of evaluation (or bound values?) do you need to store in your hash table?
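To illustrate the point, here is a sketch of one possible way to make these different needs explicit (hypothetical names, not Damy's actual code): a single entry point with a "purpose" parameter, so each caller only pays for the work it really needs.

Code: Select all

// Hypothetical illustration of the "different evaluation needs" idea:
// one evaluation entry point, parameterised by its purpose.
struct Position;                 // assumed engine position type

enum class EvalPurpose {
    RootScore,      // full evaluation propagated towards the root
    MoveOrdering,   // cheap estimate, only used to sort the moves
    Pruning,        // bound used to decide whether to prune a variant
    ExtensionHint   // features used to decide on extension/reduction
};

struct EvalResult {
    int  score;     // a value, or a bound, depending on the purpose
    bool isBound;   // true if 'score' is only a lower/upper bound
};

EvalResult Evaluate(const Position& pos, EvalPurpose purpose);   // hypothetical interface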

I think you cannot answer all these questions with one brilliant (but unreadable?) evaluation function, can you?
Gérard

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Sat Jul 06, 2013 23:06

Gerard, I recognize all of what you say.
The good news: work to do, and never a dull moment :)
The bad news: still light years to go before programs play perfectly.

The reason I want to rewrite part of the evaluation code is that the core of the evaluation was already written some 20+ years ago.
And to deal with the horizon effect, I included too much dynamics in the evaluation function.
I think I could patch most of it, but as you said, sometimes rewriting from scratch and finding elegant solutions is much more fun.
So will keep you all posted...

Bert

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Sat Jul 06, 2013 23:17

Herewith the DXP file of the games Damage played (against Kingsrow) with an ultra-basic outpost function (dealing only with square 24 for White and square 27 for Black).
The result from the perspective of Kingsrow: 18 wins, 2 losses and 138 draws (so around a 35 Elo difference).
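For reference, the 35 Elo figure follows from the usual logistic formula relating the score percentage to a rating difference; a tiny sketch of the arithmetic:

Code: Select all

// Elo difference from a match score, using the standard logistic model.
// Kingsrow scored 18 + 138/2 = 87 points out of 158, which gives roughly +35 Elo.
#include <cmath>
#include <cstdio>

double EloFromScore(double wins, double draws, double losses)
{
    double p = (wins + 0.5 * draws) / (wins + draws + losses);   // score fraction
    return 400.0 * std::log10(p / (1.0 - p));
}

int main()
{
    std::printf("Elo difference: %.1f\n", EloFromScore(18, 138, 2));   // ~ +35
    return 0;
}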

The match proved once again that you need to play many games to get meaningful statistics.
In the first 74 games, Damage didn't lose a game, and won 2.
Next to that, there were many consecutive draws (games 1-29 and games 31-67 were all draws).

It is also interesting to see why you need specific code; you only value the implementation when you are confronted with a lack of knowledge.
From an outpost point of view, the games won by Kingsrow are interesting: games 80, 84, 96, 102, 103, 104, 110, 117, 140, 145, 151, 153...

Bert
Attachments
dxpgames.pdn
(160.03 KiB) Downloaded 353 times

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Tue Jul 16, 2013 17:34

I'm now writing a draughts article for a Dutch magazine, one of the things still on my to-do list.
In this article I also compare computer draughts and computer chess techniques.

So one of the things I was wondering: why didn't we (successfully) implement the NULL-move?
At least to my knowledge, no program uses the NULL-move.
But I could be wrong.

At least from my side, I never studied or tested it.
I imagine that zugzwang could be a reason.
But maybe others have different and/or better opinions, and/or practical experience from tests...
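For completeness, the chess-style technique in a few lines — a generic sketch of NULL-move pruning, not taken from any of the programs discussed here; the depth reduction R and the helper names (Position, MakeNullMove, CaptureIsPending, SearchN) are placeholders.

Code: Select all

// Generic NULL-move pruning sketch (chess-style, illustrative only):
// give the opponent a free move; if a reduced-depth, zero-window search
// still fails high, assume the real position is good enough for a cutoff.
// This is exactly the step that zugzwang-like positions can break.
const int R = 2;                                   // typical depth reduction

int NullMoveProbe(Position& pos, int depth, int alpha, int beta)
{
    if (depth > R && !pos.CaptureIsPending()) {    // skip in tactical positions
        Position nullPos = pos;
        nullPos.MakeNullMove();                    // the side to move simply passes
        int score = -SearchN(nullPos, depth - 1 - R, -beta, -beta + 1);
        if (score >= beta)
            return beta;                           // NULL-move cutoff
    }
    return SearchN(pos, depth, alpha, beta);       // normal alpha-beta search
}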

I also want to interview some draughts programmers for this article.
If you want to co-operate just drop me an email at H.Tuyt at chello.nl.

Thanks in advance,

Bert

TAILLE
Posts: 968
Joined: Thu Apr 26, 2007 18:51
Location: FRANCE

Re: Search Algorithm

Post by TAILLE » Tue Jul 16, 2013 18:43

Hi Bert,
BertTuyt wrote:I'm now writing a draughts article for a Dutch magazine, one of the things still on my to-do list.
In this article I also compare computer draughts and computer chess techniques.

So one of the things I was wondering: why didn't we (successfully) implement the NULL-move?
At least to my knowledge, no program uses the NULL-move.
But I could be wrong.

At least from my side, I never studied or tested it.
I imagine that zugzwang could be a reason.
But maybe others have different and/or better opinions, and/or practical experience from tests...

I also want to interview some draughts programmers for this article.
If you want to co-operate just drop me an email at H.Tuyt at chello.nl.

Thanks in advance,

Bert
I suppose the NULL-move is widely used by chess programs because in the great majority of chess positions a NULL-move means a disadvantage for the player making it. Of course, I guess a chess programmer will try to avoid the NULL-move if a zugzwang position can be detected, because in those cases the NULL-move is really an advantage.
In draughts we all know that a NULL-move may be an advantage or a disadvantage depending on the circumstances, and so the use of the NULL-move might be a problem. But that doesn't mean that the NULL-move should not be used in draughts. In Damy I use the NULL-move near the leaves of the tree, in specific situations where I am pretty sure that such a NULL-move is a disadvantage for the player.
For the time being I have identified only two situations where I use the NULL-move:
1) The defense against a potential breakthrough
2) The defense of an outpost

Even though I use the NULL-move only in specific situations, you cannot say that the NULL-move concept is not used in draughts, can you?
Gérard

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Tue Jul 16, 2013 19:41

TAILLE wrote:Even though I use the NULL-move only in specific situations, you cannot say that the NULL-move concept is not used in draughts, can you?
Gerard, thanks for your reply.
I was not aware that you used the NULL-move.
So as you do use it (even if only in specific situations), my statement is obviously not true.
But so far I don't have information on whether the others also use this concept, either in general or, as in your case, only in specific positions.

So please let me know...

Bert
