Computer Draughts 2012

Discussion about development of draughts in the time of computer and Internet.
Rein Halbersma
Posts: 1722
Joined: Wed Apr 14, 2004 16:04
Contact:

Re: Computer Draughts 2012

Post by Rein Halbersma »

BertTuyt wrote:All quiet on the western front?

It seems that all programmers are enjoying a long summer shutdown/vacation.
Hope to hear some great news from you all :)
Anyway, I was working on a new architecture for the engine, implemented some new ideas for the search, and changed some evaluation routines.

As already posted, the new search is quite fast, but so far unstable at times. I found and removed some bugs, so I hope to provide some updates in the near future.

Bert
Hi Bert,

I think I did my fair share by open-sourcing my draughts library (see the other thread on this forum, your input would be appreciated) :-)

How is your GUI coming along? Will you open-source this as well? It would be really great if the GUI accepted any GUIDE-protocol-conforming engine that communicates through stdin/stdout (with the GUI itself doing the socket handling for remote engines).

Other stuff: with some help from Ed, I am thinking of either assembling my own server or having it assembled from loose parts. The parameters will be either a single-socket E5-2620 with 24 GB, or a dual socket with 48 GB of ECC RAM. This would give 6-12 cores (or 12-24 threads!) with 4 GB per core. On sites such as azerty.nl / alternate.nl such a system can be obtained for less than EUR 2000, with assembly costs in the EUR 50-100 range.

regards,

Rein
Ed Gilbert
Posts: 860
Joined: Sat Apr 28, 2007 14:53
Real name: Ed Gilbert
Location: Morristown, NJ USA
Contact:

Re: Computer Draughts 2012

Post by Ed Gilbert »

All quiet on the western front?
I guess New Jersey can be considered the western front. Indeed all is pretty quiet here. The only draughts activity has been the change to the egdb driver that I posted recently.

I crashed on my bicycle in July, breaking my clavicle and tearing a rotator cuff, so have been recovering from that. Went for a run yesterday, my first real exercise in 6 weeks, which felt great.

-- Ed
BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Computer Draughts 2012

Post by BertTuyt »

Ed, sorry to hear you had an accident, hope you recover soon.

With kind regards,

Bert
BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Computer Draughts 2012

Post by BertTuyt »

Rein, the Damage GUI is not open source (yet), but I'm willing to share the source code (and reserve some time to explain the architecture, if needed) with anyone who wants to contribute to further development.

The GUI is MFC based and I use VS 2010 as a development environment.
I'm not a programmer by training, so I'm looking forward to further support from more experienced people.

Bert
MichelG
Posts: 244
Joined: Sun Dec 28, 2003 20:24
Contact:

Re: Computer Draughts 2012

Post by MichelG »

BertTuyt wrote:All quiet on the western front?

It seems that all programmers are enjoying a long summer shutdown/vacation.
Hope to hear some great news from you all :)
Anyway, I was working on a new architecture for the engine, implemented some new ideas for the search, and changed some evaluation routines.

As already posted, the new search is quite fast, but so far unstable at times. I found and removed some bugs, so I hope to provide some updates in the near future.

Bert
For me, I have too many ideas and too little time to implement them :-) First on the list is to complete the conversion to bitboard-based functions, so I can drop some of the array-based code that is slowing down the search.

Second is improving the evaluation function. It must get either faster or better, preferably both :-)

Michel
BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Computer Draughts 2012

Post by BertTuyt »

Although I expected to participate in the open Dutch Computer Draughts Tournament 2012, which takes place tomorrow (Sunday 16 September), I was unfortunately not able to finish version 12 of the Damage engine in time. Playing with an old (unmodified) version is for me neither an option nor fun.
Nevertheless I will miss my programmer friends, and wish them success tomorrow.

During my holiday period I made some major code changes. Normally I think about new architectures/ideas before the summer holiday, implement them in a few weeks, and after the holiday I go into a huge test process.
So basically I'm now in the test (and sometimes disappointment :D ) phase.

My improvement goals and directions are mainly based on test matches I play against Kingsrow. Thanks to Frank Mesander we are able to use the DXP protocol, and thanks to Ed (whom I deeply respect!) we have access to the best draughts program ever made. Only through multiple test matches (based on 158 games) do I learn the weaknesses of my program. Short matches (or a few games) do not make sense, as I often encounter a sequence of many consecutive draws in a row (maximum so far 40) during a match, followed by some losses. You really need statistics here, and I guess that next year I need to switch to 3-move ballots to make more progress. I know Gerard with Damy prefers the thematic approach, which might also work. But as my draughts knowledge is less than Gerard's, I think the volume approach works better for me.
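The remark that "you really need statistics here" can be made concrete. A small sketch (my own illustration, not Damage code) of the standard error of a match score shows why a handful of games cannot separate two close engines:

```cpp
#include <cassert>
#include <cmath>

// Standard error of the mean score of an n-game match, scoring a win as
// 1, a draw as 0.5 and a loss as 0. With mostly draws the per-game
// variance is small, but so is the score difference between two close
// engines, so hundreds of games are needed before a measured difference
// rises above the noise.
double MatchScoreStdError(int wins, int draws, int losses)
{
    int n = wins + draws + losses;
    double mean = (wins + 0.5 * draws) / n;
    double meanSq = (wins * 1.0 + draws * 0.25) / n;  // E[x^2]
    double var = meanSq - mean * mean;                // per-game variance
    return std::sqrt(var / n);
}
```

For example, a 158-game result of 10 wins, 140 draws and 8 losses gives a mean score of about 0.506 with a standard error of roughly 0.013, so an advantage smaller than about one percentage point is indistinguishable from noise even at this match length.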

The main learning points from these matches:

* Kingsrow is extremely good !! :shock:
* Often completely out of the blue, in the late middle game with 7-9 men on each side, the Kingsrow score suddenly explodes (in the wrong direction for Damage).
* Damage is often outsearched by Kingsrow :(
* I really don't see a major weakness in Kingsrow: locks, breakthrough, outposts, it is all very balanced. I guess this is based on the fact that Ed (most likely) used Truus/Flits and also played zillions of games (manually, as the DXP Truus/Flits server was not yet invented), and this way step by step improved the Kingsrow evaluation (I assume the search was already superior, based on the checkers experience).
* I see many flaws in Damage, but they seem to focus on 3 areas: search depth (some positions are hard to evaluate and you need raw brute force to get a clue), long-term breakthrough (a run-away man), and the outpost on 24 for white (and 27 for black) plus 2 specific locks which Damage does not seem to handle properly.

During the holiday I focused on search depth and breakthrough; I didn't touch the outpost and lock routines (yet).
Another change I wanted to make is to make the internal Damage architecture (more or less) agnostic of the coordinate system I use. For a weird reason (but I started with computer draughts in the 1970s) my coordinate 1 starts at white's square 46 (and the black promotion row is 50) :?: . As I also wanted to use the search (and move generator) of Damage in combination with the Horizon evaluation, I wanted to basically hide the coordinate-system details from these routines.

I will (most likely) post the details of the new search routines today, and share details/findings.

Bert
BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Computer Draughts 2012

Post by BertTuyt »

Part I.

The Damage search is based on the PVS framework with zero-window search in non-PV nodes.

Multi-cut pruning (MCP) is applied in non-PV nodes, so in both CUT nodes and ALL nodes, although the method is mainly designed for CUT nodes. I still need to check if it makes a difference when the mechanism is only applied at expected CUT nodes (instead of all non-PV nodes).
The code below is from the Damage search routine.
The 3 parameters needed by MCP are the depth reduction factor (value 3 in Damage), the number of MCP moves examined (value 5), and the number of moves which need to create a beta cutoff before Multi_Cut_Pruning() becomes true (value 2).

Code:

//	Multi Cut Pruning (only in non_PV Nodes )
	if ( pTDepth->bPV == false && iDepth > pEngineData->iMCP_DepthReductionFactor && pTDepth->ipbx >= pEngineData->iMCP_Child_Search )
		if ( Multi_Cut_Pruning<EngID>(iSearchThreadID, iDepth, iAlfa, iBeta, pTDepth, pEngineData, bTurn) ) {

			SearchReturn(pTDepth, pSearchThreadData) ; 
			return ( iBeta ) ;
		}
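For readers unfamiliar with the technique, the idea behind Multi_Cut_Pruning() can be sketched as follows. This is a simplified stand-in, not Damage code: the reduced-depth search is abstracted into a callable, and the parameter values are the ones quoted above.

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Multi-cut pruning sketch: search the first nMoves moves at a reduced
// depth; if at least nCutoffs of them fail high (score >= beta), assume
// the full-depth search would also fail high, and let the caller return
// beta without searching further.
bool MultiCutPruning(const std::vector<int>& moves,
                     const std::function<int(int)>& reducedDepthScore,
                     int beta,
                     int nMoves = 5,    // MCP moves examined (Damage: 5)
                     int nCutoffs = 2)  // cutoffs required   (Damage: 2)
{
    int cutoffs = 0;
    int examined = 0;
    for (int move : moves) {
        if (examined++ == nMoves)
            break;  // only the first nMoves moves are tried
        if (reducedDepthScore(move) >= beta && ++cutoffs == nCutoffs)
            return true;  // enough shallow fail-highs: prune this node
    }
    return false;
}
```

The depth reduction factor (3 in Damage) lives inside the reduced-depth search itself; the guard `iDepth > pEngineData->iMCP_DepthReductionFactor` in the snippet above ensures the reduced depth stays positive.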
After MCP, fail-high reduction (FHR) is applied, also mainly working on CUT nodes. Maybe I need to check if reversing the order of MCP and FHR makes sense.
The basic idea of FHR: if the player to move has a huge material advantage (based only on the evaluation material score of men and kings) which exceeds beta by a margin, then it doesn't make sense to continue that line. Base conditions:
* No capture possible from the perspective of the player to move (as this could be part of a combination sequence which the opponent has trapped him into).
* No breakthrough possible for the opponent (a free running man who can promote to king), which would create a completely different material-value balance (maybe the opponent sacrificed material for this reason).
* No opponent threat possible (a capture which cannot be blocked or prevented).
* FHR is not applied when the root position is in the database (nor at any other node within the search tree).

The margin applied is 80, which is 80% of the value of a man (100).
The reduced search depth is a function of the difference between material value and beta: iDeltaDepth = iMatVal - iBeta - 80.
See the actual Damage source code below:

Code:

//	Fail High Reduction ;

	if ( !pEngineData->bDatabasePosition && iDepth != 0 && pTDepth->iCapture == 0 ) {					// Base Condition

		iMatVal			= ( bTurn == true ? pTDepth->matval : -(pTDepth->matval) ) ;
		bBreakThrough	= ( bTurn == true ? m_pEvaluation64->Eval64_bBlackBreakThrough(pTDepth) : m_pEvaluation64->Eval64_bWhiteBreakThrough(pTDepth) ) ;

		if ( bBreakThrough == false && iQCapture ==  CAPTURETHREAD_NOTHREAD ) {							// Opponent has no Breakthrough & no Capture Thread !

			if ( pEngineData->bFHR == true && pTDepth->bPV == false && iBeta > -WIN_THRESHOLD ) {		//	Fail High Reduction

				iDeltaDepth = ( iMatVal - iBeta - 80 ) ;

				if ( iDeltaDepth >= 0 ) {

					iDeltaDepth = 2 + iDeltaDepth/100 ;
					iNewDepth = max(iDepth - iDeltaDepth, 0) ;
				}
			}
		}
	}

For testing purposes I can switch FHR on/off ( pEngineData->bFHR ).

Hope I find the time for Part II tomorrow.
If there are questions, remarks, suggestions, whatever, please let me know...
I also hope that it will motivate others to share their ideas... (in line with the philosophy of Rein).

Bert
BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Computer Draughts 2012

Post by BertTuyt »

Part II

To avoid switching between moves at the root based on small evaluation score differences (like 0.01, which is 1/100 of a man value), I introduced granularity (as other programs do as well): all scores round to an even value ( ..., -2, 0, 2, 4, ... ). I might add odd scores later for DB draws (like Kingsrow).
Unfortunately (and initially for unknown reasons) there were many move sequences with an odd result (1, 3, ...).
The reason was (in the end) logical, but maybe other programmers face the same issue.
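The rounding itself is a one-liner. A sketch (my own formulation, not Damage source) that snaps a score to granularity 2, rounding half away from zero:

```cpp
#include <cassert>

// Snap a score to the nearest multiple of the granularity g (2 here),
// so that scores differing by less than g compare equal and do not
// trigger a root move switch (or a re-search) over a 0.01-man gap.
int RoundToGranularity(int score, int g = 2)
{
    if (score >= 0)
        return (score + g / 2) / g * g;   // round half up
    return -((-score + g / 2) / g * g);   // mirror for negative scores
}
```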

In the original search I used 2 optimizations which did not seem to work well:
1) I used the hash score (so apparently the position was already evaluated) in case the hash depth was equal to the search depth. But as I use more and more pruning mechanisms that differ between PV and non-PV nodes (and as I also don't use pondering, and clear the hash table between moves), I suspected that using the hash score for PV nodes was something I should omit. Next to that, I also use the hash result for alpha modification, which I have now also switched off for PV nodes.

2) In the zero-window search for non-PV nodes, I used the score of the previous iteration when a fail-high occurred and a re-search with a wider window was needed.
Using the score of the previous iteration as a new lower bound basically reduces the window of the search (and therefore increases efficiency). However, as the PV search is different (with more restrictions on pruning), this assumption can yield erroneous results.

After changing these 2 things, the scores are now always even (in line with the granularity).
Another positive effect is that the PV now really is the PV on which the tree score is based.
After the search I now do a check where I follow the PV and display the end position (which sometimes gives interesting clues about evaluation mistakes) and the related score.

Bert
TAILLE
Posts: 968
Joined: Thu Apr 26, 2007 18:51
Location: FRANCE

Re: Computer Draughts 2012

Post by TAILLE »

Hi Bert,
BertTuyt wrote:Part I.

The Damage search is based on the PVS framework with zero-window search in non-PV nodes.

Multi-cut pruning (MCP) is applied in non-PV nodes, so in both CUT nodes and ALL nodes, although the method is mainly designed for CUT nodes. I still need to check if it makes a difference when the mechanism is only applied at expected CUT nodes (instead of all non-PV nodes).
The code below is from the Damage search routine.
The 3 parameters needed by MCP are the depth reduction factor (value 3 in Damage), the number of MCP moves examined (value 5), and the number of moves which need to create a beta cutoff before Multi_Cut_Pruning() becomes true (value 2).



If there are questions, remarks, suggestions, whatever, please let me know...
I also hope that it will motivate others to share their ideas... (in line with the philosophy of Rein).

Bert
Yes Bert, of course any discussion here may be interesting. Seeing you are waiting for remarks/comments, I will do my best.
I tried in the past to apply the MCP procedure, but eventually I was not satisfied. I have to say, though, that my search algorithm is based on MTD-f and not PVS; as a consequence the issue may be different for you.
I understood MCP was built only for CUT nodes, because for these nodes the probability to prune after the MCP may be high. On the other hand, the probability to prune after MCP at an ALL node should be very low, and I suspect it is not very efficient. Did you measure a real gain by including these ALL nodes?
Concerning CUT nodes, the MCP procedure seems interesting, but it fails in a very common situation. Take a position which is really a CUT node because a (simple or complex) combination exists in this position. The MCP procedure will not work in this case, and will appear to be a waste of time.
But the case above is not really what convinced me not to use the MCP procedure.
The key point for me is surrounding strategies, and in particular the strategies used against an opponent's white man on 24 (black man on 27). If you are white with a man on square 24 you may often see an advantage for white, even though black might have a good opportunity to surround white. If you apply the MCP procedure you will only confirm the advantage for white, and you will miss the black strategy due to the reduction occurring in the MCP procedure. You mentioned Damage was not very strong with a white man on 24. Don't you think one reason could be the use of the MCP procedure?
Gérard
BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Computer Draughts 2012

Post by BertTuyt »

Part III

Late move reduction (LMR) is applied during the zero-window PVS search for non-PV nodes.
Basically the mechanism should be applied at ALL nodes, but when the search enters this phase, this condition is most likely valid.
The actual implementation is given below:

Code:

			bLMR = ( pTDepth->bPV == false && pTDepth->iklmx == 0 && iNMovesSearched >=4 && iNewDepth >= 3 ) ;

			if ( pSearchThreadData->bLMR == true && bLMR )
				iMWScore = -SearchN<EngID>(iSearchThreadID, ( bDDepth && pTDepth->bPromotion == false ? iNewDepth-1 : iNewDepth ) - SEARCH_LMR_DEPTHREDUCTIONFACTOR, -iAlfa-1, -iAlfa, pTDepth+1, pEngineData, !bTurn) ;
			else
				iMWScore = iAlfa + 1 ;

			if ( iMWScore > iAlfa )
				iMWScore = -SearchN<EngID>(iSearchThreadID, ( bDDepth && pTDepth->bPromotion == false ? iNewDepth-1 : iNewDepth ), -iAlfa-1, -iAlfa, pTDepth+1, pEngineData, !bTurn) ;

Boundary conditions for LMR:
* bPV is false (but I should probably omit this test, as it is most likely impossible to have a PV node here). (So discussing your own code helps, and forces me to re-check :D )
* Actual search depth >= 3.
* The first 4 moves are searched without LMR (the 4 is hardcoded; should I change that? :( )
* No capture possible (I can't remember why I added this condition, so I need to rethink it) :? .
* The reduced search depth SEARCH_LMR_DEPTHREDUCTIONFACTOR is 2 ply.
* The else iMWScore = iAlfa + 1 ; is a (maybe) non-elegant way of forcing a normal PVS zero-window search, with a re-search at the original search depth in case no fail-low takes place.

In the code one can read that Damage also doesn't count a promotion as an extra ply.
The boolean bDDepth is true when the next search is 1 ply less; it is calculated as:

Code:

bDDepth = !( iNMoves == 1 || iNewDepth == 0 ) ;
In Part IV I will do some tests (I'm also curious myself, as I have not done this lately) on the effect of switching MCP, FHR and LMR on/off.

Bert
BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Computer Draughts 2012

Post by BertTuyt »

Gerard, thanks for your reaction.

Here are some answers, shooting from the hip.

1) You are right that MCP should only be applied at CUT nodes. However, I somewhere/sometime read some remarks from Mark Winands, who also applied MCP at ALL nodes with a positive effect (if I remember well). I didn't test this claim further, because as a "nice" consequence I didn't need to track whether a node was ALL/CUT (PV I do track).

2) It is also clear that MCP is a waste of time in case the node is a singular CUT node. But I assume (that is basically my pragmatic approach) that if the search is better (or the outcome of a 158-game match is better), then apparently the net effect is positive (and the gains by far offset the wasted energy in singular positions).

3) My outpost routine simply contains a bug, as some patterns (which are basically dynamic, but which I evaluate in a static way; if I remember well you also go for dynamics here) do not evaluate the attack/defense in the right way. In the cases where I lost a man, Kingsrow "sees" the failure immediately, even at low search depths.

Bert
TAILLE
Posts: 968
Joined: Thu Apr 26, 2007 18:51
Location: FRANCE

Re: Computer Draughts 2012

Post by TAILLE »

Hi Bert,
BertTuyt wrote:Part II

To avoid switching between moves at the root based on small evaluation score differences (like 0.01, which is 1/100 of a man value), I introduced granularity (as other programs do as well): all scores round to an even value ( ..., -2, 0, 2, 4, ... ). I might add odd scores later for DB draws (like Kingsrow).
Bert
I do not really understand why it is important to avoid switching moves at the root. If two moves are practically identical, then this switching is only natural, isn't it?
BTW, granularity is essential for Damy because I use the MTD-f procedure.
For your information, my evaluation of a position is a one-byte value, from -126 (losing position) to +126 (winning position). The 3 values -128, -127 and +127 are reserved values used for different purposes.
I guess my evaluation has the same granularity as yours between -45 and +45, but outside this range I apply a compression in the spirit of the logarithm function:
+65 corresponds to your +100 (advantage of 1 man)
+79 corresponds to your +200 (advantage of 2 men)
etc.
The calculation of the eval function is made in the "standard" format but, just before returning the result, the eval function compresses the result into one byte.
A side effect of this approach is that I gain room in the hash table!
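Gérard does not give the exact compression curve, but a piecewise-linear sketch matching the two anchor points he mentions (+100 → +65, +200 → +79, identity inside ±45) illustrates the idea. The segment slopes and the clamp beyond two men are my own guesses, not Damy's actual mapping:

```cpp
#include <cassert>
#include <cstdint>

// Compress a "standard" eval (man = 100) into one signed byte in
// [-126, +126]. Inside [-45, +45] the mapping is 1:1; beyond that the
// score is compressed, hitting the anchors +100 -> +65 and +200 -> +79.
int8_t CompressEval(int eval)
{
    int mag = eval >= 0 ? eval : -eval;
    int out;
    if (mag <= 45)
        out = mag;                           // 1:1 in the central range
    else if (mag <= 100)
        out = 45 + (mag - 45) * 20 / 55;     //  45..100 ->  45..65
    else if (mag <= 200)
        out = 65 + (mag - 100) * 14 / 100;   // 100..200 ->  65..79
    else
        out = 79 + (mag - 200) * 47 / 1800;  // squeeze the rest into 79..126
    if (out > 126)
        out = 126;                           // saturate at "winning"
    return static_cast<int8_t>(eval >= 0 ? out : -out);
}
```

A 16-bit score field shrinks to 8 bits, which is where the extra room in the hash table comes from.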
Gérard
BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Computer Draughts 2012

Post by BertTuyt »

Gerard, basically what I meant to say is that I don't want to re-search somewhere in the tree when the difference in evaluation is small.
Therefore I introduced granularity.
So you are right, it is not only about the root.

Bert
TAILLE
Posts: 968
Joined: Thu Apr 26, 2007 18:51
Location: FRANCE

Re: Computer Draughts 2012

Post by TAILLE »

BertTuyt wrote:Part III

Late move reduction (LMR) is applied during the zero-window PVS search for non-PV nodes.
Basically the mechanism should be applied at ALL nodes, but when the search enters this phase, this condition is most likely valid.
Boundary conditions for LMR:
* bPV is false (but I should probably omit this test, as it is most likely impossible to have a PV node here). (So discussing your own code helps, and forces me to re-check :D )
* Actual search depth >= 3.
* The first 4 moves are searched without LMR (the 4 is hardcoded; should I change that? :( )
* No capture possible (I can't remember why I added this condition, so I need to rethink it) :? .
* The reduced search depth SEARCH_LMR_DEPTHREDUCTIONFACTOR is 2 ply.
* The else iMWScore = iAlfa + 1 ; is a (maybe) non-elegant way of forcing a normal PVS zero-window search, with a re-search at the original search depth in case no fail-low takes place.

Bert
LMR is the last procedure I decided to abandon, after having used it for about one year.
Like you, I did not apply reductions to moves leading to a capture, simply because such moves could be part of a combination.
IMO LMR is a very powerful procedure, far more efficient than the MCP procedure. However, when I analyzed in detail on which moves I would like to apply a reduction, I discovered that the best criteria are very dependent on the resulting position. As a consequence these criteria should be analyzed on the next ply, and I replaced my LMR procedure by a new (and more aggressive) FHR procedure.
Gérard
Rein Halbersma
Posts: 1722
Joined: Wed Apr 14, 2004 16:04
Contact:

Re: Computer Draughts 2012

Post by Rein Halbersma »

BertTuyt wrote:Part I.

The Damage search is based on the PVS framework with zero-window search in non-PV nodes.

Multi-cut pruning (MCP) is applied in non-PV nodes, so in both CUT nodes and ALL nodes, although the method is mainly designed for CUT nodes. I still need to check if it makes a difference when the mechanism is only applied at expected CUT nodes (instead of all non-PV nodes).
The code below is from the Damage search routine.
The 3 parameters needed by MCP are the depth reduction factor (value 3 in Damage), the number of MCP moves examined (value 5), and the number of moves which need to create a beta cutoff before Multi_Cut_Pruning() becomes true (value 2).
Hi Bert,

Thanks for explaining your code in such detail! There is a nice draughts anecdote about Multi-Cut. IIRC, there was a 1995 game (http://toernooibase.kndb.nl/opvraag/app ... &wed=58725) between Sijbrands and Schwarzman and they had a very complicated middle game situation in which Schwarzman could have played a very aggressive but risky move. After the game Sijbrands asked why Schwarzman didn't play it. His response: "There were two tactical exchanges you could have taken, each of which gave you a king for 3 men. It was too hard to judge the values of these variations exactly, but I figured at least one of them must be bad for me." When I first read about multi-cut, I immediately had to think of this passage from one of Sijbrands' analyses. I often think of Multi-Cut as a sort of Early Move Pruning (as opposed to Late Move Reductions).
After MCP, fail-high reduction (FHR) is applied, also mainly working on CUT nodes. Maybe I need to check if reversing the order of MCP and FHR makes sense.
The basic idea of FHR: if the player to move has a huge material advantage (based only on the evaluation material score of men and kings) which exceeds beta by a margin, then it doesn't make sense to continue that line. Base conditions:
* No capture possible from the perspective of the player to move (as this could be part of a combination sequence which the opponent has trapped him into).
* No breakthrough possible for the opponent (a free running man who can promote to king), which would create a completely different material-value balance (maybe the opponent sacrificed material for this reason).
* No opponent threat possible (a capture which cannot be blocked or prevented).
* FHR is not applied when the root position is in the database (nor at any other node within the search tree).

The margin applied is 80, which is 80% of the value of a man (100).
The reduced search depth is a function of the difference between material value and beta: iDeltaDepth = iMatVal - iBeta - 80.
For testing purposes I can switch FHR on/off ( pEngineData->bFHR ).
I don't have a mature search routine yet, but my search will become a modified version of the Stockfish template. Stockfish does roughly the following:

1) TT lookup
2) Razoring and futility pruning: if depth < 4 and the eval score is far below or far above the current window, return the appropriate bound.
3) Reduced (R >= 3) null-move search with verification. If the null move succeeds, they verify it with a reduced search. If the verification succeeds, or if the original null-move search fails because of a threat that was connected to LMR one ply above in the tree, they return immediately.
4) ProbCut: they do reduced (R = 3) searches on promising captures and return if the shallow search gives a beta cutoff.
5) Internal iterative deepening to sort moves.
6) The main loop over all moves, with several enhancements (singular extensions, futility pruning, LMR, etc.)

Note that Stockfish does not do FHR but uses null-move pruning instead. The difference is two-fold: instead of using the static (material) eval, they use a reduced search to compare against beta; and instead of reducing the search depth for the remaining moves, they do a hard cutoff (if the score is verified, or a threat is identified that negates a previous LMR). So quite a bit more aggressive!
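To make the contrast with FHR concrete, here is a sketch of verified null-move pruning in the spirit of the description above. The two search callbacks stand in for the engine's real recursive search; the names and the exact depth arithmetic are illustrative, not Stockfish code (and whether a null move even makes sense in draughts, where captures are forced, is a separate question):

```cpp
#include <cassert>
#include <functional>

// Verified null-move pruning sketch. Instead of comparing a static
// material score against beta (as FHR does), give the opponent a free
// move and run a depth-reduced search. If even then the score is still
// >= beta, a reduced verification search of the real moves confirms the
// hard cutoff (guarding against zugzwang-like errors).
struct NullMoveResult {
    bool cutoff;  // true: the caller may return beta immediately
    int score;
};

NullMoveResult VerifiedNullMove(
    int depth, int beta,
    const std::function<int(int, int)>& searchAfterNullMove,  // (depth, beta)
    const std::function<int(int, int)>& verificationSearch,   // (depth, beta)
    int R = 3)  // null-move depth reduction
{
    if (depth <= R + 1)
        return {false, 0};  // too shallow to reduce; search normally
    int score = searchAfterNullMove(depth - 1 - R, beta);
    if (score < beta)
        return {false, score};  // null move did not fail high
    // Null move failed high: verify with a reduced search of the real
    // moves before trusting the cutoff.
    int verified = verificationSearch(depth - R, beta);
    return {verified >= beta, verified};
}
```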

Rein