Search Algorithm

Discussion about the development of draughts in the era of computers and the Internet.
Walter Thoen
Posts: 44
Joined: Wed Nov 17, 2010 13:26
Real name: Walter Thoen

Re: Search Algorithm

Post by Walter Thoen » Sun Mar 10, 2013 13:37

BertTuyt wrote:Krzysztof,

the Horizon 4.1 version, including all sources (also with all the required files for the Visual Studio Project) will most likely be available mid March, so in 2 - 3 weeks.

Bert
Bert,

Can you add your Visual Studio Project files to the published sources?

Regards,
Walter

Rein Halbersma
Posts: 1722
Joined: Wed Apr 14, 2004 16:04

Re: Search Algorithm

Post by Rein Halbersma » Sun Mar 10, 2013 13:40

Walter Thoen wrote:
BertTuyt wrote:Krzysztof,

the Horizon 4.1 version, including all sources (also with all the required files for the Visual Studio Project) will most likely be available mid March, so in 2 - 3 weeks.

Bert
Bert,

Can you add your Visual Studio Project files to the published sources?

Regards,
Walter
Better yet, install Mercurial, TortoiseHg and use BitBucket! Having all those version comments in the source is sooooo 1970s ;-)

Walter Thoen
Posts: 44
Joined: Wed Nov 17, 2010 13:26
Real name: Walter Thoen

Re: Search Algorithm

Post by Walter Thoen » Sun Mar 10, 2013 16:55

BertTuyt wrote:Herewith the CSearch class.
The YBWC algorithm is mostly based on the work of Glaurung/Stockfish.

Also here the link to the Dropbox source folder...

https://www.dropbox.com/sh/sf7ynwmgo8ljf2j/6HkUp4CBwV

Bert
I tried compiling the source in VS2012 but I am missing CBook64.h and resource.h

Walter Thoen
Posts: 44
Joined: Wed Nov 17, 2010 13:26
Real name: Walter Thoen

Re: Search Algorithm

Post by Walter Thoen » Sun Mar 10, 2013 17:05

Rein Halbersma wrote: Better yet, install Mercurial, TortoiseHg and use BitBucket! Having all those version comments in the source is sooooo 1970s ;-)
Bitbucket + Mercurial is good. Bitbucket + Git might be easier for Visual Studio users as Git is now fully supported and integrated by Microsoft.

Anyway, this decision is for Bert as he is currently the only one contributing code :D

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Sun Mar 10, 2013 21:36

Although I don't focus on Damage these days, I was curious what Damage would do against Dragon.
So I played a short match, 10 min/game, with a 4-core search for both programs.
The only asymmetry was that Damage had a slight advantage, using a 7P DB instead of the 6P DB used by Dragon.
Result from the Damage perspective: 80 games, 5 wins, 3 losses and 72 draws, yielding a 9-point Elo difference.
The PDN match file is attached.
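For reference, the 9 Elo follow from the usual logistic rating model; a minimal sketch of the arithmetic (the helper name is just illustrative):

#include <cmath>
#include <cstdio>

// Illustrative helper: Elo difference implied by a match score, using the
// usual logistic model  E = 1 / (1 + 10^(-d/400)).
double elo_from_match(double wins, double losses, double draws) {
    double score = (wins + 0.5 * draws) / (wins + losses + draws);  // scoring percentage
    return 400.0 * std::log10(score / (1.0 - score));               // invert the model
}

int main() {
    // Damage vs Dragon: 5 wins, 3 losses, 72 draws -> 41/80 = 51.25%
    std::printf("%.1f Elo\n", elo_from_match(5, 3, 72));            // prints about 8.7
}

With 41 out of 80 points (51.25%) the difference comes out at about 8.7, so roughly 9 Elo.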

As I normally post the Kingsrow PDN, this time I had to use the Damage PDN.
I found a strange bug, which you should be aware of when analyzing the games.
As I don't want to exploit the 7P advantage, I stop the game when only 7 pieces are left on the board and rely on the DB score.
The bug (not noticed before, as I almost never use the Damage PDN) is that although the game stops, Damage still adds a move (a random one, or just the first move from the move list) to the game move list. So sometimes the resulting board position is suddenly lost (so a win for Dragon). The PDN result in that case is the correct one.

Although I have used the YBWC algorithm before, I never tuned this implementation with the new Damage search. It could be that I need to fine-tune some parameters, as the overall nodes/second is a little disappointing (the number of split points seems rather high)...
As a side effect, Horizon will also benefit from this, as the only difference between the two programs is the evaluation function.


Bert
Attachments
DXPMatch.pdn
(78.65 KiB) Downloaded 248 times

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Sun Mar 10, 2013 21:43

Bert,

Can you add your Visual Studio Project files to the published sources?

Regards,
Walter
Walter, will do.
I'm now slightly changing the Horizon evaluation: instead of one large evaluation function, I want to use several smaller functional ones.
I will post it in the next few days and will let you know..

Bert

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Sun Mar 10, 2013 22:07

Rein Halbersma wrote:
Better yet, install Mercurial, TortoiseHg and use BitBucket! Having all those version comments in the source is sooooo 1970s ;-)


Bitbucket + Mercurial is good. Bitbucket + Git might be easier for Visual Studio users as Git is now fully supported and integrated by Microsoft.

Anyway, this decision is for Bert as he is currently the only one contributing code :D
:D Absolutely right, I wrote my first draughts program in the 1970s, and in FORTRAN.
Basically, as a non-programmer I'm not a grandmaster in all the sophisticated new tools now available. But I will, and need to, dig into them....

Bert

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Sun Mar 10, 2013 22:18

I tried compiling the source in VS2012 but I am missing CBook64.h and resource.h
I have copied the resource.h file, the .rc file and the Res folder to the Horizon source directory (the engine is an MFC application).
I removed the reference to CBook in the CEngine and CSearch classes.
For this reason the CSearch.h file was also slightly modified.
Everything is updated in Dropbox now.
As shared already, I want to build a book based on the dropout expansion method.
But before that, search and evaluation should be more bug-free.....

Let me know if you got it all working, are still missing something, and/or need (more) help/support.
So far I have used VC2008 myself for the engine ...

Bert

Rein Halbersma
Posts: 1722
Joined: Wed Apr 14, 2004 16:04

Re: Search Algorithm

Post by Rein Halbersma » Sun Mar 10, 2013 22:22

BertTuyt wrote:
Rein Halbersma wrote:
Better yet, install Mercurial, TortoiseHg and use BitBucket! Having all those version comments in the source is sooooo 1970s ;-)


Bitbucket + Mercurial is good. Bitbucket + Git might be easier for Visual Studio users as Git is now fully supported and integrated by Microsoft.

Anyway, this decision is for Bert as he is currently the only one contributing code :D
:D Absolutely right, I wrote my first draughts program in the 1970s, and in FORTRAN.
Basically, as a non-programmer I'm not a grandmaster in all the sophisticated new tools now available. But I will, and need to, dig into them....

Bert
Bert, it takes 30 minutes to learn: http://hginit.com/

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Sun Mar 10, 2013 22:30

Rein, thanks, that will then be one of the things on my to-do list for next week..... :)
Thanks for sharing.....

Bert

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Sun Mar 10, 2013 23:19

Walter, thanks for your post; herewith some remarks:

I would certainly want to help improve the strength of Horizon, especially now that everything is open source.
Anyway, thanks for your support. I hope that others are also encouraged to share ideas and/or source. I think the sources of Feike (especially the evaluation function) are of great help. I also very much appreciate the open and constructive way Rein is sharing ideas and code. In the past Michel also posted all the sources of an older Dragon engine on the Internet; I'm not sure if that source is still available. I hope Stef Keetman (TRUUS) and Adri Vermeulen (FLITS) will do the same, as their programs are no longer sold (Flits may still be available via the KNDB) and/or competitive.

I also know Harm Jetten as a very supportive person, who shared all the details of the DB handler in Dam 2.x. I think many programs rely on this code (at least Maximus uses the Harm Jetten DBs). I'm not sure, but I think this was already a very good cooperation between Michel and Harm at that point in time, which very much supported and accelerated the application of DBs in draughts engines. My ideal is to have one big container in the cloud with all knowledge and sources for many engines. As draughts is a small world, most of us will most likely not make big money with these programs.

Maybe I only make an exception for Ed!! I really value his contribution to computer draughts. Basically he did what Stef did with TRUUS: put something out there that is light-years beyond the current competition. Both Stef and Ed were reasons for me to think further and to challenge their programs. And for me Kingsrow is still (for years to come) the proud de facto standard and real value for money. So as long as we do not come close, I have no problem with Ed keeping several ideas to himself. Anyway, when you ask him, he will always provide details, suggestions and many hints. Ed, thanks for that; a pity you are not so active here anymore, we miss you...
I find it interesting to think about how a human would evaluate and how Horizon evaluates. For instance, Horizon seems to treat every node completely independently of the game position (i.e. of what is already known). The game position often has many features that do not change often but are critical for evaluation.

A position with a few pieces cannot change into a position with many pieces.
The tempi count changes by +1 for a non-capture move. Tempi imbalances are caused by exchanges.
A classical game rarely changes into an attacking game and vice versa.
A lock (hekstelling etc.) usually persists for many moves.

Horizon does not seem to use the fact that these things rarely change during the game. Instead, for every node Horizon seems to determine and calculate these things afresh. Would it not be more efficient to call the evaluation function with parameters based on knowledge from the game position (e.g. call the evaluation with is_klassiek = true for the entire search if is_klassiek is true in the game position)? Would it be more efficient to add tempi information to the node and update it during move generation? (Note that only the tempi difference between white and black is needed, to conserve memory.) Do you really have to recount the number of locked men for every node?
Basically you are right, and it might be that an incremental evaluation update is far more effective. I also thought about that, but it would make my evaluation-search interaction even more complex, so for the time being I have put this idea to bed.
A drawback is that with incremental updates you need to update the calculations on every move, which in the end (especially when you go really deep) can be more costly than doing the math at the end node. For example, in the past I also did the tempi update after every move. But when I got a new processor with a hardwired POPCNT instruction, the incremental update was in fact slower than doing everything at the end node.
I do some incremental updates for parameters which I need to access at every depth, like the number of men and kings and the hash value (although I think Ed has a separate hash function for this and also does not do an incremental Zobrist-style update).
But if anyone already has some experience with this approach (incremental evaluation update), please share it in this forum.
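To make the trade-off a bit more concrete, here is a minimal sketch of the two variants, assuming a simple bitboard position and a per-row mask table; the names and the tempo definition are only illustrative, not the actual Damage/Horizon code:

#include <bit>
#include <cstdint>

// Illustrative bitboard position: one bit per playable square. The tempo
// definition below (sum of row indices of all men) is just an example.
struct Position {
    std::uint64_t white_man;
    std::uint64_t black_man;
    int           tempo_diff;   // incrementally maintained white-minus-black tempo
};

// Variant A: recompute at the leaf. std::popcount typically compiles to the
// hardware POPCNT instruction, which is what makes this variant cheap.
inline int tempo_at_leaf(std::uint64_t man, const std::uint64_t row_mask[10]) {
    int tempo = 0;
    for (int row = 0; row < 10; ++row)
        tempo += row * std::popcount(man & row_mask[row]);  // men per row, weighted by row
    return tempo;
}

// Variant B: incremental update in make-move. A simple forward man move adds
// one tempo for the side to move; captures and promotions need extra work
// (recounting the removed men), which is where the bookkeeping gets costly.
inline void update_tempo_on_man_move(Position& pos, bool white_to_move) {
    pos.tempo_diff += white_to_move ? +1 : -1;
}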
I also find the interaction between search and evaluation difficult to get my head around. For instance, in the outpost (voorpost) evaluation the important questions are indeed 1) are there enough defenders? 2) will the defenders arrive on time? 3) are there other defenses than recapturing the attacker (opvangen)? Horizon attempts to answer the first two questions in the various Voorpost evaluation functions. The question whether the defenders will arrive on time is answered by calculating distances. For a human, the second question is probably more something for a restricted search. By a restricted search I mean a search that only considers moves that are relevant for the issue being evaluated; in this case, only moves aimed at attacking or defending the outpost.

I think that a human uses such restricted or targeted searches a lot to evaluate a position. Is the outpost safe? Is a breakthrough possible? Who controls the wings? Could a computer implement this?
The outpost is really an "animal" where it might be preferable to sync evaluation and search better, as the current static approach is so heavy and the code hardly readable and/or maintainable. I think Gerard (DAMY) uses this approach. It would be good if he provided some status information on his novel approach, as I also believe that a better dynamic cooperation between search and evaluation will pay off in the end (and is more like how humans "think").
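To illustrate the restricted-search idea, a rough sketch of the control flow; Board and the engine hooks are template placeholders (not Horizon code), and the relevance filter would only keep moves that attack or defend the outpost:

#include <algorithm>

// GenMoves: container of moves for a board; DoMove: board after a move;
// Eval: static score from the side to move's point of view;
// Relevant: does this move matter for the question being asked?
template <class Board, class GenMoves, class DoMove, class Eval, class Relevant>
int restricted_search(const Board& b, int depth,
                      GenMoves gen, DoMove domove, Eval eval, Relevant relevant) {
    if (depth == 0)
        return eval(b);
    int best = -1000000;
    bool searched_any = false;
    for (const auto& m : gen(b)) {
        if (!relevant(b, m))
            continue;                                   // skip moves unrelated to the question
        Board child = domove(b, m);
        best = std::max(best, -restricted_search(child, depth - 1,
                                                 gen, domove, eval, relevant));
        searched_any = true;
    }
    return searched_any ? best : eval(b);               // no relevant moves: static answer
}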
Some other areas that interest me personally are:

1) Add Killer-rule to Move Generator
2) Use the CMoveGen64 code to provide legal moves to the Web GUI.
3) Use the CDatabase64 code to retrieve endgame DB info in the Web GUI.
4) Try to get the search to work massively parallel on the Xeon Phi
5) 3-Hirn approach, i.e. start multiple searches with different parameters or eval in parallel and arbitrate best move.
At least the good news is that, with the code available these days, you have a head start with these ideas.

In my case, the 5 areas I need to work on:
- A better Horizon evaluation, adding some features and improving some functions (like free path, runaway, or breakthrough, or whatever name is used...).
- Making the Horizon evaluation faster. I really need to run a profile, because I have no clue what is consuming the most time (but I suspect the outpost routine).
- YBWC parameter tuning. I think the Xeon Phi is an interesting concept (do you have one?), but I doubt whether draughts really scales well to 50+ cores.
The same holds for the GPU: unlimited power, but hardly feasible for this type of problem (but maybe the chess folks will find something; at least there are regular postings in the various forums).
- The current search goes really deep (and has real potential), but often the baby is thrown out with the bathwater, so I need to do something better here.
- An auto-generated opening BOOK.

So never a dull moment... :D

Bert

Walter Thoen
Posts: 44
Joined: Wed Nov 17, 2010 13:26
Real name: Walter Thoen

Re: Search Algorithm

Post by Walter Thoen » Tue Mar 12, 2013 16:40

BertTuyt wrote: - YBWC parameter tuning. I think the Xeon Phi is an interesting concept (do you have one?), but I doubt whether draughts really scales well to 50+ cores.
The same holds for the GPU: unlimited power, but hardly feasible for this type of problem (but maybe the chess folks will find something; at least there are regular postings in the various forums).
Bert
I don't have a Xeon Phi yet, but I did include one in the 2013 R&D budget for the startup I'm involved in, so I might get my hands on one later this year. I have the same doubts about scaling a single search to 50+ cores. Hence my interest in the 3-Hirn approach: start multiple searches, maybe using at most 10 cores each, where the parameters / evaluation differ for each search. The hard part will be the arbitration when the searches do not agree on the best move.
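As a rough sketch of what I have in mind (SearchConfig, the search callable and the majority-vote arbitration are only placeholders, not working engine code):

#include <future>
#include <map>
#include <string>
#include <vector>

// One entry per "brain": its own evaluation/search parameters and core budget.
struct SearchConfig {
    std::string name;      // e.g. "positional eval", "material eval"
    int         threads;   // cores reserved for this search
};

// SearchFn: std::string(const SearchConfig&), returning the chosen move.
template <class SearchFn>
std::string three_hirn(const std::vector<SearchConfig>& configs, SearchFn search) {
    if (configs.empty())
        return std::string{};

    std::vector<std::future<std::string>> futures;
    for (const auto& cfg : configs)                        // one async search per config
        futures.push_back(std::async(std::launch::async, search, cfg));

    std::vector<std::string> moves;
    for (auto& f : futures)
        moves.push_back(f.get());                          // wait for all results

    std::map<std::string, int> votes;
    for (const auto& m : moves)
        ++votes[m];

    std::string best = moves.front();                      // first config acts as the reference
    int best_count = votes[best];
    for (const auto& v : votes) {
        if (v.second > best_count) {                       // a strict majority overrides it
            best = v.first;
            best_count = v.second;
        }
    }
    return best;
}

A simple majority vote is only one option; arbitrating on score margins or reached search depth would be alternatives.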

I am following the GPGPU experiments as well, but at the moment I think that the code changes would be too extensive due to the very different architectures.

As a research topic, the question of how to make use of many cores is very relevant, I think. As in the 80s/90s, chess/draughts programming could perhaps once again serve as an interesting platform for experiments. The interest faded a bit because 2, 4, or 8 cores didn't matter much for the parallel algorithms already known around 2000; the software was ready for the hardware developments between 2000 and, say, 2012. Now we have an interesting new development in hardware towards hundreds or thousands of cores, and the question moves back to the software: how best to use the new hardware power? The answer for chess/draughts could once more be relevant for other HPC / AI applications as well.

Walter

Walter Thoen
Posts: 44
Joined: Wed Nov 17, 2010 13:26
Real name: Walter Thoen

Re: Search Algorithm

Post by Walter Thoen » Tue Mar 12, 2013 17:40

BertTuyt wrote:
Bert,

Can you add your Visual Studio Project files to the published sources?

Regards,
Walter
Walter, will do.
I'm now slightly changing the Horizon evaluation: instead of one large evaluation function, I want to use several smaller functional ones.
I will post it in the next few days and will let you know..

Bert
Bert,

I managed to get the code to build. I had some minor issues with CSearch64:

_ENGINE_HORIZON didn't seem to be defined, causing compile errors in some Damage evaluation parts.

CSearch64 line 427: m_pHorizonEvaluation->evaluate(pTDepth, iAlfa, iBeta, bInfo, &csEvaluation) ;
iAlfa, iBeta aren't parameters of the evaluate function in CHorizonEvaluation

Regards,
Walter

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Tue Mar 12, 2013 19:31

Walter, interesting. By the way, what hardware do you use (processor, memory, ...)?
Can you run the following small experiment:

opponents engine-engine
levelsfixeddepth 20
go

I'm curious what the output is when you get to ply 20.
Basically, the NME values should be exactly the same as in my case (as this is a single-threaded search).

Most of the smaller 7P sub-DBs have been uploaded to your SkyDrive.
But the 300M files were rejected; apparently I have to do something differently. I hope you have a clue...

Bert

Rein Halbersma
Posts: 1722
Joined: Wed Apr 14, 2004 16:04

Re: Search Algorithm

Post by Rein Halbersma » Tue Mar 12, 2013 21:23

Walter Thoen wrote:
BertTuyt wrote: - YBWC parameter tuning. I think the Xeon Phi is an interesting concept (do you have one?), but I doubt whether draughts really scales well to 50+ cores.
The same holds for the GPU: unlimited power, but hardly feasible for this type of problem (but maybe the chess folks will find something; at least there are regular postings in the various forums).
Bert
I don't have a Xeon Phi yet, but I did include one in the 2013 R&D budget for the startup I'm involved in. So I might get my hands on one later this year. I have the same doubts about scaling a single search to 50+ cores.
Walter
Actually, I have a strong feeling that parallel search *will* scale to 50+ cores, but it requires advanced programming environments such as the one used by Cilkchess. One problem that Cilk (currently available as a special branch of gcc maintained by Intel) solves, compared to most existing YBW implementations (such as the one in Stockfish), is split-point handling and load balancing over the various cores. The tree itself contains enough parallelism as the search depth increases; it's "only" a matter of keeping all the processors busy. The Cilk scheduler does this in a provably optimal way (by randomized work-stealing rather than the deterministic work-pushing found in almost every hand-made implementation of YBW).
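As a rough sketch of how YBW maps onto Cilk-style spawning (Board, the move generator, make-move and evaluation are template placeholders; sharing improved bounds between siblings and aborting refuted subtrees, which a real implementation needs, are left out):

#include <cilk/cilk.h>
#include <algorithm>
#include <cstddef>
#include <vector>

// GenMoves: std::vector<Move>(const Board&); DoMove: Board(const Board&, const Move&);
// Eval: int(const Board&) from the side to move's point of view.
template <class Board, class GenMoves, class DoMove, class Eval>
int ybw_search(const Board& b, int depth, int alpha, int beta,
               GenMoves gen, DoMove domove, Eval eval) {
    if (depth == 0)
        return eval(b);
    auto moves = gen(b);
    if (moves.empty())
        return eval(b);

    // Eldest brother first, searched serially: it establishes a bound and
    // catches most beta cutoffs before any parallel work is started.
    int best = -ybw_search(domove(b, moves[0]), depth - 1, -beta, -alpha,
                           gen, domove, eval);
    if (best >= beta)
        return best;
    alpha = std::max(alpha, best);

    // Young brothers: spawned; the Cilk scheduler load-balances the subtrees
    // across the cores by randomized work-stealing. Each strand writes only
    // its own slot of 'score', so there is no shared mutable state.
    std::vector<int> score(moves.size(), 0);
    for (std::size_t i = 1; i < moves.size(); ++i) {
        score[i] = cilk_spawn ybw_search(domove(b, moves[i]), depth - 1,
                                         -beta, -alpha, gen, domove, eval);
    }
    cilk_sync;

    // Child scores are from the opponent's point of view; negate after the sync.
    for (std::size_t i = 1; i < moves.size(); ++i)
        best = std::max(best, -score[i]);
    return best;
}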
