Release of DCTL: draughts and checkers template library

Discussion about the development of draughts in the age of computers and the Internet.
Walter
Posts: 96
Joined: Fri Jul 13, 2012 19:23
Real name: Walter Thoen

Re: Release of DCTL: draughts and checkers template library

Post by Walter »

BertTuyt wrote:
For the evaluation function, my first, probably very naive, strategy is to just write a bunch of general eval terms (material, center, balance, terrain, tempo, outposts) plus some lock and bridge patterns, automatically tune the coefficients, and see where that leads. But these would just be base-level programs; I don't expect them to be competitive with the state of the art. That will require more manual labor :-)

I expect that with all your draughts knowledge and automatic coefficient tuning your implementation could already approach top level!

But I tend to agree that in the end the eval function is the area which makes the difference and (so far) requires a huge manual effort.
For this reason I'm still thinking about how we could approach evaluation-function learning in a different way.

That was also the reason that I'm now working on DQL.
The base idea is to find specific board/game positions with a DQL search, and then extract the relevant patterns from the DQL search output.

My main focus these days is the evaluation of the white outpost on 24. The base algorithm (as used by most, I guess) is to count attackers and defenders (with some clever race conditions).
Unfortunately there are many exceptions, and exceptions on exceptions, and ...

I still have not found the breakthrough solution/approach, but if it were easy everybody would have done it already...

Bert
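
To make the two ideas quoted above concrete, here are two sketches in C++. They are illustrative only: this is not DCTL code or anyone's actual engine code, and the feature names, weights and zone masks are invented.

First, the "general eval terms plus automatically tuned coefficients" approach. The evaluation stays a plain weighted sum over features; a tuner only adjusts the weights.

```cpp
// Sketch of a linear evaluation with tunable coefficients.
// Feature names and starting weights are invented, not taken from any engine.
struct Features {
    int material, center, balance, terrain, tempo, outposts;
};

struct Weights {
    int material = 100, center = 4, balance = 8, terrain = 2, tempo = 3, outposts = 12;
};

int evaluate(Features const& f, Weights const& w)
{
    // An automatic tuner (e.g. fitting the weights against game results)
    // only changes the members of Weights; the sum itself never changes shape.
    return w.material * f.material + w.center   * f.center   + w.balance  * f.balance
         + w.terrain  * f.terrain  + w.tempo    * f.tempo    + w.outposts * f.outposts;
}
```

Second, the attacker/defender count for the white outpost on 24: count the men each side can bring to bear on the outpost square and compare the two counts. The zone masks below are placeholders; a real implementation would encode the actual diagonals and add the tempo "race" comparison, which is exactly where the exceptions start.

```cpp
#include <bit>
#include <cstdint>

// Placeholder masks: the squares from which Black can attack, and White can
// support, the outpost on 24. To be filled in with the real square sets.
constexpr uint64_t attack_zone_24  = 0;
constexpr uint64_t defence_zone_24 = 0;

// Positive if the outpost looks tenable, negative if it looks overextended.
int outpost_24_balance(uint64_t white_men, uint64_t black_men)
{
    int defenders = std::popcount(white_men & defence_zone_24);
    int attackers = std::popcount(black_men & attack_zone_24);
    return defenders - attackers;
}
```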
My guess is that the role of evaluation is going to become more limited over time. The reasoning behind this is that the game phase in which evaluation is important is becoming shorter. A typical game lasts between 120 and 150 ply. We already have opening books and endgame databases that provide 'perfect play' for, say, the first 30 and the last 30 ply. This leaves a 60-90 ply gap in the middle that is currently covered by search and evaluation. As the current search depth is probably about 20-30 ply, we cannot bridge the gap, and hence we need evaluation to ensure that we stay on the right track to reach the other end.
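
Spelled out with just the numbers from the paragraph above (a trivial check of the arithmetic, nothing more):

```cpp
#include <iostream>

int main()
{
    int game_min = 120, game_max = 150;  // typical game length in ply
    int book = 30, egdb = 30;            // 'perfect play' at both ends
    int search = 30;                     // optimistic nominal search depth

    std::cout << "middle gap: " << (game_min - book - egdb) << "-"
              << (game_max - book - egdb) << " ply, search reach: "
              << search << " ply\n";
    // -> a 60-90 ply gap against a 20-30 ply search, so evaluation has to
    //    steer the program through the plies the search cannot yet see.
}
```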

However, with larger opening books and larger endgame databases the middle phase will shrink. Even with improving hardware and search, it will still be a long time before search and evaluation can safely bridge the gap on their own. But once the middle phase becomes short enough, methods without (much) evaluation, such as a combination of MCTS and a tactics-only search, might become more effective at bridging it.
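
One way to read "MCTS plus tactics-only search" is a playout policy that plays random quiet moves but always resolves capture sequences before a position is scored, so no positional evaluation is needed at the leaves. The sketch below uses a hypothetical Position interface (captures(), legal_moves(), make(), is_terminal(), result()); it is not an existing API.

```cpp
#include <cstddef>
#include <random>

template<class Position>
double playout(Position pos, std::mt19937& rng)
{
    while (!pos.is_terminal()) {
        auto moves = pos.captures();      // tactics first: resolve forced captures
        if (moves.empty())
            moves = pos.legal_moves();    // otherwise pick any quiet move at random
        std::uniform_int_distribution<std::size_t> pick(0, moves.size() - 1);
        pos = pos.make(moves[pick(rng)]);
    }
    return pos.result();                  // 1 = White wins, 0.5 = draw, 0 = Black wins
}
// An MCTS driver would back this result up the tree in the usual way, and the
// endgame databases could cut playouts short once few enough pieces remain.
```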

At least I hope things are going in this direction, as manually optimizing evaluation functions does not seem to be a lot of fun :)

Walter