JJ, thank you for your comments.
Ed, can you share the results of Kingsrow vs Flits+SidikiBook? Like you, I would be surprised if Flits became very much stronger, but why leave it up to speculation?
I didn't run the experiment, for several reasons:
- I didn't fully understand how to enable and disable this additional book. From the video it looks like you just click something in a menu, but it's not clear what that actually does. Can it be undone, or turned off? From what I could see, it doesn't seem to need to be told the filename of the additional book file.
- I ran the first brief match using the hub version of Flits, and I don't know whether that hub version supports opening books.
- I kind of lost interest in the experiment when I heard what the self-learning was all about. I agree that doing this just for fun, as an experiment to see how much Flits can improve with an "anti-Kingsrow" book, is perfectly acceptable. The point of my post, although maybe I didn't make it very well, is that it doesn't feel right to make a book that specifically counters every move of another program's book and then use it in competition, as that sounds like basically a form of copying a book. It's just my opinion, and I was interested to see what other engine developers thought about it.
Although I'm not planning to do it, Sidiki or anyone else can download Kingsrow and run the experiment.
Another thing about "fair competition" is the fact that Ed shared his optimization program with Bert and no one else. With this program and the explanation offered by Ed, every programmer can generate a very strong evaluation function in a short period of time. In my opinion, Bert now has an advantage over other programmers who don't have access to the optimization program.
I never thought about it that way, but I can see your point. I shared the code with Bert because he asked me for it, and we had collaborated on other projects in the past, like the DXP wrappers for Truus and Flits, and Bert had shared some of his language localization code with me. I don't think there's anything proprietary or new in the code. Gradient descent is a well-documented algorithm, and there's a lot of information available describing how to implement it. It is a lot of work to develop and debug, though, or at least it was for me. Perhaps I should think about publishing the source.
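The general idea is simple enough to sketch. Here is a toy version in Python of one common setup, fitting the weights of a linear evaluation to game results through a sigmoid; this isn't literally my code, just the textbook technique, and the feature encoding, learning rate, and scaling constant are placeholders:

```python
import math

# Toy example: tune the weights of a linear evaluation so that its
# score, squashed through a sigmoid, predicts the game result
# (1 = win, 0.5 = draw, 0 = loss, from the side to move's view).

def evaluate(weights, features):
    # Linear evaluation: weighted sum of feature values.
    return sum(w * f for w, f in zip(weights, features))

def predict(weights, features, k):
    # Map the score to an expected result in (0, 1).
    return 1.0 / (1.0 + math.exp(-k * evaluate(weights, features)))

def tune(positions, results, n_features, lr=1.0, epochs=200, k=0.01):
    # Batch gradient descent on the mean squared error between
    # predicted and actual game results.
    weights = [0.0] * n_features
    n = len(positions)
    for _ in range(epochs):
        grad = [0.0] * n_features
        for feats, result in zip(positions, results):
            p = predict(weights, feats, k)
            # Chain rule: d/dw_i of (p - result)^2 is
            # 2 * (p - result) * k * p * (1 - p) * f_i
            common = 2.0 * (p - result) * k * p * (1.0 - p)
            for i, f in enumerate(feats):
                grad[i] += common * f
        for i in range(n_features):
            weights[i] -= lr * grad[i] / n
    return weights

# Two made-up positions with three features each, and their results.
positions = [[1.0, 0.0, 2.0], [0.0, 1.0, -1.0]]
results = [1.0, 0.0]
print(tune(positions, results, n_features=3))
```

The hard part isn't the loop itself; it's choosing the features, generating enough labeled positions, and making it all fast enough to run over millions of them.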
-- Ed