Flits self-learning mode

Discussion about the development of draughts in the age of computers and the Internet.
jj
Posts: 190
Joined: Sun Sep 13, 2009 23:33
Real name: Jan-Jaap van Horssen
Location: Zeist, Netherlands

Re: Flits self-learning mode

Post by jj » Tue May 26, 2020 11:55

Fabien, thanks for your answer. I did ask others, or rather everybody, since this is a public forum, but so far only a few people have joined the discussion. And "the" opinion seems to be that this is allowed, although I only count you and Rein (who is not competing himself). What do the others think? (Ed, Jelle, Michel, Harm, etc.) I haven't heard Ed say he approved all this. He probably forgot to say explicitly that his code was intended for study and not for direct use. And what are you going to do when the results are there and you are thanked a thousand times?

So it is up to the TD, in our case that means Krzysztof. Not sure I still like to compete myself.

Fabien Letouzey
Posts: 299
Joined: Tue Jul 07, 2015 07:48
Real name: Fabien Letouzey

Re: Flits self-learning mode

Post by Fabien Letouzey » Tue May 26, 2020 13:29

jj wrote:
Tue May 26, 2020 11:55
So it is up to the TD, in our case that means Krzysztof. Not sure I still like to compete myself.
I have a magic trick to cheer people up: I just show them a part of my life.

Years ago, Sidiki and I (independently) told everyone that Edeon Sport was using Scan 2's eval. What did Krzysztof do? He paid them for the next version (as I recall; not 100% sure). So every tournament (that was twice a year at some point, less now I think), I have the pleasure of seeing Edeon Sport (and of course Scan will only draw those games, to add insult to injury). In all fairness, Krzysztof did something: he asked them whether they cheated and they said "no". I am no expert on police work, so I assumed it was standard procedure.

Hopefully the Bert incident will appear less hurtful when seen in proportion.

jj
Posts: 190
Joined: Sun Sep 13, 2009 23:33
Real name: Jan-Jaap van Horssen
Location: Zeist, Netherlands

Re: Flits self-learning mode

Post by jj » Tue May 26, 2020 14:05

Yes, thank you for sharing, Fabien. The Edeon case seems clear: if there is proof, they should be banned. But of course they can hide it now if they want.

I recently checked where my Android app can be downloaded for free; not funny either. Now I have other problems: try asking Google and Samsung technical questions and getting answers.
Spoiler:
No answers.

Ed Gilbert
Posts: 854
Joined: Sat Apr 28, 2007 14:53
Real name: Ed Gilbert
Location: Morristown, NJ USA

Re: Flits self-learning mode

Post by Ed Gilbert » Tue May 26, 2020 17:19

I haven't heard Ed say he approved all this.
Bert asked me for the code, and I gave it to him without any restrictions. If that upset the balance of draughts competition, then I am to blame, not Bert. Sure, tournaments are fun, but really Fabien has already won: the strongest engines are using his pattern technology. I don't take any pride in the fact that, after several years of effort, Kingsrow has finally more or less caught up to Scan, because it was done with the pattern technology that Fabien brought to draughts; without that we'd all still be way behind.
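
For anyone who hasn't looked at how these pattern evals work, the rough idea is simple: the board is covered by overlapping regions of squares, the contents of each region index a table of trained weights, and the eval is just the sum of those weights. Below is a minimal Python sketch of the idea, not Scan's or Kingsrow's actual code; the square lists and table sizes are made up, and kings and game phase are left out.

Code:

# Minimal sketch of a pattern-based evaluation (illustration only).
# Each pattern is a fixed set of board squares. The contents of those squares
# form a base-3 index (empty / own man / opponent man), which selects a
# trained weight. The evaluation is the sum of all pattern weights.

EMPTY, OWN, OPP = 0, 1, 2

# Made-up patterns: lists of square numbers (1..50 on the 10x10 board).
PATTERNS = [
    [1, 2, 3, 6, 7, 8],        # a region near the back rank
    [21, 22, 23, 26, 27, 28],  # a region in the centre
]

# One weight table per pattern, 3**len(squares) entries each, filled by
# training (e.g. fitting to game results); zeros here as placeholders.
WEIGHTS = [[0.0] * (3 ** len(p)) for p in PATTERNS]

def pattern_index(board, squares):
    """Encode the contents of the given squares as a base-3 number."""
    index = 0
    for sq in squares:
        index = index * 3 + board[sq]  # board[sq] is EMPTY, OWN or OPP
    return index

def evaluate(board):
    """Sum the trained weights of all patterns; board is indexable by square."""
    return sum(WEIGHTS[i][pattern_index(board, squares)]
               for i, squares in enumerate(PATTERNS))

All of the draughts knowledge ends up in those weight tables, which is why the training data and the fitting procedure matter far more than the handful of lines above.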

jj
Posts: 190
Joined: Sun Sep 13, 2009 23:33
Real name: Jan-Jaap van Horssen
Location: Zeist, Netherlands

Re: Flits self-learning mode

Post by jj » Tue May 26, 2020 18:44

Ed, it is very gentlemanly of you to take the blame. So you are not going to share with us the intention with which you gave the code, or what you think of what was done with it. Don't you think the Damage eval derives a lot from the Kingsrow eval this way? More than the Kingsrow eval derives from the Scan eval, I would say. Or do you think that is basically the same?

Is the conclusion that "anything goes", except direct copy-paste of end products?

Fabien Letouzey
Posts: 299
Joined: Tue Jul 07, 2015 07:48
Real name: Fabien Letouzey

Re: Flits self-learning mode

Post by Fabien Letouzey » Wed May 27, 2020 08:46

Rein Halbersma wrote:
Tue May 26, 2020 11:32
There are no high-performance AlphaZero draughts programs yet.
Last year, there were two candidate projects I was associated with (mostly at the advice level): one by Facebook, and one by the author of Galvanise-zero (G0) that I already mentioned on this forum. Unfortunately they were both terminated so early that we don't have much insight.

The FB one played illegal moves during a competition, presumably because of bad communication between team members (bad coordinate conversions on the UI side I'm guessing ...). The bottom line is that they don't mention draughts anywhere (probably including in the code). There might have been a perfectly working A0 draughts engine, possibly built on powerful hardware, but we'll never know anything about it; quite poetic.

Originally called "Elf" (playing only Go for the first few years), it was later renamed "Polygames".

https://ai.facebook.com/blog/open-sourc ... self-play/

G0 is for me much more interesting, as it was designed to run (don't forget training) on less-than-infinite hardware. It might also be one of the few projects that didn't try to copy the A0 paper verbatim (necessary for less-than-infinite hardware anyway).

The author was successful in multiple games before (Hex and Amazons, I think).
Initial draughts results were not so good, though. Only a few games were played (on lidraughts); they might have been strategically correct, but the program routinely fell into combinations, never seeing them coming. Post-mortem analysis revealed that the combination moves were given tiny probabilities by the policy, making G0 blind to them. I suggested modifying the exploration parameter (trusting the learned policy much less), but it seems that Richard had already lost interest in A0 altogether (as he mentioned by email). It's been nearly a year already, and I haven't had any news since.

https://github.com/richemslie/galvanise_zero
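
To make the "exploration parameter" remark concrete: in a PUCT-style selection rule the policy prior multiplies the exploration bonus, so a move the network gives a tiny probability is almost never tried, no matter how strong it is. A rough Python sketch of the generic rule follows (not G0's actual code; the names and the constant are only illustrative):

Code:

import math

C_PUCT = 1.5  # exploration constant; raising it means trusting the policy less

def select_child(children, parent_visits):
    """Pick the child maximising Q + U; each child is a dict with
    'prior' (policy probability), 'visits' and 'value_sum'."""
    def score(c):
        q = c['value_sum'] / c['visits'] if c['visits'] else 0.0
        u = C_PUCT * c['prior'] * math.sqrt(parent_visits) / (1 + c['visits'])
        return q + u
    return max(children, key=score)

def flatten_prior(priors, eps=0.25):
    """Mix the network prior with a uniform distribution; another way to
    trust the policy less (similar in spirit to A0's root Dirichlet noise)."""
    n = len(priors)
    return [(1.0 - eps) * p + eps / n for p in priors]

With a tiny prior the exploration term stays negligible, so a combination the policy never learned to like simply gets no visits; raising the constant or flattening the prior trades search efficiency for tactical safety.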

There is some insight from the G0 experiments. It failed (relatively) in exactly the games Scan was designed to play: Othello (10x10 to be interesting) and draughts. A possible explanation is that those games have a low branching factor (BF), while A0 techniques come from Go. More generally, the A0 approach might not be as game-independent as we thought after the original paper; I guess there is no free lunch after all.

A0 as a hobby also looks very boring. It's mostly parameter tuning, and experiments take 2+ weeks if you don't have access to a cluster. It also costs quite a bit. Richard had 3-4 high-end GPUs, and additionally spent $100 a month on cloud computing for about 2 years. I think that he had fun overall, but it's not a long-term hobby.

Fabien.

Rein Halbersma
Posts: 1720
Joined: Wed Apr 14, 2004 16:04

Re: Flits self-learning mode

Post by Rein Halbersma » Wed May 27, 2020 11:15

Thanks, Fabien, for this update on the NN front. The huge resource requirements are not very helpful for doing quick experiments with fast feedback. But there is some progress on the Go front here: https://blog.janestreet.com/acceleratin ... ing-in-go/ In poker, people also used huge resources (multi-TB RAM systems) to compute Nash equilibria, but now there are also consumer-grade systems (a single-GPU laptop) that give superhuman performance: https://papers.nips.cc/paper/7993-depth ... -games.pdf I think it's a matter of time before NN programs become feasible for amateurs (budget-wise).

The KataGo paper link in particular is interesting to read: it emphasizes using human ingenuity to come up with better NN primitives to accelerate learning. It should be right up your alley to see how this can be applied to draughts.

Fabien Letouzey
Posts: 299
Joined: Tue Jul 07, 2015 07:48
Real name: Fabien Letouzey

Re: Flits self-learning mode

Post by Fabien Letouzey » Thu May 28, 2020 08:49

Rein Halbersma wrote:
Wed May 27, 2020 11:15
The KataGo paper link in particular is interesting to read: it emphasizes using human ingenuity to come up with better NN primitives to accelerate learning. It should be right up your alley to see how this can be applied to draughts.
The way I see it, DeepMind's MO has always been (so far?) not to optimise resources. As I recall, the original AlphaGo used batch GD (???) for instance, used one position per game, etc ... Such decisions do not make sense for the rest of us, however, and weirdly (in a bad way) remind me of Deep Blue, which famously did not use null-move pruning (they also admitted that singular extensions, as published, were worth 0 Elo).
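
For anyone who doesn't know the trick: null-move pruning gives the opponent a free extra move, and if a reduced-depth search still reaches beta, the node is cut off. A bare-bones Python sketch of the textbook form (nothing engine-specific; evaluate(), generate_moves() and the position methods are placeholders, and real engines also guard against zugzwang):

Code:

R = 2  # typical depth reduction for the null-move search

def search(pos, depth, alpha, beta):
    # Fail-hard alpha-beta with null-move pruning. evaluate(),
    # generate_moves() and the pos.make*/undo* methods are placeholders.
    if depth <= 0:
        return evaluate(pos)

    # Null move: let the opponent move twice. If a shallower search still
    # reaches beta, assume a real move would do so as well and cut off.
    if depth > R and not pos.in_check():
        pos.make_null_move()
        score = -search(pos, depth - 1 - R, -beta, -beta + 1)
        pos.undo_null_move()
        if score >= beta:
            return beta

    for move in generate_moves(pos):
        pos.make(move)
        score = -search(pos, depth - 1, -beta, -alpha)
        pos.undo(move)
        if score >= beta:
            return beta
        if score > alpha:
            alpha = score
    return alpha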

I think it's DeepMind's way of making the algorithms simpler, as they are free from resource constraints. In any case, it seems obvious that A0 can be made much faster; the only question is how many orders of magnitude are needed to make it usable by mortals. And each improvement will bring side effects with it (confirmed by the author of G0), complicating development even more.

As for the paper you cite, it seems extremely specific to A0 internals. I still think that anybody using Go for experiments is bound to make (hidden) assumptions about a large BF; this is visible even in his "general" (game-independent) section: moves on the 1st line.

Fabien.

Krzysztof Grzelak
Posts: 1315
Joined: Thu Jun 20, 2013 17:16
Real name: Krzysztof Grzelak

Re: Flits self-learning mode

Post by Krzysztof Grzelak » Tue Jul 07, 2020 14:38

Fabien Letouzey wrote:
Tue May 26, 2020 13:29
I have a magic trick to cheer people up: I just show them a part of my life.

Years ago, Sidiki and I (independently) told everyone that Edeon Sport was using Scan 2's eval. What did Krzysztof do? He paid them for the next version (as I recall; not 100% sure). So every tournament (that was twice a year at some point, less now I think), I have the pleasure of seeing Edeon Sport (and of course Scan will only draw those games, to add insult to injury). In all fairness, Krzysztof did something: he asked them whether they cheated and they said "no". I am no expert on police work, so I assumed it was standard procedure.

Hopefully the Bert incident will appear less hurtful when seen in proportion.
Please write and tell me what he should do in this case, so as not to hurt anyone with accusations.

pontel
Posts: 41
Joined: Tue Jan 26, 2021 21:48
Real name: João Anselmo Pontel

Re: Flits self-learning mode

Post by pontel » Wed May 19, 2021 16:36

Fabien Letouzey wrote:
Tue May 26, 2020 13:29
jj wrote:
Tue May 26, 2020 11:55
So it is up to the TD, in our case that means Krzysztof. Not sure I still like to compete myself.
I have a magic trick to cheer people up: I just show them a part of my life.

Years ago, Sidiki and I (independently) told everyone that Edeon Sport was using Scan 2's eval. What did Krzysztof do? He paid them for the next version (as I recall; not 100% sure). So every tournament (that was twice a year at some point, less now I think), I have the pleasure of seeing Edeon Sport (and of course Scan will only draw those games, to add insult to injury). In all fairness, Krzysztof did something: he asked them whether they cheated and they said "no". I am no expert on police work, so I assumed it was standard procedure.

Hopefully the Bert incident will appear less hurtful when seen in proportion.

If I'm not mistaken, Fabien Letouzey was a victim of plagiarism by chess developers Vasik Rajlich and Larry Kaufman. In summary, the Deep Rybka program plagiarized Letouzey's Fruit.

His excellence in programming has been much envied! :D

pontel
Posts: 41
Joined: Tue Jan 26, 2021 21:48
Real name: João Anselmo Pontel

Re: Flits self-learning mode

Post by pontel » Wed May 19, 2021 16:50

Sidiki wrote:
Tue May 12, 2020 13:55
Hi all,

For those who know it, Flits has a kind of manual self-learning option that permits adjusting its strength.
I am working on it; just add it to Flits and you will see the strength.

Actually, the draw rate against Scan and Kingsrow is around 99.99%. I haven't yet optimized it for the 2-move ballots.

Sidiki.
Hello my friend Sidiki

In an old project that I abandoned, I used the Book feature with evaluation to record relevant games from matches against other programs, and also from matches between other engines, such as Kingsrow, Dragon, Scan and other strong programs.

In addition, I adapted the book from Flits 1.05 to work with version 3.02, which comes with practically no book. The problem is that the 1.05 book tree is largely limited to the 32-28 opening.

I can make them available here if you are interested. Hug...
