Damage 15.3
Re: Damage 15.3
I still do not believe that there is a huge ELO gain possible beyond Scan / Kingsrow with normal time settings.
But advanced approaches like A0 can teach us much about the relevant features in draughts (like man/king material values, tempo, left/right balance, game phase), which are now (more or less) pre-defined by the programmers (although their values are determined through ML).
Bert
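What Bert describes is the classical setup: the programmer chooses the features, and machine learning only fits their weights. A minimal sketch of such an evaluation, where all feature names and numbers are illustrative only (not Damage's or Scan's actual terms):

Code:
# Hand-crafted features with machine-learned weights (illustrative values only).
features = {
    "man_material": 2,          # difference in men
    "king_material": 0,         # difference in kings
    "tempo": 5,                 # advancement of the men
    "left_right_balance": -1,   # distribution over both wings
}
weights = {
    "man_material": 100.0,      # weights like these are what ML would fit,
    "king_material": 310.0,     # e.g. via logistic regression on game results
    "tempo": 4.5,
    "left_right_balance": -8.0,
}
score = sum(weights[name] * value for name, value in features.items())
print(score)  # evaluation score in (hypothetical) centi-man units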
Re: Damage 15.3
As stated in a previous post, I will do now some tests to measure ELO as a function of games with increased win/loss ratio.
I used the same data set, but only used 1 out of X of the draw games.
The test now running is with X = 3, so only one out of 3 draws is used, effectively increasing the decisive ratio by the same factor of 3.
Tomorrow I can share a first result.
Bert
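A minimal sketch of this draw thinning, assuming games are stored as (positions, result) pairs with result 1, 0.5 or 0 (names are hypothetical, not Damage's actual training code):

Code:
def thin_draws(games, X=3):
    """Keep every decisive game, but only 1 out of every X draws."""
    kept, draws_seen = [], 0
    for game in games:
        _, result = game
        if result != 0.5:
            kept.append(game)         # wins and losses are always kept
        else:
            draws_seen += 1
            if draws_seen % X == 0:   # one out of every X draws
                kept.append(game)
    return kept

# With X = 3 a set of 90% draws becomes 75% draws: per 100 games,
# 10 decisive games stay and 90 draws shrink to 30.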
Re: Damage 15.3
BertTuyt wrote: ↑Fri Feb 21, 2020 22:51
As stated in a previous post, I will do now some tests to measure ELO as a function of games with increased win/loss ratio.
I used the same data set, but only used 1 out of X of the draw games.
The test now running is with X = 3, so only one out of 3 draws is used, effectively increasing the decisive ratio by the same factor of 3.
Tomorrow I can share a first result.
Bert

OK Bert,
We hope that X = 3 will be promising.
What is, for you, the maximum X value to reach for an efficient test?
Or is it linked to the saturation graph posted earlier?
Thanks.
Sidiki
Re: Damage 15.3
BertTuyt wrote: ↑Fri Feb 21, 2020 22:28
I still do not believe that there is a huge ELO gain possible beyond Scan / Kingsrow with normal time settings.
But advanced approaches like A0 can teach us much about the relevant features in draughts (like man/king material values, tempo, left/right balance, game phase), which are now (more or less) pre-defined by the programmers (although their values are determined through ML).
Bert

ML is definitely the best programming approach to get a strong engine.
Re: Damage 15.3
Rein Halbersma wrote: ↑Fri Feb 21, 2020 18:30
You do understand that Google/Deepmind used 1700 years worth of computing *in parallel* in order to achieve all that in 4 hours?

You trust Google too much.
Re: Damage 15.3
Krzysztof Grzelak wrote: ↑Sat Feb 22, 2020 11:21
Rein Halbersma wrote: ↑Fri Feb 21, 2020 18:30
You do understand that Google/Deepmind used 1700 years worth of computing *in parallel* in order to achieve all that in 4 hours?
You trust Google too much.

I am asking myself if A0 can beat RedFish, which crashed LC0.
Re: Damage 15.3
Krzysztof Grzelak wrote: ↑Sat Feb 22, 2020 11:21
Rein Halbersma wrote: ↑Fri Feb 21, 2020 18:30
You do understand that Google/Deepmind used 1700 years worth of computing *in parallel* in order to achieve all that in 4 hours?
You trust Google too much.

The 1700 years of computing comes from Gian-Carlo Pascutto, author of Leela-Zero, the open-source program that actually reproduced the AlphaZero results. See http://computer-go.org/pipermail/comput ... 10307.html
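For scale, the parallelism implied by that claim is easy to compute (a sketch of the arithmetic only; the 1700-year figure itself is Pascutto's estimate from the link above):

Code:
# "1700 years of compute in 4 hours of wall-clock time" as plain arithmetic.
sequential_hours = 1700 * 365 * 24          # ~14.9 million hours of compute
wall_clock_hours = 4
print(sequential_hours / wall_clock_hours)  # ~3.7 million-fold parallelism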
Re: Damage 15.3
Rein Halbersma wrote: ↑Sat Feb 22, 2020 14:16
Krzysztof Grzelak wrote: ↑Sat Feb 22, 2020 11:21
Rein Halbersma wrote: ↑Fri Feb 21, 2020 18:30
You do understand that Google/Deepmind used 1700 years worth of computing *in parallel* in order to achieve all that in 4 hours?
You trust Google too much.
The 1700 years of computing comes from Gian-Carlo Pascutto, author of Leela-Zero, the open-source program that actually reproduced the AlphaZero results. See http://computer-go.org/pipermail/comput ... 10307.html

Good reply Rein,
All this proves again that supercomputers are very, very powerful. I remember a post (sorry, I don't have the link) that spoke of a French supercomputer, not even the best one, which said that the amount of data this machine computes in a day would take all of humanity, with one calculator per person, 50 years to match.
So I believe this isn't too much for Google's neural machines.
Sidiki
Re: Damage 15.3
Sidiki wrote: ↑Sat Feb 22, 2020 16:39
Good reply Rein,
All this proves again that supercomputers are very, very powerful. I remember a post (sorry, I don't have the link) that spoke of a French supercomputer, not even the best one, which said that the amount of data this machine computes in a day would take all of humanity, with one calculator per person, 50 years to match.
So I believe this isn't too much for Google's neural machines.
Sidiki

Unfortunately that is not a good answer, Rein.
Re: Damage 15.3
Krzysztof Grzelak wrote: ↑Sat Feb 22, 2020 17:23
Sidiki wrote: ↑Sat Feb 22, 2020 16:39
Good reply Rein,
All this proves again that supercomputers are very, very powerful. I remember a post (sorry, I don't have the link) that spoke of a French supercomputer, not even the best one, which said that the amount of data this machine computes in a day would take all of humanity, with one calculator per person, 50 years to match.
So I believe this isn't too much for Google's neural machines.
Sidiki
Unfortunately that is not a good answer, Rein.

Why? We are speaking of a neural network: many computers or supercomputers working together. Do you know what a petaflop is, Krzysztof Grzelak?
You saw the link with the proof of what Rein said, and it confirms what Bert said about the 1700 years of computing on "a standard computer", say your 32 GB of RAM.
It's just a calculation that was done to estimate this time. Have you read the message at the link Rein sent?
http://computer-go.org/pipermail/comput ... 10307.html
Sidiki
Re: Damage 15.3
Sidiki wrote: ↑Sat Feb 22, 2020 18:51
Why? We are speaking of a neural network: many computers or supercomputers working together. Do you know what a petaflop is, Krzysztof Grzelak?
You saw the link with the proof of what Rein said, and it confirms what Bert said about the 1700 years of computing on "a standard computer", say your 32 GB of RAM.
It's just a calculation that was done to estimate this time. Have you read the message at the link Rein sent?
http://computer-go.org/pipermail/comput ... 10307.html
Sidiki

I have reason not to believe in such things.
Re: Damage 15.3
Krzysztof Grzelak wrote: ↑Sat Feb 22, 2020 19:02
Sidiki wrote: ↑Sat Feb 22, 2020 18:51
Why? We are speaking of a neural network: many computers or supercomputers working together. Do you know what a petaflop is, Krzysztof Grzelak?
You saw the link with the proof of what Rein said, and it confirms what Bert said about the 1700 years of computing on "a standard computer", say your 32 GB of RAM.
It's just a calculation that was done to estimate this time. Have you read the message at the link Rein sent?
http://computer-go.org/pipermail/comput ... 10307.html
Sidiki
I have reason not to believe in such things.

Even now, you don't tell us why you do not believe in such things.
It's like someone telling you that the distance a car covers in one hour at 300 km/h would take a human on foot 20 hours, given a maximum walking speed of 15 km/h, and even with these figures you still say the calculation is wrong.
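As plain arithmetic (the one hour of driving is an assumption added to make the numbers meet):

Code:
# The walking analogy as a scaling calculation.
car_speed, walking_speed = 300, 15   # km/h
distance = car_speed * 1             # km covered by the car in one hour
print(distance / walking_speed)      # 20.0 hours on foot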
Re: Damage 15.3
Herewith the actual status of the next test.
I still use the same data set, but I skip 2 out of 3 draws, effectively increasing the win/loss ratio (by a factor of 3).
Below the table and graph. The ELO rating indicates how much Scan is better in comparison with Damage 15.3.
It is clearly visible (blue line old test, orange line new test) that initial learning is much faster/better, but that the saturation level seems to be the same.
Code:
Games      W    D    L   U   T    ELO
10000      31   127  0   0   158  69
20000      11   147  0   0   158  24
40000      7    151  0   0   158  15
80000
160000     6    148  1   3   158  11
320000
640000
1280000
I will complete the test in the next days.
An interesting research question is whether there is further improvement when one increases the win/loss ratio even more (for example, by not including any draws at all), or whether there is an optimum.
But most likely I will only work hereafter on different evaluation functions, based upon slightly adapted features and/or pattern regions.
Keep you posted,
Bert
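The ELO column in the table above is consistent with the standard logistic Elo formula applied to the match score, Elo = 400 * log10(s / (1 - s)) with s = (W + D/2) / T. A minimal check against the table (my reading of the numbers, not necessarily Bert's exact tool):

Code:
import math

def elo_diff(wins, draws, losses):
    """Elo difference implied by a match score s = (W + D/2) / N."""
    n = wins + draws + losses
    s = (wins + 0.5 * draws) / n
    return 400.0 * math.log10(s / (1.0 - s))

print(round(elo_diff(31, 127, 0)))   # 69, matching the 10000-games row
print(round(elo_diff(11, 147, 0)))   # 24, matching the 20000-games row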
Re: Damage 15.3
Hi Bert,
I remember trying to skip draws, calling the label target "Wilo" (win/loss), in the hope that this component would be comparable between variants. But no, draws are essential. That was likely with killer draughts, but things can only get worse in ID (throwing away even more).
Congrats on your success with patterns, BTW. It's a rite of passage.
Fabien.
Re: Damage 15.3
Fabien, thanks for your email.
But the success I now have is to a large extent based upon your willingness to share your ideas and source code, and Ed's willingness to share his optimization program.
I also believe that you need draws to better balance the values of the weights.
But I think that the draw ratio in my set was rather high (around 90%), whereas I'm now closer to 70% (I think this is also the number Ed has in his set).
So the interesting question is: what is the right quality of games?
If the quality is too high you have hardly any mistakes, but with only draws, all weights stay at zero.
When the quality of the training set is lower, you have a better win/loss to draw ratio, but maybe too many blunders, which might not support the logistic regression.
Or you could have high-quality games, but reduce the draws you use for training, so that 30% of the games are decisive.
Not sure if this 30% is an optimum though...
Bert
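Bert's remark that "with only draws, all weights stay at zero" falls straight out of the logistic-regression setup: at zero weights the predicted score is already 0.5 for every position, so with all targets at 0.5 the cross-entropy gradient vanishes and the weights never move. A minimal sketch with toy features and draws as a 0.5 target (an assumption about the setup, not Ed's actual optimizer):

Code:
import numpy as np

def train(X, y, lr=0.1, epochs=1000):
    """Fit evaluation weights by logistic regression on game results."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid of the linear eval
        w -= lr * X.T @ (p - y) / len(y)   # cross-entropy gradient step
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))             # toy position feature vectors
y_draws = np.full(1000, 0.5)               # draw-only labels
print(train(X, y_draws))                   # all weights stay at exactly zero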