NNUE

Discussion about development of draughts in the time of computer and Internet.
Sidiki
Posts: 321
Joined: Thu Jan 15, 2015 16:28
Real name: Coulibaly Sidiki

Re: NNUE

Post by Sidiki » Mon Dec 28, 2020 18:10

BertTuyt wrote:
Sun Dec 27, 2020 14:21
Sidiki, to reply to your other question.

I used the previous match file and extracted a file with positions, and a file with result labels (based upon the previous Damage Evaluation, scaled with a Sigmoid function).
You could say that in this way you project the old evaluation function into a neural net architecture.

So as said before, not (yet) a bootstrap or zero approach....

I think the chess NNUE world also started this way and then proceeded with further learning through self-play, but I'm not 100% sure.
Guess Rein knows the details.

Bert
Thank Bert,
I understand. I also play chess, and I see that these days NNUE is the fashion among the best chess engines, and it seems to improve their strength.
With your first results, we hope that this can also bring a plus to the strength of draughts engines.

Sidiki

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: NNUE

Post by BertTuyt » Tue Dec 29, 2020 15:03

Sidiki, thanks for your post.

In all honesty I doubt that with NNUE one would get better results compared with the current Scan pattern-based evaluation functions.
They are extremely efficient and fast, so I don't know if we would even get close.

But working on NNUE is fun, and I like the fact that the neural network does not contain any hand-crafted features at all.
So I will at least continue for some time, to see what's possible in the end....

Bert

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: NNUE

Post by BertTuyt » Tue Dec 29, 2020 23:47

With some optimization in the AVX2 (256-bit) code (thanks also to Joost), I was able to increase the Damage search speed with NNUE to 6.0 MN/sec.
This yielded a small improvement in strength.
Unfortunately my processor does not support AVX-512, which could further improve SIMD performance.

In a recent DXP match against Kingsrow (with the same settings), the result from Kingsrow's perspective was 31W, 127D, which yields an Elo difference of 69.
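
For reference, a minimal sketch of how such an Elo difference follows from the match score, assuming the standard logistic Elo model (function and variable names are illustrative):

```python
import math

def elo_diff(wins, draws, losses):
    """Elo difference implied by a match result, from the winner's perspective."""
    score = (wins + 0.5 * draws) / (wins + draws + losses)   # score fraction
    return 400.0 * math.log10(score / (1.0 - score))         # inverse of the expected-score formula

# Kingsrow's perspective: 31 wins, 127 draws, 0 losses out of 158 games -> ~69 Elo
print(round(elo_diff(31, 127, 0)))
```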

So small steps....
Will keep you posted.

Bert

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: NNUE

Post by BertTuyt » Wed Dec 30, 2020 10:19

When comparing eval scores between the NNUE eval and the original (pattern-based) Damage eval, I found that the scaling in NNUE was smaller.
This also indirectly affects the pruning mechanism.
It all boils down to the choice of the factor in the sigmoid function.

I did a test where I simply multiplied the resulting NNUE score by 2.
The DXP match results were encouraging: from Kingsrow's perspective, 25W, 133D, which yields an Elo difference of 55.
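
As a minimal sketch of the scaling point, assuming labels of the form sigmoid(eval / K) and that the net's output is mapped back to a search score by the inverse transform (the constant K and the function names are illustrative, not Damage's actual code):

```python
import math

K = 400.0  # illustrative sigmoid scale factor; the real value is a tuning choice

def eval_to_label(score):
    """Map an engine score to a (0, 1) training label."""
    return 1.0 / (1.0 + math.exp(-score / K))

def label_to_eval(p):
    """Inverse mapping: turn the net output back into a search score."""
    return -K * math.log(1.0 / p - 1.0)

# If the trained net effectively learned a larger scale, its scores come out compressed.
# Multiplying the output by a constant (here 2) stretches them back, which also changes
# how score-based pruning margins behave in the search.
rescaled_score = 2.0 * label_to_eval(0.60)
```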

There are so many new knobs to turn in NNUE, and I feel I have only scratched the surface so far.
Will keep you posted.

Bert

Sidiki
Posts: 321
Joined: Thu Jan 15, 2015 16:28
Real name: Coulibaly Sidiki

Re: NNUE

Post by Sidiki » Wed Dec 30, 2020 21:36

BertTuyt wrote:
Tue Dec 29, 2020 15:03
Sidiki, thanks for your post.

In all honesty I doubt that with NNUE one would get better results compared with the current Scan pattern-based evaluation functions.
They are extremely efficient and fast, so I don't know if we would even get close.

But working on NNUE is fun, and I like the fact that the neural network does not contain any hand-crafted features at all.
So I will at least continue for some time, to see what's possible in the end....

Bert
Hi Bert,

Thanks.
I see what you are trying to say; however, I see that Damage is still growing in strength with this NNUE. Does that mean that NNUE's technology is based on self-learning? Is it the same algorithm as A0, i.e. using a training file?

Sidiki.

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: NNUE

Post by BertTuyt » Thu Dec 31, 2020 11:00

Sidiki, no the approach is different and more in line with Stockfish NNUE.

Bert

Sidiki
Posts: 321
Joined: Thu Jan 15, 2015 16:28
Real name: Coulibaly Sidiki

Re: NNUE

Post by Sidiki » Thu Dec 31, 2020 18:23

BertTuyt wrote:
Thu Dec 31, 2020 11:00
Sidiki, no the approach is different and more in line with Stockfish NNUE.

Bert
Hi Bert,

Understood. If you can take the time to answer this perhaps naive question of mine, I would be very glad; I want to learn.
Suppose that you, Bert, as a draughts player, wanted to make an NNUE version of yourself.

What process would you follow?

Thanks

Sidiki.

Rein Halbersma
Posts: 1722
Joined: Wed Apr 14, 2004 16:04

Re: NNUE

Post by Rein Halbersma » Thu Dec 31, 2020 18:41

BertTuyt wrote:
Sun Dec 27, 2020 14:21
Guess Rein knows the details.
Hardly! I think getting zero-knowledge self-play reinforcement learning (RL) correct is quite tricky. Obviously, it worked for AlphaZero / LeelaZero etc. But those were massive distributed efforts, with hundreds (even thousands) of CPU years of training.

I haven't looked at how Stockfish is generating its training games for the NNUE evals. But one thing that is clear from the literature on RL is that you need to have sufficient exploration in order to be able to have an eval that generalizes to unseen positions. Essentially you need examples of imbalanced positions in order to learn the value of all parameters.

That could mean using what is called an epsilon-greedy strategy (i.e. take the best move in 100% - epsilon of the cases and a random move in the remaining epsilon percent of the cases, with epsilon ~ 10%), or using softmax selection, i.e. choosing successor i with probability exp(eval_i/tau) / sum_j exp(eval_j/tau), where i runs over the successor positions and tau is a temperature (as tau -> 0, you recover the regular max function). The softmax with nonzero temperature was used in AlphaZero (at least for the opening moves).
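
A minimal sketch of both selection schemes over a list of successor evals (the names and the eval units are assumptions for illustration):

```python
import math
import random

def epsilon_greedy(evals, epsilon=0.10):
    """Pick the best successor with probability 1 - epsilon, a random one otherwise."""
    if random.random() < epsilon:
        return random.randrange(len(evals))
    return max(range(len(evals)), key=lambda i: evals[i])

def softmax_select(evals, tau=100.0):
    """Sample successor i with probability exp(eval_i / tau) / sum_j exp(eval_j / tau).
    As tau -> 0 this approaches the plain argmax."""
    m = max(evals)  # subtract the maximum for numerical stability
    weights = [math.exp((e - m) / tau) for e in evals]
    return random.choices(range(len(evals)), weights=weights, k=1)[0]

# Example: evals of the successor positions in some centipawn-like unit
successor_evals = [+30, +10, -5, +25]
print(epsilon_greedy(successor_evals), softmax_select(successor_evals, tau=50.0))
```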

There are also all kinds of details that you need to get right (like not forgetting previously learned strategies that allow you to beat weak opponents as your program becomes stronger), and those details cannot really be predicted ahead of time. So you need to experiment and see what works. Generating lots of training games will take >90% of the time on your machine; the neural network training itself is much cheaper (even more so if you have a GPU).

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: NNUE

Post by BertTuyt » Thu Dec 31, 2020 18:51

Sidiki, in the first instance I want to optimize the network topology.
Now I use a 4-layer network with 191 inputs.
The number of neurons in the 4 layers is currently 192, 32, 32, and 1.
I will also do a test with 256 neurons in the first layer.

It is also possible to apply a dedicated network for every phase.
This is an approach applied by Jonathan in his checkers program.
And certainly this is also on my test to-do list.

The number of inputs is far smaller compared with what one is doing in chess.
Rein posted some suggestions, which you can read earlier in this thread.

Learning in this context means that I use a fixed set of labeled positions, in my case 98M positions.
Labeling can be improved in 2 ways: using another evaluation (for example Scan), or using a shallow search as the basis for the labels.

Basically with this fixed input set you can train the weights of the network.
Here I can try to optimize the (many) parameters within the Keras/Tensorflow framework.
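
For illustration, a minimal Keras sketch of a dense net with that topology (191 inputs, then 192, 32, 32 and 1 neurons), trained against sigmoid-scaled labels; the activations, optimizer and batch size are assumptions, not necessarily Damage's settings:

```python
import tensorflow as tf

# 191 board-feature inputs -> 192 -> 32 -> 32 -> 1 (sigmoid output, matching the labels)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(191,)),
    tf.keras.layers.Dense(192, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="mse")

# positions: (N, 191) feature vectors; labels: (N,) sigmoid-scaled evaluation targets
# model.fit(positions, labels, batch_size=16384, epochs=20, validation_split=0.05)
```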

Last but not least, within the chess world one has tools to further optimize strength through self-play games.
But this is not within my short-term scope.

The main focus is to further reduce the Elo difference with Kingsrow, which is now 55 and certainly not good enough to really compete in a tournament.

Bert

Madeleine Birchfield
Posts: 12
Joined: Mon Jun 22, 2020 12:36
Real name: Madeleine Birchfield

Re: NNUE

Post by Madeleine Birchfield » Fri Jan 01, 2021 07:02

BertTuyt wrote:
Tue Dec 29, 2020 15:03
Sidiki, thanks for your post.

In all honesty I doubt that with NNUE one would get better results compared with the current Scan pattern-based evaluation functions.
They are extremely efficient and fast, so I don't know if we would even get close.

But working on NNUE is fun, and I like the fact that the neural network does not contain any hand-crafted features at all.
So I will at least continue for some time, to see what's possible in the end....

Bert
A reason why NNUE was so successful in chess is the integer quantisation of the network weights (16-bit and 8-bit integers are used in most layers instead of floats), yielding a significant speedup in evaluating the nets.
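
A minimal sketch of that kind of post-training quantisation, assuming a simple symmetric per-layer scale for the trained float weights (the rounding and clipping choices are illustrative):

```python
import numpy as np

def quantize_symmetric(weights, bits=8):
    """Quantize a float weight matrix to signed integers with one scale per layer."""
    qmax = 2 ** (bits - 1) - 1                   # 127 for int8, 32767 for int16
    scale = np.max(np.abs(weights)) / qmax       # symmetric scale factor
    dtype = np.int8 if bits == 8 else np.int16
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(dtype)
    return q, scale

# Integer weights let the inner products run on fast SIMD integer instructions;
# the float result is recovered as integer_accumulator * scale at the layer output.
w = np.random.randn(191, 192).astype(np.float32)
w_int16, s = quantize_symmetric(w, bits=16)
w_approx = w_int16.astype(np.float32) * s        # dequantized approximation of w
```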

Madeleine Birchfield
Posts: 12
Joined: Mon Jun 22, 2020 12:36
Real name: Madeleine Birchfield

Re: NNUE

Post by Madeleine Birchfield » Fri Jan 01, 2021 07:07

BertTuyt wrote:
Sun Dec 27, 2020 14:21
Sidiki, to reply to your other question.

I used the previous match file and extracted a file with positions, and a file with result labels (based upon the previous Damage Evaluation, scaled with a Sigmoid function).
You could say that in this way you project the old evaluation function into a neural net architecture.

So as said before, not (yet) a bootstrap or zero approach....

I think the chess NNUE world also started this way and then proceeded with further learning through self-play, but I'm not 100% sure.
Guess Rein knows the details.

Bert
The original Stockfish nets were initially trained on Stockfish's non-NNUE evaluations at depth 8 of the training positions, rather than the static evaluations of the training positions.
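
For illustration, labeling with a shallow search rather than a static eval could look like the sketch below; `search(pos, depth)` is a hypothetical engine hook and the sigmoid scale K is an assumption:

```python
import math

K = 400.0  # illustrative sigmoid scale, as before

def label_with_search(positions, search, depth=8):
    """Label each training position with the sigmoid of a shallow-search score.
    `search(pos, depth)` is a hypothetical hook into the engine's search."""
    return [1.0 / (1.0 + math.exp(-search(pos, depth) / K)) for pos in positions]
```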

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: NNUE

Post by BertTuyt » Fri Jan 01, 2021 12:58

Madeleine, valid remarks.

The network I use (now) has 16-bit integer quantization for the weights. It would indeed help (for search speed) to apply 8 bits for specific layers of the network.

From here, there are several ways to find a potential improvement:
* Increase network size (for example increase the first layer size from 192 to 256 or 512, ....)
* Increase number of inputs (mentioned by Rein)
* As you wrote, do not use static evaluation, but the score of a shallow search (as applied by Stockfish)
* Use several different networks for the game phases (this is implemented by Jonathan in his checkers program, with 4 networks; I think Stockfish uses only 1 network).

For now I started with the first option, and if I have interesting results, I will share them.

Bert

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: NNUE

Post by BertTuyt » Fri Jan 01, 2021 16:10

With 256 neurons in the first layer the match result was (perspective KingsRow) 22W, 136D.
Elo difference 49.
Not sure if this is within the margin of error, but if one thinks in a positive way, slowly improving day by day :D

Bert

Sidiki
Posts: 321
Joined: Thu Jan 15, 2015 16:28
Real name: Coulibaly Sidiki

Re: NNUE

Post by Sidiki » Sat Jan 02, 2021 04:59

BertTuyt wrote:
Fri Jan 01, 2021 16:10
With 256 neurons in the first layer the match result was (perspective KingsRow) 22W, 136D.
Elo difference 49.
Not sure if this is within the margin of error, but if one thinks in a positive way, slowly improving day by day :D

Bert
Great result with Damage's NNUE; I think that, as I said, we should see a new kind of strength with the NNUE concept.

Bert, can you post the PDN, please?

Thanks

Sidiki

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: NNUE

Post by BertTuyt » Sat Jan 02, 2021 12:26

Sidiki, here they are.

There were 2 unknown results, both wins for Kingsrow (game 48 and game 84), for which I corrected the scores in the file.
Result: 22 wins for Kingsrow, 136 draws, Elo difference 49.

Bert
Attachments
DXPMatch nnue 1-1-2021.pdn
(164.93 KiB) Downloaded 631 times
