Search Algorithm

Discussion about development of draughts in the time of computer and Internet.
Joost Buijs
Posts: 471
Joined: Wed May 04, 2016 11:45
Real name: Joost Buijs

Re: Search Algorithm

Post by Joost Buijs » Sat Sep 03, 2016 07:07

BertTuyt wrote: Joost, yes it would be interesting to test the Hash function I posted.

Basically I use 4 bitboards, but the empty-squares board is not calculated and stored as part of the MoveGenerator().
I'm not sure whether 3 bitboards (all white pieces, all black pieces, and both colours' kings in one board) is faster or slower than 4 (white/black man/kings), which is what I used so far.

I guess it does not make a huge difference, but it is a little more elegant, and I wanted to reduce the number of bytes I needed to save during the MoveGen().

Bert
Later today I will add your hash-function to my test-program; first I have to go shopping with my wife, which we usually do on Saturdays.

Joost

Joost Buijs
Posts: 471
Joined: Wed May 04, 2016 11:45
Real name: Joost Buijs

Re: Search Algorithm

Post by Joost Buijs » Sat Sep 03, 2016 12:15

Bert,

I don't want to be rude, but I just checked your hash-function and it doesn't look very good. Maybe I made a mistake, because I assumed your HASHCODE is uint64_t.
The side to move is not encoded in your hash-function, so I had to disable checking this.
After letting it run for about 5.5 minutes the result is the following:

Code: Select all

TT: num_buckets = 16777216, tt_size = 1140850688
Allocated TT with size 1140850688 in SP memory.

    -  -  -  -  -
  -  -  -  -  -
    -  m  m  m  -
  m  -  m  m  -
    m  -  m  m  M
  m  M  M  -  M
    -  M  M  M  M
  -  M  M  -  -
    -  -  -  -  -
  -  -  -  -  -

Time = 326.53  Matching hash-probes = 3591082057  Matching probes/sec. = 10997627.189889

Collisions found 1505
This means that you will encounter a collision on average every 2.4 million matching probes.
I guess this will have a negative impact on the playing strength of your engine.

For comparison with Zobrist hashing:

Code: Select all

TT: num_buckets = 16777216, tt_size = 1140850688
Allocated TT with size 1140850688 in SP memory.

    -  -  -  -  -
  -  -  -  -  -
    -  m  m  m  -
  m  -  m  m  -
    m  -  m  m  M
  m  M  M  -  M
    -  M  M  M  M
  -  M  M  -  -
    -  -  -  -  -
  -  -  -  -  -

Time = 333.42  Matching hash-probes = 3558093128  Matching probes/sec. = 10671544.738032

Collisions found 0
Zobrist seems to be about 3% slower; I can live with that, and I will probably keep it in.

Joost

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Sat Sep 03, 2016 12:44

Joost, it is indeed 64-bit.
As I never really tested it, I'm not surprised that it might be non-optimal.
With respect to the side to move, I encoded this in the hash data.
In the past (if my memory is correct), I also used the least significant bit of the hashcode for the side to move.
But I thought (which might be a mistake) that this would split the HashTable in two, and based upon the position, I assumed that black or white positions could be dominant.

Nevertheless, interesting test, which I also need to consider, so thanks for your feedback.

Bert

Joost Buijs
Posts: 471
Joined: Wed May 04, 2016 11:45
Real name: Joost Buijs

Re: Search Algorithm

Post by Joost Buijs » Sat Sep 03, 2016 12:58

Bert,

Indeed, using 1 bit for the side to move can effectively be the same as 2 separate hash-tables for black and white, but it depends entirely upon how you calculate your index.
I use a 64-bit random number which I xor into the key each time the side to move changes; this works fine.
In the end it is not very important how you do it, but I am always very wary of collisions.
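
In sketch form, the scheme looks like this (illustrative names and array sizes, not a literal copy of my code):

Code: Select all

#include <cstdint>

// Illustrative Zobrist scheme: one 64-bit random number per
// (piece type, square) pair plus one for the side to move.
// zobrist[][] and zobristSideToMove are filled once at start-up
// from a decent 64-bit PRNG.
uint64_t zobrist[4][64];     // white man, black man, white king, black king
uint64_t zobristSideToMove;

// Incremental update for a quiet move: xor the piece out of its old
// square, xor it into the new square, and flip the side-to-move key.
uint64_t update_hash(uint64_t hash, int piece, int from, int to)
{
    hash ^= zobrist[piece][from];
    hash ^= zobrist[piece][to];
    hash ^= zobristSideToMove;
    return hash;
}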

At the moment I use a bucket system with 4 entries per bucket; according to my last tests it doesn't help at all.
If it still doesn't help once my search is completely finished, I will remove it completely and go back to single entries, which is also faster.
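
Schematically the bucket layout is something like this (again just a sketch; the real entry fields differ per engine):

Code: Select all

#include <cstdint>

// Illustrative 4-way bucket: a probe scans the 4 slots of the bucket
// the key maps to; a store would replace the least valuable slot
// (e.g. lowest depth or oldest age).
struct TTEntry {
    uint64_t key;
    int16_t  score;
    uint8_t  depth;
    uint8_t  flags;
    uint32_t move;
};

struct TTBucket {
    TTEntry entry[4];
};

TTBucket* table       = nullptr;  // num_buckets buckets, allocated elsewhere
uint64_t  bucket_mask = 0;        // num_buckets - 1, num_buckets a power of two

TTEntry* tt_probe(uint64_t key)
{
    TTBucket& b = table[key & bucket_mask];
    for (TTEntry& e : b.entry)
        if (e.key == key)
            return &e;            // matching probe
    return nullptr;
}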

Joost
Last edited by Joost Buijs on Sat Sep 03, 2016 13:07, edited 1 time in total.

Ed Gilbert
Posts: 859
Joined: Sat Apr 28, 2007 14:53
Real name: Ed Gilbert
Location: Morristown, NJ USA

Re: Search Algorithm

Post by Ed Gilbert » Sat Sep 03, 2016 13:03

Joost Buijs wrote: Using a 32 bit hash-key is something I haven't done for years; with an optimal 32 bit hash-function you will get on average 1 collision every 4 billion probes, and in practice things will be much worse since most hash-functions are not optimal.
When an engine probes at ~70 mnps (which is feasible on modern multicore hardware) this means (best case) 1 collision every 60 seconds, which can wreck a search if there is no means to detect collisions.
Maybe a 64 bit key is a bit overdone; 48 bit could be an alternative. I have the feeling that 32 bit is a bit on the low side, because it regularly gave me problems with my chess-engine.
I used to have some test code in my search to measure the collision rate. It has been a long time since I used it, and search speeds were much slower then, but it might have been one every few minutes. I tried a larger hashcode, but through testing with many games I concluded that on average the smaller code gave better match results. I was also influenced by a paper written by Robert Hyatt that seemed to indicate that you really have to have a horrible collision rate for it to have a measurable effect. Since I did this a long time ago, it might be worth repeating the tests. How did you detect that the collisions gave you problems?
Joost Buijs wrote: The reason that I store the whole move in the transposition table is that I want to avoid generating moves in case the table-move generates a cutoff (which it does very often); a simple legality check of the move, to avoid crashing the engine, will do, and with a 64 bit key this is probably not needed at all.
That is a nice benefit of storing the actual move instead of the index. I would think that the legality check is always needed though, since a crash, even only once in a great while, is very bad.
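
A cheap test along these lines would already catch most stale moves (a sketch with invented names, for a quiet move; a capture would also need its jumped squares verified):

Code: Select all

#include <cstdint>

// Minimal sanity test for a quiet hash move before playing it: the
// from-square must hold a piece of the side to move and the to-square
// must be empty. Bit indices and bitboard layout are made up here.
bool tt_move_plausible(uint64_t own, uint64_t occupied, int from, int to)
{
    return (own      & (1ULL << from)) != 0
        && (occupied & (1ULL << to))   == 0;
}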

-- Ed

Joost Buijs
Posts: 471
Joined: Wed May 04, 2016 11:45
Real name: Joost Buijs

Re: Search Algorithm

Post by Joost Buijs » Sat Sep 03, 2016 13:40

Ed Gilbert wrote: I used to have some test code in my search to measure the collision rate. It has been a long time since I used it, and search speeds were much slower then, but it might have been one every few minutes. I tried a larger hashcode, but through testing with many games I concluded that on average the smaller code gave better match results. I was also influenced by a paper written by Robert Hyatt that seemed to indicate that you really have to have a horrible collision rate for it to have a measurable effect. Since I did this a long time ago, it might be worth repeating the tests. How did you detect that the collisions gave you problems?
I agree with Bob H. that a search is very permissive with respect to errors, but not always.

In the past I used a 32 bit key in my chess-engine, and on a few occasions it unexpectedly played a very bad move which I could trace back to a collision that occurred not far from the root; at that time the engine still behaved deterministically, which allowed me to find it.
The chance that this happens is very small, but when you play hundreds of games it will happen once in a while.
After switching to a 64 bit key the problem never reoccurred and I never felt the need to look at it again.

Using a 64 bit key with draughts is a bit difficult because a move takes more space; that is why I'm thinking about making the key 48 bit. For the depth 6 bits will do, which leaves 10 bits for an age field, still 128 bits per entry.
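
Packed into 128 bits that could look as follows (only the 48/6/10 bit widths are fixed by the above; the remaining fields and widths are just an example):

Code: Select all

#include <cstdint>

// Example 128-bit entry: 48-bit key, 6-bit depth and 10-bit age fill
// the first 64-bit word exactly; the second word holds a packed move,
// a score and a bound type.
struct TTEntry {
    uint64_t key48 : 48;   // upper 48 bits of the 64-bit hash
    uint64_t depth :  6;
    uint64_t age   : 10;   // 48 + 6 + 10 = 64
    uint64_t move  : 46;   // packed move, larger than in chess
    uint64_t score : 16;
    uint64_t bound :  2;   // 46 + 16 + 2 = 64
};

static_assert(sizeof(TTEntry) == 16, "entry should be 128 bits");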

Joost

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Sat Sep 03, 2016 20:40

I agree that my hash-function should/could be better.
As Ed pointed out, I'm not sure however if it has an impact on playing strength.
To make it really robust, we could add the complete position to the HashData (so the 3 64-bit bitboards).
At least then one has no issues anymore with crashes due to wrong use of the stored move (and one can omit a move sanity check).

In the really non-optimised case, each hash table entry has 5 * 8 bytes (key, 3 position bitboards and data) = 40 bytes (320 bits).
Maybe with a little optimization it can be compressed into 4 * 8 bytes, making the hashtable (only) twice as large compared with a normal implementation. I'm not sure if a larger hashtable makes sense (somewhere there should be diminishing returns), so with larger memory sizes available, one should maybe optimize by adding more info instead of making the table bigger (on the other hand, it could be that size really matters :oops: ).
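
As an illustration of the compressed 4 * 8 byte variant (assuming the 50 playable squares occupy bits 0..49 of each bitboard; the fields besides the three boards are made up):

Code: Select all

#include <cstdint>

// Example 32-byte entry: the full position (3 x 50 = 150 bits) is
// stored instead of a key, so a hit can be verified exactly and
// collisions are impossible by construction.
struct TTEntryFull {
    uint64_t white : 50;   // 50 playable squares
    uint64_t depth :  6;
    uint64_t flags :  8;   // word 1: 50 + 6 + 8 = 64
    uint64_t black : 50;
    uint64_t move  : 14;   // word 2: move index into the move list
    uint64_t kings : 50;
    uint64_t score : 14;   // word 3
    uint64_t age;          // word 4: age and whatever else fits
};

static_assert(sizeof(TTEntryFull) == 32, "entry should be 4 x 8 bytes");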

I'm not 100% sure, but I think I once saw this in older sources of Dragon, though maybe Michel has abandoned this approach...
I'm also not sure what the mnps penalty is.

Bert

Rein Halbersma
Posts: 1722
Joined: Wed Apr 14, 2004 16:04

Re: Search Algorithm

Post by Rein Halbersma » Sat Sep 03, 2016 20:50

BertTuyt wrote: I agree that my hash-function should/could be better.
As Ed pointed out, I'm not sure however if it has an impact on playing strength.
To make it really robust, we could add the complete position to the HashData (so the 3 64-bit bitboards).
At least then one has no issues anymore with crashes due to wrong use of the stored move (and one can omit a move sanity check).

In the really non-optimised case, each hash table entry has 5 * 8 bytes (key, 3 position bitboards and data) = 40 bytes (320 bits).
Maybe with a little optimization it can be compressed into 4 * 8 bytes, making the hashtable (only) twice as large compared with a normal implementation. I'm not sure if a larger hashtable makes sense (somewhere there should be diminishing returns), so with larger memory sizes available, one should maybe optimize by adding more info instead of making the table bigger (on the other hand, it could be that size really matters :oops: ).

I'm not 100% sure, but I think I once saw this in older sources of Dragon, though maybe Michel has abandoned this approach...
I'm also not sure what the mnps penalty is.

Bert
64-bit hash keys are plenty. In the 8x8 perft thread I computed perft(22) using 64-bit hash keys without any collisions during a 3-day computation. This result was later confirmed by Aart Bik by storing the entire position as the key in the hash table (so guaranteed no collisions).
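
As a rough sanity check with ideal keys: a matching 64-bit key on a different position has probability 2^-64, about 5.4 * 10^-20 per full-key comparison, so even at 10^7 matching probes per second you would expect one collision roughly every 2^64 / 10^7 seconds, which is about 58,000 years. The same arithmetic for a 32-bit key gives 2^32 / 10^7, about 430 seconds, nicely in line with the collision every few minutes that Ed and Joost reported.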

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Sat Sep 03, 2016 20:56

Rein, thanks.

And in your case, was it also based upon incremental Zobrist?

Bert

Rein Halbersma
Posts: 1722
Joined: Wed Apr 14, 2004 16:04

Re: Search Algorithm

Post by Rein Halbersma » Sat Sep 03, 2016 21:12

BertTuyt wrote:Rein, thanks.

And in your case, was it also based upon incremental Zobrist?

Bert
Yes, I used incremental Zobrist hashing, and I also confirmed part of the computation with Ed's version of Jenkins hashing. No collisions in either case.

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Sat Sep 03, 2016 23:54

Joost, I also started to test my hash-function.
See the code at the end of this post.
The position I examined first was the initial setup, so with black men on 1 - 20 and white men on 31 - 50.
HashTable size 1M entries (2^20).

At ply 20 I get 6 collisions.
Up to and including ply 19, none.

Code: Select all

id name Dwarf 0.0
id author Draughts Community
ready
HashTable Allocated, Entries = 1048576
pos XW XB 20 20
info Depth 1 0 10 knps 0
Hash Probes = 10 Hash Entry = 0 Collisions = 0
info Depth 2 0 27 knps 0
Hash Probes = 27 Hash Entry = 0 Collisions = 0
info Depth 3 0 126 knps 0
Hash Probes = 126 Hash Entry = 9 Collisions = 0
info Depth 4 0 317 knps 0
Hash Probes = 317 Hash Entry = 51 Collisions = 0
info Depth 5 0 1170 knps 0
Hash Probes = 1170 Hash Entry = 177 Collisions = 0
info Depth 6 0 3265 knps 0
Hash Probes = 3265 Hash Entry = 622 Collisions = 0
info Depth 7 0 10448 knps 0
Hash Probes = 10448 Hash Entry = 1788 Collisions = 0
info Depth 8 0 26762 knps 0
Hash Probes = 26762 Hash Entry = 7211 Collisions = 0
info Depth 9 0 94798 knps 9479
Hash Probes = 94798 Hash Entry = 18807 Collisions = 0
info Depth 10 0 238679 knps 7955
Hash Probes = 238679 Hash Entry = 73820 Collisions = 0
info Depth 11 0 801314 knps 11447
Hash Probes = 801314 Hash Entry = 181653 Collisions = 0
info Depth 12 0 2072576 knps 17271
Hash Probes = 2072576 Hash Entry = 705240 Collisions = 0
info Depth 13 0 6536929 knps 21086
Hash Probes = 6536929 Hash Entry = 1521485 Collisions = 0
info Depth 14 0 15673433 knps 17223
Hash Probes = 15673433 Hash Entry = 5509042 Collisions = 0
info Depth 15 0 51977448 knps 22024
Hash Probes = 51977448 Hash Entry = 11462618 Collisions = 0
info Depth 16 0 122116409 knps 17905
Hash Probes = 122116409 Hash Entry = 44332743 Collisions = 0
info Depth 17 0 429362741 knps 23071
Hash Probes = 429362741 Hash Entry = 92557401 Collisions = 0
info Depth 18 0 978090024 knps 17809
Hash Probes = 978090024 Hash Entry = 375363184 Collisions = 0
info Depth 19 0 3627404335 knps 23233
Hash Probes = 3627404335 Hash Entry = 784954261 Collisions = 0
info Depth 20 0 8242698848 knps 17406
Hash Probes = 8242698848 Hash Entry = 3331810635 Collisions = 6
move 31-26

So I need to check whether this is related to kings, or whether the man-only implementation is also not perfect.
From there I can test alternatives.
It would be helpful if you find the same order of magnitude in the mentioned position with my function (as you tested with the Woldouby).

Bert

Code: Select all

HASHKEY Hash_Key(BITBOARD* pBBField)
{
	HASHKEY HashMan0, HashMan1, HashKing0, HashKing1;

	// Mixing constant borrowed from CityHash's Hash128to64().
	// NB: the side to move is not part of this key; it is stored in the hash data.
	const HASHKEY hMul = 0x9ddfea08eb382d69ULL;

	BITBOARD bbWhiteMan = pBBField[BB_WHITEPIECE] & ~pBBField[BB_KING];
	BITBOARD bbBlackMan = pBBField[BB_BLACKPIECE] & ~pBBField[BB_KING];
	BITBOARD bbWhiteKing = pBBField[BB_WHITEPIECE] & pBBField[BB_KING];
	BITBOARD bbBlackKing = pBBField[BB_BLACKPIECE] & pBBField[BB_KING];

	// Hash64 Man: multiply / xor-shift mix of the two man bitboards
	HashMan0 = (bbWhiteMan ^ bbBlackMan) * hMul;
	HashMan0 ^= (HashMan0 >> 47);

	HashMan1 = (bbBlackMan ^ HashMan0) * hMul;
	HashMan1 ^= (HashMan1 >> 47);
	HashMan1 *= hMul;

	// Hash64 King: the same mix for the two king bitboards
	HashKing0 = (bbWhiteKing ^ bbBlackKing) * hMul;
	HashKing0 ^= (HashKing0 >> 47);

	HashKing1 = (bbBlackKing ^ HashKing0) * hMul;
	HashKing1 ^= (HashKing1 >> 47);
	HashKing1 *= hMul;

	return (HashKing1 ^ HashMan1);
}

void Hash_Write(HASHKEY HashKey, BITBOARD* pBBField)
{
	HASHKEY HashEntry = HashKey & HASHTABLE_MASK;

	pHashTable[HashEntry].HashKey = HashKey;

	// Store the full position so a later probe can verify it exactly.
	pHashTable[HashEntry].HashData[0] = pBBField[BB_WHITEPIECE];
	pHashTable[HashEntry].HashData[1] = pBBField[BB_BLACKPIECE];
	pHashTable[HashEntry].HashData[2] = pBBField[BB_KING];
}

void Hash_Read(HASHKEY HashKey, BITBOARD* pbbField)
{
	HASHKEY HashEntry = HashKey & HASHTABLE_MASK;

	if (pHashTable[HashEntry].HashKey == HashKey) {

		++iXEntry;	// matching probe: the full 64-bit key is equal

		// Key matches but the stored position differs: a real collision.
		if (pbbField[BB_WHITEPIECE] != pHashTable[HashEntry].HashData[0] ||
			pbbField[BB_BLACKPIECE] != pHashTable[HashEntry].HashData[1] ||
			pbbField[BB_KING] != pHashTable[HashEntry].HashData[2])
			++iXCollision;
	}
}


Joost Buijs
Posts: 471
Joined: Wed May 04, 2016 11:45
Real name: Joost Buijs

Re: Search Algorithm

Post by Joost Buijs » Sun Sep 04, 2016 07:17

BertTuyt wrote: Joost, I also started to test my hash-function.
See the code at the end of this post.
The position I examined first was the initial setup, so with black men on 1 - 20 and white men on 31 - 50.
HashTable size 1M entries (2^20).

At ply 20 I get 6 collisions.
Up to and including ply 19, none.
Bert,

At the starting position with your hash-function I found 31 collisions at depth 20, so 5 times more; it took more than 10 minutes since I still use plain alpha-beta.
The difference can be due to many things (index calculation, hash replacement scheme, a larger tree due to different pruning, etc.); this is nothing to worry about.
The collision rate is more important, which is in this case roughly 1 out of 270 million matching probes.

You won't notice it when you have a collision now and then; in the case of the Woldouby there are too many collisions. My guess is that it gets bad when there are kings on the board.

It won't be easy to make a simple and fast hash-function that has a performance as good as Zobrist.

Joost

Code: Select all

TT: num_buckets = 16777216, tt_size = 1140850688
Allocated TT with size 1140850688 in SP memory.

    m  m  m  m  m
  m  m  m  m  m
    m  m  m  m  m
  m  m  m  m  m
    -  -  -  -  -
  -  -  -  -  -
    M  M  M  M  M
  M  M  M  M  M
    M  M  M  M  M
  M  M  M  M  M

Time = 622.41  Matching hash-probes = 8366650043  Matching probes/sec. = 13442279.755268

Collisions found 31
I also tried 19 ply and there are still 27 collisions. In the test program I probe in quiescence as well; maybe this explains why you didn't find collisions at 19 ply?
It is possible to store the offending positions and take a look at what is different about them.

Code: Select all

TT: num_buckets = 16777216, tt_size = 1140850688
Allocated TT with size 1140850688 in SP memory.

    m  m  m  m  m
  m  m  m  m  m
    m  m  m  m  m
  m  m  m  m  m
    -  -  -  -  -
  -  -  -  -  -
    M  M  M  M  M
  M  M  M  M  M
    M  M  M  M  M
  M  M  M  M  M

Time = 390.52  Matching hash-probes = 5407588133  Matching probes/sec. = 13847225.596663

Collisions found 27
I start finding collisions at depth 17; here are two colliding positions, no kings on the board:

Code: Select all


Black to move.

Hash = 1330c679f492c49a

    m  m  -  m  m
  m  m  m  -  m
    m  m  m  m  m
  m  m  m  m  m
    -  -  -  M  -
  -  M  -  -  -
    -  -  M  -  M
  m  M  M  M  M
    M  M  M  M  M
  M  M  M  -  M

Hash = 1330c679f492c49a

    m  m  -  m  m
  m  m  m  -  m
    m  m  m  m  m
  m  m  m  m  m
    m  -  -  -  -
  M  -  -  M  M
    -  M  -  M  M
  M  M  -  -  -
    M  M  M  M  -
  M  M  M  M  M


Joost Buijs
Posts: 471
Joined: Wed May 04, 2016 11:45
Real name: Joost Buijs

Re: Search Algorithm

Post by Joost Buijs » Sun Sep 04, 2016 08:51

BertTuyt wrote: I agree that my hash-function should/could be better.
As Ed pointed out, I'm not sure however if it has an impact on playing strength.
To make it really robust, we could add the complete position to the HashData (so the 3 64-bit bitboards).
At least then one has no issues anymore with crashes due to wrong use of the stored move (and one can omit a move sanity check).

In the really non-optimised case, each hash table entry has 5 * 8 bytes (key, 3 position bitboards and data) = 40 bytes (320 bits).
Maybe with a little optimization it can be compressed into 4 * 8 bytes, making the hashtable (only) twice as large compared with a normal implementation. I'm not sure if a larger hashtable makes sense (somewhere there should be diminishing returns), so with larger memory sizes available, one should maybe optimize by adding more info instead of making the table bigger (on the other hand, it could be that size really matters :oops: ).

I'm not 100% sure, but I think I once saw this in older sources of Dragon, though maybe Michel has abandoned this approach...
I'm also not sure what the mnps penalty is.
Bert
Bert,

When you want to make a top performing program, everything has to be top notch, otherwise it just doesn't work.

It is possible to store the whole position in 150 bits without additional encoding; in my case it would indeed mean roughly doubling the entry size, which is no problem with current memory sizes. The drawback is that the larger table puts more stress upon the TLB, which hurts performance; you can largely overcome this by allocating the TT in large-page or huge-page memory. I have this option in my program. It is a bit cumbersome to use because you have to give the user the right to lock pages in memory and run the engine in administrator mode, and sometimes you have to reboot your computer if memory is too fragmented when you start the program, but for an occasional tournament it is a nice option to have.
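
The allocation itself is only a few lines once the user right is in place (a Windows sketch; the SeLockMemoryPrivilege token handling is omitted):

Code: Select all

#include <windows.h>

// Allocate the TT in large pages (2 MB on x64). Requires the
// "Lock pages in memory" right and usually an elevated process.
void* alloc_tt_large_pages(SIZE_T bytes)
{
    SIZE_T page = GetLargePageMinimum();
    if (page == 0)
        return NULL;                           // large pages not supported

    bytes = (bytes + page - 1) & ~(page - 1);  // round up to a page multiple

    // Fails (returns NULL) if the privilege is missing or physical
    // memory is too fragmented; falling back to a normal VirtualAlloc
    // is then the sensible thing to do.
    return VirtualAlloc(NULL, bytes,
                        MEM_RESERVE | MEM_COMMIT | MEM_LARGE_PAGES,
                        PAGE_READWRITE);
}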

Joost

BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Sun Sep 04, 2016 12:16

I went back to the drawing board and restarted from the original MurmurHash code.
See https://sites.google.com/site/murmurhash/

I modified it for use with 24 bytes only (so 3 Bitboards).

I'm not sure if this is better, but I'm curious whether the collisions have been reduced.

I still believe an on-the-fly hash function is possible that is robust and as fast as (or faster than) Zobrist.
I think at least Ed managed to do so.

In the literature better (faster) alternatives are mentioned, like CityHash and SpookyHash, so I might need to look at these as well. I'm not sure, but I remember that one of the two (I think it was CityHash) also uses the CRC32 instruction.
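
For reference, the instruction is directly reachable through the SSE4.2 intrinsics. A sketch of how the 3 bitboards could be folded through it (note that CRC32 by itself is linear and thus a weak hash, so it would at most be a building block):

Code: Select all

#include <nmmintrin.h>   // SSE4.2: _mm_crc32_u64
#include <cstdint>

// Fold the three position bitboards through the CRC32-C instruction.
// Each stream keeps only 32 bits of state, so two differently seeded
// streams are concatenated to get a 64-bit key.
uint64_t crc_hash(uint64_t white, uint64_t black, uint64_t kings)
{
    uint64_t lo = _mm_crc32_u64(0, white);
    lo = _mm_crc32_u64(lo, black);
    lo = _mm_crc32_u64(lo, kings);

    uint64_t hi = _mm_crc32_u64(0xFFFFFFFFu, kings);
    hi = _mm_crc32_u64(hi, white);
    hi = _mm_crc32_u64(hi, black);

    return (hi << 32) | lo;   // upper 32 bits of each stream are zero
}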

The first test did not reveal any collisions at the initial position up to (and including) ply 21.
So at least this implementation seems no worse.

Code: Select all

id name Dwarf 0.0
id author Draughts Community
ready
HashTable Allocated, Entries = 1048576
pos XW XB 20 20
info Depth 1 0 10 knps 0
Hash Probes = 10 Hash Entry = 0 Collisions = 0
info Depth 2 0 27 knps 0
Hash Probes = 27 Hash Entry = 0 Collisions = 0
info Depth 3 0 126 knps 0
Hash Probes = 126 Hash Entry = 9 Collisions = 0
info Depth 4 0 317 knps 0
Hash Probes = 317 Hash Entry = 48 Collisions = 0
info Depth 5 0 1170 knps 0
Hash Probes = 1170 Hash Entry = 167 Collisions = 0
info Depth 6 0 3265 knps 0
Hash Probes = 3265 Hash Entry = 599 Collisions = 0
info Depth 7 0 10448 knps 0
Hash Probes = 10448 Hash Entry = 1715 Collisions = 0
info Depth 8 0 26762 knps 0
Hash Probes = 26762 Hash Entry = 7091 Collisions = 0
info Depth 9 0 94798 knps 0
Hash Probes = 94798 Hash Entry = 18107 Collisions = 0
info Depth 10 0 238679 knps 23867
Hash Probes = 238679 Hash Entry = 72168 Collisions = 0
info Depth 11 0 801314 knps 20032
Hash Probes = 801314 Hash Entry = 174656 Collisions = 0
info Depth 12 0 2072576 knps 17271
Hash Probes = 2072576 Hash Entry = 686555 Collisions = 0
info Depth 13 0 6536929 knps 21086
Hash Probes = 6536929 Hash Entry = 1460328 Collisions = 0
info Depth 14 0 15673433 knps 16853
Hash Probes = 15673433 Hash Entry = 5303020 Collisions = 0
info Depth 15 0 51977448 knps 21478
Hash Probes = 51977448 Hash Entry = 10906480 Collisions = 0
info Depth 16 0 122116409 knps 17223
Hash Probes = 122116409 Hash Entry = 42502347 Collisions = 0
info Depth 17 0 429362741 knps 22316
Hash Probes = 429362741 Hash Entry = 88627359 Collisions = 0
info Depth 18 0 978090024 knps 17601
Hash Probes = 978090024 Hash Entry = 360657634 Collisions = 0
info Depth 19 0 3627404335 knps 21965
Hash Probes = 3627404335 Hash Entry = 755968547 Collisions = 0
info Depth 20 0 8242698848 knps 17559
Hash Probes = 8242698848 Hash Entry = 3200156482 Collisions = 0
info Depth 21 0 31241547377 knps 21540
Hash Probes = 31241547377 Hash Entry = 6498388946 Collisions = 0

Bert

Code: Select all

HASHKEY MurmurHash64A(BITBOARD* pBBField)
{
	HASHKEY h, k;	// NB: h is used below before being initialised (fixed in the next post)
	const uint64_t m = 0xc6a4a7935bd1e995;
	const int r = 47;

	k = pBBField[BB_WHITEPIECE];
	k *= m;
	k ^= k >> r;
	k *= m;

	h ^= k;
	h *= m;

	k = pBBField[BB_BLACKPIECE];
	k *= m;
	k ^= k >> r;
	k *= m;

	h ^= k;
	h *= m;

	k = pBBField[BB_KING];
	k *= m;
	k ^= k >> r;
	k *= m;

	h ^= k;
	h *= m;

	h ^= h >> r;
	h *= m;
	h ^= h >> r;

	return h;
}


BertTuyt
Posts: 1592
Joined: Wed Sep 01, 2004 19:42

Re: Search Algorithm

Post by BertTuyt » Sun Sep 04, 2016 12:23

I forgot to initialise h in the previous MurmurHash version, which I have now fixed:

Code: Select all

HASHKEY h = 0;
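
For comparison, the original MurmurHash64A initialises h from a seed and the input length; with our fixed 24-byte input that reduces to a constant:

Code: Select all

uint64_t h = seed ^ (len * m);   // original initialisation; here len = 24 (3 bitboards)
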
I restarted the test; for the initial position I will try to reach ply 23.
As soon as I have the results I will share them here.

Edit 1: So far so good, no collisions at ply 21.

Bert
