On Wed, Jun 6, 2012 at 2:53 PM, Sergey Koposov <kopo...@ast.cam.ac.uk> wrote:
> On Wed, 6 Jun 2012, Ants Aasma wrote:
>
>> On Wed, Jun 6, 2012 at 2:27 PM, Sergey Koposov <kopo...@ast.cam.ac.uk> wrote:
>>>
>>> I've quickly tested your lockfree-getbuffer.patch patch with the test case
>>> you provided, and I barely see any improvement (2% at most):
>>> https://docs.google.com/open?id=0B7koR68V2nM1QVBxWGpZdW4wd0U
>>> Tested on 24 cores (48 HT cores, Xeon E7-4807).
>>> Although the tps vs. number of threads curve looks weird...
>>
>> Was this the range scan on the test table? (Sorry about the error in
>> the query; the x should really be id.) In that case the results look
>> really suspicious.
>
> Yes, partially my fault: without much thought I put "value" instead of
> "x" in the script. After replacing it with "id", the tps numbers are
> much smaller.
>
> Here is tps vs. nthreads; I tested up to 10 threads on my 24-CPU system
> (with HT disabled):
> https://docs.google.com/open?id=0B7koR68V2nM1Nk9OcWNJOTRrYVE
>
> Your patch clearly improves the situation (the peak tps is ~10% higher), but
> the general picture is the same: flattening of tps vs. nthreads.
I think this is the expected result. In the single-user case the spinlock
never spins and only has to issue the cache-locking atomic instruction once.

Can we see results at 24 threads?

merlin

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers