Zdenek Kotala wrote:
Zdenek Kotala wrote:
Heikki Linnakangas wrote:
Zdenek Kotala wrote:
My conclusion is that the new implementation is about 8% slower in an OLTP workload.

Can you do some analysis of why that is?

I tested it several times, and the last test was a surprise for me. I ran the original server (with the old FSM) on a database that had been created by the new server (with the new FSM), and performance was similar (maybe the new implementation is even a little better):

MQThL (Maximum Qualified Throughput LIGHT): 1348.90 tpm
MQThM (Maximum Qualified Throughput MEDIUM): 2874.76 tpm
MQThH (Maximum Qualified Throughput HEAVY): 2422.20 tpm

The question is why? There could be two reasons for that. One is related to the OS/FS or HW: the filesystem could be fragmented, or the HDD could be slower in some region...

Ugh. Could it be autovacuum kicking in at different times? Do you get any metrics other than the TPM out of it?

The second idea is that the new FSM creates heavily fragmented data, and index scans need to jump from one page to another too often.

Hmm. That's remotely plausible, I suppose. The old FSM only kept track of pages with more than the average request size of free space, but the new FSM tracks even the smallest free spots. Are there tables in that workload that are inserted into with very varying row widths?

FWIW, I just got the results of my first 2h DBT-2 run, and I'm seeing no difference at all in overall performance or behavior during the test. Autovacuum doesn't kick in in those short tests, though, so I've scheduled a pair of 4h tests, and might run even longer tests over the weekend.

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
