[PERFORM] debug_assertions is enabled in official 9.6rc1 build

2016-09-07 Thread Mkrtchyan, Tigran


Hi *,

I just installed the official RPMs from http://yum.postgresql.org/ to check
the functionality and performance of 9.6rc1. Unfortunately, the binaries are
compiled with debug_assertions=on, which makes any performance testing
useless.
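
For reference, a quick way to check whether a given set of binaries has
assertions compiled in (a sketch; the paths assume the PGDG package layout):

# --enable-cassert among the configure flags means assertion checks are built in
/usr/pgsql-9.6/bin/pg_config --configure | tr ' ' '\n' | grep -i cassert

# or ask a running server directly
psql -U postgres -c "SELECT name, setting FROM pg_settings WHERE name = 'debug_assertions';"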

Regards,
   Tigran.


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Tuning guidelines for server with 256GB of RAM and SSDs?

2016-07-05 Thread Mkrtchyan, Tigran
Hi, 
We had a similar situation, and the best performance was with vm.dirty_background_bytes = 64MB and vm.dirty_bytes = 512MB.
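
If it helps, this is roughly what such a setting looks like (a sketch; the exact
values depend on your RAM and controller cache):

# switch from ratio-based to byte-based dirty limits (run as root)
sysctl -w vm.dirty_background_bytes=67108864   # 64MB
sysctl -w vm.dirty_bytes=536870912             # 512MB

# make it persistent across reboots
cat >> /etc/sysctl.conf <<'EOF'
vm.dirty_background_bytes = 67108864
vm.dirty_bytes = 536870912
EOF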
Tigran.

On Jul 5, 2016 16:51, Kaixi Luo wrote:

Hello,

I've been reading Mr. Greg Smith's "Postgres 9.0 - High Performance" book and I have some questions regarding the guidelines I found in the book, because I suspect some of them can't be followed blindly to the letter on a server with lots of RAM and SSDs.

Here are my server specs:

Intel Xeon E5-1650 v3 Hexa-Core Haswell
256GB DDR4 ECC RAM
Battery-backed hardware RAID with 512MB of WriteBack cache (LSI MegaRAID SAS 9260-4i)
RAID1 - 2x480GB Samsung SSD with power loss protection (will be used to store the PostgreSQL database)
RAID1 - 2x240GB Crucial SSD with power loss protection (will be used to store PostgreSQL transaction logs)

First of all, the book suggests that I should enable the WriteBack cache of the HW RAID and disable the disk cache to increase performance and ensure data safety. Is it still advisable to do this on SSDs, specifically the step of disabling the disk cache? Wouldn't that increase the wear rate of the SSD?

Secondly, the book suggests that we increase the device readahead from 256 to 4096. As far as I understand, this was done in order to reduce the number of seeks on a rotating hard drive, so again, is this still applicable to SSDs?

The other tunable I've been looking into is vm.dirty_ratio and vm.dirty_background_ratio. I reckon that the book's recommendation to lower vm.dirty_background_ratio to 5 and vm.dirty_ratio to 10 is not enough for a server with such a big amount of RAM. How much lower should I set these values, given that my RAID's WriteBack cache size is 512MB?

Thank you very much.

Kaixi Luo
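
For reference, the readahead and on-drive write cache mentioned above can be
inspected and changed as follows (a sketch; /dev/sda is a placeholder, and drives
behind a RAID controller are usually managed through the controller's own tool,
e.g. MegaCLI, rather than hdparm):

# device readahead, in 512-byte sectors (256 = 128KB, 4096 = 2MB)
blockdev --getra /dev/sda
blockdev --setra 4096 /dev/sda

# on-drive volatile write cache (the cache the book suggests disabling
# when a battery-backed controller cache sits in front of it)
hdparm -W /dev/sda     # show current state
hdparm -W0 /dev/sda    # disable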


Re: [PERFORM] New server: SSD/RAID recommendations?

2015-07-07 Thread Mkrtchyan, Tigran
Thanks for the info.

So if RAID controllers are not an option, what should one use to build
big databases? LVM with XFS? Btrfs? ZFS?

Tigran.

- Original Message -
 From: Graeme B. Bell graeme.b...@nibio.no
 To: Steve Crawford scrawf...@pinpointresearch.com
 Cc: Wes Vaske (wvaske) wva...@micron.com, pgsql-performance 
 pgsql-performance@postgresql.org
 Sent: Tuesday, July 7, 2015 12:22:00 PM
 Subject: Re: [PERFORM] New server: SSD/RAID recommendations?

 Completely agree with Steve.
 
 1. Intel NVMe looks like the best bet if you have modern enough hardware for
 NVMe. Otherwise e.g. S3700 mentioned elsewhere.
 
 2. RAID controllers.
 
 We have e.g. 10-12 of these here and e.g. 25-30 SSDs, among various machines.
 This might give people idea about where the risk lies in the path from disk to
 CPU.
 
 We've had 2 RAID card failures in the last 12 months that nuked the array with
 days of downtime, and 2 problems with batteries suddenly becoming useless or
 suddenly reporting wildly varying temperatures/overheating. There may have 
 been
 other RAID problems I don't know about.
 
 Our IT dept were replacing Seagate HDDs last year at a rate of 2-3 per week (I
 guess they have 100-200 disks?). We also have about 25-30 Hitachi/HGST HDDs.
 
 So by my estimates:
 30% annual problem rate with RAID controllers
 30-50% failure rate with Seagate HDDs (backblaze saw similar results)
 0% failure rate with HGST HDDs.
 0% failure in our SSDs.   (to be fair, our one samsung SSD apparently has a 
 bug
 in TRIM under linux, which I'll need to investigate to see if we have been
 affected by).
 
 also, RAID controllers aren't free - not just the money but also the 
 management
 of them (ever tried writing a complex install script that interacts work with
 MegaCLI? It can be done but it's not much fun.). Just take a look at the
 MegaCLI manual and ask yourself... is this even worth it (if you have a good
 MTBF on an enterprise SSD).
 
 RAID was meant to be about ensuring availability of data. I have trouble
 believing that these days
 
 Graeme Bell
 
 
 On 06 Jul 2015, at 18:56, Steve Crawford scrawf...@pinpointresearch.com 
 wrote:
 
 
 2. We don't typically have redundant electronic components in our servers. 
 Sure,
 we have dual power supplies and dual NICs (though generally to handle 
 external
 failures) and ECC-RAM but no hot-backup CPU or redundant RAM banks and...no
 backup RAID card. Intel Enterprise SSD already have power-fail protection so 
 I
 don't need a RAID card to give me BBU. Given the MTBF of good enterprise SSD
 I'm left to wonder if placing a RAID card in front merely adds a new point of
 failure and scheduled-downtime-inducing hands-on maintenance (I'm looking at
 you, RAID backup battery).
 
 
 




Re: [PERFORM] New server: SSD/RAID recommendations?

2015-07-07 Thread Mkrtchyan, Tigran


- Original Message -
 From: Graeme B. Bell graeme.b...@nibio.no
 To: Mkrtchyan, Tigran tigran.mkrtch...@desy.de
 Cc: Graeme B. Bell graeme.b...@nibio.no, Steve Crawford 
 scrawf...@pinpointresearch.com, Wes Vaske (wvaske)
 wva...@micron.com, pgsql-performance pgsql-performance@postgresql.org
 Sent: Tuesday, July 7, 2015 12:38:10 PM
 Subject: Re: [PERFORM] New server: SSD/RAID recommendations?

 I am unsure about the performance side but, ZFS is generally very attractive 
 to
 me.
 
 Key advantages:
 
 1) Checksumming and automatic fixing-of-broken-things on every file (not just
 postgres pages, but your scripts, O/S, program files).
 2) Built-in  lightweight compression (doesn't help with TOAST tables, in fact
 may slow them down, but helpful for other things). This may actually be a net
 negative for pg so maybe turn it off.
 3) ZRAID mirroring or ZRAID5/6. If you have trouble persuading someone that 
 it's
 safe to replace a RAID array with a single drive... you can use a couple of
 NVMe SSDs with ZFS mirror or zraid, and  get the same availability you'd get
 from a RAID controller. Slightly better, arguably, since they claim to have
 fixed the raid write-hole problem.
 4) filesystem snapshotting
 
 Despite the costs of checksumming etc., I suspect ZRAID running on a fast CPU
 with multiple NVMe drives will outperform quite a lot of the alternatives, 
 with
 great data integrity guarantees.


We are planning to have a test setup as well. For now I have a single NVMe SSD on my
test system:

# lspci | grep NVM
85:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller 171X (rev 03)

# mount | grep nvm
/dev/nvme0n1p1 on /var/lib/pgsql/9.5 type ext4 (rw,noatime,nodiratime,data=ordered)


and I am quite happy with it. We run a write-heavy workload on it to see when it will
break, and Postgres performs very well: about 2.5x faster than with regular disks
with a single client, and scaling almost linearly with multiple clients (picture attached:
Y is the number of high-level op/s our application does, X is the number of clients). The
setup has been in use for the last 3 months. It looks promising, but for production we need
a disk twice as big as on the test system. Until today, I was
planning to use RAID10 with a HW controller...

Related to ZFS: we use ZFS on Linux and the behaviour is not as good as with Solaris.
Let's re-phrase it: performance is unpredictable. We run RAIDZ2 with 30x3TB disks.
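
For the mirrored-NVMe idea Graeme mentions above, a minimal sketch of such a pool
(the device names, mountpoint and 8K recordsize are assumptions, not a tested recipe):

zpool create -o ashift=12 pgpool mirror /dev/nvme0n1 /dev/nvme1n1
zfs create pgpool/pgdata
zfs set recordsize=8k pgpool/pgdata     # match PostgreSQL's page size
zfs set compression=lz4 pgpool/pgdata   # cheap, but turn it off if it hurts your workload
zfs set atime=off pgpool/pgdata
zfs set mountpoint=/var/lib/pgsql/9.5 pgpool/pgdata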

Tigran.

 
 Haven't built one yet. Hope to, later this year. Steve, I would love to know
 more about how you're getting on with your NVMe disk in postgres!
 
 Graeme.
 
 On 07 Jul 2015, at 12:28, Mkrtchyan, Tigran tigran.mkrtch...@desy.de wrote:
 
 Thanks for the Info.
 
 So if RAID controllers are not an option, what one should use to build
 big databases? LVM with xfs? BtrFs? Zfs?
 
 Tigran.
 
 - Original Message -
 From: Graeme B. Bell graeme.b...@nibio.no
 To: Steve Crawford scrawf...@pinpointresearch.com
 Cc: Wes Vaske (wvaske) wva...@micron.com, pgsql-performance
 pgsql-performance@postgresql.org
 Sent: Tuesday, July 7, 2015 12:22:00 PM
 Subject: Re: [PERFORM] New server: SSD/RAID recommendations?
 
 Completely agree with Steve.
 
 1. Intel NVMe looks like the best bet if you have modern enough hardware for
 NVMe. Otherwise e.g. S3700 mentioned elsewhere.
 
 2. RAID controllers.
 
 We have e.g. 10-12 of these here and e.g. 25-30 SSDs, among various 
 machines.
 This might give people idea about where the risk lies in the path from disk 
 to
 CPU.
 
 We've had 2 RAID card failures in the last 12 months that nuked the array 
 with
 days of downtime, and 2 problems with batteries suddenly becoming useless or
 suddenly reporting wildly varying temperatures/overheating. There may have 
 been
 other RAID problems I don't know about.
 
 Our IT dept were replacing Seagate HDDs last year at a rate of 2-3 per week 
 (I
 guess they have 100-200 disks?). We also have about 25-30 Hitachi/HGST HDDs.
 
 So by my estimates:
 30% annual problem rate with RAID controllers
 30-50% failure rate with Seagate HDDs (backblaze saw similar results)
 0% failure rate with HGST HDDs.
 0% failure in our SSDs.   (to be fair, our one samsung SSD apparently has a 
 bug
 in TRIM under linux, which I'll need to investigate to see if we have been
 affected by).
 
 also, RAID controllers aren't free - not just the money but also the 
 management
 of them (ever tried writing a complex install script that interacts work 
 with
 MegaCLI? It can be done but it's not much fun.). Just take a look at the
 MegaCLI manual and ask yourself... is this even worth it (if you have a good
 MTBF on an enterprise SSD).
 
 RAID was meant to be about ensuring availability of data. I have trouble
 believing that these days
 
 Graeme Bell
 
 
 On 06 Jul 2015, at 18:56, Steve Crawford scrawf...@pinpointresearch.com 
 wrote:
 
 
 2. We don't typically have redundant electronic components in our servers. 
 Sure,
 we have dual power supplies and dual NICs

Re: [PERFORM] 9.5alpha1 vs 9.4

2015-07-06 Thread Mkrtchyan, Tigran

On Jul 6, 2015 18:45, Josh Berkus j...@agliodbs.com wrote:

 On 07/05/2015 10:16 AM, Mkrtchyan, Tigran wrote: 
  Thanks for the hin. My bad. The backup db and 9.5 had a different type on 
  one of the foreign-key constrains char(36) vs varchar(36). 
  
  The schema was screwed couple of days ago, byt performance numbers I 
  checked only 
  after migration to 9.5. 

 Thank you for testing! 

 Can you re-run your tests with the fixed schema?  How does it look? 

With the fixed schema, performance is equal to 9.4. I have updated my code to use the
ON CONFLICT statement; it is ~5% faster compared with INSERT ... WHERE NOT EXISTS. Really
cool! Thanks.
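
Roughly the change, sketched against the t_dirs insert from the earlier thread
(literal values stand in for the bind parameters, and a unique index on
(iparent, iname) is assumed as the conflict target):

psql chimera <<'SQL'
-- old form: insert only if the row is not already there
INSERT INTO t_dirs (iparent, iname, ipnfsid)
SELECT 'p0001', 'file.dat', 'i0001'
WHERE NOT EXISTS (SELECT 1 FROM t_dirs WHERE iparent = 'p0001' AND iname = 'file.dat');

-- 9.5 form: let the unique index resolve the race instead
INSERT INTO t_dirs (iparent, iname, ipnfsid)
VALUES ('p0001', 'file.dat', 'i0001')
ON CONFLICT (iparent, iname) DO NOTHING;
SQL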

Tigran.

 -- 
 Josh Berkus 
 PostgreSQL Experts Inc. 
 http://pgexperts.com 





Re: [PERFORM] 9.5alpha1 vs 9.4

2015-07-05 Thread Mkrtchyan, Tigran

And this is with 9.4 on the same hardware (restored from backup):

   0.35 |    0.35 |  1002 | DELETE FROM t_inodes WHERE ipnfsid=$1 AND inlink = ?
   0.16 |    0.16 |  1006 | insert into t_dirs (iparent, iname, ipnfsid)  (select $1 as iparent, $2 as iname, $3 as ipnfsid  where not exists (select ? from t_dirs where iparent=$4 and iname=$5))
   0.15 |    0.02 |  8026 | SELECT isize,inlink,itype,imode,iuid,igid,iatime,ictime,imtime,icrtime,igeneration FROM t_inodes WHERE ipnfsid=$1
   0.06 |    0.06 |  1002 | INSERT INTO t_inodes VALUES($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13)
   0.04 |    0.02 |  2004 | UPDATE t_inodes SET inlink=inlink -$1,imtime=$2,ictime=$3,igeneration=igeneration+? WHERE ipnfsid=$4
   0.03 |    0.01 |  2000 | SELECT ilocation,ipriority,ictime,iatime  FROM t_locationinfo WHERE itype=$1 AND ipnfsid=$2 AND istate=? ORDER BY ipriority DESC
   0.02 |    0.02 |  1002 | UPDATE t_inodes SET inlink=inlink +$1,imtime=$2,ictime=$3,igeneration=igeneration+? WHERE ipnfsid=$4
   0.02 |    0.01 |  2006 | SELECT ipnfsid FROM t_dirs WHERE iname=$1 AND iparent=$2
   0.01 |    0.01 |  1006 | DELETE FROM t_dirs WHERE iname=$1 AND iparent=$2
   0.00 |    0.00 |  2004 | COMMIT

Tigran.


- Original Message -
 From: Mkrtchyan, Tigran tigran.mkrtch...@desy.de
 To: pgsql-performance pgsql-performance@postgresql.org
 Sent: Sunday, July 5, 2015 1:10:51 PM
 Subject: [PERFORM] 9.5alpha1 vs 9.4

 Hi,
 
 today I have update my test system to 9.5alpha1.
 Most of the operations are ok, except delete.
 I get ~1000 times slower!
 
 
 chimera=# SELECT
  (total_time / 1000 )::numeric(10,2) as total_secs,
  (total_time/calls)::numeric(10,2) as average_time_ms, calls,
  query
 FROM pg_stat_statements where userid = 16384
 ORDER BY 1 DESC
 LIMIT 10;
 total_secs | average_time_ms | calls |
 query
   
 +-+---+--
 ---
 255.88 |  566.11 |   452 | DELETE FROM t_inodes WHERE ipnfsid=$1 
 AND
 inlink = ?
   0.13 |0.13 |  1006 | insert into t_dirs (iparent, iname, 
 ipnfsid)
   (select $1 as iparent, $2 as iname, $3 as ipnfsid  where not exists 
 (select ?
   from t_dirs where iparent=$4 and iname=$5))
   0.11 |0.02 |  6265 | SELECT
   
 isize,inlink,itype,imode,iuid,igid,iatime,ictime,imtime,icrtime,igeneration
   FROM t_inodes WHERE ipnfsid=$1
   0.03 |0.03 |  1002 | INSERT INTO t_inodes
   VALUES($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13)
   0.02 |0.02 |  1002 | UPDATE t_inodes SET inlink=inlink
   +$1,imtime=$2,ictime=$3,igeneration=igeneration+? WHERE ipnfsid=$4
   0.02 |0.03 |   905 | UPDATE t_inodes SET inlink=inlink
   -$1,imtime=$2,ictime=$3,igeneration=igeneration+? WHERE ipnfsid=$4
   0.02 |0.01 |  2000 | SELECT 
 ilocation,ipriority,ictime,iatime  FROM
   t_locationinfo WHERE itype=$1 AND ipnfsid=$2 AND istate=? ORDER BY 
 ipriority
   DESC
   0.01 |0.01 |   906 | SELECT ipnfsid FROM t_dirs WHERE 
 iname=$1 AND
   iparent=$2
   0.01 |0.01 |   453 | DELETE FROM t_dirs WHERE iname=$1 AND
   iparent=$2
 
 
 
 
 chimera=# \d t_inodes
 Table public.t_inodes
   Column|   Type   |   Modifiers
 -+--+
 ipnfsid | character varying(36)| not null
 itype   | integer  | not null
 imode   | integer  | not null
 inlink  | integer  | not null
 iuid| integer  | not null
 igid| integer  | not null
 isize   | bigint   | not null
 iio | integer  | not null
 ictime  | timestamp with time zone | not null
 iatime  | timestamp with time zone | not null
 imtime  | timestamp with time zone | not null
 icrtime | timestamp with time zone | not null default now()
 igeneration | bigint   | not null default 0
 Indexes:
t_inodes_pkey PRIMARY KEY, btree (ipnfsid)
 Referenced by:
TABLE t_access_latency CONSTRAINT t_access_latency_ipnfsid_fkey 
 FOREIGN KEY
(ipnfsid) REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE
TABLE t_acl CONSTRAINT t_acl_fkey FOREIGN KEY (rs_id) REFERENCES
t_inodes(ipnfsid) ON DELETE CASCADE
TABLE t_dirs CONSTRAINT t_dirs_ipnfsid_fkey FOREIGN KEY (ipnfsid) 
 REFERENCES
t_inodes(ipnfsid)
TABLE t_inodes_checksum CONSTRAINT t_inodes_checksum_ipnfsid_fkey 
 FOREIGN
KEY (ipnfsid) REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE
TABLE t_inodes_data CONSTRAINT

[PERFORM] 9.5alpha1 vs 9.4

2015-07-05 Thread Mkrtchyan, Tigran

Hi,

Today I updated my test system to 9.5alpha1.
Most of the operations are OK, except delete,
which is ~1000 times slower!


chimera=# SELECT
  (total_time / 1000 )::numeric(10,2) as total_secs,
  (total_time/calls)::numeric(10,2) as average_time_ms, calls,
  query
FROM pg_stat_statements where userid = 16384
ORDER BY 1 DESC
LIMIT 10;
 total_secs | average_time_ms | calls | query
------------+-----------------+-------+--------------------------------------------------------------
 255.88 |  566.11 |   452 | DELETE FROM t_inodes WHERE ipnfsid=$1 AND inlink = ?
   0.13 |    0.13 |  1006 | insert into t_dirs (iparent, iname, ipnfsid)  (select $1 as iparent, $2 as iname, $3 as ipnfsid  where not exists (select ? from t_dirs where iparent=$4 and iname=$5))
   0.11 |    0.02 |  6265 | SELECT isize,inlink,itype,imode,iuid,igid,iatime,ictime,imtime,icrtime,igeneration FROM t_inodes WHERE ipnfsid=$1
   0.03 |    0.03 |  1002 | INSERT INTO t_inodes VALUES($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13)
   0.02 |    0.02 |  1002 | UPDATE t_inodes SET inlink=inlink +$1,imtime=$2,ictime=$3,igeneration=igeneration+? WHERE ipnfsid=$4
   0.02 |    0.03 |   905 | UPDATE t_inodes SET inlink=inlink -$1,imtime=$2,ictime=$3,igeneration=igeneration+? WHERE ipnfsid=$4
   0.02 |    0.01 |  2000 | SELECT ilocation,ipriority,ictime,iatime  FROM t_locationinfo WHERE itype=$1 AND ipnfsid=$2 AND istate=? ORDER BY ipriority DESC
   0.01 |    0.01 |   906 | SELECT ipnfsid FROM t_dirs WHERE iname=$1 AND iparent=$2
   0.01 |    0.01 |   453 | DELETE FROM t_dirs WHERE iname=$1 AND iparent=$2




chimera=# \d t_inodes
 Table public.t_inodes
   Column|   Type   |   Modifiers
-+--+
 ipnfsid | character varying(36)| not null
 itype   | integer  | not null
 imode   | integer  | not null
 inlink  | integer  | not null
 iuid| integer  | not null
 igid| integer  | not null
 isize   | bigint   | not null
 iio | integer  | not null
 ictime  | timestamp with time zone | not null
 iatime  | timestamp with time zone | not null
 imtime  | timestamp with time zone | not null
 icrtime | timestamp with time zone | not null default now()
 igeneration | bigint   | not null default 0
Indexes:
t_inodes_pkey PRIMARY KEY, btree (ipnfsid)
Referenced by:
TABLE t_access_latency CONSTRAINT t_access_latency_ipnfsid_fkey FOREIGN 
KEY (ipnfsid) REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE
TABLE t_acl CONSTRAINT t_acl_fkey FOREIGN KEY (rs_id) REFERENCES 
t_inodes(ipnfsid) ON DELETE CASCADE
TABLE t_dirs CONSTRAINT t_dirs_ipnfsid_fkey FOREIGN KEY (ipnfsid) 
REFERENCES t_inodes(ipnfsid)
TABLE t_inodes_checksum CONSTRAINT t_inodes_checksum_ipnfsid_fkey 
FOREIGN KEY (ipnfsid) REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE
TABLE t_inodes_data CONSTRAINT t_inodes_data_ipnfsid_fkey FOREIGN KEY 
(ipnfsid) REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE
TABLE t_level_1 CONSTRAINT t_level_1_ipnfsid_fkey FOREIGN KEY (ipnfsid) 
REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE
TABLE t_level_2 CONSTRAINT t_level_2_ipnfsid_fkey FOREIGN KEY (ipnfsid) 
REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE
TABLE t_level_3 CONSTRAINT t_level_3_ipnfsid_fkey FOREIGN KEY (ipnfsid) 
REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE
TABLE t_level_4 CONSTRAINT t_level_4_ipnfsid_fkey FOREIGN KEY (ipnfsid) 
REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE
TABLE t_level_5 CONSTRAINT t_level_5_ipnfsid_fkey FOREIGN KEY (ipnfsid) 
REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE
TABLE t_level_6 CONSTRAINT t_level_6_ipnfsid_fkey FOREIGN KEY (ipnfsid) 
REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE
TABLE t_level_7 CONSTRAINT t_level_7_ipnfsid_fkey FOREIGN KEY (ipnfsid) 
REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE
TABLE t_locationinfo CONSTRAINT t_locationinfo_ipnfsid_fkey FOREIGN KEY 
(ipnfsid) REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE
TABLE t_retention_policy CONSTRAINT t_retention_policy_ipnfsid_fkey 
FOREIGN KEY (ipnfsid) REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE
TABLE t_storageinfo CONSTRAINT t_storageinfo_ipnfsid_fkey FOREIGN KEY 
(ipnfsid) REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE
TABLE t_tags CONSTRAINT t_tags_ipnfsid_fkey FOREIGN KEY (ipnfsid) 
REFERENCES t_inodes(ipnfsid)
Triggers:
tgr_locationinfo_trash BEFORE DELETE ON t_inodes FOR 

Re: [PERFORM] 9.5alpha1 vs 9.4

2015-07-05 Thread Mkrtchyan, Tigran
Thanks for the hint. My bad. The backup db and 9.5 had a different type on
one of the foreign-key constraints: char(36) vs varchar(36).

The schema was screwed up a couple of days ago, but I checked the performance numbers
only after the migration to 9.5.
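
In case it is useful to others, a sketch of a catalog query (mine, not from the
thread) that lists the referencing and referenced column types side by side for
single-column foreign keys pointing at t_inodes; a mismatch like char(36) vs
varchar(36) shows up immediately:

psql chimera <<'SQL'
SELECT c.conname,
       c.conrelid::regclass AS referencing_table,
       fa.atttypid::regtype AS fk_column_type,
       ra.atttypid::regtype AS referenced_type
FROM pg_constraint c
JOIN pg_attribute fa ON fa.attrelid = c.conrelid  AND fa.attnum = c.conkey[1]
JOIN pg_attribute ra ON ra.attrelid = c.confrelid AND ra.attnum = c.confkey[1]
WHERE c.contype = 'f'
  AND c.confrelid = 't_inodes'::regclass;
SQL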


Sorry for the noise.

Tigran.

- Original Message -
 From: Tom Lane t...@sss.pgh.pa.us
 To: Andres Freund and...@anarazel.de
 Cc: Mkrtchyan, Tigran tigran.mkrtch...@desy.de, pgsql-performance 
 pgsql-performance@postgresql.org
 Sent: Sunday, July 5, 2015 4:33:25 PM
 Subject: Re: [PERFORM] 9.5alpha1 vs 9.4

 Andres Freund and...@anarazel.de writes:
 On 2015-07-05 13:10:51 +0200, Mkrtchyan, Tigran wrote:
 today I have update my test system to 9.5alpha1.
 Most of the operations are ok, except delete.
 I get ~1000 times slower!
 
 255.88 |  566.11 |   452 | DELETE FROM t_inodes WHERE ipnfsid=$1 AND
 inlink = ?
 
 That certainly should not be the case. Could you show the query plan for
 this statement in both versions?
 
 EXPLAIN ANALYZE, please.  I'm wondering about a missing index on some
 foreign-key-involved column.  That would show up as excessive time in
 the relevant trigger ...
 
   regards, tom lane




Re: [PERFORM] Can't get Dell PE T420 (Perc H710) perform better than a MacMini with PostgreSQL

2015-04-02 Thread Mkrtchyan, Tigran
Hi Pietro,

Modern CPUs try to be too smart.

Try running this code to disable the CPU's C-states:

 setcpulatency.c 

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

int main(int argc, char **argv) {
    int32_t l;
    int fd;

    if (argc != 2) {
        fprintf(stderr, "Usage: %s <latency in us>\n", argv[0]);
        return 2;
    }

    l = atoi(argv[1]);
    printf("setting latency to %d us\n", l);

    /* the PM QoS request stays active only while this fd is open */
    fd = open("/dev/cpu_dma_latency", O_WRONLY);
    if (fd < 0) {
        perror("open /dev/cpu_dma_latency");
        return 1;
    }

    /* the kernel expects the latency limit as a raw 32-bit value */
    if (write(fd, &l, sizeof(l)) != sizeof(l)) {
        perror("write to /dev/cpu_dma_latency");
        return 1;
    }

    /* keep running (and the fd open) so the setting stays in effect */
    while (1) pause();
}




You can use i7z (https://code.google.com/p/i7z/) to see how much time the CPU
spends in each C-state.
Switching the CPU from C1 to C0 takes quite some time, which is not optimal for a
DB workload (if you need high throughput at any given moment).

I see a ~65% boost when running './setcpulatency 0'.
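
A sketch of how to build and keep it running (the latency request only holds
while the program keeps /dev/cpu_dma_latency open):

gcc -O2 -o setcpulatency setcpulatency.c
sudo ./setcpulatency 0 &

# list each C-state's exit latency; the cpuidle governor should no longer
# enter states whose latency exceeds the requested limit
grep . /sys/devices/system/cpu/cpu0/cpuidle/state*/latency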

Tigran.

- Original Message -
 From: Pietro Pugni pietro.pu...@gmail.com
 To: i...@postgresql-consulting.com
 Cc: pgsql-performance pgsql-performance@postgresql.org
 Sent: Thursday, April 2, 2015 12:57:22 PM
 Subject: Re: [PERFORM] Can't get Dell PE T420 (Perc H710) perform better than 
 a MacMini with PostgreSQL

 Hi Ilya,
 thank your for your response.
 Both system were configured for each test I’ve done. On T420 I’ve optimized 
 the
 kernel following the official Postgres documentation (
 http://www.postgresql.org/docs/9.4/static/kernel-resources.html ):
 kernel.shmmax=68719476736
 kernel.shmall=16777216
 vm.overcommit_memory=2
 vm.overcommit_ratio=90
 
 
 RAID controllers were configured as following:
 - Write cache: WriteBack
 - Read cache: ReadAhead
 - Disk cache (only T420): disabled to take full advantage of WriteBack cache
 (BBU is charged and working)
 - NCQ (only MacMini because it’s a SATA option): enabled (this affects a lot 
 the
 overall performance)
 
 For postgresql.conf:
 
 T420
 Normal operations
 autovacuum = on
 maintenance_work_mem = 512MB
 work_mem = 512MB
 wal_buffers = 64MB
 effective_cache_size = 64GB # this helps A LOT in disk write speed when 
 creating
 indexes
 shared_buffers = 32GB
 checkpoint_segments = 2000
 checkpoint_completion_target = 1.0
 effective_io_concurrency = 0 # 1 doesn’t make any substantial difference
 max_connections = 10 # 20 doesn’t make any difference
 
 Data loading (same as above with the following changes):
 autovacuum = off
 maintenance_work_mem = 64GB
 
 
 MacMini
 Normal operations
 autovacuum = on
 maintenance_work_mem = 128MB
 work_mem = 32MB
 wal_buffers = 32MB
 effective_cache_size = 800MB
 shared_buffers = 512MB
 checkpoint_segments = 32
 checkpoint_completion_target = 1.0
 effective_io_concurrency = 1
 max_connections = 20
 
 Data loading (same as above with the following changes):
 autovacuum = off
 maintenance_work_mem = 6GB
 
 
 Best regards,
 Pietro
 
 
 
 Il giorno 01/apr/2015, alle ore 16:27, Ilya Kosmodemiansky
 ilya.kosmodemian...@postgresql-consulting.com ha scritto:
 
 Hi Pietro,
 
 On Wed, Apr 1, 2015 at 3:56 PM, Pietro Pugni pietro.pu...@gmail.com wrote:
 T420: went from 311seconds (default postgresql.conf) to 195seconds doing
 tuning adjustments over RAID, kernel and postgresql.conf;
 MacMini: 40seconds.
 
 I'am afraid, the matter is, that PostgreSQL is not configured properly
 (and so do operating system and probably controller, however
 pg_test_fsync shows that things are not so bad there as with
 postgresql.conf).
 
 It is pretty useless to benchmark a database using out-of-the-box
 configuration. You need at least configure shared memory related,
 checkpoints-related and autovacuum-related settings. And as a first
 step, please compare postgresql.conf on Mac and on the server:
 sometimes (with some mac installers) default postgresql.conf can be
 not the same as on server.
 
 Best regards,
 Ilya
 
 
 --
 Ilya Kosmodemiansky,
 
 PostgreSQL-Consulting.com
 tel. +14084142500
 cell. +4915144336040
  i...@postgresql-consulting.com




Re: [PERFORM] postgres 9.3 vs. 9.4

2014-09-24 Thread Mkrtchyan, Tigran
Hi Merlin et al.

After building postgres 9.4 myself from source, I get the same performance as
with 9.3. The difference was in the value of the debug_assertions setting.

Now the next step: why does my 3-year-old laptop get 1.8x more tps than my
one-month-old server? And Mark Kirkwood's desktop gets 2x more tps as well.
Is there some special optimization for the i7 which does not work with the
Intel(R) Xeon(R) CPU E5-2660?
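
One thing worth checking, since a single pgbench client is mostly sensitive to
per-core clock speed, is what the two CPUs actually run at under load (a sketch;
the sysfs path assumes cpufreq is active):

# nominal model and the governor in use
lscpu | grep -i 'model name'
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

# watch the real per-core frequency while pgbench runs
watch -n1 "grep MHz /proc/cpuinfo"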


Thanks,
  Tigran.

- Original Message -
 From: Merlin Moncure mmonc...@gmail.com
 To: Tigran Mkrtchyan tigran.mkrtch...@desy.de
 Cc: Mark Kirkwood mark.kirkw...@catalyst.net.nz, postgres performance 
 list pgsql-performance@postgresql.org
 Sent: Tuesday, September 23, 2014 4:21:13 PM
 Subject: Re: [PERFORM] postgres 9.3 vs. 9.4
 
 On Tue, Sep 23, 2014 at 7:58 AM, Mkrtchyan, Tigran
 tigran.mkrtch...@desy.de wrote:
  Hi Merlin,
 
  you are right, in 9.4 the debug_assertions are on:
 
  # /etc/init.d/postgresql-9.4 start
  Starting postgresql-9.4 service:   [  OK  ]
  # psql -U postgres
  psql (9.4beta2)
  Type help for help.
 
  postgres=# select name,setting from pg_settings where
  name='debug_assertions';
 name   | setting
  --+-
   debug_assertions | on
  (1 row)
 
 
 (plz try to not top-post).
 
 That's not not really unexpected: 9.4 is still in beta.  If you're
 just doing raw performance testing consider building a postgres
 instance from source (but, instead of compiling into /usr/local/bin,
 I'd keep it all in private user folder for easy removal).
 
 For example, if I downloaded the source into /home/mmoncure/pgdev/src,
 i'd approximately do:
 
 cd /home/mmoncure/pgdev/src
 ./configure --prefix=/home/mmoncure/pgdev
 # if configure gripes about missing readline, go grab the
 libreadline-dev rpm etc and repeat above
 make -j4  make install
 export PATH=/home/mmoncure/pgdev/bin:$PATH
 export PGDATA=/home/mmoncure/pgdev/data
 # use C locale.  may not be appropriate in your case
 initdb --no-locale --encoding=UTF8
 pg_ctl start
 
 This should suffice any beta performance testing you need to do.  When
 9.4 proper comes out, just stop the database and kill the pgdev folder
 (taking a backup first if you need to preserve stuff).
 
 merlin
 
 
 




Re: [PERFORM] postgres 9.3 vs. 9.4

2014-09-24 Thread Mkrtchyan, Tigran

With pg_test_timing I can see that the timing overhead is 48 ns on my server and 32 ns
on the laptop.
What causes this difference, and does it have any influence on the overall performance?
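
For anyone who wants to compare their own numbers, a sketch (pg_test_timing ships
with the server packages; the path assumes the PGDG layout):

# measure the timing overhead for 3 seconds; run it on both machines
/usr/pgsql-9.4/bin/pg_test_timing -d 3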

Tigran.

- Original Message -
 From: Mark Kirkwood mark.kirkw...@catalyst.net.nz
 To: Tigran Mkrtchyan tigran.mkrtch...@desy.de, Merlin Moncure 
 mmonc...@gmail.com
 Cc: postgres performance list pgsql-performance@postgresql.org
 Sent: Wednesday, September 24, 2014 12:04:12 PM
 Subject: Re: [PERFORM] postgres 9.3 vs. 9.4
 
 On 24/09/14 21:23, Mkrtchyan, Tigran wrote:
  Hi Merlin et al.
 
  after building postgres 9.4 myself from sources I get the same performance
  as
  with 9.3. The difference was in the value of debug_assertions setting.
 
  Now the next step. Why my 3 years old laptop gets x1.8 times more tps than
  my one month old server?
  And Mark Kirkwood's desktop gets x2 times more tps as well? Is there some
  special optimization
  for i7 which does not work with Intel(R) Xeon(R) CPU E5-2660?
 
 
 
 Yes - firstly, nicely done re finding the assertions (my 9.4 beta2 was
 built from src - never thought to mention sorry)!
 
 I'd guess that you are seeing some bios setting re the p320 SSD - it
 *should* be seriously fast...but does not seem to be. You could try
 running some pure IO benchmarks to confirm this (e.g fio). Also see if
 the manual for however it is attached to the system allows for some
 optimized-for-ssd settings that tend to work better (altho these usually
 imply the drive is plugged into an adapter card of some kind - mind you
 your p320 *does* used a custom connector that does 2.5 SATA to PCIe
 style interconnect so I'd look to debug that first).
 
 Cheers
 
 Mark
 
 




Re: [PERFORM] postgres 9.3 vs. 9.4

2014-09-23 Thread Mkrtchyan, Tigran
Hi Merlin,

you are right, in 9.4 the debug_assertions are on:

# /etc/init.d/postgresql-9.4 start
Starting postgresql-9.4 service:   [  OK  ]
# psql -U postgres 
psql (9.4beta2)
Type "help" for help.

postgres=# select name,setting from pg_settings where name='debug_assertions';
   name   | setting 
--+-
 debug_assertions | on
(1 row)

postgres=# \q
# /etc/init.d/postgresql-9.4 stop
Stopping postgresql-9.4 service:   [  OK  ]
# /etc/init.d/postgresql-9.3 start
Starting postgresql-9.3 service:   [  OK  ]
# psql -U postgres 
psql (9.4beta2, server 9.3.5)
Type "help" for help.

postgres=# select name,setting from pg_settings where name='debug_assertions';
   name   | setting 
--+-
 debug_assertions | off
(1 row)

postgres=# \q
# 


The RPMs are coming from the official Postgres repo:

http://yum.postgresql.org/9.4/redhat/rhel-$releasever-$basearch


Tigran.


- Original Message -
 From: Merlin Moncure mmonc...@gmail.com
 To: Mark Kirkwood mark.kirkw...@catalyst.net.nz
 Cc: Tigran Mkrtchyan tigran.mkrtch...@desy.de, postgres performance 
 list pgsql-performance@postgresql.org
 Sent: Monday, September 22, 2014 3:37:50 PM
 Subject: Re: [PERFORM] postgres 9.3 vs. 9.4
 
 On Fri, Sep 19, 2014 at 6:58 PM, Mark Kirkwood
 mark.kirkw...@catalyst.net.nz wrote:
  On 19/09/14 19:24, Mkrtchyan, Tigran wrote:
 
 
 
  - Original Message -
 
  From: Mark Kirkwood mark.kirkw...@catalyst.net.nz
  To: Tigran Mkrtchyan tigran.mkrtch...@desy.de
  Cc: Merlin Moncure mmonc...@gmail.com, postgres performance list
  pgsql-performance@postgresql.org
  Sent: Friday, September 19, 2014 8:26:27 AM
  Subject: Re: [PERFORM] postgres 9.3 vs. 9.4
 
  On 19/09/14 17:53, Mkrtchyan, Tigran wrote:
 
 
 
  - Original Message -
 
  From: Mark Kirkwood mark.kirkw...@catalyst.net.nz
 
 
  Further to the confusion, here's my 9.3 vs 9.4 on two M550 (one for 9.3
  one for 9.4), see below for results.
 
  I'm running xfs on them with trim/discard enabled:
 
  $ mount|grep pg
  /dev/sdd4 on /mnt/pg94 type xfs (rw,discard)
  /dev/sdc4 on /mnt/pg93 type xfs (rw,discard)
 
 
  I'm *not* seeing any significant difference between 9.3 and 9.4, and
  the
  numbers are both about 2x your best number, which is food for thought
  (those P320's should toast my M550 for write performance...).
 
 
  cool! any details on OS and other options? I still get the same numbers
  as before.
 
 
  Sorry, Ubuntu 14.04 on a single socket i7 3.4 Ghz, 16G (i.e my
  workstation).
 
  I saw the suggestion that Didier made to run 9.3 on the SSD that you
  were using for 9.4, and see if it suddenly goes slow - then we'd know
  it's something about the disk (or filesystem/mount options). Can you
  test this?
 
 
 
  swapping the disks did not change the results.
 
 
 
  Do you mean that 9.3 was still faster using the disk that 9.4 had used? If
  so that strongly suggests that there is something you have configured
  differently in the 9.4 installation [1]. Not wanting to sound mean - but it
  is really easy to accidentally connect to the wrong instance when there are
  two on the same box (ahem, yes , done it myself). So perhaps another look
  at
  the 9.4 vs 9.3 setup (or even posti the config files postgresql.conf +
  postgresql.auto.conf for 9.4 here).
 
 Huh.  Where did the 9.4 build come from?  I wonder if there are some
 debugging options set.   Can you check 9.4 pg_settings for value
 ofdebug_assertions?   If it's set true, you might want to consider
 hand compiling postgres until 9.4 is released...
 
 merlin
 




Re: [PERFORM] postgres 9.3 vs. 9.4

2014-09-19 Thread Mkrtchyan, Tigran


- Original Message -
 From: Mark Kirkwood mark.kirkw...@catalyst.net.nz
 To: Tigran Mkrtchyan tigran.mkrtch...@desy.de
 Cc: Merlin Moncure mmonc...@gmail.com, postgres performance list 
 pgsql-performance@postgresql.org
 Sent: Friday, September 19, 2014 8:26:27 AM
 Subject: Re: [PERFORM] postgres 9.3 vs. 9.4
 
 On 19/09/14 17:53, Mkrtchyan, Tigran wrote:
 
 
  - Original Message -
  From: Mark Kirkwood mark.kirkw...@catalyst.net.nz
 
  Further to the confusion, here's my 9.3 vs 9.4 on two M550 (one for 9.3
  one for 9.4), see below for results.
 
  I'm running xfs on them with trim/discard enabled:
 
  $ mount|grep pg
  /dev/sdd4 on /mnt/pg94 type xfs (rw,discard)
  /dev/sdc4 on /mnt/pg93 type xfs (rw,discard)
 
 
  I'm *not* seeing any significant difference between 9.3 and 9.4, and the
  numbers are both about 2x your best number, which is food for thought
  (those P320's should toast my M550 for write performance...).
 
  cool! any details on OS and other options? I still get the same numbers
  as before.
 
 
 Sorry, Ubuntu 14.04 on a single socket i7 3.4 Ghz, 16G (i.e my workstation).
 
 I saw the suggestion that Didier made to run 9.3 on the SSD that you
 were using for 9.4, and see if it suddenly goes slow - then we'd know
 it's something about the disk (or filesystem/mount options). Can you
 test this?


Swapping the disks did not change the results.
Nevertheless, I ran the same test on my Fedora 20 laptop
(8GB RAM, i7 2.2GHz) and got 2600 tps! I am totally
confused now! Is it the kernel version? libc?


Tigran.
 
 Cheers
 
 Mark
 




[PERFORM] postgres 9.3 vs. 9.4

2014-09-18 Thread Mkrtchyan, Tigran


Hi Folk, 

I am trying to investigate some performance issues which we have with postgres
(a different topic by itself) and tried postgres 9.4beta2, hoping that it would
perform better.

It turned out that 9.4 is 2x slower than 9.3.5 on the same hardware.

Some technical details:

  Host: rhel 6.5 2.6.32-431.23.3.el6.x86_64
  256 GB RAM, 40 cores, Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz
  2x160GB PCIe SSD DELL_P320h-MTFDGAL175SAH (9.3 on one, 9.4 on the other)

postgres tweaks:


default_statistics_target = 100
wal_writer_delay = 10s
vacuum_cost_delay = 50
synchronous_commit = off
maintenance_work_mem = 2GB
checkpoint_completion_target = 0.9
effective_cache_size = 94GB
work_mem = 402MB
wal_buffers = 16MB
checkpoint_segments = 64
shared_buffers = 8GB
max_connections = 100
random_page_cost = 1.5
# other goodies
log_line_prefix = '%m %d %u %r %%'
log_temp_files = 0
log_min_duration_statement = 5

in both cases databases are fresh - no data.

Here is a results with pgbench.


9.3.5:

# /usr/pgsql-9.3/bin/pgbench -r -j 1 -c 1 -T 60
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
query mode: simple
number of clients: 1
number of threads: 1
duration: 60 s
number of transactions actually processed: 96361
tps = 1605.972262 (including connections establishing)
tps = 1606.064501 (excluding connections establishing)
statement latencies in milliseconds:
0.001391\set nbranches 1 * :scale
0.000473\set ntellers 10 * :scale
0.000430\set naccounts 10 * :scale
0.000533\setrandom aid 1 :naccounts
0.000393\setrandom bid 1 :nbranches
0.000468\setrandom tid 1 :ntellers
0.000447\setrandom delta -5000 5000
0.025161BEGIN;
0.131317UPDATE pgbench_accounts SET abalance = abalance + 
:delta WHERE aid = :aid;
0.100211SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
0.117406UPDATE pgbench_tellers SET tbalance = tbalance + :delta 
WHERE tid = :tid;
0.114332UPDATE pgbench_branches SET bbalance = bbalance + 
:delta WHERE bid = :bid;
0.086660INSERT INTO pgbench_history (tid, bid, aid, delta, 
mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
0.035940END;


9.4beta2:

# /usr/pgsql-9.3/bin/pgbench -r -j 1 -c 1 -T 60
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
query mode: simple
number of clients: 1
number of threads: 1
duration: 60 s
number of transactions actually processed: 34017
tps = 566.948384 (including connections establishing)
tps = 567.008666 (excluding connections establishing)
statement latencies in milliseconds:
0.001879\set nbranches 1 * :scale
0.000526\set ntellers 10 * :scale
0.000490\set naccounts 10 * :scale
0.000595\setrandom aid 1 :naccounts
0.000421\setrandom bid 1 :nbranches
0.000480\setrandom tid 1 :ntellers
0.000484\setrandom delta -5000 5000
0.055047BEGIN;
0.172179UPDATE pgbench_accounts SET abalance = abalance + 
:delta WHERE aid = :aid;
0.135392SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
0.157224UPDATE pgbench_tellers SET tbalance = tbalance + :delta 
WHERE tid = :tid;
0.147969UPDATE pgbench_branches SET bbalance = bbalance + 
:delta WHERE bid = :bid;
0.123001INSERT INTO pgbench_history (tid, bid, aid, delta, 
mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
0.957854END;

any ideas?

Tigran.




Re: [PERFORM] postgres 9.3 vs. 9.4

2014-09-18 Thread Mkrtchyan, Tigran


- Original Message -
 From: Mark Kirkwood mark.kirkw...@catalyst.net.nz
 To: Tigran Mkrtchyan tigran.mkrtch...@desy.de, 
 pgsql-performance@postgresql.org
 Sent: Thursday, September 18, 2014 12:17:45 PM
 Subject: Re: [PERFORM] postgres 9.3 vs. 9.4
 
 On 18/09/14 21:58, Mkrtchyan, Tigran wrote:
 
 
  Hi Folk,
 
  I am trying to investigate some performance issues which we have with
  postgres
  (a different topic by itself) and tried postgres.9.4beta2, with a hope that
  it
  perform better.
 
  Turned out that 9.4 is 2x slower than 9.3.5 on the same hardware.
 
  Some technical details:
 
 Host: rhel 6.5 2.6.32-431.23.3.el6.x86_64
 256 GB RAM, 40 cores, Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz
 2x160GB  PCIe SSD DELL_P320h-MTFDGAL175SAH ( on one 9.3, on an other one
 9.4 )
 
  postgres tweaks:
 
 
  default_statistics_target = 100
  wal_writer_delay = 10s
  vacuum_cost_delay = 50
  synchronous_commit = off
  maintenance_work_mem = 2GB
  checkpoint_completion_target = 0.9
  effective_cache_size = 94GB
  work_mem = 402MB
  wal_buffers = 16MB
  checkpoint_segments = 64
  shared_buffers = 8GB
  max_connections = 100
  random_page_cost = 1.5
  # other goodies
  log_line_prefix = '%m %d %u %r %%'
  log_temp_files = 0
  log_min_duration_statement = 5
 
  in both cases databases are fresh - no data.
 
  Here is a results with pgbench.
 
 
  9.3.5:
 
  # /usr/pgsql-9.3/bin/pgbench -r -j 1 -c 1 -T 60
  starting vacuum...end.
  transaction type: TPC-B (sort of)
  scaling factor: 1
  query mode: simple
  number of clients: 1
  number of threads: 1
  duration: 60 s
  number of transactions actually processed: 96361
  tps = 1605.972262 (including connections establishing)
  tps = 1606.064501 (excluding connections establishing)
  statement latencies in milliseconds:
  0.001391\set nbranches 1 * :scale
  0.000473\set ntellers 10 * :scale
  0.000430\set naccounts 10 * :scale
  0.000533\setrandom aid 1 :naccounts
  0.000393\setrandom bid 1 :nbranches
  0.000468\setrandom tid 1 :ntellers
  0.000447\setrandom delta -5000 5000
  0.025161BEGIN;
  0.131317UPDATE pgbench_accounts SET abalance = abalance + 
  :delta WHERE
  aid = :aid;
  0.100211SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
  0.117406UPDATE pgbench_tellers SET tbalance = tbalance + :delta 
  WHERE tid
  = :tid;
  0.114332UPDATE pgbench_branches SET bbalance = bbalance + 
  :delta WHERE
  bid = :bid;
  0.086660INSERT INTO pgbench_history (tid, bid, aid, delta, 
  mtime) VALUES
  (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
  0.035940END;
 
 
  9.4beta2:
 
  # /usr/pgsql-9.3/bin/pgbench -r -j 1 -c 1 -T 60
  starting vacuum...end.
  transaction type: TPC-B (sort of)
  scaling factor: 1
  query mode: simple
  number of clients: 1
  number of threads: 1
  duration: 60 s
  number of transactions actually processed: 34017
  tps = 566.948384 (including connections establishing)
  tps = 567.008666 (excluding connections establishing)
  statement latencies in milliseconds:
  0.001879\set nbranches 1 * :scale
  0.000526\set ntellers 10 * :scale
  0.000490\set naccounts 10 * :scale
  0.000595\setrandom aid 1 :naccounts
  0.000421\setrandom bid 1 :nbranches
  0.000480\setrandom tid 1 :ntellers
  0.000484\setrandom delta -5000 5000
  0.055047BEGIN;
  0.172179UPDATE pgbench_accounts SET abalance = abalance + 
  :delta WHERE
  aid = :aid;
  0.135392SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
  0.157224UPDATE pgbench_tellers SET tbalance = tbalance + :delta 
  WHERE tid
  = :tid;
  0.147969UPDATE pgbench_branches SET bbalance = bbalance + 
  :delta WHERE
  bid = :bid;
  0.123001INSERT INTO pgbench_history (tid, bid, aid, delta, 
  mtime) VALUES
  (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
  0.957854END;
 
  any ideas?
 
 
 Hi Tigran,
 
 Some ideas:
 
 60s is too short for reliable results (default settings for checkpoints
 is 300s so 600s is the typical elapsed time to get reasonably repeatable
 numbers (to ensure you get about 1 checkpoint in your run). In addition
 I usually do
 
 psql !
 CHECKPOINT;
 !
 
 Plus
 
 $ sleep 10
 
 before each run so that I've got some confidence that we are starting
 from approximately the same state each time (and getting hopefully only
 *one* checkpoint per run)!


Sure, I can run longer tests with longer breaks in between.
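
(Roughly the protocol Mark describes, as a sketch; the paths assume the PGDG layout
and connection options are omitted, so point each run at the right instance:)

# force a checkpoint and let things settle before each timed run
psql -U postgres -c "CHECKPOINT;"
sleep 10
/usr/pgsql-9.3/bin/pgbench -r -j 1 -c 1 -T 600

psql -U postgres -c "CHECKPOINT;"
sleep 10
/usr/pgsql-9.4/bin/pgbench -r -j 1 -c 1 -T 600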






9.3.5

# /usr/pgsql-9.3/bin/pgbench -r -j 1 -c 1 -T 600
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
query mode: simple
number of clients: 1
number of threads: 1
duration: 600 s
number of transactions actually processed: 1037297
tps = 1728.826406 (including

Re: [PERFORM] postgres 9.3 vs. 9.4

2014-09-18 Thread Mkrtchyan, Tigran


- Original Message -
 From: Jeff Janes jeff.ja...@gmail.com
 To: Tigran Mkrtchyan tigran.mkrtch...@desy.de
 Cc: pgsql-performance@postgresql.org
 Sent: Thursday, September 18, 2014 4:56:22 PM
 Subject: Re: [PERFORM] postgres 9.3 vs. 9.4
 
 On Thu, Sep 18, 2014 at 2:58 AM, Mkrtchyan, Tigran tigran.mkrtch...@desy.de
  wrote:
 
 
 
  Hi Folk,
 
  I am trying to investigate some performance issues which we have with
  postgres
  (a different topic by itself) and tried postgres.9.4beta2, with a hope
  that it
  perform better.
 
  Turned out that 9.4 is 2x slower than 9.3.5 on the same hardware.
 
  Some technical details:
 
Host: rhel 6.5 2.6.32-431.23.3.el6.x86_64
256 GB RAM, 40 cores, Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz
2x160GB  PCIe SSD DELL_P320h-MTFDGAL175SAH ( on one 9.3, on an other one
  9.4 )
 
 
 Why are the versions segregated that way?  Are you sure they are configured
 identically?


Yes, they are configured identically.
 
 
 
  postgres tweaks:
 
 
  default_statistics_target = 100
  wal_writer_delay = 10s
  vacuum_cost_delay = 50
  synchronous_commit = off
 
 
 Are you sure that synchronous_commit is actually off on the 9.4 instance?


yes, synchronous_commit is off.

 
 9.3.5:
 
  # /usr/pgsql-9.3/bin/pgbench -r -j 1 -c 1 -T 60
 
 
 ...
 
 
 0.035940END;
 
 
  9.4beta2:
 
 ...
 
  0.957854END;
 
 
 Looks like IO.

Postgres internal IO? Maybe. We get 600MB/s on these SSDs.


Tigran.

 
 Cheers,
 
 Jeff
 




Re: [PERFORM] postgres 9.3 vs. 9.4

2014-09-18 Thread Mkrtchyan, Tigran


- Original Message -
 From: Josh Berkus j...@agliodbs.com
 To: pgsql-performance@postgresql.org
 Sent: Thursday, September 18, 2014 7:54:24 PM
 Subject: Re: [PERFORM] postgres 9.3 vs. 9.4
 
 On 09/18/2014 08:09 AM, Mkrtchyan, Tigran wrote:
  9.4beta2:
   
   ...
   
0.957854END;
   
   
   Looks like IO.
  Postgres internal IO? May be. We get 600MB/s on this SSDs.
 
 While it's possible that this is a Postgres issue, my first thought is
 that the two SSDs are not actually identical.  The 9.4 one may either
 have a fault, or may be mostly full and heavily fragmented.  Or the Dell
 PCIe card may have an issue.


We have tested both SSDs and they have identical IO characteristics, and
as I already mentioned, both databases are fresh, including the filesystems.

 
 You are using scale 1 which is a  1MB database, and one client and 1
 thread, which is an interesting test I wouldn't necessarily have done
 myself.  I'll throw the same test on one of my machines and see how it does.

This scenario corresponds to our use case. We need a high transaction rate
for a single client. Currently I can get only ~1500 tps. Unfortunately,
Postgres does not tell me where the bottleneck is. It is definitely
not the disk IO.


Thanks for the help,
Tigran.

 
 --
 Josh Berkus
 PostgreSQL Experts Inc.
 http://pgexperts.com
 
 
 




Re: [PERFORM] postgres 9.3 vs. 9.4

2014-09-18 Thread Mkrtchyan, Tigran

On Sep 18, 2014 9:32 PM, Andrew Dunstan and...@dunslane.net wrote:


 On 09/18/2014 03:09 PM, Mkrtchyan, Tigran wrote: 
  
  - Original Message - 
  From: Josh Berkus j...@agliodbs.com 
  To: pgsql-performance@postgresql.org 
  Sent: Thursday, September 18, 2014 7:54:24 PM 
  Subject: Re: [PERFORM] postgres 9.3 vs. 9.4 
  
  On 09/18/2014 08:09 AM, Mkrtchyan, Tigran wrote: 
  9.4beta2: 
  ... 
  
   0.957854    END; 
  
  Looks like IO. 
  Postgres internal IO? May be. We get 600MB/s on this SSDs. 
  While it's possible that this is a Postgres issue, my first thought is 
  that the two SSDs are not actually identical.  The 9.4 one may either 
  have a fault, or may be mostly full and heavily fragmented.  Or the Dell 
  PCIe card may have an issue. 
  
  We have tested both SSDs and they have identical IO characteristics and 
  as I already mentioned, both databases are fresh, including filesystem. 
  
  You are using scale 1 which is a  1MB database, and one client and 1 
  thread, which is an interesting test I wouldn't necessarily have done 
  myself.  I'll throw the same test on one of my machines and see how it 
  does. 
  this scenario corresponds to our use case. We need a high transaction rate 
  per for a single client. Currently I can get only ~1500 tps. Unfortunately, 
  posgtress does not tell me where the bottleneck is. Is this is defensively 
  not the disk IO. 
  
  
  


 This is when you dig out tools like perf, maybe.

Do you have a better suggestion?
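
(For the record, the kind of perf invocation this presumably means; a sketch that
assumes the pgbench connection reports application_name = 'pgbench':)

# find the backend serving the pgbench connection
PID=$(psql -U postgres -Atc "SELECT pid FROM pg_stat_activity WHERE application_name = 'pgbench' LIMIT 1")

# sample it with call graphs for 30 seconds, then look at the top symbols
perf record -g -p "$PID" -- sleep 30
perf report --stdio | head -40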


 cheers 

 andrew 



Re: [PERFORM] postgres 9.3 vs. 9.4

2014-09-18 Thread Mkrtchyan, Tigran


- Original Message -
 From: Merlin Moncure mmonc...@gmail.com
 To: Tigran Mkrtchyan tigran.mkrtch...@desy.de
 Cc: postgres performance list pgsql-performance@postgresql.org
 Sent: Thursday, September 18, 2014 10:32:20 PM
 Subject: Re: [PERFORM] postgres 9.3 vs. 9.4
 
 On Thu, Sep 18, 2014 at 4:58 AM, Mkrtchyan, Tigran
 tigran.mkrtch...@desy.de wrote:
 
  9.3.5:
  0.035940END;
 
 
  9.4beta2:
  0.957854END;
 
 
 time being spent on 'END' is definitely suggesting i/o related issues.
 This is making me very skeptical that postgres is the source of the
 problem.   I also thing synchronous_commit is not set properly on the
 new instance (or possibly there is a bug or some such).  Can you
 verify via:
 
 select * from pg_settings where name = 'synchronous_commit';
 
 on both servers?


here you are:

9.4beta2
postgres=# select version();
                                                     version
------------------------------------------------------------------------------------------------------------------
 PostgreSQL 9.4beta2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-4), 64-bit
(1 row)

postgres=# select * from pg_settings where name = 'synchronous_commit';
 name               | setting | unit | category                   | short_desc                                             | extra_desc | context | vartype | source             | min_val | max_val | enumvals                    | boot_val | reset_val | sourcefile                              | sourceline
--------------------+---------+------+----------------------------+--------------------------------------------------------+------------+---------+---------+--------------------+---------+---------+-----------------------------+----------+-----------+-----------------------------------------+------------
 synchronous_commit | off     |      | Write-Ahead Log / Settings | Sets the current transaction's synchronization level. |            | user    | enum    | configuration file |         |         | {local,remote_write,on,off} | on       | off       | /var/lib/pgsql/9.4/data/postgresql.conf |        622
(1 row)


9.3.5
postgres=# select version();
                                                   version
------------------------------------------------------------------------------------------------------------
 PostgreSQL 9.3.5 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-4), 64-bit
(1 row)

postgres=# select * from pg_settings where name = 'synchronous_commit';
 name               | setting | unit | category                   | short_desc                                             | extra_desc | context | vartype | source             | min_val | max_val | enumvals                    | boot_val | reset_val | sourcefile                              | sourceline
--------------------+---------+------+----------------------------+--------------------------------------------------------+------------+---------+---------+--------------------+---------+---------+-----------------------------+----------+-----------+-----------------------------------------+------------
 synchronous_commit | off     |      | Write-Ahead Log / Settings | Sets the current transaction's synchronization level. |            | user    | enum    | configuration file |         |         | {local,remote_write,on,off} | on       | off       | /var/lib/pgsql/9.3/data/postgresql.conf |        166
(1 row)




 
 What is iowait?  For pci-e SSD, these drives don't seem very fast...

iostat, top and pg_top never show iowait greater than  0.7%

Tigran.

 
 merlin
 




Re: [PERFORM] postgres 9.3 vs. 9.4

2014-09-18 Thread Mkrtchyan, Tigran


- Original Message -
 From: Mark Kirkwood mark.kirkw...@catalyst.net.nz
 To: Merlin Moncure mmonc...@gmail.com, Tigran Mkrtchyan 
 tigran.mkrtch...@desy.de
 Cc: postgres performance list pgsql-performance@postgresql.org
 Sent: Thursday, September 18, 2014 10:56:36 PM
 Subject: Re: [PERFORM] postgres 9.3 vs. 9.4
 
 On 19/09/14 08:32, Merlin Moncure wrote:
  On Thu, Sep 18, 2014 at 4:58 AM, Mkrtchyan, Tigran
  tigran.mkrtch...@desy.de wrote:
 
  9.3.5:
   0.035940END;
 
 
  9.4beta2:
   0.957854END;
 
 
  time being spent on 'END' is definitely suggesting i/o related issues.
  This is making me very skeptical that postgres is the source of the
  problem.   I also thing synchronous_commit is not set properly on the
  new instance (or possibly there is a bug or some such).  Can you
  verify via:
 
  select * from pg_settings where name = 'synchronous_commit';
 
  on both servers?
 
 
 Yes, does look suspicious. It *could* be that the 9.4 case is getting
 unlucky and checkpointing just before the end of the 60s run, and 9.3
 isn't.

A 10-minute run had the same results.

Is there some kind of statistics which can tell where the time is spent?
Or is the only way to run on Solaris with dtrace? For me it's more important
to find out why I get only 1500 tps with 9.3. The test with 9.4 was just a hope for
a magic code change that would give me better performance.
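
One statistic that can be enabled without dtrace is per-statement I/O timing
(a sketch; track_io_timing needs only a reload, and pg_stat_statements has to be
in shared_preload_libraries for the second query to work):

# enable I/O timing for all sessions
echo "track_io_timing = on" >> /var/lib/pgsql/9.4/data/postgresql.conf
psql -U postgres -c "SELECT pg_reload_conf();"

# after another pgbench run: how much of each statement's time is block I/O
psql -U postgres -c "
  SELECT substr(query, 1, 40) AS query, calls,
         total_time, blk_read_time, blk_write_time
  FROM pg_stat_statements
  ORDER BY total_time DESC
  LIMIT 5;"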

Tigran.


 
  What is iowait?  For pci-e SSD, these drives don't seem very fast...
 
 
 
 
 These look like rebranded Micron P320's and should be extremely
 fast...However I note that my Crucial/Micron M550's are very fast for
 most writes *but* are much slower for sync writes (and fsync) that
 happen at commit...
 
 Cheers
 
 Mark
 
 
 




Re: [PERFORM] postgres 9.3 vs. 9.4

2014-09-18 Thread Mkrtchyan, Tigran


- Original Message -
 From: Mark Kirkwood mark.kirkw...@catalyst.net.nz
 To: Tigran Mkrtchyan tigran.mkrtch...@desy.de
 Cc: Merlin Moncure mmonc...@gmail.com, postgres performance list 
 pgsql-performance@postgresql.org
 Sent: Friday, September 19, 2014 12:49:05 AM
 Subject: Re: [PERFORM] postgres 9.3 vs. 9.4
 
 On 19/09/14 10:16, Mark Kirkwood wrote:
  On 19/09/14 09:10, Mkrtchyan, Tigran wrote:
 
 
  - Original Message -
  From: Mark Kirkwood mark.kirkw...@catalyst.net.nz
  To: Merlin Moncure mmonc...@gmail.com, Tigran Mkrtchyan
  tigran.mkrtch...@desy.de
  Cc: postgres performance list pgsql-performance@postgresql.org
  Sent: Thursday, September 18, 2014 10:56:36 PM
  Subject: Re: [PERFORM] postgres 9.3 vs. 9.4
 
  On 19/09/14 08:32, Merlin Moncure wrote:
  On Thu, Sep 18, 2014 at 4:58 AM, Mkrtchyan, Tigran
  tigran.mkrtch...@desy.de wrote:
 
  9.3.5:
0.035940END;
 
 
  9.4beta2:
0.957854END;
 
 
  time being spent on 'END' is definitely suggesting i/o related issues.
  This is making me very skeptical that postgres is the source of the
  problem.   I also thing synchronous_commit is not set properly on the
  new instance (or possibly there is a bug or some such).  Can you
  verify via:
 
  select * from pg_settings where name = 'synchronous_commit';
 
  on both servers?
 
 
  Yes, does look suspicious. It *could* be that the 9.4 case is getting
  unlucky and checkpointing just before the end of the 60s run, and 9.3
  isn't.
 
  10 minutes run had the same results.
 
  Is there some kind of statistics which can tell there time is spend?
  Or the only way is to run on solaris with dtrace? For me it's more
  important
  to find why I get only 1500tps with 9.3. The test with 9.4 was just a
  hope for
  a magic code change that will give me a better performance.
 
 
 
  Interesting. With respect to dtrace, you can use systemtap on Linux to
  achieve similar things.
 
  However before getting too carried away with that - we already *know*
  that 9.4 is spending longer in END (i.e commit) than 9.3 is. I'd
  recommend you see what wal_sync_method is set to on both systems. If it
  is the same, then my suspicion is that one of the SSD's needs to be
  trimmed [1]. You can do this by running:
 
  $ fstrim /mountpoint
 
  Also - are you using the same filesystem and mount options on each SSD?
 
  Cheers
 
  Mark
 
  [1] if fact, for the paranoid - I usually secure erase any SSD before
  performance testing, and then check the SMART counters too...
 
 
 Further to the confusion, here's my 9.3 vs 9.4 on two M550 (one for 9.3
 one for 9.4), see below for results.
 
 I'm running xfs on them with trim/discard enabled:
 
 $ mount|grep pg
 /dev/sdd4 on /mnt/pg94 type xfs (rw,discard)
 /dev/sdc4 on /mnt/pg93 type xfs (rw,discard)
 
 
 I'm *not* seeing any significant difference between 9.3 and 9.4, and the
 numbers are both about 2x your best number, which is food for thought
 (those P320's should toast my M550 for write performance...).

Cool! Any details on OS and other options? I still get the same numbers
as before.

Tigran.

 
 
 9.3:
 
 $ pgbench -r -j 1 -c 1 -T 60 bench
 starting vacuum...end.
 transaction type: TPC-B (sort of)
 scaling factor: 1
 query mode: simple
 number of clients: 1
 number of threads: 1
 duration: 60 s
 number of transactions actually processed: 194615
 tps = 3243.567115 (including connections establishing)
 tps = 3243.771688 (excluding connections establishing)
 statement latencies in milliseconds:
   0.000798\set nbranches 1 * :scale
   0.000302\set ntellers 10 * :scale
   0.000276\set naccounts 10 * :scale
   0.000330\setrandom aid 1 :naccounts
   0.000265\setrandom bid 1 :nbranches
   0.000278\setrandom tid 1 :ntellers
   0.000298\setrandom delta -5000 5000
   0.012818BEGIN;
   0.065403UPDATE pgbench_accounts SET abalance = abalance + 
 :delta WHERE
 aid = :aid;
   0.048516SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
   0.058343UPDATE pgbench_tellers SET tbalance = tbalance + :delta 
 WHERE
 tid = :tid;
   0.057763UPDATE pgbench_branches SET bbalance = bbalance + 
 :delta WHERE
 bid = :bid;
   0.043293INSERT INTO pgbench_history (tid, bid, aid, delta, 
 mtime)
 VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
   0.017087END;
 
 
 9.4:
 
 $ pgbench -r -j 1 -c 1 -T 60 bench
 starting vacuum...end.
 transaction type: TPC-B (sort of)
 scaling factor: 1
 query mode: simple
 number of clients: 1
 number of threads: 1
 duration: 60 s
 number of transactions actually processed: 194130
 latency average: 0.309 ms
 tps = 3235.488190 (including connections establishing)
 tps = 3235.560235 (excluding connections establishing)
 statement latencies in milliseconds:
   0.000460\set nbranches 1 * :scale
   0.000231\set ntellers 10 * :scale