RE: Performance of a Query

2018-01-09 Thread Danylo Hlynskyi
Can you try to extract the filter part as a CTE? Like: with filtered as (select ... where policyid = 123456) select ... (here comes the original query, but using the filtered table instead). On Jan 10, 2018 at 1:10, "Kumar, Virendra" <virendra.ku...@guycarp.com> wrote: It did not seem to help. See
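
A minimal sketch of that CTE rewrite, with hypothetical table and column names (the real query is only in the attachment); note that in PostgreSQL 10 a CTE is an optimization fence, so the filter is materialized before the rest of the query is planned:

  -- "policy" and "policy_agg" are placeholder names, not from the thread
  WITH filtered AS (
      SELECT * FROM policy WHERE policyid = 123456
  )
  SELECT f.policyid, sum(a.amount) AS total
  FROM filtered f
  JOIN policy_agg a ON a.policyid = f.policyid
  GROUP BY f.policyid;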

Re: Need Help on wal_compression

2018-01-09 Thread Michael Paquier
On Tue, Jan 09, 2018 at 01:53:14PM -0300, Claudio Freire wrote: > That shows that 50% of it is full-page writes. This is with compression enabled. WAL compression will only help FPW, so if you don't have a large volume of FPW, or they don't compress well, you won't benefit much. This highly
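
A hedged sketch of how to measure the full-page-write share of WAL volume, using the pg_waldump tool shipped with PostgreSQL 10; the segment name below is only an example:

  # prints per-record-type totals, including the FPI (full-page image) byte share
  pg_waldump --stats=record 000000010000000000000001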

Re: Performance of a Query

2018-01-09 Thread Scott Marlowe
On Tue, Jan 9, 2018 at 4:09 PM, Kumar, Virendra wrote: > It did not seem to help. See attachment. Yeah, while it's still writing, it's about half as much, but most of the time seems to be spent merging multiple data sets. I'm wondering what non-default values you
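
One way to list the non-default settings in play (a small sketch, not taken from the thread itself):

  -- shows every parameter whose value does not come from the built-in default
  SELECT name, setting, source
  FROM pg_settings
  WHERE source NOT IN ('default', 'override');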

Re: Performance of a Query

2018-01-09 Thread Scott Marlowe
On Tue, Jan 9, 2018 at 3:25 PM, Kumar, Virendra wrote: > Thank you Scott! I currently have work_mem set to 4MB, shared_buffers to 8GB, and hugepages on. I gradually increased work_mem to 1GB but it did not help a bit. Am I missing something obvious? > From: Scott

PG 9.5 2 tables same DDL with diff size

2018-01-09 Thread ghiureai
Hi List, I am trying to understand the following: I have 2 identical PG clusters on different hosts, same postgresql.conf, same db schema: same table DDL and row counts but different size (14GB difference). I ran reindex and vacuum full analyze, but I cannot decrease the size of the larger
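
A hedged sketch for narrowing down where the extra 14GB lives (run on both clusters and compare heap, index, and total-including-TOAST sizes; 'mytable' is a placeholder name):

  SELECT pg_size_pretty(pg_relation_size('mytable'))       AS heap,
         pg_size_pretty(pg_indexes_size('mytable'))        AS indexes,
         pg_size_pretty(pg_total_relation_size('mytable')) AS total_incl_toast;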

RE: Performance of a Query

2018-01-09 Thread Kumar, Virendra
Thank you Scott! I currently have work_mem set to 4MB, shared_buffers to 8GB, and hugepages on. I gradually increased work_mem to 1GB but it did not help a bit. Am I missing something obvious? Regards, Virendra -Original Message- From: Scott Marlowe [mailto:scott.marl...@gmail.com] Sent:
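
A small sketch of testing a larger work_mem for one session only (the query itself is only in the attachment, so the SELECT below is just a stand-in); if EXPLAIN ANALYZE still reports "Sort Method: external merge", the sort is still spilling to disk:

  SET work_mem = '256MB';     -- session-local, does not touch postgresql.conf
  EXPLAIN (ANALYZE, BUFFERS)
  SELECT 1;                   -- placeholder for the attached query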

Re: Performance of a Query

2018-01-09 Thread Scott Marlowe
On Tue, Jan 9, 2018 at 2:18 PM, Kumar, Virendra wrote: > Hello Gurus, I am struggling to tune a query which is doing a join on top of an aggregate for around 3 million rows. The plan and SQL are attached to the email. Below are system details: PGSQL version –

Performance of a Query

2018-01-09 Thread Kumar, Virendra
Hello Gurus, I am struggling to tune a query which is doing a join on top of an aggregate for around 3 million rows. The plan and SQL are attached to the email. Below are system details: PGSQL version - 10.1; OS - RHEL 3.10.0-693.5.2.el7.x86_64; Binary - Downloaded from postgres.org, compiled and
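
A hedged sketch of the general shape being described (aggregate first, then join on top of it); the table and column names are placeholders, not taken from the attachment:

  SELECT d.policyid, d.name, agg.total
  FROM dim_policy d
  JOIN (SELECT policyid, sum(amount) AS total
        FROM fact_rows
        GROUP BY policyid) agg ON agg.policyid = d.policyid;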

Re: Need Help on wal_compression

2018-01-09 Thread Scott Marlowe
On Mon, Jan 8, 2018 at 11:53 PM, Rambabu V wrote: > Hi Team, around 4000 archive files are generated daily and these are occupying more space; we are trying to compress WAL files using the wal_compression parameter, but we are not seeing any change in the WAL file count,

Re: Batch insert heavily affecting query performance.

2018-01-09 Thread Claudio Freire
On Wed, Dec 27, 2017 at 2:10 PM, Mike Sofen wrote: > In my experience, that 77ms will stay quite constant even if your db grew to 1TB. Postgres IS amazing. BTW, for a db, you should always have provisioned IOPS or else your performance can vary wildly, since the SSDs

Re: Need Help on wal_compression

2018-01-09 Thread Claudio Freire
On Tue, Jan 9, 2018 at 3:53 AM, Rambabu V wrote: > Hi Team, around 4000 archive files are generated daily and these are occupying more space; we are trying to compress WAL files using the wal_compression parameter, but we are not seeing any change in the WAL file count,

Need Help on wal_compression

2018-01-09 Thread Rambabu V
Hi Team, around 4000 archive files are generated daily and these are occupying more space. We are trying to compress WAL files using the wal_compression parameter, but we are not seeing any change in the WAL file count. Could you please help us on this?
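
For context, wal_compression compresses the full-page images inside WAL records; segments remain fixed-size files (16MB by default), so the number of archived files per day only drops if the total WAL volume actually shrinks. A minimal sketch of enabling it (the parameter can be changed with a reload):

  ALTER SYSTEM SET wal_compression = on;
  SELECT pg_reload_conf();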

Re: Updating a large table

2018-01-09 Thread Sergei Kornilov
Hello > 1. ALTER TABLE clusters ADD COLUMN "is_paid" BOOLEAN DEFAULT FALSE; This is wrong. To avoid a long lock on the large table you need DEFAULT NULL: ALTER TABLE clusters ADD COLUMN "is_paid" BOOLEAN DEFAULT NULL; A NULL default changes only the system catalog; a default with any non-null value will rewrite all
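
A hedged sketch of the full approach being described: add the column with a NULL default (a catalog-only change), then backfill in batches; the "id" column and the batch size are assumptions, not from the thread:

  ALTER TABLE clusters ADD COLUMN "is_paid" BOOLEAN DEFAULT NULL;  -- metadata-only, fast
  -- repeat until it updates 0 rows, to keep row locks and WAL bursts short
  UPDATE clusters SET is_paid = FALSE
  WHERE id IN (SELECT id FROM clusters WHERE is_paid IS NULL LIMIT 10000);
  -- optionally set the real default afterwards, so new rows get FALSE
  ALTER TABLE clusters ALTER COLUMN is_paid SET DEFAULT FALSE;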

Re: Updating a large table

2018-01-09 Thread Timokhin Maxim
Hello Tomas! Thank you for the useful answer! On 23.12.2017 at 23:58, "Tomas Vondra" wrote: > On 12/22/2017 05:46 PM, Timokhin Maxim wrote: >> Hello! We have a large table, 11GB (about 37 million records), and we need to alter the table - add a new column with a default