Re: [PERFORM] Postgres slave not catching up (on 9.2)

2014-11-10 Thread Ruben Domingo Gaspar Aparicio
Indeed I could save some IO with noatime. I must say I haven't found any
recommendation about mount options for PostgreSQL, likely because this is not
encouraged. The ones you see are taken from an Oracle cluster configuration
where several nodes see the same files. That's not the case in this setup.

The IO is not an issue; the storage is not at all saturated. The slave receives
the stream perfectly, but the apply is quite slow. It looks like it is always
working with pages of 8 kB at a time:

--datafiles
[root@ ~]# /ORA/dbs01/syscontrol/projects/dfm/bin/smetrics -i 5 -n 100 -o vol 
dbnasg403-12a:/vol/dodpupdbtst03 Instance total_ops read_ops write_ops 
read_data write_data avg_latency read_latency write_latency
/s        /s        /s        b/s       b/s       us        us        us
dodpupdbtst03  64660   162 02350619   31.53 
   0   764.70
dodpupdbtst03  67620   843 08751169   48.32 
   0   263.10
dodpupdbtst03  70230  1547 0   14914498  112.88 
   0   303.16
dodpupdbtst03  53730   321  65013809930   58.44 
11287.75   467.21
dodpupdbtst03  56180   183 01661200   20.91 
   0   265.61
dodpupdbtst03  55380   214 03471380   29.24 
   0   374.27
dodpupdbtst03  57530   425 04973131   45.36 
   0   351.08
dodpupdbtst03  61100   142 02331695   20.96 
   0   378.95

--WALs
[root@ ~]# /ORA/dbs01/syscontrol/projects/dfm/bin/smetrics -i 5 -n 100
-o vol dbnasg401-12a:/vol/dodpupdbtst02 Instance total_ops read_ops write_ops 
read_data write_data avg_latency read_latency write_latency
/s        /s        /s        b/s       b/s       us        us        us
dodpupdbtst02  1017  20293   59158992637111 2033.22 
10116.09   172.61
dodpupdbtst02  1284  242   141   73687124309409 1235.11 
 6306.37   172.89
dodpupdbtst02  1357  231   268   68698168489466  957.55 
 5104.09   192.26
dodpupdbtst02  1566  264   288   81429659008529  747.96 
 4069.78   180.00
dodpupdbtst02  1333  235   153   76010514755791  993.81 
 5394.99   176.99
dodpupdbtst02  1261  199   287   61248219075170  896.32 
 5150.28   203.81
dodpupdbtst02   963  161   192   49559966066333 1757.66 
10035.06   213.12
dodpupdbtst02   924  159   157   47826174807262 1092.61 
 5804.85   236.91
dodpupdbtst02   591   97   137   28990854275046 1218.24 
 6980.66   190.20

Writes are usually fast (microseconds, as they hit the NVRAM) and reads are
about 5 ms, which is quite OK considering SATA disks (they have a 512 GB flash
cache, which is why we get this average).
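For what it's worth, the replay backlog can also be watched directly from SQL on the standby. A minimal sketch, using the standard 9.2-era standby functions (not taken from the original thread):

```sql
-- Run on the 9.2 standby: compare how much WAL has been received vs
-- replayed, and how old the last replayed transaction is.
SELECT pg_last_xlog_receive_location()         AS received_lsn,
       pg_last_xlog_replay_location()          AS replayed_lsn,
       now() - pg_last_xact_replay_timestamp() AS approx_replay_lag;
```

If received_lsn keeps advancing while replayed_lsn trails far behind, the bottleneck is the single recovery process applying WAL, not the network or the storage.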

Thank you,
Ruben

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


[PERFORM] updating statistics on slow running query

2014-11-10 Thread Eric Ramirez
Hi,
I have created a sample database with test data to help benchmark our
application. The database has ten million records and is running on a
dedicated server (Postgres 9.3) with 8GB of RAM.  Our queries are pretty
slow with this amount of data, and it is my job to get them to run at an
acceptable speed. The first thing I noticed was that the planner's row
estimates are off by a large number of records (millions). I have updated
the statistics target, but it didn't seem to make a difference. The relevant
output follows.
Am I looking in the wrong place? Is there something else I should be trying?
Thanks in advance for your comments/suggestions,
Eric.


=# show work_mem;
 work_mem
--
 1GB
(1 row)
=# show effective_cache_size;
 effective_cache_size
--
 5GB
(1 row)

=#ALTER TABLE TAR_MVW_TARGETING_RECORD ALTER COLUMN
household_member_first_name SET STATISTICS 5000;
=# vacuum analyse TAR_MVW_TARGETING_RECORD;

=# \d tar_mvw_targeting_record;
 Table public.tar_mvw_targeting_record
   Column| Type  | Modifiers
-+---+---
 household_member_id | bigint|
 form_id | bigint|
 status  | character varying(64) |
 gender  | character varying(64) |
 household_member_first_name | character varying(64) |
 household_member_last_name  | character varying(64) |

Indexes:
    "tar_mvw_targeting_record_form_id_household_member_id_idx" UNIQUE, btree
(form_id, household_member_id)
    "tar_mvw_targeting_record_lower_idx" gist
(lower(household_member_first_name::text) extensions.gist_trgm_ops)
 WHERE status::text <> 'ANULLED'::text
    "tar_mvw_targeting_record_lower_idx1" gist
(lower(household_member_last_name::text) extensions.gist_trgm_ops)
 WHERE status::text <> 'ANULLED'::text


=# explain (analyse on,buffers on)select T.form_id from
TAR_MVW_targeting_record AS T where T.status NOT IN ('ANULLED')  AND
LOWER(T.household_member_last_name) LIKE LOWER('%tu%') AND
T.gender='FEMALE' group by T.form_id;

QUERY PLAN

------------------------------------------------------------------------------------------
 HashAggregate  (cost=450994.35..452834.96 rows=184061 width=8) (actual time=11932.959..12061.206 rows=442453 loops=1)
   Buffers: shared hit=307404 read=109743
   ->  Bitmap Heap Scan on tar_mvw_targeting_record t  (cost=110866.33..448495.37 rows=999592 width=8) (actual time=3577.301..11629.132 rows=500373 loops=1)
         Recheck Cond: ((lower((household_member_last_name)::text) ~~ '%tu%'::text) AND ((status)::text <> 'ANULLED'::text))
         Rows Removed by Index Recheck: 979
         Filter: ((gender)::text = 'FEMALE'::text)
         Rows Removed by Filter: 499560
         Buffers: shared hit=307404 read=109743
         ->  Bitmap Index Scan on tar_mvw_targeting_record_lower_idx1  (cost=0.00..110616.43 rows=202 width=0) (actual time=3471.142..3471.142 rows=1012 loops=1)
               Index Cond: (lower((household_member_last_name)::text) ~~ '%tu%'::text)
               Buffers: shared hit=36583 read=82935
 Total runtime: 12092.059 ms
(12 rows)

Time: 12093.107 ms

P.S. This plan was run three times; the first run took 74 seconds.
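As a quick sanity check (a sketch, not from the original mail), whether the raised statistics target actually took effect can be verified in pg_stats after the ANALYZE:

```sql
-- After SET STATISTICS 5000 and ANALYZE, histogram_bounds for the
-- column can hold up to 5000 entries; far fewer suggests the ANALYZE
-- did not pick the new target up.
SELECT attname,
       n_distinct,
       array_length(histogram_bounds, 1) AS histogram_entries
FROM pg_stats
WHERE tablename = 'tar_mvw_targeting_record'
  AND attname IN ('household_member_first_name',
                  'household_member_last_name');
```

Note that higher statistics targets mostly help selectivity estimates for equality and range predicates; they do little for unanchored LIKE '%tu%' patterns, which may explain why raising the target made no visible difference.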


Re: [PERFORM] Performance bug in prepared statement binding in 9.2?

2014-11-10 Thread Josh Berkus
On 12/31/2013 09:55 AM, Tom Lane wrote:
 Josh Berkus j...@agliodbs.com writes:
 Tom,
 There's an abbreviated version of this argument in the comments in
 my proposed patch at
 http://www.postgresql.org/message-id/11927.1384199...@sss.pgh.pa.us
 What I'm hoping will happen next is that the complainants will hot-patch
 that and see if it fixes their problems.  We can't really determine
 what to do without that information.
 
 Unfortunately, the original reporter of this issue will not be available
 for testing for 2-3 weeks, and I haven't been able to devise a synthetic
 test which clearly shows the issue.
 
 Ping?  I've been waiting on committing that patch pending some real-world
 testing.  It'd be nice to resolve this question before we ship 9.3.3,
 which I'm supposing will be sometime in January ...

Did this patch ever make it in?  Or did it hang waiting for verification?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] 9.3 performance issues, lots of bind and parse log entries

2014-11-10 Thread Josh Berkus
Tory,

Do you know if your workload involves a lot of lock-blocking,
particularly blocking on locks related to FKs?  I'm tracing down a
problem which sounds similar to yours.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] updating statistics on slow running query

2014-11-10 Thread desmodemone
2014-11-10 18:43 GMT+01:00 Eric Ramirez eric.ramirez...@gmail.com:


 Hi,
 I have created a sample database with test data to help benchmark our
 application. The database has ten million records, and is running on a
 dedicated server(postgres 9.3) with 8GB of RAM.  Our queries are pretty
 slow with this amount of data and is my job to get them to run to at
 acceptable speed. First thing that I notice was that the planner's row
 estimates are off by a large number or records (millions) I have updated
 the statistics target but didn't seem to make a difference. The relevant
 output follows.
 Am I looking in the wrong place, something else I should be trying?
 Thanks in advance for your comments/suggestions,
 Eric.


 =# show work_mem;
  work_mem
 --
  1GB
 (1 row)
 =# show effective_cache_size;
  effective_cache_size
 --
  5GB
 (1 row)

 =#ALTER TABLE TAR_MVW_TARGETING_RECORD ALTER COLUMN
 household_member_first_name SET STATISTICS 5000;
 =# vacuum analyse TAR_MVW_TARGETING_RECORD;

 =# \d tar_mvw_targeting_record;
  Table public.tar_mvw_targeting_record
Column| Type  | Modifiers
 -+---+---
  household_member_id | bigint|
  form_id | bigint|
  status  | character varying(64) |
  gender  | character varying(64) |
  household_member_first_name | character varying(64) |
  household_member_last_name  | character varying(64) |

 Indexes:
tar_mvw_targeting_record_form_id_household_member_id_idx UNIQUE,
 btree (form_id, household_member_id)
  tar_mvw_targeting_record_lower_idx gist
 (lower(household_member_first_name::text) extensions.gist_trgm_ops)
  WHERE status::text <> 'ANULLED'::text
 tar_mvw_targeting_record_lower_idx1 gist
 (lower(household_member_last_name::text) extensions.gist_trgm_ops)
  WHERE status::text <> 'ANULLED'::text


 =# explain (analyse on,buffers on)select T.form_id from
 TAR_MVW_targeting_record AS T where T.status NOT IN ('ANULLED')  AND
 LOWER(T.household_member_last_name) LIKE LOWER('%tu%') AND
 T.gender='FEMALE' group by T.form_id;

 QUERY PLAN


 ---
 ---
  HashAggregate  (cost=450994.35..452834.96 rows=184061 width=8) (actual
 time=11932.959..12061.206 rows=442453 loops=1)
Buffers: shared hit=307404 read=109743
->  Bitmap Heap Scan on tar_mvw_targeting_record t
 (cost=110866.33..448495.37 rows=999592 width=8) (actual
 time=3577.301..11629.132 row
 s=500373 loops=1)
  Recheck Cond: ((lower((household_member_last_name)::text) ~~
 '%tu%'::text) AND ((status)::text <> 'ANULLED'::text))
  Rows Removed by Index Recheck: 979
  Filter: ((gender)::text = 'FEMALE'::text)
  Rows Removed by Filter: 499560
  Buffers: shared hit=307404 read=109743
->  Bitmap Index Scan on tar_mvw_targeting_record_lower_idx1
 (cost=0.00..110616.43 rows=202 width=0) (actual time=3471.142..3
 471.142 rows=1012 loops=1)
Index Cond: (lower((household_member_last_name)::text) ~~
 '%tu%'::text)
Buffers: shared hit=36583 read=82935
  Total runtime: 12092.059 ms
 (12 rows)

 Time: 12093.107 ms

 p.s. this plan was ran three times, first time took 74 seconds.



Hello Eric,
did you try a GIN index instead? That way you could avoid, if possible, the
recheck condition (at least a GIN trigram index is not lossy). Further, if
you always use a predicate like gender=..., you could consider partial
indexes based on that predicate (WHERE status NOT IN ('ANULLED') AND
gender='FEMALE' in one case, WHERE status NOT IN ('ANULLED') AND
gender='MALE' in the other). Moreover, you could also drop the lower()
calls and use ILIKE directly instead of LIKE.

CREATE INDEX tar_mvw_targeting_record_idx02 ON
tar_mvw_targeting_record USING gin (household_member_last_name gin_trgm_ops)
where status NOT IN ('ANULLED') and gender='FEMALE';
CREATE INDEX tar_mvw_targeting_record_idx03 ON
tar_mvw_targeting_record USING gin (household_member_last_name gin_trgm_ops)
where status NOT IN ('ANULLED') and gender='MALE';


explain (analyse on,buffers on)  select T.form_id from
TAR_MVW_targeting_record AS T where T.status NOT IN ('ANULLED')  AND
T.household_member_last_name ilike LOWER('%tu%') AND T.gender='FEMALE'
group by T.form_id;


 I hope it helps.

have a nice day


-- 
Matteo Durighetto

- - - - - - - - - - - - - - - - - - - - - - -

Italian PostgreSQL User Group http://www.itpug.org/index.it.html
Italian Community for Geographic Free/Open-Source Software
http://www.gfoss.it


Re: [PERFORM] Performance bug in prepared statement binding in 9.2?

2014-11-10 Thread Jeff Janes
On Mon, Nov 10, 2014 at 10:48 AM, Josh Berkus j...@agliodbs.com wrote:

 On 12/31/2013 09:55 AM, Tom Lane wrote:
  Josh Berkus j...@agliodbs.com writes:
  Tom,
  There's an abbreviated version of this argument in the comments in
  my proposed patch at
  http://www.postgresql.org/message-id/11927.1384199...@sss.pgh.pa.us
  What I'm hoping will happen next is that the complainants will
 hot-patch
  that and see if it fixes their problems.  We can't really determine
  what to do without that information.
 
  Unfortunately, the original reporter of this issue will not be available
  for testing for 2-3 weeks, and I haven't been able to devise a synthetic
  test which clearly shows the issue.
 
  Ping?  I've been waiting on committing that patch pending some real-world
  testing.  It'd be nice to resolve this question before we ship 9.3.3,
  which I'm supposing will be sometime in January ...

 Did this patch every make it in?  Or did it hang waiting for verification?


It made it in:

commit 4162a55c77cbb54acb4ac442ef3565b813b9d07a
Author: Tom Lane t...@sss.pgh.pa.us
Date:   Tue Feb 25 16:04:09 2014 -0500

Cheers,

Jeff


Re: [PERFORM] Performance bug in prepared statement binding in 9.2?

2014-11-10 Thread Josh Berkus
On 11/10/2014 10:59 AM, Jeff Janes wrote:
 On Mon, Nov 10, 2014 at 10:48 AM, Josh Berkus j...@agliodbs.com wrote:
 
 On 12/31/2013 09:55 AM, Tom Lane wrote:
 Josh Berkus j...@agliodbs.com writes:
 Tom,
 There's an abbreviated version of this argument in the comments in
 my proposed patch at
 http://www.postgresql.org/message-id/11927.1384199...@sss.pgh.pa.us
 What I'm hoping will happen next is that the complainants will
 hot-patch
 that and see if it fixes their problems.  We can't really determine
 what to do without that information.

 Unfortunately, the original reporter of this issue will not be available
 for testing for 2-3 weeks, and I haven't been able to devise a synthetic
 test which clearly shows the issue.

 Ping?  I've been waiting on committing that patch pending some real-world
 testing.  It'd be nice to resolve this question before we ship 9.3.3,
 which I'm supposing will be sometime in January ...

 Did this patch every make it in?  Or did it hang waiting for verification?

 
 It made it in:
 
 commit 4162a55c77cbb54acb4ac442ef3565b813b9d07a
 Author: Tom Lane t...@sss.pgh.pa.us
 Date:   Tue Feb 25 16:04:09 2014 -0500

Thanks, then the problem I'm seeing now is something else.


-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Performance bug in prepared statement binding in 9.2?

2014-11-10 Thread Andres Freund
On 2014-11-10 10:48:24 -0800, Josh Berkus wrote:
 On 12/31/2013 09:55 AM, Tom Lane wrote:
  Josh Berkus j...@agliodbs.com writes:
  Tom,
  There's an abbreviated version of this argument in the comments in
  my proposed patch at
  http://www.postgresql.org/message-id/11927.1384199...@sss.pgh.pa.us
  What I'm hoping will happen next is that the complainants will hot-patch
  that and see if it fixes their problems.  We can't really determine
  what to do without that information.
  
  Unfortunately, the original reporter of this issue will not be available
  for testing for 2-3 weeks, and I haven't been able to devise a synthetic
  test which clearly shows the issue.
  
  Ping?  I've been waiting on committing that patch pending some real-world
  testing.  It'd be nice to resolve this question before we ship 9.3.3,
  which I'm supposing will be sometime in January ...
 
 Did this patch every make it in?  Or did it hang waiting for
 verification?

src/tools/git_changelog is your friend.

Author: Tom Lane t...@sss.pgh.pa.us
Branch: master Release: REL9_4_BR [fccebe421] 2014-02-25 16:04:06 -0500
Branch: REL9_3_STABLE Release: REL9_3_4 [4162a55c7] 2014-02-25 16:04:09 -0500
Branch: REL9_2_STABLE Release: REL9_2_8 [00283cae1] 2014-02-25 16:04:12 -0500
Branch: REL9_1_STABLE Release: REL9_1_13 [3e2db4c80] 2014-02-25 16:04:16 -0500
Branch: REL9_0_STABLE Release: REL9_0_17 [1e0fb6a2c] 2014-02-25 16:04:20 -0500

Use SnapshotDirty rather than an active snapshot to probe index endpoints.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Performance bug in prepared statement binding in 9.2?

2014-11-10 Thread Tom Lane
Josh Berkus j...@agliodbs.com writes:
 On 11/10/2014 10:59 AM, Jeff Janes wrote:
 On Mon, Nov 10, 2014 at 10:48 AM, Josh Berkus j...@agliodbs.com wrote:
 Did this patch every make it in?  Or did it hang waiting for verification?

 It made it in:
 commit 4162a55c77cbb54acb4ac442ef3565b813b9d07a
 Author: Tom Lane t...@sss.pgh.pa.us
 Date:   Tue Feb 25 16:04:09 2014 -0500

 Thanks, then the problem I'm seeing now is something else.

Note that that only went in this past February, so check that you're
dealing with a fairly recent minor release before you dismiss it.

regards, tom lane


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Performance bug in prepared statement binding in 9.2?

2014-11-10 Thread Josh Berkus
On 11/10/2014 11:11 AM, Tom Lane wrote:
 Josh Berkus j...@agliodbs.com writes:
 On 11/10/2014 10:59 AM, Jeff Janes wrote:
 On Mon, Nov 10, 2014 at 10:48 AM, Josh Berkus j...@agliodbs.com wrote:
 Did this patch every make it in?  Or did it hang waiting for verification?
 
 It made it in:
 commit 4162a55c77cbb54acb4ac442ef3565b813b9d07a
 Author: Tom Lane t...@sss.pgh.pa.us
 Date:   Tue Feb 25 16:04:09 2014 -0500
 
 Thanks, then the problem I'm seeing now is something else.
 
 Notice that only went in this past Feb., so you need to check you're
 dealing with a fairly recent minor release before you dismiss it.

It's 9.3.5.

The new issue will get its own thread.


-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


[PERFORM] Lock pileup causes server to stall

2014-11-10 Thread Josh Berkus
All,

pg version: 9.3.5
RHEL 6.5
128GB/32 cores
Configured with shared_buffers=16GB
Java/Tomcat/JDBC application

Server has an issue: whenever we get lock waits (transaction lock waits,
usually on an FK dependency) lasting over a minute, or more than 10 at once,
*all* queries on the server slow to a crawl, taking 100X to 400X normal
execution times.

Other info:
* This applies even to queries which are against other databases, so
it's not purely a lock blocking issue.
* this database routinely has a LOT of lock conflicts, churning through 1
million multixacts per day
* pgBouncer is also involved in this stack, and may be contributing to
the problem in some way
* at no time is the DB server out of CPU (max usage = 38%), RAM, or
doing major IO (max %util = 22%).
* BIND statements can be slow as well as EXECUTEs.

I don't have full query logs from a stall period yet, so I'll have more
information when I do: for example, is it ALL queries which are slow or
just some of them?

However, I thought this list would have some other ideas where to look.
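(For anyone following along: one generic way to see who is blocking whom on a 9.3 server, which predates pg_blocking_pids(), is to pair waiting and granted entries in pg_locks. A sketch, not from the original mail:)

```sql
-- Join each ungranted lock request to the granted lock on the same
-- object, then pull the queries from pg_stat_activity.
SELECT w.pid    AS waiting_pid,
       wa.query AS waiting_query,
       h.pid    AS holding_pid,
       ha.query AS holding_query
FROM   pg_locks w
JOIN   pg_locks h
  ON   h.granted
 AND   NOT w.granted
 AND   h.pid <> w.pid
 AND   h.locktype = w.locktype
 AND   h.database      IS NOT DISTINCT FROM w.database
 AND   h.relation      IS NOT DISTINCT FROM w.relation
 AND   h.transactionid IS NOT DISTINCT FROM w.transactionid
JOIN   pg_stat_activity wa ON wa.pid = w.pid
JOIN   pg_stat_activity ha ON ha.pid = h.pid;
```

Sampling this every few seconds during a stall would show whether the slow queries are genuinely waiting on locks or stuck elsewhere (e.g. in BIND).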

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Performance bug in prepared statement binding in 9.2?

2014-11-10 Thread Jeff Janes
On Mon, Nov 10, 2014 at 11:04 AM, Josh Berkus j...@agliodbs.com wrote:

 On 11/10/2014 10:59 AM, Jeff Janes wrote:
  On Mon, Nov 10, 2014 at 10:48 AM, Josh Berkus j...@agliodbs.com wrote:
 

  Did this patch every make it in?  Or did it hang waiting for
 verification?
 
 
  It made it in:
 
  commit 4162a55c77cbb54acb4ac442ef3565b813b9d07a
  Author: Tom Lane t...@sss.pgh.pa.us
  Date:   Tue Feb 25 16:04:09 2014 -0500

 Thanks, then the problem I'm seeing now is something else.


The related problem where the end rows are actually needed (e.g. ORDER
BY...LIMIT) has not been fixed.

My idea to fix that was to check whether the row's creation transaction was in
the MVCC snapshot (which just uses local memory) before checking whether that
creation transaction had committed (which uses shared memory).  But I
didn't really have the confidence to push that, given the fragility of that
part of the code and my lack of experience with it.  See the "In progress
INSERT wrecks plans on table" thread.

Simon also had some patches to still do the shared memory look up but make
them faster by caching where in the list it would be likely to find the
match, based on where it found the last match.



Cheers,

Jeff


Re: [PERFORM] Lock pileup causes server to stall

2014-11-10 Thread Alvaro Herrera
Josh Berkus wrote:
 All,
 
 pg version: 9.3.5
 RHEL 6.5
 128GB/32 cores
 Configured with shared_buffers=16GB
 Java/Tomcat/JDBC application
 
 Server has an issue that whenever we get lock waits (transaction lock
 waits, usually on an FK dependancy) lasting over a minute or more than
 10 at once, *all* queries on the server slow to a crawl, taking 100X to
 400X normal execution times.

Current FK checking makes you wait if the referenced tuple is modified
on any indexed column, not just those that are actually used in
foreign keys.  Maybe this case would be sped up if we optimized that.
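A two-session sketch of that interaction, with hypothetical tables (illustrating the claim above, not reproducing Josh's workload):

```sql
-- The parent row is referenced by an FK on id, but also carries
-- another indexed (unique) column that no FK references.
CREATE TABLE parent (id int PRIMARY KEY, code text UNIQUE, note text);
CREATE TABLE child  (parent_id int REFERENCES parent (id));
INSERT INTO parent VALUES (1, 'a', 'x');

-- Session A: update the indexed column that no FK references.
BEGIN;
UPDATE parent SET code = 'b' WHERE id = 1;

-- Session B: the FK check behind this INSERT takes a tuple lock on
-- parent id=1 and waits for session A to finish, even though the id
-- column the FK actually references never changed.
INSERT INTO child VALUES (1);
```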

 * This applies even to queries which are against other databases, so
 it's not purely a lock blocking issue.

Oh.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] updating statistics on slow running query

2014-11-10 Thread Eric Ramirez
Hi Matteo,
Thanks for your suggestions. I just ran some tests with ILIKE and LIKE, and
ILIKE is consistently slower, so I think I will keep the lower() functions.
As per your suggestion, I have switched the indexes to the GIN type; they
seem to build/read a bit faster, but the Recheck step still shows up in the
query plan. I have removed the gender column from the query since it is not
relevant in my tests. With all this playing around it looks like the stats
are now a bit more accurate.
The query went down to 9 seconds; ideally I would like to get it to execute
in 2 seconds... any thoughts on what else I could try?
Thanks again,
Eric

=# explain (analyse on,buffers on)select T.form_id from
TAR_MVW_targeting_record AS T where T.status NOT IN ('ANULLED')  AND
LOWER(T.household_member_last_name) LIKE LOWER('%tu%')  group by T.form_id;

QUERY PLAN

------------------------------------------------------------------------------------------
 HashAggregate  (cost=557677.27..561360.83 rows=368356 width=8) (actual time=10172.672..10410.068 rows=786669 loops=1)
   Buffers: shared hit=304998
   ->  Bitmap Heap Scan on tar_mvw_targeting_record t  (cost=80048.06..552677.27 rows=202 width=8) (actual time=2481.418..9564.280 rows=33 loops=1)
         Recheck Cond: ((status)::text <> 'ANULLED'::text)
         Filter: (lower((household_member_last_name)::text) ~~ '%tu%'::text)
         Rows Removed by Filter: 979
         Buffers: shared hit=304998
         ->  Bitmap Index Scan on tar_mvw_targeting_record_lower_idx4  (cost=0.00..79548.06 rows=1012 width=0) (actual time=2375.399..2375.399 rows=1012 loops=1)
               Buffers: shared hit=7369
 Total runtime: 10475.240 ms




On Mon, Nov 10, 2014 at 1:57 PM, desmodemone desmodem...@gmail.com wrote:



 2014-11-10 18:43 GMT+01:00 Eric Ramirez eric.ramirez...@gmail.com:


 Hi,
 I have created a sample database with test data to help benchmark our
 application. The database has ten million records, and is running on a
 dedicated server(postgres 9.3) with 8GB of RAM.  Our queries are pretty
 slow with this amount of data and is my job to get them to run to at
 acceptable speed. First thing that I notice was that the planner's row
 estimates are off by a large number or records (millions) I have updated
 the statistics target but didn't seem to make a difference. The relevant
 output follows.
 Am I looking in the wrong place, something else I should be trying?
 Thanks in advance for your comments/suggestions,
 Eric.


 =# show work_mem;
  work_mem
 --
  1GB
 (1 row)
 =# show effective_cache_size;
  effective_cache_size
 --
  5GB
 (1 row)

 =#ALTER TABLE TAR_MVW_TARGETING_RECORD ALTER COLUMN
 household_member_first_name SET STATISTICS 5000;
 =# vacuum analyse TAR_MVW_TARGETING_RECORD;

 =# \d tar_mvw_targeting_record;
  Table public.tar_mvw_targeting_record
Column| Type  | Modifiers
 -+---+---
  household_member_id | bigint|
  form_id | bigint|
  status  | character varying(64) |
  gender  | character varying(64) |
  household_member_first_name | character varying(64) |
  household_member_last_name  | character varying(64) |

 Indexes:
tar_mvw_targeting_record_form_id_household_member_id_idx UNIQUE,
 btree (form_id, household_member_id)
  tar_mvw_targeting_record_lower_idx gist
 (lower(household_member_first_name::text) extensions.gist_trgm_ops)
  WHERE status::text <> 'ANULLED'::text
 tar_mvw_targeting_record_lower_idx1 gist
 (lower(household_member_last_name::text) extensions.gist_trgm_ops)
  WHERE status::text <> 'ANULLED'::text


 =# explain (analyse on,buffers on)select T.form_id from
 TAR_MVW_targeting_record AS T where T.status NOT IN ('ANULLED')  AND
 LOWER(T.household_member_last_name) LIKE LOWER('%tu%') AND
 T.gender='FEMALE' group by T.form_id;

 QUERY PLAN


 ---
 ---
  HashAggregate  (cost=450994.35..452834.96 rows=184061 width=8) (actual
 time=11932.959..12061.206 rows=442453 loops=1)
Buffers: shared hit=307404 read=109743
 ->  Bitmap Heap Scan on tar_mvw_targeting_record t
 (cost=110866.33..448495.37 rows=999592 width=8) (actual
 time=3577.301..11629.132 row
 s=500373 loops=1)
  Recheck Cond: ((lower((household_member_last_name)::text) ~~
 '%tu%'::text) AND ((status)::text <> 'ANULLED'::text))
  Rows Removed by Index Recheck: 979
  Filter: ((gender)::text = 'FEMALE'::text)
  Rows Removed by Filter: 499560
  Buffers: shared hit=307404 read=109743
 ->  Bitmap Index Scan on tar_mvw_targeting_record_lower_idx1

Re: [PERFORM] Performance bug in prepared statement binding in 9.2?

2014-11-10 Thread Josh Berkus
On 11/10/2014 12:13 PM, Jeff Janes wrote:

 The related problem where the end rows are actually needed (e.g. ORDER
 BY...LIMIT) has not been fixed.
 
 My idea to fix that was to check if the row's creation-transaction was in
 the MVCC snapshot (which just uses local memory) before checking if that
 creation-transaction had committed (which uses shared memory).  But I
 didn't really have the confidence to push that given the fragility of that
 part of the code and my lack of experience with it.  See In progress
 INSERT wrecks plans on table thread.

Oh!  I thought this issue had been fixed by Tom's patch as well.  It
could very well describe what I'm seeing (in the other thread), since
some of the waiting queries are INSERTs, and other queries do selects
against the same tables concurrently.

Although ... given that I'm seeing preposterously long BIND times (like
50 seconds), I don't think that's explained just by bad plans.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


[PERFORM] trigger Before or After

2014-11-10 Thread avpro avpro
hi,

in the pgsql documentation
(http://www.postgresql.org/docs/9.1/static/sql-createtrigger.html)

I haven't seen anything about how the data inserted into the table is
affected by a BEFORE INSERT trigger compared with an AFTER INSERT trigger,
nor anything related to their performance.

for example:

tables: actuals (summarizes the total running hours), log (the functional
hours are inserted into LOG as time)
function: sum
view: timeview (where running hours are calculated as a difference)

-- Function: sum()

-- DROP FUNCTION sum();

CREATE OR REPLACE FUNCTION sum()
  RETURNS trigger AS
$BODY$begin
update actuals
set hours = hours + (select time from timeview
                     where idlog = (select max(idlog) from timeview))
where actuals.idmac = (SELECT idmac FROM selectedmac);
return new;
end$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;
ALTER FUNCTION sum()
  OWNER TO user;




--trigger
CREATE TRIGGER update_actuals_tg01
  AFTER INSERT
  ON log
  FOR EACH ROW
  EXECUTE PROCEDURE sum();


I read somewhere (I can't find the link anymore) that if the trigger is
AFTER INSERT, the data inserted into table LOG might no longer be available
to the trigger. Is that correct, or did I misunderstand?

What is the performance difference between a BEFORE INSERT trigger and an
AFTER INSERT trigger?
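(Editorial note: the practical difference can be seen in a small sketch with hypothetical names. A BEFORE row trigger can still modify NEW before the row is stored; an AFTER row trigger sees the row as already inserted, and its return value is ignored.)

```sql
CREATE TABLE demo_log (id serial PRIMARY KEY, t timestamptz);

-- BEFORE INSERT: whatever is assigned to NEW here is what gets stored.
CREATE FUNCTION stamp_row() RETURNS trigger AS $$
BEGIN
    NEW.t := now();
    RETURN NEW;    -- returning NULL here would silently skip the insert
END
$$ LANGUAGE plpgsql;

CREATE TRIGGER demo_before
    BEFORE INSERT ON demo_log
    FOR EACH ROW EXECUTE PROCEDURE stamp_row();

-- An AFTER INSERT trigger on demo_log would instead fire once the row is
-- already in the table; queries inside the trigger function can see it,
-- but changing NEW there has no effect on what was stored.
```

So the newly inserted LOG row is still visible to queries run from an AFTER INSERT trigger. On performance: a FOR EACH ROW AFTER trigger has to queue its events until the end of the statement, so a BEFORE trigger is generally the cheaper choice when either would do.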

thank you
have a sunny day