Re: [GENERAL] postgres_fdw - push down conditionals for ENUMs

2014-07-25 Thread Sergiy Zuban
>
> > 1. Is there any plans to add "non-strict mode" (configurable via options
> on
> > server/table/column level) to allow pushing down conditions for all data
> > types?
>
> No.  You might as well call it a "return random answers" mode.
>

That's unfortunate. I think most users would be happy to have an "auto discovery"
mode in which the foreign table fetches all the required metadata so it can behave
like the original table.


>
> > 2. There is an option that allows to map foreign table column to column
> > with another name. What about adding another option to specify column
> type
> > to be send to remote server?
>
> Same problem.  We don't have any way of knowing whether type foo on the
> remote end acts like foo locally


I understand this breaks the logic of how the FDW works internally, but I'm still
trying to find some workaround to allow pushing down conditions on enums.

  CREATE CAST (TEXT AS STATUS_TYPE) WITH FUNCTION to_status_type(text) AS IMPLICIT;

Could you please confirm that such a cast won't work because PostgreSQL converts
ENUM values to integers (enumtypid) at the query-rewriting stage, whereas the cast
is applied later, when the data is accessed?
I was also thinking about looking up the "enumtypid" in pg_enum by "enumlabel", but
I couldn't find any way to force PostgreSQL to use the found enumtypid instead of
the original text.
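(For concreteness, a minimal sketch of the kind of setup being discussed; the enum
labels and the body of to_status_type() are assumptions, not taken from the real
schema. Note the function must not itself rely on the text cast it implements, or
it would recurse:)

  -- hypothetical enum; labels are made up for illustration
  CREATE TYPE status_type AS ENUM ('new', 'active', 'closed');

  -- map text to the enum without going through the text -> status_type cast itself;
  -- unknown labels simply yield NULL in this sketch
  CREATE FUNCTION to_status_type(text) RETURNS status_type AS $$
    SELECT CASE $1
             WHEN 'new'    THEN 'new'::status_type
             WHEN 'active' THEN 'active'::status_type
             WHEN 'closed' THEN 'closed'::status_type
           END
  $$ LANGUAGE sql IMMUTABLE;

  CREATE CAST (text AS status_type)
    WITH FUNCTION to_status_type(text) AS IMPLICIT;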


Re: [GENERAL] tab_to_sting

2014-07-25 Thread Ramesh T
Hi,
When I run the statement below it works fine:

   select string_agg(part_id::text,':') from part;

But when I run:

   SELECT tab_to_largeStringcheck(cast(string_agg(part_id::text,':') as t_varchar2_tab))
   FROM   part;

[image: Inline image 1]

and when I run it like:

   SELECT qa.tab_to_largeStringcheck(string_agg(part_id::text,':'))
   FROM   qa.part;

it returns an error saying the function(text) does not exist.
Please let me know how to solve this issue.
thanks,




On Thu, Jul 24, 2014 at 10:42 PM, Adrian Klaver 
wrote:

> On 07/24/2014 08:03 AM, Ramesh T wrote:
>
>> I have try
>> select string_agg(partname,':') from part_tab;its return same,
>>
>> ERROR:  function string_agg(bigint, unknown) does not exist
>> LINE 1: select string_agg(part_id,':') from part;
>>
>
> Try:
>
> select string_agg(part_id::text,':') from part;
>
>
>  ^
>> HINT:  No function matches the given name and argument types. You might
>> need to add explicit type casts.
>>
>> I thought string_agg and array_agg were the same; is that right?
>>
>
> No:
>
> http://www.postgresql.org/docs/9.3/static/functions-aggregate.html
>
> array_agg(expression)
>   argument type: any
>   return type:   array of the argument type
>   description:   input values, including nulls, concatenated into an array
>
> string_agg(expression, delimiter)
>   argument types: (text, text) or (bytea, bytea)
>   return type:    same as argument types
>   description:    input values concatenated into a string, separated by delimiter
>
>
>>
>>
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
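(A minimal illustration of that difference, assuming a part table with an integer
or bigint part_id column:)

   -- array_agg keeps the input type, so no cast is needed
   SELECT array_agg(part_id) FROM part;              -- returns bigint[]

   -- string_agg needs text (or bytea), hence the explicit cast
   SELECT string_agg(part_id::text, ':') FROM part;  -- returns text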


Re: [GENERAL] Checkpoint_segments optimal value

2014-07-25 Thread John R Pierce

On 7/25/2014 5:58 PM, Prabhjot Sheena wrote:
We do have an I/O problem during heavy loads because we share a mount point
from a NetApp across different machines; we are getting rid of that in a few
weeks by moving everything to Amazon AWS, and then the I/O issues will go
away with fast IOPS.


NetApp has to be some of the most expensive storage I've ever seen in a
datacenter, and it is frequently a source of performance problems.  NFS
isn't really appropriate for storing database data on; there are far too
many opportunities for an fsync to get lost, even more so if you've got a
virtualization layer in there too.






--
john r pierce  37N 122W
somewhere on the middle of the left coast





Re: [GENERAL] Checkpoint_segments optimal value

2014-07-25 Thread Prabhjot Sheena
Thanks a lot Kevin. This is what I did to improve query performance: I
recreated all the indexes on the work_unit table and have been running
VACUUM ANALYZE through a cron job three times a day on the two tables that
are in the query. The query now runs in 2 to 3 seconds. The strange thing I
noticed is that just today, at one point, the query time dropped to under
1 second and it started using this query plan:

http://explain.depesz.com/s/h5q8

But then the cron job ran VACUUM ANALYZE on the work_unit and run tables,
and after the VACUUM ANALYZE completed it switched to another query plan,
which made the query slow again; it went back to running in 2 to 3 seconds.
This is the plan that is used now and is slower:

http://explain.depesz.com/s/AiG

Why is that happening, and what should I do to get it back to this query
plan: http://explain.depesz.com/s/h5q8 ?

No changes have been made to the table structure or indexes between those
two query plans

My other question is: when a manual vacuum runs on a table, e.g. VACUUM
ANALYZE work_unit or VACUUM ANALYZE run, does that also clean up the
indexes or not? If not, does it do any damage to the indexes?


We do have an I/O problem during heavy loads because we share a mount point
from a NetApp across different machines; we are getting rid of that in a few
weeks by moving everything to Amazon AWS, and then the I/O issues will go
away with fast IOPS.

I have copied some more information below. *Can you please recommend what
values I should set for temp_buffers and work_mem, and also what query I
should run to check for bloat?*
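(As a rough first pass at the bloat question, a dead-tuple check against
pg_stat_user_tables is one option; this is only a sketch, not a full bloat
estimate, and it assumes the statistics collector is enabled:)

   SELECT relname, n_live_tup, n_dead_tup,
          round(100.0 * n_dead_tup / nullif(n_live_tup + n_dead_tup, 0), 1) AS pct_dead
   FROM pg_stat_user_tables
   ORDER BY n_dead_tup DESC
   LIMIT 10;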

caesius=# SELECT version();
                                              version
------------------------------------------------------------------------------------------------------
 PostgreSQL 8.3.7 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 3.4.6 20060404 (Red Hat 3.4.6-9)
(1 row)


caesius=# show max_connections ;
 max_connections
-----------------
 600

caesius=# select count(*) from pg_stat_activity;
 count
-------
   165

#autovacuum = on

log_autovacuum_min_duration = 1
autovacuum_max_workers = 1
#autovacuum_naptime = 1min
#autovacuum_vacuum_threshold = 50
#autovacuum_analyze_threshold = 50
autovacuum_vacuum_scale_factor = 0.1
#autovacuum_analyze_scale_factor = 0.05
#autovacuum_freeze_max_age = 2
#autovacuum_vacuum_cost_delay = 20
#autovacuum_vacuum_cost_limit = -1
# - Cost-Based Vacuum Delay -
#vacuum_cost_delay = 0  # 0-1000 milliseconds
#vacuum_cost_page_hit = 1   # 0-1 credits
#vacuum_cost_page_miss = 10 # 0-1 credits
#vacuum_cost_page_dirty = 20# 0-1 credits
#vacuum_cost_limit = 200# 1-1 credits




[caesius@clientdb01 tmp]$ cat /proc/meminfo
MemTotal:       12582912 kB
MemFree:          204748 kB
Buffers:            4540 kB
Cached:          9541024 kB
SwapCached:         5324 kB
Active:          5218556 kB
Inactive:        6554684 kB
HighTotal:             0 kB
HighFree:              0 kB
LowTotal:       12582912 kB
LowFree:          204748 kB
SwapTotal:      16777208 kB
SwapFree:       16755516 kB
Dirty:             36584 kB
Writeback:            20 kB
AnonPages:       2227364 kB
Mapped:          1093452 kB
Slab:             101396 kB
PageTables:       206692 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
CommitLimit:    23068664 kB
Committed_AS:    3796932 kB
VmallocTotal: 34359738367 kB
VmallocUsed:        9196 kB
VmallocChunk: 34359729063 kB


iostat -d -x 5

Device:      rrqm/s  wrqm/s    r/s    w/s   rsec/s   wsec/s  avgrq-sz  avgqu-sz   await  svctm  %util
xvda           0.00    7.40   0.20   1.20     4.80    68.80     52.57      0.01    4.57   2.29   0.32
xvda1          0.00    0.00   0.00   0.00     0.00     0.00      0.00      0.00    0.00   0.00   0.00
xvda2          0.00    7.40   0.20   1.20     4.80    68.80     52.57      0.01    4.57   2.29   0.32
dm-0           0.00    0.00   0.20   8.60     4.80    68.80      8.36      0.04    4.09   0.36   0.32
dm-1           0.00    0.00   0.00   0.00     0.00     0.00      0.00      0.00    0.00   0.00   0.00

Device:      rrqm/s  wrqm/s    r/s    w/s   rsec/s   wsec/s  avgrq-sz  avgqu-sz   await  svctm  %util
xvda           0.00    3.80   0.00   0.60     0.00    35.20     58.67      0.00    6.67   5.33   0.32
xvda1          0.00    0.00   0.00   0.00     0.00     0.00      0.00      0.00    0.00   0.00   0.00
xvda2          0.00    3.80   0.00   0.60     0.00    35.20     58.67      0.00    6.67   5.33   0.32
dm-0           0.00    0.00   0.00   4.40     0.00    35.20      8.00      0.02    4.36   0.73   0.32
dm-1           0.00    0.00   0.00   0.00     0.00     0.00      0.00      0.00    0.00   0.00   0.00

Device:      rrqm/s  wrqm/s    r/s    w/s   rsec/s   wsec/s  avgrq-sz  avgqu-sz   await  svctm  %util
xvda           0.00    9.40   0.00   1.80     0.00    89.60     49.78      0.00    2.22   0.44   0.08
xvda1          0.00    0.00   0.00   0.00     0.00     0.00      0.00      0.00    0.00   0.00   0.00
xvda2          0.00    9.40   0.00   1.80

Re: [GENERAL] copy/dump database to text/csv files

2014-07-25 Thread Marc Mamin
> This is probably an easy one for most sql users but I don't use it very often.
>
> We have a postgres database that was used for an application we no longer
> use.  However, we would like to copy/dump the tables to files, text or csv,
> so we can post them to sharepoint.
>
> Copy seems to be what I want, but it only does one table at a time.  There
> are 204 tables with schemaname = 'public'.  I need to copy the tables that
> contain data to files.  I need something like a for loop which checks
> whether a table is populated and, if so, copies it to a tablename.csv file.
>
> Any ideas?

You could do it with plpgsql, or if your schema is too large, generate a script
with it.

something like:

DO $$
DECLARE
  test boolean;
  rec record;
BEGIN
  FOR rec IN SELECT tablename FROM pg_tables WHERE schemaname = 'public'
  LOOP
    -- check whether the table contains any rows
    EXECUTE 'select exists (select * from public.'||rec.tablename||')' INTO test;
    IF test THEN
      RAISE NOTICE 'COPY public.% TO %.dump', rec.tablename, rec.tablename;
    END IF;
  END LOOP;
END;
$$ LANGUAGE plpgsql;
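(A variant that actually runs the COPY instead of printing a script might look
like the sketch below. It is untested; the /tmp path is an arbitrary example,
server-side COPY TO a file requires superuser, and table names with unusual
characters would need quote_ident():)

DO $$
DECLARE
  test boolean;
  rec record;
BEGIN
  FOR rec IN SELECT tablename FROM pg_tables WHERE schemaname = 'public'
  LOOP
    EXECUTE 'select exists (select * from public.'||rec.tablename||')' INTO test;
    IF test THEN
      -- write each non-empty table to a CSV file on the server
      EXECUTE 'COPY public.'||rec.tablename||
              ' TO ''/tmp/'||rec.tablename||'.csv'' WITH CSV HEADER';
    END IF;
  END LOOP;
END;
$$ LANGUAGE plpgsql;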

regards,

Marc Mamin


Re: [GENERAL] Index usage with slow query

2014-07-25 Thread Marc Mamin



>The presence or absence of the length limiter on a varchar will not impact
>the query plan.  And I'm pretty sure you cannot even store a too-long
>varchar in an index.  It will error on the attempt (as opposed to
>truncating).

The max size is almost one block. Above that you get an error:
FEHLER:  Indexzeile benötigt 9184 Bytes, Maximalgröße ist 8191
(i.e. ERROR:  index row requires 9184 bytes, maximum size is 8191)

>Looking for some advice regarding a slow query I have and indexing. 
>
>I'm using postgresql 9.1 and this is my table that has around 680 rows:

It's worth a try to compare the planner choice in Postgres 9.3 ...

>
>CREATE TABLE mytable
>(
>  class character varying,
>  floor character varying,
>  source_id integer,
>  the_geom geometry

You could slightly improve the table definition by placing the integer
column in front of the varchars.

More interesting would be to move the varchar columns into a separate reference
table and keep only integers here, apart from the geometry type.
This would make the new index smaller and faster. I bet the planner would take
that change into account.


>
>INDEX idx_source_id
>USING btree
>  (source_id);
>
>INDEX idx_the_geom_gist
>  USING gist
>  (the_geom);
>
>
>This table is constantly hit with the below query (not always the same values 
>in the where). The only difference between queries are the values in the where 
>clause:
>
>SELECT the_geom,oid from mytable
>WHERE
>the_geom && ST_GeomFromText('POLYGON((529342.334095833 
>180696.22173,529342.334095833 181533.44595,530964.336820833 
>181533.44595,530964.336820833 180696.22173,529342.334095833 
>180696.22173))',find_srid('','mytable','the_geom')) 
>AND 
>(floor = 'gf' AND source_id = '689' AND class = 'General')
>
>
>As the table has increased in size, this query has become slower, so I made 
>this index:
>
>
>INDEX idx_floor_sourceid_class
>  USING btree
>  (floor, source_id, class);

if your query always uses these 3 columns, you should put the one with the 
highest cardinality first.
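(For example, if source_id is the most selective of the three, which is an
assumption here, the reordered index might be:)

  CREATE INDEX idx_sourceid_class_floor ON mytable USING btree (source_id, class, floor);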


>
>When I run an EXPLAIN and EXPLAIN ANALYZE the query isn't using this new 
>index. 
>
>Sometimes it uses just idx_the_geom_gist
>
>other times it uses idx_the_geom_gist and idx_source_id
>
>I don't understand why it's inconsistent in its use of indexes when the query 
>is always the same structure, just different where clause values, and I don't 
>understand why it's not using the new index either.

The planner uses statistics on the columns' contents to weight the possible
query plans.
And it is good at that :)
You can help it by raising the statistics target on these 3 columns.
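(A sketch; the target value 1000 is an arbitrary example, tune to taste, and
remember to re-run ANALYZE afterwards:)

  ALTER TABLE mytable ALTER COLUMN floor     SET STATISTICS 1000;
  ALTER TABLE mytable ALTER COLUMN source_id SET STATISTICS 1000;
  ALTER TABLE mytable ALTER COLUMN class     SET STATISTICS 1000;
  ANALYZE mytable;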

More difficult for the planner is to compare the advantage of the GiST index
against the other one.
If some columns get toasted, then the cost of detoasting often seems to be
underestimated.

For the case when one of your 3 first columns has a very low cardinality, you
may consider adding some partial indexes, e.g.:

  CREATE INDEX idx_the_geom_gist_general ON mytable USING gist (the_geom) WHERE class = 'general';
  CREATE INDEX idx_the_geom_gist_special ON mytable USING gist (the_geom) WHERE class = 'special';

They can of course only get used when your query contains exactly the same
clause.


regards,

Marc Mamin



Re: [GENERAL] event triggers in 9.3.4

2014-07-25 Thread Alvaro Herrera
Pavel Stehule wrote:
> Hello
> 
> I found a interesting extension
> http://code.malloclabs.com/pg_schema_triggers

Ah, I remember that.  I find that approach more cumbersome to use than
mine.  Note the ALTER cases can't tell you much about exactly how the
relation has changed; you have to run comparisons of the pg_class rows
manually, which is not very nice.  I see this extension as a stopgap
measure until we have real support for this in 9.5, per my patch.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [GENERAL] Index usage with slow query

2014-07-25 Thread David G Johnston
Bill Moran wrote
> On Fri, 25 Jul 2014 17:20:57 +0100
> Rebecca Clarke <

> r.clarke83@

> > wrote:
> 
> Note that this is speculation on my part, but the
> point being that if those columns are usually as narrow as your
> examples you might want to try changing them to VARCHAR(50) or
> something and see if the planner is more willing to use the
> indexes at that point.
> 
> A lot of this is pure speculation on my part ... hope that it's
> helpful and doesn't lead you in the wrong direction.

The presence or absence of the length limiter on a varchar will not impact
the query plan.  And I'm pretty sure you cannot even store a too-long
varchar in an index.  It will error on the attempt (as opposed to
truncating).

David J.










Re: [GENERAL] event triggers in 9.3.4

2014-07-25 Thread Pavel Stehule
Hello

I found an interesting extension:
http://code.malloclabs.com/pg_schema_triggers

Regards

Pavel


2014-07-25 20:01 GMT+02:00 Alvaro Herrera :

> Vasudevan, Ramya wrote:
> > >> You could compare list of tables before (_start) and after (_end) the
> ddl. Doing it in plpgsql will be tricky, but if you'd use some other
> language - like plperl - it's relatively simple:
> http://www.depesz.com/2013/12/18/waiting-for-9-4-plperl-add-event-trigger-support/
> > Thank You Depesz.  This will work for ‘CREATE’ and ‘DROP’ DDLs.
> >
> > But, what about the scenario where I want to just have event triggers
> for operations like these? - 'ALTER TABLE','ALTER TRIGGER', 'ALTER FUNCTION'
> > CREATE EVENT TRIGGER log_ddl_info_start
> > ON
> > ddl_command_start
> > when
> > tag in
> > ('ALTER TABLE','ALTER TRIGGER','CREATE FUNCTION','ALTER FUNCTION')
> > EXECUTE PROCEDURE
> > log_ddl_execution();
> >
> > In this case, is there a way to capture the object that was altered?
>
> Not yet, but there's a patch in progress to help with this.  If you're
> interested in the feature, please give the patch a try and see if the
> user interface it provides solves your use case.  We want to ensure that
> the new feature we're creating is useful.
>
> You can find the patch here:
>
> http://www.postgresql.org/message-id/20140613203156.gr18...@eldon.alvh.no-ip.org
> (there are lots of small patches to ease review, so please grab them all
> and apply one by one) and a sample event trigger function (there are no
> docs yet) here:
>
> http://www.postgresql.org/message-id/2014011505.gb29...@eldon.alvh.no-ip.org
>
> Thanks,
>
> --
> Álvaro Herrera                http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Training & Services
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] copy/dump database to text/csv files

2014-07-25 Thread Francisco Olarte
Hi William:


On Thu, Jul 24, 2014 at 9:04 PM, William Nolf 
wrote:

> We have a postgres database that was used for an application we no longer
> use.  However, we wouldlike to copy/dump the tables to files, text or csv
> so we can post them to sharepoint.
>



How BIG is your public schema? As this is a one-shot, I would recommend
first doing a custom-format backup, and then working from it (you can make
an SQL-format backup from a custom one with pg_restore).

THEN I would suggest making a mini-script (I'm partial to perl for that,
but this is due to 20+ years of hacking with it) through which I'd pipe a
data-only SQL backup. You just wait for a 'copy
table(comma,separated,field,names) from stdin' line, open the file
table.whatever, write a header line if needed, read, transform and write
copy lines until the '\.' end-of-data marker, close the file, and repeat
until the input is exhausted. Something like the following perl pseudo-code:

BEWARE, UNTESTED CODE FOR EXAMPLE PURPOSES:

use strict;
use warnings;

OUTER: # Loop around the whole backup.
while (<>) {
    # pg_dump writes the COPY keyword in upper case, hence the /i.
    if (my ($table, $fields) = /^copy\s*(.*?)\s*\((.*?)\)\s*from stdin;$/i) {
        # Header line found, do headers etc.
        my $fname = get_filename($table);    # Dequote, add extensions.
        open(my $fh, '>', $fname) or die "open problem: $!";
        print $fh generate_header($fields);  # If needed.
      INNER: # Loop around a single table's data.
        while (<>) {
            /^\\\.$/ and last;
            print $fh transform($_);         # Chomp, split, quote, etc...
        }
        close($fh) or die "Close problem";   # Disk full, ...
    }
}
# Yeah, I know the prints should be tested too.

Either of these approaches should give you a file for each table, as fast as
your database can send a backup or your machine can run pg_restore.
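(Hypothetical usage, assuming the script above is saved as split_copy.pl; the
script name and database name are placeholders:)

   pg_dump --data-only mydb         | perl split_copy.pl
   # or, from the custom-format backup suggested above:
   pg_restore --data-only mydb.dump | perl split_copy.pl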

Regards.
   Francisco Olarte.


Re: [GENERAL] Index usage with slow query

2014-07-25 Thread Bill Moran
On Fri, 25 Jul 2014 17:20:57 +0100
Rebecca Clarke  wrote:

> Hi Bill,
> 
> Thanks for the reply. Here's the EXPLAIN output of a couple of the queries:

A few suggestions in addition to David's comment about doing
EXPLAIN ANALYZE:
1) When experimenting, one thing to try might be making a single
   GiST index across the_geom, floor, source_id, and class.  Not
   sure if it will work/help, but worth a try (see the sketch after
   this list).
2) Based on the info below, are you sure that table is getting
   ANALYZEd enough?  I'm asking because the first query was
   estimated to be 50x slower than the second, but actual run
   time was only 7x slower.  It makes me wonder if the table
   stats are way off.
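(A sketch of suggestion 1: a multi-column GiST index mixing the geometry with
scalar columns needs the btree_gist contrib extension, which is an assumption
about what is available on this server, and the index name is made up:)

   CREATE EXTENSION btree_gist;  -- provides GiST opclasses for varchar/integer
   CREATE INDEX idx_geom_floor_source_class
     ON mytable USING gist (the_geom, floor, source_id, class);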

Also, how selective are floor and class?  The following queries
should give you an idea:
SELECT count(distinct floor), count(*) from mytable;
SELECT count(distinct class), count(*) from mytable;
If the number of distinct values is very small compared to the
total number of entries, the planner might be ignoring the
index simply because it's not helpful.

I'm also wondering if the planner is avoiding using indexes for
those columns because they're unbounded varchars and the limit on
an index is far less (I believe it's less than 3000 chars, but
I can't find a definitive number right now).  If the combination
of those exceeded the max index length, then the index would
truncate the values, and the executor would have to recheck each
row against the actual row data, which the planner might perceive
as slower.  Note that this is speculation on my part, but the
point being that if those columns are usually as narrow as your
examples you might want to try changing them to VARCHAR(50) or
something and see if the planner is more willing to use the
indexes at that point.

A lot of this is pure speculation on my part ... hope that it's
helpful and doesn't lead you in the wrong direction.

> SELECT the_geom,oid from mytable
> where the_geom && ST_GeomFromText('POLYGON((529342.334095833
> 180696.22173,529342.334095833 181533.44595,530964.336820833
> 181533.44595,530964.336820833 180696.22173,529342.334095833
> 180696.22173))',find_srid('','mytable','the_geom'))
> and (floor = 'gf' AND source_id = '16701' AND class = 'General')
> 
> 
>  Bitmap Heap Scan on mytable  (cost=1212.62..1580.71 rows=177 width=612)
>Recheck Cond: ((source_id = 16701) AND (the_geom &&
> '010320346C01000500'::geometry))
>Filter: (((floor)::text = 'gf'::text) AND ((class)::text =
> 'General'::text))
>->  BitmapAnd  (cost=1212.62..1212.62 rows=184 width=0)
>  ->  Bitmap Index Scan on idx_source_id  (cost=0.00..433.25
> rows=33149 width=0)
>Index Cond: (source_id = 16701)
>  ->  Bitmap Index Scan on idx_the_geom_gist  (cost=0.00..779.03
> rows=38654 width=0)
>Index Cond: (the_geom &&
> '010320346C01000500'::geometry)
> 
> 
> 
> SELECT the_geom,oid from mytable
> where the_geom && ST_GeomFromText('POLYGON((415995.148624997
> 433101.445479165,415995.148624997 433326.320145835,416466.572625003
> 433326.320145835,416466.572625003 433101.445479165,415995.148624997
> 433101.445479165))',find_srid('','mytable','the_geom'))
> and (floor = 'gf' AND source_id = '20427' AND class = 'General')
> 
> 
>  Bitmap Heap Scan on mytable  (cost=21.41..23.44 rows=1 width=612)
>Recheck Cond: ((the_geom &&
> '010320346C01000500'::geometry) AND (source_id = 20427))
>Filter: (((floor)::text = 'gf'::text) AND ((class)::text =
> 'General'::text))
>->  BitmapAnd  (cost=21.41..21.41 rows=1 width=0)
>  ->  Bitmap Index Scan on idx_the_geom_gist  (cost=0.00..4.18
> rows=141 width=0)
>Index Cond: (the_geom &&
> '010320346C01000500'::geometry)
>  ->  Bitmap Index Scan on idx_source_id  (cost=0.00..16.97
> rows=1112 width=0)
>Index Cond: (source_id = 20427)
> 
> 
> 
> There is no unique field in the table. And unfortunately the performance is
> unacceptable. The logs show that the first query, when it was executed, took
> 70466.757 ms whereas the second one took 11032.459 ms.
> 
> I've begun to create a duplicate environment to play with so hopefully I'm
> able to weed out a solution.
> 
> 
> 
> On Wed, Jul 23, 2014 at 3:57 PM, Bill Moran 
> wrote:
> 
> > On Wed, 23 Jul 2014 10:45:56 +0100
> > Rebecca Clarke  wrote:
> >
> > > Hi all,
> > >
> > > Looking for some advice regarding a slow query I have and indexing.
> > >
> > > I'm using postgresql 9.1 and this is my table that has around 680
> > rows:
> > >
> > > CREATE TABLE mytable
> > > (
> > >   class character varying,
> > >   floor character varying,
> > >   source_id integer,
> > >   the_geom geometry
> > > )
> > > WITH (
> > >   OIDS=TRUE
> > > );
> > >
> > >
> > > INDEX idx_source_id
> > > USING btree
> > >   (source_id);
> > >
> > > INDEX idx_the_geom_gist
> > >   USING gist
> > >   (the_geom);
> > >
> > >
> > > This table is constantly hit with the below query (

Re: [GENERAL] event triggers in 9.3.4

2014-07-25 Thread Alvaro Herrera
Vasudevan, Ramya wrote:
> >> You could compare list of tables before (_start) and after (_end) the ddl. 
> >> Doing it in plpgsql will be tricky, but if you'd use some other language - 
> >> like plperl - it's relatively simple:  
> >> http://www.depesz.com/2013/12/18/waiting-for-9-4-plperl-add-event-trigger-support/
> Thank You Depesz.  This will work for ‘CREATE’ and ‘DROP’ DDLs.
> 
> But, what about the scenario where I want to just have event triggers for 
> operations like these? - 'ALTER TABLE','ALTER TRIGGER', 'ALTER FUNCTION'
> CREATE EVENT TRIGGER log_ddl_info_start
> ON
> ddl_command_start
> when
> tag in
> ('ALTER TABLE','ALTER TRIGGER','CREATE FUNCTION','ALTER FUNCTION')
> EXECUTE PROCEDURE
> log_ddl_execution();
> 
> In this case, is there a way to capture the object that was altered?

Not yet, but there's a patch in progress to help with this.  If you're
interested in the feature, please give the patch a try and see if the
user interface it provides solves your use case.  We want to ensure that
the new feature we're creating is useful.

You can find the patch here:
http://www.postgresql.org/message-id/20140613203156.gr18...@eldon.alvh.no-ip.org
(there are lots of small patches to ease review, so please grab them all
and apply one by one) and a sample event trigger function (there are no
docs yet) here:
http://www.postgresql.org/message-id/2014011505.gb29...@eldon.alvh.no-ip.org

Thanks,

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [GENERAL] Index usage with slow query

2014-07-25 Thread David G Johnston
Rebecca Clarke-2 wrote
> Thanks for the reply. Here's the EXPLAIN output of a couple of the
> queries:

Typically you want to provide EXPLAIN ANALYZE output so that comparisons
between planner estimates and reality can be made.

David J.









Re: [GENERAL] event triggers in 9.3.4

2014-07-25 Thread Vasudevan, Ramya
>> Sure - just check the differences in the appropriate catalogs: pg_attribute,
>> pg_trigger, pg_proc.

>> In any case - if you really want to use this - you'll have to write it in C.

Thank You Depesz. I think that answered my question.


Thank You
Ramya


Re: [GENERAL] Index usage with slow query

2014-07-25 Thread Rebecca Clarke
Hi Bill,

Thanks for the reply. Here's the EXPLAIN output of a couple of the queries:


SELECT the_geom,oid from mytable
where the_geom && ST_GeomFromText('POLYGON((529342.334095833
180696.22173,529342.334095833 181533.44595,530964.336820833
181533.44595,530964.336820833 180696.22173,529342.334095833
180696.22173))',find_srid('','mytable','the_geom'))
and (floor = 'gf' AND source_id = '16701' AND class = 'General')


 Bitmap Heap Scan on mytable  (cost=1212.62..1580.71 rows=177 width=612)
   Recheck Cond: ((source_id = 16701) AND (the_geom &&
'010320346C01000500'::geometry))
   Filter: (((floor)::text = 'gf'::text) AND ((class)::text =
'General'::text))
   ->  BitmapAnd  (cost=1212.62..1212.62 rows=184 width=0)
 ->  Bitmap Index Scan on idx_source_id  (cost=0.00..433.25
rows=33149 width=0)
   Index Cond: (source_id = 16701)
 ->  Bitmap Index Scan on idx_the_geom_gist  (cost=0.00..779.03
rows=38654 width=0)
   Index Cond: (the_geom &&
'010320346C01000500'::geometry)



SELECT the_geom,oid from mytable
where the_geom && ST_GeomFromText('POLYGON((415995.148624997
433101.445479165,415995.148624997 433326.320145835,416466.572625003
433326.320145835,416466.572625003 433101.445479165,415995.148624997
433101.445479165))',find_srid('','mytable','the_geom'))
and (floor = 'gf' AND source_id = '20427' AND class = 'General')


 Bitmap Heap Scan on mytable  (cost=21.41..23.44 rows=1 width=612)
   Recheck Cond: ((the_geom &&
'010320346C01000500'::geometry) AND (source_id = 20427))
   Filter: (((floor)::text = 'gf'::text) AND ((class)::text =
'General'::text))
   ->  BitmapAnd  (cost=21.41..21.41 rows=1 width=0)
 ->  Bitmap Index Scan on idx_the_geom_gist  (cost=0.00..4.18
rows=141 width=0)
   Index Cond: (the_geom &&
'010320346C01000500'::geometry)
 ->  Bitmap Index Scan on idx_source_id  (cost=0.00..16.97
rows=1112 width=0)
   Index Cond: (source_id = 20427)



There is no unique field in the table. And unfortunately the performance is
unacceptable. The logs show that the first query, when it was executed, took
70466.757 ms whereas the second one took 11032.459 ms.

I've begun to create a duplicate environment to play with so hopefully I'm
able to weed out a solution.



On Wed, Jul 23, 2014 at 3:57 PM, Bill Moran 
wrote:

> On Wed, 23 Jul 2014 10:45:56 +0100
> Rebecca Clarke  wrote:
>
> > Hi all,
> >
> > Looking for some advice regarding a slow query I have and indexing.
> >
> > I'm using postgresql 9.1 and this is my table that has around 680
> rows:
> >
> > CREATE TABLE mytable
> > (
> >   class character varying,
> >   floor character varying,
> >   source_id integer,
> >   the_geom geometry
> > )
> > WITH (
> >   OIDS=TRUE
> > );
> >
> >
> > INDEX idx_source_id
> > USING btree
> >   (source_id);
> >
> > INDEX idx_the_geom_gist
> >   USING gist
> >   (the_geom);
> >
> >
> > This table is constantly hit with the below query (not always the same
> > values in the where). The only difference between queries are the values
> in
> > the where clause:
> >
> > SELECT the_geom,oid from mytable
> > WHERE
> > the_geom && ST_GeomFromText('POLYGON((529342.334095833
> > 180696.22173,529342.334095833 181533.44595,530964.336820833
> > 181533.44595,530964.336820833 180696.22173,529342.334095833
> > 180696.22173))',find_srid('','mytable','the_geom'))
> > AND
> > (floor = 'gf' AND source_id = '689' AND class = 'General')
> >
> >
> > As the table has increased in size, this query has become slower, so I
> made
> > this index:
> >
> >
> > INDEX idx_floor_sourceid_class
> >   USING btree
> >   (floor, source_id, class);
> >
> >
> > When I run an EXPLAIN and EXPLAIN ANALYZE the query isn't using this new
> > index.
> >
> > Sometimes it uses just idx_the_geom_gist
> >
> > other times it uses idx_the_geom_gist and idx_source_id
> >
> >
> > I don't understand why it's inconsistent in its use of indexes when the
> > query is always the same structure, just different where clause values,
> and
> > I don't understand why it's not using the new index either.
>
> It depends on the data.  The planner will make estimates on what the
> fastest
> way to execute will be based on a lot of things, one of which is how
> helpful
> an index is really expected to be.  Since your indexes aren't arranged to
> allow an index-only scan (although I don't remember if 9.1 had index-only
> scans yet ...) it will have to use the index to narrow down the rows, then
> load up the rows and filter them further (you didn't provide explain
> output,
> but I'll bet a dozen nickels that's what it says).  So if the values in
> source_id are unique enough that the planner doesn't think that
> idx_floor_sourceid_class will narrow the results any better than
> idx_source_id, it will use the former because it's a smaller index and will
> require less disk fetches to load it.
>
> Of course, without explain o

Re: [GENERAL] event triggers in 9.3.4

2014-07-25 Thread hubert depesz lubaczewski
On Thu, Jul 24, 2014 at 7:13 PM, Vasudevan, Ramya <
ramya.vasude...@classmates.com> wrote:

>  >> You could compare list of tables before (_start) and after (_end) the
> ddl. Doing it in plpgsql will be tricky, but if you'd use some other
> language - like plperl - it's relatively simple:
> http://www.depesz.com/2013/12/18/waiting-for-9-4-plperl-add-event-trigger-support/
>
> Thank You Depesz.  This will work for ‘CREATE’ and ‘DROP’ DDLs.
>
>
>
> But, what about the scenario where I want to just have event triggers for
> operations like these? - 'ALTER TABLE','ALTER TRIGGER', 'ALTER FUNCTION'
>

Sure - just check the differences in the appropriate catalogs: pg_attribute,
pg_trigger, pg_proc.

In any case - if you really want to use this - you'll have to write it in C.

depesz
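(A minimal sketch of that catalog-diff idea in plain plpgsql, for the ALTER
FUNCTION case only; it is untested, the table and trigger names are made up,
and it only compares the function source text, so other pg_proc columns would
need to be added for a fuller comparison:)

  -- snapshot filled at ddl_command_start and compared at ddl_command_end
  CREATE TABLE proc_snapshot (procoid oid, proname name, prosrc text);

  CREATE FUNCTION snap_procs_start() RETURNS event_trigger AS $$
  BEGIN
    TRUNCATE proc_snapshot;
    INSERT INTO proc_snapshot SELECT p.oid, p.proname, p.prosrc FROM pg_proc p;
  END;
  $$ LANGUAGE plpgsql;

  CREATE FUNCTION snap_procs_end() RETURNS event_trigger AS $$
  DECLARE r record;
  BEGIN
    FOR r IN
      SELECT p.proname
      FROM pg_proc p JOIN proc_snapshot s ON s.procoid = p.oid
      WHERE p.prosrc IS DISTINCT FROM s.prosrc
    LOOP
      RAISE NOTICE 'function % was altered', r.proname;
    END LOOP;
  END;
  $$ LANGUAGE plpgsql;

  CREATE EVENT TRIGGER log_alter_function_start ON ddl_command_start
    WHEN TAG IN ('ALTER FUNCTION') EXECUTE PROCEDURE snap_procs_start();
  CREATE EVENT TRIGGER log_alter_function_end ON ddl_command_end
    WHEN TAG IN ('ALTER FUNCTION') EXECUTE PROCEDURE snap_procs_end();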