Re: [GENERAL] Compiling C function with VC++ for Windows version

2012-01-11 Thread Craig Ringer
On 12/01/12 11:06, Edwin Quijada wrote:
> Hi!
> I am trying to create a C function for Postgres on Windows, compiling
> with VC++ 2008 Express Edition. I wrote this function for Linux without
> problems using gcc and tried to do the same on Windows, but there it was
> impossible. The function compiles fine, but when I try to run it from
> Postgres it brings down the server on Windows, while on Linux the same
> function runs perfectly.

You haven't provided enough information for anyone to help you.

How are you compiling your function? Using pgxs, or some other way?

What's the full source code of the function?

Where does the server crash when it crashes? You might need to get a
backtrace of the server crash to help you figure out why your function
crashes it. See this documentation on getting a backtrace of a Pg
backend under windows:

http://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Windows

>
> The function just tries to get a string value and pass it to Postgres.
> From my reading, somebody said that on Windows I need to compile the
> function with VC++ rather than gcc, so I downloaded VC++ 2008 Express,
> but I don't know which compilation flags I need, as I do for gcc on
> Windows and Linux.

If you are using a standard binary distribution of PostgreSQL then yes,
you should use Visual C++, preferably the same version of Visual C++
that was used to build that release of PostgreSQL. Please also specify
your version of Pg.
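
If the crash only appears under Windows, the usual suspects are a missing
PG_MODULE_MAGIC block or a function that isn't exported from the DLL. A
minimal V1-convention function looks roughly like this (a sketch only; the
function name and DLL name here are invented for illustration, not taken
from your code):

```c
/* Sketch of a minimal PostgreSQL C function for Windows (MSVC).
 * Illustrative only: names are assumptions, not the poster's code. */
#include "postgres.h"
#include "fmgr.h"

#ifdef PG_MODULE_MAGIC
PG_MODULE_MAGIC;        /* required since 8.2; loading fails without it */
#endif

/* PGDLLEXPORT makes the symbol visible in the DLL under MSVC */
PGDLLEXPORT Datum echo_text(PG_FUNCTION_ARGS);
PG_FUNCTION_INFO_V1(echo_text);

/* Returns its text argument unchanged. */
Datum
echo_text(PG_FUNCTION_ARGS)
{
    text *arg = PG_GETARG_TEXT_P(0);
    PG_RETURN_TEXT_P(arg);
}
```

registered with something like `CREATE FUNCTION echo_text(text) RETURNS text
AS 'echo_text.dll', 'echo_text' LANGUAGE C STRICT;`. Forgetting
PG_FUNCTION_INFO_V1 (i.e. falling back to the old V0 calling convention)
tends to work by accident with gcc on some platforms and crash the backend
on others, which would match the Linux-works/Windows-crashes symptom.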

--
Craig Ringer


Re: [GENERAL] How to return latest results without a polygon intersect?

2012-01-11 Thread Andy Colson

On 01/11/2012 07:14 PM, David Waddy wrote:

If I have a table of the following form:

id (integer)   event_time  (timestamp)   lat_lon (polygon)
3497 1977-01-01 00:00:00
((-64.997,45.975),(-64.9981,45.975),(-64.8981,45.875),(-64.9978,45.9751))
3431 2007-06-06 01:00:00
((-64.971,45.982),(-64.921,45.982),(-64.972,45.982),(-64.973,45.98209),(-64.97,45.98237))
3498 1977-01-01 00:00:00
((-64.97838,45.9778),(-64.9783,45.97767),(-64.978,45.977),(-64.9781,45.97728),(-64.9781,45.97714),(-64.977,45.976))
...

How would I return a list of the latest events for a particular
lat/lon region? More precisely, how would a return a result set with
the greatest event times with polygons that don't intersect any other
polygon in the result set?

Any help would be greatly appreciated,
Dave



Are you using PostGIS?  Assuming yes, try something like:

select a.* from theTable a join theTable b on not (a.lat_lon && b.lat_lon)
order by a.event_time

Also try the PostGIS news group; there are people there with more experience 
with the PostGIS functions.

Your two questions don't seem to be asking the same thing, though.  One asks 
for a particular region, the second for a region that doesn't intersect any 
other's.  I went for the second, cross joining the table to itself, so every 
record will be compared to every other record, which is going to be a huge 
number of comparisons.  So it'll be slow.
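
One hedged sketch of the "latest non-overlapping" reading, keeping an event
only when no later event overlaps it (table and column names taken from the
post; untested):

```sql
SELECT a.*
FROM theTable a
WHERE NOT EXISTS (
    SELECT 1 FROM theTable b
    WHERE b.event_time > a.event_time   -- a strictly later event...
      AND a.lat_lon && b.lat_lon        -- ...whose region overlaps this one
)
ORDER BY a.event_time DESC;
```

With PostGIS geometries you would likely want ST_Intersects() rather than
the bounding-box && operator, and this correlated subquery has the same
O(n^2) comparison cost mentioned above.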

-Andy

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Compiling C function with VC++ for Windows version

2012-01-11 Thread Edwin Quijada

Hi!
I am trying to create a C function for Postgres on Windows, compiling with 
VC++ 2008 Express Edition. I wrote this function for Linux without problems 
using gcc and tried to do the same on Windows, but there it was impossible. 
The function compiles fine, but when I try to run it from Postgres it brings 
down the server on Windows, while on Linux the same function runs perfectly.
The function just tries to get a string value and pass it to Postgres. From 
my reading, somebody said that on Windows I need to compile the function 
with VC++ rather than gcc, so I downloaded VC++ 2008 Express, but I don't 
know which compilation flags I need, as I do for gcc on Windows and Linux.
Has anybody here compiled a function for Postgres on Windows using VC++?
Any help will be much appreciated. 
*---* 
*-Edwin Quijada 
*-Developer DataBase 
*-JQ Microsistemas 

*-Soporte PostgreSQL

*-www.jqmicrosistemas.com
*-809-849-8087
*---*

  

[GENERAL] How to return latest results without a polygon intersect?

2012-01-11 Thread David Waddy
If I have a table of the following form:

id (integer)   event_time  (timestamp)   lat_lon (polygon)
3497 1977-01-01 00:00:00
((-64.997,45.975),(-64.9981,45.975),(-64.8981,45.875),(-64.9978,45.9751))
3431 2007-06-06 01:00:00
((-64.971,45.982),(-64.921,45.982),(-64.972,45.982),(-64.973,45.98209),(-64.97,45.98237))
3498 1977-01-01 00:00:00
((-64.97838,45.9778),(-64.9783,45.97767),(-64.978,45.977),(-64.9781,45.97728),(-64.9781,45.97714),(-64.977,45.976))
...

How would I return a list of the latest events for a particular
lat/lon region? More precisely, how would a return a result set with
the greatest event times with polygons that don't intersect any other
polygon in the result set?

Any help would be greatly appreciated,
Dave



Re: [GENERAL] Enumeration of tables is very slow in largish database

2012-01-11 Thread Kirill Müller

On 01/12/2012 02:02 AM, Tom Lane wrote:

There were fixes for that in 8.4.9, so I'd be interested to know if you
get a better estimate in a more up-to-date version.

I'll ask our administrator to update the server, but this might take a 
while. I'll get back to you when I know more.



Regards

Kirill



Re: [GENERAL] Enumeration of tables is very slow in largish database

2012-01-11 Thread Tom Lane
Kirill Müller writes:
> On 01/12/2012 01:34 AM, Tom Lane wrote:
>> This is probably an indication of eqjoinsel_semi doing the wrong thing;
>> we've whacked that estimator around a few times now, so it's hard to
>> know whether this represents an already-fixed bug or not.  What PG
>> version are you using exactly?

> muelleki@xxx:~$ psql
> psql (8.4.8)

There were fixes for that in 8.4.9, so I'd be interested to know if you
get a better estimate in a more up-to-date version.

regards, tom lane



Re: [GENERAL] Enumeration of tables is very slow in largish database

2012-01-11 Thread Kirill Müller

On 01/12/2012 01:34 AM, Tom Lane wrote:

Kirill Müller writes:

When leaving out the last two "AND NOT EXISTS..." parts, the query
finishes in no time. I have attached the output of EXPLAIN ANALYZE -- if
I understand the execution tree correctly, the time is burnt in repeated
sequential scans of the geometry_columns table (line 38).

Yeah.  It wouldn't choose that plan if it weren't for the horrid rowcount
misestimate here:


  ->   Hash Anti Join  (cost=30586.95..37075.41 rows=1 width=133) 
(actual time=945.911..1760.307 rows=17836 loops=1)


I have VACUUM ANALYZE-d the table just before testing. Seems that this 
didn't help here.


This is probably an indication of eqjoinsel_semi doing the wrong thing;
we've whacked that estimator around a few times now, so it's hard to
know whether this represents an already-fixed bug or not.  What PG
version are you using exactly?

muelleki@xxx:~$ psql
psql (8.4.8)


Maybe use EXCEPT instead of a WHERE condition to get rid of the
already-present entries?
Thank you for the hint, I was not aware of the rather elegant EXCEPT. 
Using WHERE (..., ...) NOT IN (SELECT ..., ... FROM ...) as suggested by 
David Johnston shows excellent performance (and better fits the code 
that is generating the SQL), but I'll keep the EXCEPT option in mind.
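
For readers following along, the two anti-join spellings discussed here look
roughly like this (a sketch; geometry_columns column names as in PostGIS):

```sql
-- Original, correlated form (slow under the row-count misestimate):
... AND NOT EXISTS (SELECT 1 FROM geometry_columns g
                    WHERE g.f_table_schema = pg_namespace.nspname
                      AND g.f_table_name   = pg_class.relname)

-- Row-wise NOT IN, as adopted:
... AND (pg_namespace.nspname, pg_class.relname) NOT IN
        (SELECT f_table_schema, f_table_name FROM geometry_columns)
```

One caveat: NOT IN treats NULLs differently from NOT EXISTS; a single NULL
in the subquery result makes NOT IN reject every row.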



Regards

Kirill



Re: [GENERAL] Enumeration of tables is very slow in largish database

2012-01-11 Thread Tom Lane
Kirill Müller writes:
> Thanks for the feedback. I found the relevant parts in the qgis source 
> code and have been able to trace the problem. It's just a sub-optimal 
> query issued by qgis:

> SELECT 
> pg_class.relname,pg_namespace.nspname,pg_attribute.attname,pg_class.relkind
> FROM pg_attribute,pg_class,pg_namespace
> WHERE pg_namespace.oid=pg_class.relnamespace AND pg_attribute.attrelid = 
> pg_class.oid
> AND ( EXISTS (SELECT * FROM pg_type WHERE 
> pg_type.oid=pg_attribute.atttypid AND pg_type.typname IN 
> ('geometry','geography','topogeometry')) OR pg_attribute.atttypid IN 
> (SELECT oid FROM pg_type a WHERE EXISTS (SELECT * FROM pg_type b WHERE 
> a.typbasetype=b.oid AND b.typname IN 
> ('geometry','geography','topogeometry'))))
> AND has_schema_privilege( pg_namespace.nspname, 'usage' )
> AND has_table_privilege( '"' || pg_namespace.nspname || '"."' || 
> pg_class.relname || '"', 'select' )
> AND NOT EXISTS (SELECT * FROM geometry_columns WHERE 
> pg_namespace.nspname=f_table_schema AND pg_class.relname=f_table_name)
> AND pg_class.relkind IN ('v','r');

> When leaving out the last two "AND NOT EXISTS..." parts, the query 
> finishes in no time. I have attached the output of EXPLAIN ANALYZE -- if 
> I understand the execution tree correctly, the time is burnt in repeated 
> sequential scans of the geometry_columns table (line 38).

Yeah.  It wouldn't choose that plan if it weren't for the horrid rowcount
misestimate here:

>  ->  Hash Anti Join  (cost=30586.95..37075.41 rows=1 width=133) 
> (actual time=945.911..1760.307 rows=17836 loops=1)

This is probably an indication of eqjoinsel_semi doing the wrong thing;
we've whacked that estimator around a few times now, so it's hard to
know whether this represents an already-fixed bug or not.  What PG
version are you using exactly?

> Rewriting the 
> "AND NOT EXISTS" part using WITH solves the performance issues here, but 
> works only from Postgres 8.4. Any idea how to speed up this query for 
> older versions? (Creating a temporary table or an index should be avoided.)

Maybe use EXCEPT instead of a WHERE condition to get rid of the
already-present entries?
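
In sketch form, with `candidate_tables` standing in for the catalog join
from the original query:

```sql
SELECT nspname, relname FROM candidate_tables
EXCEPT
SELECT f_table_schema, f_table_name FROM geometry_columns;
```

That removes the already-present entries in a single set operation instead
of a per-row rescan of geometry_columns.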

regards, tom lane



Re: [GENERAL] Enumeration of tables is very slow in largish database

2012-01-11 Thread David Johnston
-Original Message-
From: Kirill Müller [mailto:kirill.muel...@ivt.baug.ethz.ch] 
Sent: Wednesday, January 11, 2012 7:11 PM
To: David Johnston
Cc: pgsql-general@postgresql.org; 'Scott Marlowe'
Subject: Re: [GENERAL] Enumeration of tables is very slow in largish
database

I am impressed. Execution time dropped to less than one second. Thanks a
lot!

On 01/12/2012 12:43 AM, David Johnston wrote:
> I only see one (1) "AND NOT EXISTS" in the provided query.
Sorry, there used to be two "AND NOT EXISTS", but I edited the query without
updating the text.
> Syntax may be a little off but:
>
> ... AND (f_table_schema, f_table_name) NOT IN (  SELECT (nspname, 
> relname) FROM geometry_columns  )
Just for the record:

...AND (nspname, relname) NOT IN (SELECT f_table_schema, f_table_name FROM
geometry_columns)
> Should work since it is no longer a correlated sub-query; whether the 
> size of geometry_columns makes this better or worse performing is 
> impossible to tell without testing but it isn't that much different than
using a WITH/CTE.
The execution plan looks much nicer (attached). I'd guess that the altered
query might lose a bit if geometry_columns has only very few entries.
Apparently it gains a lot if the table is populated.

Thanks again!


Kirill



Performance for IN should increase as the results from geometry_columns
decrease, since the IN target becomes smaller and there are fewer entries to
compare against.  EXISTS works better than IN when the IN target is large,
as long as the query the EXISTS runs can use an index.  Since your query was
performing a sequential scan, pretty much any size of IN target will perform
better.  For small IN targets and an index-using EXISTS, it likely matters
very little which one you use.

David J.







Re: [GENERAL] Enumeration of tables is very slow in largish database

2012-01-11 Thread Kirill Müller
I am impressed. Execution time dropped to less than one second. Thanks a 
lot!


On 01/12/2012 12:43 AM, David Johnston wrote:

I only see one (1) "AND NOT EXISTS" in the provided query.
Sorry, there used to be two "AND NOT EXISTS", but I edited the query 
without updating the text.

Syntax may be a little off but:

... AND (f_table_schema, f_table_name) NOT IN (  SELECT (nspname, relname)
FROM geometry_columns  )

Just for the record:

...AND (nspname, relname) NOT IN (SELECT f_table_schema, f_table_name 
FROM geometry_columns)

Should work since it is no longer a correlated sub-query; whether the size
of geometry_columns makes this better or worse performing is impossible to
tell without testing but it isn't that much different than using a WITH/CTE.
The execution plan looks much nicer (attached). I'd guess that the 
altered query might lose a bit if geometry_columns has only very few 
entries. Apparently it gains a lot if the table is populated.


Thanks again!


Kirill


                                   QUERY PLAN
-----------------------------------------------------------------------------
 Nested Loop  (cost=3004.94..3539610.10 rows=15297 width=193) (actual 
time=177.084..626.521 rows=4821 loops=1)
   ->  Hash Join  (cost=241.56..5383.60 rows=1213 width=133) (actual 
time=36.977..233.422 rows=9343 loops=1)
 Hash Cond: (pg_class.relnamespace = pg_namespace.oid)
 Join Filter: ((NOT (hashed SubPlan 4)) AND 
has_table_privilege((((('"'::text || (pg_namespace.nspname)::text) || 
'"."'::text) || (pg_class.relname)::text) || '"'::text), 'select'::text))
 ->  Seq Scan on pg_class  (cost=0.00..4802.82 rows=17676 width=73) 
(actual time=0.074..72.791 rows=17850 loops=1)
   Filter: (relkind = ANY ('{v,r}'::"char"[]))
 ->  Hash  (cost=6.85..6.85 rows=63 width=68) (actual time=1.180..1.180 
rows=157 loops=1)
   ->  Seq Scan on pg_namespace  (cost=0.00..6.85 rows=63 width=68) 
(actual time=0.028..0.944 rows=157 loops=1)
 Filter: has_schema_privilege((nspname)::text, 
'usage'::text)
 SubPlan 4
   ->  Seq Scan on geometry_columns  (cost=0.00..211.94 rows=8794 
width=43) (actual time=0.024..11.014 rows=8794 loops=1)
   ->  Index Scan using pg_attribute_relid_attnum_index on pg_attribute  
(cost=2763.38..2913.46 rows=13 width=68) (actual time=0.038..0.040 rows=1 
loops=9343)
 Index Cond: (pg_attribute.attrelid = pg_class.oid)
 Filter: ((alternatives: SubPlan 1 or hashed SubPlan 2) OR (hashed 
SubPlan 3))
 SubPlan 1
   ->  Index Scan using pg_type_oid_index on pg_type  (cost=0.00..8.28 
rows=1 width=0) (never executed)
 Index Cond: (oid = $0)
 Filter: (typname = ANY 
('{geometry,geography,topogeometry}'::name[]))
 SubPlan 2
   ->  Bitmap Heap Scan on pg_type  (cost=12.88..57.58 rows=12 width=4) 
(actual time=0.152..0.155 rows=2 loops=1)
 Recheck Cond: (typname = ANY 
('{geometry,geography,topogeometry}'::name[]))
 ->  Bitmap Index Scan on pg_type_typname_nsp_index  
(cost=0.00..12.88 rows=12 width=0) (actual time=0.124..0.124 rows=2 loops=1)
   Index Cond: (typname = ANY 
('{geometry,geography,topogeometry}'::name[]))
 SubPlan 3
   ->  Hash Semi Join  (cost=57.73..2629.03 rows=53739 width=4) (actual 
time=139.039..139.039 rows=0 loops=1)
 Hash Cond: (a.typbasetype = b.oid)
 ->  Seq Scan on pg_type a  (cost=0.00..1832.39 rows=53739 
width=8) (actual time=0.029..96.628 rows=53878 loops=1)
 ->  Hash  (cost=57.58..57.58 rows=12 width=4) (actual 
time=0.047..0.047 rows=2 loops=1)
   ->  Bitmap Heap Scan on pg_type b  (cost=12.88..57.58 
rows=12 width=4) (actual time=0.038..0.040 rows=2 loops=1)
 Recheck Cond: (typname = ANY 
('{geometry,geography,topogeometry}'::name[]))
 ->  Bitmap Index Scan on pg_type_typname_nsp_index 
 (cost=0.00..12.88 rows=12 width=0) (actual time=0.030..0.030 rows=2 loops=1)
   Index Cond: (typname = ANY 
('{geometry,geography,topogeometry}'::name[]))
 Total runtime: 629.619 ms
(33 rows)


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Enumeration of tables is very slow in largish database

2012-01-11 Thread David Johnston
-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Kirill Müller
Sent: Wednesday, January 11, 2012 6:28 PM
To: pgsql-general@postgresql.org
Cc: Scott Marlowe
Subject: Re: [GENERAL] Enumeration of tables is very slow in largish
database

On 01/11/2012 07:00 PM, Scott Marlowe wrote:
> This is a problem I've run into before, but I can't find the previous 
> post on it.  When you run a \d command, if you run top on your server 
> do you see a single CPU spinning hard on that one command?  If so then 
> it's a pg server side problem, which is what I had on one server with 
> ~40k objects in it.
>
> Off the top of my head I remember something like this helping:
>
> alter function pg_table_is_visible cost 10;
Thanks for the feedback. I found the relevant parts in the qgis source code
and have been able to trace the problem. It's just a sub-optimal query
issued by qgis:

SELECT
pg_class.relname,pg_namespace.nspname,pg_attribute.attname,pg_class.relkind
FROM pg_attribute,pg_class,pg_namespace
WHERE pg_namespace.oid=pg_class.relnamespace AND pg_attribute.attrelid =
pg_class.oid AND ( EXISTS (SELECT * FROM pg_type WHERE
pg_type.oid=pg_attribute.atttypid AND pg_type.typname IN
('geometry','geography','topogeometry')) OR pg_attribute.atttypid IN (SELECT
oid FROM pg_type a WHERE EXISTS (SELECT * FROM pg_type b WHERE
a.typbasetype=b.oid AND b.typname IN
('geometry','geography','topogeometry'))))
AND has_schema_privilege( pg_namespace.nspname, 'usage' ) AND
has_table_privilege( '"' || pg_namespace.nspname || '"."' ||
pg_class.relname || '"', 'select' ) AND NOT EXISTS (SELECT * FROM
geometry_columns WHERE pg_namespace.nspname=f_table_schema AND
pg_class.relname=f_table_name) AND pg_class.relkind IN ('v','r');

When leaving out the last two "AND NOT EXISTS..." parts, the query finishes
in no time. I have attached the output of EXPLAIN ANALYZE -- if I understand
the execution tree correctly, the time is burnt in repeated sequential scans
of the geometry_columns table (line 38). Rewriting the "AND NOT EXISTS"
part using WITH solves the performance issues here, but works only from
Postgres 8.4. Any idea how to speed up this query for older versions?
(Creating a temporary table or an index should be avoided.)

Kirill


-

I only see one (1) "AND NOT EXISTS" in the provided query.

Syntax may be a little off but:

... AND (f_table_schema, f_table_name) NOT IN (  SELECT (nspname, relname)
FROM geometry_columns  )

Should work since it is no longer a correlated sub-query; whether the size
of geometry_columns makes this better or worse performing is impossible to
tell without testing but it isn't that much different than using a WITH/CTE.

David J.





Re: [GENERAL] indexes no longer used after shutdown during reindexing

2012-01-11 Thread Tom Lane
Matt Dew  writes:
> I have a database that was shut down, cleanly, during an 'reindex 
> table'  command.  When the database came back up, queries against that 
> table started doing sequential scans instead of using the indexes as 
> they had been up until that point.

What exactly is your definition of a "clean shutdown"?  At the very
least you'd have had to abort the session running the reindex.  Also,
what PG version is this, and what are the index definitions?

regards, tom lane



Re: [GENERAL] Enumeration of tables is very slow in largish database

2012-01-11 Thread Kirill Müller

On 01/11/2012 07:00 PM, Scott Marlowe wrote:

This is a problem I've run into before, but I can't find the previous
post on it.  When you run a \d command, if you run top on your server
do you see a single CPU spinning hard on that one command?  If so then
it's a pg server side problem, which is what I had on one server with
~40k objects in it.

Off the top of my head I remember something like this helping:

alter function pg_table_is_visible cost 10;
Thanks for the feedback. I found the relevant parts in the qgis source 
code and have been able to trace the problem. It's just a sub-optimal 
query issued by qgis:


SELECT 
pg_class.relname,pg_namespace.nspname,pg_attribute.attname,pg_class.relkind

FROM pg_attribute,pg_class,pg_namespace
WHERE pg_namespace.oid=pg_class.relnamespace AND pg_attribute.attrelid = 
pg_class.oid
AND ( EXISTS (SELECT * FROM pg_type WHERE 
pg_type.oid=pg_attribute.atttypid AND pg_type.typname IN 
('geometry','geography','topogeometry')) OR pg_attribute.atttypid IN 
(SELECT oid FROM pg_type a WHERE EXISTS (SELECT * FROM pg_type b WHERE 
a.typbasetype=b.oid AND b.typname IN 
('geometry','geography','topogeometry'))))

AND has_schema_privilege( pg_namespace.nspname, 'usage' )
AND has_table_privilege( '"' || pg_namespace.nspname || '"."' || 
pg_class.relname || '"', 'select' )
AND NOT EXISTS (SELECT * FROM geometry_columns WHERE 
pg_namespace.nspname=f_table_schema AND pg_class.relname=f_table_name)

AND pg_class.relkind IN ('v','r');

When leaving out the last two "AND NOT EXISTS..." parts, the query 
finishes in no time. I have attached the output of EXPLAIN ANALYZE -- if 
I understand the execution tree correctly, the time is burnt in repeated 
sequential scans of the geometry_columns table (line 38). Rewriting the 
"AND NOT EXISTS" part using WITH solves the performance issues here, but 
works only from Postgres 8.4. Any idea how to speed up this query for 
older versions? (Creating a temporary table or an index should be avoided.)


Kirill

                                   QUERY PLAN
-----------------------------------------------------------------------------
 Nested Loop  (cost=33350.33..40392.79 rows=13 width=193) (actual 
time=1073.299..171226.309 rows=4821 loops=1)
   ->  Nested Loop Anti Join  (cost=30586.95..37463.23 rows=1 width=133) 
(actual time=958.850..170675.643 rows=9343 loops=1)
 Join Filter: ((pg_namespace.nspname = 
(geometry_columns.f_table_schema)::name) AND (pg_class.relname = 
(geometry_columns.f_table_name)::name))
 ->  Hash Anti Join  (cost=30586.95..37075.41 rows=1 width=133) (actual 
time=945.911..1760.307 rows=17836 loops=1)
   Hash Cond: ((pg_namespace.nspname = n.nspname) AND 
(pg_class.relname = c.relname))
   ->  Hash Join  (cost=7.64..5076.89 rows=2426 width=133) (actual 
time=1.264..769.633 rows=17836 loops=1)
 Hash Cond: (pg_class.relnamespace = pg_namespace.oid)
 Join Filter: has_table_privilege((((('"'::text || 
(pg_namespace.nspname)::text) || '"."'::text) || (pg_class.relname)::text) || 
'"'::text), 'select'::text)
 ->  Seq Scan on pg_class  (cost=0.00..4802.82 rows=17676 
width=73) (actual time=0.039..135.094 rows=17850 loops=1)
   Filter: (relkind = ANY ('{v,r}'::"char"[]))
 ->  Hash  (cost=6.85..6.85 rows=63 width=68) (actual 
time=1.172..1.172 rows=157 loops=1)
   ->  Seq Scan on pg_namespace  (cost=0.00..6.85 
rows=63 width=68) (actual time=0.027..0.938 rows=157 loops=1)
 Filter: has_schema_privilege((nspname)::text, 
'usage'::text)
   ->  Hash  (cost=29919.24..29919.24 rows=44005 width=128) (actual 
time=944.583..944.583 rows=0 loops=1)
 ->  Hash Join  (cost=5294.84..29919.24 rows=44005 
width=128) (actual time=944.582..944.582 rows=0 loops=1)
   Hash Cond: (a.attrelid = c.oid)
   ->  Hash Join  (cost=19.75..22908.77 rows=94205 
width=4) (actual time=944.580..944.580 rows=0 loops=1)
 Hash Cond: (a.atttypid = t.oid)
 ->  Seq Scan on pg_attribute a  
(cost=0.00..19562.42 rows=635881 width=8) (actual time=0.007..582.303 
rows=647146 loops=1)
   Filter: (NOT attisdropped)
 ->  Hash  (cost=19.70..19.70 rows=4 width=4) 
(actual time=0.039..0.039 rows=1 loops=1)
   ->  Bitmap Heap Scan on pg_type t  
(cost=4.32..19.70 rows=4 width=4) (actual time=0.035..0.036 rows=1 loops=1)
 Recheck Cond: (typname = 
'geography'::name)
   

[GENERAL] Parameterized Query Pegs the DB

2012-01-11 Thread Michael Daines
Hi,

I've got a situation where a user was seeing their database pegged at 100% CPU 
usage. We tracked it down to a parameterized query, and when we removed the 
process that was launching the query, the problem went away. I still don't 
understand what caused it, though. I've searched the old messages, the FAQ, 
and Google and didn't find anything that helped.

The one thing that stands out at this point is that the parameters aren't 
written to the log file.

For other parameterized queries, the logs will look like this:

2011-12-02 13:33:58 MSTLOG:  duration: 0.028 ms  execute : SELECT 
COUNT(*) FROM table_name WHERE field = $1
2011-12-02 13:33:58 MSTDETAIL:  parameters: $1 = 'value'

But for the query that was causing a problem, there is never an "MSTDETAIL" 
line showing the parameters.

We're using PostgreSQL 9.0 and accessing via JDBC.

Thanks for any help/insight you can provide.

msd


[GENERAL] PG synchronous replication and unresponsive slave

2012-01-11 Thread Manoj Govindassamy

Hi,

I have a PG 9.1.2 Master <--> Slave setup with synchronous replication. 
It is all working as expected. I have a case where I want to flip the 
Master to non-replication mode whenever its slave is not responding. I 
have set replication_timeout to 5s, and whenever the slave is unresponsive 
for more than 5s, I see the master detecting it. But transactions on the 
master are stuck until the slave comes back. To get past this, I reloaded 
the config on the master with synchronous_commit = local. Further 
transactions on the master go through fine with local commits turned on.


Here are my questions:

1. The transaction that was stuck right when the slave went away never 
went through, even after I reloaded the master's config with local commit 
on. I do see all new transactions on the master going through fine, except 
the one that was stuck initially. How do I get this stuck transaction to 
complete, or to return with an error?


2. Whenever there is a problem with the slave, I have to manually reload 
the master's config with local commit turned on to get the master to move 
forward. Is there an automated way to reload this config with local commit 
on when the slave is unresponsive? TCP connection timeouts and replication 
timeouts detect the failures, but I want to run some corrective action on 
failure detection.
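
For question 2: on 9.1 the usual approach is an external monitor that
rewrites the setting and signals a reload. A sketch (the scratch directory
and file contents below are assumptions for illustration; a real script
would edit the live postgresql.conf and then run pg_ctl reload):

```shell
# Sketch: degrade a synchronous master to local commits when the standby
# is unresponsive. Demonstrated on a scratch copy of postgresql.conf so
# the edit itself is visible.
PGDATA=/tmp/demo_pgdata
mkdir -p "$PGDATA"

# Stand-in config, as a synchronous-replication master might have it:
printf "synchronous_standby_names = '*'\nsynchronous_commit = on\n" \
    > "$PGDATA/postgresql.conf"

# Durability downgrade: commit locally, stop waiting for the standby.
sed -i "s/^synchronous_commit.*/synchronous_commit = local/" \
    "$PGDATA/postgresql.conf"

# On a live server this would be followed by:
#   pg_ctl -D "$PGDATA" reload
grep '^synchronous_commit' "$PGDATA/postgresql.conf"
```

This matches the observation in question 1 that the reload does not release
a transaction already blocked waiting for the standby; that part is best
confirmed on the list.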


--
thanks,
Manoj



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Problems saving network paths.

2012-01-11 Thread giuliano.medina
Good morning everyone, 

I am running into a similar problem, but with two different database 
versions. 

If I run, on 'PostgreSQL 8.4.4, compiled by Visual C++ build 1400, 32-bit', 
the command: 
insert into escape(texto) values ( 'caminho\\rede\\arquivo.txt'); 
pgAdmin returns the warning the colleague mentioned above, but inserts the 
data correctly: 
select * from escape : 
\\caminho\rede\arquivo.txt 

However, running the same command on 'PostgreSQL 9.1.0, compiled by Visual 
C++ build 1500, 64-bit', no warning is shown and the data is stored as 
follows: 
select * from escape : 
caminho\\rede\\arquivo.txt 
On this new version, for the data to be stored correctly, I am forced to 
use the escape character 'E': 
insert into escape(texto) values ( E'caminho\\rede\\arquivo.txt'); 
select * from escape : 
\\caminho\rede\arquivo.txt 

Since I have the same application running against different databases, I 
would like to know whether there is any Postgres setting so that it is not 
necessary to always supply the escape character. Otherwise I will have to 
change my application's source code :S 

Any tips?
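
The behaviour difference described above matches standard_conforming_strings,
whose default changed from off to on in PostgreSQL 9.1. A sketch of the
difference:

```sql
-- PostgreSQL 9.1+ default: backslashes in '...' strings are literal
SET standard_conforming_strings = on;
SELECT 'caminho\\rede\\arquivo.txt';    -- caminho\\rede\\arquivo.txt

-- Pre-9.1 default: backslash is an escape character (with a warning)
SET standard_conforming_strings = off;
SELECT 'caminho\\rede\\arquivo.txt';    -- caminho\rede\arquivo.txt

-- E'...' strings always treat backslash as an escape, on any version
SELECT E'caminho\\rede\\arquivo.txt';   -- caminho\rede\arquivo.txt
```

Setting standard_conforming_strings = off in postgresql.conf on the 9.1
server would presumably restore the 8.4 behaviour without touching the
application.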

--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Problemas-ao-salvar-enderecos-de-rede-tp5137949p5137949.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Enumeration of tables is very slow in largish database

2012-01-11 Thread Kirill Müller


On 01/11/2012 09:36 PM, Adrian Klaver wrote:

On 01/11/2012 11:45 AM, Kirill Müller wrote:

On 01/11/2012 04:32 PM, Adrian Klaver wrote:

Would seem that they both have the same issue, namely pulling over the
table
names and the meta data is resource intensive.

The problem is that the slow part is the "connect"...

More digging:
Looks like QGIS uses the geometry_columns table first to determine the 
geometry-aware tables; failing that, it walks the tables looking for 
geometry columns. Is the geometry_columns table up to date with the 
actual tables?
Thank you for the tip. In the current version of QGIS, I can select 
"only look in the geometry_columns table" as an option. For 4+ 
entries in this table, it took just some seconds to enumerate all 
tables. This workaround solves the most urgent problems (I'm currently 
rebuilding the geometry_columns table).
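
A hypothetical way to spot tables whose geometry columns are missing from
geometry_columns, assuming the PostGIS 1.x catalog layout (untested sketch):

```sql
SELECT n.nspname, c.relname, a.attname
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
JOIN pg_attribute a ON a.attrelid = c.oid AND NOT a.attisdropped
JOIN pg_type t      ON t.oid = a.atttypid AND t.typname = 'geometry'
LEFT JOIN geometry_columns g
       ON g.f_table_schema = n.nspname AND g.f_table_name = c.relname
WHERE c.relkind = 'r' AND g.f_table_name IS NULL;
```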


Kirill

--
_
ETH Zürich
Institute for Transport Planning and Systems
HIL F 32.2
Wolfgang-Pauli-Str. 15
8093 Zürich

Phone:   +41 44 633 33 17
Fax: +41 44 633 10 57
Secretariat: +41 44 633 31 05
E-Mail:  kirill.muel...@ivt.baug.ethz.ch




Re: [GENERAL] Enumeration of tables is very slow in largish database

2012-01-11 Thread Reid Thompson
On Wed, 2012-01-11 at 20:50 +0100, Kirill Müller wrote:
> that would 
> allow tracing the queries and their runtime while they are executed \

http://www.postgresql.org/docs/8.4/static/auto-explain.html
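
On 8.4, auto_explain can be enabled per session or cluster-wide; roughly
(a sketch):

```sql
-- Per session:
LOAD 'auto_explain';
SET auto_explain.log_min_duration = '100ms';  -- log plans of slower statements

-- Cluster-wide, in postgresql.conf (pre-9.2 also needs the custom class):
--   shared_preload_libraries = 'auto_explain'
--   custom_variable_classes  = 'auto_explain'
--   auto_explain.log_min_duration = '100ms'
```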



Re: [GENERAL] Enumeration of tables is very slow in largish database

2012-01-11 Thread Adrian Klaver

On 01/11/2012 11:45 AM, Kirill Müller wrote:

Adrian,


On 01/11/2012 04:32 PM, Adrian Klaver wrote:

Would seem that they both have the same issue, namely pulling over the
table
names and the meta data is resource intensive.

The problem is that the slow part is the "connect", just at the * in
the in-line comment :-)


More digging:
Looks like QGIS uses the geometry_columns table first to determine the 
geometry-aware tables; failing that, it walks the tables looking for 
geometry columns. Is the geometry_columns table up to date with the 
actual tables?



Cheers

Kirill




--
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] Is there a reason why Postgres doesn't have Byte or tinyint?

2012-01-11 Thread Francisco Figueiredo Jr.
On Tue, Jan 10, 2012 at 03:49, Mike Christensen  wrote:
 According to the manuals, Postgres has smallint (2 byte), integer 
 (4
 bytes) and bigint (8 bytes)..  I use a lot of structures with 
 "bytes"
 in my code and it's kinda annoying to cast DB output from Int16 to
 Byte every time, especially since there's no explicit cast in .NET 
 and
 you have to use System.Convert().

 Is there a work-around, or do people just cast or use Int16 in 
 their
 data structures?  Just wondering..  I know on modern computers it
 probably doesn't make any difference anyway..
>>>
>>>
>>> Is this just about programmer convenience or is it about space 
>>> efficiency
>>> in
>>> the database?  BYTEA might help you.  Or try declaring a DOMAIN over
>>> SMALLINT that limits allowed values to the range of a byte. -- 
>>> Darren
>>> Duncan
>>
>>
>> This is purely programmer convenience.
>>
>> Basically, I want Npgsql to marshal the value as a .NET Byte type, if
>> I can find a way to do that I'm happy.  Perhaps it's more of a Npgsql
>> question, though I'm curious as to why Postgres doesn't have an
>> intrinsic tinyint or byte type.
>
>
> Maybe Postgres doesn't need a Byte type predefined because it gives 
> you the
> means to define the type yourself, such as using DOMAIN.
>
> Generally speaking, I believe it is more important for a type system 
> to
> provide the means for arbitrary user-defined types which can be used 
> in all
> the places as built-in-defined types, than to have large numbers of
> built-in-defined types.

 Precisely.  postgresql's extensible nature allows you to build your
 own types as well.  If it's popular enough it'll make it into contrib,
 then maybe core.  My guess is that there's some non-trivial cost to
 maintaining each core type, and since a byte type isn't required by
 the SQL spec, it would take some effort to get a standard one included
 in the core.
>>>
>>> That makes sense.
>>>
>>> I guess my question is more of a NpgSql question then.  Is there a way
>>> to create a custom PG type, and have npgsql serialize that type in a
>>> dataset to a .NET Byte type?
>>>
>>> I'd probably be better off posting on the npgsql mailing list, but
>>> perhaps someone here knows as well..
>>>
>>
>>
>> Hi!
>>
>> Npgsql already has support to map .Net byte data type to int2 and 
>> vice-versa.
>>
>> You can see all the supported type mappings in this file:
>> http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/npgsql/Npgsql2/src/NpgsqlTypes/NpgsqlTypesHelper.cs?rev=1.41&content-type=text/x-cvsweb-markup
>>
>> Check method PrepareDefaultTypesMap().
>>
>> I hope it helps.
>
> Thanks!  I'll have to mess around with this a bit more..  From what
> I've seen so far, functions that have int2 out parameters will return
> Int16 through the DataReader..  Maybe I'm doing something wrong..
>
> Mike


 I think you aren't doing anything wrong. int2 postgresql datatypes are
 supposed to be mapped to .net int16 types.

 Do you have a simple example?

 What type were you expecting in the datareader? Maybe there is a
 missing mapping in Npgsql.

 Thanks in advance.
>>>
>>> I'm pretty sure your code is working the way it's designed.  If I pass
>>> /in/ a .NET Byte type, it's compatible with an Int2 (16bit int).  It's
>>> basically "cast-up" to the smallest intrinsic type Postgres supports.
>>> However, data coming back out from PG is where I run into the issue.
>>> What I'm doing is calling a function that has the following signature:
>>>
>>> CREATE FUNCTION KPC_GetMealPlans(IN _category MealPlanCategoryEnum,
>>> OUT id uuid, OUT title character varying, OUT imageurl character
>>> varying, OUT rating smallint, OUT numratings integer, out crediturl
>>> character varying, OUT recipecount integer, out ingredientcount
>>> integer)
>>>  RETURNS SETOF record AS
>>>  $BODY$
>>>  BEGIN
>>>    -- All sorts of stuff way too mind-blowing for this email
>>>  END;
>>>
>>> Then I call it as so:
>>>
>>> IDbCommand cmd = session.Connection.CreateCommand();
>>> cmd.CommandType = CommandType.StoredProcedure;
>>> cmd.CommandText = "KPC_GetMealPlans";
>>> //...bind parameter
>>> return cmd.ExecuteReader();
>>>
>>> This will return an iDataReader, which I loop through.  That
>>> IDataReader will return an Int16 type for the Rating OUT parameter,
>>> which I'm pretty sure is by default.  However, I would like it to
>>> return a Byte, because I use a Byte for all my ratings 
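Darren's DOMAIN suggestion quoted above could be sketched roughly like this (untested; the domain name is made up):

```sql
-- A byte-range integer, stored as smallint under the hood.
CREATE DOMAIN tinyint AS smallint
    CHECK (VALUE >= 0 AND VALUE <= 255);

CREATE TABLE ratings (rating tinyint);
INSERT INTO ratings VALUES (200);   -- ok
INSERT INTO ratings VALUES (300);   -- rejected by the CHECK
```

Note that a driver will still see the underlying smallint, so this addresses range checking on the server side, not the .NET-side type mapping discussed here.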

Re: [GENERAL] Enumeration of tables is very slow in largish database

2012-01-11 Thread Pavel Stehule
2012/1/11 Kirill Müller :
> On 01/11/2012 02:44 PM, Andres Freund wrote:
>>
>> On Wednesday, January 11, 2012 11:07:23 AM Kirill Müller wrote:
>>>
>>> Hi all,
>>>
>>> we have a Postgres/PostGIS database with 100+ schemas and 200+ tables in
>>> each schema, generated automatically. When adding a new PostGIS layer in
>>> QGis, the application obviously enumerates all tables, and this takes
>>> minutes. Even browsing the database in pgAdmin3 is horribly slow -- it
>>> takes several seconds to e.g. open a schema (click on a schema's "+" in
>>> the tree view).
>>
>> Are you actually sure it's the database and not just pgadmin that's getting
>> really slow?
>>
>> If you connect via psql and use \dt (see \? for a list of commands) and
>> consorts, is it that slow as well?
>
> \dt is quick, I haven't tried the other commands, though. I have built qgis
> from source and will try to find out this way which query stalls. I wonder
> if Postgres has a profiling tool like MS SQL Server that would allow tracing
> the queries and their runtime while they are executed. Or perhaps there are
> logs? Could you give me some pointers, please?

Set log_min_duration_statement = 0 in postgresql.conf; after a reload,
Postgres logs every query (with its duration) to the server log.
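For example, the relevant postgresql.conf lines (0 logs everything; a positive value logs only statements slower than that many milliseconds):

```
log_min_duration_statement = 0      # log every statement with its duration
#log_min_duration_statement = 500  # or: only statements taking > 500 ms
#log_min_duration_statement = -1   # default: disabled
```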

Regards

Pavel

>
>
>
> Kirill
>
>
>



Re: [GENERAL] Enumeration of tables is very slow in largish database

2012-01-11 Thread Kirill Müller

On 01/11/2012 02:44 PM, Andres Freund wrote:

On Wednesday, January 11, 2012 11:07:23 AM Kirill Müller wrote:

Hi all,

we have a Postgres/PostGIS database with 100+ schemas and 200+ tables in
each schema, generated automatically. When adding a new PostGIS layer in
QGis, the application obviously enumerates all tables, and this takes
minutes. Even browsing the database in pgAdmin3 is horribly slow -- it
takes several seconds to e.g. open a schema (click on a schema's "+" in
the tree view).

Are you actually sure it's the database and not just pgadmin that's getting
really slow?

If you connect via psql and use \dt (see \? for a list of commands) and
consorts, is it that slow as well?
\dt is quick, I haven't tried the other commands, though. I have built 
qgis from source and will try to find out this way which query stalls. I 
wonder if Postgres has a profiling tool like MS SQL Server that would 
allow tracing the queries and their runtime while they are executed. Or 
perhaps there are logs? Could you give me some pointers, please?



Kirill





Re: [GENERAL] Enumeration of tables is very slow in largish database

2012-01-11 Thread Kirill Müller

Adrian,


On 01/11/2012 04:32 PM, Adrian Klaver wrote:

Would seem that they both have the same issue, namely pulling over the table
names and the meta data is resource intensive.
The problem is that the slow part is the "connect", just at the * in 
the in-line comment :-)

4.2.2. Loading a PostGIS Layer

Once you have one or more connections defined, you can load layers from the
PostgreSQL database.
Of course this requires having data in PostgreSQL. See Section 4.2.4 for a
discussion on importing data into
the database.
To load a layer from PostGIS, perform the following steps:
– If the Add PostGIS Table(s) dialog is not already open, click on the Add
PostGIS Layer toolbar button.
– Choose the connection from the drop-down list and click Connect .

***

– Select or unselect Also list tables with no geometry
– Optionally use some Search Options to define which features to load from the
layer or use the
Build query icon to start the Query builder dialog.

^^^
Wonder if it would be possible to restrict the dataset(tables) by using the
above?

Thanks for your feedback anyway, this helps clarify the issue.


Cheers

Kirill





Re: [GENERAL] Keywords

2012-01-11 Thread vyang
That works perfectly.  Thanks!!

From: Dmitriy Igrishin 
Sent: Wednesday, January 11, 2012 11:03 AM
To: vyang 
Cc: pgsql-general@postgresql.org 
Subject: Re: [GENERAL] Keywords

Hey vyang,


2012/1/11 vyang 

  Hello List,

  I’m wondering if there is a way to retrieve/query PostgreSQL for a list of 
key words matching that of Appendix C. SQL Key Words.  I’m using PostgreSQL 9.1 
and java.  I’ve already tried java’s DatabaseMetaData.getSQLKeywords, but the 
API states “Retrieves a comma-separated list of all of this database's SQL 
keywords that are NOT also SQL:2003 keywords.”  As a result many key words are 
left out.  Any help on this matter is much appreciated.

  vyang

Use pg_get_keywords().
http://www.postgresql.org/docs/9.1/static/functions-info.html

-- 
// Dmitriy.




Re: [GENERAL] Keywords

2012-01-11 Thread vyang


From: Adam Cornett 
Sent: Wednesday, January 11, 2012 11:05 AM
To: vyang 
Cc: pgsql-general@postgresql.org 
Subject: Re: [GENERAL] Keywords

I want to grab it programmatically because from time to time words are 
added/removed.
On Wed, Jan 11, 2012 at 12:02 PM, Adam Cornett  wrote:

  On Wed, Jan 11, 2012 at 11:03 AM, vyang  wrote:

Hello List,

I’m wondering if there is a way to retrieve/query PostgreSQL for a list of 
key words matching that of Appendix C. SQL Key Words.  I’m using PostgreSQL 9.1 
and java.  I’ve already tried java’s DatabaseMetaData.getSQLKeywords, but the 
API states “Retrieves a comma-separated list of all of this database's SQL 
keywords that are NOT also SQL:2003 keywords.”  As a result many key words are 
left out.  Any help on this matter is much appreciated.

vyang

  The Manual has a list in the appendix: 
  http://www.postgresql.org/docs/9.1/static/sql-keywords-appendix.html

Sorry for not reading the first part of your question, you've already found the 
list I pointed you to. 

Why can't you just save the list in Appendix C and read it in to your 
application?

-Adam

Re: [GENERAL] indexes no longer used after shutdown during reindexing

2012-01-11 Thread Matt Dew

On 01/11/2012 11:07 AM, Scott Marlowe wrote:

On Wed, Jan 11, 2012 at 10:42 AM, Matt Dew  wrote:

Hello all,
   I have a database that was shut down, cleanly, during a 'reindex table'
  command.  When the database came back up, queries against that table
started doing sequential scans instead of using the indexes as they had been
up until that point.

We tried:
1) vacuuming the table (vacuum tblName)
2) reindexing the table (reindex table tblName)
3) dropping and recreating the indexes

but none of those actions helped.   We ended up recreating the table by
renaming the table and doing a create table as select * from oldTable and
re-adding the indexes.  This worked.

This problem presented itself as an application timing out. It took several
people, several hours to track this down and solve it.

Several months ago I had two other tables also stopped using their indexes.
  Those times however I don't know if a database shutdown caused the problem.

Has anyone had this problem?  If so, what specifically is the cause?  Is
shutting down a database during a table rebuild or vacuum an absolute no-no?

Any and all help or insight would be appreciated,
Matt

You likely had an invalid index, I've seen that crop up when doing a
create index concurrently.  Just a guess.  What did or does \d of the
table and its indexes show?  Look for invalid in the output.


Hi Scott,
  The output of \d looked normal.  Nothing weird or different than before.





Re: [GENERAL] indexes no longer used after shutdown during reindexing

2012-01-11 Thread Scott Marlowe
On Wed, Jan 11, 2012 at 10:42 AM, Matt Dew  wrote:
> Hello all,
>   I have a database that was shut down, cleanly, during a 'reindex table'
>  command.  When the database came back up, queries against that table
> started doing sequential scans instead of using the indexes as they had been
> up until that point.
>
> We tried:
> 1) vacuuming the table (vacuum tblName)
> 2) reindexing the table (reindex table tblName)
> 3) dropping and recreating the indexes
>
> but none of those actions helped.   We ended up recreating the table by
> renaming the table and doing a create table as select * from oldTable and
> re-adding the indexes.  This worked.
>
> This problem presented itself as an application timing out. It took several
> people, several hours to track this down and solve it.
>
> Several months ago I had two other tables also stopped using their indexes.
>  Those times however I don't know if a database shutdown caused the problem.
>
> Has anyone had this problem?  If so, what specifically is the cause?  Is
> shutting down a database during a table rebuild or vacuum an absolute no-no?
>
> Any and all help or insight would be appreciated,
> Matt

You likely had an invalid index, I've seen that crop up when doing a
create index concurrently.  Just a guess.  What did or does \d of the
table and its indexes show?  Look for invalid in the output.
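One way to check for the invalid indexes mentioned above directly from the catalogs (pg_index.indisvalid is the flag that \d inspects, available since 8.2):

```sql
-- List any indexes marked invalid, e.g. after a failed or
-- interrupted CREATE INDEX CONCURRENTLY.
SELECT c.relname AS index_name
FROM pg_index i
JOIN pg_class c ON c.oid = i.indexrelid
WHERE NOT i.indisvalid;
```

An empty result means no index in the database is flagged invalid.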



Re: [GENERAL] add constraint deferrable, syntax error

2012-01-11 Thread Andreas Kretschmer
Gauthier, Dave  wrote:

> Why am I getting this?
> 
>  
> 
> xdb=# alter table templates add constraint manager_uid_is_invalid check
> ((constraint_checker('manager',manager,null,null)) = 'OK') DEFERRABLE ;
> 
> ERROR:  syntax error at or near "DEFERRABLE"
> 
> LINE 1: ...int_checker('manager',manager,null,null)) = 'OK') DEFERRABLE...

1st, I think you should rewrite it to:

((constraint_checker('manager',manager,null,null) = 'OK'))


2nd, check-constraints ARE NOT DEFERRABLE !!!


http://www.postgresql.org/docs/current/static/sql-createtable.html says:
Currently, only UNIQUE, PRIMARY KEY, EXCLUDE, and REFERENCES (foreign
key) constraints accept this clause. NOT NULL and CHECK constraints are
not deferrable.
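If deferred checking is really needed, one common workaround is a deferrable constraint trigger instead of a CHECK constraint. A sketch only, untested, assuming constraint_checker has the signature used above:

```sql
-- Hypothetical trigger function wrapping the same check.
CREATE FUNCTION check_manager_uid() RETURNS trigger AS $$
BEGIN
    IF constraint_checker('manager', NEW.manager, NULL, NULL) <> 'OK' THEN
        RAISE EXCEPTION 'manager_uid_is_invalid';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Constraint triggers, unlike CHECK constraints, accept DEFERRABLE.
CREATE CONSTRAINT TRIGGER manager_uid_is_invalid
    AFTER INSERT OR UPDATE ON templates
    DEFERRABLE INITIALLY IMMEDIATE
    FOR EACH ROW EXECUTE PROCEDURE check_manager_uid();
```

With this in place, `SET CONSTRAINTS manager_uid_is_invalid DEFERRED` postpones the check to commit time.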


Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly."   (unknown)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°



Re: [GENERAL] Enumeration of tables is very slow in largish database

2012-01-11 Thread Scott Marlowe
On Wed, Jan 11, 2012 at 3:07 AM, Kirill Müller
 wrote:
> Hi all,
>
> we have a Postgres/PostGIS database with 100+ schemas and 200+ tables in
> each schema, generated automatically. When adding a new PostGIS layer in
> QGis, the application obviously enumerates all tables, and this takes
> minutes. Even browsing the database in pgAdmin3 is horribly slow -- it takes
> several seconds to e.g. open a schema (click on a schema's "+" in the tree
> view).
>
> The problems occurred only after adding that many schemas to the database.
> Before, with only 10+ schemas, the performance was acceptable.
>
> Is this a known limitation of Postgres, or perhaps a misconfiguration of our
> installation? What would you suggest to improve performance here? We
> currently don't have administration rights for the database or login rights
> for the server machine (Linux), but I think we'll need to take care about
> that.

This is a problem I've run into before, but I can't find the previous
post on it.  When you run a \d command, if you run top on your server
do you see a single CPU spinning hard on that one command?  If so then
it's a pg server side problem, which is what I had on one server with
~40k objects in it.

Off the top of my head I remember something like this helping:

alter function pg_table_is_visible cost 10;

But I'm not sure that's it.  Maybe Tom Lane can pipe up on this.



[GENERAL] indexes no longer used after shutdown during reindexing

2012-01-11 Thread Matt Dew

Hello all,
   I have a database that was shut down, cleanly, during a 'reindex 
table' command.  When the database came back up, queries against that 
table started doing sequential scans instead of using the indexes as 
they had been up until that point.


We tried:
1) vacuuming the table (vacuum tblName)
2) reindexing the table (reindex table tblName)
3) dropping and recreating the indexes

but none of those actions helped.   We ended up recreating the table by 
renaming the table and doing a create table as select * from oldTable 
and re-adding the indexes.  This worked.


This problem presented itself as an application timing out. It took 
several people, several hours to track this down and solve it.


Several months ago I had two other tables also stopped using their 
indexes.  Those times however I don't know if a database shutdown caused 
the problem.


Has anyone had this problem?  If so, what specifically is the cause?  Is 
shutting down a database during a table rebuild or vacuum an absolute no-no?


Any and all help or insight would be appreciated,
Matt



Re: [GENERAL] Keywords

2012-01-11 Thread Adam Cornett
On Wed, Jan 11, 2012 at 12:02 PM, Adam Cornett wrote:

> On Wed, Jan 11, 2012 at 11:03 AM, vyang  wrote:
>
>>   Hello List,
>>
>> I’m wondering if there is a way to retrieve/query PostgreSQL for a list
>> of key words matching that of Appendix C. SQL Key Words.  I’m using
>> PostgreSQL 9.1 and java.  I’ve already tried java’s
>> DatabaseMetaData.getSQLKeywords, but the API states “Retrieves a
>> comma-separated list of all of this database's SQL keywords that are NOT
>> also SQL:2003 keywords.”  As a result many key words are left out.  Any
>> help on this matter is much appreciated.
>>
>> vyang
>>
>
> The Manual has a list in the appendix:
> http://www.postgresql.org/docs/9.1/static/sql-keywords-appendix.html
>

Sorry for not reading the first part of your question, you've already found
the list I pointed you to.

Why can't you just save the list in Appendix C and read it in to your
application?

-Adam


Re: [GENERAL] Keywords

2012-01-11 Thread Dmitriy Igrishin
Hey vyang,

2012/1/11 vyang 

>   Hello List,
>
> I’m wondering if there is a way to retrieve/query PostgreSQL for a list of
> key words matching that of Appendix C. SQL Key Words.  I’m using PostgreSQL
> 9.1 and java.  I’ve already tried java’s DatabaseMetaData.getSQLKeywords,
> but the API states “Retrieves a comma-separated list of all of this
> database's SQL keywords that are NOT also SQL:2003 keywords.”  As a result
> many key words are left out.  Any help on this matter is much appreciated.
>
> vyang
>

Use pg_get_keywords().
http://www.postgresql.org/docs/9.1/static/functions-info.html
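A quick sketch of its use (column names as per the 9.1 docs):

```sql
-- All keywords known to this server, with their category:
-- U = unreserved, C = column/table name, T = type/function name,
-- R = reserved.
SELECT word, catcode, catdesc
FROM pg_get_keywords()
ORDER BY word;

-- Only the fully reserved words:
SELECT word FROM pg_get_keywords() WHERE catcode = 'R';
```

Because this queries the running server, the list stays current across version upgrades, which addresses the "words are added/removed over time" concern.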

-- 
// Dmitriy.


Re: [GENERAL] Keywords

2012-01-11 Thread Adam Cornett
On Wed, Jan 11, 2012 at 11:03 AM, vyang  wrote:

>   Hello List,
>
> I’m wondering if there is a way to retrieve/query PostgreSQL for a list of
> key words matching that of Appendix C. SQL Key Words.  I’m using PostgreSQL
> 9.1 and java.  I’ve already tried java’s DatabaseMetaData.getSQLKeywords,
> but the API states “Retrieves a comma-separated list of all of this
> database's SQL keywords that are NOT also SQL:2003 keywords.”  As a result
> many key words are left out.  Any help on this matter is much appreciated.
>
> vyang
>

The Manual has a list in the appendix:
http://www.postgresql.org/docs/9.1/static/sql-keywords-appendix.html


[GENERAL] Keywords

2012-01-11 Thread vyang
Hello List,

I’m wondering if there is a way to retrieve/query PostgreSQL for a list of key 
words matching that of Appendix C. SQL Key Words.  I’m using PostgreSQL 9.1 and 
java.  I’ve already tried java’s DatabaseMetaData.getSQLKeywords, but the API 
states “Retrieves a comma-separated list of all of this database's SQL keywords 
that are NOT also SQL:2003 keywords.”  As a result many key words are left out. 
 Any help on this matter is much appreciated.

vyang

Re: [GENERAL] question about the money type

2012-01-11 Thread Merlin Moncure
On Wed, Jan 11, 2012 at 9:32 AM, Szymon Guz  wrote:
> Hi,
> in the documentation for 8.2
> (http://www.postgresql.org/docs/8.2/interactive/datatype-money.html) there
> is info:
>
> Note: The money type is deprecated. Use numeric or decimal instead, in
> combination with the to_char function.
>
> However in later versions the info disappeared. It doesn't exist even in the
> 8.3.
>
> I think removing this type would be a good solution... any plans for that?
>
> What is the status of Money type, is that OK or deprecated?

in 8.3 it was expanded to 64 bits.  Lack of range was the main reason
for the deprecation, so its status was reverted.  I happen to still
think it's pretty silly to justify as a core type but it's not going
anywhere.
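The alternative the old docs recommended, numeric plus to_char for display, might look like this (the format string is just an example):

```sql
-- Store amounts as plain numeric and format only on output.
CREATE TABLE invoices (amount numeric(12,2));
INSERT INTO invoices VALUES (1234.5);

-- FM suppresses padding; "$" is a quoted literal in the template.
SELECT to_char(amount, 'FM"$"9,999,990.00') FROM invoices;
-- e.g. formats 1234.50 as $1,234.50
```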

merlin



[GENERAL] add constraint deferrable, syntax error

2012-01-11 Thread Gauthier, Dave
Why am I getting this?

xdb=# alter table templates add constraint manager_uid_is_invalid check 
((constraint_checker('manager',manager,null,null)) = 'OK') DEFERRABLE ;
ERROR:  syntax error at or near "DEFERRABLE"
LINE 1: ...int_checker('manager',manager,null,null)) = 'OK') DEFERRABLE...

v8.3.4 on linux



Re: [GENERAL] Mixed Pg-9.1.2 and 8.4.4 environment

2012-01-11 Thread James B. Byrne

On Tue, January 10, 2012 09:28, Adrian Klaver wrote:
> On Tuesday, January 10, 2012 6:17:27 am James B. Byrne
> wrote:
>>
>> My question:  Is their a way to specify a pg_dump option
>> to retain compatibility with a lower version pg
>> database?
>> Alternatively, can one install a higher level pg_dump in
>> an otherwise 8.4.4 installation?
>
> Short answer is no. The reason is seen in the errors
> below. EXTENSION does not exist in Postgres below 9.1.
> In this particular case the EXTENSION is being used
> to load a procedural language and that had a pre 9.1
> analog in CREATE LANGUAGE. In other instances that
> would not be the case. pg_dump can dump forward, i.e.
> use new pg_dump version to pull data forward from old
> Postgres version, but not the other way around.
>

We upgraded the backup systems to v.9.1.2, which was
fairly trivial given that they simply load pgdumps to
begin with.

-- 
***  E-Mail is NOT a SECURE channel  ***
James B. Byrne                mailto:byrn...@harte-lyne.ca
Harte & Lyne Limited  http://www.harte-lyne.ca
9 Brockley Drive  vox: +1 905 561 1241
Hamilton, Ontario fax: +1 905 561 0757
Canada  L8E 3C3




[GENERAL] question about the money type

2012-01-11 Thread Szymon Guz
Hi,
in the documentation for 8.2 (
http://www.postgresql.org/docs/8.2/interactive/datatype-money.html) there
is info:

*Note:* The money type is deprecated. Use numeric or decimal instead, in
combination with the to_char function.

However in later versions the info disappeared. It doesn't exist even in
the 8.3.

I think removing this type would be a good solution... any plans for that?

What is the status of Money type, is that OK or deprecated?

regards
Szymon


Re: [GENERAL] Enumeration of tables is very slow in largish database

2012-01-11 Thread Adrian Klaver
On Wednesday, January 11, 2012 2:07:23 am Kirill Müller wrote:
> Hi all,
> 
> we have a Postgres/PostGIS database with 100+ schemas and 200+ tables in
> each schema, generated automatically. When adding a new PostGIS layer in
> QGis, the application obviously enumerates all tables, and this takes
> minutes. Even browsing the database in pgAdmin3 is horribly slow -- it
> takes several seconds to e.g. open a schema (click on a schema's "+" in
> the tree view).

Would seem that they both have the same issue, namely pulling over the table 
names and the meta data is resource intensive.

Not a QGis user but I did find this in the manual:

http://download.osgeo.org/qgis/doc/manual/qgis-1.7.0_user_guide_en.pdf


See in-line comment:


4.2.2. Loading a PostGIS Layer

Once you have one or more connections defined, you can load layers from the 
PostgreSQL database.
Of course this requires having data in PostgreSQL. See Section 4.2.4 for a 
discussion on importing data into
the database.
To load a layer from PostGIS, perform the following steps:
– If the Add PostGIS Table(s) dialog is not already open, click on the Add 
PostGIS Layer toolbar button.
– Choose the connection from the drop-down list and click Connect .
– Select or unselect Also list tables with no geometry
– Optionally use some Search Options to define which features to load from the 
layer or use the
Build query icon to start the Query builder dialog.

^^^
Wonder if it would be possible to restrict the dataset(tables) by using the 
above?

– Find the layer(s) you wish to add in the list of available layers.
– Select it by clicking on it. You can select multiple layers by holding down 
the shift key while clicking.
See Section 4.6 for information on using the PostgreSQL Query Builder to 
further 
define the layer.
– Click on the Add button to add the layer to the map.



> 
> The problems occurred only after adding that many schemas to the
> database. Before, with only 10+ schemas, the performance was acceptable.
> 
> Is this a known limitation of Postgres, or perhaps a misconfiguration of
> our installation? What would you suggest to improve performance here? We
> currently don't have administration rights for the database or login
> rights for the server machine (Linux), but I think we'll need to take
> care about that.
> 
> 
> Best regards
> 
> Kirill




Re: [GENERAL] pgAdmin - Query - out of memory for query result

2012-01-11 Thread Kevin Duffy
I am using pgAdmin on both machines.
And for some strange reason the query fails on the machine with more
memory.

The message shows up in the Output pane in the messages tab.

I do not get it.


KD


On Wed, Jan 11, 2012 at 8:50 AM, Guillaume Lelarge
wrote:

> On Wed, 2012-01-11 at 01:10 -0500, Kevin Duffy wrote:
> > [...]
> > I have run into a very frustrating problem.
> >
> > I have a database with some tables which were loaded from Excel
> spreadsheets
> > via a Perl script.  Guess that does not really matter.  The table
> > definitions
> > are below.
> >
> > Both tables contain  9,398,340 rows
> >
> > On a machine running Windows 7 64Bit with 6Gig of RAM
> > When I do a simple query, select * from estimated_idiosyncratic_return,
> > on these tables I get:  out of memory for query result
> >
> > If I run the same query on the machine where the database resides,
> > the query runs, no issue.  The machine where the database resides is a
> > laptop running Windows XP SP2 with 3 Gig of RAM.
> >
> > Other queries run fine.
> >
> > I do not understand why these simple queries would work on a machine
> > with 3Gig of RAM and fail on a machine with 6Gig.
> >
>
> I suppose from your email's subject that you use pgAdmin on your Windows
> machine, and psql on the other one?
>
> If that's true, pgAdmin will probably need more memory to display the
> results than psql. I've tried to find the error message you're talking
> about, but couldn't find it in pgAdmin's code source. But I guess that,
> if your query returns many lines, it will hit hard on the memory. And
> even if it can use that much memory, the grid component can get pretty
> slow.
>
>
> --
> Guillaume
> http://blog.guillaume.lelarge.info
> http://www.dalibo.com
> PostgreSQL Sessions #3: http://www.postgresql-sessions.org
>
>


Re: [GENERAL] PGAdmin3 for CentOS-5_186_64 and pgdg91

2012-01-11 Thread devrim
(The body of this message was an attached, digitally signed .eml and could not be archived.)




Re: [GENERAL] PGAdmin3 for CentOS-5_186_64 and pgdg91

2012-01-11 Thread Devrim GÜNDÜZ
Hi,

On Wed, 2012-01-11 at 09:58 -0500, James B. Byrne wrote:

> Does anyone reading this message know where I can get
> pgadmin3_91-1.14.0-3.rhel5.i86_64 or why it is not
> available from pgdgrpm.org?

(Sorry for not replying you on pgadmin list and the private email)

pgadmin3 cannot be built on stock RHEL 5 x86_64. That's why I pushed the
32-bit version to the repository, in case it works  :(

Regards,
-- 
Devrim GÜNDÜZ
Principal Systems Engineer @ EnterpriseDB: http://www.enterprisedb.com
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
Community: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org  Twitter: http://twitter.com/devrimgunduz




[GENERAL] PGAdmin3 for CentOS-5_186_64 and pgdg91

2012-01-11 Thread James B. Byrne

I need a version of pgadmin3 which can handle pg-9.1 and
which runs on the RHEL5 x86_64 architecture.  I have
installed the pgdg-91-centos.repo yum repo file from
pgdgrpm.org. However:

yum whatprovides */pgadmin3 --enablerepo=pgdg91

does not return anything.  If I do exactly the same thing
on a Centos5_i686 box then I see this instead:

pgadmin3_91-1.14.0-3.rhel5.i386 : Graphical client for
PostgreSQL
Repo: pgdg91
Matched from:
Filename: /usr/pgsql-9.1/bin/pgadmin3
Filename: /usr/pgsql-9.1/share/pgadmin3

Does anyone reading this message know where I can get
pgadmin3_91-1.14.0-3.rhel5.x86_64 or why it is not
available from pgdgrpm.org?

Regards,





Re: [GENERAL] Enumeration of tables is very slow in largish database

2012-01-11 Thread Guillaume Lelarge
On Wed, 2012-01-11 at 15:10 +0100, Andres Freund wrote:
> On Wednesday, January 11, 2012 02:53:06 PM Guillaume Lelarge wrote:
> > On Wed, 2012-01-11 at 14:44 +0100, Andres Freund wrote:
> > > On Wednesday, January 11, 2012 11:07:23 AM Kirill Müller wrote:
> > > > Hi all,
> > > > 
> > > > we have a Postgres/PostGIS database with 100+ schemas and 200+ tables
> > > > in each schema, generated automatically. When adding a new PostGIS
> > > > layer in QGis, the application obviously enumerates all tables, and
> > > > this takes minutes. Even browsing the database in pgAdmin3 is horribly
> > > > slow -- it takes several seconds to e.g. open a schema (click on a
> > > > schema's "+" in the tree view).
> > > 
> > > Are you actually sure its the database and not just pgadmin thats getting
> > > really slow?
> > > 
> > > If you connect via psql and use \dt (see \? for a list of commands) and
> > > consorts, is it that slow as well?
> > 
> > \dt won't be as slow as pgAdmin. \dt only gets the table name, owner,
> > and stuff like that. Kinda quick. pgAdmin will get also all the other
> > informations, like columns, triggers, constraints, functions, types,
> > etc.
> Yes, sure. My guess is that the gui/pgadmin is the bottleneck and not 
> postgres 
> itself. Its hard to really do all what pgadmin does at once inside psql 
> though.
> 

Yeah, sure enough.


-- 
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com
PostgreSQL Sessions #3: http://www.postgresql-sessions.org




Re: [GENERAL] Enumeration of tables is very slow in largish database

2012-01-11 Thread Andres Freund
On Wednesday, January 11, 2012 02:53:06 PM Guillaume Lelarge wrote:
> On Wed, 2012-01-11 at 14:44 +0100, Andres Freund wrote:
> > On Wednesday, January 11, 2012 11:07:23 AM Kirill Müller wrote:
> > > Hi all,
> > > 
> > > we have a Postgres/PostGIS database with 100+ schemas and 200+ tables
> > > in each schema, generated automatically. When adding a new PostGIS
> > > layer in QGis, the application obviously enumerates all tables, and
> > > this takes minutes. Even browsing the database in pgAdmin3 is horribly
> > > slow -- it takes several seconds to e.g. open a schema (click on a
> > > schema's "+" in the tree view).
> > 
> > Are you actually sure its the database and not just pgadmin thats getting
> > really slow?
> > 
> > If you connect via psql and use \dt (see \? for a list of commands) and
> > consorts, is it that slow as well?
> 
> \dt won't be as slow as pgAdmin. \dt only gets the table name, owner,
> and stuff like that. Kinda quick. pgAdmin will get also all the other
> informations, like columns, triggers, constraints, functions, types,
> etc.
Yes, sure. My guess is that the GUI/pgAdmin is the bottleneck and not
Postgres itself. It's hard, though, to really do everything pgAdmin does
at once inside psql.

Andres



Re: [GENERAL] Enumeration of tables is very slow in largish database

2012-01-11 Thread Guillaume Lelarge
On Wed, 2012-01-11 at 14:44 +0100, Andres Freund wrote:
> On Wednesday, January 11, 2012 11:07:23 AM Kirill Müller wrote:
> > Hi all,
> > 
> > we have a Postgres/PostGIS database with 100+ schemas and 200+ tables in
> > each schema, generated automatically. When adding a new PostGIS layer in
> > QGis, the application obviously enumerates all tables, and this takes
> > minutes. Even browsing the database in pgAdmin3 is horribly slow -- it
> > takes several seconds to e.g. open a schema (click on a schema's "+" in
> > the tree view).
> Are you actually sure its the database and not just pgadmin thats getting 
> really slow?
> 
> If you connect via psql and use \dt (see \? for a list of commands) and 
> consorts, is it that slow as well?
> 

\dt won't be as slow as pgAdmin. \dt only gets the table name, owner,
and other basic details, so it's fairly quick. pgAdmin also fetches all
the other information: columns, triggers, constraints, functions, types,
etc.


-- 
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com
PostgreSQL Sessions #3: http://www.postgresql-sessions.org




Re: [GENERAL] pgAdmin - Query - out of memory for query result

2012-01-11 Thread Guillaume Lelarge
On Wed, 2012-01-11 at 01:10 -0500, Kevin Duffy wrote:
> [...]
> I have run into a very frustrating problem.
> 
> I have a database with some tables which were loaded from Excel spreadsheets
> via a Perl script.  Guess that does not really matter.  The table
> definitions
> are below.
> 
> Both tables contain  9,398,340 rows
> 
> On a machine running Windows 7 64Bit with 6Gig of RAM
> When I do a simple query, select *  from estimated_idiosyncratic_
> return, on these tables I get:  out of memory for query result
> 
> If I run the same query on the machine where the database resides,
> the query runs, No issue.  The machine where the database resides is
> laptop running Windows XP Sp2 with 3 Gig of RAM.
> 
> Other queries run fine.
> 
> I do not understand why these simple queries would work on a machine
> with 3Gig of RAM and fail on a machine with 6Gig.
> 

I suppose from your email's subject that you use pgAdmin on your Windows
machine, and psql on the other one?

If that's true, pgAdmin will probably need more memory to display the
results than psql does. I tried to find the error message you mention,
but couldn't locate it in pgAdmin's source code. My guess is that, if
your query returns that many rows, it hits the memory hard. And even if
pgAdmin can allocate that much memory, the grid component can get pretty
slow.
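One standard workaround for client-side out-of-memory errors on large result
sets (a sketch, not something suggested in this thread) is to read the result
in batches through a server-side cursor instead of materializing all ~9.4
million rows at once:

```sql
-- Sketch: batch retrieval via a server-side cursor, so the client never
-- holds the whole result set in memory. The cursor name and batch size
-- are illustrative choices.
BEGIN;
DECLARE ret_cur CURSOR FOR
    SELECT * FROM estimated_idiosyncratic_return;
FETCH FORWARD 10000 FROM ret_cur;  -- repeat until zero rows come back
CLOSE ret_cur;
COMMIT;
```

Most client interfaces can do this batching for you; in psql, setting the
FETCH_COUNT variable has the same effect.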


-- 
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com
PostgreSQL Sessions #3: http://www.postgresql-sessions.org




Re: [GENERAL] Enumeration of tables is very slow in largish database

2012-01-11 Thread Andres Freund
On Wednesday, January 11, 2012 11:07:23 AM Kirill Müller wrote:
> Hi all,
> 
> we have a Postgres/PostGIS database with 100+ schemas and 200+ tables in
> each schema, generated automatically. When adding a new PostGIS layer in
> QGis, the application obviously enumerates all tables, and this takes
> minutes. Even browsing the database in pgAdmin3 is horribly slow -- it
> takes several seconds to e.g. open a schema (click on a schema's "+" in
> the tree view).
Are you actually sure it's the database, and not just pgAdmin, that's getting
really slow?

If you connect via psql and use \dt (see \? for a list of commands) and
friends, is it that slow as well?

Andres
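One way to separate server cost from client cost (a sketch, assuming you can
connect with psql) is to time the bare catalog lookup that table enumeration
boils down to:

```sql
-- Sketch: time the raw catalog query behind table enumeration.
-- If this returns quickly for 100+ schemas x 200+ tables, the slowness
-- is in the client, not the server.
\timing on
SELECT n.nspname AS schema, c.relname AS table
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');
```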



Re: [GENERAL] Enumeration of tables is very slow in largish database

2012-01-11 Thread Guillaume Lelarge
On Wed, 2012-01-11 at 11:07 +0100, Kirill Müller wrote:
> Hi all,
> 
> we have a Postgres/PostGIS database with 100+ schemas and 200+ tables in 
> each schema, generated automatically. When adding a new PostGIS layer in 
> QGis, the application obviously enumerates all tables, and this takes 
> minutes. Even browsing the database in pgAdmin3 is horribly slow -- it 
> takes several seconds to e.g. open a schema (click on a schema's "+" in 
> the tree view).
> 
> The problems occurred only after adding that many schemas to the 
> database. Before, with only 10+ schemas, the performance was acceptable.
> 
> Is this a known limitation of Postgres, or perhaps a misconfiguration of 
> our installation?

I don't think it has anything to do with PostgreSQL; it has to do with
the client. I don't know about QGis but, in the case of pgAdmin, I'm
pretty sure the issue is pgAdmin itself. When you click on the + sign of
a schema, pgAdmin has to fetch all the information about the schema:
tables, columns, constraints, triggers, and all the other objects found
in the schema. That can take some time. Moreover, pgAdmin has to put all
of this in the treeview, and my guess is that this is the most
time-consuming operation here. We could probably get better performance,
but I haven't had the time to look into that yet.


-- 
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com
PostgreSQL Sessions #3: http://www.postgresql-sessions.org




Re: [GENERAL] Can a function return more then one table?

2012-01-11 Thread Sergey Konoplev
On Wed, Jan 11, 2012 at 10:47 AM, Pavel Stehule  wrote:
> * return setof refcursors
> * use arrays instead of tables and return arrays (for tables less 1 rows)

One more option is to use hstore plus populate_record().
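For reference, the setof-refcursor option mentioned above can be sketched
like this (table and cursor names are invented for illustration):

```sql
-- Sketch: one function handing back two result sets as named refcursors.
CREATE OR REPLACE FUNCTION two_result_sets()
RETURNS SETOF refcursor AS $$
DECLARE
    c1 refcursor := 'cur_a';
    c2 refcursor := 'cur_b';
BEGIN
    OPEN c1 FOR SELECT * FROM table_a;
    RETURN NEXT c1;
    OPEN c2 FOR SELECT * FROM table_b;
    RETURN NEXT c2;
END;
$$ LANGUAGE plpgsql;

-- Must run inside a transaction so the cursors survive the call:
-- BEGIN;
-- SELECT two_result_sets();
-- FETCH ALL FROM cur_a;
-- FETCH ALL FROM cur_b;
-- COMMIT;
```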

>
> Regards
>
> Pavel Stehule
>
>> Mike



-- 
Sergey Konoplev

Blog: http://gray-hemp.blogspot.com
LinkedIn: http://ru.linkedin.com/in/grayhemp
JID/GTalk: gray...@gmail.com Skype: gray-hemp



[GENERAL] Enumeration of tables is very slow in largish database

2012-01-11 Thread Kirill Müller

Hi all,

we have a Postgres/PostGIS database with 100+ schemas and 200+ tables in 
each schema, generated automatically. When adding a new PostGIS layer in 
QGis, the application obviously enumerates all tables, and this takes 
minutes. Even browsing the database in pgAdmin3 is horribly slow -- it 
takes several seconds to e.g. open a schema (click on a schema's "+" in 
the tree view).


The problems occurred only after adding that many schemas to the 
database. Before, with only 10+ schemas, the performance was acceptable.


Is this a known limitation of Postgres, or perhaps a misconfiguration of 
our installation? What would you suggest to improve performance here? We 
currently don't have administration rights for the database or login 
rights for the server machine (Linux), but I think we'll need to take 
care about that.



Best regards

Kirill



Re: [GENERAL] string = any()

2012-01-11 Thread Filip Rembiałkowski
On 10 January 2012 at 22:52, David Johnston wrote:

> If you are going to supply one parameter per "possible value" anyway skip the 
> whole "ANY" and "ARRAY" and just say " 'bob' IN ($1, $2 [,...]) ";

true :-)

<?php
// assumes $dbh is a PDO connection and $names an array of values;
// one placeholder per value, bound from the array at execute time
$sql = 'SELECT ... WHERE name IN ('
     . implode(',', array_fill(0, count($names), '?')) . ')';
$stmt = $dbh->prepare($sql);
$stmt->execute($names);
?>

For years now, PHP has been crying out for Postgres array support in
PDO. And personally I would rather discourage programmers from
constructing a delimited array string in code -- it is more complicated
than the first option, needs careful character escaping, etc.


select array['a',null,'','tab is  ','quote is "','comma is ,'];
 array
---
 {a,NULL,"","tab is  ","quote is \"","comma is ,"}
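For comparison with the IN ($1, $2, ...) form above, the
single-array-parameter variant looks like this (a minimal sketch, not from
the original message):

```sql
-- Sketch: one array parameter instead of one placeholder per value.
PREPARE q(text[]) AS SELECT 'bob' = ANY($1);
EXECUTE q(ARRAY['alice', 'bob']);  -- returns true
DEALLOCATE q;
```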
