Re: [GENERAL] : Looking for a PostgreSQL book

2011-09-29 Thread Achilleas Mantzios
The cookbook is indeed helpful, but I doubt it was ever properly reviewed.
Many typos, many apparent errors;
code for the cache utilities (pg_cacheutils) is mentioned as existing somewhere, but it's
not there, and more besides.

On Thursday 29 September 2011 08:07:34, Venkat Balaji wrote:
 Thanks Adam !
 
 Regards,
 VB
 
 On Thu, Sep 29, 2011 at 12:03 AM, Adam Cornett adam.corn...@gmail.com wrote:
 
  The same publisher (Packt) has a book, *PostgreSQL
  9 Administration Cookbook* by Simon Riggs and Hannu Krosing, that is
  just as useful as Greg's *High Performance* book
 
 
  On Wed, Sep 28, 2011 at 1:14 PM, Venkat Balaji
  venkat.bal...@verse.in wrote:
 
  Hello Everyone,
 
  I have been working on PostgreSQL for quite a while (2 yrs) now.
 
  I have got PostgreSQL 9.0 High Performance book and quite excited to go
  through it.
 
  Please let me know any source where I can get more books on PG; I am
  especially looking for books on PG internals, architecture, Backup &
  Recovery and HA.

  Looking forward to the information.
 
  Regards,
  VB
 
 
 
 
  --
  Adam Cornett
  adam.corn...@gmail.com
  (678) 296-1150
 
 



-- 
Achilleas Mantzios

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] stored proc

2011-09-29 Thread Jacqui Caren-home

On 29/09/2011 02:34, J.V. wrote:

Is it possible to group a bunch of methods and functions into a single file (stored
procedures & functions) and have a main method that can be called
to orchestrate the methods, pass in params, get back results, log to a file?

I know this can be done with Oracle PL/SQL, but a simple Google search on this does not
show any examples.


In Oracle this is called a package - FWICR (from what I can recall) there is no direct equivalent to it in
PostgreSQL.


Also, where can I find a list of stored proc data structures (hash maps, arrays,
lists of arrays), or anything else that would be useful?


If you use Windows, download pgAdmin III and click the help button - you get a copy
of, or a link to, the
SQL documentation, including data types.

or go to http://www.postgresql.org/docs/current/static/index.html

http://www.postgresql.org/docs/current/static/datatype.html
may be what you are interested in; however, I would recommend reading the
*majority* of this document - Pg is not Ora!

If you are migrating from Oracle, one point people tend to emphasise is that,
unlike Oracle,
you cannot commit and rollback the current transaction within procedural SQL -
this is not
a bug or missing feature, so please don't start asking for it to be added :-)

If you really, really want to compartmentalise, then you could create schemas
representing packages
and reference data and functions prefixed with the schema,

i.e. create function schemaname.functionname ...

but this has one major risk: if your function is called without the
schema prefix and
a function of the same name exists in a schema on the search_path, that one will be
called instead.
Nasty!
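As a sketch of this approach (the schema and function names here are hypothetical, not from the original post):

```sql
-- "Package" as a schema; call sites should always schema-qualify.
CREATE SCHEMA billing;
CREATE FUNCTION billing.calc_tax(numeric) RETURNS numeric
    AS $$ SELECT $1 * 0.2 $$ LANGUAGE sql;

-- The pitfall: an unqualified call resolves via search_path, so a
-- same-named function in an earlier schema can be picked up instead.
SELECT billing.calc_tax(100);  -- safe: explicit
SELECT calc_tax(100);          -- risky: depends on search_path
```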

My preference is to define a naming scheme such as

XXN_YYY_functionname
where XX is the project
N is P - procedure, no return values
     F - function, returns data
     T - trigger function
     ...
and YYY is the package name :-)

Jacqui



Re: [GENERAL] stored procs

2011-09-29 Thread Richard Huxton

On 29/09/11 02:33, J.V. wrote:

Is is possible within a stored procedure to read all the tables in a
schema into a list?

[snip]

I need to extract this meta-data for a project.


Apart from the information_schema mentioned elsewhere, start psql with -E
and then try \dt and similar - it will show you the queries it uses.
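For example, the query psql issues for \dt is roughly along these lines (the exact text varies by version; this is an approximation, not the verbatim output of -E):

```sql
-- Approximation of what "psql -E" reveals for \dt:
-- list ordinary tables visible outside the system schemas.
SELECT n.nspname AS schema, c.relname AS name
FROM pg_catalog.pg_class c
     JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');
```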


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Create Extension search path

2011-09-29 Thread Richard Huxton

On 28/09/11 18:51, Roger Niederland wrote:

To add the extension required me to change the search_path.
Is it required that any schema added to the search_path exist in all
databases?


If set in the configuration file, yes (unless you want errors).

You can set it per-database or per-user though. See ALTER DATABASE or 
ALTER ROLE.
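A sketch of those two settings (the database, role, and schema names are placeholders):

```sql
-- Per-database default search_path:
ALTER DATABASE mydb SET search_path = myschema, public;

-- Per-role default search_path:
ALTER ROLE myuser SET search_path = myschema, public;
```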


--
  Richard Huxton
  Archonet Ltd



[GENERAL] Feature request: improving ENUM type manipulation

2011-09-29 Thread depstein
Until v. 9.1, enum modification was not officially supported, although hacking
pg_enum made it somewhat workable. Since v. 9.1 there is a way to add values to
enum types. However,

- ALTER TYPE ... ADD VALUE doesn't work in transaction blocks, which severely
limits its usefulness
- there is still no supported way to delete values from an enum
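For illustration (the type and label names here are made up), the 9.1 syntax and its transaction-block limitation look like this:

```sql
CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy');

-- Works outside a transaction block:
ALTER TYPE mood ADD VALUE 'ecstatic' AFTER 'happy';

-- Fails inside one:
BEGIN;
ALTER TYPE mood ADD VALUE 'glum' BEFORE 'sad';
-- ERROR:  ALTER TYPE ... ADD cannot run inside a transaction block
ROLLBACK;
```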

Worse still, old hacks for adding enum values no longer work. There are some
good technical reasons for all this, which, for those who are interested, have
been discussed recently in pgsql-bugs.

All the same, I think better support for enum modification would be very
desirable, more so than for other custom types.  The reason has to do with the
very nature of an enum - an enumeration, a representation of a set, which, in
general, implies open-endedness.  Some sets that may be represented by an enum
do not imply extensibility, e.g. days of the week (barring another French
Revolution!), but many more do.  In my practice, when considering the
implementation of a set-like entity, extensibility is a crucial consideration.

For those interested in the discussion that led to this post, look for the
"Problems with ENUM type manipulation in 9.1" thread in pgsql-bugs.

Dmitry Epstein | Developer
 
Allied Testing
T + 7 495 544 48 69 Ext 417

www.alliedtesting.com
We Deliver Quality.





Re: [GENERAL] bytea columns and large values

2011-09-29 Thread Marti Raudsepp
On Tue, Sep 27, 2011 at 20:01, David North d...@corefiling.co.uk wrote:
 testdb=# select * from problem_table;
 ERROR:  invalid memory alloc request size 2003676411

 Is there some reason why my data can be stored in 1GB but triggers the
 allocation of 2GB of memory when I try to read it back? Is there any setting
 I can change or any alternate method of reading I can use to get around
 this?

I guess that it's converting the whole value to the hex-escaped bytea
format, so that doubles its size. The JDBC driver probably doesn't
support transferring bytea values in binary.

I've heard that some people are using substr() to read bytea values in
small chunks. Theoretically TOAST can support this in constant time
(independent of total value size or offset), but I don't know about
the implementation. In any case, it's worth a try.
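A sketch of what chunked reading with substr() might look like (the table and column names are hypothetical, borrowed from the error report above):

```sql
-- Read a huge bytea in 10 MB slices instead of one giant result.
-- substr() on bytea is 1-based; advance the offset on the client side.
SELECT substr(data, 1,        10485760) FROM problem_table WHERE id = 1;
SELECT substr(data, 10485761, 10485760) FROM problem_table WHERE id = 1;
-- ...and so on, until fewer bytes than requested come back.
```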

It *might* help to ALTER column SET STORAGE EXTERNAL, to disable TOAST
compression, but it could also make things worse.
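That change would look something like this (column name assumed):

```sql
-- EXTERNAL = out-of-line TOAST storage with compression disabled,
-- which makes partial (substring) reads of the value cheaper.
ALTER TABLE problem_table ALTER COLUMN data SET STORAGE EXTERNAL;
```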

More details here: http://www.postgresql.org/docs/9.0/static/storage-toast.html

Regards,
Marti



Re: [GENERAL] bytea columns and large values

2011-09-29 Thread Radosław Smogura

On Wed, 28 Sep 2011 10:18:27 -0500, Merlin Moncure wrote:
On Wed, Sep 28, 2011 at 3:28 AM, David North d...@corefiling.co.uk 
wrote:

On 28/09/11 01:50, Craig Ringer wrote:


On 09/28/2011 01:01 AM, David North wrote:


testdb=# select * from problem_table;
ERROR: invalid memory alloc request size 2003676411


What Pg version are you using?

On which platform?

32-bit or 64-bit OS? If 64-bit, with a 32-bit or 64-bit build of
PostgreSQL?


psql (PostgreSQL) 9.0.4

64-bit Fedora:

Fedora release 14 (Laughlin)
Linux mybox 2.6.35.14-95.fc14.x86_64 #1 SMP Tue Aug 16 21:01:58 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux

I presume my build of PostgreSQL is x64 - ldd `which psql` spits out
lots of references to lib64.


sending ~1GB bytea values is borderline crazy, and is completely crazy
if you are not absolutely sure the transmission is not 100% binary.  I
don't know if the JDBC sends/receives bytea as binary, but it may not.

 If not, you might have better luck with the large object interface.

merlin
Crazy? I thought the official stance was to keep such crazy values as
TOAST, and that the LOB interface isn't something worth using.

You have exposed such an interface, and you stand behind it. Is this
craziness measured by a fixed amount above 500MB, or is it fuzzily measured
with a standard deviation near 1GB? If I use bytea to store such values, it
looks like I'm crazy too. Sorry for thinking that the documentation is
trustworthy.


Thanks for the mobilization - I have reserved the moderngres domains. I think
it will take about 1-2 weeks to set up some infrastructure for this.


Regards
Radosław Smogura
http://softperience.eu
http://moderngres.eu - in near future



Re: [GENERAL] bytea columns and large values

2011-09-29 Thread Alban Hertroys
On 29 September 2011 13:12, Radosław Smogura rsmog...@softperience.eu wrote:
 sending ~1GB bytea values is borderline crazy, and is completely crazy
 if you are not absolutely sure the transmission is not 100% binary.  I
 don't know if the JDBC sends/receives bytea as binary, but it may not.
  If not, you might have better luck with the large object interface.

 Crazy? I thought that official stand was to keep such crazy values as
 TOAST, and LOB interface isn't something worth of usage.

Both are possible means to handle data-objects that large.

The difference between the two is that with BYTEA, the value in a
result-set is returned with the BYTEA value embedded, whereas with
LOBs you get a file-pointer that you can subsequently read out at your
leisure.

As a consequence, with objects of 1GB in size, BYTEA requires the
server to allocate over 1GB of memory for each record in the result
set until it can send such records to the client, while the memory
footprint with LOBs is MUCH smaller on the server-side; just a
file-handle.

The interface for LOBs is a little more complicated, due to getting a
file handle instead of directly receiving the large object, but you're
saving your server a mountain of memory-load.
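As a rough sketch of that large-object route (the file paths and the OID are illustrative only):

```sql
-- Server-side import: creates a large object from a file on the
-- *server* and returns its OID, to be stored in an ordinary column.
SELECT lo_import('/tmp/big_blob.bin');

-- Export a large object back out to a server-side file by OID:
SELECT lo_export(16405, '/tmp/big_blob_copy.bin');

-- Clients can instead stream in chunks via lo_open()/loread()/lowrite().
```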

 You have exposed such interface, and you firm it. Does this crazynies is
 measured by fixed amount above 500MB or it's fuzzy measured with standard
 deviation near 1GB? If I use bytea to store such values, looks I'm crazy
 too. Sorry, for thinking that documentation is trust worthy.

It gives you a choice. That doesn't automatically make it the best
choice for your situation. That's something only you can decide.

-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.



[GENERAL] Decimal vs. Bigint memory usage

2011-09-29 Thread Gregor Vollmer

Dear List,

we are currently updating our application to use multiple database 
backends, the main backend on our site will be Postgres, though.


Some of our columns exceed the limit of the 'integer' type, and now we are
discussing the alternatives. Personally, I would use bigint, since it
should suffice in most cases, but using decimal is under discussion,
too, because our modules would then be able to precisely specify their
required column sizes.


We do not do any arithmetic on the columns, only saving and retrieval;
is decimal as fast as bigint in that case?


How does decimal store the number internally? Is it a fixed size
throughout all rows, and how does it compare to bigint?


Thanks in advance,
Gregor



Re: [GENERAL] bytea columns and large values

2011-09-29 Thread Jon Nelson
On Thu, Sep 29, 2011 at 7:30 AM, Alban Hertroys haram...@gmail.com wrote:
 On 29 September 2011 13:12, Radosław Smogura rsmog...@softperience.eu wrote:
 sending ~1GB bytea values is borderline crazy, and is completely crazy
 if you are not absolutely sure the transmission is not 100% binary.  I
 don't know if the JDBC sends/receives bytea as binary, but it may not.
  If not, you might have better luck with the large object interface.

 Crazy? I thought that official stand was to keep such crazy values as
 TOAST, and LOB interface isn't something worth of usage.

I'm not saying that placing such large values in a table (or LO) is a
good idea, but - if I had managed to put data *in* to a table that I
couldn't get back out, I'd be a bit cranky, especially if my attempt
to do so kills the backend I am using (which triggers a shutdown of
all other backends, no?).

-- 
Jon



Re: [GENERAL] Decimal vs. Bigint memory usage

2011-09-29 Thread Marti Raudsepp
On Thu, Sep 29, 2011 at 15:15, Gregor Vollmer
voll...@ekp.uni-karlsruhe.de wrote:
 We do not do any arithmetic on the columns, only saving and retrieval, is
 decimal as fast as bigint in that case?

It's slightly slower because numeric is passed around by pointer, not
by value. Bigint is by-value in 64-bit Postgres versions. But that's
probably only noticeable if you're processing lots of values in a
single query, or running CREATE INDEX.

Personally I'd choose bigint for efficiency reasons. But always keep
in mind that arithmetic works differently on integers and numerics:

db=# select 1::bigint/10 as x;
 x
---
 0
db=# select 1::numeric/10 as x;
           x
------------------------
 0.10000000000000000000

 How does decimal store the number internally, is it a fixed size through-out
 all rows and how does it compare to bigint?

Numeric (decimal) is always variable-length. The precision specified in the
column type doesn't affect storage. Bigint is always 8 bytes.

For numbers with less than 8 digits, numeric is slightly smaller than
bigint. For larger numbers, bigint is smaller.

create table dec (i numeric);
insert into dec values(0), (1), (11), (101), (1001), (10001),
(100001), (1000001), (10000001), (100000001), (1000000001),
(10000000001), (100000000001), (1000000000001);

select pg_column_size(i), i::text from dec;
 pg_column_size |       i
----------------+---------------
              3 | 0
              5 | 1
              5 | 11
              5 | 101
              5 | 1001
              7 | 10001
              7 | 100001
              7 | 1000001
              7 | 10000001
              9 | 100000001
              9 | 1000000001
              9 | 10000000001
              9 | 100000000001
             11 | 1000000000001

Regards,
Marti



Re: [GENERAL] bytea columns and large values

2011-09-29 Thread Merlin Moncure
On Thu, Sep 29, 2011 at 6:12 AM, Radosław Smogura
rsmog...@softperience.eu wrote:
 On Wed, 28 Sep 2011 10:18:27 -0500, Merlin Moncure wrote:
 sending ~1GB bytea values is borderline crazy, and is completely crazy
 if you are not absolutely sure the transmission is not 100% binary.  I
 don't know if the JDBC sends/receives bytea as binary, but it may not.
  If not, you might have better luck with the large object interface.

hm, I had one extra 'not' in there; of course I meant to say you should
be sure data is transferred in binary (I think everyone knew that,
though).

 You have exposed such interface, and you firm it. Does this crazynies is
 measured by fixed amount above 500MB or it's fuzzy measured with standard
 deviation near 1GB? If I use bytea to store such values, looks I'm crazy
 too. Sorry, for thinking that documentation is trust worthy.

 Thanks for mobilization - I reserved moderngres domains. I think it's about
 1-2 week for making some infrastructure for this.

The interface (well, libpq) and the protocol are in fact part of the
problem.  To truly support large bytea means streaming features, new
extensions to libpq, maybe some protocol modifications.  There have
been a couple of semi-serious attempts at dealing with this problem
(see archives), but none so far have gained serious traction.

The lo interface sucks but it's slightly better on resources for
really huge bytea and tends to be more consistently implemented in
database drivers.  If I was doing this, I would of course be crafting
a carefully generated client in C, using libpqtypes, which is the gold
standard for sending bytea against which all others should be judged.

merlin



Re: [GENERAL] Decimal vs. Bigint memory usage

2011-09-29 Thread planas
Gregor

On Thu, 2011-09-29 at 14:15 +0200, Gregor Vollmer wrote: 

 Dear List,
 
 we are currently updating our application to use multiple database 
 backends, the main backend on our site will be Postgres, though.
 
 Some of our columns exceed the limit of the 'integer' type, now we are 
 discussing the alternatives. Personally, I would use bigint since it 
 should suffice in most cases, but using decimal is under discussion, 
 too, because our modules would be able to precisely specify their 
 required column sizes.
 
 We do not do any arithmetic on the columns, only saving and retrieval, 
 is decimal as fast as bigint in that case?
 
 How does decimal store the number internally, is it a fixed size 
 through-out all rows and how does it compare to bigint?
 
 Thanks in advance,
 Gregor
 

My preference is to use bigint since you are already using integer. Whether
decimal could be used depends on what the values mean and how you use them.
Are these values used at all in any queries?

-- 
Jay Lozier
jsloz...@gmail.com


[GENERAL] pg_stat_replication data in standy servers

2011-09-29 Thread senthilnathan
Using 9.1 SR, the pg_stat_replication data is available on the primary server.
The table is not replicated to the standby server. Is this intentionally done,
and if so, why?

Senthil




Re: [GENERAL] bytea columns and large values

2011-09-29 Thread Marti Raudsepp
On Thu, Sep 29, 2011 at 15:48, Jon Nelson jnelson+pg...@jamponi.net wrote:
 especially if my attempt
 to do so kills the backend I am using (which triggers a shutdown of
 all other backends, no?).

No, this is just an ereport(ERROR) that's handled gracefully by
rolling back the transaction.

Regards,
Marti



Re: [GENERAL] bytea columns and large values

2011-09-29 Thread David North

On 29/09/11 14:55, Merlin Moncure wrote:

On Thu, Sep 29, 2011 at 6:12 AM, Radosław Smogura
rsmog...@softperience.eu  wrote:

On Wed, 28 Sep 2011 10:18:27 -0500, Merlin Moncure wrote:

sending ~1GB bytea values is borderline crazy, and is completely crazy
if you are not absolutely sure the transmission is not 100% binary.  I
don't know if the JDBC sends/receives bytea as binary, but it may not.
  If not, you might have better luck with the large object interface.

hm, I had one extra 'not' in there of course I meant to say you should
be sure data is transferred in binary (I think everyone knew that
though).


You have exposed such interface, and you firm it. Does this crazynies is
measured by fixed amount above 500MB or it's fuzzy measured with standard
deviation near 1GB? If I use bytea to store such values, looks I'm crazy
too. Sorry, for thinking that documentation is trust worthy.

Thanks for mobilization - I reserved moderngres domains. I think it's about
1-2 week for making some infrastructure for this.

The interface (well, libpq) and the protocol are in fact part of the
problem.  To truly support large bytea means streaming features, new
extensions to libpq, maybe some protocol modifications.  There have
been a couple of semi-serious attempts at dealing with this problem
(see archives), but none so far have gains serious traction.

The lo interface sucks but it's slightly better on resources for
really huge bytea and tends to be more consistently implemented in
database drivers.  If I was doing this, I would of course be crafting
a carefully generated client in C, using libpqtypes, which is the gold
standard for sending bytea against which all others should be judged.


Thanks everyone for your replies.

For my use-case, I'm going to break down the data into smaller lumps and 
store one per row (it's actually a bunch of serialized Java objects, so 
putting them all in one field of one row was rather lazy in the first 
place).


I'll also apply GZip to keep the amount of data in any given bytea/lob 
as small as possible.


David

--
David North, Software Developer, CoreFiling Limited
http://www.corefiling.com
Phone: +44-1865-203192




Re: [GENERAL] bytea columns and large values

2011-09-29 Thread Jon Nelson
On Thu, Sep 29, 2011 at 10:51 AM, David North d...@corefiling.co.uk wrote:

 I'll also apply GZip to keep the amount of data in any given bytea/lob as
 small as possible.

Aren't bytea fields compressed by PostgreSQL anyway (by default, when
EXTENDED or MAIN storage is used)?

http://www.postgresql.org/docs/8.4/static/storage-toast.html

-- 
Jon



[GENERAL] create a dynamic function

2011-09-29 Thread salah jubeh
Hello Guys, 


I have the following problem and I solved it in the following way; I want to
see if there is a better approach. Any suggestion is welcome.


The problem

I have a function with this logic:

FOR record1 IN SELECT ... LOOP

    FOR record2 IN SELECT ... LOOP

        -- Here the function's business logic changes based on a lookup
        -- table; for example, there are many logical conditions (ANDs and
        -- ORs), and the decision of how to combine them may change based
        -- on the lookup table.

        -- I.e. in some cases the condition might be 'IF record1.attribute1 =
        -- record2.attribute1 OR ...' and in others 'IF record1.attribute1 =
        -- record2.attribute1 AND ...'

        -- The number of attributes is very large (around 45), which means I
        -- have an endless combination of these logical conditions, and the
        -- lookup tables can also change.

    END LOOP;
END LOOP;

So my solution was to create a trigger on the lookup table that regenerates
this function:

DECLARE
    function_body TEXT := 'CREATE OR REPLACE FUNCTION ...';
    ...
BEGIN
    ...
    FOR rec IN SELECT ... FROM lookup_table LOOP
        function_body := function_body || ...;  -- append one condition per row
    END LOOP;

    EXECUTE function_body;
    ...
END;

Regards

Re: [GENERAL] bytea columns and large values

2011-09-29 Thread Merlin Moncure
On Thu, Sep 29, 2011 at 10:54 AM, Jon Nelson jnelson+pg...@jamponi.net wrote:
 On Thu, Sep 29, 2011 at 10:51 AM, David North d...@corefiling.co.uk wrote:

 I'll also apply GZip to keep the amount of data in any given bytea/lob as
 small as possible.

 Aren't bytea fields compressed by postgresql anyway (when EXTENDED or
 MAIN is used) (by default).

They are, assuming you didn't turn compression off manually and the
objects were big enough to TOAST.  If you truly don't need access to
the data in the backend, compressing on the client and disabling
compression on the server is probably a good idea.

merlin



[GENERAL] Streaming Replication and Firewall

2011-09-29 Thread Ian Harding
I updated the firewall rules on a streaming replication standby server
and thought nothing of it.  I later happened to notice on the primary
that ps aux | grep stream didn't show streaming to that server
anymore.  On the standby that command still showed the wal receiver
patiently waiting for new data.

I know I broke it, but would anything have eventually happened, or
would the wal receiver keep patiently waiting as the world passed it
by?

I simply restarted the standby cluster and all is well.

Thanks!

- Ian



Re: [GENERAL] pg_stat_replication data in standy servers

2011-09-29 Thread Simon Riggs
On Thu, Sep 29, 2011 at 1:26 PM, senthilnathan
senthilnatha...@gmail.com wrote:

 Using 9.1 SR, the pg_stat_replication data is available on the primary server.
 The table is not replicated to the standby server. Is this intentionally done,
 and if so, why?

pg_stat_replication is a dynamic view of a particular server, not a table.

When we have cascaded replication in 9.2, the standby's view of
pg_stat_replication will be necessarily different from the primary.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



[GENERAL] Solaris 10u9, PG 8.4.6, 'c' lang function, fails on 1 of 5 servers

2011-09-29 Thread dennis jenkins
Hello Postgresql Community Members,

I am stumped trying to install a few 'c' language functions
on a particular Solaris server (64-bit, AMD CPU arch, not SPARC).  I
actually have 5 PostgreSQL servers, and the .so loads fine into 4 of them, but
refuses to load into the 5th.  I've quintuple-checked the file
permissions, the build of the .so, gcc versions, PostgreSQL versions,
etc.  I've had a colleague double-check my work.  We're both stumped.
Details follow.

All servers are running Solaris 10u9 on 64-bit hardware inside
Solaris zones.  Two of the servers are X4720s: 144GB RAM, 24 Intel
CPU cores.  These two servers run the 4 working Solaris zones that
are able to load the function implemented in the .so files.  PostgreSQL
version 8.4.6, compiled from source (not a binary package).

The server that is misbehaving is an X4600, 128 GB ram, 16 AMD CPU
cores, but otherwise identical: Solaris 10u9, 64-bit OS, Postgresql
8.4.6.  All 5 systems use the stock gcc that ships with Solaris (v3.4.3,
its old, I know).

The permissions on the files and PostgreSQL directories follow.  First
a working server, then the server that is not working as expected.

(root@working: /db) # ls -ld /db /db/*.so
drwx--  11 pgsqlroot  23 Sep 27 10:39 /db
-rwxr-xr-x   1 root root   57440 Sep 27 10:39
/db/pgsql_micr_parser_64.so

(root@working: /db) # psql -Upgsql -dpostgres -c "select version();"
 PostgreSQL 8.4.6 on x86_64-pc-solaris2.11, compiled by GCC gcc (GCC) 3.4.3
(csl-sol210-3_4-20050802), 64-bit

(root@working: /db) # file /opt/local/x64/postgresql-8.4.6/bin/postgres
/opt/local/x64/postgresql-8.4.6/bin/postgres:   ELF 64-bit LSB executable
AMD64 Version 1 [SSE], dynamically linked, not stripped

(root@working: /db) # psql -Upgsql -dmy_db -c "create or replace function
parse_micr(text) returns micr_struct
  as '/db/pgsql_micr_parser_64.so', 'pgsql_micr_parser' language c volatile
cost 1;"
CREATE FUNCTION

(root@working: /db) # psql -Upgsql -dmy_db -t -c "select transit from
parse_micr(':=: 45800=100');"
 =



(root@failed: /db) # ls -ld /db /db/*.so
drwx--  11 pgsqlroot  24 Sep 29 11:16 /db
-rwxr-xr-x   1 root root   57440 Sep 29 09:46
/db/pgsql_micr_parser_64.so

(root@failed: /db) # psql -Upgsql -dpostgres -c "select version();"
 PostgreSQL 8.4.6 on x86_64-pc-solaris2.11, compiled by GCC gcc (GCC) 3.4.3
(csl-sol210-3_4-20050802), 64-bit

(root@failed: /db) # file /opt/local/x64/postgresql-8.4.6/bin/postgres
/opt/local/x64/postgresql-8.4.6/bin/postgres:   ELF 64-bit LSB executable
AMD64 Version 1 [SSE], dynamically linked, not stripped

(root@failed: /db) # psql -Upgsql -dmy_db -c "create or replace function
parse_micr(text) returns micr_struct
 as '/db/pgsql_micr_parser_64.so', 'pgsql_micr_parser' language c volatile
cost 1;"
ERROR:  could not load library /db/pgsql_micr_parser_64.so: ld.so.1:
postgres: fatal: /db/pgsql_micr_parser_64.so: Permission denied



  Ok.  Well, the file permissions are correct, so what gives?  Next
step is to trace the backend process as it attempts to load the .so.
So I connect to the failed server via pgAdmin and run select getpid();
I then run truss -p PID from my shell, and in pgAdmin, execute the
SQL to create the function.  This is the result of the system trace:

(root@failed: /db) # truss -p 10369
recv(9, 0x0097C103, 5, 0)   (sleeping...)
recv(9, 170301\0  , 5, 0) = 5
recv(9,  TBEE5 n J\0 VF6E4DDCF84.., 32, 0)= 32
recv(9, 170301\0B0, 5, 0) = 5
recv(9, AAD5A5 L97B0CEA5A9F0CD89.., 176, 0)   = 176
stat(/db/pgsql_micr_parser_64.so, 0xFD7FFFDF9520) = 0
stat(/db/pgsql_micr_parser_64.so, 0xFD7FFFDF9530) = 0
stat(/db/pgsql_micr_parser_64.so, 0xFD7FFFDF8F50) = 0
resolvepath(/db/pgsql_micr_parser_64.so, /db/pgsql_micr_parser_64.so,
1023) = 27
open(/db/pgsql_micr_parser_64.so, O_RDONLY)   = 22
mmap(0x0001, 32768, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_ALIGN, 22, 0)
Err#13 EACCES
close(22)   = 0
setcontext(0xFD7FFFDF9050)
setcontext(0xFD7FFFDF9BB0)


We can see that the backend is able to open the .so file for
reading, but the mmap fails.  From the Solaris man page on mmap:

ERRORS
 The mmap() function will fail if:

 EACCES  The fildes file descriptor is not  open  for
 read,  regardless  of  the protection speci-
 fied; or fildes is not open  for  write  and
 PROT_WRITE  was  specified  for a MAP_SHARED
 type mapping.


My analysis:

1) The file descriptor (#22) is open for O_RDONLY.
2) PROT_WRITE and MAP_SHARED are not specified, so write access is not
relevant.


Things that I tried, unsuccessfully:

1) I recompiled the .so on the target system (X4600, AMD chips) just
   in case it is somehow different from the .so that got built on the
   working system (X4270, Intel chips).

2) Tested with a different .so (I have 

[GENERAL] PDT but not WEST

2011-09-29 Thread Christophe Pettus
I am baffled.  Both PDT and WEST appear as valid timezone abbreviations, and 
each have unique values, but:

test=# select timestamp with time zone '2011-09-29 18:00 PDT';
  timestamptz   

 2011-09-29 18:00:00-07
(1 row)

test=# select timestamp with time zone '2011-09-29 18:00 WEST';
ERROR:  invalid input syntax for type timestamp with time zone: "2011-09-29
18:00 WEST"
LINE 1: select timestamp with time zone '2011-09-29 18:00 WEST';

What am I missing?  Is the parser insisting on three-letter time zone 
abbreviations?  Should it be?
--
-- Christophe Pettus
   x...@thebuild.com




Re: [GENERAL] PDT but not WEST

2011-09-29 Thread John R Pierce

On 09/29/11 10:17 AM, Christophe Pettus wrote:

Both PDT and WEST appear as valid timezone abbreviations...


WEST?  Really?  Where does this appear?  I've never seen that.


--
john r pierceN 37, W 122
santa cruz ca mid-left coast




Re: [GENERAL] PDT but not WEST

2011-09-29 Thread Tom Lane
Christophe Pettus x...@thebuild.com writes:
 I am baffled.  Both PDT and WEST appear as valid timezone abbreviations, and 
 each have unique values, but:

Where do you see WEST as a valid timezone abbreviation?  It's not listed
in the Default abbreviation list.  (Perhaps it should be, since there
don't seem to be any places that don't consider it GMT+1 summer time.)

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Rules going away

2011-09-29 Thread Igor Neyman
 -Original Message-
 From: Ondrej Ivanič [mailto:ondrej.iva...@gmail.com]
 Sent: Wednesday, September 28, 2011 6:47 PM
 To: pgsql-general@postgresql.org
 Subject: Re: Rules going away
 
 Hi,
 
  folks, don't use RULES! use triggers -- and as much as possible, keep
  triggers simple, short, and to the point (simple validation, custom
  RI, auditing/logging, etc).
 
 I like them :). 'DO INSTEAD' rules are great for partitioning so you
 can insert (or update) to parent table and 'DO INSTEAD' rule takes
 care about the rest.
 
 --
 Ondrej Ivanic
 (ondrej.iva...@gmail.com)

Years ago (PG version 8.2), I was working on implementing partitioned tables, 
and evaluated whether to use triggers or rules to automagically redirect 
inserts/updates/delete from main table to appropriate partition based on the 
value of partitioning key.

Unfortunately, I didn't document my research, but the result was that triggers 
were unusable for my purposes, and I ended up with rules calling functions 
that use dynamic SQL to decide which partition should be used for each 
insert/update/delete (at least on selects PG takes care of choosing the proper 
partition, provided the configuration parameter is set correctly).

These rules (and functions they call) work perfectly for me all these years in 
many, many, many installations.
So, until PG takes care of the problem I'm solving with the rules (implementing 
complete partitioning feature, and not just on selects), please leave rules 
where they are, they definitely have their use when properly implemented with 
specific purpose.
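A minimal sketch of the pattern described above (table and function names are hypothetical; the function would use plpgsql EXECUTE internally):

```sql
-- Rewrite inserts on the parent table into a call to a function that
-- picks the target partition with dynamic SQL.
CREATE OR REPLACE RULE measurements_insert AS
ON INSERT TO measurements
DO INSTEAD
SELECT insert_measurement(NEW.logdate, NEW.payload);
```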

Regards,
Igor Neyman

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Solaris 10u9, PG 8.4.6, 'c' lang function, fails on 1 of 5 servers

2011-09-29 Thread Reid Thompson
On Thu, 2011-09-29 at 12:08 -0500, dennis jenkins wrote:
 ERROR:  could not load library /db/pgsql_micr_parser_64.so: ld.so.1:
 postgres: fatal: /db/pgsql_micr_parser_64.so: Permission denied

for a different shared object, but may provide clues...

Error: - adding iplike database function... snip
org.postgresql.util.PSQLException: ERROR: could not access file
'snip/lib/iplike.so': Permission denied
The PostgreSQL server cannot access the iplike.so file. This could be
due to the file itself not having appropriate permissions for the user
that PostgreSQL runs as, and/or one or more of the parent directories of
iplike.so not having appropriate permissions.



Error: - adding iplike database function... snip
org.postgresql.util.PSQLException: ERROR: could not load library ...
The latter part of the error could be something like path/iplike.so:
cannot open shared object file: No such file or directory or ld.so.1:
postgres: fatal: path/iplike.so: wrong ELF class: ELFCLASS32.

The PostgreSQL server cannot load the iplike.so file. This is almost
always caused by the PostgreSQL server and the iplike.so file being
compiled for different processor instruction sets.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PDT but not WEST

2011-09-29 Thread Christophe Pettus

On Sep 29, 2011, at 10:50 AM, Tom Lane wrote:

 Christophe Pettus x...@thebuild.com writes:
 I am baffled.  Both PDT and WEST appear as valid timezone abbreviations, and 
 each have unique values, but:
 
 Where do you see WEST as a valid timezone abbreviation?

Voila, Western Europe Summer Time:

test=# select * from pg_timezone_names where abbrev='WEST';
   name   | abbrev | utc_offset | is_dst 
------------------+--------+------------+--------
 Atlantic/Canary  | WEST   | 01:00:00   | t
 Atlantic/Faeroe  | WEST   | 01:00:00   | t
 Atlantic/Faroe   | WEST   | 01:00:00   | t
 Atlantic/Madeira | WEST   | 01:00:00   | t
 Europe/Lisbon| WEST   | 01:00:00   | t
 Portugal | WEST   | 01:00:00   | t
 WET  | WEST   | 01:00:00   | t
(7 rows)

--
-- Christophe Pettus
   x...@thebuild.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PDT but not WEST

2011-09-29 Thread Steve Crawford

On 09/29/2011 10:50 AM, Tom Lane wrote:

Christophe Pettusx...@thebuild.com  writes:

I am baffled.  Both PDT and WEST appear as valid timezone abbreviations, and 
each have unique values, but:

Where do you see WEST as a valid timezone abbrevation?  It's not listed
in the Default abbreviation list.  (Perhaps it should be, since there
don't seem to be any places that don't consider it GMT+1 summer time.)

regards, tom lane


select * from pg_timezone_names where abbrev = 'WEST' ;
  name  | abbrev | utc_offset | is_dst
------------------------+--------+------------+--------
 Portugal   | WEST   | 01:00:00   | t
 posix/Portugal | WEST   | 01:00:00   | t
 posix/WET  | WEST   | 01:00:00   | t
 posix/Europe/Lisbon| WEST   | 01:00:00   | t
 posix/Atlantic/Canary  | WEST   | 01:00:00   | t
 posix/Atlantic/Faeroe  | WEST   | 01:00:00   | t
 posix/Atlantic/Madeira | WEST   | 01:00:00   | t
 posix/Atlantic/Faroe   | WEST   | 01:00:00   | t
 WET| WEST   | 01:00:00   | t
 Europe/Lisbon  | WEST   | 01:00:00   | t
 Atlantic/Canary| WEST   | 01:00:00   | t
 Atlantic/Faeroe| WEST   | 01:00:00   | t
 Atlantic/Madeira   | WEST   | 01:00:00   | t
 Atlantic/Faroe | WEST   | 01:00:00   | t
(14 rows)

It's interesting that there are (in my install of 9.1):

1174 distinct timezone names (all records are unique) in pg_timezone_names.
181 distinct abbreviations in pg_timezone_names

189 distinct timezone abbreviations (all unique abbreviations) in 
pg_timezone_abbrevs.


But 61 abbreviations that appear in pg_timezone_names do not have a 
corresponding entry in pg_timezone_abbrevs and 69 abbreviations in 
pg_timezone_abbrevs that don't appear in pg_timezone_names.


There are 56 records and 3 different offsets in pg_timezone_names for 
the abbreviation 'CST'.


I try to use timezone names instead of abbreviations wherever possible.
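The same goes for input literals; a full zone name is unambiguous where an abbreviation may be rejected or conflicting. For example (using the zone from the earlier listing):

```sql
-- WEST as an abbreviation fails, but the region name works:
SELECT timestamp with time zone '2011-09-29 18:00 Europe/Lisbon';

-- Or, equivalently, interpret a local time in a named zone:
SELECT timestamp '2011-09-29 18:00' AT TIME ZONE 'Europe/Lisbon';
```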

Cheers,
Steve

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PDT but not WEST

2011-09-29 Thread John R Pierce

On 09/29/11 11:44 AM, Steve Crawford wrote:
There are 56 records and 3 different offsets in pg_timezone_names for 
the abbreviation 'CST'. 


yeah, we had some internal java software crashing on CST when it was 
deployed in China :-/


I suggested the developer switch to using ISO format, and the problem 
was solved.




--
john r pierceN 37, W 122
santa cruz ca mid-left coast


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PDT but not WEST

2011-09-29 Thread Christophe Pettus

On Sep 29, 2011, at 11:44 AM, Steve Crawford wrote:

 There are 56 records and 3 different offsets in pg_timezone_names for the 
 abbreviation 'CST'.

That's actually how this popped up for me; using 'IST' was giving rather 
unexpected results...

--
-- Christophe Pettus
   x...@thebuild.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PDT but not WEST

2011-09-29 Thread Tom Lane
Christophe Pettus x...@thebuild.com writes:
 That's actually how this popped up for me; using 'IST' was giving rather 
 unexpected results...

IST is one of the ones where there's a real conflict, ie it means
different things to different people.  That was what drove us to invent
the timezone abbreviation configuration files.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PDT but not WEST

2011-09-29 Thread Christophe Pettus

On Sep 29, 2011, at 12:11 PM, Tom Lane wrote:

 IST is one of the ones where there's a real conflict, ie it means
 different things to different people.

Indeed; just noting that the search for a non-conflicting abbreviation is what 
led me to find the WEST thing.

--
-- Christophe Pettus
   x...@thebuild.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PDT but not WEST

2011-09-29 Thread Tom Lane
Steve Crawford scrawf...@pinpointresearch.com writes:
 Actually, given that pg_timezone_abbrevs is based on the 
 timezone_abbreviations GUC, I'm not surprised that it is a subset of 
 what is in pg_timezone_names. But I am a bit surprised that the opposite 
 is true.

For zones that observe DST, pg_timezone_names only shows you the active
abbreviation, but pg_timezone_abbrevs includes both summer and winter
abbreviations.  Also, there are some zones that have alternate
abbreviations that will never show up in pg_timezone_names
(ZULU and Z for UTC, for instance).

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] problem,after restarting server

2011-09-29 Thread Prashant Bharucha
Hello 

I am facing a problem: after restarting the server, I could not retrieve data 
from a tablespace in a Postgres database, version 8.4.

I am not getting any error.

Could you please help me retrieve the data?

Thx
Prashant



Re: [GENERAL] problem,after restarting server

2011-09-29 Thread John R Pierce

On 09/29/11 1:19 PM, Prashant Bharucha wrote:
I am facing problem,after restarting server ,i could not populate data 
from tablespace in postgres db using  version 8.4.


I am not getting any error .

Could you please help ,how to retrieve data ?


http://www.postgresql.org/docs/current/static/bug-reporting.html

(I know you're not reporting a bug here, but the same basic information 
is needed for any problem related query)





--
john r pierceN 37, W 122
santa cruz ca mid-left coast


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help needed in Search

2011-09-29 Thread Filip Rembiałkowski
Siva,

in addition to what others said, please note that underscore matches any
character. to change it use escape char.

http://www.postgresql.org/docs/9.1/static/functions-matching.html#FUNCTIONS-LIKE
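For example (a sketch; note that with standard_conforming_strings off, the backslashes would need doubling):

```sql
-- Match a literal leading underscore instead of "any single character":
SELECT contact_name
FROM contacts
WHERE LOWER(contact_name) LIKE LOWER('\_McDonald\%') ESCAPE '\'
ORDER BY LOWER(contact_name) ASC
LIMIT 1;
```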



2011/9/28 Siva Palanisamy siv...@hcl.com

  Hi All,

 I am trying to retrieve contact names based on a keyed search string.
 It performs well for English alphabetic characters but behaves strangely for
 special chars such as _, /, \, %

 My query in the function is similar to

 SELECT contact_name FROM contacts WHERE LOWER(contact_name) LIKE
 LOWER('_McDonald%') ORDER BY LOWER(contact_name) ASC LIMIT 1;

 It looks like, during searching, it retrieves all the contact names instead
 of the desired ones. The same bizarre behavior occurs for the above mentioned
 special chars, which I need to support. How do I educate Postgres to treat
 these chars literally? Please guide me.

 Thanks and Regards,

 Siva.




Re: [GENERAL] : Looking for a PostgreSQL book

2011-09-29 Thread Greg Smith

Achilleas Mantzios wrote:

code for the cache utils pg_cacheutils is mentioned to exist somewhere but it's 
not there


At this point the pgfincore project is the most popular way to do the 
work that pg_cacheutils was suggested for there.  See 
http://pgfoundry.org/projects/pgfincore/ and 
http://www.pgcon.org/2010/schedule/events/261.en.html for more information.


--
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PDT but not WEST

2011-09-29 Thread Steve Crawford

On 09/29/2011 11:44 AM, Steve Crawford wrote:



But 61 abbreviations that appear in pg_timezone_names do not have a 
corresponding entry in pg_timezone_abbrevs and 69 abbreviations in 
pg_timezone_abbrevs that don't appear in pg_timezone_names.



Actually, given that pg_timezone_abbrevs is based on the 
timezone_abbreviations GUC, I'm not surprised that it is a subset of 
what is in pg_timezone_names. But I am a bit surprised that the opposite 
is true.


Cheers,
Steve


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Identifying old/unused views and table

2011-09-29 Thread Jason Long
On Wed, 2011-09-28 at 08:52 +0200, Guillaume Lelarge wrote:
 On Wed, 2011-09-28 at 09:04 +0800, Craig Ringer wrote:
  On 09/28/2011 04:51 AM, Jason Long wrote:
   I have an application with a couple hundred views and a couple hundred
   tables.
  
   Is there some way I can find out which views have been accessed in the
   last 6 months or so?  Or some way to log this?
  
   I know there are views and tables that are no longer in used by my
   application and I am looking for a way to identify them.
  
  Look at the pg_catalog.pg_stat* tables
  
 
 I fail to see how that gives him any answer on the views, and tables no
 longer used. AFAICT, there's no way to know for views (apart from
 logging all queries in the log). As for tables, still apart from the
 log, pg_stat_user_tables could give an answer if he was monitoring it at
 least the last six months.
 
 

Thanks for the replies.  Views were my main problem.  My application
could use some cleanup.  Doing it manually is probably the best
approach.  I was just looking for a jump start.  



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Any way to prevent a join if no columns are selected from a view?

2011-09-29 Thread Jason Long
I started an application around 5 years ago using Hibernate and writing
my queries in HQL.

The primary search screen has many options to filter and joins many
tables and views.

As the application grew the SQL Hibernate is generating is out of hand
and needs optimization.

As with other parts of the application, I usually fall back to raw SQL
with good results.

This is what I am attempting now.

I thought I had read somewhere that Postgres could ignore a join if it
was not necessary because there were no columns from the table or view
selected in the query.  Is this possible?

What I want to do is to create a view with all the columns I need for
the search, and have my search pull from this view.

Here are some query times:

 396 ms without the table joined
2008 ms with the view joined, but no columns selected from it

The time is the same if I select the calculated values from the view.

This way I can just generate the select, group by, and order by terms
and just select from view without having to manually join the views if
they are required.

I also tried another approach and tried to join the more expensive views
to a simpler view that was not expensive.

select * 
from v_no_expensive_calcs vne 
join v_expensive_calcs ve on ve.id = vne.id

This takes about 2000 ms when joining v_expensive_calcs directly inside
v_no_expensive_calcs only takes 1100 ms.

I thought these would be equivalent.

The idea is that some users do not need cost, price, look up, or profit
calculations which are expensive.  Hibernate handles this currently
well, but it also generates a crazy mess that is dragging down the whole
application.  I will probably end up doing this manually, but I am
trying to make the application logic simpler.

Any ideas or advice?  I am using Postgres 9.1.1.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Streaming Replication and Firewall

2011-09-29 Thread Fujii Masao
On Fri, Sep 30, 2011 at 1:35 AM, Ian Harding harding@gmail.com wrote:
 I updated the firewall rules on a streaming replication standby server
 and thought nothing of it.  I later happened to notice on the primary
 that ps aux | grep stream didn't show streaming to that server
 anymore.  On the standby that command still showed the wal receiver
 patiently waiting for new data.

 I know I broke it, but would anything have eventually happened, or
 would the wal receiver keep patiently waiting as the world passed it
 by?

I guess that walreceiver has been stuck because it could not detect
the termination of connection for a while. By default, keepalive is
enabled on the replication connection from the standby to the master,
but how long it takes for walreceiver to notice the termination of
connection depends on the kernel parameters related to keepalive.
With default settings, it takes about two hours.

Setting keepalives parameters in primary_conninfo would help to fix
such a problem.
http://developer.postgresql.org/pgdocs/postgres/libpq-connect.html
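For example, the standby's recovery.conf might carry the keepalive parameters directly in the connection string (host and user here are placeholders; the values are illustrative):

```
primary_conninfo = 'host=master.example.com port=5432 user=replicator keepalives=1 keepalives_idle=60 keepalives_interval=10 keepalives_count=3'
```

With those settings, a dead connection would be detected after roughly 60 + 10 * 3 seconds instead of the kernel default of about two hours.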

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Identifying old/unused views and table

2011-09-29 Thread salah jubeh


Hello,

I had the same issue before, and I used the PostgreSQL statistics to 
see whether the tables are used or not. One thing that I could not solve is how 
to check if the schema design and semantics are good, e.g. table a references 
table b, table c references table b, and table c references table a. In some 
cases I find loops and circles; in other cases I find the same 
table referenced many times by other tables in the same schema. Anyway, here 
are my findings regarding how to clean up your data. 

1. Checking the number of sequential and indexed accesses to the table gives a 
good hint whether the table is in use or deprecated.  The following select 
statement retrieves the tables that might be deprecated.

Select relname from
pg_stat_user_tables
WHERE (idx_tup_fetch + seq_tup_read)= 0; -- you can define a threshold here

2. Empty tables can be retrieved by checking the number of live tuples, i.e.
Select relname from
pg_stat_user_tables
WHERE n_live_tup = 0; 
3. Unused columns can be checked using the null fraction (null_frac); 
see http://www.postgresql.org/docs/8.3/static/view-pg-stats.html

4. Use pg_constraint to determine the tables that depend on the above 
tables.

5. Table duplicates, i.e. the table can be found in more than one schema: 

SELECT
n.nspname as Schema,
c.relname as Name FROM pg_catalog.pg_class c
LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname IN (SELECT relname FROM pg_catalog.pg_class
WHERE relkind IN ('r')
GROUP BY relname
HAVING count(relname) > 1)
ORDER BY 2,1;

6. For views there are no statistics; an easy way is to parse the log file 
using regular expressions and shell scripting and compare the result with the 
list of views and tables. I did that and found many deprecated views. 

7. For duplicate data, have a look at this query:

-- To find exact duplicates, replace col1, ..., coln with the table's columns:
SELECT col1, ..., coln, min(ctid) AS keep, count(*)
FROM table
GROUP BY col1, ..., coln
HAVING count(*) > 1;

-- The above snippet can be combined with a delete statement to remove
-- duplicate rows.
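A sketch of such a delete, keeping the row with the lowest ctid per group (table and column names are placeholders to replace with your own):

```sql
DELETE FROM some_table t
USING (
    SELECT min(ctid) AS keep, col1, col2
    FROM some_table
    GROUP BY col1, col2
    HAVING count(*) > 1
) d
WHERE t.col1 = d.col1
  AND t.col2 = d.col2
  AND t.ctid <> d.keep;
```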


Have fun 





From: Jason Long mailing.li...@octgsoftware.com
To: Guillaume Lelarge guilla...@lelarge.info
Cc: Craig Ringer ring...@ringerc.id.au; pgsql-general@postgresql.org
Sent: Friday, September 30, 2011 12:12 AM
Subject: Re: [GENERAL] Identifying old/unused views and table

On Wed, 2011-09-28 at 08:52 +0200, Guillaume Lelarge wrote:
 On Wed, 2011-09-28 at 09:04 +0800, Craig Ringer wrote:
  On 09/28/2011 04:51 AM, Jason Long wrote:
   I have an application with a couple hundred views and a couple hundred
   tables.
  
   Is there some way I can find out which views have been accessed in the
   last 6 months or so?  Or some way to log this?
  
   I know there are views and tables that are no longer in used by my
   application and I am looking for a way to identify them.
  
  Look at the pg_catalog.pg_stat* tables
  
 
 I fail to see how that gives him any answer on the views, and tables no
 longer used. AFAICT, there's no way to know for views (apart from
 logging all queries in the log). As for tables, still apart from the
 log, pg_stat_user_tables could give an answer if he was monitoring it at
 least the last six months.
 
 

Thanks for the replies.  Views were my main problem.  My application
could use some cleanup.  Doing it manually is probably the best
approach.  I was just looking for a jump start.  



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[GENERAL] PL/Python

2011-09-29 Thread Joe Abbate
Hi,

I'm toying with the idea of making the Pyrseas utilities a true
PostgreSQL extension. Currently, the utilities (dbtoyaml and yamltodb)
rely on a library of Python functions: over 16 modules and 4000+ lines
of code.  The programs would remain as Python command line front-ends,
but there would be a PL/Python function, i.e., to_yaml() or to_map(),
that would return a YAML / JSON map, like this:

{'schema public': {'type greeting': {'attributes': [{'how': 'text'},
{'who': 'text'}]}, 'function pymax(a integer, b integer)': {'language':
'plpythonu', 'source': 'return a if a > b else b', 'returns':
'integer'}, 'description': 'standard public schema'}, 'language
plpythonu': {}}

(Looks nicer this way, in YAML):

language plpythonu: {}
schema public:
  description: standard public schema
  function pymax(a integer, b integer):
language: plpythonu
returns: integer
source: return a if a > b else b
  type greeting:
attributes:
- how: text
- who: text

Although there are no discussions or examples in the documentation, I've
determined that PL/Python supports Python new style classes like class
Test(object), and import of standard modules.
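For instance, a quick check (function name is made up for the demo) that new-style classes and stdlib imports both work inside PL/Python:

```sql
CREATE FUNCTION plpy_class_check() RETURNS text
LANGUAGE plpythonu
AS $$
import json                     -- stdlib import works

class Greeting(object):         -- new-style class works
    def __init__(self, who):
        self.who = who

return json.dumps({'who': Greeting('world').who})
$$;

SELECT plpy_class_check();
```

(The comments above are shown SQL-style for readability; inside the $$ body they would of course be Python # comments.)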

Now, in order for to_yaml/to_map to do its work, it needs to import 15
modules, e.g.,

from pyrseas.dbobject.language import LanguageDict

I'd be interested to know if anyone has tried something similar (in
terms of scope) and if so, how they tackled it.

The other function, diff_yaml() or diff_map(), is a bit trickier because
it requires reading in a YAML spec file, like the one above, and then
comparing it to the internal version of to_yaml/to_map, in order to
output SQL DDL statements.  The command line tool could read the spec
file locally and send it off as one big text argument.  Again, I'm
interested in similar experiences if any.

Recently in -hackers there was a discussion about splitting pg_dump.c
and some suggested breaking it up IIUC by catalog object type (much like
Pyrseas does) and providing, for example, access to functions that
output SQL DDL for a given object.  A side effect of turning Pyrseas
into an extension could be --with some extra work-- to provide such a
set of functions. I'd like to know if there is any interest in this
capability.

Joe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] could not access file $libdir/pg_buffercache: No such file or directory

2011-09-29 Thread Royce Ausburn
I'm in the process of testing out Postgres 9.0 for production use.  I've been 
using it for development on my mac, a build from EnterpriseDB.  We've just 
installed a 9.0.5 on an Ubuntu (Ubuntu 10.04.3 LTS) machine from a backport 
from lucid.  There's an existing 8.4.8 postgres install also on that machine, 
but as far as I'm aware the debs are set up so you can happily have two 
installs side by side.

I've dumped a test DB from my laptop and attempted to restore it on to the 
ubuntu machine, but I see errors:

royce@fishy:~$ createdb test
royce@fishy:~$ pg_restore -d test --no-owner test_RTR.pgc 
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 65; 1255 2194102 FUNCTION 
pg_buffercache_pages() persona
pg_restore: [archiver (db)] could not execute query: ERROR:  could not access 
file $libdir/pg_buffercache: No such file or directory
Command was: CREATE FUNCTION pg_buffercache_pages() RETURNS SETOF record
LANGUAGE c
AS '$libdir/pg_buffercache', 'pg_buffercache_...


After looking around a bit, my theory on these errors is that my mac has some 
contrib module installed that adds views to my test database, and those views 
have been included in the dump.  The contrib module is apparently not installed 
on the ubuntu machine and hence the restore can't create the views.  Is this 
theory correct?  If so:

- Can these errors be safely ignored? (not ideal, because I'll miss other, real 
errors)
- Is there some simple way I can prevent these views from being emitted as part 
of the dump?
- Is there some simple way I can prevent these views from being restored from a 
dump that contains them?
- Otherwise, how can I purge these contrib modules from my laptop?

Otherwise, if my theory is incorrect, any hints as to what it might be?

Cheers!

--Royce

Mac:

  version   


 PostgreSQL 9.0.4 on x86_64-apple-darwin, compiled by GCC 
i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5664), 64-bit
(1 row)


Ubuntu box:

   version  
 
-
 PostgreSQL 9.0.5 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real (Ubuntu 
4.4.3-4ubuntu5) 4.4.3, 64-bit
(1 row)


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Any way to prevent a join if no columns are selected from a view?

2011-09-29 Thread Ben Chobot
On Sep 29, 2011, at 4:57 PM, Jason Long wrote:

 I thought I had read somewhere that Postgres could ignore a join if it
 was not necessary because there were no columns from the table or view
 selected in the query.  Is this possible?

This sounds like incorrect logic to me, so I would be surprised if it was 
possible.

Re: [GENERAL] Any way to prevent a join if no columns are selected from a view?

2011-09-29 Thread Royce Ausburn

On 30/09/2011, at 8:57 AM, Jason Long wrote:

 I thought I had read somewhere that Postgres could ignore a join if it
 was not necessary because there were no columns from the table or view
 selected in the query.  Is this possible?

You might be thinking of this enhancement introduced in 9.0:

http://www.postgresql.org/docs/9.0/static/release-9-0.html#AEN99923

Remove unnecessary outer joins (Robert Haas)

Outer joins where the inner side is unique and not referenced above the join 
are unnecessary and are therefore now removed. This will accelerate many 
automatically generated queries, such as those created by object-relational 
mappers (ORMs).

Re: [GENERAL] Any way to prevent a join if no columns are selected from a view?

2011-09-29 Thread Royce Ausburn

 
 On 30/09/2011, at 8:57 AM, Jason Long wrote:
 
 I thought I had read somewhere that Postgres could ignore a join if it
 was not necessary because there were no columns from the table or view
 selected in the query.  Is this possible?
 
 You might be thinking of this enhancement introduced in 9.0:
 
 http://www.postgresql.org/docs/9.0/static/release-9-0.html#AEN99923
 
 Remove unnecessary outer joins (Robert Haas)
 
 Outer joins where the inner side is unique and not referenced above the join 
 are unnecessary and are therefore now removed. This will accelerate many 
 automatically generated queries, such as those created by object-relational 
 mappers (ORMs).
 

Ack! Hit send too early.

Note that this enhancement addresses only the case where the inner side of the 
join is unique (i.e. a primary key or covered by a unique constraint).  My 
understanding is that in this case the outer join can't affect the number of 
rows returned, so if it's not referenced it's not necessary.  Without the 
unique constraint, or if it's not an outer join, it still needs to be included 
even if you're not referencing the table in the select list or conditions, 
because the join affects the number of rows.  HTH, and I hope I understand 
this correctly.
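A sketch of the two cases (hypothetical tables; b.id must be unique for the planner to drop the join):

```sql
-- With a unique/primary key on b.id, this LEFT JOIN cannot change the
-- row count of a, so 9.0+ can remove it when no b column is referenced:
SELECT a.x
FROM a
LEFT JOIN b ON b.id = a.b_id;

-- An INNER JOIN, or a join on a non-unique column, can still filter or
-- multiply rows, so it must be kept even if b is never selected.
```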

[GENERAL] Deadlock of REINDEX and SELECT queries in PostgreSQL 7.4

2011-09-29 Thread Tendulker, Shivanand G Prabhu (SSTL)
Hello

We are facing a deadlock-like issue in PostgreSQL 7.4.

We have 2 databases with 3 tables each. DB contains about 250 records. We 
observed deadlock when 2 different clients are performing REINDEX and SELECT 
start their operations near simultaneously.

Client 1 performs the following operations in a tight loop:
VACUUM and ANALYZE of each table, then REINDEX of each table.

Client 2 performs a SELECT on one of the tables in a tight loop.

Looking at the Postgres locks, it seems all the locks are granted to 
the REINDEX operation and the SELECT is waiting. REINDEX never returns in this 
scenario. This problem occurs when Client 1 and Client 2 are running 
simultaneously, both in a tight loop. Once this deadlock is reached, all 
subsequent SELECT and REINDEX operations go into 'waiting' mode.

Is this a known issue? Is issuing REINDEX and SELECT transactions against 
Postgres at the same time not a valid combination?

Please provide help in resolving this issue.

Thanks and Regards
Shiv




Re: [GENERAL] could not access file $libdir/pg_buffercache: No such file or directory

2011-09-29 Thread Venkat Balaji
I had faced the same problem 2 days earlier and that was for
pg_freespacemap contrib module.

I did not know of a way to ignore these functions, so I installed the contrib
modules and restored. It worked!

I am also looking for a way to ignore these functions.
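One documented route is pg_restore's list/edit/reload cycle: dump the archive's table of contents, strip the unwanted entries, and restore from the edited list. The filtering step is plain text processing (the TOC lines below are fabricated for the demo; real entry names vary by dump):

```shell
# 1) Write the archive's table of contents to a file:
#      pg_restore -l test_RTR.pgc > toc.list
# 2) Drop the entries for the contrib objects:
#      grep -v pg_buffercache toc.list > toc.filtered
# 3) Restore only what remains:
#      pg_restore -L toc.filtered -d test --no-owner test_RTR.pgc

# Step 2 demonstrated on a fabricated TOC excerpt:
printf '%s\n' \
  '65; 1255 2194102 FUNCTION pg_buffercache_pages() persona' \
  '66; 1259 2194110 TABLE contacts persona' > toc.list
grep -v pg_buffercache toc.list > toc.filtered
cat toc.filtered
```

This avoids touching the source database at all; the contrib objects stay in the dump but are simply skipped at restore time.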

Thanks
VB

On Fri, Sep 30, 2011 at 8:15 AM, Royce Ausburn royce...@inomial.com wrote:

 I'm in the process of testing out Postgres 9.0 for production use.  I've
 been using it for development on my mac, a build from EnterpriseDB.  We've
 just installed a 9.0.5 on an Ubuntu (Ubuntu 10.04.3 LTS) machine from a
 backport from lucid.  There's an existing 8.4.8 postgres install also on
 that machine, but as far as I'm aware the debs are set up so you can happily
 have two installs side by side.

 I've dumped a test DB from my laptop and attempted to restore it onto the
 Ubuntu machine, but I see errors:

 royce@fishy:~$ createdb test
 royce@fishy:~$ pg_restore -d test --no-owner test_RTR.pgc
 pg_restore: [archiver (db)] Error while PROCESSING TOC:
 pg_restore: [archiver (db)] Error from TOC entry 65; 1255 2194102 FUNCTION
 pg_buffercache_pages() persona
 pg_restore: [archiver (db)] could not execute query: ERROR:  could not
 access file $libdir/pg_buffercache: No such file or directory
Command was: CREATE FUNCTION pg_buffercache_pages() RETURNS SETOF record
LANGUAGE c
AS '$libdir/pg_buffercache', 'pg_buffercache_...


 After looking around a bit, my theory on these errors is that my mac has
 some contrib module installed that adds views to my test database, and those
 views have been included in the dump.  The contrib module is apparently not
 installed on the ubuntu machine and hence the restore can't create the
 views.  Is this theory correct?  If so:

 - Can these errors be safely ignored? (not ideal, because I'll miss other,
 real errors)
 - Is there some simple way I can prevent these views from being emitted as
 part of the dump?
 - Is there some simple way I can prevent these views from being restored
 from a dump that contains them?
 - Otherwise, how can I purge these contrib modules from my laptop?
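For the second and third questions, pg_restore's list mode gives a documented 
way to skip entries: write the archive's table of contents out with 
`pg_restore -l`, remove or comment out the unwanted lines, and restore with 
`pg_restore -L listfile`. The filtering step is plain text processing; the 
TOC lines below are made up for illustration:

```shell
# Simulated `pg_restore -l` output; the real list would come from:
#   pg_restore -l test_RTR.pgc > restore.list
cat > restore.list <<'EOF'
65; 1255 2194102 FUNCTION pg_buffercache_pages() persona
66; 1259 2194104 VIEW pg_buffercache persona
70; 1259 2194200 TABLE accounts persona
EOF

# Keep every entry except the pg_buffercache ones
grep -v pg_buffercache restore.list > restore.filtered
cat restore.filtered

# A real restore would then use the filtered list:
#   pg_restore -L restore.filtered -d test --no-owner test_RTR.pgc
```

Entries absent from the list file are silently skipped, so the restore no 
longer tries to create the missing contrib functions.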

 Otherwise, if my theory is incorrect, any hints as to what it might be?

 Cheers!

 --Royce

 Mac:

  version

 
  PostgreSQL 9.0.4 on x86_64-apple-darwin, compiled by GCC
 i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5664), 64-bit
 (1 row)


 Ubuntu box:

   version

 -
  PostgreSQL 9.0.5 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real
 (Ubuntu 4.4.3-4ubuntu5) 4.4.3, 64-bit
 (1 row)


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] Deadlock of REINDEX and SELECT queries in PostgreSQL 7.4

2011-09-29 Thread John R Pierce

On 09/29/11 8:49 AM, Tendulker, Shivanand G Prabhu (SSTL) wrote:


Please provide help in resolving this issue.



7.4 is an ancient dead-end release.  Update to something released this 
century, like 8.4 or 9.0... where, btw, vacuum and analyze are now automatic 
(autovacuum), and reindex no longer needs doing manually except in very 
unusual situations.





--
john r pierce                            N 37, W 122
santa cruz ca                            mid-left coast

