[GENERAL] conditional insert

2011-09-05 Thread Pau Marc Muñoz Torres
Hi folks,

I'm trying to perform a conditional insert into a table; that is, I want to
skip the insert if the record already exists.

Googling, I found something like

 insert into XX values (1,2,3) where not exist (select );

but I'm getting an error near WHERE...

Does anyone know how I can perform this insert?

thanks

p


Re: [GENERAL] conditional insert

2011-09-05 Thread Achilleas Mantzios
On Monday 05 September 2011 12:38:34, Pau Marc Muñoz Torres wrote:
 Hi folks
 
  I'm trying to perform a conditional insert into a table; that is, I want
 to skip the insert if the record already exists.
 

That's what primary/unique keys are for.

Isolate the columns that you consider to form a correct unique key
and create a unique constraint on them.

There is no notion of a conditional insert that I know of.

 



-- 
Achilleas Mantzios

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] conditional insert

2011-09-05 Thread Raymond O'Donnell
On 05/09/2011 10:38, Pau Marc Muñoz Torres wrote:
 Hi folks
 
  I'm trying to perform a conditional insert into a table; that is, I'm
 trying to skip the insert if the record already exists.
 
 Googling, I found something like
 
  insert into XX values (1,2,3) where not exist (select );
 
 but I'm getting an error near WHERE...
 

Shouldn't it be EXISTS, not EXIST?

Anyway, what is the error you're getting?

Ray.

-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie



Re: [GENERAL] conditional insert

2011-09-05 Thread Thomas Kellerer

Pau Marc Muñoz Torres, 05.09.2011 11:38:

Hi folks

 I'm trying to perform a conditional insert into a table; that is, I'm
trying to skip the insert if the record already exists.

Googling, I found something like

  insert into XX values (1,2,3) where not exist (select );

but I'm getting an error near WHERE...
Does anyone know how I can perform this insert?


INSERT INTO xxx
SELECT 1,2,3
WHERE NOT EXISTS (SELECT ...)

Regards
Thomas




Re: [GENERAL] conditional insert

2011-09-05 Thread Sim Zacks


  
  
On 09/05/2011 12:38 PM, Pau Marc Muñoz Torres wrote:
 Hi folks

 I'm trying to perform a conditional insert into a table; that is, I want
 to skip the insert if the record already exists.

 Googling, I found something like

 insert into XX values (1,2,3) where not exist (select );

 but I'm getting an error near WHERE...

 Does anyone know how I can perform this insert?

 thanks

 p

You can either do an INSERT (...) SELECT ... FROM ... WHERE ...,
or you can add a rule to the table that checks whether the key exists
and, if so, does an update or nothing instead.

  



Re: [GENERAL] conditional insert

2011-09-05 Thread Pau Marc Muñoz Torres
I don't see it clearly; let me give an example.

I have the following table:

molec varchar(30)
seq varchar(100)

into which I insert my values.

Let's imagine that I have a record ('ubq', 'aadgylpittrs').

How can I prevent inserting another record where molec='ubq'?

thanks



2011/9/5 Thomas Kellerer spam_ea...@gmx.net

 Pau Marc Muñoz Torres, 05.09.2011 11:38:

  Hi folks

  I'm trying to perform a conditional insert into a table; that is, I'm
  trying to skip the insert if the record already exists.

  Googling, I found something like

   insert into XX values (1,2,3) where not exist (select );

  but I'm getting an error near WHERE...
  Does anyone know how I can perform this insert?


 INSERT INTO xxx
 SELECT 1,2,3
 WHERE NOT EXISTS (SELECT ...)

 Regards
 Thomas







-- 
Pau Marc Muñoz Torres

Laboratori de Biologia Computacional
Institut de Biotecnologia i Biomedicina Vicent Villar

Universitat Autonoma de Barcelona
E-08193 Bellaterra (Barcelona)

telèfon: (+34) 935 86 89 39
Email: paumarc.mu...@bioinf.uab.cat


Re: [GENERAL] conditional insert

2011-09-05 Thread Sim Zacks


  
  
On 09/05/2011 01:37 PM, Pau Marc Muñoz Torres wrote:
 I don't see it clearly; let me give an example.

 I have the following table:

 molec varchar(30)
 seq varchar(100)

 into which I insert my values.

 Let's imagine that I have a record ('ubq', 'aadgylpittrs').

 How can I prevent inserting another record where molec='ubq'?

 thanks

Either put a unique constraint on molec, or do:

insert into tbl(molec,seq)
select 'ubq', 'aadgylpittrs' where not exists (select molec from tbl
where molec='ubq');
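Putting the two suggestions together as a runnable sketch (the table definition here is inferred from the thread, not taken from it); note that the NOT EXISTS form alone is not safe against concurrent sessions, which is why the unique constraint is the real guarantee:

```sql
-- Safe under concurrency: let the database enforce uniqueness.
CREATE TABLE tbl (
    molec varchar(30) UNIQUE NOT NULL,
    seq   varchar(100)
);

-- Conditional insert: inserts nothing if 'ubq' is already present.
-- Two concurrent sessions can still both pass the NOT EXISTS check,
-- so the unique constraint above remains necessary.
INSERT INTO tbl (molec, seq)
SELECT 'ubq', 'aadgylpittrs'
WHERE NOT EXISTS (SELECT 1 FROM tbl WHERE molec = 'ubq');
```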
  



Re: [GENERAL] conditional insert

2011-09-05 Thread Pau Marc Muñoz Torres
OK, thanks Sim, now I see it.

P

2011/9/5 Sim Zacks s...@compulab.co.il

 On 09/05/2011 01:37 PM, Pau Marc Muñoz Torres wrote:

  I don't see it clearly; let me give an example.

  I have the following table:

  molec varchar(30)
  seq varchar(100)

  into which I insert my values.

  Let's imagine that I have a record ('ubq', 'aadgylpittrs').

  How can I prevent inserting another record where molec='ubq'?

  thanks

 Either put a unique constraint on molec, or do:
 insert into tbl(molec,seq)
 select 'ubq', 'aadgylpittrs' where not exists (select molec from tbl where
 molec='ubq');




-- 
Pau Marc Muñoz Torres

Laboratori de Biologia Computacional
Institut de Biotecnologia i Biomedicina Vicent Villar

Universitat Autonoma de Barcelona
E-08193 Bellaterra (Barcelona)

telèfon: (+34) 935 86 89 39
Email: paumarc.mu...@bioinf.uab.cat


[GENERAL] Protocol question - fastpath parameter status 'S'

2011-09-05 Thread Radosław Smogura

Hello,

During testing of a (forked) driver we have seen the following strange
behaviour. The JDBC driver mainly invokes Fastpath to obtain LOBs;
because of insufficient privileges I get

1. Some bytes
2. 'E' (error about privileges)
3. (sic!) 'S' application_name (driver throws exception)
Now I analyse the buffer byte after byte
4. 'Z', 00 00 00 05 69 108 (last number may be trash)

It looks like without 3 everything should be OK, so... my question is
whether this is intended but undocumented behaviour, or whether some
async garbage came in, because the docs say nothing about 'S'. I found
this with only one app server, but I don't think it makes background
async calls.


Regards,
Radosław Smogura



Re: [GENERAL] conditional insert

2011-09-05 Thread J. Hondius




I agree that there are better ways to do this,
but for me this works (legacy-driven situation):

INSERT INTO tbinitialisatie (col1, col2)
SELECT 'x', 'y'
FROM tbinitialisatie
WHERE not exists (select * from tbinitialisatie where col1 = 'x' and col2 = 'y')
LIMIT 1







[GENERAL] compression of query and result data in tcp socket connections

2011-09-05 Thread Oguz Yilmaz
Hi,

We need a convenient way to compress PostgreSQL communication on port 5432.
In our case the database is available over the internet and the application
logic has to reach it remotely.

I searched and found these threads:
http://archives.postgresql.org/pgsql-hackers/2002-05/msg00752.php
http://archives.postgresql.org/pgsql-general/2010-08/msg3.php
http://archives.postgresql.org/pgsql-hackers/2002-03/msg00664.php
http://archives.postgresql.org/pgsql-hackers/2006-05/msg01318.php

Some suggested SSH tunneling as a compression wrapper. However, that requires
opening a separate SSH tunnel for each remote DB server, so it is not very
convenient for us.

Is it possible to enable compression on the TCP connection in a way that is
more internal to PostgreSQL?

Best Regards,


--
Oguz YILMAZ



Re: [GENERAL] compression of query and result data in tcp socket connections

2011-09-05 Thread Radosław Smogura

On Mon, 5 Sep 2011 14:23:12 +0300, Oguz Yilmaz wrote:

 Hi,

 We need a convenient way to compress PostgreSQL communication on
 port 5432. In our case the database is available over the internet and
 the application logic has to reach it remotely.

 I searched and found these threads:
 http://archives.postgresql.org/pgsql-hackers/2002-05/msg00752.php
 http://archives.postgresql.org/pgsql-general/2010-08/msg3.php
 http://archives.postgresql.org/pgsql-hackers/2002-03/msg00664.php
 http://archives.postgresql.org/pgsql-hackers/2006-05/msg01318.php

 Some suggested SSH tunneling as a compression wrapper. However, that
 requires opening a separate SSH tunnel for each remote DB server, so
 it is not very convenient for us.

No, there is no such support.

But if you don't want SSH, you may try an IPsec VPN, e.g. with IKEv2;
unless you are behind a firewall, compression may be enabled.


Regards,
Radosław Smogura
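As a point of comparison, the SSH-tunnel workaround mentioned in the thread enables compression with a single flag; host, port, and database names below are placeholders:

```
# Compress (-C) all traffic through a local tunnel; run no remote command (-N).
ssh -C -N -L 5433:localhost:5432 user@db.example.com

# Then point the client at the local end of the tunnel:
psql "host=localhost port=5433 dbname=mydb"
```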



Re: [GENERAL] SSL certificates issue

2011-09-05 Thread Asia
 Asia asia123...@op.pl writes:
  Now the issue is that when using libpq it was enough to have only the root 
  certificate in the server's root.crt and it worked fine.
  But when I tried the same with JDBC, it turned out that I need to put the 
  whole chain (2 certs) of Intermediate CA 1 in the server's root.crt.
 
 This is poor configuration, because every certificate listed in root.crt
 is considered fully trusted for every purpose.  It's best to keep only
 top-level root certs in root.crt.  Instead, put the full chain of
 certificates into the client's postgresql.crt, as per the manual:
 
 : In some cases, the client certificate might be signed by an
 : intermediate certificate authority, rather than one that is directly
 : trusted by the server. To use such a certificate, append the certificate
 : of the signing authority to the postgresql.crt file, then its parent
 : authority's certificate, and so on up to a root authority that is
 : trusted by the server. The root certificate should be included in every
 : case where postgresql.crt contains more than one certificate.
 
 In the JDBC case you'd need to put all those certs into the client's
 keystore, which I'm afraid I don't know the details of doing.  Possibly
 somebody on pgsql-jdbc could help you with that.
 
   regards, tom lane
 

Hi Tom,

I have analyzed your reply thoroughly in my implementation, but unfortunately
either I am doing something wrong in the configuration or it does not work as
described in the docs.

When I put the top-level CA certificate (just to remind you: the intermediate
CA is a 2-cert chain) in root.crt on the client, I receive the following
error when connecting:

SSL error: tlsv1 alert unknown ca

When I do the same on the server (with the original root.crt on the client),
i.e. with the server's root.crt containing only the top-level CA, I receive
the following error when connecting:

SSL error: certificate verify failed

I was not actually asking for the details of how to do it with JDBC, since I
got it working with a proper keystore and truststore and clientcert=1. I was
asking why JDBC works differently from libpq; they should behave similarly
(JDBC uses the standard SSL implementation from Java; I did not find a custom
implementation in Postgres). JDBC requires the client's full CA chain in the
server's root.crt while libpq does not. The question is why, and whether that
is correct.

Would you please let me know what I might be doing wrong, and confirm that
chained CAs are supported?

I would expect to have only one top-level CA cert in the server's and the
client's root.crt, but that was not possible to configure with a 2-level
intermediate CA.

Please advise.

Kind regards,
Joanna
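For what it's worth, the layout described in the manual excerpt Tom quoted can be assembled like this; file names are placeholders, and the client-side file lives in ~/.postgresql by default:

```
# Client side: leaf certificate first, then each signing CA up the chain,
# including the root (the manual says to include the root whenever
# postgresql.crt contains more than one certificate).
cat client.crt intermediate-ca.crt root-ca.crt > ~/.postgresql/postgresql.crt

# Server side: only the trusted top-level root goes into root.crt.
cp root-ca.crt $PGDATA/root.crt
```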





[GENERAL] warm standby - apply wal archives

2011-09-05 Thread MirrorX
hello all,

I would like your advice on the following matter. If I am not wrong, with a
warm standby (PG 8.4) the WAL archives are shipped to the failover server;
when the time comes, the failover server, which already has a copy of the
primary's data directory and all the WAL archives, starts the recovery
process by applying all those WALs, and when it has finished it comes up
ready for connections.

My question is this: what happens if the WAL archives are too many? How long
could this procedure take? If someone has tested it and has some metrics, I
would really appreciate seeing them. More than that, is there a way to apply
the WALs every hour, for example, so that when the time comes the procedure
doesn't take too long? If I write a script that does the above, would that
work? Thanks in advance.
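If it helps: the usual 8.4-era warm-standby setup keeps recovery running continuously by using pg_standby as the restore_command, so WAL segments are replayed as they arrive rather than all at once at failover time. A sketch (paths and the trigger file name are placeholders):

```
# recovery.conf on the standby (PostgreSQL 8.4 warm standby; example paths)
restore_command = 'pg_standby -t /tmp/pgsql.trigger.5432 /mnt/server/wal_archive %f %p'
```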




Re: [GENERAL] Protocol question - fastpath parameter status 'S'

2011-09-05 Thread John DeSoi

On Sep 5, 2011, at 7:05 AM, Radosław Smogura wrote:

 Hello,
 
 During testing of (forked) driver we had seen following strange behaviour. 
 JDBC driver mainly invokes Fastpath to obtain LOBs, because of unscientific 
 privileges I get
 1. Some bytes
 2. 'E' (error about priviliges)
 3. (sic!) 'S' application_name (driver throws exception)
 Now I analyse buffer byte after byte
 4. 'Z', 00 00 00 05 69 108 (last number may be trash)
 
 It's looks like without 3 everything should be OK, so... I have question if 
 this is intended and undocumented behaviour, or some async trashes came in, 
 because docs says nothing about 'S'. I found this only one app server, but I 
 don't think it makes some background async calls.
 


'S' is the Sync message. 

http://www.postgresql.org/docs/current/static/protocol-message-formats.html

See this section to understand the role of the Sync message:

http://www.postgresql.org/docs/current/static/protocol-flow.html#PROTOCOL-FLOW-EXT-QUERY


John DeSoi, Ph.D.







Re: [GENERAL] Protocol question - fastpath parameter status 'S'

2011-09-05 Thread Radosław Smogura

On Mon, 5 Sep 2011 10:54:21 -0400, John DeSoi wrote:

 On Sep 5, 2011, at 7:05 AM, Radosław Smogura wrote:

  [...]

 'S' is the Sync message.

 http://www.postgresql.org/docs/current/static/protocol-message-formats.html

 See this section to understand the role of the Sync message:

 http://www.postgresql.org/docs/current/static/protocol-flow.html#PROTOCOL-FLOW-EXT-QUERY

 John DeSoi, Ph.D.


No. This 'S' is sent by the backend, and by analysing the buffered bytes
I'm sure it's ParameterStatus: right after the 'S' I have the name of one
of the GUC parameters and its (empty) value.


Regards,
Radosław Smogura
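For anyone decoding the stream by hand: in the v3 protocol, a backend-sent 'S' is indeed ParameterStatus, laid out as the type byte, an Int32 length (counting itself but not the type byte), and two null-terminated strings (parameter name and value). A minimal Python sketch with a synthetic application_name message (the payload below is fabricated for illustration):

```python
import struct

def parse_parameter_status(buf):
    """Parse a backend ParameterStatus ('S') message.

    Layout: byte1 'S', Int32 length (including itself, excluding the
    type byte), then two null-terminated strings: name and value.
    """
    assert buf[0:1] == b'S'
    (length,) = struct.unpack('!I', buf[1:5])
    body = buf[5:1 + length]  # length counts its own 4 bytes
    name, value, _rest = body.split(b'\x00', 2)
    return name.decode(), value.decode()

# Synthetic message: application_name with an empty value.
payload = b'application_name\x00\x00'
msg = b'S' + struct.pack('!I', 4 + len(payload)) + payload
print(parse_parameter_status(msg))  # ('application_name', '')
```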



Re: [GENERAL] SSL certificates issue

2011-09-05 Thread Tom Lane
Asia asia123...@op.pl writes:
 I would expect to have only one top-level CA cert in server's and client's 
 root.crt and it was not possible to configure with 2-level intermediate CA. 

This seems a little confused, since in your previous message you stated
that libpq worked correctly and JDBC did not, and now you seem to be
saying the opposite.

As far as libpq goes, I would expect it to function correctly in 9.0 and
up (and it did function correctly, last I tested it).  Previous releases
will not do this nicely, for lack of this patch:
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=4ed4b6c54

regards, tom lane



Re: [GENERAL] warm standby - apply wal archives

2011-09-05 Thread MirrorX
my bad...
I read in the manual that the recovery process is continuous and runs all
the time. So the question now is: how many WALs can this procedure handle?
For example, can it handle 100-200 GB every day? If it cannot, any other
suggestions for HA? Thanks in advance.




Re: [GENERAL] warm standby - apply wal archives

2011-09-05 Thread Andreas Kretschmer
MirrorX mirr...@gmail.com wrote:

 my bad...
 i read in the manual that the recovery process is constant and runs all the
 time. so the question now is 
 how many wals can this procedure handle? for example can it handle 100-200G

Sure; if the master can handle that, it is no problem for the standby (same
hardware). In my experience it is only a fraction of the work for the
standby (streaming replication with 9.0).


 every day? if it cannot, any other suggestions for HA ?thx in advance

That depends on your requirements; for instance, Heartbeat plus DRBD is
another solution.


Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
If I was god, I would recompile penguin with --enable-fly.   (unknown)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°



Re: [GENERAL] warm standby - apply wal archives

2011-09-05 Thread MirrorX
Thanks a lot for your answer.

Actually, DRBD is the solution I am trying to avoid, since I think it
degrades performance a lot (I have used it in the past). I also have serious
doubts about whether the data ends up corrupted on the master's failure if
not all blocks have been replicated to the secondary. Has anyone faced this
situation? Any comments on that? Thanks in advance.




[GENERAL] alter column appears to work, but doesn't?

2011-09-05 Thread Ron Peterson
I just updated a table to have a larger column size as follows.

alter table attributes_log alter column attribute_name type varchar(48);

The size was previously 24.

iddb=> \d attributes
                             Table "iddb.attributes"
     Column     |         Type          |                  Modifiers
----------------+-----------------------+--------------------------------------------
 attribute_id   | uuid                  | not null default (encode(pgcrypto.gen_random_bytes(16), 'hex'::text))::uuid
 attribute_name | character varying(48) | not null
 management     | character varying(24) | not null default 'by_value'::character varying

iddb=> insert into attributes ( attribute_name ) values ( 'abcdefghijklmnopqrstuvwxyz' );
ERROR:  value too long for type character varying(24)

I'm using PostgreSQL 9.0.4

I tried to replicate this with a new database and a simple table, but
could not.

I had to drop (and then recreate) three rules and a view on this table
before altering the column.

This is a production database, so I need to treat it gently.

-- 
Ron Peterson
Network & Systems Administrator
Mount Holyoke College
http://www.mtholyoke.edu/~rpeterso



Re: [GENERAL] alter column appears to work, but doesn't?

2011-09-05 Thread Tom Lane
Ron Peterson rpete...@mtholyoke.edu writes:
 I just updated a table to have a larger column size as follows.

 alter table attributes_log alter column attribute_name type varchar(48);

How come this refers to attributes_log while your failing command is
an insert into attributes?

regards, tom lane



Re: [GENERAL] alter column appears to work, but doesn't?

2011-09-05 Thread Ron Peterson
2011-09-05_15:03:00-0400 Tom Lane t...@sss.pgh.pa.us:
 Ron Peterson rpete...@mtholyoke.edu writes:
  I just updated a table to have a larger column size as follows.
 
  alter table attributes_log alter column attribute_name type varchar(48);
 
 How come this refers to attributes_log while your failing command is
 an insert into attributes?

That was a typo, sorry.  Did do the same thing on original table.  I did
the same thing to attributes_log because I have rules that log data
there from my original table on insert/update/delete.

I just dropped my logging rules, stopped the database and restarted it,
put my rules back in place, and now it works.  Not sure why.  Cached
query plan?

-- 
Ron Peterson
Network & Systems Administrator
Mount Holyoke College
http://www.mtholyoke.edu/~rpeterso



Re: [GENERAL] warm standby - apply wal archives

2011-09-05 Thread Alan Hodgson
On September 5, 2011, MirrorX mirr...@gmail.com wrote:
 thx a lot for your answer.
 
 actually DRBD is the solution i am trying to avoid, since i think the
 performance is degrading a lot (i ve used it in the past). and also i
 have serious doubts if the data is corrupted in case of the master's
 failure, if not all blocks have been replicated to they secondary. has
 anyone faced this situation? any comments on that? thx in advance
 

DRBD mode C is very good. If you're running mode C, when PostgreSQL issues 
an fsync, that doesn't return until the secondary node has the data on disk. 
It's as safe as you're going to get.

The performance limit for DRBD is the write speed of a single network 
interface. If you're exceeding that, though, you also aren't going to be 
shipping out WAL segments in real time. I guess also if your nodes aren't 
close by, the latency could be a speed killer, but that's not really the 
normal use case.


Re: [GENERAL] alter column appears to work, but doesn't?

2011-09-05 Thread Tom Lane
Ron Peterson rpete...@mtholyoke.edu writes:
 I just dropped my logging rules, stopped the database and restarted it,
 put my rules back in place, and now it works.  Not sure why.  Cached
 query plan?

Maybe.  We'd need a reproducible test case to do more than speculate
though.

regards, tom lane



Re: [GENERAL] alter column appears to work, but doesn't?

2011-09-05 Thread Ron Peterson
2011-09-05_16:14:00-0400 Tom Lane t...@sss.pgh.pa.us:
 Ron Peterson rpete...@mtholyoke.edu writes:
  I just dropped my logging rules, stopped the database and restarted it,
  put my rules back in place, and now it works.  Not sure why.  Cached
  query plan?
 
 Maybe.  We'd need a reproducible test case to do more than speculate
 though.

Hi Tom,

I was able to reproduce this.  DDL below.  Probably more DDL than
necessary, but not sure what is or isn't relevant.

postgres=# drop rule attribute_insert_rule on attributes;
postgres=# drop rule attribute_update_rule on attributes;
postgres=# drop rule attribute_delete_rule on attributes;
postgres=# alter table attributes_log alter column attribute_name type 
varchar(50);
...then recreate rules below
postgres=# insert into attributes values ( repeat( 'x', 49 ) );
ERROR:  value too long for type character varying(48)


CREATE TABLE attributes (
  attribute_name
VARCHAR(48)
UNIQUE
NOT NULL
);

-- Attribute names can be inserted or deleted, but not changed.
CREATE OR REPLACE FUNCTION attribute_name_freeze_tf()
RETURNS TRIGGER
AS $$
BEGIN
  IF (TG_OP = 'INSERT') THEN
IF NEW.attribute_name = OLD.attribute_name THEN
  RETURN NEW;
END IF;
  END IF;
  RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER attribute_name_freeze_t
BEFORE UPDATE ON attributes
FOR EACH ROW EXECUTE PROCEDURE attribute_name_freeze_tf();

CREATE TABLE attributes_log (
  attribute_name
VARCHAR(48),
  action
CHAR(6)
NOT NULL
CHECK( action IN ('insert', 'delete','update') ),
  changed
TIMESTAMP WITH TIME ZONE
NOT NULL
DEFAULT CURRENT_TIMESTAMP
);

CREATE RULE attribute_insert_rule AS
ON INSERT TO attributes
DO
(
  INSERT INTO attributes_log (
attribute_name,
action )
  VALUES (
new.attribute_name,
'insert' );
);

CREATE RULE attribute_update_rule AS
ON UPDATE TO attributes
DO
(
  INSERT INTO attributes_log (
attribute_name,
action )
  VALUES (
new.attribute_name,
'update' );
);

CREATE RULE attribute_delete_rule AS
ON DELETE TO attributes
DO
(
  INSERT INTO attributes_log (
attribute_name,
action )
  VALUES (
old.attribute_name,
'delete' );
);



-- 
Ron Peterson
Network & Systems Administrator
Mount Holyoke College
http://www.mtholyoke.edu/~rpeterso



Re: [GENERAL] UPDATE using query; per-row function calling problem

2011-09-05 Thread Rory Campbell-Lange
On 02/09/11, David Johnston (pol...@yahoo.com) wrote:
  In my -1 example, am I right in assuming that I created a correlated
  subquery rather than an un-correlated one? I'm confused about the
  difference.
  
 Correlated: has a where clause that references the outer query
 Un-correlated: not correlated
 
 Because of the where clause a correlated sub-query will return a
 different record for each row whereas an un-correlated sub-query will
 return the same record for all rows since the where clause (if any) is
 constant.

Hi David -- thanks for the clarification. However I'm still a little
confused. As I understand it the following is a un-correlated sub-query:

UPDATE
slots
SET
a = 'a'
,b = (SELECT uuid_generate_v1())
WHERE
c = TRUE;

and the following, without a sub-SELECT, is a correlated sub-query:

UPDATE
slots
SET
a = 'a'
,b = uuid_generate_v1()
WHERE
c = TRUE;

Is the point that the lower is not a sub-query at all?

Regards
Rory
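To make the distinction concrete, here is a sketch using a hypothetical defaults table (not from the thread):

```sql
-- Un-correlated: the sub-query references nothing from the outer row,
-- so it yields the same value for every updated row.
UPDATE slots SET b = (SELECT max(val) FROM defaults);

-- Correlated: the sub-query references the outer row (slots.a),
-- so it is re-evaluated for each row.
UPDATE slots SET b = (SELECT d.val FROM defaults d WHERE d.id = slots.a);

-- A bare function call such as uuid_generate_v1() is not a sub-query at
-- all: it is simply evaluated once per updated row.
```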



Re: [GENERAL] alter column appears to work, but doesn't?

2011-09-05 Thread Adrian Klaver
On Monday, September 05, 2011 1:48:58 pm Ron Peterson wrote:
 2011-09-05_16:14:00-0400 Tom Lane t...@sss.pgh.pa.us:
  Ron Peterson rpete...@mtholyoke.edu writes:
   I just dropped my logging rules, stopped the database and restarted it,
   put my rules back in place, and now it works.  Not sure why.  Cached
   query plan?
  
  Maybe.  We'd need a reproducible test case to do more than speculate
  though.
 
 Hi Tom,
 
 I was able to reproduce this.  DDL below.  Probably more DDL than
 necessary, but not sure what is or isn't relevant.
 
 postgres=# drop rule attribute_insert_rule on attributes;
 postgres=# drop rule attribute_update_rule on attributes;
 postgres=# drop rule attribute_delete_rule on attributes;
 postgres=# alter table attributes_log alter column attribute_name type
 varchar(50); ...then recreate rules below
 postgres=# insert into attributes values ( repeat( 'x', 49 ) );
 ERROR:  value too long for type character varying(48)
 
 

I am not seeing where you change the varchar length in the table attributes. 
That is where the error is coming from.

-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] alter column appears to work, but doesn't?

2011-09-05 Thread Ron Peterson
Phghght.  Sorry, no, that didn't do it, I was typing too fast and
skipped updating the attributes table.  That was definitely not the case
w/ my original database.  Wasn't working.  The table definition reported
the update I made.  Insert did not work.  Dropping rules, restarting
database, and recreating rules got it working.  Dunno.

-Ron-



[GENERAL] Query runs in 335ms; function in 100,239ms : date problem?

2011-09-05 Thread Rory Campbell-Lange
I have a function wrapping a (fairly complex) query.

The query itself runs in about a 1/3rd of a second. When running the
query as a 'RETURN QUERY' function on Postgres 8.4, the function runs in
over 100 seconds, about 300 times slower.

The function takes 3 input parameters: 2 dates and a boolean. The dates
(in_date_from, in_date_to) are used several times in the function.

When I replace the two parameters in the body of the query with, for
instance date'2011-05-01' and date'2011-08-01', the function operates
almost as speedily as the straight query.

I would be grateful to know how to work around this date problem.

As the function is rather large I have taken the liberty of posting it
here:
http://campbell-lange.net/media/files/fn_report_pers_leave.sql.html

Rory




Re: [GENERAL] Query runs in 335ms; function in 100,239ms : date problem?

2011-09-05 Thread Tomas Vondra
On 5 Září 2011, 23:07, Rory Campbell-Lange wrote:
 I have a function wrapping a (fairly complex) query.

 The query itself runs in about a 1/3rd of a second. When running the
 query as a 'RETURN QUERY' function on Postgres 8.4, the function runs in
 over 100 seconds, about 300 times slower.

 The function takes 3 input parameters: 2 dates and a boolean. The dates
 (in_date_from, in_date_to) are used several times in the function.

 When I replace the two parameters in the body of the query with, for
 instance date'2011-05-01' and date'2011-08-01', the function operates
 almost as speedily as the straight query.

 I would be grateful to know how to work around this date problem.

 As the function is rather large I have taken the liberty of posting it
 here:
 http://campbell-lange.net/media/files/fn_report_pers_leave.sql.html

Do I understand correctly that you compare a query with literal parameters
with a parametrized query wrapped in a plpgsql function?

Try to run it as a prepared query - I guess you'll get about the same run
time as with the function (i.e. over 100 seconds).

The problem with prepared statements is that when planning the query, the
parameter values are unknown - so the optimizer does not know selectivity
of the conditions etc. and uses common values to prepare a safe plan.
OTOH the literal parameters allow to optimize the plan according to the
actual parameter values.
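
The difference is easy to reproduce in psql; a sketch with hypothetical
parameter usage (table and column names borrowed from the function
posted earlier in the thread):

```sql
-- Planned once with $1/$2 unknown: the optimizer falls back to a
-- generic, "safe" plan based on default selectivity estimates.
PREPARE leave_report (date, date) AS
    SELECT count(*)
    FROM   leave_association
    WHERE  d_date >= $1 AND d_date <= $2;

EXPLAIN EXECUTE leave_report ('2011-05-01', '2011-08-01');

-- The same predicate with literals is planned with the actual
-- selectivity of the date range in mind.
EXPLAIN SELECT count(*)
        FROM   leave_association
        WHERE  d_date >= date '2011-05-01'
        AND    d_date <= date '2011-08-01';
```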

Tomas




Re: [GENERAL] warm standby - apply wal archives

2011-09-05 Thread MirrorX
The nodes communicate through 4 Gbps Ethernet, so I don't think there is
an issue there. Probably some kind of misconfiguration of DRBD has
occurred; I will check on that tomorrow. Thanks a lot :)

--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/warm-standby-apply-wal-archives-tp4770567p4772126.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.



Re: [GENERAL] pgfoundry.org is not accessible

2011-09-05 Thread Tomas Vondra
On 2 Září 2011, 7:36, Magnus Hagander wrote:
 Yeah, all hub.org hosted services had a rather long downtime again
 yesterday. They seem to be back up now.

And down again :-(

Tomas




Re: [GENERAL] pgfoundry.org is not accessible

2011-09-05 Thread Raghavendra
Hopefully it will be back after some time :)

---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/



On Tue, Sep 6, 2011 at 3:17 AM, Tomas Vondra t...@fuzzy.cz wrote:

 On 2 Září 2011, 7:36, Magnus Hagander wrote:
  Yeah, all hub.org hosted services had a rather long downtime again
  yesterday. They seem to be back up now.

 And down again :-(

 Tomas




Re: [GENERAL] Query runs in 335ms; function in 100,239ms : date problem?

2011-09-05 Thread Rory Campbell-Lange
On 05/09/11, Tomas Vondra (t...@fuzzy.cz) wrote:
 On 5 Září 2011, 23:07, Rory Campbell-Lange wrote:
...
  The query itself runs in about a 1/3rd of a second. When running the
  query as a 'RETURN QUERY' function on Postgres 8.4, the function runs in
  over 100 seconds, about 300 times slower.
...
  As the function is rather large I have taken the liberty of posting it
  here:
  http://campbell-lange.net/media/files/fn_report_pers_leave.sql.html
 
 Do I understand correctly that you compare a query with literal parameters
 with a parametrized query wrapped in a plpgsql function?

Yes! Certainly I need to make the function perform more quickly.

 Try to run it as a prepared query - I guess you'll get about the same run
 time as with the function (i.e. over 100 seconds).

The prepared query runs in almost exactly the same time as the function,
but thanks for the suggestion. A very useful aspect of it is that I was
able to get the EXPLAIN output which I guess gives a fairly good picture
of the plan used for the function.

The explain output is here:
http://campbell-lange.net/media/files/explain.txt.html

I'm inexperienced in reading EXPLAIN output, but it looks like the
Nested Loop Semi Join at line 72 is running very slowly.

 The problem with prepared statements is that when planning the query, the
 parameter values are unknown - so the optimizer does not know selectivity
 of the conditions etc. and uses common values to prepare a safe plan.
 OTOH the literal parameters allow to optimize the plan according to the
 actual parameter values.

Thank you very much for the helpful explanation.

Regards
Rory



Re: [GENERAL] Query runs in 335ms; function in 100,239ms : date problem?

2011-09-05 Thread Rory Campbell-Lange
On 05/09/11, Rory Campbell-Lange (r...@campbell-lange.net) wrote:
 On 05/09/11, Tomas Vondra (t...@fuzzy.cz) wrote:
  On 5 Září 2011, 23:07, Rory Campbell-Lange wrote:
 ...
   The query itself runs in about a 1/3rd of a second. When running the
   query as a 'RETURN QUERY' function on Postgres 8.4, the function runs in
   over 100 seconds, about 300 times slower.

...

  Try to run it as a prepared query - I guess you'll get about the same run
  time as with the function (i.e. over 100 seconds).
 
 The prepared query runs in almost exactly the same time as the function,
 but thanks for the suggestion. A very useful aspect of it is that I was
 able to get the EXPLAIN output which I guess gives a fairly good picture
 of the plan used for the function.
 
 The explain output is here:
 http://campbell-lange.net/media/files/explain.txt.html
 
 I'm inexperienced in reading EXPLAIN output, but it looks like the
 Nested Loop Semi Join at line 72 is running very slowly.

I added more filtering conditions to the clause at line 72 and the
prepared statement's runtime dropped to 24043.902 ms. The function is
still slower -- 47957.796 ms -- but even that is a 50% improvement.

Thanks very much for your help.

Regards
Rory



Re: [GENERAL] Query runs in 335ms; function in 100,239ms : date problem?

2011-09-05 Thread Tomas Vondra
On 6 Září 2011, 0:27, Rory Campbell-Lange wrote:
 On 05/09/11, Tomas Vondra (t...@fuzzy.cz) wrote:
 Do I understand correctly that you compare a query with literal
 parameters
 with a parametrized query wrapped in a plpgsql function?

 Yes! Certainly I need to make the function perform more quickly.

 Try to run it as a prepared query - I guess you'll get about the same
 run
 time as with the function (i.e. over 100 seconds).

 The prepared query runs in almost exactly the same time as the function,
 but thanks for the suggestion. A very useful aspect of it is that I was
 able to get the EXPLAIN output which I guess gives a fairly good picture
 of the plan used for the function.

Well, my point was that the queries wrapped in functions are executed just
like prepared statements. And because prepared queries do not use
parameter values to optimize the plan, the result may be worse compared to
queries with literal parameters. So I was not expecting an improvement, I
was merely trying to show the problem.

 The explain output is here:
 http://campbell-lange.net/media/files/explain.txt.html

 I'm inexperienced in reading EXPLAIN output, but it looks like the
 Nested Loop Semi Join at line 72 is running very slowly.

I've posted the plan here: http://explain.depesz.com/s/uYX

Yes, the nested loop is the problem, and one of the causes is the join
condition built by concatenating columns - can you rewrite this

AND r.d_date || '-' || r.n_session || '-' || u.n_id IN
(SELECT
d_date || '-' || n_session || '-' || n_person
FROM
leave_association
WHERE
d_date >= in_date_from
AND d_date <= in_date_to
) -- i.e. leave where the person normally works

like this

AND EXISTS
(SELECT
1
FROM
leave_association
WHERE
d_date >= in_date_from
AND d_date <= in_date_to
AND d_date = r.d_date
AND n_session = r.n_session
AND n_person = u.n_id
) -- i.e. leave where the person normally works

and then do the same for the NOT IN subquery (= NOT EXISTS). I think it
should return the same results, plus it might use indexes on
leave_association, which was not possible before because of the
concatenated join condition.
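
To give the EXISTS something to use, a supporting index along these
lines might help (a sketch; it assumes the three equality columns above
and should be checked against the real schema):

```sql
-- Hypothetical index matching the equality predicates in the
-- rewritten EXISTS subquery.
CREATE INDEX leave_association_date_session_person_idx
    ON leave_association (d_date, n_session, n_person);
```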

Tomas




Re: [GENERAL] warm standby - apply wal archives

2011-09-05 Thread Venkat Balaji
In my experience, I have configured a warm standby for a 2 TB Postgres
cluster (PostgreSQL 8.4).

Note: I do not know your database size or WAL archive generation rate.

The important considerations I made were as follows:

1. WAL archive transfer from production to standby depends on the network
bandwidth (I think you said there is no issue there) and the size of the
WAL archives.
2. Transfer rate can be optimized by compressing the WAL files. Each WAL
file would shrink from 16 MB to 2 - 3 MB (only in the case of a warm
standby; with streaming replication the size would decrease to 6 or 7 MB),
which makes a huge difference to the network bandwidth.

Compress the WAL archives on production, transfer them, and uncompress
them on the standby.
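
On 8.4 the compression can be wired straight into the archive commands;
a sketch assuming gzip and a hypothetical /archive mount:

```shell
# postgresql.conf on the primary (paths hypothetical):
#   archive_command = 'gzip -c %p > /archive/%f.gz'
# recovery.conf on the standby:
#   restore_command = 'gunzip -c /archive/%f.gz > %p'

# The round trip itself is plain gzip/gunzip; demo on a WAL-sized file:
dd if=/dev/zero of=/tmp/demo_wal bs=1M count=16 2>/dev/null
gzip -c /tmp/demo_wal > /tmp/demo_wal.gz
gunzip -c /tmp/demo_wal.gz > /tmp/demo_wal.restored
cmp -s /tmp/demo_wal /tmp/demo_wal.restored && echo "round-trip OK"
```

(Real WAL compresses less dramatically than the zero-filled demo file,
of course.)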

I did this successfully.

Hope this helps !

Thanks
Venkat

On Tue, Sep 6, 2011 at 2:57 AM, MirrorX mirr...@gmail.com wrote:

 the nodes communicate through 4Gbps ethernet so i dont think there is an
 issue there. probably some kind of misconfiguration of DRBD has occured. i
 will check on that tommorow. thx a lot :)
