Re: [GENERAL] logging statements in PostgreSQL

2013-10-01 Thread Jayadevan M
Hi,
I was looking for options to make sure SQL statements executed as part of functions
also get logged. Since this is a production system, I wanted to do it
without the EXPLAIN output also being written to the logs. Maybe that is not possible?
Regards,
Jayadevan


On Mon, Sep 30, 2013 at 5:08 PM, Albe Laurenz laurenz.a...@wien.gv.at wrote:

 Jayadevan M wrote:
  Thanks for the pointer. I do not really want to log the plans of
 queries, just the queries, execution
  time and a couple of other details (database, user). If I use the
 auto-explain module, it will end up
  printing the plan for every query execution in the log files?

 You can configure it so that only statements exceeding a certain
 duration will be logged.

 Yours,
 Laurenz Albe
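
For concreteness, a minimal sketch of the configuration being described
(assuming the auto_explain contrib module is available; the threshold value
is arbitrary, and note that auto_explain always logs the plan alongside the
nested statement):

LOAD 'auto_explain';                          -- or preload it via postgresql.conf
SET auto_explain.log_min_duration = '250ms';  -- only statements slower than this are logged
SET auto_explain.log_nested_statements = on;  -- also log statements executed inside functions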



Re: [GENERAL] Postgres replication question :- One master 2 slaves 9.0.10

2013-10-01 Thread akp geek
It is a firewall issue. They can't open the port that we requested.

So, as you mentioned, I will tunnel to the primary. Will give that a try.

regards


On Mon, Sep 30, 2013 at 11:10 PM, Chris Travers chris.trav...@gmail.com wrote:




 On Mon, Sep 30, 2013 at 7:14 PM, akp geek akpg...@gmail.com wrote:

 Hi all -

 Currently we have set up one master and one slave, which is working
 fine.  Now we need to replicate to another slave.  The problem we have is
 that the port we use on the primary cannot be reached from the new slave.
 We also can't change the primary port, because many applications are using it.

   I can't reach the primary using the recovery.conf on the
 new slave.

   Can you suggest how I can handle this? Appreciate your help.


 Why can't you reach it?  Is it a firewall?

 If so, you basically have two options.  The first is to configure your
 firewall to allow the connection.  The second is to tunnel through
 using another port/service like SSH or IPSec ESP.
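
For the SSH route, a rough sketch (host names, ports and the user are
hypothetical): the standby connects to a locally forwarded port instead of
the blocked one.

# on the standby: forward a local port to the primary's PostgreSQL port over SSH
ssh -N -L 5433:localhost:5432 postgres@primary.example.com

# recovery.conf on the standby then points at the tunnel endpoint
standby_mode = 'on'
primary_conninfo = 'host=localhost port=5433 user=replication'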

 Best Wishes,
 Chris Travers


 Regards




 --
 Best Wishes,
 Chris Travers

 Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
 lock-in.
 http://www.efficito.com/learn_more.shtml



[GENERAL] Pretend update

2013-10-01 Thread Perry Smith
With make I can do make -n and it just tells me what it would do but 
doesn't actually do anything.

How could I do that with SQL?

I want to write a really complicated (for me) SQL UPDATE statement.  I'm sure I 
won't get it right the first time.  Is there an easy way to not really make the 
changes?

I've thought about starting a transaction and then roll it back.  That would 
undo the changes.  But I won't be able to tell what the changes were.

Thank you for your time,
Perry





Re: [GENERAL] Pretend update

2013-10-01 Thread Adrian Klaver

On 10/01/2013 10:16 AM, Perry Smith wrote:

With make I can do make -n and it just tells me what it would do but 
doesn't actually do anything.

How could I do that with SQL?

I want to write a really complicated (for me) SQL UPDATE statement.  I'm sure I 
won't get it right the first time.  Is there an easy way to not really make the 
changes?

I've thought about starting a transaction and then roll it back.  That would 
undo the changes.  But I won't be able to tell what the changes were.


Assuming you are not doing this in a function, you can. Do UPDATE, then 
SELECT to see your changes or not and then ROLLBACK.
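
For instance, a minimal sketch of that workflow (table and column names are
made up):

BEGIN;
UPDATE accounts SET balance = balance * 1.05 WHERE region = 'EU';
SELECT * FROM accounts WHERE region = 'EU';   -- inspect what the UPDATE did
ROLLBACK;                                     -- nothing is kept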




Thank you for your time,
Perry




--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] Pretend update

2013-10-01 Thread Perry Smith

On Oct 1, 2013, at 12:23 PM, Adrian Klaver adrian.kla...@gmail.com wrote:

 On 10/01/2013 10:16 AM, Perry Smith wrote:
 With make I can do make -n and it just tells me what it would do but 
 doesn't actually do anything.
 
 How could I do that with SQL?
 
 I want to write a really complicated (for me) SQL UPDATE statement.  I'm 
 sure I won't get it right the first time.  Is there an easy way to not 
 really make the changes?
 
 I've thought about starting a transaction and then roll it back.  That would 
 undo the changes.  But I won't be able to tell what the changes were.
 
 Assuming you are not doing this in a function, you can. Do UPDATE, then 
 SELECT to see your changes or not and then ROLLBACK.

Ah... yes.  I forgot you can see the changes within the same transaction.  
Dohhh...

Thank you very much
Perry





Re: [GENERAL] Pretend update

2013-10-01 Thread Victor Yegorov
2013/10/1 Perry Smith pedz...@gmail.com


 On Oct 1, 2013, at 12:23 PM, Adrian Klaver adrian.kla...@gmail.com
 wrote:

  Assuming you are not doing this in a function, you can. Do UPDATE, then
 SELECT to see your changes or not and then ROLLBACK.

 Ah... yes.  I forgot you can see the changes within the same transaction.
  Dohhh...


It is possible to use the RETURNING clause of the UPDATE statement and avoid
the SELECT.

And yes, it is necessary to do this within a transaction and roll it back
afterwards.
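
A rough sketch of that, against a hypothetical table:

BEGIN;
UPDATE accounts
   SET balance = balance * 1.05
 WHERE region = 'EU'
RETURNING id, balance;   -- the rows as they would look after the update
ROLLBACK;                -- discard the changes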

-- 
Victor Y. Yegorov


Re: [GENERAL] Postgres replication question :- One master 2 slaves 9.0.10

2013-10-01 Thread akp geek
One more thing.. pardon me for being dumb

I want to set the 2nd slave up as a HOT STANDBY, not streaming.

What would the steps be?

On the primary I will have the archive_command.

On the slave, in the recovery.conf, the restore_command.

After I make my slave exactly like the master, how can the slave get the
files from the master (HOT STANDBY)?  The master has a trusted connection (ssh)
with the slave.

Thanks a lot for the help.





On Tue, Oct 1, 2013 at 10:14 AM, akp geek akpg...@gmail.com wrote:

 It is a firewall issue. They can't open the port that we requested.

 So, as you mentioned, I will tunnel to the primary. Will give that a try.

 regards


 On Mon, Sep 30, 2013 at 11:10 PM, Chris Travers 
 chris.trav...@gmail.com wrote:




 On Mon, Sep 30, 2013 at 7:14 PM, akp geek akpg...@gmail.com wrote:

 Hi all -

 Currently we have set up one master and one slave, which is working
 fine.  Now we need to replicate to another slave.  The problem we have is
 that the port we use on the primary cannot be reached from the new slave.
 We also can't change the primary port, because many applications are using it.

   I can't reach the primary using the recovery.conf on the
 new slave.

   Can you suggest how I can handle this? Appreciate your help.


 Why can't you reach it?  Is it a firewall?

 If so, you basically have two options.  The first is to configure
 your firewall to allow the connection.  The second is to tunnel
 through using another port/service like SSH or IPSec ESP.

 Best Wishes,
 Chris Travers


 Regards




 --
 Best Wishes,
 Chris Travers

 Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
 lock-in.
  http://www.efficito.com/learn_more.shtml





[GENERAL] Largest PG database known to man!

2013-10-01 Thread Mark Jones
Hi all,

We are currently working with a customer who is looking at a database of
between 200-400 TB! They are after any confirmation of PG working at this
size or anywhere near it.
Anyone out there worked on anything like this size in PG please? If so, can
you let me know more details etc..

   Mark Jones
   Principal Sales Engineer Emea


   http://www.enterprisedb.com/
   
   Email: mark.jo...@enterprisedb.com
   Tel: 44 7711217186
   Skype: Mxjones121







Re: [GENERAL] Largest PG database known to man!

2013-10-01 Thread John R Pierce

On 10/1/2013 2:49 PM, Mark Jones wrote:
We are currently working with a customer who is looking at a database 
of between 200-400 TB! They are after any confirmation of PG working 
at this size or anywhere near it.



is that really 200-400TB of relational data, or is it 199-399TB of bulk 
data (blobs or whatever) interspersed with some relational metadata?


what all is the usage pattern of this data?   that determines the 
feasibility of something far more than just the raw size.





--
john r pierce  37N 122W
somewhere on the middle of the left coast





Re: [GENERAL] Largest PG database known to man!

2013-10-01 Thread Mark Jones
Thanks for your quick response John.

From the limited information, it is mostly relational.
As for usage patterns, I do not have that yet.
I was just after a general feel of what is out there size wise.

Regards

 

   Mark Jones
   Principal Sales Engineer Emea


   http://www.enterprisedb.com/
   
   Email: mark.jo...@enterprisedb.com
   Tel: 44 7711217186
   Skype: Mxjones121


On 01/10/2013 22:56, John R Pierce pie...@hogranch.com wrote:

On 10/1/2013 2:49 PM, Mark Jones wrote:
 We are currently working with a customer who is looking at a database
 of between 200-400 TB! They are after any confirmation of PG working
 at this size or anywhere near it.


is that really 200-400TB of relational data, or is it 199-399TB of bulk
data (blobs or whatever) interspersed with some relational metadata?

what all is the usage pattern of this data?   that determines the
feasibility of something far more than just the raw size.




-- 
john r pierce  37N 122W
somewhere on the middle of the left coast





Re: [GENERAL] Largest PG database known to man!

2013-10-01 Thread Jeff Ross

Maybe some of these folks can chime in?

http://cds.u-strasbg.fr/

Simbad (and I think VizieR) runs on PostgreSQL.  A friend of mine is a
grad student in astronomy and he told me about them.


Jeff Ross
On 10/1/13 3:49 PM, Mark Jones wrote:

Hi all,

We are currently working with a customer who is looking at a database 
of between 200-400 TB! They are after any confirmation of PG working 
at this size or anywhere near it.
Anyone out there worked on anything like this size in PG please? If 
so, can you let me know more details etc..



 Mark Jones
   Principal Sales Engineer Emea

http://www.enterprisedb.com/

   Email: mark.jo...@enterprisedb.com
   Tel: 44 7711217186
   Skype: Mxjones121






Re: [GENERAL] Largest PG database known to man!

2013-10-01 Thread John R Pierce

On 10/1/2013 3:00 PM, Mark Jones wrote:

From the limited information, it is mostly relational.


phew... that's going to be a monster.  400TB on 600GB 15000rpm SAS
drives in RAID10 will require around 1400 drives.  at 25 disks per 2U
drive tray, that's two 6' racks of nothing but disks, and to maintain a
reasonable fanout to minimize IO bottlenecks, it would require on the order
of 25 SAS RAID cards, or a really big SAN with some serious IOPS.
and naturally, you should have at least 2 of these for availability.


if we assume the tables average 1KB/record (which is a fairly large 
record size even including indexing), you're looking at 400 billion 
records.   if you can populate these at 5000 records/second, it would 
take 2.5 years of 24/7 operation to populate that.


this sort of big data system is probably more suitable for something 
like hadoop+mongo or whatever on a cloud of 1000 nodes, not a monolithic 
SQL relational database.



--
john r pierce  37N 122W
somewhere on the middle of the left coast





Re: [GENERAL] Largest PG database known to man!

2013-10-01 Thread Julian
On 02/10/13 07:49, Mark Jones wrote:
 Hi all,

 We are currently working with a customer who is looking at a database
 of between 200-400 TB! They are after any confirmation of PG working
 at this size or anywhere near it.
 Anyone out there worked on anything like this size in PG please? If
 so, can you let me know more details etc..  
 

Wow that's awesome - but you know the difference between 200TB and 400TB
is quite significant (100%)? Like a whole bunch of cash
significant...unless we are talking GB.
But is that it? This isn't really fair, is this a test?
Jules




Re: [GENERAL] Postgres replication question :- One master 2 slaves 9.0.10

2013-10-01 Thread Sergey Konoplev
On Tue, Oct 1, 2013 at 2:03 PM, akp geek akpg...@gmail.com wrote:
 One more thing.. pardon me for being dumb

 I want to set the 2nd slave up as a HOT STANDBY, not streaming.

Hot standby assumes streaming. You cannot establish a hot
standby without using streaming replication. What is the reason not to
do it with streaming?

BTW, you will find the SSH tunnel instructions here
http://code.google.com/p/pgcookbook/wiki/Streaming_Replication_Setup

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com




Re: [GENERAL] Postgres replication question :- One master 2 slaves 9.0.10

2013-10-01 Thread akp geek
Thanks, I can try this. Any idea about the message below? Thanks for the
patience.

I tried tunneling this morning and it did not work. When I tried the
tunneling command from the URL you mentioned, I got the following error. I
will try to find out what exactly this means, but any help is appreciated.

command-line: line 0: Bad configuration option: ExitOnForwardFailure
command-line: line 0: Bad configuration option: ExitOnForwardFailure

Regards



On Tue, Oct 1, 2013 at 6:46 PM, Sergey Konoplev gray...@gmail.com wrote:

 On Tue, Oct 1, 2013 at 2:03 PM, akp geek akpg...@gmail.com wrote:
  One more thing.. pardon me for being dumb
 
  I want to set the 2nd slave up as a HOT STANDBY, not streaming.

 Hot standby assumes being streaming. You can not establish a hot
 standby without using streaming replication. What is the reason not to
 do it streaming?

 BTW, you will find the SSH tunnel instructions here
 http://code.google.com/p/pgcookbook/wiki/Streaming_Replication_Setup

 --
 Kind regards,
 Sergey Konoplev
 PostgreSQL Consultant and DBA

 http://www.linkedin.com/in/grayhemp
 +1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
 gray...@gmail.com



Re: [GENERAL] Postgres replication question :- One master 2 slaves 9.0.10

2013-10-01 Thread Jaime Casanova
On Tue, Oct 1, 2013 at 5:46 PM, Sergey Konoplev gray...@gmail.com wrote:
 On Tue, Oct 1, 2013 at 2:03 PM, akp geek akpg...@gmail.com wrote:
 One more thing.. pardon me for being dumb

 I want to set the 2nd slave up as a HOT STANDBY, not streaming.

 Hot standby assumes being streaming. You can not establish a hot
 standby without using streaming replication. What is the reason not to
 do it streaming?


Ah! why?

you don't need to use streaming replication for a hot standby, it
works perfectly well even if you replay everything from archive and
never do streaming.

but it would be a good idea to set hot_standby_feedback to on and
max_standby_archive_delay to something larger than 30s
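
As a rough sketch of that kind of archive-only hot standby on 9.0 (paths,
host name and values are made up; the WAL files travel over the existing
trusted ssh/rsync channel rather than a streaming connection):

# primary, postgresql.conf
wal_level = hot_standby
archive_mode = on
archive_command = 'rsync -a %p standby:/var/lib/pgsql/wal_archive/%f'

# standby, postgresql.conf
hot_standby = on
max_standby_archive_delay = 120s

# standby, recovery.conf
standby_mode = 'on'
restore_command = 'cp /var/lib/pgsql/wal_archive/%f %p'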

-- 
Jaime Casanova www.2ndQuadrant.com
Professional PostgreSQL: Soporte 24x7 y capacitaciĆ³n
Phone: +593 4 5107566 Cell: +593 987171157




Re: [GENERAL] Largest PG database known to man!

2013-10-01 Thread Stephen Frost
* John R Pierce (pie...@hogranch.com) wrote:
 if we assume the tables average 1KB/record (which is a fairly large
 record size even including indexing), you're looking at 400 billion
 records.   if you can populate these at 5000 records/second, it
 would take 2.5 years of 24/7 operation to populate that.

5000 1KB records per second is only 5MB/s or so, which is really quite
slow.  I can't imagine that they'd load all of this data by doing a
commit for each record, and you could load a *huge* amount of data *very*
quickly, in parallel, by using either unlogged tables or wal_level =
minimal and creating the tables in the same transaction that's loading
them.
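
A minimal sketch of that pattern (table and file names are hypothetical;
with wal_level = minimal and archiving off, a COPY into a table created in
the same transaction can skip WAL entirely):

BEGIN;
CREATE TABLE readings_batch_001 (
    device_id   bigint,
    recorded_at timestamptz,
    value       double precision
);
COPY readings_batch_001 FROM '/data/incoming/batch_001.csv' WITH (FORMAT csv);
COMMIT;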

 this sort of big data system is probably more suitable for something
 like hadoop+mongo or whatever on a cloud of 1000 nodes, not a
 monolithic SQL relational database.

Or a federated PG database using FDWs..

Sadly, I've not personally worked with a data system in the 100+TB range
w/ PG (we do have a Hadoop environment at that scale), but I've built
systems as large as 25TB which, built correctly, work very well.  Still,
I don't think I'd recommend building a single-image PG database at that
scale, but rather would shard it.

Thanks,

Stephen




Re: [GENERAL] Largest PG database known to man!

2013-10-01 Thread John R Pierce

On 10/1/2013 6:53 PM, Stephen Frost wrote:

I don't think I'd recommend building a single-image PG database on that
scale but rather would shard it.


sharding only works well if your data has natural divisions and you're 
not doing complex joins/aggregates across those divisions.




--
john r pierce  37N 122W
somewhere on the middle of the left coast





[GENERAL] Timestamp with and without timezone conversion confusion.

2013-10-01 Thread Tim Uckun
I have the following query.

with parsed_data as (
 SELECT
devicereportedtime ,
   DATE_TRUNC('minute', devicereportedtime - (EXTRACT(minute FROM
devicereportedtime)::integer % 5 || 'minutes')::interval) as interval_start
FROM systemevents
   WHERE devicereportedtime >= now() - interval '10 minutes'
  ORDER BY devicereportedtime asc
  limit 1
 ),
 grouped_data as (
 SELECT
 interval_start  at time zone 'Etc/UTC' as interval_start,
 MIN(devicereportedtime) at time zone 'Etc/UTC'   as min_datetime,
 MAX(devicereportedtime) at time zone 'Etc/UTC'   as max_datetime
 FROM parsed_data
 GROUP BY interval_start
 )

 SELECT
 interval_start,
 (interval_start AT TIME ZONE 'Africa/Monrovia')::timestamp with time zone
 as interval_start_in_africa,
  min_datetime,
 min_datetime AT TIME ZONE 'Europe/Berlin' as min_datetime_in_berlin,
  max_datetime,
 max_datetime AT TIME ZONE 'America/New_York' as max_datetime_in_new_york

  FROM grouped_data gd

The field devicereportedtime is timestamp without time zone. The database
is set to 'Pacific/Auckland' (my development machine) and the devices are
reporting UTC.

The grouped_data clause forces the parsing of the timestamp without
time zone to be UTC, as well as the min and max timestamps.  I then
want to present this data in other time zones.

When I run this query in pgadmin I get the following results

interval_start, interval_start_in_africa, min_datetime, min_datetime_in_berlin, max_datetime, max_datetime_in_new_york
2013-10-04 15:35:00+13, 2013-10-04 02:35:00+13, 2013-10-04 15:35:00+13, 2013-10-04 04:35:00, 2013-10-04 15:39:59+13, 2013-10-03 22:39:59
2013-10-04 15:25:00+13, 2013-10-04 02:25:00+13, 2013-10-04 15:28:11+13, 2013-10-04 04:28:11, 2013-10-04 15:29:59+13, 2013-10-03 22:29:59
2013-10-04 15:40:00+13, 2013-10-04 02:40:00+13, 2013-10-04 15:40:00+13, 2013-10-04 04:40:00, 2013-10-04 15:44:39+13, 2013-10-03 22:44:39
2013-10-04 15:30:00+13, 2013-10-04 02:30:00+13, 2013-10-04 15:30:00+13, 2013-10-04 04:30:00, 2013-10-04 15:34:59+13, 2013-10-03 22:34:59

Notice that all the offsets are set to +13, which is my laptop's offset. Why
don't they show the offset of Africa or Berlin or whatever?  Also note that
unless I explicitly cast the data as timestamp with time zone, all the
offsets go away and it's reported as timestamp without time zone.


So what am I doing wrong here?
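
For reference, a small sketch of the conversion rules involved: a timestamp
with time zone is always displayed in the session's TimeZone, while AT TIME
ZONE converts a timestamptz to a plain timestamp in the named zone, and vice
versa.

SET TimeZone = 'Pacific/Auckland';

SELECT
  -- plain timestamp interpreted as UTC: the result is a timestamptz,
  -- displayed in the session's zone, i.e. with a +13 offset
  TIMESTAMP '2013-10-04 02:35:00' AT TIME ZONE 'Etc/UTC'            AS utc_as_timestamptz,
  -- timestamptz converted to Berlin wall-clock time: the result is a
  -- plain timestamp, so no offset is shown
  TIMESTAMPTZ '2013-10-04 15:35:00+13' AT TIME ZONE 'Europe/Berlin' AS berlin_local;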


Re: [GENERAL] Largest PG database known to man!

2013-10-01 Thread Chris Travers
On Tue, Oct 1, 2013 at 3:00 PM, Mark Jones mark.jo...@enterprisedb.com wrote:

 Thanks for your quick response John.

 From the limited information, it is mostly relational.
 As for usage patterns, I do not have that yet.
 I was just after a general feel of what is out there size wise.


Usage patterns are going to be critical here.  There is a huge difference
between a large amount of data being used in an OLTP workflow and in a
DSS/OLAP workflow.  Additionally, I am concerned your indexes are going to
be very large.  Now, depending on your usage pattern, breaking things down
carefully regarding tablespaces and partial indexes may be enough.  However,
keep in mind that no table can be larger than 32TB.  At any rate, no matter
what solution you use, I don't see a generally tuned database being what
you want (which means you are tuning for workflow).

Now, I think your big limits in OLTP are going to be max table size (32TB)
and index size.  These can all be managed (and managed carefully) but they
are limits.
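
For illustration, a hypothetical sketch of inheritance-based partitioning,
the usual way at the time of keeping any single table well below the 32TB
limit (names and ranges are made up):

CREATE TABLE events (
    event_time  timestamptz NOT NULL,
    payload     text
);

-- one child table per month, with a CHECK constraint the planner can use
CREATE TABLE events_2013_10 (
    CHECK (event_time >= '2013-10-01' AND event_time < '2013-11-01')
) INHERITS (events);

CREATE INDEX events_2013_10_time_idx ON events_2013_10 (event_time);

-- rows are routed into the children (by a trigger or by the application);
-- with constraint_exclusion = partition, queries that filter on event_time
-- only scan the children whose constraints can match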

For OLAP you have a totally different set of concerns, and since you are
talking about aggregating a lot of data, vanilla PostgreSQL is going to be
a pain to get working as it is.  On the other hand, OLAP and large-db mixed
workloads are where Postgres-XC might really shine.  The complexity costs
there will likely be worth it in removing limitations on disk I/O and lack
of intraquery parallelism.

200TB is a lot of data.  400TB is twice that.  Either way you are going to
have a really complex set of problems to tackle regardless of what solution
you choose.

I have heard of db sizes in the 30-100TB range on PostgreSQL even before
Postgres-XC.  I am not sure beyond that.

Best Wishes,
Chris Travers


 Regards


 
Mark Jones
Principal Sales Engineer Emea


http://www.enterprisedb.com/

Email: mark.jo...@enterprisedb.com
Tel: 44 7711217186
Skype: Mxjones121


 On 01/10/2013 22:56, John R Pierce pie...@hogranch.com wrote:

 On 10/1/2013 2:49 PM, Mark Jones wrote:
  We are currently working with a customer who is looking at a database
  of between 200-400 TB! They are after any confirmation of PG working
  at this size or anywhere near it.
 
 
 is that really 200-400TB of relational data, or is it 199-399TB of bulk
 data (blobs or whatever) interspersed with some relational metadata?
 
 what all is the usage pattern of this data?   that determines the
 feasibility of something far more than just the raw size.
 
 
 
 
 --
 john r pierce  37N 122W
 somewhere on the middle of the left coast
 
 
 








-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more.shtml


Re: [GENERAL] Postgres replication question :- One master 2 slaves 9.0.10

2013-10-01 Thread Sergey Konoplev
On Tue, Oct 1, 2013 at 6:15 PM, Jaime Casanova ja...@2ndquadrant.com wrote:
 you don't need to use streaming replication for a hot standby, it
 works perfectly well even if you replay everything from archive and
 never do streaming.

Right, I mixed up the terms a bit.

 but it would be a good idea to set hot_standby_feedback to on and
 max_standby_archive_delay to something larger than 30s

Doesn't the replica need a connection to the master for hot_standby_feedback?

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com




Re: [GENERAL] Postgres replication question :- One master 2 slaves 9.0.10

2013-10-01 Thread Sergey Konoplev
On Tue, Oct 1, 2013 at 5:30 PM, akp geek akpg...@gmail.com wrote:
 I tried tunneling this morning and it did not work. when tried the tunneling
 command in the url you mentioned getting following error. I will try to find
 what exactly this mean , but any help is appreciated.

 command-line: line 0: Bad configuration option: ExitOnForwardFailure
 command-line: line 0: Bad configuration option: ExitOnForwardFailure

It looks like your SSH version or implementation doesn't support
ExitOnForwardFailure. Try to find an alternative.

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com

