Re: [HACKERS] Re: [COMMITTERS] pgsql: Basic Recovery Control functions for use in Hot Standby. Pause,

2011-03-16 Thread Fujii Masao
On Sat, Mar 12, 2011 at 1:59 AM, Robert Haas robertmh...@gmail.com wrote:
 On Fri, Mar 11, 2011 at 9:31 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Fujii Masao masao.fu...@gmail.com writes:
 Yeah, since I like the former, I changed the wordings in the doc and
 recovery.conf.sample. What about the attached patch?

 Please stop plastering the code with elog(FATAL) calls.  Those are
 hardly ever appropriate.  In contexts where it might be reasonable
 to do that, the error handler will treat ERROR like FATAL anyway.

 Another problem here is that we are defaulting to hot_standby=off and
 pause_at_recovery_target=on.  So AIUI, with this patch, if someone
 sets a recovery target without making any other changes to the
 configuration, their database won't start up.  That seems poor.

Should we flip the default value of pause_at_recovery_target?

 Even without the FATAL error, this whole pause_at_recovery_target
 thing is a little weird.  If someone sets a recovery target without
 making any other configuration changes, and Hot Standby is not
 enabled, then we will enter normal running, but if Hot Standby *is*
 enabled, then we'll replay to that point and pause recovery.  That
 seems a bit confusing.

That's because there is no way to resume recovery that was
paused by pause_at_recovery_target when hot standby is disabled,
i.e., in that case we cannot call pg_xlog_replay_resume() to resume
recovery.

How should recovery work when pause_at_recovery_target is
enabled but hot standby is disabled? We have three choices:

1. Forbid those settings, i.e., throw a FATAL error. Tom dislikes this
idea.
2. Ignore pause_at_recovery_target. When recovery reaches the
target, it ends without pausing, and then the server enters
normal processing mode. This would be unexpected from the
DBA's point of view, since he or she expects recovery to pause
at the target. To retry recovery, he or she needs to restore the
backup again.
3. Pause recovery even if hot standby is disabled. Since there
is no way to resume recovery, recovery would pause until
shutdown is requested.

For me, #1 looks like the most harmless of them. But are there better
ideas? Votes?
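For reference, the combination of settings at issue would look roughly like this; the target value and archive path are made up for illustration:

```
# recovery.conf -- hypothetical example of the settings under discussion
restore_command = 'cp /path/to/archive/%f %p'   # illustrative
recovery_target_time = '2011-03-16 12:00:00'    # illustrative target
pause_at_recovery_target = 'on'                 # default in the proposed patch

# hot_standby lives in postgresql.conf and defaults to 'off',
# which is exactly the combination being debated here
```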

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] How should the waiting backends behave in sync rep?

2011-03-16 Thread Fujii Masao
On Sat, Mar 12, 2011 at 3:12 AM, Robert Haas robertmh...@gmail.com wrote:
 There's a comment that looks related to this issue in syncrep.c.  It reads:

                /*
                 * We don't receive SIGHUPs at this point, so resetting
                 * synchronous_standby_names has no effect on waiters.
                 */

 It's unclear to me what this actually means.  Is there some reason we
 CAN'T receive SIGHUPs at that point, or have we just chosen not to
 (for unexplained reasons)?

Not sure. Simon?

It seems harmless to receive SIGHUP at that point.

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



Re: [HACKERS] Replication server timeout patch

2011-03-16 Thread Fujii Masao
On Sat, Mar 12, 2011 at 4:34 AM, Robert Haas robertmh...@gmail.com wrote:
 On Fri, Mar 11, 2011 at 8:29 AM, Fujii Masao masao.fu...@gmail.com wrote:
 I think we should consider making this change for 9.1.  This is a real
 wart, and it's going to become even more of a problem with sync rep, I
 think.

 Yeah, that's welcome! Please feel free to review the patch.

 I discussed this with Heikki on IM.

 I think we should rip all the GUC change stuff out of this patch and
 just decree that if you set a timeout, you get a timeout.  If you set
 this inconsistently with wal_receiver_status_interval, then you'll get
 lots of disconnects.  But that's your problem.  This may seem a little
 unfriendly, but the logic in here is quite complex and still isn't
 going to really provide that much protection against bad
 configurations.  The only realistic alternative I see is to define
 replication_timeout as a multiple of the client's
 wal_receiver_status_interval, but that seems quite annoyingly
 unfriendly.  A single replication_timeout that applies to all slaves
 doesn't cover every configuration someone might want, but it's simple
 and easy to understand and should cover 95% of cases.  If we find that
 it's really necessary to be able to customize it further, then we
 might go the route of adding the much-discussed standby registration
 stuff, where there's a separate config file or system table where you
 can stipulate that when a walsender with application_name=foo
 connects, you want it to get wal_receiver_status_interval=$FOO.  But I
 think that complexity can certainly wait until 9.2 or later.

 I also think that the default for replication_timeout should not be 0.
  Something like 60s seems about right.  That way, if you just use the
 default settings, you'll get pretty sane behavior - a connectivity
 hiccup that lasts more than a minute will bounce the client.  We've
 already gotten reports of people who thought they were replicating
 when they really weren't, and had to fiddle with settings and struggle
 to try to make it robust.  This should make things a lot nicer for
 people out of the box, but it won't if it's disabled out of the box.

 On another note, there doesn't appear to be any need to change the
 return value of WaitLatchOrSocket().

Agreed. I'll change the patch.
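For reference, the single-knob behavior Robert describes would be configured roughly like this; the values follow the suggestions in his message, not any committed defaults:

```
# postgresql.conf on the primary (sketch)
replication_timeout = 60s              # suggested default above; 0 disables

# postgresql.conf on the standby (sketch)
wal_receiver_status_interval = 10s     # must stay well below replication_timeout,
                                       # or the walsender will keep disconnecting
```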

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



Re: [HACKERS] Re: [COMMITTERS] pgsql: Basic Recovery Control functions for use in Hot Standby. Pause,

2011-03-16 Thread Simon Riggs
On Wed, 2011-03-16 at 16:29 +0900, Fujii Masao wrote:
 On Sat, Mar 12, 2011 at 1:59 AM, Robert Haas robertmh...@gmail.com wrote:
  On Fri, Mar 11, 2011 at 9:31 AM, Tom Lane t...@sss.pgh.pa.us wrote:
  Fujii Masao masao.fu...@gmail.com writes:
  Yeah, since I like the former, I changed the wordings in the doc and
  recovery.conf.sample. What about the attached patch?
 
  Please stop plastering the code with elog(FATAL) calls.  Those are
  hardly ever appropriate.  In contexts where it might be reasonable
  to do that, the error handler will treat ERROR like FATAL anyway.
 
  Another problem here is that we are defaulting to hot_standby=off and
  pause_at_recovery_target=on.  So AIUI, with this patch, if someone
  sets a recovery target without making any other changes to the
  configuration, their database won't start up.  That seems poor.
 
 Should we flip the default value of pause_at_recovery_target?

No, we shouldn't. Robert's comments are wrong and he shouldn't post such
things without testing them or reading the code.

  Even without the FATAL error, this whole pause_at_recovery_target
  thing is a little weird.  If someone sets a recovery target without
  making any other configuration changes, and Hot Standby is not
  enabled, then we will enter normal running, but if Hot Standby *is*
  enabled, then we'll replay to that point and pause recovery.  That
  seems a bit confusing.
 
 That's because there is no way to resume recovery which was
 paused by pause_at_recovery_target when hot standby is disabled,
 i.e., in that case we cannot call pg_xlog_replay_resume() to resume
 the recovery.
 
 How should recovery work when pause_at_recovery_target is
 enabled but hot standby is disabled? We have three choices:
 
 1. Forbid those settings, i.e., throw a FATAL error. Tom dislikes this
 idea.
 2. Ignore pause_at_recovery_target. When recovery reaches the
 target, it ends without pausing, and then the server gets into
 normal processing mode. This would be unexpected behavior
 from DBA's point of view because he or she expects that
 recovery is paused at the target. To retry recovery, he or she
 needs to restore the backup again.
 3. Pause recovery even if hot standby is disabled. Since there
 is no way to resume recovery, recovery would pause until
 shutdown is requested.
 
 For me, #1 looks like the most harmless of them. But are there better
 ideas? Votes?

(2) is how it works now.

(3) doesn't sound very sensible. Why would that be better than (2)?

There's lots of ways to misconfigure things, so I'm not too concerned
about this minor point.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services
 




Re: [HACKERS] How should the waiting backends behave in sync rep?

2011-03-16 Thread Simon Riggs
On Wed, 2011-03-16 at 16:36 +0900, Fujii Masao wrote:
 On Sat, Mar 12, 2011 at 3:12 AM, Robert Haas robertmh...@gmail.com wrote:
  There's a comment that looks related to this issue in syncrep.c.  It reads:
 
 /*
  * We don't receive SIGHUPs at this point, so resetting
  * synchronous_standby_names has no effect on waiters.
  */
 
  It's unclear to me what this actually means.  Is there some reason we
  CAN'T receive SIGHUPs at that point, or have we just chosen not to
  (for unexplained reasons)?
 
 Not sure. Simon?
 
 It seems harmless to receive SIGHUP at that point.

You pointed this out to me, so if you want I can explain it back to you
again ;-)   Signals are blocked over that section of code.

We could write a scary bit of code to get around that, but it smells
badly of kludge.

What do you think we should do?

-- 
 Simon Riggs   http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services
 




Re: Sync Rep and shutdown Re: [HACKERS] Sync Rep v19

2011-03-16 Thread Simon Riggs
On Tue, 2011-03-15 at 22:07 -0400, Robert Haas wrote:
 On Wed, Mar 9, 2011 at 11:11 PM, Fujii Masao masao.fu...@gmail.com wrote:
  Same as above. I think that it's more problematic to leave the code
  as it is. Because smart/fast shutdown can make the server get stuck
  until immediate shutdown is requested.
 
 I agree that the current state of affairs is a problem.  However,
 after looking through the code somewhat carefully, it looks a bit
 difficult to fix.  Suppose that backend A is waiting for sync rep.  A
 fast shutdown is performed.  Right now, backend A shrugs its shoulders
 and does nothing.  Not good.  But suppose we change it so that backend
 A closes the connection and exits without either confirming the commit
 or throwing ERROR/FATAL.  That seems like correct behavior, since, if
 we weren't using sync rep, the client would have to interpret that as
 indicating that the connection died in mid-COMMIT, and mustn't
 assume anything about the state of the transaction.  So far so good.
 
 The problem is that there may be another backend B waiting on a lock
 held by A.  If backend A exits cleanly (without a PANIC), it will
 remove itself from the ProcArray and release locks.  That wakes up B,
 which can now go do its thing.  If the operating system is a bit on
 the slow side delivering the signal to B, then the client to which B
 is connected might manage to see a database state that shows the
 transaction previously running in A as committed, even though that
 transaction wasn't committed.  That would stink, because the whole
 point of having A hold onto locks until the standby ack'd the commit
 was that no other transaction would see it as committed until it was
 replicated.
 
 This is a pretty unlikely race condition in practice but people who
 are running sync rep are intending precisely to guard against unlikely
 failure scenarios.
 
 The only idea I have for allowing fast shutdown to still be fast, even
 when sync rep is involved, is to shut down the system in two phases.
 The postmaster would need to stop accepting new connections, and first
 kill off all the backends that aren't waiting for sync rep.  Then,
 once all remaining backends are waiting for sync rep, we can have them
 proceed as above: close the connection without acking the commit or
 throwing ERROR/FATAL, and exit.  That's pretty complicated, especially
 given the rule that the postmaster mustn't touch shared memory, but I
 don't see any alternative.  


 We could just not allow fast shutdown, as
 now, but I think that's worse.

Please explain why not allowing fast shutdown makes it worse?

For me, I'd rather not support a whole bunch of dubious code, just to
allow you to type -m fast when you can already type -m immediate.

What extra capability are we actually delivering by doing that??
The risk of introducing a bug and thereby losing data far outweighs the
rather dubious benefit.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services
 




Re: [HACKERS] Replication server timeout patch

2011-03-16 Thread Fujii Masao
On Wed, Mar 16, 2011 at 4:49 PM, Fujii Masao masao.fu...@gmail.com wrote:
 Agreed. I'll change the patch.

Done. I attached the updated patch.

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center


replication_timeout_v6.patch
Description: Binary data



Re: [HACKERS] How should the waiting backends behave in sync rep?

2011-03-16 Thread Fujii Masao
On Wed, Mar 16, 2011 at 5:44 PM, Simon Riggs si...@2ndquadrant.com wrote:
 On Wed, 2011-03-16 at 16:36 +0900, Fujii Masao wrote:
 On Sat, Mar 12, 2011 at 3:12 AM, Robert Haas robertmh...@gmail.com wrote:
  There's a comment that looks related to this issue in syncrep.c.  It reads:
 
                 /*
                  * We don't receive SIGHUPs at this point, so resetting
                  * synchronous_standby_names has no effect on waiters.
                  */
 
  It's unclear to me what this actually means.  Is there some reason we
  CAN'T receive SIGHUPs at that point, or have we just chosen not to
  (for unexplained reasons)?

 Not sure. Simon?

 It seems harmless to receive SIGHUP at that point.

 You pointed this out to me, so if you want I can explain it back to you
 again ;-)   Signals are blocked over that section of code.

Yeah, I pointed out that SIGINT and SIGTERM are blocked there.
But not SIGHUP ;)

 We could write a scary bit of code to get around that, but it smells
 badly of kludge.

 What do you think we should do?

What I'm thinking is to make the waiting backends get out of the wait
state if synchronous_standby_names is emptied and the configuration file
is reloaded. IOW, I'd like to change SyncRepWaitForLSN() so that it
calls ProcessConfigFile() when the flag got_SIGHUP is true, and then
gets out of the wait loop if there is no name in synchronous_standby_names
(i.e., when the variable sync_standbys_defined is FALSE).
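That proposal can be modeled roughly as follows. This is a Python sketch, not the actual C of syncrep.c; the names (got_sighup, sync_standbys_defined, the reload helper) are illustrative stand-ins for the symbols mentioned above:

```python
# Toy model of the proposed change: a waiter re-reads
# synchronous_standby_names on SIGHUP and stops waiting once the list
# is emptied.  Names are stand-ins, not actual syncrep.c symbols.

class SyncRepWaiter:
    def __init__(self, synchronous_standby_names):
        self.synchronous_standby_names = synchronous_standby_names
        self.sync_standbys_defined = bool(synchronous_standby_names)
        self.got_sighup = False

    def process_config_file(self, new_value):
        # stand-in for ProcessConfigFile(): reload the GUC and the
        # derived sync_standbys_defined flag
        self.synchronous_standby_names = new_value
        self.sync_standbys_defined = bool(new_value)

    def wait_iteration(self, new_value=None):
        """One pass of the wait loop; True means 'stop waiting'."""
        if self.got_sighup:
            self.got_sighup = False
            self.process_config_file(new_value)
            if not self.sync_standbys_defined:
                return True   # no sync standbys any more: release the waiter
        return False          # keep waiting for the standby ack

waiter = SyncRepWaiter("standby1")
assert waiter.wait_iteration() is False     # nothing changed: still waiting
waiter.got_sighup = True                    # admin empties the GUC + SIGHUP
assert waiter.wait_iteration("") is True    # waiter is released
```

The point of the sketch is only the control flow: the reload happens inside the wait loop, so an emptied standby list releases waiters without any out-of-band signalling.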

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



[HACKERS] CREATE FOREIGN TABLE doc

2011-03-16 Thread Shigeru HANADA
I noticed that the CREATE FOREIGN TABLE documentation needs some
corrections.

  - server_name is not mentioned in Parameters section
  - = is unnecessary in OPTIONS

Please find attached a brief patch.

Regards,
--
Shigeru Hanada


create_foreign_table_doc.patch
Description: Binary data



Re: [HACKERS] How should the waiting backends behave in sync rep?

2011-03-16 Thread Simon Riggs
On Wed, 2011-03-16 at 18:41 +0900, Fujii Masao wrote:
 On Wed, Mar 16, 2011 at 5:44 PM, Simon Riggs si...@2ndquadrant.com wrote:
  On Wed, 2011-03-16 at 16:36 +0900, Fujii Masao wrote:
  On Sat, Mar 12, 2011 at 3:12 AM, Robert Haas robertmh...@gmail.com wrote:
   There's a comment that looks related to this issue in syncrep.c.  It 
   reads:
  
  /*
   * We don't receive SIGHUPs at this point, so resetting
   * synchronous_standby_names has no effect on waiters.
   */
  
   It's unclear to me what this actually means.  Is there some reason we
   CAN'T receive SIGHUPs at that point, or have we just chosen not to
   (for unexplained reasons)?
 
  Not sure. Simon?
 
  It seems harmless to receive SIGHUP at that point.
 
  You pointed this out to me, so if you want I can explain it back to you
  again ;-)   Signals are blocked over that section of code.
 
 Yeah, I pointed out that SIGINT and SIGTERM are blocked there.
 But not SIGHUP ;)
 
  We could write a scary bit of code to get around that, but it smells
  badly of kludge.
 
  What do you think we should do?
 
 What I'm thinking is to make the waiting backends get out of the wait
 state if synchronous_standby_names is emptied and configuration file
 is reloaded. IOW, I'd like to change SyncRepWaitForLSN() so that it
 calls ProcessConfigFile() when the flag got_SIGHUP is true, and then
 gets out of the wait loop if there is no name in synchronous_standby_names
 (i.e., when the variable sync_standbys_defined is FALSE).

I did try that and it didn't work.

If you think it will, I'll try again.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services
 




Re: Sync Rep and shutdown Re: [HACKERS] Sync Rep v19

2011-03-16 Thread Robert Haas
On Wed, Mar 16, 2011 at 1:43 AM, Fujii Masao masao.fu...@gmail.com wrote:
 On Wed, Mar 16, 2011 at 11:07 AM, Robert Haas robertmh...@gmail.com wrote:
 The problem is that there may be another backend B waiting on a lock
 held by A.  If backend A exits cleanly (without a PANIC), it will
 remove itself from the ProcArray and release locks.  That wakes up B,
 which can now go do its thing.  If the operating system is a bit on
 the slow side delivering the signal to B, then the client to which B
 is connected might manage to see a database state that shows the
 transaction previously running in A as committed, even though that
 transaction wasn't committed.  That would stink, because the whole
 point of having A hold onto locks until the standby ack'd the commit
 was that no other transaction would see it as committed until it was
 replicated.

 The lock can also be released when the transaction running in A is
 rolled back. So I could not understand why the client always wrongly
 sees the transaction as committed even though it's not committed.

The transaction IS committed, but only locally.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: Sync Rep and shutdown Re: [HACKERS] Sync Rep v19

2011-03-16 Thread Robert Haas
On Wed, Mar 16, 2011 at 4:51 AM, Simon Riggs si...@2ndquadrant.com wrote:
 On Tue, 2011-03-15 at 22:07 -0400, Robert Haas wrote:
 On Wed, Mar 9, 2011 at 11:11 PM, Fujii Masao masao.fu...@gmail.com wrote:
  Same as above. I think that it's more problematic to leave the code
  as it is. Because smart/fast shutdown can make the server get stuck
  until immediate shutdown is requested.

 I agree that the current state of affairs is a problem.  However,
 after looking through the code somewhat carefully, it looks a bit
 difficult to fix.  Suppose that backend A is waiting for sync rep.  A
 fast shutdown is performed.  Right now, backend A shrugs its shoulders
 and does nothing.  Not good.  But suppose we change it so that backend
 A closes the connection and exits without either confirming the commit
 or throwing ERROR/FATAL.  That seems like correct behavior, since, if
 we weren't using sync rep, the client would have to interpret that as
 indicating that the connection died in mid-COMMIT, and mustn't
 assume anything about the state of the transaction.  So far so good.

 The problem is that there may be another backend B waiting on a lock
 held by A.  If backend A exits cleanly (without a PANIC), it will
 remove itself from the ProcArray and release locks.  That wakes up B,
 which can now go do its thing.  If the operating system is a bit on
 the slow side delivering the signal to B, then the client to which B
 is connected might manage to see a database state that shows the
 transaction previously running in A as committed, even though that
 transaction wasn't committed.  That would stink, because the whole
 point of having A hold onto locks until the standby ack'd the commit
 was that no other transaction would see it as committed until it was
 replicated.

 This is a pretty unlikely race condition in practice but people who
 are running sync rep are intending precisely to guard against unlikely
 failure scenarios.

 The only idea I have for allowing fast shutdown to still be fast, even
 when sync rep is involved, is to shut down the system in two phases.
 The postmaster would need to stop accepting new connections, and first
 kill off all the backends that aren't waiting for sync rep.  Then,
 once all remaining backends are waiting for sync rep, we can have them
 proceed as above: close the connection without acking the commit or
 throwing ERROR/FATAL, and exit.  That's pretty complicated, especially
 given the rule that the postmaster mustn't touch shared memory, but I
 don't see any alternative.


 We could just not allow fast shutdown, as
 now, but I think that's worse.

 Please explain why not allowing fast shutdown makes it worse?

 For me, I'd rather not support a whole bunch of dubious code, just to
 allow you to type -m fast when you can already type -m immediate.

 What extra capability are we actually delivering by doing that??
 The risk of introducing a bug and thereby losing data far outweighs the
 rather dubious benefit.

Well, my belief is that when users ask the database to shut down, they
want it to work.  If I'm the only one who thinks that, then whatever.
But I firmly believe we'll get bug reports about this.
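The two-phase fast shutdown quoted above can be modeled roughly like this. The backend representation is invented for illustration; this is a Python stand-in for postmaster logic, not real code:

```python
# Sketch of the two-phase fast shutdown idea: stop accepting new
# connections, first kill every backend NOT waiting for sync rep, and
# only then let the remaining sync-rep waiters close their connections
# (without acking the commit or raising ERROR/FATAL) and exit.

def two_phase_fast_shutdown(backends):
    # Phase 1: terminate ordinary backends first, so no new transaction
    # can observe locks released by a waiter.
    phase1 = [b["pid"] for b in backends if not b["waiting_for_sync_rep"]]
    # Phase 2: the survivors are all sync-rep waiters; they exit without
    # confirming the commit.
    phase2 = [b["pid"] for b in backends if b["waiting_for_sync_rep"]]
    return phase1, phase2

backends = [
    {"pid": 101, "waiting_for_sync_rep": False},
    {"pid": 102, "waiting_for_sync_rep": True},
    {"pid": 103, "waiting_for_sync_rep": False},
]
assert two_phase_fast_shutdown(backends) == ([101, 103], [102])
```

The ordering is the whole point: because every non-waiter is gone before any waiter releases its locks, no surviving backend can see the unreplicated transaction as committed.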

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Re: [COMMITTERS] pgsql: Basic Recovery Control functions for use in Hot Standby. Pause,

2011-03-16 Thread Robert Haas
On Wed, Mar 16, 2011 at 4:41 AM, Simon Riggs si...@2ndquadrant.com wrote:
  Another problem here is that we are defaulting to hot_standby=off and
  pause_at_recovery_target=on.  So AIUI, with this patch, if someone
  sets a recovery target without making any other changes to the
  configuration, their database won't start up.  That seems poor.

 Should we flip the default value of pause_at_recovery_target?

 No, we shouldn't. Robert's comments are wrong and he shouldn't post such
 things without testing them or reading the code.

Did you miss the part where I said "with this patch"?  Because my
description of what happens with Fujii-san's patch applied does in
fact match the behavior of the code he wrote.  It doesn't match the
current behavior, nor was it intended to describe the current
behavior.

  Even without the FATAL error, this whole pause_at_recovery_target
  thing is a little weird.  If someone sets a recovery target without
  making any other configuration changes, and Hot Standby is not
  enabled, then we will enter normal running, but if Hot Standby *is*
  enabled, then we'll replay to that point and pause recovery.  That
  seems a bit confusing.

 That's because there is no way to resume recovery which was
 paused by pause_at_recovery_target when hot standby is disabled,
 i.e., in that case we cannot call pg_xlog_replay_resume() to resume
 the recovery.

 How should recovery work when pause_at_recovery_target is
 enabled but hot standby is disabled? We have three choices:

 1. Forbid those settings, i.e., throw a FATAL error. Tom dislikes this
     idea.
 2. Ignore pause_at_recovery_target. When recovery reaches the
     target, it ends without pausing, and then the server gets into
     normal processing mode. This would be unexpected behavior
     from DBA's point of view because he or she expects that
     recovery is paused at the target. To retry recovery, he or she
     needs to restore the backup again.
 3. Pause recovery even if hot standby is disabled. Since there
     is no way to resume recovery, recovery would pause until
     shutdown is requested.

 For me, #1 looks like the most harmless of them. But are there better
 ideas? Votes?

 (2) is how it works now.

 (3) doesn't sound very sensible. Why would that be better than (2)?

 There's lots of ways to misconfigure things, so I'm not too concerned
 about this minor point.

I agree that (3) is not very sensible.  I think there's a reasonable
debate to be had about whether (1) or (2) is better.  Like you, I
prefer #2 (the current behavior) to #1 (the proposed patch); but for
my money it would be a little less confusing if the default were
pause_at_recovery_target=false.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Fwd: index corruption in PG 8.3.13

2011-03-16 Thread Nikhil Sontakke
Hi,

 To summarize, as I see it - the zeroed out block 523 should have been
 the second left-most leaf and should have pointed to 522. Thus
 re-establishing the index chain

 524 -> 523 -> 522 -> 277 -> ...

 Was there a machine restart in the picture as well?


It seems there might have been a machine restart involved too. So I
guess even WAL writing could have been impacted.

But even if VF was ongoing at the time of restart, the WAL replay on
restart should not do anything since this will be a non-committed
transaction?

Also I was looking at ReadRecord and saw that it logs a message for
failed CRC blocks but the WAL replay just stops at that point since it
returns a NULL. Is there a way to find out if more blocks follow in
the wake of this failed block (should be a matter of calling
ReadRecord with NULL as a first argument I think)? If so maybe we can
warn further that the error was encountered in the middle of WAL replay.
However, the last block too could be a CRC check-failure candidate...

BTW, is there a possibility to encounter trailing blocks with CRC
failures regularly? For transactions that were ongoing at the time of
shutdown and did not get a chance to commit or WAL log properly?

Regards,
Nikhils



Re: [HACKERS] Fwd: index corruption in PG 8.3.13

2011-03-16 Thread Robert Haas
On Wed, Mar 16, 2011 at 7:51 AM, Nikhil Sontakke
nikhil.sonta...@enterprisedb.com wrote:
 Hi,

 To summarize, as I see it - the zeroed out block 523 should have been
 the second left-most leaf and should have pointed out to 522. Thus
 re-establishing the index chain

 524 -> 523 -> 522 -> 277 -> ...

 Was there a machine restart in the picture as well?


 It seems there might have been a machine restart involved too. So I
 guess even WAL writing could have been impacted.

 But even if VF was ongoing at the time of restart, the WAL replay on
 restart should not do anything since this will be a non-committed
 transaction?

That's not how it works.  Replaying an uncommitted transaction
shouldn't result in any user-visible changes, but it still does stuff.

 Also I was looking at ReadRecord and saw that it logs a message for
 failed CRC blocks but the WAL replay just stops at that point since it
 returns a NULL. Is there a way to find out if more blocks follow in
 the wake of this failed block (should be a matter of calling
 ReadRecord with NULL as a first argument I think)? If so maybe we can
 warn further that the error was encountered in the middle of WAL replay.
 However, the last block too could be a CRC check-failure candidate...

In general, when we WAL-log, we're writing over a previous WAL segment
that's been recycled.  A failed CRC is indistinguishable from
end-of-WAL, because we expect there to be arbitrary garbage bytes in
the file after the end of WAL position.
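A toy illustration of that point follows. The record format here (4-byte length, 4-byte CRC32, payload) is invented for the example; PostgreSQL's real XLOG record layout differs:

```python
import struct
import zlib

def make_record(payload):
    """Build one toy WAL record: <length><crc32><payload>."""
    return struct.pack("<II", len(payload), zlib.crc32(payload)) + payload

def read_record(buf, offset=0):
    """Read one toy record; return (payload, next_offset), or
    (None, offset) on a bad CRC.  A CRC mismatch is indistinguishable
    from the arbitrary garbage left in a recycled segment past the true
    end of WAL, so the reader simply treats it as end-of-WAL."""
    if offset + 8 > len(buf):
        return None, offset
    length, crc = struct.unpack_from("<II", buf, offset)
    payload = buf[offset + 8:offset + 8 + length]
    if len(payload) < length or zlib.crc32(payload) != crc:
        return None, offset            # bad CRC == assume end of WAL
    return payload, offset + 8 + length

wal = make_record(b"rec1") + make_record(b"rec2")
rec, off = read_record(wal)
assert rec == b"rec1"
# flip one byte in the second record: replay quietly stops right there
corrupted = wal[:off + 8] + b"X" + wal[off + 9:]
assert read_record(corrupted, off) == (None, off)
```

This is why replay cannot "warn and continue" past a failed CRC: nothing in the data distinguishes a genuinely corrupt record from the expected trailing garbage.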

 BTW, is there a possibility to encounter trailing blocks with CRC
 failures regularly? For transactions that were ongoing at the time of
 shutdown and did not get a chance to commit or WAL log properly?

Well you might have a torn page if there was a *system* crash in the
middle of recovery, but in theory even that shouldn't break anything,
because the system shouldn't rely on the fsync being complete until it
actually is.  Of course, as you mentioned earlier, it's not impossible
there's a bug in the recovery code.  But if an OS crash is involved,
another possibility is that something went wrong with the fsync -
maybe there's a lying writeback cache between PG and the platter, for
example.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] CREATE FOREIGN TABLE doc

2011-03-16 Thread Robert Haas
On Wed, Mar 16, 2011 at 6:26 AM, Shigeru HANADA
han...@metrosystems.co.jp wrote:
 I noticed that CREATE FOREIGN TABLE document would need some
 corrections.

  - server_name is not mentioned in Parameters section
  - = is unnecessary in OPTIONS

 Please find attached a brief patch.

Committed, thanks.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



[HACKERS] Rectifying wrong Date outputs

2011-03-16 Thread Piyush Newe
Hi,

I was randomly testing some date-related stuff on PG & observed that the
outputs were wrong.

e.g.
postgres=# SELECT TO_DATE('01-jan-2010',  'DD-MON-YY');
  to_date
------------
 3910-01-01   <-- Look at this
(1 row)

postgres=# SELECT TO_DATE('01-jan-2010',  'DD-MON-YYYY');
  to_date
------------
 2010-01-01
(1 row)

A user can provide the year in any format, i.e., 1-digit to 4-digit; in
each case the internal representation of the century should be logically
correct.

Considering this situation, I am drawing the tables below & logging my
observations here. This might help us demonstrate where we are lacking in
PG.

*Data Format PostgreSQL EDBAS*
*TO_DATE('01-jan-1',  'DD-MON-Y') 2001-01-01 01-JAN-2001*
*TO_DATE('01-jan-1',  'DD-MON-YY') 2001-01-01 01-JAN-2001*
*TO_DATE('01-jan-1',  'DD-MON-YYY') 2001-01-01 01-JAN-2001*
*TO_DATE('01-jan-1',  'DD-MON-') 0001-01-01 01-JAN-0001*

In this case, all the outputs seem correct, and the year part of the output
seems logical.

Data Format                           PostgreSQL   EDBAS
TO_DATE('01-jan-10',  'DD-MON-Y')     2010-01-01   Error
TO_DATE('01-jan-10',  'DD-MON-YY')    2010-01-01   01-JAN-2010
TO_DATE('01-jan-10',  'DD-MON-YYY')   2010-01-01   01-JAN-2010
TO_DATE('01-jan-10',  'DD-MON-YYYY')  0010-01-01   01-JAN-0010

In this case, the last 3 cases behave correctly in PG, whereas in the 1st
case the output is not correct, since the format ('Y') is narrower than the
actual input ('10'). But PG ignores this condition and emits whatever was
input; the output year might not be the year the user is expecting. Hence
PG should throw an error.

Data Format                           PostgreSQL   EDBAS
TO_DATE('01-jan-067',  'DD-MON-Y')    2067-01-01   Error
TO_DATE('01-jan-111',  'DD-MON-YY')   2011-01-01   Error
TO_DATE('01-jan-678',  'DD-MON-YYY')  1678-01-01   01-JAN-2678
TO_DATE('01-jan-001',  'DD-MON-YYYY') 0001-01-01   01-JAN-0001

In this case, only the last case seems correct in PG. The other cases are
not really logical; rather, the output is vague. I haven't seen any PG
documentation which says anything like: if the year is 111...999 then the
century is 2000, and if it is 001...099 then the century is 1000. Moreover,
the 1st and 2nd cases should throw an error, since the output formats
('Y' and 'YY') really do not match the inputs ('067' and '111'),
respectively.

Data Format                           PostgreSQL   EDBAS
TO_DATE('01-jan-2010',  'DD-MON-Y')   4010-01-01   Error
TO_DATE('01-jan-2010',  'DD-MON-YY')  3910-01-01   Error
TO_DATE('01-jan-2010',  'DD-MON-YYY') 3010-01-01   Error
TO_DATE('01-jan-2010',  'DD-MON-YYYY') 2010-01-01  01-JAN-2010

In this case, PG gives wrong output in the first 3 cases. Those need to be
rectified; again, PG should throw an error in these cases, because the
output format does not match the input data. The exception here is the 2nd
case, where the century is well-defined.

After observing all the above cases, the summary would be: if the output
format is narrower than the actual input value, then PG should throw an
error. Using this rule of thumb, we can fix the wrong outputs in PG. I have
made the necessary changes to the code and am attaching the patch with this
email.

In the patch, I have written one centralized function which decides the
century depending upon the given input format.

Thoughts? Any other ideas on this?
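To make the proposed rule of thumb concrete, here is a tiny standalone
sketch (the helper name and the century handling are hypothetical
simplifications for illustration — the actual patch hooks into
formatting.c and handles more cases):

```c
#include <assert.h>

/*
 * Sketch of the proposed century rule: given the parsed year value, the
 * number of digits the user actually supplied, and the number of Y's in
 * the format, return the full year, or -1 to signal that an error should
 * be thrown because the input is wider than the format.
 */
int
decide_century(int year, int ndigits, int nys)
{
    if (ndigits > nys)
        return -1;              /* e.g. '10' against 'Y': reject */
    if (nys >= 4)
        return year;            /* YYYY: take the value literally */
    return 2000 + year;         /* 1-3 digit years: current century
                                 * (simplified; the 3-digit case is
                                 * debatable, as discussed below) */
}
```

With this, TO_DATE('01-jan-1', 'DD-MON-Y') maps to 2001,
TO_DATE('01-jan-10', 'DD-MON-Y') raises an error, and
TO_DATE('01-jan-001', 'DD-MON-YYYY') stays 0001, matching the tables
above.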

-- 
Piyush S Newe
Principal Engineer
EnterpriseDB
office: +91 20 3058 9500
www.enterprisedb.com

Website: www.enterprisedb.com
EnterpriseDB Blog: http://blogs.enterprisedb.com/
Follow us on Twitter: http://www.twitter.com/enterprisedb

diff --git a/src/backend/utils/adt/formatting.c b/src/backend/utils/adt/formatting.c
index aba1145..ad42126 100644
--- a/src/backend/utils/adt/formatting.c
+++ b/src/backend/utils/adt/formatting.c
@@ -989,6 +989,7 @@ static DCHCacheEntry *DCH_cache_getnew(char *str);
 static NUMCacheEntry *NUM_cache_search(char *str);
 static NUMCacheEntry *NUM_cache_getnew(char *str);
 static void NUM_cache_remove(NUMCacheEntry *ent);
+static int DecideCentury(int *in, int len);
 
 
 /* --
@@ -2733,21 +2734,14 @@ DCH_from_char(FormatNode *node, char *in, TmFromChar *out)
 			case DCH_IYYY:
 from_char_parse_int(&out->year, &s, n);
 out->yysz = 4;
+DecideCentury(&out->year, out->yysz);
 s += SKIP_THth(n->suffix);
 break;
 			case 

Re: [HACKERS] How should the waiting backends behave in sync rep?

2011-03-16 Thread Robert Haas
On Wed, Mar 16, 2011 at 6:46 AM, Simon Riggs si...@2ndquadrant.com wrote:
 Yeah, I pointed out that SIGINT and SIGTERM are blocked there.
 But not SIGHUP ;)

  We could write a scary bit of code to get around that, but it smells
  badly of kludge.
 
  What do you think we should do?

 What I'm thinking is to make the waiting backends get out of the wait
 state if synchronous_standby_names is emptied and configuration file
 is reloaded. IOW, I'd like to change SyncRepWaitForLSN() so that it
 calls ProcessConfigFile() when the flag got_SIGHUP is true, and then
 gets out of the wait loop if there is no name in synchronous_standby_names
 (i.e., when the variable sync_standbys_defined is FALSE).

 I did try that and it didn't work.

 If you think it will, I'll try again.

There are two potential problems here.  One is that we don't normally
reload the config file except in between toplevel commands, so doing
it here would be slightly inconsistent.  A bigger problem is that
doing complicated stuff at this particular point in the code is a
really bad idea.  It's too late for the commit to fail, and as I
learned the hard way yesterday while fooling around with it, anything
that throws an ERROR here causes a database-wide PANIC and abnormal
system shutdown.  So doing something as complicated as reloading the
config file doesn't seem like a good idea.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Fwd: index corruption in PG 8.3.13

2011-03-16 Thread Nikhil Sontakke
Hi,

 Of course, as you mentioned earlier, it's not impossible
 there's a bug in the recovery code.

Yeah, I was looking at the repair_frag function in 8.3.13 (yup it's
ugly!) and found out that the normal ExecInsertIndexTuples call is
used to insert the index entries. That is standard index code used
everywhere. So btree WAL bugs in this code path should be pretty rare
I would think..

 But if an OS crash is involved,
 another possibility is that something went wrong with the fsync -
 maybe there's a lying writeback cache between PG and the platter, for
 example.


Yup, plan to re-confirm this too.

Thanks Robert!

Regards,
Nikhils



Re: [HACKERS] Fwd: index corruption in PG 8.3.13

2011-03-16 Thread Alvaro Herrera
Excerpts from Nikhil Sontakke's message of mié mar 16 08:51:00 -0300 2011:
 Hi,
 
  To summarize, as I see it - the zeroed out block 523 should have been
  the second left-most leaf and should have pointed out to 522. Thus
  re-establishing the index chain
 
  524 -> 523 -> 522 -> 277 -> ...
 
  Was there a machine restart in the picture as well?
 
 
 It seems there might have been a machine restart involved too. So I
 guess even WAL writing could have been impacted.

Maybe the filesystem decided to fill a couple of FS pages (one PG page)
with zeroes on recovery due to believing that it wasn't fsynced at the
time of the crash.

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] Re: [COMMITTERS] pgsql: Basic Recovery Control functions for use in Hot Standby. Pause,

2011-03-16 Thread Tom Lane
Fujii Masao masao.fu...@gmail.com writes:
 How should recovery work when pause_at_recovery_target is
 enabled but hot standby is disabled? We have three choices:

 1. Forbid those settings, i.e., throw FATAL error. Tom dislikes this
 idea.

No, I didn't say that.  I said not to write elog(FATAL).  If the
combination is nonsensical then it's fine to forbid it, but you don't
need FATAL for that.  In particular, attempting to change to a
disallowed setting after system startup should not result in crashing
the postmaster.  And it won't, if you just use the normal error level
for complaining about an invalid GUC setting.

regards, tom lane



Re: Sync Rep and shutdown Re: [HACKERS] Sync Rep v19

2011-03-16 Thread Robert Haas
On Wed, Mar 16, 2011 at 7:39 AM, Robert Haas robertmh...@gmail.com wrote:
 The only idea I have for allowing fast shutdown to still be fast, even
 when sync rep is involved, is to shut down the system in two phases.
 The postmaster would need to stop accepting new connections, and first
 kill off all the backends that aren't waiting for sync rep.  Then,
 once all remaining backends are waiting for sync rep, we can have them
 proceed as above: close the connection without acking the commit or
 throwing ERROR/FATAL, and exit.  That's pretty complicated, especially
 given the rule that the postmaster mustn't touch shared memory, but I
 don't see any alternative.

 What extra capability are we actually delivering by doing that??
 The risk of introducing a bug and thereby losing data far outweighs the
 rather dubious benefit.

 Well, my belief is that when users ask the database to shut down, they
 want it to work.  If I'm the only one who thinks that, then whatever.
 But I firmly believe we'll get bug reports about this.

On further review, the approach proposed above doesn't really work,
because a backend can get a SIGTERM either because the system is doing
a fast shutdown or because a user has issued
pg_terminate_backend(PID); and in the latter case we have to continue
letting in connections.

As of right now, synchronous replication continues to wait even when:

- someone tries to perform a fast shutdown
- someone tries to kill the backend using pg_terminate_backend()
- someone attempts to cancel the query using pg_cancel_backend() or by
pressing control-C in, for example, psql
- someone attempts to shut off synchronous replication by changing
synchronous_standby_names in postgresql.conf and issuing pg_ctl reload

We've worked pretty hard to ensure that things like query cancel and
shutdown work quickly and reliably, and I don't think we want to make
synchronous replication the one part of the system that departs from
that general principle.

So, patch attached.  This patch arranges to do the following things:

1. If a die interrupt is received (pg_terminate_backend or fast
shutdown), then terminate the sync rep wait and arrange for the
connection to be closed without acknowledging the commit (but do send
a warning message back).  The commit still happened, though, so other
transactions will see its effects.  This is unavoidable unless we're
willing to either ignore attempts to terminate a backend waiting for
sync rep, or panic the system when it happens, and I don't think
either of those is appropriate.

2. If a query cancel interrupt is received (pg_cancel_backend or ^C),
then cancel the sync rep wait and issue a warning before acknowledging
the commit.  Again, the alternative is to either ignore the cancel or
panic, neither of which I believe to be what users will want.

3. If synchronous_standby_names is changed to '' by editing
postgresql.conf and issuing pg_ctl reload, then cancel all waits in
progress and wake everybody up.  As I mentioned before, reloading the
config file from within the waiting backend (which can't safely throw
an error) seems risky, so what I did instead is made WAL writer
responsible for handling this.  Nobody's allowed to wait for sync rep
unless a global shared memory flag is set, and the WAL writer process
is responsible for setting and clearing this flag when the config file
is reloaded.  This has basically no performance cost; WAL writer only
ever does any extra work at all with this code when it receives a
SIGHUP, and even then the work is trivial except in the case where
synchronous_standby_names has changed from empty to non-empty or vice
versa.  The advantage of putting this in WAL writer rather than, say,
bgwriter is that WAL writer doesn't have nearly as many jobs to do and
they don't involve nearly as much I/O, so the chances of a long delay
due to the process being busy are much less.

4. Remove the SYNC_REP_MUST_DISCONNECT state, which actually does
absolutely nothing right now, despite what the name would seem to
imply.  In particular, it doesn't arrange for any sort of disconnect.
This patch does arrange for that, but not using this mechanism.

5. The existing code relies on being able to read MyProc->syncRepState
without holding the lock, even while a WAL sender must be updating it
in another process.  I'm not 100% sure this is safe on a
multi-processor machine with weak memory ordering.  In practice, the
chances of something going wrong here seem extremely small.  You'd
need something like this: a WAL sender updates MyProc->syncRepState
just after the wait timeout expires and before the latch is reset, but
the regular backend fails to see the state change due to
memory-ordering effects and drops through the loop, waiting another 60
s, and then finally wakes up and completes the wait (but a minute
later than expected).  That seems vanishingly unlikely but it's also
simple to protect against, so I did.
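For readers following along, the per-wakeup decision described in points
1-3 can be modeled as a small pure function. All names here are
hypothetical; the real code of course works with latches and the
ProcDiePending/QueryCancelPending interrupt flags rather than plain
booleans:

```c
#include <assert.h>
#include <stdbool.h>

/* Possible outcomes of one pass through the sync rep wait loop. */
enum wait_result {
    WAIT_COMPLETE,   /* standby acked: finish the commit normally */
    WAIT_DIE,        /* fast shutdown/terminate: close conn, no commit ack */
    WAIT_CANCEL,     /* query cancel: warn, then ack the commit */
    WAIT_SYNC_OFF,   /* synchronous_standby_names emptied: stop waiting */
    WAIT_AGAIN       /* nothing pending: go back to sleep on the latch */
};

/* Simplified model of the interrupt checks described in points 1-3;
 * the order encodes the precedence: die beats cancel beats config. */
enum wait_result
classify_wait(bool acked, bool die_pending, bool cancel_pending,
              bool sync_rep_enabled)
{
    if (die_pending)
        return WAIT_DIE;
    if (cancel_pending)
        return WAIT_CANCEL;
    if (!sync_rep_enabled)
        return WAIT_SYNC_OFF;
    return acked ? WAIT_COMPLETE : WAIT_AGAIN;
}
```

The point of the model is only to show that every exit path either acks
the commit with a warning or closes the connection without acking it —
nothing silently loses the fact that the commit happened.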

Review appreciated.

Thanks,

-- 
Robert Haas
EnterpriseDB: 

Re: [HACKERS] Rectifying wrong Date outputs

2011-03-16 Thread Robert Haas
On Wed, Mar 16, 2011 at 8:21 AM, Piyush Newe
piyush.n...@enterprisedb.com wrote:
 Data Format                           PostgreSQL   EDBAS
 TO_DATE('01-jan-10',  'DD-MON-Y')     2010-01-01   Error
 TO_DATE('01-jan-10',  'DD-MON-YY')    2010-01-01   01-JAN-2010
 TO_DATE('01-jan-10',  'DD-MON-YYY')   2010-01-01   01-JAN-2010
 TO_DATE('01-jan-10',  'DD-MON-YYYY')  0010-01-01   01-JAN-0010
 In this case, it seems in last 3 cases PG is behaving correctly. Whereas in
 1st case the output is not correct since the Format ('Y') is lesser than the
 actual input ('10'). But PG is ignoring this condition and throwing whatever
 is input. The output year is might not be the year, what user is expecting.
 Hence PG should throw an error.

I can't get worked up about this.  If there's a consensus that
throwing an error here is better, fine, but on first blush the PG
behavior doesn't look unreasonable to me.

 Data Format                           PostgreSQL   EDBAS
 TO_DATE('01-jan-2010',  'DD-MON-Y')   4010-01-01   Error
 TO_DATE('01-jan-2010',  'DD-MON-YY')  3910-01-01   Error
 TO_DATE('01-jan-2010',  'DD-MON-YYY') 3010-01-01   Error
 TO_DATE('01-jan-2010',  'DD-MON-YYYY') 2010-01-01  01-JAN-2010

These cases look a lot stranger.  I'm inclined to think that if the
number of digits specified exceeds the number of Ys, then we can
either (1) throw an error, as you suggest or (2) give the same answer
we would have given if the number of Ys were equal to the number of
digits given.  In other words, if we're not going to error out here,
all of these should return 2010-01-01.
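Option (2) amounts to widening the effective format to the number of
digits actually supplied. A hedged sketch of that idea (deliberately
simplified: real to_date uses a sliding window for 2-digit years, and the
1000-offset for 3-digit years merely mirrors the current 678 -> 1678
behavior):

```c
#include <assert.h>

/* Hypothetical illustration of option (2): behave as though the number
 * of Y's equaled the number of digits given.  The century offsets are
 * deliberately simplified and not a statement of what PG should do. */
int
interpret_year(int value, int ndigits, int nys)
{
    int width = (ndigits > nys) ? ndigits : nys;  /* widen the format */

    if (width >= 4)
        return value;           /* 4+ digits: take the year literally */
    if (width == 3)
        return 1000 + value;    /* mirrors current 678 -> 1678 */
    return 2000 + value;        /* 1-2 digits: current century */
}
```

Under this rule all four '01-jan-2010' cases come out as 2010, as argued
above, while the existing behavior for inputs no wider than their format
is unchanged.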

 Data Format                           PostgreSQL   EDBAS
 TO_DATE('01-jan-067',  'DD-MON-Y')    2067-01-01   Error
 TO_DATE('01-jan-111',  'DD-MON-YY')   2011-01-01   Error
 TO_DATE('01-jan-678',  'DD-MON-YYY')  1678-01-01   01-JAN-2678
 TO_DATE('01-jan-001',  'DD-MON-YYYY') 0001-01-01   01-JAN-0001

These are so strange that it's hard to reason about them; who uses
three-digit years?  In the third case above, you have EDBAS deciding
that 678 means 2678 instead of 1678, but that seems quite arbitrary;
1678 seems just as plausible.  But the behavior in the second case looks
wrong (shouldn't the answer be either  or 2111?) and the first case looks
inconsistent with the third one (why does 067 mean 2067 rather than 1967,
while 678 means 1678 rather than 2678?).

I'm inclined to think that we have a bug here in the case where the #
of digits given is greater than the # of Ys.  See also this:

rhaas=# select to_date('01-jan-678', 'DD-MON-Y');
  to_date
------------
 2678-01-01
(1 row)

rhaas=# select to_date('01-jan-678', 'DD-MON-YY');
  to_date
------------
 2578-01-01
(1 row)

rhaas=# select to_date('01-jan-678', 'DD-MON-YYY');
  to_date
------------
 1678-01-01
(1 row)

It's a lot less clear to me that we have a bug in the other direction
(# of digits given is less than the # of Ys), but maybe

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] patch: tabcomple for pset - format and linestyle

2011-03-16 Thread Robert Haas
On Mon, Mar 14, 2011 at 11:26 AM, Pavel Stehule pavel.steh...@gmail.com wrote:
 I am sending a very simple patch that enhance a autocomplete for
 format and linestyle attributes

Committed.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Rectifying wrong Date outputs

2011-03-16 Thread Bruce Momjian
Robert Haas wrote:
 On Wed, Mar 16, 2011 at 8:21 AM, Piyush Newe
 piyush.n...@enterprisedb.com wrote:
  Data Format                           PostgreSQL EDBAS
  TO_DATE('01-jan-10',  'DD-MON-Y')     2010-01-01 Error
  TO_DATE('01-jan-10',  'DD-MON-YY')    2010-01-01 01-JAN-2010
  TO_DATE('01-jan-10',  'DD-MON-YYY')   2010-01-01 01-JAN-2010
  TO_DATE('01-jan-10',  'DD-MON-YYYY')  0010-01-01 01-JAN-0010
  In this case, it seems in last 3 cases PG is behaving correctly. Whereas in
  1st case the output is not correct since the Format ('Y') is lesser than the
  actual input ('10'). But PG is ignoring this condition and throwing whatever
  is input. The output year is might not be the year, what user is expecting.
  Hence PG should throw an error.
 
 I can't get worked up about this.  If there's a consensus that
 throwing an error here is better, fine, but on first blush the PG
 behavior doesn't look unreasonable to me.
 
  Data Format                           PostgreSQL EDBAS

To clarify, the user is reporting EDB Advanced Server, though the
community PG has the same issues, or at least similar;  with git HEAD:

test=# SELECT TO_DATE('01-jan-2010',  'DD-MON-YY');
  to_date
------------
 3910-01-01
(1 row)

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] Rectifying wrong Date outputs

2011-03-16 Thread Robert Haas
On Wed, Mar 16, 2011 at 5:52 PM, Bruce Momjian br...@momjian.us wrote:
 Robert Haas wrote:
 On Wed, Mar 16, 2011 at 8:21 AM, Piyush Newe
 piyush.n...@enterprisedb.com wrote:
   Data Format                           PostgreSQL EDBAS
   TO_DATE('01-jan-10',  'DD-MON-Y')     2010-01-01 Error
   TO_DATE('01-jan-10',  'DD-MON-YY')    2010-01-01 01-JAN-2010
   TO_DATE('01-jan-10',  'DD-MON-YYY')   2010-01-01 01-JAN-2010
   TO_DATE('01-jan-10',  'DD-MON-YYYY')  0010-01-01 01-JAN-0010
   In this case, it seems in last 3 cases PG is behaving correctly. Whereas in
   1st case the output is not correct since the Format ('Y') is lesser than the
   actual input ('10'). But PG is ignoring this condition and throwing whatever
   is input. The output year is might not be the year, what user is expecting.
   Hence PG should throw an error.

 I can't get worked up about this.  If there's a consensus that
 throwing an error here is better, fine, but on first blush the PG
 behavior doesn't look unreasonable to me.

  Data Format                           PostgreSQL EDBAS

 To clarify, the user is reporting EDB Advanced Server, though the
 community PG has the same issues, or at least similar;  with git HEAD:

        test=# SELECT TO_DATE('01-jan-2010',  'DD-MON-YY');
          to_date
        ------------
         3910-01-01
        (1 row)

Actually, I think he's comparing PostgreSQL to Advanced Server.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Rectifying wrong Date outputs

2011-03-16 Thread Bruce Momjian
Robert Haas wrote:
 On Wed, Mar 16, 2011 at 5:52 PM, Bruce Momjian br...@momjian.us wrote:
  Robert Haas wrote:
  On Wed, Mar 16, 2011 at 8:21 AM, Piyush Newe
  piyush.n...@enterprisedb.com wrote:
    Data Format                           PostgreSQL EDBAS
    TO_DATE('01-jan-10',  'DD-MON-Y')     2010-01-01 Error
    TO_DATE('01-jan-10',  'DD-MON-YY')    2010-01-01 01-JAN-2010
    TO_DATE('01-jan-10',  'DD-MON-YYY')   2010-01-01 01-JAN-2010
    TO_DATE('01-jan-10',  'DD-MON-YYYY')  0010-01-01 01-JAN-0010
    In this case, it seems in last 3 cases PG is behaving correctly. Whereas in
    1st case the output is not correct since the Format ('Y') is lesser than the
    actual input ('10'). But PG is ignoring this condition and throwing whatever
    is input. The output year is might not be the year, what user is expecting.
    Hence PG should throw an error.
 
  I can't get worked up about this.  If there's a consensus that
  throwing an error here is better, fine, but on first blush the PG
  behavior doesn't look unreasonable to me.
 
   Data Format                           PostgreSQL EDBAS
 
  To clarify, the user is reporting EDB Advanced Server, though the
  community PG has the same issues, or at least similar;  with git HEAD:
 
  test=# SELECT TO_DATE('01-jan-2010',  'DD-MON-YY');
    to_date
  ------------
   3910-01-01
  (1 row)
 
 Actually, I think he's comparing PostgreSQL to Advanced Server.

Oh, I understand now.  I was confused that the headings didn't line up
with the values.  I see now the first value is community PG and the
second is EDBAS.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] Shared invalidation cache messages for temporary tables

2011-03-16 Thread Jim Nasby
On Mar 14, 2011, at 9:29 AM, Robert Haas wrote:

 On Mon, Mar 14, 2011 at 10:21 AM, Bruce Momjian br...@momjian.us wrote:
 Since your original email is fairly unclear about what you think the
 problem is, it's a bit hard to speculate here, but like Simon, I don't
 see any obvious problem here.  Maybe you're asking not so much about
 inserts, updates, or deletes into temporary tables but about creating
 and making modifications to them, which will generate catcache and
 relcache flushes when the pg_class/pg_attribute entries are updated.
 But I don't think those invalidation messages can be optimized away,
 since other backends can access temporary tables of other sessions in
 limited ways - for example, they can drop them.
 
 Sorry, yes that was my point --- should we be doing as much cache
 invalidation traffic for temporary tables as we are doing?  I think you
 are saying we are fine and there are no optimizations possible.
 
 Yeah, I think so.  I mean, if you have a concrete example of this
 causing a problem, then we can look into it, but my intuition is that
 it's OK.  Programmers' intuitions are notoriously wrong, of course, so
 we're all just shooting in the dark until we have something to
 measure.

Sounds like there should be a comment somewhere in the code that explains why 
we actually need those messages...
--
Jim C. Nasby, Database Architect   j...@nasby.net
512.569.9461 (cell) http://jim.nasby.net





Re: Sync Rep and shutdown Re: [HACKERS] Sync Rep v19

2011-03-16 Thread Dimitri Fontaine
Robert Haas robertmh...@gmail.com writes:
 1. If a die interrupt is received (pg_terminate_backend or fast
 shutdown), then terminate the sync rep wait and arrange for the
 connection to be closed without acknowledging the commit (but do send
 a warning message back).  The commit still happened, though, so other
 transactions will see its effects.  This is unavoidable unless we're
 willing to either ignore attempts to terminate a backend waiting for
 sync rep, or panic the system when it happens, and I don't think
 either of those is appropriate.

Is it possible to force the standby out here, so that logs show that
there was something going on wrt replication?

 2. If a query cancel interrupt is received (pg_cancel_backend or ^C),
 then cancel the sync rep wait and issue a warning before acknowledging
 the commit.  Again, the alternative is to either ignore the cancel or
 panic, neither of which I believe to be what users will want.

Or force the standby to disconnect.

In both those cases what we have is a situation were either we can't
satisfy the user request or we can't continue to offer sync rep.  You're
saying that we have to satisfy the user's query, so I say kick off sync
rep or it does not make any sense.

 3. If synchronous_standby_names is changed to '' by editing
 postgresql.conf and issuing pg_ctl reload, then cancel all waits in
 progress and wake everybody up.  As I mentioned before, reloading the

Ok.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



Re: [HACKERS] really lazy vacuums?

2011-03-16 Thread Jim Nasby
On Mar 14, 2011, at 2:36 PM, Robert Haas wrote:
 I'm not quite sure how we'd decide whether to do a really lazy
 vacuum or the kind we do now.  The case where this approach wins big
 is when there are few or no dead tuples.  In that case, we do a lot of
 work looking at the indexes and we don't get much out of it; plus we
 scan the heap twice instead of just once.  If there are a lot of dead
 tuples, then we have to bite the bullet and do the whole thing.
[snip]
 Thoughts?  Does this sound at all feasible/useful?  Any ideas on how to tune 
 it?

One way to look at this is that any system will have a limit on how quickly it 
can vacuum everything. If it's having trouble dedicating enough IO to vacuum, 
then autovac is going to have a long list of tables that it wants to vacuum. 
When you're in that situation, you want to get to the next table that needs 
vacuuming as quickly as possible, so if you've run through the first heap scan 
and found only a limited number of dead tuples, it doesn't make sense to spend 
a bunch of time scanning indexes and making a second heap scan (though, IIRC 
the second scan doesn't hit the entire heap; it only hits the tuples that were 
remembered as being dead).

Of course, going along the lines of an autovac-based tuning mechanism, you have 
to question how a table would show up for autovac if there's not actually a 
number of dead tuples. One scenario is freezing (though I'm not sure if your 
super-lazy vacuum could freeze tuples or not). Another is inserts. That might 
become a big win; you might want to aggressively scan a table that gets data 
loaded into it in order to set hint/all visible bits.

From a manual standpoint, ISTM that super-lazy vac would be extremely useful 
for dealing with hint bits after a bulk insert to a table that also has some 
update activity. Using a regular vacuum in that case would result in a lot of 
extra work to deal with the small number of dead tuples.

Perhaps it would be useful to write a script that analyzed the output of vacuum 
verbose looking for tables where a super-lazy vacuum would have made sense 
(assuming vacuum verbose provides the needed info). If we had such a script we 
could ask folks to run it and see how much super-lazy vacuuming would help in 
the real world.
--
Jim C. Nasby, Database Architect   j...@nasby.net
512.569.9461 (cell) http://jim.nasby.net





[HACKERS] Japanese developers?

2011-03-16 Thread Josh Berkus
All,

So far, a few of us have heard from Fujii, Koichi, Satoshi, and
Ishakawa-san.  While we don't know that any contributors live in the
northern coastal towns, we don't know that they don't either.

Has anyone heard from our other Japanese developers?  Tatsuo, Hiroshi
(both of them), Hitoshi, Kaori, or any of the many members of JPUG?

-- 
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com



Re: [HACKERS] Japanese developers?

2011-03-16 Thread Dave Page
On Wed, Mar 16, 2011 at 11:41 PM, Josh Berkus j...@agliodbs.com wrote:
 All,

 So far, a few of us have heard from Fujii, Koichi, Satoshi, and
 Ishakawa-san.  While we don't know that any contributors live in the
 northern coastal towns, we don't know that they don't either.

 Has anyone heard from our other Japanese developers?  Tatsuo, Hiroshi
 (both of them), Hitoshi, Kaori, or any of the many members of JPUG?

I've seen activity from Tatsuo and both Hiroshi's.



-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Japanese developers?

2011-03-16 Thread Thom Brown
On 16 March 2011 23:43, Dave Page dp...@pgadmin.org wrote:
 On Wed, Mar 16, 2011 at 11:41 PM, Josh Berkus j...@agliodbs.com wrote:
 All,

 So far, a few of us have heard from Fujii, Koichi, Satoshi, and
 Ishakawa-san.  While we don't know that any contributors live in the
 northern coastal towns, we don't know that they don't either.

 Has anyone heard from our other Japanese developers?  Tatsuo, Hiroshi
 (both of them), Hitoshi, Kaori, or any of the many members of JPUG?

 I've seen activity from Tatsuo and both Hiroshi's.

Harada-san hasn't been on the mailing lists for 2 weeks now, but he's
been using Twitter today.

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Japanese developers?

2011-03-16 Thread Tatsuo Ishii
 On Wed, Mar 16, 2011 at 11:41 PM, Josh Berkus j...@agliodbs.com wrote:
 All,

 So far, a few of us have heard from Fujii, Koichi, Satoshi, and
 Ishakawa-san.  While we don't know that any contributors live in the
 northern coastal towns, we don't know that they don't either.

 Has anyone heard from our other Japanese developers?  Tatsuo, Hiroshi
 (both of them), Hitoshi, Kaori, or any of the many members of JPUG?
 
 I've seen activity from Tatsuo and both Hiroshi's.

Kaori and other employees of SRA OSS, Inc. Japan are all fine.  The
only JPUG board member living in northern towns, whose name is Naoto
Takahashi living in Sendai (the largest city in the northern part of
Japan), is fine so far.
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp



Re: Sync Rep and shutdown Re: [HACKERS] Sync Rep v19

2011-03-16 Thread Robert Haas
On Wed, Mar 16, 2011 at 6:23 PM, Dimitri Fontaine
dimi...@2ndquadrant.fr wrote:
 Robert Haas robertmh...@gmail.com writes:
 1. If a die interrupt is received (pg_terminate_backend or fast
 shutdown), then terminate the sync rep wait and arrange for the
 connection to be closed without acknowledging the commit (but do send
 a warning message back).  The commit still happened, though, so other
 transactions will see its effects.  This is unavoidable unless we're
 willing to either ignore attempts to terminate a backend waiting for
 sync rep, or panic the system when it happens, and I don't think
 either of those is appropriate.

 Is it possible to force the standby out here, so that logs show that
 there was something going on wrt replication?

That's an interesting idea, but I think it might be too much spooky
action at a distance.  I think we should look at getting Fujii Masao's
replication_timeout patch committed; that seems like the right way to
kick out unresponsive standbys.  Another problem with doing it here is
that any ERROR will turn into a PANIC, which rules out doing anything
very complicated.  Also note that we can (and do) log a WARNING, which
I think answers your concern about having something in the logs wrt
replication.

A further point is that even if we could kick out the standby, it'd
presumably reconnect after the usual 2 s interval, so it doesn't seem
like it really accomplishes much.  We can't just unilaterally decide
that it is no longer allowed to be a sync standby ever again; that's
controlled by postgresql.conf.

I think the most important part of all this is that it is logged.
Anyone who is running synchronous replication should also be doing
careful monitoring; if not, shame on them, because if your data is
important enough that you need synchronous replication, it's surely
important enough to watch the logs.  If you don't, all sorts of bad
things can happen to your data (either related to sync rep, or
otherwise) and you'll have no idea until it's far too late.

 2. If a query cancel interrupt is received (pg_cancel_backend or ^C),
 then cancel the sync rep wait and issue a warning before acknowledging
 the commit.  Again, the alternative is to either ignore the cancel or
 panic, neither of which I believe to be what users will want.

 Or force the standby to disconnect.

 In both those cases what we have is a situation were either we can't
 satisfy the user request or we can't continue to offer sync rep.  You're
 saying that we have to satisfy the user's query, so I say kick off sync
 rep or it does not make any sense.

Same considerations here.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] really lazy vacuums?

2011-03-16 Thread Robert Haas
On Wed, Mar 16, 2011 at 6:36 PM, Jim Nasby j...@nasby.net wrote:
 One way to look at this is that any system will have a limit on how quickly 
 it can vacuum everything. If it's having trouble dedicating enough IO to 
 vacuum, then autovac is going to have a long list of tables that it wants to 
 vacuum. When you're in that situation, you want to get to the next table that 
 needs vacuuming as quickly as possible, so if you've run through the first 
 heap scan and found only a limited number of dead tuples, it doesn't make 
 sense to spend a bunch of time scanning indexes and making a second heap scan 
 (though, IIRC the second scan doesn't hit the entire heap; it only hits the 
 tuples that were remembered as being dead).

I mostly agree with this, but you also can't postpone vacuuming
indefinitely just because you're too busy; that's going to blow up in
your face.

 Of course, going along the lines of an autovac-based tuning mechanism, you 
 have to question how a table would show up for autovac if there's not 
 actually a number of dead tuples. One scenario is freezing (though I'm not 
 sure if your super-lazy vacuum could freeze tuples or not). Another is 
 inserts. That might become a big win; you might want to aggressively scan a 
 table that gets data loaded into it in order to set hint/all visible bits.

Right.  Really-lazy vacuum could freeze tuples.  Unlike regular
vacuum, it can also sensibly be done incrementally.  One thing I was
thinking about is counting the number of times that we fetched a tuple
that was older than RecentGlobalXmin and had a committed xmin and an
invalid xmax, but where the page was not PD_ALL_VISIBLE.  If that's
happening a lot, it probably means that some vacuuming would speed
things up, by getting those PD_ALL_VISIBLE bits set.  Perhaps you
could work out some formula where you do a variable amount of
super-lazy vacuuming depending on the number of such tuple fetches.
The trick would be to avoid overdoing it (so that you swamp the I/O
system) or underdoing it (so that the system never converges).  It
would be really nice (for this and for other things) if we had some
way of measuring the I/O saturation of the system, so that we could
automatically adjust the aggressiveness of background processes
accordingly.

Note also that if and when we get index-only scans, making sure the
PD_ALL_VISIBLE bits (and thus the visibility map bits) actually get
set is going to be a lot more important.

 From a manual standpoint, ISTM that super-lazy vac would be extremely useful 
 for dealing with hint bits after a bulk insert to a table that also has some 
 update activity. Using a regular vacuum in that case would result in a lot of 
 extra work to deal with the small number of dead tuples.

I can see that.

 Perhaps it would be useful to write a script that analyzed the output of 
 vacuum verbose looking for tables where a super-lazy vacuum would have made 
 sense (assuming vacuum verbose provides the needed info). If we had such a 
 script we could ask folks to run it and see how much super-lazy vacuuming 
 would help in the real world.

I'm a bit doubtful about this part.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: Sync Rep and shutdown Re: [HACKERS] Sync Rep v19

2011-03-16 Thread Aidan Van Dyk
On Wed, Mar 16, 2011 at 8:30 PM, Robert Haas robertmh...@gmail.com wrote:

 I think the most important part of all this is that it is logged.
 Anyone who is running synchronous replication should also be doing
 careful monitoring; if not, shame on them, because if your data is
 important enough that you need synchronous replication, it's surely
 important enough to watch the logs.  If you don't, all sorts of bad
 things can happen to your data (either related to sync rep, or
 otherwise) and you'll have no idea until it's far too late.

+1

If your data is that important, your logs/monitoring are *equally*
important, because they are what give you confidence your data is as
safe as you think it is...


-- 
Aidan Van Dyk                                             Create like a god,
ai...@highrise.ca                                       command like a king,
http://www.highrise.ca/                                   work like a slave.



Re: [HACKERS] Japanese developers?

2011-03-16 Thread Hitoshi Harada
2011/3/17 Thom Brown t...@linux.com:
 On 16 March 2011 23:43, Dave Page dp...@pgadmin.org wrote:
 On Wed, Mar 16, 2011 at 11:41 PM, Josh Berkus j...@agliodbs.com wrote:
 All,

 So far, a few of us have heard from Fujii, Koichi, Satoshi, and
 Ishakawa-san.  While we don't know that any contributors live in the
 northern coastal towns, we don't know that they don't either.

 Has anyone heard from our other Japanese developers?  Tatsuo, Hiroshi
 (both of them), Hitoshi, Kaori, or any of the many members of JPUG?

 I've seen activity from Tatsuo and both Hiroshi's.

 Harada-san hasn't been on the mailing lists for 2 weeks now, but he's
 been using Twitter today.

Thanks, I'm totally fine. Everything in Tokyo is moving more slowly than
before; I'm hoping to have more time to spend on -hackers :)

Regards,



-- 
Hitoshi Harada



Re: [HACKERS] Re: [BUGS] BUG #5842: Memory leak in PL/Python when taking slices of results

2011-03-16 Thread Robert Haas
On Fri, Mar 11, 2011 at 6:02 AM, Bruce Momjian br...@momjian.us wrote:
 What has been done with this report/fix?

AFAIK, nothing.  Added to 9.1 open items list.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



[HACKERS] volatile markings to silence compilers

2011-03-16 Thread Bruce Momjian
Looking over the release notes, we have added a few 'volatile' storage
specifications to variables that are near longjmp/TRY blocks to
silence compilers.  I am worried that these specifications don't clearly
identify their purpose.  Can we rename these to use a macro for
'volatile' that will make their purpose clearer and perhaps their
removal one day easier?

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] patch: tabcomple for pset - format and linestyle

2011-03-16 Thread Pavel Stehule
2011/3/16 Robert Haas robertmh...@gmail.com:
 On Mon, Mar 14, 2011 at 11:26 AM, Pavel Stehule pavel.steh...@gmail.com 
 wrote:
 I am sending a very simple patch that enhances autocompletion for
 the format and linestyle attributes

 Committed.


thank you very much

Regards

Pavel Stehule

 --
 Robert Haas
 EnterpriseDB: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company




Re: [HACKERS] Rectifying wrong Date outputs

2011-03-16 Thread Piyush Newe
Sorry for creating the confusion. The table drawn was PostgreSQL vs EDB
Advanced Server.
Thanks, Bruce, for the clarification.

For 1-digit, 2-digit, and 3-digit year inputs, as I said, I didn't see any
documentation in PG explaining which century is assumed when it is not
given. If I missed it somewhere, please let me know.

I referred to the following link, which explains the behavior of Oracle:
http://forums.oracle.com/forums/thread.jspa?threadID=312239

Now, if
1. # of digits given is greater than the # of Ys

i.e.
postgres=# select to_date('01-jan-111', 'DD-MON-Y');
  to_date
------------
 2111-01-01
(1 row)

What should we do? Either throw an error, or return what the user has
provided. IMHO, we should follow the format given by the user. However, even
when the format is wrong or outright invalid, no error is thrown.

e.g.
postgres=# select to_date('01-jan-111', 'DD-MON-Y POSTGRES');  -- look at this
  to_date
------------
 2111-01-01
(1 row)



2. # of digits given is less than the # of Ys
Consider the following case:

postgres=# select to_date('01-jan-6', 'DD-MON-Y'), to_date('01-jan-6', 'DD-MON-YYYY');
  to_date   |  to_date
------------+------------
 2006-01-01 | 0006-01-01
(1 row)


Why is this behaviour not predictable? I think we always assume the current
century when it is not provided. If I missed any related documentation,
please share it.

And yes,

postgres=# select to_date('01-jan-1761', 'DD-MON-Y');
  to_date
------------
 3761-01-01    -- look at this
(1 row)

There is definitely a bug in this case.

Am I missing something?

-Piyush

On Thu, Mar 17, 2011 at 3:30 AM, Bruce Momjian br...@momjian.us wrote:

 Robert Haas wrote:
  On Wed, Mar 16, 2011 at 5:52 PM, Bruce Momjian br...@momjian.us wrote:
   Robert Haas wrote:
   On Wed, Mar 16, 2011 at 8:21 AM, Piyush Newe
   piyush.n...@enterprisedb.com wrote:
 Data Format                          PostgreSQL   EDBAS
 TO_DATE('01-jan-10', 'DD-MON-Y')     2010-01-01   Error
 TO_DATE('01-jan-10', 'DD-MON-YY')    2010-01-01   01-JAN-2010
 TO_DATE('01-jan-10', 'DD-MON-YYY')   2010-01-01   01-JAN-2010
 TO_DATE('01-jan-10', 'DD-MON-YYYY')  0010-01-01   01-JAN-0010
In this case, it seems in last 3 cases PG is behaving correctly.
 Whereas in
1st case the output is not correct since the Format ('Y') is lesser
 than the
actual input ('10'). But PG is ignoring this condition and throwing
 whatever
is input. The output year is might not be the year, what user is
 expecting.
Hence PG should throw an error.
  
    I can't get worked up about this.  If there's a consensus that
    throwing an error here is better, fine, but on first blush the PG
    behavior doesn't look unreasonable to me.
  
 Data Format                          PostgreSQL   EDBAS
  
    To clarify, the user is reporting EDB Advanced Server, though the
    community PG has the same issues, or at least similar; with git HEAD:
  
        test=# SELECT TO_DATE('01-jan-2010', 'DD-MON-YY');
          to_date
        ------------
         3910-01-01
        (1 row)
 
  Actually, I think he's comparing PostgreSQL to Advanced Server.

 Oh, I understand now.  I was confused that the headings didn't line up
 with the values.  I see now the first value is community PG and the
 second is EDBAS.

 --
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




-- 
Piyush S Newe
Principal Engineer
EnterpriseDB
office: +91 20 3058 9500
www.enterprisedb.com

Website: www.enterprisedb.com
EnterpriseDB Blog: http://blogs.enterprisedb.com/
Follow us on Twitter: http://www.twitter.com/enterprisedb
