Re: [HACKERS] Online base backup from the hot-standby

2011-09-13 Thread Jun Ishiduka

Update patch.

Changes:
  * require the user to set full_page_writes to 'on' (in the documentation)
  * read the "FROM: XX" line in backup_label (in xlog.c)
  * check the backup status when pg_stop_backup() is executed (in xlog.c)

 Hi. I have created a patch in response to the comments.
 
 
 * Procedure
 1. Call pg_start_backup('x') on hot standby.
 2. Take a backup of the data dir.
 3. Copy the control file on hot standby to the backup.
 4. Call pg_stop_backup() on hot standby.
 
 
 * Behavior
 (take backup)
  If we execute pg_start_backup() on the hot standby, it performs a
  restartpoint, writes the line "FROM: slave" into backup_label, and
  enters backup mode, but it does not forcibly turn full_page_writes on.
 
  If we execute pg_stop_backup() on the hot standby, it renames
  backup_label and leaves backup mode, but it neither writes a backup-end
  record and history file nor waits for the WAL archiving to complete.
  pg_stop_backup() returns the current MinRecoveryPoint as its result.
 
  If we execute pg_stop_backup() on a server that has been promoted, an
  error message is output when backup_label is read.
 
 (recovery)
  If we recover with a backup taken on the hot standby, the
  MinRecoveryPoint in the control file copied in step 3 of the procedure
  above is used instead of a backup-end record.
 
  When recovery first starts, BackupEndPoint in the control file is set
  to the same value as MinRecoveryPoint. This is for remembering the
  value of MinRecoveryPoint during recovery.
 
  The HINT message ("If this has ...") is always output when we recover
  with a backup taken on the hot standby.
 
 
 * Problem
  This concerns full_page_writes, and there are two remaining problems:
  * pg_start_backup() must set full_page_writes to 'on' on the master that
    actually writes the WAL, not on the standby.
  * The standby does not necessarily connect to the master that actually
    writes the WAL.
    (Ex. Standby2 in cascading replication: Master - Standby1 - Standby2)

I'm not sure how I should resolve these problems.
 
  Status: Considering
   (Latest: http://archives.postgresql.org/pgsql-hackers/2011-08/msg00880.php)
 
 
 Regards.
 
 
 
 Jun Ishizuka
 NTT Software Corporation
 TEL:045-317-7018
 E-Mail: ishizuka@po.ntts.co.jp
 






standby_online_backup_07.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Sponsored development

2011-09-13 Thread Simon Riggs
On Mon, Sep 12, 2011 at 10:33 PM, Dermot paik...@googlemail.com wrote:

 First off, I hope this approach is not breaking protocol.

 I have seen this feature on the todo list:
 http://wiki.postgresql.org/wiki/Todo#Referential_Integrity

 It's my understanding that this will allow FK constraints on array
 elements, if I'm wrong, please stop me now

 If I've assumed correctly, the company I work for would like to offer
 an incentive to progress the work. As we're a tax-registered company,
 we'd need an invoice or receipt. I do not know if there is an existing
 mechanism for such an arrangement. We thought payment should be
 released once the feature was available in a stable release. The
 amount we'd like to offer is £500 (pounds sterling).

 Again, I hope this email does not offend the etiquette of the list and
 I'd like to keep the discussion on-list initially if possible.

Inward investment into PostgreSQL is important, so I'd like to set an
example here by accepting your offer to implement FKs on arrays.

It's also important since it indicates what users really want.

The sum offered is low, but the proactivity and graciousness of your
offer deserves to be rewarded.

Thanks. We'll get that into 9.2 for you.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [HACKERS] Patch to improve reliability of postgresql on linux nfs

2011-09-13 Thread Florian Pflug
[CC'ing to the list again - I assume you omitted pgsql-hackers from the
recipient list by accident]

On Sep13, 2011, at 03:00 , George Barnett wrote:
 On 12/09/2011, at 11:39 PM, Florian Pflug wrote:
 Also, non-interruptible IO primitives are by no means right. At best,
 they're a compromise between complexity and functionality for I/O
 devices with rather short (and bounded) communication timeouts -
 because in that case, processes are only blocked un-interruptibly for
 a short while.
 
 Just to expand on that - I'm now in the situation where I can run my
 nfs mounts 'nointr' and postgres will work, but that means if I lose a
 storage unit I have a number of stuck processes, effectively meaning I
 need to reboot all my frontend servers before I can fail over to
 backup nfs stores.
 
 However, if I run the mounts with intr, then if a storage unit fails,
 I can fail over to a backup node (taking a minor loss of data hit I'm
 willing to accept) but postgres breaks under a moderate insert load.
 
 With the patch I supplied though, I'm able to have most of my cake and
 eat it.
 
 I'd be very interested in moving this forward - is there something I
 can change in the patch to make it more acceptable for a merge?

Here are a few comments:

Tom already remarked that if we do that for write()s, we ought to do it
for read()s also, which I agree with. All other primitives like lseek,
close, ... should be taken care of by SA_RESTART, but it'd be a good
idea to verify that.

Also, I don't think that POSIX mandates that errno be reset to 0 if a
function returns successfully, making that returnCode == 0 && errno == 0
check pretty dubious. I'm not sure if this was what Tom was getting at
with his remark about the ENOSPC handling being wrong in the retry case.

And I also think that if we do this, we might as well handle EINTR
correctly, even if our use of SA_RESTART should prevent us from ever
seeing that. The rules surrounding EINTR and SA_RESTART for read/write
are quite subtle...

If we retry, shouldn't we do CHECK_FOR_INTERRUPTS? Otherwise, processes
waiting for a vanished NFS server would be killable only with SIGKILL,
not SIGTERM or SIGINT. But I'm not sure if it's safe to put that into a
generic function like pg_write_nointr.

Finally, WriteAll() seems like a poor name for that function. How about
pg_write_nointr()?

Here's my suggested implementation for pg_write_nointr. pg_read_nointr
should be similar (but obviously without the ENOSPC handling):

int pg_write_nointr(int fd, const void *bytes, Size amount)
{
  int written = 0;

  while (amount > 0)
  {
    int ret;

    ret = write(fd, bytes, amount);
    if ((ret < 0) && (errno == EINTR))
    {
      /* interrupted by signal before first byte was written. Retry */
      /* XXX: Is it safe to call CHECK_FOR_INTERRUPTS here? */
      CHECK_FOR_INTERRUPTS();
      continue;
    }
    else if (ret < 0)
    {
      /* error occurred. Abort */
      return -1;
    }
    else if (ret == 0)
    {
      /* out of disk space. Abort */
      return written;
    }

    /* made progress */

    /* XXX: Is it safe to call CHECK_FOR_INTERRUPTS here? */
    CHECK_FOR_INTERRUPTS();

    written += ret;
    amount -= ret;
    bytes = (const char *) bytes + ret;
  }

  return written;
}

best regards,
Florian Pflug





Re: [HACKERS] Patch to improve reliability of postgresql on linux nfs

2011-09-13 Thread Florian Pflug
On Sep13, 2011, at 13:07 , Florian Pflug wrote:
 Here's my suggested implementation for pg_write_nointr. pg_read_nointr
 should be similar (but obviously without the ENOSPC handling)
 
 [wrong pg_write_nointr implementation snipped]

Sorry for the self-reply. I realized only after hitting send that I
got the ENOSPC handling wrong again - we probably ought to check for
ENOSPC as well as ret == 0. Also, it seems preferable to return the
number of bytes actually written instead of -1 if we hit an error during
retry.

With this version, any return value other than amount signals an
error, the number of actually written bytes is reported even in the
case of an error (to the best of pg_write_nointr's knowledge), and
errno always indicates the kind of error.

int pg_write_nointr(int fd, const void *bytes, Size amount)
{
  int written = 0;

  while (amount > 0)
  {
    int ret;

    ret = write(fd, bytes, amount);

    if ((ret < 0) && (errno == EINTR))
    {
      /* interrupted by signal before first byte was written. Retry */

      /* XXX: Is it safe to call CHECK_FOR_INTERRUPTS here? */
      CHECK_FOR_INTERRUPTS();

      continue;
    }
    else if (ret < 1)
    {
      /* error occurred. Abort */

      if (ret == 0)
        /* out of disk space */
        errno = ENOSPC;

      if (written == 0)
        return -1;
      else
        return written;
    }

    /* made progress */
    written += ret;
    amount -= ret;
    bytes = (const char *) bytes + ret;

    /* XXX: Is it safe to call CHECK_FOR_INTERRUPTS here? */
    CHECK_FOR_INTERRUPTS();
  }

  return written;
}
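For completeness, here is a hedged sketch of the read counterpart mentioned above. It is not the actual patch code: it uses plain POSIX types instead of PG's Size, drops the ENOSPC handling as discussed, and omits the CHECK_FOR_INTERRUPTS calls whose safety is still an open question.

```c
#include <errno.h>
#include <unistd.h>

/* Sketch of pg_read_nointr: retry on EINTR, and mirror the write-side
 * contract where any return value other than `amount` signals an error
 * or EOF, with errno indicating the kind of error. */
ssize_t pg_read_nointr(int fd, void *bytes, size_t amount)
{
    ssize_t nread = 0;

    while (amount > 0)
    {
        ssize_t ret = read(fd, bytes, amount);

        if (ret < 0 && errno == EINTR)
            continue;                         /* interrupted before any byte; retry */
        if (ret < 0)
            return (nread == 0) ? -1 : nread; /* error: report progress if any */
        if (ret == 0)
            return nread;                     /* EOF: short count signals it */

        nread += ret;
        amount -= (size_t) ret;
        bytes = (char *) bytes + ret;
    }
    return nread;
}
```

On EOF the function returns the bytes read so far, so callers check for a short count exactly as on the write side.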

best regards,
Florian Pflug




Re: [HACKERS] cheaper snapshots redux

2011-09-13 Thread Robert Haas
On Tue, Sep 13, 2011 at 7:49 AM, Amit Kapila amit.kap...@huawei.com wrote:
Yep, that's pretty much what it does, although xmax is actually
defined as the XID *following* the last one that ended, and I think
xmin needs to also be in xip, so in this case you'd actually end up
with xmin = 15, xmax = 22, xip = { 15, 16, 17, 19 }.  But you've got
the basic idea of it.

 Shouldn't Xmax be 21, since the current tuple-visibility check says that
 if an XID is greater than or equal to Xmax then the tuple is not
 visible?

No, that's not OK.  You stipulated 21 as committed, so it had better be visible.

In particular, if someone with proc-xmin = InvalidTransactionId is
taking a snapshot while you're computing RecentGlobalXmin, and then
stores a proc-xmin less than your newly-computed RecentGlobalXmin,
you've got a problem.

 I am assuming that here "take a snapshot" means it has to be updated in
 shared memory, because otherwise there is no need to refer to the proc
 with your new design.

 Session-1
 Updates RecentGlobalXmin during GetSnapshotData() using the shared-memory
 copy of the snapshot and the completed transactions, since
 RecentGlobalXmin can be updated once we get xmin.

 Session-2
 Gets a snapshot to update in shared memory; for this it needs to go
 through the procarray.
 While going through the procarray under the proc lock, it can be the
 case that the proc of Session-1 has InvalidTransactionId, so we ignore
 it and go through the remaining session procs.
 Normally Session-1's proc should not get a smaller xmin than the other
 session procs, but if it got its copy from the shared-memory ring buffer
 before the other session procs did, its xmin can be lower, which can
 cause a problem.

 It's not one extra read - you'd have to look at every PGPROC.

 If the above explanation is right, is this the reason that to update
 RecentGlobalXmin, it has to go through every PGPROC?

Your explanation isn't very clear to me.  But I will post the patch
once I have some of these details sorted out.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Patch to improve reliability of postgresql on linux nfs

2011-09-13 Thread k...@rice.edu
On Tue, Sep 13, 2011 at 01:30:34PM +0200, Florian Pflug wrote:
 On Sep13, 2011, at 13:07 , Florian Pflug wrote:
  Here's my suggested implementation for pg_write_nointr. pg_read_nointr
  should be similar (but obviously without the ENOSPC handling)
  
  [wrong pg_write_nointr implementation snipped]
 
 Sorry for the self-reply. I realized only after hitting send that I
 got the ENOSPC handling wrong again - we probably ought to check for
 ENOSPC as well as ret == 0. Also, it seems preferable to return the
 number of bytes actually written instead of -1 if we hit an error during
 retry.
 
 With this version, any return value other than amount signals an
 error, the number of actually written bytes is reported even in the
 case of an error (to the best of pg_write_nointr's knowledge), and
 errno always indicates the kind of error.
 
 [pg_write_nointr implementation snipped]
 
 best regards,
 Florian Pflug
 

It will be interesting to see if there are any performance ramifications to
this new write function.

Regards,
Ken



Re: [HACKERS] Patch to improve reliability of postgresql on linux nfs

2011-09-13 Thread Florian Pflug
On Sep13, 2011, at 14:58 , k...@rice.edu wrote:
 It will be interesting to see if there are any performance ramifications to
 this new write function.

What would those be? For non-interruptible reads and writes, the overhead
comes down to an additional function call (if we don't make pg_write_nointr
inlined) and a few conditional jumps (which branch prediction should be
able to take care of). These are bound to disappear in the noise compared
to the cost of the actual syscall.

best regards,
Florian Pflug




Re: [HACKERS] Patch to improve reliability of postgresql on linux nfs

2011-09-13 Thread k...@rice.edu
On Tue, Sep 13, 2011 at 03:02:57PM +0200, Florian Pflug wrote:
 On Sep13, 2011, at 14:58 , k...@rice.edu wrote:
  It will be interesting to see if there are any performance ramifications to
  this new write function.
 
 What would those be? For non-interruptible reads and writes, the overhead
 comes down to an additional function call (if we don't make pg_write_nointr
 inlined) and a few conditional jumps (which branch prediction should be
 able to take care of). These are bound to disappear in the noise compared
 to the cost of the actual syscall.
 
 best regards,
 Florian Pflug
 
That would be my expectation too. It is just always nice to benchmark changes,
just in case. I have had similar simple changes blow out a cache and have a
much greater impact on performance than might be expected from inspection. :)

Regards,
Ken



Re: [HACKERS] Patch to improve reliability of postgresql on linux nfs

2011-09-13 Thread Aidan Van Dyk
On Tue, Sep 13, 2011 at 7:30 AM, Florian Pflug f...@phlo.org wrote:


 Sorry for the self-reply. I realized only after hitting send that I
 got the ENOSPC handling wrong again - we probably ought to check for
 ENOSPC as well as ret == 0. Also, it seems preferable to return the
 number of bytes actually written instead of -1 if we hit an error during
 retry.

 With this version, any return value other than amount signals an
 error, the number of actually written bytes is reported even in the
 case of an error (to the best of pg_write_nointr's knowledge), and
 errno always indicates the kind of error.

Personally, I'd think that's ripe for bugs.  If the contract is that
ret != amount is the error case, then don't return -1 for an error
*sometimes*.

If you sometimes return -1 for an error, even though ret != amount is
the *real* test, I'm going to guess there will be lots of chance for
code to do:
  if (pg_write_nointr(...) < 0)
   ...

which will only catch some of the errors, and happily continue with the rest...
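The trap can be made concrete with a hypothetical stand-in that performs a short write: a sign-only check misses it, while the ret != amount check catches it. All names below are illustrative, not from the patch.

```c
#include <sys/types.h>

/* Hypothetical writer that can only fit 4 bytes, standing in for a
 * pg_write_nointr-style call that hit an error partway through. */
ssize_t short_write(int fd, const void *bytes, size_t amount)
{
    (void) fd;
    (void) bytes;
    return (ssize_t) (amount < 4 ? amount : 4);
}

/* The trap: treat only a negative return as failure. */
int caught_by_sign_check(size_t amount)
{
    return short_write(0, "xxxxxxxx", amount) < 0;
}

/* The correct test under a "ret != amount means error" contract. */
int caught_by_amount_check(size_t amount)
{
    return short_write(0, "xxxxxxxx", amount) != (ssize_t) amount;
}
```

An 8-byte request returns 4: the sign-only check reports success, the amount check reports failure.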

a.

-- 
Aidan Van Dyk                                             Create like a god,
ai...@highrise.ca                                       command like a king,
http://www.highrise.ca/                                   work like a slave.



Re: [HACKERS] timezone GUC

2011-09-13 Thread Peter Eisentraut
On ons, 2011-09-07 at 17:16 -0400, Tom Lane wrote:
 Magnus Hagander mag...@hagander.net writes:
  On Tue, Sep 6, 2011 at 23:52, Robert Haas robertmh...@gmail.com wrote:
  On Tue, Sep 6, 2011 at 5:16 PM, Tom Lane t...@sss.pgh.pa.us wrote:
  Although there's always more than one way to skin a cat. Consider
  this idea:
  
  1. The hard-wired default for timezone is always UTC (or something
  else not dependent on environment).
  
  2. We put the identify_system_timezone work into initdb, and have it
  inject a non-default entry into postgresql.conf in the usual way
  if it can identify what the system zone is.
  
  3. Run-time dependency on TZ environment disappears altogether.
  
  This basically means that instead of incurring that search on every
  postmaster start, we do it once at initdb. If you change the
  postmaster's timezone environment, well, you gotta go change
  postgresql.conf.
 
  Seems reasonable to me...
 
  +1.
 
 I spent a bit of time on this idea last night.  The most painful part
 actually seems to be translating identify_system_timezone to run in a
 non-backend environment (no elog, etc).  The one thing I've run into
 that doesn't seem straightforward is to decide where to look for the
 timezone files.  If we have --with-system-tzdata then of course it's a
 constant, but should we honor initdb's -L switch otherwise?  And if so,
 how should we pass that into the pg_TZDIR code?
 
   regards, tom lane
 

It looks like the --with-system-tzdata case is somewhat broken now in
initdb:

creating configuration files ... could not open directory 
./pg-install/share/timezone: No such file or directory
ok





Re: [HACKERS] unite recovery.conf and postgresql.conf

2011-09-13 Thread Peter Eisentraut
On tis, 2011-09-13 at 14:46 +0900, Fujii Masao wrote:
 Are you still thinking the backward-compatibility (i.e., the
 capability to specify recovery parameters in recovery.conf) is
 required? 

I think parameters related to a particular recovery, e.g.,
recovery_target_time, fit better into a recovery.conf that is renamed
after the recovery is complete.  That was the original idea, after all.

Everything that is a permanent setting across multiple recovery
attempts, and anything related to replication, better fits elsewhere.




[HACKERS] SSL key with passphrase

2011-09-13 Thread Thom Brown
Hi,

There appears to be a problem with starting Postgres if the SSL key
has a passphrase on it.  The following happens:

Enter PEM pass phrase:
FATAL:  could not load private key file server.key: problems getting password

Starting with postgres -D /path/to/cluster returns:

Enter PEM pass phrase:
LOG:  database system was shut down at 2011-09-13 13:51:51 BST
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started

So the postgres binary accepts stdin, but pg_ctl doesn't.  This isn't
an unusual case, so could I request a fix to allow pg_ctl to take
stdin rather than /dev/null?

Thanks

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] augmenting MultiXacts to improve foreign keys

2011-09-13 Thread Robert Haas
On Fri, Sep 9, 2011 at 5:31 PM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
 [ multixact complexity ]

I wonder if it's a mistake to be thinking about solving this problem
by extending the MultiXact mechanism.  Pushing xmax out-of-line so
that we have room to store tuple information seems expensive,
especially because there's no convenient way to undo it once the locks
are old enough not to be interesting any more.  The current system
works because we never need both pieces of information at the same
time, but that's not going to be true any more.

I'm wondering if it would be possible to modify the main lock manager,
or create a special-purpose tuple lock manager, to record all tuple
locks, both awaited and granted.  You'd need to make sure that if
there were more than a few locks the information could spill to disk
somehow, and you'd also need to make sure that you didn't pull that
information in from disk more often than necessary - i.e. you should
try to keep enough summary info in memory to determine whether there
*might* be a conflicting lock that spilled out, so that you only need
to go examine the spilled data if there's a possibility that you might
find something interesting there.

A system like this would make it possible to clean up all the lock
entries at transaction end, which would avoid a lot of the complexity
you mention.  On the other hand, it's clearly not simple, either, and
I haven't thought through all the details...

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Patch to improve reliability of postgresql on linux nfs

2011-09-13 Thread Florian Pflug
On Sep13, 2011, at 15:05 , Aidan Van Dyk wrote:
 On Tue, Sep 13, 2011 at 7:30 AM, Florian Pflug f...@phlo.org wrote:
 Sorry for the self-reply. I realized only after hitting send that I
 got the ENOSPC handling wrong again - we probably ought to check for
 ENOSPC as well as ret == 0. Also, it seems preferable to return the
 number of bytes actually written instead of -1 if we hit an error during
 retry.
 
 With this version, any return value other than amount signals an
 error, the number of actually written bytes is reported even in the
 case of an error (to the best of pg_write_nointr's knowledge), and
 errno always indicates the kind of error.
 
 Personally, I'd think that's ripe for bugs.  If the contract is that
 ret != amount is the error case, then don't return -1 for an error
 *sometimes*.

Hm, but isn't that how write() works also? AFAIK (non-interruptible) write()
will return the number of bytes written, which may be less than the requested
number if there's not enough free space, or -1 in case of an error like
an invalid fd being passed.

 If you sometimes return -1 for an error, even though ret != amount is
 the *real* test, I'm going to guess there will be lots of chance for
 code to do:
  if (pg_write_nointr(...) < 0)
   ...
 
 which will only catch some of the errors, and happily continue with the 
 rest...

Yeah, but that's equally wrong for plain write(), so I'm not sure I share
your concern there. Also, I'm not sure how to improve that. We could
always return -1 in case of an error, and amount in case of success, but
that makes it impossible to determine how many bytes were actually
written (and also feels awkward). Or we could return 0 instead of -1 if
there was an error and zero bytes were written. But that feels awkward
also...

One additional possibility would be to make the signature

  bool pg_write_nointr(int fd, const void *bytes, int len, int *written)

and simply return true on success and false on error. Callers who are
interested in the number of bytes actually written (in the case of an
error) would need to pass a non-NULL pointer for written, while all
others would just pass NULL.

Thoughts?
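A minimal sketch of that variant, for discussion only: size_t is used instead of int for illustration, and the CHECK_FOR_INTERRUPTS calls debated earlier are omitted.

```c
#include <errno.h>
#include <stdbool.h>
#include <unistd.h>

/* Sketch of the boolean-returning variant: true only if all `amount`
 * bytes were written; the byte count is stored through `written` when
 * the caller passes a non-NULL pointer, and errno indicates the error. */
bool pg_write_nointr(int fd, const void *bytes, size_t amount, size_t *written)
{
    size_t done = 0;

    while (amount > 0)
    {
        ssize_t ret = write(fd, bytes, amount);

        if (ret < 0 && errno == EINTR)
            continue;                 /* interrupted before any byte; retry */
        if (ret < 1)
        {
            if (ret == 0)
                errno = ENOSPC;       /* out of disk space */
            break;                    /* error: report what we managed */
        }
        done += (size_t) ret;
        amount -= (size_t) ret;
        bytes = (const char *) bytes + ret;
    }
    if (written != NULL)
        *written = done;
    return amount == 0;
}
```

With this shape there is a single success test (`if (!pg_write_nointr(...))`), so the partial-error ambiguity discussed above disappears from the caller's perspective.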

best regards,
Florian Pflug





Re: [HACKERS] SSL key with passphrase

2011-09-13 Thread Tom Lane
Thom Brown t...@linux.com writes:
 There appears to be a problem with starting Postgres if the SSL key
 has a passphrase on it.

It's documented that that's unsupported.  Given the number of ways to
start a postmaster, and the fact that many of them are noninteractive,
I don't think it's very productive for us to worry about it.

regards, tom lane



Re: [HACKERS] Patch to improve reliability of postgresql on linux nfs

2011-09-13 Thread Tom Lane
Florian Pflug f...@phlo.org writes:
 On Sep13, 2011, at 15:05 , Aidan Van Dyk wrote:
 Personally, I'd think that's ripe for bugs.  If the contract is that
 ret != amount is the error case, then don't return -1 for an error
 *sometimes*.

 Hm, but isn't that how write() works also?

Yeah.  It's not possible to maintain the same error-reporting contract
that bare write() has got, unless you're willing to forget about actual
errors reported by a non-first write attempt.  Which might not be
totally unreasonable, because presumably something similar is going on
under the hood within write() itself.  Most of the errors one might
think are worth reporting would have had to occur on the first write
attempt anyway.

But if you do want to report such errors, I think you have to push the
error reporting logic into the subroutine, which seems a bit messy since
there's quite a variety of error message phrasings out there, all of
which require information that write() itself does not have.  Also, we
do *not* want e.g. gettext() to be invoked unless an error actually
occurs and has to be reported.
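One hedged way to sketch "pushing the error reporting into the subroutine" without paying message-construction (gettext etc.) costs on the success path is a callback invoked only when an error actually occurs. All names here are hypothetical, not existing PostgreSQL API.

```c
#include <errno.h>
#include <unistd.h>

/* Caller-supplied phrasing: invoked only on error, with the saved errno
 * and the number of bytes written before the failure. */
typedef void (*write_error_cb)(int saved_errno, size_t written, void *arg);

/* Sketch: retry on EINTR, map a zero return to ENOSPC, and defer all
 * message phrasing to the callback so nothing is built on success. */
ssize_t pg_write_report(int fd, const void *bytes, size_t amount,
                        write_error_cb on_error, void *arg)
{
    size_t done = 0;

    while (amount > 0)
    {
        ssize_t ret = write(fd, bytes, amount);

        if (ret < 0 && errno == EINTR)
            continue;
        if (ret < 1)
        {
            if (ret == 0)
                errno = ENOSPC;
            if (on_error != NULL)
                on_error(errno, done, arg);   /* phrasing chosen by caller */
            return (done == 0) ? -1 : (ssize_t) done;
        }
        done += (size_t) ret;
        amount -= (size_t) ret;
        bytes = (const char *) bytes + ret;
    }
    return (ssize_t) done;
}
```

The subroutine never formats a message itself; each call site passes a callback holding the context that write() lacks, and pays for it only on failure.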

regards, tom lane



Re: [HACKERS] augmenting MultiXacts to improve foreign keys

2011-09-13 Thread Alvaro Herrera

Excerpts from Robert Haas's message of mar sep 13 11:02:51 -0300 2011:
 
 On Fri, Sep 9, 2011 at 5:31 PM, Alvaro Herrera
 alvhe...@commandprompt.com wrote:
  [ multixact complexity ]
 
 I wonder if it's a mistake to be thinking about solving this problem
 by extending the MultiXact mechanism.  Pushing xmax out-of-line so
 that we have room to store tuple information seems expensive,
 especially because there's no convenient way to undo it once the locks
 are old enough not to be interesting any more.  The current system
 works because we never need both pieces of information at the same
 time, but that's not going to be true any more.

Hmm, it doesn't look that way to me: whenever you lock a row, all
previous lockers that are gone can now be forgotten.  Locks that are old
enough not to be interesting, are constantly and automatically gone.

The only reason that a multixact now needs to persist beyond the
currently running transaction is the chance that there might be an update
xmax hiding somewhere; and tuples marked with those are going to be
removed by vacuum anyway.  (I have been thinking that long before vacuum,
we could remove the multixact and replace it with a plain Xid, if the
lockers are all gone -- which is another part of your "undo it once the
locks are old enough".)

The expensive bit is the reason why I used a hint bit to mark this
possibility; we distinguish the cheap case of locked-but-not-updated
from the expensive one of locked-and-updated with hint bits, so the
cheap case stays cheap; and the expensive one requires a bit more work,
yes, but this brings more concurrency overall.

 I'm wondering if it would be possible to modify the main lock manager,
 or create a special-purpose tuple lock manager, to record all tuple
 locks, both awaited and granted.  You'd need to make sure that if
 there were more than a few locks the information could spill to disk
 somehow, and you'd also need to make sure that you didn't pull that
 information in from disk more often than necessary - i.e. you should
 try to keep enough summary info in memory to determine whether there
 *might* be a conflicting lock that spilled out, so that you only need
 to go examine the spilled data if there's a possibility that you might
 find something interesting there.

This is where we started, back when we were creating SELECT FOR SHARE:
trying to spill the lock table.  That idea went down in flames; consider
that someone might request to block an entire huge table, and you're in
trouble.  There might have been other problems I don't recall with that
idea.

I don't want to go back to that drawing board -- obviously I'm not very
keen of going great lengths down the same path, only to fail, twice.

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] SSL key with passphrase

2011-09-13 Thread Thom Brown
On 13 September 2011 15:17, Tom Lane t...@sss.pgh.pa.us wrote:
 Thom Brown t...@linux.com writes:
 There appears to be a problem with starting Postgres if the SSL key
 has a passphrase on it.

 It's documented that that's unsupported.  Given the number of ways to
 start a postmaster, and the fact that many of them are noninteractive,
 I don't think it's very productive for us to worry about it.

For reference, could you point me to the page which states this lack
of support?  All I could find was a mention that in order to start the
service automatically, you would need to remove the passphrase.

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Patch to improve reliability of postgresql on linux nfs

2011-09-13 Thread Aidan Van Dyk
On Tue, Sep 13, 2011 at 10:14 AM, Florian Pflug f...@phlo.org wrote:

 Personally, I'ld think that's ripe for bugs.   If the contract is that
 ret != amount is the error case, then don't return -1 for an error
 *sometimes*.

 Hm, but isn't that how write() works also? AFAIK (non-interruptible) write()
 will return the number of bytes written, which may be less than the requested
 number if there's not enough free space, or -1 in case of an error like
 an invalid fd being passed.

Looking through the code, it appears as if all the write calls I've
seen are checking ret != amount, so it's probably not as big a deal
for PG as I fear...

But the subtle change in semantics - from the system write(), where
ret != amount is not necessarily a real error (hence no errno set), to
pg_write, where ret != amount only happens after a real error (errno
should be set) - is one that could yet lead to confusion.

a.


-- 
Aidan Van Dyk                                             Create like a god,
ai...@highrise.ca                                       command like a king,
http://www.highrise.ca/                                   work like a slave.



Re: [HACKERS] timezone GUC

2011-09-13 Thread Tom Lane
Peter Eisentraut pete...@gmx.net writes:
 It looks like the --with-system-tzdata case is somewhat broken now in
 initdb:

 creating configuration files ... could not open directory 
 ./pg-install/share/timezone: No such file or directory

Sigh.  That's what I get for assuming that case was simple enough to not
need testing.  Will fix.  Where's my brown paper bag?

regards, tom lane



Re: [HACKERS] Patch to improve reliability of postgresql on linux nfs

2011-09-13 Thread Florian Pflug
On Sep13, 2011, at 16:25 , Tom Lane wrote:
 Florian Pflug f...@phlo.org writes:
 On Sep13, 2011, at 15:05 , Aidan Van Dyk wrote:
 Personally, I'd think that's ripe for bugs.  If the contract is that
 ret != amount is the error case, then don't return -1 for an error
 *sometimes*.
 
 Hm, but isn't that how write() works also?
 
 Yeah.  It's not possible to maintain the same error-reporting contract
 that bare write() has got, unless you're willing to forget about actual
 errors reported by a non-first write attempt.

Hm, yeah, but we're only replacing the exclusive or in "either sets errno
*or* returns >= 0 and < amount" by a non-exclusive one. Which, in practice,
doesn't make much difference for callers. They can (and should) continue to
check whether the correct number of bytes has been written, and they may
still use errno to distinguish different kinds of errors. They should just
do so upon any error condition, not only upon us returning -1.

The important thing, I believe, is that we don't withhold any information
from callers, which we don't. If write() sets errno, it must return -1,
so we'll abort and hence leave the errno in place to be inspected by the
caller. And we faithfully track the actual number of bytes written.

Or am I missing something?
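For the archives, the contract under discussion (a short return always means error, with errno preserved by the failing write) can be sketched as a retry loop like this. This is a hypothetical stand-in, not the actual patch from this thread; the name pg_write_full is invented:

```c
#include <errno.h>
#include <unistd.h>

/* Sketch only: keep calling write() until 'amount' bytes are written or a
 * real error occurs.  On a real error we stop early, so the caller sees
 * ret != amount *and* errno left set by the failing write(); a short
 * return can therefore always be treated as an error condition. */
static ssize_t
pg_write_full(int fd, const void *buf, size_t amount)
{
    size_t written = 0;

    while (written < amount)
    {
        ssize_t ret = write(fd, (const char *) buf + written,
                            amount - written);

        if (ret < 0)
        {
            if (errno == EINTR)
                continue;       /* interrupted: retry, not a real error */
            return written;     /* real error: errno is set for the caller */
        }
        written += ret;         /* short write (e.g. flaky NFS): keep going */
    }
    return written;
}
```

Callers then check `ret != amount` uniformly, and errno is meaningful exactly when that check fires, which is the non-exclusive contract Florian describes above.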

 But if you do want to report such errors, I think you have to push the
 error reporting logic into the subroutine, which seems a bit messy since
 there's quite a variety of error message phrasings out there, all of
 which require information that write() itself does not have.  Also, we
 do *not* want e.g. gettext() to be invoked unless an error actually
 occurs and has to be reported.

Yeah, I had the same idea (moving the error reporting into the subroutine)
when I first looked at the OP's patch, but then figured it'd just complicate
the API for no good reason.

best regards,
Florian Pflug





Re: [HACKERS] cheaper snapshots redux

2011-09-13 Thread Amit Kapila

Yep, that's pretty much what it does, although xmax is actually
defined as the XID *following* the last one that ended, and I think
xmin needs to also be in xip, so in this case you'd actually end up
with xmin = 15, xmax = 22, xip = { 15, 16, 17, 19 }.  But you've got
the basic idea of it.

Shouldn't Xmax be 21? The current check in tuple visibility indicates that
if an XID is greater than or equal to Xmax, then the tuple is not visible.

In particular, if someone with proc->xmin = InvalidTransactionId is
taking a snapshot while you're computing RecentGlobalXmin, and then
stores a proc->xmin less than your newly-computed RecentGlobalXmin,
you've got a problem.  

I am assuming that here, by taking a snapshot, you mean it has to be
updated in shared memory, because otherwise there is no need to refer to
the proc at all with your new design.

Session-1
Updating RecentGlobalXmin during GetSnapshotData() using the shared memory
copy of the snapshot and the completed transactions, since RecentGlobalXmin
can be updated once we get the xmin.

Session-2
Getting a snapshot to update in shared memory; here it needs to go through
the procarray.
Now, when it is going through the procarray using the proc lock, it can be
the case that the proc of Session-1 has InvalidTransactionId, so we will
ignore it and go through the remaining session procs.
Normally Session-1's proc should not get a lower xmin than the other
session procs, but in case it got its copy from the shared memory ring
buffer before the other session procs did, it can be lower, which can
cause a problem.

 It's not one extra read - you'd have to look at every PGPROC.  

If the above explanation is right, is this the reason that, to update
RecentGlobalXmin, it has to go through every PGPROC?


-Original Message-
From: Robert Haas [mailto:robertmh...@gmail.com] 
Sent: Monday, September 12, 2011 9:31 PM
To: Amit Kapila
Cc: pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] cheaper snapshots redux

On Mon, Sep 12, 2011 at 11:07 AM, Amit Kapila amit.kap...@huawei.com
wrote:
 If you know what transactions were running the last time a snapshot
 summary was written and what transactions have ended since then, you
 can work out the new xmin on the fly.  I have working code for this and
 it's actually quite simple.

 I believe one method to do same is as follows:

 Let us assume at some point of time the snapshot and completed XID list is
 somewhat as follows:

 Snapshot

 { Xmin = 5, Xip[] = 8 10 12, Xmax = 15 }

 Committed XIDs = 8, 10, 12, 18, 20, 21

 So it means 16, 17, 19 are running transactions. So it will behave as
 follows:

 { Xmin = 16, Xmax = 21, Xip[] = 17, 19 }

Yep, that's pretty much what it does, although xmax is actually
defined as the XID *following* the last one that ended, and I think
xmin needs to also be in xip, so in this case you'd actually end up
with xmin = 15, xmax = 22, xip = { 15, 16, 17, 19 }.  But you've got
the basic idea of it.
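The computation Robert describes can be sketched in C along these lines. This is illustrative only; the names and signatures are invented, not the actual patch code. Candidate in-progress XIDs are the old xip[] entries plus everything in [old_xmax, new_xmax); any candidate that appears in the ended list is dropped, and the new xmin is the smallest survivor (which therefore also appears in the new xip[]):

```c
#include <stdbool.h>
#include <stddef.h>

typedef unsigned int TransactionId;

/* Linear search sketch; a real implementation would need something
 * smarter if the ended list can grow large (Tom's concern below). */
static bool
xid_ended(TransactionId xid, const TransactionId *ended, int nended)
{
    for (int i = 0; i < nended; i++)
        if (ended[i] == xid)
            return true;
    return false;
}

/* Derive a new snapshot from an old one plus the XIDs that have ended
 * since it was taken.  new_xmax is the XID following the last one that
 * ended.  new_xip must be large enough for all candidates; returns the
 * number of in-progress XIDs stored there. */
static int
recompute_snapshot(TransactionId old_xmax,
                   const TransactionId *old_xip, int n_old,
                   const TransactionId *ended, int nended,
                   TransactionId new_xmax,
                   TransactionId *new_xip,
                   TransactionId *new_xmin)
{
    int n = 0;

    /* old in-progress XIDs that still haven't ended */
    for (int i = 0; i < n_old; i++)
        if (!xid_ended(old_xip[i], ended, nended))
            new_xip[n++] = old_xip[i];

    /* XIDs that started (or could have) after the old snapshot */
    for (TransactionId xid = old_xmax; xid < new_xmax; xid++)
        if (!xid_ended(xid, ended, nended))
            new_xip[n++] = xid;

    /* entries are appended in ascending order, so the first is xmin */
    *new_xmin = new_xip[0];
    return n;
}
```

Running this on Amit's example (old snapshot { Xmin = 5, Xip[] = 8 10 12, Xmax = 15 }, ended XIDs 8, 10, 12, 18, 20, 21) yields xmin = 15, xmax = 22, xip = { 15, 16, 17, 19 }, matching Robert's figures.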

 But if we compute Xmin the above way, we need to check the existing Xip
 array and the committed XID array to find Xmin. Won't this take
 considerable time, even though it is outside the lock time, if Xip and
 the XID list are large?

Yes, Tom raised this concern earlier.  I can't answer it for sure
without benchmarking, but clearly xip[] can't be allowed to get too
big.

 Because GetSnapshotData() computes a new value for RecentGlobalXmin by
 scanning the ProcArray.  This isn't costing a whole lot extra right now
 because the xmin and xid fields are normally in the same cache line, so
 once you've looked at one of them it doesn't cost that much extra to
 look at the other.  If, on the other hand, you're not looking at (or
 even locking) the ProcArray, then doing so just to recompute
 RecentGlobalXmin sucks.

 Yes, this is more time as compared to earlier, but if our approach to
 calculate Xmin is like the above point, then one extra read outside the
 lock should not matter. However, if the approach for the above point is
 different, then it will be costlier.

It's not one extra read - you'd have to look at every PGPROC.  And it
is not outside a lock, either.  You definitely need locking around
computing RecentGlobalXmin; see src/backend/access/transam/README.  In
particular, if someone with proc->xmin = InvalidTransactionId is
taking a snapshot while you're computing RecentGlobalXmin, and then
stores a proc->xmin less than your newly-computed RecentGlobalXmin,
you've got a problem.  That can't happen right now because no
transactions can commit while 

[HACKERS] Rough impl of IGNORE NULLS for window functions

2011-09-13 Thread Joe Banafato
Hi all,

I wrote an implementation of last_value that ignores null values,
effectively achieving the behavior of last_value(exp ignore nulls).
The code is up on BitBucket [1] for the moment.  Thoughts:

* This isn't on the TODO [2].  Is anyone interested in getting this in
the language?  I use this feature extensively and need it for an
Oracle port.
* I'd love to get a critique, especially on using WinSetMarkPosition.
* I have not extended the syntax (I simply called the function
last_value_ignore_nulls), and I hope to get some guidance on starting
that.

I'll be starting the other functions shortly.

Cheers,

Joe

[1] https://bitbucket.org/joeb/pg-window-ignore-nulls/src
[2] https://wiki.postgresql.org/wiki/Todo#Window_Functions



Re: [HACKERS] fix for pg_upgrade

2011-09-13 Thread panam
Hi, just tried to upgrade from 9.0 to 9.1 and got this error during
pg_upgrade:
Mismatch of relation id: database xyz, old relid 465783, new relid 16494
It seems I get this error on every table, as I got it before on another
table (which I did not need and deleted) as well. Schemas seem to be
migrated but the content is missing.

I am using Windows 7 64bit (both PG servers are 64 bit as well), everything
on the same machine.

Any ideas?
Thanks & regards
panam

--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/fix-for-pg-upgrade-tp3411128p4798957.html
Sent from the PostgreSQL - hackers mailing list archive at Nabble.com.



[HACKERS] Range Types - symmetric

2011-09-13 Thread Erik Rijkers
Hi,

Just a thought:

select int4range(5,2);
ERROR:  range lower bound must be less than or equal to range upper bound

Of course, I won't argue this is a bug, but I was wondering if it wouldn't be 
handy to allow a
'symmetric' mode in range construction, where, if the first of the pair is 
higher than the second,
they are automatically swapped, similar to SYMMETRIC in the BETWEEN clause.

Admittedly, I don't have a specific 'use case' -- it might just often prevent 
'manual' swapping
before range construction calls.

Thanks,

Erik Rijkers






Re: [HACKERS] unite recovery.conf and postgresql.conf

2011-09-13 Thread Simon Riggs
On Tue, Sep 13, 2011 at 2:51 PM, Peter Eisentraut pete...@gmx.net wrote:
 On tis, 2011-09-13 at 14:46 +0900, Fujii Masao wrote:
 Are you still thinking the backward-compatibility (i.e., the
 capability to specify recovery parameters in recovery.conf) is
 required?

 I think parameters related to a particular recovery, e.g.,
 recovery_target_time, fit better into a recovery.conf that is renamed
 after the recovery is complete.  That was the original idea, after all.

 Everything that is a permanent setting across multiple recovery
 attempts, and anything related to replication, better fits elsewhere.

I've just been thinking that a better way would be to make
recovery.conf an extension of postgresql.conf when we are in archive
recovery.

So treat postgresql.conf as if it has an automatic "include
recovery.conf" in it. The file format is the same.

That way we don't need to change existing behaviour, so any software
that relies upon this will still work, but we gain the additional
ability to reload values in recovery.conf (where appropriate).

We can change the .sample files to show parameters that make more
sense in one or the other file, rather than making it a hard
requirement for them to appear in specific files which will be a real
pain in the ass.

Internal changes would then be to move existing recovery.conf
parameters into guc.c and revise the manual accordingly.
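If I read the proposal right, the effect during archive recovery would be roughly as if postgresql.conf ended with an include directive. A sketch only, not current syntax (though `include` itself is already valid postgresql.conf syntax):

```
# hypothetical: processed automatically only while in archive recovery
include 'recovery.conf'
```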

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [HACKERS] Range Types - symmetric

2011-09-13 Thread Christopher Browne
On Tue, Sep 13, 2011 at 12:08 PM, Erik Rijkers e...@xs4all.nl wrote:
 Just a thought:

 select int4range(5,2);
 ERROR:  range lower bound must be less than or equal to range upper bound

 Of course, I won't argue this is a bug, but I was wondering if it wouldn't be 
 handy to allow a
 'symmetric' mode in range construction, where, if the first of the pair is 
 higher than the second,
 they are automatically swapped, similar to SYMMETRIC in the BETWEEN clause.

 Admittedly, I don't have a specific 'use case' -- it might just often prevent 
 'manual' swapping
 before range construction calls.

I'll buy that this is a plausible feature, but suggest an opposite
perspective, namely that this DWIM means that you can't notice
'getting things backwards' in your application as a bug anymore.

If you have a computation that gets a backwards range, then it is
more than possible that what you've got isn't an error of getting the
range backwards, but rather the error that your data is
overconstraining, and that you don't actually have a legitimate range.

So, if I decide that I want a range that expresses:
 - Dates before 2012-01-01
  and
 - Dates after 2012-02-01

Which smells like (2012-02-01,2012-01-01).

It is NOT proper to turn that into the range (2012-01-01,2012-02-01) -
that's definitely not consistent with the facts I started with.

If you want to create your own range constructor function where you'll
take 2 values and reorder as needed to get a feasible range, that's
fine.
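For instance, such a user-level constructor might be sketched in SQL like this. The name int4range_sym is made up, and it assumes the two-argument int4range() constructor from the range-types patch under discussion:

```sql
-- Hypothetical helper, not part of the patch: swap the bounds when
-- they arrive in the wrong order, analogous to BETWEEN SYMMETRIC.
CREATE FUNCTION int4range_sym(int4, int4) RETURNS int4range AS $$
    SELECT CASE WHEN $1 > $2 THEN int4range($2, $1)
                ELSE int4range($1, $2)
           END;
$$ LANGUAGE sql IMMUTABLE;

SELECT int4range_sym(5, 2);  -- same as int4range(2, 5)
```

That keeps the swap an explicit, opt-in choice rather than default behaviour.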

I think I rather oppose doing the swap automagically, by default,
because, in the case described above, it gives a WRONG range.
-- 
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"



Re: [HACKERS] What Would You Like To Do?

2011-09-13 Thread Hannu Krosing
On Sun, 2011-09-11 at 21:21 -0700, David E. Wheeler wrote:
 Hackers,
 
 Later this week I'm giving a [brief][] for an audience of what I 
 hope will be corporate PostgreSQL users that covers how to get a 
 feature developed for PostgreSQL. The idea here is that there are
  a lot of organizations out there with very deep commitments to
 PostgreSQL, who really take advantage of what it has to offer, 
 but also would love additional features PostgreSQL doesn't offer. 
 Perhaps some of them would be willing to fund development of the features 
 they need.

Hannu Krosing / 2ndQuadrant

 * more enhancements to pl/python - use real function arguments, 
   store modules in database, direct support for postgresql types,
   operators and functions, automatic startup command, 
   automatic ORM from table definitions, ...
 * various support functionality for replication and automatic growth
   of sharded databases - user defined tuple visibility functions, 
   triggers for DDL and ON COMMIT/ON ROLLBACK, ...
 * putting time travel (which Oracle calls flashback queries) back
   into postgreSQL
 * moving tuple visibility in a separate index-like structure which
   should be highly compressible in most cases, as a way to enabling
   index-only scans, column oriented storage and effective table
   compression, ...

 [brief]: http://postgresopen.org/2011/schedule/presentations/83/
 
 Toward the end of the presentation, I'd like to make some suggestions and 
 offer to do some match-making. I'm thinking primarily of listing some of the 
 stuff the community would love to see done, along with the names of the folks 
 and/or companies who, with funding, might make it happen. My question for you 
 is: What do you want to work on?
 
 Here's my preliminary list:
 
 * Integrated partitioning support: Simon/2nd Quadrant
 * High-CPU concurrency: Robert/Enterprise DB
 * Multimaster replication and clustering: Simon/2nd Quadrant
 * Multi-table indexes: Heikki? Oleg & Teodor?
 * Column-level collation support: Peter/Enterprise DB
 * Faster and more fault tolerant data loading: Andrew/PGX
 * Automated postgresql.conf Configuration: Greg/2nd Quadrant
 * Parallel pg_dump: Andrew/PGX
 * SET GLOBAL-style configuration in SQL: Greg/2nd Quadrant
 * Track table and index caching to improve optimizer decisions: 
 Robert/Enterprise DB
 
 Thanks to Greg Smith for adding a few bonus ideas I hadn't thought of. What 
 else have you got? I don't think we necessarily have to limit ourselves to 
 core features, BTW: projects like PostGIS and pgAdmin are also clearly 
 popular, and new projects of that scope (or improvements to those!) would no 
 doubt be welcome. Also, I'm highlighting PGXN and an example of how this sort 
 of thing might work.
 
 So, what do you want to work on? Let me know, I'll do as much match-making at 
 the conference as I can.
 
 Best,
 
 David
 
 
 





Re: [HACKERS] What Would You Like To Do?

2011-09-13 Thread David E. Wheeler
On Sep 13, 2011, at 9:43 AM, Hannu Krosing wrote:

 Hannu Krosing / 2ndQuadrant
 
 * more enhancements to pl/python - use real function arguments, 
   store modules in database, direct support for postgresql types,
   operators and functions, automatic startup command, 
   automatic ORM from table definitions, ...
 * various support functionality for replication and automatic growth
   of sharded databases - user defined tuple visibility functions, 
   triggers for DDL and ON COMMIT/ON ROLLBACK, ...
 * putting time travel (which Oracle calls flashback queries) back
   into postgreSQL
 * moving tuple visibility in a separate index-like structure which
   should be highly compressible in most cases, as a way to enabling
   index-only scans, column oriented storage and effective table
   compression, ...

Awesome, thanks!

David




Re: [HACKERS] Range Types - symmetric

2011-09-13 Thread Jeff Davis
On Tue, 2011-09-13 at 12:34 -0400, Christopher Browne wrote:
  select int4range(5,2);
  ERROR:  range lower bound must be less than or equal to range upper bound
 
  Of course, I won't argue this is a bug, but I was wondering if it wouldn't 
  be handy to allow a
  'symmetric' mode in range construction, where, if the first of the pair is 
  higher than the second,
  they are automatically swapped, similar to SYMMETRIC in the BETWEEN clause.

...

 If you have a computation that gets a backwards range, then it is
 more than possible that what you've got isn't an error of getting the
 range backwards, but rather the error that your data is
 overconstraining, and that you don't actually have a legitimate range.

Agreed. On balance, it's just as likely that you miss an error as save a
few keystrokes.

I'll add that it would also cause a little confusion with inclusivity.
What if you do: '[5,2)'::int4range? Is that really '[2,5)' or '(2,5]'?

Regards,
Jeff Davis




Re: [HACKERS] What Would You Like To Do?

2011-09-13 Thread Thom Brown
On 12 September 2011 05:21, David E. Wheeler da...@kineticode.com wrote:
 Hackers,

 Later this week I'm giving a [brief][] for an audience of what I hope will be 
 corporate PostgreSQL users that covers how to get a feature developed for 
 PostgreSQL. The idea here is that there are a lot of organizations out there 
 with very deep commitments to PostgreSQL, who really take advantage of what 
 it has to offer, but also would love additional features PostgreSQL doesn't 
 offer. Perhaps some of them would be willing to fund development of the 
 features they need.

 [brief]: http://postgresopen.org/2011/schedule/presentations/83/

 Toward the end of the presentation, I'd like to make some suggestions and 
 offer to do some match-making. I'm thinking primarily of listing some of the 
 stuff the community would love to see done, along with the names of the folks 
 and/or companies who, with funding, might make it happen. My question for you 
 is: What do you want to work on?

 Here's my preliminary list:

 * Integrated partitioning support: Simon/2nd Quadrant
 * High-CPU concurrency: Robert/Enterprise DB
 * Multimaster replication and clustering: Simon/2nd Quadrant
 * Multi-table indexes: Heikki? Oleg & Teodor?
 * Column-level collation support: Peter/Enterprise DB
 * Faster and more fault tolerant data loading: Andrew/PGX
 * Automated postgresql.conf Configuration: Greg/2nd Quadrant
 * Parallel pg_dump: Andrew/PGX
 * SET GLOBAL-style configuration in SQL: Greg/2nd Quadrant
 * Track table and index caching to improve optimizer decisions: 
 Robert/Enterprise DB

 Thanks to Greg Smith for adding a few bonus ideas I hadn't thought of. What 
 else have you got? I don't think we necessarily have to limit ourselves to 
 core features, BTW: projects like PostGIS and pgAdmin are also clearly 
 popular, and new projects of that scope (or improvements to those!) would no 
 doubt be welcome. Also, I'm highlighting PGXN and an example of how this sort 
 of thing might work.

 So, what do you want to work on? Let me know, I'll do as much match-making at 
 the conference as I can.

I have a wish-list of features, but I don't know of anyone specific
who could work on them.  In addition to some you've mentioned they
are:

* Distributed queries
* Multi-threaded query operations (single queries making use of more
than 1 core in effect)
* Stored procedures
* Automatic failover re-subscription (okay, I don't know what you'd
call this, but where you have several standbys, the primary fails, one
standby is automatically promoted, and the remaining standbys
automatically subscribe to the newly-promoted one without needing a
new base backup)
* ROLLUP and CUBE
* pg_dumpall custom format (Guillaume mentioned this was on his to-do
list previously)

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] What Would You Like To Do?

2011-09-13 Thread Michael Nolan
The lists all seem to be focusing on the things that the developers would
like to add to PostgreSQL, what about some things that users or ISPs might
like to have, and thus perhaps something that companies might actually see
as worth funding?

For example:

A fully integrated ability to query across multiple databases, possibly on
multiple servers, something Oracle has had for nearly two decades.

Complete isolation at the user level, allowing an ISP to support multiple
independent customers on a server without having to fiddle with multiple
back ends each running on a separate port, a feature that MySQL has had for
as far back as I can recall, and one of the reasons ISPs are more likely to
offer MySQL than PostgreSQL.

The ability to restore a table from a backup file to a different table name
in the same database and schema.

A built-in report writer, capable of things like column totals.  (SqlPlus
has this, even though it isn't very pretty.)
--
Mike Nolan


Re: [HACKERS] What Would You Like To Do?

2011-09-13 Thread Joshua D. Drake


On 09/13/2011 10:13 AM, Michael Nolan wrote:

The lists all seem to be focusing on the things that the developers
would like to add to PostgreSQL, what about some things that users or
ISPs might like to have, and thus perhaps something that companies might
actually see as worth funding?


Well just my own two cents ... but it all depends on who is doing the 
funding. At this point 80% of the work CMD codes for Pg (or tertiary 
projects and modules) is funded by companies. So let's not assume that 
companies aren't funding things. They are.




For example:

A fully integrated ability to query across multiple databases,possibly
on multiple servers, something Oracle has had for nearly two decades.


That isn't the approach to take. The fact that Oracle has it is not a 
guarantee that it is useful or good. If you need to query across 
databases (assuming within the same cluster) then you designed your 
database wrong and should have used our SCHEMA support (what Oracle 
calls Namespaces) instead.




Complete isolation at the user level, allowing an ISP to support
multiple independent customers on a server without having to fiddle with
multiple back ends each running on a separate port, a feature that MySQL
has had for as far back as I can recall, and one of the reasons ISPs are
more likely to offer MySQL than PostgreSQL.


Now this would definitely be nice. It is frustrating that we don't have 
per database users.




The ability to restore a table from a backup file to a different table
name in the same database and schema.



This can be done but agreed it is not intuitive.


A built-in report writer, capable of things like column totals.
(SqlPlus has this, even though it isn't very pretty.)


There are a billion and one tools that do this without us having to 
reinvent the wheel. Why would we support that?


Sincerely,

Joshua D. Drake
--
Command Prompt, Inc. - http://www.commandprompt.com/
PostgreSQL Support, Training, Professional Services and Development
The PostgreSQL Conference - http://www.postgresqlconference.org/
@cmdpromptinc - @postgresconf - 509-416-6579



[HACKERS] DBI-LINK not support special support?

2011-09-13 Thread paulo matadr
Hi all,
I use dbi-link to connect to Oracle DB 10g and 11g, and I have a big
problem.
Example: a query like
select * from table(oracle)
should return
çavân
but when dbi-link fetches the data from Oracle it shows
?cv?an

A pure Perl script has no such problem.

Any ideas to help me?


Regards,

 
Paulo

Re: [HACKERS] What Would You Like To Do?

2011-09-13 Thread Michael Nolan
On Tue, Sep 13, 2011 at 12:26 PM, Joshua D. Drake j...@commandprompt.com wrote:


 On 09/13/2011 10:13 AM, Michael Nolan wrote:

 The lists all seem to be focusing on the things that the developers
 would like to add to PostgreSQL, what about some things that users or
 ISPs might like to have, and thus perhaps something that companies might
 actually see as worth funding?


 Well just my own two cents ... but it all depends on who is doing the
 funding. At this point 80% of the work CMD codes for Pg (or tertiary
 projects and modules) is funded by companies. So let's not assume that
 companies aren't funding things. They are.


But perhaps if a few 'commercial' features were on the wish list there would
be more companies willing to fund development?   The developers get a bit of
what they want to work on, the production users get a bit of what they need,
everybody's happy.


 For example:

 A fully integrated ability to query across multiple databases,possibly
 on multiple servers, something Oracle has had for nearly two decades.



 That isn't the approach to take. The fact that Oracle has it is not a
 guarantee that it is useful or good. If you need to query across databases
 (assuming within the same cluster) then you designed your database wrong and
 should have used our SCHEMA support (what Oracle calls Namespaces) instead.


This is the difference between developers and real world users.  Real world
users may not have the ability, time or resources to redesign their
databases just because that's the 'best' way to do something.  Will it be
the most efficient way to do it?  Almost certainly not.

I've been involved in a few corporate mergers, and there was a short term
need to do queries on the combined databases while the tiger team handling
the IT restructuring figured out how (or whether) to merge the databases
together.  (One of these happened to be an Oracle/Oracle situation, it was a
piece of cake even though the two data centers were 750 miles apart and the
table structures had almost nothing in common.  Another was a two week
headache, the third was even worse!)

In a perfect world, it would be nice if one could do combined queries
linking a PostgreSQL database with an Oracle one, or a MySQL one, too.
Because sometimes, that's what you gotta do.  Even something that is several
hundred times slower is going to be faster than merging the databases
together.  When I do this today, I have to write a program (in perl or php)
that accesses both databases and merges it by hand.



 The ability to restore a table from a backup file to a different table
 name in the same database and schema.


 This can be done but agreed it is not intuitive.


Can you elaborate on that a bit, please?  The only way I've been able to do
it is to edit the dump file to change the table name.  That's not very
practical with a several gigabyte dump file, even less so with one that is
much larger.  If this capability already exists, is it documented?


  A built-in report writer, capable of things like column totals.
  (SqlPlus has this, even though it isn't very pretty.)

 There are a billion and one tools that do this without us having to
 reinvent the wheel. Why would we support that?


There are other databases out there, too, why reinvent the wheel by working
on PostgreSQL?  :-)

The question should be, would this be USEFUL?
--
Mike Nolan


Re: [HACKERS] What Would You Like To Do?

2011-09-13 Thread Rodrigo Gonzalez

On 09/13/2011 03:51 PM, Michael Nolan wrote:

For example:


A fully integrated ability to query across multiple
databases,possibly
on multiple servers, something Oracle has had for nearly two
decades.



That isn't the approach to take. The fact that Oracle has it is
not a guarantee that it is useful or good. If you need to query
across databases (assuming within the same cluster) then you
designed your database wrong and should have used our SCHEMA
support (what Oracle calls Namespaces) instead.


This is the difference between developers and real world users.  Real 
world users may not have the ability, time or resources to redesign 
their databases just because that's the 'best' way to do something.  
Will it be the most efficient way to do it?  Almost certainly not.


I've been involved in a few corporate mergers, and there was a short 
term need to do queries on the combined databases while the tiger team 
handling the IT restructuring figured out how (or whether) to merge 
the databases together.  (One of these happened to be an Oracle/Oracle 
situation, it was a piece of cake even though the two data centers 
were 750 miles apart and the table structures had almost nothing in 
common.  Another was a two week headache, the third was even worse!)


In a perfect world, it would be nice if one could do combined queries 
linking a PostgreSQL database with an Oracle one, or a MySQL one, 
too.  Because sometimes, that's what you gotta do.  Even something 
that is several hundred times slower is going to be faster than 
merging the databases together.  When I do this today, I have to write 
a program (in perl or php) that accesses both databases and merges it 
by hand.



Can't you do that with FDW that is present in 9.1?

Check http://wiki.postgresql.org/wiki/Foreign_data_wrappers



Re: [HACKERS] What Would You Like To Do?

2011-09-13 Thread Tom Lane
Rodrigo Gonzalez rjgonz...@estrads.com.ar writes:
 In a perfect world, it would be nice if one could do combined queries 
 linking a PostgreSQL database with an Oracle one, or a MySQL one, 

 Can't you do that with FDW that is present in 9.1?

FDW provides the structure within which that will eventually be
possible, but there's no Oracle or MySQL wrapper today ... and there are
a lot of FDW restrictions that need to be worked on, too.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] What Would You Like To Do?

2011-09-13 Thread Joshua D. Drake


On 09/13/2011 11:51 AM, Michael Nolan wrote:



The ability to restore a table from a backup file to a different
table
name in the same database and schema.


This can be done but agreed it is not intuitive.


Can you elaborate on that a bit, please?  The only way I've been able to
do it is to edit the dump file to change the table name.  That's not
very practical with a several gigabyte dump file, even less so with one
that is much larger.  If this capability already exists, is it documented?


You use the -Fc method, extract the TOC and edit just the TOC (so you 
don't have to edit a multi-gig file)
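The -Fc/TOC workflow above can be sketched as follows (a sketch, not tested against a live cluster; the database, dump file, and table names are made up, but the ';' comment prefix is pg_restore's documented list-file convention — and note that restoring under a different table name would still need a separate ALTER TABLE ... RENAME afterwards):

```shell
# Selective restore from a custom-format dump by editing only the small TOC
# file instead of the multi-gigabyte dump itself.
# ("mydb", "mydb.dump", and the table names below are hypothetical.)
#
#   pg_dump -Fc mydb > mydb.dump        # custom-format dump
#   pg_restore -l mydb.dump > toc.list  # extract the table of contents
#   (edit toc.list: prefix ';' to every entry you do NOT want restored)
#   pg_restore -L toc.list -d mydb mydb.dump
#
# The edit step, shown here on a fake two-entry TOC listing:
toc='215; 1259 16385 TABLE public accounts postgres
216; 1259 16390 TABLE public history postgres'
# keep only the "accounts" entry; comment out everything else
filtered=$(printf '%s\n' "$toc" | sed '/ accounts /!s/^/;/')
printf '%s\n' "$filtered"
```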





(SqlPlus has this, even though it isn't very pretty.)

A built-in report writer, capable of things like column totals.

There are a billion and one tools that do this without us having to
reinvent the wheel. Why would we support that?


There are other databases out there, too, why reinvent the wheel by
working on PostgreSQL?  :-)

The question should be, would this be USEFUL?


Personally, I don't think so but others may disagree.

Joshua D. Drake


--
Command Prompt, Inc. - http://www.commandprompt.com/
PostgreSQL Support, Training, Professional Services and Development
The PostgreSQL Conference - http://www.postgresqlconference.org/
@cmdpromptinc - @postgresconf - 509-416-6579



Re: [HACKERS] What Would You Like To Do?

2011-09-13 Thread Rodrigo Gonzalez

On 09/13/2011 04:52 PM, Tom Lane wrote:

Rodrigo Gonzalez rjgonz...@estrads.com.ar writes:

In a perfect world, it would be nice if one could do combined queries
linking a PostgreSQL database with an Oracle one, or a MySQL one,

Can't you do that with FDW that is present in 9.1?

FDW provides the structure within which that will eventually be
possible, but there's no Oracle or MySQL wrapper today ... and there are
a lot of FDW restrictions that need to be worked on, too.

regards, tom lane


They are both listed at the wiki.
I know there are a lot of limitations, but the OP's message says: "Even 
something that is several hundred times slower is going to be faster 
than merging the databases together.  When I do this today, I have to 
write a program (in perl or php) that accesses both databases and merges 
it by hand."

Am I wrong that this is currently possible using FDW?

Thanks

Rodrigo Gonzalez




Re: [HACKERS] What Would You Like To Do?

2011-09-13 Thread Kevin Grittner
Rodrigo Gonzalez  wrote:
 On 09/13/2011 04:52 PM, Tom Lane wrote:
 
 FDW provides the structure within which that will eventually be
 possible, but there's no Oracle or MySQL wrapper today ...
 
 They are both listed at wiki
 
And here:
 
http://www.pgxn.org/tag/foreign%20data%20wrapper/
 
-Kevin





Re: [HACKERS] What Would You Like To Do?

2011-09-13 Thread Michael Nolan
On Tue, Sep 13, 2011 at 2:55 PM, Joshua D. Drake j...@commandprompt.com wrote:


 On 09/13/2011 11:51 AM, Michael Nolan wrote:


The ability to restore a table from a backup file to a different
table
name in the same database and schema.


This can be done but agreed it is not intuitive.


 Can you elaborate on that a bit, please?  The only way I've been able to
 do it is to edit the dump file to change the table name.  That's not
 very practical with a several gigabyte dump file, even less so with one
 that is much larger.  If this capability already exists, is it documented?


 You use the -Fc method, extract the TOC and edit just the TOC (so you don't
 have to edit a multi-gig file)

That is, at best, a bit obscure.  I've wondered at times if the -Ft (tar)
option would have any benefits here, though it appears to have significant
downsides.

A downside of either method may be that I can't predict in advance when I
will want to do a restore of a single table from a backup file,
so I'd have to always use that method of generating the file.

I did propose an extension to pg_restore a couple of months ago to add an
option to re-name a table as it is restored, but that seemed to have
generated no interest.

Maybe an external tool that reads a pg_dump file looking for a specific
table and writes that portion of the dump file to a separate file, changing
the table name would be easier?  It'd probably have to handle most of or all
of the different pg_dump formats, but that doesn't sound like an
unachievable goal.
--
Mike Nolan


Re: [HACKERS] What Would You Like To Do?

2011-09-13 Thread Alexander Korotkov
On Mon, Sep 12, 2011 at 8:21 AM, David E. Wheeler da...@kineticode.com wrote:

 So, what do you want to work on? Let me know, I'll do as much match-making
 at the conference as I can.

Here is my list:
* Additional approximate string matching functions and index access for them
using gin/gist/spgist.
* Signature indexing with gist/spgist in various fields. For example,
indexing of image signatures with similar-image retrieval.
* Statistics collection and selectivity estimation for geometric datatypes.

--
With best regards,
Alexander Korotkov.


Re: [HACKERS] WIP: Fast GiST index build

2011-09-13 Thread Stefan Keller
Robert,

2011/9/6 Alexander Korotkov aekorot...@gmail.com:
 GiST use serial numbers of operations for concurrency. In current
 implementation xlog record ids are used in capacity of that numbers. In
 unlogged table no xlog records are produced. So, we haven't serial numbers
 of operations. AFAIK, it's enough to provide some other source of serial
 number in order to make GiST work with unlogged tables.

GiST is IMHO quite broadly used. I use it for example for indexing
geometry and hstore types and there's no other choice there.
Do you know whether the UNLOGGED option in CREATE TABLE will support GiST
in the next release?

Stefan



Re: [HACKERS] performance-test farm

2011-09-13 Thread Tomas Vondra
Dne 12.5.2011 08:54, Greg Smith napsal(a):
 Tomas Vondra wrote:
 Actually I was not aware of how the buildfarm works, all I
 knew was there's something like that because some of the hackers mention
 a failed build on the mailing list occasionally.

 So I guess this is a good opportunity to investigate it a bit ;-)

 Anyway I'm not sure this would give us the kind of environment we need
 to do benchmarks ... but it's worth thinking about.
   
 
 The idea is that buildfarm systems that are known to have a) reasonable
 hardware and b) no other concurrent work going on could also do
 performance tests.  The main benefit of this approach is it avoids
 duplicating all of the system management and source code building work
 needed for any sort of thing like this; just leverage the buildfarm
 parts when they solve similar enough problems.  Someone has actually
 done all that already; source code was last sync'd to the build farm
 master at the end of March:  https://github.com/greg2ndQuadrant/client-code
 
 By far the #1 thing needed to move this forward from where it's stuck at
 now is someone willing to dig into the web application side of this. 
 We're collecting useful data.  It needs to now be uploaded to the
 server, saved, and then reports of what happened generated.  Eventually
 graphs of performance results over time will be straightforward to
 generate.  But the whole idea requires that someone else (not Andrew, who has
 enough to do) sit down and figure out how to extend the web UI with
 these new elements.

Hi all,

it seems CSPUG will get two refurbished servers at the end of this
month. We plan to put both of them to the buildfarm - one for regular
testing with Czech locales and I'd like to use the other one for the
proposed performance testing.

I'm willing to put some time into this, but I'll need help with
preparing the 'action plan' (because you know - I live in EU, and in EU
everything is driven by action plans).

AFAIK what needs to be done is:

1) preparing the hw, OS etc. - ok
2) registering the machine as a buildfarm member - ok
3) modifying the buildfarm client-code to collect performance data

  - What data should be collected prior to the benchmark?

 a) info about the environment (to make sure it's safe)?
 b) something else?

  - What performance tests should be executed?

 a) let's start with pgbench - select-only and regular
 b) something else in the future? DSS/DWH workloads?
 c) special tests (spinlocks, db that fits to RAM, ...)

4) modifying the buildfarm server-code to accept and display
   performance data

   - not really sure what needs to be done here
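As a concrete sketch of step 3 (hedged: the pgbench flags are standard, but the scale factor, client counts, duration, and database name are placeholders, and the report text below is a fabricated sample in the shape of pgbench's usual end-of-run output):

```shell
# Hypothetical benchmark step for the buildfarm client (names are examples):
#   createdb bench
#   pgbench -i -s 100 bench             # initialize test tables
#   pgbench -S -c 8 -j 2 -T 300 bench   # select-only run
#   pgbench    -c 8 -j 2 -T 300 bench   # regular (TPC-B-like) run
#
# The client then needs to pull the tps figure out of pgbench's report
# so it can be uploaded to the server.  Sample report text:
report='transaction type: SELECT only
scaling factor: 100
number of clients: 8
duration: 300 s
tps = 5432.10 (including connections establishing)
tps = 5440.87 (excluding connections establishing)'
# keep the "excluding connections" figure as the result to record
tps=$(printf '%s\n' "$report" | awk '/excluding/ {print $3}')
printf 'tps=%s\n' "$tps"
```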

regards
Tomas



Re: [HACKERS] What Would You Like To Do?

2011-09-13 Thread Dann Corbit
From: pgsql-hackers-ow...@postgresql.org 
[mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Michael Nolan
Sent: Tuesday, September 13, 2011 11:51 AM
To: Joshua D. Drake
Cc: pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] What Would You Like To Do?


On Tue, Sep 13, 2011 at 12:26 PM, Joshua D. Drake 
j...@commandprompt.com wrote:

On 09/13/2011 10:13 AM, Michael Nolan wrote:
The lists all seem to be focusing on the things that the developers
would like to add to PostgreSQL, what about some things that users or
ISPs might like to have, and thus perhaps something that companies might
actually see as worth funding?

Well just my own two cents ... but it all depends on who is doing the funding. 
At this point 80% of the work CMD codes for Pg (or tertiary projects and 
modules) is funded by companies. So let's not assume that companies aren't 
funding things. They are.

But perhaps if a few 'commercial' features were on the wish list there would be 
more companies willing to fund development?   The developers get a bit of what 
they want to work on, the production users get a bit of what they need, 
everybody's happy.

For example:

A fully integrated ability to query across multiple databases,possibly
on multiple servers, something Oracle has had for nearly two decades.

That isn't the approach to take. The fact that Oracle has it is not a guarantee 
that it is useful or good. If you need to query across databases (assuming 
within the same cluster) then you designed your database wrong and should have 
used our SCHEMA support (what Oracle calls Namespaces) instead.

This is the difference between developers and real world users.  Real world 
users may not have the ability, time or resources to redesign their databases 
just because that's the 'best' way to do something.  Will it be the most 
efficient way to do it?  Almost certainly not.

I've been involved in a few corporate mergers, and there was a short term need 
to do queries on the combined databases while the tiger team handling the IT 
restructuring figured out how (or whether) to merge the databases together.  (One 
of these happened to be an Oracle/Oracle situation, it was a piece of cake even 
though the two data centers were 750 miles apart and the table structures had 
almost nothing in common.  Another was a two week headache, the third was even 
worse!)

In a perfect world, it would be nice if one could do combined queries linking a 
PostgreSQL database with an Oracle one, or a MySQL one, too.  Because 
sometimes, that's what you gotta do.  Even something that is several hundred 
times slower is going to be faster than merging the databases together.  When I 
do this today, I have to write a program (in perl or php) that accesses both 
databases and merges it by hand.

Microsoft uses Linked servers.
DB/2 uses DB/2 Connect
Informix uses Informix Connect
Etc.

At CONNX, our product suite provides this ability generically from any data 
source collection.  It is obvious why such a thing is utterly mandatory for 
every large business.  For example:
The business purchases a CRM system for customer relationship management like 
SAP.
The business purchases a HCM system for Human Capital Management like 
Peoplesoft.
The business purchases a Manufacturing system like MAXIM for their 
manufacturing systems.
Etc., etc., etc.

Some of these systems may have the same database type, but it is highly 
unlikely that every solution to a business problem in the entire organization 
uses the same underlying database.
People buy or build software systems to solve their business problems.  There 
is a low probability that each and every business problem was solved by the 
same sets of tools from the same vendors.
Therefore, the ability to process queries across heterogeneous systems is a 
fundamental business need.

The larger the company the more database systems you will find.  But even 
teeny-tiny organizations tend to have several different database systems needed 
to run their business.

{snip}



[HACKERS] foreign table: WARNING: problem in alloc set ExecutorState detected write past chunk end in block ...

2011-09-13 Thread Josh Kupershmidt
While trying to test out the "EXPLAIN and nfiltered, take two" patch
with its test file explaintesti, I noticed I was getting a warning
message like:

 WARNING:  problem in alloc set ExecutorState: detected write past
chunk end in block 0x101134e00, chunk 0x101134f38

I got the same warning on unpatched git head as well. I was able to
reproduce the problem with the attached input.sql file, along with a
dummy input file, dummy_passwd, for the foreign table to read from. I
get the above warning along with:

ERROR:  missing data for column pass
CONTEXT:  COPY passwd, line 1: ##
STATEMENT:  SELECT * FROM passwd WHERE uid  4;

The error message is not surprising, since I am feeding the foreign
table improperly-formatted lines. The warning is what concerns me;
from the comments in aset.c and the warning message text itself, I
gather this is not expected. I tested on OS X 10.6, with
--enable-cassert --enable-debug.

Josh


input.sql
Description: Binary data


dummy_passwd
Description: Binary data



Re: [HACKERS] Rough impl of IGNORE NULLS for window functions

2011-09-13 Thread David Fetter
On Tue, Sep 13, 2011 at 04:18:48AM -0400, Joe Banafato wrote:
 Hi all,
 
 I wrote an implementation of last_value that ignores null values,
 effectively achieving the behavior of last_value(exp ignore nulls).
 The code is up on BitBucket [1] for the moment.  Thoughts:

Just in case of lossage on bitbucket, could you please send a patch
vs. git master to the list?

Cheers,
David (pretty excited about this feature).
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate



Re: [HACKERS] foreign table: WARNING: problem in alloc set ExecutorState detected write past chunk end in block ...

2011-09-13 Thread Tom Lane
Josh Kupershmidt schmi...@gmail.com writes:
 While trying to test out the "EXPLAIN and nfiltered, take two" patch
 with its test file explaintesti, I noticed I was getting a warning
 message like:

  WARNING:  problem in alloc set ExecutorState: detected write past
 chunk end in block 0x101134e00, chunk 0x101134f38

 I got the same warning on unpatched git head as well.

Can't reproduce that here, on either 32-bit or 64-bit hardware.
However, this sort of issue is often exceedingly sensitive to
environment and build options.  What's your platform, what did you use
for configure options, etc?

[ thinks for a bit... ]  It could easily also depend on the precise
lengths of strings being pushed around.  What's the exact length of
the foreign table pathname you actually used?  And is that the exact
dummy file you used?

regards, tom lane



[HACKERS] New releases for back branches 8.4 and 9.0?

2011-09-13 Thread Mark Kirkwood
Recently some pretty significant join optimization improvements have 
made their way into these branches. Are we looking at cutting an 8.4.9 
and 9.0.5 soon?


Cheers

Mark



Re: [HACKERS] foreign table: WARNING: problem in alloc set ExecutorState detected write past chunk end in block ...

2011-09-13 Thread Josh Kupershmidt
On Tue, Sep 13, 2011 at 6:29 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Josh Kupershmidt schmi...@gmail.com writes:
 While trying to test out the "EXPLAIN and nfiltered, take two" patch
 with its test file explaintesti, I noticed I was getting a warning
 message like:

  WARNING:  problem in alloc set ExecutorState: detected write past
 chunk end in block 0x101134e00, chunk 0x101134f38

 I got the same warning on unpatched git head as well.

 Can't reproduce that here, on either 32-bit or 64-bit hardware.
 However, this sort of issue is often exceedingly sensitive to
 environment and build options.  What's your platform, what did you use
 for configure options, etc?

I got the warning on OS X 10.6 64-bit. The only configure options I
used were --prefix=/path/ --enable-cassert --enable-debug.

 [ thinks for a bit... ]  It could easily also depend on the precise
 lengths of strings being pushed around.  What's the exact length of
 the foreign table pathname you actually used?  And is that the exact
 dummy file you used?

I saw the warning first with my actual /etc/passwd file, which had
those comment lines to trigger the error, while trying Marko's
explaintesti sample file[1]. Then I reproduced the problem on git head
with that dummy file at location
/Users/josh/pg_permanent/warning_alloc/dummy_passwd.

FWIW, I can't reproduce the problem on a 64-bit Ubuntu 11.04 machine either.

Josh

-- 
[1] http://archives.postgresql.org/pgsql-hackers/2011-09/msg00434.php



Re: [HACKERS] foreign table: WARNING: problem in alloc set ExecutorState detected write past chunk end in block ...

2011-09-13 Thread Tom Lane
Josh Kupershmidt schmi...@gmail.com writes:
 On Tue, Sep 13, 2011 at 6:29 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Can't reproduce that here, on either 32-bit or 64-bit hardware.
 However, this sort of issue is often exceedingly sensitive to
 environment and build options.  What's your platform, what did you use
 for configure options, etc?

 I got the warning on OS X 10.6 64-bit. The only configure options I
 used were --prefix=/path/ --enable-cassert --enable-debug.

Hmm.  I tried on 64-bit Lion (10.7.1) and had no luck.

Please try setting a gdb breakpoint on the warning elog (it's in aset.c)
and looking to see what's the contents of the complained-of chunk.  That
might give us a hint anyway ...

regards, tom lane



Re: [HACKERS] New releases for back branches 8.4 and 9.0?

2011-09-13 Thread Tom Lane
Mark Kirkwood mark.kirkw...@catalyst.net.nz writes:
 Recently some pretty significant join optimization improvements have 
 made their way into these branches. Are we looking at cutting an 8.4.9 
 and 9.0.5 soon?

The plan that was being batted around the core list was to schedule a
set of update releases the last week of September, ie two weeks after
9.1.0, which would give us a chance for a quick 9.1.1 too in case
anybody found any really nasty bugs.  It's not definite yet, but I agree
that we're overdue for back-branch updates.

regards, tom lane



Re: [HACKERS] SSL key with passphrase

2011-09-13 Thread Thom Brown
On 13 September 2011 15:17, Tom Lane t...@sss.pgh.pa.us wrote:
 Thom Brown t...@linux.com writes:
 There appears to be a problem with starting Postgres if the SSL key
 has a passphrase on it.

 It's documented that that's unsupported.  Given the number of ways to
 start a postmaster, and the fact that many of them are noninteractive,
 I don't think it's very productive for us to worry about it.

I've managed to get pg_ctl to accept the passphrase with the -w
option.  Works fine like that.  Since that works, perhaps the page
referring to SSL could mention this.

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] unite recovery.conf and postgresql.conf

2011-09-13 Thread Fujii Masao
On Wed, Sep 14, 2011 at 1:10 AM, Simon Riggs si...@2ndquadrant.com wrote:
 On Tue, Sep 13, 2011 at 2:51 PM, Peter Eisentraut pete...@gmx.net wrote:
 On tis, 2011-09-13 at 14:46 +0900, Fujii Masao wrote:
 Are you still thinking the backward-compatibility (i.e., the
 capability to specify recovery parameters in recovery.conf) is
 required?

 I think parameters related to a particular recovery, e.g.,
 recovery_target_time, fit better into a recovery.conf that is renamed
 after the recovery is complete.  That was the original idea, after all.

 Everything that is a permanent setting across multiple recovery
 attempts, and anything related to replication, better fits elsewhere.

 I've just been thinking that a better way would be to make
 recovery.conf an extension of postgresql.conf when we are in archive
 recovery.

 So treat postgresql.conf as if it has an automatic include of
 recovery.conf in it. The file format is the same.

 That way we don't need to change existing behaviour, so any software
 that relies upon this will still work, but we gain the additional
 ability to reload values in recovery.conf (where appropriate).

 We can change the .sample files to show parameters that make more
 sense in one or the other file, rather than making it a hard
 requirement for them to appear in specific files which will be a real
 pain in the ass.

 Internal changes would then be to move existing recovery.conf
 parameters into guc.c and revise the manual accordingly.

Sounds reasonable. I'll revise the patch that way.
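Simon's proposal amounts to something like the following sketch (hedged: the parameter values are examples only, and the include is implicit — performed by the server during archive recovery, not written by the user):

```
# postgresql.conf -- always read; permanent/replication settings live here
wal_level = hot_standby
max_wal_senders = 3
# ...implicitly, when the server is in archive recovery:
# include 'recovery.conf'

# recovery.conf -- same key = value format; per-recovery settings live here
restore_command = 'cp /mnt/archive/%f %p'
recovery_target_time = '2011-09-13 12:00:00'
```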

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



[HACKERS] [v9.2] make_greater_string() does not return a string in some cases

2011-09-13 Thread Kyotaro HORIGUCHI
This is a rebased patch of `Allow encoding specific character
incrementer' (https://commitfest.postgresql.org/action/patch_view?id=602).

In addition to the patch, an increment sanity-check program for the new
functions pg_generic_charinc and pg_utf8_increment is attached.
 
-- 
Kyotaro Horiguchi
NTT Open Source Software Center
diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c
index 5d999e6..b7f1922 100644
--- a/src/backend/utils/adt/selfuncs.c
+++ b/src/backend/utils/adt/selfuncs.c
@@ -5652,6 +5652,18 @@ pattern_selectivity(Const *patt, Pattern_Type ptype)
 
 
 /*
+ * This function is the character increment function for bytea used in
+ * make_greater_string(); it has the same interface as pg_wchar_tbl.charinc.
+ */
+static bool byte_increment(unsigned char *ptr, int len)
+{
+	if (*ptr >= 255) return false;
+
+	(*ptr)++;
+	return true;
+}
+
+/*
  * Try to generate a string greater than the given string or any
  * string it is a prefix of.  If successful, return a palloc'd string
  * in the form of a Const node; else return NULL.
@@ -5690,6 +5702,7 @@ make_greater_string(const Const *str_const, FmgrInfo *ltproc, Oid collation)
 	int			len;
 	Datum		cmpstr;
 	text	   *cmptxt = NULL;
+	character_incrementer charincfunc;
 
 	/*
 	 * Get a modifiable copy of the prefix string in C-string format, and set
@@ -5751,27 +5764,38 @@ make_greater_string(const Const *str_const, FmgrInfo *ltproc, Oid collation)
 		}
 	}
 
+	if (datatype != BYTEAOID)
+		charincfunc = pg_database_encoding_character_incrementer();
+	else
+		charincfunc = byte_increment;
+
 	while (len > 0)
 	{
-		unsigned char *lastchar = (unsigned char *) (workstr + len - 1);
-		unsigned char savelastchar = *lastchar;
+		int charlen;
+		unsigned char *lastchar;
+		unsigned char savelastbyte;
+		Const	   *workstr_const;
+		
+		if (datatype == BYTEAOID)
+			charlen = 1;
+		else
+			charlen = len - pg_mbcliplen(workstr, len, len - 1);
+
+		lastchar = (unsigned char *) (workstr + len - charlen);
 
 		/*
-		 * Try to generate a larger string by incrementing the last byte.
+		 * savelastbyte has meaning only for datatype == BYTEAOID
 		 */
-		while (*lastchar < (unsigned char) 255)
-		{
-			Const	   *workstr_const;
+		savelastbyte = *lastchar;
 
-			(*lastchar)++;
+		/*
+		 * Try to generate a larger string by incrementing the last byte or
+		 * character.
+		 */
 
+		if (charincfunc(lastchar, charlen)) {
 			if (datatype != BYTEAOID)
-			{
-/* do not generate invalid encoding sequences */
-if (!pg_verifymbstr(workstr, len, true))
-	continue;
 workstr_const = string_to_const(workstr, datatype);
-			}
 			else
 workstr_const = string_to_bytea_const(workstr, len);
 
@@ -5786,26 +5810,17 @@ make_greater_string(const Const *str_const, FmgrInfo *ltproc, Oid collation)
 pfree(workstr);
 return workstr_const;
 			}
-
+			
 			/* No good, release unusable value and try again */
 			pfree(DatumGetPointer(workstr_const-constvalue));
 			pfree(workstr_const);
 		}
 
-		/* restore last byte so we don't confuse pg_mbcliplen */
-		*lastchar = savelastchar;
-
 		/*
-		 * Truncate off the last character, which might be more than 1 byte,
-		 * depending on the character encoding.
+		 * Truncate off the last character or restore last byte for BYTEA.
 		 */
-		if (datatype != BYTEAOID && pg_database_encoding_max_length() > 1)
-			len = pg_mbcliplen(workstr, len, len - 1);
-		else
-			len -= 1;
-
-		if (datatype != BYTEAOID)
-			workstr[len] = '\0';
+		len -= charlen;
+		workstr[len] = (datatype != BYTEAOID ? '\0' : savelastbyte);
 	}
 
 	/* Failed... */
diff --git a/src/backend/utils/mb/wchar.c b/src/backend/utils/mb/wchar.c
index f23732f..00b3e2a 100644
--- a/src/backend/utils/mb/wchar.c
+++ b/src/backend/utils/mb/wchar.c
@@ -1,3 +1,4 @@
+
 /*
  * conversion functions between pg_wchar and multibyte streams.
  * Tatsuo Ishii
@@ -1336,53 +1337,254 @@ pg_utf8_islegal(const unsigned char *source, int length)
 
 /*
  *---
+ * character incrementer
+ *
+ * These functions accept charptr, a pointer to the first byte of a
+ * maybe-multibyte character. They try to `increment' the character and
+ * return true on success.  If these functions return false, the character
+ * is left untouched.  These functions must be implemented in correspondence
+ * with the verifiers; in other words, a character rewritten by one of these
+ * functions must pass the check by the corresponding pg_*_verifier() when
+ * true is returned.  Returning the return value of the corresponding
+ * pg_*_verifier() finally avoids such an inconsistency when something goes
+ * wrong.
+ * ---
+ */
+
+#ifndef FRONTEND
+static bool pg_generic_charinc(unsigned char *charptr, int len)
+{
+ 	unsigned char *lastchar = (unsigned char *) (charptr + len - 1);
+ 	unsigned char savelastchar = *lastchar;
+ 	const char *const_charptr = (const char *)charptr;
+ 
+ 	while (*lastchar < (unsigned char) 255)
+ 	{
+ 		

[HACKERS] [REVIEW] Generate column names for subquery expressions

2011-09-13 Thread Kyotaro HORIGUCHI
This is a review for the patch `Generate column names for
subquery expressions'
(https://commitfest.postgresql.org/action/patch_view?id=632)



Summary

The patch is in context diff format.
It applies cleanly on HEAD and make check succeeded.
The documentation needs to be updated.


Purpose and function of this patch


This patch intends to name those columns in the outermost
SELECT clause that are currently left unnamed - seen as `?column?' - and
to fix `unnatural' naming - seen as `int4' or `case'.

This patch derives column names for T_SubLink parse nodes
corresponding to EXISTS, ARRAY, and scalar subqueries, in addition to
the currently processed parse node types.

It seems reasonable that (ALL|ANY|ROWCOMPARE|CTE)_SUBLINK is left
unnamed.


Patch application, regression test

The patch applies cleanly onto HEAD, and make check yields no errors.
The patch adds no additional test cases, which seems fine.
The coding style in this patch follows the project's conventions.


Behavior changes

The behavior of column naming changes as follows.

  STATEMENT                       AFTER    BEFORE
 -------------------------------------------------
 select (select 1 as foo)         foo      ?column?
 select (exists (select 1))       exists   ?column?
 select (array (select 1 as x))   array    ?column?
 select (select 1 as aaa)::int    aaa      int4

 select case when true then 1 else (select 1 as foo) end;
                                  foo      case

The above are the same as described. But the following expression
returns a somewhat confusing outcome.

 select case when true then (select 2 as bar) else (select 1 as foo) end;
  foo 
 -
2
 (1 row)

But this patch is not to blame for the behavior. The following is
seen for unpatched pg.

 # create table foo (a int, b int, c int);
 # insert into foo values (1, 100, -100), (0, 10, -10), (-1, 25, -25);
 # select case when a > 0 then b else c end from foo;
   c   
 --
(snipped)

 # select case a when -1 then c when 1 then a else b end from foo;
   b  
 -
(snipped)

Although this behavior seems a bit unnatural, it is not an issue
for this patch.



Performance


This patch adds no extra load such as loops, recursive calls, or
deep call chains. The added code runs only when EXISTS(), ARRAY(), or a
subquery appears in the column list of a SELECT clause. So I think this
patch has only a negligible impact on performance.


Gleaning

This patch assumes node(subLinkType==EXPR_SUBLINK)->subselect is
not null, and it seems that gram.y says the assumption is
correct.

I think this patch needs no new documentation, but the changed behavior
quoted in the existing documentation needs to be updated. Maybe only one
change is required, as far as I have seen.

http://www.postgresql.org/docs/9.0/static/sql-expressions.html

 4.2.11
.. 
 SELECT ARRAY(SELECT oid FROM pg_proc WHERE proname LIKE 'bytea%');
   ?column?
 -
  {2011,1954,1948,1952,1951,1244,1950,2005,1949,1953,2006,31}



Sincerely,

-- 
Kyotaro Horiguchi



Re: [HACKERS] What Would You Like To Do?

2011-09-13 Thread David E. Wheeler
On Sep 13, 2011, at 3:12 PM, Kevin Grittner wrote:

 And here:
 
 http://www.pgxn.org/tag/foreign%20data%20wrapper/

A shorter URL with more results:

  http://www.pgxn.org/tag/fdw

Best,

David



Re: [HACKERS] What Would You Like To Do?

2011-09-13 Thread David E . Wheeler
On Sep 12, 2011, at 9:50 AM, Dimitri Fontaine wrote:

 Thanks to Greg Smith for adding a few bonus ideas I hadn't thought of. What
 else have you got? I don't think we necessarily have to limit ourselves to
 core features, BTW: projects like PostGIS and pgAdmin are also clearly
 popular, and new projects of that scope (or improvements to those!) would no
 doubt be welcome.
 
 You could add DDL Triggers from me (2ndQuadrant) and process-based
 parallel loading in pgloader (currently thread based, sucks).

Got it, thanks. What about additional extension features? Got a road map for 
that?

Best,

David

