Re: [HACKERS] Streaming replication status

2010-01-14 Thread Fujii Masao
On Wed, Jan 13, 2010 at 5:47 PM, Greg Smith g...@2ndquadrant.com wrote:
 The pieces are coming together...summary:

Thanks for the summary!

 -Also add pg_standbys_xlog_location() on the master:  while they could live 
 without it, this really helps out the alert/monitor script writer whose use 
 cases keep popping up here.

 Details...the original idea from Fujii was:

 I'm thinking something like pg_standbys_xlog_location() [on the primary] 
 which returns
 one row per standby server, showing the pid of walsender, host name/
 port number/user OID of the standby, the location where the standby
 has written/flushed WAL. DBA can measure the gap from the
 combination of pg_current_xlog_location() and pg_standbys_xlog_location()
 via one query on the primary.

This function is useful but not essential for troubleshooting, I think.
So I'd like to postpone it.

 It seems to me that we should have at least two functions available
 on the slave: latest xlog location received and synced to disk by
 walreceiver (ie, we are guaranteed to be able to replay up to here);
 and latest xlog location actually replayed (ie, the state visible
 to queries on the slave).  The latter perhaps could be
 pg_current_xlog_location().

 So there's the first two of them:  on the slave, pg_current_xlog_location()
 giving the latest location replayed, and a new one named something like
 pg_standby_received_xlog_location().  If you take the position that an
 unreachable standby does provide answers to these questions too (you just
 won't like them), this pair might be sufficient to ship.

Done.

 git://git.postgresql.org/git/users/fujii/postgres.git
 branch: replication

I added two new functions:

(1) pg_last_xlog_receive_location() reports the last WAL location received
and synced by walreceiver. If streaming replication is still in progress,
this will increase monotonically. If streaming replication has completed,
this value will remain static at the value of the last WAL record
received and synced. When the server has been started without streaming
replication, the return value will be InvalidXLogRecPtr (0/0).

(2) pg_last_xlog_replay_location() reports the last WAL location replayed
during recovery. If recovery is still in progress, this will increase
monotonically. If recovery has completed, this value will remain
static at the value of the last WAL record applied. When the server has
been started normally without recovery, the return value will be
InvalidXLogRecPtr (0/0).
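A monitoring script could combine these with pg_current_xlog_location() on the primary to compute byte lag. The arithmetic on the textual `xlogid/xrecoff` form can be done outside the server; the helper below is an illustrative sketch (not part of the patch), assuming the standard `hex/hex` LSN text format:

```python
def lsn_to_bytes(lsn):
    """Convert a WAL location like '0/3000158' (hex xlogid/xrecoff)
    to an absolute byte position: xlogid * 2**32 + xrecoff."""
    xlogid, xrecoff = lsn.split("/")
    return (int(xlogid, 16) << 32) + int(xrecoff, 16)

def lag_bytes(primary_lsn, standby_lsn):
    """Approximate replication lag in bytes between two WAL locations."""
    return lsn_to_bytes(primary_lsn) - lsn_to_bytes(standby_lsn)

print(lag_bytes("0/3000158", "0/3000100"))  # 88
```

The same computation works for received-vs-replayed lag on the standby, comparing pg_last_xlog_receive_location() against pg_last_xlog_replay_location().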

Since it seems somewhat odd to me for pg_current_xlog_location() to report
the WAL replay location, I didn't do that. But if the majority feel that it's
sane, I'll merge pg_last_xlog_replay_location() into pg_current_xlog_location().

Thoughts? Better names?

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Streaming replication and non-blocking I/O

2010-01-14 Thread Fujii Masao
On Wed, Jan 13, 2010 at 7:27 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
 the frontend always puts the
 connection to non-blocking mode, while the backend uses blocking mode.

Really? By default (i.e., without explicitly setting it by using
PQsetnonblocking()), the connection is set to blocking mode even
in the frontend. Am I missing something?

 At least with SSL, I think it's possible for pq_wait() to return false
 positives, if the SSL layer decides to renegotiate the connection
 causing data to flow in the other direction in the underlying TCP
 connection. A false positive would cause walsender to block
 indefinitely on the pq_getbyte() call.

Sorry. I could not understand that issue scenario. Could you explain
it in more detail?

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



Re: [HACKERS] [PATCH] remove redundant ownership checks

2010-01-14 Thread KaiGai Kohei

(2010/01/14 15:04), Greg Smith wrote:

KaiGai Kohei wrote:

(2010/01/14 4:54), Tom Lane wrote:

Robert Haas robertmh...@gmail.com writes:

On Wed, Jan 13, 2010 at 1:34 PM, Tom Lane t...@sss.pgh.pa.us wrote:

If I thought this patch represented incremental movement in the
direction of a better security-check factorization, I'd be fine
with it,
but that's not clear either. The argument for it is that these checks
are redundant with some other ones, but why should we remove these and
not the other ones instead?

That's a good question, and I have an answer [ namely that ALTER TABLE
is the right place ].

But note Stephen Frost's concurrent reply suggesting that he wants to
move the checks *out* of ALTER TABLE. With his plan, these checks
are probably in the right place already.


Note that this patch tries to remove redundant checks in this code path.
If ATPrepCmd() is not the right place to apply permission checks,
we should remove the invocation of ATSimplePermissions()...


I'm glad to see this discussion thread, because I think it's really
getting to the heart of the core issues that have made development in
this area (like SEPostgreSQL) more complicated than it needs to be. I
just talked with Stephen for a bit tonight to try and get my head
wrapped around what you're all trying to do, and from what I gather the
plan here that's taking shape looks like this:

1) Reach an agreement on the preferred way to handle permissions checks
in these situations where there is more than one such check going on,
and therefore a direction to consolidate all of them toward
2) Update the messy ALTER TABLE code to use that preferred method
3) Find another messy chunk of code and confirm the same style of
refactoring can be applied to it as well (Stephen suggested ALTER TYPE
as a candidate here)
4) Finish up converting any other ALTER commands that have split checks
in them, since ALTER seems to be the operation that's most prone to this
type of issue
5) Confirm that the permissions checks in the major operations with
simpler permissions related code paths (CREATE etc.) are also free of
split checks
6) Add an extended alternate permissions checker to the now-consolidated
code, such as considering security labels. If the previous steps were
done right, this should have a smaller logical and diff footprint than
previous such efforts because it will touch far fewer places.

Tom's objection to this patch is that without at least a general idea
what form (2) through (4) (or similar incremental steps) intend to take,
you don't want to touch just (2), lest you do something that only makes
sense for it--but not for the later work. The patch being discussed here is
a first step of the work needed for (2). However, it seems pretty clear
to me that there's not even close to an agreement about step (1) here
yet. Let me quote a few bits out of context to highlight:

KaiGai: ...it is quite natural to check permission to modify properties
of relations in ATPrepCmd

Robert: Most of the ALTER TABLE operations use ATSimplePermissions() or
ATSimplePermissionsRelationOrIndex() to check permissions...what I have
in mind is to modify ATPrepCmd

Stephen: I think a patch which attacks ATPrepCmd and rips out all of
the owner checks from it and moves them to appropriate places...would
probably be the first step...At the moment we do a 'simple' check in
ATPrepCmd (essentially, ownership of the relation) and then any more
complicated checks have to be done by the function...this patch isn't
doing that because it was intended to make
the existing code consistent, not institute a new policy for how
permissions checking should be done.

(Apologies if a fragment or two of the above aren't in the archives; I
think I grabbed a bit from one of the off-list messages in my mailbox
while assembling this.)

I've looked at this for a little bit, and I sure can't tell who's right
here. What I am sure of though is that even a majority here isn't going
to fly. If we don't even have all three of you guys lined up in the same
direction on something this small, there's little hope of getting the
whole community sold on this already controversial issue. Tom said back
on 12/17 that we need a very well-defined notion of where permissions
checks should be made, the thing I pulled out as (1) above. The
discussion around that topic has been going on here quite regularly now
for almost a month, and these little quoted bits highlight that opinion
is still quite split. Please keep hammering away at this little piece; I
think it's really important to set a good example here.


Greg, thanks for this summary.

I'd like to raise two points prior to stage (1).


First, we need to make clear what the functionality of an access control
feature is. It makes an access control decision, either allowed or
denied, for a certain combination of database role (subject), database
object, and required action, based on the rules.

For example, when a database user X tries to alter a certain 
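The decision model described above reduces to a pure function over a (subject, object, action) triple. A minimal sketch, not KaiGai's actual proposal, with a hypothetical rule set:

```python
def access_decision(rules, subject, obj, action):
    """Return True (allowed) or False (denied) for a given
    (database role, database object, required action) triple."""
    return (subject, obj, action) in rules

# Hypothetical rule set: role 'alice' may ALTER table 't1'
rules = {("alice", "t1", "ALTER")}
print(access_decision(rules, "alice", "t1", "ALTER"))  # True
print(access_decision(rules, "bob", "t1", "ALTER"))    # False
```

The point of factoring the checks is that every code path asks this one question in one place, rather than scattering partial checks through ATPrepCmd() and the individual subcommand handlers.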

Re: [HACKERS] Streaming replication and non-blocking I/O

2010-01-14 Thread Heikki Linnakangas
Fujii Masao wrote:
 On Wed, Jan 13, 2010 at 7:27 PM, Heikki Linnakangas
 heikki.linnakan...@enterprisedb.com wrote:
 the frontend always puts the
 connection to non-blocking mode, while the backend uses blocking mode.
 
 Really? By default (i.e., without explicitly setting it by using
 PQsetnonblocking()), the connection is set to blocking mode even
 in the frontend. Am I missing something?

That's right. The underlying socket is always put to non-blocking mode
in libpq. PQsetnonblocking() only affects whether libpq commands wait
and retry if the output buffer is full.

 At least with SSL, I think it's possible for pq_wait() to return false
 positives, if the SSL layer decides to renegotiate the connection
 causing data to flow in the other direction in the underlying TCP
 connection. A false positive would cause walsender to block
 indefinitely on the pq_getbyte() call.
 
 Sorry. I could not understand that issue scenario. Could you explain
 it in more detail?

1. Walsender calls pq_wait() which calls select(), waiting for timeout,
or data to become available for reading in the underlying socket.

2. Client issues an SSL renegotiation by sending a message to the server

3. Server receives the message, and select() returns indicating that
data has arrived

4. Walsender calls HandleEndOfRep() which calls pq_getbyte().
pq_readbyte() calls SSL_read(), which receives the renegotiation message
and handles it. No application data has arrived, however, so SSL_read()
blocks waiting for some to arrive. It never does.

I don't understand enough of SSL to know if renegotiation can actually
happen like that, but the man page of SSL_read() suggests so. But a
similar thing can happen if an SSL record is broken into two TCP
packets. select() returns immediately as the first packet arrives, but
SSL_read() will block until the 2nd packet arrives.
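The record-split case is not SSL-specific; it is the general rule that select() readiness only guarantees *some* bytes can be read, never a whole application record. A minimal sketch of the correct read loop (plain sockets, no SSL; names are illustrative):

```python
import select
import socket

def recv_record(sock, length):
    """Read exactly `length` bytes, looping: one select() wakeup
    does not imply the full record has arrived."""
    buf = b""
    while len(buf) < length:
        select.select([sock], [], [])          # readable: >= 1 byte available
        chunk = sock.recv(length - len(buf))
        if not chunk:
            raise ConnectionError("peer closed connection")
        buf += chunk
    return buf

a, b = socket.socketpair()
b.sendall(b"hel")     # first TCP chunk: select() on `a` already fires here
b.sendall(b"lo")      # rest of the 5-byte record arrives separately
print(recv_record(a, 5))  # b'hello'
```

With SSL the situation is worse, because a blocking SSL_read() hides this loop inside the library and can stall even after select() reported readability.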

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



[HACKERS] review: More frame options in window functions

2010-01-14 Thread Pavel Stehule
Hello

I looked at Hitoshi's patch, and my findings are:

1. The patch is in the correct format, applies cleanly, and compiles
without warnings,
2. The patch contains adequate documentation changes and a rich set of
regression tests,
3. As far as I can understand the implemented feature, it is in
conformity with the SQL standard,
4. This feature increases the feature set in the analytic area:
 a) we can easily do some operations over time series, like moving averages,
 b) it decreases the difference from Oracle and DB2 in this area (so
porting some applications should be simpler)

5. The last version of this patch works without known crashes
6. make check doesn't signal any problems

7. The source code respects our coding guidelines and is well commented
8. Contrib compiles without warnings
9. All contrib regression tests pass.

Bonus:
It simplifies and unifies access to the agg or window context via the
function AggGetMemoryContext. This protects us against future changes.

My recommendation:
* add some examples to the documentation - mainly some use cases of the RANGE frame

Possible issue:
* It could break compatibility for some custom aggregates written in C.
This topic was discussed and this way is preferred. I think this
issue has to be documented somewhere:

if you use aggcontext in your custom aggregates, fix your code

if (fcinfo->context && IsA(fcinfo->context, AggState))
    aggcontext = ((AggState *) fcinfo->context)->aggcontext;
else if (fcinfo->context && IsA(fcinfo->context, WindowAggState))
    aggcontext = ((WindowAggState *) fcinfo->context)->wincontext;
else
    ...

has to be replaced with the line:
  aggcontext = AggGetMemoryContext((Node *) fcinfo->context, &iswindowagg);

Regards
Pavel Stehule



Re: [HACKERS] Streaming replication and non-blocking I/O

2010-01-14 Thread Magnus Hagander
2010/1/14 Heikki Linnakangas heikki.linnakan...@enterprisedb.com:
 Fujii Masao wrote:
 On Wed, Jan 13, 2010 at 7:27 PM, Heikki Linnakangas
 heikki.linnakan...@enterprisedb.com wrote:
 the frontend always puts the
 connection to non-blocking mode, while the backend uses blocking mode.

 Really? By default (i.e., without explicitly setting it by using
 PQsetnonblocking()), the connection is set to blocking mode even
 in the frontend. Am I missing something?

 That's right. The underlying socket is always put to non-blocking mode
 in libpq. PQsetnonblocking() only affects whether libpq commands wait
 and retry if the output buffer is full.

 At least with SSL, I think it's possible for pq_wait() to return false
 positives, if the SSL layer decides to renegotiate the connection
 causing data to flow in the other direction in the underlying TCP
 connection. A false positive would cause walsender to block
 indefinitely on the pq_getbyte() call.

 Sorry. I could not understand that issue scenario. Could you explain
 it in more detail?

 1. Walsender calls pq_wait() which calls select(), waiting for timeout,
 or data to become available for reading in the underlying socket.

 2. Client issues an SSL renegotiation by sending a message to the server

 3. Server receives the message, and select() returns indicating that
 data has arrived

 4. Walsender calls HandleEndOfRep() which calls pq_getbyte().
 pq_readbyte() calls SSL_read(), which receives the renegotiation message
 and handles it. No application data has arrived, however, so SSL_read()
 blocks waiting for some to arrive. It never does.

 I don't understand enough of SSL to know if renegotiation can actually
 happen like that, but the man page of SSL_read() suggests so. But a
 similar thing can happen if an SSL record is broken into two TCP
 packets. select() returns immediately as the first packet arrives, but
 SSL_read() will block until the 2nd packet arrives.

I *think* renegotiation happens based on amount of content, not amount
of time. But it could still happen in corner cases, I think. If the
renegotiation happens right after a complete packet has been sent
(which would be the logical place), but not fast enough that the SSL
library gets it in one read() from the socket, you could end up in
that situation. (if the SSL library gets the renegotiation request as
part of the first read(), it would probably do the renegotiation
before returning from that call to SSL_read(), in which case the
socket would be in the correct state before you call select)

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [HACKERS] Clearing global statistics

2010-01-14 Thread Magnus Hagander
2010/1/12 Greg Smith g...@2ndquadrant.com:
 Magnus Hagander wrote:

 I have on my TODO to implement the ability to do stats reset on a
 single object (say, one table only). Please take this into
 consideration when you design/name this, so there's no unnecessary
 overlap :-) Same goes for the stats message itself.


 The idea suggested upthread was to add this:

 pg_stat_reset( which text )
      which := 'buffers' | 'checkpoints' | 'tables' | 'functions' |  ...

 Now, the way the pg_stat_bgwriter tables are computed, it doesn't actually 
 make sense to separate out clearing the buffers/checkpoints stats, since one 
 of those values is in both categories:  buffers_checkpoint.  They're really 
 all too tightly coupled to break them apart.  So I was thinking of this:

 pg_stat_reset( which text )
      which := 'bgwriter' | ...

 I could convert the patch I've got to be an initial implementation of this 
 new pg_stat_reset with a parameter, laying some useful groundwork in the 
 process too.  Then people who want to reset more things can just re-use that 
 same outline and message passing mechanism, just adding comparisons for new 
 text and a handler to go with it--not even touching the catalog again.

 This may not mesh well with what you plan though.  If pg_stat_reset is 
 updated to reset stats on an individual table, that could be a second version 
 that takes in a regclass:

 pg_stat_reset('tablename'::regclass)

 But that seems like a confusing bit of overloading--I can easily see people 
 thinking that pg_stat_reset('bgwriter') would be resetting the stats for a 
 relation named 'bgwriter' rather than what it actually does if I build it 
 that way.

 So, combining with Peter's naming suggestion, I think what I should build is:

 pg_stat_reset_shared( which text )
      which := 'bgwriter' | ...

 Which satisfies what I'm looking for now, and future patches that need to 
 reset other statistics shared across the cluster can re-use this without 
 needing to add a whole new function/stats message.  I think that satisfies 
 the cross-section of planned use cases we're expecting now best.

 Any comments before I update my patch to do that?
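The text-keyed design sketched above amounts to a dispatch table: each resettable target name maps to a handler, and future patches just add entries. A hedged Python sketch (handler names are hypothetical, not the actual backend functions):

```python
def reset_bgwriter_stats():
    """Hypothetical handler: would zero the pg_stat_bgwriter counters."""
    print("bgwriter stats reset")

# One entry per resettable shared-stats target; future patches add rows
# here instead of defining a whole new function/stats message.
RESET_HANDLERS = {
    "bgwriter": reset_bgwriter_stats,
}

def pg_stat_reset_shared(which):
    try:
        handler = RESET_HANDLERS[which]
    except KeyError:
        raise ValueError('unrecognized reset target: "%s"' % which)
    handler()

pg_stat_reset_shared("bgwriter")  # prints: bgwriter stats reset
```

Keeping the argument a plain text keyword (rather than overloading with a regclass) avoids the confusion Greg describes where pg_stat_reset('bgwriter') could be mistaken for resetting a relation named 'bgwriter'.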

Are you planning to get this in for the CF? (Yes, I realize there are
only hours left). This is functionality I'd *really* like to see in
8.5, so I'll be happy to work with you to get that committed inside or
outside CF bounds, but it's easier if it's on there for reviewers ;)
(plus, the outside cf bounds really only works *before* the
commitfest :P)


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [HACKERS] segmentation fault in function

2010-01-14 Thread Dimitri Fontaine
Sergej Galkin sergej.gal...@gmail.com writes:

 I am implementing a GiST index and hit a bug that crashes the DB. I've
 debugged my program as Robert (thanks!) advised me, and I know which
 procedure crashed.

Using gdb you should have the line number in the source code and should
be able to look up the variable values. For that you need a custom
PostgreSQL build configured with --enable-cassert --enable-debug.

Then report some more details if you still need help.

Also have a look at Gevel to be able to inspect your index:
  http://www.sai.msu.su/~megera/wiki/Gevel

Regards,
-- 
dim



Re: [HACKERS] Small Bug in GetConflictingVirtualXIDs

2010-01-14 Thread Simon Riggs
On Sun, 2009-12-27 at 20:11 +0100, Andres Freund wrote:
 On Tuesday 22 December 2009 11:42:30 Simon Riggs wrote:
  On Tue, 2009-12-22 at 03:19 +0100, Andres Freund wrote:
   On Monday 21 December 2009 16:48:52 Simon Riggs wrote:
Giving the drop database a snapshot is not the answer. I expect Andres
to be able to fix this with a simple patch that would not affect the
case of normal running.
  
   Actually it's less simple than I had thought at first - I don't think the
   code ever handled that correctly.
   I might be wrong there, my knowledge of the involved code is a bit
   sparse... The whole conflict resolution builds on the concept of waiting
   for a VXid, but an idle backend does not have a valid vxid. That's
   correct, right?
  I don't see any mileage in making Startup process wait for an idle
  session, so no real reason to wait for others either.
 So here is a small patch implementing that behaviour.

On further testing, I received a re-connection from an automatic session
retry. That shouldn't have happened, but it indicates the need for
locking around the conflict handler. I had understood that to be placed
elsewhere but that seems wrong now.

This is a low-priority item, so I'm looking for a quick fix to allow time
on other areas.

Any objections?

-- 
 Simon Riggs   www.2ndQuadrant.com
*** a/src/backend/commands/dbcommands.c
--- b/src/backend/commands/dbcommands.c
***
*** 1944,1950  dbase_redo(XLogRecPtr lsn, XLogRecord *record)
--- 1944,1958 
  		dst_path = GetDatabasePath(xlrec->db_id, xlrec->tablespace_id);
  
  		if (InHotStandby)
+ 		{
+ 			/*
+ 			 * Lock database while we resolve conflicts to ensure that InitPostgres()
+ 			 * cannot fully re-execute concurrently. This avoids backends re-connecting
+ 			 * automatically to same database, which can happen in some cases.
+ 			 */
+ 			LockSharedObjectForSession(DatabaseRelationId, xlrec->db_id, 0, AccessExclusiveLock);
  			ResolveRecoveryConflictWithDatabase(xlrec->db_id);
+ 		}
  
  		/* Drop pages for this database that are in the shared buffer cache */
  		DropDatabaseBuffers(xlrec->db_id);
***
*** 1960,1965  dbase_redo(XLogRecPtr lsn, XLogRecord *record)
--- 1968,1984 
  			ereport(WARNING,
  					(errmsg("some useless files may be left behind in old database directory \"%s\"",
  			dst_path)));
+ 
+ 		if (InHotStandby)
+ 		{
+ 			/*
+ 			 * Release locks prior to commit. XX There is a race condition here that may allow
+ 			 * backends to reconnect, but the window for this is small because the gap between
+ 			 * here and commit is mostly fairly small and it is unlikely that people will be
+ 			 * dropping databases that we are trying to connect to anyway.
+ 			 */
+ 			UnlockSharedObjectForSession(DatabaseRelationId, xlrec->db_id, 0, AccessExclusiveLock);
+ 		}
  	}
  	else
  		elog(PANIC, "dbase_redo: unknown op code %u", info);



Re: [HACKERS] Hot Standby and query cancel

2010-01-14 Thread Simon Riggs
On Wed, 2010-01-13 at 19:23 +, Simon Riggs wrote:
 On Wed, 2010-01-13 at 19:58 +0100, Andres Freund wrote:
 
   I am still testing patch, so should be confident to commit tomorrow
   barring issues.
  I have only looked at it briefly because right now I don't have the time
  (going to eat at a friend's place...) but I think I spotted an issue:
  The IsAbortedTransactionBlockState() check in RecoveryConflictInterrupt is
  not correct right now because that returns true for TBLOCK_SUBABORT as well.
  Wouldn't that mess with the case where we're in a failed subxact and then
  roll back only that subxact?
 
 Well spotted, yes. 

Latest version of same patch, but uses conflict reasons passed-thru
directly from recovery to backend.

Please review, no commit before tomorrow.

-- 
 Simon Riggs   www.2ndQuadrant.com
*** a/src/backend/access/transam/xact.c
--- b/src/backend/access/transam/xact.c
***
*** 313,320  IsTransactionState(void)
  /*
   *	IsAbortedTransactionBlockState
   *
!  *	This returns true if we are currently running a query
!  *	within an aborted transaction block.
   */
  bool
  IsAbortedTransactionBlockState(void)
--- 313,319 
  /*
   *	IsAbortedTransactionBlockState
   *
!  *	This returns true if we are within an aborted transaction block.
   */
  bool
  IsAbortedTransactionBlockState(void)
*** a/src/backend/storage/ipc/procarray.c
--- b/src/backend/storage/ipc/procarray.c
***
*** 324,329  ProcArrayEndTransaction(PGPROC *proc, TransactionId latestXid)
--- 324,330 
  		/* must be cleared with xid/xmin: */
  		proc->vacuumFlags &= ~PROC_VACUUM_STATE_MASK;
  		proc->inCommit = false; /* be sure this is cleared in abort */
+ 		proc->recoveryConflictPending = false;
  
  		/* Clear the subtransaction-XID cache too while holding the lock */
  		proc->subxids.nxids = 0;
***
*** 350,355  ProcArrayEndTransaction(PGPROC *proc, TransactionId latestXid)
--- 351,357 
  		/* must be cleared with xid/xmin: */
  		proc->vacuumFlags &= ~PROC_VACUUM_STATE_MASK;
  		proc->inCommit = false; /* be sure this is cleared in abort */
+ 		proc->recoveryConflictPending = false;
  
  		Assert(proc->subxids.nxids == 0);
  		Assert(proc->subxids.overflowed == false);
***
*** 377,383  ProcArrayClearTransaction(PGPROC *proc)
  	proc->xid = InvalidTransactionId;
  	proc->lxid = InvalidLocalTransactionId;
  	proc->xmin = InvalidTransactionId;
! 	proc->recoveryConflictMode = 0;
  
  	/* redundant, but just in case */
  	proc->vacuumFlags &= ~PROC_VACUUM_STATE_MASK;
--- 379,385 
  	proc->xid = InvalidTransactionId;
  	proc->lxid = InvalidLocalTransactionId;
  	proc->xmin = InvalidTransactionId;
! 	proc->recoveryConflictPending = false;
  
  	/* redundant, but just in case */
  	proc->vacuumFlags &= ~PROC_VACUUM_STATE_MASK;
***
*** 1665,1671  GetConflictingVirtualXIDs(TransactionId limitXmin, Oid dbOid,
  		if (proc->pid == 0)
  			continue;
  
! 		if (skipExistingConflicts && proc->recoveryConflictMode > 0)
  			continue;
  
  		if (!OidIsValid(dbOid) ||
--- 1667,1673 
  		if (proc->pid == 0)
  			continue;
  
! 		if (skipExistingConflicts && proc->recoveryConflictPending)
  			continue;
  
  		if (!OidIsValid(dbOid) ||
***
*** 1704,1710  GetConflictingVirtualXIDs(TransactionId limitXmin, Oid dbOid,
   * Returns pid of the process signaled, or 0 if not found.
   */
  pid_t
! CancelVirtualTransaction(VirtualTransactionId vxid, int cancel_mode)
  {
  	ProcArrayStruct *arrayP = procArray;
  	int			index;
--- 1706,1712 
   * Returns pid of the process signaled, or 0 if not found.
   */
  pid_t
! CancelVirtualTransaction(VirtualTransactionId vxid, ProcSignalReason sigmode)
  {
  	ProcArrayStruct *arrayP = procArray;
  	int			index;
***
*** 1722,1749  CancelVirtualTransaction(VirtualTransactionId vxid, int cancel_mode)
  		if (procvxid.backendId == vxid.backendId &&
  			procvxid.localTransactionId == vxid.localTransactionId)
  		{
! 			/*
! 			 * Issue orders for the proc to read next time it receives SIGINT
! 			 */
! 			if (proc->recoveryConflictMode < cancel_mode)
! 				proc->recoveryConflictMode = cancel_mode;
! 
  			pid = proc->pid;
  			break;
  		}
  	}
  
  	LWLockRelease(ProcArrayLock);
  
- 	if (pid != 0)
- 	{
- 		/*
- 		 * Kill the pid if it's still here. If not, that's what we wanted
- 		 * so ignore any errors.
- 		 */
- 		kill(pid, SIGINT);
- 	}
- 
  	return pid;
  }
  
--- 1724,1745 
  		if (procvxid.backendId == vxid.backendId &&
  			procvxid.localTransactionId == vxid.localTransactionId)
  		{
! 			proc->recoveryConflictPending = true;
  			pid = proc->pid;
+ 			if (pid != 0)
+ 			{
+ 				/*
+ 				 * Kill the pid if it's still here. If not, that's what we wanted
+ 				 * so ignore any errors.
+ 				 */
+ 				(void) SendProcSignal(pid, sigmode, vxid.backendId);
+ 			}
  			break;
  		}
  	}
  
  	LWLockRelease(ProcArrayLock);
  
  	return pid;
  }
  
***
*** 1834,1839  CancelDBBackends(Oid 

Re: [HACKERS] mailing list archiver chewing patches

2010-01-14 Thread Matteo Beccati

On 14/01/2010 08:22, Matteo Beccati wrote:

Hi,


3) A nice set of SQL queries to return message, parts, threads,
folders based on $criteria (search, id, folder, etc)


I guess Matteo's working on that…


Right, but this is where I want to see the AOX schema improve... In
ways like adding persistent tables for threading, which are updated by
triggers as new messages are delivered, etc. Documented queries that
show how to use CTEs, ltree, etc to get threaded views, good FTS support
(with indexes and triggers managing them), etc.


+1.

I just didn't understand how much your proposal fits into the current work :)


I'm looking into it. The link I've previously sent will most likely
return a 500 error for the time being.


A quick update:

I've extended AOX with a trigger that takes care of filling a separate 
table that's used to display the index pages. The new table also stores 
threading information (standard headers + Exchange headers support) and 
whether or not the email has attachments.


Please check the updated PoC: http://archives.beccati.org/

pgsql-hackers and -general are currently subscribed, while -www only has 
2003 history imported via aoximport (very fast!).


BTW, I've just noticed a bug in the attachment detection giving false 
positives, but have no time to check now.



Cheers
--
Matteo Beccati

Development  Consulting - http://www.beccati.com/



Re: [HACKERS] improving log management

2010-01-14 Thread Euler Taveira de Oliveira
Tom Lane escreveu:
 (1) Windows compatibility?
 
Yes.

 (2) What happens if the command takes a significant amount of time to
 execute?  We can't afford to have the log collector blocked.
 
You're right. We can't have that command in the same process because DBAs
could have high values for the log_rotation_* parameters, or resources could
become unavailable. :( One idea is to have another process (a logger worker)
execute it. Also, we need to keep track of the logfiles that were already rotated.

 (3) What do you intend those %p and %f symbols to signify?
 
%p and %f are the relative log path and the log filename, respectively.
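If the rotation command follows the same placeholder convention as archive_command, the expansion is a straightforward substitution. A sketch of that expansion (the real implementation would be C in the backend; the function name and `%%` escape are assumptions):

```python
def expand_rotation_command(template, log_path, log_file):
    """Expand %p (relative log path) and %f (log filename); %% -> literal %."""
    out = []
    i = 0
    while i < len(template):
        if template[i] == "%" and i + 1 < len(template):
            c = template[i + 1]
            if c == "p":
                out.append(log_path); i += 2; continue
            if c == "f":
                out.append(log_file); i += 2; continue
            if c == "%":
                out.append("%"); i += 2; continue
        out.append(template[i])
        i += 1
    return "".join(out)

print(expand_rotation_command("gzip %p", "pg_log/postgresql-2010-01-14.log",
                              "postgresql-2010-01-14.log"))
# gzip pg_log/postgresql-2010-01-14.log
```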


-- 
  Euler Taveira de Oliveira
  http://www.timbira.com/




Re: [HACKERS] plpython3

2010-01-14 Thread Aidan Van Dyk
* Greg Smith g...@2ndquadrant.com [100114 02:17]:

 One of the things I'm increasingly frustrated by (and don't take this  
 personally, this is a general comment coming more from the last CF  
 rather than something I mean to single you out for) is how many patch  
 submissions we get that don't have *compelling* examples showing their  
 value. Have a better programming approach to something? Show me the old  
 way and how the new way is better. Performance improvement? Provide a  
 complete, self-contained example showing how to demonstrate it. New type  
 of feature? Cut and paste a whole session showing how it's used, with  
 every single command you typed after initdb.

Wow, I can't agree more... I've seen *so* many patches fly by that don't
mean *anything* to me from the description sent to -hackers; I only find
out what they actually do, or could do, when I see them in my RSS
reader, via:

I hope that everyone submitting patches  
 reads http://www.depesz.com/ at least once in a while. One of the things  
 I really enjoy about his blog is how he shows complete working examples  
 of so many patches. To pick a standout recent entry,  
 http://www.depesz.com/index.php/2010/01/03/waiting-for-8-5-exclusion-constraints/
  
 takes exclusion constraints--a feature I didn't follow a bit of the  
 discussion about--and works through the whole feature with a series of  
 examples that, while still complicated, are completely self-contained  
 and possible to follow along until you understand how it all fits  
 together.

what he said++

   Patch submitters should consider it a goal to make life that  
 easy for the reviewer stuck with checking their patch out.

Yes, submitters, please specifically try to make Hubert's life easier,
because we *all* will appreciate it...

-- 
Aidan Van Dyk Create like a god,
ai...@highrise.ca   command like a king,
http://www.highrise.ca/   work like a slave.




Re: [HACKERS] plpython3

2010-01-14 Thread James William Pye
On Jan 14, 2010, at 12:17 AM, Greg Smith wrote:
 Code samples.

Okay.

 I don't know, because even with several thousand lines of basic Python code 
 to my credit I cannot understand a single one of the arguments you presented 
 for why your implementation is better--except agreeing that, yes, tracebacks 
 are useful  

 And even on that one, I'm not going to take your word on the superiority of 
 your implementation.

Sure, that's what review is about. No?

  You're writing way over people's heads here.

Okay. I guess I hoped the documentation would help clarify a lot of this, and 
make the advantages self-evident.

On that:

 (Doesn't help that your docs link at the bottom of 
 http://wiki.postgresql.org/wiki/WIP:plpython3 is broken either).

Ouch. Thanks, that's fixed now. Please take a look again:

 http://python.projects.postgresql.org/pldocs/plpython3.html

 If one has to be a Python expert to understand your position, you've already 
 lost.

Function modules should be pretty obvious. native typing is a bit more 
difficult as a solid understanding of PG's type system is fairly important for 
a firm grasp.

 Python code is easy to read though.  If you'd said here's a great example of 
 how Function Modules are an improvement over what you can do with the current 
 pl/python, that would be infinitely more useful than the list of language 
 trivia related to them.  You should be aiming to put Peter on the spot to 
 respond to claims you make like you can't do this easily with the current 
 implementation after showing an elegant bit of code.

Okay. So, some examples would help. The documentation is back up, so please be 
sure to look at the numerous examples provided therein. In addition to that, 
I'll try to get some contrasting examples posted as a follow-up to an earlier 
message. In plpython you do X whereas in plpython3 you do Y.


Thanks.


Re: [HACKERS] improving log management

2010-01-14 Thread Euler Taveira de Oliveira
Takahiro Itagaki escreveu:
 If you have better loggers already, why don't you use them? In another word,
 should we cooperate with them instead of re-inventing alternative loggers?
 
We can use them, but they don't have features that our logger has. Since we
have a logger, let's improve it.

 We have a Logging Brainstorm topic in our wiki. It might help you.
 http://wiki.postgresql.org/wiki/Logging_Brainstorm
 
Interesting. I didn't know about that. Why aren't some of those ideas in our TODO?


-- 
  Euler Taveira de Oliveira
  http://www.timbira.com/



Re: [HACKERS] Streaming replication and non-blocking I/O

2010-01-14 Thread Fujii Masao
On Thu, Jan 14, 2010 at 9:14 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
 After reading up on SSL_read() and SSL_pending(), it seems that there is
 unfortunately no reliable way of checking if there is incoming data that
 can be read using SSL_read() without blocking, short of putting the
 socket to non-blocking mode. It also seems that we can't rely on poll()
 returning POLLHUP if the remote end has disconnected; it's not doing
 that at least on my laptop.

 So, the only solution I can see is to put the socket to non-blocking
 mode. But to keep the change localized, let's switch to non-blocking
 mode only temporarily, just when polling to see if there's data to read
 (or EOF), and switch back immediately afterwards.

Agreed. Though I also read some pages discussing that issue,
I was not able to find any better approach than the temporary
switch of the blocking mode.
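The temporary-switch technique described above can be sketched outside the server in Python; the function below mimics what pq_getbyte_if_available() is described as doing, on an ordinary socket. All names here are invented for illustration, not PostgreSQL internals.

```python
import socket

def get_byte_if_available(sock):
    """Return one byte if readable without blocking, else None.

    Sketch of the approach in the message above: flip the socket to
    non-blocking mode only for the duration of the check, then restore
    blocking mode. An orderly EOF from the peer raises ConnectionError.
    """
    sock.setblocking(False)
    try:
        data = sock.recv(1)
        if data == b"":              # peer closed the connection
            raise ConnectionError("EOF on socket")
        return data
    except BlockingIOError:          # nothing available right now
        return None
    finally:
        sock.setblocking(True)       # always restore blocking mode

# Demonstration with a connected socket pair
a, b = socket.socketpair()
assert get_byte_if_available(a) is None   # nothing sent yet
b.sendall(b"x")
assert get_byte_if_available(a) == b"x"
```

The point, as in the patch, is that the caller never blocks and the socket is back in blocking mode by the time the function returns, even on error.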

 I've added a pq_getbyte_if_available() function to pqcomm.c to do that.
 The API to the upper levels is quite nice, the function returns a byte
 if one is available without blocking. Only minimal changes are required
 elsewhere.

Great! Thanks a lot!

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



Re: [HACKERS] Table size does not include toast size

2010-01-14 Thread Bernd Helmle



--On 22. Dezember 2009 15:11:40 +0100 Bernd Helmle maili...@oopsware.de 
wrote:



Bernd, there's a basic spec if you have time to work on this.


I'll see if I can get some time for it during Christmas vacation (it's
been on my radar for quite a while). I'm still working on the NOT NULL
pg_constraint representation and would like to propose a patch for it
fairly soon.


Since I'm not able to finish those other things in time, I wrapped up my 
existing code for this issue and came up with the attached patch, which 
should implement the behavior Tom proposed. These are two new functions 
pg_table_size() and pg_indexes_size(). This patch also changes 
pg_total_relation_size() to be a shorthand for pg_table_size() + 
pg_indexes_size().


Barring any objections, I'm adding this to the CF.

--
Thanks

Bernd

pg_table_size.patch
Description: Binary data



Re: [HACKERS] patch to implement ECPG side tracing / tracking ...

2010-01-14 Thread Michael Meskes
On Wed, Jan 13, 2010 at 10:30:32PM +0100, Hans-Juergen Schoenig wrote:
 performance tune your precompiler application. in PostgreSQL it is
 currently a little hard to get from the log what is executed how
 often by which application in which speed and so on. so, we came up

Hard or impossible? I agree with the other replies that this looks more like a
functionality you'd want in the server rather than the client.

 why for prepared queries: we found out that Informix is heavily
 using prepared queries internally. we already fixed something in

If you want a general feature why do you only implement it for one case?

 this area (patch sent some time ago) and we were finally able to
 catch up with Informix performance-wise in this area (mostly cursor
 work). before this auto_prepare fix, we were sometimes 2-3 times

Which fix are you talking about? I don't really remember a performance
improvement fix. Did I simply forget it or did I miss something important?

 slower than Informix. saving on network time solved the job. now we
 are left with many many programs performing somehow strange and we
 need to check for every program why. a decent summary on exit would

Well I guess this is what you get paid for.

 to make it short: it is impossible to port hundreds of applications
 to PostgreSQL without having the chance to trace what the
 precompiler is doing how often in which program via which
 connection. it is simply impossible. so, we really and desperately
 need this patch in.

I'm sorry, but this is neither true (we've done it before) nor a valid point
(project decisions are independent from your contracts). You can surely
implement whatever you want for your customer, but for your patch to make it
into our source tree there should be an advantage for more people.

Michael
 
-- 
Michael Meskes
Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
Michael at BorussiaFan dot De, Meskes at (Debian|Postgresql) dot Org
ICQ: 179140304, AIM/Yahoo/Skype: michaelmeskes, Jabber: mes...@jabber.org
VfL Borussia! Forca Barca! Go SF 49ers! Use: Debian GNU/Linux, PostgreSQL



Re: [HACKERS] mailing list archiver chewing patches

2010-01-14 Thread Dimitri Fontaine
Matteo Beccati p...@beccati.com writes:
 I've extended AOX with a trigger that takes care of filling a separate table
 that's used to display the index pages. The new table also stores threading
 information (standard headers + Exchange headers support) and whether or not
 the email has attachments.

 Please check the updated PoC: http://archives.beccati.org/

Looks pretty good, even if some threads are still separated (this one for
example), and the ordering looks strange.

Seems to be right on track, that said :)

Thanks for your work,
-- 
dim



Re: [HACKERS] Clearing global statistics

2010-01-14 Thread Rafael Martinez

Greg Smith wrote:
 Magnus Hagander wrote:
 I have on my TODO to implement the ability to do stats reset on a
 single object (say, one table only). Please take this into
 consideration when you design/name this, so there's no unnecessary
 overlap :-) Same goes for the stats message itself.
  
[...]

 Any comments before I update my patch to do that?
 

Hello

One thing I miss from the statistics you can get via pg_stat_* is
information about how long we have been collecting stats (or, in other
words, when the stats were last reset).

Statistics without time period information are unfortunately not very
useful for a DBA :-(

Before 8.3, we had the stats_reset_on_server_start parameter and the
pg_postmaster_start_time() function. This was an easy way of resetting
*all* statistics delivered by pg_stat_* and knowing when this was done.
We were able to produce stats with information about sec/hours/days
average values in an easy way.
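The point can be illustrated with a trivial sketch (Python, with made-up numbers): a raw counter can only be turned into a rate if you also know when the counters started accumulating.

```python
from datetime import datetime, timedelta

def rate_per_sec(counter, stats_reset, now=None):
    """Average events/sec since the last stats reset.

    Without knowing stats_reset (the point made above), the raw
    counter alone cannot be converted into a meaningful rate.
    """
    now = now or datetime.utcnow()
    elapsed = (now - stats_reset).total_seconds()
    if elapsed <= 0:
        raise ValueError("reset time must be in the past")
    return counter / elapsed

# 7200 tuples fetched over the hour since the (hypothetical) reset
reset = datetime(2010, 1, 14, 0, 0, 0)
now = reset + timedelta(hours=1)
print(rate_per_sec(7200, reset, now))   # 2.0 tuples/sec
```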

I tried to discuss this some time ago but we did not get anywhere,
Ref: http://archives.postgresql.org/pgsql-general/2009-07/msg00614.php

Maybe this time? :-)

Is there any chance of implementing a way of knowing when the
statistics delivered via pg_stat_* were last reset?

regards,
--
 Rafael Martinez, r.m.guerr...@usit.uio.no
 Center for Information Technology Services
 University of Oslo, Norway

 PGP Public Key: http://folk.uio.no/rafael/



Re: [HACKERS] mailing list archiver chewing patches

2010-01-14 Thread Dave Page
On Thu, Jan 14, 2010 at 7:09 PM, Dimitri Fontaine
dfonta...@hi-media.com wrote:
 Matteo Beccati p...@beccati.com writes:
 I've extended AOX with a trigger that takes care of filling a separate table
 that's used to display the index pages. The new table also stores threading
 information (standard headers + Exchange headers support) and whether or not
 the email has attachments.

 Please check the updated PoC: http://archives.beccati.org/

  Looks pretty good, even if some threads are still separated (this one for
  example), and the ordering looks strange.

  Seems to be right on track, that said :)

Yup.

Matteo - Can you try loading up a lot more of the old mbox files,
particularly the very early ones from -hackers? It would be good to
see how it copes under load with a few hundred thousand messages in
the database.

-- 
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com



Re: [HACKERS] patch to implement ECPG side tracing / tracking ...

2010-01-14 Thread Dimitri Fontaine
Michael Meskes mes...@postgresql.org writes:
 On Wed, Jan 13, 2010 at 10:30:32PM +0100, Hans-Juergen Schoenig wrote:
 performance tune your precompiler application. in PostgreSQL it is
 currently a little hard to get from the log what is executed how
 often by which application in which speed and so on. so, we came up

 Hard or impossible? I agree with the other replies that this looks more like a
 functionality you'd want in the server rather than the client.

PgFouine partly answers that, and with application_name in 8.5 it should
further improve:

  http://pgfouine.projects.postgresql.org/

Regards,
-- 
dim



Re: [HACKERS] mailing list archiver chewing patches

2010-01-14 Thread Matteo Beccati

Il 14/01/2010 14:39, Dimitri Fontaine ha scritto:

Matteo Beccatip...@beccati.com  writes:

I've extended AOX with a trigger that takes care of filling a separate table
that's used to display the index pages. The new table also stores threading
information (standard headers + Exchange headers support) and whether or not
the email has attachments.

Please check the updated PoC: http://archives.beccati.org/


Looks pretty good, even if some thread are still separated (this one for
example), and the ordering looks strange.


This one is separated because the first message is not in the archive 
yet, so to the system there appear to be multiple parent messages. It 
shouldn't happen with full archives. About sorting, here's the query 
I've used (incidentally, my first try with CTEs):


WITH RECURSIVE t (mailbox, uid, date, subject, sender, has_attachments,
                  parent_uid, idx, depth) AS (
  SELECT mailbox, uid, date, subject, sender, has_attachments,
         parent_uid, uid::text, 1
  FROM arc_messages
  WHERE parent_uid IS NULL AND mailbox = 15
  UNION ALL
  SELECT a.mailbox, a.uid, a.date, a.subject, a.sender, a.has_attachments,
         a.parent_uid, t.idx || '.' || a.uid::text, t.depth + 1
  FROM t JOIN arc_messages a USING (mailbox)
  WHERE t.uid = a.parent_uid
) SELECT * FROM t ORDER BY idx

Any improvements to sorting are welcome :)
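One caveat with a text materialized path like the idx column above: comparing paths as text makes uid "10" sort before uid "2". A Python sketch of the same materialized-path idea using integer tuples, which compare element-wise and avoid that quirk (the sample data is invented):

```python
# Build the same kind of path the CTE computes, but as a tuple of
# integer uids instead of a '.'-joined string.
messages = [
    # (uid, parent_uid)
    (1, None), (2, 1), (10, 1), (3, 2), (4, None),
]

children = {}
for uid, parent in messages:
    children.setdefault(parent, []).append(uid)

def walk(parent=None, path=()):
    """Depth-first traversal yielding (path, uid, depth)."""
    for uid in sorted(children.get(parent, [])):
        p = path + (uid,)
        yield p, uid, len(p)
        yield from walk(uid, p)

ordered = [uid for _, uid, _ in sorted(walk())]
print(ordered)   # [1, 2, 3, 10, 4] -- reply 10 correctly after reply 2
```

With text paths the equivalent ordering would be "1", "1.10", "1.2", "1.2.3", "4", i.e. message 10 jumps ahead of message 2; zero-padding the uids or sorting on arrays would fix it on the SQL side too.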


Cheers
--
Matteo Beccati

Development & Consulting - http://www.beccati.com/



[HACKERS] Streaming replication, retrying from archive

2010-01-14 Thread Heikki Linnakangas
Imagine this scenario:

1. Master is up and running, standby is connected and streaming happily
2. Network goes down, connection is broken.
3. Standby falls behind a lot. Old WAL files that the standby needs are
archived, and deleted from master.
4. Network is restored. Standby reconnects
5. Standby will get an error because the WAL file it needs is not in the
master anymore.

What will currently happen is:

6, Standby retries connecting and failing indefinitely, until the admin
restarts it.

What we would *like* to happen is:

6. Standby fetches the missing WAL files from archive, then reconnects
and continues streaming.

Can we fix that?
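The desired behavior amounts to a retry loop alternating between the two WAL sources. A minimal sketch of that control flow (Python, with stub functions standing in for walreceiver and restore_command; none of these names are real PostgreSQL APIs):

```python
class WalNotFound(Exception):
    """Raised when the master no longer has the requested WAL segment."""

def run_standby(stream_from_master, restore_from_archive, max_rounds=10):
    """Alternate between streaming and archive recovery until caught up."""
    log = []
    for _ in range(max_rounds):
        try:
            stream_from_master()
            log.append("streaming")
            return log                  # caught up and streaming happily
        except WalNotFound:
            log.append("archive")
            restore_from_archive()      # fetch missing segments, then retry
    return log

# Simulated scenario: the first two connection attempts hit missing WAL.
attempts = iter([WalNotFound, WalNotFound, None])
def stream():
    err = next(attempts)
    if err:
        raise err()

result = run_standby(stream, lambda: None)
print(result)   # ['archive', 'archive', 'streaming']
```

The interesting part in the real server would be step two: detecting the "requested WAL segment has already been removed" error and falling back to the archive instead of retrying the same connection forever.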

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] patch to implement ECPG side tracing / tracking ...

2010-01-14 Thread Boszormenyi Zoltan
Michael Meskes írta:
 this area (patch sent some time ago) and we were finally able to
 catch up with Informix performance-wise in this area (mostly cursor
 work). before this auto_prepare fix, we were sometimes 2-3 times
 

 Which fix are you talking about? I don't really remember a performance
 improvement fix. Did I simply forget it or did I miss something important?
   

Hans meant the auto-prepare fix. Being able to use it
is an important performance improvement for small queries. :-)

-- 
Bible has answers for everything. Proof:
But let your communication be, Yea, yea; Nay, nay: for whatsoever is more
than these cometh of evil. (Matthew 5:37) - basics of digital technology.
May your kingdom come - superficial description of plate tectonics

--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
http://www.postgresql.at/




Re: [HACKERS] [PATCH] remove redundant ownership checks

2010-01-14 Thread Stephen Frost
* KaiGai Kohei (kai...@ak.jp.nec.com) wrote:
 Some of ALTER TABLE operations take multiple permission checks, not only
 ownership of the relation to be altered.

Yes, exactly my point.  Those places typically just presume that the
owner check has already been done.

 For example, ALTER TABLE with SET TABLESPACE option also need ACL_CREATE
 permission on the new tablespace, not only ownership of the relation.
 It means we need to gather two information before whole of the permission
 checks. (1) OID of the relation to be altered. (2) OID of the tablespace
 to be set.

Right.  I would say that we should wait until we have all the necessary
information to do the permissions checking, and then do it all at that
point, similar to how we handle DML today.

 In my understanding, Stephen suggests that we should, ideally, rip out
 permission logic from the code closely-combined with the steps of
 implementation. 

This might be a confusion due to language, but I think of the
implementation as being the work.  The check on permissions on the
tablespace that you're describing above is done right before the work.
For this case specifically, ATPrepSetTableSpace() takes the action on
line 6754: tab->newTableSpace = tablespaceId;
Prior to that, it checks the tablespace permissions (but not the table
permissions, since they've been checked already).  I would suggest we
add a call to ATSimplePermissions() in ATPrepSetTableSpace at line 6745,
right after the /* Check its permissions */ comment (which would be
changed to check permissions for ALTER TABLE x SET TABLESPACE y).

 Of course, it does not mean all the checks should be
 moved just before simple_heap_update().

No, I would have it earlier than simple_heap_update(), we don't need to
go building the structures and whatnot needed to call
simple_heap_update().  For this specific case though, I'm a bit torn by
the fact that the work associated with changing the tablespace can
actually happen in two distinct places- either through
ATExecSetTableSpace, or in ATRewriteTables directly.
ATExecSetTableSpace would actually be a good candidate rather than in
the 'prep' stage, if all tablespace changes were done there.  The 'prep'
stage worries me a bit since I'm not sure if all permissions checking
is currently, or could be, done at that point, and I'd prefer that we
use the same approach for permissions checking throughout the code- for
example, it's either done in 'phase 3' (where we're going through the
subcommands) or all done in 'phase 1/2', where we're setting things up.

 However, it is a separate topic for this patch.
 This patch just tries to remove redundant checks, and integrate them
 into a common place where most of ALTER TABLE option applies its
 permission checks on.

I don't believe we can make the case that it's a distinct topic based on
the feedback received.

Thanks,

Stephen




Re: [HACKERS] [PATCH] remove redundant ownership checks

2010-01-14 Thread Robert Haas
On Thu, Jan 14, 2010 at 1:04 AM, Greg Smith g...@2ndquadrant.com wrote:
 But the preference of the last CF is to not apply any patch which doesn't
 have a very clear justification to be committed.  Given that whether this
 patch is applied or not to 8.5 really doesn't make any functional
 difference, I don't see anywhere for this to go right now except for
 Returned with Feedback.  It's extremely valuable to have had this patch
 submitted.  I don't believe an exact spot of contention with the current
 code was ever highlighted so clearly before this discussion, because
 previous patches were just too big.  We do need to get this whole thing off
 the list for a while now though, I think it's gotten quite a fair slice of
 discussion already.

While I understand your desire to get this patch closed out and move
on to other things, I don't agree that we've given any meaningful
feedback as yet.  We really haven't made any progress getting an
agreement on where or how the permissions checks should be done.  I
would be perfectly happy to throw this patch under the bus in exchange
for some meaningful feedback on what a more comprehensive approach
would look like, but so far none has been forthcoming - and not
because it hasn't been requested.

http://archives.postgresql.org/pgsql-hackers/2009-12/msg01824.php

It is my view that no patch which makes substantial changes to the
security checks in the code is likely to get committed without Tom's
approval.  If Tom is not willing to provide input on a comprehensive
plan, and is also not willing to accept even the least-consequential
change without a fully-fleshed-out comprehensive plan, then I think we
are at an impasse.  Note that I am NOT saying it is Tom's
RESPONSIBILITY to provide input on a comprehensive plan.  I am only
expressing my opinion that no progress can be made otherwise.

...Robert



Re: [HACKERS] mailing list archiver chewing patches

2010-01-14 Thread Matteo Beccati

Il 14/01/2010 14:46, Dave Page ha scritto:

On Thu, Jan 14, 2010 at 7:09 PM, Dimitri Fontaine
dfonta...@hi-media.com  wrote:

Matteo Beccatip...@beccati.com  writes:

I've extended AOX with a trigger that takes care of filling a separate table
that's used to display the index pages. The new table also stores threading
information (standard headers + Exchange headers support) and whether or not
the email has attachments.

Please check the updated PoC: http://archives.beccati.org/


Looks pretty good, even if some threads are still separated (this one for
example), and the ordering looks strange.

Seems to be right on track, that said :)


Yup.

Matteo - Can you try loading up a lot more of the old mbox files,
particularly the very early ones from -hackers? It would be good to
see how it copes under load with a few hundred thousand messages in
the database.


Sure, I will give it a try in the evening or tomorrow.


Cheers
--
Matteo Beccati

Development & Consulting - http://www.beccati.com/



Re: [HACKERS] Streaming replication, retrying from archive

2010-01-14 Thread Robert Haas
On Thu, Jan 14, 2010 at 9:15 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
 Imagine this scenario:

 1. Master is up and running, standby is connected and streaming happily
 2. Network goes down, connection is broken.
 3. Standby falls behind a lot. Old WAL files that the standby needs are
 archived, and deleted from master.
 4. Network is restored. Standby reconnects
 5. Standby will get an error because the WAL file it needs is not in the
 master anymore.

 What will currently happen is:

 6, Standby retries connecting and failing indefinitely, until the admin
 restarts it.

 What we would *like* to happen is:

 6. Standby fetches the missing WAL files from archive, then reconnects
 and continues streaming.

 Can we fix that?

Just MHO here, but this seems like a bigger project than we should be
starting at this stage of the game.

...Robert



Re: [HACKERS] Streaming replication, retrying from archive

2010-01-14 Thread Magnus Hagander
On Thu, Jan 14, 2010 at 15:36, Robert Haas robertmh...@gmail.com wrote:
 On Thu, Jan 14, 2010 at 9:15 AM, Heikki Linnakangas
 heikki.linnakan...@enterprisedb.com wrote:
 Imagine this scenario:

 1. Master is up and running, standby is connected and streaming happily
 2. Network goes down, connection is broken.
 3. Standby falls behind a lot. Old WAL files that the standby needs are
 archived, and deleted from master.
 4. Network is restored. Standby reconnects
 5. Standby will get an error because the WAL file it needs is not in the
 master anymore.

 What will currently happen is:

 6, Standby retries connecting and failing indefinitely, until the admin
 restarts it.

 What we would *like* to happen is:

 6. Standby fetches the missing WAL files from archive, then reconnects
 and continues streaming.

 Can we fix that?

 Just MHO here, but this seems like a bigger project than we should be
 starting at this stage of the game.

+1.

We want this eventually (heck, it'd be awesome!), but let's get what
we have now stable first.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [HACKERS] mailing list archiver chewing patches

2010-01-14 Thread Dave Page
On Thu, Jan 14, 2010 at 8:02 PM, Matteo Beccati p...@beccati.com wrote:
 Il 14/01/2010 14:46, Dave Page ha scritto:

 On Thu, Jan 14, 2010 at 7:09 PM, Dimitri Fontaine
 dfonta...@hi-media.com  wrote:

 Matteo Beccatip...@beccati.com  writes:

 I've extended AOX with a trigger that takes care of filling a separate
 table
 that's used to display the index pages. The new table also stores
 threading
 information (standard headers + Exchange headers support) and whether or
 not
 the email has attachments.

 Please check the updated PoC: http://archives.beccati.org/

 Looks pretty good, even if some threads are still separated (this one for
 example), and the ordering looks strange.

 Seems to be right on track, that said :)

 Yup.

 Matteo - Can you try loading up a lot more of the old mbox files,
 particularly the very early ones from -hackers? It would be good to
 see how it copes under load with a few hundred thousand messages in
 the database.

 Sure, I will give it a try in the evening or tomorrow.

Thanks :-)


-- 
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com



[HACKERS] [PL/PgSQL] EXECUTE...USING enhancement proposal

2010-01-14 Thread Vincenzo Romano
Hi all.
There's currently a limitation in the v8.4.2 implementation of the
EXECUTE...USING predicate in PL/PgSQL which prevents you from
exploiting the USING-supplied value list with DDL commands.
For example:

CREATE TABLE test ( i int );
...
EXECUTE 'ALTER TABLE test ALTER COLUMN i SET DEFAULT $1' USING 42;

complains with:

ERROR:  there is no parameter $1
CONTEXT:  SQL statement ALTER TABLE test ALTER COLUMN i SET DEFAULT $1

while:

EXECUTE 'SELECT $1' USING 42;

works.
In both cases the $1 variable/placeholder refers to a constant value.
And actually, even if the expression after the USING keyword were a
variable, it would be evaluated and substituted *before* the command
is executed.

The current documentation
(http://www.postgresql.org/docs/8.4/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN)
doesn't mention this limitation and clearly describes how this feature
is meant to work.
Quoting:

The command string can use parameter values, which are referenced in
the command as $1, $2,
etc. These symbols refer to values supplied in the USING clause. This
method is often preferable to
inserting data values into the command string as text: it avoids
run-time overhead of converting the
values to text and back, and it is much less prone to SQL-injection
attacks since there is no need for
quoting or escaping. An example is:

(38.5.4. Executing Dynamic Commands)

It talks about values, that is typed constants.
Please, refer also to the following discussion on pgsql-general mailing list:
http://archives.postgresql.org/pgsql-general/2010-01/msg00522.php

My proposal is to relax that (clearly present but undocumented)
constraint and allow any SQL command in the EXECUTE...USING statement.
I would leave the responsibility to the programmer to ensure whether
the dynamic command makes any syntactic and semantic sense.
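For reference, the usual workaround today is to splice a properly quoted literal into the DDL string instead of relying on USING. A rough Python sketch of that quoting idea; the quote_literal below is a simplified stand-in for illustration, not the server's function:

```python
def quote_literal(value):
    """Minimal sketch of PostgreSQL-style literal quoting for str/int.

    Just enough to illustrate the workaround of splicing a quoted
    constant into DDL, since utility statements accept no $n parameters.
    """
    if isinstance(value, int):
        return str(value)
    return "'" + str(value).replace("'", "''") + "'"

def build_set_default(table, column, value):
    # In PL/pgSQL this would be:
    #   EXECUTE 'ALTER TABLE ... SET DEFAULT ' || quote_literal(value);
    # i.e. string concatenation instead of USING.
    return "ALTER TABLE %s ALTER COLUMN %s SET DEFAULT %s" % (
        table, column, quote_literal(value))

print(build_set_default("test", "i", 42))
# ALTER TABLE test ALTER COLUMN i SET DEFAULT 42
```

Doubling embedded quotes is what makes the splice injection-safe for plain literals; the proposal above would make this manual step unnecessary.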

-- 
Vincenzo Romano
NotOrAnd Information Technologies
NON QVIETIS MARIBVS NAVTA PERITVS



Re: [HACKERS] mailing list archiver chewing patches

2010-01-14 Thread Dimitri Fontaine
Matteo Beccati p...@beccati.com writes:
 WITH RECURSIVE t (mailbox, uid, date, subject, sender, has_attachments,
 parent_uid, idx, depth) AS (
   SELECT mailbox, uid, date, subject, sender, has_attachments, parent_uid,
 uid::text, 1
   FROM arc_messages
   WHERE parent_uid IS NULL AND mailbox = 15
   UNION ALL
   SELECT a.mailbox, a.uid, a.date, a.subject, a.sender, a.has_attachments,
 a.parent_uid, t.idx || '.' || a.uid::text, t.depth + 1
   FROM t JOIN arc_messages a USING (mailbox)
   WHERE t.uid = a.parent_uid
 ) SELECT * FROM t ORDER BY idx

 Any improvements to sorting are welcome :)

What I'd like would be to have it sorted by activity, showing first the
threads that received the latest messages. I've yet to play with CTEs and
window functions myself, so without a database to play with I
won't come up with a nice query, but I guess a more educated reader will
solve this without breaking a sweat, as it looks easier than sudoku
solving, which has been done already :)
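That activity ordering can be prototyped outside the database. A Python sketch (sample data invented) that puts the most recently active thread first while preserving reply order within each thread:

```python
msgs = [
    # (uid, root_uid, date, path) -- path as built by the recursive CTE
    (1, 1, "2010-01-10", (1,)),
    (2, 1, "2010-01-14", (1, 2)),
    (3, 3, "2010-01-15", (3,)),
]

# newest message date per thread root
activity = {}
for uid, root, date, path in msgs:
    activity[root] = max(activity.get(root, ""), date)

# two stable sorts: reply order within a thread, then activity between threads
by_path = sorted(msgs, key=lambda m: m[3])
ordered = sorted(by_path, key=lambda m: activity[m[1]], reverse=True)
print([m[0] for m in ordered])   # [3, 1, 2]: thread 3 saw the latest message
```

In SQL the same effect could presumably be had by joining each row to MAX(date) over its thread root and ordering by that first, then by the path.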

Regards,
-- 
dim



Re: [HACKERS] [PL/PgSQL] EXECUTE...USING enhancement proposal

2010-01-14 Thread Pavel Stehule
Hello

I think this is a bad idea.

a) this behavior depends on the DDL implementation, not the plpgsql
implementation

b) the proposed implementation needs some escaping magic. This was the
first implementation of the USING clause, and it was rejected. Composite
and nested values, in particular, break it significantly.

see in archive http://archives.postgresql.org/pgsql-patches/2006-08/msg00267.php

Regards
Pavel Stehule


2010/1/14 Vincenzo Romano vincenzo.rom...@notorand.it:
 Hi all.
 There's currently a limitation in the v8.4.2 implementation of the
 EXECUTE...USING predicate in PL/PgSQL which prevents you from
 exploiting the USING-supplied value list with DDL commands.
 For example:

 CREATE TABLE test ( i int );
 ...
 EXECUTE 'ALTER TABLE test ALTER COLUMN i SET DEFAULT $1' USING 42;

 complains with:

 ERROR:  there is no parameter $1
 CONTEXT:  SQL statement ALTER TABLE test ALTER COLUMN i SET DEFAULT $1

 while:

 EXECUTE 'SELECT $1' USING 42;

 works.
 In both cases the $1 variable/placeholder refers to a constant value.
 And actually, even if the thing defined after the USING lexeme was a
 variable, that should be evaluated and substituted *before* executing
 the command.

 The current documentation
 (http://www.postgresql.org/docs/8.4/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN)
 doesn't say so and clearly describes how this feature is meant to
 work.
 Quoting:
 
 The command string can use parameter values, which are referenced in
 the command as $1, $2,
 etc. These symbols refer to values supplied in the USING clause. This
 method is often preferable to
 inserting data values into the command string as text: it avoids
 run-time overhead of converting the
 values to text and back, and it is much less prone to SQL-injection
 attacks since there is no need for
 quoting or escaping. An example is:
 
 (38.5.4. Executing Dynamic Commands)

 It talks about values, that is, typed constants.
 Please, refer also to the following discussion on pgsql-general mailing list:
 http://archives.postgresql.org/pgsql-general/2010-01/msg00522.php

 My proposal is to relax that (clearly enforced but undocumented)
 constraint and allow any SQL command in the EXECUTE...USING predicate.
 I would leave the responsibility to the programmer to ensure whether
 the dynamic command makes any syntactic and semantic sense.

 --
 Vincenzo Romano
 NotOrAnd Information Technologies
 NON QVIETIS MARIBVS NAVTA PERITVS

 --
 Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-hackers


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] mailing list archiver chewing patches

2010-01-14 Thread Matteo Beccati

Il 14/01/2010 15:47, Dimitri Fontaine ha scritto:

Matteo Beccati p...@beccati.com writes:

WITH RECURSIVE t (mailbox, uid, date, subject, sender, has_attachments,
parent_uid, idx, depth) AS (
   SELECT mailbox, uid, date, subject, sender, has_attachments, parent_uid,
uid::text, 1
   FROM arc_messages
   WHERE parent_uid IS NULL AND mailbox = 15
   UNION ALL
   SELECT a.mailbox, a.uid, a.date, a.subject, a.sender, a.has_attachments,
a.parent_uid, t.idx || '.' || a.uid::text, t.depth + 1
   FROM t JOIN arc_messages a USING (mailbox)
   WHERE t.uid = a.parent_uid
) SELECT * FROM t ORDER BY idx

Any improvements to sorting are welcome :)


What I'd like would be to have it sorted by activity, showing first the
thread which received the later messages. I'm yet to play with CTE and
window function myself so without a database example to play with I
won't come up with a nice query, but I guess a more educated reader will
solve this without a sweat, as it looks easier than sudoku-solving,
which has been done already :)


Eheh, that was my first try as well. CTEs look very nice even though I'm 
not yet very comfortable with the syntax. Anyway, for both the date and 
thread indexes the sort is the other way around, with newer posts/threads 
at the bottom. Again, I'll give it a try as soon as I find time to work 
on it again.
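
For what it's worth, a hedged sketch of the activity ordering Dimitri
describes, reusing the recursive CTE above with a window function; the
arc_messages schema is assumed from the quoted query, and a root_uid
column is carried through the recursion for partitioning:

```sql
-- Order whole threads by their most recent message (newest thread first),
-- while keeping messages within a thread in reply order.
WITH RECURSIVE t (mailbox, uid, date, idx, root_uid) AS (
    SELECT mailbox, uid, date, uid::text, uid
    FROM arc_messages
    WHERE parent_uid IS NULL AND mailbox = 15
    UNION ALL
    SELECT a.mailbox, a.uid, a.date, t.idx || '.' || a.uid::text, t.root_uid
    FROM t JOIN arc_messages a USING (mailbox)
    WHERE t.uid = a.parent_uid
)
SELECT *
FROM t
ORDER BY max(date) OVER (PARTITION BY root_uid) DESC,  -- thread activity
         idx;                                          -- position in thread
```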



Cheers
--
Matteo Beccati

Development  Consulting - http://www.beccati.com/

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] mailing list archiver chewing patches

2010-01-14 Thread Magnus Hagander
On Thu, Jan 14, 2010 at 16:06, Matteo Beccati p...@beccati.com wrote:
 Il 14/01/2010 15:47, Dimitri Fontaine ha scritto:

 Matteo Beccati p...@beccati.com writes:

 WITH RECURSIVE t (mailbox, uid, date, subject, sender, has_attachments,
 parent_uid, idx, depth) AS (
   SELECT mailbox, uid, date, subject, sender, has_attachments,
 parent_uid,
 uid::text, 1
   FROM arc_messages
   WHERE parent_uid IS NULL AND mailbox = 15
   UNION ALL
   SELECT a.mailbox, a.uid, a.date, a.subject, a.sender,
 a.has_attachments,
 a.parent_uid, t.idx || '.' || a.uid::text, t.depth + 1
   FROM t JOIN arc_messages a USING (mailbox)
   WHERE t.uid = a.parent_uid
 ) SELECT * FROM t ORDER BY idx

 Any improvements to sorting are welcome :)

 What I'd like would be to have it sorted by activity, showing first the
 thread which received the later messages. I'm yet to play with CTE and
 window function myself so without a database example to play with I
 won't come up with a nice query, but I guess a more educated reader will
 solve this without a sweat, as it looks easier than sudoku-solving,
 which has been done already :)

 Eheh, that was my first try as well. CTEs look very nice even though I'm not
 yet very comfortable with the syntax. Anyway both for date and thread
 indexes sort is the other way around, with newer posts/threads at the
 bottom. Again I'll give it a try as soon as I find time to work again on it.

Three tips around this:

1) Don't be constrained by how things look now. Make something that's useful.

2) Don't be constrained by the fact that we have two ways to view it
now (thread + date). We can easily do three, if different people like
different ways, as long as there aren't so many that they become a
maintenance burden.

3) Remember to run your tests with lots of emails; some designs just
tend to fall apart at scale (say, a thread with 200+ emails in it).


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Streaming replication, retrying from archive

2010-01-14 Thread Heikki Linnakangas
Magnus Hagander wrote:
 On Thu, Jan 14, 2010 at 15:36, Robert Haas robertmh...@gmail.com wrote:
 On Thu, Jan 14, 2010 at 9:15 AM, Heikki Linnakangas
 heikki.linnakan...@enterprisedb.com wrote:
 Imagine this scenario:

 1. Master is up and running, standby is connected and streaming happily
 2. Network goes down, connection is broken.
 3. Standby falls behind a lot. Old WAL files that the standby needs are
 archived, and deleted from master.
 4. Network is restored. Standby reconnects
 5. Standby will get an error because the WAL file it needs is not in the
 master anymore.

 What will currently happen is:

 6. Standby retries connecting and failing indefinitely, until the admin
 restarts it.

 What we would *like* to happen is:

 6. Standby fetches the missing WAL files from archive, then reconnects
 and continues streaming.

 Can we fix that?
 Just MHO here, but this seems like a bigger project than we should be
 starting at this stage of the game.
 
 +1.
 
 We want this eventually (heck, it'd be awesome!), but let's get what
 we have now stable first.

If we don't fix that within the server, we will need to document that
caveat and every installation will need to work around that one way or
another. Maybe with some monitoring software and an automatic restart. Ugh.

I wasn't really asking if it's possible to fix, I meant "Let's think
about *how* to fix that."

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] archive_timeout behavior for no activity

2010-01-14 Thread Bruce Momjian
Looking at the archive_timeout documentation and CheckArchiveTimeout(),
it appears we force a new xlog file and archive it even if no activity
has been recorded in the xlog file.  Is this correct?  Should we
document this or fix it so only xlog files with contents are archived?

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Clearing global statistics

2010-01-14 Thread Tom Lane
Rafael Martinez r.m.guerr...@usit.uio.no writes:
 Is there any chance of implementing a way of knowing when was the last
 time statistics delivered via pg_stat_* were reset?

Actually, that brings up a more general question: what's with the
enthusiasm for clearing statistics *at all*?  ISTM that's something
you should do only in dire emergencies, like the collector went
haywire and has now got a bunch of garbage numbers.  The notion of
resetting subsets of the stats seems even more dubious, because now
you have numbers that aren't mutually comparable.  So I fail to
understand why the desire to expend valuable development time on
any of this.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] archive_timeout behavior for no activity

2010-01-14 Thread Kevin Grittner
Bruce Momjian br...@momjian.us wrote:
 
 Looking at the archive_timeout documentation and
 CheckArchiveTimeout(), it appears we force a new xlog file and
 archive it even if no activity has been recorded in the xlog file.
 Is this correct?  Should we document this or fix it so only xlog
 files with contents are archived?
 
Er, you can probably blame me for that.  Tom was going to fix it and
I pointed out that it would break our monitoring of our warm standby
processes.  We have a one hour maximum and send alerts if we've gone
75 minutes or more without receiving a WAL file from one of our
databases.  Of course, if we had a nicer way to know that we were
up-to-date with our WAL file copies, we wouldn't need this; but
right now there aren't a lot of options for monitoring these things.
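 
That 75-minute check can be sketched as a small cron script; the archive
directory path and the check_wal_age helper name here are illustrative
assumptions, not part of our actual monitoring setup:

```shell
#!/bin/sh
# Print ALERT when no WAL segment has arrived in the archive directory
# within the threshold (in minutes); otherwise print OK.
check_wal_age() {
    archive_dir=$1
    threshold_min=$2
    # find -mmin -N matches files modified within the last N minutes
    recent=$(find "$archive_dir" -type f -mmin "-$threshold_min" 2>/dev/null | head -n 1)
    if [ -z "$recent" ]; then
        echo "ALERT: no WAL received in the last $threshold_min minutes"
    else
        echo "OK: WAL archiving is current"
    fi
}

check_wal_age /var/lib/pgsql/wal_archive 75
```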
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Miscellaneous changes to plperl [PATCH]

2010-01-14 Thread Tim Bunce
This is the second of the patches to be split out from the former
'plperl feature patch 1'.

Changes in this patch:

- Allow (ineffective) use of 'require' in plperl.
If the required module is not already loaded then it dies.
So "use strict;" now works in plperl.

- Pre-load the feature module if perl >= 5.10.
So "use feature ':5.10';" now works in plperl.

- Stored procedure subs are now given names.
The names are not visible in ordinary use, but they make
tools like Devel::NYTProf and Devel::Cover _much_ more useful.

- Simplified and generalized the subroutine creation code.
Now one code path for generating sub source code, not four.
Can generate multiple 'use' statements with specific imports
(which handles plperl.use_strict currently and can easily
be extended to handle a plperl.use_feature=':5.12' in future).

- Disallows use of Safe version 2.20 which is broken for PL/Perl.
http://rt.perl.org/rt3/Ticket/Display.html?id=72068

- Assorted minor optimizations by pre-growing data structures.

This patch will apply cleanly over the 'add functions' patch:
https://commitfest.postgresql.org/action/patch_view?id=264

Tim.
diff --git a/doc/src/sgml/plperl.sgml b/doc/src/sgml/plperl.sgml
index 94db722..6fee031 100644
*** a/doc/src/sgml/plperl.sgml
--- b/doc/src/sgml/plperl.sgml
*** SELECT * FROM perl_set();
*** 285,313 
/para
  
para
!If you wish to use the literalstrict/ pragma with your code,
!the easiest way to do so is to commandSET/
!literalplperl.use_strict/literal to true.  This parameter affects
!subsequent compilations of applicationPL/Perl/ functions, but not
!functions already compiled in the current session.  To set the
!parameter before applicationPL/Perl/ has been loaded, it is
!necessary to have added quoteliteralplperl// to the xref
!linkend=guc-custom-variable-classes list in
!filenamepostgresql.conf/filename.
/para
  
para
!Another way to use the literalstrict/ pragma is to put:
  programlisting
  use strict;
  /programlisting
!in the function body.  But this only works in applicationPL/PerlU/
!functions, since the literaluse/ triggers a literalrequire/
!which is not a trusted operation.  In
!applicationPL/Perl/ functions you can instead do:
! programlisting
! BEGIN { strict-import(); }
! /programlisting
/para
   /sect1
  
--- 285,323 
/para
  
para
!If you wish to use the literalstrict/ pragma with your code you have a few options.
!For temporary global use you can commandSET/ literalplperl.use_strict/literal
!to true (see xref linkend=plperl.use_strict).
!This will affect subsequent compilations of applicationPL/Perl/
!functions, but not functions already compiled in the current session.
!For permanent global use you can set literalplperl.use_strict/literal
!to true in the filenamepostgresql.conf/filename file.
/para
  
para
!For permanent use in specific functions you can simply put:
  programlisting
  use strict;
  /programlisting
!at the top of the function body.
!   /para
! 
!   para
!   The literalfeature/ pragma is also available to functionuse/ if your Perl is version 5.10.0 or higher.
!   /para
! 
!  /sect1
! 
!  sect1 id=plperl-data
!   titleData Values in PL/Perl/title
! 
!   para
!The argument values supplied to a PL/Perl function's code are
!simply the input arguments converted to text form (just as if they
!had been displayed by a commandSELECT/command statement).
!Conversely, the functionreturn/function and functionreturn_next/function
!commands will accept any string that is acceptable input format
!for the function's declared return type.
/para
   /sect1
  
*** SELECT done();
*** 682,699 
   /sect2
   /sect1
  
-  sect1 id=plperl-data
-   titleData Values in PL/Perl/title
- 
-   para
-The argument values supplied to a PL/Perl function's code are
-simply the input arguments converted to text form (just as if they
-had been displayed by a commandSELECT/command statement).
-Conversely, the literalreturn/ command will accept any string
-that is acceptable input format for the function's declared return
-type.  So, within the PL/Perl function,
-all values are just text strings.
-   /para
   /sect1
  
   sect1 id=plperl-global
--- 692,697 
*** CREATE TRIGGER test_valid_id_trig
*** 1042,1049 
 itemizedlist
  listitem
   para
!   PL/Perl functions cannot call each other directly (because they
!   are anonymous subroutines inside Perl).
   /para
  /listitem
  
--- 1040,1046 
 itemizedlist
  listitem
   para
!   PL/Perl functions cannot call each other directly.
   /para
  /listitem
  
*** CREATE TRIGGER test_valid_id_trig
*** 1072,1077 
--- 1069,1076 
  /listitem
 /itemizedlist
/para
+  /sect2
+ 
   /sect1
  
  /chapter
diff 

Re: [HACKERS] Streaming replication, retrying from archive

2010-01-14 Thread Dimitri Fontaine
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
 If we don't fix that within the server, we will need to document that
 caveat and every installation will need to work around that one way or
 another. Maybe with some monitoring software and an automatic restart. Ugh.

 I wasn't really asking if it's possible to fix, I meant "Let's think
 about *how* to fix that."

Did I mention my viewpoint on that already?
  http://archives.postgresql.org/pgsql-hackers/2009-07/msg00943.php

It could well be I'm talking about things that have no relation at all
to what is in the patch currently, and that make no sense for where we
want the patch to go. But I'd like to know about that so that I'm not
banging my head on the nearest wall each time the topic surfaces.

Regards,
-- 
dim

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Streaming replication, retrying from archive

2010-01-14 Thread Fujii Masao
On Fri, Jan 15, 2010 at 12:23 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
 If we don't fix that within the server, we will need to document that
 caveat and every installation will need to work around that one way or
 another. Maybe with some monitoring software and an automatic restart. Ugh.

 I wasn't really asking if it's possible to fix, I meant "Let's think
 about *how* to fix that."

OK. How about the following (though it's a rough design)?

(1) If walsender cannot read the WAL file because of ENOENT, it sends a
    special message indicating that error to walreceiver. This message is
    shipped over the COPY protocol.

(2-a) If the message arrives, walreceiver exits by using proc_exit().
(3-a) If the startup process detects the exit of walreceiver in
      WaitNextXLogAvailable(), it switches back to normal archive recovery
      mode, closes the currently opened WAL file, resets some variables
      (readId, readSeg, etc.), and calls FetchRecord() again. It then tries
      to restore the WAL file from the archive if restore_command is
      supplied, and switches to streaming recovery mode again once invalid
      WAL is found.

Or

(2-b) If the message arrives, walreceiver executes restore_command, and
      then sets receivedUpto to the end location of the restored WAL file.
      The restored file is expected to be completely filled, because it no
      longer exists in the primary's pg_xlog, so updating receivedUpto this
      way is OK.
(3-b) After one WAL file is restored, walreceiver tries to connect to the
      primary and starts replication again. If the ENOENT error occurs
      again, we go back to (1).

I like the latter approach since it's simpler. Thoughts?

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Streaming replication, retrying from archive

2010-01-14 Thread Fujii Masao
On Fri, Jan 15, 2010 at 1:06 AM, Dimitri Fontaine
dfonta...@hi-media.com wrote:
 Did I mention my viewpoint on that already?
  http://archives.postgresql.org/pgsql-hackers/2009-07/msg00943.php

  0. base: slave asks the master for a base-backup, at the end of this it
 reaches the base-lsn

What if the WAL file containing the archive recovery starting location has
been removed from the primary's pg_xlog before the end of the online backup
(i.e., step 0)? Where should the standby get such a WAL file from, and how?

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] EXPLAIN, utility statement parameters, and recent plpgsql changes

2010-01-14 Thread Tom Lane
Pavel pointed out here
http://archives.postgresql.org/pgsql-hackers/2010-01/msg01233.php
that it no longer works to reference plpgsql variables in EXPLAIN
statements in plpgsql.  I dug into this a bit, and the code is trying
to do it but it doesn't quite work.

The issue centers around the behavior of the ParamListInfo data
structure, which was originally intended to carry only values for a
fixed set of $1 .. $n parameters (as in PREPARE/EXECUTE for instance).
This is the structure that carries plpgsql values into a command that's
executed as a cursor.  To support the recent changes in plpgsql parsing,
I extended that struct to also carry parser hook functions.  The idea is
that while doing parse analysis of a statement, the parser hook
functions could capture references to plpgsql variables and turn them
into Params, which would then reference the data area of the
ParamListInfo struct at runtime.

This works well enough for regular DML statements, but it falls down for
EXPLAIN which is a utility statement, because *parse analysis of utility
statements doesn't do anything*.  EXPLAIN actually does the parse
analysis of its contained statement at the beginning of execution.
And that is too late, in the scenario Pavel exhibited.  Why is it too
late?  Because SPI_cursor_open_internal() intentionally freezes the
ParamListInfo struct after doing initial parsing: what it copies into
the cursor portal is just a static list of data values without the
parser hooks (see copyParamList).  This is really necessary because the
execution of the portal could outlive the function that created the
cursor, so we can't safely execute its parsing hooks anymore.

So what to do about it?  I can see two basic avenues towards a solution:

1. Change things so that copyParamList copies enough state into the
cursor portal so that we can still run the plpgsql parsing hooks during
cursor execution.  In the worst case this would imply copying *all*
local variables and parameters of the plpgsql function into the cursor
portal, plus a lot of names, types, etc.  We could perhaps optimize
things enough to only copy the values actually referenced, but it still
seems like possibly a rather nasty performance hit.  And it'd affect not
only explicit cursors, but every plpgsql for-over-rows construct,
because those are cursors internally.

2. Redesign EXPLAIN so that it parses the contained query in the initial
parsing step; it wouldn't be a simple utility command anymore but a
hybrid much like DECLARE CURSOR.  I think this would not be very messy.
The main objection to it is that it doesn't scale to solve the problem
for other types of utility statements.  Now we don't support parameters
in other types of utility statements anyway, but it's something we'd
like to do someday probably.

(Of course there are also 3. "Sorry, we're not going to support
variables in EXPLAIN anymore" and 4. "Revert all those parsing fixes
in plpgsql", but I rejected these solutions out of hand.)

I'm kind of leaning to #2, particularly given that we don't have time
to expend a great deal of work on this for 8.5.  But I wonder if anyone
has any comments or alternative ideas.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Miscellaneous changes to plperl [PATCH]

2010-01-14 Thread David E. Wheeler
On Jan 14, 2010, at 8:07 AM, Tim Bunce wrote:

 - Stored procedure subs are now given names.
The names are not visible in ordinary use, but they make
tools like Devel::NYTProf and Devel::Cover _much_ more useful.

Wasn't this in the previous patch, too?

Best,

David

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Clearing global statistics

2010-01-14 Thread Greg Smith

Tom Lane wrote:

Actually, that brings up a more general question: what's with the
enthusiasm for clearing statistics *at all*?  ISTM that's something
you should do only in dire emergencies, like the collector went
haywire and has now got a bunch of garbage numbers.  The notion of
resetting subsets of the stats seems even more dubious, because now
you have numbers that aren't mutually comparable.  So I fail to
understand why the desire to expend valuable development time on
any of this.
  


When doing checkpoint tuning, the usual starting point is to consider 
the ratio of time-based to segment-based checkpoints, along with the 
corresponding balance of buffers written by the backends vs. during 
checkpoints.  When that shows poor behavior, typically because 
checkpoint_segments is too low, you change its value and then resume 
monitoring at the new setting.  Right now, though, you're still carrying 
around the history of the bad period forever, and every check of 
pg_stat_bgwriter requires manually subtracting the earlier values out.  
What people would like to do is reset those after adjusting 
checkpoint_segments, so they can eyeball the proportions directly 
instead.  That's exactly what the patch does.  If I didn't see this 
request in the field every month I wouldn't have spent a minute on a 
patch to add it.


There was a suggestion that subsets of the data I'm clearing might be 
useful to target, which I rejected on the grounds that it would make it 
possible to get an inconsistent set of results, exactly as you're 
concerned about.  You really need to clear everything that shows up in 
pg_stat_bgwriter or not touch it at all.  The main use case I'm trying 
to support is the person who just made a config change and now wants to do:


select pg_stat_reset();
select pg_stat_reset_shared('bgwriter');

So that all of the stats they're now dealing with are from the same 
post-tuning time period.  Having numbers that are mutually comparable 
across the whole system is exactly the reason why this new call is 
needed, because there's this one part you just can't touch.


--
Greg Smith2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Clearing global statistics

2010-01-14 Thread Greg Smith

Euler Taveira de Oliveira wrote:

Greg Smith escreveu:
  

pg_stat_reset( which text )
  which := 'buffers' | 'checkpoints' | 'tables' | 'functions' |  ...



What about adding 'all' too? Or is the idea to reset all global counters when
we call pg_stat_reset() (without parameters)?
  


Once there's more than one piece to clear, maybe adding an 'all' 
target makes sense.  In the context of the update patch I've finished, 
it just doesn't make sense given the code involved.


--
Greg Smith2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com



Re: [HACKERS] Clearing global statistics

2010-01-14 Thread Magnus Hagander
2010/1/14 Tom Lane t...@sss.pgh.pa.us:
 Rafael Martinez r.m.guerr...@usit.uio.no writes:
 Is there any chance of implementing a way of knowing when was the last
 time statistics delivered via pg_stat_* were reset?

 Actually, that brings up a more general question: what's with the
 enthusiasm for clearing statistics *at all*?  ISTM that's something
 you should do only in dire emergencies, like the collector went
 haywire and has now got a bunch of garbage numbers.  The notion of
 resetting subsets of the stats seems even more dubious, because now
 you have numbers that aren't mutually comparable.  So I fail to
 understand why the desire to expend valuable development time on
 any of this.

s/collector/application/ and you've got one reason.

An example I hit the other day: examining pg_stat_user_functions
showed one function taking much longer than you'd expect. Called about
6 million times, total time about 7 days spent. The reason turned out to
be a missing index. Without clearing the stats, it'll take a *long*
time before the average goes down enough to make it possible to use
the simple SELECT self_time/calls FROM pg_stat_user_functions WHERE...
to monitor. Sure, if you have a system that graphs it, it'll update
properly, but for quick manual checks that view suddenly becomes
a lot less useful.
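
The quick manual check referred to above might look like this, as a sketch
against the standard pg_stat_user_functions view:

```sql
-- Average self time per call; a freshly cleared counter makes this
-- reflect current behavior instead of months of accumulated history.
SELECT schemaname, funcname, calls,
       self_time / NULLIF(calls, 0) AS avg_self_time
FROM pg_stat_user_functions
WHERE calls > 0
ORDER BY avg_self_time DESC;
```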


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Miscellaneous changes to plperl [PATCH]

2010-01-14 Thread Tim Bunce
On Thu, Jan 14, 2010 at 09:34:42AM -0800, David E. Wheeler wrote:
 On Jan 14, 2010, at 8:07 AM, Tim Bunce wrote:
 
  - Stored procedure subs are now given names.
 The names are not visible in ordinary use, but they make
 tools like Devel::NYTProf and Devel::Cover _much_ more useful.
 
 Wasn't this in the previous patch, too?

Ah, I see it was in the description of the previous patch but not in the
patch itself. Thanks. I'll add a note to the commitfest.

Tim.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Clearing global statistics

2010-01-14 Thread Greg Smith

Rafael Martinez wrote:

One thing I miss from the statistics you can get via pg_stat_* is
information about how long we have been collecting stats (or in other
words, when was the last time the stats were reset)
  


I've considered adding this for the same reasons you're asking about it, 
but am not happy with the trade-offs involved.  The problem is that you 
have to presume the server was running the entirety of the time since 
stats were reset for that data to be useful.  So unless people are in 
that situation, they're going to get data that may not represent what 
they think it does.  Realistically, if you want a timestamp that always 
means something useful you have to reset the stats at every server start, 
which leads us to:



Before 8.3, we had the stats_reset_on_server_start parameter and the
pg_postmaster_start_time() function. This was an easy way of resetting
*all* statistics delivered by pg_stat_* and knowing when this was done.
We were able to produce stats with information about sec/hours/days
average values in an easy way.
  


With this new feature I'm submitting, you can adjust your database 
startup scripts to make this happen again.  Start the server, 
immediately loop over every database and call pg_stat_reset on them all, 
and call pg_stat_reset_shared('bgwriter').  Now you've got completely 
cleared stats that are within a second or two of 
pg_postmaster_start_time(), which should be close enough for most purposes.  
Theoretically we could automate that better, but I've found it hard to 
justify working on given that it's not that difficult to handle outside 
of the database once the individual pieces are exposed.
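
A sketch of such a startup hook; the emit_reset_commands helper and the
hard-coded database names are illustrative assumptions (in practice the
list would come from "SELECT datname FROM pg_database WHERE datallowconn"):

```shell
#!/bin/sh
# Emit one psql call per database to clear its stats, plus one call for
# the shared bgwriter counters from this patch. Pipe the output to sh to
# execute it right after server start.
emit_reset_commands() {
    for db in "$@"; do
        echo "psql -d $db -c 'SELECT pg_stat_reset();'"
    done
    echo "psql -d postgres -c \"SELECT pg_stat_reset_shared('bgwriter');\""
}

emit_reset_commands template1 postgres
```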


--
Greg Smith2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] archive_timeout behavior for no activity

2010-01-14 Thread Fujii Masao
On Fri, Jan 15, 2010 at 12:50 AM, Bruce Momjian br...@momjian.us wrote:
 Looking at the archive_timeout documentation and CheckArchiveTimeout(),
 it appears we force a new xlog file and archive it even if no activity
 has been recorded in the xlog file.  Is this correct?

No. CheckArchiveTimeout() doesn't switch WAL files if there has been no
activity since the last switch. In fact, though it calls RequestXLogSwitch(),
the switch is skipped in XLogInsert() because in that case we are exactly
at the start of a file.

But unfortunately a checkpoint record is often written between switches,
so archive_timeout appears to always force a new WAL file.

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] primary key display in psql

2010-01-14 Thread Ross J. Reedstrom
On Wed, Jan 13, 2010 at 05:03:33PM -0500, Robert Haas wrote:
 On Wed, Jan 13, 2010 at 4:47 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 
  Spelling out primary key would seem to be more in keeping with existing
  entries in that column, eg we have not null not NN.
 
  I think this is a sensible proposal for a single-column PK, but am less
  sure that it makes sense for multi-col.  The modifiers column is
  intended to describe column constraints; which a multi-col PK is not,
  by definition.
 
 Yeah, IIRC, MySQL shows PRI for each column of a multi-column primary
 key, and I think it's horribly confusing.  I wouldn't even be in favor
 of doing this just for the single-column case, on the grounds that it
 makes the single and multiple column cases asymmetrical.  IMO, the \d
 output has too many bells and whistles already; the last thing we
 should do is add more.

How about spelling it like so:

     Table "public.test"
 Column |  Type   |  Modifiers
--------+---------+-------------
 a      | integer | primary key
 b      | integer |
Indexes:
    "test1_pkey" PRIMARY KEY, btree (a)


     Table "public.test2"
 Column |  Type   |       Modifiers
--------+---------+------------------------
 a      | integer | primary key (compound)
 b      | integer | primary key (compound)
Indexes:
    "test2_pkey" PRIMARY KEY, btree (a, b)

As to Tom's point that a compound primary key is, by definition, a
table-level constraint: participating in such a key is still a
restriction on what values that column can take. When introspecting
someone else's schema, with a very wide table, seeing '(compound)'
is a nice strong hint to go looking for the other members of the PK.

Ross
-- 
Ross Reedstrom, Ph.D. reeds...@rice.edu
Systems Engineer  Admin, Research Scientistphone: 713-348-6166
The Connexions Project  http://cnx.orgfax: 713-348-3665
Rice University MS-375, Houston, TX 77005
GPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E  F888 D3AE 810E 88F0 BEDE

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Clearing global statistics

2010-01-14 Thread Tom Lane
Greg Smith g...@2ndquadrant.com writes:
 Tom Lane wrote:
 Actually, that brings up a more general question: what's with the
 enthusiasm for clearing statistics *at all*?

 ... Right now, you're still carrying around 
 the history of the bad period forever though, and every check of the 
 pg_stat_bgwriter requires manually subtracting the earlier values out.  

Seems like a more appropriate solution would be to make it easier to do
that subtraction, ie, make it easier to capture the values at a given
time point and then get deltas from there.  It's more general (you could
have multiple saved sets of values), and doesn't require superuser
permissions to do, and doesn't have the same potential for
damn-I-wish-I-hadn't-done-that moments.
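
A minimal sketch of that capture-and-delta idea, using only the standard
pg_stat_bgwriter view (the snapshot table name and the chosen columns are
illustrative; any of the view's counters can be handled the same way):

    -- Save a timestamped snapshot of pg_stat_bgwriter (no superuser needed).
    CREATE TABLE bgwriter_snapshot AS
    SELECT now() AS snap_time, * FROM pg_stat_bgwriter;

    -- Later, compute deltas against the most recent saved snapshot.
    SELECT c.checkpoints_timed - s.checkpoints_timed AS checkpoints_timed_delta,
           c.buffers_checkpoint - s.buffers_checkpoint AS buffers_checkpoint_delta,
           now() - s.snap_time AS elapsed
    FROM pg_stat_bgwriter c,
         (SELECT * FROM bgwriter_snapshot
          ORDER BY snap_time DESC LIMIT 1) s;

Nothing is ever destroyed, so a mistaken snapshot can simply be deleted,
and multiple saved sets of values can coexist.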

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Add .gitignore files to CVS?

2010-01-14 Thread Chris Browne
bada...@gmail.com (Alex Hunsaker) writes:
 On Fri, Jan 8, 2010 at 02:03, Magnus Hagander mag...@hagander.net wrote:
 You can always create your own branch with just the .gitignore files
 and merge that into whatever you're working on :)

 The only thing annoying about that is if you generate diffs ala git
 diff origin/master.. you get your .gitignore in it.

 What I do is have a .gitignore that is gitignored.  That way it's not
 committed, it's on any branch I switch to or make, and I don't
 accidentally commit it.

I'd put that in $GITHOME/.git/info/exclude

That's specifically what that file's for...
-- 
output = reverse(moc.liamg @ enworbbc)
http://www3.sympatico.ca/cbbrowne/slony.html
DSK: STAN.K; ML EXIT -- FILE NOT FOUND

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] review: More frame options in window functions

2010-01-14 Thread Hitoshi Harada
2010/1/14 Pavel Stehule pavel.steh...@gmail.com:
 Hello

 I looked on Hitoshi's patch - and my result is:

Thanks for the review. I've found another crash today and attached is
fixed version. The case is:

SELECT four, sum(ten) over (PARTITION BY four ORDER BY four RANGE 1
PRECEDING) FROM tenk1 WHERE unique1 < 10;

The planner recognizes windowagg->ordNumCol as 0 for optimization, but
RANGE offset cases really do need that value, as the syntax implies.

Regards,


-- 
Hitoshi Harada


more_frame_options.20100115.patch.gz
Description: GNU Zip compressed data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] mailing list archiver chewing patches

2010-01-14 Thread David Fetter
On Thu, Jan 14, 2010 at 03:08:22PM +0100, Matteo Beccati wrote:
 On 14/01/2010 14:39, Dimitri Fontaine wrote:
 Matteo Beccati p...@beccati.com writes:
 I've extended AOX with a trigger that takes care of filling a separate table
 that's used to display the index pages. The new table also stores threading
 information (standard headers + Exchange headers support) and whether or not
 the email has attachments.
 
 Please check the updated PoC: http://archives.beccati.org/
 
 Looks pretty good, even if some threads are still separated (this one for
 example), and the ordering looks strange.
 
 This one is separated as the first one is not in the archive yet,
 thus to the system there are multiple parent messages. It shouldn't
 happen with full archives. About sorting, here's the query I've used
 (my first try with CTEs incidentally):
 
 WITH RECURSIVE t (mailbox, uid, date, subject, sender,
 has_attachments, parent_uid, idx, depth) AS (
   SELECT mailbox, uid, date, subject, sender, has_attachments,
 parent_uid, uid::text, 1
   FROM arc_messages
   WHERE parent_uid IS NULL AND mailbox = 15
   UNION ALL
   SELECT a.mailbox, a.uid, a.date, a.subject, a.sender,
 a.has_attachments, a.parent_uid, t.idx || '.' || a.uid::text,
 t.depth + 1
   FROM t JOIN arc_messages a USING (mailbox)
   WHERE t.uid = a.parent_uid
 ) SELECT * FROM t ORDER BY idx

 Any improvements to sorting are welcome :)

This is probably better written as:

WITH RECURSIVE t (
mailbox,
uid,
date,
subject,
sender,
has_attachments,
path
)
AS (
SELECT
mailbox,
uid,
date,
subject,
sender,
has_attachments,
ARRAY[uid]
FROM
arc_messages
WHERE
parent_uid IS NULL AND
mailbox = 15
UNION ALL
SELECT
a.mailbox,
a.uid,
a.date,
a.subject,
a.sender,
a.has_attachments,
t.path || a.uid
FROM
t JOIN arc_messages a
ON (
a.mailbox = t.mailbox AND
t.uid = a.parent_uid
)
)
SELECT *
FROM t
ORDER BY path;
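
The integer-array path is why this version sorts correctly: text paths
mis-sort as soon as uids reach two digits, while array comparison is
element-wise and numeric. A quick standalone illustration (no tables
assumed):

    -- Text comparison puts '1.10' before '1.9', which is wrong for paths.
    SELECT '1.10' < '1.9' AS text_missorts;            -- true

    -- Integer arrays compare element by element: {1,10} sorts after {1,9}.
    SELECT ARRAY[1,10] > ARRAY[1,9] AS array_sorts_ok; -- true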

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] EXPLAIN, utility statement parameters, and recent plpgsql changes

2010-01-14 Thread Pavel Stehule
2010/1/14 Tom Lane t...@sss.pgh.pa.us:
 Pavel pointed out here
 http://archives.postgresql.org/pgsql-hackers/2010-01/msg01233.php
 that it no longer works to reference plpgsql variables in EXPLAIN
 statements in plpgsql.  I dug into this a bit, and the code is trying
 to do it but it doesn't quite work.

 The issue centers around the behavior of the ParamListInfo data
 structure, which was originally intended to carry only values for a
 fixed set of $1 .. $n parameters (as in PREPARE/EXECUTE for instance).
 This is the structure that carries plpgsql values into a command that's
 executed as a cursor.  To support the recent changes in plgsql parsing,
 I extended that struct to also carry parser hook functions.  The idea is
 that while doing parse analysis of a statement, the parser hook
 functions could capture references to plpgsql variables and turn them
 into Params, which would then reference the data area of the
 ParamListInfo struct at runtime.

 This works well enough for regular DML statements, but it falls down for
 EXPLAIN which is a utility statement, because *parse analysis of utility
 statements doesn't do anything*.  EXPLAIN actually does the parse
 analysis of its contained statement at the beginning of execution.
 And that is too late, in the scenario Pavel exhibited.  Why is it too
 late?  Because SPI_cursor_open_internal() intentionally freezes the
 ParamListInfo struct after doing initial parsing: what it copies into
 the cursor portal is just a static list of data values without the
 parser hooks (see copyParamList).  This is really necessary because the
 execution of the portal could outlive the function that created the
 cursor, so we can't safely execute its parsing hooks anymore.

 So what to do about it?  I can see two basic avenues towards a solution:

 1. Change things so that copyParamList copies enough state into the
 cursor portal so that we can still run the plpgsql parsing hooks during
 cursor execution.  In the worst case this would imply copying *all*
 local variables and parameters of the plpgsql function into the cursor
 portal, plus a lot of names, types, etc.  We could perhaps optimize
 things enough to only copy the values actually referenced, but it still
 seems like possibly a rather nasty performance hit.  And it'd affect not
 only explicit cursors, but every plpgsql for-over-rows construct,
 because those are cursors internally.

 2. Redesign EXPLAIN so that it parses the contained query in the initial
 parsing step; it wouldn't be a simple utility command anymore but a
 hybrid much like DECLARE CURSOR.  I think this would not be very messy.
 The main objection to it is that it doesn't scale to solve the problem
 for other types of utility statements.  Now we don't support parameters
 in other types of utility statements anyway, but it's something we'd
 like to do someday probably.

+1

Pavel

 (Of course there are also 3. "Sorry, we're not going to support
 variables in EXPLAIN anymore" and 4. "Revert all those parsing fixes
 in plpgsql", but I rejected these solutions out of hand.)

 I'm kind of leaning to #2, particularly given that we don't have time
 to expend a great deal of work on this for 8.5.  But I wonder if anyone
 has any comments or alternative ideas.

                        regards, tom lane

 --
 Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-hackers


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Clearing global statistics

2010-01-14 Thread Greg Smith

Tom Lane wrote:

Seems like a more appropriate solution would be to make it easier to do
that subtraction, ie, make it easier to capture the values at a given
time point and then get deltas from there.  It's more general (you could
have multiple saved sets of values), and doesn't require superuser
permissions to do, and doesn't have the same potential for
damn-I-wish-I-hadn't-done-that moments.
  


You can make the same argument about the existing pg_stat_reset 
mechanism.  I would love to completely rework the stats infrastructure 
so that it's easier to capture values with timestamps, compute diffs, 
and do trending.  However, I'm not sure the database itself is 
necessarily the best place to do that at anyway.  People who know what 
they're doing are already handling this exact job using external tools 
that grab regular snapshots for that purpose, so why try to duplicate 
that work?


I'm trying to triage here, to scrub off the worst of the common 
problems.  I would never claim this is the perfect direction to follow 
forever.  There are a number of people who consider the inability to 
reset the pg_stat_bgwriter stats in any way a bug that's gone unfixed 
for two versions now.  Your larger point that this style of 
implementation is not ideal as a long-term way to manage statistics I 
would completely agree with, I just don't have the time to spend on a 
major rewrite to improve that.  What I can offer is a fix for the most 
common issue I get complaints about, in the form of a tool much more 
likely to be used correctly by people who go looking for it than misused 
IMHO.


--
Greg Smith    2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Miscellaneous changes to plperl [PATCH]

2010-01-14 Thread David Fetter
On Thu, Jan 14, 2010 at 05:49:54PM +, Tim Bunce wrote:
 On Thu, Jan 14, 2010 at 09:34:42AM -0800, David E. Wheeler wrote:
  On Jan 14, 2010, at 8:07 AM, Tim Bunce wrote:
  
   - Stored procedure subs are now given names.
  The names are not visible in ordinary use, but they make
  tools like Devel::NYTProf and Devel::Cover _much_ more useful.
  
  Wasn't this in the previous patch, too?
 
 Ah, I see it was in the description of the previous patch but not in
 the patch itself.  Thanks.  I'll add a note to the commitfest.

A description here would help, too :)

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] EXPLAIN, utility statement parameters, and recent plpgsql changes

2010-01-14 Thread Dimitri Fontaine

Tom Lane t...@sss.pgh.pa.us writes:
 This works well enough for regular DML statements, but it falls down for
 EXPLAIN which is a utility statement, because *parse analysis of utility
 statements doesn't do anything*.  EXPLAIN actually does the parse
 analysis of its contained statement at the beginning of execution.
 And that is too late, in the scenario Pavel exhibited.  Why is it too
 late?  Because SPI_cursor_open_internal() intentionally freezes the
 ParamListInfo struct after doing initial parsing: what it copies into
 the cursor portal is just a static list of data values without the
 parser hooks (see copyParamList).

Would it make any sense for this function to get to call the hook in the
case a utility statement is being processed?

Regards,
-- 
dim

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Streaming replication, retrying from archive

2010-01-14 Thread Dimitri Fontaine
Fujii Masao masao.fu...@gmail.com writes:
 On Fri, Jan 15, 2010 at 1:06 AM, Dimitri Fontaine dfonta...@hi-media.com 
 wrote:
  0. base: slave asks the master for a base-backup, at the end of this it
 reaches the base-lsn

 What if the WAL file including the archive recovery starting location has
 been removed from the primary's pg_xlog before the end of online-backup
 (i.e., the procedure 0)? Where should the standby get such a WAL file from?
 How?

I guess it would be perfectly sensible for 8.5, given the timeframe, to
not implement this as part of SR, but tell our users they need to make a
base backup themselves.

If after that the first WAL we need from the master ain't available, 8.5
SR should maybe only issue an ERROR with a HINT explaining how to avoid
running into the problem when trying again.

But how we handle failures when transitioning from one state to the
other should be a lot easier to discuss and decide as soon as we have
the possible states and the transitions we want to allow and support. I
think.

My guess is that those states and transitions are in the code, but not
explicit, so that each time we talk about how to handle the error cases
we have to be extra verbose and we risk not talking about exactly the
same thing. Naming the states should make those arrangements easier, I
should think. Not sure if it would help follow the time constraint now
though.

Regards,
-- 
dim

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] EXPLAIN, utility statement parameters, and recent plpgsql changes

2010-01-14 Thread Tom Lane
Dimitri Fontaine dfonta...@hi-media.com writes:
 Tom Lane t...@sss.pgh.pa.us writes:
 This works well enough for regular DML statements, but it falls down for
 EXPLAIN which is a utility statement, because *parse analysis of utility
 statements doesn't do anything*.  EXPLAIN actually does the parse
 analysis of its contained statement at the beginning of execution.
 And that is too late, in the scenario Pavel exhibited.  Why is it too
 late?  Because SPI_cursor_open_internal() intentionally freezes the
 ParamListInfo struct after doing initial parsing: what it copies into
 the cursor portal is just a static list of data values without the
 parser hooks (see copyParamList).

 Would it make any sense for this function to get to call the hook in the
 case a utility statement is being processed?

Well, the point of the hook is to change the results of parse
transformation, so just calling it doesn't do much --- you have to apply
the whole parse analysis process, *and keep the resulting tree*.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] EXPLAIN, utility statement parameters, and recent plpgsql changes

2010-01-14 Thread Tom Lane
I wrote:
 2. Redesign EXPLAIN so that it parses the contained query in the initial
 parsing step; it wouldn't be a simple utility command anymore but a
 hybrid much like DECLARE CURSOR.  I think this would not be very messy.
 The main objection to it is that it doesn't scale to solve the problem
 for other types of utility statements.  Now we don't support parameters
 in other types of utility statements anyway, but it's something we'd
 like to do someday probably.

I've been looking some more at this.  The analogy to DECLARE CURSOR
isn't as good as I thought: we can't use a transformed representation
similar to DECLARE CURSOR's (namely, a Query with some extra stuff in
its utilityStmt field) because EXPLAIN can take non-SELECT queries,
which could be rewritten into multiple Query trees by the action of
rules.  So it seems the transformed representation would have to be
an ExplainStmt with a list of Queries underneath it.

The reason for the rule that utility statements aren't affected by parse
analysis is that parse analysis of regular queries takes locks on the
referenced tables, and we must keep hold of those locks to be sure that
the transformed tree still reflects database reality.  At the time we
made that rule it seemed too messy to consider doing anything similar
for utility statements.  However, now the locking considerations have
been centralized in plancache.c, which knows about re-taking locks on
a possibly stale cached plan.  So the price of doing parse analysis of
EXPLAIN's target statement during the normal parse analysis phase is
just going to be some adjustments in plancache.c so that it knows to
look underneath an ExplainStmt for queries representing additional locks
to re-take.  This is a little bit ugly, but not really any worse than
what it knows already about the representation of parsed queries.

So I conclude that the "it doesn't scale" argument isn't as strong
as it seemed.  In principle, to support parameters in other utility
statements, we'll have the same type of changes to make:
  * transform the expressions that might reference parameters during
the normal parse analysis phase
  * teach plancache.c about finding lock dependencies in these
expressions
That seems fairly reasonable.
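
For reference, the pattern that currently fails is roughly of this shape
(a hypothetical sketch, not Pavel's exact example; the FOR-over-rows loop
goes through a cursor portal, where the parser hooks have already been
dropped by copyParamList):

    CREATE OR REPLACE FUNCTION explain_demo() RETURNS void AS $$
    DECLARE
        lim int := 10;   -- plpgsql variable referenced inside EXPLAIN
        r record;
    BEGIN
        -- EXPLAIN's contained query is only analyzed at execution time,
        -- after the ParamListInfo has been frozen, so "lim" can no
        -- longer be resolved.
        FOR r IN EXPLAIN SELECT * FROM pg_class LIMIT lim LOOP
            RAISE NOTICE '%', r;
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;

Under proposal #2, the contained SELECT would be parse-analyzed together
with the ExplainStmt itself, so the variable reference would be captured
while the hooks are still live.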

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Testing with concurrent sessions

2010-01-14 Thread Markus Wanner

Hi,

Markus Wanner wrote:
Sorry, if that didn't get clear. I'm trying to put together something I 
can release real soon now (tm). I'll keep you informed.


Okay, here we go: dtester version 0.0.

This emerged out of Postgres-R, where I don't just need to test multiple 
client connections, but multiple postmasters interacting with each 
other. None the less, it may be suitable for other needs as well, 
especially testing with concurrent sessions.


I've decided to release this as a separate project named dtester, as 
proposed by Michael Tan (thanks for your inspiration).


It's certainly missing lots of things, mainly documentation. However, 
I've attached a patch which integrates nicely into the Postgres 
Makefiles, so you just need to say: make dcheck.


That very same patch includes a test case with three concurrent 
transactions with circular dependencies, where the current SERIALIZABLE 
isolation level fails to provide serializability.
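
For context, the class of failure meant here is a serialization anomaly
that snapshot-isolation-based SERIALIZABLE does not detect. The dtester
case uses three transactions with circular dependencies, but a minimal
two-transaction write-skew variant (with an illustrative schema) already
shows the idea:

    -- Setup: invariant is "at least one doctor stays on call".
    CREATE TABLE doctors (name text, on_call bool);
    INSERT INTO doctors VALUES ('alice', true), ('bob', true);

    -- Session 1:                        -- Session 2 (concurrently):
    BEGIN ISOLATION LEVEL SERIALIZABLE;  BEGIN ISOLATION LEVEL SERIALIZABLE;
    SELECT count(*) FROM doctors         SELECT count(*) FROM doctors
      WHERE on_call;  -- sees 2            WHERE on_call;  -- sees 2
    UPDATE doctors SET on_call = false   UPDATE doctors SET on_call = false
      WHERE name = 'alice';                WHERE name = 'bob';
    COMMIT;                              COMMIT;

Both transactions commit under snapshot isolation, leaving zero doctors
on call, an outcome no serial execution of the two could produce.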


Installing dtester itself is as simple as 'python setup.py install' in 
the extracted archive's directory.


Go try it, read the code and simply ask, if you get stuck. I'll try to 
come up with some more documentation and such...


Regards

Markus Wanner


dtester-0.0.tar.bz2
Description: application/bzip
#
# old_revision [651e6b451b14a926bbca5ac0b308bc5f979c8c8e]
#
# add_file src/test/regress/pg_dtester.py.in
#  content [edcf9857ee4d884b6120b72bf319774126241ee3]
# 
# patch GNUmakefile.in
#  from [da68d137cee6cec6458a1d9e877c5b623a11415a]
#to [b605a66cee8f77978da09c326cf9dbaeba464bd2]
# 
# patch src/test/regress/GNUmakefile
#  from [ed08fe1be025ede31fa2dba35f81f653809a2096]
#to [55ae74cf991ae07b0d5b082882ff5ba1f1ef4036]
#

--- src/test/regress/pg_dtester.py.in	edcf9857ee4d884b6120b72bf319774126241ee3
+++ src/test/regress/pg_dtester.py.in	edcf9857ee4d884b6120b72bf319774126241ee3
@@ -0,0 +1,762 @@
+#!/usr/bin/python
+
+#-
+#
+# dtester.py.in
+#
+#	 Sample test suite running two concurrent transactions, showing
+#off some capabilities of dtester.
+#
+# Copyright (c) 2006-2010, Markus Wanner
+#
+#-
+
+import re, os, sys, getopt
+from twisted.internet import defer, reactor
+
+from dtester.events import EventMatcher, EventSource, Event, \
+	ProcessOutputEvent, ProcessErrorEvent, ProcessEndedEvent
+from dtester.exceptions import TestAborted
+from dtester.test import TestSuite, BaseTest, SyncTest
+from dtester.reporter import StreamReporter
+from dtester.runner import Runner, Timeout
+
+# **  definition of tests and suites  ***
+
+class InstallationSuite(TestSuite):
+
+	setUpDescription = "creating temporary installation"
+	tearDownDescription = "removing temporary installation"
+
+	needs = (('shell', "IShell or something"),)
+
+	def setUp(self):
+		# inherit getConfig from the shell
+		setattr(self, 'getConfig', self.shell.getConfig)
+		setattr(self, 'runCommand', self.shell.runCommand)
+		setattr(self, 'recursive_remove', self.shell.recursive_remove)
+
+		# (re) create an installation directory
+		self.pg_inst_dir = self.shell.getConfig('inst_dir')
+		if os.path.exists(self.pg_inst_dir):
+			self.shell.recursive_remove(self.pg_inst_dir)
+		os.mkdir(self.pg_inst_dir)
+
+		# install into that directory
+		proc = self.shell.runCommand('make', 'make',
+			args=['make', '-C', self.shell.getConfig('top-builddir'),
+  'DESTDIR=%s' % self.pg_inst_dir, 'install',
+  'with_perl=no', 'with_python=no'],
+			lineBasedOutput=True)
+
+		d = self.waitFor(proc, EventMatcher(ProcessEndedEvent))
+		d.addCallback(self.makeTerminated)
+		proc.start()
+
+		# FIXME: how to properly handle these?
+		self.shell.addEnvPath(self.shell.getConfig('bindir'))
+		self.shell.addEnvLibraryPath(self.shell.getConfig('libdir'))
+		return d
+
+	def makeTerminated(self, event):
+		if event.exitCode != 0:
+			raise Exception("Initdb returned %d" % event.exitCode)
+		else:
+			return True
+
+	def tearDown(self):
+		# The installation procedure should be able to simply override any
+		# formerly installed files, so we save the time to clean up the
+		# installation directory.
+		return
+
+
+class InitdbSuite(TestSuite):
+
+	args = (('number', int), )
+	needs = (('shell', "IShell or something"),)
+
+	def setUpDescription(self):
+		return "initializing database system %d" % self.number
+
+	def tearDownDescription(self):
+		return "removing database system %d" % self.number
+
+	def getNumber(self):
+		return self.number
+
+	def getDir(self):
+		return self.dbdir
+
+	def setUp(self):
+		self.dbdir = "%s%d" % \
+			(self.shell.getConfig('pgdata_prefix'), self.number)
+		proc = self.shell.runCommand(
+'initdb-%d' % self.number,
+'initdb', args = [
+'initdb', '-D', self.dbdir,
+'-A', 'trust', '--noclean'],
+lineBasedOutput=True)
+
+		d = defer.Deferred()
+		

Re: [HACKERS] Streaming replication, retrying from archive

2010-01-14 Thread Robert Haas
On Thu, Jan 14, 2010 at 10:23 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
 I wasn't really asking if it's possible to fix, I meant "Let's think
 about *how* to fix that."

Well...  maybe if it doesn't require too MUCH thought.

I'm thinking that HS+SR are going to be a bit like the Windows port -
they're going to require a few releases before they really work as
well as we'd like them to.

...Robert

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] EXPLAIN, utility statement parameters, and recent plpgsql changes

2010-01-14 Thread Dimitri Fontaine
Tom Lane t...@sss.pgh.pa.us writes:

 Dimitri Fontaine dfonta...@hi-media.com writes:
 Tom Lane t...@sss.pgh.pa.us writes:
 This works well enough for regular DML statements, but it falls down for
 EXPLAIN which is a utility statement, because *parse analysis of utility
 statements doesn't do anything*.  EXPLAIN actually does the parse
 analysis of its contained statement at the beginning of execution.
 And that is too late, in the scenario Pavel exhibited.  Why is it too
 late?  Because SPI_cursor_open_internal() intentionally freezes the
 ParamListInfo struct after doing initial parsing: what it copies into
 the cursor portal is just a static list of data values without the
 parser hooks (see copyParamList).

 Would it make any sense for this function to get to call the hook in the
 case a utility statement is being processed?

 Well, the point of the hook is to change the results of parse
 transformation, so just calling it doesn't do much --- you have to apply
 the whole parse analysis process, *and keep the resulting tree*.

Could that be done in the function, in the phase you call doing initial
parsing?
-- 
dim

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Testing with concurrent sessions

2010-01-14 Thread Kevin Grittner
Markus Wanner mar...@bluegap.ch wrote:
 
 Okay, here we go: dtester version 0.0.
 
C'mon, you could have tried to inspire a *bit* more confidence by
calling it version 0.1 or something!  ;-)
 
 It's certainly missing lots of things, mainly documentation.
 However, I've attached a patch which integrates nicely into the
 Postgres Makefiles, so you just need to say: make dcheck.
 
That sounds very cool.
 
 That very same patch includes a test case with three concurrent 
 transactions with circular dependencies, where the current
 SERIALIZABLE isolation level fails to provide serializability.
 
Fantastic!  I'll expand that a bit
 
 Installing dtester itself is as simple as 'python setup.py
 install' in the extracted archive's directory.
 
 Go try it, read the code and simply ask, if you get stuck. I'll
 try to come up with some more documentation and such...
 
I'm reading through it all now.  Expect feedback soon!
 
And THANK YOU VERY MUCH!
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] EXPLAIN, utility statement parameters, and recent plpgsql changes

2010-01-14 Thread Tom Lane
Dimitri Fontaine dfonta...@hi-media.com writes:
 Tom Lane t...@sss.pgh.pa.us writes:
 Well, the point of the hook is to change the results of parse
 transformation, so just calling it doesn't do much --- you have to apply
 the whole parse analysis process, *and keep the resulting tree*.

 Could that be done in the function, in the phase you call doing initial
 parsing?

Within copyParamList?  Seems like rather an abuse of the design ... it's
just supposed to copy the ParamListInfo, not editorialize on the entire
content of the portal.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] dbt2-shell script problem

2010-01-14 Thread Jonas J
Thanks for your support,

The problem was that Ubuntu links /bin/sh to /bin/dash and not /bin/bash,
which makes some scripts crash. I did the correct linking and everything
works fine now.
Also, thanks to Mark Wong for his support.

Jonas,



2010/1/11 Robert Haas robertmh...@gmail.com

 On Mon, Jan 11, 2010 at 4:18 PM, Jonas J autoram...@gmail.com wrote:
  I get the dbt2 project from the git tree. And i'm trying to make it work
  with postgres. The problem is in one of the dbt2 shell scripts. Since I
 dont
  understand of Shell Script Programing, here is my problem: (I will post
 the
  script on the end)

 Well this isn't really a help list... especially not for
 shell-script programming... especially for shell-scripts that aren't
 even part of PostgreSQL.

 But having said that... I think you must be running a wonky version of sh.

 $ DBNAME=rhaas ./broken-script -d /tmp/fodao
 Loading customer table...
 COPY customer FROM '/tmp/fodao/customer.data' WITH NULL AS '';
 ERROR:  relation "customer" does not exist

 ...Robert

 --
 Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-hackers



Re: [HACKERS] Streaming replication, retrying from archive

2010-01-14 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 I'm thinking that HS+SR are going to be a bit like the Windows port -
 they're going to require a few releases before they really work as
 well as we'd like them to.

I've assumed that from the get-go ;-).  It's one of the reasons that
we ought to label this release 9.0 if those features get in.  Such a
number would help clue folks that there might be some less than
entirely stable things about it.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Clearing global statistics

2010-01-14 Thread Rafael Martinez
Greg Smith wrote:
 
 Before 8.3, we had the stats_reset_on_server_start parameter and
 the pg_postmaster_start_time() function. This was an easy way of
 resetting *all* statistics delivered by pg_stat_* and knowing when
 this was done. We were able to produce stats with information about
 sec/hours/days average values in an easy way.
 
 
 With this new feature I'm submitting, you can adjust your database 
 startup scripts to make this happen again.  Start the server, 
 immediately loop over every database and call pg_stat_reset on them
 all, and call pg_stat_reset_shared('bgwriter').  Now you've got
 completely cleared stats that are within a second or two of 
 pg_postmaster_start_time(), should be close enough to most purposes.
  Theoretically we could automate that better, but I've found it hard
 to justify working on given that it's not that difficult to handle
 outside of the database once the individual pieces are exposed.
 


Great, this is good enough and we get what we need. Thanks :-)

regards
-- 
 Rafael Martinez, r.m.guerr...@usit.uio.no
 Center for Information Technology Services
 University of Oslo, Norway

 PGP Public Key: http://folk.uio.no/rafael/

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] plpython3

2010-01-14 Thread Joshua D. Drake
On Thu, 2010-01-14 at 05:39 -0700, James William Pye wrote:

 
  Python code is easy to read though.  If you'd said "here's a great example 
  of how Function Modules are an improvement over what you can do with the 
  current pl/python", that would be infinitely more useful than the list of 
  language trivia related to them.  You should be aiming to put Peter on the 
  spot to respond to claims you make like "you can't do this easily with the 
  current implementation" after showing an elegant bit of code.
 
 Okay. So, some examples would help. The documentation is back up, so please 
 be sure to look at the numerous examples provided therein. In addition to 
 that, I'll try to get some contrasting examples posted as a follow-up to an 
 earlier message. "In plpython you do X whereas in plpython3 you do Y."
 

The documentation is very thorough, thank you. I am still a fan of
getting this reviewed for potential inclusion but I firmly agree with
what Greg has already said.

What I (as a non-hacker) would look for is:

(1) Generalized benchmarks between plpython(core) and plpython3u

I know a lot of these are subjective, but it is still good to see if
there are any curves or points that bring the performance of either to
light.

(2) Example of the traceback facility. I know it is silly, but I don't
have time to actually download head, apply the patch and test this. This
type of thing, showing debugging facilities within the function, would be
killer.

(3) A distinct real world comparison where the core plpython falls down
(if it does) against the plpython3u implementation

I can't speak to your code quality; that will have to come from someone
else.

Sincerely,

Joshua D. Drake

 
 Thanks.


-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564
Consulting, Training, Support, Custom Development, Engineering
Respect is earned, not gained through arbitrary and repetitive use of Mr. or 
Sir.




[HACKERS] Source code for pg_bulkload

2010-01-14 Thread Dann Corbit
How can I obtain the source code for pg_bulkload?

I am interested in writing an API version, so that I can embed this
loading facility into programs.

The page http://pgfoundry.org/projects/pgbulkload/ has links to the
binaries, but I want the source.




Re: [HACKERS] Clearing global statistics

2010-01-14 Thread Greg Smith

Itagaki Takahiro wrote:

To be honest, I have a plan to add performance statistics counters to
postgres. It is not bgwriter's counters, but cluster-level. I'd like
to use your infrastructure in my work, too :)
  


Attached patch provides just that. It still works basically the same as 
my earlier version, except you pass it a name of what you want to reset, 
and if you don't give it the only valid one right now ('bgwriter') it 
rejects it (for now):


gsmith=# select checkpoints_req,buffers_alloc from pg_stat_bgwriter;
 checkpoints_req | buffers_alloc
-----------------+---------------
               4 |           129
(1 row)

gsmith=# select pg_stat_reset_shared('bgwriter');
 pg_stat_reset_shared
----------------------
 
(1 row)

gsmith=# select checkpoints_req,buffers_alloc from pg_stat_bgwriter;
 checkpoints_req | buffers_alloc
-----------------+---------------
               0 |             7
(1 row)

gsmith=# select pg_stat_reset_shared('rubbish');
ERROR:  Unrecognized reset target

I turn the input text into an enum choice as part of composing the 
message to the stats collector. If you wanted to add some other shared 
cluster-wide reset capabilities into there you could re-use most of this 
infrastructure. Just add an extra enum value, map the text into that 
enum, and write the actual handler that does the reset work. Should be 
able to reuse the same new message type and external UI I implemented 
for this specific clearing feature.


I didn't see any interaction to be concerned about here with Magnus's 
suggestion he wanted to target stats reset on objects such as a single 
table at some point.


The main coding choice I wasn't really sure about is how I flag the 
error case when you pass in bad input. I do that validation and throw 
ERRCODE_SYNTAX_ERROR before composing the message to the stats 
collector. Didn't know if I should create a whole new error code just 
for this specific case or if reusing another error code was more 
appropriate. Also, I didn't actually have the collector process itself 
validate the data at all, it just quietly ignores bad messages on the 
presumption everything is already being checked during message creation. 
That seems consistent with the other code here--the other message 
handlers only seem to throw errors when something really terrible 
happens, not when they just don't find something useful to do.


--
Greg Smith   2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com

diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 1f70fd4..b630bc8 100644
*** a/doc/src/sgml/monitoring.sgml
--- b/doc/src/sgml/monitoring.sgml
*** postgres: replaceableuser/ replacea
*** 918,923 
--- 918,934 
 (requires superuser privileges)
   </entry>
  </row>
+ 
+  <row>
+   <entry><literal><function>pg_stat_reset_shared</function>()</literal></entry>
+   <entry><type>text</type></entry>
+   <entry>
+    Reset some of the shared statistics counters for the database cluster to
+    zero (requires superuser privileges).  Calling 
+    <literal>pg_stat_reset_shared('bgwriter')</> will zero all the values shown by
+    <structname>pg_stat_bgwriter</>.
+   </entry>
+  </row>
  </tbody>
 </tgroup>
</table>
diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c
index 4fd2abf..7335a50 100644
*** a/src/backend/postmaster/pgstat.c
--- b/src/backend/postmaster/pgstat.c
*** static void pgstat_recv_tabstat(PgStat_M
*** 270,275 
--- 270,276 
  static void pgstat_recv_tabpurge(PgStat_MsgTabpurge *msg, int len);
  static void pgstat_recv_dropdb(PgStat_MsgDropdb *msg, int len);
  static void pgstat_recv_resetcounter(PgStat_MsgResetcounter *msg, int len);
+ static void pgstat_recv_resetsharedcounter(PgStat_MsgResetsharedcounter *msg, int len);
  static void pgstat_recv_autovac(PgStat_MsgAutovacStart *msg, int len);
  static void pgstat_recv_vacuum(PgStat_MsgVacuum *msg, int len);
  static void pgstat_recv_analyze(PgStat_MsgAnalyze *msg, int len);
*** pgstat_reset_counters(void)
*** 1153,1158 
--- 1154,1190 
  	pgstat_send(&msg, sizeof(msg));
  }
  
+ /* --
+  * pgstat_reset_shared_counters() -
+  *
+  *	Tell the statistics collector to reset cluster-wide shared counters.
+  * --
+  */
+ void
+ pgstat_reset_shared_counters(const char *target)
+ {
+ 	PgStat_MsgResetsharedcounter msg;
+ 
+ 	if (pgStatSock < 0)
+ 		return;
+ 
+ 	if (!superuser())
+ 		ereport(ERROR,
+ (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+  errmsg("must be superuser to reset statistics counters")));
+ 
+ 	if (strcmp(target, "bgwriter") == 0)
+ 		msg.m_resettarget = RESET_BGWRITER;
+ 	else
+ 		{
+ 		ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+  errmsg("Unrecognized reset target")));
+ 		}
+ 
+ 	pgstat_setheader(&msg.m_hdr, PGSTAT_MTYPE_RESETSHAREDCOUNTER);
+ 	pgstat_send(&msg, sizeof(msg));
+ }
  
  /* --
   * pgstat_report_autovac() -
*** 

Re: [HACKERS] Source code for pg_bulkload

2010-01-14 Thread Alex Hunsaker
On Thu, Jan 14, 2010 at 14:13, Dann Corbit dcor...@connx.com wrote:
 How can I obtain the source code for pg_bulkload?

This is the wrong list, try
http://lists.pgfoundry.org/pipermail/pgbulkload-general/

That being said, it seems to be right in the file list:
http://pgfoundry.org/frs/?group_id=1000261
http://pgfoundry.org/frs/download.php/2282/pg_bulkload-2.4.0.tar.gz



[HACKERS] Add on_perl_init and proper destruction to plperl [PATCH]

2010-01-14 Thread Tim Bunce
This is the third of the patches to be split out from the former 'plperl
feature patch 1'.

Changes in this patch:

- Added plperl.on_perl_init GUC for DBA use (PGC_SIGHUP)
SPI functions are not available when the code is run.

- Added normal interpreter destruction behaviour
END blocks, if any, are run then objects are
destroyed, calling their DESTROY methods, if any.
SPI functions will die if called at this time.

Tim.
diff --git a/doc/src/sgml/plperl.sgml b/doc/src/sgml/plperl.sgml
index 6fee031..0054f5a 100644
*** a/doc/src/sgml/plperl.sgml
--- b/doc/src/sgml/plperl.sgml
*** CREATE TRIGGER test_valid_id_trig
*** 1030,1036 
  </para>
   </sect1>
  
!  <sect1 id="plperl-missing">
    <title>Limitations and Missing Features</title>
  
    <para>
--- 1030,1100 ----
  </para>
   </sect1>
  
!  <sect1 id="plperl-under-the-hood">
!   <title>PL/Perl Under the Hood</title>
! 
!  <sect2 id="plperl-config">
!   <title>Configuration</title>
! 
!   <para>
!   This section lists configuration parameters that affect <application>PL/Perl</>.
!   To set any of these parameters before <application>PL/Perl</> has been loaded,
!   it is necessary to have added <quote><literal>plperl</></> to the
!   <xref linkend="guc-custom-variable-classes"> list in
!   <filename>postgresql.conf</filename>.
!   </para>
! 
!   <variablelist>
! 
!  <varlistentry id="guc-plperl-on-perl-init" xreflabel="plperl.on_perl_init">
!   <term><varname>plperl.on_perl_init</varname> (<type>string</type>)</term>
!   <indexterm>
!    <primary><varname>plperl.on_perl_init</> configuration parameter</primary>
!   </indexterm>
!   <listitem>
!    <para>
!    Specifies perl code to be executed when a perl interpreter is first initialized.
!    The SPI functions are not available when this code is executed.
!    If the code fails with an error it will abort the initialization of the interpreter
!    and propagate out to the calling query, causing the current transaction
!    or subtransaction to be aborted.
!    </para>
!    <para>
! 	   The perl code is limited to a single string. Longer code can be placed
! 	   into a module and loaded by the <literal>on_perl_init</> string.
! 	   Examples:
! <programlisting>
! plperl.on_perl_init = '$ENV{NYTPROF}="start=no"; require Devel::NYTProf::PgPLPerl'
! plperl.on_perl_init = 'use lib "/my/app"; use MyApp::PgInit;'
! </programlisting>
!    </para>
!    <para>
!    Initialization will happen in the postmaster if the plperl library is included
!    in <literal>shared_preload_libraries</> (see <xref linkend="shared_preload_libraries">),
!    in which case extra consideration should be given to the risk of destabilizing the postmaster.
!    </para>
!    <para>
!    This parameter can only be set in the postgresql.conf file or on the server command line.
!    </para>
!   </listitem>
!  </varlistentry>
! 
!  <varlistentry id="guc-plperl-use-strict" xreflabel="plperl.use_strict">
!   <term><varname>plperl.use_strict</varname> (<type>boolean</type>)</term>
!   <indexterm>
!    <primary><varname>plperl.use_strict</> configuration parameter</primary>
!   </indexterm>
!   <listitem>
!    <para>
!    When set true subsequent compilations of PL/Perl functions have the <literal>strict</> pragma enabled.
!    This parameter does not affect functions already compiled in the current session.
!    </para>
!   </listitem>
!  </varlistentry>
! 
!   </variablelist>
! 
!  <sect2 id="plperl-missing">
    <title>Limitations and Missing Features</title>
  
    <para>
diff --git a/src/pl/plperl/plc_perlboot.pl b/src/pl/plperl/plc_perlboot.pl
index 769721d..5f6ae91 100644
*** a/src/pl/plperl/plc_perlboot.pl
--- b/src/pl/plperl/plc_perlboot.pl
***
*** 1,5 
  PostgreSQL::InServer::Util::bootstrap();
- PostgreSQL::InServer::SPI::bootstrap();
  
  use strict;
  use warnings;
--- 1,4 
diff --git a/src/pl/plperl/plperl.c b/src/pl/plperl/plperl.c
index 9277072..8315d5a 100644
*** a/src/pl/plperl/plperl.c
--- b/src/pl/plperl/plperl.c
*** static HTAB *plperl_proc_hash = NULL;
*** 138,143 
--- 138,145 
  static HTAB *plperl_query_hash = NULL;
  
  static bool plperl_use_strict = false;
+ static char *plperl_on_perl_init = NULL;
+ static bool plperl_ending = false;
  
  /* this is saved and restored by plperl_call_handler */
  static plperl_call_data *current_call_data = NULL;
*** Datum		plperl_validator(PG_FUNCTION_ARGS
*** 151,156 
--- 153,160 
  void		_PG_init(void);
  
  static PerlInterpreter *plperl_init_interp(void);
+ static void plperl_destroy_interp(PerlInterpreter **);
+ static void plperl_fini(void);
  
  static Datum plperl_func_handler(PG_FUNCTION_ARGS);
  static Datum plperl_trigger_handler(PG_FUNCTION_ARGS);
*** _PG_init(void)
*** 237,242 
--- 241,254 
  			 PGC_USERSET, 0,
  			 NULL, NULL);
  
+ 	DefineCustomStringVariable("plperl.on_perl_init",
+ 			gettext_noop("Perl code to execute when the perl interpreter is initialized."),
+ 			NULL,
+ 			

Re: [HACKERS] Streaming replication, retrying from archive

2010-01-14 Thread Heikki Linnakangas
Fujii Masao wrote:
 On Fri, Jan 15, 2010 at 12:23 AM, Heikki Linnakangas
 heikki.linnakan...@enterprisedb.com wrote:
 If we don't fix that within the server, we will need to document that
 caveat and every installation will need to work around that one way or
 another. Maybe with some monitoring software and an automatic restart. Ugh.

 I wasn't really asking if it's possible to fix, I meant Let's think
 about *how* to fix that.
 
 OK. How about the following (though it's a rough design)?
 
 (1) If walsender cannot read the WAL file because of ENOENT, it sends the
     special message indicating that error to walreceiver. This message is
     shipped on the COPY protocol.
 
 (2-a) If the message arrives, walreceiver exits by using proc_exit().
 (3-a) If the startup process detects the exit of walreceiver in
       WaitNextXLogAvailable(), it switches back to a normal archive
       recovery mode, closes the currently opened WAL file, resets some
       variables (readId, readSeg, etc), and calls FetchRecord() again.
       Then it tries to restore the WAL file from the archive if the
       restore_command is supplied, and switches to a streaming recovery
       mode again if invalid WAL is found.
 
 Or
 
 (2-b) If the message arrives, walreceiver executes restore_command, and
       then sets the receivedUpto to the end location of the restored WAL
       file. The restored file is expected to be filled because it doesn't
       exist in the primary's pg_xlog. So that update of the receivedUpto
       is OK.
 (3-b) After one WAL file is restored, walreceiver tries to connect to
       the primary, and starts replication again. If the ENOENT error
       occurs again, we go back to (1).
 
 I like the latter approach since it's simpler. Thoughts?

Hmm. Executing restore_command in walreceiver process doesn't feel right
somehow. I'm thinking of:

Let's introduce a new boolean variable in shared memory that the
walreceiver can set to tell startup process if it's connected or
streaming, or disconnected. When startup process sees that walreceiver
is connected, it waits for receivedUpto to advance. Otherwise, it polls
the archive using restore_command.

To actually implement that requires some refactoring of the
ReadRecord/FetchRecord logic in xlog.c. However, it always felt a bit
hacky to me anyway, so that's not necessarily a bad thing.

Now, one problem with this is that under the right conditions,
walreceiver might just succeed to reconnect, while the startup process
starts to restore the file from archive. That's OK, the streamed file
will be simply ignored, and the file restored from archive uses a
temporary filename that won't clash with the streamed file, but it feels
a bit strange to have the same file copied to the server via both
mechanisms.

See the replication-xlogrefactor branch in my git repository for a
prototype of that. We could also combine that with your 1st design, and
add the special message to indicate "WAL already deleted", and change
the walreceiver restart logic as you suggested. Some restructuring of
Read/FetchRecord is probably required for that anyway.

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] last CommitFest coming up in just under 24 hours

2010-01-14 Thread Andrew Dunstan



Robert Haas wrote:

Patch authors, please make sure your patches are listed on
commitfest.postgresql.org.

https://commitfest.postgresql.org/action/commitfest_view/open

All, we still need reviewers for the following patches.

New XLOG record indicating WAL-skipping
Fix large object support in pg_dump
knngist (WIP)
plpython3
Add utility functions to plperl

At least two people have expressed an interest in the perl patch, so
I am pretty sure it will get covered, if not in the first round then
later on, but feel free to sign up now anyway if you're interested.


  


I'm very glad people are taking an interest in the perl patches. I'd 
like to get the utility functions patch committed by the end of next 
week, so if you could encourage those reviewers (as yet unlisted on the 
Commitfest app) to act early that would be good.


cheers

andrew



Re: [HACKERS] Hot Standby and query cancel

2010-01-14 Thread Andres Freund
On Thursday 14 January 2010 13:21:07 Simon Riggs wrote:
 On Wed, 2010-01-13 at 19:23 +, Simon Riggs wrote:
  On Wed, 2010-01-13 at 19:58 +0100, Andres Freund wrote:
    I am still testing the patch, so should be confident to commit tomorrow
barring issues.
   
   I have only looked at it briefly because right now I don't have the time
   (going to eat at a friend's place...) but I think I spotted an issue:
   The IsAbortedTransactionBlockState() check in RecoveryConflictInterrupt
   is not correct right now because that returns true for TBLOCK_SUBABORT
   as well. Wouldn't that mess with the case where we're in a failed
   subxact and then rollback only that subxact?
  
  Well spotted, yes.
 
 Latest version of same patch, but uses conflict reasons passed-thru
 directly from recovery to backend.
 
 Please review, no commit before tomorrow.
I only noted a tiny thing (which was present earlier on):

snprintf(waitactivitymsg, sizeof(waitactivitymsg),
 "waiting for max_standby_delay (%u ms)",
 MaxStandbyDelay);
in ResolveRecoveryConflictWithVirtualXIDs.

Shouldn't that be seconds? Otherwise the check in WaitExceedsMaxStandbyDelay is 
wrong...


Andres



Re: [HACKERS] Hot Standy introduced problem with query cancel behavior

2010-01-14 Thread Andres Freund
On Wednesday 13 January 2010 00:07:53 Simon Riggs wrote:
 On Tue, 2010-01-12 at 19:43 +0100, Andres Freund wrote:
  On Tuesday 12 January 2010 09:40:03 Simon Riggs wrote:
   On Tue, 2010-01-12 at 06:30 +0100, Andres Freund wrote:
Currently the patch does not yet do anything to avoid letting the
protocol out of sync. What do you think about adding a flag for error
codes not to communicate with the client (Similarly to COMERROR)?

So that one could do an elog(ERROR | ERROR_NO_SEND_CLIENT, .. or
such?
   
   Seems fairly important piece.
  
  Do you agree on the approach then? Do you want to do it?
 
 If you would like to prototype something on this issue it would be
 gratefully received. I will review when submitted, though I may need
 other review also.
Will do - likely not before Saturday though.

 I'm still reworking other code, so things might change under you, though
 not deliberately so. I will post as soon as I can, which isn't yet.
No problem. Readapting a relatively minor amount of code isn't that hard.

Andres



[HACKERS] Add on_trusted_init and on_untrusted_init to plperl [PATCH]

2010-01-14 Thread Tim Bunce
This is the fourth of the patches to be split out from the former
'plperl feature patch 1'.

Changes in this patch:

- Adds plperl.on_trusted_init and plperl.on_untrusted_init GUCs
Both are PGC_USERSET.
SPI functions are not available when the code is run.
Errors are detected and reported as ereport(ERROR, ...)

- select_perl_context() state management improved
An error during interpreter initialization will leave
the state (interp_state etc) unchanged.

- The utf8fix code has been greatly simplified.

Tim.
diff --git a/doc/src/sgml/plperl.sgml b/doc/src/sgml/plperl.sgml
index 0054f5a..f2c91a9 100644
*** a/doc/src/sgml/plperl.sgml
--- b/doc/src/sgml/plperl.sgml
*** plplerl.on_perl_init = 'use lib /my/app
*** 1079,1084 
--- 1079,1120 
/listitem
   /varlistentry
  
+  <varlistentry id="guc-plperl-on-trusted-init" xreflabel="plperl.on_trusted_init">
+   <term><varname>plperl.on_trusted_init</varname> (<type>string</type>)</term>
+   <indexterm>
+    <primary><varname>plperl.on_trusted_init</> configuration parameter</primary>
+   </indexterm>
+   <listitem>
+    <para>
+    Specifies perl code to be executed when the <literal>plperl</> perl interpreter
+    is first initialized in a session. The perl code can only perform trusted operations.
+    The SPI functions are not available when this code is executed.
+    Changes made after a <literal>plperl</> perl interpreter has been initialized will have no effect.
+    If the code fails with an error it will abort the initialization of the interpreter
+    and propagate out to the calling query, causing the current transaction
+    or subtransaction to be aborted.
+    </para>
+   </listitem>
+  </varlistentry>
+ 
+  <varlistentry id="guc-plperl-on-untrusted-init" xreflabel="plperl.on_untrusted_init">
+   <term><varname>plperl.on_untrusted_init</varname> (<type>string</type>)</term>
+   <indexterm>
+    <primary><varname>plperl.on_untrusted_init</> configuration parameter</primary>
+   </indexterm>
+   <listitem>
+    <para>
+    Specifies perl code to be executed when the <literal>plperlu</> perl interpreter
+    is first initialized in a session.
+    The SPI functions are not available when this code is executed.
+    Changes made after a <literal>plperlu</> perl interpreter has been initialized will have no effect.
+    If the code fails with an error it will abort the initialization of the interpreter
+    and propagate out to the calling query, causing the current transaction
+    or subtransaction to be aborted.
+    </para>
+   </listitem>
+  </varlistentry>
+ 
   <varlistentry id="guc-plperl-use-strict" xreflabel="plperl.use_strict">
    <term><varname>plperl.use_strict</varname> (<type>boolean</type>)</term>
    <indexterm>
diff --git a/src/pl/plperl/GNUmakefile b/src/pl/plperl/GNUmakefile
index 7cd5721..f3cabad 100644
*** a/src/pl/plperl/GNUmakefile
--- b/src/pl/plperl/GNUmakefile
*** PERLCHUNKS = plc_perlboot.pl plc_safe_ba
*** 41,47 
  SHLIB_LINK = $(perl_embed_ldflags)
  
  REGRESS_OPTS = --dbname=$(PL_TESTDB) --load-language=plperl  --load-language=plperlu
! REGRESS = plperl plperl_trigger plperl_shared plperl_elog plperl_util plperlu
  # if Perl can support two interpreters in one backend, 
  # test plperl-and-plperlu cases
  ifneq ($(PERL),)
--- 41,47 
  SHLIB_LINK = $(perl_embed_ldflags)
  
  REGRESS_OPTS = --dbname=$(PL_TESTDB) --load-language=plperl  --load-language=plperlu
! REGRESS = plperl plperl_trigger plperl_shared plperl_elog plperl_util plperl_init plperlu
  # if Perl can support two interpreters in one backend, 
  # test plperl-and-plperlu cases
  ifneq ($(PERL),)
diff --git a/src/pl/plperl/expected/plperl_init.out b/src/pl/plperl/expected/plperl_init.out
index ...e69de29 .
diff --git a/src/pl/plperl/expected/plperl_shared.out b/src/pl/plperl/expected/plperl_shared.out
index 72ae1ba..c1c12c1 100644
*** a/src/pl/plperl/expected/plperl_shared.out
--- b/src/pl/plperl/expected/plperl_shared.out
***
*** 1,3 
--- 1,7 
+ -- test plperl.on_plperl_init via the shared hash
+ -- (must be done before plperl is initialized)
+ -- testing on_trusted_init gets run, and that it can alter %_SHARED
+ SET plperl.on_trusted_init = '$_SHARED{on_init} = 42';
  -- test the shared hash
  create function setme(key text, val text) returns void language plperl as $$
  
*** select getme('ourkey');
*** 24,26 
--- 28,36 
   ourval
  (1 row)
  
+ select getme('on_init');
+  getme 
+ -------
+  42
+ (1 row)
+ 
diff --git a/src/pl/plperl/plc_safe_ok.pl b/src/pl/plperl/plc_safe_ok.pl
index dc33dd6..7b36e33 100644
*** a/src/pl/plperl/plc_safe_ok.pl
--- b/src/pl/plperl/plc_safe_ok.pl
*** $PLContainer->permit(qw[caller]);
*** 27,32 
--- 27,33 
  }) or die $@;
  $PLContainer->deny(qw[caller]);
  
+ # called directly for plperl.on_trusted_init
  sub ::safe_eval {
  	my $ret = $PLContainer->reval(shift);
  	$@ =~ 

[HACKERS] GUC failure on exception

2010-01-14 Thread Andrew Dunstan


Tim Bunce just showed me the following oddity:

   andrew=# SET SESSION plperl.use_strict = on;
   SET
   andrew=# SHOW plperl.use_strict;
    plperl.use_strict
   -------------------
    on
   (1 row)

   andrew=# DO $$ elog(ERROR,"error") $$ language plperl;
   ERROR:  error at line 1.
   CONTEXT:  PL/Perl anonymous code block
   andrew=# SHOW plperl.use_strict;
    plperl.use_strict
   -------------------
    off
   (1 row)


Somehow we have lost the setting, because the first use of plperl, which 
called the plperl init code, failed.


It appears that whatever rolls it back forgets to put the GUC setting 
back as it was, and now it's lost, which is pretty darn ugly. And you 
can now run code which fails the 'strict' tests.


If anyone has a quick idea about how to fix it, that would be nice. 
Otherwise I'll try to delve into it as time permits.


cheers

andrew



Re: [HACKERS] [PATCH] remove redundant ownership checks

2010-01-14 Thread KaiGai Kohei
(2010/01/14 23:29), Stephen Frost wrote:
 * KaiGai Kohei (kai...@ak.jp.nec.com) wrote:
 Some of ALTER TABLE operations take multiple permission checks, not only
 ownership of the relation to be altered.
 
 Yes, exactly my point.  Those places typically just presume that the
 owner check has already been done.
 
 For example, ALTER TABLE with SET TABLESPACE option also need ACL_CREATE
 permission on the new tablespace, not only ownership of the relation.
 It means we need to gather two information before whole of the permission
 checks. (1) OID of the relation to be altered. (2) OID of the tablespace
 to be set.
 
 Right.  I would say that we should wait until we have all the necessary
 information to do the permissions checking, and then do it all at that
 point, similar to how we handle DML today.
 
 In my understanding, Stephen suggests that we should, ideally, rip out
 permission logic from the code closely-combined with the steps of
 implementation.
 
 This might be a confusion due to language, but I think of the
 implementation as being the work.  The check on permissions on the
 tablespace that you're describing above is done right before the work.
 For this case, specifically, ATPrepSetTableSpace() takes the action on
 line 6754: tab->newTableSpace = tablespaceId;
 Prior to that, it checks the tablespace permissions (but not the table
 permissions, since they've been checked already).  I would suggest we
 add a call to ATSimplePermissions() in ATPrepSetTableSpace at line 6745-
 right after the /* Check its permissions */ comment (which would be
 changed to check permissions for ALTER TABLE x SET TABLESPACE y).
 
 Of course, it does not mean all the checks should be
 moved just before simple_heap_update().
 
 No, I would have it earlier than simple_heap_update(), we don't need to
 go building the structures and whatnot needed to call
 simple_heap_update().

Sorry for this confusion. I used "just before simple_heap_update()"
as a metaphor to mean a much deeper stage of the execution phase.
It does not mean we should put security checks after the catalog updates.

 For this specific case though, I'm a bit torn by
 the fact that the work associated with changing the tablespace can
 actually happen in two distinct places- either through
 ATExecSetTableSpace, or in ATRewriteTables directly.
 ATExecSetTableSpace would actually be a good candidate rather than in
 the 'prep' stage, if all tablespace changes were done there.  The 'prep'
 stage worries me a bit since I'm not sure if all permissions checking
 is currently, or could be, done at that point, and I'd prefer that we
 use the same approach for permissions checking throughout the code- for
 example, it's either done in 'phase 3' (where we're going through the
 subcommands) or all done in 'phase 1/2', where we're setting things up.

It seems to me that this is a highly suggestive idea, and we should not ignore it.

Currently, ATPrepCmd() applies permission checks and set up recursion
for inherited tables, if necessary.

The following commands have its own variations:

* AT_AddColumn, AT_AddColumnToView, AT_AddOids
  It eventually calls ATPrepAddColumn(), because ColumnDef->inhcount of
  the child relation should be 1, not 0.

* AT_SetStatistics
  ATPrepSetStatistics() does the same job as ATSimplePermissionsRelationOrIndex(),
  except that it allows altering system relations.

* AT_AddIndex
  It checks the table's permissions here, then eventually checks permission
  to create a new index later.
  The point is whether the index is an individual object class, or
  a property of the table. In fact, it has its own ownership, but it is
  a copy from the relation to be indexed. Its namespace is also a copy.
  In other words, it is equivalent to checking properties of the relation.
  IMO, we should move all the permission checks in DefineIndex() to the
  caller side. In the ALTER TABLE case, ATExecAddIndex() is a candidate.
  (It is also the reason why DefineIndex() takes a 'check_rights' argument.)

* AT_AlterColumnType
  It calls ATPrepAlterColumnType() to check it is available, or not.

* AT_ChangeOwner
  It does all the work in ATExecChangeOwner(), and it checks permissions
  only when the ownership is actually changed.

* AT_DropOids
  It recursively calls ATPrepCmd() with pseudo AT_DropColumn with oid.

* AT_SetTableSpace
  ATPrepSetTableSpace() checks permission on tablespace, in addition to
  the ownership of the relation checked in ATSimplePermissionsRelationOrIndex().

And, note that some of the AT_* commands already check the table's permissions due to
the recursion of the inheritance tree.

Example)
  static void
  ATExecDropColumn(List **wqueue, Relation rel, const char *colName, ...)
  {
:
/* At top level, permission check was done in ATPrepCmd, else do it */
if (recursing)
ATSimplePermissions(rel, false);
:

In addition, we need to note that ATSimplePermissions() is a multi-functional
function:
 (1) Ensure that it is a relation (or possibly a view)
 (2) Ensure 

Re: [HACKERS] quoting psql varible as identifier

2010-01-14 Thread Robert Haas
On Mon, Jan 11, 2010 at 6:54 AM, Pavel Stehule pavel.steh...@gmail.com wrote:
 No longer applies, please rebase.

 fixed, sorry

Hmm.  I think that pqEscapeIdentConn should be in a separate section
of the documentation, entitled Escaping Identifiers for Inclusion in
SQL Commands.  Or else we should merge the existing sections
Escaping Strings for Inclusion in SQL Commands and Escaping Binary
Strings for Inclusion in SQL Commands and then put this in there too.

On a perhaps-related note, does anyone understand why Escaping
Strings for Inclusion in SQL Commands is formatted in a way that is
needlessly inconsistent with the preceding and following sections?  I
was surprised by the magnitude of the doc diff hunk in this patch, but
when I looked at it it seems to read more clearly with these changes.

I have yet to fully review the code but on a quick glance it looks reasonable.

...Robert



Re: [HACKERS] KNNGiST for knn-search (WIP)

2010-01-14 Thread Robert Haas
2010/1/12 Teodor Sigaev teo...@sigaev.ru:
 Changes:

 - split patch to several ones
 - sync with current CVS

 Patch set is based on 0.5.1 version, difference between 0.5 and 0.6 should
 be only in planner patch.

 builtin_knngist_itself-0.6.gz  - patch to the gist itself
 builtin_knngist_proc-0.6.gz - patch for support knnsearch in point_ops
 builtin_knngist_planner-0.6.gz - planner patch to support knnearch
 builtin_knngist_contrib_btree_gist-0.6.gz - patch for contrib/btree_gist
 module
               patch provides <-> operation for various scalar types, which is
               exactly the abs(a - b) function
 builtin_knngist_contrib_pg_trgm-0.6.gz - contrib/pg_trgm, like above, patch
               provides <-> distance between strings


 Patch set still requires rbtree patch and point_ops patch (with Robert's
 changes)

Please update commitfest.postgresql.org.

...Robert



Re: [HACKERS] plpython3

2010-01-14 Thread Greg Smith

James William Pye wrote:

The documentation is back up, so please be sure to look at the numerous examples provided 
therein. In addition to that, I'll try to get some contrasting examples posted as a 
follow-up to an earlier message: "In plpython you do X, whereas in plpython3 you do Y."

I had read the documentation before, so it wasn't a big problem that I 
couldn't see it. Basically, the issue I had is that it's not really 
clear which features unique to this implementation make it 
compelling. So more targeted examples like you're considering now would 
help.


--
Greg Smith    2ndQuadrant    Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com




[HACKERS] PG_MODULE_MAGIC checks and pg_migrator

2010-01-14 Thread Bruce Momjian
Technically pg_migrator does not need the PG_MODULE_MAGIC version checks
because it doesn't do any data manipulation and it already checks the
new server version to determine if the function calls will resolve. 
However, we currently require PG_MODULE_MAGIC for all loaded libraries,
and this adds additional compile requirements on pg_migrator.

For example, right now pg_migrator can migrate to 8.4 and 8.5, but there
is no way to distribute a binary that will migrate to both because you
need different shared libraries with different PG_MODULE_MAGIC values.

One possible solution would be to distribute an 8.4-compiled
pg_migrator, and an 8.5-compiled pg_migrator.  That would work, except
once pg_migrator reaches an 8.5 version, things become very confusing
because you will potentially have pg_migrator 8.5 compiled for 8.4 and
8.5, and pg_migrator 8.4 compiled for 8.4 and 8.5.

One solution would be to drop migration support to 8.4 in pg_migrator
8.5, but that still leaves us with problems when we want to distribute a
pg_migrator 8.4 that can migrate to 8.5 alpha, e.g. on Windows.

Another option would be to distribute both 8.4 and 8.5 shared objects,
but that would require access to two source trees to perform the
compile, which seems very error-prone.

I am not sure what to suggest except perhaps that there be some way to
link in a shared object without the magic block checks.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] Streaming replication, retrying from archive

2010-01-14 Thread Fujii Masao
On Fri, Jan 15, 2010 at 7:19 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
 Let's introduce a new boolean variable in shared memory that the
 walreceiver can set to tell startup process if it's connected or
 streaming, or disconnected. When startup process sees that walreceiver
 is connected, it waits for receivedUpto to advance. Otherwise, it polls
 the archive using restore_command.

Seems OK.

 See the replication-xlogrefactor branch in my git repository for a
 prototype of that. We could also combine that with your 1st design, and
 add the special message to indicate WAL already deleted, and change
 the walreceiver restart logic as you suggested. Some restructuring of
 Read/FetchRecord is probably required for that anyway.

Though I haven't read your branch much yet, there seems to be a corner
case in which a partially-filled WAL file might be restored wrongly, which
would cause a PANIC error. So the primary should tell the standby the last
WAL file that has been completely filled. And when that file has been
restored in the standby, the startup process should stop restoring any
further files and try to wait for streaming again.

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



[HACKERS] Package namespace and Safe init cleanup for plperl [PATCH]

2010-01-14 Thread Tim Bunce
This is the final plperl patch in the series from me.

Changes in this patch:

- Moved internal functions out of main:: namespace
into PostgreSQL::InServer and PostgreSQL::InServer::safe

- Restructured Safe compartment setup code
to generalize and separate the data from the logic.

Neither change has any user visible effects.

This patch will apply cleanly over the 'Add on_trusted_init and
on_untrusted_init to plperl' patch:
https://commitfest.postgresql.org/action/patch_view?id=271

Tim.
diff --git a/src/pl/plperl/plc_perlboot.pl b/src/pl/plperl/plc_perlboot.pl
index 5f6ae91..05ed049 100644
*** a/src/pl/plperl/plc_perlboot.pl
--- b/src/pl/plperl/plc_perlboot.pl
***
*** 1,23 
  PostgreSQL::InServer::Util::bootstrap();
  
  use strict;
  use warnings;
  use vars qw(%_SHARED);
  
! sub ::plperl_warn {
  	(my $msg = shift) =~ s/\(eval \d+\) //g;
  	chomp $msg;
! 	elog(NOTICE, $msg);
  }
! $SIG{__WARN__} = \&::plperl_warn;
  
! sub ::plperl_die {
  	(my $msg = shift) =~ s/\(eval \d+\) //g;
  	die $msg;
  }
! $SIG{__DIE__} = \&::plperl_die;
  
! sub ::mkfuncsrc {
  	my ($name, $imports, $prolog, $src) = @_;
  
  	my $BEGIN = join "\n", map {
--- 1,25 
  PostgreSQL::InServer::Util::bootstrap();
  
+ package PostgreSQL::InServer;
+ 
  use strict;
  use warnings;
  use vars qw(%_SHARED);
  
! sub plperl_warn {
  	(my $msg = shift) =~ s/\(eval \d+\) //g;
  	chomp $msg;
! 	::elog(::NOTICE, $msg);
  }
! $SIG{__WARN__} = \&plperl_warn;
  
! sub plperl_die {
  	(my $msg = shift) =~ s/\(eval \d+\) //g;
  	die $msg;
  }
! $SIG{__DIE__} = \&plperl_die;
  
! sub mkfuncsrc {
  	my ($name, $imports, $prolog, $src) = @_;
  
  	my $BEGIN = join "\n", map {
*** sub ::mkfuncsrc {
*** 30,44 
  	$name =~ s/::|'/_/g; # avoid package delimiters
  
  	my $funcsrc;
! 	$funcsrc .= qq[ undef *{'$name'}; *{'$name'} = sub { $BEGIN $prolog $src } ];
  	#warn "plperl mkfuncsrc: $funcsrc\n";
  	return $funcsrc;
  }
  
  # see also mksafefunc() in plc_safe_ok.pl
! sub ::mkunsafefunc {
  	no strict; # default to no strict for the eval
! 	my $ret = eval(::mkfuncsrc(@_));
  	$@ =~ s/\(eval \d+\) //g if $@;
  	return $ret;
  }
--- 32,46 
  	$name =~ s/::|'/_/g; # avoid package delimiters
  
  	my $funcsrc;
! 	$funcsrc .= qq[ package main; undef *{'$name'}; *{'$name'} = sub { $BEGIN $prolog $src } ];
  	#warn "plperl mkfuncsrc: $funcsrc\n";
  	return $funcsrc;
  }
  
  # see also mksafefunc() in plc_safe_ok.pl
! sub mkunsafefunc {
  	no strict; # default to no strict for the eval
! 	my $ret = eval(mkfuncsrc(@_));
  	$@ =~ s/\(eval \d+\) //g if $@;
  	return $ret;
  }
*** sub ::encode_array_literal {
*** 67,73 
  
  sub ::encode_array_constructor {
  	my $arg = shift;
! 	return quote_nullable($arg)
  		if ref $arg ne 'ARRAY';
  	my $res = join ", ", map {
  		(ref $_) ? ::encode_array_constructor($_)
--- 69,75 
  
  sub ::encode_array_constructor {
  	my $arg = shift;
! 	return ::quote_nullable($arg)
  		if ref $arg ne 'ARRAY';
  	my $res = join ", ", map {
  		(ref $_) ? ::encode_array_constructor($_)
diff --git a/src/pl/plperl/plc_safe_ok.pl b/src/pl/plperl/plc_safe_ok.pl
index 7b36e33..fcf5d54 100644
*** a/src/pl/plperl/plc_safe_ok.pl
--- b/src/pl/plperl/plc_safe_ok.pl
***
*** 1,39 
  use strict;
! use vars qw($PLContainer);
  
- $PLContainer = new Safe('PLPerl');
  $PLContainer->permit_only(':default');
  $PLContainer->permit(qw[:base_math !:base_io sort time require]);
  
- $PLContainer->share(qw[elog return_next
- 	spi_query spi_fetchrow spi_cursor_close spi_exec_query
- 	spi_prepare spi_exec_prepared spi_query_prepared spi_freeplan
- 	DEBUG LOG INFO NOTICE WARNING ERROR %_SHARED
- 	quote_literal quote_nullable quote_ident
- 	encode_bytea decode_bytea
- 	encode_array_literal encode_array_constructor
- 	looks_like_number
- ]);
- 
- # Load widely useful pragmas into the container to make them available.
  # (Temporarily enable caller here as work around for bug in perl 5.10,
  # which changed the way its Safe.pm works. It is quite safe, as caller is
  # informational only.)
  $PLContainer->permit(qw[caller]);
! ::safe_eval(q{
! 	require strict;
! 	require feature if $] >= 5.01;
! 	1;
! }) or die $@;
  $PLContainer->deny(qw[caller]);
  
  # called directly for plperl.on_trusted_init
! sub ::safe_eval {
  	my $ret = $PLContainer->reval(shift);
  	$@ =~ s/\(eval \d+\) //g if $@;
  	return $ret;
  }
  
! sub ::mksafefunc {
! 	return ::safe_eval(::mkfuncsrc(@_));
  }
--- 1,56 
+ package PostgreSQL::InServer::safe;
+ 
  use strict;
! use warnings;
! 
! use vars qw($PLContainer $SafeClass @EvalInSafe @ShareIntoSafe);
! 
! # Load widely useful pragmas into the container to make them available.
! # These must be trusted to not expose a way to execute a string eval
! # or any kind of unsafe action that the untrusted code could exploit.
! # If in any doubt about a module then do not add it to this list.
! push @EvalInSafe, 'require feature' if $] >= 5.01;
! push @EvalInSafe, 

Re: [HACKERS] Streaming replication status

2010-01-14 Thread Greg Smith

Fujii Masao wrote:

I'm thinking something like pg_standbys_xlog_location() [on the primary] which 
returns
one row per standby servers, showing pid of walsender, host name/
port number/user OID of the standby, the location where the standby
has written/flushed WAL. DBA can measure the gap from the
combination of pg_current_xlog_location() and pg_standbys_xlog_location()
via one query on the primary.



This function is useful but not essential for troubleshooting, I think.
So I'd like to postpone it.
  


Sure; in a functional system where primary and secondary are both up, 
you can assemble the info using the new functions you just added, so 
this other one is certainly optional.  I just took a brief look at the 
code of the features you added, and it looks like it exposes the minimum 
necessary to make this whole thing possible to manage.  I think it's OK 
if you postpone this other bit, more important stuff for you to work on.
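As a side note for the monitoring-script writers mentioned above: measuring the gap between two xlog locations reported by these functions comes down to interpreting the "logid/offset" hex format. A minimal illustrative sketch, plain Python and not part of any patch (the function names here are made up):

```python
def lsn_to_bytes(lsn):
    """Convert an xlog location string like '0/3000148' (two hex
    halves, logid/offset) into an absolute byte position."""
    hi, lo = lsn.split('/')
    return (int(hi, 16) << 32) | int(lo, 16)

def replication_gap_bytes(primary_lsn, standby_lsn):
    """Bytes of WAL the standby still has to receive/replay, e.g.
    comparing pg_current_xlog_location() on the primary against the
    standby's last received location."""
    return lsn_to_bytes(primary_lsn) - lsn_to_bytes(standby_lsn)
```

For example, a primary at 0/3000148 and a standby at 0/3000100 are 72 bytes apart.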


So:  the one piece of information I thought was most important to expose 
here at an absolute minimum is there now.  Good progress.  The other 
popular request that keeps popping up here is providing an easy way to 
see how backlogged the archive_command is, to make it easier to monitor 
for out-of-disk errors that might prove catastrophic to replication.


I just spent some time looking through the WAL/archiving code in that 
context.  It looks to me that that this information isn't really stored 
anywhere right now.  The only thing that knows what segment is currently 
queued up to copy over is pgarch_ArchiverCopyLoop via its call to 
pgarch_readyXlog.  Now, this is a pretty brute-force piece of code:  it 
doesn't remember its previous work at all, it literally walks the 
archive_status directory looking for *.ready files that have names that 
look like xlog files, then returns the earliest.  That unfortunately 
means that it's not even thinking in the same terms as all these other 
functions, which are driven by the xlog_location advancing, and then the 
filename is computed from that.  All you've got is the filename at this 
point, and it's not even guaranteed to be real--you could easily fool 
this code if you dropped an inappropriately named file into that directory.
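The directory walk just described can be sketched as follows (an illustrative Python rendering of the logic, assuming only plain segment names matter; the real pgarch_readyXlog is C and also accepts backup-history files):

```python
import os
import re

# 24 upper-case hex digits: timeline + logid + segment number,
# e.g. 000000010000000000000003
XLOG_NAME_RE = re.compile(r'^[0-9A-F]{24}$')

def earliest_ready_xlog(archive_status_dir):
    """Return the earliest xlog segment with a .ready marker, or None.

    Mirrors the brute-force scan: no state is kept between calls, and
    any well-formed name is believed, so a manually dropped file would
    fool this sketch just like the real archiver.
    """
    suffix = '.ready'
    ready = sorted(
        name[:-len(suffix)]
        for name in os.listdir(archive_status_dir)
        if name.endswith(suffix) and XLOG_NAME_RE.match(name[:-len(suffix)])
    )
    return ready[0] if ready else None
```

The count of pending .ready entries found by such a scan is also one natural "archiver backlog" metric.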


I could easily update this code path to save the name of the last 
archived file in memory while all this directory scanning is going on 
anyway, and then provide a UDF to expose that bit of information.  The 
result would need to have documentation that disclaims it like this:


pg_last_archived_xlogfile() text:  Get the name of the last file the 
archive_command [tried to|successfully] archived since the server was 
started.  If archiving is disabled or no xlog files have become ready to 
archive since startup, a blank line will be returned.  It is possible 
for this function to return a result that does not reflect an actual 
xlogfile if files are manually added to the server's archive_status 
directory.


I'd find this extremely handy as a hook for monitoring scripts that want 
to watch the server but don't have access to the filesystem directly, 
even given those limitations.  I'd prefer to have the "tried to" 
version, because it will populate with the name of the troublesome file 
it's stuck on even if archiving never gets its first segment delivered.


I'd happily write a patch to handle all that if I thought it would be 
accepted.  I fear that the whole approach will be considered a bit too 
hackish and get rejected on that basis though.  Not really sure of a 
right way to handle this though.  Anything better is going to be more 
complicated because it requires passing more information into the 
archiver, with little gain for that work beyond improving the quality of 
this diagnostic routine.  And I think most people would find what I 
described above useful enough.


--
Greg Smith    2ndQuadrant    Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com



Re: [HACKERS] Streaming replication status

2010-01-14 Thread Simon Riggs
On Thu, 2010-01-14 at 23:07 -0500, Greg Smith wrote:

 pg_last_archived_xlogfile() text:  Get the name of the last file the
 archive_command [tried to|successfully] archived since the server was
 started.  If archiving is disabled or no xlog files have become ready
 to archive since startup, a blank line will be returned.  

OK

 It is possible for this function to return a result that does not
 reflect an actual xlogfile if files are manually added to the server's
 archive_status directory.

 I'd find this extremely handy as a hook for monitoring scripts that
 want to watch the server but don't have access to the filesystem
 directly, even given those limitations.  I'd prefer to have the tried
 to version, because it will populate with the name of the troublesome
 file it's stuck on even if archiving never gets its first segment
 delivered.
 
 I'd happily write a patch to handle all that if I thought it would be
 accepted.  I fear that the whole approach will be considered a bit too
 hackish and get rejected on that basis though.  Not really sure of a
 right way to handle this though.  Anything better is going to be
 more complicated because it requires passing more information into the
 archiver, with little gain for that work beyond improving the quality
 of this diagnostic routine.  And I think most people would find what I
 described above useful enough.

Yes, please write it. It's separate from SR, so will not interfere.

-- 
 Simon Riggs   www.2ndQuadrant.com




Re: [HACKERS] lock_timeout GUC patch

2010-01-14 Thread Jaime Casanova
2010/1/13 Boszormenyi Zoltan z...@cybertec.at:

 Your smaller patch is attached, with the above strangeness. :-)


you still need to add this parameter to postgresql.conf.sample, in
the section about lock management

-- 
Atentamente,
Jaime Casanova
Soporte y capacitación de PostgreSQL
Asesoría y desarrollo de sistemas
Guayaquil - Ecuador
Cel. +59387171157



Re: [HACKERS] PG_MODULE_MAGIC checks and pg_migrator

2010-01-14 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 For example, right now pg_migrator can migrate to 8.4 and 8.5, but there
 is no way to distribute a binary that will migrate to both because you
 need different shared libraries with different PG_MODULE_MAGIC values.

[ yawn... ]  By the time 8.5 (9.0?) ships, this will probably not be
true.  Quit stressing about it and figure on shipping a pg_migrator
per version.

 I am not sure what to suggest except perhaps that there be some way to
 link in a shared object without the magic block checks.

We are *not* doing that.  The potential for error, and costly debug
time, greatly outweighs any possible savings.

regards, tom lane



Re: [HACKERS] quoting psql varible as identifier

2010-01-14 Thread Pavel Stehule
2010/1/15 Robert Haas robertmh...@gmail.com:
 On Mon, Jan 11, 2010 at 6:54 AM, Pavel Stehule pavel.steh...@gmail.com 
 wrote:
 No longer applies, please rebase.

 fixed, sorry



my idea was:

* string
   * escape_string
   * escape_ident
* bytea
   * escape_bytea

But I don't feel strongly about it. Maybe this part of the docs needs more
love - for a long time it has documented deprecated functions. It could be moved.

Pavel
 Hmm.  I think that pqEscapeIdentConn should be in a separate section
 of the documentation, entitled "Escaping Identifiers for Inclusion in
 SQL Commands".  Or else we should merge the existing sections
 "Escaping Strings for Inclusion in SQL Commands" and "Escaping Binary
 Strings for Inclusion in SQL Commands" and then put this in there too.

 On a perhaps-related note, does anyone understand why "Escaping
 Strings for Inclusion in SQL Commands" is formatted in a way that is
 needlessly inconsistent with the preceding and following sections?  I
 was surprised by the magnitude of the doc diff hunk in this patch, but
 when I looked at it, it seemed to read more clearly with these changes.

 I have yet to fully review the code but on a quick glance it looks reasonable.

 ...Robert




Re: [HACKERS] attoptions

2010-01-14 Thread Alex Hunsaker
On Sun, Jan 10, 2010 at 12:27, Robert Haas robertmh...@gmail.com wrote:
 I am not very happy with ATPrepSetOptions().  I basically just
 retained the logic from ATPrepSetDistinct(), but it doesn't really
 make sense in this context.  The idea that we want to support
 attdistinct for system tables and index columns was based on a very
 specific understanding of what that was going to do; for attoptions,
 well, it might make sense for the options that we have now, but it
 might not make sense for the next thing we want to add, and there's
 not going to be any easy fix for that.  Even as it stands, the
 n_distinct_inherited option is supported for both table columns and
 index columns, but it only actually does anything for table columns.

I say just do it in AT(Prep|Exec)SetOptions.  We could extend struct
relopt_gen... but that seems overkill and hard to do without knowing
what else might be in attoptions.  IMHO at this point it's ok not to
worry about it until we have something we actually care about
restricting.

Comments on the patch below.  Minus those, I'm happy with it.

in tablecmds.c:~3682 (ATExecAddColumn)
seems to be either missing a comment or missing the handling of
attoptions altogether?

Any thoughts on how it's now a float8 vs float4? It's nice how it
matches n_distinct in pg_stats now.

pg_dump.c:
You do '' AS attoptions in a few places, should that be NULL? Not
that it really matters in pg_dump...

I tested all the things you would expect (pg_dump included).  The only
perhaps interesting thing is that when creating or adding an inherited
table, it does not pick up the parent's attopts.  I think it's debatable
whether it should, but it seems kind of strange given that ALTER TABLE
on the parent will give the child tables the appropriate attopts (of
course, ONLY works as you expect).

My favorite error of the day :) :
ERROR:  value -2 out of bounds for option n_distinct_inherited
DETAIL:  Valid values are between -1.00 and
179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368.00.

See patch below on top of yours, it fixes some brainos:
*** a/src/backend/catalog/heap.c
--- b/src/backend/catalog/heap.c
***
*** 161,167  static FormData_pg_attribute a6 = {
  static FormData_pg_attribute a7 = {
	0, {"tableoid"}, OIDOID, 0, sizeof(Oid),
TableOidAttributeNumber, 0, -1, -1,
!   true, 'p', 'i', true, false, false, true, 0, {0}
  };

  static const Form_pg_attribute SysAtt[] = {&a1, &a2, &a3, &a4, &a5, &a6, &a7};
--- 161,167 
  static FormData_pg_attribute a7 = {
	0, {"tableoid"}, OIDOID, 0, sizeof(Oid),
TableOidAttributeNumber, 0, -1, -1,
!   true, 'p', 'i', true, false, false, true, 0, {0}, {0}
  };

  static const Form_pg_attribute SysAtt[] = {&a1, &a2, &a3, &a4, &a5, &a6, &a7};
*** a/src/backend/commands/tablecmds.c
--- b/src/backend/commands/tablecmds.c
***
*** 4218,4223  ATExecSetOptions(Relation rel, const char *colName, Node *options,
--- 4218,4224 
	newOptions = transformRelOptions(isnull ? (Datum) 0 : datum,
									 (List *) options, NULL, NULL, false,
									 isReset);
+   /* Validate new options */
(void) attribute_reloptions(newOptions, true);

/* Build new tuple. */
*** a/src/include/catalog/pg_attribute.h
--- b/src/include/catalog/pg_attribute.h
***
*** 152,158  CATALOG(pg_attribute,1249) BKI_BOOTSTRAP BKI_WITHOUT_OIDS BKI_ROWTYPE_OID(75) BK
aclitem attacl[1];

/* Column-level options */
!   aclitem attoptions[1];
  } FormData_pg_attribute;

  /*
--- 152,158 
aclitem attacl[1];

/* Column-level options */
!   text		attoptions[1];
  } FormData_pg_attribute;

  /*



Re: [HACKERS] Testing with concurrent sessions

2010-01-14 Thread Markus Wanner

Hi,

Kevin Grittner wrote:

C'mon, you could have tried to inspire a *bit* more confidence by
calling it version 0.1 or something!  ;-)


LOL

As David used to say: JFDI

 I found that I just needed to ask for python-twisted.

Oh, sorry, yes, requirements: python, twisted.

I must admit that I haven't ever tested on python 2.6 before. I'll try 
that (especially as it's the staircase to 3.0, IIUC).


Two more things: the concurrent update test (in the patch part) is 
complete, while the second one is just a skeleton, ATM. (Just does a 
concurrent COMMIT without actually doing anything).


Second: at the very end of pg_dtester.py, you find the line:
 reporter = StreamReporter()

Try a CursesReporter() instead, it gives much nicer output!

Regards

Markus Wanner



Re: [HACKERS] per-user pg_service.conf

2010-01-14 Thread Peter Eisentraut
On ons, 2010-01-13 at 23:49 +0200, Peter Eisentraut wrote:
 I was surprised/annoyed to find out that there is no way to have
 per-user pg_service.conf, something like ~/.pg_service.conf (well,
 except by export PGSYSCONFDIR).  That would be easy to add.  Comments?

Here's a patch.  Perhaps those who had said they would like that can
validate the behavior.
*** doc/src/sgml/libpq.sgml	2 Dec 2009 14:07:25 -	1.292
--- doc/src/sgml/libpq.sgml	14 Jan 2010 17:02:59 -
***
*** 5786,5791  myEventProc(PGEventId evtId, void *evtIn
--- 5786,5803 
  listitem
   para
indexterm
+primaryenvarPGSERVICEFILE/envar/primary
+   /indexterm
+   envarPGSERVICEFILE/envar specifies the name of the per-user
+   connection service file.  If not set, it defaults
+   to filename~/.pg_service.conf/
+   (see xref linkend=libpq-pgservice).
+  /para
+ /listitem
+ 
+ listitem
+  para
+   indexterm
 primaryenvarPGREALM/envar/primary
/indexterm
envarPGREALM/envar sets the Kerberos realm to use with
***
*** 5979,5985  myEventProc(PGEventId evtId, void *evtIn
 primaryenvarPGSYSCONFDIR/envar/primary
/indexterm
envarPGSYSCONFDIR/envar sets the directory containing the
!   filenamepg_service.conf/ file.
   /para
  /listitem
  
--- 5991,5998 
 primaryenvarPGSYSCONFDIR/envar/primary
/indexterm
envarPGSYSCONFDIR/envar sets the directory containing the
!   filenamepg_service.conf/ file and in a future version
!   possibly other system-wide configuration files.
   /para
  /listitem
  
***
*** 6055,6060  myEventProc(PGEventId evtId, void *evtIn
--- 6068,6076 
indexterm zone=libpq-pgservice
 primarypg_service.conf/primary
/indexterm
+   indexterm zone=libpq-pgservice
+primary.pg_service.conf/primary
+   /indexterm
  
para
 The connection service file allows libpq connection parameters to be
***
*** 6066,6077  myEventProc(PGEventId evtId, void *evtIn
/para
  
para
!To use this feature, copy
!filenameshare/pg_service.conf.sample/filename to
!filenameetc/pg_service.conf/filename and edit the file to add
!service names and parameters. This file can be used for client-only
!installs too. The file's location can also be specified by the
!envarPGSYSCONFDIR/envar environment variable.
/para
   /sect1
  
--- 6082,6111 
/para
  
para
!The connection service file can be a per-user service file
!at filename~/.pg_service.conf/filename or the location
!specified by the environment variable envarPGSERVICEFILE/envar,
!or it can be a system-wide file
!at filenameetc/pg_service.conf/filename or in the directory
!specified by the environment variable
!envarPGSYSCONFDIR/envar.  If service definitions with the same
!name exist in the user and the system file, the user file takes
!precedence.
!   /para
! 
!   para
!The file uses an quoteINI file/quote format where the section
!name is the service name and the parameters are connection
!parameters.  For example:
! programlisting
! # comment
! [mydb]
! host=somehost
! port=5433
! user=admin
! /programlisting
!An example file is provided at
!filenameshare/pg_service.conf.sample/filename.
/para
   /sect1
  
*** src/interfaces/libpq/fe-connect.c	2 Jan 2010 16:58:11 -	1.382
--- src/interfaces/libpq/fe-connect.c	14 Jan 2010 17:02:59 -
***
*** 269,274  static void defaultNoticeReceiver(void *
--- 269,279 
  static void defaultNoticeProcessor(void *arg, const char *message);
  static int parseServiceInfo(PQconninfoOption *options,
   PQExpBuffer errorMessage);
+ static int parseServiceFile(const char *serviceFile,
+ 			const char *service,
+ 			PQconninfoOption *options,
+ 			PQExpBuffer errorMessage,
+ 			bool *group_found);
  static char *pwdfMatchesString(char *buf, char *token);
  static char *PasswordFromFile(char *hostname, char *port, char *dbname,
   char *username);
***
*** 3088,3096  parseServiceInfo(PQconninfoOption *optio
  {
  	char	   *service = conninfo_getval(options, service);
  	char		serviceFile[MAXPGPATH];
  	bool		group_found = false;
! 	int			linenr = 0,
! i;
  
  	/*
  	 * We have to special-case the environment variable PGSERVICE here, since
--- 3093,3102 
  {
  	char	   *service = conninfo_getval(options, service);
  	char		serviceFile[MAXPGPATH];
+ 	char	   *env;
  	bool		group_found = false;
! 	int			status;
! 	struct stat stat_buf;
  
  	/*
  	 * We have to special-case the environment variable PGSERVICE here, since
***
*** 3100,3253  parseServiceInfo(PQconninfoOption *optio
  	if (service == NULL)
  		service = getenv(PGSERVICE);
  
  	/*
  	 * This could be used by any application so we can't use the binary
  	 * location to find our config 

Re: [HACKERS] Streaming replication status

2010-01-14 Thread Stefan Kaltenbrunner

Greg Smith wrote:

Fujii Masao wrote:

I'm thinking something like pg_standbys_xlog_location() [on the primary] which 
returns
one row per standby servers, showing pid of walsender, host name/
port number/user OID of the standby, the location where the standby
has written/flushed WAL. DBA can measure the gap from the
combination of pg_current_xlog_location() and pg_standbys_xlog_location()
via one query on the primary.



This function is useful but not essential for troubleshooting, I think.
So I'd like to postpone it.
  


Sure; in a functional system where primary and secondary are both up, 
you can assemble the info using the new functions you just added, so 
this other one is certainly optional.  I just took a brief look at the 
code of the features you added, and it looks like it exposes the minimum 
necessary to make this whole thing possible to manage.  I think it's OK 
if you postpone this other bit, more important stuff for you to work on.


agreed



So:  the one piece of information I thought was most important to expose 
here at an absolute minimum is there now.  Good progress.  The other 
popular request that keeps popping up here is providing an easy way to 
see how backlogged the archive_command is, to make it easier to monitor 
for out-of-disk errors that might prove catastrophic to replication.


I tend to disagree - in any reasonable production setup, basic stuff 
like disk space usage is monitored by non-application-specific means.
While monitoring backlog might be interesting for other reasons, citing 
disk space usage/exhaustion seems just wrong.



[...]


I'd find this extremely handy as a hook for monitoring scripts that want 
to watch the server but don't have access to the filesystem directly, 
even given those limitations.  I'd prefer to have the tried to 
version, because it will populate with the name of the troublesome file 
it's stuck on even if archiving never gets its first segment delivered.


While fancy and all, I think this goes way too far for the first cut at 
SR (or, say, this release); monitoring disk usage and tracking log files 
for errors are SOLVED issues in established production setups. If you 
are in an environment that does neither for each and every server, 
independent of what you have running on it, or a setup where the 
sysadmins are clueless and the poor DBA has to hack around that fact, 
you have way bigger issues anyway.



Stefan



Re: [HACKERS] lock_timeout GUC patch

2010-01-14 Thread Boszormenyi Zoltan
Jaime Casanova wrote:
 2010/1/13 Boszormenyi Zoltan z...@cybertec.at:
   
 Your smaller patch is attached, with the above strangeness. :-)

   

 you still had to add this parameter to the postgresql.conf.sample in
 the section about lock management
   

Attached with the required change.

Thanks,
Zoltán Böszörményi

-- 
Bible has answers for everything. Proof:
But let your communication be, Yea, yea; Nay, nay: for whatsoever is more
than these cometh of evil. (Matthew 5:37) - basics of digital technology.
May your kingdom come - superficial description of plate tectonics

--
Zoltán Böszörményi
Cybertec Schönig  Schönig GmbH
http://www.postgresql.at/

diff -dcrpN pgsql.orig/doc/src/sgml/config.sgml pgsql.5/doc/src/sgml/config.sgml
*** pgsql.orig/doc/src/sgml/config.sgml	2010-01-06 08:43:22.0 +0100
--- pgsql.5/doc/src/sgml/config.sgml	2010-01-13 18:59:19.0 +0100
*** COPY postgres_log FROM '/full/path/to/lo
*** 4191,4196 
--- 4191,4220 
     </listitem>
    </varlistentry>
  
+    <varlistentry id="guc-lock-timeout" xreflabel="lock_timeout">
+     <term><varname>lock_timeout</varname> (<type>integer</type>)</term>
+     <indexterm>
+      <primary><varname>lock_timeout</> configuration parameter</primary>
+     </indexterm>
+     <listitem>
+      <para>
+       Abort any statement that tries to acquire a heavy-weight lock (e.g. rows,
+       pages, tables, indices or other objects) and the lock has to wait more
+       than the specified number of milliseconds, starting from the time the
+       command arrives at the server from the client.
+       If <varname>log_min_error_statement</> is set to <literal>ERROR</> or lower,
+       the statement that timed out will also be logged. A value of zero
+       (the default) turns off the limitation.
+      </para>
+ 
+      <para>
+       Setting <varname>lock_timeout</> in
+       <filename>postgresql.conf</> is not recommended because it
+       affects all sessions.
+      </para>
+     </listitem>
+    </varlistentry>
+ 
     <varlistentry id="guc-vacuum-freeze-table-age" xreflabel="vacuum_freeze_table_age">
      <term><varname>vacuum_freeze_table_age</varname> (<type>integer</type>)</term>
      <indexterm>
diff -dcrpN pgsql.orig/doc/src/sgml/ref/lock.sgml pgsql.5/doc/src/sgml/ref/lock.sgml
*** pgsql.orig/doc/src/sgml/ref/lock.sgml	2009-09-18 08:26:40.0 +0200
--- pgsql.5/doc/src/sgml/ref/lock.sgml	2010-01-13 18:59:19.0 +0100
*************** LOCK [ TABLE ] [ ONLY ] <replaceable cla
*** 39,46 ****
     <literal>NOWAIT</literal> is specified, <command>LOCK
     TABLE</command> does not wait to acquire the desired lock: if it
     cannot be acquired immediately, the command is aborted and an
!    error is emitted.  Once obtained, the lock is held for the
!    remainder of the current transaction.  (There is no <command>UNLOCK
     TABLE</command> command; locks are always released at transaction
     end.)
    </para>
--- 39,49 ----
     <literal>NOWAIT</literal> is specified, <command>LOCK
     TABLE</command> does not wait to acquire the desired lock: if it
     cannot be acquired immediately, the command is aborted and an
!    error is emitted. If <varname>lock_timeout</varname> is set to a value
!    higher than 0, and the lock cannot be acquired under the specified
!    timeout value in milliseconds, the command is aborted and an error
!    is emitted. Once obtained, the lock is held for the remainder of
!    the current transaction.  (There is no <command>UNLOCK
     TABLE</command> command; locks are always released at transaction
     end.)
    </para>
diff -dcrpN pgsql.orig/doc/src/sgml/ref/select.sgml pgsql.5/doc/src/sgml/ref/select.sgml
*** pgsql.orig/doc/src/sgml/ref/select.sgml	2009-10-29 15:23:52.0 +0100
--- pgsql.5/doc/src/sgml/ref/select.sgml	2010-01-13 18:59:19.0 +0100
*************** FOR SHARE [ OF <replaceable class="param
*** 1121,1126 ****
--- 1121,1134 ----
     </para>
  
     <para>
+     If <literal>NOWAIT</> option is not specified and <varname>lock_timeout</varname>
+     is set to a value higher than 0, and the lock needs to wait more than
+     the specified value in milliseconds, the command reports an error after
+     timing out, rather than waiting indefinitely. The note in the previous
+     paragraph applies to the <varname>lock_timeout</varname>, too.
+    </para>
+ 
+    <para>
      If specific tables are named in <literal>FOR UPDATE</literal>
      or <literal>FOR SHARE</literal>,
      then only rows coming from those tables are locked; any other
diff -dcrpN pgsql.orig/src/backend/access/heap/heapam.c pgsql.5/src/backend/access/heap/heapam.c
*** pgsql.orig/src/backend/access/heap/heapam.c	2010-01-10 15:49:30.0 +0100
--- pgsql.5/src/backend/access/heap/heapam.c	2010-01-13 22:08:23.0 +0100
*************** l1:
*** 2119,2125 ****
  		if (infomask & HEAP_XMAX_IS_MULTI)
  		{
  			/* wait for multixact */
! 			MultiXactIdWait((MultiXactId) xwait);
  			LockBuffer(buffer, 
