Re: [HACKERS] Transaction-scope advisory locks

2010-12-14 Thread Marko Tiikkaja

On 2010-12-14 4:23 AM +0200, Tom Lane wrote:
> Marko Tiikkaja <marko.tiikk...@cs.helsinki.fi> writes:
>> On 2010-12-14 1:08 AM +0200, Szymon Guz wrote:
>>> In my opinion changing current behavior is not a good idea. I know some
>>> software that relies on current behavior and this would break it. Maybe add
>>> that as an option, or add another type of advisory lock?
>>
>> Oh, I forgot to mention.  The patch doesn't change any existing
>> behaviour; the new behaviour can be invoked only by adding a new boolean
>> argument:
>
> Uh, I don't think so.  It sure looks like you have changed the user
> lockmethod to be transactional, ie, auto-release on commit/abort.

I was under the impression that passing sessionLock=true to
LockAcquire(), combined with allLocks=false to LockReleaseAll(), would be
enough to prevent that from happening.  My tests seem to agree with this.

Am I missing something?
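For reference, the calls in question, using the 9.0-era lock.c signatures
(the wrapper functions below are illustrative sketches, not the patch's
actual code):

#include "storage/lock.h"

/*
 * Illustrative only: sessionLock=true ties the lock to the session, so
 * it survives transaction commit/abort.
 */
static void
acquire_session_advisory_lock(const LOCKTAG *tag)
{
	(void) LockAcquire(tag, ExclusiveLock,
					   true /* sessionLock */, false /* dontWait */);
}

/*
 * At transaction end: with allLocks=false, LockReleaseAll() releases
 * only transaction-scope locks and leaves session locks in place.
 */
static void
release_transaction_scope_locks(void)
{
	LockReleaseAll(USER_LOCKMETHOD, false /* allLocks */);
}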


Regards,
Marko Tiikkaja



Re: [HACKERS] pg_execute_from_file, patch v10

2010-12-14 Thread Dimitri Fontaine
Tom Lane t...@sss.pgh.pa.us writes:
 Has anyone thought twice about the security implications of that?
 Not to mention that in most cases, the very last thing we want is to
 have to specify an exact full path?

Well, the security model is left the same as before: superuser only. And
Itagaki showed that superusers are already allowed to read any file
anywhere, so we didn't change anything here.

 I think we'd be better off insisting that the extension files be under
 sharedir or some such place.

That's the case, but the rework of genfile.c is more general than just
support for extensions, or I wouldn't have been asked for a separate
patch, would I?

 In any case, I concur with what I gather Robert is thinking, which is
 that there is no good reason to be exposing any of this at the SQL level.

That used to be done this way, you know, in versions 0 through 6 of
the patch. Starting at version 7, the underlying facilities have been
split out and exposed, because of the file encoding and server encoding
issues reported by Itagaki.

I propose that two or more of you reach agreement on what the right
specs are and wake me up after that, so that I can produce the right
version of the patch and, if necessary, revise it.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



Re: [HACKERS] pg_execute_from_file, patch v10

2010-12-14 Thread Dimitri Fontaine
Tom Lane t...@sss.pgh.pa.us writes:
 CREATE EXTENSION will be superuser to start with, no doubt, but I think
 we'll someday want to allow it to database owners, just as happened with
 CREATE LANGUAGE.  Let's not build it on top of operations that
 inherently involve security problems, especially when there's no need
 to.

That boils down to moving the superuser() test into the right functions;
it's now in the innermost file-reading facility. If you have something
precise enough for me to work on, please say so, but I guess you'd spend
less time doing the copy/paste in the code rather than in the mail.
That scheduling optimisation is yours to make, though.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



Re: [HACKERS] pg_execute_from_file, patch v10

2010-12-14 Thread Dimitri Fontaine
Robert Haas robertmh...@gmail.com writes:
 Well, I think it is best when a patch has just one purpose.  This
 seems to be sort of an odd hodge-podge of things.

The purpose here is to clean up the existing pg_read_file() facility so
that it's easy to build pg_execute_sql_file() on top of it.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



Re: [HACKERS] Transaction-scope advisory locks

2010-12-14 Thread Dimitri Fontaine
Marko Tiikkaja marko.tiikk...@cs.helsinki.fi writes:
 I often find myself wanting advisory locks that are automatically released
 when the transaction ends, so here's a small patch trying to do just that.

Excellent idea; I sure need that (I've been doing some PL work to track
granted locks and then unlock them; transaction scope would mean it could
be done in a pure SQL function). Thanks! :)
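To illustrate, usage could look something like this, assuming the new
boolean argument mentioned upthread (the exact signature is whatever the
patch settles on):

BEGIN;
-- hypothetical: second argument requests transaction scope
SELECT pg_advisory_lock(1234, true);
-- ... do work while holding the lock ...
COMMIT;  -- lock released here automatically; no pg_advisory_unlock() needed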

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



Re: [HACKERS] Transaction-scope advisory locks

2010-12-14 Thread Andres Freund
On Tuesday 14 December 2010 00:14:22 Marko Tiikkaja wrote:
 The lock space is the same though, but I don't feel strongly about it.
I feel strongly that it needs the same locking space. I pretty frequently have 
the need for multiple clients trying to acquire a lock in transaction scope 
(i.e. for accessing the cache) and one/few acquiring it in session scope (for 
building the cache).

Andres



Re: [HACKERS] hstores in pl/python

2010-12-14 Thread Peter Eisentraut
On mån, 2010-12-13 at 10:55 -0500, Tom Lane wrote:
 We don't normally invent specialized syntax for a specific datatype.
 Not even if it's in core.

I think the idea would be to make associative arrays a kind of
second-order object like arrays, instead of a data type.




Re: [HACKERS] Instrument checkpoint sync calls

2010-12-14 Thread Greg Smith

Robert Haas wrote:
> I took a look at this and it looks generally good, but I'm wondering
> why md.c is converting the results from an exact value to a floating
> point, only to have xlog.c turn around and convert back to an integer.
> I think it could just return milliseconds directly, or if you're
> worried about a checkpoint that takes more than 24 days to complete,
> seconds and microseconds.


Attached patch now does something like this, except without the 
roll-over concern.  INSTR_TIME_GET_MICROSEC returns a uint64 value.  I 
just made that the storage format for all these values until they're 
converted for display.  Test output:


LOG:  checkpoint starting: xlog
DEBUG:  checkpoint sync: number=1 file=base/16385/16480 time=10422.859 msec
DEBUG:  checkpoint sync: number=2 file=base/16385/16475_vm time=2896.614 msec

DEBUG:  checkpoint sync: number=3 file=base/16385/16475.1 time=57.836 msec
DEBUG:  checkpoint sync: number=4 file=base/16385/16466 time=20.080 msec
DEBUG:  checkpoint sync: number=5 file=base/16385/16463 time=74.926 msec
DEBUG:  checkpoint sync: number=6 file=base/16385/16482 time=74.263 msec
DEBUG:  checkpoint sync: number=7 file=base/16385/16475_fsm time=7.062 msec
DEBUG:  checkpoint sync: number=8 file=base/16385/16475 time=35.164 msec
LOG:  checkpoint complete: wrote 2143 buffers (52.3%); 0 transaction log 
file(s) added, 0 removed, 3 recycled; write=1.213 s, sync=13.589 s, 
total=24.744 s; sync files=8, longest=10.422 s, average=1.698 s


This shows the hard truncation used, so 10422.859 msec becomes 10.422 
s.  I don't think allowing up to 0.999 ms of error there is a problem 
given the expected scale.  But since none of the precision is lost until 
the end, that could be changed by touching only the final display 
formatting of the value.  Following your general idea further, why throw 
away any resolution inside md.c; let xlog.c decide how to show it.


Note that I also fixed the DEBUG-level lines to show only their actual 
precision.  Before, they printed six digits to the right of the decimal 
point each time, the last three of which were always 0.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services and Support   www.2ndQuadrant.us
PostgreSQL 9.0 High Performance: http://www.2ndQuadrant.com/books

diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index 1ed9687..c9778df 100644
*** a/src/backend/access/transam/xlog.c
--- b/src/backend/access/transam/xlog.c
*** LogCheckpointEnd(bool restartpoint)
*** 6955,6964 
  {
  	long		write_secs,
  sync_secs,
! total_secs;
  	int			write_usecs,
  sync_usecs,
! total_usecs;
  
  	CheckpointStats.ckpt_end_t = GetCurrentTimestamp();
  
--- 6955,6969 
  {
  	long		write_secs,
  sync_secs,
! total_secs,
! longest_secs,
! average_secs;
  	int			write_usecs,
  sync_usecs,
! total_usecs,
! longest_usecs,
! average_usecs;
! 	uint64		average_sync_time;
  
  	CheckpointStats.ckpt_end_t = GetCurrentTimestamp();
  
*** LogCheckpointEnd(bool restartpoint)
*** 6974,6991 
  		CheckpointStats.ckpt_sync_end_t,
  		sync_secs, sync_usecs);
  
  	if (restartpoint)
  		elog(LOG, "restartpoint complete: wrote %d buffers (%.1f%%); "
! 			 "write=%ld.%03d s, sync=%ld.%03d s, total=%ld.%03d s",
  			 CheckpointStats.ckpt_bufs_written,
  			 (double) CheckpointStats.ckpt_bufs_written * 100 / NBuffers,
  			 write_secs, write_usecs / 1000,
  			 sync_secs, sync_usecs / 1000,
! 			 total_secs, total_usecs / 1000);
  	else
  		elog(LOG, "checkpoint complete: wrote %d buffers (%.1f%%); "
  			 "%d transaction log file(s) added, %d removed, %d recycled; "
! 			 "write=%ld.%03d s, sync=%ld.%03d s, total=%ld.%03d s",
  			 CheckpointStats.ckpt_bufs_written,
  			 (double) CheckpointStats.ckpt_bufs_written * 100 / NBuffers,
  			 CheckpointStats.ckpt_segs_added,
--- 6979,7017 
  		CheckpointStats.ckpt_sync_end_t,
  		sync_secs, sync_usecs);
  
+ 	/*
+ 	 * Timing values returned from CheckpointStats are in microseconds.
+ 	 * Convert to the second plus microsecond form that TimestampDifference
+ 	 * returns for homogeneous printing.
+ 	 */
+ 	longest_secs = (long) (CheckpointStats.ckpt_longest_sync / 1000000);
+ 	longest_usecs = CheckpointStats.ckpt_longest_sync -
+ 		(uint64) longest_secs * 1000000;
+ 
+ 	average_sync_time = 0;
+ 	if (CheckpointStats.ckpt_sync_rels > 0)
+ 		average_sync_time = CheckpointStats.ckpt_agg_sync_time /
+ 			CheckpointStats.ckpt_sync_rels;
+ 	average_secs = (long) (average_sync_time / 1000000);
+ 	average_usecs = average_sync_time - (uint64) average_secs * 1000000;
+ 
  	if (restartpoint)
  		elog(LOG, "restartpoint complete: wrote %d buffers (%.1f%%); "
! 			 "write=%ld.%03d s, sync=%ld.%03d s, total=%ld.%03d s; "
! 			 "sync files=%d, longest=%ld.%03d s, average=%ld.%03d s",
  			 CheckpointStats.ckpt_bufs_written,
  			 (double) 

Re: [HACKERS] CommitFest wrap-up

2010-12-14 Thread Florian Pflug
On Dec13, 2010, at 18:37 , Robert Haas wrote:
 We're now just a day or two from the end of this CommitFest and there
 are still a LOT of open patches - to be specific, 23.  Here's a brief
 synopsis of where we are with the others, all IMO of course.
Thanks for putting this together!

 - serializable lock consistency - I am fairly certain this needs
 rebasing.  I don't have time to deal with it right away.  That sucks,
 because I think this is a really important change.
I can try to find some time to update the patch if it suffers from bit-rot. 
Would that help?

best regards,
Florian Pflug






Re: [HACKERS] hstores in pl/python

2010-12-14 Thread Peter Eisentraut
On mån, 2010-12-13 at 08:50 +0100, Jan Urbański wrote:
 It would be cool to be able to transparently use hstores as Python
 dictionaries and vice versa. It would be easy enough with hstore as a
 core type, but with hstore as an addon it's not that easy.

I have been thinking about this class of problems for a while.  I think
the proper fix is to have a user-definable mapping between types and
languages.  It would be another pair of input/output functions,
essentially.
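If I understand the idea, it could look something like a per-(type,
language) pair of conversion functions; entirely hypothetical syntax,
just to make the shape concrete:

-- hypothetical, nonexistent syntax:
CREATE TRANSFORM FOR hstore LANGUAGE plpythonu (
    FROM SQL WITH FUNCTION hstore_to_plpython(internal),
    TO SQL WITH FUNCTION plpython_to_hstore(internal)
);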




Re: [HACKERS] SQL/MED - core functionality

2010-12-14 Thread Peter Eisentraut
On mån, 2010-12-13 at 14:45 +0900, Shigeru HANADA wrote:
 Simple FDWs such as file FDW might not have a concept of a user on the
 remote side.  In such cases, it would be enough to control access
 privileges per local user with GRANT/REVOKE SELECT statements.

Right.  But it depends on the implementation.  You could, for example,
imagine a userdir FDW that reads from users' home directories.

 To allow omitting column definitions for that purpose, a way to create
 zero-column tables would have to be provided.  New syntax which allows
 FDWs to determine the column definitions would be necessary.
 
 ex)
 -- Create foo from the remote table foo on the server bar
 CREATE FOREIGN TABLE foo SERVER bar;
 -- Create zero-column table foo
 CREATE FOREIGN TABLE foo () SERVER bar;

That syntax seems pretty obvious.





Re: [HACKERS] CommitFest wrap-up

2010-12-14 Thread Robert Haas
On Tue, Dec 14, 2010 at 7:51 AM, Florian Pflug f...@phlo.org wrote:
 - serializable lock consistency - I am fairly certain this needs
 rebasing.  I don't have time to deal with it right away.  That sucks,
 because I think this is a really important change.
 I can try to find some time to update the patch if it suffers from bit-rot. 
 Would that help?

Yes!

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] SQL/MED - core functionality

2010-12-14 Thread Shigeru HANADA
On Mon, 13 Dec 2010 21:51:40 -0500
Robert Haas robertmh...@gmail.com wrote:
 This actually doesn't apply cleanly.  There's a hunk in pg_class.h
 that is failing.

I might have missed recent changes around pg_class.relistemp.
I've fixed it in my local repo.

 I think attgenoptions is a poor choice of name for the concept it
 represents.  Surely it should be attfdwoptions.  But I am wondering
 whether we could actually go a step further and break this
 functionality off into a separate patch.  Do file_fdw and/or
 postgresql_fdw require column-level FDW options?  If not, splitting
 this part out looks like it would be a fairly significant
 simplification for v1

In the current FDW implementation, column-level FDW options are used for:

1) The force_not_null option of file_fdw.  COPY FROM accepts the option as
a column name list, but it would be complicated to accept it as a
table-level FDW option.

2) The column name alias option in postgresql_fdw.

But they don't seem necessary for discussing the basic design.

 Along similar lines, I think we could simplify the first version of
 this considerably by removing all the support for constraints on
 foreign tables.  It might be useful to have that some day, but in the
 interest of whittling this down to a manageable size, it seems like we
 could easily do without that for starters.

 On the other hand, I don't really see any advantage to allowing rules
 on foreign tables - ever.  Unless there's some reason we really need
 that, my gut feeling would be to rip it out and forget about it.
 
 The docs should avoid cut-and-pasting large quantities of the existing
 docs.  Instead, they should refer to the existing material.

CHECK constraints are allowed in order to support constraint exclusion,
but NOT NULL is designed as just a query-time constraint.

I'll simplify the patch and post patches 1-4 below first.

essential part
1) Basic syntax for FOREIGN TABLE and FDW HANDLER
2) FDW API and ForeignScan execution
# These patches are split just to make review easy.

FDW implementation
3) pgsql_fdw
4) file_fdw

Additional features
5) NOT NULL constraint and query-time evaluation
6) column-level FDW option
- syntax and catalog
- column alias option for pgsql_fdw
- force_not_null option for file_fdw
7) RULE

 Copyright notice for new files should go through 2010, not 2009.

That will be fixed in the next patch.
I've also replaced all $PostgreSQL$ markers with actual file names.

Regards,
--
Shigeru Hanada





Re: [HACKERS] Transaction-scope advisory locks

2010-12-14 Thread Merlin Moncure
On Tue, Dec 14, 2010 at 7:07 AM, Andres Freund and...@anarazel.de wrote:
 On Tuesday 14 December 2010 00:14:22 Marko Tiikkaja wrote:
 The lock space is the same though, but I don't feel strongly about it.
 I feel strongly that it needs the same locking space. I pretty frequently have
 the need for multiple clients trying to acquire a lock in transaction scope
 (i.e. for accessing the cache) and one/few acquiring it in session scope (for
 building the cache).

Not that I'm necessarily against the proposal, but what does this do
that can't already be done by locking a table or a table's row?

merlin



Re: [HACKERS] Transaction-scope advisory locks

2010-12-14 Thread Marko Tiikkaja

On 2010-12-14 4:19 PM +0200, Merlin Moncure wrote:
> On Tue, Dec 14, 2010 at 7:07 AM, Andres Freund <and...@anarazel.de> wrote:
>> On Tuesday 14 December 2010 00:14:22 Marko Tiikkaja wrote:
>>> The lock space is the same though, but I don't feel strongly about it.
>>
>> I feel strongly that it needs the same locking space. I pretty frequently have
>> the need for multiple clients trying to acquire a lock in transaction scope
>> (i.e. for accessing the cache) and one/few acquiring it in session scope (for
>> building the cache).
>
> Not that I'm necessarily against the proposal, but what does this do
> that can't already be done by locking a table or a table's row?

Try without throwing an error.


Regards,
Marko Tiikkaja



Re: [HACKERS] Instrument checkpoint sync calls

2010-12-14 Thread Alvaro Herrera

I gave this patch a look and it seems pretty good to me, except that I'm
uncomfortable with the idea of mdsync filling in the details for
CheckpointStats fields directly.  Would it work to pass a struct (say
SmgrSyncStats) from CheckPointBuffers to smgrsync and from there to
mdsync, have this function fill it, and return it back so that
CheckPointBuffers copies the data from this struct into CheckpointStats?

Another minor nitpick: inside the block when you call FileSync, why
check for log_checkpoints at all?  Seems to me that just checking for
zero of sync_start should be enough.  Alternatively, seems simpler to
just have a local var with the value of log_checkpoints at the start of
mdsync and use that throughout the function.  (Surely if someone turns
off log_checkpoints in the middle of a checkpoint, it's not really a
problem that we collect and report stats during that checkpoint.)
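To make that concrete, a rough sketch of the struct-passing idea
(SmgrSyncStats is a name I'm making up here; none of this exists yet):

/* Hypothetical: filled in by mdsync(), copied out by CheckPointBuffers(). */
typedef struct SmgrSyncStats
{
	int			sync_rels;		/* number of files synced */
	uint64		longest_sync;	/* slowest single sync, in microseconds */
	uint64		agg_sync_time;	/* sum of all sync times, in microseconds */
} SmgrSyncStats;

CheckPointBuffers() could then do something like:

	SmgrSyncStats stats;

	smgrsync(&stats);			/* passed through to mdsync() */
	CheckpointStats.ckpt_sync_rels = stats.sync_rels;
	CheckpointStats.ckpt_longest_sync = stats.longest_sync;
	CheckpointStats.ckpt_agg_sync_time = stats.agg_sync_time;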

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] Transaction-scope advisory locks

2010-12-14 Thread Andres Freund
On Tuesday 14 December 2010 15:19:32 Merlin Moncure wrote:
 On Tue, Dec 14, 2010 at 7:07 AM, Andres Freund and...@anarazel.de wrote:
  On Tuesday 14 December 2010 00:14:22 Marko Tiikkaja wrote:
  The lock space is the same though, but I don't feel strongly about it.
  
  I feel strongly that it needs the same locking space. I pretty frequently
  have the need for multiple clients trying to acquire a lock in
  transaction scope (i.e. for accessing the cache) and one/few acquiring
  it in session scope (for building the cache).
 
 Not that I'm necessarily against the proposal, but what does this do
 that can't already be done by locking a table or a table's row?
1. Trylock without raising errors (the other possibility is NOWAIT, but that
doesn't work very well as it ERRORs; see the sketch after this list).

2. Mixing session and transaction scope. (I would like to have that e.g. for
materialized views. The writers use session scope and the readers use
transaction scope. It's not that easy to make code ERROR/exception safe when
you only control some view or such. In contrast, the computationally expensive
part of computing the materialized view should be much easier to do sensibly
in session scope.)

3. Nonlocking dequeuing of a table-based queue, which can e.g. be done with
advisory locks but not with row-level locks.
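For point 1, a sketch with functions that exist today (jobs is a
hypothetical table):

-- returns true/false immediately instead of blocking or erroring:
SELECT pg_try_advisory_lock(12345);

-- the closest row-lock equivalent can only raise an error when it
-- cannot wait:
-- SELECT * FROM jobs WHERE id = 1 FOR UPDATE NOWAIT;  -- ERROR 55P03 on conflict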

Andres



Re: [HACKERS] Instrument checkpoint sync calls

2010-12-14 Thread Robert Haas
On Tue, Dec 14, 2010 at 9:29 AM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
 I gave this patch a look and it seems pretty good to me, except

Err, woops.  I just committed this as-is.  Sorry.

 that I'm
 uncomfortable with the idea of mdsync filling in the details for
 CheckpointStats fields directly.  Would it work to pass a struct (say
 SmgrSyncStats) from CheckPointBuffers to smgrsync and from there to
 mdsync, have this function fill it, and return it back so that
 CheckPointBuffers copies the data from this struct into CheckpointStats?

 Another minor nitpick: inside the block when you call FileSync, why
 check for log_checkpoints at all?  Seems to me that just checking for
 zero of sync_start should be enough.  Alternatively, seems simpler to
 just have a local var with the value of log_checkpoints at the start of
 mdsync and use that throughout the function.  (Surely if someone turns
 off log_checkpoints in the middle of a checkpoint, it's not really a
 problem that we collect and report stats during that checkpoint.)

Neither of these things bothers me, but we can certainly discuss...

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] SQL/MED - core functionality

2010-12-14 Thread Robert Haas
On Tue, Dec 14, 2010 at 1:16 AM, Pavel Stehule pavel.steh...@gmail.com wrote:
 On the other hand, I don't really see any advantage to allowing rules
 on foreign tables - ever.  Unless there's some reason we really need
 that, my gut feeling would be to rip it out and forget about it.

 views, updateable views?

We already have those.  They have their own relkind.  Why would we
need to duplicate that here?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] SQL/MED - core functionality

2010-12-14 Thread Itagaki Takahiro
On Tue, Dec 14, 2010 at 23:38, Robert Haas robertmh...@gmail.com wrote:
 On Tue, Dec 14, 2010 at 1:16 AM, Pavel Stehule pavel.steh...@gmail.com 
 wrote:
 On the other hand, I don't really see any advantage to allowing rules
 on foreign tables - ever.  Unless there's some reason we really need
 that, my gut feeling would be to rip it out and forget about it.

 views, updateable views?

 We already have those.  They have their own relkind.  Why would we
 need to duplicate that here?

We need RULEs or INSTEAD OF TRIGGERs to support updatable foreign tables.
Do you suggest defining a wrapper view when we want to create an updatable
foreign table?  I don't think users would like that kind of wrapper.

-- 
Itagaki Takahiro



Re: [HACKERS] SQL/MED - core functionality

2010-12-14 Thread Robert Haas
On Tue, Dec 14, 2010 at 9:06 AM, Shigeru HANADA
han...@metrosystems.co.jp wrote:
 I'll simplify the patch and post patches 1-4 of below first.

 essential part
 1) Basic syntax for FOREIGN TABLE and FDW HANDLER
 2) FDW API and ForeignScan execution
 # These patches are split just to make review easy.

 FDW implementation
 3) pgsql_fdw
 4) file_fdw

 Additional features
 5) NOT NULL constraint and query-time evaluation
 6) column-level FDW option
    - syntax and catalog
    - column alias option for pgsql_fdw
    - force_not_null option for file_fdw
 7) RULE

This seems like a good plan.  As a procedural issue, please post
patches one and two on the same thread (perhaps this one), because
they can't be considered in isolation.  Each of the remaining patches
should be posted to its own thread.

I've moved the SQL/MED patches to CommitFest 2011-01, and created a
topic for them, SQL/MED, since it seems like we'll end up with a bunch
of them to review.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] SQL/MED - core functionality

2010-12-14 Thread Robert Haas
On Tue, Dec 14, 2010 at 9:42 AM, Itagaki Takahiro
itagaki.takah...@gmail.com wrote:
 On Tue, Dec 14, 2010 at 23:38, Robert Haas robertmh...@gmail.com wrote:
 On Tue, Dec 14, 2010 at 1:16 AM, Pavel Stehule pavel.steh...@gmail.com 
 wrote:
 On the other hand, I don't really see any advantage to allowing rules
 on foreign tables - ever.  Unless there's some reason we really need
 that, my gut feeling would be to rip it out and forget about it.

 views, updateable views?

 We already have those.  They have their own relkind.  Why would we
 need to duplicate that here?

 We need RULEs or INSTEAD OF TRIGGERs to support updatable foreign tables.

We do?  Why can't the support for updating foreign tables be built-in
rather than trigger-based?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Transaction-scope advisory locks

2010-12-14 Thread Tom Lane
Merlin Moncure mmonc...@gmail.com writes:
 Not that I'm necessarily against the proposal, but what does this do
 that can't already be done by locking a table or a table's row?

I agree with Andres' point about this: sometimes it'd be more convenient
for an advisory lock to be released automatically at transaction end.
If you have a mix of clients that want that behavior with others that
want a persistent hold on the same locks, you can't do it with regular
locks.

regards, tom lane



Re: [HACKERS] Transaction-scope advisory locks

2010-12-14 Thread Andrew Dunstan



On 12/14/2010 09:51 AM, Tom Lane wrote:
> Merlin Moncure <mmonc...@gmail.com> writes:
>> Not that I'm necessarily against the proposal, but what does this do
>> that can't already be done by locking a table or a table's row?
>
> I agree with Andres' point about this: sometimes it'd be more convenient
> for an advisory lock to be released automatically at transaction end.
> If you have a mix of clients that want that behavior with others that
> want a persistent hold on the same locks, you can't do it with regular
> locks.

Right. And that's why they need to be in the same lockspace.

cheers

andrew



Re: [HACKERS] SQL/MED - core functionality

2010-12-14 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Tue, Dec 14, 2010 at 9:42 AM, Itagaki Takahiro
 itagaki.takah...@gmail.com wrote:
 We need RULEs or INSTEAD OF TRIGGERs to support updatable foreign tables.

 We do?  Why can't the support for updating foreign tables be built-in
 rather than trigger-based?

It *has* to be built in.  What exactly would you imagine a rule or
trigger is going to do?  It won't have any below-SQL-level access to the
foreign table with which it could issue some magic command that's not
spelled UPDATE; and even if it did, why wouldn't you just spell that
command UPDATE?

There would be value in being able to fire triggers on foreign-table
updates just like you can on local tables.  It might well be that that
would just fall out of the implementation, since triggers are handled at
the top level of the executor, which shouldn't need to know the
difference.  But if it doesn't fall out easily, I don't mind postponing
that feature till later.

regards, tom lane



Re: [HACKERS] Transaction-scope advisory locks

2010-12-14 Thread Merlin Moncure
On Tue, Dec 14, 2010 at 9:51 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Merlin Moncure mmonc...@gmail.com writes:
 Not that I'm necessarily against the proposal, but what does this do
 that can't already be done by locking a table or a table's row?

 I agree with Andres' point about this: sometimes it'd be more convenient
 for an advisory lock to be released automatically at transaction end.
 If you have a mix of clients that want that behavior with others that
 want a persistent hold on the same locks, you can't do it with regular
 locks.

Right, plus a fourth:

Automatic lock release on error.  Right now, if I'm grabbing an
in-transaction lock inside a function, I have to put in a subtransaction
handler to guarantee release if anything non-trivial happens while the
lock is held.
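The workaround today looks roughly like this (a sketch; some_work() is a
hypothetical stand-in, and the inner block exists only to guarantee the
unlock):

CREATE FUNCTION guarded_work() RETURNS void AS $$
BEGIN
    PERFORM pg_advisory_lock(42);
    BEGIN
        PERFORM some_work();  -- anything non-trivial can raise here
    EXCEPTION WHEN OTHERS THEN
        PERFORM pg_advisory_unlock(42);
        RAISE;
    END;
    PERFORM pg_advisory_unlock(42);
END;
$$ LANGUAGE plpgsql;

With transaction-scope locks, the inner block and both unlock calls would
simply go away.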

merlin



Re: [HACKERS] Tab completion for view triggers in psql

2010-12-14 Thread David Fetter
On Mon, Dec 13, 2010 at 10:48:54PM -0500, Robert Haas wrote:
 On Tue, Nov 30, 2010 at 9:15 AM, David Fetter da...@fetter.org wrote:
  Patch attached. If you think my changes are ok,
  please change the patch status to Ready for Committer.
 
  Done :)
 
 I have committed part of this patch.

Great!

 The rest is attached.  I don't know that there's any problem with
 it, but I ran out of steam.

The issue with not committing it is that having a two-word condition
(INSTEAD OF vs. BEFORE or AFTER) means that things that know about a
preceding BEFORE or AFTER now have to look one word further backward,
at least as tab completion works now.
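In tab-complete.c terms, the problem looks roughly like this (illustrative,
in the style of the existing prevN_wd tests; the completion lists are
elided):

	/* today, one word of lookback identifies the context */
	else if (pg_strcasecmp(prev_wd, "BEFORE") == 0 ||
			 pg_strcasecmp(prev_wd, "AFTER") == 0)
		... complete with trigger events ...

	/* with INSTEAD OF, the same spot needs two words of lookback */
	else if (pg_strcasecmp(prev2_wd, "INSTEAD") == 0 &&
			 pg_strcasecmp(prev_wd, "OF") == 0)
		... complete with trigger events ...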

That the current code has prevN_wd variables for N = 1..5 is a sign
that we need to refactor the whole thing, as you've suggested before.

I'll work up a design and prototype for this by this weekend.

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate



Re: [HACKERS] SQL/MED - core functionality

2010-12-14 Thread Itagaki Takahiro
On Tue, Dec 14, 2010 at 23:45, Robert Haas robertmh...@gmail.com wrote:
 We need RULEs or INSTEAD OF TRIGGERs to support updatable foreign tables.

 We do?  Why can't the support for updating foreign tables be built-in
 rather than trigger-based?

Do we have any concrete idea for the built-in update feature?
There is no definition in the SQL standard of an interface for updates.

So, I think RULE and TRIGGER are the best solution for now.
In addition, even if we support some kind of built-in update feature,
I still think RULE and TRIGGER are useful, for example, for logging purposes.

-- 
Itagaki Takahiro



Re: [HACKERS] SQL/MED - core functionality

2010-12-14 Thread David Fetter
On Wed, Dec 15, 2010 at 12:48:59AM +0900, Itagaki Takahiro wrote:
 On Tue, Dec 14, 2010 at 23:45, Robert Haas robertmh...@gmail.com wrote:
  We need RULEs or INSTEAD OF TRIGGERs to support updatable foreign
  tables.
 
  We do?  Why can't the support for updating foreign tables be
  built-in rather than trigger-based?
 
 Do we have any concrete idea for the built-in update feature?  There
 is no definition in the SQL standard of an interface for updates.
 
 So, I think RULE and TRIGGER are the best solution for now.  In
 addition, even if we support some kind of built-in update feature,
 I still think RULE and TRIGGER are useful, for example, for logging
 purposes.

Please start with TRIGGER, and we can then discuss the whether and
possibly the how of RULEs later.

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate



Re: [HACKERS] SQL/MED - core functionality

2010-12-14 Thread Robert Haas
On Tue, Dec 14, 2010 at 10:48 AM, Itagaki Takahiro
itagaki.takah...@gmail.com wrote:
 On Tue, Dec 14, 2010 at 23:45, Robert Haas robertmh...@gmail.com wrote:
 We need RULEs or INSTEAD OF TRIGGERs to support updatable foreign tables.

 We do?  Why can't the support for updating foreign tables be built-in
 rather than trigger-based?

 Do we have any concrete idea for the built-in update feature?
 There is no definition in the SQL standard of an interface for updates.

 So, I think RULE and TRIGGER are the best solution for now.
 In addition, even if we support some kind of built-in update feature,
 I still think RULE and TRIGGER are useful, for example, for logging purposes.

I think triggers are useful.  I see no reason to support rules.  If
the first version of our SQL/MED functionality is read-only, that's
fine.  But triggers are slow, clumsy, and expose implementation
details to users, so those should be something that we provide as a
way of making the database extensible, not something we use to build
core functionality.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Instrument checkpoint sync calls

2010-12-14 Thread Alvaro Herrera
Excerpts from Robert Haas's message of mar dic 14 11:34:55 -0300 2010:
 On Tue, Dec 14, 2010 at 9:29 AM, Alvaro Herrera
 alvhe...@commandprompt.com wrote:
  I gave this patch a look and it seems pretty good to me, except
 
 Err, woops.  I just committed this as-is.  Sorry.

I noticed :-)

  that I'm
  uncomfortable with the idea of mdsync filling in the details for
  CheckpointStats fields directly.  Would it work to pass a struct (say
  SmgrSyncStats) from CheckPointBuffers to smgrsync and from there to
  mdsync, have this function fill it, and return it back so that
  CheckPointBuffers copies the data from this struct into CheckpointStats?
 
  Another minor nitpick: inside the block when you call FileSync, why
  check for log_checkpoints at all?  Seems to me that just checking for
  zero of sync_start should be enough.  Alternatively, seems simpler to
  just have a local var with the value of log_checkpoints at the start of
  mdsync and use that throughout the function.  (Surely if someone turns
  off log_checkpoints in the middle of a checkpoint, it's not really a
  problem that we collect and report stats during that checkpoint.)
 
 Neither of these things bothers me, but we can certainly discuss...

Well, the second one was just about simplifying it, so never mind that.
But referring to CheckpointStats in md.c seems to me to be a violation
of modularity that ought to be fixed.

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] pg_execute_from_file, patch v10

2010-12-14 Thread Itagaki Takahiro
On Tue, Dec 14, 2010 at 18:01, Dimitri Fontaine dimi...@2ndquadrant.fr wrote:
 In any case, I concur with what I gather Robert is thinking, which is
 that there is no good reason to be exposing any of this at the SQL level.

 That used to be done this way, you know, in versions between 0 and 6 of
 the patch. Starting at version 7, the underlyiong facilities have been
 splitted and exposed, because of the file encoding and server encoding
 issues reported by Itagaki.

I'm confused about which part of the patch is the point of the discussion.
  1. Relax pg_read_file() to be able to read any file.
  2. pg_read_binary_file()
  3. pg_execute_sql_string/file()

As I pointed out, 1 is reasonable as long as we restrict the usage
to superusers only.  If we think it is a security hole, the same
issue exists in lo_import() and in COPY FROM run by a superuser.

2 is a *fix* for the badly-designed pg_read_file() interface.
It should have returned bytea rather than text.

3 could simplify later EXTENSION patches, but it might not be a
big help, because we can just use SPI_exec() instead if we write
the code in C.  I think the most useful parts of the patch are
1 and 2, reading a whole file with encoding support.
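For context, the shape of 1 and 2 (pg_read_binary_file() here is the
patch's proposed function, not an existing one):

-- existing, superuser-only; returns text, so non-text bytes are a problem:
SELECT pg_read_file('postmaster.pid', 0, 100);

-- proposed: the same idea, but returning bytea:
-- SELECT pg_read_binary_file('path/to/file', 0, 100);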

-- 
Itagaki Takahiro



Re: [HACKERS] hstores in pl/python

2010-12-14 Thread Tom Lane
Peter Eisentraut pete...@gmx.net writes:
 On mån, 2010-12-13 at 10:55 -0500, Tom Lane wrote:
 We don't normally invent specialized syntax for a specific datatype.
 Not even if it's in core.

 I think the idea would be to make associative arrays a kind of
 second-order object like arrays, instead of a data type.

I haven't actually figured out what the benefit would be, other than
buzzword compliance and a chance to invent some random nonstandard
syntax.  If the element values all have to be the same type, you've
basically got hstore.  If they are allowed to be different types,
what have you got but a record?  Surely SQL can do composite types
already.

regards, tom lane



Re: [HACKERS] hstores in pl/python

2010-12-14 Thread Tom Lane
Peter Eisentraut pete...@gmx.net writes:
 On mån, 2010-12-13 at 08:50 +0100, Jan Urbański wrote:
 It would be cool to be able to transparently use hstores as Python
 dictionaries and vice versa. It would be easy enough with hstore as a
 core type, but with hstore as an addon it's not that easy.

 I have been thinking about this class of problems for a while.  I think
 the proper fix is to have a user-definable mapping between types and
 languages.  It would be another pair of input/output functions,
 essentially.

Interesting thought, but it still leaves you needing to solve the
problem of interconnecting two optional addons ...

regards, tom lane



Re: [HACKERS] hstores in pl/python

2010-12-14 Thread Dmitriy Igrishin
2010/12/14 Pavel Stehule pavel.steh...@gmail.com

 2010/12/13 Dmitriy Igrishin dmit...@gmail.com:
 
 
  2010/12/14 Pavel Stehule pavel.steh...@gmail.com
 
  2010/12/13 Dmitriy Igrishin dmit...@gmail.com:
  
  
   2010/12/13 Pavel Stehule pavel.steh...@gmail.com
  
   2010/12/13 Dmitriy Igrishin dmit...@gmail.com:
   
   
2010/12/13 Pavel Stehule pavel.steh...@gmail.com
   
2010/12/13 Dmitriy Igrishin dmit...@gmail.com:
 There are a lot of operators and functions to work with hstore.
 Is it worth implementing similar things only to make it
 possible to use operator [] ?
   
yes
   

 2010/12/13 Pavel Stehule pavel.steh...@gmail.com

 
  name and interface - hstore is designed as an external module; an
  internal class can be designed differently.

  Could you actually name such a difference rather than pointing to
  some airy hint of one? That would make it much easier to see where
  you want to go.

 My idea is:

 somevar['key'] = value
 value = somevar['key'];

 What type is value? Can it be an assoc. array?
 Is it possible to index an assoc. array by position?
 Many other questions can be raised here. So
 I don't think assoc. arrays have a common interface.
 It's still a specialized type.
   
It's a question. Minimally it can be any known (defined) type - a
composite type too. It would be nice if we could store data in native
format with constraints. Now hstore can store only text. (Note: it's
terribly hard to write this as an external module, so hstore does the
maximum of what is possible.)
   
 But, Pavel, I get your idea. You want to make the syntax
 clear in particular...
   
I like the possibility to define your own types in pg. But sometimes -
and associative arrays are such a case - the created interface is too
specific, as hstore's is. PostgreSQL doesn't allow extending the
parser, and hstore respects that in its design. Once we move hstore
functionality to core, we can extend the parser and create some
generally usable API. That could be a big plus for stored procedure
programming. This is just my opinion - if hstore is in core, then we
will never have a native associative array, so from my perspective
hstore is better off as a contrib module.
   
In my opinion, hstore is defined and implemented well. It's complete
in most cases. Therefore hstore is mature enough to be in core.

On the other hand, associative arrays would have to be implemented
from scratch. Very well. Let it be. But how can integrating hstore
into core today interfere with implementing support for associative
arrays in the future? Will it be a big problem?
  
 I think it can be a problem. Whichever feature is implemented second
 will have a handicap, because there will already be a similar,
 realised feature. Maybe I am too pessimistic, but there is a very
 minimal probability of two similar features, like hstore and
 associative arrays, coexisting in core. And because associative
 arrays are more general than hstore, I prefer associative arrays.
  
 Okay. According to
 http://www.postgresql.org/docs/9.0/static/arrays.html
 a PostgreSQL array is a collection of values of the same type --
 built-in or user-defined. Assoc. arrays (maps) are generalized arrays
 by definition. So, maps in PostgreSQL should become a generalization
 of the currently existing arrays.
 Furthermore, if we speak about generalization, map keys must be
 arbitrarily typed. And yes, an ordering operator must exist for the
 key type, and so on...
 Otherwise it will be a specialized type just for a fancy, user-friendly
 operator[] with a text argument, rather than a map.
  
 Hstore works well, and moving it to core doesn't add new value. It's
 not comparable with TSearch2. As far as I know, contrib modules are
 not a problem for DBAs now, and hstore doesn't have a complex
 installation like TSearch2 had. Moreover, there are no security
 issues that had to be solved, as there were with TSearch2.

 Why do we need hstore in core? Why does hstore need to be in core?

 Well, okay. Could you explain by what formal criterion types become
 built-in?
 
No, I can't. Please don't misunderstand me. Usually I am not
against enhancing core features. I just see a significant risk
that PostgreSQL will then never have associative arrays, which is
why I am raising this. If I remember well, the types in core are
either very old types from the academic era or types that are
necessary for ANSI SQL conformance. All others are controversial -
there was a big war about XML, and JSON is still very uncertain.
TSearch2 is very specific. A very handy type like citext isn't in
core. A significant argument for implementing a type in core is a
need for parser support.
 
  I believe that truth is born in the debate and I 

Re: [HACKERS] ALTER TABLE ... ADD FOREIGN KEY ... NOT ENFORCED

2010-12-14 Thread Chris Browne
t...@sss.pgh.pa.us (Tom Lane) writes:
 Robert Haas robertmh...@gmail.com writes:
 ...  On the
 other hand, there's clearly also a use case for this behavior.  If a
 bulk load of prevalidated data forces an expensive revalidation of
 constraints that are already known to hold, there's a real chance the
 DBA will be backed into a corner where he simply has no choice but to
 not use foreign keys, even though he might really want to validate the
 foreign-key relationships on a going-forward basis.

 There may well be a case to be made for doing this on grounds of
 practical usefulness.  I'm just voicing extreme skepticism that it can
 be supported by reference to the standard.

 Personally I'd prefer to see us look into whether we couldn't arrange
 for low-impact establishment of a verified FK relationship, analogous to
 CREATE INDEX CONCURRENTLY.  We don't let people just arbitrarily claim
 that a uniqueness condition exists, and ISTM that if we can handle that
 case we probably ought to be able to handle FK checking similarly.

I can point to a use case that has proven useful...

Slony-I deactivates indices during the subscription process, because it
is enormously more efficient to load the data into the tables
sans-indices, and then re-index afterwards.

The same would apply for FK constraints.

I observe that the deactivation of indices is the sole remaining feature
in Slony-I that still requires catalog access in a corruptive sense.
(With the caveat that this corruption is now only a temporary one; the
indexes are returned into play before the subscription process
finishes.)

That would be eliminated by adding in:
  ALTER TABLE ... DISABLE INDEX ...
  ALTER TABLE ... ENABLE INDEX ...

Applying the same treatment to FK constraints would involve similar logic.
-- 
output = reverse(moc.liamg @ enworbbc)
http://linuxdatabases.info/info/rdbms.html
The code should be beaten into submission -- Arthur Norman



Re: [HACKERS] Complier warnings on mingw gcc 4.5.0

2010-12-14 Thread Andrew Dunstan



On 12/13/2010 06:45 PM, I wrote:
> [ problem on Mingw with 'FATAL: parameter port cannot be changed
> without restarting the server' if client connection options are sent ]
>
> It appears not to be related to how the environment is set at all, but
> to how the backend is handling PGOPTIONS.





Regina Obe has pointed out to me, and I have confirmed, that this does 
not happen with REL8_4_STABLE. So either we have introduced a bug since 
then or something we have done since then has tickled a Mingw bug.


cheers

andrew



Re: [HACKERS] Re: Proposed Windows-specific change: Enable crash dumps (like core files)

2010-12-14 Thread Tom Lane
Craig Ringer cr...@postnewspapers.com.au writes:
 I've attached an updated patch that fixes a failure when compiling on 
 gcc/linux. The no-op inline installCrashDumpHandler() for unsupported 
 platforms was not declared static, so it was not being optimized out of 
 objects it wasn't used in and was causing symbol collisions during linkage.

Why in the world would you get involved in that portability mess for a
function that is called only once?  There's no possible performance
justification for making it inline.

I'm also wondering why you have got conflicting declarations in
postgres.h and port.h, and why none of these declarations follow
ANSI C (write (void) not ()).

regards, tom lane



Re: [HACKERS] hstores in pl/python

2010-12-14 Thread David E. Wheeler
On Dec 13, 2010, at 11:37 PM, Jan Urbański wrote:

 A function with a hstore parameter called x would get a Python dictionary as 
 its input. A function said to be returning a hstore could return a dictionary 
 and if it would have only string keys/values, it would be changed into a 
 hstore (and if not, throw an ERROR). See the README for pyhstore and take out 
 pyhstore.parse/serialize.
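For concreteness, under that proposal a function might look like this
(hypothetical behaviour; today the hstore value arrives as a plain string):

CREATE FUNCTION pick(h hstore, k text) RETURNS text AS $$
    # h would arrive as a Python dict under the proposal
    return h.get(k)
$$ LANGUAGE plpythonu;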

It doesn't turn a returned dictionary into a RECORD? That's what PL/Perl does, 
FBOFW.

 There is already type conversion infrastructure in plpython, in the form of 
 two functions with a switch that takes the input/output type's OID. It'd be 
 adding a branch to the switches and taking the code from my pyhstore module 
 to parse the hstore to and fro.
 
 Then there's the compatibility argument. Hstores used to be passed as 
 strings, so it will break user code. I hate behaviour-changing GUCs as much 
 as anyone, but it seems the only option...

Can you overload the stringification of a dictionary to return the hstore 
string representation?

 How about going the other way around? Hstore would produce hstore_plpython.so 
 apart from hstore.so, if compiling with --with-python. Loading 
 hstore_plpython would register parser functions for hstores in plpython. 
 Additionally this could lead to hstore_plperl in the future etc.
 
 We would need to design some infrastructure for using such hooks in plpython 
 (and in embedded PLs in general) but then we sidestep the whole issue.

It would be better if there was some core support for the 
hash/dictionary/hstore/json/whatever data type, so that you didn't have to write 
a parser.

Best,

David




Re: [HACKERS] hstores in pl/python

2010-12-14 Thread Robert Haas
On Tue, Dec 14, 2010 at 11:51 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Peter Eisentraut pete...@gmx.net writes:
 On mån, 2010-12-13 at 10:55 -0500, Tom Lane wrote:
 We don't normally invent specialized syntax for a specific datatype.
 Not even if it's in core.

 I think the idea would be to make associative arrays a kind of
 second-order object like arrays, instead of a data type.

 I haven't actually figured out what the benefit would be, other than
 buzzword compliance and a chance to invent some random nonstandard
 syntax.  If the element values all have to be the same type, you've
 basically got hstore.

Not exactly, because in hstore all the element values have to be,
specifically, text.  Having hstores of other kinds of objects would,
presumably, be useful.

 If they are allowed to be different types,
 what have you got but a record?  Surely SQL can do composite types
 already.

I think I mostly agree with this.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Transaction-scope advisory locks

2010-12-14 Thread Tom Lane
Marko Tiikkaja marko.tiikk...@cs.helsinki.fi writes:
 On 2010-12-14 4:23 AM +0200, Tom Lane wrote:
 Uh, I don't think so.  It sure looks like you have changed the user
 lockmethod to be transactional, ie, auto-release on commit/abort.

 I was under the impression that passing sessionLock=true to 
 LockAcquire(), combined with allLocks=false to LockReleaseAll() would be 
 enough to prevent that from happening.  My tests seem to agree with this.

 Am I missing something?

All the places that look at LockMethodData->transactional?

regards, tom lane



Re: [HACKERS] BufFreelistLock

2010-12-14 Thread Jeff Janes
On Sun, Dec 12, 2010 at 6:48 PM, Jim Nasby j...@nasby.net wrote:
 On Dec 10, 2010, at 10:49 AM, Tom Lane wrote:
 Alvaro Herrera alvhe...@commandprompt.com writes:
 Excerpts from Jeff Janes's message of vie dic 10 12:24:34 -0300 2010:
 As far as I can tell, bgwriter never adds things to the freelist.
 That is only done at start up, and when a relation or a database is
 dropped.  The clock sweep does the vast majority of the work.

 AFAIU bgwriter runs the clock sweep most of the time (BgBufferSync).

 I think bgwriter just tries to write out dirty buffers so they'll be
 clean when the clock sweep reaches them.  It doesn't try to move them to
 the freelist.

 Yeah, it calls SyncOneBuffer which does nothing for the clock sweep.

 There might be some advantage in having it move buffers
 to a freelist that's just protected by a simple spinlock (or at least,
 a lock different from the one that protects the clock sweep).  The
 idea would be that most of the time, backends just need to lock the
 freelist for long enough to take a buffer off it, and don't run clock
 sweep at all.

 Yeah, the clock sweep code is very intensive compared to pulling a buffer 
 from the freelist, yet AFAICT nothing will run the clock sweep except 
 backends. Unless I'm missing something, the free list is practically useless 
 because buffers are only put there by InvalidateBuffer, which is only called 
 by DropRelFileNodeBuffers and DropDatabaseBuffers.

Buffers are also put on the freelist at start up (all of them).  But
of course any busy system with more data than buffers will rapidly
deplete them, and DropRelFileNodeBuffers and DropDatabaseBuffers are
generally not going to happen enough to be meaningful on most setups,
I would think.  I was wondering, if the steady-state condition is to
always use the clock sweep, whether that shouldn't be the only
mechanism that exists.

 So we make backends queue up behind the freelist lock with very little odds 
 of getting a buffer, then we make them queue up for the clock sweep lock and 
 make them actually run the clock sweep.

It is the same lock that governs both.  Given the simplicity of
checking that the freelist is empty, I don't think it adds much
overhead.
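For reference, the allocation path being discussed looks roughly like
this (a simplified paraphrase of StrategyGetBuffer(), not the verbatim
code):

	/* both paths run under the same BufFreelistLock */
	LWLockAcquire(BufFreelistLock, LW_EXCLUSIVE);

	while (StrategyControl->firstFreeBuffer >= 0)
	{
		buf = &BufferDescriptors[StrategyControl->firstFreeBuffer];
		StrategyControl->firstFreeBuffer = buf->freeNext;
		if (buf->refcount == 0 && buf->usage_count == 0)
			return buf;			/* cheap: straight off the freelist */
	}

	for (;;)					/* freelist empty: run the clock sweep */
	{
		buf = &BufferDescriptors[StrategyControl->nextVictimBuffer];
		if (++StrategyControl->nextVictimBuffer >= NBuffers)
			StrategyControl->nextVictimBuffer = 0;
		if (buf->refcount == 0)
		{
			if (buf->usage_count == 0)
				return buf;		/* found a victim */
			buf->usage_count--;	/* else decrement and keep sweeping */
		}
	}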


 BTW, when we moved from 96G to 192G servers I tried increasing shared buffers 
 from 8G to 28G and performance went down enough to be noticeable (we don't 
 have any good benchmarks, so I cant really quantify the degradation). Going 
 back to 8G brought performance back up, so it seems like it was the change in 
 shared buffers that caused the issue (the larger servers also have 24 cores 
 vs 16).

What kind of work load do you have (intensity of reading versus
writing)?  How intensely concurrent is the access?

 My immediate thought was that we needed more lock partitions, but I haven't 
 had the chance to see if that helps. ISTM the issue could just as well be due 
 to clock sweep suddenly taking over 3x longer than before.

It would surprise me if most clock sweeps need to make anything near a
full pass over the buffers for each allocation (but technically it
wouldn't need to do that to take 3x longer.  It could be that the
fraction of a pass it needs to make is merely proportional to
shared_buffers.  That too would surprise me, though).  You could
compare the number of passes with the number of allocations to see how
much sweeping is done per allocation.  However, I don't think the
number of passes is reported anywhere, unless you compile with
#define BGW_DEBUG and run with DEBUG2.
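
The allocation count, at least, is visible from SQL; a minimal sketch
against the cumulative stats view (present since 8.3), where sampling
it twice and diffing gives allocations over an interval to compare
against the sweep activity from the debug output:

  -- buffers_alloc counts buffer allocations, i.e. trips through
  -- StrategyGetBuffer (freelist hit or clock sweep)
  SELECT buffers_alloc, buffers_backend, buffers_clean
    FROM pg_stat_bgwriter;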

I wouldn't expect an increase in shared_buffers to make contention on
BufFreelistLock worse.  If the increased buffers are used to hold
heavily-accessed data, then you will find the pages you want in
shared_buffers more often, and so need to run the clock-sweep less
often.  That should make up for longer sweeps.  But if the increased
buffers are used to hold data that is just read once and thrown away,
then the clock sweep shouldn't need to sweep very far before finding a
candidate.

But of course being able to test would be better than speculation.


Cheers,

Jeff

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Compiler warnings on mingw gcc 4.5.0

2010-12-14 Thread Tom Lane
Andrew Dunstan and...@dunslane.net writes:
 On 12/13/2010 06:45 PM, I wrote:
 [ problem on Mingw with 'FATAL: parameter port cannot be changed 
 without restarting the server' if client connection options are sent ]

 Regina Obe has pointed out to me, and I have confirmed, that this does 
 not happen with REL8_4_STABLE. So either we have introduced a bug since 
 then or something we have done since then has tickled a Mingw bug.

Hm.  Does it happen in 9.0, or just HEAD?

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] hstores in pl/python

2010-12-14 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Tue, Dec 14, 2010 at 11:51 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 If the element values all have to be the same type, you've
 basically got hstore.

 Not exactly, because in hstore all the element values have to be,
 specifically, text.  Having hstores of other kinds of objects would,
 presumably, be useful.

Maybe, but I'm sure they'd have far less application than hstore.
There's a reason why that's based on text and not some other type ...

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] hstores in pl/python

2010-12-14 Thread Andrew Dunstan



On 12/14/2010 12:06 PM, Robert Haas wrote:

I haven't actually figured out what the benefit would be, other than
buzzword compliance and a chance to invent some random nonstandard
syntax.  If the element values all have to be the same type, you've
basically got hstore.

Not exactly, because in hstore all the element values have to be,
specifically, text.  Having hstores of other kinds of objects would,
presumably, be useful.



I love hstore, and I've used it a lot, but I don't think there's much 
future in doing this. This is part of what JSON would buy us, isn't it?


cheers

andrew

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Compiler warnings on mingw gcc 4.5.0

2010-12-14 Thread Andrew Dunstan



On 12/14/2010 12:10 PM, Tom Lane wrote:

Andrew Dunstanand...@dunslane.net  writes:

On 12/13/2010 06:45 PM, I wrote:

[ problem on Mingw with 'FATAL: parameter port cannot be changed
without restarting the server' if client connection options are sent ]

Regina Obe has pointed out to me, and I have confirmed, that this does
not happen with REL8_4_STABLE. So either we have introduced a bug since
then or something we have done since then has tickled a Mingw bug.

Hm.  Does it happen in 9.0, or just HEAD?




Yes, it happens on 9.0. I guess I need to start some git triangulation.

cheers

andrew

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] hstores in pl/python

2010-12-14 Thread Robert Haas
2010/12/14 Andrew Dunstan and...@dunslane.net:
 On 12/14/2010 12:06 PM, Robert Haas wrote:

 I haven't actually figured out what the benefit would be, other than
 buzzword compliance and a chance to invent some random nonstandard
 syntax.  If the element values all have to be the same type, you've
 basically got hstore.

 Not exactly, because in hstore all the element values have to be,
 specifically, text.  Having hstores of other kinds of objects would,
 presumably, be useful.


 I love hstore, and I've used it a lot, but I don't think there's much future
 in doing this. This is part of what JSON would buy us, isn't it?

Well, JSON would give you numbers and booleans, but that's a pretty
small subset of all the types in the universe.  I think the main thing
JSON would give you is hierarchical structure.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] hstores in pl/python

2010-12-14 Thread Pavel Stehule
2010/12/14 Tom Lane t...@sss.pgh.pa.us:
 Robert Haas robertmh...@gmail.com writes:
 On Tue, Dec 14, 2010 at 11:51 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 If the element values all have to be the same type, you've
 basically got hstore.

 Not exactly, because in hstore all the element values have to be,
 specifically, text.  Having hstores of other kinds of objects would,
 presumably, be useful.

 Maybe, but I'm sure they'd have far less application than hstore.
 There's a reason why that's based on text and not some other type ...


I don't think so. For example, a numeric array indexed by strings is
a common use case. Right now you have to use PL/Perl hashes.

Pavel

                        regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] hstores in pl/python

2010-12-14 Thread Robert Haas
2010/12/14 Tom Lane t...@sss.pgh.pa.us:
 Robert Haas robertmh...@gmail.com writes:
 On Tue, Dec 14, 2010 at 11:51 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 If the element values all have to be the same type, you've
 basically got hstore.

 Not exactly, because in hstore all the element values have to be,
 specifically, text.  Having hstores of other kinds of objects would,
 presumably, be useful.

 Maybe, but I'm sure they'd have far less application than hstore.
 There's a reason why that's based on text and not some other type ...

Sure.  You can smash anything to a string, and it's often a very
practical way to go, though not always.

I am not necessarily expressing any interest in building such a
facility, just pointing one way that it might hypothetically have an
advantage over hstore.  Whether it's worth pursuing is another
question.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] hstores in pl/python

2010-12-14 Thread Oleg Bartunov

On Tue, 14 Dec 2010, Robert Haas wrote:


On Tue, Dec 14, 2010 at 11:51 AM, Tom Lane t...@sss.pgh.pa.us wrote:

Peter Eisentraut pete...@gmx.net writes:

On mån, 2010-12-13 at 10:55 -0500, Tom Lane wrote:

We don't normally invent specialized syntax for a specific datatype.
Not even if it's in core.



I think the idea would be to make associative arrays a kind of
second-order object like arrays, instead of a data type.


I haven't actually figured out what the benefit would be, other than
buzzword compliance and a chance to invent some random nonstandard
syntax.  If the element values all have to be the same type, you've
basically got hstore.


Not exactly, because in hstore all the element values have to be,
specifically, text.  Having hstores of other kinds of objects would,
presumably, be useful.


Agreed, we already thought about this, but then others got excited about
removing hstore's limitations. We could probably revive our ideas, so it's
better to decide now whether hstore will be a first-class citizen in
Postgres.




If they are allowed to be different types,
what have you got but a record?  Surely SQL can do composite types
already.


I think I mostly agree with this.




Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: o...@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] hstores in pl/python

2010-12-14 Thread Oleg Bartunov

On Tue, 14 Dec 2010, Andrew Dunstan wrote:




On 12/14/2010 12:06 PM, Robert Haas wrote:

I haven't actually figured out what the benefit would be, other than
buzzword compliance and a chance to invent some random nonstandard
syntax.  If the element values all have to be the same type, you've
basically got hstore.

Not exactly, because in hstore all the element values have to be,
specifically, text.  Having hstores of other kinds of objects would,
presumably, be useful.



I love hstore, and I've used it a lot, but I don't think there's much future 
in doing this. This is part of what JSON would buy us, isn't it?


Just wondering about JSON: has anyone signed up already to work on it,
or is it just theoretical discussion? If so, I agree that having JSON
properly implemented, and a simple wrapper for hstore just for
compatibility, would be very nice.



cheers

andrew




Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: o...@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] hstores in pl/python

2010-12-14 Thread Robert Haas
On Tue, Dec 14, 2010 at 12:25 PM, Oleg Bartunov o...@sai.msu.su wrote:
 Just wondering about JSON: has anyone signed up already to work on it,
 or is it just theoretical discussion? If so, I agree that having JSON
 properly implemented, and a simple wrapper for hstore just for
 compatibility, would be very nice.

Three different people developed patches, and I think we don't really
have unanimity on which way to go with it.  I've kind of been thinking
we should wait for a broader consensus on which way to go with it...

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Compiler warnings on mingw gcc 4.5.0

2010-12-14 Thread Tom Lane
Andrew Dunstan and...@dunslane.net writes:
 On 12/13/2010 04:34 PM, Tom Lane wrote:
 Oh really ... are we using src/port/unsetenv.c on that platform?
 I wonder if that little hack is incompatible with latest mingw
 libraries ...

 It is using pgwin32_putenv() and pgwin32_unsetenv(). It appears not to 
 be related to how the environment is set at all, but to how the backend 
 is handling PGOPTIONS.

Mmm, right.  Even if that was messed up, it could only manifest as
pg_regress shipping a bogus connection request packet.  But speaking of
which ...

 Here's a TCP level dump of traffic showing the problem. The client is on 
 Linux.

 18:34:03.106882 IP aurelia.34700 > 192.168.10.109.postgres: Flags [P.], 
 seq 9:86, ack 2, win 46, options [nop,nop,TS val 1504831233 ecr 
 1085898], length 77
  0x:  4500 0081 f95d 4000 4006 aaf3 c0a8 0a68  e.@.@..h
  0x0010:  c0a8 0a6d 878c 1538 a55b 18ce c920 b723  ...m...8.[.#
  0x0020:  8018 002e 07ae  0101 080a 59b1 e701  Y...
  0x0030:  0010 91ca  004d 0003  7573 6572  ...Muser
  0x0040:  0070 6772 756e 6e65 7200 6461 7461 6261  .pgrunner.databa
  0x0050:  7365 0070 6f73 7467 7265 7300 6f70 7469  se.postgres.opti
  0x0060:  6f6e 7300 2d63 206c 6f67 5f6d 696e 5f6d  ons.-c.log_min_m
  0x0070:  6573 7361 6765 733d 7761 726e 696e 6700  essages=warning.
  0x0080:  00   .

This seems quite odd now that I look at it.  The packet contents imply
that libpq saw PGOPTIONS="-c log_min_messages=warning" and no other
environment variables that would cause it to append stuff to the
connection request.  Which is not at all how pg_regress ought to behave,
even assuming that the buildfarm script sets up PGOPTIONS that way.
I'd expect to see settings for timezone, datestyle, and intervalstyle
in there.  What was the client here exactly?

Another line of attack is that we know from the response packet that the
failure is being reported at guc.c:4794.  It would be really useful to
know what the call stack is there.  Could you change that elog to an
elog(PANIC) and get a stack trace from the ensuing core dump?

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] hstores in pl/python

2010-12-14 Thread Oleg Bartunov

On Tue, 14 Dec 2010, Robert Haas wrote:


On Tue, Dec 14, 2010 at 12:25 PM, Oleg Bartunov o...@sai.msu.su wrote:

Just wondering about JSON: has anyone signed up already to work on it,
or is it just theoretical discussion? If so, I agree that having JSON
properly implemented, and a simple wrapper for hstore just for
compatibility, would be very nice.


Three different people developed patches, and I think we don't really
have unanimity on which way to go with it.  I've kind of been thinking
we should wait for a broader consensus on which way to go with it...


AFAIK, they have no index support, which I consider a big project, so
I think in the ideal situation it could be done for 9.2, or even 9.3 if
there is no support for developers. We need a company which says "I need
it, I'll pay for it."


Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: o...@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] hstores in pl/python

2010-12-14 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 2010/12/14 Andrew Dunstan and...@dunslane.net:
 On 12/14/2010 12:06 PM, Robert Haas wrote:
 Not exactly, because in hstore all the element values have to be,
 specifically, text.  Having hstores of other kinds of objects would,
 presumably, be useful.

 I love hstore, and I've used it a lot, but I don't think there's much future
 in doing this. This is part of what JSON would buy us, isn't it?

 Well, JSON would give you numbers and booleans, but that's a pretty
 small subset of all the types in the universe.

Sure, but once you have those three, the set of remaining use-cases for
a generalized hstore has dropped from epsilon to epsilon cubed.  I don't
think there's much space left there for a useful type that doesn't make
the full jump to record (ie, allowing each value to be any SQL type).

Also, the more cases you support, the harder it is to write code that
processes the type, as we already saw in the other thread about
record-access functions.  It's not unlikely that something more flexible
than JSON would be less useful, not more so, because of the ensuing usage
complexity.  (This is part of why hstore seems to be occupying a sweet
spot --- it may not cover everything you want to do, but "it's all text"
does simplify usage.)
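
(A two-line illustration, assuming contrib/hstore is installed: whatever
goes in comes back out as text.)

  -- 'true' is stored and fetched as text, not as a boolean
  SELECT ('a=>1, b=>true'::hstore) -> 'b';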

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] hstores in pl/python

2010-12-14 Thread Tom Lane
Oleg Bartunov o...@sai.msu.su writes:
 On Tue, 14 Dec 2010, Robert Haas wrote:
 On Tue, Dec 14, 2010 at 12:25 PM, Oleg Bartunov o...@sai.msu.su wrote:
 Just wondering about JSON: has anyone signed up already to work on it,
 or is it just theoretical discussion?

 Three different people developed patches, and I think we don't really
 have unanimity on which way to go with it.  I've kind of been thinking
 we should wait for a broader consensus on which way to go with it...

 AFAIK, they have no index support, which I consider a big project, so
 I think in the ideal situation it could be done for 9.2, or even 9.3 if
 there is no support for developers. We need a company which says "I need
 it, I'll pay for it."

For the sort of problems we're discussing here, whether there's index
support or not for JSON is practically irrelevant.  I agree we'd want
some nice indexing ability eventually, but it hardly seems like job #1.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_execute_from_file, patch v10

2010-12-14 Thread Robert Haas
On Tue, Dec 14, 2010 at 11:48 AM, Itagaki Takahiro
itagaki.takah...@gmail.com wrote:
 I'm confused which part of the patch is the point of the discussion.
  1. Relax pg_read_file() to be able to read any files.
  2. pg_read_binary_file()
  3. pg_execute_sql_string/file()

 As I pointed out, 1 is reasonable as long as we restrict the usage
 only to superuser. If we think it is a security hole, there are
 the same issue in lo_import() and COPY FROM by superuser.

 2 is a *fix* for the badly-designed pg_read_file() interface.
 It should have returned bytea rather than text.

 3 could simplify later EXTENSION patches, but it might not be
 a large help because we can just use SPI_exec() instead of them
 if we write code in C.  I think the most useful part of the
 patch is reading a whole file with encoding, i.e., 1 and 2.

So there are really four changes in here, right?

1. Relax pg_read_file() to be able to read any files.
2. pg_read_binary_file()
3. pg_execute_sql_string()/file()
4. ability to read a file in a given encoding (rather than the client encoding)

I think I agree that #1 doesn't open any security hole that doesn't
exist already.  We have no similar check for COPY, and both are
superuser-only.  I also see that this is useful for the extensions
work, if that code wants to internally DirectFunctionCall to
pg_read_file.
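
For reference, a minimal sketch of the function as it stands, which is
superuser-only and limited to paths under the data directory (or
log_directory), using the existing (filename, offset, length) signature:

  -- read the first kilobyte of a file, resolved relative to the data directory
  SELECT pg_read_file('postmaster.pid', 0, 1024);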

I think #2 might be a nice thing to have, but I'm not sure what it has
to do with extensions.

I don't see why we need #3.

I think #4 is useful.  I am not clear whether it is needed for the
extension stuff or not.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] ALTER TABLE ... REPLACE WITH

2010-12-14 Thread Simon Riggs

There are various applications where we want to completely replace the
contents of a table with new/re-calculated data.

It seems fairly obvious to be able to do this like...
1. Prepare new data into new_table and build indexes
2. Swap old for new
BEGIN;
DROP TABLE old_table;
ALTER TABLE new_table RENAME to old_table;
COMMIT;

Step (2) works, but any people queuing to access the table will see 
  ERROR:  could not open relation with OID x
What we need is a way to atomically replace the contents of a table
without receiving this error. (You can't use views).

What I propose is to write a function/command to allow this to be
explicitly achievable by the server.

ALTER TABLE old_table
  REPLACE WITH new_table;

This would do the following:
* Check that *content* definitions of old and new are the same
* Drop all old indexes
* Move new relfilenode into place
* Move all indexes from new to old (the set of indexes may change)
* All triggers, non-index constraints, defaults etc would remain same
* new_table is TRUNCATEd.
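
Spelled out as a sketch (REPLACE WITH being the proposed syntax, not
anything that exists today; note that LIKE copies defaults, check
constraints and indexes, but not FKs, and the source table here is
purely illustrative):

  -- 1. build and populate the replacement without touching old_table
  CREATE TABLE new_table (LIKE old_table
    INCLUDING DEFAULTS INCLUDING CONSTRAINTS INCLUDING INDEXES);
  INSERT INTO new_table SELECT * FROM recalculated_source;
  -- 2. the proposed atomic swap, locking only for the swap itself
  ALTER TABLE old_table REPLACE WITH new_table;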

TRUNCATE already achieves something similar, and is equivalent to
REPLACE WITH an empty table, so we know it is possible. Obviously this
breaks MVCC, but the applications for this don't care.

Of course, as with all things, this can be done with a function and some
dodgy catalog updates. I'd rather avoid that and have this as a full
strength capability on the server, since it has a very wide range of
potential applications of use to all Postgres users.

Similar to, though not inspired by, EXCHANGE PARTITION in Oracle.

It looks a short project to me, just some checks and a few updates.

Objections?

-- 
 Simon Riggs   http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services
 



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_execute_from_file, patch v10

2010-12-14 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 So there are really four changes in here, right?

 1. Relax pg_read_file() to be able to read any files.
 2. pg_read_binary_file()
 3. pg_execute_sql_string()/file()
 4. ability to read a file in a given encoding (rather than the client 
 encoding)

 I think I agree that #1 doesn't open any security hole that doesn't
 exist already.

That function would never have been accepted into core at all without a
locked-down range of how much of the filesystem it would let you get at.
There is nothing whatsoever in the extensions proposal that justifies
dropping that restriction.  If you want to put it up as a separately
proposed, separately justified patch, go ahead ... but I'll vote against
it even then.  (I will also point out that on SELinux-based systems,
relaxing the restriction would be completely useless anyway.)

 I think #2 might be a nice thing to have, but I'm not sure what it has
 to do with extensions.

Agreed.  There might be some use for #4 in connection with extensions,
but I don't see that #2 is related.

BTW, it appears to me that pg_read_file expects server encoding not
client encoding.  Minor detail only, but let's be clear what it is
we're talking about.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_execute_from_file, patch v10

2010-12-14 Thread Robert Haas
On Tue, Dec 14, 2010 at 1:38 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 So there are really four changes in here, right?

 1. Relax pg_read_file() to be able to read any files.
 2. pg_read_binary_file()
 3. pg_execute_sql_string()/file()
 4. ability to read a file in a given encoding (rather than the client 
 encoding)

 I think I agree that #1 doesn't open any security hole that doesn't
 exist already.

 That function would never have been accepted into core at all without a
 locked-down range of how much of the filesystem it would let you get at.

I have some angst about opening it up wide, but I'm also having a hard
time seeing what problem it creates that you can't already create with
COPY FROM or lo_import().

 I think #2 might be a nice thing to have, but I'm not sure what it has
 to do with extensions.

 Agreed.  There might be some use for #4 in connection with extensions,
 but I don't see that #2 is related.

 BTW, it appears to me that pg_read_file expects server encoding not
 client encoding.  Minor detail only, but let's be clear what it is
 we're talking about.

OK.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Instrument checkpoint sync calls

2010-12-14 Thread Robert Haas
On Tue, Dec 14, 2010 at 11:47 AM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
  that I'm
  uncomfortable with the idea of mdsync filling in the details for
  CheckpointStats fields directly.  Would it work to pass a struct (say
  SmgrSyncStats) from CheckPointBuffers to smgrsync and from there to
  mdsync, have this function fill it, and return it back so that
  CheckPointBuffers copies the data from this struct into CheckpointStats?

 But referring to CheckpointStats in md.c seems to me to be a violation
 of modularity that ought to be fixed.

Hmm.  I guess I can't get worked up about it.  We could do what you
propose, but I'm not sure what purpose it would serve.  It's not as if
mdsync() can possibly serve any purpose other than to be the
guts of a checkpoint.  It seems to me that if we really wanted to get
this disentangled from the checkpoint stats stuff we'd also need to
think about moving that elog(DEBUG1) I added out of the function
somehow, but I think that would just be a lot of notional complexity
for no particular benefit.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ALTER TABLE ... REPLACE WITH

2010-12-14 Thread Tom Lane
Simon Riggs si...@2ndquadrant.com writes:
 There are various applications where we want to completely replace the
 contents of a table with new/re-calculated data.

 It seems fairly obvious to be able to do this like...
 1. Prepare new data into new_table and build indexes
 2. Swap old for new
 BEGIN;
 DROP TABLE old_table;
 ALTER TABLE new_table RENAME to old_table;
 COMMIT;

Why not

BEGIN;
TRUNCATE TABLE;
... load new data ...
COMMIT;

 What I propose is to write a function/command to allow this to be
 explicitly achievable by the server.

 ALTER TABLE old_table
   REPLACE WITH new_table;

I don't think the cost/benefit ratio of this is anywhere near as good
as you seem to think (ie, you're both underestimating the work involved
and overstating the benefit).  I'm also noticing a lack of specification
as to trigger behavior, foreign keys, etc.  The apparent intention to
disregard FKs entirely is particularly distressing.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ALTER TABLE ... REPLACE WITH

2010-12-14 Thread Simon Riggs
On Tue, 2010-12-14 at 13:54 -0500, Tom Lane wrote:
 Simon Riggs si...@2ndquadrant.com writes:
  There are various applications where we want to completely replace the
  contents of a table with new/re-calculated data.
 
  It seems fairly obvious to be able to do this like...
  1. Prepare new data into new_table and build indexes
  2. Swap old for new
  BEGIN;
  DROP TABLE old_table;
  ALTER TABLE new_table RENAME to old_table;
  COMMIT;
 
 Why not
 
 BEGIN;
 TRUNCATE TABLE;
 ... load new data ...
 COMMIT;

The above is atomic, but not fast.

The intention is to produce an atomic swap with as small a lock window
as possible, to allow it to happen in real operational systems. 

At the moment we have a choice of fast or atomic. We need both.

(Note that there are 2 utilities that already do this, but the
operations aren't supported in core Postgres).

  What I propose is to write a function/command to allow this to be
  explicitly achievable by the server.
 
  ALTER TABLE old_table
REPLACE WITH new_table;
 
 I don't think the cost/benefit ratio of this is anywhere near as good
 as you seem to think (ie, you're both underestimating the work involved
 and overstating the benefit).  I'm also noticing a lack of specification
 as to trigger behavior, foreign keys, etc.  The apparent intention to
  disregard FKs entirely is particularly distressing.

No triggers would be fired. All constraints that exist on old_table
must also exist on new_table. As I said, lots of checks required, no
intention to add back doors.

(Disregarding FKs is the other project; it's not connected, other than that
both are operations on tables designed to improve the manageability of
large tables.)

-- 
 Simon Riggs   http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services
 


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ALTER TABLE ... REPLACE WITH

2010-12-14 Thread Robert Haas
On Tue, Dec 14, 2010 at 1:54 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 BEGIN;
 TRUNCATE TABLE;
 ... load new data ...
 COMMIT;

Because then you have to take an AccessExclusiveLock on the target
table, of course.

If we had some kind of TRUNCATE CONCURRENTLY, I think that'd address a
large portion of the use case for the proposed feature.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] hstores in pl/python

2010-12-14 Thread David E. Wheeler
On Dec 14, 2010, at 9:31 AM, Robert Haas wrote:

 Three different people developed patches, and I think we don't really
 have unanimity on which way to go with it.  I've kind of been thinking
 we should wait for a broader consensus on which way to go with it...

There needs to be a discussion for that to happen, but it seems to have been 
dropped. Have the three developers who worked on patches all given up?

Best,

David


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Triggered assertion !(tp.t_data->t_infomask & HEAP_XMAX_INVALID) in heap_delete() on HEAD [PATCH]

2010-12-14 Thread Florian Pflug
Hi

In the process of re-verifying my serializable lock consistency patch, I ran
the fk_concurrency testsuite against *unpatched* HEAD for comparison.

My build of HEAD had asserts enabled, and I promptly triggered
Assert(!(tp.t_data->t_infomask & HEAP_XMAX_INVALID))
in heap_delete().

This seems wrong: if result was set to HeapTupleUpdated because the tuple
was invisible to the crosscheck snapshot, its xmax may very well be invalid.

Simply removing the assert isn't an option, because right after the assert the 
tuple's
xmax is copied into update_xmax. Thus the attached patch takes care to set 
update_xmax
to InvalidTransactionId explicitly in case the update is prevented by the 
crosscheck snapshot.

heap_update() suffers from the same problem and is treated similarly by the 
patch.

Note that this patch conflicts with the serializable_lock_consistency patch, 
since it
changes that assert too, but in a different way.

best regards,
Florian Pflug


fix_assert_xmaxinvalid.v1.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ALTER TABLE ... REPLACE WITH

2010-12-14 Thread Josh Berkus
On 12/14/10 11:07 AM, Robert Haas wrote:
 Because then you have to take an AccessExclusiveLock on the target
 table, of course.

Well, you have to do that for DROP TABLE as well, and I don't see any
way around doing it for REPLACE WITH.

As for the utility of this command: there is no question that I would
use it.  I'm not sure I like the syntax (I'd prefer REPLACE TABLE <x>
WITH <y>), but that's painting the bike shed.  While the command may
appear frivolous and unnecessary syntactical ornamentation to some, I
have to say that doing the "table doesy-doe" which this command
addresses is something I have written scripts for on at least 50% of my
professional clients.  It keeps coming up.

In order for REPLACE WITH to be really useful, though, we need a command
cloning at table design with *all* constraints, FKs, keys, and indexes.
 Currently, I still don't think we have that ... do we?

-- 
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ALTER TABLE ... REPLACE WITH

2010-12-14 Thread Heikki Linnakangas

On 14.12.2010 20:27, Simon Riggs wrote:

There are various applications where we want to completely replace the
contents of a table with new/re-calculated data.

It seems fairly obvious to be able to do this like...
1. Prepare new data into new_table and build indexes
2. Swap old for new
BEGIN;
DROP TABLE old_table;
ALTER TABLE new_table RENAME to old_table;
COMMIT;

Step (2) works, but any people queuing to access the table will see
   ERROR:  could not open relation with OID x


Could we make that work without error?

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ALTER TABLE ... REPLACE WITH

2010-12-14 Thread Robert Haas
On Tue, Dec 14, 2010 at 2:34 PM, Josh Berkus j...@agliodbs.com wrote:
 On 12/14/10 11:07 AM, Robert Haas wrote:
 Because then you have to take an AccessExclusiveLock on the target
 table, of course.

 Well, you have to do that for DROP TABLE as well, and I don't see any
 way around doing it for REPLACE WITH.

Sure, but in Simon's proposal you can load the data FIRST and then
take a lock just long enough to do the swap.  That's very different
from needing to hold the lock during the whole data load.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_execute_from_file, patch v10

2010-12-14 Thread Dimitri Fontaine
Tom Lane t...@sss.pgh.pa.us writes:
 Robert Haas robertmh...@gmail.com writes:
 So there are really four changes in here, right?

 1. Relax pg_read_file() to be able to read any files.
 2. pg_read_binary_file()
 3. pg_execute_sql_string()/file()
 4. ability to read a file in a given encoding (rather than the client 
 encoding)

 I think I agree that #1 doesn't open any security hole that doesn't
 exist already.

 That function would never have been accepted into core at all without a
 locked-down range of how much of the filesystem it would let you get at.

Ok. Previously, pg_read_file() only allowed absolute file names that point
into DataDir or into Log_directory. That used not to work with the first
versions of the extension's patch, but with the current code the check
passes on a development install here: extension.c is only giving
genfile.c absolute file names.

Please note that Debian will default to have DataDir in a different
place than the sharepath:

  http://packages.debian.org/sid/amd64/postgresql-contrib-9.0/filelist

  PGDATA:    /var/lib/postgresql/9.1/main
  sharepath: /usr/share/postgresql/9.1/contrib
  libdir:    /usr/lib/postgresql/9.1/lib

So I'm not sure if it will play nice with such installs, or if
there's already some genfile.c patching on Debian.

 I think #2 might be a nice thing to have, but I'm not sure what it has
 to do with extensions.

 Agreed.  There might be some use for #4 in connection with extensions,
 but I don't see that #2 is related.

Well, in fact, the extension's code is using either execute_sql_file(),
or read_text_file_with_encoding() followed by @extschema@ replacement and
then execute_sql_string(), all of those functions called directly thanks
to #include utils/genfile.h. No DirectFunctionCall'ing, so we can easily
remove the SQL-callable forms.

So what we need is 2, 3 and 4 (because 4 builds on 2).

 BTW, it appears to me that pg_read_file expects server encoding not
 client encoding.  Minor detail only, but let's be clear what it is
 we're talking about.

Hence the refactoring in the patch. Ask Itagaki for details about funny
environments using some file encoding that does not exist in the server,
is not the client_encoding, and cannot be. I didn't follow the use case
in detail, but he was happy with the current way of doing things and not
with any previous one.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ALTER TABLE ... REPLACE WITH

2010-12-14 Thread Simon Riggs
On Tue, 2010-12-14 at 21:35 +0200, Heikki Linnakangas wrote:
 On 14.12.2010 20:27, Simon Riggs wrote:
  There are various applications where we want to completely replace the
  contents of a table with new/re-calculated data.
 
  It seems fairly obvious to be able to do this like...
  1. Prepare new data into new_table and build indexes
  2. Swap old for new
  BEGIN;
  DROP TABLE old_table;
  ALTER TABLE new_table RENAME to old_table;
  COMMIT;
 
  Step (2) works, but any people queuing to access the table will see
 ERROR:  could not open relation with OID x
 
 Could we make that work without error?

Possibly, and good thinking, but it's effectively the same patch, just
syntax-free, since we still need to do lots of checking to avoid swapping
oranges with lemons.

I prefer explicit syntax because it's easier to be certain that you've
got it right.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services
 


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ALTER TABLE ... REPLACE WITH

2010-12-14 Thread Simon Riggs
On Tue, 2010-12-14 at 11:34 -0800, Josh Berkus wrote:

 In order for REPLACE WITH to be really useful, though, we need a
 command cloning at table design with *all* constraints, FKs, keys, and
 indexes.  Currently, I still don't think we have that ... do we? 

Being able to vary the indexes when we REPLACE is a good feature.

We only need to check that datatypes and constraints match.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services
 


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ALTER TABLE ... REPLACE WITH

2010-12-14 Thread Simon Riggs
On Tue, 2010-12-14 at 11:34 -0800, Josh Berkus wrote:
 As for the utility of this command: there is no question that I would
 use it.  I'm not sure I like the syntax (I'd prefer REPLACE TABLE 
 WITH _), but that's painting the bike shed.

REPLACE TABLE ying WITH yang

is probably easier to implement than hacking at the ALTER TABLE code
mountain. 

   While the command may
 appear frivolous and unnecessary syntactical ornamentation to some, I
 have to say that doing the "table doesy-doe" which this command
 addresses is something I have written scripts for on at least 50% of
 my professional clients.  It keeps coming up. 

Yeh.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services
 


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] hstores in pl/python

2010-12-14 Thread Jan Urbański
On 14/12/10 18:05, David E. Wheeler wrote:
 On Dec 13, 2010, at 11:37 PM, Jan Urbański wrote:
 
 A function said to be returning a hstore could return a dictionary and if it 
 would have only string keys/values, it would be changed into a hstore (and 
 if not, throw an ERROR).
 
 It doesn't turn a returned dictionary into a RECORD? That's what PL/Perl 
 does, FBOFW.

If the function is declared to return a hstore, it transforms the
dictionary to a hstore.

IOW: if the return type of a PL/Python function is known, PL/Python
will try to convert the object returned by Python into a Postgres type,
according to some rules that depend on the type. For instance, if a
function is declared to return boolean, PL/Python would take the return
value of the Python function invocation, cast it to a boolean using
Python casting rules, and then return a Postgres boolean depending on the
result of the cast. If the type is unknown, PL/Python just casts the value
to a string using Python rules and feeds it to the type's input function.

The whole point of this thread is how to make hstore a known type.
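
As a sketch, with the proposed conversion in place (assuming contrib's
hstore type is installed; today the function body would have to build
and return the hstore literal as a string instead):

  CREATE FUNCTION pair_dict() RETURNS hstore
  LANGUAGE plpythonu AS $$
      # proposed behaviour: a dict with string keys/values is
      # converted to an hstore; anything else raises an ERROR
      return {'a': '1', 'b': '2'}
  $$;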

 Then there's the compatibility argument. Hstores used to be passed as 
 strings, so it will break user code. I hate behaviour-changing GUCs as much 
 as anyone, but it seems the only option...
 
 Can you overload the stringification of a dictionary to return the hstore 
 string representation?

Mmm, interesting thought. I don't particularly like it, because mucking
with the stringification of a built-in type is a big POLA violation (and
there would be other problems as well). And you still have to go through
the Python dict -> string -> hstore cycle, instead of cutting the string
step out.

 How about going the other way around? Hstore would produce 
 hstore_plpython.so apart from hstore.so, if compiling with --with-python. 
 Loading hstore_plpython would register parser functions for hstores in 
 plpython. Additionally this could lead to hstore_plperl in the future etc.

 We would need to design some infrastructure for using such hooks in plpython 
 (and in embedded PLs in general) but then we sidestep the whole issue.
 
 It would be better if there was some core support for the 
 hash/dictionary/hstore/json/whatever data type, so that you didn't have to 
 write a parser.

I'm not writing the parser, that's the point. You could provide a
pure-Python solution that would do the parsing, but that's fragile, slow
and ugly. The idea is: PL/Python notices that the function is supposed
to return a hstore. It takes the output of the Python call and uses
functions from hstore.so to construct the hstore and return it. Same
thing would happen with json.

Cheers,
Jan

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] hstores in pl/python

2010-12-14 Thread David E. Wheeler
On Dec 14, 2010, at 11:52 AM, Jan Urbański wrote:

 If the function is declared to return a hstore, it transforms the
 dictionary to a hstore.

Oh, right. Duh.

 Can you overload the stringification of a dictionary to return the hstore 
 string representation?
 
 Mmm, interesting thought. I don't particularly like it, because mucking
 with the stringification of a built-in type is a big POLA violation (and
 there would be other problems as well). And you still have to go through
 the Python dict -> string -> hstore cycle, instead of cutting the string
 step out.

Could you do it with a subclass of Dictionary? I'm thinking only of the params 
passed to the function here, not returned. It doesn't matter what the return 
value stringifies as if you can use functions to do the transformation from 
dict to hstore.

 It would be better if there was some core support for the 
 hash/dictionary/hstore/json/whatever data type, so that you didn't have to 
 write a parser.
 
 I'm not writing the parser, that's the point. You could provide a
 pure-Python solution that would do the parsing, but that's fragile, slow
 and ugly. The idea is: PL/Python notices that the function is supposed
 to return a hstore. It takes the output of the Python call and uses
 functions from hstore.so to construct the hstore and return it. Same
 thing would happen with json.

Right, that sounds great. No reason why we couldn't support many of these 
hash-like things, eh? The question then is just identifying those types. That 
would be fantastic for PL/Perl, too.

Best,

David


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Triggered assertion !(tp.t_data->t_infomask & HEAP_XMAX_INVALID) in heap_delete() on HEAD [PATCH]

2010-12-14 Thread Tom Lane
Florian Pflug f...@phlo.org writes:
 In the process of re-verifying my serializable lock consistency patch, I ran
 the fk_concurrency testsuite against *unpatched* HEAD for comparison.

 My build of HEAD had asserts enabled, and I promptly triggered
 Assert(!(tp.t_data->t_infomask & HEAP_XMAX_INVALID))
 in heap_delete().

 This seems wrong: if result was set to HeapTupleUpdated because the tuple
 was invisible to the crosscheck snapshot, its xmax may very well be invalid.

This patch seems certainly wrong.  Please provide an actual test case
rather than just asserting we should change this.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Triggered assertion !(tp.t_data->t_infomask & HEAP_XMAX_INVALID) in heap_delete() on HEAD [PATCH]

2010-12-14 Thread Florian Pflug
On Dec14, 2010, at 21:18 , Tom Lane wrote:
 Florian Pflug f...@phlo.org writes:
 In the process of re-verifying my serializable lock consistency patch, I ran
 the fk_concurrency testsuite against *unpatched* HEAD for comparison.
 
 My build of HEAD had asserts enabled, and I promptly triggered
 Assert(!(tp.t_data->t_infomask & HEAP_XMAX_INVALID))
 in heap_delete().
 
 This seems wrong: if result was set to HeapTupleUpdated because the tuple
 was invisible to the crosscheck snapshot, its xmax may very well be invalid.
 
 This patch seems certainly wrong.  Please provide an actual test case
 rather than just asserting we should change this.


Running my FK concurrency test suite against HEAD as of today with 100 
transactions per client triggers this within a few seconds or so. The test 
suite can be found at https://github.com/fgp/fk_concurrency.

./fk_concurrency.sh <tx/client> native <path to pg> <host or path to socket>

Could you explain what seems to be wrong with my patch? If you believe that 
it's impossible for a tuple to be visible under the query's snapshot but 
invisible to the crosscheck snapshot unless it was deleted, that's *not* the 
case! For RI checks in serializable transactions, the *crosscheck* snapshot is 
the serializable snapshot, while the query's snapshot is obtained with 
GetLatestSnapshot(). This is the relevant snippet from ri_triggers.c, 
ri_PerformCheck():

  if (IsolationUsesXactSnapshot() && detectNewRows)
  {
CommandCounterIncrement();  /* be sure all my own work is visible */
test_snapshot = GetLatestSnapshot();
crosscheck_snapshot = GetTransactionSnapshot();
  }
  else
  {
/* the default SPI behavior is okay */
test_snapshot = InvalidSnapshot;
crosscheck_snapshot = InvalidSnapshot;
  }

best regards,
Florian Pflug


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Triggered assertion !(tp.t_data->t_infomask & HEAP_XMAX_INVALID) in heap_delete() on HEAD [PATCH]

2010-12-14 Thread Tom Lane
Florian Pflug f...@phlo.org writes:
 Could you explain what seems to be wrong with my patch?

I'm unconvinced that this is the proper response to whatever the problem
is; and if it is the right response, it seems to still need a good bit
more work.  You didn't even update the functions' header comments, let
alone look at their callers to see how they'd be affected by an
InvalidTransactionId result.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Transaction-scope advisory locks

2010-12-14 Thread Marko Tiikkaja

On 2010-12-14 7:05 PM +0200, Tom Lane wrote:

Marko Tiikkajamarko.tiikk...@cs.helsinki.fi  writes:

On 2010-12-14 4:23 AM +0200, Tom Lane wrote:

Uh, I don't think so.  It sure looks like you have changed the user
lockmethod to be transactional, ie, auto-release on commit/abort.



I was under the impression that passing sessionLock=true to
LockAcquire(), combined with allLocks=false to LockReleaseAll() would be
enough to prevent that from happening.  My tests seem to agree with this.



Am I missing something?


 All the places that look at LockMethodData->transactional?


As far as I can tell, every code path that looks at 
LockMethodData->transactional either has an explicit sessionLock boolean 
or checks whether owner == NULL to determine whether the lock in 
question is a session lock, instead of blindly trusting 
->transactional.



Regards,
Marko Tiikkaja

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] unlogged tables vs. GIST

2010-12-14 Thread Robert Haas
On Sat, Nov 13, 2010 at 9:09 PM, Robert Haas robertmh...@gmail.com wrote:
 The fact that it's easy doesn't make it workable.  I would point out for
 starters that AMs might (do) put WAL locations and/or XIDs into indexes.
 Occasionally copying very old LSNs or XIDs back into active files seems
 pretty dangerous.

 I haven't examined the GIST, GIN, or hash index code in detail so I am
 not sure whether there are any hazards there; the btree case does not
 seem to have any issues of this type.  Certainly, if an index AM puts
 an XID into an empty index, that's gonna break.  I would consider that
 a pretty odd thing to do, though.  An LSN seems less problematic since
 the LSN space does not wrap; it should just look like an index that
 was created a long time ago and never updated (which, in effect, it
 is).

I'm still not convinced there's any hazard of this type, but there is,
apparently, a problem with failing to emit XLOG records for GIST
indexes, because they apparently use LSNs to detect concurrent page
splits (see Heikki's commit on November 16th, aka
2edc5cd493ce3d7834026970e9d3cd00e203f51a) and the hack he inserted to
work around that problem for temporary tables isn't going to work for
unlogged tables.  I suppose we could disallow unlogged GIST indexes.
Or we could allow them but still XLOG some operations anyway to make
sure that the LSN advances at the appropriate time.  That seems pretty
ugly, though.  Any other ideas?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Compiler warnings on mingw gcc 4.5.0

2010-12-14 Thread Andrew Dunstan



On 12/14/2010 12:42 PM, Tom Lane wrote:

This seems quite odd now that I look at it.  The packet contents imply
that libpq saw PGOPTIONS=-c log_min_messages=warning and no other
environment variables that would cause it to append stuff to the
connection request.  Which is not at all how pg_regress ought to behave,
even assuming that the buildfarm script sets up PGOPTIONS that way.
I'd expect to see settings for timezone, datestyle, and intervalstyle
in there.  What was the client here exactly?



Maybe I didn't explain this properly. The trace was not from pg_regress. 
It was from a connection from a standard Linux psql client.




Another line of attack is that we know from the response packet that the
failure is being reported at guc.c:4794.  It would be really useful to
know what the call stack is there.  Could you change that elog to an
elog(PANIC) and get a stack trace from the ensuing core dump?





I can try that. Not sure how easy that is on Windows.

cheers

andrew

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] unlogged tables vs. GIST

2010-12-14 Thread Heikki Linnakangas

On 14.12.2010 23:06, Robert Haas wrote:

On Sat, Nov 13, 2010 at 9:09 PM, Robert Haasrobertmh...@gmail.com  wrote:

The fact that it's easy doesn't make it workable.  I would point out for
starters that AMs might (do) put WAL locations and/or XIDs into indexes.
Occasionally copying very old LSNs or XIDs back into active files seems
pretty dangerous.


I haven't examined the GIST, GIN, or hash index code in detail so I am
not sure whether there are any hazards there; the btree case does not
seem to have any issues of this type.  Certainly, if an index AM puts
an XID into an empty index, that's gonna break.  I would consider that
a pretty odd thing to do, though.  An LSN seems less problematic since
the LSN space does not wrap; it should just look like an index that
was created a long time ago and never updated (which, in effect, it
is).


I'm still not convinced there's any hazard of this type, but there is,
apparently, a problem with failing to emit XLOG records for GIST
indexes, because they apparently use LSNs to detect concurrent page
splits (see Heikki's commit on November 16th, aka
2edc5cd493ce3d7834026970e9d3cd00e203f51a) and the hack he inserted to
work around that problem for temporary tables isn't going to work for
unlogged tables.  I suppose we could disallow unlogged GIST indexes.
Or we could allow them but still XLOG some operations anyway to make
sure that the LSN advances at the appropriate time.  That seems pretty
ugly, though.  Any other ideas?


Hmm, the first idea that comes to mind is to use a counter like the 
GetXLogRecPtrForTemp() counter I used for temp tables, but global, in 
shared memory. However, that's a bit problematic because if we store a 
value from that counter as the page LSN, it's possible that the counter 
overtakes the XLOG insert location, and you start to get xlog flush 
errors. We could avoid that if we added a new field to the GiST page 
header, and used that to store the value in the parent page instead of 
the LSN.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] unlogged tables vs. GIST

2010-12-14 Thread Robert Haas
On Tue, Dec 14, 2010 at 4:24 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
 Hmm, the first idea that comes to mind is to use a counter like the
 GetXLogRecPtrForTemp() counter I used for temp tables, but global, in shared
 memory. However, that's a bit problematic because if we store a value from
 that counter to LSN, it's possible that the counter overtakes the XLOG
 insert location, and you start to get xlog flush errors. We could avoid that
 if we added a new field to the GiST page header, and used that to store the
 value in the parent page instead of the LSN.

That doesn't seem ideal, either, because now you're eating up some
number of bytes per page in every GIST index just on the off chance
that one of them is unlogged.  Unless there's a way to do it only for
unlogged GIST indexes, but it seems like that could be messy.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Triggered assertion !(tp.t_data->t_infomask & HEAP_XMAX_INVALID) in heap_delete() on HEAD [PATCH]

2010-12-14 Thread Florian Pflug
On Dec14, 2010, at 21:52 , Tom Lane wrote:
 Florian Pflug f...@phlo.org writes:
 Could you explain what seems to be wrong with my patch?
 
 I'm unconvinced that this is the proper response to whatever the problem
 is;
Well, you didn't comment on the part of my previous e-mail that *did*
explain why I believe this is the proper response...

 and if it is the right response, it seems to still need a good bit
 more work.  You didn't even update the functions' header comments, let
 alone look at their callers to see how they'd be affected by an
 InvalidTransactionId result.

Well, I hit this while re-verifying the serializable lock consistency stuff
with a current HEAD, so I didn't really want to spend more time on this than
necessary. Especially since that patch replaces the assert in question anyway.

So I moved the assert to a safe place, and HeapTupleHeaderGetXmax() along
with it, since if the assert fails HeapTupleHeaderGetXmax() will return
garbage anyway (read: a multi-xid instead of an xid in some cases!).

For non-assert-enabled builds, the only effect of the patch is thus to
consistently return InvalidTransactionId if the crosscheck snapshot turns
HeapTupleMayBeUpdated into HeapTupleUpdated. Which certainly seems to be
an improvement over sometimes returning InvalidTransactionId, sometimes
a locker's xid, and sometimes a multi-xid.
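
For illustration, a caller that wants a plain updater xid has to check
the infomask first, roughly like this (a hypothetical helper, not part
of the patch):

/* Hypothetical helper, for illustration only. */
static TransactionId
tuple_update_xid(HeapTupleHeader tup)
{
	if (tup->t_infomask & HEAP_XMAX_INVALID)
		return InvalidTransactionId;	/* xmax is not valid at all */
	if (tup->t_infomask & HEAP_XMAX_IS_MULTI)
		return InvalidTransactionId;	/* raw xmax is a multi-xid */
	return HeapTupleHeaderGetXmax(tup);	/* a genuine xid */
}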

best regards,
Florian Pflug


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] BufFreelistLock

2010-12-14 Thread Jim Nasby

On Dec 14, 2010, at 11:08 AM, Jeff Janes wrote:

 On Sun, Dec 12, 2010 at 6:48 PM, Jim Nasby j...@nasby.net wrote:
 
 BTW, when we moved from 96G to 192G servers I tried increasing shared 
 buffers from 8G to 28G and performance went down enough to be noticeable (we 
 don't have any good benchmarks, so I can't really quantify the degradation). 
 Going back to 8G brought performance back up, so it seems like it was the 
 change in shared buffers that caused the issue (the larger servers also have 
 24 cores vs 16).
 
 What kind of work load do you have (intensity of reading versus
 writing)?  How intensely concurrent is the access?

It writes at the rate of ~3-5MB/s, doing ~700TPS on average. It's hard to judge 
the exact read mix, because it's running on a 192G server (actually, 512G now, 
but 192G when I tested). The working set is definitely between 96G and 192G; we 
saw a major performance improvement last year when we went to 192G, but we 
haven't seen any improvement moving to 512G.

We typically have 10-20 active queries at any point.

 My immediate thought was that we needed more lock partitions, but I haven't 
 had the chance to see if that helps. ISTM the issue could just as well be 
 due to clock sweep suddenly taking over 3x longer than before.
 
 It would surprise me if most clock sweeps need to make anything near a
 full pass over the buffers for each allocation (but technically it
 wouldn't need to do that to take 3x longer.  It could be that the
 fraction of a pass it needs to make is merely proportional to
 shared_buffers.  That too would surprise me, though).  You could
 compare the number of passes with the number of allocations to see how
 much sweeping is done per allocation.  However, I don't think the
 number of passes is reported anywhere, unless you compile with #define
 BGW_DEBUG and
 run with debug2.
 
 I wouldn't expect an increase in shared_buffers to make contention on
 BufFreelistLock worse.  If the increased buffers are used to hold
 heavily-accessed data, then you will find the pages you want in
 shared_buffers more often, and so need to run the clock-sweep less
 often.  That should make up for longer sweeps.  But if the increased
 buffers are used to hold data that is just read once and thrown away,
 then the clock sweep shouldn't need to sweep very far before finding a
 candidate.

Well, we're talking about a working set that's between 96 and 192G, but only 8G 
(or 28G) of shared buffers. So there's going to be a pretty large amount of 
buffer replacement happening. We also have 210 tables where the ratio of heap 
buffer hits to heap reads is over 1000, so the stuff that is in shared buffers 
probably keeps usage_count quite high. Put these two together, and we're 
probably spending a fairly significant amount of time running the clock sweep.
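
For reference, each allocation boils down to something like this
simplified sketch of the sweep in StrategyGetBuffer() (BufFreelistLock,
the free-list path and pinning details omitted). The higher the average
usage_count, the more buffers every allocation has to visit:

/* Simplified sketch only; loosely based on StrategyGetBuffer(). */
static volatile BufferDesc *
clock_sweep_victim(void)
{
	for (;;)
	{
		volatile BufferDesc *buf =
			&BufferDescriptors[StrategyControl->nextVictimBuffer];

		if (++StrategyControl->nextVictimBuffer >= NBuffers)
			StrategyControl->nextVictimBuffer = 0;

		if (buf->refcount == 0)
		{
			if (buf->usage_count > 0)
				buf->usage_count--;		/* spare it for one more lap */
			else
				return buf;				/* victim found */
		}
	}
}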

Even excluding our admittedly unusual workload, there is still significant 
overhead in running the clock sweep vs just grabbing something off of the free 
list (assuming we had separate locks for the two operations). Does anyone know 
what the overhead of getting a block from the filesystem cache is? I wonder how 
many buffers you can move through in the same amount of time. Put another way, 
at some point the sweep has to check so many buffers to find a free one that it 
doubles the time it takes to get data from the filesystem cache into a shared 
buffer.

 But of course being able to test would be better than speculation.

Yeah, I'm working on getting pg_buffercache installed so we can see what's 
actually in the cache.

Hmm... I wonder how hard it would be to hack something up that has a separate 
process that does nothing but run the clock sweep. We'd obviously not run a 
hack in production, but we're working on being able to reproduce a production 
workload. If we had a separate clock-sweep process we could get an idea of 
exactly how much work was involved in keeping free buffers available.

BTW, given our workload I can't see any way of running at debug2 without having 
a large impact on performance.
--
Jim C. Nasby, Database Architect   j...@nasby.net
512.569.9461 (cell) http://jim.nasby.net



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] unlogged tables vs. GIST

2010-12-14 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Tue, Dec 14, 2010 at 4:24 PM, Heikki Linnakangas
 heikki.linnakan...@enterprisedb.com wrote:
 Hmm, the first idea that comes to mind is to use a counter like the
 GetXLogRecPtrForTemp() counter I used for temp tables, but global, in shared
 memory. However, that's a bit problematic because if we store a value from
 that counter to LSN, it's possible that the counter overtakes the XLOG
 insert location, and you start to get xlog flush errors. We could avoid that
 if we added a new field to the GiST page header, and used that to store the
 value in the parent page instead of the LSN.

 That doesn't seem ideal, either, because now you're eating up some
 number of bytes per page in every GIST index just on the off chance
 that one of them is unlogged.

On-disk compatibility seems problematic here as well.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP patch for parallel pg_dump

2010-12-14 Thread Robert Haas
On Tue, Dec 7, 2010 at 3:23 AM, Koichi Suzuki koichi@gmail.com wrote:
 This is what Postgres-XC is doing between a coordinator and a
 datanode.    Coordinator may correspond to poolers/loadbalancers.
 Does anyone think it makes sense to extract XC implementation of
 snapshot shipping to PostgreSQL itself?

Perhaps, though of course it would need to be re-licensed.  I'd be
happy to see us pursue a snapshot cloning framework, wherever it comes
from.  I remain unconvinced that it should be made a hard requirement
for parallel pg_dump, but of course if we can get it implemented then
the point becomes moot.

Let's not let this fall on the floor.  Someone should pursue this,
whether it's Joachim or Koichi or someone else.
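
Just to sketch the user-visible shape: snapshot cloning could be as
simple as the following, with the exporting session publishing a
snapshot identifier that each worker then adopts (function name and
syntax invented here, purely for illustration):

-- hypothetical interface, in the exporting session:
SELECT pg_export_snapshot();            -- returns, say, '000003A1-1'
-- in each worker session:
BEGIN;
SET TRANSACTION SNAPSHOT '000003A1-1';  -- adopt the exported snapshot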

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] unlogged tables vs. GIST

2010-12-14 Thread Robert Haas
On Tue, Dec 14, 2010 at 4:55 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 On Tue, Dec 14, 2010 at 4:24 PM, Heikki Linnakangas
 heikki.linnakan...@enterprisedb.com wrote:
 Hmm, the first idea that comes to mind is to use a counter like the
 GetXLogRecPtrForTemp() counter I used for temp tables, but global, in shared
 memory. However, that's a bit problematic because if we store a value from
 that counter to LSN, it's possible that the counter overtakes the XLOG
 insert location, and you start to get xlog flush errors. We could avoid that
 if we added a new field to the GiST page header, and used that to store the
 value in the parent page instead of the LSN.

 That doesn't seem ideal, either, because now you're eating up some
 number of bytes per page in every GIST index just on the off chance
 that one of them is unlogged.

 On-disk compatibility seems problematic here as well.

Good point.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Segfault related to pg_authid when running initdb from git master

2010-12-14 Thread Peter Geoghegan
Here's the output I see when $SUBJECT occurs, on a pg freshly built
from git master with --enable-debug and --enable-cassert:

[postg...@peter bin]$ uname -a
Linux peter.laptop 2.6.35.9-64.fc14.x86_64 #1 SMP Fri Dec 3 12:19:41
UTC 2010 x86_64 x86_64 x86_64 GNU/Linux
[postg...@peter bin]$ ./initdb -D /var/lib/pgsql/data
The files belonging to this database system will be owned by user postgres.
This user must also own the server process.

The database cluster will be initialized with locale en_IE.utf8.
The default database encoding has accordingly been set to UTF8.
The default text search configuration will be set to english.

fixing permissions on existing directory /var/lib/pgsql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 24MB
creating configuration files ... ok
creating template1 database in /var/lib/pgsql/data/base/1 ... ok
initializing pg_authid ... sh: line 1: 23515 Segmentation fault
(core dumped) /usr/local/pgsql/bin/postgres --single -F -O -c
search_path=pg_catalog -c exit_on_error=true template1 > /dev/null
child process exited with exit code 139
initdb: removing contents of data directory /var/lib/pgsql/data


I'm having difficulty producing a useful backtrace, because the
segfault seemingly doesn't actually occur within initdb - it occurs
within a postgres process. If someone could tell me the trick to
attaching to that process under these circumstances, I could look into
it further.

The trouble seems to occur here, at line 1224 of initdb.c:

for (line = pg_authid_setup; *line != NULL; line++)
	PG_CMD_PUTS(*line);

After I see the segmentation fault in stderr, gdb reports that initdb
has received SIGPIPE.

Hope that helps.

-- 
Regards,
Peter Geoghegan

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] hstores in pl/python

2010-12-14 Thread Jan Urbański
On 14/12/10 17:52, Tom Lane wrote:
 Peter Eisentraut pete...@gmx.net writes:
 On mån, 2010-12-13 at 08:50 +0100, Jan Urbański wrote:
 It would be cool to be able to transparently use hstores as Python
 dictionaries and vice versa. It would be easy enough with hstore as a
 core type, but with hstore as an addon it's not that easy.
 
 I have been thinking about this class of problems for a while.  I think
 the proper fix is to have a user-definable mapping between types and
 languages.  It would be another pair of input/output functions,
 essentially.
 
 Interesting thought, but it still leaves you needing to solve the
 problem of interconnecting two optional addons ...

So I've been thinking about how these two optional addons could 
interact, and here's a sketchy plan:

1) the hstore contrib module when compiling with --with-python generates
a loadable hstore_plpython.so
2) hstore_plpython, when loaded, sets a rendezvous variable that points
to a structure containing two parser functions
3) plpython when converting PG datums to Python objects looks for the
hstore's rendezvous variable and if it finds it, it uses the parsers
(roughly as sketched below)
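
To make (2) and (3) concrete, the handshake could look roughly like
this (find_rendezvous_variable() is the existing fmgr facility; the
struct, field and variable names are all invented):

/* In hstore_plpython's _PG_init(): publish the parser functions. */
typedef struct HstorePlPythonParsers
{
	PyObject   *(*hstore_to_dict) (Datum hstore_datum);
	Datum		(*dict_to_hstore) (PyObject *dict);
} HstorePlPythonParsers;

static HstorePlPythonParsers parsers;	/* filled with the real parsers */

void
_PG_init(void)
{
	HstorePlPythonParsers **hook = (HstorePlPythonParsers **)
		find_rendezvous_variable("hstore_plpython_parsers");

	*hook = &parsers;
}

/* In plpython's datum-to-Python conversion, for hstore-typed datums: */
static PyObject *
datum_to_dict_if_hstore(Datum datum)
{
	HstorePlPythonParsers **hook = (HstorePlPythonParsers **)
		find_rendezvous_variable("hstore_plpython_parsers");

	if (*hook != NULL)
		return (*hook)->hstore_to_dict(datum);
	return NULL;			/* caller falls back to text conversion */
}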

Problem: what to do if hstore_plpython gets loaded, but hstore is not
yet loaded. hstore_plpython will want to DirectFunctionCall(hstore_in),
so loading hstore_plpython without loading hstore will result in an
ereport(ERROR, "undefined symbol hstore_in") with an errhint of "please
load hstore first". I could live with that, if no one has a better idea.

If that's OK, I'll go and code it. The rest is dressing on top, which I
would happily skip.

3a) optionally, there's a custom GUC for plpython called
plpython_hstore_as_dict that can be "always", "never" or
"when_available". "always" means that if the rendezvous variable is not
set, you get an ERROR when you pass in a hstore variable. "never" means
that regardless of hstore_plpython loading it gets ignored.
"when_available" is what I described above. The default is still to be
bikeshedded. I think we can skip that, as someone loading hstore_plpython
is supposed to know that it will affect plpython functions, end of
story. Also: the hstore parsers would detect if the return value is a
dict and if not, would cast it to text and try to parse it. So if you
were hand-crafting your hstores in plpython before, it will still work,
thus making the compatibility argument moot.

Does that sound acceptable?

Jan

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_execute_from_file, patch v10

2010-12-14 Thread Itagaki Takahiro
On Wed, Dec 15, 2010 at 04:39, Dimitri Fontaine dimi...@2ndquadrant.fr wrote:
 Well, in fact, the extension's code is using either execute_sql_file()
 or read_text_file_with_encoding(), then @extschema@ replacement, then
 execute_sql_string(), all those functions called directly thanks to
 #include "utils/genfile.h".  No DirectFunctionCall'ing; we can easily
 remove the SQL-callable forms.

 So what we need is 2, 3 and 4 (because 4 builds on 2).

No, 3 is not needed. You can use SPI_exec() directly instead of
exporting execute_sql_string().
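
Something like this in the extension code path would do (a sketch;
"buf" is assumed to be a StringInfoData already holding the
encoding-converted script text):

/* Sketch: execute the converted script via SPI. */
SPI_connect();
SPI_exec(buf.data, 0);		/* a multi-statement string is fine here */
SPI_finish();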

-- 
Itagaki Takahiro

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP patch for parallel pg_dump

2010-12-14 Thread Koichi Suzuki
Robert;

Thank you very much for your advice.  Indeed, I'm considering changing
the license to PostgreSQL's.  It may take a bit more time,
though...
--
Koichi Suzuki



2010/12/15 Robert Haas robertmh...@gmail.com:
 On Tue, Dec 7, 2010 at 3:23 AM, Koichi Suzuki koichi@gmail.com wrote:
 This is what Postgres-XC is doing between a coordinator and a
 datanode.    Coordinator may correspond to poolers/loadbalancers.
 Does anyone think it makes sense to extract XC implementation of
 snapshot shipping to PostgreSQL itself?

 Perhaps, though of course it would need to be re-licensed.  I'd be
 happy to see us pursue a snapshot cloning framework, wherever it comes
 from.  I remain unconvinced that it should be made a hard requirement
 for parallel pg_dump, but of course if we can get it implemented then
 the point becomes moot.

 Let's not let this fall on the floor.  Someone should pursue this,
 whether it's Joachim or Koichi or someone else.

 --
 Robert Haas
 EnterpriseDB: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ALTER TABLE ... REPLACE WITH

2010-12-14 Thread Simon Riggs
On Tue, 2010-12-14 at 16:19 -0800, Josh Berkus wrote:

 Without some means of doing a clone of the table in a single command,
 you've eliminated half the scripting work, but not helped at all with
 the other half.

I'm not trying to eliminate scripting work; I'm trying to minimise the
lock window with a reliable and smooth atomic switcheroo.

 Actually, you know what would be ideal?
 
 REPLACE TABLE old_table WITH SELECT ...
 
 Give it some thought ...

I have; the above would hold the lock window open while the SELECT runs,
and that is explicitly something we are trying to avoid.

Good creative input though, thank you.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services
 


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Getting ERROR: no unpinned buffers available on HEAD, should I investigate?

2010-12-14 Thread Florian Pflug
Hi

When I run my FK concurrency test suite on an unpatched HEAD, I'm getting quite 
a lot of "no unpinned buffers available" errors.

Increasing shared_buffers from 32MB (the default) to 64MB makes the errors go 
away, as does setting fsync=off.

I'm not sure how many buffers a particular backend can hold pinned at any time. 
I'd have thought it'd be something like one per relation and scan-in-progress, with 
indices counting as relations. In that case, with max_connections=100, my tests 
shouldn't pin more than a couple of hundred buffers at a time. That, however, 
amounts to about 1MB or so of pinned buffers, so it shouldn't break with 
shared_buffers=32MB.
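
To put rough numbers on that, assuming the default 8 kB block size: 200 
pinned buffers times 8 kB is about 1.6 MB, while shared_buffers=32MB is 
roughly 4096 buffers, so pins alone should be nowhere near exhausting 
the pool.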

I'm also confused by fsync=off making a difference. That should only affect how 
quickly dirty buffers can be written out; would it really affect the number of 
buffers pinned?

In short, I'm wondering whether I might have hit a bug, or if I should just 
increase shared_buffers and move on.

best regards,
Florian Pflug


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] hstores in pl/python

2010-12-14 Thread James William Pye
On Dec 13, 2010, at 6:16 PM, Tom Lane wrote:
 how do you identify which type OID is really hstore?


How about an identification field on pg_type?

CREATE TYPE hstore ..., IDENTIFIER 'org.postgresql.hstore';
-- Where the identifier is an arbitrary string.

Type information can be looked up by the PL, and the I/O functions can
be dynamically resolved using the identifier.
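
Resolution would then be a simple catalog lookup (the column name is
invented here):

-- hypothetical: typidentifier is the proposed identification field
SELECT oid, typinput, typoutput
  FROM pg_type
 WHERE typidentifier = 'org.postgresql.hstore';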

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] V3: Idle in transaction cancellation

2010-12-14 Thread Robert Haas
On Sat, Oct 30, 2010 at 4:49 AM, Andres Freund and...@anarazel.de wrote:
  Here is a proposed patch which enables cancellation of $subject.

Disclaimer: This isn't my area of expertise, so take the below with a
grain or seven of salt.

It sort of looks to me like the LOG_NO_CLIENT error flag and the
silent_error_while_idle flag are trying to cooperate to get the effect
of throwing an error without actually throwing an error.  I'm
wondering if it would be at all sensible to do that more directly by
making ProcessInterrupts() call AbortCurrentTransaction() in this
case.
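
In other words, something along these lines (a hand-wavy sketch; the
flag name is invented):

/* In ProcessInterrupts(), sketch only: */
if (IdleTransactionCancelPending)
{
	IdleTransactionCancelPending = false;	/* invented flag */
	AbortCurrentTransaction();	/* abort in place, without throwing */
}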

Assuming that works at all, it would presumably mean that the client
would thereafter get something like this:

current transaction is aborted, commands ignored until end of transaction block

...which might be thought unhelpful.  But that could be fixed either
by modifying errdetail_abort() or perhaps even by abstracting the
"current transaction is aborted, commands..." message into a function
that could produce an entirely different message on either the first
call or on all calls within a given transaction.

I'm not sure if this would work, or if it's better.  I'm just throwing
it out there, because the current approach looks a little grotty to
me.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ALTER TABLE ... REPLACE WITH

2010-12-14 Thread Josh Berkus
On 12/14/10 11:43 AM, Simon Riggs wrote:
 On Tue, 2010-12-14 at 11:34 -0800, Josh Berkus wrote:
 
 In order for REPLACE WITH to be really useful, though, we need a
 command cloning at table design with *all* constraints, FKs, keys, and
 indexes.  Currently, I still don't think we have that ... do we? 
 
 Being able to vary the indexes when we REPLACE is a good feature.
 
 We only need to check that datatypes and constraints match.

No, you're missing my point ... currently we don't have a command which
says "make an identical clone of this table".  CREATE TABLE AS allows us
to copy all of the data for the table, but not the full table design.
CREATE TABLE LIKE gives us most of the design (although it still won't
copy FKs) but won't copy the data.

However, for the usual do-si-do case, you need to populate the data
using a query and not clone all the data.  What you'd really need is
something like:

CREATE TABLE new_table LIKE old_table ( INCLUDING ALL ) FROM SELECT ...

.. which would create the base tabledef, copy in the data from the
query, and then apply all the constraints, indexes, defaults, etc.

Without some means of doing a clone of the table in a single command,
you've eliminated half the scripting work, but not helped at all with
the other half.

Actually, you know what would be ideal?

REPLACE TABLE old_table WITH SELECT ...

Give it some thought ...

-- 
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Triggered assertion !(tp.t_data->t_infomask & HEAP_XMAX_INVALID) in heap_delete() on HEAD [PATCH]

2010-12-14 Thread Florian Pflug
On Dec 14, 2010, at 22:34 , Florian Pflug wrote:
 For non-assert-enabled builds, the only effect of the patch is thus to
 consistently return InvalidTransactionId if the crosscheck snapshot turns
 HeapTupleMayBeUpdated into HeapTupleUpdated. Which certainly seems to be
 an improvement over sometimes returning InvalidTransactionId, sometimes
 a locker's xid, and sometimes a multi-xid.

I've updated the patch to explain that in the comments above heap_update()
and heap_delete(). I've taken a brief look at the callers of these functions,
but fail to see how to improve things there. Things work currently because
crosschecking is only used in serializable mode, while the tuple's xmax is
only required in read committed mode to double-check the tuple chain when
following the ctid pointers. But the API doesn't enforce that at all :-(

I'm not willing to clean that mess up, since the serializable lock consistency
patch changes all these areas, *and* makes the cleanup easier by getting rid
of the crosscheck snapshot entirely. 

I still believe that applying this patch now is worth it, for the reasons
already explained, but in the end that's obviously something for a committer
to decide.

Patch with updated comments attached.

best regards,
Florian Pflug


fix_assert_xmaxinvalid.v2.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP patch for parallel pg_dump

2010-12-14 Thread Robert Haas
On Tue, Dec 14, 2010 at 7:06 PM, Koichi Suzuki koichi@gmail.com wrote:
 Thank you very much for your advice.  Indeed, I'm considering changing
 the license to PostgreSQL's.  It may take a bit more time,
 though...

You wouldn't necessarily need to relicense all of Postgres-XC
(although that would be cool, too, at least IMO), just the portion you
were proposing for commit to PostgreSQL.  And it doesn't sound like it
would be infeasible for someone to code this up from scratch.  But we
should try to make something good happen here!

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Segfault related to pg_authid when running initdb from git master

2010-12-14 Thread Robert Haas
On Tue, Dec 14, 2010 at 5:21 PM, Peter Geoghegan
peter.geoghega...@gmail.com wrote:
 Here's the output I see when $SUBJECT occurs, on a pg freshly built
 from git master with --enable-debug and --enable-cassert:

I am suspicious of the fact that you are invoking initdb as ./initdb.
Is it possible you're invoking this from the build tree, and there's
an installed copy out there that doesn't match, but is getting used?
Like maybe in /usr/local/pgsql/bin?

 creating template1 database in /var/lib/pgsql/data/base/1 ... ok
 initializing pg_authid ... sh: line 1: 23515 Segmentation fault
 (core dumped) /usr/local/pgsql/bin/postgres --single -F -O -c
 search_path=pg_catalog -c exit_on_error=true template1 > /dev/null
 child process exited with exit code 139

Can you fire up gdb on this core dump, using "gdb
/usr/local/pgsql/bin/postgres /path/to/coredump"?  Or, another
possibility is to run initdb with --noclean and then run the command,
without routing the output to /dev/null:

/usr/local/pgsql/bin/postgres --single -F -O -c
search_path=pg_catalog -c exit_on_error=true template1

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


  1   2   >