Re: [HACKERS] Re: [COMMITTERS] pgsql: Recognize functional dependency on primary keys.

2010-08-16 Thread Greg Smith

Tom Lane wrote:

Bruce Momjian br...@momjian.us writes:
  

OK, I put it back, but I still feel we might not need it anymore.



Even if you're willing to believe that the questions will stop once
we have this feature, that won't happen for more than a year.
  


As a general comment on this, I've gotten two rounds of complaints about 
MySQL migrations bitten by this problem in the last year, and I found it 
handy to point them to the FAQ entry.  Even if one of the forms starts 
to work in 9.1 eventually, I'd like to see a comment about this issue 
hang around somewhere for future reference.  Note that in both cases the 
whole operation involved was rather brain dead and returned silly, 
indeterminate results in MySQL, but the users didn't realize it.  No 
objections to the PostgreSQL limitation once they understood it was 
fixing a subtle bug in the original too.


I was thinking of adding this one as an example for my next MySQL vs. 
PostgreSQL paper update; it's a great example of the difference in focus 
on correctness between the two databases.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us



Re: [HACKERS] Cost of AtEOXact_Buffers in --enable-cassert

2010-08-16 Thread Greg Smith

Tom Lane wrote:

Nobody else seems to have commented, but maybe what this suggests is
we need to be able to individually disable a few of the most expensive
checks.  I'm not sure what a reasonable API is for that ... not sure
that I like the thought of a GUC for each one.
  


I'd really like to be able to do more long-running performance tests 
with the rest of the assertions on, to help catch bugs in higher level 
code.  There's maybe three major categories of these people might want 
to turn off, right?  If you consider wal_debug as an example of 
something similar that's already there, the footprint of supporting that is:


-15 lines in config.sgml
-12 lines in guc.c
-4 lines in xlog.c

Plus all the places that have the #IFDEF around them to only enable if 
this is on, which is the same regardless of the UI to toggle it.  So 
there'd be ~35 lines of new code per option to add them in the same way, 
as GUCs you can view but not set, and that aren't included in the 
postgresql.conf.sample and such.
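
For illustration, a minimal standalone sketch of the pattern being 
described here -- an expensive check wrapped in an #ifdef plus a run-time 
flag that could be exposed as a view-only GUC.  All names are invented 
for the example, not taken from the actual tree:

#include <stdio.h>

#define USE_ASSERT_CHECKING 1		/* normally set by --enable-cassert */

/* imagine this variable exposed as a GUC you can view but not set */
static int	debug_buffer_checks = 1;

static void
expensive_buffer_check(void)
{
#ifdef USE_ASSERT_CHECKING
	if (debug_buffer_checks)
	{
		/* ... walk every shared buffer header here ... */
		printf("expensive buffer check ran\n");
	}
#endif
}

int
main(void)
{
	expensive_buffer_check();
	return 0;
}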


Right now --enable-cassert = debug_assertions makes it easy on the user 
side to figure out whether they have the expensive debugging stuff 
turned on from a UI everybody knows--type a psql command.  I'm biased 
toward just doing the minor cut/paste bloat to do something similar for 
the most expensive performance bits too.  Then, as this escapes into the 
wild, we can continue to sort out performance reports that easily.  As 
I've run into a few times now, not everybody even has pg_config 
installed, because it's usually sitting in a devel package instead of 
the main server one.


If it's compiler option only, no GUC, and you have to use pg_config to 
figure out if you did it, that's completely acceptable too.  I don't 
have a strong opinion here, just a preference.  No arguments from me if 
the decision is that's too much code to add for something so marginal.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proposal / proof of concept: Triggers on VIEWs

2010-08-16 Thread Dean Rasheed
On 14 August 2010 23:22, Dean Rasheed dean.a.rash...@gmail.com wrote:
 I'll try to post an updated patch then, with some real trigger code,


I've moved this to a new thread, with a WIP patch that allows 3 types
of triggers to be added to VIEWs:
http://archives.postgresql.org/pgsql-hackers/2010-08/msg01030.php

Comments welcome.

Cheers,
Dean

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Git conversion progress report and call for testing assistance

2010-08-16 Thread Magnus Hagander
On Wed, Aug 11, 2010 at 23:10, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
 On 21/07/10 23:40, Magnus Hagander wrote:

 I've also set up the git server and the scripts around it, that we can
 eventually use. This includes commit email sending, commit policy
 enforcement
 (no merge commits, correct author/committer tag etc) and proper access
 control
 (a modified version of the one on git.postgresql.org - since we definitely
 don't want any external dependencies for the main repository).

 This is all available for testing now.

 Seems to work great. I've got a local clone of that set up now, with
 workdirs for backbranches (per the "Committing Using a Single Clone and
 multiple workdirs" instructions I added to the Committing_with_Git wiki
 page).

 I also tested with a bunch of funny commits, including merge commits,
 non-fast-forward commits and a commit with bogus Committer, and the
 post-commit hook correctly rejected all those.

Great, thanks for testing that. I assume this means that the commits
that *should* go through did so? ;)


 Marc has set up a mailinglist at pgsql-committers-t...@postgresql.org
 where
 commit messages from the new system are sent. If you care about what they
 look
 like, subscribe there and wait for one to show up :-) Subscription is done
 the usual way.

 My posts to that list have been stalled, but looking at the commit mail in
 the stalled post reply it looks OK.

Hmm. Were you subscribed to the list? With the same address as your
commits are coming in from?

If not, that would explain it. If you did, I need to figure out why ;)



-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] JSON Patch for PostgreSQL - BSON Support?

2010-08-16 Thread Greg Smith

Robert Haas wrote:

On Sun, Aug 15, 2010 at 11:03 PM, Tom Lane t...@sss.pgh.pa.us wrote:
  

I knew there would be a lot of critters crawling out as soon as we
turned over this rock.  Which other data-formats-of-the-week shall
we immortalize as core PG types?



PER-encoded ASN.1, for when you really need something human-readable?  :-)
  


I like to prioritize with hits on Google as a proxy for popularity:

XML:  341M
CSV:  132M
JSON:  96M
YAML:  6M
ASN.1:  1.1M
BSON:  130K
Protocol Buffers:  86K
OGDL:  45K

I think there's a strong case to add JSON, as it may very well be the 
most popular data-text serialization format out there not yet 
supported.  It's certainly beyond a format of the week at this point.


XML is like violence: if it doesn't solve your problem, you aren't 
using enough of it. - Chris Maden


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us



Re: [HACKERS] shared_preload_libraries is ignored in single user mode

2010-08-16 Thread KaiGai Kohei
Although nobody paid attention, this seems to me to be a problem worth fixing.

The attached patch fixes the problem with a simple idea: call
process_shared_preload_libraries() in PostgresMain() when it is launched
in single-user mode.

Thanks,

(2010/08/05 15:08), KaiGai Kohei wrote:
 I found out that the shared_preload_libraries setting is ignored when we
 launch postgres in single-user mode.
 
 In this case, the postgres command is launched with the --single argument,
 so main() directly invokes PostgresMain() without going through
 PostmasterMain(), which calls process_shared_preload_libraries().
 
 I think we should put the following code block somewhere within
 PostgresMain() to fix it up.
 
/*
 * If not under postmaster, shared preload libraries are not
 * loaded yet, so we try to load them here.
 */
if (!IsUnderPostmaster)
process_shared_preload_libraries();
 
 The reason why I want to use modules in single-user mode is to assign
 initial security labels to database objects just after initdb.
 But since the GUC is ignored, we cannot invoke the routines in the module. :(
 
 Thanks,

-- 
KaiGai Kohei kai...@ak.jp.nec.com
*** a/src/backend/tcop/postgres.c
--- b/src/backend/tcop/postgres.c
***
*** 3479,3484  PostgresMain(int argc, char *argv[], const char *username)
--- 3479,3494 
  	}
  
  	/*
+ 	 * Load shared preload libraries in single-user mode.  This routine
+ 	 * is not reached via PostmasterMain(), which normally loads the
+ 	 * shared preload libraries before forking backend processes, so
+ 	 * 'shared_preload_libraries' would otherwise be silently ignored.
+ 	 * Load the libraries here instead.
+ 	 */
+ 	if (!IsUnderPostmaster)
+ 		process_shared_preload_libraries();
+ 
+ 	/*
  	 * You might expect to see a setsid() call here, but it's not needed,
  	 * because if we are under a postmaster then BackendInitialize() did it.
  	 */

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] patch: psql variables tabcomplete

2010-08-16 Thread Pavel Stehule
Hello

I found that there is no support for tab completion of psql variables.

Regards

Pavel Stehule
*** ./src/bin/psql/tab-complete.c.orig	2010-08-14 15:59:49.0 +0200
--- ./src/bin/psql/tab-complete.c	2010-08-16 11:44:49.0 +0200
***
*** 2517,2522 
--- 2517,2551 
  
  		COMPLETE_WITH_LIST(my_list);
  	}
+ 	else if (strcmp(prev_wd, "\\set") == 0)
+ 	{
+ 		const char **varnames;
+ 		int nvars = 0;
+ 		int size = 100;
+ 		struct _variable *ptr;
+ 
+ 		varnames = (const char **) pg_malloc(size * sizeof(char *));
+ 
+ 		/* fill array of varnames */
+ 		for (ptr = pset.vars->next; ptr; ptr = ptr->next)
+ 		{
+ 			if (nvars == size)
+ 			{
+ 				size = size + 100;
+ 				varnames = realloc(varnames, size * sizeof(char *));
+ 				if (!varnames)
+ 				{
+ 					psql_error("out of mempry\n");
+ 					exit(EXIT_FAILURE);
+ 				}
+ 			}
+ 			varnames[nvars++] = ptr->name;
+ 		}
+ 		varnames[nvars] = NULL;
+ 		COMPLETE_WITH_LIST(varnames);
+ 
+ 		free(varnames);
+ 	}
  	else if (strcmp(prev_wd, "\\sf") == 0 || strcmp(prev_wd, "\\sf+") == 0)
  		COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_functions, NULL);
  	else if (strcmp(prev_wd, "\\cd") == 0 ||

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] patch: psql variables tabcomplete

2010-08-16 Thread Thom Brown
On 16 August 2010 10:52, Pavel Stehule pavel.steh...@gmail.com wrote:
 Hello

 I found that there is no support for tab completion of psql variables.

 Regards

 Pavel Stehule



s/out of mempry/out of memory/

-- 
Thom Brown
Registered Linux user: #516935

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] patch: psql variables tabcomplete

2010-08-16 Thread Pavel Stehule
2010/8/16 Thom Brown t...@linux.com:
 On 16 August 2010 10:52, Pavel Stehule pavel.steh...@gmail.com wrote:
 Hello

 I found that there is no support for tab completion of psql variables.

 Regards

 Pavel Stehule



 s/out of mempry/out of memory/

ugh - thank you

Pavel


 --
 Thom Brown
 Registered Linux user: #516935


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] patch: psql variables tabcomplete

2010-08-16 Thread Pavel Stehule
fixed spelling

Regards

Pavel Stehule


2010/8/16 Pavel Stehule pavel.steh...@gmail.com:
 Hello

 I found that there is no support for tab completion of psql variables.

 Regards

 Pavel Stehule

*** ./src/bin/psql/tab-complete.c.orig	2010-08-14 15:59:49.0 +0200
--- ./src/bin/psql/tab-complete.c	2010-08-16 13:30:54.0 +0200
***
*** 2517,2522 
--- 2517,2551 
  
  		COMPLETE_WITH_LIST(my_list);
  	}
+ 	else if (strcmp(prev_wd, "\\set") == 0)
+ 	{
+ 		const char **varnames;
+ 		int nvars = 0;
+ 		int size = 100;
+ 		struct _variable *ptr;
+ 
+ 		varnames = (const char **) pg_malloc(size * sizeof(char *));
+ 
+ 		/* fill array of varnames */
+ 		for (ptr = pset.vars->next; ptr; ptr = ptr->next)
+ 		{
+ 			if (nvars == size)
+ 			{
+ 				size = size + 100;
+ 				varnames = realloc(varnames, size * sizeof(char *));
+ 				if (!varnames)
+ 				{
+ 					psql_error("out of memory\n");
+ 					exit(EXIT_FAILURE);
+ 				}
+ 			}
+ 			varnames[nvars++] = ptr->name;
+ 		}
+ 		varnames[nvars] = NULL;
+ 		COMPLETE_WITH_LIST(varnames);
+ 
+ 		free(varnames);
+ 	}
  	else if (strcmp(prev_wd, "\\sf") == 0 || strcmp(prev_wd, "\\sf+") == 0)
  		COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_functions, NULL);
  	else if (strcmp(prev_wd, "\\cd") == 0 ||

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] patch for pg_ctl.c to add windows service start-type

2010-08-16 Thread Quan Zongliang
Hi, all

I modified pg_ctl.c to add a new option for the Windows service start type.
The new option is -S [auto|demand].

For example, the command can be used under Windows:
pg_ctl register -N s-name -S auto
or
pg_ctl register -N s-name -S demand

The created service will be SERVICE_AUTO_START or SERVICE_DEMAND_START 
respectively.
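
As a rough, Windows-only sketch of how such a mapping might look (the
patch itself is not attached to this message, so the helper name and
structure here are assumptions rather than the actual code):

#include <string.h>
#include <windows.h>

/*
 * Hypothetical helper: translate the proposed "-S auto|demand" argument
 * into the dwStartType value handed to CreateService() when pg_ctl
 * registers the service.  Only that one argument of the CreateService()
 * call would need to change.
 */
static DWORD
service_start_type(const char *opt)
{
	if (opt != NULL && strcmp(opt, "demand") == 0)
		return SERVICE_DEMAND_START;

	/* default: automatic start at boot */
	return SERVICE_AUTO_START;
}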

regards

-- 
Quan Zongliang quanzongli...@gmail.com

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Cost of AtEOXact_Buffers in --enable-cassert

2010-08-16 Thread Robert Haas
On Mon, Aug 16, 2010 at 4:00 AM, Greg Smith g...@2ndquadrant.com wrote:
 Tom Lane wrote:

 Nobody else seems to have commented, but maybe what this suggests is
 we need to be able to individually disable a few of the most expensive
 checks.  I'm not sure what a reasonable API is for that ... not sure
 that I like the thought of a GUC for each one.


 I'd really like to be able to do more long-running performance tests with
 the rest of the assertions on, to help catch bugs in higher level code.
  There's maybe three major categories of these people might want to turn
 off, right?  If you consider wal_debug as an example of something similar
 that's already there, the footprint of supporting that is:

 -15 lines in config.sgml
 -12 lines in guc.c
 -4 lines in xlog.c

 Plus all the places that have the #IFDEF around them to only enable if this
 is on, which is the same regardless of the UI to toggle it.  So there'd be
 ~35 lines of new code per option to add them in the same way, as GUCs you
 can view but not set, and that aren't included in the postgresql.conf.sample
 and such.

 Right now --enable-cassert = debug_assertions makes it easy on the user
 side to figure out whether they have the expensive debugging stuff turned on
 from a UI everybody knows--type a psql command.  I'm biased toward just
 doing the minor cut/paste bloat to do something similar for the most
 expensive performance bits too.  Then, as this escapes into the wild, we can
 continue to sort out performance reports that easily.  As I've run into a
 few times now, not everybody even has pg_config installed, because it's
 usually sitting in a devel package instead of the main server one.

 If it's compiler option only, no GUC, and you have to use pg_config to
 figure out if you did it, that's completely acceptable too.  I don't have a
 strong opinion here, just a preference.  No arguments from me if the
 decision is that's too much code to add for something so marginal.

What if we just added one GUC whose remit was to disable some of the
things that enable-cassert does, with a comma-separated list of values
specifying which ones?  We could turn it into a bit-field under the
covers.
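
A standalone sketch of that idea -- parsing a comma-separated value into
a bitmask -- with invented check names, independent of the real GUC
machinery:

#include <stdio.h>
#include <string.h>

/* Hypothetical flag bits for the expensive cassert checks; the names are
 * invented for illustration, not taken from the PostgreSQL source. */
#define CHECK_BUFFERS   0x01
#define CHECK_LOCKS     0x02
#define CHECK_MEMORY    0x04

/* Parse a comma-separated list such as "buffers,memory" into a bitmask. */
static unsigned int
parse_disabled_checks(const char *value)
{
	unsigned int mask = 0;
	char		buf[256];
	char	   *tok;

	strncpy(buf, value, sizeof(buf) - 1);
	buf[sizeof(buf) - 1] = '\0';

	for (tok = strtok(buf, ","); tok != NULL; tok = strtok(NULL, ","))
	{
		if (strcmp(tok, "buffers") == 0)
			mask |= CHECK_BUFFERS;
		else if (strcmp(tok, "locks") == 0)
			mask |= CHECK_LOCKS;
		else if (strcmp(tok, "memory") == 0)
			mask |= CHECK_MEMORY;
		else
			fprintf(stderr, "unrecognized check name: \"%s\"\n", tok);
	}
	return mask;
}

int
main(void)
{
	unsigned int disabled = parse_disabled_checks("buffers,memory");

	/* e.g. skip the AtEOXact_Buffers-style check if CHECK_BUFFERS is set */
	printf("disabled mask: 0x%x\n", disabled);
	return 0;
}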

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] patch for pg_ctl.c to add windows service start-type

2010-08-16 Thread Dave Page
On Mon, Aug 16, 2010 at 12:49 PM, Quan Zongliang
quanzongli...@gmail.com wrote:
 Hi, all

 I modified pg_ctl.c to add a new option for Windows service start-type.
 new option is -S [auto|demand]

 For example, the command can be used under Windows:
 pg_ctl register -N s-name -S auto
 or
 pg_ctl register -N s-name -S demand

 The created service will be SERVICE_AUTO_START or SERVICE_DEMAND_START 
 respectively.

Hi,

Please post the patch to the mailing list, and add it to the next
commitfest: https://commitfest.postgresql.org/action/commitfest_view?id=7

Thanks!

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Synchronous replication

2010-08-16 Thread Heikki Linnakangas

On 05/08/10 13:40, Fujii Masao wrote:

On Wed, Aug 4, 2010 at 12:35 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com  wrote:

There's some race conditions with the signaling. If another process finishes
XLOG flush and sends the signal when a walsender has just finished one
iteration of its main loop, walsender will reset xlogsend_requested and go
to sleep. It should not sleep but send the pending WAL immediately.


Yep. To avoid that race condition, xlogsend_requested should be reset to
false after sleep and before calling XLogSend(). I attached the updated
version of the patch.


There's still a small race condition: if you receive the signal just 
before entering pg_usleep(), it will not be interrupted.


Of course, on platforms where signals don't interrupt sleep, the problem 
is even bigger. Magnus reminded me that we can use select() instead of 
pg_usleep() on such platforms, but that's still vulnerable to the race 
condition.


ppoll() or pselect() could be used, but I don't think they're fully 
portable. I think we'll have to resort to the self-pipe trick mentioned 
in the Linux select(3) man page:



  On systems that lack pselect(), reliable (and more portable) signal
  trapping can be achieved using the self-pipe trick (where a signal
  handler writes a byte to a pipe whose other end is monitored by
  select() in the main program.)
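
A minimal standalone sketch of that trick (the signal number and buffer
size are arbitrary choices, and this is not walsender code):

#include <errno.h>
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

static int	selfpipe[2];

/* Signal handler: write one byte; the main loop's select() then wakes up. */
static void
wakeup_handler(int signo)
{
	int			save_errno = errno;

	(void) write(selfpipe[1], "x", 1);
	errno = save_errno;
}

int
main(void)
{
	char		buf[16];
	fd_set		rfds;

	if (pipe(selfpipe) < 0)
		return 1;
	/* make both ends non-blocking so the handler can never block */
	fcntl(selfpipe[0], F_SETFL, O_NONBLOCK);
	fcntl(selfpipe[1], F_SETFL, O_NONBLOCK);

	signal(SIGUSR1, wakeup_handler);

	for (;;)
	{
		FD_ZERO(&rfds);
		FD_SET(selfpipe[0], &rfds);

		/*
		 * Sleeps until a byte arrives.  A signal delivered just before
		 * this call is not lost, because its byte is already in the pipe.
		 */
		if (select(selfpipe[0] + 1, &rfds, NULL, NULL, NULL) > 0)
		{
			while (read(selfpipe[0], buf, sizeof(buf)) > 0)
				;				/* drain the pipe */
			printf("woken up, do work here\n");
		}
	}
	return 0;
}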


Another idea is to use something different than Unix signals, like 
ProcSendSignal/ProcWaitForSignal which are implemented using semaphores.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] security label support, part.2

2010-08-16 Thread Stephen Frost
KaiGai,

* KaiGai Kohei (kai...@ak.jp.nec.com) wrote:
 The purpose of this restriction is to ensure that an access control decision
 made using the parent's label is also consistent on the child tables.

Robert and I understand the concern that you have.  The answer, at least
for now, is that we don't agree with you.  PG doesn't consider child
tables to be independent objects when they're being accessed through the
parent.  As such, they don't have their own permissions checking.

 If we control access to child tables using the child's label, there is no
 need to require an identical label across an entire inheritance hierarchy.
 But that requires the original rte->requiredPerms of the child tables.
 Now it is cleared at expand_inherited_rtentry(), so we have no way to
 control access to child tables using the child's label. :(

If you want to argue that we should care about the child's permissions,
or do something different with regard to inheritance, then you need to
make that argument for all of PG, not just try to do what you think is
right in the security definer framework.

 From the viewpoint of MAC, both of the following SQL statements should be
 denied when access to parent_tbl is allowed but child_tbl is denied.

KaiGai, this is not a MAC vs. DAC difference.  This is a question of
what is an object and if a child table is really an independent object
from a parent table.  In PG, we've decided they're not.  We should
probably do more to make that clearer in PG, rather than have different
parts of the system treat them differently.

Thanks,

Stephen


signature.asc
Description: Digital signature


[HACKERS] Git migration timeline

2010-08-16 Thread Magnus Hagander
Hi!

According to the decision at the developer meeting, the migration to
git should happen 17-20 Aug. Here's my proposed timeline. This will
obviously affect development work some, and since the original
timeline called for us having already released 9.0 by then ;)

1. Tuesday evening, around 19:00 central european time, which is 17:00
GMT or 12:00 EST, I will freeze the current cvs repository. I will do
this by disabling committer login on that box, so please note that
this will also make it impossible for committers to do a cvs update
from the authenticated repository. The anonymous repository will of
course still be available.

2. Tuesday evening and Wednesday morning (European time again, please
extrapolate as necessary yourself :P), I will convert the latest cvs
version, including running all my comparison scripts (this takes quite
a long time). Once the conversion is done, I'll snapshot whatever we
have by then so we can back out to that.

3. I will then as soon as this is done run the script to remove
keywords from active branches.

4. At this point, I will push a copy of this to the
postgresql-migration.git project on git.postgresql.org, asking
everybody to test. Those of you around, please do so as quickly as you
can. It will also be made available to all committers at this time as
a read-only repository.

5. Once enough is verified, we'll open the repository up for commits.
I expect this to happen sometime on Thursday, probably, assuming there
are no issues.

6. Once things have been running for a few days, we'll replace the
postgresql.git project on git.postgresql.org with an automatic push of
the repository, renaming the old one to something like
postgresql-old.git for a while.

7. As things are done, I'll get the anoncvs mirror moving as soon as
possible. But I want to make sure that the git stuff is fully working
before we do that.


This means the following:
* If anybody is working on some specific patch they want to get in
soon, please try to do so before tuesday evening european time.

* If you are working off the old git tree, you are recommended to do a
pull sometime between then and Thursday. That'll make sure you have
the latest copy of the *old* git repository. You can then generate a
patch off that and submit it or re-apply it to your old repository. Note
that the new repository will *NOT* be compatible with the old one, so
you'll need to do a manual move over. All commits will have new
SHA1s.

* Buildfarm coverage will obviously be weak until BF members have been
converted to run off git. Those running directly against anoncvs
should have a fair chance to still work on the new one once that's up,
but the rsync-the-cvs-repository method will no longer work.

Um, I'm sure there's more, but...
Are there any objections to this timeline, or will this work out
reasonably well for everybody?


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Committers info for the git migration - URGENT!

2010-08-16 Thread Magnus Hagander
Attention committers!!  ;)

When we migrate to git, we will do a one-time mapping of your old
username to an email address (as was discussed at the developer
meeting in Ottawa earlier this year). This is stamped on every commit
you have ever done, since that's how git works. It's part of the
commit itself, so it *cannot* be changed once this is done. We can of
course change which email address is used for *new* commits, but not
for the existing ones once they have gone in.

The current mapping used is the same one as on git.postgresql.org (see
attached file).

Per discussions earlier on this list, we encourage people to use an
email address that is permanent and stable, and does not for example
change if you change your ISP or if you change employer. But in the
end, the decision of which email address to use is up to you.

If you want to *change* your email address from the one in the list
attached here, please let me know *ASAP*. At the latest I need to know
this before Tuesday evening European time (see separate email about
migration timeline).

Since there is now a very tight timeline on this (sorry!), please ping
any other committers you have on your IM/IRC list that you know don't
read their -hackers email daily...

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/
  'adunstan' : ('Andrew Dunstan', 'and...@dunslane.net'),
  'alvherre' : ('Alvaro Herrera', 'alvhe...@alvh.no-ip.org'),
  'barry' : ('Barry Lind', 'ba...@xythos.com'),
  'darcy' : ('D''Arcy J.M. Cain', 'da...@druid.net'),
  'davec' : ('Dave Cramer', 'da...@fastcrypt.com'),
  'dennis' : ('Dennis Bjorklund', 'd...@zigo.dhs.org'),
  'heikki' : ('Heikki Linnakangas', 'hei...@enterprisedb.com'),
  'inoue' : ('Hiroshi Inoue', 'in...@tpf.co.jp'),
  'ishii' : ('Tatsuo Ishii', 'is...@postgresql.org'),
  'itagaki' : ('Itagaki Takahiro', 'itagaki.takah...@oss.ntt.co.jp'),
  'joe' : ('Joe Conway', 'm...@joeconway.com'),
  'jurka' : ('Kris Jurka', 'bo...@ejurka.com'),
  'mergl' : ('Edmund Mergl', 'e.me...@bawue.de'),
  'meskes' : ('Michael Meskes', 'mes...@postgresql.org'),
  'mha' : ('Magnus Hagander', 'mag...@hagander.net'),
  'momjian' : ('Bruce Momjian', 'br...@momjian.us'),
  'neilc' : ('Neil Conway', 'ne...@samurai.com'),
  'peter' : ('Peter Mount', 'pe...@retep.org.uk'),
  'petere' : ('Peter Eisentraut', 'pete...@gmx.net'),
  'pgsql' : ('PostgreSQL Daemon', 'webmas...@postgresql.org'),
  'pjw' : ('Philip Warner', 'p...@rhyme.com.au'),
  'rhaas' : ('Robert Haas', 'robertmh...@gmail.com'),
  'scrappy' : ('Marc G. Fournier', 'scra...@hub.org'),
  'sriggs' : ('Simon Riggs', 'si...@2ndquadrant.com'),
  'stark' : ('Greg Stark', 'st...@mit.edu'),
  'teodor' : ('Teodor Sigaev', 'teo...@sigaev.ru'),
  'tgl' : ('Tom Lane', 't...@sss.pgh.pa.us'),
  'thomas' : ('Thomas G. Lockhart', 'lockh...@alumni.caltech.edu'),
  'vadim' : ('Vadim B. Mikheev', 'vadi...@yahoo.com'),
  'vev' : ('Vince Vielhaber', 'v...@michvhf.com'),
  'wieck' : ('Jan Wieck', 'janwi...@yahoo.com'),


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proposal / proof of concept: Triggers on VIEWs

2010-08-16 Thread David Fetter
On Mon, Aug 16, 2010 at 09:05:12AM +0100, Dean Rasheed wrote:
 On 14 August 2010 23:22, Dean Rasheed dean.a.rash...@gmail.com wrote:
  I'll try to post an updated patch then, with some real trigger
  code,
 
 I've moved this to a new thread, with a WIP patch that allows 3 types
 of triggers to be added to VIEWs:
 http://archives.postgresql.org/pgsql-hackers/2010-08/msg01030.php
 
 Comments welcome.

Please add this to the next commitfest :)

https://commitfest.postgresql.org/action/commitfest_view?id=7

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PL/pgSQL EXECUTE '..' USING with unknown

2010-08-16 Thread Cédric Villemain
2010/8/5 Heikki Linnakangas heikki.linnakan...@enterprisedb.com:
 There's a little problem with EXECUTE USING when the parameters are of type
 unknown (going back to 8.4 where EXECUTE USING was introduced):

 do $$
 BEGIN
  EXECUTE 'SELECT to_date($1, $2)' USING '17-DEC-80', 'DD-MON-YY';
 END;
 $$;
 ERROR:  failed to find conversion function from unknown to text
 CONTEXT:  SQL statement "SELECT to_date($1, $2)"
 PL/pgSQL function inline_code_block line 2 at EXECUTE statement

 The corresponding case works fine when used with PREPARE/EXECUTE:

Yes, and you point out another thing. EXECUTE is a way to bypass the
named prepare statement, to be sure the query is replanned each time.
Unfortunately, the current implementation of EXECUTE USING is not working
this way. If I read correctly, the internal cursor receives parameters
and is similar to a named prepare in the plan it produces.

I am in favor of having a complete replan for EXECUTE USING, or at least
changing the docs:
http://www.postgresql.org/docs/current/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN
«An EXECUTE with a simple constant command string and some USING
parameters, as in the first example above, is functionally equivalent
to just writing the command directly in PL/pgSQL and allowing
replacement of PL/pgSQL variables to happen automatically. The
important difference is that EXECUTE will re-plan the command on each
execution, generating a plan that is specific to the current parameter
values; whereas PL/pgSQL normally creates a generic plan and caches it
for re-use. In situations where the best plan depends strongly on the
parameter values, EXECUTE can be significantly faster; while when the
plan is not sensitive to parameter values, re-planning will be a
waste.»



 postgres=# PREPARE foostmt AS SELECT to_date($1, $2);
 PREPARE
 postgres=# EXECUTE foostmt ('17-DEC-80', 'DD-MON-YY');
  to_date
 
  1980-12-17
 (1 row)

 With PREPARE/EXECUTE, the query is analyzed with parse_analyze_varparams()
 which allows unknown param types to be deduced from the context. Seems we
 should use that for EXECUTE USING as well, but there's no SPI interface for
 that.

 Thoughts? Should we add an SPI_prepare_varparams() function and use that?

 --
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

 --
 Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-hackers




-- 
Cédric Villemain               2ndQuadrant
http://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Git migration timeline

2010-08-16 Thread Tom Lane
Magnus Hagander mag...@hagander.net writes:
 According to the decision at the developer meeting, the migration to
 git should happen 17-20 Aug. Here's my proposed timeline. This will
 obviously affect development work some, and since the original
 timeline called for us having already released 9.0 by then ;)

Core is currently talking about exactly when we want to push out
9.0beta5 and/or 9.1alpha1, and the scheduling of the git transition has
to wait on that decision.  I don't think we're going to be ready for it
to begin on Tuesday.  Ignoring the exact timing, this plan sounds good
except for this bit:

 1. Tuesday evening, around 19:00 central european time, which is 17:00
 GMT or 12:00 EST, I will freeze the current cvs repository. I will do
 this by disabling committer login on that box, so please note that
 this will also make it impossible for committers to do a cvs update
 from the authenticated repository.

That's not really going to do, especially since it will also lock out
cvs log.  I certainly want to do a final update and cvs2cl after the
last commits have happened, and I imagine other people will want that
too.  If there were a way to make the repository read-only that would be
nice, but I think what we're going to need to settle for is relying on
the committers to not do any more commits after the specified time.

I can see the point of wanting to be dead certain the repository isn't
changing under you during the data migration.  Perhaps we need an agreed
window between last call for commits and the actual lock-out.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Git migration timeline

2010-08-16 Thread Aidan Van Dyk
* Tom Lane t...@sss.pgh.pa.us [100816 09:58]:
 
 That's not really going to do, especially since it will also lock out
 cvs log.  I certainly want to do a final update and cvs2cl after the
 last commits have happened, and I imagine other people will want that
 too.  If there were a way to make the repository read-only that would be
 nice, but I think what we're going to need to settle for is relying on
 the committers to not do any more commits after the specified time.
 
 I can see the point of wanting to be dead certain the repository isn't
 changing under you during the data migration.  Perhaps we need an agreed
 window between last call for commits and the actual lock-out.

What about locking it just for the two minutes needed to sync the
repository to wherever he wants to do the conversion from?  There isn't
any reason for the repository to be locked while he does the
conversion/checks, but he does need a guaranteed consistent copy of the
repository to work from.

I'ld suggest having the repository locked for long enough to:
1) Get a consistent rsync/snapshot/tarball of $CVSROOT
2) Send an email to committers saying "CVS repository has been taken,
   don't commit/tag/branch on it anymore, please"

And then unlock it with the expectation that committers will continue in
their good stewardship of the CVS repository (in this case meaning not
make any changes to it anymore) as they are already doing.

a.

-- 
Aidan Van Dyk Create like a god,
ai...@highrise.ca   command like a king,
http://www.highrise.ca/   work like a slave.


signature.asc
Description: Digital signature


Re: [HACKERS] Git migration timeline

2010-08-16 Thread Magnus Hagander
On Mon, Aug 16, 2010 at 15:58, Tom Lane t...@sss.pgh.pa.us wrote:
 Magnus Hagander mag...@hagander.net writes:
 According to the decision at the developer meeting, the migration to
 git should happen 17-20 Aug. Here's my proposed timeline. This will
 obviously affect development work some, and since the original
 timeline called for us having already released 9.0 by then ;)

 Core is currently talking about exactly when we want to push out
 9.0beta5 and/or 9.1alpha1, and the scheduling of the git transition has
 to wait on that decision.  I don't think we're going to be ready for it
 to begin on Tuesday.  Ignoring the exact timing, this plan sounds good
 except for this bit:

Ok. The sooner you can let me (and others) know, the better.

Note that if it gets pushed past the end of this week (for the whole
process), it's quite likely that the whole process will be stretched
out further. I'll have other commitments I need to take care of in the
meantime - this week I've set aside a fair amount of dedicated time
to work on this. Next week, for example, almost all the work would
have to be done during the evenings (European time) outside of work
hours, which obviously makes for a lot less available time.


 1. Tuesday evening, around 19:00 central european time, which is 17:00
 GMT or 12:00 EST, I will freeze the current cvs repository. I will do
 this by disabling committer login on that box, so please note that
 this will also make it impossible for committers to do a cvs update
 from the authenticated repository.

 That's not really going to do, especially since it will also lock out
 cvs log.  I certainly want to do a final update and cvs2cl after the
 last commits have happened, and I imagine other people will want that
 too.  If there were a way to make the repository read-only that would be
 nice, but I think what we're going to need to settle for is relying on
 the committers to not do any more commits after the specified time.

You can still rsync the copy off the anoncvs server (or use the
anoncvs one). That one is an exact rsync of the main one, so I don't
see why something like cvs2cl wouldn't work off that one
automatically. I was just assuming every developer was already using
an rsynced copy for that kind of work :-)

I would like to make the repo readonly if I could, but I don't know a
reliable way of doing that with cvs. If someone can enlighten me on
one, that's obviously preferred.


 I can see the point of wanting to be dead certain the repository isn't
 changing under you during the data migration.  Perhaps we need an agreed
 window between last call for commits and the actual lock-out.

To prevent that, I just need to shut it down for 2 minutes to rsync
the latest changes off it and onto the new git box. Maybe that's how
we should do it.

This is a defense against committers forgetting about things and
accidentally committing to the repository *after* it's been
snapshotted. I guess we can just declare those as ignorant and throw
away those commits.. :-)


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Git migration timeline

2010-08-16 Thread Tom Lane
Magnus Hagander mag...@hagander.net writes:
 On Mon, Aug 16, 2010 at 15:58, Tom Lane t...@sss.pgh.pa.us wrote:
 I can see the point of wanting to be dead certain the repository isn't
 changing under you during the data migration.  Perhaps we need an agreed
 window between last call for commits and the actual lock-out.

 To prevent that, I just need to shut it down for 2 minutes to rsync
 the latest changes off it and onto the new git box. Maybe that's how
 we should do it.

That sounds like a reasonable scheme to me, especially since it leaves
the cvs repository functional.  Dunno about you, but I want a fallback
plan in case this turns into a disaster ...

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Git migration timeline

2010-08-16 Thread Robert Haas
On Mon, Aug 16, 2010 at 9:58 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 1. Tuesday evening, around 19:00 central european time, which is 17:00
 GMT or 12:00 EST, I will freeze the current cvs repository. I will do
 this by disabling committer login on that box, so please note that
 this will also make it impossible for committers to do a cvs update
 from the authenticated repository.

 That's not really going to do, especially since it will also lock out
 cvs log.  I certainly want to do a final update and cvs2cl after the
 last commits have happened, and I imagine other people will want that
 too.  If there were a way to make the repository read-only that would be
 nice, but I think what we're going to need to settle for is relying on
 the committers to not do any more commits after the specified time.

 I can see the point of wanting to be dead certain the repository isn't
 changing under you during the data migration.  Perhaps we need an agreed
 window between last call for commits and the actual lock-out.

I would feel a lot more comfortable if the CVS repository was locked
down before we start running the migration.  Any accidental commits
would be a nuisance.  However, giving a window between last call for
commits and actual lock-out seems reasonable.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] security label support, part.2

2010-08-16 Thread Kevin Grittner
Stephen Frost sfr...@snowman.net wrote:
 
 PG doesn't consider child tables to be independent objects when
 they're being accessed through the parent. As such, they don't
 have their own permissions checking.
 
I've been thinking about this from the perspective of possible
eventual use by the Wisconsin Courts, and want to throw out my
perspective on long-term direction here, without venturing any
opinion on the immediate issue.
 
It wouldn't be totally insane for the courts to some day use
inheritance to deal with court cases.  All court cases have much in
common and have the same basic structure, but specific case types
need to store some additional information.  If we did that, we would
want different permissions on different case types -- for example,
juvenile cases are not open to the public as many case types are. 
We would also need the ability to revoke public permissions on
specific rows, as judges can seal cases or various pieces of
information on a case (like the address of a stalker victim).
 
The point being, we would want a structure something like (picking a
few of our case types):
 
Case
 \_ ChargeableCase
     \_ FelonyCase
     \_ MisdemeanorCase
     \_ CivilForfeitureCase
     \_ JuvenileCase
 \_ NonchargableCase
     \_ CivilCase
     \_ SmallClaimsCase
     \_ ProbateCase
     \_ MentalCommitmentCase
 
While most of these case types are a matter of public record
and subject to open records law disclosure requests (which we
largely avoid by putting what we can on the web site), juvenile and
mental commitment cases are confidential; unless you need to handle
something related to such a case to support its progress through the
courts, you're not supposed to see anything beyond such sketchy
information as the existence of the case number, a filing date, and
a caption where names are replaced by initials (e.g., "In the
interest of E.B.") -- and even that information is held back from
the web site because of possible data mining attacks.
 
Many of the features KaiGai has discussed would fit nicely with
court requirements -- and might even be prerequisites for
considering moving security to the database level.  Mandating
identical security for all tables in a hierarchy would be a problem.
We'd want to be able to `grant select on Case to public` and then
`revoke select on JuvenileCase, MentalCommitmentCase from
public` and have those cases disappear from selects against the
ancestor levels unless the user has the appropriate permission.  Or
less convenient, but still feasible, would be to grant nothing at
the ancestor levels, and grant what is appropriate at each child
level and have that affect the results of a query against the
ancestor.
 
Of course, if one was really careful, this could all be done by
adding views with appropriate permissions and blocking access to the
underlying ancestor tables, but that seems like a lot more work and
rather more error prone.
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Git migration timeline

2010-08-16 Thread Robert Haas
On Mon, Aug 16, 2010 at 10:15 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Magnus Hagander mag...@hagander.net writes:
 On Mon, Aug 16, 2010 at 15:58, Tom Lane t...@sss.pgh.pa.us wrote:
 I can see the point of wanting to be dead certain the repository isn't
 changing under you during the data migration.  Perhaps we need an agreed
 window between last call for commits and the actual lock-out.

 To prevent that, I just need to shut it down for 2 minutes to rsync
 the latest changes off it and onto the new git box. Maybe that's how
 we should do it.

 That sounds like a reasonable scheme to me, especially since it leaves
 the cvs repository functional.  Dunno about you, but I want a fallback
 plan in case this turns into a disaster ...

I don't think anybody proposed permanently deleting the CVS repository.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Git migration timeline

2010-08-16 Thread Magnus Hagander
On Mon, Aug 16, 2010 at 16:15, Tom Lane t...@sss.pgh.pa.us wrote:
 Magnus Hagander mag...@hagander.net writes:
 On Mon, Aug 16, 2010 at 15:58, Tom Lane t...@sss.pgh.pa.us wrote:
 I can see the point of wanting to be dead certain the repository isn't
 changing under you during the data migration.  Perhaps we need an agreed
 window between last call for commits and the actual lock-out.

 To prevent that, I just need to shut it down for 2 minutes to rsync
 the latest changes off it and onto the new git box. Maybe that's how
 we should do it.

 That sounds like a reasonable scheme to me, especially since it leaves
 the cvs repository functional.  Dunno about you, but I want a fallback
 plan in case this turns into a disaster ...

Oh, I had no intention whatsoever of *removing* it. Just disabling login to it.


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Committers info for the git migration - URGENT!

2010-08-16 Thread Robert Haas
On Mon, Aug 16, 2010 at 9:38 AM, Magnus Hagander mag...@hagander.net wrote:
 Attention committers!!  ;)

 When we migrate to git, we will do a one-time mapping of your old
 username to an email address (as was discussed on the developer
 meeting in Ottawa earlier this year). This is stamped on every commit
 you have ever done, since that's how git works. It's part of the
 commit itself, so it *cannot* be changed once this is done. We can of
 course change which email address is used for *new* commits, but not
 for the existing ones once they have gone in.

 The current mapping used is the same one as on git.postgresql.org (see
 attached file).

 Per discussions earlier on this list, we encourage people to use an
 email address that is permanent and stable, and does not for example
 change if you change your ISP or if you change employer. But in the
 end, the decision of which email address to use is up to you.

 If you want to *change* your email address from the one in the list
 attached here, please let me know *ASAP*. At the latest I need to know
 this before tuesday evening european time (see separate email about
 migration timeline).

 Since there is now a very tight timeline on this (sorry!), please ping
 any other committers you have on your IM/IRC list that you know don't
 read their -hackers email daily...

Please make me rh...@postgresql.org - thanks.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PL/pgSQL EXECUTE '..' USING with unknown

2010-08-16 Thread Tom Lane
=?ISO-8859-1?Q?C=E9dric_Villemain?= cedric.villemain.deb...@gmail.com writes:
 Yes, and you point out another thing. EXECUTE is a way to bypass the
 named prepare statement, to be sure the query is replanned each time.
 Unfortunately, the current implementation of EXECUTE USING is not working
 this way.

Uh ... what do you base that statement on?

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] shared_preload_libraries is ignored in single user mode

2010-08-16 Thread Robert Haas
2010/8/16 KaiGai Kohei kai...@ak.jp.nec.com:
 Although nobody paid attention, this seems to me to be a problem worth fixing.

 The attached patch fixes the problem with a simple idea: call
 process_shared_preload_libraries() in PostgresMain() when it is launched
 in single-user mode.

I have no confidence at all that this is a sane thing to do.  I think
any enhanced security provider that needs system objects to be
labelled should provide a script to label them after the fact.  You
can't count on everyone who wants to use SE-PostgreSQL having made
that decision at initdb time.  I think we want to keep single-user
mode as lean and mean as possible, so that people can rely on it when
they need to fix their broken database.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] JSON Patch for PostgreSQL - BSON Support?

2010-08-16 Thread Tom Lane
Andrew Dunstan and...@dunslane.net writes:
 If BSON is simply in effect an efficient encoding of JSON, then it's not 
 clear to me that we would want another type at all. Rather, we might 
 want to consider storing the data in this supposedly more efficient 
 format, and maybe also some conversion routines.

Hmm, that's an interesting plan ...

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] security label support, part.2

2010-08-16 Thread Stephen Frost
* Kevin Grittner (kevin.gritt...@wicourts.gov) wrote:
 Many of the features KaiGai has discussed would fit nicely with
 court requirements -- and might even be prerequisites for
 considering moving security to the database level.  Mandating
 identical security for all tables in a hierarchy would be a problem.

What you're describing isn't how inheritance used to work in PG anyway,
so it's not really like we've made things worse.  What used to happen is
that if your query against the parent table happened to hit a table you
didn't have access to, it'd fail outright with a permissions error, not
just skip over the things you didn't have access to.  That certainly
wasn't ideal.

I think what you're really looking for is RLS (Row-Level Security),
which I think we would want to implement independently of the
inheritance system (though it'd have to work with it, of course).
That's certainly something that I think would be great to have in PG and
would ideally be something which would address both of your "sometimes
everything is public except rows which look like X" and "all of these
types are non-public" situations.

I don't believe it's something that could be addressed *only* by
inheritance though, in any case.

Thanks,

Stephen


signature.asc
Description: Digital signature


[HACKERS] Bogus data in lock file shouldn't be FATAL?

2010-08-16 Thread Tom Lane
This complaint:
http://archives.postgresql.org/pgsql-admin/2010-08/msg00111.php

seems to suggest that this code in CreateLockFile() is not well-thought-out:

if (other_pid <= 0)
    elog(FATAL, "bogus data in lock file \"%s\": \"%s\"",
         filename, buffer);

as it means that a corrupted (empty, in this case) postmaster.pid file
prevents the server from starting until somebody intervenes manually.

I think that the original concern was that if we couldn't read valid
data out of postmaster.pid then we couldn't be sure if there were a
conflicting postmaster running.  But if that's the plan then
CreateLockFile is violating it further down, where it happily skips the 
PGSharedMemoryIsInUse check if it fails to pull shmem ID numbers from
the file.

We could perhaps address that risk another way: after having written
postmaster.pid, try to read it back to verify that it contains what we
wrote, and abort if not.  Then, if we can't read it during startup,
it's okay to assume there is no conflicting postmaster.
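
A simplified standalone sketch of that write-then-verify idea (not the
actual CreateLockFile() code; the file name, contents, and error handling
here are just placeholders):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * After writing the lock file, re-read it and abort if the contents do
 * not match what we wrote.
 */
static void
write_and_verify_pidfile(const char *path, const char *contents)
{
	FILE	   *fp;
	char		buf[256];
	size_t		nread;

	fp = fopen(path, "w");
	if (fp == NULL || fputs(contents, fp) == EOF || fclose(fp) != 0)
	{
		fprintf(stderr, "could not write \"%s\"\n", path);
		exit(1);
	}

	fp = fopen(path, "r");
	if (fp == NULL)
	{
		fprintf(stderr, "could not reopen \"%s\"\n", path);
		exit(1);
	}
	nread = fread(buf, 1, sizeof(buf) - 1, fp);
	buf[nread] = '\0';
	fclose(fp);

	/* if what we read back differs from what we wrote, do not trust it */
	if (strcmp(buf, contents) != 0)
	{
		fprintf(stderr, "lock file \"%s\" did not read back correctly\n", path);
		exit(1);
	}
}

int
main(void)
{
	write_and_verify_pidfile("postmaster.pid.test", "12345\n/some/data/dir\n");
	return 0;
}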

Or, given the infrequency of complaints, maybe it's better not to touch
this.  Thoughts?

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] JSON Patch for PostgreSQL - BSON Support?

2010-08-16 Thread Robert Haas
On Mon, Aug 16, 2010 at 11:05 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Andrew Dunstan and...@dunslane.net writes:
 If BSON is simply in effect an efficient encoding of JSON, then it's not
 clear to me that we would want another type at all. Rather, we might
 want to consider storing the data in this supposedly more efficient
 format, and maybe also some conversion routines.

 Hmm, that's an interesting plan ...

It is interesting, but I'm not sure that it will actually work out
well in practice.  If what we want to do is compress JSON, TOAST will
do that for us without any additional code, and probably a lot more
efficiently.  Of course, until someone tests it, we're just
speculating wildly.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Committers info for the git migration - URGENT!

2010-08-16 Thread Itagaki Takahiro
2010/8/16 Magnus Hagander mag...@hagander.net:
 If you want to *change* your email address from the one in the list
 attached here, please let me know *ASAP*. At the latest I need to know
 this before tuesday evening european time (see separate email about
 migration timeline).

Could you change my address to itagaki.takah...@gmail.com ?
Thanks.

-- 
Itagaki Takahiro

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bogus data in lock file shouldn't be FATAL?

2010-08-16 Thread Robert Haas
On Mon, Aug 16, 2010 at 11:18 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 This complaint:
 http://archives.postgresql.org/pgsql-admin/2010-08/msg00111.php

 seems to suggest that this code in CreateLockFile() is not well-thought-out:

                 if (other_pid <= 0)
                         elog(FATAL, "bogus data in lock file \"%s\": \"%s\"",
                                  filename, buffer);

 as it means that a corrupted (empty, in this case) postmaster.pid file
 prevents the server from starting until somebody intervenes manually.

 I think that the original concern was that if we couldn't read valid
 data out of postmaster.pid then we couldn't be sure if there were a
 conflicting postmaster running.  But if that's the plan then
 CreateLockFile is violating it further down, where it happily skips the
 PGSharedMemoryIsInUse check if it fails to pull shmem ID numbers from
 the file.

 We could perhaps address that risk another way: after having written
 postmaster.pid, try to read it back to verify that it contains what we
 wrote, and abort if not.  Then, if we can't read it during startup,
 it's okay to assume there is no conflicting postmaster.

What if it was readable when written but has since become unreadable?

My basic feeling on this is that manual intervention to start the
server is really undesirable and we should try hard to avoid needing
it.  That having been said, accidentally starting two postmasters at
the same time that are accessing the same data files would be several
orders of magnitude worse.  We can't afford to compromise on any
interlock mechanisms that are necessary to prevent that from
happening.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Conflicted names of error conditions.

2010-08-16 Thread Dmitriy Igrishin
Hey all,

According to
http://www.postgresql.org/docs/9.0/static/errcodes-appendix.html
some error conditions have non-unique *names*. These are:
modifying_sql_data_not_permitted,
prohibited_sql_statement_attempted,
reading_sql_data_not_permitted
from the "SQL Routine Exception" and "External Routine Exception" classes.

Should it be this way?

Regards,
Dmitriy


Re: [HACKERS] JSON Patch for PostgreSQL - BSON Support?

2010-08-16 Thread Christopher Browne
On Mon, Aug 16, 2010 at 11:21 AM, Robert Haas robertmh...@gmail.com wrote:
 On Mon, Aug 16, 2010 at 11:05 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Andrew Dunstan and...@dunslane.net writes:
 If BSON is simply in effect an efficient encoding of JSON, then it's not
 clear to me that we would want another type at all. Rather, we might
 want to consider storing the data in this supposedly more efficient
 format, and maybe also some conversion routines.

 Hmm, that's an interesting plan ...

 It is interesting, but I'm not sure that it will actually work out
 well in practice.  If what we want to do is compress JSON, TOAST will
 do that for us without any additional code, and probably a lot more
 efficiently.  Of course, until someone tests it, we're just
 speculating wildly.

Yep, that was exactly what struck me.  TOAST is quite likely to be a
good answer for this.

The reason to want some other binary format would be if there were
other benefits to be had.

An XML encoding format could be interesting if it allowed having
GIST-ish indexes to search for tags particularly efficiently.  I say
XML encoding because I've not got any reason to think that a
JSON/BSON-only format would necessarily be preferable.

But interesting isn't the same thing as the right answer.  For
now, TOAST seems perfectly reasonable.

If there's some wire format for XML that would allow more efficient
data transfer, that would be an improvement.  BSON sounds like it's
something like that, but only if it's better than flavour of the
week.
-- 
http://linuxfinances.info/info/linuxdistributions.html

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] JSON Patch for PostgreSQL - BSON Support?

2010-08-16 Thread Joshua D. Drake
On Mon, 2010-08-16 at 11:40 -0400, Christopher Browne wrote:
 On Mon, Aug 16, 2010 at 11:21 AM, Robert Haas robertmh...@gmail.com wrote:
  On Mon, Aug 16, 2010 at 11:05 AM, Tom Lane t...@sss.pgh.pa.us wrote:
  Andrew Dunstan and...@dunslane.net writes:
  If BSON is simply in effect an efficient encoding of JSON, then it's not
  clear to me that we would want another type at all. Rather, we might
  want to consider storing the data in this supposedly more efficient
  format, and maybe also some conversion routines.
 
  Hmm, that's an interesting plan ...
 
  It is interesting, but I'm not sure that it will actually work out
  well in practice.  If what we want to do is compress JSON, TOAST will
  do that for us without any additional code, and probably a lot more
  efficiently.  Of course, until someone tests it, we're just
  speculating wildly.
 
 Yep, that was exactly what struck me.  TOAST is quite likely to be a
 good answer for this.

Except: How much JSON data will actually be TOASTed?

Joshua D. Drake

-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bogus data in lock file shouldn't be FATAL?

2010-08-16 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Mon, Aug 16, 2010 at 11:18 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 We could perhaps address that risk another way: after having written
 postmaster.pid, try to read it back to verify that it contains what we
 wrote, and abort if not.  Then, if we can't read it during startup,
 it's okay to assume there is no conflicting postmaster.

 What if it was readable when written but has since become unreadable?

Yup, that's the weak spot in any such assumption.  One might also draw
an analogy to the case of failing to open postmaster.pid because of
permissions change, which seems at least as likely as a data change.
And we consider that as fatal, for good reason I think.

 My basic feeling on this is that manual intervention to start the
 server is really undesirable and we should try hard to avoid needing
 it.  That having been said, accidentally starting two postmasters at
 the same time that are accessing the same data files would be several
 orders of magnitude worse.  We can't afford to compromise on any
 interlock mechanisms that are necessary to prevent that from
 happening.

Yeah.  At the same time, it's really really bad to encourage people to
remove postmaster.pid manually as the first attempt to fix anything.
That completely destroys whatever interlock you thought you had.  So
it's not too hard to make a case that avoiding this scenario will really
make things safer not less so.

The bottom line here is that it's not clear to me whether changing this
would be a net reliability improvement or not.  Maybe better to leave
it alone.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Git migration timeline

2010-08-16 Thread Joshua D. Drake
On Mon, 2010-08-16 at 16:19 +0200, Magnus Hagander wrote:
 On Mon, Aug 16, 2010 at 16:15, Tom Lane t...@sss.pgh.pa.us wrote:
  Magnus Hagander mag...@hagander.net writes:
  On Mon, Aug 16, 2010 at 15:58, Tom Lane t...@sss.pgh.pa.us wrote:
  I can see the point of wanting to be dead certain the repository isn't
  changing under you during the data migration.  Perhaps we need an agreed
  window between last call for commits and the actual lock-out.
 
  To prevent that, I just need to shut it down for 2 minutes to rsync
  the latest changes off it and onto the new git box. Maybe that's how
  we should do it.
 
  That sounds like a reasonable scheme to me, especially since it leaves
  the cvs repository functional.  Dunno about you, but I want a fallback
  plan in case this turns into a disaster ...
 
 Oh, I had no intention whatsoever of *removing* it. Just disabling login to 
 it.

+1

JD

-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bogus data in lock file shouldn't be FATAL?

2010-08-16 Thread Alvaro Herrera
Excerpts from Tom Lane's message of Mon Aug 16 11:18:32 -0400 2010:

 We could perhaps address that risk another way: after having written
 postmaster.pid, try to read it back to verify that it contains what we
 wrote, and abort if not.  Then, if we can't read it during startup,
 it's okay to assume there is no conflicting postmaster.

Makes sense.

BTW some past evidence:
http://archives.postgresql.org/message-id/e3e180dc0905070719q58136caai23fbb777fd3c0...@mail.gmail.com

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bogus data in lock file shouldn't be FATAL?

2010-08-16 Thread Alvaro Herrera
Excerpts from Tom Lane's message of Mon Aug 16 11:49:51 -0400 2010:

 The bottom line here is that it's not clear to me whether changing this
 would be a net reliability improvement or not.  Maybe better to leave
 it alone.

In that case, maybe consider fsync'ing it.

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Conflicted names of error conditions.

2010-08-16 Thread Tom Lane
Dmitriy Igrishin dmit...@gmail.com writes:
 According to
 http://www.postgresql.org/docs/9.0/static/errcodes-appendix.html
 some error conditions have non-unique *names*. These are:
 modifying_sql_data_not_permitted,
 prohibited_sql_statement_attempted,
 and reading_sql_data_not_permitted,
 which appear in both the SQL Routine Exception and External Routine
 Exception classes.

 Is it supposed to be that way?

Yup, that's what the SQL standard calls them :-(.  In practice, either
underlying SQLSTATE will match that name in an EXCEPTION block, so
it doesn't matter a whole lot.  If you have a case where you feel it
does matter, you can trap by the SQLSTATE code instead.
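
The SQLSTATEs behind the duplicated names do differ: per the errcodes
appendix linked above, reading_sql_data_not_permitted is 2F004 in the SQL
Routine Exception class and 38004 in the External Routine Exception class.
As a rough client-side illustration (not anything from the docs), a libpq
caller can tell the two apart by the SQLSTATE rather than the name:

#include <stdio.h>
#include <string.h>
#include "libpq-fe.h"

static void
report_error(const PGresult *res)
{
    const char *sqlstate = PQresultErrorField(res, PG_DIAG_SQLSTATE);

    if (sqlstate == NULL)
        return;

    if (strcmp(sqlstate, "2F004") == 0)
        fprintf(stderr, "reading_sql_data_not_permitted (SQL routine)\n");
    else if (strcmp(sqlstate, "38004") == 0)
        fprintf(stderr, "reading_sql_data_not_permitted (external routine)\n");
    else
        fprintf(stderr, "SQLSTATE %s: %s", sqlstate, PQresultErrorMessage(res));
}

The same distinction is available in a PL/pgSQL EXCEPTION block by trapping
WHEN SQLSTATE '2F004' (or '38004') instead of the condition name.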

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bogus data in lock file shouldn't be FATAL?

2010-08-16 Thread Tom Lane
Alvaro Herrera alvhe...@commandprompt.com writes:
 Excerpts from Tom Lane's message of Mon Aug 16 11:49:51 -0400 2010:
 The bottom line here is that it's not clear to me whether changing this
 would be a net reliability improvement or not.  Maybe better to leave
 it alone.

 In that case, maybe consider fsync'ing it.

Hrm ... I had supposed we did fsync lockfiles, but a look at the code
shows not.  This seems like a clear oversight.  I think we should not
only add that, but back-patch it.  It seems entirely plausible that the
lack of an fsync is exactly what led to the original complaint.
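
The missing step is small; roughly (a sketch only, not the committed fix):

    /* ... after filling buffer with the pid, data directory, etc. ... */
    errno = 0;
    if (write(fd, buffer, strlen(buffer)) != strlen(buffer))
    {
        if (errno == 0)
            errno = ENOSPC;     /* if no error was signalled, assume no space */
        ereport(FATAL,
                (errcode_for_file_access(),
                 errmsg("could not write lock file \"%s\": %m", filename)));
    }

    if (pg_fsync(fd) != 0)      /* the step CreateLockFile currently skips */
        ereport(FATAL,
                (errcode_for_file_access(),
                 errmsg("could not fsync lock file \"%s\": %m", filename)));

    if (close(fd) != 0)
        ereport(FATAL,
                (errcode_for_file_access(),
                 errmsg("could not close lock file \"%s\": %m", filename)));

Whether those failures deserve FATAL or something softer is part of the
same judgment call being discussed in this thread.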

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] refactoring comment.c

2010-08-16 Thread Alvaro Herrera
Excerpts from KaiGai Kohei's message of Mon Aug 16 00:19:54 -0400 2010:
 (2010/08/16 11:50), Robert Haas wrote:

 When we were developing largeobject access controls, Tom Lane commented
 as follows:
 
 * Re: [HACKERS] [PATCH] Largeobject access controls
   http://marc.info/?l=pgsql-hackersm=125548822906571w=2
 | I notice that the patch decides to change the pg_description classoid for
 | LO comments from pg_largeobject's OID to pg_largeobject_metadata's.  This
 | will break existing clients that look at pg_description (eg, pg_dump and
 | psql, plus any other clients that have any intelligence about comments,
 | for instance it probably breaks pgAdmin).  And there's just not a lot of
 | return that I can see.  I agree that using pg_largeobject_metadata would
 | be more consistent given the new catalog layout, but I'm inclined to think
 | we should stick to the old convention on compatibility grounds.  Given
 | that choice, for consistency we'd better also use pg_largeobject's OID not
 | pg_largeobject_metadata's in pg_shdepend entries for LOs.
 
 He was concerned about existing applications which have knowledge about the
 internal layout of system catalogs, so I fixed up the patch according to the
 suggestion.

I think that with this patch we have the return for the change that we
didn't have previously.  A patch that changes it should also fix pg_dump
and psql at the same time, but IMHO it doesn't make sense to keep adding
grotty hacks on top of it.

Maybe we could do with a grotty hack in obj_description() instead?
(...checks...) 
Oh, it's defined as a SQL function directly in pg_proc.h :-(

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bogus data in lock file shouldn't be FATAL?

2010-08-16 Thread Tom Lane
Alvaro Herrera alvhe...@commandprompt.com writes:
 In that case, maybe consider fsync'ing it.

BTW, I see you already proposed that in the thread at
http://archives.postgresql.org/message-id/e3e180dc0905070719q58136caai23fbb777fd3c0...@mail.gmail.com
I'm not sure how come the idea fell through the cracks, but we should
surely have done it then.  I think I was reading the initial complaint
as being just logfile corruption, and failed to make the connection to
the postmaster.pid file.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Committers info for the git migration - URGENT!

2010-08-16 Thread Alvaro Herrera
Excerpts from Magnus Hagander's message of Mon Aug 16 09:38:12 -0400 2010:

 If you want to *change* your email address from the one in the list
 attached here, please let me know *ASAP*. At the latest I need to know
 this before tuesday evening european time (see separate email about
 migration timeline).

FWIW my address is fine.

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Committers info for the git migration - URGENT!

2010-08-16 Thread Joe Conway
On 08/16/2010 06:38 AM, Magnus Hagander wrote:
 The current mapping used is the same one as on git.postgresql.org (see
 attached file).
 
 Per discussions earlier on this list, we encourage people to use an
 email address that is permanent and stable, and does not for example
 change if you change your ISP or if you change employer. But in the
 end, the decision of which email address to use is up to you.
 
 If you want to *change* your email address from the one in the list
 attached here, please let me know *ASAP*. At the latest I need to know
 this before tuesday evening european time (see separate email about
 migration timeline).

Interesting list -- some names on there I haven't seen in a long time...

Mine is fine.

Thanks,

Joe

-- 
Joe Conway
credativ LLC: http://www.credativ.us
Linux, PostgreSQL, and general Open Source
Training, Service, Consulting,  24x7 Support





Re: [HACKERS] JSON Patch for PostgreSQL - BSON Support?

2010-08-16 Thread Charles Pritchard

On 8/16/10 8:40 AM, Christopher Browne wrote:

 On Mon, Aug 16, 2010 at 11:21 AM, Robert Haas robertmh...@gmail.com wrote:
  On Mon, Aug 16, 2010 at 11:05 AM, Tom Lane t...@sss.pgh.pa.us wrote:
   Andrew Dunstan and...@dunslane.net writes:
    If BSON is simply in effect an efficient encoding of JSON, then it's not
    clear to me that we would want another type at all. Rather, we might
    want to consider storing the data in this supposedly more efficient
    format, and maybe also some conversion routines.
  
   Hmm, that's an interesting plan ...
  
  It is interesting, but I'm not sure that it will actually work out
  well in practice.  If what we want to do is compress JSON, TOAST will
  do that for us without any additional code, and probably a lot more
  efficiently.  Of course, until someone tests it, we're just
  speculating wildly.
 
 Yep, that was exactly what struck me.  TOAST is quite likely to be a
 good answer for this.
 
 The reason to want some other binary format would be if there were
 other benefits to be had.
 
 An XML encoding format could be interesting if it allowed having
 GIST-ish indexes to search for tags particularly efficiently.  I say
 XML encoding because I've not got any reason to think that a
 JSON/BSON-only format would necessarily be preferable.
 
 But interesting isn't the same thing as the right answer.  For
 now, TOAST seems perfectly reasonable.
 
 If there's some wire format for XML that would allow more efficient
 data transfer, that would be an improvement.  BSON sounds like it's
 something like that, but only if it's better than flavour of the
 week.

XML encoding has certainly been investigated within the W3C public docs:
http://www.w3.org/2003/08/binary-interchange-workshop/Report.html  
(discussion)

http://www.w3.org/TR/xbc-characterization/ (summary)

Leading to the current draft of EXI:
http://www.w3.org/XML/EXI/

The spec is a rather large undertaking. It makes sense to add to the XML 
ToDo wiki page.

EXI will certainly be better than TOAST for larger XML docs.

...

BSON does not compress text content -- TOAST would still have its 
advantages.

It mainly shortens the representation of JSON data structures.

Again, I think the primary benefit of BSON would be data traversal.
The benefit is the same for a client receiving BSON, as the server.

Data lengths are specified, allowing quick optimizations for things like 
key_exists
and equivalencies. Client's supporting BSON could benefit from a quick 
pass-through.
And I'd imagine a very slight benefit toward indexing, were GIN / hstore 
processes used.
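
To make the traversal point concrete, here is a rough sketch of a
length-guided key lookup over a BSON-style document, per my reading of the
BSON spec; only a handful of element types are handled and little-endian
layout is assumed, so treat it as an illustration rather than working code
for every case:

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

static bool
bson_key_exists(const uint8_t *doc, const char *key)
{
    int32_t     doclen;
    const uint8_t *p = doc;
    const uint8_t *end;

    memcpy(&doclen, p, 4);              /* total document length */
    end = doc + doclen - 1;             /* last byte is the 0x00 terminator */
    p += 4;

    while (p < end)
    {
        uint8_t     type = *p++;
        const char *name = (const char *) p;

        p += strlen(name) + 1;          /* skip element name (cstring) */
        if (strcmp(name, key) == 0)
            return true;

        switch (type)                   /* skip the value by its known size */
        {
            case 0x01: p += 8; break;   /* double */
            case 0x08: p += 1; break;   /* boolean */
            case 0x10: p += 4; break;   /* int32 */
            case 0x12: p += 8; break;   /* int64 */
            case 0x02:                  /* string: int32 length + bytes */
                {
                    int32_t len;

                    memcpy(&len, p, 4);
                    p += 4 + len;
                    break;
                }
            case 0x03:                  /* embedded document */
            case 0x04:                  /* array: length includes itself */
                {
                    int32_t len;

                    memcpy(&len, p, 4);
                    p += len;
                    break;
                }
            default:
                return false;           /* type not handled in this sketch */
        }
    }
    return false;
}

The point is that no value ever has to be decoded in order to skip past it,
which is where the key_exists and pass-through benefits would come from;
whether that wins anything once the datum is TOASTed is exactly the
unanswered numbers question.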


Still, as has been noted on this thread, we don't have numbers to work
with.  With json as a core data type and bson as a possible function
working with the json type, there's not much of a risk going in either
direction (text + TOAST, bson + TOAST).

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PL/pgSQL EXECUTE '..' USING with unknown

2010-08-16 Thread Heikki Linnakangas

On 16/08/10 03:35, Tom Lane wrote:

 Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:

  One approach is to handle the conversion from unknown to the right data
  type transparently in the backend. Attached patch adds a
  coerce-param-hook for fixed params that returns a CoerceViaIO node to
  convert the param to the right type at runtime. That's quite similar to
  the way unknown constants are handled.

 The idea of using a coerce_hook instead of inventing several new API
 layers is attractive, but have you checked that there are no callers
 for which this would be a bad idea?

That code is used in a lot of different contexts, but I can't see any
where this could cause a problem. In general, I can't think of a case
where we would want to throw an error on an unknown parameter where we
accept an unknown constant at the same location. Completely rejecting
unknown parameters might make sense in some contexts, but that's not the
current behavior either, unknown parameters are accepted in some contexts.

 Another issue is that this fails to mimic the usual varparams behavior
 that a Param of unknown type should be resolved to only one type when it
 is referenced in multiple places.  I'm not sure that that's a critical
 behavior, but I'm definitely not sure that it's not.

Yeah, that's exactly what I was referring to when I said:

 The patch doesn't currently check that a parameter is only resolved to one type
 in the same query, but that can be added.


I'll add that check. Better to be conservative and relax it later if 
needed, than to be lenient now and regret it later.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Committers info for the git migration - URGENT!

2010-08-16 Thread Magnus Hagander
On Mon, Aug 16, 2010 at 17:20, Itagaki Takahiro
itagaki.takah...@gmail.com wrote:
 2010/8/16 Magnus Hagander mag...@hagander.net:
 If you want to *change* your email address from the one in the list
 attached here, please let me know *ASAP*. At the latest I need to know
 this before tuesday evening european time (see separate email about
 migration timeline).

 Could you change my address to itagaki.takah...@gmail.com ?
 Thanks.

Updated.


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] JSON Patch for PostgreSQL - BSON Support?

2010-08-16 Thread Joseph Adams
On Sun, Aug 15, 2010 at 11:47 PM, Andrew Dunstan and...@dunslane.net wrote:

 If BSON is simply in effect an efficient encoding of JSON, then it's not
 clear to me that we would want another type at all. Rather, we might want to
 consider storing the data in this supposedly more efficient format, and
 maybe also some conversion routines.

An issue is that the current JSON data type implementation preserves
the original text (meaning if you say '[1,2,\u0020   ]'::JSON, it
will yield '[1,2,\u0020   ]' rather than '[1,2, ]').  I haven't
researched BSON much at all, but I seriously doubt that part of its
spec includes handling external JSON encoding details like whitespace
and superfluous escapes.

Even though I spent a long time implementing it, the original text
preservation feature should be dropped, in my opinion.  Users tend to
care more about the data inside of a JSON value rather than how it's
encoded, and replacement of values can have funky consequences on
indentation when working with indented JSON text (similar to how
pasting in a text editor puts pasted content at the wrong indent
level).

By dropping original text preservation, in the future (or now), JSON
could be encoded in BSON (or a more efficient format) in the database
rather than in JSON-encoded text.

Also, an idea would be to make json_send and json_recv (binary JSON
send/receive) use BSON rather than JSON-encoded text, as
sending/receiving JSON-encoded text is exactly what text send/receive
do.


Joey Adams

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proposal / proof of concept: Triggers on VIEWs

2010-08-16 Thread Dean Rasheed
On 16 August 2010 14:45, David Fetter da...@fetter.org wrote:
 Please add this to the next commitfest :)


Done.

Thanks,
Dean


 https://commitfest.postgresql.org/action/commitfest_view?id=7

 Cheers,
 David.
 --
 David Fetter da...@fetter.org http://fetter.org/
 Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
 Skype: davidfetter      XMPP: david.fet...@gmail.com
 iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

 Remember to vote!
 Consider donating to Postgres: http://www.postgresql.org/about/donate


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] JSON Patch for PostgreSQL - BSON Support?

2010-08-16 Thread Heikki Linnakangas

On 16/08/10 20:17, Joseph Adams wrote:

 Also, an idea would be to make json_send and json_recv (binary JSON
 send/receive) use BSON rather than JSON-encoded text, as
 sending/receiving JSON-encoded text is exactly what text send/receive
 do.


The usual reason to use the binary format is performance, so it doesn't 
make much sense to use BSON for that if the internal storage format is 
something else. It would most likely be slower, not faster, than sending 
the string as is.


Of course, if you switch to using BSON as the internal storage format, 
then it's natural to use that for the binary I/O format too.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP: Triggers on VIEWs

2010-08-16 Thread Dean Rasheed
On 15 August 2010 18:38, Dean Rasheed dean.a.rash...@gmail.com wrote:
 There are still a number of things left todo:
  - extend ALTER VIEW with enable/disable trigger commands

On further reflection, I wonder if the ability to disable VIEW
triggers is needed/wanted at all. I just noticed that while it is
possible to disable a RULE on a TABLE, it is not possible to do so on
a VIEW. This certainly makes sense for the _RETURN rule, although
possibly some people might have a use for disabling other rules on
views. The situation with triggers is similar - disabling an INSTEAD
OF trigger would be pointless, and could only lead to errors when
updating the view. Some people may have a use case for disabling
BEFORE and AFTER statement triggers on views, but I suspect that the
number of such cases is small, and I'm tempted to omit this, for now
at least.

Thoughts?

 - Dean

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Git migration timeline

2010-08-16 Thread Tom Lane
I wrote:
 Magnus Hagander mag...@hagander.net writes:
 According to the decision at the developer meeting, the migration to
 git should happen 17-20 Aug. Here's my proposed timeline. This will
 obviously affect development work some, and since the original
 timeline called for us having already released 9.0 by then ;)

 Core is currently talking about exactly when we want to push out
 9.0beta5 and/or 9.1alpha1, and the scheduling of the git transition has
 to wait on that decision.  I don't think we're going to be ready for it
 to begin on Tuesday.

The current feeling among core seems to be that we should allow the git
conversion to proceed according to Magnus' proposed schedule.  This
would mean that we will *not* be able to wrap either a 9.0 update or
9.1alpha1 releases this week.  What we'd probably try to do instead is
wrap 9.1alpha1 early next week, as the first attempt to generate
tarballs from the git repository, and then wrap 9.0beta5 (or rc1) later
in the week.  This slips the 9.0 schedule an extra week, but considering
that nobody seems to be doing anything about the open 9.0 issues, that
was likely to happen anyway.

If anybody's really unhappy with this timeline, speak up now.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP: Triggers on VIEWs

2010-08-16 Thread Tom Lane
Dean Rasheed dean.a.rash...@gmail.com writes:
 On 15 August 2010 18:38, Dean Rasheed dean.a.rash...@gmail.com wrote:
 There are still a number of things left todo:
  - extend ALTER VIEW with enable/disable trigger commands

 On further reflection, I wonder if the ability to disable VIEW
 triggers is needed/wanted at all.

AFAIK the only real use for disabling triggers is in connection with
trigger-based replication systems.  A view wouldn't be carrying
replication-related triggers anyway, so I think this could certainly
be left out of a first cut.

You aren't saying that you can see some reason why this couldn't be
added later, if wanted, correct?

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Todays git migration results

2010-08-16 Thread Tom Lane
Magnus Hagander mag...@hagander.net writes:
 Attached is a ZIP file with the diffs generated when converting the
 cvs repo to git based off a cvs snapshot from this morning. It
 contains a diff file for every branch and every tag present. (If a
 file is missing, that means there were no diffs for that branch/tag).

 It's a lot of diffs - 135. But many of those are because the exact same
 thing is in all tags on a branch. The directory unique contains one
 copy of a unique set of diffs (doesn't look at the individual changes,
 just the complete diff file), which is only 30 different files.

 As before, almost everything seems related to the initial import and
 vendor branch. There is nothing in any code.

I'm curious about the discrepancies in the $Date$ tags in some of the
doc/FAQ_xxx files.  It's surely not a showstopper, but I'd feel better
if we understood the cause of that.  Everything else seems to be
disagreement about the vendor branch version numbers, which I'm happy
to write off as a conversion artifact.

The other thing that I'd like to see some data on is the commit log
entries.  Can we produce anything comparable to cvs2cl output from
the test repository?

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Git migration timeline

2010-08-16 Thread Robert Haas
On Mon, Aug 16, 2010 at 1:47 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 I wrote:
 Magnus Hagander mag...@hagander.net writes:
 According to the decision at the developer meeting, the migration to
 git should happen 17-20 Aug. Here's my proposed timeline. This will
 obviously affect development work some, and since the original
 timeline called for us having already released 9.0 by then ;)

 Core is currently talking about exactly when we want to push out
 9.0beta5 and/or 9.1alpha1, and the scheduling of the git transition has
 to wait on that decision.  I don't think we're going to be ready for it
 to begin on Tuesday.

 The current feeling among core seems to be that we should allow the git
 conversion to proceed according to Magnus' proposed schedule.  This
 would mean that we will *not* be able to wrap either a 9.0 update or
 9.1alpha1 releases this week.  What we'd probably try to do instead is
 wrap 9.1alpha1 early next week, as the first attempt to generate
 tarballs from the git repository, and then wrap 9.0beta5 (or rc1) later
 in the week.  This slips the 9.0 schedule an extra week, but considering
 that nobody seems to be doing anything about the open 9.0 issues, that
 was likely to happen anyway.

 If anybody's really unhappy with this timeline, speak up now.

That sounds good, except for the part about nobody doing anything
about the 9.0 open issues.  Those issues are:

* Backup procedure is wrong? - Nobody's been able to clearly
articulate what the problem is, and according to Fujii Masao it's been
this way since 8.2.

* Walreceiver crashes in AIX - The person who originally reported this
problem has been slow to get back to us with the information needed to
figure out what is going on.  It would be helpful if someone else
could test to see if this a general problem with AIX, or something
specific to the reporter's installation.  Or if someone wants to
provide me with a login...

* BUG #5595: Documentation is not installs from VPATH build. - I'm not
sure this is really a release-blocker, but either way, I have been
assuming it's Peter's issue to fix.

* trace_recovery_messages inconsistency - This one seems like a simple
case of deciding what makes most sense, and doing it.  We should
definitely fix this before the wrap.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Todays git migration results

2010-08-16 Thread Magnus Hagander
On Mon, Aug 16, 2010 at 20:11, Tom Lane t...@sss.pgh.pa.us wrote:
 Magnus Hagander mag...@hagander.net writes:
 Attached is a ZIP file with the diffs generated when converting the
 cvs repo to git based off a cvs snapshot from this morning. It
 contains a diff file for every branch and every tag present. (If a
 file is missing, that means there were no diffs for that branch/tag).

 It's a lot of diffs - 135. But many of those are because the exact same
 thing is in all tags on a branch. The directory unique contains one
 copy of a unique set of diffs (doesn't look at the individual changes,
 just the complete diff file), which is only 30 different files.

 As before, almost everything seems related to the initial import and
 vendor branch. There is nothing in any code.

 I'm curious about the discrepancies in the $Date$ tags in some of the
 doc/FAQ_xxx files.  It's surely not a showstopper, but I'd feel better
 if we understood the cause of that.  Everything else seems to be
 disagreement about the vendor branch version numbers, which I'm happy
 to write off as a conversion artifact.

 The other thing that I'd like to see some data on is the commit log
 entries.  Can we produce anything comparable to cvs2cl output from
 the test repository?

For a single branch, just do git log branchname, e.g. git log
master or git log REL8_2_STABLE on your clone.

Is that enough, or do you need one for all branches at once?

(if you don't have a local clone of it, lmk and I can generate that
output for you)

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Return of the Solaris vacuum polling problem -- anyone remember this?

2010-08-16 Thread Josh Berkus
All,

This is something I'd swear we fixed around 8.3.2. However, I'm seeing
it again in production, and was wondering if anyone could remember what
the root cause was and how we fixed it.

The problem is that sometimes (but not the majority of times) autovacuum
with cost_delay is going into a pathological cycle where it polls the
system clock after reading every single disk page of a table. On large
tables, this results in vacuum not completing within the lifetime of the
server.  In most cases, killing autovacuum and restarting it will cause
it to behave normally.

The below is the truss from the exhibited issue on 8.3.11 on Solaris
10u7, compiled with sun cc:

pollsys(0xFD7FFFDF9030, 0, 0xFD7FFFDF90C0, 0x) = 0
lseek(39, 0x28F88000, SEEK_SET) = 0x28F88000
write(39, 0E10\0\0E0 CB5C101\001\0.., 8192)   = 8192
lseek(39, 0x28FCA000, SEEK_SET) = 0x28FCA000
read(39,  q\r\0\0 `9CD2B001\001\0.., 8192)= 8192
pollsys(0xFD7FFFDF9030, 0, 0xFD7FFFDF90C0, 0x) = 0
read(39,  F0E\0\090A888 H01\001\0.., 8192)= 8192
pollsys(0xFD7FFFDF9030, 0, 0xFD7FFFDF90C0, 0x) = 0
read(39,  q\r\0\0C819D3B001\001\0.., 8192)= 8192
pollsys(0xFD7FFFDF9030, 0, 0xFD7FFFDF90C0, 0x) = 0
lseek(39, 0x28F9, SEEK_SET) = 0x28F9
write(39, 0E10\0\0 0 gB7C101\001\0.., 8192)   = 8192
lseek(39, 0x28FD, SEEK_SET) = 0x28FD
read(39,  q\r\0\0 X 8D3B001\001\0.., 8192)= 8192
pollsys(0xFD7FFFDF9030, 0, 0xFD7FFFDF90C0, 0x) = 0
read(39,  t0F\0\0 H +8F !01\001\0.., 8192)= 8192
pollsys(0xFD7FFFDF9030, 0, 0xFD7FFFDF90C0, 0x) = 0
read(39,  q\r\0\0F0 sD3B001\001\0.., 8192)= 8192
pollsys(0xFD7FFFDF9030, 0, 0xFD7FFFDF90C0, 0x) = 0
read(39,  F0E\0\0 0C888 H01\001\0.., 8192)= 8192
pollsys(0xFD7FFFDF9030, 0, 0xFD7FFFDF90C0, 0x) = 0
pollsys(0xFD7FFFDF9030, 0, 0xFD7FFFDF90C0, 0x) = 0
lseek(39, 0x28FDA000, SEEK_SET) = 0x28FDA000
read(39,  q\r\0\0C0D1D3B001\001\0.., 8192)= 8192
pollsys(0xFD7FFFDF9030, 0, 0xFD7FFFDF90C0, 0x) = 0
read(39,  q\r\0\0D8F0D3B001\001\0.., 8192)= 8192
pollsys(0xFD7FFFDF9030, 0, 0xFD7FFFDF90C0, 0x) = 0
read(39,  F0E\0\0800189 H01\001\0.., 8192)= 8192
pollsys(0xFD7FFFDF9030, 0, 0xFD7FFFDF90C0, 0x) = 0
read(39,  q0F\0\0D0 ^A9F701\001\0.., 8192)= 8192
pollsys(0xFD7FFFDF9030, 0, 0xFD7FFFDF90C0, 0x) = 0
read(39,  F0E\0\010 ?89 H01\001\0.., 8192)= 8192
pollsys(0xFD7FFFDF9030, 0, 0xFD7FFFDF90C0, 0x) = 0
read(39,  q\r\0\0 x mD4B001\001\0.., 8192)= 8192
pollsys(0xFD7FFFDF9030, 0, 0xFD7FFFDF90C0, 0x) = 0
read(39,  F0E\0\0 X _89 H01\001\0.., 8192)= 8192
pollsys(0xFD7FFFDF9030, 0, 0xFD7FFFDF90C0, 0x) = 0
read(39,  q\r\0\0 @ADD4B001\001\0.., 8192)= 8192

For contrast, this is normal behavior:

read(10,  }\0\0\0 X82 E301\0\0\0.., 8192)= 8192
read(10,  }\0\0\018 4 ME301\0\0\0.., 8192)= 8192
read(10,  }\0\0\0E881 NE301\0\0\0.., 8192)= 8192
semop(16777221, 0xFD7FFFDF8FB8, 1)  = 0
read(10,  }\0\0\0 PEE \E301\0\0\0.., 8192)= 8192
read(10,  }\0\0\0\b k ^E301\0\0\0.., 8192)= 8192
read(10,  }\0\0\0 8E0 jE301\0\0\0.., 8192)= 8192
read(10,  }\0\0\0 P07 nE301\0\0\0.., 8192)= 8192
read(10,  }\0\0\0D885 xE301\0\0\0.., 8192)= 8192
read(10,  }\0\0\0  8D }E301\0\0\0.., 8192)= 8192
read(10,  }\0\0\0 xD280E301\0\0\0.., 8192)= 8192
read(10,  }\0\0\010DF8CE301\0\0\0.., 8192)= 8192
read(10,  }\0\0\0E09E8EE301\0\0\0.., 8192)= 8192
read(10,  }\0\0\0C8E29CE301\0\0\0.., 8192)= 8192
read(10,  }\0\0\080889EE301\0\0\0.., 8192)= 8192
read(10,  }\0\0\0B0 UADE301\0\0\0.., 8192)= 8192
read(10,  }\0\0\0C0E4BCE301\0\0\0.., 8192)= 8192
read(10,  }\0\0\0   !C0E301\0\0\0.., 8192)= 8192
read(10,  }\0\0\010 UCDE301\0\0\0.., 8192)= 8192
read(10,  }\0\0\0F8EBCEE301\0\0\0.., 8192)= 8192
read(10,  }\0\0\08092DDE301\0\0\0.., 8192)= 8192
read(10,  }\0\0\0A8 QDFE301\0\0\0.., 8192)= 8192
read(10,  }\0\0\0 x cEDE301\0\0\0.., 8192)= 8192
read(10,  }\0\0\0D8 EFE301\0\0\0.., 8192)= 8192
read(10,  }\0\0\0 P15FAE301\0\0\0.., 8192)= 8192
read(10,  }\0\0\0C8C0FDE301\0\0\0.., 8192)= 8192
read(10,  }\0\0\0 PFC\tE401\0\0\0.., 8192)= 8192
read(10,  }\0\0\0 8C3\rE401\0\0\0.., 8192)= 8192
read(10,  }\0\0\0 @890FE401\0\0\0.., 8192)= 8192
read(10,  }\0\0\0   r11E401\0\0\0.., 8192)= 8192


-- 
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Writeable CTEs Design Doc on Wiki

2010-08-16 Thread Robert Haas
On Sun, Aug 15, 2010 at 7:44 AM, Hitoshi Harada umi.tan...@gmail.com wrote:
 We (Marko, David Fetter and I) discussed on IRC about design of
 writeable CTEs. It does and will contain not only syntax but also
 miscellaneous specifications, general rules and restrictions. I hope
 this will help the patch reviews and stop dangerous design at the
 early stage. If you find something wrong, or have request, please
 notify.

 http://wiki.postgresql.org/wiki/WriteableCTEs

 We will keep to add details. Any comments are welcome.

There are really two separate features here, and it might be worth
giving them separate names and separate designs (and separate
patches).  Allowing the main query to be an insert, update, or delete
seems easier than allowing the toplevel CTEs to contain those
constructs, although I might be wrong about that.

Under features, what is DCL?  There has been previous talk of allowing
WITH (COPY ...) and I am personally of the opinion that it would be
nice to be able to do WITH (EXPLAIN ...).  DDL seems like a poor idea.

P.S. Call me a prude, but your choice of shorthand for
insert-update-delete may not be the best.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Todays git migration results

2010-08-16 Thread Tom Lane
Magnus Hagander mag...@hagander.net writes:
 On Mon, Aug 16, 2010 at 20:11, Tom Lane t...@sss.pgh.pa.us wrote:
 The other thing that I'd like to see some data on is the commit log
 entries.  Can we produce anything comparable to cvs2cl output from
 the test repository?

 For a single branch, just do git log branchname, e.g. git log
 master or git log REL8_2_STABLE on your clone.

 Is that enough, or do you need one for all branches at once?

Well, I guess there are two sub-parts to my question then.  First, and
most important for the immediate issue, have you done anything to verify
the commit-message data matches between the cvs and git repositories?

Second, does git offer a way to collate matching log entries across
multiple branches?  The main advantage of cvs2cl has always been that it
would do that, so that you didn't have to look at five independent log
entries after a commit that fixed the same bug in five branches...

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Todays git migration results

2010-08-16 Thread Magnus Hagander
On Mon, Aug 16, 2010 at 20:27, Tom Lane t...@sss.pgh.pa.us wrote:
 Magnus Hagander mag...@hagander.net writes:
 On Mon, Aug 16, 2010 at 20:11, Tom Lane t...@sss.pgh.pa.us wrote:
 The other thing that I'd like to see some data on is the commit log
 entries.  Can we produce anything comparable to cvs2cl output from
 the test repository?

 For a single branch, just do git log branchname, e.g. git log
 master or git log REL8_2_STABLE on your clone.

 Is that enough, or do you need one for all branches at once?

 Well, I guess there are two sub-parts to my question then.  First, and
 most important for the immediate issue, have you done anything to verify
 the commit-message data matches between the cvs and git repositories?

Not beyond manually looking both using git log and the gitweb interface.


 Second, does git offer a way to collate matching log entries across
 multiple branches?  The main advantage of cvs2cl has always been that it
 would do that, so that you didn't have to look at five independent log
 entries after a commit that fixed the same bug in five branches...

I don't know about that - somebody else?

If there isn't one, it should be fairly simple to write one, since
tracking the changelog is much easier in git than cvs (given that it's
global and not per-file).

But what really is the usecase there? If you run git log on
REL8_4_STABLE you get the changelog for REL8_4_STABLE, and if you run
it on master, you get it for the tip. In that mode, you'll never end
up getting them twice.

Are you saying you want a cross-branch changelog, that also groups all
the commit messages together? Or am I misunderstanding you?


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Git migration timeline

2010-08-16 Thread Kevin Grittner
Robert Haas robertmh...@gmail.com wrote:
 
 That sounds good, except for the part about nobody doing anything
 about the 9.0 open issues.  Those issues are:
 
 [four issues listed]
 
Nobody responded when I asked about this recently, but shouldn't
that list include BUG #5607: memmory leak in ecpg?  We have a
patch from Zoltán Böszörményi from before this bug report which
seems to address the issue and which Michael Meskes said Feel free
to apply.
 
We don't want to ship 9.0 with known memory leaks, do we?
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP: Triggers on VIEWs

2010-08-16 Thread Dean Rasheed
On 16 August 2010 18:50, Tom Lane t...@sss.pgh.pa.us wrote:
 Dean Rasheed dean.a.rash...@gmail.com writes:
 On 15 August 2010 18:38, Dean Rasheed dean.a.rash...@gmail.com wrote:
 There are still a number of things left todo:
  - extend ALTER VIEW with enable/disable trigger commands

 On further reflection, I wonder if the ability to disable VIEW
 triggers is needed/wanted at all.

 AFAIK the only real use for disabling triggers is in connection with
 trigger-based replication systems.  A view wouldn't be carrying
 replication-related triggers anyway, so I think this could certainly
 be left out of a first cut.

 You aren't saying that you can see some reason why this couldn't be
 added later, if wanted, correct?


Yes. It should be easy to add later if wanted. I just don't see much
use for it, and I don't want to add more to an already quite big
patch, if it's not really needed.

Regards,
Dean

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Todays git migration results

2010-08-16 Thread Tom Lane
Magnus Hagander mag...@hagander.net writes:
 On Mon, Aug 16, 2010 at 20:27, Tom Lane t...@sss.pgh.pa.us wrote:
 Second, does git offer a way to collate matching log entries across
 multiple branches?

 But what really is the usecase there?

Generating back-branch update release notes, mainly.  We usually do that
first for the newest back branch, and then copy paste and edit as needed
into the older ones.  It's a lot easier to see what needs to be adjusted
if you're looking at something like

2010-08-13 12:27  tgl

* src/backend/: catalog/namespace.c, utils/cache/plancache.c
(REL9_0_STABLE), catalog/namespace.c, utils/cache/plancache.c
(REL8_3_STABLE), catalog/namespace.c, utils/cache/plancache.c
(REL8_4_STABLE), catalog/namespace.c, utils/cache/plancache.c: Fix
Assert failure in PushOverrideSearchPath when trying to restore a
search path that specifies useTemp, but there is no active temp
schema in the current session.  (This can happen if the path was
saved during a transaction that created a temp schema and was later
rolled back.)  For existing callers it's sufficient to ignore the
useTemp flag in this case, though we might later want to offer an
option to create a fresh temp schema.  So far as I can tell this is
just an Assert failure: in a non-assert build, the code would push
a zero onto the new search path, which is useless but not very
harmful.  Per bug report from Heikki.

than half a dozen independent lists.

I've also found that answering questions about when some old patch got
added is easier from this format than I think it'd be if I had only
per-branch lists.  I do have both the combined log history and
per-branch log histories at hand (from separate cvs2cl runs), but I find
that I hardly ever consult the latter.

It's not a showstopper, but if git can't do it I'll be disappointed.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Todays git migration results

2010-08-16 Thread Magnus Hagander
On Mon, Aug 16, 2010 at 20:45, Tom Lane t...@sss.pgh.pa.us wrote:
 Magnus Hagander mag...@hagander.net writes:
 On Mon, Aug 16, 2010 at 20:27, Tom Lane t...@sss.pgh.pa.us wrote:
 Second, does git offer a way to collate matching log entries across
 multiple branches?

 But what really is the usecase there?

 Generating back-branch update release notes, mainly.  We usually do that
 first for the newest back branch, and then copy paste and edit as needed
 into the older ones.  It's a lot easier to see what needs to be adjusted
 if you're looking at something like

 2010-08-13 12:27  tgl

        * src/backend/: catalog/namespace.c, utils/cache/plancache.c
        (REL9_0_STABLE), catalog/namespace.c, utils/cache/plancache.c
        (REL8_3_STABLE), catalog/namespace.c, utils/cache/plancache.c
        (REL8_4_STABLE), catalog/namespace.c, utils/cache/plancache.c: Fix
        Assert failure in PushOverrideSearchPath when trying to restore a
        search path that specifies useTemp, but there is no active temp
        schema in the current session.  (This can happen if the path was
        saved during a transaction that created a temp schema and was later
        rolled back.)  For existing callers it's sufficient to ignore the
        useTemp flag in this case, though we might later want to offer an
        option to create a fresh temp schema.  So far as I can tell this is
        just an Assert failure: in a non-assert build, the code would push
        a zero onto the new search path, which is useless but not very
        harmful.  Per bug report from Heikki.

 than half a dozen independent lists.

 I've also found that answering questions about when some old patch got
 added is easier from this format than I think it'd be if I had only
 per-branch lists.  I do have both the combined log history and
 per-branch log histories at hand (from separate cvs2cl runs), but I find
 that I hardly ever consult the latter.

Hmm. Ok.

I don't know if it does, so I'll hope someone else can tell us if it does :-)

BTW, you do have things like git log --grep=foo to search the log
directly, instead of working through cvs2cl output of course, but that
doesn't quite solve your problem, I can see that.


 It's not a showstopper, but if git can't do it I'll be disappointed.

If there's no way to do it offhand, I'm pretty sure we can write a
short script that does it for us.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Conflicted names of error conditions.

2010-08-16 Thread Dmitriy Igrishin
Thanks for you answer, Tom!

I've implemented mapping between SQLSTATE codes and C++ exception
classes of my library. And of course, I've resolved the conflict of names
by giving a proper name to my classes.

Regards,
Dmitriy

2010/8/16 Tom Lane t...@sss.pgh.pa.us

 Dmitriy Igrishin dmit...@gmail.com writes:
  According to
  http://www.postgresql.org/docs/9.0/static/errcodes-appendix.html
  some error conditions have non-unique *names*. These are:
  modifying_sql_data_not_permitted,
  prohibited_sql_statement_attempted,
  and reading_sql_data_not_permitted,
  which appear in both the SQL Routine Exception and External Routine
  Exception classes.

  Is it supposed to be that way?

 Yup, that's what the SQL standard calls them :-(.  In practice, either
 underlying SQLSTATE will match that name in an EXCEPTION block, so
 it doesn't matter a whole lot.  If you have a case where you feel it
 does matter, you can trap by the SQLSTATE code instead.

regards, tom lane



Re: [HACKERS] Git migration timeline

2010-08-16 Thread Tom Lane
Kevin Grittner kevin.gritt...@wicourts.gov writes:
 Nobody responded when I asked about this recently, but shouldn't
 that list include BUG #5607: memmory leak in ecpg?  We have a
 patch from Zoltán Böszörményi from before this bug report which
 seems to address the issue and which Michael Meskes said Feel free
 to apply.
 
 We don't want to ship 9.0 with known memory leaks, do we?

Better a memory leak than broken ecpg ;-).  Nobody except Michael
is terribly comfortable with that code, so we'd all rather wait for
him to review and apply the patch.

More generally, pre-existing bugs have never been considered release
stoppers.  At this point what we would block the release for is *new*
bugs in 9.0.  (An exception to that general rule is pre-existing bugs
that would require an initdb to fix; but this one isn't that either.)

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


9.0 open issues (was Re: [HACKERS] Git migration timeline)

2010-08-16 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 That sounds good, except for the part about nobody doing anything
 about the 9.0 open issues.  Those issues are:

 * Backup procedure is wrong? - Nobody's been able to clearly
 articulate what the problem is, and according to Fujii Masao it's been
 this way since 8.2.

Just because it's old doesn't mean it's not a bug ;-).  Normally I would
say that a pre-existing documentation problem isn't a release blocker,
but in this case that procedure is likely to become of interest to a
whole lot more people than it was before, thanks to SR/HS.  So we need
to understand whether there's a problem and fix it.

 * Walreceiver crashes in AIX - The person who originally reported this
 problem has been slow to get back to us with the information needed to
 figure out what is going on.

Yes.  If he doesn't provide adequate data, and we can't reproduce it
elsewhere, we should not consider this a release blocker.

 * BUG #5595: Documentation is not installs from VPATH build. - I'm not
 sure this is really a release-blocker, but either way, I have been
 assuming it's Peter's issue to fix.

I believe this is a regression from prior branches, probably caused by
the switch away from distributing the prebuilt docs as sub-tarballs.
As such, it's at least a candidate release-blocker.

 * trace_recovery_messages inconsistency - This one seems like a simple
 case of deciding what makes most sense, and doing it.  We should
 definitely fix this before the wrap.

Agreed, it's a matter of getting some consensus.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Writeable CTEs Design Doc on Wiki

2010-08-16 Thread David Fetter
On Mon, Aug 16, 2010 at 02:25:50PM -0400, Robert Haas wrote:
 On Sun, Aug 15, 2010 at 7:44 AM, Hitoshi Harada umi.tan...@gmail.com wrote:
  We (Marko, David Fetter and I) discussed on IRC about design of
  writeable CTEs. It does and will contain not only syntax but also
  miscellaneous specifications, general rules and restrictions. I hope
  this will help the patch reviews and stop dangerous design at the
  early stage. If you find something wrong, or have request, please
  notify.
 
  http://wiki.postgresql.org/wiki/WriteableCTEs
 
  We will keep to add details. Any comments are welcome.
 
 There are really two separate features here, and it might be worth
 giving them separate names and separate designs (and separate
 patches).  Allowing the main query to be an insert, update, or delete
 seems easier than allowing the toplevel CTEs to contain those
 constructs, although I might be wrong about that.

Interesting.  I'd kinda seen them as the same thing.

 Under features, what is DCL?

Data Control Language, i.e. GRANT and REVOKE.

 There has been previous talk of allowing WITH (COPY ...) and I am
 personally of the opinion that it would be nice to be able to do
 WITH (EXPLAIN ...).  DDL seems like a poor idea.

It may be, but I can see use cases for partitioning...

 P.S. Call me a prude, but your choice of shorthand for
 insert-update-delete may not be the best.

Then I presume you'll be supporting my idea of using the word span
for temporal data types rather than the current idea whose name
appears in academic literature.

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Return of the Solaris vacuum polling problem -- anyone remember this?

2010-08-16 Thread Joe Conway
On 08/16/2010 11:24 AM, Josh Berkus wrote:
 All,
 
 This is something I'd swear we fixed around 8.3.2. However, I'm seeing
 it again in production, and was wondering if anyone could remember what
 the root cause was and how we fixed it.

I've also recently heard a report of vacuum hanging on 8.3.x on Solaris
Sparc. Any chance you can get a backtrace from a build with debug symbols?

Joe

-- 
Joe Conway
credativ LLC: http://www.credativ.us
Linux, PostgreSQL, and general Open Source
Training, Service, Consulting,  24x7 Support





Re: [HACKERS] Return of the Solaris vacuum polling problem -- anyone remember this?

2010-08-16 Thread Josh Berkus

 I've also recently heard a report of vacuum hanging on 8.3.x on Solaris
 Sparc. Any chance you can get a backtrace from a build with debug symbols?

The problem is that we haven't been able to reproduce the bug in
testing.  Like I said, it only seems to happen occasionally ... like
maybe once in 10 or 20 (or more?) autovacuums.  We've never seen it
with a manual vacuum at all.

And we can't rebuild the production servers.

-- 
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Return of the Solaris vacuum polling problem -- anyone remember this?

2010-08-16 Thread Tom Lane
Josh Berkus j...@agliodbs.com writes:
 This is something I'd swear we fixed around 8.3.2. However, I'm seeing
 it again in production, and was wondering if anyone could remember what
 the root cause was and how we fixed it.

Hmm, I can't find anything in the 8.3-series CVS logs suggesting that
there was a post-8.3.0 fix related to vacuum delays.

  The problem is that sometimes (but not the majority of times) autovacuum
 with cost_delay is going into a pathological cycle where it polls the
 system clock after reading every single disk page of a table.

What I find interesting about that trace is the large proportion of
writes.  That appears to me to indicate that it's *not* a matter of
vacuum delays, or at least not just a matter of that.  The process seems
to be getting involved in having to dump dirty buffers to disk.  Perhaps
the background writer is malfunctioning?

Another idea that comes to mind is that you have vacuum_cost_page_dirty
set to an unreasonably large value, so that autovac is blocking whenever
it has to write even one page.
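
As a toy illustration of the arithmetic (this is not PostgreSQL source; the
real logic lives in vacuum_delay_point(), and the numbers below are just the
assumed defaults):

#include <stdio.h>

int
main(void)
{
    int     cost_page_dirty = 20;   /* assumed default vacuum_cost_page_dirty */
    int     cost_limit = 200;       /* assumed default vacuum_cost_limit */
    int     balance = 0;
    int     naps = 0;
    int     pages = 100000;

    for (int i = 0; i < pages; i++)
    {
        balance += cost_page_dirty;     /* suppose every page read gets dirtied */
        if (balance >= cost_limit)
        {
            naps++;                     /* the real code would pg_usleep() here */
            balance = 0;
        }
    }
    printf("%d naps over %d dirty pages (one every %d pages)\n",
           naps, pages, pages / naps);
    return 0;
}

With the defaults that's a nap every ten dirty pages; if
vacuum_cost_page_dirty were pushed up to or past the cost limit, the worker
would nap after every single page, which from the outside could look a lot
like polling the clock once per page.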

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] refactoring comment.c

2010-08-16 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 5. Since I'm hoping Tom will read this, I ran it through filterdiff.  :-)

OK, I looked ;-).  It mostly looks good, but of course I've got some
opinions...

 2. I haven't done anything about moving the definition of
 ObjectAddress elsewhere, as Alvaro suggested, because I'm not sure
 quite where it ought to go.  I still think it's a good idea, though
 I'm not dead set on it, either.  Suggestions?

I think the problem is you're trying to put this into backend/parser
which is not really the right place for it.  It's an execution-time
animal not a parse-time animal.  I would put it into backend/catalog,
perhaps named objectaddress.c, and similarly the header file would be
objectaddress.h.  Then it would be reasonable to move struct
ObjectAddress into this header and have dependency.h #include it.
There might be some other stuff in dependency.c that more naturally
belongs here, too.
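
For concreteness, the new header might look roughly like this (just a sketch
of the shape I have in mind, not the actual patch; the struct members are the
ones currently in dependency.h):

/* src/include/catalog/objectaddress.h -- sketch only */
#ifndef OBJECTADDRESS_H
#define OBJECTADDRESS_H

#include "nodes/parsenodes.h"   /* for ObjectType */
#include "nodes/pg_list.h"      /* for List */
#include "storage/lock.h"       /* for LOCKMODE */
#include "utils/relcache.h"     /* for Relation */

/* moved here from catalog/dependency.h */
typedef struct ObjectAddress
{
    Oid         classId;        /* OID of the system catalog for the object */
    Oid         objectId;       /* OID of the object itself */
    int32       objectSubId;    /* column number, or 0 if not used */
} ObjectAddress;

extern ObjectAddress get_object_address(ObjectType objtype,
                    List *objname, List *objargs,
                    Relation *relp, LOCKMODE lockmode);

#endif   /* OBJECTADDRESS_H */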

 3. I fixed the issue Kaigai Kohei spotted, regarding
 LargeObjectRelationId vs. LargeObjectMetadataRelationId, by adding a
 grotty hack.  However, I feel that I'm not so much adding a new grotty
 hack as working around an existing grotty hack which was added for
 reasons I'm unclear on.  Is there a pg_upgrade-related reason not to
 revert the original hack instead?

It's not pg_upgrade (nor psql, nor pg_dump) we are protecting here.
It's third-party applications that might understand the contents of
pg_description, pg_depend, etc.  I think that hack is quite small
and localized enough to live with, rather than causing a flag day
for an unknown number of clients by changing the representation.

+   /*
+    * Translate the parser representation which identifies this object into
+    * an ObjectAddress.  get_object_address() will throw an error if the
+    * object does not exist.
+    */
+   address = get_object_address(stmt->objtype, stmt->objname, stmt->objargs,
+                                &relation, ShareUpdateExclusiveLock);

I think this comment should also explicitly mention that we're getting a
lock to protect against concurrent DROPs.

+ /*
+  * Translate an object name and arguments (as passed by the parser) to an
+  * ObjectAddress.
+  *
+  * The returned object will be locked using the specified lockmode.  If a
+  * sub-object is looked up, the parent object will be locked instead.
+  *
+  * We don't currently provide a function to release the locks acquired here;
+  * typically, the lock must be held until commit to guard against a concurrent
+  * drop operation.
+  */
+ ObjectAddress
+ get_object_address(ObjectType objtype, List *objname, List *objargs,
+                    Relation *relp, LOCKMODE lockmode)

This comment's a bit shy of a load too, since it totally fails to
mention the relp argument, or specify what the caller is required to
do with it, or explain how the locking on the relation works.
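
Perhaps something along these lines (my guess at the intended semantics,
going only by the excerpt above):

+ /*
+  * Translate an object name and arguments (as passed by the parser) to an
+  * ObjectAddress.  The object is locked with 'lockmode' to guard against
+  * a concurrent DROP; if a sub-object is looked up, the parent object is
+  * locked instead.
+  *
+  * If the object is a relation, or belongs to a relation (e.g. a column),
+  * the relation is opened and *relp receives the open relcache entry; the
+  * caller is responsible for closing it (normally with NoLock, so that the
+  * lock is held until end of transaction).  Otherwise *relp is set to NULL.
+  *
+  * We don't currently provide a function to release the locks acquired
+  * here; typically, the lock must be held until commit.
+  */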

As a matter of style, I'd suggest putting the single externally callable
function (ie get_object_address) at the top of the file not the bottom.
People shouldn't have to read through the entire file before finding out
what API it is supposed to provide to the outside world.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Todays git migration results

2010-08-16 Thread Alex Hunsaker
On Mon, Aug 16, 2010 at 12:45, Tom Lane t...@sss.pgh.pa.us wrote:
 Magnus Hagander mag...@hagander.net writes:
 On Mon, Aug 16, 2010 at 20:27, Tom Lane t...@sss.pgh.pa.us wrote:
 Second, does git offer a way to collate matching log entries across
 multiple branches?

 But what really is the usecase there?

 Generating back-branch update release notes, mainly.

 2010-08-13 12:27  tgl

        * src/backend/: catalog/namespace.c, utils/cache/plancache.c
        (REL9_0_STABLE), catalog/namespace.c, utils/cache/plancache.c
        (REL8_3_STABLE), catalog/namespace.c, utils/cache/plancache.c
        (REL8_4_STABLE), catalog/namespace.c, utils/cache/plancache.c:

Yeah... it can't really.  You can git log --decorate, which will add any
tags or branches a commit is in, but it breaks for merges and only
works if the commit hash is the same (and if it's the *current* commit
on the branch, I think).  Skimming the git mailing list, it seems the
powers that be think the above is stupid, pointless, and wrong (out of
touch with reality or what?).  Basically the argument is that if you want
to back patch something you probably need to change it in some way and
touch up the commit message anyway.  So just include any relevant info
in the commit message and you can script something to parse and
extract that info if you care. This (long) thread sums it up
http://thread.gmane.org/gmane.comp.version-control.git/95381/focus=95386.

How exactly do patches get applied to back branches?  Has that been
spelled out somewhere?  There are a lot of ways to do it.  For
instance git.git seems to apply the patch to the earliest branch first
and then merge it on up so that everything can share the same
commit/hash.  That looks like a royal PITA to me, and I assume the
plan is to just cherry-pick commits back.  As long as we use git
cherry-pick -x, I agree with Magnus, it should be fairly easy to write
a short script to do it. I'll even volunteer if the above is
basically the only requirement :-).

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Todays git migration results

2010-08-16 Thread Tom Lane
Alex Hunsaker bada...@gmail.com writes:
 How exactly do patches get applied to back branches?  Has that been
 spelled out somewhere?  There are a lot of ways to do it.  For
 instance git.git seems to apply the patch to the earliest branch first
 and then merge it on up so that everything can share the same
 commit/hash.  That looks like a royal PITA to me, and I assume the
 plan is to just cherry-pick commits back.  As long as we use git
 cherry-pick -x, I agree with Magnus, it should be fairly easy to write
 a short script to do it. I'll even volunteer if the above is
 basically the only requirement :-).

There was discussion about that before, but I don't know whether we
really have a solution that will work comfortably.  A couple of
comments:

* My practice has always been to develop a fix in HEAD first and then
work backwards.  I'm going to resist any tool that tries to force me
to do it the other way.  There are a couple of reasons for that: one,
I'm generally more familiar with HEAD, and two, I want HEAD to have the
cleanest solution.  If you do an old branch first, you'll probably come
up with a solution that is good for that branch but could be improved
in newer ones, eg by using some subroutine or facility that doesn't
exist earlier.  Forward-patching won't encourage you to find that.

* My experience is that a patch that has to go back more than one or two
branches is almost never exactly the same on each branch, even without
any of the non-trivial changes suggested above.  We constantly do things
like rearrange the arguments of some function that's used everywhere.
So patch is definitely not smart enough to back-patch the fixes by
itself.  Maybe git will be a lot smarter but I'm not expecting miracles.
Anything that is based on same hash is pretty much guaranteed to
not do what I need.

I'd be satisfied with a tool that merges commit reports if they have the
same log message and occur at approximately the same time, which is the
heuristic that cvs2cl uses.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Writeable CTEs Design Doc on Wiki

2010-08-16 Thread Robert Haas
On Mon, Aug 16, 2010 at 3:00 PM, David Fetter da...@fetter.org wrote:
 There has been previous talk of allowing WITH (COPY ...) and I am
 personally of the opinion that it would be nice to be able to do
 WITH (EXPLAIN ...).  DDL seems like a poor idea.

 It may be, but I can see use cases for partitioning...

Like what?

 P.S. Call me a prude, but your choice of shorthand for
 insert-update-delete may not be the best.

 Then I presume you'll be supporting my idea of using the word span
 for temporal data types rather than the current idea whose name
 appears in academic literature.

Wise guy.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Todays git migration results

2010-08-16 Thread Robert Haas
On Mon, Aug 16, 2010 at 4:33 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 I'd be satisfied with a tool that merges commit reports if they have the
 same log message and occur at approximately the same time, which is the
 heuristic that cvs2cl uses.

So how do you run cvs2cl?  Do you run it once in a while and save the
output someplace?  Or what?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Return of the Solaris vacuum polling problem -- anyone remember this?

2010-08-16 Thread Joe Conway
On 08/16/2010 12:12 PM, Josh Berkus wrote:
 
 I've also recently heard a report of vacuum hanging on 8.3.x on Solaris
 Sparc. Any chance you can get a backtrace from a build with debug symbols?
 
 The problem is that we haven't been able to reproduce the bug in
 testing.  Like I said, it only seems to happen occasionally ... like
 maybe once in 10 or 20 (or more?) autovacuums.  We've never seen it
 with a manual vacuum at all.
 
 And we can't rebuild the production servers.

Hmmm, well I don't know how to reproduce it on demand either -- I'll try
to get a backtrace from the wild if possible. I'll keep you posted...

Joe

-- 
Joe Conway
credativ LLC: http://www.credativ.us
Linux, PostgreSQL, and general Open Source
Training, Service, Consulting,  24x7 Support





Re: [HACKERS] Return of the Solaris vacuum polling problem -- anyone remember this?

2010-08-16 Thread Alvaro Herrera
Excerpts from Joe Conway's message of Mon Aug 16 16:47:19 -0400 2010:
 On 08/16/2010 12:12 PM, Josh Berkus wrote:
  
  I've also recently heard a report of vacuum hanging on 8.3.x on Solaris
  Sparc. Any chance you can get a backtrace from a build with debug symbols?
  
  The problem is that we haven't been able to reproduce the bug in
  testing.  Like I said, it only seems to happen occasionally ... like
  maybe once in 10 or 20 (or more?) autovacuums.  We've never seen it
  with a manual vacuum at all.
  
  And we can't rebuild the production servers.
 
 Hmmm, well I don't know how to reproduce it on demand either -- I'll try
 to get a backtrace from the wild if possible. I'll keep you posted...

FWIW there's also a report of it hanging in FreeBSD, but sadly when the
process is inspected under truss, it dies because of its parent PID
attribute changing underneath and thus triggering the safety feature
that makes it die if the parent postmaster disappears.

I suspect that the problem may lie in the cost_delay rebalance code in
autovacuum.

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Writeable CTEs Design Doc on Wiki

2010-08-16 Thread David Fetter
On Mon, Aug 16, 2010 at 04:34:07PM -0400, Robert Haas wrote:
 On Mon, Aug 16, 2010 at 3:00 PM, David Fetter da...@fetter.org wrote:
  There has been previous talk of allowing WITH (COPY ...) and I am
  personally of the opinion that it would be nice to be able to do
  WITH (EXPLAIN ...).  DDL seems like a poor idea.
 
  It may be, but I can see use cases for partitioning...
 
 Like what?

Managing partitions, or creating partitions in some way the partition
syntax doesn't anticipate, whatever it turns out to be.

  P.S. Call me a prude, but your choice of shorthand for
  insert-update-delete may not be the best.
 
  Then I presume you'll be supporting my idea of using the word
  span for temporal data types rather than the current idea whose
  name appears in academic literature.
 
 Wise guy.

In all seriousness, the temporal stuff is giving us a fantastic
opportunity, as a project, to break our tradition of lousy naming.

Span is descriptive, mnemonic, and easy to spell.

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Return of the Solaris vacuum polling problem -- anyone remember this?

2010-08-16 Thread Alvaro Herrera
Excerpts from Alvaro Herrera's message of Mon Aug 16 16:58:31 -0400 2010:

 I suspect that the problem may lie in the cost_delay rebalance code in
 autovacuum.

Hmm, so we have this code:

void
AutoVacuumUpdateDelay(void)
{
    if (MyWorkerInfo)
    {
        VacuumCostDelay = MyWorkerInfo->wi_cost_delay;
        VacuumCostLimit = MyWorkerInfo->wi_cost_limit;
    }
}

where the MyWorkerInfo bits come from shared memory and can be modified
by other autovac worker processes.  We could read an incomplete value
into our variables.  But this only makes sense if an int variable can
be subject to a partial read/write, which we already assume cannot
happen (cf. GetNewTransactionId).
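
If we wanted to positively rule that out, one option (just a sketch, and it
assumes the wi_* fields are only changed while AutovacuumLock is held) would
be to copy both values under that same lock:

void
AutoVacuumUpdateDelay(void)
{
    if (MyWorkerInfo)
    {
        /*
         * Take the lock the rebalance code uses while it adjusts these
         * fields, so we can't read a half-updated delay/limit pair.
         */
        LWLockAcquire(AutovacuumLock, LW_SHARED);
        VacuumCostDelay = MyWorkerInfo->wi_cost_delay;
        VacuumCostLimit = MyWorkerInfo->wi_cost_limit;
        LWLockRelease(AutovacuumLock);
    }
}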

In any case, if you happen to see this reoccur, could you please attach
GDB to the misbehaving worker and see what VacuumCostDelay and
VacuumCostLimit print out as?

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] JSON Patch for PostgreSQL - BSON Support?

2010-08-16 Thread Charles Pritchard

On 8/15/10 8:47 PM, Andrew Dunstan wrote:

On 08/15/2010 11:03 PM, Tom Lane wrote:

Charles Pritchard ch...@jumis.com writes:

I'd originally sent this to Joseph Adams, as he has been working on
adding a JSON datatype.
I'd suggested supporting BSON, as there are many client implementations
available,

I knew there would be a lot of critters crawling out as soon as we
turned over this rock.  Which other data-formats-of-the-week shall
we immortalize as core PG types?




If BSON is simply in effect an efficient encoding of JSON, then it's 
not clear to me that we would want another type at all. Rather, we 
might want to consider storing the data in this supposedly more 
efficient format, and maybe also some conversion routines.


I agree that we don't want in core a huge array of general 
serialization formats. The one thing that JSON has going for it for 
general use, in my view, is that, unlike hstore, the structure is not 
flat. That makes it potentially useful for various purposes, 
especially complex structured function arguments, in places where 
using hstore can be rather limiting, and xml overly verbose.

While I certainly haven't done my homework on this -- I agree with Andrew.

Storing internally as BSON (if it holds up to its premise) would mean 
more efficient traversal
of internal objects in the future, if we were to have JSON-related 
functions/selectors.


The core type would still be json, and would return as text, a json string,
but internally it would be stored as BSON, and a function would be available,

json_to_bson(typedjsoncol::json), returning a binary string.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] JSON Patch for PostgreSQL - BSON Support?

2010-08-16 Thread Kevin Grittner
Charles Pritchard ch...@jumis.com wrote:
 
 Storing internally as BSON (if it holds up to its premise) would
 mean more efficient traversal of internal objects in the future,
 if we were to have JSON-related functions/selectors.
 
How about the fact that not all JSON objects can be represented in
BSON (if the JSON object has a very long string), and not all BSON
objects can be represented in JSON (if the BSON object has an
array).  Or do we invent our own flavors of one or both to cover the
mismatch?
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Todays git migration results

2010-08-16 Thread Alex Hunsaker
On Mon, Aug 16, 2010 at 14:33, Tom Lane t...@sss.pgh.pa.us wrote:
 Alex Hunsaker bada...@gmail.com writes:
 How exactly patches get applied into back branches?

 There was discussion about that before, but I don't know whether we
 really have a solution that will work comfortably.

I don't either; not being a committer, I don't really follow that area much :-)

  A couple of
 comments:

 * My practice has always been to develop a fix in HEAD first and then
 work backwards.  I'm going to resist any tool that tries to force me
 to do it the other way.

Yep, I agree and as you pointed out it does not work anyway (in the
sense of being able to keep the same commit id/hash) because you end
up needing to change things.

 I'd be satisfied with a tool that merges commit reports if they have the
 same log message and occur at approximately the same time, which is the
 heuristic that cvs2cl uses.

I don't think it would be too hard to code that up (my main worry is it
might be dog slow).  BTW, the point of git cherry-pick -x is that it
includes the original commit hash in the commit message.  That way we
don't have to do any guesswork based on commit time and log message.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Todays git migration results

2010-08-16 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Mon, Aug 16, 2010 at 4:33 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 I'd be satisfied with a tool that merges commit reports if they have the
 same log message and occur at approximately the same time, which is the
 heuristic that cvs2cl uses.

 So how do you run cvs2cl?  Do you run it once in a while and save the
 output someplace?  Or what?

Yeah, it's a bit too slow to do on every sync.  I run it every week or
two and keep the output in a text file.  Usually what I want the history
for is stuff that happened a while ago, so the fact that it's not 100% up
to date is seldom a factor.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Todays git migration results

2010-08-16 Thread Tom Lane
Alex Hunsaker bada...@gmail.com writes:
 On Mon, Aug 16, 2010 at 14:33, Tom Lane t...@sss.pgh.pa.us wrote:
 I'd be satisfied with a tool that merges commit reports if they have the
 same log message and occur at approximately the same time, which is the
 heuristic that cvs2cl uses.

 I don't think it would be too hard to code that up (my main worry is it
 might be dog slow).

Well, cvs2cl is pretty dog-slow too.  As I was just saying to Robert,
it doesn't really matter since I only run it a couple times a month.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] JSON Patch for PostgreSQL - BSON Support?

2010-08-16 Thread Tom Lane
Kevin Grittner kevin.gritt...@wicourts.gov writes:
 Charles Pritchard ch...@jumis.com wrote:
 Storing internally as BSON (if it holds up to its premise) would
 mean more efficient traversal of internal objects in the future,
 if we were to have JSON-related functions/selectors.
 
 How about the fact that not all JSON objects can be represented in
 BSON (if the JSON object has a very long string), and not all BSON
 objects can be represented in JSON (if the BSON object has an
 array).

Well, if it's not just a binary encoding of JSON, I think we can forget
about it ... certainly it won't work in the form I was visualizing.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] JSON Patch for PostgreSQL - BSON Support?

2010-08-16 Thread Andres Freund
On Mon, Aug 16, 2010 at 05:02:47PM -0500, Kevin Grittner wrote:
 Charles Pritchard ch...@jumis.com wrote:

  Storing internally as BSON (if it holds up to its premise) would
  mean more efficient traversal of internal objects in the future,
  if we were to have JSON-related functions/selectors.
 How about the fact that not all JSON objects can be represented in
 BSON (if the JSON object has a very long string)
Any such long string won't be representable in pg anyway.  Or am I
missing something here?

Besides that I have to say that I find it pretty strange to design a
supposedly generic file format with a 32-bit signed integer length...

 , and not all BSON objects can be represented in JSON (if the BSON object
 has an array).  Or do we invent our own flavors of one or both to cover the
 mismatch?
The BSON representation could be purely internal...

Andres

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] JSON Patch for PostgreSQL - BSON Support?

2010-08-16 Thread Joseph Adams
On Mon, Aug 16, 2010 at 7:24 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Well, if it's not just a binary encoding of JSON, I think we can forget
 about it ... certainly it won't work in the form I was visualizing.

                        regards, tom lane

I just read the spec, and BSON has a lot of bells and whistles
attached (such as labeling binary data with a subtype like UUID or
MD5).  With a couple exceptions (see below), any JSON can be converted
to BSON (but the way BSON does arrays is silly: item indices are
stored as strings), but not all BSONs can be converted to JSON without
losing some type details.

Others already mentioned that you can't convert JSON strings longer than
about 2 billion bytes to BSON.  Another issue is that BSON cannot encode all
JSON numbers without precision loss.  JSON can hold any number
matching

'-'? (0 | [1-9][0-9]*) ('.' [0-9]+)? ([Ee] [+-]? [0-9]+)?

but BSON pigeonholes numeric values into either double, int32, int64, or
a 12-byte MongoDB Object ID.  Thus, for people who expect JSON to be
able to hold arbitrary-precision numbers (which the JSON data type in
my patch can), using BSON for transfer or storage will violate that
expectation.
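
A trivial standalone illustration of the loss (plain C, nothing to do with
the patch): the value below is an ordinary JSON number, but it is too wide
for int64 and has more significant digits than a double can carry, which is
all BSON gives you.

#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
    const char *json_number = "123456789012345678901234567890";    /* 30 digits */
    double      d = strtod(json_number, NULL);

    printf("original : %s\n", json_number);
    printf("as double: %.0f\n", d);     /* prints a nearby but different integer */
    return 0;
}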

Now that I know more about BSON, my opinion is that it shouldn't be
used as the transfer or storage format of the JSON data type.  Maybe
if someone wants to do the work, BSON could be implemented as a
contrib module, and functions could be provided in that module to
convert to/from JSON with documented caveats.


Joey Adams

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] shared_preload_libraries is ignored in single user mode

2010-08-16 Thread KaiGai Kohei
(2010/08/16 23:40), Robert Haas wrote:
 2010/8/16 KaiGai Koheikai...@ak.jp.nec.com:
 Although nobody paid attention, it seems to me a problem that should be fixed.

 The attached patch fixes the problem using a simple idea which adds
 process_shared_preload_libraries() at PostgresMain() when we launched
 it in single-user mode.
 
 I have no confidence at all that this is a sane thing to do.  I think
 any enhanced security provider that needs system objects to be
 labelled should provide a script to label them after the fact.  You
 can't count on everyone who wants to use SE-PostgreSQL having made
 that decision at initdb time.  I think we want to keep single-user
 mode as lean and mean as possible, so that people can rely on it when
 they need to fix their broken database.
 
I also agree it is nonsense to make access control decisions during
the initdb phase, but that is not the reason I want to fix this problem.

I plan to provide a script that assigns initial security labels after
initdb, but before launching the postmaster. This script executes
postgres in single-user mode, then labels database objects according to
the system setting. But the sepgsql module is currently not loaded.

I want to kick this job in single-user mode, not normal processing mode,
because it simplifies several things. For example, we don't need to
check whether the user has the privilege to assign initial labels,
because whoever ran initdb obviously has full power over the whole
database. In addition, we don't need to consider the possibility that
someone creates a new database object during the initial labeling.

So, I'd like to fix the problem.
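
(For reference, the change itself is tiny -- conceptually something like the
following in PostgresMain(), though I'm sketching from the description above
rather than quoting the patch:

    if (!IsUnderPostmaster)
        process_shared_preload_libraries();

i.e. do for ourselves in single-user mode what the postmaster normally does.)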

Thanks,
-- 
KaiGai Kohei kai...@ak.jp.nec.com

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Git migration timeline

2010-08-16 Thread Tom Lane
Magnus Hagander mag...@hagander.net writes:
 According to the decision at the developer meeting, the migration to
 git should happen 17-20 Aug. Here's my proposed timeline. This will
 obviously affect development work some, and since the original
 timeline called for us having already released 9.0 by then ;)

 1. Tuesday evening, around 19:00 central european time, which is 17:00
 GMT or 12:00 EST, I will freeze the current cvs repository. I will do
 this by disabling committer login on that box, so please note that
 this will also make it impossible for committers to do a cvs update
 from the authenticated repository.

So, per discussion, I'd like to suggest that we have a quiet time for
say three hours before the repository is closed off, to give interested
committers a chance to capture final snapshots of the current
repository.

IOW, please no more CVS commits after 14:00 GMT tomorrow, Tuesday 8/17.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] shared_preload_libraries is ignored in single user mode

2010-08-16 Thread Itagaki Takahiro
2010/8/17 KaiGai Kohei kai...@ak.jp.nec.com:
 I want to kick this job in single-user mode, not normal processing mode,

Does an explicit LOAD work in single-user mode?
I think LOAD just after login works the same as if it were preloaded,
unless it allocates shared memory.

-- 
Itagaki Takahiro

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] shared_preload_libraries is ignored in single user mode

2010-08-16 Thread KaiGai Kohei
(2010/08/17 9:02), Itagaki Takahiro wrote:
 2010/8/17 KaiGai Koheikai...@ak.jp.nec.com:
 I want to kick this job in single-user mode, not normal processing mode,
 
 Does an explicit LOAD work in single-user mode?
 I think LOAD just after login works the same as if it were preloaded,
 unless it allocates shared memory.
 
Thanks, I never thought of this idea.

It works and solves the matter, but I think the right way is to fix
the problem, rather than work around it in scripts...

Thanks,
-- 
KaiGai Kohei kai...@ak.jp.nec.com

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Return of the Solaris vacuum polling problem -- anyone remember this?

2010-08-16 Thread Josh Berkus

 Another idea that comes to mind is that you have vacuum_cost_page_dirty
 set to an unreasonably large value, so that autovac is blocking whenever
 it has to write even one page.

Nope.  Default.  And total cost was raised to 1000.

-- 
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] security label support, part.2

2010-08-16 Thread KaiGai Kohei
(2010/08/16 22:14), Stephen Frost wrote:
 KaiGai,
 
 * KaiGai Kohei (kai...@ak.jp.nec.com) wrote:
 The purpose of this restriction is to ensure that an access control decision
 using the parent's label is also consistent on child tables.
 
 Robert and I understand the concern that you have.  The answer, at least
 for now, is that we don't agree with you.  PG doesn't consider child
 tables to be independent objects when they're being accessed through the
 parent.  As such, they don't have their own permissions checking.
 
 If we control accesses on child tables using child's label, no need to
 restrict an identical label within an entire inheritance hierarchy.
 But it needs to provide the original rte->requiredPerms of child tables.
 Now it is cleared at expand_inherited_rtentry(), so we have no way to
 control accesses on child tables using child's label. :(
 
 If you want to argue that we should care about the child's permissions,
 or do something different with regard to inheritance, then you need to
 make that argument for all of PG, not just try to do what you think is
 right in the security definer framework.
 
  From viewpoint of MAC, both of the following SQLs should be denied,
 when accesses on parent_tbl is allowed, but child_tbl is denied.
 
 KaiGai, this is not a MAC vs. DAC difference.  This is a question of
 what is an object and if a child table is really an independent object
 from a parent table.  In PG, we've decided they're not.  We should
 probably do more to make that clearer in PG, rather than have different
 parts of the system treat them differently.
 
Ahh, yes, the question is what is an object, not a MAC vs DAC.

Indeed, PG does not try to handle a child table as an object independent
of its parent table. If so, however, it seems strange to me that we can
assign individual ownership and access privileges on child tables.

If we take the perspective that child tables are part of the parent
table, isn't it necessary to keep the same ownership and access
privileges between parent and children? It seems to me the current
implementation is halfway between the perspective of child tables as
independent objects and the perspective of child tables as part of the
parent table.

If PG keeps ownership and access privileges consistent between parent
and children, isn't it quite natural to keep labels consistent as well?

Thanks,
-- 
KaiGai Kohei kai...@ak.jp.nec.com

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] JSON Patch for PostgreSQL - BSON Support?

2010-08-16 Thread Josh Berkus

 but BSON pigeonholes numeric values into either double, int32, int64, or
 a 12-byte MongoDB Object ID.  Thus, for people who expect JSON to be
 able to hold arbitrary-precision numbers (which the JSON data type in
 my patch can), using BSON for transfer or storage will violate that
 expectation.

Good lord.  I'd suggest that maybe we wait for BSON v. 2.0 instead.

Is BSON even any kind of a standard?

-- 
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

