Re: [HACKERS] creating CHECK constraints as NOT VALID

2011-06-18 Thread Jaime Casanova
On Thu, Jun 16, 2011 at 4:10 AM, Jaime Casanova ja...@2ndquadrant.com wrote:
 On Wed, Jun 15, 2011 at 7:08 PM, Alvaro Herrera
 alvhe...@commandprompt.com wrote:

 Yeah, nothing serious.  Updated patch attached.  The wording in the doc
 changes could probably use some look over.


 looks good to me... at least it compiles, and functions as I would expect...
 tomorrow I will read the code more carefully and look at the docs, but
 probably this is just fine to be committed...


I think that this paragraph is confusing, but not being a native
English speaker I will let others give their opinion here:
!   If NOT VALID is not specified,
!   the command will only succeed if all columns using the
!   domain satisfy the new constraint.
!   The constraint is going to be enforced on new data inserted into columns
!   using the domain in all cases, regardless of <literal>NOT VALID</>.
!   <literal>NOT VALID</> is only accepted for <literal>CHECK</>
constraints.


Even if it is redundant, maybe it should say here "This form validates a
constraint previously added as NOT VALID"; otherwise someone could
think that by default constraints are not enforced and should be
validated.
+ <term>VALIDATE CONSTRAINT</term>
+ <listitem>
+  <para>
+   This form validates a constraint previously added, that is, verify that
+   all data in columns using the domain satisfy the specified constraint.
+  </para>
+ </listitem>
+</varlistentry>
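
For illustration, the workflow being documented would look roughly like this
(only a sketch; the domain and constraint names are invented):

    ALTER DOMAIN posint ADD CONSTRAINT posint_check CHECK (VALUE > 0) NOT VALID;
    -- new rows are checked immediately; existing data is verified later with:
    ALTER DOMAIN posint VALIDATE CONSTRAINT posint_check;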


otherwise the patch looks good, and works fine in every test I could imagine...

-- 
Jaime Casanova         www.2ndQuadrant.com
Professional PostgreSQL: Soporte 24x7 y capacitación



Re: [HACKERS] Re: [COMMITTERS] pgsql: Don't use cp -i in the example WAL archive_command.

2011-06-18 Thread Thom Brown
On 18 June 2011 04:13, Bruce Momjian br...@momjian.us wrote:
 Bruce Momjian wrote:

 Wow, this is the first I am hearing GNU cp -i can return zero exit if it
 doesn't do the copy.  I tested this on Ubuntu 10.04 using cp 7.4 and
 got:

       $ touch x y
       $ cp -i x y; echo $?
       cp: overwrite `y'? n
       0

 I see the same on my ancient BSD/OS machine too:

       $ touch x y
       $ cp -i x y; echo $?
       overwrite y? n
       0

 Were we expecting an error if the file already existed?  Assuming that,
 we should assume the file will always exist so basically archiving will
 never progress.  Is this what we want?  I just wasn't aware we were
 expecting an already-existing file to be an error --- I thought we just
 didn't want to overwrite it.

 I tested on FreeBSD 7.4 and saw a 1 error return:

        $ touch x y
        $ cp -i x y; echo $?
        overwrite y? (y/n [n]) n
        not overwritten
        1

And on a Mac (so through Darwin 10.7.0 a BSD version too):

toucan:tmp thom$ touch x y
toucan:tmp thom$ cp -i x y; echo $?
overwrite y? (y/n [n]) n
not overwritten
1

Thom



Re: [HACKERS] Range Types and extensions

2011-06-18 Thread Jeff Davis
On Fri, 2011-06-10 at 00:26 +0200, Florian Pflug wrote:
 Maybe that check should just be removed? If one views the range
 '[L, U)' as a concise way of expressing L <= x AND x < U for some
 x, then allowing the case L > U seems quite natural. There won't
 be any such x of course, but the range is still valid, just empty.

[ Please excuse the late reply, I was on vacation. ]

That's an interesting perspective, but I don't think it's a good idea. 

Up to this point, I've considered a range value to be a set of
contiguous values, and the endpoints just happen to be a way to
represent that set. If changing the collation changes a set of positive
cardinality into an empty set, clearly it's a different value.

We don't want the COLLATE clause to change the value, because things
that do change the value (like a typecast) should offer the opportunity
to call a function so that you can verify that it's valid or change it
to some canonical form.

So, I believe that you are proposing to change the concept of a range
value from a contiguous set of values to a pair of bounds. There are
numerous implications, one of which is that I don't think that we can
maintain the equality of all empty ranges. Consider these expressions,
where x is a non-empty range with collation A, but is empty in
collation B (and * means range intersection):

  (x COLLATE B) COLLATE A
  ((x COLLATE B) * '(-Inf, Inf)') COLLATE A
  ('-'::textrange * '(-Inf, Inf)') COLLATE A

All of those expressions should be equal (according to global
substitutability, as Darren mentioned). But they can't be, because the
last expression is always an empty range, whereas the first one is not
(because merely changing the collation back and forth offers no
opportunity to even notice that you have an empty range at one point).
So, I believe that we'd be stuck with non-equal empty ranges, as well as
many other possibly non-intuitive implications.

So, I lean strongly toward the interpretation that a range is a
contiguous set of values, and changing the collation should not change
the value. Things that do change the value (like a typecast) should
offer the opportunity to handle cases like this with a function call,
but changing collation does not.

This leaves making the collation a part of the range type itself (as
Robert suggested).
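
For illustration, that would mean fixing the collation at the time the range
type is declared, along the lines of (hypothetical syntax and type name):

    CREATE TYPE textrange_de AS RANGE (subtype = text, collation = "de_DE");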

Comments?

Regards,
Jeff Davis




Re: [HACKERS] Another swing at JSON

2011-06-18 Thread Bernd Helmle



--On 17. Juni 2011 18:06:58 -0400 Joseph Adams joeyadams3.14...@gmail.com 
wrote:



Done.  Note that this module builds, tests, and installs successfully
with USE_PGXS=1.  However, building without USE_PGXS=1 produces the
following:

CREATE EXTENSION json;
ERROR:  incompatible library "/usr/lib/postgresql/json.so": version mismatch
DETAIL:  Server is version 9.1, library is version 9.2.

Similar problems occur with a couple other modules I tried (hstore, intarray).


Hmm, works for me. Seems you have messed up your installation in some way 
(build against current -HEAD but running against a 9.1?).


I'm going to review in the next couple of days again.

--
Thanks

Bernd



Re: [HACKERS] Re: [COMMITTERS] pgsql: Don't use cp -i in the example WAL archive_command.

2011-06-18 Thread Dickson S. Guedes
2011/6/18 Thom Brown t...@linux.com:
[.. cut ..]
 And on a Mac (so through Darwin 10.7.0 a BSD version too):

 toucan:tmp thom$ touch x y
 toucan:tmp thom$ cp -i x y; echo $?
 overwrite y? (y/n [n]) n
 not overwritten
 1

On AIX 5.3

bash-3.00$ touch x y
bash-3.00$ cp -i x y; echo $?
overwrite y? n
0

-- 
Dickson S. Guedes
mail/xmpp: gue...@guedesoft.net - skype: guediz
http://guedesoft.net - http://www.postgresql.org.br



Re: [HACKERS] Re: [COMMITTERS] pgsql: Don't use cp -i in the example WAL archive_command.

2011-06-18 Thread Tom Lane
Thom Brown t...@linux.com writes:
 On 18 June 2011 04:13, Bruce Momjian br...@momjian.us wrote:
 I tested on FreeBSD 7.4 and saw a 1 error return:

 And on a Mac (so through Darwin 10.7.0 a BSD version too):

Yeah, see yesterday's discussion on pgsql-admin.  I think the behavior
with the error return may be a BSD-ism.  In any case, GNU cp does *not*
do what we want, and that accounts for a sufficiently large fraction of
machines in the field that I think it's just unsafe to suggest using
cp -i so prominently.  There are too many people who'll just copy and
paste the first example provided, especially if the warning to test it
is buried several paragraphs later.

regards, tom lane



[HACKERS] plpgsql performance - SearchCatCache issue

2011-06-18 Thread Pavel Stehule
Hello

I tried to optimize repeated assignment in plpgsql by eliminating
unnecessary palloc/free calls.

I tested the changes on a simple bubble sort:

postgres=# \sf buble
CREATE OR REPLACE FUNCTION public.buble(integer[])
 RETURNS integer[]
 LANGUAGE plpgsql
AS $function$
declare
  unsorted bool := true;
  aux int;
begin
  while unsorted
  loop
unsorted := false;
for i in array_lower($1,1) .. array_upper($1, 1) - 1
loop
  if $1[i] > $1[i+1] then
aux := $1[i];
$1[i] := $1[i+1];
$1[i+1] := aux;
unsorted := true;
  end if;
end loop;
  end loop;
  return $1;
end
$function$
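
For reference, a quick way to exercise it (assuming the comparison above is the
usual ascending-order test) is:

  SELECT buble(ARRAY[5,3,4,1,2]);   -- returns {1,2,3,4,5}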

The performance tests show that this optimization is useless. But when
I checked the oprofile results I was surprised by the high number of
SearchCatCache calls.

3008 13.0493  SearchCatCache
1306  5.6657  ExecEvalParamExtern
1143  4.9586  GetSnapshotData
1122  4.8675  AllocSetAlloc
1058  4.5898  MemoryContextAllocZero
1002  4.3469  ExecMakeFunctionResultNoSets
986   4.2775  ExecEvalArrayRef
851   3.6918  LWLockAcquire
783   3.3968  LWLockRelease
664   2.8806  RevalidateCachedPlan
646   2.8025  AllocSetFree
568   2.4641  array_ref
551   2.3904  CopySnapshot
519   2.2515  AllocSetReset
510   2.2125  array_set
492   2.1344  PopActiveSnapshot
381   1.6529  ArrayGetOffset
369   1.6008  AcquireExecutorLocks
348   1.5097  pfree
347   1.5054  MemoryContextAlloc
313   1.3579  bms_is_member
285   1.2364  CatalogCacheComputeHashValue
267   1.1583  PushActiveSnapshot
266   1.1540  hash_uint32
253   1.0976  pgstat_init_function_usage
233   1.0108  array_seek.clone.0

when I mark the function buble as immutable I got this profile:

3006 18.6384  SearchCatCache
1239  7.6823  ExecEvalParamExtern
1061  6.5786  MemoryContextAllocZero
931   5.7726  ExecMakeFunctionResultNoSets
881   5.4625  ExecEvalArrayRef
590   3.6582  RevalidateCachedPlan
580   3.5962  array_ref
518   3.2118  AllocSetAlloc
488   3.0258  array_set
447   2.7716  AllocSetReset
383   2.3748  AcquireExecutorLocks
334   2.0709  bms_is_member
311   1.9283  ArrayGetOffset
285   1.7671  CatalogCacheComputeHashValue
269   1.6679  pgstat_init_function_usage
240   1.4881  hash_uint32
237   1.4695  ResourceOwnerForgetPlanCacheRef
214   1.3269  oideq
210   1.3021  ReleaseCachedPlan
204   1.2649  array_seek.clone.0
202   1.2525  ResourceOwnerForgetCatCacheRef
196   1.2153  SearchSysCache
188   1.1657  pg_detoast_datum
185   1.1471  ArrayGetNItems
183   1.1347  ExecEvalConst
181   1.1223  DirectFunctionCall1Coll
178   1.1037  hashoid
176   1.0913  check_stack_depth
174   1.0789  heap_getsysattr
174   1.0789  pgstat_end_function_usage
173   1.0727  FunctionCall2Coll

Is this profile expected?

Regards

Pavel Stehule



Re: [HACKERS] Re: [COMMITTERS] pgsql: Don't use cp -i in the example WAL archive_command.

2011-06-18 Thread Bruce Momjian
Tom Lane wrote:
 Thom Brown t...@linux.com writes:
  On 18 June 2011 04:13, Bruce Momjian br...@momjian.us wrote:
  I tested on FreeBSD 7.4 and saw a 1 error return:
 
  And on a Mac (so through Darwin 10.7.0 a BSD version too):
 
 Yeah, see yesterday's discussion on pgsql-admin.  I think the behavior
 with the error return may be a BSD-ism.  In any case, GNU cp does *not*
 do what we want, and that accounts for a sufficiently large fraction of
 machines in the field that I think it's just unsafe to suggest using
 cp -i so prominently.  There are too many people who'll just copy and
 paste the first example provided, especially if the warning to test it
 is buried several paragraphs later.

Agreed.  Even if we could decide whether we want an existing file to
cause cp to fail or succeed, the bigger problem is that 'test ! -f $FILE
&& cp' and 'cp -i' often don't do the same thing, to the point where it
doesn't even seem worth mentioning the idea of using 'cp -i' at all.

I frankly don't think most users are competent enough to be able to test
their cp -i command, and even if they are, that script might migrate to
a machine that handles cp -i differently.

I think we should just document the test ! -f version and be done with
it, and maybe mention cp -i as non-portable.
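
For reference, the documented example would then be along these lines (the
archive directory path is only a placeholder):

    archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'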

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] pg_upgrade using appname to lock out other users

2011-06-18 Thread Bruce Momjian
Bruce Momjian wrote:
 I meant the PGPASSWORD environment variable:
 
   <indexterm>
<primary><envar>PGPASSWORD</envar></primary>
   </indexterm>
   <envar>PGPASSWORD</envar> behaves the same as the <xref
   linkend="libpq-connect-password"> connection parameter.
   Use of this environment variable
   is not recommended for security reasons, as some operating systems
   allow non-root users to see process environment variables via
   <application>ps</>; instead consider using the
   <filename>~/.pgpass</> file (see <xref linkend="libpq-pgpass">).
 
 The only other way to do this is to pass it on the command line, but
 some options don't allow that (pg_ctl), and PGPASSFILE is going to
 require me to create a dummy .pgpass password file in a valid format and
 use that.

One interesting idea would be to write the server password in the
PGPASSFILE format, and allow the server and libpq to read the same file
by pointing PGPASSFILE at that file.
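
(For reference, each line of a .pgpass-style file has the form
hostname:port:database:username:password.)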

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] crash-safe visibility map, take five

2011-06-18 Thread Noah Misch
On Fri, Jun 17, 2011 at 01:21:17PM -0400, Robert Haas wrote:
 On Thu, Jun 16, 2011 at 11:17 PM, Noah Misch n...@leadboat.com wrote:
  I suggest revisiting the suggestion in
  http://archives.postgresql.org/message-id/27743.1291135...@sss.pgh.pa.us and
  including a latestRemovedXid in xl_heap_visible.  The range of risky xids is
  the same for setting a visibility map bit as for deleting a dead tuple, and
  the same operation (VACUUM) produces both conflicts.
 
 See Heikki's follow-up email.  The standby has to ignore all-visible
 bits anyway, because the fact that a transaction is all-visible on the
 master does not imply that it is all-visible on the standby.  So I
 don't think there's a problem here.

Indeed.  If we conflicted on the xmin of the most-recent all-visible tuple
when setting the bit, all-visible on the standby would become a superset of
all-visible on the master, and we could cease to ignore the bits.  This is an
excellent trade-off for masters that do UPDATE and DELETE, because they're
already conflicting (or using avoidance measures) on similar xids at VACUUM
time.  (Some INSERT-only masters will disfavor the trade-off, but we could
placate them with a GUC.  On the other hand, hot_standby_feedback works and
costs an INSERT-only master so little.)

  I proceeded to do some immediate-shutdown tests to see if we get the bits 
  set
  as expected.  I set up a database like this:
        CREATE EXTENSION pageinspect;
        CREATE TABLE t (c int);
        INSERT INTO t VALUES (2);
        CHECKPOINT;
  I normally cleared bits with UPDATE t SET c = 1; CHECKPOINT; and set them
  with VACUUM t.  I checked bits with this query:
        SELECT to_hex(get_byte(get_raw_page('t', 'vm', 0), 24)),
               to_hex(flags::int) FROM page_header(get_raw_page('t', 0));
  The row from that query should generally be 0,1 (bits unset) or 1,5 (bits
  set).  0,5 is fine after a crash.  1,1 means we've broken our contract: the VM
  bit is set while PD_ALL_VISIBLE is not set.
 
  First test was to clear bits, checkpoint, then VACUUM and SIGQUIT the
  postmaster.  The system came back up with 1/1 bits.  I poked around enough to
  see that XLByteLE(lsn, PageGetLSN(page)) was failing, but I did not dig any
  deeper toward a root cause.  If you have trouble reproducing this, let me know
  so I can assemble a complete, self-contained test case.
 
 Thank you for putting these test cases together.  As you can probably
 tell, I was having difficulty figuring out exactly how to test this.

No problem.  I ran these test cases with the new patch:

-Description-                           -Expected bits-   -Result-
set bit, VACUUM commit, crash           1,5               1,5
set bit, crash                          0,1               0,1
set bit, flush heap page, crash         0,5               0,5
set bit, flush vm page, crash           1,5               1,5

xlog flush request locations look correct, too.  Overall, looking good.  Do
you know of other notable cases to check?  The last two were somewhat tricky
to inject; if anyone wishes to reproduce them, I'm happy to share my recipe.

One more thing came to mind: don't we need a critical section inside the if
block in visibilitymap_set()?  I can't see anything in there that could
elog(ERROR, ...), but my general impression is that we use one anyway.

I ran one repetition of my benchmark from last report, and the amount of WAL
has not changed.  Given that, I think the previous runs remain valid.

 I think that the problem here is that the sense of that test is
 exactly backwards from what it should be.  IIUC, the word precedes
 in the previous comment should in fact say follows.

Ah; quite so.

  +             /* Don't set the bit if replay has already passed this point. */
  +             if (XLByteLE(lsn, PageGetLSN(BufferGetPage(vmbuffer))))
  +                     visibilitymap_set(reln, xlrec->block, lsn, vmbuffer);
 
  ... wouldn't it be better to do this unconditionally?  Some later record will
  unset it if needed, because all replay functions that clear the bit do so
  without consulting the vm page LSN.  On the other hand, the worst consequence
  of the way you've done it is VACUUM visiting the page one more time to set the
  bit.
 
 Hmm, now that I look at it, I think this test is backwards too.  I
 think you might be right that it wouldn't hurt anything to go ahead
 and set it, but I'm inclined to leave it in for now.

Okay.  Consider expanding the comment to note that this is out of conservatism
rather than known necessity.

  I think it's worth noting, perhaps based on your explanation in the
  second-to-last paragraph of
  http://archives.postgresql.org/message-id/BANLkTi=b7jvmq6fa_exlcygzuyv1u9a...@mail.gmail.com
  that the answer may be incorrect again after the recheck.  We don't care:
  

Re: [HACKERS] Re: [COMMITTERS] pgsql: Don't use cp -i in the example WAL archive_command.

2011-06-18 Thread Andrew Dunstan



On 06/18/2011 09:19 AM, Tom Lane wrote:

Thom Brown t...@linux.com writes:

On 18 June 2011 04:13, Bruce Momjian br...@momjian.us wrote:

I tested on FreeBSD 7.4 and saw a 1 error return:

And on a Mac (so through Darwin 10.7.0 a BSD version too):

Yeah, see yesterday's discussion on pgsql-admin.  I think the behavior
with the error return may be a BSD-ism.  In any case, GNU cp does *not*
do what we want, and that accounts for a sufficiently large fraction of
machines in the field that I think it's just unsafe to suggest using
cp -i so prominently.  There are too many people who'll just copy and
paste the first example provided, especially if the warning to test it
is buried several paragraphs later.




Yeah, I'm glad to see this go anyway. I always thought the example was 
too cute by half. Using an explicit test is much nicer and more obvious.


cheers

andrew



Re: [HACKERS] Re: [COMMITTERS] pgsql: Don't use cp -i in the example WAL archive_command.

2011-06-18 Thread Bruce Momjian
Andrew Dunstan wrote:
 
 
 On 06/18/2011 09:19 AM, Tom Lane wrote:
   Thom Brown t...@linux.com writes:
   On 18 June 2011 04:13, Bruce Momjian br...@momjian.us wrote:
  I tested on FreeBSD 7.4 and saw a 1 error return:
  And on a Mac (so through Darwin 10.7.0 a BSD version too):
  Yeah, see yesterday's discussion on pgsql-admin.  I think the behavior
  with the error return may be a BSD-ism.  In any case, GNU cp does *not*
  do what we want, and that accounts for a sufficiently large fraction of
  machines in the field that I think it's just unsafe to suggest using
  cp -i so prominently.  There are too many people who'll just copy and
  paste the first example provided, especially if the warning to test it
  is buried several paragraphs later.
 
  
 
 Yeah, I'm glad to see this go anyway. I always thought the example was 
 too cute by half. Using an explicit test is much nicer and more obvious.

I think the only real risk to the 'test' example is the possibility that
they will mistype the pathname in one of the two places it is required,
or forget to change both of them at the same time.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] proposal: a validator for configuration files

2011-06-18 Thread Florian Pflug
On Jun16, 2011, at 22:34 , Alexey Klyukin wrote:
 Attached is the v2 of the patch to show all parse errors at postgresql.conf.
 Changes (per review and suggestions from Florian):
 
 - do not stop on the first error during postmaster's start.
 - fix errors in processing files from include directives.
 - show only a single syntax error per line, i.e. fast forward to the EOL 
 after coming across the first one.
 - additional comments/error messages, code cleanup

Looks mostly good.

I found one issue which I missed earlier. As it stands, the behaviour
of ProcessConfigFile() changes subtly when IsUnderPostmaster because of
the early abort on errors. Now, in theory the outcome should still be
the same, since we don't apply any settings if there's an error in one
of them. But still, there's a risk that this code isn't 100% waterproof,
and then we'd end up with different settings in the postmaster compared
to the backends. The benefit seems also quite small - since the backends
emit their messages at DEBUG2, you usually won't see the difference
anyway. 

The elevel setting at the start of ProcessConfigFile() also seemed
needlessly complex, since we cannot have IsUnderPostmaster and 
context == PGCS_POSTMASTER at the same time.

I figured it'd be harder to explain the fixes I have in mind than
simply do them and let the code speak. Attached you'll find an updated
version of your v2 patch (called v2a) as well as an incremental patch
against your v2 (called v2a_delta).

The main changes are the removal of the early aborts when
IsUnderPostmaster and the simplification of the elevel setting
logic in ProcessConfigFile().

The updated patch also adds a few comments. If you agree with my changes,
feel free to mark this patch Ready for Committer.

best regards,
Florian Pflug



pg_parser_continue_on_error_v2a_delta.patch
Description: Binary data


pg_parser_continue_on_error_v2a.patch
Description: Binary data



Re: [HACKERS] patch: Allow \dd to show constraint comments

2011-06-18 Thread Kohei KaiGai
I checked the v4 patch.

At first, I noticed three missing object classes, although COMMENT ON allows
setting a description on 'collation', 'extension' and 'foreign table'.
In addition, this pg_comments system view supports the 'access method' class, but
we cannot set a comment on access methods using the COMMENT ON statement.

Regarding the data type of objnamespace, how about defining a new
data type such as 'regschema' and casting objnamespace to this type?
With such a data type, users could reference the schema name as a string,
and OID joins would also still be available.
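
As a sketch of the idea (regschema is hypothetical here, and pg_comments is the
view proposed by this patch):

    SELECT c.*, c.objnamespace::regschema AS schema_name FROM pg_comments c;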

Thanks,

2011/6/5 Josh Kupershmidt schmi...@gmail.com:
 On Tue, May 24, 2011 at 10:31 PM, Josh Kupershmidt schmi...@gmail.com wrote:
 Attached is a rebased patch. From a quick look, it seems that most of
 the object types missing from \dd are already covered by pg_comments
 (cast, constraint, conversion, domain, language, operator class,
 operator family). A few objects would probably still need to be added
 (foreign data wrapper, server).

 Here's another update to this patch. Includes:
  * rudimentary doc page for pg_comments
  * 'foreign data wrapper' and 'server' comment types now included in
 pg_comments; regression test updated

 Still TODO:
  * psql's \dd should read from pg_comments. But I think we need some
 simple way to distinguish comments on system objects from non-system
 objects, which we'd need for differentiating \dd from \ddS, not to
 mention being useful for ad-hoc queries. I'm open to ideas here.

 Josh







-- 
KaiGai Kohei kai...@kaigai.gr.jp



[HACKERS] pika buildfarm member failure on isolationCheck tests

2011-06-18 Thread Rémi Zara
Hi,

Pika failed at the isolationCheck phase, hitting an assertion:

TRAP: FailedAssertion(!(!((oldSerXidControl->tailXid) != ((TransactionId) 0))
|| TransactionIdFollows(xid, oldSerXidControl->tailXid)), File: "predicate.c",
Line: 991)

see 
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=pika&dt=2011-06-17%2015%3A45%3A30

It seems that for this stage, the server log (which contains the failed 
assertion) is not sent back to the buildfarm server. Maybe that should be fixed 
?

Regards,

Rémi Zara



Re: [HACKERS] .gitignore for some of cygwin files

2011-06-18 Thread Magnus Hagander
On Thu, Jun 16, 2011 at 12:36, Peter Eisentraut pete...@gmx.net wrote:
 On tor, 2011-06-09 at 16:25 +0200, Magnus Hagander wrote:
 Based on this list, a global exclude for *.exe and lib*dll.def
 seems reasonable.

 We already have finer-grained excludes for various lib*dll.def
 variations in the libpq and ecpg subdirectories.  Those should be
 cleaned up if we are adding a global one.

There's no if, since we already added the global one.

So - I've removed the local ones.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [HACKERS] XPATH evaluation

2011-06-18 Thread Radosław Smogura
Nicolas Barbier nicolas.barb...@gmail.com Friday 17 of June 2011 17:29:57
 2011/6/17, Andrew Dunstan and...@dunslane.net:
  On 06/17/2011 10:55 AM, Radosław Smogura wrote:
  XML canonicalization preserves whitespace, if I remember
  well; I think there is an example.
  
  In any case, if I store an image in XML (I've seen this), preservation
  of white space and newlines is important.
  
  If you store images you should encode them anyway, in base64 or hex.
 
 Whitespace that is not at certain obviously irrelevant places (such as
 right after , between attributes, outside of the whole document,
 etc), and that is not defined to be irrelevant by some schema (if the
 parser is schema-aware), is relevant. You cannot just muck around with
 it and consider that correct.
 
  More generally, data that needs that sort of preservation should
  possibly be in CDATA nodes.
 
 CDATA sections are just syntactic sugar (a form of escaping):
 
 URL:http://www.w3.org/TR/xml-infoset/#omitted
 
 Appendix D: What is not in the Information Set
 [..]
 19. The boundaries of CDATA marked sections.
 
 Therefore, there is not such thing as a CDATA node that would be
 different from just text (Infoset-wise).
 
 Note that that does not mean that binary data is never supposed to be
 altered or that all binary data is to be accepted: e.g., whether
 newlines are represented using \n, \r, or \r\n is irrelevant;
 also, binary data that is not valid according to the used encoding
 must of course not be accepted.
 
 Nicolas

I would like to send a patch to remove the formatting. I don't know yet how to
deal with collapsing blank nodes.

Regards,
Radek
diff --git a/src/backend/utils/adt/xml.c b/src/backend/utils/adt/xml.c
index 702b9e3..07464fb 100644
--- a/src/backend/utils/adt/xml.c
+++ b/src/backend/utils/adt/xml.c
@@ -3262,7 +3262,7 @@ xml_xmlnodetoxmltype(xmlNodePtr cur)
 		buf = xmlBufferCreate();
 		PG_TRY();
 		{
-			xmlNodeDump(buf, NULL, cur, 0, 1);
+			xmlNodeDump(buf, NULL, cur, 0, 0);
 			result = xmlBuffer_to_xmltype(buf);
 		}
 		PG_CATCH();



[HACKERS] Grouping Sets

2011-06-18 Thread Mariano Mara
Hi hackers (and specially Pavel Stehule),
 I could really use the grouping sets feature for some complex queries I'm
 migrating from another db vendor. If my web searching is accurate, this
 wiki page [1] and this thread [2] are the last updates on the subject.
 I'm willing to test how this works in my project, but some questions first:
  1- is there an up-to-date version of the patch that I should be aware of?
  2- Can I apply that patch to 8.4.8?
  3- any extra recommendations?  

TIA,
Mariano

[1] http://wiki.postgresql.org/wiki/Grouping_Sets
[2] http://archives.postgresql.org/pgsql-hackers/2010-08/msg00647.php



Re: [HACKERS] Range Types and extensions

2011-06-18 Thread Florian Pflug
On Jun18, 2011, at 10:10 , Jeff Davis wrote:
 On Fri, 2011-06-10 at 00:26 +0200, Florian Pflug wrote:
 So, I believe that you are proposing to change the concept of a range
 value from a contiguous set of values to a pair of bounds.

Yeah. Mostly, though, because I figured that'd make defining their
semantics easier, not necessarily because that interpretation is
better.

 There are
 numerous implications, one of which is that I don't think that we can
 maintain the equality of all empty ranges. Consider these expressions,
 where x is a non-empty range with collation A, but is empty in
 collation B (and * means range intersection):
 
  (x COLLATE B) COLLATE A
  ((x COLLATE B) * '(-Inf, Inf)') COLLATE A
  ('-'::textrange * '(-Inf, Inf)') COLLATE A
 
 All of those expressions should be equal (according to global
 substitutability, as Darren mentioned). But they can't be, because the
 last expression is always an empty range, whereas the first one is not
 (because merely changing the collation back and forth offers no
 opportunity to even notice that you have an empty range at one point).
 So, I believe that we'd be stuck with non-equal empty ranges, as well as
 many other possibly non-intuitive implications.

Yeah. Once you give up the idea that a range is a set, extensionality
(i.e. the axiom that there's only one empty range, or more precisely that
there is only one range which no object is a member of) has to go too.

 So, I lean strongly toward the interpretation that a range is a
 contiguous set of values,

Yeah, I agree now, mainly because defining them as a set gives rise
to richer semantics than defining them to be a pair. If someone
needs just a pair of values and maybe a BETWEEN operator, that is
easily done with CREATE TYPE and a few SQL or PLPGSQL functions.

 and changing the collation should not change
 the value. Things that do change the value (like a typecast) should
 offer the opportunity to handle cases like this with a function call,
 but changing collation does not.
 
 This leaves making the collation a part of the range type itself (as
 Robert suggested).

Yes, that seems necessary for consistency. That leaves the question
of what to do if someone tries to modify a textrange's collation with
a COLLATE clause.

For example, what's the result of
  'Ä' in '[A,Z]'::textrange_german COLLATE 'C'
where 'Ä' is a german Umlaut-A which sorts after 'A' but before 'B'
in locale 'de_DE' but sorts after 'Z' in locale 'C'. (I'm assuming
that textrange_german was defined with collation 'de_DE').

With the set-based definition of ranges, the only sensible thing
is to simply ignore the COLLATE clause I think.

best regards,
Florian Pflug




Re: [HACKERS] pika buildfarm member failure on isolationCheck tests

2011-06-18 Thread Andrew Dunstan



On 06/18/2011 11:57 AM, Rémi Zara wrote:

Hi,

Pika failed at the isolationCheck phase, hitting an assertion:

TRAP: FailedAssertion(!(!((oldSerXidControl->tailXid) != ((TransactionId) 0)) ||
TransactionIdFollows(xid, oldSerXidControl->tailXid)), File: "predicate.c", Line: 991)

see 
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=pika&dt=2011-06-17%2015%3A45%3A30

It seems that for this stage, the server log (which contains the failed 
assertion) is not sent back to the buildfarm server. Maybe that should be fixed 
?




Change committed. See 
https://github.com/PGBuildFarm/client-code/commit/dd49c04. It will be 
in the next release.


cheers

andrew



Re: [HACKERS] Re: [COMMITTERS] pgsql: Don't use cp -i in the example WAL archive_command.

2011-06-18 Thread Bruce Momjian
Bruce Momjian wrote:
 Andrew Dunstan wrote:
  
  
  On 06/18/2011 09:19 AM, Tom Lane wrote:
    Thom Brown t...@linux.com writes:
    On 18 June 2011 04:13, Bruce Momjian br...@momjian.us wrote:
   I tested on FreeBSD 7.4 and saw a 1 error return:
   And on a Mac (so through Darwin 10.7.0 a BSD version too):
   Yeah, see yesterday's discussion on pgsql-admin.  I think the behavior
   with the error return may be a BSD-ism.  In any case, GNU cp does *not*
   do what we want, and that accounts for a sufficiently large fraction of
   machines in the field that I think it's just unsafe to suggest using
   cp -i so prominently.  There are too many people who'll just copy and
   paste the first example provided, especially if the warning to test it
   is buried several paragraphs later.
  
 
  
  Yeah, I'm glad to see this go anyway. I always thought the example was 
  too cute by half. Using an explicit test is much nicer and more obvious.
 
 I think the only real risk to the 'test' example is the possibility that
 they will mistype the pathname in one of the two places it is required,
 or forget to change both of them at the same time.

Perhaps we should recommend:

cd /path && test ! -f %f && cp %p %f

That is shorter and removes the duplicate problem.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] Re: [COMMITTERS] pgsql: Don't use cp -i in the example WAL archive_command.

2011-06-18 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 Perhaps we should recommend:
   cd /path && test ! -f %f && cp %p %f
 That is shorter and removes the duplicate problem.

Um ... %p is relative to the database directory.

regards, tom lane



Re: [HACKERS] Range Types and extensions

2011-06-18 Thread Jeff Davis
On Sat, 2011-06-18 at 22:19 +0200, Florian Pflug wrote:
 Yes, that seems necessary for consistency. That leaves the question
 of what to do if someone tries to modify a textrange's collation with
 a COLLATE clause.
 
 For example, what's the result of
   'Ä' in '[A,Z]'::textrange_german COLLATE 'C'
 where 'Ä' is a german Umlaut-A which sorts after 'A' but before 'B'
 in locale 'de_DE' but sorts after 'Z' in locale 'C'. (I'm assuming
 that textrange_german was defined with collation 'de_DE').
 
 With the set-based definition of ranges, the only sensible thing
 is to simply ignore the COLLATE clause I think.

I think rejecting it makes more sense, so a range would not be a
collatable type; it just happens to use collations of the subtype
internally.
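
For illustration (textrange_german being the hypothetical type from the quoted
example), something like this would then raise an error rather than silently
changing the range's collation:

    SELECT '[A,Z]'::textrange_german COLLATE "C";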

Regards,
Jeff Davis




Re: [HACKERS] patch for 9.2: enhanced errors

2011-06-18 Thread Steve Singer

On 11-06-08 04:14 PM, Pavel Stehule wrote:

Hello

Attached patch implements a new erros's fields that describes table,
colums related to error. This enhanced info is limited to constraints
and RI.



Here is my review of this patch

Submission Review:

The patch applies cleanly against master
The patch does not include any documentation updates (see note below to 
update config.sgml)
The patch does not include any unit tests. At a minimum it should add a 
few tests with verbosity set to verbose



Usability Review

The patch adds the ability to get more information about the reasons a 
query failed.  Pavel indicates that this is a building block for a later 
patch.  This sounds like a worthwhile goal; without this patch I don't 
see another good way of getting the details on what columns make up the 
constraint that fails, other than making additional queries into the 
catalog.


This patch should not impact pg_dump or pg_upgrade.

Pavel has submitted a related patch that adds support for this feature 
to plpgsql,  in theory other pl's might want to use the information this 
patch exposes.



Feature Test
---

The error messages behave as described with \set verbosity verbose.

I tried this using both the 8.3 and 9.0 versions of psql (against a 
postgresql server with this patch) and things worked as expected (the 
extra error messages did not show).  I also tried the patched psql 
against an 8.3 backend and verified that we don't get strange behaviour 
going against an older backend with verbosity set.


I tried adding multiple constraints to a table and inserting a row that 
violates them, only one of the constraints showed up in the error 
message, this is fine and consistent with the existing behaviour



Consider this example of an error that gets generated

ERROR:  23505: duplicate key value violates unique constraint "A_pkey"
DETAIL:  Key (a)=(1) already exists.
LOCATION:  _bt_check_unique, nbtinsert.c:433
CONSTRAINT:  A_pkey
SCHEMA:  public
TABLE:  A
COLUMN:  a
STATEMENT:  insert into A values (1);

I think that both the CONSTRAINT and TABLE name should be double quoted 
like "A_pkey" is above.  When doing this make sure you don't break the 
quoting in the CSV format log.



Performance Review
-
I don't see this patch impacting performance, I did not conduct any 
performance tests.



Coding Review
-


In tupdesc.c

line 202 the existing code is performing a deep copy of ConstrCheck.  Do 
you need to copy nkeys and conkey here as well?


Then at line 250 ccname is freed but not conkey


postgres_ext.h line 55
+ #define PG_DIAG_SCHEMA_NAME     's'
+ #define PG_DIAG_TABLE_NAME      't'
+ #define PG_DIAG_COLUMN_NAMES    'c'
+ #define PG_DIAG_CONSTRAINT_NAME 'n'

The assignment of letters to parameters seems arbitrary to me; I don't 
have a different non-arbitrary mapping in mind but if anyone else does 
they should speak up.  I think it will be difficult to change this after 
9.2 goes out.



elog.c:
***
*** 2197,2202 
--- 2299,2319 
  if (application_name)
  appendCSVLiteral(buf, application_name);

+ /* constraint_name */
+ appendCSVLiteral(buf, edata->constraint_name);
+ appendStringInfoChar(buf, ',');
+
+ /* schema name */
+ appendCSVLiteral(buf, edata->schema_name);
+ appendStringInfoChar(buf, ',');

You need to update config.sgml at the same time you update this format.
You need to append a , after application name but before 
constraintName.   As it stands the CSV log has something like:

.nbtinsert.c:433,psqla_pkey,public,a,a


nbtinsert.c

pg_get_indrelation is named differently than everything else in this 
file (ie _bt...).  My guess is that this function belongs somewhere else 
but I don't know the code well enough to say where you should move it to.




Everything I've mentioned above is a minor issue, I will move the patch 
to 'waiting for author' and wait for you to release an updated patch.


Steve Singer



Re: [HACKERS] crash-safe visibility map, take five

2011-06-18 Thread Robert Haas
On Sat, Jun 18, 2011 at 9:37 AM, Noah Misch n...@leadboat.com wrote:
 Indeed.  If we conflicted on the xmin of the most-recent all-visible tuple
 when setting the bit, all-visible on the standby would become a superset of
 all-visible on the master, and we could cease to ignore the bits.  This is an
 excellent trade-off for masters that do UPDATE and DELETE, because they're
 already conflicting (or using avoidance measures) on similar xids at VACUUM
 time.  (Some INSERT-only masters will disfavor the trade-off, but we could
 placate them with a GUC.  On the other hand, hot_standby_feedback works and
 costs an INSERT-only master so little.)

I hadn't really considered the possibility that making more things
conflict on the standby server could work out to a win.  I think that
would need some careful testing, and I don't want to get tied up in
that for purposes of this patch.  There is no law against a WAL format
change, so we can always do it later if someone wants to do the
legwork.

 No problem.  I ran these test cases with the new patch:

 -Description-                           -Expected bits-   -Result-
 set bit, VACUUM commit, crash           1,5               1,5
 set bit, crash                          0,1               0,1
 set bit, flush heap page, crash         0,5               0,5
 set bit, flush vm page, crash           1,5               1,5

 xlog flush request locations look correct, too.  Overall, looking good.  Do
 you know of other notable cases to check?  The last two were somewhat tricky
 to inject; if anyone wishes to reproduce them, I'm happy to share my recipe.

Those sound like interesting recipes...

 One more thing came to mind: don't we need a critical section inside the if
 block in visibilitymap_set()?  I can't see anything in there that could
 elog(ERROR, ...), but my general impression is that we use one anyway.

Seems like a good idea.

 I ran one repetition of my benchmark from last report, and the amount of WAL
 has not changed.  Given that, I think the previous runs remain valid.

As far as I can see, the upshot of those benchmarks was that more WAL
was generated, but the time to execute vacuum didn't really change.  I
think that's going to be pretty typical.  In your test, the additional
WAL amounted to about 0.6% of the amount of data VACUUM is dirtying,
which I think is pretty much in the noise, assuming wal_buffers and
checkpoint_segments are tuned to suitable values.

The other thing to worry about from a performance view is whether the
push-ups required to relocate the clear-the-vm-bits logic to within
the critical sections is going to have a significant impact on
ordinary insert/update/delete operations.  I was a bit unsure at first
about Heikki's contention that it wouldn't matter, but after going
through the exercise of writing the code I think I see now why he
wasn't concerned.  The only possibly-noticeable overhead comes from
the possibility that we might decide not to pin the visibility map
buffer before locking the page(s) we're operating on, and need to
unlock-pin-and-relock.  But for that to happen, VACUUM's got to buzz
through and mark the page all-visible just around the time that we
examine the bit.  And I have a tough time imagining that happening very
often.  You have to have a page that has been modified since the last
VACUUM, but not too recently (otherwise it's not actually all-visible)
and then VACUUM has to get there and process it at almost the same
moment that someone decides to make another modification.  It's hard
even to think about an artificial test case that would hit that
situation with any regularity.

 Hmm, now that I look at it, I think this test is backwards too.  I
 think you might be right that it wouldn't hurt anything to go ahead
 and set it, but I'm inclined to leave it in for now.

 Okay.  Consider expanding the comment to note that this is out of conservatism
 rather than known necessity.

Done.

 Probably not worth mentioning, but there remains one space instead of one tab
 after visibilitymap_clear in this line:
  *              visibilitymap_clear      - clear a bit in the visibility map

Gee, I'm not sure whether there's a one true way of doing that.  I
know we have a no-spaces-before-tabs rule but I'm not sure what the
guidelines are for indenting elsewhere in the line.

 FWIW, I just do C-s TAB in emacs and then scan for places where the boundaries
 of the highlighted regions fail to line up vertically.

vim.

  This fills RelationGetBufferForTuple()'s needs, but the name doesn't seem 
  to
   fit the task too well.  This function doesn't check the pin or that the 
   buffer
   applies to the correct relation.  Consider naming it something like
  `visibilitymap_buffer_covers_block'.

 I think that this may be related to the fact that visibilitymap_pin()
 doesn't do what you might expect.  But 

Re: [HACKERS] crash-safe visibility map, take five

2011-06-18 Thread Noah Misch
On Sat, Jun 18, 2011 at 10:04:52PM -0400, Robert Haas wrote:
 On Sat, Jun 18, 2011 at 9:37 AM, Noah Misch n...@leadboat.com wrote:
   Indeed.  If we conflicted on the xmin of the most-recent all-visible tuple
   when setting the bit, all-visible on the standby would become a superset of
   all-visible on the master, and we could cease to ignore the bits.  This is an
   excellent trade-off for masters that do UPDATE and DELETE, because they're
   already conflicting (or using avoidance measures) on similar xids at VACUUM
   time.  (Some INSERT-only masters will disfavor the trade-off, but we could
   placate them with a GUC.  On the other hand, hot_standby_feedback works and
   costs an INSERT-only master so little.)
 
 I hadn't really considered the possibility that making more things
 conflict on the standby server could work out to a win.  I think that
 would need some careful testing, and I don't want to get tied up in
 that for purposes of this patch.  There is no law against a WAL format
 change, so we can always do it later if someone wants to do the
 legwork.

Fair enough.

  I ran one repetition of my benchmark from last report, and the amount of WAL
   has not changed.  Given that, I think the previous runs remain valid.
 
 As far as I can see, the upshot of those benchmarks was that more WAL
 was generated, but the time to execute vacuum didn't really change.  I
 think that's going to be pretty typical.  In your test, the additional
 WAL amounted to about 0.6% of the amount of data VACUUM is dirtying,
 which I think is pretty much in the noise, assuming wal_buffers and
 checkpoint_segments are tuned to suitable values.

I think that's about right.  The timings were too unstable to conclude much,
so I've focused on the WAL volume, which isn't worrisome.

 The other thing to worry about from a performance view is whether the
 push-ups required to relocate the clear-the-vm-bits logic to within
 the critical sections is going to have a significant impact on
 ordinary insert/update/delete operations.  I was a bit unsure at first
 about Heikki's contention that it wouldn't matter, but after going
 through the exercise of writing the code I think I see now why he
 wasn't concerned.

I agree; that part of the cost looks unmeasurable.

 The only possibly-noticeable overhead comes from
 the possibility that we might decide not to pin the visibility map
 buffer before locking the page(s) we're operating on, and need to
 unlock-pin-and-relock.  But for that to happen, VACUUM's got to buzz
 through and mark the page all-visible just around the time that we
 examine the bit.  And I have a tough time imagining that happening very
 often.  You have to have a page that has been modified since the last
 VACUUM, but not too recently (otherwise it's not actually all-visible)
 and then VACUUM has to get there and process it at almost the same
 moment that someone decides to make another modification.  It's hard
 even to think about an artificial test case that would hit that
 situation with any regularity.

I can't see it happening very often, either.


This version of the patch has some bogus hunks in int.c, int8.c, and heap.c
reverting a few of your other recent commits.  Looks good otherwise.  I've
marked the patch Ready for Committer.

Thanks,
nm



Re: [HACKERS] [v9.2] DROP Reworks Part.0 - 'missing_ok' support of get_object_address

2011-06-18 Thread Robert Haas
On Tue, Jun 14, 2011 at 8:06 AM, Kohei KaiGai kai...@kaigai.gr.jp wrote:
 The attached patch is a preparation to rework implementation of DROP statement
 using a common code. That intends to apply get_object_address() to resolve 
 name
 of objects to be removed, and eventually minimizes the number of places to put
 permission checks.

 Its first step is an enhancement of get_object_address; to accept 'missing_ok'
 argument to handle cases when IF EXISTS clause is supplied.
 If 'missing_ok' was true and the supplied name was not found, the patched
 get_object_address() returns an ObjectAddress with InvalidOid as objectId.
 If 'missing_ok' was false, its behavior is not changed.

Let's consistently make missing_ok the last argument to all of the
functions to which we're adding it.

I don't think it's a good idea for get_relation_by_qualified_name() to
be second-guessing the error message that RangeVarGetRelid() feels
like throwing.

I think that attempting to fetch the column foo.bar when foo doesn't
exist should be an error even if missing_ok is true.  I believe that's
consistent with what we do elsewhere.  (If it *were* necessary to open
the relation without failing if it's not there, you could use
try_relation_openrv instead of doing as you've done here.)

There is certainly a more compact way of writing the logic in
get_object_address_typeobj.  Also, I think that function should be
called get_object_address_type(); the obj on the end seems
redundant.

Apart from those comments this looks pretty sensible.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Identifying no-op length coercions

2011-06-18 Thread Robert Haas
On Sat, Jun 11, 2011 at 5:13 PM, Noah Misch n...@leadboat.com wrote:
 Sounds good.  Updated patch attached, incorporating your ideas.  Before 
 applying
 it, run this command to update the uninvolved pg_proc.h DATA entries:
  perl -pi -e 's/PGUID(\s+\d+){4}/$& 0/' src/include/catalog/pg_proc.h

This doesn't quite apply any more.  I think the pgindent run broke it slightly.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] crash-safe visibility map, take five

2011-06-18 Thread Robert Haas
On Sat, Jun 18, 2011 at 10:41 PM, Noah Misch n...@leadboat.com wrote:
 This version of the patch has some bogus hunks in int.c, int8.c, and heap.c
 reverting a few of your other recent commits.

Woops, good catch.  Corrected version attached, in case anyone else
wants to take a look at it.

 Looks good otherwise.  I've
 marked the patch Ready for Committer.

Thanks.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


visibility-map-v7.patch
Description: Binary data



Re: [HACKERS] patch: Allow \dd to show constraint comments

2011-06-18 Thread Robert Haas
On Sat, Jun 18, 2011 at 1:43 PM, Josh Kupershmidt schmi...@gmail.com wrote:
 Regarding the data type of objnamespace, how about defining a new
 data type such as 'regschema' and casting objnamespace to this type?
 With such a data type, users could reference the schema name as a string,
 and OID joins would also still be available.

 Are you suggesting we leave the structure of pg_comments unchanged,
 but introduce a new 'regschema' type so that if users want to easily
 display the schema name of an object, they can just do:

  SELECT objnamespace::regschema, ...
    FROM pg_comments WHERE ... ;

 That seems reasonable to me.

In the past I think the feeling has been that reg* types are primarily
useful for objects with schema-qualified names.  Translating a table
OID into a table name is actually sort of non-trivial, at least if you
want to schema-qualify its name when and only when necessary.  The use
case seems quite a bit weaker for schemas, since you can easily pull
the right value from pg_namespace with a subquery if you are so
inclined.
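
For instance, something along these lines would do (a sketch, assuming the
proposed pg_comments view with an objnamespace column):

    SELECT c.*,
           (SELECT nspname FROM pg_namespace n WHERE n.oid = c.objnamespace) AS schema_name
      FROM pg_comments c;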

It is perhaps worth noting that the patch now under discussion on this
thread does something quite a lot different than what the subject
would suggest.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Identifying no-op length coercions

2011-06-18 Thread Noah Misch
On Sat, Jun 18, 2011 at 10:57:13PM -0400, Robert Haas wrote:
 On Sat, Jun 11, 2011 at 5:13 PM, Noah Misch n...@leadboat.com wrote:
   Sounds good.  Updated patch attached, incorporating your ideas.  Before 
   applying
   it, run this command to update the uninvolved pg_proc.h DATA entries:
    perl -pi -e 's/PGUID(\s+\d+){4}/$& 0/' src/include/catalog/pg_proc.h
 
 This doesn't quite apply any more.  I think the pgindent run broke it 
 slightly.

Hmm, I just get two one-line offsets when applying it to current master.  Note
that you need to run the perl invocation before applying the patch.  Could you
provide full output of your `patch' invocation, along with any reject files?

Thanks,
nm



Re: [HACKERS] plpgsql performance - SearchCatCache issue

2011-06-18 Thread Robert Haas
On Sat, Jun 18, 2011 at 9:21 AM, Pavel Stehule pavel.steh...@gmail.com wrote:
 Is this profile expected?

I've certainly seen profiles before where the catcache overhead was
significant.  I don't think that I've seen SearchCatCache() quite this
high on any of the profiling I've done, but then again I don't tend to
profile the same things you do, so maybe that's not surprising.  I
think the interesting question is probably "where are all those calls
coming from?" and "can we optimize any of them away?" rather than "how
do we make SearchCatCache() run faster?".  I would be willing to bet
money that the latter is largely an exercise in futility.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Identifying no-op length coercions

2011-06-18 Thread Robert Haas
On Sat, Jun 18, 2011 at 11:06 PM, Noah Misch n...@leadboat.com wrote:
 On Sat, Jun 18, 2011 at 10:57:13PM -0400, Robert Haas wrote:
 On Sat, Jun 11, 2011 at 5:13 PM, Noah Misch n...@leadboat.com wrote:
  Sounds good.  Updated patch attached, incorporating your ideas.  Before
  applying it, run this command to update the uninvolved pg_proc.h DATA entries:
    perl -pi -e 's/PGUID(\s+\d+){4}/$& 0/' src/include/catalog/pg_proc.h

 This doesn't quite apply any more.  I think the pgindent run broke it 
 slightly.

 Hmm, I just get two one-line offsets when applying it to current master.  Note
 that you need to run the perl invocation before applying the patch.  Could you
 provide full output of your `patch' invocation, along with any reject files?

Ah, crap.  You're right.  I didn't follow your directions for how to
apply the patch.  Sorry.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] patch for 9.2: enhanced errors

2011-06-18 Thread Steve Singer

On 11-06-18 06:36 PM, Steve Singer wrote:

On 11-06-08 04:14 PM, Pavel Stehule wrote:

Here is my review of this patch

Submission Review:

The patch applies cleanly against master
The patch does not include any documentation updates (see note below 
to update config.sgml)
The patch does not include any unit tests. At a minimum it should add 
a few tests with verbosity set to verbose




On second thought, tests might not work. Is verbose mode, where line
numbers from the C source file are also included, the only way to get the
constraint details? If so, this won't work for the regression tests.
Having the diff comparison fail every time someone makes an unrelated
change to a source file isn't what we want.



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Identifying no-op length coercions

2011-06-18 Thread Robert Haas
On Sat, Jun 18, 2011 at 11:12 PM, Robert Haas robertmh...@gmail.com wrote:
 On Sat, Jun 18, 2011 at 11:06 PM, Noah Misch n...@leadboat.com wrote:
 On Sat, Jun 18, 2011 at 10:57:13PM -0400, Robert Haas wrote:
 On Sat, Jun 11, 2011 at 5:13 PM, Noah Misch n...@leadboat.com wrote:
  Sounds good.  Updated patch attached, incorporating your ideas.  Before
  applying it, run this command to update the uninvolved pg_proc.h DATA entries:
    perl -pi -e 's/PGUID(\s+\d+){4}/$& 0/' src/include/catalog/pg_proc.h

 This doesn't quite apply any more.  I think the pgindent run broke it 
 slightly.

 Hmm, I just get two one-line offsets when applying it to current master.  Note
 that you need to run the perl invocation before applying the patch.  Could you
 provide full output of your `patch' invocation, along with any reject files?

 Ah, crap.  You're right.  I didn't follow your directions for how to
 apply the patch.  Sorry.

I think you need to update the comment in simplify_function() to say
that we have three strategies, rather than two.  I think it would also
be appropriate to add a longish comment just before the test that
calls protransform, explaining what the charter of that function is
and why the mechanism exists.

Documentation issues aside, I see very little not to like about this.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] creating CHECK constraints as NOT VALID

2011-06-18 Thread Robert Haas
On Sat, Jun 18, 2011 at 2:57 AM, Jaime Casanova ja...@2ndquadrant.com wrote:
 looks good to me... at least it compiles, and function as i would expect...
 tomorrow i will read the code more carefully and look at the docs, but
 probably this is just fine to be commited...

 I think that this paragraph is confusing, but not being an natural
 english speaker i will let others give their opinion here:
 !       If NOT VALID is not specified,
 !       the command will only succeed if all columns using the
 !       domain satisfy the new constraint.
 !       The constraint is going to be enforced on new data inserted into
 !       columns using the domain in all cases, regardless of <literal>NOT VALID</>.
 !       <literal>NOT VALID</> is only accepted for <literal>CHECK</>
 !       constraints.

I agree.  That's pretty contorted.  How about something like this:

When a new constraint is added to a domain, all columns using that
domain will be checked against the newly added constraint.  These
checks can be suppressed by adding the new constraint using the NOT
VALID option; the constraint can later be made valid using ALTER
DOMAIN .. VALIDATE CONSTRAINT.  Newly inserted or updated rows are
always checked against all constraints, even those marked NOT VALID.

In lieu of:

 <command>ALTER DOMAIN</command> conforms to the <acronym>SQL</acronym>
 standard,
!except for the <literal>OWNER</>, <literal>SET SCHEMA</> and
!<literal>VALIDATE CONSTRAINT</> variants,
!as well as the <literal>NOT VALID</> clause of the
!<literal>ADD CONSTRAINT</> variant,
 which are <productname>PostgreSQL</productname> extensions.
</para>

I suggest: ALTER DOMAIN conforms to the SQL standard, except for the
OWNER, SET SCHEMA, and VALIDATE CONSTRAINT variants, which are
PostgreSQL extensions.  The NOT VALID clause of the ADD CONSTRAINT
variant is also a PostgreSQL extension.
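
To make the proposed wording concrete, here is a minimal sketch of the
behavior it describes, as the patch implements it (domain and constraint
names are made up for illustration):

    CREATE DOMAIN posint AS integer;

    -- With NOT VALID, existing data in columns using the domain is not
    -- checked when the constraint is added...
    ALTER DOMAIN posint
      ADD CONSTRAINT posint_positive CHECK (VALUE > 0) NOT VALID;

    -- ...but newly inserted or updated rows are checked immediately.
    -- Later, verify the existing data and mark the constraint valid:
    ALTER DOMAIN posint VALIDATE CONSTRAINT posint_positive;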

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SSI tuning points

2011-06-18 Thread Robert Haas
On Fri, Jun 17, 2011 at 5:50 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
 The attached patch addresses one of the open non-blockers for beta3.

 These are tuning points which emerged in testing.  The first is more
 likely to be helpful.  The second may be very important in a few
 types of transaction mixes, but I threw in a lot of weasel words and
 qualifiers because someone could easily try this to bring down the
 transaction retry rate, but suffer a net loss in throughput because
 less efficient plans could be chosen.  I hope I made that point in a
 reasonable fashion, although I'm certainly open to suggestions for
 better wording.

This is good advice, but I think it could use a bit more wordsmithing.
 How about something like this:

When the system is forced to combine multiple page-level predicate
locks into a single relation-level predicate lock because the
predicate lock table is short of memory, an increase in the rate of
serialization failures may occur.  You can avoid this by increasing
max_pred_locks_per_transaction.

A sequential scan will always necessitate a table-level predicate
lock.  This can result in an increased rate of serialization failures.
 It may be helpful to encourage the use of index scans by reducing
random_page_cost or increasing cpu_tuple_cost.  Be sure to etc.
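
As a rough illustration of the knobs mentioned above (the values are
examples only, not recommendations; note that max_pred_locks_per_transaction
cannot be changed with SET and is normally raised in postgresql.conf,
followed by a server restart):

    SHOW max_pred_locks_per_transaction;   -- 64 by default
    SET random_page_cost = 2.0;            -- down from the default of 4.0
    SET cpu_tuple_cost = 0.03;             -- up from the default of 0.01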

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers