Re: [HACKERS] Maximum number of WAL files in the pg_xlog directory

2015-03-31 Thread Jehan-Guillaume de Rorthais
On Tue, 3 Mar 2015 11:15:13 -0500
Bruce Momjian br...@momjian.us wrote:

 On Tue, Oct 14, 2014 at 01:21:53PM -0400, Bruce Momjian wrote:
  On Tue, Oct 14, 2014 at 09:20:22AM -0700, Jeff Janes wrote:
   On Mon, Oct 13, 2014 at 12:11 PM, Bruce Momjian br...@momjian.us wrote:
   
   
   I looked into this, and came up with more questions.  Why is
   checkpoint_completion_target involved in the total number of WAL
   segments?  If checkpoint_completion_target is 0.5 (the default), the
   calculation is:
   
           (2 + 0.5) * checkpoint_segments + 1
   
   while if it is 0.9, it is:
   
           (2 + 0.9) * checkpoint_segments + 1
   
   Is this trying to estimate how many WAL files are going to be created
   during the checkpoint?  If so, wouldn't it be (1 +
   checkpoint_completion_target), not "2 +".  My logic is you have the
   old WAL files being checkpointed (that's the 1), plus you have new WAL
   files being created during the checkpoint, which would be
   checkpoint_completion_target * checkpoint_segments, plus one for the
   current WAL file.
   
   
   WAL is not eligible to be recycled until there have been 2 successful
   checkpoints.
   
   So at the end of a checkpoint, you have 1 cycle of WAL which has just
   become eligible for recycling,
   1 cycle of WAL which is now expendable but which is kept anyway, and
   checkpoint_completion_target worth of WAL which has occurred while the
   checkpoint was occurring and is still needed for crash recovery.
  
  OK, so based on this analysis, what is the right calculation?  This?
  
  (1 + checkpoint_completion_target) * checkpoint_segments + 1 +
  max(wal_keep_segments, checkpoint_segments)
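
As a worked example of that proposed formula (a sketch, assuming the old
defaults of checkpoint_segments = 3, checkpoint_completion_target = 0.5 and
wal_keep_segments = 0):

    -- (1 + 0.5) * 3 + 1 + max(0, 3) = 8.5, i.e. roughly 8-9 WAL segment files
    SELECT (1 + 0.5) * 3 + 1 + greatest(0, 3) AS max_wal_files;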
 
 Now that we have min_wal_size and max_wal_size in 9.5, I don't see any
 value to figuring out the proper formula for backpatching.

I guess it worth backpatching the documentation as 9.4 - 9.1 will be supported
for somes the next 4 years

-- 
Jehan-Guillaume de Rorthais
Dalibo
http://www.dalibo.com




Re: [HACKERS] vac truncation scan problems

2015-03-31 Thread Jeff Janes
On Mon, Mar 30, 2015 at 8:54 PM, Jeff Janes jeff.ja...@gmail.com wrote:

 After freeing up the rows at the end of the table so it is eligible for
 truncation, then running a manual VACUUM to actually release the space, I
 kept running into the problem that the truncation scan was consistently
 suspended and then aborted due to a conflicting lock requested/held.

 But the perversity is that that conflicting lock request can only be
 coming, as far as I can tell, from the autovac process.  I'm not sure how
 this happens, as I thought autovac never waited for locks but only obtained
 one if it were instantaneously available, but it is the only
 explanation I can think of.

 I'm not seeing this in 9.4, but I'm not sure how deterministic it is so
 maybe that is just luck.



It looks like the culprit is this:

commit 0d831389749a3baaced7b984205b9894a82444b9
Author: Alvaro Herrera alvhe...@alvh.no-ip.org
Date:   Wed Mar 18 11:52:33 2015 -0300

Rationalize vacuuming options and parameters

I'd guess the autovac nature of the autovac process is getting lost in
there somewhere, but I don't see where.

Cheers,

Jeff


Re: [HACKERS] Maximum number of WAL files in the pg_xlog directory

2015-03-31 Thread Jehan-Guillaume de Rorthais
On Tue, 31 Mar 2015 08:24:15 +0200
Jehan-Guillaume de Rorthais j...@dalibo.com wrote:

 On Tue, 3 Mar 2015 11:15:13 -0500
 Bruce Momjian br...@momjian.us wrote:
 
  On Tue, Oct 14, 2014 at 01:21:53PM -0400, Bruce Momjian wrote:
   On Tue, Oct 14, 2014 at 09:20:22AM -0700, Jeff Janes wrote:
On Mon, Oct 13, 2014 at 12:11 PM, Bruce Momjian br...@momjian.us
wrote:


I looked into this, and came up with more questions.  Why is
checkpoint_completion_target involved in the total number of WAL
segments?  If checkpoint_completion_target is 0.5 (the default), the
calculation is:

        (2 + 0.5) * checkpoint_segments + 1

while if it is 0.9, it is:

        (2 + 0.9) * checkpoint_segments + 1

Is this trying to estimate how many WAL files are going to be
created during the checkpoint?  If so, wouldn't it be (1 +
checkpoint_completion_target), not "2 +".  My logic is you have the
old WAL files being checkpointed (that's the 1), plus you have new WAL
files being created during the checkpoint, which would be
checkpoint_completion_target * checkpoint_segments, plus one for the
current WAL file.


WAL is not eligible to be recycled until there have been 2 successful
checkpoints.

So at the end of a checkpoint, you have 1 cycle of WAL which has just
become eligible for recycling,
1 cycle of WAL which is now expendable but which is kept anyway, and
checkpoint_completion_target worth of WAL which has occurred while the
checkpoint was occurring and is still needed for crash recovery.
   
   OK, so based on this analysis, what is the right calculation?  This?
   
 (1 + checkpoint_completion_target) * checkpoint_segments + 1 +
 max(wal_keep_segments, checkpoint_segments)
  
  Now that we have min_wal_size and max_wal_size in 9.5, I don't see any
  value to figuring out the proper formula for backpatching.
 
 I guess it worth backpatching the documentation as 9.4 - 9.1 will be
 supported for somes the next 4 years

Sorry, lack of caffeine this morning. Fired the mail before correcting
and finishing it:

I guess it worth backpatching the documentation as 9.4 - 9.1 will be supported
for some more years.

I'll give it a try this week.

Regards,
-- 
Jehan-Guillaume de Rorthais
Dalibo
http://www.dalibo.com




Re: [HACKERS] vac truncation scan problems

2015-03-31 Thread Michael Paquier
On Tue, Mar 31, 2015 at 3:42 PM, Jeff Janes jeff.ja...@gmail.com wrote:

 On Mon, Mar 30, 2015 at 8:54 PM, Jeff Janes jeff.ja...@gmail.com wrote:

 After freeing up the rows at the end of the table so it is eligible for
 truncation, then running a manual VACUUM to actually release the space, I
 kept running into the problem that the truncation scan was consistently
 suspended and then aborted due to a conflicting lock requested/held.

 But the perversity is that that conflicting lock request can only be
 coming, as far as I can tell, from the autovac process.  I'm not sure how
 this happens, as I thought autovac never waited for locks but only obtained
 one if it were instantaneously available, but it is the only
 explanation I can think of.

 I'm not seeing this in 9.4, but I'm not sure how deterministic it is so
 maybe that is just luck.



 It looks like the culprit is this:

 commit 0d831389749a3baaced7b984205b9894a82444b9
 Author: Alvaro Herrera alvhe...@alvh.no-ip.org
 Date:   Wed Mar 18 11:52:33 2015 -0300

 Rationalize vacuuming options and parameters

 I'd guess the autovac nature of the autovac process is getting lost in
 there somewhere, but I don't see where.


Hm. I ran a couple of tests and am noticing that a manual VACUUM is not
able to truncate all the pages (it should, no?)... For example with your
test case on REL9_4_STABLE VACUUM VERBOSE reports that all the pages are
truncated:
INFO:  0: pgbench_accounts: truncated 16394 to 0 pages

OK, on HEAD this does not seem to work:
INFO:  0: pgbench_accounts: truncated 16394 to 13554 pages

But if I try as well with 4559167c (0d831389~1) I am getting a similar
result:
INFO:  0: pgbench_accounts: truncated 16394 to 3309 pages

I will try to bisect to the origin of that. This may be related to what you
are seeing.
Regards,
-- 
Michael


Re: [HACKERS] pg_dump / copy bugs with big lines ?

2015-03-31 Thread Ronan Dunklau
On Monday, March 30, 2015 at 18:45:41, Jim Nasby wrote:
 On 3/30/15 5:46 AM, Ronan Dunklau wrote:
  Hello hackers,
  
  I've tried my luck on pgsql-bugs before, with no success, so I report
  these
  problems here.
  
  The documentation mentions the following limits for sizes:
  
  Maximum Field Size   1 GB
  Maximum Row Size     1.6 TB
  
  However, it seems like rows bigger than 1GB can't be COPYed out:
  
  ro=# create table test_text (c1 text, c2 text);
  CREATE TABLE
  ro=# insert into test_text (c1) VALUES (repeat('a', 536870912));
  INSERT 0 1
  ro=# update test_text set c2 = c1;
  UPDATE 1
  
  Then, trying to dump or copy that results in the following error:
  
  ro=# COPY test_text TO '/tmp/test';
  ERROR:  out of memory
  DETAIL: Cannot enlarge string buffer containing 536870913 bytes by
  536870912 more bytes.
  
  In fact, the same thing happens when using a simple SELECT:
  
  ro=# select * from test_text ;
  ERROR:  out of memory
  DETAIL: Cannot enlarge string buffer containing 536870922 bytes by
  536870912 more bytes.
  
  In the case of COPY, the server uses a StringInfo to output the row. The
  problem is, a StringInfo is capped to MaxAllocSize (1GB - 1), but a row
  should be able to hold much more than that.
 
 Yeah, shoving a whole row into one StringInfo is ultimately going to
 limit a row to 1G, which is a far cry from what the docs claim. There's
 also going to be problems with FE/BE communications, because things like
 pq_sendbyte all use StringInfo as a buffer too. So while Postgres can
 store a 1.6TB row, you're going to find a bunch of stuff that doesn't
 work past around 1GB.
 
  So, is this a bug ? Or is there a caveat I would have missed in the
  documentation ?
 
 I suppose that really depends on your point of view. The real question
 is whether we think it's worth fixing, or a good idea to change the
 behavior of StringInfo.
 

 StringInfo uses ints to store length, so it could possibly be changed,
 but then you'd just error out due to MaxAllocSize.
 
 Now perhaps those could both be relaxed, but certainly not to the extent
 that you can shove an entire 1.6TB row into an output buffer.

Another way to look at it would be to work in small chunks. For the first test 
case (rows bigger than 1GB), maybe the copy command could be rewritten to work 
in chunks, flushing the output more often if needed.

For the conversion-related issues, I don't really see any other solution than
extending StringInfo to allow for more than 1GB of data. On the other hand,
those can easily be circumvented by using COPY ... WITH BINARY.
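
As a minimal sketch of that workaround (table name hypothetical; binary COPY
writes raw bytes, skipping the textual bytea conversion that can inflate the
output past the 1GB buffer cap):

    -- text-format COPY runs bytea through its output function, roughly
    -- doubling its size (hex encoding), which can hit the 1GB StringInfo cap:
    COPY big_bytea_table TO '/tmp/big_bytea.txt';
    -- binary COPY avoids that conversion blow-up:
    COPY big_bytea_table TO '/tmp/big_bytea.bin' WITH (FORMAT binary);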

 
 The other issue is that there's a LOT of places in code that blindly
 copy detoasted data around, so while we technically support 1GB toasted
 values you're probably going to be quite unhappy with performance. I'm
 actually surprised you haven't already seen this with 500MB objects.
 
 So long story short, I'm not sure how worthwhile it would be to try and
 fix this. We probably should improve the docs though.
 

I think that having data that can't be output by pg_dump is quite surprising, 
and if this is not fixable, I agree that it should clearly be documented.

 Have you looked at using large objects for what you're doing? (Note that
 those have their own set of challenges and limitations.)

Yes, I have. This particular customer of ours did not mind the performance
penalty of using bytea values as long as they were convenient to use.

 
  We also hit a second issue, this time related to bytea encoding.
 
 There's probably several other places this type of thing could be a
 problem. I'm thinking of conversions in particular.

Yes, that's what the two other test cases I mentioned are about: any conversion
leading to a size greater than 1GB results in an error, even implicit
conversions like doubling backslashes in the output.

-- 
Ronan Dunklau
http://dalibo.com - http://dalibo.org



Re: [HACKERS] pg_rewind tests

2015-03-31 Thread Michael Paquier
On Tue, Mar 31, 2015 at 9:59 AM, Peter Eisentraut pete...@gmx.net wrote:

 There are some small issues with the pg_rewind tests.

 This technique

 check: all
 $(prove_check) :: local
 $(prove_check) :: remote

 for passing arguments to prove does not work with the tools included
 in Perl 5.8.

 While sorting out the portability issues in the TAP framework during the
 9.4 release cycle, we had set 5.8 as the oldest Perl version that is
 supported.  (It's the Perl version in RHEL 5.)  I suggest using
 environment variables instead, unless we want to change that.


That's good to know. And I think that when using environment variables it is
necessary to pass an additional flag to prove_check (see
PG_PROVE_TEST_MODE in the attached patch), because prove_check runs several
commands, starting with a mkdir.


 Moreover,

 if ($test_mode == "local")
 ...
 elsif ($test_mode == "remote")

 don't work, because those are numerical comparisons, not string
 comparisons.  So the remote branch is never actually run.


That's eq I guess.


 Finally, RewindTest.pm should use

 use strict;
 use warnings;

 and the warnings caused by that should be addressed.


Addressing all those things gives more or less the attached patch.

While looking at that I noticed two additional issues:
- In remote mode, the connection string to the promoted standby was
incorrect when running pg_rewind, leading to connection errors
- At least in my environment, a sleep of 1 after the standby promotion was
not sufficient to make the tests work.
Regards,
-- 
Michael
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 7c39d82..4108783 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -323,7 +323,7 @@ endef
 define prove_check
 $(MKDIR_P) tmp_check/log
 $(MAKE) -C $(top_builddir) DESTDIR='$(CURDIR)'/tmp_check/install install >'$(CURDIR)'/tmp_check/log/install.log 2>&1
-cd $(srcdir) && TESTDIR='$(CURDIR)' PATH="$(CURDIR)/tmp_check/install$(bindir):$$PATH" $(call add_to_path,$(ld_library_path_var),$(CURDIR)/tmp_check/install$(libdir)) top_builddir='$(CURDIR)/$(top_builddir)' PGPORT='6$(DEF_PGPORT)' $(PROVE) $(PG_PROVE_FLAGS) $(PROVE_FLAGS) t/*.pl
+cd $(srcdir) && TESTDIR='$(CURDIR)' PG_PROVE_TEST_MODE='$(PG_PROVE_TEST_MODE)' PATH="$(CURDIR)/tmp_check/install$(bindir):$$PATH" $(call add_to_path,$(ld_library_path_var),$(CURDIR)/tmp_check/install$(libdir)) top_builddir='$(CURDIR)/$(top_builddir)' PGPORT='6$(DEF_PGPORT)' $(PROVE) $(PG_PROVE_FLAGS) $(PROVE_FLAGS) t/*.pl
 endef
 
 else
diff --git a/src/bin/pg_rewind/Makefile b/src/bin/pg_rewind/Makefile
index efd4988..2bda545 100644
--- a/src/bin/pg_rewind/Makefile
+++ b/src/bin/pg_rewind/Makefile
@@ -47,6 +47,8 @@ clean distclean maintainer-clean:
 	rm -f pg_rewind$(X) $(OBJS) xlogreader.c
 	rm -rf tmp_check regress_log
 
-check: all
-	$(prove_check) :: local
-	$(prove_check) :: remote
+check:
+	$(eval PG_PROVE_TEST_MODE = local)
+	$(prove_check)
+	$(eval PG_PROVE_TEST_MODE = remote)
+	$(prove_check)
diff --git a/src/bin/pg_rewind/RewindTest.pm b/src/bin/pg_rewind/RewindTest.pm
index 0f8f4ca..f748bd1 100644
--- a/src/bin/pg_rewind/RewindTest.pm
+++ b/src/bin/pg_rewind/RewindTest.pm
@@ -29,6 +29,9 @@ package RewindTest;
 # master and standby servers. The data directories are also available
 # in paths $test_master_datadir and $test_standby_datadir
 
+use strict;
+use warnings;
+
 use TestLib;
 use Test::More;
 
@@ -58,8 +61,8 @@ our @EXPORT = qw(
 
 # Adjust these paths for your environment
 my $testroot = "./tmp_check";
-$test_master_datadir="$testroot/data_master";
-$test_standby_datadir="$testroot/data_standby";
+our $test_master_datadir="$testroot/data_master";
+our $test_standby_datadir="$testroot/data_standby";
 
 mkdir $testroot;
 
@@ -73,8 +76,8 @@ my $port_standby=$port_master + 1;
 my $log_path;
 my $tempdir_short;
 
-$connstr_master="port=$port_master";
-$connstr_standby="port=$port_standby";
+my $connstr_master="port=$port_master";
+my $connstr_standby="port=$port_standby";
 
 $ENV{PGDATABASE} = "postgres";
 
@@ -127,7 +130,8 @@ sub append_to_file
 
 sub init_rewind_test
 {
-	($testname, $test_mode) = @_;
+	my $testname = shift;
+	my $test_mode = shift;
 
 	$log_path="regress_log/pg_rewind_log_${testname}_${test_mode}";
 
@@ -195,11 +199,13 @@ sub promote_standby
 	# Now promote slave and insert some new data on master, this will put
 	# the master out-of-sync with the standby.
 	system_or_bail("pg_ctl -w -D $test_standby_datadir promote >>$log_path 2>&1");
-	sleep 1;
+	sleep 2;
 }
 
 sub run_pg_rewind
 {
+	my $test_mode = shift;
+
 	# Stop the master and be ready to perform the rewind
 	system_or_bail("pg_ctl -w -D $test_master_datadir stop -m fast >>$log_path 2>&1");
 
@@ -212,7 +218,7 @@ sub run_pg_rewind
 	# overwritten during the rewind.
 	copy("$test_master_datadir/postgresql.conf", "$testroot/master-postgresql.conf.tmp");
 	# Now run pg_rewind
-	if ($test_mode == "local")
+	if ($test_mode eq "local")
 	{
 		# Do rewind using a local pgdata as source
 		# Stop the master and be 

Re: [HACKERS] Bug #10432 failed to re-find parent key in index

2015-03-31 Thread Heikki Linnakangas

On 03/30/2015 09:57 PM, Peter Geoghegan wrote:

On Mon, Mar 30, 2015 at 7:50 PM, Joshua D. Drake j...@commandprompt.com wrote:

We have a database that has run into this problem. The version is 9.1.15 on
Linux. I note in this thread:

http://www.postgresql.org/message-id/cam-w4hp34ppwegtcwjbznwhq0cmu-lxna62vjku8qrtwlob...@mail.gmail.com

It notes that things appear to be fixed in 9.4 but have not been back-patched.
What is the current status?


I believe that Heikki said he'd backpatch that when 9.4 was considered
very stable. I don't think that we've reached that level of confidence
in the invasive B-Tree bugfixes that went into 9.4 yet.


I have no intention to backpatch the changes. Too big, too invasive. 
Perhaps we could consider it after a year or two, once 9.4 is indeed 
very stable, but at that point you have to wonder if it's really worth 
the trouble anymore. If someone runs into that issue frequently, he
probably should just upgrade to 9.4.


- Heikki




[HACKERS] Tables cannot have INSTEAD OF triggers

2015-03-31 Thread Aliouii Ali

Hi all,
Back in 2011
(http://www.postgresql.org/message-id/1305138588.8811.3.ca...@vanquo.pezone.net),
a question much like this one was asked. The answer was:

I think they're very useful on views, but I
couldn't think of a use-case for having them on tables. ISTM that
anything an INSTEAD OF trigger on a table could do, could equally well
be done in a BEFORE trigger.

Not really; there is a use case: partitioned tables. Instead of defining a
BEFORE trigger on the master table that returns NULL, as the docs recommend,
it would be good to have an INSTEAD OF trigger that returns NEW, so that
queries like INSERT/UPDATE ... RETURNING are handy and gain some performance;
otherwise we have to do an INSERT plus a SELECT to get the same job done.
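
A minimal sketch of the problem (hypothetical names; the BEFORE trigger
pattern is the one the partitioning documentation recommends today):

    CREATE TABLE master (id int);
    CREATE TABLE child () INHERITS (master);

    CREATE FUNCTION master_insert() RETURNS trigger AS $$
    BEGIN
        INSERT INTO child VALUES (NEW.*);
        RETURN NULL;  -- row was redirected, so master reports no insert
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER redirect BEFORE INSERT ON master
        FOR EACH ROW EXECUTE PROCEDURE master_insert();

    -- Reports INSERT 0 0, and RETURNING yields no row, even though the
    -- row is stored in child; an INSTEAD OF trigger returning NEW could
    -- fix this.
    INSERT INTO master VALUES (1) RETURNING id;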

And about:
If we did support INSTEAD OF triggers on tables, we would also need to
decide how they interact with BEFORE/AFTER triggers - do they fire in
between them, or do they replace them? I could see arguments for
either behaviour.

We already have all three trigger types defined on views; the same behavior
could apply to tables.
The doc at http://www.postgresql.org/docs/9.4/static/trigger-definition.html
mentions that if one trigger returns a non-NULL value, the next one fires;
otherwise the remaining triggers are skipped.

Some people on the PostgreSQL IRC channel say that it is easy to implement :),
so it would be good to have it in the next minor or major release.




Re: [HACKERS] proposal: row_to_array function

2015-03-31 Thread Pavel Stehule
2015-03-29 21:20 GMT+02:00 Pavel Stehule pavel.steh...@gmail.com:



 2015-03-29 20:27 GMT+02:00 Tom Lane t...@sss.pgh.pa.us:

 Pavel Stehule pavel.steh...@gmail.com writes:
  here is rebased patch.
  It contains both patches - row_to_array function and foreach array
 support.

 While I don't have a problem with hstore_to_array, I don't think that
 row_to_array is a very good idea; it's basically encouraging people to
 throw away SQL datatypes altogether and imagine that everything is text.


 This complements the array API - we have row_to_json, will probably
 have row_to_jsonb and row_to_hstore, so row_to_array is relatively logical.
 Casting to text is not fast, but on the other hand, working with text
 arrays is fast.

 I know that casting to text is a problem, but if you iterate over a
 record's fields, you have to find a common shared type because plans are
 shared - and text arrays can be a simple solution.

 Now, with current possibilities, I'd use a full SQL expression "SELECT key,
 value FROM each(hstore(ROW))" or "FOREACH ARRAY hstore_to_matrix(hstore(ROW))".

 row_to_array(ROW) can remove the hstore overhead.

 Any other solution based on PL/Perl or PL/Python is slower due to PL engine
 startup and the same transformation to some form of structured text.
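
To make the comparison concrete, a sketch of the hstore detour versus the
proposed function (assumes the hstore extension is installed; row_to_array
and its [key1, value1, ...] output format come from the attached patch):

    CREATE TABLE sample (a text, b int);
    INSERT INTO sample VALUES ('x', 1);

    -- today: iterate over a record's fields via hstore
    SELECT key, value FROM sample t, each(hstore(t));

    -- with the patch: the same data as one text array, in the
    -- hstore_to_array-compatible [key1, value1, key2, value2] form
    SELECT row_to_array(t) FROM sample t;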




 They've already bought into that concept if they are using hstore or
 json, so smashing elements of those containers to text is not a problem.
 But that doesn't make this version a good thing.

 (In any case, those who insist can get there through row_to_json, no?)

 Also, could we please *not* mix up these two very independent features?
 foreach array as implemented here may or may not be a good thing, but
 it should get its own discussion.


 ok, I'll send two patches.


The attachments contain the previous patch separated into two independent patches.

Regards

Pavel






 regards, tom lane



commit 0b432fd3a42132d287c4395b13f8a25ab294
Author: Pavel Stehule pavel.steh...@gooddata.com
Date:   Tue Mar 31 14:43:27 2015 +0200

row_to_array

diff --git a/src/backend/utils/adt/rowtypes.c b/src/backend/utils/adt/rowtypes.c
index a65e18d..1a64d8e 100644
--- a/src/backend/utils/adt/rowtypes.c
+++ b/src/backend/utils/adt/rowtypes.c
@@ -21,6 +21,7 @@
 #include "catalog/pg_type.h"
 #include "funcapi.h"
 #include "libpq/pqformat.h"
+#include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
 #include "utils/typcache.h"
@@ -1810,3 +1811,90 @@ btrecordimagecmp(PG_FUNCTION_ARGS)
 {
 	PG_RETURN_INT32(record_image_cmp(fcinfo));
 }
+
+/*
+ * transform any record to array in format [key1, value1, key2, value2 [, ...]]
+ *
+ * This format is compatible with hstore_to_array function
+ */
+Datum
+row_to_array(PG_FUNCTION_ARGS)
+{
+	HeapTupleHeader		rec = PG_GETARG_HEAPTUPLEHEADER(0);
+	TupleDesc		rectupdesc;
+	Oid			rectuptyp;
+	int32			rectuptypmod;
+	HeapTupleData		rectuple;
+	int	ncolumns;
+	Datum 		*recvalues;
+	bool  		*recnulls;
+	ArrayBuildState		*builder;
+	int	i;
+
+	/* Extract type info from the tuple itself */
+	rectuptyp = HeapTupleHeaderGetTypeId(rec);
+	rectuptypmod = HeapTupleHeaderGetTypMod(rec);
+	rectupdesc = lookup_rowtype_tupdesc(rectuptyp, rectuptypmod);
+	ncolumns = rectupdesc->natts;
+
+	/* Build a temporary HeapTuple control structure */
+	rectuple.t_len = HeapTupleHeaderGetDatumLength(rec);
+	ItemPointerSetInvalid(&(rectuple.t_self));
+	rectuple.t_tableOid = InvalidOid;
+	rectuple.t_data = rec;
+
+	recvalues = (Datum *) palloc(ncolumns * sizeof(Datum));
+	recnulls = (bool *) palloc(ncolumns * sizeof(bool));
+
+	/* Break down the tuple into fields */
+	heap_deform_tuple(&rectuple, rectupdesc, recvalues, recnulls);
+
+	/* Prepare target array */
+	builder = initArrayResult(TEXTOID, CurrentMemoryContext, true);
+
+	for (i = 0; i < ncolumns; i++)
+	{
+		Oid	columntyp = rectupdesc->attrs[i]->atttypid;
+		Datum		value;
+		bool		isnull;
+
+		/* Ignore dropped columns */
+		if (rectupdesc->attrs[i]->attisdropped)
+			continue;
+
+		builder = accumArrayResult(builder,
+			CStringGetTextDatum(NameStr(rectupdesc->attrs[i]->attname)),
+			false,
+			TEXTOID,
+			CurrentMemoryContext);
+
+		if (!recnulls[i])
+		{
+			char *outstr;
+			bool		typIsVarlena;
+			Oid		typoutput;
+			FmgrInfo		proc;
+
+			getTypeOutputInfo(columntyp, &typoutput, &typIsVarlena);
+			fmgr_info_cxt(typoutput, &proc, CurrentMemoryContext);
+			outstr = OutputFunctionCall(&proc, recvalues[i]);
+
+			value = CStringGetTextDatum(outstr);
+			isnull = false;
+		}
+		else
+		{
+			value = (Datum) 0;
+			isnull = true;
+		}
+
+		builder = accumArrayResult(builder,
+		value, isnull,
+		TEXTOID,
+		CurrentMemoryContext);
+	}
+
+	ReleaseTupleDesc(rectupdesc);
+
+	PG_RETURN_DATUM(makeArrayResult(builder, CurrentMemoryContext));
+}
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index a96d369..1b4c578 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -891,6 +891,8 @@ DATA(insert OID = 376 ( 

Re: [HACKERS] Parallel Seq Scan

2015-03-31 Thread Amit Kapila
On Mon, Mar 30, 2015 at 8:11 PM, Robert Haas robertmh...@gmail.com wrote:

 On Fri, Mar 27, 2015 at 2:34 AM, Amit Kapila amit.kapil...@gmail.com
wrote:
  The reason for this problem is that the above tab-completion executes
  query [1], which contains a subplan for the Funnel node, and currently
  we don't have the capability (enough infrastructure) to support execution
  of subplans by parallel workers.  Here one might wonder why we
  have chosen a parallel plan (Funnel node) for such a case, and the
  reason is that subplans are attached after plan generation
  (SS_finalize_plan()), and if we want to discard such a plan, it will be
  much more costly, tedious and not worth the effort, as we have to
  eventually make such a plan work.
 
  Here we have two choices to proceed: the first is to support execution
  of subplans by parallel workers, and the second is to execute/scan locally
  for a Funnel node having a subplan (don't launch workers).

 It looks to me like this is an InitPlan, not a subplan.  There
 shouldn't be any problem with a Funnel node having an InitPlan; it
 looks to me like all of the InitPlan stuff is handled by common code
 within the executor (grep for initPlan), so it ought to work here the
 same as it does for anything else.  What I suspect is failing
 (although you aren't being very clear about it here) is the passing
 down of the parameters set by the InitPlan to the workers.

It is failing because we are not passing the InitPlan itself (an InitPlan is
nothing but a list of SubPlans) and I tried to describe in a previous
mail [1] what we need to do to achieve that, but in short, it is not
difficult to pass down the required parameters (like plan->initPlan or
plannedstmt->subplans); rather, the main missing part is the handling
of such parameters on the worker side (mainly we need to provide support
for all plan nodes which can be passed as part of an InitPlan in readfuncs.c).
I am not against supporting InitPlans on the worker side, but just wanted to
say that, if possible, why not leave that for the first version.


[1]
I have tried to evaluate what it would take us to support execution
of subplans by parallel workers.  We need to pass the subplans
stored in the Funnel node (initPlan) and the corresponding subplans stored
in the planned statement (subplans), as the subplans stored in the Funnel
node reference the subplans in the planned statement.  Next, readfuncs.c
(the functions to read different types of nodes) currently doesn't support
reading any type of plan node, so we need to add support for reading all
kinds of plan nodes (as a subplan can contain any type of plan node), and
similarly, to execute any type of plan node we might need more work
(infrastructure).

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] Doubt about AccessExclusiveLock in ALTER TABLE .. SET ( .. );

2015-03-31 Thread Fabrízio de Royes Mello
On Mon, Mar 30, 2015 at 8:14 PM, Fabrízio de Royes Mello 
fabriziome...@gmail.com wrote:



 On Mon, Mar 30, 2015 at 7:41 PM, Jim Nasby jim.na...@bluetreble.com
wrote:
 
  On 3/27/15 2:23 PM, Fabrízio de Royes Mello wrote:
 
  Hi all,
 
  I'm tweaking some autovacuum settings in a table with high write usage
  but with ALTER TABLE .. SET ( .. ) this task was impossible, so I did a
  catalog update  (pg_class) to change reloptions.
 
  Maybe it's a stupid doubt, but why do we need to get an AccessExclusiveLock
  on the relation to set reloptions if we just touch pg_class tuples
  (RowExclusiveLock)?
 
 
  For a very long time catalog access was not MVCC safe. I think that's
been changed, so at this point it may be OK to relax the lock, at least in
the case of autovac settings. There may well be other settings in there
where it would not be safe.
 

 Hmmm... There is a comment in AlterTableGetLockLevel:

  /*
   * Rel options are more complex than first appears. Options
   * are set here for tables, views and indexes; for historical
   * reasons these can all be used with ALTER TABLE, so we can't
   * decide between them using the basic grammar.
   *
   * XXX Look in detail at each option to determine lock level,
   * e.g. cmd_lockmode = GetRelOptionsLockLevel((List *) cmd->def);
   */
  case AT_SetRelOptions:      /* Uses MVCC in getIndexes() and
                               * getTables() */
  case AT_ResetRelOptions:    /* Uses MVCC in getIndexes() and
                               * getTables() */
      cmd_lockmode = AccessExclusiveLock;
      break;


 Maybe it's time to implement GetRelOptionsLockLevel to relax the lock
 for autovacuum settings (AccessShareLock). For other settings we continue
 using AccessExclusiveLock.

 Are there any objections to implementing it that way?


Attached is a very WIP patch to reduce the lock level when setting autovacuum
reloptions in an ALTER TABLE .. SET ( .. ) statement.

I confess the implementation is ugly; maybe we should add a new item to the
reloptions constants in src/backend/access/common/reloptions.c and a proper
function to get the lock level for a reloption. Thoughts?
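
For reference, a sketch of the kind of statement this targets (table name
hypothetical); today it takes AccessExclusiveLock even though only pg_class
reloptions change:

    ALTER TABLE busy_table
        SET (autovacuum_vacuum_scale_factor = 0.01,
             autovacuum_vacuum_cost_delay = 5);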

Regards,

--
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
 Timbira: http://www.timbira.com.br
 Blog: http://fabriziomello.github.io
 Linkedin: http://br.linkedin.com/in/fabriziomello
 Twitter: http://twitter.com/fabriziomello
 Github: http://github.com/fabriziomello
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 32e19c5..0be658f 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -402,6 +402,7 @@ static void ATExecSetTableSpace(Oid tableOid, Oid newTableSpace, LOCKMODE lockmo
 static void ATExecSetRelOptions(Relation rel, List *defList,
 	AlterTableType operation,
 	LOCKMODE lockmode);
+static LOCKMODE GetRelOptionsLockLevel(List *defList);
 static void ATExecEnableDisableTrigger(Relation rel, char *trigname,
 	   char fires_when, bool skip_system, LOCKMODE lockmode);
 static void ATExecEnableDisableRule(Relation rel, char *rulename,
@@ -2783,6 +2784,39 @@ AlterTableInternal(Oid relid, List *cmds, bool recurse)
 }
 
 /*
+ * GetRelOptionsLockLevel
+ *
+ * Return AccessShareLock if all reloptions are related to autovacuum,
+ * else return AccessExclusiveLock.
+ *
+ */
+LOCKMODE
+GetRelOptionsLockLevel(List *defList)
+{
+	LOCKMODE	lockmode;
+	ListCell	*cell;
+
+	if (defList == NIL)
+		return NoLock;
+
+	foreach(cell, defList)
+	{
+		DefElem	*def = (DefElem *) lfirst(cell);
+
+		/* relax lock for autovacuum reloptions */
+		if (pg_strncasecmp("autovacuum_", def->defname, 11) == 0)
+			lockmode = AccessShareLock;
+		else
+		{
+			lockmode = AccessExclusiveLock;
+			break;
+		}
+	}
+
+	return lockmode;
+}
+
+/*
  * AlterTableGetLockLevel
  *
  * Sets the overall lock level required for the supplied list of subcommands.
@@ -3028,7 +3062,7 @@ AlterTableGetLockLevel(List *cmds)
 		 * getTables() */
 			case AT_ResetRelOptions:	/* Uses MVCC in getIndexes() and
 		 * getTables() */
-cmd_lockmode = AccessExclusiveLock;
+cmd_lockmode = GetRelOptionsLockLevel((List *) cmd->def);
 break;
 
 			default:			/* oops */



Re: [HACKERS] Providing catalog view to pg_hba.conf file - Patch submission

2015-03-31 Thread Haribabu Kommi
On Mon, Mar 30, 2015 at 4:34 AM, Pavel Stehule pavel.steh...@gmail.com wrote:
 Hi

 I checked this patch. I like the functionality and behavior.

Thanks for the review.

Here I attached updated patch with the following changes.

1. Addition of two new keyword columns

keyword_databases - The database name can be all, replication,
sameuser, samerole and samegroup.
keyword_roles - The role can be all and a group name prefixed with +.

The rest of the database and role names are treated as normal database
and role names.

2. Added code changes to identify quoted names.

3. Updated the documentation.

4. Corrected the regression test.
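
A hedged sketch of how the view might be queried (the view name and the
columns other than keyword_databases/keyword_roles are assumptions here,
not necessarily the patch's exact schema):

    SELECT type, keyword_databases, databases, keyword_roles, roles
      FROM pg_hba_settings;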


Regards,
Hari Babu
Fujitsu Australia


Catalog_view_to_HBA_settings_patch_V9.patch
Description: Binary data



Re: [HACKERS] WIP: SCRAM authentication

2015-03-31 Thread Heikki Linnakangas

On 03/30/2015 06:46 PM, Stephen Frost wrote:

* Heikki Linnakangas (hlinn...@iki.fi) wrote:

* With CREATE USER PASSWORD 'foo', which hashes/verifiers should
be generated by default? We currently have a boolean
password_encryption setting for that. Needs to be a list.


This generally sounds good to me but we definitely need to have that
list of hashes to be used.  The MIT KDC for Kerberos (and I believe all
the other Kerberos implementations) have a similar setting for what will
be stored and what will be allowed for hashing and encryption options.
It's very important that we allow users to tweak this list, as we will
want to encourage users to migrate off of the existing md5 storage
mechanism and on to the SCRAM based one eventually.

Unfortunately, the first major release with this will certainly need to
default to including md5 as we can't have a password update or change
break clients right off the bat.  What I think would be fantastic would
be a warning, perhaps in the first release or maybe the second, which
deprecates md5 as an auth method and is thrown when a password is set
which includes storing an md5-based password.  I'm sure there will be
plenty of discussion about that in the future.


Yeah. And even if clients are updated, and the server is upgraded, you 
still cannot use SCRAM until all the passwords have been changed and the 
SCRAM verifiers for them generated. Unless we go with the scheme I 
mentioned earlier, and use the MD5 hash of the password as the 
plaintext password to SCRAM.



One additional item is that we need to have a way to prefer SCRAM-based
auth while allowing a fall-back to md5 if the client doesn't support it.
This might have to be driven by the client side explicitly saying "I
support SCRAM" from the start to avoid breaking existing clients.


I'll start a separate thread on this. It's an interesting feature on its 
own. As well as an option in libpq to refuse plaintext authentication 
even if the server asks for it.



* Per the SCRAM specification, the client sends the username in the
handshake. But in the FE/BE protocol, we've already sent it in the
startup packet. In the patch, libpq always sends an empty username
in the SCRAM exchange, and the username from the startup packet is
what matters. We could also require it to be the same, but SCRAM requires
the username to be UTF-8 encoded, while in PostgreSQL the username
can be in any encoding. That is a source of annoyance in itself, as
it's not well-defined in PostgreSQL which encoding to use when
sending a username to the server. But I don't want to try fixing
that in this patch, so it seems easiest to just require the username
to be empty.


I don't like having it be empty..  I'm not looking at the spec right at
the moment, but have you confirmed that the username being empty during
the SCRAM discussion doesn't reduce the effectiveness of the
authentication method overall in some way?


Yes.


Is it ever used in
generation of the authentication verifier, etc?  One way to address the
risk which you bring up about the different encodings might be to simply
discourage using non-UTF8-compliant encodings by throwing a warning or
refusing to support SCRAM in cases where the role wouldn't be allowed by
SCRAM (eg: in CREATE ROLE or ALTER ROLE when the SCRAM auth verifier
storage is being handled).  Another option might be to define a way to
convert from "whatever" to "UTF8 something" for the purposes of the
SCRAM auth method.


Presumably the username used in the SCRAM exchange would have to match 
the username sent in the startup packet. Otherwise things get weird. If 
an empty string is a problem (there actually seems to be some language 
in the spec to forbid or at least discourage using an empty string as 
username), we could also specify some other constant that must be used, 
to mean "same as in startup packet".


- Heikki




Re: [HACKERS] INSERT ... ON CONFLICT IGNORE (and UPDATE) 3.0

2015-03-31 Thread Heikki Linnakangas

On 03/30/2015 07:20 PM, Peter Geoghegan wrote:



* I think we should decouple the insertion and wal logging more. I think
   the promise tuple insertion should be different from the final
   insertion of the actual tuple. For one it seems cleaner to me, for
   another it will avoid the ugliness around logical decoding. I think
   also that the separation will make it more realistic to use something
   like this for a COPY variant that doesn't raise unique violations and
   such.

Your COPY argument swung this for me. I'm looking into the implementation.


I'm pretty sceptical of that. ISTM you'll need to modify the page 
twice for each insertion, first to insert the promise tuple, and then to 
turn the promise tuple into a real tuple. And WAL-log both updates. 
That's going to hurt performance.


To recover COPY from unique violations, you can just do the same as 
INSERT ON CONFLICT IGNORE does, and super-delete the inserted tuple on 
conflict. To recover from any random error, you'll need to abort the 
(sub)transaction anyway, and I don't see how it helps to separate the 
insertion of the promise tuple and the finalization of the insertion.


- Heikki





Re: [HACKERS] Cleanup double semicolons at the end of source lines

2015-03-31 Thread Heikki Linnakangas

On 03/31/2015 05:25 AM, Petr Jelinek wrote:

Hi,

While reading the code I noticed a couple of double semicolons at the ends
of lines, so I searched for all of them and replaced them with single
ones. The resulting diff is attached.


Thanks, fixed. I also backpatched this, to avoid spurious merge 
conflicts in the future if the surrounding code needs to be modified.


- Heikki





Re: [HACKERS] libpq's multi-threaded SSL callback handling is busted

2015-03-31 Thread Peter Eisentraut
On 2/12/15 7:28 AM, Jan Urbański wrote:
 * If there's already callbacks set: Remember that fact and don't
   overwrite. In the next major version: warn.

 So yeah, that was my initial approach - check if callbacks are set, don't 
 do
 the dance if they are. It felt like a crutch, though, and racy at that. 
 There's
 no atomic way to test-and-set those callbacks. The window for raciness is
 small, though.

 If you do that check during library initialization instead of every
 connection it shouldn't be racy - if that part is run in a multithreaded
 fashion you're doing something crazy.

 Yes, that's true. The problem is that there's no real libpq initialisation
 function. The docs say that:

 If your application initializes libssl and/or libcrypto libraries and libpq 
 is
 built with SSL support, you should call PQinitOpenSSL

 So most apps will just not bother. The moment you know you'll need SSL is 
 only
 when you get an 'S' message from the server...
 
 For the sake of discussion, here's a patch to prevent stomping on
 previously-set callbacks, racy as it looks.
 
 FWIW, it does fix the Python deadlock and doesn't cause the PHP segfault...

I don't think this patch would actually fix the problem that was
described after the original bug report
(http://www.postgresql.org/message-id/5436991b.5020...@vmware.com),
namely that another thread acquires a lock while the libpq callbacks are
set and then cannot release the lock if libpq has been shut down in the
meantime.

The only way to fix that is to never unset the callbacks.  But we don't
want that or can't do that for other reasons.

I think the only way out is to declare that if there are multiple
threads and other threads might be using OpenSSL not through libpq, then
the callbacks need to be managed outside of libpq.

In environments like PHP or Python this would require some coordination
work across modules somehow, but I don't see a way around that.





Re: [HACKERS] How about to have relnamespace and relrole?

2015-03-31 Thread Kyotaro HORIGUCHI
Hello,

At Tue, 31 Mar 2015 16:48:18 -0400, Tom Lane t...@sss.pgh.pa.us wrote in 
26969.1427834...@sss.pgh.pa.us
 Hmm.  We can ignore pg_attribute and pg_pltemplate, which don't have OIDs
 and thus aren't candidates anyway.  And we can ignore the ones
 corresponding to the already-existing regFOO types.  That leaves
 
pg_am  | amname
pg_authid  | rolname  (*)
pg_collation   | collname
pg_constraint  | conname
pg_conversion  | conname
pg_database| datname
pg_event_trigger   | evtname
pg_extension   | extname
pg_foreign_data_wrapper| fdwname
pg_foreign_server  | srvname
pg_language| lanname
pg_namespace   | nspname  (*)
pg_opclass | opcname
pg_opfamily| opfname
pg_policy  | polname
pg_rewrite | rulename
pg_tablespace  | spcname
pg_trigger | tgname
pg_ts_parser   | prsname
pg_ts_template | tmplname
 
 of which the proposed patch covers the two starred ones.
 
 OTOH, looking at this list, there are already numerous cases where
 the object identity is more than just a name (eg, collations have
 schema-qualified names, opfamilies are not only schema-qualified
 but are per-index-AM as well, triggers and constraints are named
 per-table, etc).  So it's clear that we've already been applying
 a usefulness criterion rather than just "does it have a
 multi-component name" when choosing which objects to provide
 regFOO types for.

As I wrote before, the criterion I used for choosing these
was how often the oid is referred to. The attached Excel
file shows the comprehensive list of reference counts.

Each cell is marked 'x' if the catalog of the row refers to the
oid of the catalog of the column. So the numbers in row 2
represent how many times the oid of the catalog of the column is
referred to from other catalogs. Adding up all catalogs having a tuple
oid and sorting by that number, they are ordered as below.

(The upper-case 'X' in the HASOID column indicates that the view
 exposes the oid of the underlying table, identifying the rows in
 the view.)

(-) in the list below marks the regFOO types that already exist, and the
second column is the number of other catalogs that refer to the oid.

   pg_authid | 33 | rolname  (*)
+   pg_class  | 27 | relname  (-)
   pg_namespace  | 20 | nspname  (*)
+   pg_type   | 15 | typname  (-)
+   pg_proc   | 13 | proname  (-)
+   pg_operator   |  5 | oprname  (-)
   pg_database   |  5 | datname
   pg_am |  4 | amname
   pg_collation  |  4 | collname
   pg_tablespace |  4 | spcname
   pg_foreign_server |  3 | srvname
   pg_opfamily   |  3 | opfname
   pg_opclass|  2 | opcname
   pg_constraint |  1 | conname
   pg_foreign_data_wrapper   |  1 | fdwname
   pg_language   |  1 | lanname
+   pg_largeobject_metadata   |  1 | -
   pg_policy |  1 | polname
   pg_rewrite|  1 | rulename
+   pg_ts_config  |  1 | cfgname  (-)
+   pg_ts_dict|  1 | dictname (-)
   pg_ts_parser  |  1 | prsname
   pg_ts_template|  1 | tmplname
+   pg_user_mapping   |  1 | -
+   pg_aggregate  |  0 | - 

None of amop, amproc, attrdef, cast, conversion, default_acl,
enum, event_trigger, extension, group, roles, shadow, trigger,
or user are referred to from any other catalog.

 In view of that, you could certainly argue that if someone's bothered
 to make a patch to add a new regFOO type, it's useful enough.  I don't
 want to end up with thirtysomething of them, but we don't seem to be
 trending in that direction.

pg_authid and pg_namespace obviously win the race but haven't yet
got the prize. pg_database through pg_tablespace are in a gray zone,
but I think they need a highly significant reason to have regFOO types
of their own.

On the other hand, regconfig (pg_ts_config) and
regdictionary (pg_ts_dict) have far less significance, but I don't
assert they should be removed since they are there now.
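
To illustrate what the two starred entries would buy (a sketch; regnamespace
and regrole are the types proposed in this thread, shown by analogy with the
existing regclass, and the role name is hypothetical):

    -- existing convenience:
    SELECT 'pg_class'::regclass::oid;

    -- the same convenience for schemas and roles with the proposed types:
    SELECT relname FROM pg_class
     WHERE relnamespace = 'pg_catalog'::regnamespace;
    SELECT datname FROM pg_database
     WHERE datdba = 'postgres'::regrole;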

 Or in short, 

[HACKERS] pg_restore -t should match views, matviews, and foreign tables

2015-03-31 Thread Craig Ringer
Following on from this -bugs post:

http://www.postgresql.org/message-id/camsr+ygj50tvtvk4dbp66gajeoc0kap6kxfehaom+neqmhv...@mail.gmail.com

this patch adds support for views, foreign tables, and materialised views
to the pg_restore -t flag.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
From 0319a7ecbab5c1e85e300d93f674087786be144a Mon Sep 17 00:00:00 2001
From: Craig Ringer cr...@2ndquadrant.com
Date: Wed, 1 Apr 2015 10:46:29 +0800
Subject: [PATCH] pg_restore -t should select views, matviews, and foreign
 tables

Currently pg_restore's '-t' option selects only tables, not other
relations. It should be able to match anything that behaves like
a relation in the relation namespace, anything that's interchangeable
with a table, including:

* Normal relations
* Views
* Materialized views
* Foreign tables

Sequences are not matched. They're in the relation namespace, but
only as an implementation detail. A separate option to selectively
dump sequences should be added so that there's no BC break if
they later become non-class objects.

Indexes are also not matched; again, a different option should be
added for them.

TOAST tables aren't matched, they're implementation detail.

See:
  http://www.postgresql.org/message-id/camsr+ygj50tvtvk4dbp66gajeoc0kap6kxfehaom+neqmhv...@mail.gmail.com
---
 src/bin/pg_dump/pg_backup_archiver.c | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index ca427de..75c8515 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -2663,7 +2663,10 @@ _tocEntryRequired(TocEntry *te, teSection curSection, RestoreOptions *ropt)
 	if (ropt->selTypes)
 	{
 		if (strcmp(te->desc, "TABLE") == 0 ||
-			strcmp(te->desc, "TABLE DATA") == 0)
+			strcmp(te->desc, "TABLE DATA") == 0 ||
+			strcmp(te->desc, "VIEW") == 0 ||
+			strcmp(te->desc, "FOREIGN TABLE") == 0 ||
+			strcmp(te->desc, "MATERIALIZED VIEW") == 0)
 		{
 			if (!ropt->selTable)
 				return 0;
-- 
2.1.0




Re: [HACKERS] Fwd: SSPI authentication ASC_REQ_REPLAY_DETECT flag

2015-03-31 Thread Stephen Frost
Jacobo,

* Jacobo Vazquez (jvazq...@denodo.com) wrote:
Am I misunderstanding something, or is this the expected behavior? Doesn't
 this mean a replay attack risk? I think that if SSL is not used by the
 connection, a malicious user could capture the authentication packet, which
 contains the client service ticket, and then reuse it.

It's not entirely clear to me what you're getting at here, but Kerberos
service tickets are *intended* to be re-used up until they are invalid
due to their lifetime limit.  That's why they have a lifetime.  If you
don't want them to be reused, make their lifetime very short, but you'll
end up creating a huge additional load on your KDC that way for very
little gain..

Note that this is entirely independent of a replay attack risk, which is
addressed by the resource server checking if the timestamp in the
authenticator being provided is the same as the last one (it should be
denied if it is).  Further, the timestamp in the authenticator has to be
within 5 minutes or it'll also be denied.

Thanks,

Stephen




Re: [HACKERS] vac truncation scan problems

2015-03-31 Thread Michael Paquier
On Wed, Apr 1, 2015 at 2:18 AM, Jeff Janes jeff.ja...@gmail.com wrote:

 On Tue, Mar 31, 2015 at 1:28 AM, Kyotaro HORIGUCHI 
 horiguchi.kyot...@lab.ntt.co.jp wrote:

 Hi, this is a bug in commit 0d831389749a3baaced7b984205b9894a82444b9.

 It allows vacuum freeze to be skipped and inversely lets regular
 vacuum wait for the lock. The attached patch fixes it.


 In table_recheck_autovac, vacuum options are determined as follows:

    tab->at_vacoptions = VACOPT_SKIPTOAST |
        (dovacuum ? VACOPT_VACUUM : 0) |
        (doanalyze ? VACOPT_ANALYZE : 0) |
 !      (wraparound ? VACOPT_NOWAIT : 0);

 The line prefixed by '!' looks inverted.


 Thanks, it is obvious once you see it!


Nice catch, Horiguchi-san.
-- 
Michael


Re: [HACKERS] [COMMITTERS] pgsql: Centralize definition of integer limits.

2015-03-31 Thread Andres Freund
Hi,

On 2015-03-30 21:50:09 +0200, Andres Freund wrote:
 I'm too fried from the redeye back from pgconf nyc to do anything
 complicated, but it seems quite possible to define int64/uint64 based
 the stdint.h types if available. And generally a good idea too. I guess
 I'll try that tomorrow; unless Andrew beats me to it.

It's possible to do that, but it's not as trivial as I'd hoped. For one
we'd need to include stdint.h in some places we don't today
(postgres_ext.h), for another we'd need some ugliness to determine the
correct printf modifier for int64_t (can't use PRId64 etc afaics).

I'm tempted to just prefix our limits with PG_ and define them
unconditionally, including appropriate casts to our types.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] printing table in asciidoc with psql

2015-03-31 Thread Bruce Momjian
On Wed, Mar 25, 2015 at 09:12:41AM -0400, Bruce Momjian wrote:
 On Wed, Mar 25, 2015 at 09:37:08PM +0900, Michael Paquier wrote:
  On Wed, Mar 25, 2015 at 4:59 PM, Bruce Momjian br...@momjian.us wrote:
   On Wed, Mar 25, 2015 at 02:18:58PM +0900, Michael Paquier wrote:
[options="header",cols="l,l",frame="none"]
|====
|5 2.2+^.^ |4 2.2+^.^
|2 2.2+^.^ |3 2.2+^.^
|====
  
   Hm. This is still incorrect. You should remove options=header here
   or the first tuple is treated as a header in the case
   non-expanded/tuple-only. Your patch removes correctly the header for
   the expanded/tuple-only case though.
   Regards,
  
   OK, fixed.  Thanks for the testing.  Patch attached.  New output:
  
  This time things look good from my side. I have played with this patch
  for some time, testing some crazy scenarios, and I have not found problems.
  That's cool stuff, thanks!
 
 Wow, thanks.  I never would have gotten here without your help.

Slightly updated patch attached and applied.  I moved asciidoc after
HTML in the list, rather than at the end.  Thanks for everyone's hard
work on this.
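
For anyone wanting to try the new format, a quick sketch of usage (psql
session; file name hypothetical):

    postgres=# \pset format asciidoc
    Output format is asciidoc.
    postgres=# \o /tmp/table.adoc
    postgres=# SELECT 1 AS x, 'a' AS y;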

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +
diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
new file mode 100644
index a33e460..1f29615
*** a/doc/src/sgml/ref/psql-ref.sgml
--- b/doc/src/sgml/ref/psql-ref.sgml
*** lo_import 152801
*** 2090,2096 ****
    <para>
    Sets the output format to one of <literal>unaligned</literal>,
    <literal>aligned</literal>, <literal>wrapped</literal>,
!   <literal>html</literal>,
    <literal>latex</literal> (uses <literal>tabular</literal>),
    <literal>latex-longtable</literal>, or
    <literal>troff-ms</literal>.
--- 2090,2096 ----
    <para>
    Sets the output format to one of <literal>unaligned</literal>,
    <literal>aligned</literal>, <literal>wrapped</literal>,
!   <literal>html</literal>, <literal>asciidoc</literal>,
    <literal>latex</literal> (uses <literal>tabular</literal>),
    <literal>latex-longtable</literal>, or
    <literal>troff-ms</literal>.
*** lo_import 152801
*** 2119,2125 ****
    </para>
  
    <para>
!   The <literal>html</>, <literal>latex</>,
    <literal>latex-longtable</literal>, and <literal>troff-ms</>
    formats put out tables that are intended to
    be included in documents using the respective mark-up
--- 2119,2125 ----
    </para>
  
    <para>
!   The <literal>html</>, <literal>asciidoc</>, <literal>latex</>,
    <literal>latex-longtable</literal>, and <literal>troff-ms</>
    formats put out tables that are intended to
    be included in documents using the respective mark-up
diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c
new file mode 100644
index e64c033..916f1c6
*** a/src/bin/psql/command.c
--- b/src/bin/psql/command.c
*** _align2string(enum printFormat in)
*** 2249,2254 ****
--- 2249,2257 ----
  		case PRINT_HTML:
  			return "html";
  			break;
+ 		case PRINT_ASCIIDOC:
+ 			return "asciidoc";
+ 			break;
  		case PRINT_LATEX:
  			return "latex";
  			break;
*** do_pset(const char *param, const char *v
*** 2325,2330 ****
--- 2328,2335 ----
  			popt->topt.format = PRINT_WRAPPED;
  		else if (pg_strncasecmp("html", value, vallen) == 0)
  			popt->topt.format = PRINT_HTML;
+ 		else if (pg_strncasecmp("asciidoc", value, vallen) == 0)
+ 			popt->topt.format = PRINT_ASCIIDOC;
  		else if (pg_strncasecmp("latex", value, vallen) == 0)
  			popt->topt.format = PRINT_LATEX;
  		else if (pg_strncasecmp("latex-longtable", value, vallen) == 0)
*** do_pset(const char *param, const char *v
*** 2333,2339 ****
  			popt->topt.format = PRINT_TROFF_MS;
  		else
  		{
! 			psql_error("\\pset: allowed formats are unaligned, aligned, wrapped, html, latex, troff-ms\n");
  			return false;
  		}
  
--- 2338,2344 ----
  			popt->topt.format = PRINT_TROFF_MS;
  		else
  		{
! 			psql_error("\\pset: allowed formats are unaligned, aligned, wrapped, html, asciidoc, latex, troff-ms\n");
  			return false;
  		}
  
diff --git a/src/bin/psql/help.c b/src/bin/psql/help.c
new file mode 100644
index 2da444b..f58f5e5
*** a/src/bin/psql/help.c
--- b/src/bin/psql/help.c
*** helpVariables(unsigned short int pager)
*** 351,357 ****
  	fprintf(output, _("  expanded (or x)    toggle expanded output\n"));
  	fprintf(output, _("  fieldsep           field separator for unaligned output (default '|')\n"));
  	fprintf(output, _("  fieldsep_zero      set field separator in unaligned mode to zero\n"));
! 	fprintf(output, _("  format             set output format [unaligned, aligned, wrapped, html, latex, ..]\n"));
  	fprintf(output, _("  footer             enable or disable display of the table footer [on, off]\n"));
  	fprintf(output, _("  linestyle          set the border line drawing style 

Re: [HACKERS] Change of pg_ctl's default shutdown method

2015-03-31 Thread Bruce Momjian
On Fri, Mar 20, 2015 at 07:00:59PM -0400, Robert Haas wrote:
 On Fri, Mar 20, 2015 at 6:19 PM, Bruce Momjian br...@momjian.us wrote:
  I have not re-ordered the shutdown method options because I am trying to
  keep the list logical, from least to most severe, so smart is still
  listed first.  It is odd that the default is the middle option, but I
  don't see any other idea on improving that.
 
 I don't really think it's a problem.

Applied.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] INSERT ... ON CONFLICT IGNORE (and UPDATE) 3.0

2015-03-31 Thread Peter Geoghegan
On Tue, Mar 31, 2015 at 2:26 PM, Peter Geoghegan p...@heroku.com wrote:
 Andres' wish to do things that way is at least partially motivated by
 having logical decoding just work.

I should add that there appears to be some need to terminate the loop
of speculative token waiting. By that I mean that since we're not
looking at the proc array to get a speculative token from
HeapTupleSatisfiesDirty() now, there is a livelock hazard. That goes
away when the speculative inserter cleans up after itself, as Andres
proposed. It would also go away if any speculative waiter cleaned up
after the inserter, which you suggested (that would be kind of
invasive to places like _bt_doinsert(), though). Finally, it would
also work if HeapTupleSatisfiesDirty() tested if the token was still
held directly, before reporting a speculative token, by for example
attempting to briefly acquire a ShareLock on the token (but that would
mean that the extra lock acquisition would be required unless and
until someone updated that originally-speculative tuple, in doing so
finally changing its t_ctid).

I think that we definitely have to do something like this, in any
case. Maybe just having SpeculativeTokenWait deal with the cleanup is
cleanest, if we're not going to have inserters clean up after
themselves immediately per Andres' suggestion.
-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bug fix for missing years in make_date()

2015-03-31 Thread Adam Brightwell

 Good point.  Next patch attached.


  /*
- * Note: we'll reject zero or negative year values.  Perhaps negatives
- * should be allowed to represent BC years?
+ * Note: Non-positive years are taken to be BCE.
  */

Previously, zero was rejected; what does it do now? I'm sure it represents
0 AD/CE; however, is that important enough to note, given that it was not
allowed previously?

-Adam

-- 
Adam Brightwell - adam.brightw...@crunchydatasolutions.com
Database Engineer - www.crunchydatasolutions.com


Re: [HACKERS] Streaming replication

2015-03-31 Thread Greg Stark
The key word you're misunderstanding is "filled". It means it doesn't wait
for the 16MB file to be completely filled with records, i.e. what would
happen in the file-shipping form of replication.


Re: [HACKERS] Bug fix for missing years in make_date()

2015-03-31 Thread David Fetter
On Tue, Mar 31, 2015 at 10:34:45AM -0400, Adam Brightwell wrote:
 
  Good point.  Next patch attached.
 
 
   /*
 - * Note: we'll reject zero or negative year values.  Perhaps negatives
 - * should be allowed to represent BC years?
 + * Note: Non-positive years are taken to be BCE.
   */
 
 Previously, zero was rejected; what does it do now? I'm sure it represents
 0 AD/CE; however, is that important enough to note, given that it was not
 allowed previously?

Now, it's supposed to take 0 as 1 BCE, -1 as 2 BCE, etc.  There should
probably be tests for that. The issue here is that zero was
popularized a very long time after the beginning of the Common Era.

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Doubt about AccessExclusiveLock in ALTER TABLE .. SET ( .. );

2015-03-31 Thread Noah Misch
On Tue, Mar 31, 2015 at 01:17:03PM -0400, Robert Haas wrote:
 On Tue, Mar 31, 2015 at 9:11 AM, Fabrízio de Royes Mello
 fabriziome...@gmail.com wrote:
  Attached a very WIP patch to reduce lock level when setting autovacuum
  reloptions in ALTER TABLE .. SET ( .. ) statement.
 
 I think the first thing we need to here is analyze all of the options
 and determine what the appropriate lock level is for each, and why.

Agreed.  Fabrízio, see this message for the discussion that led to the code
comment you found (search for relopt_gen):

  
http://www.postgresql.org/message-id/20140321034556.ga3927...@tornado.leadboat.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] vac truncation scan problems

2015-03-31 Thread Michael Paquier
On Wed, Apr 1, 2015 at 2:26 AM, Jeff Janes jeff.ja...@gmail.com wrote:

 Did it tell you why?  If it surrendered the lock to a competing process,
 it should report that as previous INFO messages. If it doesn't give one of
 those, then it probably just thinks there are some tuples it can't remove
 yet somewhere.  What did it give earlier up in the verbose output, for the
 number of removed and nonremovable tuples?


I just had an extra look at that, and I just got trapped by a0f5954a, which
bumped max_wal_size from 128MB to 1GB. Sorry for the noise.
-- 
Michael


Re: [HACKERS] [COMMITTERS] pgsql: Centralize definition of integer limits.

2015-03-31 Thread Andres Freund
On 2015-03-31 12:10:48 -0400, Tom Lane wrote:
 Andres Freund and...@anarazel.de writes:
  On 2015-03-30 21:50:09 +0200, Andres Freund wrote:
  I'm too fried from the redeye back from pgconf nyc to do anything
  complicated, but it seems quite possible to define int64/uint64 based
  the stdint.h types if available. And generally a good idea too. I guess
  I'll try that tomorrow; unless Andrew beats me to it.
 
  It's possible to do that, but it's not as trivial as I'd hoped. For one
  we'd need to include stdint.h in some places we don't today
  (postgres_ext.h), for another we'd need some uglyness to determine the
  correct printf modifier for int64_t (can't use PRId64 etc afaics).
 
 Yeah, I thought the printf strings would be the sticking point :-(

I hacked things till it worked, but it seems fragile. Using Werror for
the format string test and adding 'l' to the tested combination works,
but...  At the point where it basically worked the required changes
already amounted to ~150 lines changed (excluding configure). Making
that robust is more than I'm willing to do right now.

I do think it'd generally not be a bad thing to base our types on the
standard types if available.

  I'm tempted to just prefix our limits with PG_ and define them
  unconditionally, including appropriate casts to our types.
 
 I don't have a better idea.

Will push that.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] printing table in asciidoc with psql

2015-03-31 Thread Thom Brown
On 31 March 2015 at 16:35, Bruce Momjian br...@momjian.us wrote:

 On Wed, Mar 25, 2015 at 09:12:41AM -0400, Bruce Momjian wrote:
  On Wed, Mar 25, 2015 at 09:37:08PM +0900, Michael Paquier wrote:
   On Wed, Mar 25, 2015 at 4:59 PM, Bruce Momjian br...@momjian.us
 wrote:
On Wed, Mar 25, 2015 at 02:18:58PM +0900, Michael Paquier wrote:
 [options="header",cols="l,l",frame="none"]
 |====
 |5 2.2+^.^ |4 2.2+^.^
 |2 2.2+^.^ |3 2.2+^.^
 |====
   
Hm. This is still incorrect. You should remove options="header" here
or the first tuple is treated as a header in the
non-expanded/tuple-only case. Your patch correctly removes the header for
the expanded/tuple-only case though.
Regards,
   
OK, fixed.  Thanks for the testing.  Patch attached.  New output:
  
   This time things look good from my side. I have played with this patch
   some time, testing some crazy scenarios and I have not found problems.
   That's cool stuff, thanks!
 
  Wow, thanks.  I never would have gotten here without your help.

 Slightly updated patch attached and applied.  I moved asciidoc after
 HTML in the list, rather than at the end.  Thanks for everyone's hard
 work on this.


I think I done gone broke it:

CREATE TABLE "| 3^.||moo|hello," (stuff int, "|.^hje||" text);

INSERT INTO "| 3^.||moo|hello," VALUES (2,'hello');

Output:

[options="header",cols="l,l",frame="none"]
|====
^l|stuff ^l|\|.^hje\|\|
|2 |hello
|====


(1 row)


This results in:

<table class="tableblock frame-none grid-all spread">
<colgroup>
<col style="width: 50%;">
<col style="width: 50%;">
</colgroup>
<thead>
<tr>
<th class="tableblock halign-center valign-top">stuff</th>
<th class="tableblock halign-center valign-top">|&amp;.^hje||2</th>
</tr>
</thead>
</table>

Using asciidoctor 1.5.2.

-- 
Thom


Re: [HACKERS] [COMMITTERS] pgsql: Centralize definition of integer limits.

2015-03-31 Thread Tom Lane
Andres Freund and...@anarazel.de writes:
 On 2015-03-30 21:50:09 +0200, Andres Freund wrote:
 I'm too fried from the redeye back from pgconf nyc to do anything
 complicated, but it seems quite possible to define int64/uint64 based
 the stdint.h types if available. And generally a good idea too. I guess
 I'll try that tomorrow; unless Andrew beats me to it.

 It's possible to do that, but it's not as trivial as I'd hoped. For one
 we'd need to include stdint.h in some places we don't today
 (postgres_ext.h), for another we'd need some uglyness to determine the
 correct printf modifier for int64_t (can't use PRId64 etc afaics).

Yeah, I thought the printf strings would be the sticking point :-(

 I'm tempted to just prefix our limits with PG_ and define them
 unconditionally, including appropriate casts to our types.

I don't have a better idea.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] double vacuum in initdb

2015-03-31 Thread Bruce Momjian
On Fri, Mar 20, 2015 at 10:59:41PM -0400, Bruce Momjian wrote:
 On Thu, Dec 11, 2014 at 08:35:43PM -0500, Peter Eisentraut wrote:
  On 12/11/14 11:44 AM, Kevin Grittner wrote:
   We want to finish with VACUUM FREEZE without the FULL, unless we
   don't care about missing visibility maps and free space maps.
 
 I have create the attached initdb patch to update this.

Patch applied.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Fwd: SSPI authentication ASC_REQ_REPLAY_DETECT flag

2015-03-31 Thread Jacobo Vazquez
Hi all,

I installed PostgreSQL 9.3 on a Windows Server 2012 and I have
configured it to use SSPI authentication. The client is on a Windows 7
machine and makes the connections via ODBC using a DSN with psqlodbc driver
version 9.03.04.00. Authentication works in this scenario for the user
authenticated in the client machine. I am always using the same user for
connections.

I used Wireshark in the configuration phase to analyze the traffic
between the server and the client. It looks to me that in the
authentication phase, the client always sends the same service ticket to
the postgresql server when a new connection is created; even when I create a
new DSN pointing to the same server, it keeps sending the same service
ticket.

Analyzing the source code, in the file src/backend/libpq/auth.c looks
like the server is not checking if the service ticket is reused:

r = AcceptSecurityContext(&sspicred,
sspictx,
&inbuf,
ASC_REQ_ALLOCATE_MEMORY,
SECURITY_NETWORK_DREP,
&newctx,
&outbuf,
&contextattr,
NULL);

The fourth parameter is not using the ASC_REQ_REPLAY_DETECT flag.

   Am I misunderstanding something, or is this the expected behavior?
Doesn't this mean a replay attack risk? I think that if SSL is not used for
the connection, a malicious user could capture the authentication packet
containing the client's service ticket and then reuse it.
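
For clarity, the call I would have expected looks something like this (an
untested sketch on my side; ASC_REQ_REPLAY_DETECT is the standard SSPI flag
for requesting replay detection, but I have not verified what else the
server side would need to handle):

r = AcceptSecurityContext(&sspicred,
sspictx,
&inbuf,
ASC_REQ_ALLOCATE_MEMORY | ASC_REQ_REPLAY_DETECT,
SECURITY_NETWORK_DREP,
&newctx,
&outbuf,
&contextattr,
NULL);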

Thanks in advance
-- 




-- 
*Jacobo Vázquez Lorenzo*
Product Development
Denodo Technologies
(+34) 981 10 02 00 Phone
jvazq...@denodo.com
www.denodo.com Legal Notice
The message is intended for the addressees only and its contents and any
attached files are strictly confidential.
If you have received it in error, please remove this mail and contact
postmas...@denodo.com.
Thank you.


Re: [HACKERS] Bug fix for missing years in make_date()

2015-03-31 Thread Tom Lane
David Fetter da...@fetter.org writes:
 On Tue, Mar 31, 2015 at 10:34:45AM -0400, Adam Brightwell wrote:
 Previously, zero was rejected, what does it do now? I'm sure it represents
 0 AD/CE, however, is that important enough to note given that it was not
 allowed previously?

 Now, it's supposed to take 0 as 1 BCE, -1 as 2 BCE, etc.  There should
 probably be tests for that.

Surely that is *not* what we want?  I'd expect any user-facing date
function to reject zero and take -1 as 1 BC, etc.  The behavior you
describe is an internal convention, not something we want to expose
to users.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] clang -fsanitize=undefined error in ecpg

2015-03-31 Thread Tom Lane
Peter Eisentraut pete...@gmx.net writes:
 With clang -fsanitize=undefined (clang-3.4), I get the following test failure 
 in ecpg
 (it's the only one in the entire tree):

Hm.  I don't know why you can't reproduce that in the backend, because
when stepping through DecodeDateTime() on the input
select '19990108foobar'::timestamptz;
I definitely see it shifting 1 left 31 places:

Breakpoint 2, DecodeDateTime (field=<value optimized out>, 
ftype=<value optimized out>, nf=2, dtype=0x76fec168, 
tm=0x76fec170, fsec=0x76fec1b4, tzp=0x76fec16c)
at datetime.c:1193
1193			type = DecodeTimezoneAbbrev(i, field[i], &val, &valtz);
(gdb) n
1194			if (type == UNKNOWN_FIELD)
(gdb) p type
$1 = 31
(gdb) s
1195				type = DecodeSpecial(i, field[i], &val);
(gdb) s
DecodeSpecial (field=1, lowtoken=0x76febf89 "foobar", val=0x76febf28)
at datetime.c:3018
3018	{
(gdb) fin
Run till exit from #0  DecodeSpecial (field=1, 
lowtoken=0x76febf89 "foobar", val=0x76febf28) at datetime.c:3031
DecodeDateTime (field=<value optimized out>, ftype=<value optimized out>, 
nf=2, dtype=0x76fec168, tm=0x76fec170, fsec=0x76fec1b4, 
tzp=0x76fec16c) at datetime.c:1196
1196			if (type == IGNORE_DTF)
Value returned is $2 = 31
(gdb) s
1199			tmask = DTK_M(type);
(gdb) p type
$3 = 31
(gdb) s
1200			switch (type)
(gdb) p tmask
$4 = -2147483648

 This patch fixes it:

 -#define DTK_M(t)   (0x01 << (t))
 +#define DTK_M(t)   ((t) == UNKNOWN_FIELD ? 0 : 0x01 << (t))

Don't like that even a little bit.  The intent of the code is perfectly
clear, cf this comment in datetime.h:

 * Field types for time decoding.
 *
 * Can't have more of these than there are bits in an unsigned int
 * since these are turned into bit masks during parsing and decoding.

So I think the correct fix is

-#define DTK_M(t)   (0x01 << (t))
+#define DTK_M(t)   (0x01U << (t))

It looks to me like it doesn't actually matter at the moment, because
anyplace where we apply DTK_M to a value that could be UNKNOWN_FIELD,
we'll immediately after that either return an error or replace the
tmask value with something else.  So the lack of portability of this
construction hasn't mattered.  But we should fix it in a way that won't
create time bombs for future code changes, and producing a zero mask
from a valid field type code would be a time bomb.
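
To see the difference in a standalone program (a minimal sketch for
illustration only, not PostgreSQL source; the constant 31 stands in for
UNKNOWN_FIELD):

#include <stdio.h>

/* undefined behavior for t == 31: shifts a signed 1 into the sign bit */
#define DTK_M_SIGNED(t)		(0x01 << (t))

/* well-defined: the unsigned literal makes the whole shift unsigned */
#define DTK_M_UNSIGNED(t)	(0x01U << (t))

int
main(void)
{
	/* prints 2147483648; evaluating DTK_M_SIGNED(31) here would be UB */
	printf("%u\n", DTK_M_UNSIGNED(31));
	return 0;
}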

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] How about to have relnamespace and relrole?

2015-03-31 Thread Andrew Dunstan


On 03/29/2015 02:55 PM, Tom Lane wrote:

Andrew Dunstan and...@dunslane.net writes:

I have just claimed this as committer in the CF, but on reviewing the
emails it looks like there is disagreement about the need for it at all,
especially from Tom and Robert.
I confess I have often wanted regnamespace, particularly, and
occasionally regrole, simply as a convenience. But I'm not going to
commit it against substantial opposition.
Do we need a vote?

My concern about it is basically that I don't see where we stop.
The existing regFOO alias types are provided for object classes which
have nontrivial naming conventions (schema qualification, overloaded
argument types, etc), so that you can't just do "select ... from
catalog where objectname = 'blah'".  That doesn't apply to namespaces
or roles.  So I'm afraid that once this precedent is established,
there will be demands for regFOO for every object class we have,
and I don't want that much clutter.

It may be that these two cases are so much more useful than any other
conceivable cases that we can do them and stop, but I don't think that
argument has been made convincingly.



Well, here's a list of all the fooname attributes in the catalog, which 
I guess are the prime candidates for regfoo pseudotypes. Besides those 
we already have and the two proposed here, I'm not sure there will be 
huge demand for others - tablespace maybe, trigger doesn't seem very 
practicable, and I could just see suggestions for collation and 
conversion, but those seem pretty marginal, and that seems to be about 
it, to me.


          attrelid           | attname
-----------------------------+----------
 pg_proc                     | proname
 pg_type                     | typname
 pg_attribute                | attname
 pg_class                    | relname
 pg_constraint               | conname
 pg_operator                 | oprname
 pg_opfamily                 | opfname
 pg_opclass                  | opcname
 pg_am                       | amname
 pg_language                 | lanname
 pg_rewrite                  | rulename
 pg_trigger                  | tgname
 pg_event_trigger            | evtname
 pg_namespace                | nspname
 pg_conversion               | conname
 pg_database                 | datname
 pg_tablespace               | spcname
 pg_pltemplate               | tmplname
 pg_authid                   | rolname
 pg_ts_config                | cfgname
 pg_ts_dict                  | dictname
 pg_ts_parser                | prsname
 pg_ts_template              | tmplname
 pg_extension                | extname
 pg_foreign_data_wrapper     | fdwname
 pg_foreign_server           | srvname
 pg_policy                   | polname
 pg_collation                | collname


cheers

andrew



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PATCH: adaptive ndistinct estimator v4

2015-03-31 Thread Tomas Vondra
Hi all,

attached is v4 of the patch implementing adaptive ndistinct estimator.

I've been looking into the strange estimates, mentioned on 2014/12/07:

  values   current   adaptive
  ------------------------------
  106   99 107
  1068 6449190
  1006  38 6449190
  10006327   42441

I suspected this might be some sort of rounding error in the numerical
optimization (looking for 'm' solving the equation from paper), but
turns out that's not the case.

The adaptive estimator is a bit unstable for skewed distributions, that
are not sufficiently smooth. Whenever f[1] or f[2] was 0 (i.e. there
were no values occurring exactly once or twice in the sample), the result
was rather off.

The simple workaround for this was adding a fallback to GEE when f[1] or
f[2] is 0. GEE is another estimator described in the paper, behaving
much better in those cases.
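
For reference, the GEE fallback amounts to roughly the following (a sketch
written from the paper's formula, reusing the parameter convention of
adaptive_estimator() from the patch; the gee_estimator() helper itself is
hypothetical and not part of the patch):

#include <math.h>

static double
gee_estimator(int total_rows, int sample_rows, int *f, int f_max)
{
	double		d = 0;
	int			i;

	/* values seen at least twice in the sample are counted as-is */
	for (i = 2; i <= f_max; i++)
		d += f[i];

	/* singletons are scaled up by sqrt(total rows / sampled rows) */
	d += sqrt((double) total_rows / sample_rows) * f[1];

	return d;
}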

With the current version, I do get this (with statistics_target=10):

  values   current   adaptive
  ------------------------------
  106   99 108
  1068 178
  1006  382083
  10006327   11120

The results do change a bit based on the sample, but these values are a
good example of the values I'm getting.

The other examples (with skewed but smooth distributions) work as good
as before.

-- 
Tomas Vondra                  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c
index d4d1914..2496e83 100644
--- a/src/backend/commands/analyze.c
+++ b/src/backend/commands/analyze.c
@@ -16,6 +16,7 @@
 
 #include math.h
 
+#include access/hash.h
 #include access/multixact.h
 #include access/transam.h
 #include access/tupconvert.h
@@ -110,6 +111,9 @@ static void update_attstats(Oid relid, bool inh,
 static Datum std_fetch_func(VacAttrStatsP stats, int rownum, bool *isNull);
 static Datum ind_fetch_func(VacAttrStatsP stats, int rownum, bool *isNull);
 
+static double adaptive_estimator(int total_rows, int sample_rows,
+								 int *f, int f_max);
+static int hash_comparator(const void *a, const void *b);
 
 /*
  *	analyze_rel() -- analyze one relation
@@ -1908,6 +1912,7 @@ static void compute_scalar_stats(VacAttrStatsP stats,
 	 int samplerows,
 	 double totalrows);
 static int	compare_scalars(const void *a, const void *b, void *arg);
+static int	compare_scalars_simple(const void *a, const void *b, void *arg);
 static int	compare_mcvs(const void *a, const void *b);
 
 
@@ -2026,6 +2031,23 @@ compute_minimal_stats(VacAttrStatsP stats,
  	StdAnalyzeData *mystats = (StdAnalyzeData *) stats->extra_data;
 
 	/*
+	 * The adaptive ndistinct estimator requires counts for all the
+	 * repetition counts - we can't do the sort-based count directly
+	 * (because this handles data types with just = operator), and the
+	 * MCV-based counting seems insufficient. We'll instead compute
+	 * hash values, and sort those. We're using just 32-bit hashes,
+	 * which may result in a few collisions - for 30k rows (sampled
+	 * rows for default_statistics_target=100) there's 1:10 chance of
+	 * a hash collision (assuming all values are distinct). But this
+	 * seems like a small error compared to the other factors involved
+	 * (sampling, ...) or compared to the MCV-based counting.
+	 */
+	uint32	   *hashes = (uint32*)palloc0(samplerows * sizeof(uint32));
+
+	/* number of computed hashes (technically equal to nonnull_cnt) */
+	int			nhashes = 0;
+
+	/*
 	 * We track up to 2*n values for an n-element MCV list; but at least 10
 	 */
 	track_max = 2 * num_mcv;
@@ -2086,6 +2108,36 @@ compute_minimal_stats(VacAttrStatsP stats,
 			total_width += strlen(DatumGetCString(value)) + 1;
 		}
 
+		/* compute the hash value, depending on the data type kind */
+		if (stats->attrtype->typbyval)
+		{
+			/* simple pass-by-value data type, with 'typlen' bytes */
+			hashes[nhashes++]
+				= DatumGetUInt32(hash_any((unsigned char *) &value,
+										  stats->attrtype->typlen));
+		}
+		else if (is_varlena)
+		{
+			/* regular varlena data type */
+			hashes[nhashes++]
+				= DatumGetUInt32(hash_any((unsigned char *) VARDATA_ANY(value),
+										  VARSIZE_ANY_EXHDR(DatumGetPointer(value))));
+		}
+		else if (is_varwidth)
+		{
+			/* pass-by-reference with a variable length (e.g. cstring) */
+			hashes[nhashes++]
+				= DatumGetUInt32(hash_any((unsigned char *) DatumGetCString(value),
+										  strlen(DatumGetCString(value))));
+		}
+		else
+		{
+			/* pass-by-reference with fixed length (e.g. name) */
+			hashes[nhashes++]
+				= DatumGetUInt32(hash_any((unsigned char *) DatumGetCString(value),
+										  stats->attrtype->typlen));
+		}
+
 		/*
 		 * See if the value matches anything we're already tracking.
 		 */
@@ -2141,6 +2193,43 @@ compute_minimal_stats(VacAttrStatsP stats,
 		int			nmultiple,
 	summultiple;
 
+		/* 

Re: [HACKERS] Bug #10432 failed to re-find parent key in index

2015-03-31 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Tue, Mar 31, 2015 at 1:49 PM, Joshua D. Drake j...@commandprompt.com 
 wrote:
 Perhaps we could consider it after a year or two, once 9.4 is indeed
 very stable, but at that point you have to wonder if it's really worth
 the trouble anymore. If someone has run into that issue frequently, he
 probably should just upgrade to 9.4.

 Ouch. That is a really poor way to look at this.

 I don't agree.  When a bug fix is really invasive, there is a
 considerable risk that it will itself have bugs.  We've got to balance
 the risk of fixing things for users who are currently having problems
 with the risk of creating problems for people who currently aren't
 having any.

It should also be noted that there is very little reason to assume that
whatever issue JD saw on his 9.1.15 system would have been prevented by
Heikki's changes.  We've seen many instances of failed to re-find parent
key errors over the years, with widely varying root causes (when we were
able to find the root cause).

Personally I'm fine with Heikki's opinion that the costs/risks of
backporting outweigh the likely benefits.  I'm certainly on board with
the idea that we wouldn't do it for another year or so ... by which
time 9.1 will be out of support or nearly so.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] clang -fsanitize=undefined error in ecpg

2015-03-31 Thread Peter Eisentraut
With clang -fsanitize=undefined (clang-3.4), I get the following test failure 
in ecpg
(it's the only one in the entire tree):

--- a/src/interfaces/ecpg/test/expected/pgtypeslib-dt_test2.stderr
+++ b/src/interfaces/ecpg/test/results/pgtypeslib-dt_test2.stderr
@@ -1,2 +1,4 @@
 [NO_PID]: ECPGdebug: set to 1
 [NO_PID]: sqlca: code: 0, state: 0
+dt_common.c:2209:13: runtime error: left shift of 1 by 31 places cannot be 
represented in type 'int'
+dt_common.c:1424:12: runtime error: left shift of 1 by 31 places cannot be 
represented in type 'int'


This happens while parsing these strings:

19990108foobar
19990108 foobar,
1999-01-08 foobar
Xaaa


(I'm not sure why it reports two warnings for four cases.  Maybe it collapses 
some warnings.)


This patch fixes it:

diff --git a/src/interfaces/ecpg/pgtypeslib/dt.h 
b/src/interfaces/ecpg/pgtypeslib/dt.h
index 145e2b7..2ccd0be 100644
--- a/src/interfaces/ecpg/pgtypeslib/dt.h
+++ b/src/interfaces/ecpg/pgtypeslib/dt.h
@@ -193,7 +193,7 @@ typedef double fsec_t;
  * Bit mask definitions for time parsing.
  */
 /* Copypasted these values from src/include/utils/datetime.h */
-#define DTK_M(t)   (0x01 << (t))
+#define DTK_M(t)   ((t) == UNKNOWN_FIELD ? 0 : 0x01 << (t))
 #define DTK_ALL_SECS_M	(DTK_M(SECOND) | DTK_M(MILLISECOND) | DTK_M(MICROSECOND))
 #define DTK_DATE_M (DTK_M(YEAR) | DTK_M(MONTH) | DTK_M(DAY))
 #define DTK_TIME_M (DTK_M(HOUR) | DTK_M(MINUTE) | DTK_M(SECOND))


Strangely, I cannot reproduce this failure with the backend datetime code that
this was supposedly copied from.

Comments?


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Exposing PG_VERSION_NUM in pg_config

2015-03-31 Thread Tom Lane
Michael Paquier michael.paqu...@gmail.com writes:
 On Tue, Mar 31, 2015 at 9:40 AM, Peter Eisentraut pete...@gmx.net wrote:
 I'm interested in the exact syntax you'd use, to compare it to the
 currently used techniques.

 With the presence of VERSION_NUM directly in pg_config, the following
 expression:
 VERSION_NUM=$(shell $(PG_CONFIG) --version-num)

 With its presence in Makefile.global, that's close to what you can do with
 pg_config.h already:
 VERSION_NUM := $(shell cat `$(PG_CONFIG) --libdir`/pgxs/src/Makefile.global
 \
 | perl -ne 'print $$1 and exit if /VERSION_NUM =\s+(\d+)/')
 But that looks a little bit magic..

I'm confused.  If PG_VERSION_NUM is defined in Makefile.global, surely
you don't need anything at all to make use of it in extension makefiles.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bug fix for missing years in make_date()

2015-03-31 Thread David Fetter
On Tue, Mar 31, 2015 at 12:58:27PM -0400, Tom Lane wrote:
 David Fetter da...@fetter.org writes:
  On Tue, Mar 31, 2015 at 10:34:45AM -0400, Adam Brightwell wrote:
  Previously, zero was rejected, what does it do now? I'm sure it
  represents 0 AD/CE, however, is that important enough to note
  given that it was not allowed previously?
 
  Now, it's supposed to take 0 as 1 BCE, -1 as 2 BCE, etc.  There
  should probably be tests for that.
 
 Surely that is *not* what we want?

It is if we're to be consistent with the rest of the system, to wit:

SELECT to_date('','');
    to_date
---------------
 0001-01-01 BC
(1 row)

 I'd expect any user-facing date function to reject zero and take -1
 as 1 BC, etc.  The behavior you describe is an internal convention,
 not something we want to expose to users.

That ship has already sailed.

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] GUC context information in the document.

2015-03-31 Thread Tom Lane
Kyotaro HORIGUCHI horiguchi.kyot...@lab.ntt.co.jp writes:
 If I'm not missing anything, putting stereotyped information about
 GUC contexts like the following would be usable.

 shared_buffers (integer), (effective after server restart)

 log_destination (string), (effective after config reload)

 log_min_duration_statement (integer), (effective in-session, superuser only)

 DateStyle (string), (effective in-session)

 What do you think about this?

TBH, those don't seem like improvements over the existing boilerplate
texts, particularly not the last two.

I follow the general idea of getting rid of the boilerplate sentences
in favor of an annotation similar to the variable datatype notations;
but such annotations would have to be *very* carefully wordsmithed
to be both precise and understandable yet brief enough to fit ... and
these are not.  I'm not sure such a goal is possible at all.

If we were to go in this direction, I'd favor just annotating with
the same context keywords that we already expose to users in the
pg_settings view, ie more like

shared_buffers (integer, postmaster context)

and then we'd need some introductory text in section 18.1 that defines
these keywords.  Maybe we could move the text about them that's currently
associated with the pg_settings view (section 48.69 ATM).

But TBH, I'm not sure that anything like this would reduce the number
of questions.  It's basically relying on the assumption that people would
read section 18.1 before asking, and that's a shaky assumption.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Doubt about AccessExclusiveLock in ALTER TABLE .. SET ( .. );

2015-03-31 Thread Robert Haas
On Tue, Mar 31, 2015 at 9:11 AM, Fabrízio de Royes Mello
fabriziome...@gmail.com wrote:
 Attached a very WIP patch to reduce lock level when setting autovacuum
 reloptions in ALTER TABLE .. SET ( .. ) statement.

I think the first thing we need to here is analyze all of the options
and determine what the appropriate lock level is for each, and why.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bug #10432 failed to re-find parent key in index

2015-03-31 Thread Joshua D. Drake


On 03/31/2015 10:58 AM, Robert Haas wrote:


On Tue, Mar 31, 2015 at 1:49 PM, Joshua D. Drake j...@commandprompt.com wrote:

Perhaps we could consider it after a year or two, once 9.4 is indeed
very stable, but at that point you have to wonder if it's really worth
the trouble anymore. If someone has run into that issue frequently, he
probably should just upgrade to 9.4.


Ouch. That is a really poor way to look at this.


I don't agree.  When a bug fix is really invasive, there is a
considerable risk that it will itself have bugs.


Absolutely.


 We've got to balance
the risk of fixing things for users who are currently having problems
with the risk of creating problems for people who currently aren't
having any.



We are not in disagreement.

JD




--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing "I'm offended" is basically telling the world you can't
control your own emotions, so everyone else should do it for you.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] vac truncation scan problems

2015-03-31 Thread Jeff Janes
On Tue, Mar 31, 2015 at 1:28 AM, Kyotaro HORIGUCHI 
horiguchi.kyot...@lab.ntt.co.jp wrote:

 Hi, this is a bug in commit 0d831389749a3baaced7b984205b9894a82444b9.

 It allows vacuum freeze to be skipped and inversely lets regular
 vacuum wait for the lock. The attached patch fixes it.


 In table_recheck_autovac, vacuum options are determined as following,

    tab->at_vacoptions = VACOPT_SKIPTOAST |
        (dovacuum ? VACOPT_VACUUM : 0) |
        (doanalyze ? VACOPT_ANALYZE : 0) |
 !      (wraparound ? VACOPT_NOWAIT : 0);

 The line prefixed by '!' looks inverted.
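
 That is, the corrected assignment presumably negates the condition, so
 that only anti-wraparound vacuums wait for the lock (a sketch inferred
 from the description above, not copied from the patch):

    tab->at_vacoptions = VACOPT_SKIPTOAST |
        (dovacuum ? VACOPT_VACUUM : 0) |
        (doanalyze ? VACOPT_ANALYZE : 0) |
        (!wraparound ? VACOPT_NOWAIT : 0);	/* wait only when wraparound */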


Thanks, it is obvious once you see it!

Your patch solved the problem for me.

Cheers,

Jeff


Re: [HACKERS] Bug #10432 failed to re-find parent key in index

2015-03-31 Thread Joshua D. Drake


On 03/31/2015 10:51 AM, Andres Freund wrote:


On 2015-03-31 10:49:06 -0700, Joshua D. Drake wrote:

On 03/31/2015 04:20 AM, Heikki Linnakangas wrote:

Perhaps we could consider it after a year or two, once 9.4 is indeed
very stable, but at that point you have to wonder if it's really worth
the trouble anymore. If someone has run into that issue frequently, he
probably should just upgrade to 9.4.


Ouch. That is a really poor way to look at this.


Man.

Easy for you to say. You're not doing the work (which would be
significant in this case). You're not going to be blamed if the backport
breaks more things than it fixed.


I understand that. I am not picking on anyone. I am just saying that 
looking at the problem this way is poor, which it is. We are saying as a 
community: "Your option to remove this data loss bug is to upgrade." That 
is generally not how we approach things.


JD


--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing "I'm offended" is basically telling the world you can't
control your own emotions, so everyone else should do it for you.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bug #10432 failed to re-find parent key in index

2015-03-31 Thread Joshua D. Drake


On 03/31/2015 04:20 AM, Heikki Linnakangas wrote:


I believe that Heikki said he'd backpatch that when 9.4 was considered
very stable. I don't think that we've reached that level of confidence
in the invasive B-Tree bugfixes that went into 9.4 yet.


I have no intention to backpatch the changes. Too big, too invasive.


I can certainly appreciate that.


Perhaps we could consider it after a year or two, once 9.4 is indeed
very stable, but at that point you have to wonder if it's really worth
the trouble anymore. If someone has run into that issue frequently, he
probably should just upgrade to 9.4.


Ouch. That is a really poor way to look at this.

JD



--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing "I'm offended" is basically telling the world you can't
control your own emotions, so everyone else should do it for you.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bug #10432 failed to re-find parent key in index

2015-03-31 Thread Josh Berkus
On 03/31/2015 04:20 AM, Heikki Linnakangas wrote:
 On 03/30/2015 09:57 PM, Peter Geoghegan wrote:
 On Mon, Mar 30, 2015 at 7:50 PM, Joshua D. Drake
 j...@commandprompt.com wrote:
 We have a database that has run into this problem. The version is
 9.1.15 on
 Linux. I note in this thread:

 http://www.postgresql.org/message-id/cam-w4hp34ppwegtcwjbznwhq0cmu-lxna62vjku8qrtwlob...@mail.gmail.com


 That things appear to be fixed in 9.4 but they have not been
 back-patched?
 What is the current status?

 I believe that Heikki said he'd backpatch that when 9.4 was considered
 very stable. I don't think that we've reached that level of confidence
 in the invasive B-Tree bugfixes that went into 9.4 yet.
 
 I have no intention to backpatch the changes. Too big, too invasive.
 Perhaps we could consider it after a year or two, once 9.4 is indeed
 very stable, but at that point you have to wonder if it's really worth
 the trouble anymore. If someone has run into that issue frequently, he
 probably should just upgrade to 9.4.

We could use somewhere for users to find out about this kind of issue.
That is, for users to know that they can fix it by upgrading to 9.4, but
not otherwise.  Ideas?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bug #10432 failed to re-find parent key in index

2015-03-31 Thread Joshua D. Drake


On 03/31/2015 11:05 AM, Josh Berkus wrote:


I have no intention to backpatch the changes. Too big, too invasive.
Perhaps we could consider it after a year or two, once 9.4 is indeed
very stable, but at that point you have to wonder if it's really worth
the trouble anymore. If someone has run into that issue frequently, he
probably should just upgrade to 9.4.


We could use somewhere for users to find out about this kind of issue.
That is, for users to know that they can fix it by upgrading to 9.4, but
not otherwise.  Ideas?


That is a good idea. It also opens up the ability for us to be more 
proactive about upgrades. The more complex we get, the more likely this 
type of problem is going to arise. Some type of deficiency-to-upgrade 
matrix (similar to our feature matrix) might be a good idea.


JD




--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing "I'm offended" is basically telling the world you can't
control your own emotions, so everyone else should do it for you.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] vac truncation scan problems

2015-03-31 Thread Jeff Janes
On Tue, Mar 31, 2015 at 1:29 AM, Michael Paquier michael.paqu...@gmail.com
wrote:



 On Tue, Mar 31, 2015 at 3:42 PM, Jeff Janes jeff.ja...@gmail.com wrote:

 On Mon, Mar 30, 2015 at 8:54 PM, Jeff Janes jeff.ja...@gmail.com wrote:

 After freeing up the rows at the end of the table so it is eligible for
 truncation, then running a manual VACUUM to actually release the space, I
 kept running into the problem that the truncation scan was consistently
 suspended and then aborted due to a conflicting lock requested/held.

 But the perversity is that that conflicting lock request can only be
 coming, as far as I can tell, from the autovac process.  I'm not sure how
 this happens, as I thought autovac never waited for locks but only obtained
 one if it were instantaneously available, but that it is the only
 explanation I can think of.

 I'm not seeing this in 9.4, but I'm not sure how deterministic it is so
 maybe that is just luck.



 It looks like the culprit is this:

 commit 0d831389749a3baaced7b984205b9894a82444b9
 Author: Alvaro Herrera alvhe...@alvh.no-ip.org
 Date:   Wed Mar 18 11:52:33 2015 -0300

 Rationalize vacuuming options and parameters

 I'd guess the autovac nature of the autovac process is getting lost in
 there where, but I don't see where.


 Hm. I ran a couple of tests and am noticing that a manual VACUUM is not
 able to truncate all the pages (it should, no?)... For example with your
 test case on REL9_4_STABLE VACUUM VERBOSE reports that all the pages are
 truncated:
 INFO:  0: "pgbench_accounts": truncated 16394 to 0 pages

 OK, on HEAD this does not seem to work:
 INFO:  0: "pgbench_accounts": truncated 16394 to 13554 pages

 But if I try as well with 4559167c (0d831389~1) I am getting a similar
 result:
 INFO:  0: "pgbench_accounts": truncated 16394 to 3309 pages


I was not seeing that on commits before 0d831389--although often once the
truncation scan had run for a while without problem, I would abort the
session and call it good, as my failures were always quick ones.

Did it tell you why?  If it surrendered the lock to a competing process, it
should report that as previous INFO messages. If it doesn't give one of
those, then it probably just thinks there are some tuples it can't remove
yet somewhere.  What did it give earlier up in the verbose output, for the
number of removed and nonremovable tuples?

Cheers,

Jeff


Re: [HACKERS] a fast bloat measurement tool (was Re: Measuring relation free space)

2015-03-31 Thread Abhijit Menon-Sen
Hi.

I'm just posting this WIP patch where I've renamed fastbloat to
pgstatbloat as suggested by Tomas, and added in the documentation, and
so on. I still have to incorporate Amit's comments about the estimation
of reltuples according to the way vacuum does it, and I expect to post
that tomorrow (I just need to test a little more).

In the meantime, if anyone else was having trouble installing the
extension due to the incorrect version in the control file, this is the
patch you should be using.

-- Abhijit
From f809e070e8ea13b74c6206ca67a7eaf2a32e60fa Mon Sep 17 00:00:00 2001
From: Abhijit Menon-Sen a...@2ndquadrant.com
Date: Fri, 26 Dec 2014 12:37:13 +0530
Subject: Add pgstatbloat to pgstattuple

---
 contrib/pgstattuple/Makefile   |   4 +-
 contrib/pgstattuple/pgstatbloat.c  | 346 +
 contrib/pgstattuple/pgstattuple--1.2--1.3.sql  |  18 ++
 .../{pgstattuple--1.2.sql => pgstattuple--1.3.sql} |  18 +-
 contrib/pgstattuple/pgstattuple.control|   2 +-
 doc/src/sgml/pgstattuple.sgml  | 135 
 6 files changed, 519 insertions(+), 4 deletions(-)
 create mode 100644 contrib/pgstattuple/pgstatbloat.c
 create mode 100644 contrib/pgstattuple/pgstattuple--1.2--1.3.sql
 rename contrib/pgstattuple/{pgstattuple--1.2.sql => pgstattuple--1.3.sql} (73%)

diff --git a/contrib/pgstattuple/Makefile b/contrib/pgstattuple/Makefile
index 862585c..d7d27a5 100644
--- a/contrib/pgstattuple/Makefile
+++ b/contrib/pgstattuple/Makefile
@@ -1,10 +1,10 @@
 # contrib/pgstattuple/Makefile
 
 MODULE_big	= pgstattuple
-OBJS		= pgstattuple.o pgstatindex.o $(WIN32RES)
+OBJS		= pgstattuple.o pgstatindex.o pgstatbloat.o $(WIN32RES)
 
 EXTENSION = pgstattuple
-DATA = pgstattuple--1.2.sql pgstattuple--1.1--1.2.sql pgstattuple--1.0--1.1.sql pgstattuple--unpackaged--1.0.sql
+DATA = pgstattuple--1.3.sql pgstattuple--1.2--1.3.sql pgstattuple--1.1--1.2.sql pgstattuple--1.0--1.1.sql pgstattuple--unpackaged--1.0.sql
 PGFILEDESC = pgstattuple - tuple-level statistics
 
 REGRESS = pgstattuple
diff --git a/contrib/pgstattuple/pgstatbloat.c b/contrib/pgstattuple/pgstatbloat.c
new file mode 100644
index 000..15c2cb9
--- /dev/null
+++ b/contrib/pgstattuple/pgstatbloat.c
@@ -0,0 +1,346 @@
+/*
+ * contrib/pgstattuple/pgstatbloat.c
+ *
+ * Abhijit Menon-Sen a...@2ndquadrant.com
+ * Portions Copyright (c) 2001,2002	Tatsuo Ishii (from pg_stattuple)
+ *
+ * Permission to use, copy, modify, and distribute this software and
+ * its documentation for any purpose, without fee, and without a
+ * written agreement is hereby granted, provided that the above
+ * copyright notice and this paragraph and the following two
+ * paragraphs appear in all copies.
+ *
+ * IN NO EVENT SHALL THE AUTHOR BE LIABLE TO ANY PARTY FOR DIRECT,
+ * INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING
+ * LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS
+ * DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * THE AUTHOR SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE.  THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS
+ * IS" BASIS, AND THE AUTHOR HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE,
+ * SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
+ */
+
+#include postgres.h
+
+#include access/visibilitymap.h
+#include access/transam.h
+#include access/xact.h
+#include access/multixact.h
+#include access/htup_details.h
+#include catalog/namespace.h
+#include funcapi.h
+#include miscadmin.h
+#include storage/bufmgr.h
+#include storage/freespace.h
+#include storage/procarray.h
+#include storage/lmgr.h
+#include utils/builtins.h
+#include utils/tqual.h
+#include commands/vacuum.h
+
+PG_FUNCTION_INFO_V1(pgstatbloat);
+
+/*
+ * tuple_percent, dead_tuple_percent and free_percent are computable,
+ * so not defined here.
+ */
+typedef struct pgstatbloat_output_type
+{
+	uint64		table_len;
+	uint64		tuple_count;
+	uint64		tuple_len;
+	uint64		dead_tuple_count;
+	uint64		dead_tuple_len;
+	uint64		free_space;
+	uint64		total_pages;
+	uint64		scanned_pages;
+} pgstatbloat_output_type;
+
+static Datum build_output_type(pgstatbloat_output_type *stat,
+			   FunctionCallInfo fcinfo);
+static Datum fbstat_relation(Relation rel, FunctionCallInfo fcinfo);
+static Datum fbstat_heap(Relation rel, FunctionCallInfo fcinfo);
+
+/*
+ * build a pgstatbloat_output_type tuple
+ */
+static Datum
+build_output_type(pgstatbloat_output_type *stat, FunctionCallInfo fcinfo)
+{
+#define NCOLUMNS	10
+#define NCHARS		32
+
+	HeapTuple	tuple;
+	char	   *values[NCOLUMNS];
+	char		values_buf[NCOLUMNS][NCHARS];
+	int			i;
+	double		tuple_percent;
+	double		dead_tuple_percent;
+	double		free_percent;	/* free/reusable space in % */
+	double		scanned_percent;
+	TupleDesc	tupdesc;
+	AttInMetadata *attinmeta;
+
+	/* Build a tuple descriptor for our 

Re: [HACKERS] Bug #10432 failed to re-find parent key in index

2015-03-31 Thread Andres Freund
On 2015-03-31 10:49:06 -0700, Joshua D. Drake wrote:
 On 03/31/2015 04:20 AM, Heikki Linnakangas wrote:
 Perhaps we could consider it after a year or two, once 9.4 is indeed
 very stable, but at that point you have to wonder if it's really worth
 the trouble anymore. If someone has run into that issue frequently, he
 probably should just upgrade to 9.4.
 
 Ouch. That is a really poor way to look at this.

Man.

Easy for you to say. You're not doing the work (which would be
significant in this case). You're not going to be blamed if the backport
breaks more things than it fixed.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bug #10432 failed to re-find parent key in index

2015-03-31 Thread Robert Haas
On Tue, Mar 31, 2015 at 1:49 PM, Joshua D. Drake j...@commandprompt.com wrote:
 Perhaps we could consider it after a year or two, once 9.4 is indeed
 very stable, but at that point you have to wonder if it's really worth
 the trouble anymore. If someone has run into that issue frequently, he
 probably should just upgrade to 9.4.

 Ouch. That is a really poor way to look at this.

I don't agree.  When a bug fix is really invasive, there is a
considerable risk that it will itself have bugs.  We've got to balance
the risk of fixing things for users who are currently having problems
with the risk of creating problems for people who currently aren't
having any.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Vacuuming big btree indexes without pages with deleted items

2015-03-31 Thread Kevin Grittner
Jim Nasby jim.na...@bluetreble.com wrote:
 On 3/27/15 5:15 AM, Vladimir Borodin wrote:

 Master writes this record to xlog in the btvacuumscan function after
 vacuuming all index pages. And in case of no pages with
 deleted items, the xlog record would contain lastBlockVacuumed 0.

 In btree_xlog_vacuum, the replica reads all blocks from
 lastBlockVacuumed to the last block of the index while applying this
 record, because there is no API in the buffer manager to
 tell whether a page is unpinned.

 So if the index is quite big (200+ GB in the described case), it
 takes a long time to do it.

 2. Is it possible not to write to xlog record with
 lastBlockVacuumed 0 in some cases? For example, in case of not
 deleting any pages.

 Possibly, but that's much higher risk. Without studying it, if we
 wanted to mess around with that it might actually make more sense
 to XLOG a set of blkno's that got vacuumed, but I suspect that
 wouldn't be a win.

I feel pretty confident that it would be a win in some significant
cases, but it could be worse in some cases by changing sequential
access to random, unless we use heuristics to protect against
that.  But...

 Or maybe there are some better ways of improving this situation?

This is a start of a better way:

http://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=2ed5b87f96d473962ec5230fd820abfeaccb2069

If we expand on that commit to cover non-MVCC index scans,
index-only scans, and index scans of non-WAL-logged indexes, then
this whole aspect of btree vacuum can be eliminated.  It seems
extremely dubious that all of that could be done for 9.5, and it's
certainly not material for back-patching to any stable branches,
but it would be a more complete and better-performing fix than the
alternatives being discussed here.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] printing table in asciidoc with psql

2015-03-31 Thread Bruce Momjian
On Tue, Mar 31, 2015 at 05:25:48PM +0100, Thom Brown wrote:
 Slightly updated patch attached and applied.  I moved asciidoc after
 HTML in the list, rather than at the end.  Thanks for everyone's hard
 work on this. 
 
 
 I think I done gone broke it:
 
 CREATE TABLE | 3^.||moo|hello, (stuff int, |.^hje|| text);
 
 INSERT INTO | 3^.||moo|hello, VALUES (2,'hello');
 
 Output:
 
 [options="header",cols="l,l",frame="none"]
 |====
 ^l|stuff ^l|\|.^hje\|\|
 |2 |hello
 |====

Uh, you broke asciidoctor 1.5.2.   ;-)  LOL

I installed the Asciidoctor Firefox plugin:


https://addons.mozilla.org/en-US/firefox/addon/asciidoctorjs-live-preview/

and was able to see that asciidoctor sucks in the next row's first cell value
when the _heading_ ends with an escaped pipe, e.g. this:

[options="header",cols="l,l",frame="none"]
|====
^l|stuff ^l|abc\|X
|2 |hello\|
|3 |hello
|====

yields a correct HTML heading of:

stuff   abc|X

which is good, but if you remove the "X" from the asciidoc heading, the
HTML output heading is:

stuff   abc|2

The "X" is gone, but the "2" from the first data row is now in the
heading, and the first and only data row is now:

hello|3  hello

I can't add a trailing pipe to the header line because it breaks output
in https://asciidoclive.com/ .

I have reported this on the asciidoc discussion list:


http://discuss.asciidoctor.org/Problem-with-table-heading-ending-in-a-pipe-td2902.html

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WAL format changes break the suppression of do-nothing checkpoints.

2015-03-31 Thread Heikki Linnakangas

On 03/30/2015 09:01 PM, Jeff Janes wrote:

commit 2c03216d831160bedd72d45f7 has invalidated the part of the docs
saying "If no WAL has been written since the previous checkpoint, new
checkpoints will be skipped even if checkpoint_timeout has passed",
presumably by accident.

It seems that this part is no longer true when it should be true:

 if (curInsert == ControlFile->checkPoint +
     MAXALIGN(SizeOfXLogRecord + sizeof(CheckPoint)))

MAXALIGN(SizeOfXLogRecord + sizeof(CheckPoint)) is now 96, but the amount by
which curInsert gets advanced is still 104, like it was before the commit.


Hmm. Wasn't this a bit broken before too, when the checkpoint record 
crosses a page boundary?


Instead of trying to calculate where the checkpoint record ends, I think 
we could check that the prev-pointer points to the last checkpoint record.
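
Something like the following, roughly (an untested sketch;
GetPrevRecordPtr() is a made-up placeholder for however we look up the
prev-pointer of the record at the insert position, and the surrounding
lock/critical-section cleanup is simplified):

	/*
	 * If the record immediately before the current insert position is the
	 * previous checkpoint record itself, nothing has been written since
	 * that checkpoint, so the new checkpoint can be skipped.
	 */
	if (GetPrevRecordPtr(curInsert) == ControlFile->checkPoint)
	{
		LWLockRelease(CheckpointLock);
		return;
	}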


- Heikki



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] INSERT ... ON CONFLICT IGNORE (and UPDATE) 3.0

2015-03-31 Thread Peter Geoghegan
On Tue, Mar 31, 2015 at 1:09 PM, Heikki Linnakangas hlinn...@iki.fi wrote:
 I'm pretty sceptical of that. ISTM you'll need to modify the page twice
 for each insertion, first to insert the promise tuple, and then to turn the
 promise tuple into a real tuple. And WAL-log both updates. That's going to
 hurt performance.


Andres' wish to do things that way is at least partially motivated by
having logical decoding "just work". The co-ordination I'm currently
doing across changes within transaction reassembly is pretty ugly.
Andres has strongly suggested that it's broken, too, since a snapshot
change could occur between a speculative insertion and its super
deletion within transaction reassembly, thus invalidating the
assumption that the next change not being a super deletion means there
is no such super deletion change (i.e. the insert should be treated as
real).

Anyway, if we don't do this, we'll need to make sure my changes to
transaction reassembly are sound. Hopefully that's an easy fix.

-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP: SCRAM authentication

2015-03-31 Thread Stephen Frost
Heikki,

* Heikki Linnakangas (hlinn...@iki.fi) wrote:
 On 03/30/2015 06:46 PM, Stephen Frost wrote:
 Unfortunately, the first major release with this will certainly need to
 default to including md5 as we can't have a password update or change
 break clients right off the bat.  What I think would be fantastic would
 be a warning, perhaps in the first release or maybe the second, which
 deprecates md5 as an auth method and is thrown when a password is set
 which includes storing an md5-based password.  I'm sure there will be
 plenty of discussion about that in the future.
 
 Yeah. And even if client are updated, and the server is upgraded,
 you still cannot use SCRAM until all the passwords have been changed
 and the SCRAM verifiers for them generated. Unless we go with the
 scheme I mentioned earlier, and use the MD5 hash of the password as
 the plaintext password to SCRAM.

I don't like using the MD5 hash of the password for the token or to
generate the verifier.  I'm no cryptographer and can't speak with any
certainty on the topic, but I understand there can be cases where doing
such would actively reduce the complexity required to find a usable
token.  Further, to be frank and slightly paranoid, it's possible that
individuals have built up caches of known-to-be-valid PG-encoded md5
strings from backups or other stolen data and we would not be doing
people who run those systems any favors with that approach.

At the end of the day, I'd much prefer to see a clean break with new
passwords rolled out by administrators working with their users.  That,
of course, would be much easier if we provided the basics that every
other auth system out there does today, which are the capabilities
available through pam_unix, pam_pwquality, pam_cracklib, etc.  In
particular, things like password history, password validity, password
complexity, account inactivity, account login history (successful vs.
failed attempts), are required in many environments, specifically called
out by NIST 800-53 which is required for US Government deployments, and
probably included in other standards also (eg: PCI, HIPAA, etc).

These are all things I was fighting for 10-or-so years ago.  I sincerely
hope that we're ready to consider these capabilities as being desirable.
I'm certainly hoping to work on them for 9.6 and will be happy to
support them going forward.

 One additional item is that we need to have a way to prefer SCRAM-based
 auth while allowing a fall-back to md5 if the client doesn't support it.
 This might have to be driven by the client side explicitly saying I
 support SCRAM from the start to avoid breaking existing clients.
 
 I'll start a separate thread on this. It's an interesting feature on
 its own. As well as an option in libpq to refuse plaintext
 authentication even if the server asks for it.

Agreed on both counts.

 Is it ever used in
 generation of the authentication verifier, etc?  One way to address the
 risk which you bring up about the different encodings might be to simply
 discourage using non-UTF8-compliant encodings by throwing a warning or
 refusing to support SCRAM in cases where the role wouldn't be allowed by
 SCRAM (eg: in CREATE ROLE or ALTER ROLE when the SCRAM auth verifier
 storage is being handled).  Another option might be to define a
 "convert from whatever to UTF8" something for the purposes of the
 SCRAM auth method.
 
 Presumably the username used in the SCRAM exchange would have to
 match the username sent in the startup packet. Otherwise things get
 weird. If an empty string is a problem (there actually seems to be
 some language in the spec to forbid or at least discourage using an
 empty string as username), we could also specify some other constant
 that must be used, to mean "same as in startup packet".

Ok..  Having it be a constant which means "same as in startup packet"
may be workable.  My suggestion above was intended as a "let's figure
out a way to encode whatever is in the startup packet to work in UTF8,
and then we will decode it ourselves" kind of idea.  That would make it
byte-wise different (at least in some cases; we might be able to
minimize the number of cases where that happens), but semantically
identical.  What would be *really* nice would be to say "if your client
only speaks UTF8, then you don't have to worry about this and everything
just works like normal."  That might be too much of a stretch though.
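
As a point of reference for the encoding question, the username's place
on the wire is fixed by RFC 5802: the client-first-message carries it as
the n= attribute, and the spec requires it to be UTF-8 after SASLprep,
with "=" and "," escaped as "=3D" and "=2C".  A minimal sketch of that
message's shape (illustrative only, not code from the patch; escaping
omitted for brevity):

#include <stdio.h>

/*
 * RFC 5802 client-first-message: gs2-header "n,," (no channel binding,
 * no authzid) followed by "n=" username "," "r=" client-nonce.
 */
static void
build_client_first_message(char *buf, size_t buflen,
                           const char *username, const char *client_nonce)
{
    snprintf(buf, buflen, "n,,n=%s,r=%s", username, client_nonce);
}

int
main(void)
{
    char msg[256];

    /* nonce borrowed from the RFC 5802 example exchange */
    build_client_first_message(msg, sizeof(msg), "user",
                               "fyko+d2lbbFgONRv9qkxdawL");
    puts(msg);      /* n,,n=user,r=fyko+d2lbbFgONRv9qkxdawL */
    return 0;
}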

Thanks!

Stephen




Re: [HACKERS] Exposing PG_VERSION_NUM in pg_config

2015-03-31 Thread Peter Eisentraut
On 3/31/15 1:05 AM, Michael Paquier wrote:
 
 
 On Tue, Mar 31, 2015 at 9:40 AM, Peter Eisentraut pete...@gmx.net wrote:
 
 On 3/30/15 6:29 PM, Michael Paquier wrote:
 
 
  On Tue, Mar 31, 2015 at 5:39 AM, Peter Eisentraut pete...@gmx.net wrote:
 
  On 3/25/15 1:32 AM, Michael Paquier wrote:
   Well, I have no other cases than ones of the type mentioned 
 upthread,
   and honestly I am fine as long as we do not apply maths to a 
 version
   string. So attached is a patch that adds VERSION_NUM in
   Makefile.global.
 
  How would you make use of this in an extension makefile?
 
 
  One use case is regression test list filtering depending on backend version.
 
 I'm interested in the exact syntax you'd use, to compare it to the
 currently used techniques.
 
 
 With VERSION_NUM exposed directly by pg_config, the following
 expression would suffice:
 VERSION_NUM=$(shell $(PG_CONFIG) --version-num)

My question is, once you have this version number in a variable like
VERSION_NUM, what specifically would you do with it?


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Exposing PG_VERSION_NUM in pg_config

2015-03-31 Thread Michael Paquier
On Wed, Apr 1, 2015 at 8:50 AM, Peter Eisentraut pete...@gmx.net wrote:

 On 3/31/15 1:05 AM, Michael Paquier wrote:
 
 
  On Tue, Mar 31, 2015 at 9:40 AM, Peter Eisentraut pete...@gmx.net wrote:
 
  On 3/30/15 6:29 PM, Michael Paquier wrote:
  
  
   On Tue, Mar 31, 2015 at 5:39 AM, Peter Eisentraut pete...@gmx.net wrote:
  
   On 3/25/15 1:32 AM, Michael Paquier wrote:
    Well, I have no other cases than ones of the type mentioned upthread,
    and honestly I am fine as long as we do not apply maths to a version
    string. So attached is a patch that adds VERSION_NUM in Makefile.global.
  
   How would you make use of this in an extension makefile?
  
  
   One use case is regression test list filtering depending on backend version.
 
  I'm interested in the exact syntax you'd use, to compare it to the
  currently used techniques.
 
 
  With VERSION_NUM exposed directly by pg_config, the following
  expression would suffice:
  VERSION_NUM=$(shell $(PG_CONFIG) --version-num)

 My question is, once you have this version number in a variable like
 VERSION_NUM, what specifically would you do with it?


For an extension that keeps a single branch compatible with multiple major
versions of Postgres, the cases are custom values for REGRESS_OPTS and
REGRESS depending on the backend version.  I also manipulate the same set
of scripts daily across many platforms (on Windows as well, using msysgit,
where the MSVC installation does not ship the pgxs files), so I would use
it to simplify them.  It is true that you can already do that by parsing
the output of pg_config --version, and that I would need to maintain both
approaches for some time (custom parsing of pg_config --version, plus
--version-num if this patch is accepted), but thinking long term it is
tempting to have a simple integer number at hand to identify a given
version of PG.
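
For what it's worth, the integer in question follows the long-standing
PG_VERSION_NUM scheme: major * 10000 + minor * 100 + revision for the
9.x series, so 9.4.1 becomes 90401.  That encoding is what makes numeric
comparisons trivial once a script has the value; a minimal sketch of it
(illustrative only):

#include <stdio.h>

/* PG_VERSION_NUM-style encoding for 9.x releases: 9.4.1 -> 90401 */
static int
version_num(int major, int minor, int revision)
{
    return major * 10000 + minor * 100 + revision;
}

int
main(void)
{
    int v = version_num(9, 4, 1);

    printf("%d\n", v);          /* prints 90401 */
    if (v >= 90400)
        puts("9.4 or newer");
    return 0;
}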
-- 
Michael


Re: [HACKERS] Exposing PG_VERSION_NUM in pg_config

2015-03-31 Thread Michael Paquier
On Wed, Apr 1, 2015 at 4:09 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 Michael Paquier michael.paqu...@gmail.com writes:
  On Tue, Mar 31, 2015 at 9:40 AM, Peter Eisentraut pete...@gmx.net
 wrote:
  I'm interested in the exact syntax you'd use, to compare it to the
  currently used techniques.

  With VERSION_NUM exposed directly by pg_config, the following
  expression would suffice:
  VERSION_NUM=$(shell $(PG_CONFIG) --version-num)

  With its presence in Makefile.global, that's close to what you can do
  with pg_config.h already:
  VERSION_NUM := $(shell cat `$(PG_CONFIG) --libdir`/pgxs/src/Makefile.global \
  | perl -ne 'print $$1 and exit if /VERSION_NUM =\s+(\d+)/')
  But that looks a little bit magic..

 I'm confused.  If PG_VERSION_NUM is defined in Makefile.global, surely
 you don't need anything at all to make use of it in extension makefiles.


No, you don't... But well, coming back to the first point: fetching it
through pg_config avoids any ugly parsing logic.
-- 
Michael


[HACKERS] Move inet_gist to right place in pg_amproc

2015-03-31 Thread Andreas Karlsson

Hi,

The pg_amproc functions for inet_gist were accidentally added under the 
gin heading. I have attached a patch which moves them to the gist 
heading where they belong.
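
For anyone who does not read the catalog headers daily, each of these
DATA lines carries pg_amproc's five fields.  Taking the first moved row
as an example (the gloss below is illustrative, not part of the patch):

/*
 * DATA(insert ( 3550  869  869  1  3553 ));
 *
 *   amprocfamily    3550   the inet GiST operator family
 *   amproclefttype   869   left input type (869 = inet)
 *   amprocrighttype  869   right input type (869 = inet)
 *   amprocnum          1   GiST support function number (1 = consistent)
 *   amproc          3553   OID of the support function itself
 */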


--
Andreas Karlsson
diff --git a/src/include/catalog/pg_amproc.h b/src/include/catalog/pg_amproc.h
index 8a43f64..78c3bd9 100644
--- a/src/include/catalog/pg_amproc.h
+++ b/src/include/catalog/pg_amproc.h
@@ -236,6 +236,14 @@ DATA(insert (	3919   3831 3831 5 3879 ));
 DATA(insert (	3919   3831 3831 6 3880 ));
 DATA(insert (	3919   3831 3831 7 3881 ));
 DATA(insert (	3919   3831 3831 9 3996 ));
+DATA(insert (	3550   869  869  1 3553 ));
+DATA(insert (	3550   869  869  2 3554 ));
+DATA(insert (	3550   869  869  3 3555 ));
+DATA(insert (	3550   869  869  4 3556 ));
+DATA(insert (	3550   869  869  5 3557 ));
+DATA(insert (	3550   869  869  6 3558 ));
+DATA(insert (	3550   869  869  7 3559 ));
+DATA(insert (	3550   869  869  9 3573 ));
 
 
 /* gin */
@@ -405,14 +413,6 @@ DATA(insert (	4037   3802 3802 2 3485 ));
 DATA(insert (	4037   3802 3802 3 3486 ));
 DATA(insert (	4037   3802 3802 4 3487 ));
 DATA(insert (	4037   3802 3802 6 3489 ));
-DATA(insert (	3550   869	869  1 3553 ));
-DATA(insert (	3550   869	869  2 3554 ));
-DATA(insert (	3550   869	869  3 3555 ));
-DATA(insert (	3550   869	869  4 3556 ));
-DATA(insert (	3550   869	869  5 3557 ));
-DATA(insert (	3550   869	869  6 3558 ));
-DATA(insert (	3550   869	869  7 3559 ));
-DATA(insert (	3550   869	869  9 3573 ));
 
 /* sp-gist */
 DATA(insert (	3474   3831 3831 1 3469 ));

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] How about to have relnamespace and relrole?

2015-03-31 Thread Tom Lane
Andrew Dunstan and...@dunslane.net writes:
 On 03/29/2015 02:55 PM, Tom Lane wrote:
 It may be that these two cases are so much more useful than any other
 conceivable cases that we can do them and stop, but I don't think that
 argument has been made convincingly.

 Well, here's a list of all the fooname attributes in the catalog, which 
 I guess are the prime candidates for regfoo pseudotypes. Besides those 
 we already have and the two proposed here, I'm not sure there will be 
 huge demand for others - tablespace maybe, trigger doesn't seem very 
 practicable, and I could just see suggestions for collation and 
 conversion, but those seem pretty marginal, and that seems to be about 
 it, to me.

Hmm.  We can ignore pg_attribute and pg_pltemplate, which don't have OIDs
and thus aren't candidates anyway.  And we can ignore the ones
corresponding to the already-existing regFOO types.  That leaves

   pg_am  | amname
   pg_authid  | rolname  (*)
   pg_collation   | collname
   pg_constraint  | conname
   pg_conversion  | conname
   pg_database| datname
   pg_event_trigger   | evtname
   pg_extension   | extname
   pg_foreign_data_wrapper| fdwname
   pg_foreign_server  | srvname
   pg_language| lanname
   pg_namespace   | nspname  (*)
   pg_opclass | opcname
   pg_opfamily| opfname
   pg_policy  | polname
   pg_rewrite | rulename
   pg_tablespace  | spcname
   pg_trigger | tgname
   pg_ts_parser   | prsname
   pg_ts_template | tmplname

of which the proposed patch covers the two starred ones.

OTOH, looking at this list, there are already numerous cases where
the object identity is more than just a name (eg, collations have
schema-qualified names, opfamilies are not only schema-qualified
but are per-index-AM as well, triggers and constraints are named
per-table, etc).  So it's clear that we've already been applying
a usefulness criterion rather than just "does it have a
multi-component name" when choosing which objects to provide
regFOO types for.

In view of that, you could certainly argue that if someone's bothered
to make a patch to add a new regFOO type, it's useful enough.  I don't
want to end up with thirtysomething of them, but we don't seem to be
trending in that direction.

Or in short, objection withdrawn.  (As to the concept, anyway.
I've not read the patch...)

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Something is rotten in the state of Denmark...

2015-03-31 Thread Tom Lane
Observe these recent buildfarm failures:

http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mule&dt=2015-03-21%2000%3A30%3A02
http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=guaibasaurus&dt=2015-03-23%2004%3A17%3A01
http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mule&dt=2015-03-31%2023%3A30%3A02

Three similar-looking failures, on two different machines, in a regression
test that has existed for less than three weeks.  Something is very wrong.

What's worse, the failure does not look like something that could have
been directly provoked by commit 31eae6028eca4365e7165f5f33fee1ed0486aee0;
I'm afraid it's just falling victim to a bug that may have been there for
a while.  Given the evidently low probability of triggering the bug, it
may be hard to isolate :-(

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers