Re: [HACKERS] RFC: replace pg_stat_activity.waiting with something more descriptive

2015-08-05 Thread Amit Kapila
On Tue, Aug 4, 2015 at 5:45 PM, Robert Haas robertmh...@gmail.com wrote:

 On Tue, Jul 28, 2015 at 3:28 PM, Heikki Linnakangas hlinn...@iki.fi
wrote:
  * The patch requires that the LWLOCK_INDIVIDUAL_NAMES array is kept in
sync
  with the list of individual locks in lwlock.h. Sooner or later someone
will
  add an LWLock and forget to update the names-array. That needs to be
made
  less error-prone, so that the names are maintained in the same place as
the
  #defines. Perhaps something like rmgrlist.h.

 This is a good idea, but it's not easy to do in the style of
 rmgrlist.h, because I don't believe there's any way to define a macro
 that expands to a preprocessor directive.  Attached is a patch that
 instead generates the list of macros from a text file, and also
 generates an array inside lwlock.c with the lock names that gets used
 by the Trace_lwlocks stuff where applicable.

 Any objections to this solution to the problem?  If not, I'd like to
 go ahead and push this much.  I can't test the Windows changes
 locally, though, so it would be helpful if someone could check that
 out.


Okay, I have checked it on Windows and found the below issues, which I
have fixed in the patch attached to this mail.

1. Got below error while running mkvcbuild.pl

Use of uninitialized value in numeric lt () at Solution.pm line 103.
Could not open src/backend/storage/lmgr/lwlocknames.h at Mkvcbuild.pm line
674.

+ if (IsNewer(
+ 'src/backend/storage/lmgr/lwlocknames.txt',
'src/include/storage/lwlocknames.h'))

I think the usage of old and new filenames in IsNewer is reversed here.

2.
+ print "Generating lwlocknames.c and fmgroids.h...\n";

Here it should be lwlocknames.h instead of fmgroids.h

3.
+ if (IsNewer(
+ 'src/include/storage/lwlocknames.h',
+
'src/backend/storage/lmgr/fmgroids.h'))

Here, it seems lwlocknames.h needs to be compared instead of fmgroids.h.

4. Got below error while running mkvcbuild.pl
Generating lwlocknames.c and fmgroids.h...
/* autogenerated from src/backend/storage/lmgr/lwlocknames.txt, do not edit
*/
/* there is deliberately not an #ifndef LWLOCKNAMES_H here */
rename: lwlocknames.h.tmp3180: Permission denied at generate-lwlocknames.pl
line
 59, $lwlocknames line 47.

In generate-lwlocknames.pl, below line is causing problem.

rename($htmp, 'lwlocknames.h') || die "rename: $htmp: $!";

The reason is that the tmp files are not closed before the rename.


Some other general observations

5.
On running perl mkvcbuild.pl in Windows, below message is generated.

Generating lwlocknames.c and lwlocknames.h...
/* autogenerated from src/backend/storage/lmgr/lwlocknames.txt, do not edit
*/
/* there is deliberately not an #ifndef LWLOCKNAMES_H here */

This message gets displayed inside C comments, which looks slightly odd
and out of place, and other similar .pl files don't generate such a
message.  Having said that, I think it displays useful information, so it
might be okay to retain it.

6.
+
+maintainer-clean: clean
+ rm -f lwlocknames.h

Here, don't we need to clean lwlocknames.c?



With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


lwlocknames-v2.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Dependency between bgw_notify_pid and bgw_flags

2015-08-05 Thread Ashutosh Bapat
On Wed, Aug 5, 2015 at 2:07 AM, Robert Haas robertmh...@gmail.com wrote:

 On Tue, Jul 7, 2015 at 4:34 AM, Ashutosh Bapat
 ashutosh.ba...@enterprisedb.com wrote:
  With that notion of backend, to fix the original problem I reported,
  PostmasterMarkPIDForWorkerNotify() should also look at the
  BackgroundWorkerList. As per the comments in the prologue of this
 function,
  it assumes that only backends need to be notified about worker state
 change,
  which looks too restrictive. Any backend or background worker, which
 wants
  to be notified about a background worker it requested to be spawned,
 should
  be allowed to do so.

 Yeah.  I'm wondering if we should fix this problem by just insisting
 that all workers have an entry in BackendList.  At first look, this
 seems like it would make the code a lot simpler and have virtually no
 downside.  As it is, we're repeatedly reinventing new and different
 ways for unconnected background workers to do things that work fine in
 all other cases, and I don't see the point of that.

 I haven't really tested the attached patch - sadly, we have no testing
 of background workers without shared memory access in the tree - but
 it shows what I have in mind.

 Thoughts?


This idea looks good.

Looking at the larger picture, we should also enable this feature to be used
by auxiliary processes. It's very hard to add a new auxiliary process in the
current code. One has to add code at many places to make sure that the
auxiliary processes die and are re-started correctly. It is even tougher to
add a parent auxiliary process which spawns multiple worker processes. That
would be a whole lot simpler if we could allow the auxiliary processes to
use the background worker infrastructure (which is what they ultimately are).

About the flags BGWORKER_BACKEND_DATABASE_CONNECTION and
BGWORKER_SHMEM_ACCESS:
BGWORKER_BACKEND_DATABASE_CONNECTION is used at seven places in the code:
the first is an assertion; two check the existence of this flag when a
backend actually connects to a database; the fourth checks whether
BGWORKER_SHMEM_ACCESS is also set; the fifth creates parallel workers with
this flag; the sixth uses the flag to add the backend to the backend list
(which you have removed). The seventh usage is the only one which installs
signal handlers based on this flag, which too, I guess, can be overridden
(or set correctly) by the actual background worker code.

BGWORKER_SHMEM_ACCESS has similar usage, except that it resets the on-exit
callbacks and detaches the shared memory segment from the background
worker. That avoids a full cluster restart when one of those workers which
cannot corrupt shared memory dies. But I do not see any check to prevent
such a backend from calling PGSharedMemoryReattach().

So it looks like it suffices to assume that a background worker either needs
to access shared memory or doesn't. Any background worker having shared
memory access can also access the database and thus becomes part of the
backend list. Or maybe we just avoid these flags and treat every background
worker as if it passed both these flags. That would simplify a lot of code.

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company


Re: [HACKERS] ON CONFLICT DO UPDATE using EXCLUDED.column gives an error about mismatched types

2015-08-05 Thread Amit Langote
On 2015-08-05 AM 06:44, Peter Geoghegan wrote:
 On Tue, Aug 4, 2015 at 2:29 AM, Amit Langote
 langote_amit...@lab.ntt.co.jp wrote:
 Perhaps, it may have to do with how EXCLUDED pseudo-rel's targetlist is
 manipulated through parse-plan stage?
 
 I think so, yes.
 
 I'll look into writing a fix for this later in the week.
 

Just a heads-up.

I forgot to mention one thing yesterday. The way exclRelTlist is
initialized, all the way in the beginning (transformOnConflictClause), is
most probably to blame. It uses expandRelAttrs() for other valid reasons;
but within it, expandRTE() is called with 'false' for include_dropped
(columns). I applied the attached (perhaps ugly) fix, and it seemed to fix
the issue. But I guess you'll be able to come up with some better fix.

Thanks,
Amit
diff --git a/src/backend/parser/analyze.c b/src/backend/parser/analyze.c
index a0dfbf9..cd67c96 100644
--- a/src/backend/parser/analyze.c
+++ b/src/backend/parser/analyze.c
@@ -915,7 +915,7 @@ transformOnConflictClause(ParseState *pstate,
 		 * required permissions back.
 		 */
 		exclRelTlist = expandRelAttrs(pstate, exclRte,
-	  exclRelIndex, 0, -1);
+	  exclRelIndex, 0, -1, true);
 		exclRte->requiredPerms = 0;
 		exclRte->selectedCols = NULL;
 
@@ -1312,7 +1312,7 @@ transformValuesClause(ParseState *pstate, SelectStmt *stmt)
 	 * Generate a targetlist as though expanding *
 	 */
 	Assert(pstate->p_next_resno == 1);
-	qry->targetList = expandRelAttrs(pstate, rte, rtindex, 0, -1);
+	qry->targetList = expandRelAttrs(pstate, rte, rtindex, 0, -1, false);
 
 	/*
 	 * The grammar allows attaching ORDER BY, LIMIT, and FOR UPDATE to a
diff --git a/src/backend/parser/parse_relation.c b/src/backend/parser/parse_relation.c
index 0b2dacf..98124e4 100644
--- a/src/backend/parser/parse_relation.c
+++ b/src/backend/parser/parse_relation.c
@@ -2418,7 +2418,8 @@ expandTupleDesc(TupleDesc tupdesc, Alias *eref, int count, int offset,
  */
 List *
 expandRelAttrs(ParseState *pstate, RangeTblEntry *rte,
-			   int rtindex, int sublevels_up, int location)
+			   int rtindex, int sublevels_up, int location,
+			   bool include_dropped_cols)
 {
 	List	   *names,
 			   *vars;
@@ -2426,7 +2427,7 @@ expandRelAttrs(ParseState *pstate, RangeTblEntry *rte,
 			   *var;
 	List	   *te_list = NIL;
 
-	expandRTE(rte, rtindex, sublevels_up, location, false,
+	expandRTE(rte, rtindex, sublevels_up, location, include_dropped_cols,
 			  names, vars);
 
 	/*
@@ -2448,6 +2449,10 @@ expandRelAttrs(ParseState *pstate, RangeTblEntry *rte,
 			 false);
 		te_list = lappend(te_list, te);
 
+		/* Dropped columns ain't Vars */
+		if (!IsA(varnode, Var))
+			continue;
+
 		/* Require read access to each column */
 		markVarForSelectPriv(pstate, varnode, rte);
 	}
diff --git a/src/backend/parser/parse_target.c b/src/backend/parser/parse_target.c
index 1b3fcd6..6712464 100644
--- a/src/backend/parser/parse_target.c
+++ b/src/backend/parser/parse_target.c
@@ -1185,7 +1185,7 @@ ExpandAllTables(ParseState *pstate, int location)
 			RTERangeTablePosn(pstate, rte,
 			  NULL),
 			0,
-			location));
+			location, false));
 	}
 
 	/*
@@ -1253,7 +1253,7 @@ ExpandSingleTable(ParseState *pstate, RangeTblEntry *rte,
 	{
 		/* expandRelAttrs handles permissions marking */
 		return expandRelAttrs(pstate, rte, rtindex, sublevels_up,
-			  location);
+			  location, false);
 	}
 	else
 	{
diff --git a/src/include/parser/parse_relation.h b/src/include/parser/parse_relation.h
index e2875a0..591e049 100644
--- a/src/include/parser/parse_relation.h
+++ b/src/include/parser/parse_relation.h
@@ -110,8 +110,8 @@ extern void errorMissingColumn(ParseState *pstate,
 extern void expandRTE(RangeTblEntry *rte, int rtindex, int sublevels_up,
 		  int location, bool include_dropped,
 		  List **colnames, List **colvars);
-extern List *expandRelAttrs(ParseState *pstate, RangeTblEntry *rte,
-			   int rtindex, int sublevels_up, int location);
+extern List *expandRelAttrs(ParseState *pstate, RangeTblEntry *rte, int rtindex,
+			int sublevels_up, int location, bool include_dropped_cols);
 extern int	attnameAttNum(Relation rd, const char *attname, bool sysColOK);
 extern Name attnumAttName(Relation rd, int attid);
 extern Oid	attnumTypeId(Relation rd, int attid);

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Patch for ginCombineData

2015-08-05 Thread Robert Abraham
Hello,

we are using GIN indexes on big tables. These tables happen to have several
billion rows.
The index creation fails in ginCombineData in src/backend/access/gin/ginbulk.c
because repalloc is limited to 1 GB.
This limitation makes no sense in this context (compare the comments in
src/include/utils/memutils.h).
To overcome this limitation on tables with lots of rows, repalloc_huge is
used instead.
The test suite still succeeds on my machine.
Find the patch attached,

Kind regards,

Robert Abraham
*** a/src/backend/access/gin/ginbulk.c
--- b/src/backend/access/gin/ginbulk.c
***
*** 39,45  ginCombineData(RBNode *existing, const RBNode *newdata, void *arg)
  		accum->allocatedMemory -= GetMemoryChunkSpace(eo->list);
  		eo->maxcount *= 2;
  		eo->list = (ItemPointerData *)
! 			repalloc(eo->list, sizeof(ItemPointerData) * eo->maxcount);
  		accum->allocatedMemory += GetMemoryChunkSpace(eo->list);
  	}
  
--- 39,45 
  		accum->allocatedMemory -= GetMemoryChunkSpace(eo->list);
  		eo->maxcount *= 2;
  		eo->list = (ItemPointerData *)
! 			repalloc_huge(eo->list, sizeof(ItemPointerData) * eo->maxcount);
  		accum->allocatedMemory += GetMemoryChunkSpace(eo->list);
  	}
  

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Add column-name hint to log messages generated by inserts when varchars don't fit

2015-08-05 Thread Stepan Rutz

Hi everybody again,

on our production servers I see quite a few errors due to excessively long varchar values which application code tries to insert into tables and which don't fit.
The tables have many columns, the statements are not readable, and many columns happen to have the same length; powers of 2 most often, for some odd reason.

I fired up gdb and saw that the error message is generated during the preprocessing of the query, where some kind of constant-folding/constant-elimination happens on the parse tree. I went ahead and added a try/catch at some point upwards in the call stack where at least I have the context of the T_TargetEntry. That has a field resname which gives me exactly the information I need: the column which was overflown. With that info I can fix the application code much more easily. The relation name was out of reach for me; there is a void* passed transparently to the constant-mutator, but that is not checkable at that point. That context contains the original top-level statement node, however.

The patch just adds a bit of hinting to the error message and goes on. That alone is really helpful to me and potentially also others.

The attached patch has more info and comments.


Regards from Germany,

Stepan


Stepan Rutz
Phone: +49 (0) 178 654 9284
Email: stepan.r...@gmx.de
Earth: Brunnenallee 25a, 50226 Frechen, Germany

columname_hint.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Add column-name hint to log messages generated by inserts when varchars don't fit

2015-08-05 Thread Stepan Rutz
Hi everybody again,

(Resending this email because apparently I had sent it in HTML format,
which wasn't my intention.)


on our production servers I see quite a few errors due to excessively long
varchar values which application code tries to insert into tables and which
don't fit.
The errors look like

  ERROR:  value too long for type character varying(4)

This is not helping me much. The patch will turn this into

  ERROR:  value too long for type character varying(4) (hint: column-name is
mycolumn)

if the column that was overflown was mycolumn.



The tables have many columns, the statements are not readable, and many
columns happen to have the same length; powers of 2 most often, for some odd
reason.
I fired up gdb and saw that the error message is generated during the
preprocessing of the query, where some kind of
constant-folding/constant-elimination happens on the parse tree. I went ahead
and added a try/catch at some point upwards in the call stack where at least
I have the context of the T_TargetEntry. That has a field resname which gives
me exactly the information I need: the column which was overflown. With that
info I can fix the application code much more easily. The relation name was
out of reach for me; there is a void* passed transparently to the
constant-mutator, but that is not checkable at that point. That context
contains the original top-level statement node, however.
The patch just adds a bit of hinting to the error message and goes on. That
alone is really helpful to me and potentially also others.
The attached patch has more info and comments.
Regards from Germany,
Stepan


Stepan Rutz
Phone: +49 (0) 178 654 9284
Email: stepan.r...@gmx.de
Earth: Brunnenallee 25a, 50226 Frechen, Germany


columname_hint.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pageinspect patch, for showing tuple data

2015-08-05 Thread Nikolay Shaplov
In a message of 3 August 2015 15:35:23, you wrote:
 Nikolay Shaplov wrote:
  This patch adds several new functions, available from SQL queries. All
  these functions are based on heap_page_items, but accept slightly
  different arguments and have one additional column in the result set:
  
  heap_page_tuples - accepts a relation name and blkno, and returns the usual
  heap_page_items set with an additional column that contains a snapshot of
  the tuple data area stored in bytea.
 
 I think the API around get_raw_page is more useful generally.  You might
 have table blocks stored in a bytea column for instance, because you
 extracted from some remote server and inserted into a local table for
 examination.  If you only accept relname/blkno, there's no way to
 examine data other than blocks directly in the database dir, which is
 limiting.
 
  There is also one strange function: _heap_page_items. It is useless for
  practical purposes. Like heap_page_items, it accepts page data as bytea,
  but also gets a relation name. It tries to apply the tuple descriptor of
  that relation to that page data.
 
 For BRIN, I added something similar (brin_page_items), but it receives
 the index OID rather than name, and constructs a tuple descriptor to
 read the data.  I think OID is better than name generally.  (You can
 cast the relation name to regclass).
 
 It's easy to misuse, but these functions are superuser-only, so there
 should be no security issue at least.  The possibility of a crash
 remains real, though, so maybe we should try to come up with a way to
 harden that.

Hmm... maybe you are right. I'll try to change it so that it takes a raw page
and an OID.

-- 
Nikolay Shaplov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch for ginCombineData

2015-08-05 Thread Alexander Korotkov
Hi!

On Wed, Aug 5, 2015 at 1:17 PM, Robert Abraham 
robert.abraha...@googlemail.com wrote:

 we are using GIN indexes on big tables. These tables happen to have
 several billion rows.
 The index creation fails in ginCombineData in
 src/backend/access/gin/ginbulk.c because repalloc is limited to 1 GB.
 This limitation makes no sense in this context (compare the comments in
 src/include/utils/memutils.h).
 To overcome this limitation on tables with lots of rows, repalloc_huge is
 used instead.
 The test suite still succeeds on my machine.
 Find the patch attached,


Thank you for notice and for the patch!
You would need maintenance_work_mem > 1GB and some very frequent entry so
that its posting list exceeds 1 GB by itself.
These circumstances shouldn't be very rare in modern systems. I think it
could be back-patched.

--
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


Re: [HACKERS] Reduce ProcArrayLock contention

2015-08-05 Thread Andres Freund
On 2015-08-04 21:20:20 +0530, Amit Kapila wrote:
 Note - The function header comments on pg_atomic_read_u32 and
 corresponding write call seem to be reversed, but that is something
 separate.

Fixed, thanks for noticing.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] more-helpful-izing a debug message

2015-08-05 Thread Andres Freund
On 2015-08-04 16:38:58 -0400, Robert Haas wrote:
 On Wed, Jul 8, 2015 at 5:38 AM, Marko Tiikkaja ma...@joh.to wrote:
  One of the debug messages related to logical replication could be more
  helpful than it currently is.  The attached patch reorders the two
  operations to make it so.
 
  Please consider patching and back-patching.
 
 Andres, this looks like a bug fix to me.  What do you think?

Yes, definitely. Sorry for letting this fall by the wayside. Pushed.

Regards,

Andres


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Raising our compiler requirements for 9.6

2015-08-05 Thread Andres Freund
On 2015-08-04 16:55:12 -0400, Robert Haas wrote:
 On Tue, Aug 4, 2015 at 3:55 PM, Andres Freund and...@anarazel.de wrote:
  On 2015-08-04 15:45:44 -0400, Tom Lane wrote:
  I'm not sure that there's any great urgency about changing the instances
  that exist now; the real point of this discussion is that we will allow
  new code to use static inlines in headers.
 
  I agree that we don't have to (and probably shouldn't) make the required
  configure changes and change definitions.  But I do think some of the
  current macro usage deserves to be cleaned up at some point. I,
  somewhere during 9.4 or 9.5, redefined some of the larger macros into
  static inlines and it both reduced the binary size and gave minor
  performance benefits.
 
 We typically recommend that people write their new code like the
 existing code.  If we say that the standards for new code are now
 different from old code in this one way, I don't think that's going to
 be very helpful to anyone.

I'm coming around to actually changing more code initially. While I
don't particularly buy the severity of the "make it look like existing
code" issue, I do think it'll be rather confusing to have code dependent
on PG_USE_INLINE when it's statically defined.

So unless somebody protests I'm going to prepare (and commit after
posting) a patch to rip out the bits of code that currently depend on
PG_USE_INLINE.

Greetings,

Andres Freund


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] RFC: replace pg_stat_activity.waiting with something more descriptive

2015-08-05 Thread Robert Haas
On Tue, Aug 4, 2015 at 8:46 PM, Josh Berkus j...@agliodbs.com wrote:
 On 06/25/2015 07:50 AM, Tom Lane wrote:
 To do that, we'd have to change the semantics of the 'waiting' column so
 that it becomes true for non-heavyweight-lock waits.  I'm not sure whether
 that's a good idea or not; I'm afraid there may be client-side code that
 expects 'waiting' to indicate that there's a corresponding row in
 pg_locks.  If we're willing to do that, then I'd be okay with
 allowing wait_status to be defined as last thing waited for; but the
 two points aren't separable.

 Speaking as someone who writes a lot of monitoring and alerting code,
 changing the meaning of the waiting column is OK as long as there's
 still a boolean column named "waiting" and it means "query blocked in
 some way".

 Users are used to pg_stat_activity.waiting failing to join against
 pg_locks ... for one thing, there's timing issues there.  So pretty much
 everything I've seen uses outer joins anyway.

All of that is exactly how I feel about it, too.  It's not totally
painless to redefine waiting, but we're not proposing a *big* change
in semantics.  The way I see it, if we change this now, some people
will need to adjust, but it won't really be a big deal.  If we insist
that waiting is graven in stone, then in 5 years people will still
be wondering why the waiting column is inconsistent with the
wait_state column.  That's not going to be a win.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Reduce ProcArrayLock contention

2015-08-05 Thread Pavan Deolasee
On Tue, Aug 4, 2015 at 8:59 PM, Robert Haas robertmh...@gmail.com wrote:



 I spent some time looking at this patch today and found that a good
 deal of cleanup work seemed to be needed.  Attached is a cleaned-up
 version which makes a number of changes:


 I'm not entirely happy with the name nextClearXidElem but apart from
 that I'm fairly happy with this version.


Can we just call it nextAtomicListElem, and the one in PROC_HDR
headAtomicList, or something like that? Basically we can use the same list
later at other places requiring similar treatment. I don't see anything in
them specific to the clearXid stuff. Rather, it is just some sort of atomic
list of PGPROCs.

I actually even thought if we can define some macros or functions to work
on atomic list of PGPROCs. What we need is:

- Atomic operation to add a PGPROC to the list and return the head of the
list at the time of addition
- Atomic operation to delink a list from the head and return the head of
the list at that time
- Atomic operation to remove a PGPROC from the list and return next element
in the list
- An iterator to work on the list.

This will avoid code duplication when this infrastructure is used at other
places. Any mistake in the sequence of read/write/CAS can lead to a
hard-to-find bug.

Having said that, it might be ok if we go with the current approach and
then revisit this if and when other parts require similar logic.

Thanks,
Pavan

-- 
 Pavan Deolasee   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] Doubt about AccessExclusiveLock in ALTER TABLE .. SET ( .. );

2015-08-05 Thread Robert Haas
On Tue, Aug 4, 2015 at 1:15 PM, Alvaro Herrera alvhe...@2ndquadrant.com wrote:
 That opens up for lock escalation and deadlocks, doesn't it?  You are
 probably thinking that it's okay to ignore those but I don't necessarily
 agree with that.

Agreed.  I think we're making a mountain out of a molehill here.  As
long as the locks that are actually used are monotonic, just use > and
stick a comment in there explaining that it could need adjustment if
we use other lock levels in the future.  I presume all the lock-levels
used for DDL are, and will always be, self-exclusive, so why all this
hand-wringing?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP: Make timestamptz_out less slow.

2015-08-05 Thread David Rowley
On 5 August 2015 at 12:51, David Rowley david.row...@2ndquadrant.com
wrote:

 On 29 July 2015 at 03:25, Andres Freund and...@anarazel.de wrote:

 On 2015-07-29 03:10:41 +1200, David Rowley wrote:
  timestamp_out() = 2015-07-29 02:24:33.34 in 3.506000
  timestamp_out_old() = 2015-07-29 02:24:33.034 in 64.518000
  timestamp_out_af() = 2015-07-29 02:24:33.034 in 2.981000
 
  timestamp_out_old is master's version, the timestamp_out_af() is yours,
 and
  timestamp_out() is my one. times are in seconds to perform 100 million
  calls.

 That looks good.

  So it appears your version is a bit faster than mine, but we're both
 about
  20 times faster than the current one.
  Also mine needs fixed up as the fractional part is not padded the same
 as
  yours, but I doubt that'll affect the performance by much.

 Worthwhile to finish that bit and try ;)

  My view: It's probably not worth going quite as far as you've gone for a
  handful of nanoseconds per call, but perhaps something along the lines
 of
  mine can be fixed up.

 Yes, I agree that yours is probably going to be fast enough.

  Have you thought about what to do when HAVE_INT64_TIMESTAMP is not
 defined?

 I don't think it's actually important. The only difference vs float
 timestamps is that in the latter case we set fsecs to zero BC.

 Unless we want to slow down the common case it seems not unlikely that
 we're going to end up with a separate slow path anyway. E.g. neither
 your version nor mine handles 5 digit years (which is why I fell back to
 the slow path in that case in my patch).


 It occurred to me that handling the 5 digit year is quite a simple change
 to my code:

 We simply just need to check if there was any 'num' left after consuming
 the given space. If there's any left then just use pg_uint2str().
 This keeps things very fast for the likely most common case where the year
 is 4 digits long.

 I've not thought about negative years. The whole function should perhaps
 take signed ints instead of unsigned.


I've made a few changes to this to get the fractional seconds part working
as it should.

It also now works fine with 5 digit years.

It's still in the form of the test program, but it should be simple enough
to pull out what's required from that and put into Postgres.

I've also changed my version of AppendSeconds() so that it returns a
pointer to the new end of the string. This should be useful, as I see some
strlen() calls to get the new end of string. It'll be easy to remove those
now, which will further increase performance.

timestamp_out() is the proposed new version
timestamp_out_old() is master's version
timestamp_out_af() is your version

Performance is as follows:

With Clang
david@ubuntu:~/C$ clang timestamp_out.c -o timestamp_out -O2
david@ubuntu:~/C$ ./timestamp_out
timestamp_out() = 2015-07-29 02:24:33.034 in 0.313686
timestamp_out_old() = 2015-07-29 02:24:33.034 in 5.048472
timestamp_out_af() = 2015-07-29 02:24:33.034 in 0.198147

With gcc
david@ubuntu:~/C$ gcc timestamp_out.c -o timestamp_out -O2
david@ubuntu:~/C$ ./timestamp_out
timestamp_out() = 2015-07-29 02:24:33.034 in 0.405795
timestamp_out_old() = 2015-07-29 02:24:33.034 in 4.678918
timestamp_out_af() = 2015-07-29 02:24:33.034 in 0.270557

Regards

David Rowley
--
 David Rowley   http://www.2ndQuadrant.com/
http://www.2ndquadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
#include <stdio.h>
#include <time.h>
#include <string.h>
#include <stdlib.h>

struct pg_tm
{
	int			tm_sec;
	int			tm_min;
	int			tm_hour;
	int			tm_mday;
	int			tm_mon;			/* origin 0, not 1 */
	int			tm_year;		/* relative to 1900 */
	int			tm_wday;
	int			tm_yday;
	int			tm_isdst;
	long int	tm_gmtoff;
	const char *tm_zone;
};

static char *pg_uint2str(char *str, unsigned int value);
static char *pg_uint2str_padding(char *str, unsigned int value, unsigned int padding);

static char *
AppendSeconds2(char *cp, int sec, unsigned int fsec, int precision, char fillzeros)
{
	
	if (fillzeros)
		cp = pg_uint2str_padding(cp, abs(sec), 2);
	else
		cp = pg_uint2str(cp, abs(sec));
	
	if (fsec != 0)
	{
		unsigned int value = fsec;
		char *end = &cp[precision + 1];
		int gotnonzero = 0;

		*cp++ = '.';
	
		/*
		 * Append the fractional seconds part. Note that we don't want any
		 * trailing zeros here, so since we're building the number in reverse
		 * we'll skip appending any zeros, unless we've seen a non-zero.
		 */
		while (precision--)
		{
			int		remainder;
			int		oldval = value;

			value /= 10;
			remainder = oldval - value * 10;
			
			/* check if we got a non-zero */
			if (remainder)
				gotnonzero = 1;

			if (gotnonzero)
				cp[precision] = '0' + remainder;
			else
				end = &cp[precision];
		}
		
		/*
		 * if we have a non-zero value then precision must have not been enough
		 * to print the number, we'd better have another go. There won't be any
		 * zero padding, so we can just use pg_uint2str()
		 */
		if (value > 0)
			return pg_uint2str(cp, fsec);

		*end = '\0';
		
		

Re: [HACKERS] Raising our compiler requirements for 9.6

2015-08-05 Thread Andres Freund
On 2015-08-05 14:05:24 +0200, Andres Freund wrote:
 So unless somebody protests I'm going to prepare (and commit after
 posting) a patch to rip out the bits of code that currently depend on
 PG_USE_INLINE.

Here's that patch. I only removed code dependent on PG_USE_INLINE. We
might later want to change some of the harder-to-maintain macros to
inline functions, but that seems better done separately.

Regards,

Andres
From 72025848dc6de01c5ce014a4fde12d7a8610252d Mon Sep 17 00:00:00 2001
From: Andres Freund and...@anarazel.de
Date: Wed, 5 Aug 2015 15:02:56 +0200
Subject: [PATCH] Rely on inline functions even if that causes warnings in
 older compilers.

So far we have worked around the fact that some very old compilers do
not support 'inline' functions by only using inline functions
conditionally (or not at all). Since such compilers are very rare by
now, we have decided to rely on inline functions from 9.6 onwards.

To avoid breaking these old compilers inline is defined away when not
supported. That'll cause "function x defined but not used" type of
warnings, but since nobody develops on such compilers anymore that's
ok.

This change in policy will allow us to more easily employ inline
functions.

I chose to remove code previously conditional on PG_USE_INLINE as it
seemed confusing to have code dependent on a define that's always
defined.

Discussion: 20150701161447.gb30...@awork2.anarazel.de
---
 config/c-compiler.m4  |  34 --
 config/test_quiet_include.h   |  18 -
 configure |  38 ---
 configure.in  |   2 +-
 src/backend/lib/ilist.c   |   3 -
 src/backend/nodes/list.c  |   3 -
 src/backend/port/atomics.c|   7 --
 src/backend/utils/adt/arrayfuncs.c|   3 -
 src/backend/utils/mmgr/mcxt.c |   3 -
 src/backend/utils/sort/sortsupport.c  |   3 -
 src/include/access/gin_private.h  |   8 +--
 src/include/c.h   |  28 
 src/include/lib/ilist.h   | 106 --
 src/include/nodes/pg_list.h   |  17 ++---
 src/include/pg_config.h.in|   4 --
 src/include/pg_config.h.win32 |   6 +-
 src/include/port/atomics.h| 101 
 src/include/port/atomics/arch-x86.h   |   4 --
 src/include/port/atomics/fallback.h   |   5 --
 src/include/port/atomics/generic-acc.h|   8 +--
 src/include/port/atomics/generic-gcc.h|   8 +--
 src/include/port/atomics/generic-msvc.h   |   8 ---
 src/include/port/atomics/generic-sunpro.h |   4 --
 src/include/port/atomics/generic-xlc.h|   8 ---
 src/include/port/atomics/generic.h|   4 --
 src/include/utils/arrayaccess.h   |  19 +-
 src/include/utils/palloc.h|  11 +---
 src/include/utils/sortsupport.h   |  18 +
 28 files changed, 70 insertions(+), 411 deletions(-)
 delete mode 100644 config/test_quiet_include.h

diff --git a/config/c-compiler.m4 b/config/c-compiler.m4
index 050bfa5..397e1b0 100644
--- a/config/c-compiler.m4
+++ b/config/c-compiler.m4
@@ -17,40 +17,6 @@ fi])# PGAC_C_SIGNED
 
 
 
-# PGAC_C_INLINE
-# -
-# Check if the C compiler understands inline functions without being
-# noisy about unused static inline functions. Some older compilers
-# understand inline functions (as tested by AC_C_INLINE) but warn about
-# them if they aren't used in a translation unit.
-#
-# This test used to just define an inline function, but some compilers
-# (notably clang) got too smart and now warn about unused static
-# inline functions when defined inside a .c file, but not when defined
-# in an included header. Since the latter is what we want to use, test
-# to see if the warning appears when the function is in a header file.
-# Not pretty, but it works.
-#
-# Defines: inline, PG_USE_INLINE
-AC_DEFUN([PGAC_C_INLINE],
-[AC_C_INLINE
-AC_CACHE_CHECK([for quiet inline (no complaint if unreferenced)], pgac_cv_c_inline_quietly,
-  [pgac_cv_c_inline_quietly=no
-  if test "$ac_cv_c_inline" != no; then
-    pgac_c_inline_save_werror=$ac_c_werror_flag
-    ac_c_werror_flag=yes
-    AC_LINK_IFELSE([AC_LANG_PROGRAM([#include "$srcdir/config/test_quiet_include.h"],[])],
-                   [pgac_cv_c_inline_quietly=yes])
-    ac_c_werror_flag=$pgac_c_inline_save_werror
-  fi])
-if test "$pgac_cv_c_inline_quietly" != no; then
-  AC_DEFINE_UNQUOTED([PG_USE_INLINE], 1,
-    [Define to 1 if "static inline" works without unwanted warnings from ]
-    [compilations where static inline functions are defined but not called.])
-fi
-])# PGAC_C_INLINE
-
-
 # PGAC_C_PRINTF_ARCHETYPE
 # ---
 # Set the format archetype used by gcc to check printf type functions.  We
diff --git a/config/test_quiet_include.h b/config/test_quiet_include.h
deleted file mode 100644
index 732b231..000
--- 

Re: [HACKERS] Reduce ProcArrayLock contention

2015-08-05 Thread Robert Haas
On Wed, Aug 5, 2015 at 8:30 AM, Pavan Deolasee pavan.deola...@gmail.com wrote:
 I actually even thought if we can define some macros or functions to work on
 atomic list of PGPROCs. What we need is:

 - Atomic operation to add a PGPROC to the list and return the head of the
 list at the time of addition
 - Atomic operation to delink a list from the head and return the head of the
 list at that time
 - Atomic operation to remove a PGPROC from the list and return next element
 in the list
 - An iterator to work on the list.

The third operation is unsafe because of the A-B-A problem.  That's
why the patch clears the whole list instead of popping an individual
entry.

 This will avoid code duplication when this infrastructure is used at other
 places. Any mistake in the sequence of read/write/cas can lead to a hard to
 find bug.

 Having said that, it might be ok if we go with the current approach and then
 revisit this if and when other parts require similar logic.

Yeah, I don't think we should assume this will be a generic facility.
We can make it one later if needed.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Raising our compiler requirements for 9.6

2015-08-05 Thread Andres Freund
On 2015-08-05 15:08:29 +0200, Andres Freund wrote:
 We might later want to change some of the harder to maintain macros to
 inline functions, but that seems better done separately.

Here's a conversion for fastgetattr() and heap_getattr(). Not only is
the resulting code significantly more readable, but the conversion also
shrinks the code size:

$ ls -l src/backend/postgres_stock src/backend/postgres
-rwxr-xr-x 1 andres andres 37054832 Aug  5 15:18 src/backend/postgres_stock
-rwxr-xr-x 1 andres andres 37209288 Aug  5 15:23 src/backend/postgres

$ size src/backend/postgres_stock src/backend/postgres
   text    data     bss     dec     hex filename
7026843   49982  298584 7375409  708a31 src/backend/postgres_stock
7023851   49982  298584 7372417  707e81 src/backend/postgres

stock is the binary compiled without the conversion.

A lot of the size difference is debugging information (which now needs
less duplicated information on each callsite), but you can see that the
text section (the actual executable code) shrank by 3k.

stripping the binary shows exactly that:
-rwxr-xr-x 1 andres andres 7076760 Aug  5 15:44 src/backend/postgres_s
-rwxr-xr-x 1 andres andres 7079512 Aug  5 15:45 src/backend/postgres_stock_s

To be sure this doesn't cause problems I ran a readonly pgbench. There's
a very slight, but reproducible, performance benefit. I don't think
that's a reason for the change, I just wanted to make sure there's no
regressions.

Regards,

Andres

From cca830c39a6c46caf0c68b340399cf60f04ad31f Mon Sep 17 00:00:00 2001
From: Andres Freund and...@anarazel.de
Date: Wed, 5 Aug 2015 15:30:20 +0200
Subject: [PATCH] Convert fastgetattr() and heap_getattr() into inline
 functions.

The current macro is very hard to read. Now that we can unconditionally
use inline functions there's no reason to keep them that way.

In my gcc 5 based environment this shaves off nearly 200k of the final
binary size. A lot of that is debugging information, but the .text
section alone shrinks by 3k.
---
 src/backend/access/heap/heapam.c  |  46 ---
 src/include/access/htup_details.h | 157 ++
 2 files changed, 75 insertions(+), 128 deletions(-)

diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 3701d8e..ff93b3c 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -983,52 +983,6 @@ heapgettup_pagemode(HeapScanDesc scan,
 }
 
 
-#if defined(DISABLE_COMPLEX_MACRO)
-/*
- * This is formatted so oddly so that the correspondence to the macro
- * definition in access/htup_details.h is maintained.
- */
-Datum
-fastgetattr(HeapTuple tup, int attnum, TupleDesc tupleDesc,
-			bool *isnull)
-{
-	return (
-			(attnum) > 0 ?
-			(
-			 (*(isnull) = false),
-			 HeapTupleNoNulls(tup) ?
-			 (
-			  (tupleDesc)->attrs[(attnum) - 1]->attcacheoff >= 0 ?
-			  (
-			   fetchatt((tupleDesc)->attrs[(attnum) - 1],
-						(char *) (tup)->t_data + (tup)->t_data->t_hoff +
-						(tupleDesc)->attrs[(attnum) - 1]->attcacheoff)
-			   )
-			  :
-			  nocachegetattr((tup), (attnum), (tupleDesc))
-			  )
-			 :
-			 (
-			  att_isnull((attnum) - 1, (tup)->t_data->t_bits) ?
-			  (
-			   (*(isnull) = true),
-			   (Datum) NULL
-			   )
-			  :
-			  (
-			   nocachegetattr((tup), (attnum), (tupleDesc))
-			   )
-			  )
-			 )
-			:
-			(
-			 (Datum) NULL
-			 )
-		);
-}
-#endif   /* defined(DISABLE_COMPLEX_MACRO) */
-
-
 /* 
  *	 heap access method interface
  * 
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 8dd530bd..9d49c5e 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -673,6 +673,38 @@ struct MinimalTupleData
 #define HeapTupleSetOid(tuple, oid) \
 		HeapTupleHeaderSetOid((tuple)->t_data, (oid))
 
+/* prototypes for functions in common/heaptuple.c */
+extern Size heap_compute_data_size(TupleDesc tupleDesc,
+	   Datum *values, bool *isnull);
+extern void heap_fill_tuple(TupleDesc tupleDesc,
+Datum *values, bool *isnull,
+char *data, Size data_size,
+uint16 *infomask, bits8 *bit);
+extern bool heap_attisnull(HeapTuple tup, int attnum);
+extern Datum nocachegetattr(HeapTuple tup, int attnum,
+			   TupleDesc att);
+extern Datum heap_getsysattr(HeapTuple tup, int attnum, TupleDesc tupleDesc,
+bool *isnull);
+extern HeapTuple heap_copytuple(HeapTuple tuple);
+extern void heap_copytuple_with_tuple(HeapTuple src, HeapTuple dest);
+extern Datum heap_copy_tuple_as_datum(HeapTuple tuple, TupleDesc tupleDesc);
+extern HeapTuple heap_form_tuple(TupleDesc tupleDescriptor,
+Datum *values, bool *isnull);
+extern HeapTuple heap_modify_tuple(HeapTuple tuple,
+  TupleDesc tupleDesc,
+  Datum *replValues,
+  bool *replIsnull,
+  bool *doReplace);
+extern void heap_deform_tuple(HeapTuple tuple, TupleDesc tupleDesc,
+  Datum 

Re: [HACKERS] Raising our compiler requirements for 9.6

2015-08-05 Thread Tom Lane
Andres Freund and...@anarazel.de writes:
 On 2015-08-05 14:05:24 +0200, Andres Freund wrote:
 So unless somebody protests I'm going to prepare (and commit after
 posting) a patch to rip out the bits of code that currently depend on
 PG_USE_INLINE.

 Here's that patch. I only removed code dependent on PG_USE_INLINE. We
 might later want to change some of the harder to maintain macros to
 inline functions, but that seems better done separately.

Hmm.  I notice that this removes Noah's hack from commit c53f73879f552a3c.
Do we care about breaking old versions of xlc, and if so, how are we going
to fix that?  (I assume it should be possible to override AC_C_INLINE's
result, but I'm not sure where would be a good place to do so.)

regards, tom lane




Re: [HACKERS] Raising our compiler requirements for 9.6

2015-08-05 Thread Andres Freund
On 2015-08-05 10:08:10 -0400, Tom Lane wrote:
 Andres Freund and...@anarazel.de writes:
  On 2015-08-05 14:05:24 +0200, Andres Freund wrote:
  So unless somebody protests I'm going to prepare (and commit after
  posting) a patch to rip out the bits of code that currently depend on
  PG_USE_INLINE.
 
  Here's that patch. I only removed code dependent on PG_USE_INLINE. We
  might later want to change some of the harder to maintain macros to
  inline functions, but that seems better done separately.
 
 Hmm.  I notice that this removes Noah's hack from commit c53f73879f552a3c.
 Do we care about breaking old versions of xlc, and if so, how are we going
 to fix that?  (I assume it should be possible to override AC_C_INLINE's
 result, but I'm not sure where would be a good place to do so.)

Hm. That's a good point.

How about moving that error check into the aix template file and
erroring out there? Since this is master I think it's perfectly fine to
refuse to work with the buggy unsupported 32 bit compiler. The argument
not to do so was that PG previously worked in the back branches
depending on the minor version, but that's not an argument on master.




Re: [HACKERS] Raising our compiler requirements for 9.6

2015-08-05 Thread Tom Lane
Andres Freund and...@anarazel.de writes:
 On 2015-08-05 10:08:10 -0400, Tom Lane wrote:
 Hmm.  I notice that this removes Noah's hack from commit c53f73879f552a3c.
 Do we care about breaking old versions of xlc, and if so, how are we going
 to fix that?  (I assume it should be possible to override AC_C_INLINE's
 result, but I'm not sure where would be a good place to do so.)

 Hm. That's a good point.

 How about moving that error check into the aix template file and
 erroring out there? Since this is master I think it's perfectly fine to
 refuse to work with the buggy unsupported 32 bit compiler. The argument
 not to do so was that PG previously worked in the back branches
 depending on the minor version, but that's not an argument on master.

The check as Noah wrote it rejects *all* 32-bit IBM compilers, not just
buggy ones.  That was okay when the effect was only a rather minor
performance loss, but refusing to build at all would raise the stakes
quite a lot.  Unless you are volunteering to find out how to tell broken
compilers from fixed ones more accurately, I think you need to confine
the effects of the check to disabling inlining.

regards, tom lane




Re: [HACKERS] Raising our compiler requirements for 9.6

2015-08-05 Thread Andres Freund
On 2015-08-05 10:23:31 -0400, Tom Lane wrote:
 Andres Freund and...@anarazel.de writes:
  How about moving that error check into the aix template file and
  erroring out there? Since this is master I think it's perfectly fine to
  refuse to work with the buggy unsupported 32 bit compiler. The argument
  not to do so was that PG previously worked in the back branches
  depending on the minor version, but that's not an argument on master.
 
 The check as Noah wrote it rejects *all* 32-bit IBM compilers, not just
 buggy ones.  That was okay when the effect was only a rather minor
 performance loss, but refusing to build at all would raise the stakes
 quite a lot.  Unless you are volunteering to find out how to tell broken
 compilers from fixed ones more accurately, I think you need to confine
 the effects of the check to disabling inlining.

Wasn't the point that 32 bit AIX as a whole hasn't been supported for a
couple years now? My willingness to expend effort for that is rather
limited.

I mean I'd otherwise be ok with a PG_FORCE_DISABLE_INLINE flag that takes
effect in c.h or so. That could easily be set in src/template/aix. Might
also be useful for investigatory purposes every couple years or so.


Andres




Re: [HACKERS] Raising our compiler requirements for 9.6

2015-08-05 Thread Tom Lane
Andres Freund and...@anarazel.de writes:
 Wasn't the point that 32 bit AIX as a whole hasn't been supported for a
 couple years now? My willingness to expend effort for that is rather
 limited.

Well, there's one in the buildfarm.  We don't generally turn off
buildfarm critters just because the underlying OS is out of support
(the population would be quite a bit lower if we did).

 I mean I'd otherwise be ok with a PG_FORCE_DISABLE_INLINE flag that takes
 effect in c.h or so. That could easily be set in src/template/aix. Might
 also be useful for investigatory purposes every couple years or so.

+1, that could have other use-cases.

regards, tom lane




Re: [HACKERS] Raising our compiler requirements for 9.6

2015-08-05 Thread Andres Freund
On 2015-08-05 10:32:48 -0400, Tom Lane wrote:
 Andres Freund and...@anarazel.de writes:
  Wasn't the point that 32 bit AIX as a whole hasn't been supported for a
  couple years now? My willingness to expend effort for that is rather
  limited.
 
 Well, there's one in the buildfarm.

Oh nice. That's new. I see it has been added less than a week ago ;)

 We don't generally turn off buildfarm critters just because the
 underlying OS is out of support (the population would be quite a bit
 lower if we did).

I didn't know there was a buildfarm animal. We'd been pleading with a bunch
of people over the last few years to add one. If there's an actual way
to see platforms breaking I'm more accepting to try and keep them alive.

  I mean I'd otherwise be ok with a PG_FORCE_DISABLE_INLINE flag that takes
  effect in c.h or so. That could easily be set in src/template/aix. Might
  also be useful for investigatory purposes every couple years or so.
 
 +1, that could have other use-cases.

Ok, let's do it that way then. It seems the easiest way to test for this
is to use something like

# IBM XL C/C++ for AIX, V12.1 miscompiles, for 32-bit, some inline
# expansions of ginCompareItemPointers() long long arithmetic.  To
# take advantage of inlining, build a 64-bit PostgreSQL.
test "$(getconf HARDWARE_BITMODE)" == '32' then
   CFLAGS="$CFLAGS -DPG_FORCE_DISABLE_INLINE"
fi

in the xlc part of the template?

do we want to add something like
$as_echo "$as_me: WARNING: disabling inlining on 32 bit aix due to compiler bugs"
? Seems like a good idea to me.

Andres




Re: [HACKERS] [PATCH] libpq: Allow specifying multiple host names to try to connect to

2015-08-05 Thread Bruce Momjian
On Wed, Jul  8, 2015 at 12:24:37PM -0400, Robbie Harwood wrote:
  You update the documentation just for  psql but your change effects any 
  libpq application if we go forward with this patch we should update the 
  documentation for libpq as well.
 
  This approach seems to work with the url style of conninfo
 
  For example
postgres://some-down-host.info,some-other-host.org:5435/test1
 
  seems to work as expected but I don't like that syntax I would rather see
  postgres://some-down-host.info:5435/test1,postgres://some-other-host.org:5435/test1
 
  This would be a more invasive change but I think the syntax is more usable.
 
 I agree with this; it seems to me that it's more powerful to be able to
 specify complete urls for when they may differ.
 
 For the non-url case though, I don't see a clean way of doing this.  We
 could always, e.g., locally bind port specification to the closest host
 specification, but that seems nasty, and is still less powerful than
 passing urls (or we could just do the same for all parameters, but
 that's just a mess).
 
 Might it be reasonable to only allow the multi-host syntax in the
 url-style and not otherwise?

First, I agree this is a very useful feature that we want.  Many NoSQL
databases are promoting multi-host client libraries as HA, which is kind
of humorous, and also makes sense because many NoSQL solutions are
multi-host.

I can see this feature benefitting us for clients to auto-failover
without requiring a pooler or virtual IP reassignment, and also useful
for read-only connections that want to connect to a read-only slave, but
don't care which one.  The idea of randomly selecting a host from the
list might be a future feature.

I agree we should allow the specification of multiple hosts, e.g. -h
host1,host2, but anything more complex should require the URL syntax,
and require full URLs separated by commas, not commas inside a single
URL to specify multiple host names, as shown above.  If repeating
information inside each URL is a problem, the user can still use
connection-specific options to control things, e.g. by using -p 5433,
it is not necessary to specify the port number in the URLs:

$ psql -p 5433 "postgres://localhost/test,postgres://localhost/test2"

I realize this is libpq-feature-creep, but considering the complexities
of a pooler and virtual IP address reassignment, I think adding this
makes sense.  The fact that other DBs are doing it, including I think
VMWare's libpq, supports the idea of adding this simple specification.

Can someone work on a patch to implement this?

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] Reduce ProcArrayLock contention

2015-08-05 Thread Amit Kapila
On Tue, Aug 4, 2015 at 8:59 PM, Robert Haas robertmh...@gmail.com wrote:


 I'm not entirely happy with the name nextClearXidElem but apart from
 that I'm fairly happy with this version.  We should probably test it
 to make sure I haven't broken anything;


I have verified the patch and it is fine.  I have tested it via manual
tests; for long pgbench tests, results are quite similar to previous
versions of patch.

Few changes, I have made in patch:

1.

+static void
+ProcArrayGroupClearXid(PGPROC *proc, TransactionId latestXid)
+{
+	volatile PROC_HDR *procglobal = ProcGlobal;
+	uint32		nextidx;
+	uint32		wakeidx;
+	int			extraWaits = -1;
+
+	/* We should definitely have an XID to clear. */
+	Assert(TransactionIdIsValid(pgxact->xid));


Here Assert is using pgxact which is wrong.

2. Made ProcArrayEndTransactionInternal an inline function as
suggested by you.



With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


group-xid-clearing-v5.patch
Description: Binary data



Re: [HACKERS] Raising our compiler requirements for 9.6

2015-08-05 Thread Tom Lane
Andres Freund and...@anarazel.de writes:
 Ok, let's do it that way then. It seems the easiest way to test for this
 is to use something like

 # IBM XL C/C++ for AIX, V12.1 miscompiles, for 32-bit, some inline
 # expansions of ginCompareItemPointers() long long arithmetic.  To
 # take advantage of inlining, build a 64-bit PostgreSQL.
 test "$(getconf HARDWARE_BITMODE)" == '32' then
 CFLAGS="$CFLAGS -DPG_FORCE_DISABLE_INLINE"
 fi

 in the xlc part of the template?

Actually, much the easiest way to convert what Noah did would be to add

#if defined(__ILP32__) && defined(__IBMC__)
#define PG_FORCE_DISABLE_INLINE
#endif

in src/include/port/aix.h.

regards, tom lane




Re: [HACKERS] Raising our compiler requirements for 9.6

2015-08-05 Thread Andres Freund
On 2015-08-05 11:12:34 -0400, Tom Lane wrote:
 Andres Freund and...@anarazel.de writes:
  Ok, let's do it that way then. It seems the easiest way to test for this
  is to use something like
 
  # IBM XL C/C++ for AIX, V12.1 miscompiles, for 32-bit, some inline
  # expansions of ginCompareItemPointers() long long arithmetic.  To
  # take advantage of inlining, build a 64-bit PostgreSQL.
  test "$(getconf HARDWARE_BITMODE)" == '32' then
 CFLAGS="$CFLAGS -DPG_FORCE_DISABLE_INLINE"
  fi
 
  in the xlc part of the template?

(there's a ; missing and it should be CPPFLAGS rather than CFLAGS)

 Actually, much the easiest way to convert what Noah did would be to add
 
 #if defined(__ILP32__) && defined(__IBMC__)
 #define PG_FORCE_DISABLE_INLINE
 #endif
 
 in src/include/port/aix.h.

I'm ok with that too, although I do like the warning at configure
time. I'd go with the template approach due to that, but I don't feel
strongly about it.

Andres




Re: [HACKERS] Raising our compiler requirements for 9.6

2015-08-05 Thread Tom Lane
Andres Freund and...@anarazel.de writes:
 I'm ok with that too, although I do like the warning at configure
 time. I'd go with the template approach due to that, but I don't feel
 strongly about it.

Meh.  I did *not* like the way you proposed doing that: it looked far too
dependent on autoconf internals (the kind that they change regularly).
If you can think of a way of emitting a warning that isn't going to break
in a future autoconf release, then ok.

regards, tom lane




Re: [HACKERS] Raising our compiler requirements for 9.6

2015-08-05 Thread Andres Freund
On 2015-08-05 11:23:22 -0400, Tom Lane wrote:
 Andres Freund and...@anarazel.de writes:
  I'm ok with that too, although I do like the warning at configure
  time. I'd go with the template approach due to that, but I don't feel
  strongly about it.
 
 Meh.  I did *not* like the way you proposed doing that: it looked far too
 dependent on autoconf internals (the kind that they change regularly).

Hm, I'd actually checked that as_echo worked back till 9.0. But it
doesn't exist in much older releases.

 If you can think of a way of emitting a warning that isn't going to break
 in a future autoconf release, then ok.

echo "$as_me: WARNING: disabling inlining on 32 bit aix due to a bug in xlc" 2>&1

then. That'd have worked back in 7.4 and the worst that'd happen is that
$as_me is empty.





Re: [HACKERS] Dependency between bgw_notify_pid and bgw_flags

2015-08-05 Thread Alvaro Herrera
Ashutosh Bapat wrote:

 Looking at larger picture, we should also enable this feature to be
 used by auxiliary processes. It's very hard to add a new auxiliary
 process in current code.  One has to go add code at many places to
 make sure that the auxilliary processes die and are re-started
 correctly.

No kidding.

 Even tougher to add a parent auxiliary process, which spawns multiple
 worker processes.

I'm not sure about this point.  The autovacuum launcher, which is an
obvious candidate to have children processes, does not do that but
instead talks to postmaster to do it instead.  We did it that way to
avoid the danger of children processes connected to shared memory that
would not be direct children of postmaster; that could cause reliability
problems if the intermediate process fails to signal its children for
some reason.  Along the same lines I would suggest that other bgworker
processes should also be controllers only, that is it can ask postmaster
to start other processes but not start them directly.

 That
 would be a whole lot simpler if we could allow the auxiliary processes to
 use background worker infrastructure (which is what they ultimately are).
 
 About the flags BGWORKER_BACKEND_DATABASE_CONNECTION and
 BGWORKER_SHMEM_ACCESS
  BGWORKER_BACKEND_DATABASE_CONNECTION is used at seven places in the code:
 one is assertion, two check existence of this flag, when backend actually
 connects to a database, fourth checks whether BGWORKER_SHMEM_ACCESS is also
 set, fifth creates parallel workers with this flag, sixth uses the flag to
 add backend to the backed list (which you have removed). Seventh usage is
 only usage which installs signal handlers based on this flag, which too I
 guess can be overridden (or set correctly) by the actual background worker
 code.
 
 BGWORKER_SHMEM_ACCESS has similar usage, except that it resets the on exit
 callbacks and detaches the shared memory segment from the background
 worker. That avoids a full cluster restart when one of those worker which
 can not corrupt shared memory dies. But I do not see any check to prevent
 such backend from calling PGSharedMemoryReattach()
 
 So it looks like, it suffices to assume that background worker either needs
 to access shared memory or doesn't. Any background worker having shared
 memory access can also access database and thus becomes part of the backend
 list. Or may be we just avoid these flags and treat every background worker
 as if it passed both these flags. That will simplify a lot of code.

If you want to make aux processes use the bgworker infrastructure (a
worthy goal I think), it is possible that more uses would be found for
those flags.  For instance, avlauncher is equivalent to a worker that
has SHMEM_ACCESS but no DATABASE_CONNECTION; maybe some of the code
would differ depending on whether or not DATABASE_CONNECTION is
specified.  The parallel to the avlauncher was the main reason I decided
to separate those two flags.  Therefore I wouldn't try to simplify just
yet.  If you succeed in making the avlauncher (and more generally all of
autovacuum) use bgworker code, perhaps that would be the time to search
for simplifications to make, because we would know more about how it
would be used.

From your list above it doesn't sound like DATABASE_CONNECTION actually
does anything useful other than sanity checks.  I wonder if the behavior
that avlauncher becomes part of the backend list would be sane (this is
what would happen if you simply remove the DATABASE_CONNECTION flag and
then turn avlauncher into a regular bgworker).

On further thought, given that almost every aux process has particular
timing needs for start/stop under different conditions, I am doubtful
that you could turn any of them into a bgworker.  Maybe pgstat would be
the only exception, perhaps bgwriter, not sure.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Freeze avoidance of very large table.

2015-08-05 Thread Bruce Momjian
On Wed, Jul  8, 2015 at 02:31:04PM +0100, Simon Riggs wrote:
 On 7 July 2015 at 18:45, Sawada Masahiko sawada.m...@gmail.com wrote:
 
 On Wed, Jul 8, 2015 at 12:37 AM, Andres Freund and...@anarazel.de wrote:
  On 2015-07-07 16:25:13 +0100, Simon Riggs wrote:
  I don't think pg_freespacemap is the right place.
 
  I agree that pg_freespacemap sounds like an odd location.
 
  I'd prefer to add that as a single function into core, so we can write
  formal tests.
 
  With the advent of src/test/modules it's not really a prerequisite for
  things to be builtin to be testable. I think there's fair arguments for
  moving stuff like pg_stattuple, pg_freespacemap, pg_buffercache into
  core at some point, but that's probably a separate discussion.
 
 
 I understood.
 So I will place a bunch of tests like src/test/modules/visibilitymap_test,
 which contains some tests regarding this feature,
 and gather them into one patch.
 
 
 Please place it in core. I see value in having a diagnostic function for
 general use on production systems.

Sorry to be coming to this discussion late.

I understand the desire for a diagnostic function in core, but we have
to be consistent.  Just because we are adding this function now doesn't
mean we should use different rules from what we did previously for
diagnostic functions.  Either there is logic to why this function is
different from the other diagnostic functions in contrib, or we need to
have a separate discussion of whether diagnostic functions belong in
contrib or core.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] Freeze avoidance of very large table.

2015-08-05 Thread Alvaro Herrera
Bruce Momjian wrote:

 I understand the desire for a diagnostic function in core, but we have
 to be consistent.  Just because we are adding this function now doesn't
 mean we should use different rules from what we did previously for
 diagnostic functions.  Either there is logic to why this function is
 different from the other diagnostic functions in contrib, or we need to
 have a separate discussion of whether diagnostic functions belong in
 contrib or core.

Then let's start moving some extensions to src/extension/.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Freeze avoidance of very large table.

2015-08-05 Thread Robert Haas
On Wed, Aug 5, 2015 at 12:36 PM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
 Bruce Momjian wrote:
 I understand the desire for a diagnostic function in core, but we have
 to be consistent.  Just because we are adding this function now doesn't
 mean we should use different rules from what we did previously for
 diagnostic functions.  Either there is logic to why this function is
 different from the other diagnostic functions in contrib, or we need to
 have a separate discussion of whether diagnostic functions belong in
 contrib or core.

 Then let's start moving some extensions to src/extension/.

That seems like yet another separate issue.

FWIW, it seems to me that we've done a heck of a lot of moving stuff
out of contrib over the last few releases.  A bunch of things moved to
src/test/modules and a bunch of things went to src/bin.  We can move
more, of course, but this code reorganization has non-trivial costs
and I'm not clear what benefits we hope to realize and whether we are
in fact realizing those benefits.  At this point, the overwhelming
majority of what's in contrib is extensions; we're not far from being
able to put the whole thing in src/extensions if it really needs to be
moved at all.

But I don't think it's fair to conflate that with Bruce's question,
which it seems to me is both a fair question and a different one.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Freeze avoidance of very large table.

2015-08-05 Thread Alvaro Herrera
Robert Haas wrote:
 On Wed, Aug 5, 2015 at 12:36 PM, Alvaro Herrera
 alvhe...@2ndquadrant.com wrote:
  Bruce Momjian wrote:
  I understand the desire for a diagnostic function in core, but we have
  to be consistent.  Just because we are adding this function now doesn't
  mean we should use different rules from what we did previously for
  diagnostic functions.  Either there is logic to why this function is
  different from the other diagnostic functions in contrib, or we need to
  have a separate discussion of whether diagnostic functions belong in
  contrib or core.
 
  Then let's start moving some extensions to src/extension/.
 
 That seems like yet another separate issue.
 
 FWIW, it seems to me that we've done a heck of a lot of moving stuff
 out of contrib over the last few releases.  A bunch of things moved to
 src/test/modules and a bunch of things went to src/bin.  We can move
 more, of course, but this code reorganization has non-trivial costs
 and I'm not clear what benefits we hope to realize and whether we are
 in fact realizing those benefits.  At this point, the overwhelming
 majority of what's in contrib is extensions; we're not far from being
 able to put the whole thing in src/extensions if it really needs to be
 moved at all.

There are a number of things in contrib that are not extensions, and
others are not core-quality yet.  I don't think we should move
everything; at least not everything in one go.  I think there are a
small number of diagnostic extensions that would be useful to have in
core (pageinspect, pg_buffercache, pg_stat_statements).

 But I don't think it's fair to conflate that with Bruce's question,
 which it seems to me is both a fair question and a different one.

Well, there was no question as such.  If the question is "should we
instead put it in contrib just to be consistent?" then I think the
answer is no.  I value consistency as much as every other person, but
there are other things I value more, such as availability.  If stuff is
in contrib and servers don't have it installed because of package
policies and it takes three management layers' approval to get it
installed in a dying server, then I prefer to have it in core.

If the question was "why are we not using the rule we previously had
that diagnostic tools were in contrib?" then I think the answer is that
we have evolved and we now know better.  We have evolved in the sense
that we have more stuff in production now that needs better diagnostic
tooling to be available; and we know better now in the sense that we
have realized there's this company policy bureaucracy that things in
contrib are not always available for reasons that are beyond us.

Anyway, the patch as proposed puts the new functions in core as builtins
(which is what Bruce seems to be objecting to).  Maybe instead of
proposing moving existing extensions in core, it would be better to have
this patch put those two new functions alone as a single new extension
in src/extension, and not move anything else.  I don't necessarily
resist adding these functions as builtins, but if we do that then
there's no going back to having them as an extension instead, which is
presumably more in line with what we want in the long run.

(It would be a shame to delay this patch, which messes with complex
innards, just because of a discussion about the placement of two
smallish diagnostic functions.)

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] RFC: replace pg_stat_activity.waiting with something more descriptive

2015-08-05 Thread Ildus Kurbangaliev

On 08/04/2015 11:47 PM, Robert Haas wrote:

On Tue, Aug 4, 2015 at 4:37 PM, Ildus Kurbangaliev
i.kurbangal...@postgrespro.ru wrote:

A new version of the patch. I used your idea with macros, and with tranches that
allowed us to remove array with names (they can be written directly to the 
corresponding
tranche).

You seem not to have addressed a few of the points I brought up here:

http://www.postgresql.org/message-id/CA+TgmoaGqhah0VTamsfaOMaE9uOrCPYSXN8hCS9=wirupjs...@mail.gmail.com



Regarding `memcpy`: the PgBackendStatus struct already has a bunch of
multi-byte variables, so it will not be consistent anyway if somebody
wants to copy it that way.  On the other hand, two bytes in this case
give less overhead because we can avoid the offset calculations.  And as
I've mentioned before, the class of wait will be useful when monitoring
of waits is extended.

The other things from that mail have already been changed in the latest
patch.

On 08/04/2015 11:53 PM, Alvaro Herrera wrote:

Just a bystander here, I haven't reviewed this patch at all, but I have
two questions,

1. have you tested this under -DEXEC_BACKEND ?  I wonder if those
initializations are going to work on Windows.

No, it wasn't tested on Windows

--
Ildus Kurbangaliev
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company





[HACKERS] Draft Alpha 2 announcement

2015-08-05 Thread Josh Berkus
Hackers,

Please check over the attached for errors.

Also, please suggest major fixes/changes since Alpha 1 I might have
missed.  Thanks!

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
PostgreSQL 9.5 Alpha 2 Released
===

The PostgreSQL Global Development Group announces today that the second alpha release of PostgreSQL 9.5 is available for download.  This release contains previews of all of the features which will be available in the final release of version 9.5, although some details will change before then.  Please download, test, and report what you find.

Changes Since Alpha1
---

Many bugs and issues reported by our users and contributors have been fixed since the release of Alpha1.  These include:

* Make pg_dump back up Row Level Security policies
* Make pg_rewind work with symlinks
* Fix several issues with locking
* Multiple changes and improvements to Row Level Security
* Correct some oversights in BRIN indexes
* Fix behavior of JSON negative array subscripts
* Correct strxfrm() behavior on buggy platforms
* Multiple documentation changes and additions

If you reported an issue while testing PostgreSQL 9.5, please download Alpha2 and test whether that issue has been fixed.  If you have not yet tested version 9.5, now is the time for you to help out PostgreSQL development.

Alpha and Beta Schedule
---

This is an alpha release of version 9.5, indicating that some changes to features are still possible before release.  The PostgreSQL Project will release another alpha or beta of 9.5 in September, and will then periodically release alphas or betas as required for testing until the final release in late 2015.  For more information, and suggestions on how to test the alphas and betas, see the Beta Testing page.

Full documentation and release notes of the new version are available online and also install with PostgreSQL.  Also see the What's New page for details on some features.

Links
-

* [Downloads Page](http://www.postgresql.org/download/)
* [Beta Testing Information](http://www.postgresql.org/developer/alpha)
* [9.5 Alpha Release Notes](http://www.postgresql.org/docs/devel/static/release-9-5.html)
* [What's New in 9.5](https://wiki.postgresql.org/wiki/What%27s_new_in_PostgreSQL_9.5)



Re: [HACKERS] Freeze avoidance of very large table.

2015-08-05 Thread Josh Berkus
On 08/05/2015 10:00 AM, Alvaro Herrera wrote:
 Anyway, the patch as proposed puts the new functions in core as builtins
 (which is what Bruce seems to be objecting to).  Maybe instead of
 proposing moving existing extensions in core, it would be better to have
 this patch put those two new functions alone as a single new extension
 in src/extension, and not move anything else.  I don't necessarily
 resist adding these functions as builtins, but if we do that then
 there's no going back to having them as an extension instead, which is
 presumably more in line with what we want in the long run.

For my part, I am unclear on why we are putting *any* diagnostic tools
in /contrib today.  Either the diagnostic tools are good quality and
necessary for a bunch of users, in which case we ship them in core, or
they are obscure and/or untested, in which case they go in an external
project and/or on PGXN.

Yes, for tools with overhead we might want to require enabling them in
pg.conf.  But that's very different from requiring the user to install a
separate package.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] Freeze avoidance of very large table.

2015-08-05 Thread Bruce Momjian
On Wed, Aug  5, 2015 at 10:22:48AM -0700, Josh Berkus wrote:
 On 08/05/2015 10:00 AM, Alvaro Herrera wrote:
  Anyway, the patch as proposed puts the new functions in core as builtins
  (which is what Bruce seems to be objecting to).  Maybe instead of
  proposing moving existing extensions in core, it would be better to have
  this patch put those two new functions alone as a single new extension
  in src/extension, and not move anything else.  I don't necessarily
  resist adding these functions as builtins, but if we do that then
  there's no going back to having them as an extension instead, which is
  presumably more in line with what we want in the long run.
 
 For my part, I am unclear on why we are putting *any* diagnostic tools
 in /contrib today.  Either the diagnostic tools are good quality and
 necessary for a bunch of users, in which case we ship them in core, or
 they are obscure and/or untested, in which case they go in an external
 project and/or on PGXN.
 
 Yes, for tools with overhead we might want to require enabling them in
 pg.conf.  But that's very different from requiring the user to install a
 separate package.

I don't care what we do, but I do think we should be consistent. 
Frankly I am unclear why I am even having to make this point, as cases
where we have chosen expediency over consistency have served us badly in
the past.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] Draft Alpha 2 announcement

2015-08-05 Thread Joe Conway
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 08/05/2015 10:15 AM, Josh Berkus wrote:
 * Make pg_dump work back up Row Level Security policies

There were far more than this single bugfix related to RLS. You might
want to say something like:

* Numerous Row Level Security related bug fixes

- -- 
Joe Conway
Crunchy Data
Enterprise PostgreSQL
http://crunchydata.com
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBAgAGBQJVwkf1AAoJEDfy90M199hlEnwP/RNObNwpdzN9MxWTEzLQfB5o
PdQ4kjE4tY/PdYrLv2G7a12G9i/Lfkza1grHc+FaAuc7QPToOPLEut/punZgPlU2
+Zf9Bn1Pjkl9h37BbW1H978qAq6vbS4R+9mp8qamMKfmYlJptCSOrG549wTqOesM
640ZfFsnB/3L0ApIcfrg/EYZPrl/Ue9PntFK1cKwcu9BAPC+Sc/kM5P/nxyz2Avo
Ovetv1AUlSWTLARDBDyXDiCaB9ADWLrlPvfNIt6DuzkEKADzh8R49YLCOkhdWW+a
z9LYqINtlRfwN9oh7ob/OJD3FT1SPjHOiBwQB8bGJGSAqeMSTY5owuBZOnDlcZx5
cK7CFUQioIY831o5z/nyLdzYR/LRzn9tr4yLoVIDB0ZERdBtu7xI2pM3S2TB9yJk
U2PySM2LFTBLlXTI1al38/yKVZCK+aLcF3jqZxEbEOE5SsaudoDQUcrR3MeSSwoy
bnUYDez4sLa96j4seF0yiRSlYYUZFxE/BW+VW8QF0e9l8JX0WWoZCTMC7m28eUMN
HMa5skbQrE40taJt1kbNz9FGR8u5EeF2cbe5A9ys7dvijDqt1fEYLNQCY4FZ3g3/
yPFip0txcL2kJ8qGUILxWn4dGeNXQTcBQg1L41ZkOEmTW+oEVvUZ0P2TtRBYXKG1
2+cebmtLgaO8O3cY4O+7
=9lWg
-END PGP SIGNATURE-




Re: [HACKERS] Draft Alpha 2 announcement

2015-08-05 Thread Josh Berkus
On 08/05/2015 10:29 AM, Joe Conway wrote:
 On 08/05/2015 10:15 AM, Josh Berkus wrote:
 * Make pg_dump work back up Row Level Security policies
 
 There were far more than this single bugfix related to RLS. You might
 want to say something like:
 
 * Numerous Row Level Security related bug fixes

I have a line like that (but I like "numerous", will use that); I wanted
also to call out pg_dump because it is a user-visible fix in need of
testing.


-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] Freeze avoidance of very large table.

2015-08-05 Thread Josh Berkus
On 08/05/2015 10:26 AM, Bruce Momjian wrote:
 On Wed, Aug  5, 2015 at 10:22:48AM -0700, Josh Berkus wrote:
 On 08/05/2015 10:00 AM, Alvaro Herrera wrote:
 Anyway, the patch as proposed puts the new functions in core as builtins
 (which is what Bruce seems to be objecting to).  Maybe instead of
 proposing moving existing extensions in core, it would be better to have
 this patch put those two new functions alone as a single new extension
 in src/extension, and not move anything else.  I don't necessarily
 resist adding these functions as builtins, but if we do that then
 there's no going back to having them as an extension instead, which is
 presumably more in line with what we want in the long run.

 For my part, I am unclear on why we are putting *any* diagnostic tools
 in /contrib today.  Either the diagnostic tools are good quality and
 necessary for a bunch of users, in which case we ship them in core, or
 they are obscure and/or untested, in which case they go in an external
 project and/or on PGXN.

 Yes, for tools with overhead we might want to require enabling them in
 pg.conf.  But that's very different from requiring the user to install a
 separate package.
 
 I don't care what we do, but I do think we should be consistent. 
 Frankly I am unclear why I am even having to make this point, as cases
 where we have chosen expediency over consistency have served us badly in
 the past.

Saying it's stupid to be consistent with a bad old rule, and making a
new rule is not expediency.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] Freeze avoidance of very large table.

2015-08-05 Thread Alvaro Herrera
Josh Berkus wrote:
 On 08/05/2015 10:26 AM, Bruce Momjian wrote:

  I don't care what we do, but I do think we should be consistent. 
  Frankly I am unclear why I am even having to make this point, as cases
  where we have chosen expediency over consistency have served us badly in
  the past.
 
 Saying it's stupid to be consistent with a bad old rule, and making a
 new rule is not expediency.

So I discussed this with Bruce on IM a bit.  I think there are basically
four ways we could go about this:

1. Add the functions as builtins.
   This is what the current patch does.  Simon seems to prefer this,
   because he wants the function to be always available in production;
   but I don't like this option because adding functions as builtins
   makes it impossible to move later to extensions.
   Bruce doesn't like this option either.

2. Add the functions to contrib, keep them there for the foreseeable future.
   Simon is against this option, because the functions will be
   unavailable when needed in production.  I am of the same position.
   Bruce opines this option is acceptable.

3. a) Add the function to some extension in contrib now, by using a
  slightly modified version of the current patch, and
   b) Apply some later patch to move said extension to src/extension.

4. a) Patch some extension(s) to move it to src/extension,
   b) Apply a version of this patch that adds the new functions to said
  extension

Essentially 3 and 4 are the same thing except the order is reversed;
they both result in the functions being shipped in some core extension
(a concept we do not have today).  Bruce says either of these is fine
with him.  I am fine with either of them also.  As long as we do 3b
during the 9.6 timeframe, the outcome of either 3 or 4 seems to be
acceptable for Simon also.

Robert seems to be saying that he doesn't care about moving extensions
to core at all.

What do others think?

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Freeze avoidance of very large table.

2015-08-05 Thread Josh Berkus
On 08/05/2015 10:46 AM, Alvaro Herrera wrote:
 1. Add the functions as builtins.
This is what the current patch does.  Simon seems to prefer this,
because he wants the function to be always available in production;
but I don't like this option because adding functions as builtins
makes it impossible to move later to extensions.
Bruce doesn't like this option either.

Why would we want to move them later to extensions?  Do you anticipate
not needing them in the future?  If we don't need them in the future,
why would they continue to exist at all?

I'm really not getting this.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] Raising our compiler requirements for 9.6

2015-08-05 Thread Andres Freund
On 2015-08-05 17:19:05 +0200, Andres Freund wrote:
 On 2015-08-05 11:12:34 -0400, Tom Lane wrote:
  Andres Freund and...@anarazel.de writes:
   Ok, let's do it that way then. It seems the easiest way to test for this
   is to use something like
  
   # IBM XL C/C++ for AIX, V12.1 miscompiles, for 32-bit, some inline
   # expansions of ginCompareItemPointers() long long arithmetic.  To
   # take advantage of inlining, build a 64-bit PostgreSQL.
   if test "$(getconf HARDWARE_BITMODE)" = '32'; then
  CFLAGS="$CFLAGS -DPG_FORCE_DISABLE_INLINE"
   fi

So that approach doesn't work out well because the 32 bit xlc can be
installed on the 64 bit system.

  Actually, much the easiest way to convert what Noah did would be to add
  
  #if defined(__ILP32__) && defined(__IBMC__)
  #define PG_FORCE_DISABLE_INLINE
  #endif
  
  in src/include/port/aix.h.

Therefore I'm going to reshuffle things in that direction tomorrow. I'll
wait for other fallout first though. So far only gcc, xlc and clang (via
gcc frontend) have run...

Greetings,

Andres Freund




Re: [HACKERS] Doubt about AccessExclusiveLock in ALTER TABLE .. SET ( .. );

2015-08-05 Thread Fabrízio de Royes Mello
On Wed, Aug 5, 2015 at 9:31 AM, Robert Haas robertmh...@gmail.com wrote:

 On Tue, Aug 4, 2015 at 1:15 PM, Alvaro Herrera alvhe...@2ndquadrant.com
wrote:
  That opens up for lock escalation and deadlocks, doesn't it?  You are
  probably thinking that it's okay to ignore those but I don't necessarily
  agree with that.

 Agreed.  I think we're making a mountain out of a molehill here.  As
long as the locks that are actually used are monotonic, just use > and
 stick a comment in there explaining that it could need adjustment if
 we use other lock levels in the future.  I presume all the lock-levels
 used for DDL are, and will always be, self-exclusive, so why all this
 hand-wringing?


New version attached with suggested changes.

Regards,

--
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
 Timbira: http://www.timbira.com.br
 Blog: http://fabriziomello.github.io
 Linkedin: http://br.linkedin.com/in/fabriziomello
 Twitter: http://twitter.com/fabriziomello
 Github: http://github.com/fabriziomello
diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml
index 1c1c181..ad985cd 100644
--- a/doc/src/sgml/ref/alter_table.sgml
+++ b/doc/src/sgml/ref/alter_table.sgml
@@ -543,6 +543,10 @@ ALTER TABLE ALL IN TABLESPACE <replaceable class="PARAMETER">name</replaceable>
   of <command>ALTER TABLE</> that forces a table rewrite.
  </para>
 
+ <para>
+  Changing autovacuum storage parameters acquires a <literal>SHARE UPDATE EXCLUSIVE</literal> lock.
+ </para>
+
  <note>
   <para>
While <command>CREATE TABLE</> allows <literal>OIDS</> to be specified
diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c
index 180f529..c39b878 100644
--- a/src/backend/access/common/reloptions.c
+++ b/src/backend/access/common/reloptions.c
@@ -57,7 +57,8 @@ static relopt_bool boolRelOpts[] =
 		{
 			"autovacuum_enabled",
 			"Enables autovacuum in this relation",
-			RELOPT_KIND_HEAP | RELOPT_KIND_TOAST
+			RELOPT_KIND_HEAP | RELOPT_KIND_TOAST,
+			ShareUpdateExclusiveLock
 		},
 		true
 	},
@@ -65,7 +66,8 @@ static relopt_bool boolRelOpts[] =
 		{
 			"user_catalog_table",
 			"Declare a table as an additional catalog table, e.g. for the purpose of logical replication",
-			RELOPT_KIND_HEAP
+			RELOPT_KIND_HEAP,
+			AccessExclusiveLock
 		},
 		false
 	},
@@ -73,7 +75,8 @@ static relopt_bool boolRelOpts[] =
 		{
 			"fastupdate",
 			"Enables \"fast update\" feature for this GIN index",
-			RELOPT_KIND_GIN
+			RELOPT_KIND_GIN,
+			AccessExclusiveLock
 		},
 		true
 	},
@@ -81,7 +84,8 @@ static relopt_bool boolRelOpts[] =
 		{
 			"security_barrier",
 			"View acts as a row security barrier",
-			RELOPT_KIND_VIEW
+			RELOPT_KIND_VIEW,
+			AccessExclusiveLock
 		},
 		false
 	},
@@ -95,7 +99,8 @@ static relopt_int intRelOpts[] =
 		{
 			"fillfactor",
 			"Packs table pages only to this percentage",
-			RELOPT_KIND_HEAP
+			RELOPT_KIND_HEAP,
+			AccessExclusiveLock
 		},
 		HEAP_DEFAULT_FILLFACTOR, HEAP_MIN_FILLFACTOR, 100
 	},
@@ -103,7 +108,8 @@ static relopt_int intRelOpts[] =
 		{
 			"fillfactor",
 			"Packs btree index pages only to this percentage",
-			RELOPT_KIND_BTREE
+			RELOPT_KIND_BTREE,
+			AccessExclusiveLock
 		},
 		BTREE_DEFAULT_FILLFACTOR, BTREE_MIN_FILLFACTOR, 100
 	},
@@ -111,7 +117,8 @@ static relopt_int intRelOpts[] =
 		{
 			"fillfactor",
 			"Packs hash index pages only to this percentage",
-			RELOPT_KIND_HASH
+			RELOPT_KIND_HASH,
+			AccessExclusiveLock
 		},
 		HASH_DEFAULT_FILLFACTOR, HASH_MIN_FILLFACTOR, 100
 	},
@@ -119,7 +126,8 @@ static relopt_int intRelOpts[] =
 		{
 			"fillfactor",
 			"Packs gist index pages only to this percentage",
-			RELOPT_KIND_GIST
+			RELOPT_KIND_GIST,
+			AccessExclusiveLock
 		},
 		GIST_DEFAULT_FILLFACTOR, GIST_MIN_FILLFACTOR, 100
 	},
@@ -127,7 +135,8 @@ static relopt_int intRelOpts[] =
 		{
 			"fillfactor",
 			"Packs spgist index pages only to this percentage",
-			RELOPT_KIND_SPGIST
+			RELOPT_KIND_SPGIST,
+			AccessExclusiveLock
 		},
 		SPGIST_DEFAULT_FILLFACTOR, SPGIST_MIN_FILLFACTOR, 100
 	},
@@ -135,7 +144,8 @@ static relopt_int intRelOpts[] =
 		{
 			"autovacuum_vacuum_threshold",
 			"Minimum number of tuple updates or deletes prior to vacuum",
-			RELOPT_KIND_HEAP | RELOPT_KIND_TOAST
+			RELOPT_KIND_HEAP | RELOPT_KIND_TOAST,
+			ShareUpdateExclusiveLock
 		},
 		-1, 0, INT_MAX
 	},
@@ -143,7 +153,8 @@ static relopt_int intRelOpts[] =
 		{
 			"autovacuum_analyze_threshold",
 			"Minimum number of tuple inserts, updates or deletes prior to analyze",
-			RELOPT_KIND_HEAP
+			RELOPT_KIND_HEAP,
+			ShareUpdateExclusiveLock
 		},
 		-1, 0, INT_MAX
 	},
@@ -151,7 +162,8 @@ static relopt_int intRelOpts[] =
 		{
 			"autovacuum_vacuum_cost_delay",
 			"Vacuum cost delay in milliseconds, for autovacuum",
-			RELOPT_KIND_HEAP | RELOPT_KIND_TOAST
+			RELOPT_KIND_HEAP | RELOPT_KIND_TOAST,
+			ShareUpdateExclusiveLock
 		},
 		-1, 0, 100
 	},
@@ -159,7 +171,8 @@ static relopt_int intRelOpts[] =
 		{
 			autovacuum_vacuum_cost_limit,
 			Vacuum cost 

Re: [HACKERS] Freeze avoidance of very large table.

2015-08-05 Thread Alvaro Herrera
Josh Berkus wrote:
 On 08/05/2015 10:46 AM, Alvaro Herrera wrote:
  1. Add the functions as builtins.
 This is what the current patch does.  Simon seems to prefer this,
 because he wants the function to be always available in production;
 but I don't like this option because adding functions as builtins
 makes it impossible to move later to extensions.
 Bruce doesn't like this option either.
 
 Why would we want to move them later to extensions?

Because it's not nice to have random stuff as builtins.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




[HACKERS] Caching offsets of varlena attributes

2015-08-05 Thread Vignesh Raghunathan
Hello,

In the function heap_deform_tuple, there is a comment before caching
varlena attributes specifying that the offset will be valid for either an
aligned or unaligned value if there are no padding bytes. Could someone
please elaborate on this?

Also, why is it safe to call att_align_nominal if the attribute is not
varlena? Couldn't att_align_pointer be called for both cases? I am not able
to understand how att_align_nominal is faster.

Thanks,
Vignesh


Re: [HACKERS] LWLock deadlock and gdb advice

2015-08-05 Thread Jeff Janes
On Sun, Aug 2, 2015 at 8:05 AM, Andres Freund and...@anarazel.de wrote:

 On 2015-08-02 17:04:07 +0200, Andres Freund wrote:
  I've attached a version of the patch that should address Heikki's
  concern. It imo also improves the API and increases debuggability by not
  having stale variable values in the variables anymore. (also attached is
  a minor optimization that Heikki has noticed)


I've run these (from your git commit) for over 60 hours with no problem, so
I think it's solved.

If there are still problems, hopefully they will come out in other tests.
I don't know why the gin test was so effective at provoking the problem
while none of the other ones (btree-upsert, gist, brin, btree-foreign key,
btree-prepare transaction) I've played around with were.  Perhaps it is
just due to the amount of WAL which gin generates.

Cheers,

Jeff


Re: [HACKERS] RFC: replace pg_stat_activity.waiting with something more descriptive

2015-08-05 Thread Robert Haas
On Wed, Aug 5, 2015 at 1:10 PM, Ildus Kurbangaliev
i.kurbangal...@postgrespro.ru wrote:
 About `memcpy`, PgBackendStatus struct already have a bunch of multi-byte
 variables,  so it will be
 not consistent anyway if somebody will want to copy it in that way. On the
 other hand two bytes in this case
 give less overhead because we can avoid the offset calculations. And as I've
 mentioned before the class
 of wait will be useful when monitoring of waits will be extended.

You're missing the point.  Those multi-byte fields have additional
synchronization requirements, as I explained in some detail in my
previous email. You can't just wave that away.

When people raise points in review, you need to respond to those with
discussion, not just blast out a new patch version that may or may not
have made some changes in that area.  Otherwise, you're wasting the
time of the people who are reviewing, which is not nice.

 1. have you tested this under -DEXEC_BACKEND ?  I wonder if those
 initializations are going to work on Windows.

 No, it wasn't tested on Windows

I don't think it's going to work on Windows.
CreateSharedMemoryAndSemaphores() is called once only from the
postmaster on non-EXEC_BACKEND builds, but on EXEC_BACKEND builds
(i.e. Windows) it's called for every process startup.  Thus, it's got
to be idempotent: if the shared memory structures it's looking for
already exist, it must not try to recreate them.  You have, for
example, InitBufferPool() calling LWLockCreateTranche(), which
unconditionally assigns a new tranche ID.  It can't do that; all of
the processes in the system have to agree on what the tranche IDs are.

The general problem that I see with splitting apart the main lwlock
array into many tranches is that all of those tranches need to get set
up properly - with matching tranche IDs - in every backend.  In
non-EXEC_BACKEND builds, that's basically free, but on EXEC_BACKEND
builds it isn't.  I think that's OK; this won't be the first piece of
state where EXEC_BACKEND builds incur some extra overhead.  But we
should make an effort to keep that overhead small.

The way to do that, I think, is to store some state in shared memory
that, in EXEC_BACKEND builds, will allow new postmaster children to
correctly re-register all of the tranches.  It seems to me that we can
do this as follows:

1. Have a compiled-in array of built-in tranche names.
2. In LWLockShmemSize(), work out the number of lwlocks each tranche
should contain.
3. In CreateLWLocks(), if IsUnderPostmaster, grab enough shared memory
for all the lwlocks themselves plus enough extra shared memory for one
pointer per tranche.  Store pointers to the start of each tranche in
shared memory, and initialize all the lwlocks.
4. In CreateLWLocks(), if tranche registration has not yet been done
(either because we are the postmaster, or because this is the
EXEC_BACKEND case) loop over the array of built-in tranche names and
register each one with the corresponding address grabbed from shared
memory.
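A minimal sketch of step 4 — every process walking the same compiled-in list of built-in tranche names and registering each one against a base address the postmaster left in shared memory — might look like the following. All names here, including the registration table, are illustrative assumptions rather than the actual PostgreSQL API:

```c
#include <assert.h>
#include <string.h>

/* Illustrative sketch of step 4: each process registers the built-in
 * tranches from a compiled-in name array plus base addresses saved in
 * shared memory.  Names and the registration table are assumptions,
 * not the real PostgreSQL API. */
#define NUM_BUILTIN_TRANCHES 3

static const char *const BuiltinTrancheNames[NUM_BUILTIN_TRANCHES] = {
    "main", "buffer_content", "buffer_io"
};

typedef struct
{
    const char *name;
    void       *base;           /* start of this tranche's lwlocks */
} TrancheSketch;

/* per-process registration table (stand-in for LWLockRegisterTranche) */
static TrancheSketch LocalTranches[NUM_BUILTIN_TRANCHES];

static void
register_builtin_tranches(void *shmem_bases[])
{
    /* Because every process walks the same array in the same order,
     * tranche IDs agree across the postmaster and all children. */
    for (int id = 0; id < NUM_BUILTIN_TRANCHES; id++)
    {
        LocalTranches[id].name = BuiltinTrancheNames[id];
        LocalTranches[id].base = shmem_bases[id];
    }
}
```

The key property is that tranche IDs are derived from array position, so EXEC_BACKEND children cannot disagree with the postmaster about them.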

A more radical redesign would be to have each tranche of LWLocks as a
separate chunk of shared memory, registered with ShmemInitStruct(),
and let EXEC_BACKEND children find those chunks by name using
ShmemIndex.  But that seems like more work to me for not much benefit,
especially since there's this weird thing where lwlocks are
initialized before ShmemIndex gets set up.

Yet another possible approach is to have each module register its own
tranche and track its own tranche ID using the same kind of strategy
that replication origins and WAL insert locks already employ.  That
may well be the right long-term strategy, but it seems like sort of a
pain to expend all that effort right now for this project, so I'm inclined to
hack on the approach described above and see if I can get that working
for now.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] LWLock deadlock and gdb advice

2015-08-05 Thread Andres Freund
On 2015-08-05 11:22:55 -0700, Jeff Janes wrote:
 On Sun, Aug 2, 2015 at 8:05 AM, Andres Freund and...@anarazel.de wrote:
 
  On 2015-08-02 17:04:07 +0200, Andres Freund wrote:
   I've attached a version of the patch that should address Heikki's
   concern. It imo also improves the API and increases debuggability by not
   having stale variable values in the variables anymore. (also attached is
   a minor optimization that Heikki has noticed)
 
 
 I've run these (from your git commit) for over 60 hours with no problem, so
 I think it's solved.

Cool! Thanks for the testing.

 If there are still problems, hopefully they will come out in other tests.
 I don't know why the gin test was so efficient at provoking the problem while
 none of the other ones (btree-upsert, gist, brin, btree-foreign key,
 btree-prepare transaction) I've played around with were.  Perhaps it is just due
 to the amount of WAL which gin generates.

I'm not sure either... I've been able to reproduce a version of the
issue by judiciously sprinkling pg_usleep()'s over the code.

Regards,

Andres




[HACKERS] Removing unreferenced files

2015-08-05 Thread Ron Farrer

Hello,

This is in regard to the patch[0] posted in 2006, based on previous
work[1]. Below is a summary of the issues as I understand them at
present, along with some questions.

Initial questions that had no consensus in previous discussions:
1. Approach on file handling undecided
2. Startup vs standalone tool
3. If startup: how to determine when to run

Outstanding problems (point-for-point with original 2005 post):
1. Does not work with non-standard tablespaces. The check will even
report all files are stale, etc.
2. Has issues with stale subdirs of a tablespace (subdirs corresponding
to a nonexistent database) [appears related to #1 because of maintenance
mode and not failing]
3. Assumes relfilenode is unique database-wide when it’s only safe
tablespace-wide
4. Does not examine table segment files such as “nnn.1” - it should
instead complain when “nnn” does not match a hash entry
5. It loads every value of relfilenode in pg_class into the hash table
without checking whether it is meaningful - it needs to check.
6. strtol vs strspn (or other) [not sure what the problem here is. If
errors are handled correctly this should not be an issue]
7. No checks for readdir failure [this should be easy to check for]
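For points 4, 6 and 7, a hedged sketch of how a checker might parse data-directory entries of the form "nnn" or "nnn.seg" with strtoul/strspn, mapping a segment file like "nnn.1" back to its base relfilenode and rejecting anything else. The function name is hypothetical; a real scanner would additionally clear errno before each readdir() call and report an error when readdir() returns NULL with errno set (point 7):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper: parse a directory entry of the form "nnn" or
 * "nnn.seg" into a base relfilenode plus segment number, rejecting
 * everything else, so "nnn.1" can be checked against the hash entry
 * for "nnn". */
static bool
parse_relfile_name(const char *name, unsigned long *relfilenode,
                   unsigned long *segno)
{
    char *end;

    if (strspn(name, "0123456789") == 0)
        return false;           /* must start with digits */
    *relfilenode = strtoul(name, &end, 10);
    *segno = 0;
    if (*end == '\0')
        return true;            /* plain "nnn" */
    if (*end != '.')
        return false;
    end++;                      /* segment suffix must be all digits */
    if (*end == '\0' || strspn(end, "0123456789") != strlen(end))
        return false;
    *segno = strtoul(end, NULL, 10);
    return true;
}
```

With something like this, entries such as pg_internal.init fall out naturally rather than being mis-parsed as relation files.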

Other thoughts:
1. What to do if a problem happens during drop table/index and the files
that should have been removed are still there... the DBA needs to know when
this happens somehow
2.  What happened to pgfsck: was that a better approach? why was that
abandoned?
3.  What to do about stale files and missing files

References:
0 -
http://www.postgresql.org/message-id/200606081508.k58f85m29...@candle.pha.pa.us
1 - http://www.postgresql.org/message-id/8291.1115340...@sss.pgh.pa.us


Ron

-- 
Command Prompt, Inc. http://www.commandprompt.com/ +1-800-492-2240
PostgreSQL Centered full stack support, consulting, and development.





Re: [HACKERS] [sqlsmith] Failed assertion in joinrels.c

2015-08-05 Thread Tom Lane
Piotr Stefaniak postg...@piotr-stefaniak.me writes:
 On 08/05/2015 02:24 AM, Tom Lane wrote:
 Piotr Stefaniak postg...@piotr-stefaniak.me writes:
 Set join_collapse_limit = 32 and you should see the error.

 Ah ... now I get that error on the smaller query, but the larger one
 (that you put in an attachment) still doesn't show any problem.
 Do you have any other nondefault settings?

 Sorry, that needs from_collapse_limit = 32 as well.

Yeah, I assumed as much, but it still doesn't happen for me.  Possibly
something platform-dependent in statistics, or something like that.

Anyway, I fixed the problem exposed by the smaller query; would you
check that the larger one is okay for you now?

regards, tom lane




Re: [HACKERS] Freeze avoidance of very large table.

2015-08-05 Thread Petr Jelinek

On 2015-08-05 20:09, Alvaro Herrera wrote:

Josh Berkus wrote:

On 08/05/2015 10:46 AM, Alvaro Herrera wrote:

1. Add the functions as builtins.
This is what the current patch does.  Simon seems to prefer this,
because he wants the function to be always available in production;
but I don't like this option because adding functions as builtins
makes it impossible to move later to extensions.
Bruce doesn't like this option either.


Why would we want to move them later to extensions?


Because it's not nice to have random stuff as builtins.



Extensions have one nice property: they provide namespacing, so not 
everything has to be in pg_catalog, which already has about a gazillion 
functions. It's nice to have stuff you don't need for day-to-day 
operations separate but still available (which is why src/extensions is 
better than contrib).


--
 Petr Jelinek  http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] [sqlsmith] Failed assertion in joinrels.c

2015-08-05 Thread Andreas Seltenreich
Tom Lane writes:

 On 08/01/2015 05:59 PM, Tom Lane wrote:
 Well, I certainly think all of these represent bugs:
 1 | ERROR:  could not find pathkey item to sort

 Hmm ... I see no error with these queries as of today's HEAD or
 back-branch tips.  I surmise that this was triggered by one of the other
 recently-fixed bugs, though the connection isn't obvious offhand.

I still see this error in master as of b8cbe43, but the queries are
indeed a pita to reproduce.  The one below is the only one so far that
is robust against running ANALYZE on the regression db, and also
reproduces when I run it as an EXTRA_TEST with make check.

regards,
Andreas

select
  rel_217088662.a as c0,
  rel_217088554.a as c1,
  rel_217088662.b as c2,
  subq_34235266.c0 as c3,
  rel_217088660.id2 as c4,
  rel_217088660.id2 as c5
from
  public.clstr_tst as rel_217088554
inner join (select
   rel_217088628.a as c0
 from
   public.rtest_vview3 as rel_217088628
 where (rel_217088628.b !~ rel_217088628.b)
   and rel_217088628.b ~~* rel_217088628.b)
 or (rel_217088628.b ~* rel_217088628.b))
   or (rel_217088628.b  rel_217088628.b))
 or (rel_217088628.b = rel_217088628.b))) as subq_34235266
 inner join public.num_exp_mul as rel_217088660
   inner join public.onek2 as rel_217088661
   on (rel_217088660.id1 = rel_217088661.unique1 )
 on (subq_34235266.c0 = rel_217088660.id1 )
  inner join public.main_table as rel_217088662
  on (rel_217088661.unique2 = rel_217088662.a )
on (rel_217088554.b = rel_217088660.id1 )
where rel_217088554.d = rel_217088554.c
fetch first 94 rows only;




Re: [HACKERS] deparsing utility commands

2015-08-05 Thread Jim Nasby

On 7/31/15 8:45 AM, Shulgin, Oleksandr wrote:

While running the deparsecheck suite I'm getting a number of odd-looking
errors:

WARNING:  state: 42883 errm: operator does not exist: pg_catalog.oid =
pg_catalog.oid

This is caused by deparsing create view, e.g.:

STATEMENT:  create view v1 as select * from t1 ;
ERROR:  operator does not exist: pg_catalog.oid = pg_catalog.oid at
character 52
HINT:  No operator matches the given name and argument type(s). You
might need to add explicit type casts.
QUERY:  SELECT * FROM pg_catalog.pg_rewrite WHERE ev_class = $1 AND
rulename = $2
CONTEXT:  PL/pgSQL function test_ddl_deparse() line 1 at FOR over SELECT
rows

The pg_rewrite query comes from ruleutils.c, while ddl_deparse.c calls
it through pg_get_viewdef_internal(), but I don't understand how it is
different from, e.g., select pg_get_viewdef(...), since that last one is
not affected.


I'm not sure what test_ddl_deparse is doing, is that where the oid = oid 
is coming from?


It might be enlightening to replace = with OPERATOR(pg_catalog.=) and 
see if that works.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] [DOCS] max_worker_processes on the standby

2015-08-05 Thread Petr Jelinek

On 2015-08-05 00:13, Alvaro Herrera wrote:

Robert Haas wrote:

On Tue, Aug 4, 2015 at 12:41 AM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:



The alternative is to turn the feature on automatically if it sees that
the master also has it on, i.e. the value would not be what the config
file says it is.  Doing this would be a bit surprising IMO, but given
the behavior above maybe it's better than the current behavior.


I think it's totally reasonable for the standby to follow the master's
behavior rather than the config file.  That should be documented, but
otherwise, no problem.  If it were technologically possible for the
standby to follow the config file rather than the master in all cases,
that would be fine, too.  But the current behavior is somewhere in the
middle, and that doesn't seem like a good plan.


So I discussed this with Petr.  He points out that if we make the
standby follows the master, then the problem would be the misbehavior
that results once the standby is promoted: at that point the standby
would no longer follow the master and would start with the feature
turned off, which could be disastrous (depending on what are you using
the commit timestamps for).

Given this, we're leaning towards the idea that the standby should not
try to follow the master at all.  Instead, an extension that wants to
use this stuff can check the value for itself, and raise a fatal error
if it's not already turned on the config file.  That way, a lot of the
strange corner cases disappear.



Actually, after thinking a bit more about this I think the behavior of 
these two will be similar - you suddenly lose the commit timestamp info. 
The difference is that with the first option you'll lose it after a restart 
while with the second one you lose it immediately after promotion, since 
there was never any info on the slave.


Extensions can do sanity checking in both scenarios.

The way I see it the first option has following advantages:
- it's smaller change
- it's more consistent with how wal_log_hints behaves
- fixing the config does not require server restart since the in-memory 
state was set from WAL record automatically


However the second option has also some:
- one can have a slave which doesn't have the overhead of the commit 
timestamp SLRU if they don't need it there
- it's theoretically easier to notice that track_commit_timestamps 
is off in the config because the SQL interface will complain if called 
on the slave


So +0.5 from me towards following master and removing the error message

--
 Petr Jelinek  http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] BRIN trivial tweaks

2015-08-05 Thread Alvaro Herrera
Kevin Grittner wrote:
 I happened to notice two declarations of functions for BRIN that
 are not actually defined, and a comment that looked like it was
 incompletely edited.  Attached patch is a suggestion, but I leave
 it to those working on the feature to commit, since there might be
 a reason I'm missing for the declarations and my wording might not
 be ideal.

Thanks, pushed this alongside together with the fix for the other BRIN
bug you opened.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] brin index vacuum versus transaction snapshots

2015-08-05 Thread Alvaro Herrera
Alvaro Herrera wrote:

 Here's the promised patch.

Pushed to master and 9.5.  Thanks for reporting!

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] pgbench - allow backslash-continuations in custom scripts

2015-08-05 Thread Heikki Linnakangas

On 07/24/2015 11:36 AM, Kyotaro HORIGUCHI wrote:

At Fri, 24 Jul 2015 07:39:16 +0200 (CEST), Fabien COELHO coe...@cri.ensmp.fr wrote 
in alpine.DEB.2.10.1507240731050.12839@sto

- backslash commands are handled the same as before: multiline
  is not allowed.


Hmm... that is really the feature I wanted to add initially, too bad
it is the dropped one:-)


Ouch. The story has been derailed somewhere.

Since SQL statements can be multi-line without any particular
marker, we cannot implement multi-line backslash commands in the
same way...


I don't think we actually want backslash-continuations. The feature we 
want is to allow SQL statements to span multiple lines, and using the psql 
lexer solves that. We don't need the backslash-continuations when we 
have that.


On 07/25/2015 05:53 PM, Fabien COELHO wrote:

I have no idea how to deal with the copy of psqlscan.[lh] from
psql. Currently they are simply dead copies of the psql files.


I think that there should be no copies, but it should use relative
symbolic links so that the files are kept synchronized.


Yeah, I think so, but symlinks can cause trouble with git and on Windows.
Another way would be to copy them from the psql directory. They live
next door to each other.


Indeed there are plenty of links already which are generated by makefiles
(see src/bin/pg_xlogdump/*), and probably a copy is made on Windows. There
should be no file duplication within the source tree.


Yeah, following the example of pg_xlogdump and others is the way to go.

Docs need updating, and there's probably some cleanup to do before this 
is ready for committing, but overall I think this is definitely the 
right direction.


I complained upthread that this makes it impossible to use 
multi-statements in pgbench, as they would be split into separate 
statements, but looking at psqlscan.l there is actually a syntax for 
that in psql already. You escape the semicolon as \;, e.g. SELECT 1 \; 
SELECT 2;, and then both queries will be sent to the server as one. So 
even that's OK.


- Heikki




Re: [HACKERS] raw output from copy

2015-08-05 Thread Heikki Linnakangas

On 07/27/2015 02:28 PM, Pavel Stehule wrote:

2015-07-27 10:41 GMT+02:00 Heikki Linnakangas hlinn...@iki.fi:


What about input? This is a whole new feature, but it would be nice to be
able to pass the file contents as a query parameter. Something like:

\P /tmp/foo binary
INSERT INTO foo VALUES (?);


The example of input is a strong reason why we shouldn't do it via inserts.
Just parsing some special ? symbol needs a lot of new code.


Sorry, I meant $1 in place of the ?. No special parsing needed, psql can 
send the query to the server as is, with the parameters that are given 
by this new mechanism.



In this case, I don't see any advantage of a psql-based solution. COPY is
the standard interface for input/output from/to files, and it should be used
there.


I'm not too happy with the COPY approach, although I won't object if one 
of the other committers feels more comfortable with it. However, we don't 
seem to be making progress here, so I'm going to mark this as Returned 
with Feedback. I don't feel good about that either, because I don't 
actually have any great suggestions on how to move this forward. Which 
is a pity because this is a genuine problem for users.


- Heikki





Re: [HACKERS] raw output from copy

2015-08-05 Thread Andrew Dunstan


On 08/05/2015 04:59 PM, Heikki Linnakangas wrote:

On 07/27/2015 02:28 PM, Pavel Stehule wrote:

2015-07-27 10:41 GMT+02:00 Heikki Linnakangas hlinn...@iki.fi:

What about input? This is a whole new feature, but it would be nice 
to be

able to pass the file contents as a query parameter. Something like:

\P /tmp/foo binary
INSERT INTO foo VALUES (?);


The example of input is a strong reason why we shouldn't do it via inserts.
Just parsing some special ? symbol needs a lot of new code.


Sorry, I meant $1 in place of the ?. No special parsing needed, psql 
can send the query to the server as is, with the parameters that are 
given by this new mechanism.



In this case, I don't see any advantage of a psql-based solution. COPY is
the standard interface for input/output from/to files, and it should be used
there.


I'm not too happy with the COPY approach, although I won't object if 
one of the other committers feels more comfortable with it. However, we 
don't seem to be making progress here, so I'm going to mark this as 
Returned with Feedback. I don't feel good about that either, because I 
don't actually have any great suggestions on how to move this forward. 
Which is a pity because this is a genuine problem for users.





This is really only a psql problem, IMNSHO. Inserting and extracting 
binary data is pretty trivial for most users of client libraries (e.g. 
it's a couple of lines of code in a DBD::Pg program), but it's hard in psql.


I do agree that the COPY approach feels more than a little klunky.

cheers

andrew






Re: [HACKERS] Hash index creation warning

2015-08-05 Thread Bruce Momjian
On Fri, Jun 26, 2015 at 11:40:27AM -0400, Robert Haas wrote:
 On Wed, Jun 24, 2015 at 4:53 AM, Peter Geoghegan p...@heroku.com wrote:
  On Wed, Jun 24, 2015 at 1:45 AM, Craig Ringer cr...@2ndquadrant.com wrote:
  WARNING: hash indexes are not crash-safe, not replicated, and their
  use is discouraged
 
  +1
 
 I'm not wild about this rewording; I think that if users don't know
 what WAL is, they probably need to know that in order to make good
 decisions about whether to use hash indexes.  But I don't feel
 super-strongly about it.

Coming late to this, but I think Robert is right.  WAL is used for crash
recovery, PITR, and streaming replication, and I am not sure we want to
specify all of those in the warning message.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] pgbench - allow backslash-continuations in custom scripts

2015-08-05 Thread Fabien COELHO


Hello Heikki,

I don't think we actually want backslash-continuations. The feature we want 
is to allow SQL statements to span multiple lines, and using the psql lexer 
solves that. We don't need the backslash-continuations when we have that.


Sure. The feature *I* initially wanted was to have multi-line 
meta-commands. For this feature ISTM that continuations are, alas, the 
solution.



Indeed there are plenty of links already which are generated by makefiles
(see src/bin/pg_xlogdump/*), and probably a copy is made on windows. There
should no file duplication within the source tree.


Yeah, following the example of pg_xlogdump and others is the way to go.

Docs need updating, and there's probably some cleanup to do before this is 
ready for committing, but overall I think this is definitely the right 
direction.


I've created an entry for the next commitfest, and put the status to 
waiting on author.


I complained upthread that this makes it impossible to use multi-statements 
in pgbench, as they would be split into separate statements, but looking at 
psqlscan.l there is actually a syntax for that in psql already. You escape 
the semicolon as \;, e.g. SELECT 1 \; SELECT 2;, and then both queries will 
be sent to the server as one. So even that's OK.


Good!

--
Fabien.




Re: [HACKERS] nodes/*funcs.c inconsistencies

2015-08-05 Thread Noah Misch
On Mon, Aug 03, 2015 at 09:57:52AM -0700, Joe Conway wrote:
 On 08/03/2015 09:55 AM, Stephen Frost wrote:
  * Noah Misch (n...@leadboat.com) wrote:
  On Sun, Aug 02, 2015 at 11:31:16PM -0400, Tom Lane wrote:
  That being the case, it would probably be a good idea to get
  them done before alpha2, as there may not be a good opportunity
  afterwards.
  
  Freedom to bump catversion after alpha2 will be
  barely-distinguishable from freedom to do so now.  I have planned
  to leave my usual comment period of a few days, though skipping
  that would be rather innocuous in this case.
  
  This is clearly necessary, of course, and I wouldn't be surprised
  if we have another necessary bump post-alpha2, but at the same
  time, it'd certainly be nice if we are able to put out alpha2 and
  then a beta1 in the future without needing to do a bump.
  
  In other words, +1 from me for going ahead and committing this for 
  alpha2, if possible.
 
 +1

Rather than commit on an emergency basis in the few hours between these +1's
and the wrap, I kept to my original schedule.  FYI, if it hadn't required
emergency procedures (cancelling the day's plans so I could get to a notebook
computer), I would have committed early based on the three +1s.




Re: [HACKERS] [DESIGN] ParallelAppend

2015-08-05 Thread Amit Kapila
On Sat, Aug 1, 2015 at 6:39 PM, Kouhei Kaigai kai...@ak.jp.nec.com wrote:

  On Tue, Jul 28, 2015 at 6:08 PM, Kouhei Kaigai kai...@ak.jp.nec.com
wrote:
 
  I am not sure, but what problem do you see in putting Funnel node
  for one of the relation scans and not for the others.
 
 At this moment, I'm not certain whether background worker can/ought
 to launch another background workers.
 If sub-Funnel node is executed by 10-processes then it also launch
 10-processes for each, 100-processes will run for each?


Yes, that could be more work than currently, but what I had in mind
is not that way; rather I was thinking that the master backend will only
kick off workers for Funnel nodes in the plan.

   If we pull Funnel here, I think the plan shall be as follows:
 Funnel
  -- SeqScan on rel1
  -- PartialSeqScan on rel2
  -- IndexScan on rel3
  
 
  So if we go this route, then Funnel should have capability
  to execute non-parallel part of plan as well, like in this
  case it should be able to execute non-parallel IndexScan on
  rel3 as well and then it might need to distinguish between
  parallel and non-parallel part of plans.  I think this could
  make Funnel node complex.
 
 It is different from what I plan now. In the above example,
 Funnel node has two non-parallel aware node (rel1 and rel3)
 and one parallel aware node, then three PlannedStmt for each
 shall be put on the TOC segment. Even though the background
 workers pick up a PlannedStmt from the three, only one worker
 can pick up the PlannedStmt for rel1 and rel3, however, rel2
 can be executed by multiple workers simultaneously.

Okay, now I got your point, but I think the cost of execution
of a non-parallel node by an additional worker is not small considering
the communication cost and the setup of an additional worker for
each sub-plan (assume the case where out of 100 child nodes
only a few (2 or 3) nodes actually need multiple workers).

 
  I think for a particular PlannedStmt, number of workers must have
  been decided before start of execution, so if those many workers are
  available to work on that particular PlannedStmt, then next/new
  worker should work on next PlannedStmt.
 
 My concern about pre-determined number of workers is, it depends on the
 run-time circumstances of concurrent sessions. Even if planner wants to
 assign 10-workers on a particular sub-plan, only 4-workers may be
 available on the run-time because of consumption by side sessions.
 So, I expect only maximum number of workers is meaningful configuration.


In that case, there is a possibility that many of the workers are just
working on one or two of the nodes and the execution of other nodes might
get starved.  I understand allocating the number of workers for different
nodes is a tricky problem; however, we should try to develop an
algorithm with some degree of fairness in the allocation of workers
across nodes.
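To make the fairness idea concrete, here is a purely hypothetical round-robin allocator: each sub-plan receives one worker per pass until the pool is exhausted or the plan reaches its planned maximum, so a single expensive node cannot monopolize the pool. None of these names correspond to actual PostgreSQL code:

```c
#include <assert.h>

/* Hypothetical round-robin worker allocation across sub-plans: hand out
 * one worker per plan per pass, never exceeding a plan's planned
 * maximum, so no sub-plan starves while another hoards workers. */
static void
allocate_workers(const int planned[], int assigned[], int nplans,
                 int available)
{
    for (int i = 0; i < nplans; i++)
        assigned[i] = 0;

    int progress = 1;
    while (available > 0 && progress)
    {
        progress = 0;
        for (int i = 0; i < nplans && available > 0; i++)
        {
            if (assigned[i] < planned[i])
            {
                assigned[i]++;
                available--;
                progress = 1;
            }
        }
    }
}
```

For example, with planned maxima {10, 1, 3} and only 4 workers available, the result is {2, 1, 1} rather than all 4 going to the first sub-plan.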


  2. Execution of work by workers and Funnel node and then pass
  the results back to upper node.  I think this needs some more
  work in addition to ParallelSeqScan patch.
 
 I expect we can utilize existing infrastructure here. It just picks
 up the records that come from the underlying workers, then raises
 them to the upper node.


Sure, but you still need some work, at least in the area of making
workers understand different node types; I am guessing you need
to modify readfuncs.c to support any new plan node for this
work.


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] Freeze avoidance of very large table.

2015-08-05 Thread Alvaro Herrera
Bruce Momjian wrote:

 This is why I suggested putting the new SQL function where it belongs
 for consistency and then open a separate thread to discuss the future of
 where we want diagnostic functions to be.  It is too complicated to talk
 about both issues in the same thread.

Oh come on -- gimme a break.  We figure out much more complicated
problems in single threads all the time.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Raising our compiler requirements for 9.6

2015-08-05 Thread Noah Misch
On Wed, Aug 05, 2015 at 10:32:48AM -0400, Tom Lane wrote:
 Andres Freund and...@anarazel.de writes:
  Wasn't the point that 32 bit AIX as a whole hasn't been supported for a
  couple years now? My willingness to expend effort for that is rather
  limited.
 
 Well, there's one in the buildfarm.  We don't generally turn off
 buildfarm critters just because the underlying OS is out of support
 (the population would be quite a bit lower if we did).

For the record, mandrill's OS and compiler are both in support and not so old
(June 2012 compiler, February 2013 OS).  The last 32-bit AIX *kernel* did exit
support in 2012, but 32-bit executables remain the norm.




Re: [HACKERS] deparsing utility commands

2015-08-05 Thread Alvaro Herrera
Jim Nasby wrote:
 On 7/31/15 8:45 AM, Shulgin, Oleksandr wrote:

 STATEMENT:  create view v1 as select * from t1 ;
 ERROR:  operator does not exist: pg_catalog.oid = pg_catalog.oid at
 character 52
 HINT:  No operator matches the given name and argument type(s). You
 might need to add explicit type casts.
 QUERY:  SELECT * FROM pg_catalog.pg_rewrite WHERE ev_class = $1 AND
 rulename = $2
 CONTEXT:  PL/pgSQL function test_ddl_deparse() line 1 at FOR over SELECT
 rows

 I'm not sure what test_ddl_deparse is doing, is that where the oid = oid is
 coming from?
 
 It might be enlightening to replace = with OPERATOR(pg_catalog.=) and see if
 that works.

That worked, thanks!

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Freeze avoidance of very large table.

2015-08-05 Thread Bruce Momjian
On Wed, Aug  5, 2015 at 11:57:48PM -0300, Alvaro Herrera wrote:
 Bruce Momjian wrote:
 
  This is why I suggested putting the new SQL function where it belongs
  for consistency and then open a separate thread to discuss the future of
  where we want diagnostic functions to be.  It is too complicated to talk
  about both issues in the same thread.
 
 Oh come on -- gimme a break.  We figure out much more complicated
 problems in single threads all the time.

Well, people are confused, as stated --- what more can I say?

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] Caching offsets of varlena attributes

2015-08-05 Thread Haribabu Kommi
On Thu, Aug 6, 2015 at 4:09 AM, Vignesh Raghunathan
vignesh.pg...@gmail.com wrote:
 Hello,

 In the function heap_deform_tuple, there is a comment before caching varlena
 attributes specifying that the offset will be valid for either an aligned or
 unaligned value if there are no padding bytes. Could someone please
 elaborate on this?

In PostgreSQL, a tuple can contain two types of varlena headers: the
short varlena (which doesn't need any alignment) and the 4-byte varlena
(which needs alignment).

Refer to the function heap_fill_tuple to see how the tuple is constructed.
For short varlena headers, even if the attribute's alignment would normally
call for padding, we must not align. For this reason, att_align_pointer is
called instead of att_align_nominal.

The following comment from the att_align_pointer macro explains why we
should use it instead of att_align_nominal:

 * (A zero byte must be either a pad byte, or the first byte of a correctly
 * aligned 4-byte length word; in either case we can align safely.  A non-zero
 * byte must be either a 1-byte length word, or the first byte of a correctly
 * aligned 4-byte length word; in either case we need not align.)

 Also, why is it safe to call att_align_nominal if the attribute is not
 varlena? Couldn't att_align_pointer be called for both cases? I am not able
 to understand how att_align_nominal is faster.

All other types definitely need char, int, or double alignment, so it is
safe to use the att_align_nominal macro. Please refer to the function
heap_fill_tuple for more details.
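The decision described above can be sketched as follows (a hedged sketch;
the function names here are illustrative, not PostgreSQL's actual macros):

```c
#include <stddef.h>
#include <stdint.h>

/* Round addr up to the next multiple of alignval (a power of two). */
static uintptr_t
align_up(uintptr_t addr, uintptr_t alignval)
{
	return (addr + alignval - 1) & ~(alignval - 1);
}

/*
 * Sketch of the att_align_pointer decision for a varlena attribute
 * (attlen == -1), following the macro comment quoted above: a non-zero
 * first byte is a 1-byte (short) header and needs no alignment; a zero
 * byte is either a pad byte or the first byte of an already-aligned
 * 4-byte header, so aligning is safe in either case.
 */
static uintptr_t
varlena_offset(uintptr_t cur_offset, const unsigned char *datum_start,
			   uintptr_t alignval)
{
	if (*datum_start != 0)
		return cur_offset;		/* short varlena: use the offset as-is */
	return align_up(cur_offset, alignval);	/* pad byte or aligned header */
}
```

att_align_nominal can skip the pointer dereference entirely, which is why
it is faster for fixed-alignment types.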

Regards,
Hari Babu
Fujitsu Australia




Re: [HACKERS] Raising our compiler requirements for 9.6

2015-08-05 Thread Tom Lane
Andres Freund and...@anarazel.de writes:
 ... I'm going to reshuffle things in that direction tomorrow. I'll
 wait for other fallout first though. So far only gcc, xlc and clang (via
 gcc frontend) have run...

In the department of other fallout, pademelon is not happy:

cc -Ae -g +O0 -Wp,-H16384 -I../../../src/include -D_XOPEN_SOURCE_EXTENDED  
-I/usr/local/libxml2-2.6.23/include/libxml2 -I/usr/local/include  -c -o 
pg_resetxlog.o pg_resetxlog.c
cc -Ae -g +O0 -Wp,-H16384 pg_resetxlog.o -L../../../src/port 
-L../../../src/common -L/usr/local/libxml2-2.6.23/lib -L/usr/local/lib -Wl,+b 
-Wl,'/home/bfarm/bf-data/HEAD/inst/lib'  -Wl,-z -lpgcommon -lpgport -lxnet 
-lxml2 -lz -lreadline -ltermcap -lm  -o pg_resetxlog
/usr/ccs/bin/ld: Unsatisfied symbols:
   pg_atomic_clear_flag_impl (code)
   pg_atomic_init_flag_impl (code)
   pg_atomic_compare_exchange_u32_impl (code)
   pg_atomic_fetch_add_u32_impl (code)
   pg_atomic_test_set_flag_impl (code)
   pg_atomic_init_u32_impl (code)
make[3]: *** [pg_resetxlog] Error 1

I'd have thought that port/atomics.c would be included in libpgport, but
it seems not.  (But is pademelon really the only buildfarm critter relying
on the fallback atomics implementation?)

Another possible angle is: what the heck does pg_resetxlog need with
atomic ops, anyway?  Should these things have a different, stub
implementation for FRONTEND code?
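A FRONTEND stub along the lines suggested above could reduce the atomic ops
to plain loads and stores, since frontend programs are single-threaded (a
hedged sketch; these names are illustrative, not PostgreSQL's actual
pg_atomic_* API):

```c
#include <stdbool.h>
#include <stdint.h>

/* Single-threaded stand-in for an atomic 32-bit counter. */
typedef struct
{
	uint32_t	value;
} frontend_atomic_u32;

static void
frontend_atomic_init_u32(frontend_atomic_u32 *a, uint32_t v)
{
	a->value = v;				/* no barrier needed: one thread only */
}

static bool
frontend_atomic_compare_exchange_u32(frontend_atomic_u32 *a,
									 uint32_t *expected, uint32_t newval)
{
	if (a->value == *expected)
	{
		a->value = newval;
		return true;
	}
	*expected = a->value;		/* report the current value, as CAS does */
	return false;
}

static uint32_t
frontend_atomic_fetch_add_u32(frontend_atomic_u32 *a, uint32_t add)
{
	uint32_t	old = a->value;

	a->value += add;
	return old;					/* return the pre-add value */
}
```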

regards, tom lane




Re: [HACKERS] [PATCH] libpq: Allow specifying multiple host names to try to connect to

2015-08-05 Thread Michael Paquier
On Wed, Aug 5, 2015 at 11:53 PM, Bruce Momjian br...@momjian.us wrote:
 On Wed, Jul  8, 2015 at 12:24:37PM -0400, Robbie Harwood wrote:
  You update the documentation just for psql, but your change affects any
  libpq application; if we go forward with this patch we should update the
  documentation for libpq as well.
 
  This approach seems to work with the url style of conninfo
 
  For example
postgres://some-down-host.info,some-other-host.org:5435/test1
 
  seems to work as expected, but I don't like that syntax; I would rather see
  postgres://some-down-host.info:5435/test1,postgres://some-other-host.org:5435/test1
 
  This would be a more invasive change, but I think the syntax is more usable.

 I agree with this; it seems to me that it's more powerful to be able to
 specify complete urls for when they may differ.

 For the non-url case though, I don't see a clean way of doing this.  We
 could always, e.g., locally bind port specification to the closest host
 specification, but that seems nasty, and is still less powerful than
 passing urls (or we could just do the same for all parameters, but
 that's just a mess).

 Might it be reasonable to only allow the multi-host syntax in the
 url-style and not otherwise?

 First, I agree this is a very useful feature that we want.  Many NoSQL
 databases are promoting multi-host client libraries as HA, which is kind
 of humorous, and also makes sense because many NoSQL solutions are
 multi-host.
 I can see this feature benefitting us for clients to auto-failover
 without requiring a pooler or virtual IP reassignment, and also useful
 for read-only connections that want to connect to a read-only slave, but
 don't care which one.  The idea of randomly selecting a host from the
 list might be a future feature.

Yep. The JDBC driver is doing it as well.

 I realize this is libpq-feature-creep, but considering the complexities
 of a pooler and virtual IP address reassignment, I think adding this is
 reasonable.  The fact that other DBs are doing it, including I think
 VMware's libpq, supports the idea of adding this simple specification.

Not exactly (the change has been open-sourced). Some extra logic has
been added to the pghost parsing so that it is possible to grab an
LDAP search filter from it, and then override pghostaddr using the
result found.
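The comma-separated multi-URL syntax discussed above could be handled
client-side roughly like this (a hedged sketch under the assumption that
each comma-separated element is a complete URL; split_conninfo_list is a
hypothetical helper, not part of libpq):

```c
#include <stddef.h>
#include <string.h>

/*
 * Split a comma-separated list of complete connection URLs into
 * individual conninfo strings, one per candidate server. Empty
 * entries and over-long entries are skipped.
 */
static int
split_conninfo_list(const char *list, char out[][256], int max)
{
	int			n = 0;
	const char *start = list;
	const char *p = list;

	for (;; p++)
	{
		if (*p == ',' || *p == '\0')
		{
			size_t		len = (size_t) (p - start);

			if (n < max && len > 0 && len < 256)
			{
				memcpy(out[n], start, len);
				out[n][len] = '\0';
				n++;
			}
			if (*p == '\0')
				break;
			start = p + 1;
		}
	}
	return n;
}
```

Each resulting string could then be handed to PQconnectdb() in turn,
keeping the first connection that reaches CONNECTION_OK.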
-- 
Michael




Re: [HACKERS] Doubt about AccessExclusiveLock in ALTER TABLE .. SET ( .. );

2015-08-05 Thread Michael Paquier
On Thu, Aug 6, 2015 at 3:06 AM, Fabrízio de Royes Mello wrote:
 On Wed, Aug 5, 2015 at 9:31 AM, Robert Haas wrote:
 Agreed.  I think we're making a mountain out of a molehill here.  As
 long as the locks that are actually used are monotonic, just use  and
 stick a comment in there explaining that it could need adjustment if
 we use other lock levels in the future.  I presume all the lock-levels
 used for DDL are, and will always be, self-exclusive, so why all this
 hand-wringing?


 New version attached with suggested changes.

Thanks!

+# SET autovacuum_* options needs a ShareUpdateExclusiveLock
+# so we mix reads with it to see what works or waits
s/needs/need/ and I think you mean mixing writes, not reads.

Those are minor things though, and from my point of view a committer
can look at it.
Regards,
-- 
Michael




Re: [HACKERS] Freeze avoidance of very large table.

2015-08-05 Thread Bruce Momjian
On Wed, Aug  5, 2015 at 10:58:00AM -0700, Josh Berkus wrote:
 On 08/05/2015 10:46 AM, Alvaro Herrera wrote:
  1. Add the functions as builtins.
 This is what the current patch does.  Simon seems to prefer this,
 because he wants the function to be always available in production;
 but I don't like this option because adding functions as builtins
 makes it impossible to move later to extensions.
 Bruce doesn't like this option either.
 
 Why would we want to move them later to extensions?  Do you anticipate
 not needing them in the future?  If we don't need them in the future,
 why would they continue to exist at all?
 
 I'm really not getting this.
  

This is why I suggested putting the new SQL function where it belongs
for consistency and then open a separate thread to discuss the future of
where we want diagnostic functions to be.  It is too complicated to talk
about both issues in the same thread.

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB   http://enterprisedb.com

  + Everyone has their own god. +

