Re: [HACKERS] Re: [HACKERS] Patch for MAP_HUGETLB for mmap() shared memory

2013-06-29 Thread Andres Freund
Hi,

On 2013-06-28 23:03:22 -0400, Bruce Momjian wrote:
 Did we decide against specifying huge pages in Postgres?

I don't think so. We need somebody to make some last modifications to
the patch though and Christian doesn't seem to have the time atm.

I think the bigger the memory (size of the per-process page table) and the
more CPUs (number of pg backends = number of page tables allocated), the
more important it gets to implement this. We can already spend gigabytes
of memory on those on big systems.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] GIN improvements part 1: additional information

2013-06-29 Thread Heikki Linnakangas

On 27.06.2013 17:20, Antonin Houska wrote:

I was curious about the new layout of the data page, so I spent a while
looking into the code.
It's interesting, but I suspect 2 things are not o.k.:

* gindatapage.c:dataIsEnoughSpace() - 'i++' in the for loop should
probably be 'j++', otherwise it loops forever


Hmm. It won't loop forever, i is counting the number of entries that fit 
on the page, while j is used to loop through the items being added. 
However, i isn't actually used for anything (and isn't initialized 
either), so it's just dead code.


- Heikki




Re: [HACKERS] checking variadic any argument in parser - should be array

2013-06-29 Thread Pavel Stehule
2013/6/28 Jeevan Chalke jeevan.cha...@enterprisedb.com:
 Hi Pavel,


 On Fri, Jun 28, 2013 at 2:53 AM, Pavel Stehule pavel.steh...@gmail.com
 wrote:

 Hello

 2013/6/27 Jeevan Chalke jeevan.cha...@enterprisedb.com:
  Hi Pavel,
 
  I had a look over your new patch and it looks good to me.
 
  My review comments on patch:
 
  1. It cleanly applies with patch -p1 command.
  2. make/make install/make check were smooth.
  3. My own testing didn't find any issue.
  4. I had a code walk-through and I am little bit worried or confused on
  following changes:
 
    if (PG_ARGISNULL(argidx))
        return NULL;

  ! /*
  !  * Non-null argument had better be an array.  The parser doesn't
  !  * enforce this for VARIADIC ANY functions (maybe it should?), so that
  !  * check uses ereport not just elog.
  !  */
  ! arr_typid = get_fn_expr_argtype(fcinfo->flinfo, argidx);
  ! if (!OidIsValid(arr_typid))
  !     elog(ERROR, "could not determine data type of concat() input");
  !
  ! if (!OidIsValid(get_element_type(arr_typid)))
  !     ereport(ERROR,
  !             (errcode(ERRCODE_DATATYPE_MISMATCH),
  !              errmsg("VARIADIC argument must be an array")));

  - /* OK, safe to fetch the array value */
    arr = PG_GETARG_ARRAYTYPE_P(argidx);

    /*
  --- 3820,3828 ----
    if (PG_ARGISNULL(argidx))
        return NULL;

  ! /* Non-null argument had better be an array */
  ! Assert(OidIsValid(get_element_type(get_fn_expr_argtype(fcinfo->flinfo, argidx))));

    arr = PG_GETARG_ARRAYTYPE_P(argidx);
 
  We have moved checking of array type in parser (ParseFuncOrColumn())
  which
  basically verifies that argument type is indeed an array. Which exactly
  same
  as that of second if statement in earlier code, which you replaced by an
  Assert.
 
  However, what about the first if statement? Is it no longer required now?
  What if get_fn_expr_argtype() returns InvalidOid - don't you think we need
  to throw an error saying "could not determine data type of concat() input"?

 Yes. If I understand the question well, the main difference is the stage
 of checking. When I do the check at the parser stage, I can expect the
 actual_arg_types array to hold valid values.


 That's fine.



 
  Well, I tried hard to trigger that code, but wasn't able to get any test
  which fails with that error, either in the earlier version or with your
  version. And thus I believe it is dead code, which you removed? Is it so?

 It is removed in this version :), and it is not a bug, so there is no
 reason for patching previous versions. Probably there should be an
 Assert instead of an error. This code is relatively mature - so I don't
 expect an issue from the SQL level now. On the other hand, these functions
 can be called via the DirectFunctionCall API from custom C server-side
 routines, and there a developer can easily introduce a bug by not filling
 the necessary structs properly. So the Asserts can stay.

 
  Moreover, if in any case get_fn_expr_argtype() returns an InvalidOid, we
  will hit an Assert rather than an error, is this what you expect ?
 

 At this moment, yes.

 A small change can help with tracking down the source of possible issues.

 so instead of the line

 Assert(OidIsValid(get_element_type(get_fn_expr_argtype(fcinfo->flinfo, argidx))));

 use two lines

 Assert(OidIsValid(get_fn_expr_argtype(fcinfo->flinfo, argidx)));
 Assert(OidIsValid(get_element_type(get_fn_expr_argtype(fcinfo->flinfo, argidx))));

 what you think?


 Well, I still don't fully understand or am not convinced about the first
 Assert; an error will be good enough, like what we have now.

 Anyway, converting it over two lines eases the debugging effort. But please
 take the output of get_fn_expr_argtype(fcinfo->flinfo, argidx) into a separate
 variable so that we avoid calling the same function twice.

It is called inside an Assert, so it will be removed in a production
environment. Using a variable for this purpose is useless and less
maintainable.


 I think some short comment for these Asserts would be good. At least for the
 second one, as it is already done by the parser and non-arrays are not
 expected at this point.


yes, I'll add some comment

Regards

Pavel



  5. This patch has user visibility, i.e. now we are throwing an error
  when
  user only says VARIADIC NULL like:
 
  select concat(variadic NULL) is NULL;
 
  Previously it was working but now we are throwing an error. Well, we are
  now stricter than earlier with using VARIADIC + ANY, so I have no issue as
  such. But I guess we need to document this user-visible change. I don't
  know exactly where, though. I searched for VARIADIC and all related
  documentation says it needs an array, so nothing harmful as such; you can
  ignore this review comment but I thought it worth mentioning.

 Yes, it is a point for the possible-issues section of the RELEASE NOTES, I think.


 

[HACKERS] GIN low hanging fruit

2013-06-29 Thread Heikki Linnakangas
While profiling Alexander's patches, I noticed that a lot of time is 
spent in ginCompareItemPointers:


 47,59%  postmaster  postgres gingetbitmap
 46,73%  postmaster  postgres ginCompareItemPointers
  2,31%  postmaster  postgres FunctionCall8Coll
  1,54%  postmaster  postgres callConsistentFn
  0,79%  postmaster  postgres ginarrayconsistent
  0,63%  postmaster  postgres MemoryContextReset

I think much of the time spent in gingetbitmap has to do with calling 
ginCompareItemPointers, too.


The query in question is this:

postgres=# explain analyze select * from intarrtbl where ii @> '{1,2,3,4,5,6,7,8,9,10}'::int[];
                                                      QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on intarrtbl  (cost=3036.00..3040.01 rows=1 width=61) (actual time=405.461..405.461 rows=0 loops=1)
   Recheck Cond: (ii @> '{1,2,3,4,5,6,7,8,9,10}'::integer[])
   ->  Bitmap Index Scan on gin_intarr_master  (cost=0.00..3036.00 rows=1 width=0) (actual time=405.458..405.458 rows=0 loops=1)
         Index Cond: (ii @> '{1,2,3,4,5,6,7,8,9,10}'::integer[])
 Total runtime: 405.482 ms
(5 rows)

Rewriting and inlining ginCompareItemPointers as attached helps a lot. 
After the patch:


postgres=# explain analyze select * from intarrtbl where ii @> '{1,2,3,4,5,6,7,8,9,10}'::int[];
                                                      QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on intarrtbl  (cost=3036.00..3040.01 rows=1 width=61) (actual time=149.694..149.694 rows=0 loops=1)
   Recheck Cond: (ii @> '{1,2,3,4,5,6,7,8,9,10}'::integer[])
   ->  Bitmap Index Scan on gin_intarr_master  (cost=0.00..3036.00 rows=1 width=0) (actual time=149.691..149.691 rows=0 loops=1)
         Index Cond: (ii @> '{1,2,3,4,5,6,7,8,9,10}'::integer[])
 Total runtime: 149.716 ms
(5 rows)

This patch seems pretty uncontroversial, so I'll go commit it. Once we 
get this elephant out of the room, performance testing the rest of the 
GIN patches becomes much more meaningful. We might want to optimize the 
ItemPointerCompare function, used elsewhere in the backend, too, but 
I'll leave that for a separate patch.


- Heikki
diff --git a/src/backend/access/gin/gindatapage.c b/src/backend/access/gin/gindatapage.c
index 13ab448..f017de0 100644
--- a/src/backend/access/gin/gindatapage.c
+++ b/src/backend/access/gin/gindatapage.c
@@ -17,25 +17,6 @@
 #include "access/gin_private.h"
 #include "utils/rel.h"
 
-int
-ginCompareItemPointers(ItemPointer a, ItemPointer b)
-{
-	BlockNumber ba = GinItemPointerGetBlockNumber(a);
-	BlockNumber bb = GinItemPointerGetBlockNumber(b);
-
-	if (ba == bb)
-	{
-		OffsetNumber oa = GinItemPointerGetOffsetNumber(a);
-		OffsetNumber ob = GinItemPointerGetOffsetNumber(b);
-
-		if (oa == ob)
-			return 0;
-		return (oa > ob) ? 1 : -1;
-	}
-
-	return (ba > bb) ? 1 : -1;
-}
-
 /*
  * Merge two ordered arrays of itempointers, eliminating any duplicates.
  * Returns the number of items in the result.
diff --git a/src/include/access/gin_private.h b/src/include/access/gin_private.h
index 1868b77..c603521 100644
--- a/src/include/access/gin_private.h
+++ b/src/include/access/gin_private.h
@@ -530,7 +530,6 @@ extern void ginEntryFillRoot(GinBtree btree, Buffer root, Buffer lbuf, Buffer rb
 extern IndexTuple ginPageGetLinkItup(Buffer buf);
 
 /* gindatapage.c */
-extern int	ginCompareItemPointers(ItemPointer a, ItemPointer b);
 extern uint32 ginMergeItemPointers(ItemPointerData *dst,
 	 ItemPointerData *a, uint32 na,
 	 ItemPointerData *b, uint32 nb);
@@ -724,4 +723,28 @@ extern void ginHeapTupleFastCollect(GinState *ginstate,
 extern void ginInsertCleanup(GinState *ginstate,
  bool vac_delay, IndexBulkDeleteResult *stats);
 
+/*
+ * Merging the results of several gin scans compares item pointers a lot,
+ * so we want this to be inlined. But if the compiler doesn't support that,
+ * fall back on the non-inline version from itemptr.c. See STATIC_IF_INLINE in
+ * c.h.
+ */
+#ifdef PG_USE_INLINE
+static inline int
+ginCompareItemPointers(ItemPointer a, ItemPointer b)
+{
+	uint64 ia = (uint64) a->ip_blkid.bi_hi << 32 | (uint64) a->ip_blkid.bi_lo << 16 | a->ip_posid;
+	uint64 ib = (uint64) b->ip_blkid.bi_hi << 32 | (uint64) b->ip_blkid.bi_lo << 16 | b->ip_posid;
+
+	if (ia == ib)
+		return 0;
+	else if (ia > ib)
+		return 1;
+	else
+		return -1;
+}
+#else
+#define ginCompareItemPointers(a, b) ItemPointerCompare(a, b)
+#endif   /* PG_USE_INLINE */
+
 #endif   /* GIN_PRIVATE_H */



Re: [HACKERS] psql and pset without any arguments

2013-06-29 Thread Marc Mamin

From: pgsql-hackers-ow...@postgresql.org 
[pgsql-hackers-ow...@postgresql.org] on behalf of Gilles Darold 
[gilles.dar...@dalibo.com]

I was looking at the psql 8.3 documentation about \pset options and saw that
there was the following note:

Note: It is an error to call \pset without any arguments. In the
future this case might show the current status of all printing options.

I looked backward and forward and found that this note is present in all
versions since 7.1 up to 9.3; maybe it is time to add this little feature.

I've attached a patch to add the usage of the \pset command without any
arguments to display the current status of all printing options instead of
the error message. Here is a sample output:

(postgres@[local]:5494) [postgres]  \pset
Output format is aligned.
Border style is 2.
Expanded display is used automatically.
Null display is NULL.
Field separator is |.
Tuples only is off.
Title is unset.
Table attributes unset.
Line style is unicode.
Pager is used for long output.
Record separator is newline.
(postgres@[local]:5494) [postgres] 


Hello,
this is a nice additional feature.
As a user (not a hacker), I would prefer to see the real parameter name instead 
of the display name.

e.g.
Border style is 2.
=>
border = 2

Without this, the user would not know offhand which parameter to 
modify...

best regards,
Marc Mamin

To avoid redundant code I've added a new method printPsetInfo() so that
do_pset() and exec_command() will use the same output message; they are
all in src/bin/psql/command.c. For example:

(postgres@[local]:5494) [postgres]  \pset null 'NULL'
Null display is NULL.
(postgres@[local]:5494) [postgres] 

The patch prints all variable information from struct printTableOpt when
\pset is given without any arguments, and also updates the documentation.

Let me know if there's any additional work to do on this basic patch or
something that I've omitted.

Best regards,

--
Gilles Darold
http://dalibo.com - http://dalibo.org




[HACKERS] proposal: simple date constructor from numeric values

2013-06-29 Thread Pavel Stehule
Hello

For a long time I have been thinking about a simple function for creating
date or timestamp values based on numeric types, without the necessity of
creating a format string.

Something like ansi_date(year, month, day) and ansi_timestamp(year, month,
day, hour, minutes, sec, msec, offset)

What do you think about this idea?

Regards

Pavel Stehule




Re: [HACKERS] GIN improvements part 1: additional information

2013-06-29 Thread Antonin Houska

On 06/29/2013 11:00 AM, Heikki Linnakangas wrote:

On 27.06.2013 17:20, Antonin Houska wrote:

I was curious about the new layout of the data page, so I spent a while
looking into the code.
It's interesting, but I suspect 2 things are not o.k.:

* gindatapage.c:dataIsEnoughSpace() - 'i++' in the for loop should
probably be 'j++', otherwise it loops forever


Hmm. It won't loop forever, i is counting the number of entries that 
fit on the page, while j is used to loop through the items being 
added. However, i isn't actually used for anything (and isn't 
initialized either), so it's just dead code.


- Heikki
You're right. While thinking about the possible meaning of 'i', I didn't 
notice that the 'j++' is in the 'for' construct. Stupid mistake on my side.


Tony




Re: [HACKERS] psql and pset without any arguments

2013-06-29 Thread Erik Rijkers
On Sat, June 29, 2013 01:08, Gilles Darold wrote:
  Here is a sample output:

 (postgres@[local]:5494) [postgres]  \pset
 Output format is aligned.
 Border style is 2.
 Expanded display is used automatically.
 Null display is NULL.
 Field separator is |.
 Tuples only is off.
 Title is unset.
 Table attributes unset.
 Line style is unicode.
 Pager is used for long output.
 Record separator is newline.
 (postgres@[local]:5494) [postgres] 


+1

This seems handy.  Maybe it could be improved
a bit with the keyboard shortcuts prefixed, like so:

(postgres@[local]:5494) [postgres]  \pset
\a  Output format is aligned.
\x  Expanded display is used automatically.
\f  Field separator is |.
\t  Tuples only is off.
\C  Title is unset.
\T  Table attributes unset.
Border style is 2.
Line style is unicode.
Null display is NULL.
Pager is used for long output.
Record separator is newline.


So that it also serves as a reminder on how to subsequently
change them.


Thanks,

Erik Rijkers










Re: [HACKERS] psql and pset without any arguments

2013-06-29 Thread Pavel Stehule
2013/6/29 Erik Rijkers e...@xs4all.nl:
 On Sat, June 29, 2013 01:08, Gilles Darold wrote:
  Here is a sample output:

 (postgres@[local]:5494) [postgres]  \pset
 Output format is aligned.
 Border style is 2.
 Expanded display is used automatically.
 Null display is NULL.
 Field separator is |.
 Tuples only is off.
 Title is unset.
 Table attributes unset.
 Line style is unicode.
 Pager is used for long output.
 Record separator is newline.
 (postgres@[local]:5494) [postgres] 


 +1

 This seems handy.  Maybe it could be improved
 a bit with the keyboard shortcuts prefixed, like so:

 (postgres@[local]:5494) [postgres]  \pset
 \a  Output format is aligned.
 \x  Expanded display is used automatically.
 \f  Field separator is |.
 \t  Tuples only is off.
 \C  Title is unset.
 \T  Table attributes unset.
 Border style is 2.
 Line style is unicode.
 Null display is NULL.
 Pager is used for long output.
 Record separator is newline.


it is less readable - and you can get the same info with \?

Regards

Pavel


 So that it also serves a reminder on how to subsequently
 change them


 Thanks,

 Erik Rijkers












Re: [HACKERS] [GENERAL] pg_upgrade -u

2013-06-29 Thread Bruce Momjian
On Fri, Jun 28, 2013 at 11:59:58PM -0400, Peter Eisentraut wrote:
 On 6/28/13 9:43 PM, Bruce Momjian wrote:
  On Fri, Jun 28, 2013 at 09:15:31PM -0400, Peter Eisentraut wrote:
  On 6/28/13 6:06 PM, Bruce Momjian wrote:
  On Wed, May 29, 2013 at 09:44:00AM -0400, Peter Eisentraut wrote:
  On 5/28/13 10:55 PM, Bruce Momjian wrote:
  Wow, I never realized other tools used -U for user, instead of -u. 
  Should I change pg_upgrade to use -U for 9.4?  I can keep supporting -u
  as an undocumented option.
 
  It seems to me that that option shouldn't be necessary anyway.
  pg_upgrade should somehow be able to find out by itself what the
  superuser of the old cluster was.
 
  Uh, any idea how to do that?
 
  select rolname from pg_authid where oid = 10;
  
  Uh, how do I know what username to connect as to issue that query?
 
 single-user mode?

Yes, but single-user mode is a whole new set of problems.

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




[HACKERS] pg_ctl and -h/help

2013-06-29 Thread Bruce Momjian
In studying pg_upgrade's handling of --help, I noticed that pg_ctl
supports -h for help, but it is the only tool to do so, and -h is not
documented.  I propose we remove -h for help in pg_ctl, and have it
support only -? and --help.

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] [PATCH] Add session_preload_libraries configuration parameter

2013-06-29 Thread Robins
 On 12 June 2013 22:12, Peter Eisentraut pete...@gmx.net wrote:

 This is like shared_preload_libraries except that it takes effect at
 backend start and can be changed without a full postmaster restart.  It
 is like local_preload_libraries except that it is still only settable by
 a superuser.  This can be a better way to load modules such as
 auto_explain.

 Since there are now three preload parameters, regroup the documentation
 a bit.  Put all parameters into one section, explain common
 functionality only once, update the descriptions to reflect current and
 future realities.
 ---


Hi,

Did some basic checks on this patch. List-wise feedback below.

- Cleanly applies to Git-Head: Yes (Minor 1 line offset in guc.c, but
that's probably because of the delay in reviewing)
- Documentation Updated: Yes
- All tests pass: Yes
- Removed unnecessary extra-lines: Yes

- Do we want it?: ???

- Any visible issues: No
- Any compiler warnings: No

- Others:
Number of new lines added not covered by tests: -107 (Weird, but it seems
to have reduced line coverage). But correct me if I am not seeing the
elephant in the room.

--
Robins Tharakan


Re: [HACKERS] Patch to add support of IF NOT EXISTS to others CREATE statements

2013-06-29 Thread Fabrízio de Royes Mello
On Thu, Jun 20, 2013 at 1:24 PM, Peter Eisentraut pete...@gmx.net wrote:

 On 6/20/13 11:04 AM, Robert Haas wrote:
  I kind of don't see the point of having IF NOT EXISTS for things that
  have OR REPLACE, and am generally in favor of implementing OR REPLACE
  rather than IF NOT EXISTS where possible.

 I tend to agree.


I agree that if it is possible to have OR REPLACE then we should do that,
but on the other hand I don't see a problem if we support both IF NOT
EXISTS and OR REPLACE. In some cases we don't really want to replace the
object body if it already exists, so IF NOT EXISTS is useful to avoid
breaking the transaction inside an upgrade script.



   Btw., I also want REPLACE BUT DO NOT CREATE.
  That's a mouthful.  What's it good for?

 If you run an upgrade SQL script that is supposed to replace, say, a
 bunch of functions with new versions, you'd want the behavior that it
 replaces the existing function if it exists, but errors out if it
 doesn't, because then you're perhaps connected to the wrong database.

 It's a marginal feature, and I'm not going to pursue it, but if someone
 wanted to make the CREATE commands fully featured, there is use for this.


Well, my intention is to do that for all CREATE commands.

Regards,

-- 
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
 Blog sobre TI: http://fabriziomello.blogspot.com
 Perfil Linkedin: http://br.linkedin.com/in/fabriziomello
 Twitter: http://twitter.com/fabriziomello


Re: [HACKERS] Patch to add support of IF NOT EXISTS to others CREATE statements

2013-06-29 Thread Fabrízio de Royes Mello
On Mon, Jun 24, 2013 at 8:05 AM, Andres Freund and...@2ndquadrant.com
wrote:


 I'd argue if we go that way - which seems to be a good idea - we really
 ought to make a complete pass and add it to all commands where it's
 currently missing.


Yeah... this is my purpose, but I decided to do it in two steps: first with
the patch already sent to CF1, and second with another patch to cover the
remaining CREATE commands.

I created a simple spreadsheet [1] to control my work. Suggestions are
welcome.


 * CREATE DOMAIN
 * CREATE GROUP
 * CREATE TABLE AS
 * CREATE MATERIALIZED VIEW
 * CREATE SEQUENCE (we have ALTER but not CREATE?)
 * CREATE TABLESPACE (arguably slightly harder)
 * CREATE FOREIGN DATA WRAPPER
 * CREATE SERVER
 * CREATE DATABASE
 * CREATE USER MAPPING
 * CREATE TRIGGER
 * CREATE EVENT TRIGGER
 * CREATE INDEX
 * CLUSTER


Ok.

 Cases that seem useful, even though we have OR REPLACE:
 * CREATE VIEW
 * CREATE FUNCTION

+1

 Of dubious use:
 * CREATE OPERATOR CLASS
 * CREATE OPERATOR FAMILY
 * CREATE RULE
 * CREATE CONVERSION

In fact I would say that these will be seldom used, but I don't see any
problem in implementing them.

Regards,

[1]
https://docs.google.com/spreadsheet/ccc?key=0Ai7oCVcVQiKFdEctQUxNNlR1R2xRTUpJNFNDcFo4MUE&usp=sharing

--
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
 Blog sobre TI: http://fabriziomello.blogspot.com
 Perfil Linkedin: http://br.linkedin.com/in/fabriziomello
 Twitter: http://twitter.com/fabriziomello


Re: [HACKERS] psql and pset without any arguments

2013-06-29 Thread Gilles Darold
On 29/06/2013 13:55, Erik Rijkers wrote:
 On Sat, June 29, 2013 01:08, Gilles Darold wrote:
  Here is a sample output:

 (postgres@[local]:5494) [postgres]  \pset
 Output format is aligned.
 Border style is 2.
 Expanded display is used automatically.
 Null display is NULL.
 Field separator is |.
 Tuples only is off.
 Title is unset.
 Table attributes unset.
 Line style is unicode.
 Pager is used for long output.
 Record separator is newline.
 (postgres@[local]:5494) [postgres] 

 +1

 This seems handy.  Maybe it could be improved
 a bit with the keyboard shortcuts prefixed, like so:

 (postgres@[local]:5494) [postgres]  \pset
 \a  Output format is aligned.
 \x  Expanded display is used automatically.
 \f  Field separator is |.
 \t  Tuples only is off.
 \C  Title is unset.
 \T  Table attributes unset.
 Border style is 2.
 Line style is unicode.
 Null display is NULL.
 Pager is used for long output.
 Record separator is newline.


 So that it also serves a reminder on how to subsequently
 change them

My first thought was to print something like \set output, but why not
reuse the original code/output when \pset is used?

This second choice has three main advantages :

 * Information shown is the same everywhere
 * Backward compatibility with \pset output
 * Avoid code redundancy

About the shortcuts, I agree with Pavel that it is less readable and it's
already in the help; \? is the big reminder :-)

Regards,

-- 
Gilles Darold
http://dalibo.com - http://dalibo.org





Re: [HACKERS] New regression test time

2013-06-29 Thread Robert Haas
On Jun 29, 2013, at 12:36 AM, Alvaro Herrera alvhe...@2ndquadrant.com wrote:
 I see two problems with this report:
 1. it creates a new installation for each run,

But that's the normal way of running the tests anyway, isn't it?

 2. it only uses the serial schedule.

make check uses the parallel schedule - did Josh indicate he did anything else?

...Robert



Re: [HACKERS] vacuumlo - use a cursor

2013-06-29 Thread Bruce Momjian

Is there a reason this patch was not applied?

---

On Mon, Nov 12, 2012 at 05:14:57PM -0500, Andrew Dunstan wrote:
 vacuumlo is rather simpleminded about dealing with the list of LOs
 to be removed - it just fetches them as a straight resultset. For
 one of our clients this resulted in an out-of-memory condition. The
 attached patch tries to remedy that by using a cursor instead. If
 this is wanted I will add it to the next commitfest. The actual
 changes are very small - most of the patch is indentation changes
 due to the introduction of an extra loop.
 
 cheers
 
 andrew

*** a/contrib/vacuumlo/vacuumlo.c
--- b/contrib/vacuumlo/vacuumlo.c
***************
*** 290,362 **** vacuumlo(const char *database, const struct _param * param)
  	PQclear(res);
  
  	buf[0] = '\0';
! 	strcat(buf, "SELECT lo FROM vacuum_l");
! 	res = PQexec(conn, buf);
! 	if (PQresultStatus(res) != PGRES_TUPLES_OK)
! 	{
! 		fprintf(stderr, "Failed to read temp table:\n");
! 		fprintf(stderr, "%s", PQerrorMessage(conn));
! 		PQclear(res);
  		PQfinish(conn);
  		return -1;
! 	}
  
- 	matched = PQntuples(res);
  	deleted = 0;
! 	for (i = 0; i < matched; i++)
  	{
! 		Oid			lo = atooid(PQgetvalue(res, i, 0));
  
! 		if (param->verbose)
  		{
! 			fprintf(stdout, "\rRemoving lo %6u   ", lo);
! 			fflush(stdout);
  		}
  
! 		if (param->dry_run == 0)
  		{
! 			if (lo_unlink(conn, lo) < 0)
  			{
! 				fprintf(stderr, "\nFailed to remove lo %u: ", lo);
! 				fprintf(stderr, "%s", PQerrorMessage(conn));
! 				if (PQtransactionStatus(conn) == PQTRANS_INERROR)
  				{
! 					success = false;
! 					break;
  				}
  			}
  			else
  				deleted++;
! 		}
! 		else
! 			deleted++;
! 		if (param->transaction_limit > 0 &&
! 			(deleted % param->transaction_limit) == 0)
! 		{
! 			res2 = PQexec(conn, "commit");
! 			if (PQresultStatus(res2) != PGRES_COMMAND_OK)
  			{
! 				fprintf(stderr, "Failed to commit transaction:\n");
! 				fprintf(stderr, "%s", PQerrorMessage(conn));
  				PQclear(res2);
! 				PQclear(res);
! 				PQfinish(conn);
! 				return -1;
! 			}
! 			PQclear(res2);
! 			res2 = PQexec(conn, "begin");
! 			if (PQresultStatus(res2) != PGRES_COMMAND_OK)
! 			{
! 				fprintf(stderr, "Failed to start transaction:\n");
! 				fprintf(stderr, "%s", PQerrorMessage(conn));
  				PQclear(res2);
- 				PQclear(res);
- 				PQfinish(conn);
- 				return -1;
  			}
- 			PQclear(res2);
  		}
  	}
  	PQclear(res);
  
  	/*
--- 290,389 ----
  	PQclear(res);
  
  	buf[0] = '\0';
! 	strcat(buf,
! 		   "DECLARE myportal CURSOR WITH HOLD FOR SELECT lo FROM vacuum_l");
! 	res = PQexec(conn, buf);
! 	if (PQresultStatus(res) != PGRES_COMMAND_OK)
! 	{
! 		fprintf(stderr, "DECLARE CURSOR failed: %s", PQerrorMessage(conn));
! 		PQclear(res);
  		PQfinish(conn);
  		return -1;
! 	}
! 	PQclear(res);
! 
! 	snprintf(buf, BUFSIZE, "FETCH FORWARD " INT64_FORMAT " IN myportal",
! 			 param->transaction_limit > 0 ? param->transaction_limit : 1000);
  
  	deleted = 0;
! 
! 	while (1)
  	{
! 		res = PQexec(conn, buf);
! 		if (PQresultStatus(res) != PGRES_TUPLES_OK)
! 		{
! 			fprintf(stderr, "Failed to read temp table:\n");
! 			fprintf(stderr, "%s", PQerrorMessage(conn));
! 			PQclear(res);
! 			PQfinish(conn);
! 			return -1;
! 		}
  
! 		matched = PQntuples(res);
! 
! 		if (matched <= 0)
  		{
! 			/* at end of resultset */
! 			break;
  		}
  
! 		for (i = 0; i < matched; i++)
  		{
! 			Oid			lo = atooid(PQgetvalue(res, i, 0));

Re: [HACKERS] Eliminating PD_ALL_VISIBLE, take 2

2013-06-29 Thread Robins
On 10 June 2013 00:17, Jeff Davis pg...@j-davis.com wrote:

 On Thu, 2013-05-30 at 10:07 -0700, Jeff Davis wrote:
   Come to think of it, even without the torn page & checksum issue, do we
   really want to actively clear the all-visible flags after upgrade?

 Removed that from the patch and rebased. I think the best approach is to
 remove the bit opportunistically when we're already dirtying the page
 for something else.

 However, right now, there is enough skepticism of the general approach
 in this patch (and enough related proposals) that I'll leave this to be
 resolved if and when there is more agreement that my approach is a good
 one.


Did some basic checks on this patch. List-wise feedback below.

- Cleanly applies to Git-Head: Yes (Some offsets, but that's probably
because of delay in review)
- Documentation Updated: No. (Required?)
- Tests Updated: No. (Required?)
- All tests pass: Yes
- Does it Work : ???

- Any visible issues: No
- Any compiler warnings: No

- Others:
Number of uncovered lines: Reduced by 167 lines

Robins Tharakan


Re: [HACKERS] [PATCH] Add session_preload_libraries configuration parameter

2013-06-29 Thread Robins
 Number of new lines added not covered by tests: -107 (Weird, but it seems
 to have reduced line coverage). But correct me if I am not seeing the
 elephant in the room.


My apologies.

Number of new untested lines: 108

Robins


Re: [HACKERS] vacuumlo - use a cursor

2013-06-29 Thread Andrew Dunstan


Nobody seemed interested. But I do think it's a good idea still.

cheers

andrew



On 06/29/2013 11:23 AM, Bruce Momjian wrote:

Is there a reason this patch was not applied?

---

On Mon, Nov 12, 2012 at 05:14:57PM -0500, Andrew Dunstan wrote:

vacuumlo is rather simpleminded about dealing with the list of LOs
to be removed - it just fetches them as a straight resultset. For
one of our clients this resulted in an out-of-memory condition. The
attached patch tries to remedy that by using a cursor instead. If
this is wanted I will add it to the next commitfest. The actual
changes are very small - most of the patch is indentation changes
due to the introduction of an extra loop.

cheers

andrew
   
   	deleted = 0;

!
!   while (1)
{
!   res = PQexec(conn, buf);
!   if (PQresultStatus(res) != PGRES_TUPLES_OK)
!   {
!   fprintf(stderr, Failed to read temp table:\n);
!   fprintf(stderr, %s, PQerrorMessage(conn));
!   PQclear(res);
!   PQfinish(conn);
!   return -1;
!   }
   
! 		matched = PQntuples(res);

!
!   if (matched = 0)
{
!   /* at end of resultset */
!   break;
}
   
! 		for (i = 0; i  matched; i++)

{
! 
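The overall shape of the change in the patch above: declare a WITH HOLD cursor once, then loop, fetching a batch at a time until the result set comes back empty, so the full list of large objects is never materialized in client memory. The batch size the patch picks - the transaction limit when one is set, otherwise 1000 - can be sketched as a small helper (the function name is hypothetical; the command string mirrors the quoted patch):

```c
#include <stdio.h>
#include <string.h>

/* Sketch of the batch-size choice in the cursor loop above: fetch
 * transaction_limit rows per FETCH when a limit is set (so each batch of
 * unlinks fits in one transaction), otherwise an arbitrary 1000 rows.
 * Returns the length of the generated command. */
int build_fetch_cmd(char *buf, size_t buflen, long transaction_limit)
{
    long batch = (transaction_limit > 0) ? transaction_limit : 1000;

    return snprintf(buf, buflen, "FETCH FORWARD %ld IN myportal", batch);
}
```

Each iteration then runs this command with PQexec until PQntuples() reports zero rows, at which point the loop breaks, exactly as in the patch.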

Re: [HACKERS] vacuumlo - use a cursor

2013-06-29 Thread Bruce Momjian
On Sat, Jun 29, 2013 at 11:33:54AM -0400, Andrew Dunstan wrote:
 
 Nobody seemed interested. But I do think it's a good idea still.

Well, if no one replied, and you thought it was a good idea, then it was
a good idea.  ;-)

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] vacuumlo - use a cursor

2013-06-29 Thread Joshua D. Drake


On 06/29/2013 08:35 AM, Bruce Momjian wrote:


On Sat, Jun 29, 2013 at 11:33:54AM -0400, Andrew Dunstan wrote:


Nobody seemed interested. But I do think it's a good idea still.


Well, if no one replied, and you thought it was a good idea, then it was
a good idea.  ;-)



I think it is a good idea, just of limited use. I only have one customer 
that still uses large objects. Which is a shame really as they are more 
efficient than bytea.


JD

--
Command Prompt, Inc. - http://www.commandprompt.com/  509-416-6579
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc
For my dreams of your image that blossoms
   a rose in the deeps of my heart. - W.B. Yeats




Re: [HACKERS] vacuumlo - use a cursor

2013-06-29 Thread Andrew Dunstan


On 06/29/2013 11:35 AM, Bruce Momjian wrote:

On Sat, Jun 29, 2013 at 11:33:54AM -0400, Andrew Dunstan wrote:

Nobody seemed interested. But I do think it's a good idea still.

Well, if no one replied, and you thought it was a good idea, then it was
a good idea.  ;-)




I try not to assume that even if I think it's a good idea it will be 
generally wanted unless at least one other person speaks up. But now 
that Joshua has, I will revive it and add it to the next commitfest.


cheers

andrew




Re: [HACKERS] GIN improvements part 1: additional information

2013-06-29 Thread Heikki Linnakangas

On 29.06.2013 11:56, Heikki Linnakangas wrote:

On 25.06.2013 01:03, Alexander Korotkov wrote:

New revision of patch is attached. Now it includes some docs.


Thanks! I'm looking into this in detail now.

First, this patch actually contains two major things:

1. Pack item pointers more tightly on posting data leaf pages.
2. Allow opclass implementation to attach additional information to
each item pointer.

These are two very distinct features, so this patch needs to be split
into two. I extracted the 1st part into a separate patch, attached, and
am going to focus on that now.
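Part 1's tighter packing rests on a standard idea: item pointers on a leaf page are kept sorted, so each one can be stored as a delta from its predecessor, and small deltas compress well with a variable-byte encoding. This is not the patch's exact on-disk format, just a minimal sketch of the technique:

```c
#include <stdint.h>
#include <stddef.h>

/* Write v as a base-128 varint: 7 bits per byte, high bit set on
 * continuation bytes.  Returns the number of bytes written. */
static size_t encode_varint(uint64_t v, unsigned char *out)
{
    size_t n = 0;

    while (v >= 0x80)
    {
        out[n++] = (unsigned char) (v | 0x80);
        v >>= 7;
    }
    out[n++] = (unsigned char) v;
    return n;
}

/* Pack a sorted array of item-pointer-like values as varint-encoded deltas
 * from the previous value.  Returns the packed length in bytes. */
size_t pack_items(const uint64_t *items, size_t nitems, unsigned char *out)
{
    uint64_t prev = 0;
    size_t len = 0;

    for (size_t i = 0; i < nitems; i++)
    {
        len += encode_varint(items[i] - prev, out + len);
        prev = items[i];
    }
    return len;
}
```

Three nearby item pointers that would occupy 18 bytes as raw 6-byte ItemPointers pack into 4 bytes here; decoding simply walks the bytes and re-accumulates the deltas.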

I made one significant change: I removed the 'freespace' field you added
to GinPageOpaque. Instead, on data leaf pages the offset from the
beginning of the page where the packed items end is stored in place of
the 'maxoff' field. This allows for quick calculation of the free space,
but there is no count of item pointers stored on the page anymore, so
some code that looped through all the item pointers relying on 'maxoff'
had to be changed to work with the end offset instead. I'm not 100%
wedded to this, but I'd like to avoid adding the redundant freespace
field on pages that don't need it, because it's confusing and you have
to remember to keep them in sync.
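The trade-off being described can be shown with a toy page layout (the constants here are illustrative, not GIN's real header sizes): once a leaf page records the offset where the packed items end, free space becomes a subtraction rather than a stored field that has to be kept in sync:

```c
/* Toy layout: fixed page size, opaque "special" area at the end.  On a data
 * leaf page, items are packed from the header up to items_end_offset. */
#define PAGE_SIZE    8192
#define PAGE_SPECIAL   16   /* illustrative, not GIN's actual sizes */

unsigned free_space(unsigned items_end_offset)
{
    /* derived on demand - nothing redundant to keep in sync */
    return (PAGE_SIZE - PAGE_SPECIAL) - items_end_offset;
}
```

The cost, as noted, is that code which used to iterate by item count ('maxoff') has to walk the packed data up to the end offset instead.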


Ah, crap, looks like I sent the wrong patch, and now I can't find the 
correct one anymore. The patch I sent didn't include the changes to store 
the end offset instead of freespace. I'll rewrite that part..


- Heikki




Re: [HACKERS] Request for Patch Feedback: Lag Lead Window Functions Can Ignore Nulls

2013-06-29 Thread Jeff Davis

On Mon, 2013-06-24 at 18:01 +0100, Nicholas White wrote:
 Good catch - I've attached a patch to address your point 1. It now
 returns the below (i.e. correctly doesn't fill in the saved value if
 the index is out of the window. However, I'm not sure whether (e.g.)
 lead-2-ignore-nulls means count forwards two rows, and if that's null
 use the last one you've seen (the current implementation) or count
 forwards two non-null rows (as you suggest). The behaviour isn't
 specified in a (free) draft of the 2003 standard
 (http://www.wiscorp.com/sql_2003_standard.zip), and I don't have
 access to the (non-free) final version. Could someone who does have
 access to it clarify this? I've also added your example to the
 regression test cases.

Reading a later version of the draft, it is specified, but is still
slightly unclear.

As I see it, the standard describes the behavior in terms of eliminating
the NULL rows entirely before applying the offset. This matches Troels's
interpretation. Are you aware of any implementations that do something
different?
 
 I didn't include this functionality for the first / last value window
 functions as their implementation is currently a bit different; they
 just call WinGetFuncArgInFrame to pick out a single value. Making
 these functions respect nulls would involve changing the single lookup
 to a walk through the tuples to find the first non-null version, and
 keeping track of this index in a struct in the context. As this change
 is reasonably orthogonal I was going to submit it as a separate patch.

Sounds good.

Regards,
Jeff Davis








Re: [HACKERS] Bugfix and new feature for PGXS

2013-06-29 Thread Andrew Dunstan


On 06/26/2013 10:52 AM, Andrew Dunstan wrote:


On 06/25/2013 11:29 AM, Cédric Villemain wrote:

On Tuesday, June 25, 2013, at 17:18:51, Andrew Dunstan wrote:

On 06/24/2013 07:24 PM, Cédric Villemain wrote:

On Tuesday, June 25, 2013, at 00:18:26, Andrew Dunstan wrote:

On 06/24/2013 04:02 PM, Cédric Villemain wrote:
With extensions, we do have to set VPATH explicitly if we want to use
VPATH (note that contribs/extensions must not care that postgresql has
been built with or without VPATH set). My patches try to fix that.

No, this is exactly what I'm objecting to. I want to be able to do:
  invoke_vpath_magic
  standard_make_commands_as_for_non_vpath

Your patches have been designed to overcome your particular issues, but
they don't meet the general case, IMNSHO. This is why I want to have
more discussion about how vpath builds could work for PGXS modules.

The patch does not restrict anything, it is not supposed to lead to
regression.
The assignment of VPATH and srcdir are wrong in the PGXS case, the patch
corrects them. You're still free to do make VPATH=/mypath ... the VPATH
provided won't be erased by the pgxs.mk logic.

I still think this is premature.  I have spent some more time trying to
make it work the way I think it should, so far without success. I think
we need more discussion about how we'd like VPATH to work for PGXS.
There's no urgency about this - we've lived with the current
situation for quite a while.

Sure...
I did a quick and dirty patch (I just validated it, without a lot of testing),
attached to this email, to fix json_build (at least for make and make install).
As you're constructing targets based on commands to execute in the srcdir
directory, and because srcdir is only set in pgxs.mk, it is possible to define
srcdir early in the json_build/Makefile and use it.



This still doesn't do what I really want, which is to be able to 
invoke make without anything special in the invocation that triggers 
VPATH processing.


Here's what I did that works like I think it should.

First the attached patch on top of yours.

Second, I did:

mkdir vpath.json_build
cd vpath.json_build
    sh /path/to/pgsource/config/prep_buildtree ../json_build .
ln -s ../json_build/json_build.control .

Then I created vpath.mk with these contents:

ext_srcdir = ../json_build
USE_VPATH = $(ext_srcdir)

Finally I vpath-ized the Makefile (see attached).

Given all of that, I was able to do, in the vpath directory:

make
make install
make installcheck
make clean

and it all just worked, with exactly the same make invocations as work 
in an in-tree build.


So what's lacking to make this nice is a tool to automate the second 
and third steps above.


Maybe there are other ways of doing this, but this does what I'd like 
to see.



I haven't seen a response to this. One thing we are missing is 
documentation. Given that, I'm inclined to commit all of this (i.e. 
Cédric's patches 1, 2, 3, and 4 plus my addition).


I'm also inclined to backpatch it, since without that it seems to me 
unlikely packagers will be able to make practical use of it for several 
years, and the risk is very low.


cheers

andrew





Re: [HACKERS] Bugfix and new feature for PGXS

2013-06-29 Thread Cédric Villemain
[it seems my first email didn't make it, sent again]

On Wednesday, June 26, 2013, at 16:52:01, Andrew Dunstan wrote:
 On 06/25/2013 11:29 AM, Cédric Villemain wrote:
  On Tuesday, June 25, 2013, at 17:18:51, Andrew Dunstan wrote:
  On 06/24/2013 07:24 PM, Cédric Villemain wrote:
  On Tuesday, June 25, 2013, at 00:18:26, Andrew Dunstan wrote:
  On 06/24/2013 04:02 PM, Cédric Villemain wrote:
  WIth extension, we do have to set VPATH explicitely if we want to use
  VPATH (note that contribs/extensions must not care that postgresql
  has been built with or without VPATH set). My patches try to fix
  that.
  
  No, this is exactly what I'm objecting to. I want to be able to do:
invoke_vpath_magic
standard_make_commands_as_for_non_vpath
  
  Your patches have been designed to overcome your particular issues,
  but they don't meet the general case, IMNSHO. This is why I want to
  have more discussion about how vpath builds could work for PGXS
  modules.
  
  The patch does not restrict anything, it is not supposed to lead to
  regression.
  The assignment of VPATH and srcdir are wrong in the PGXS case, the
  patch correct them. You're still free to do make VPATH=/mypath ...
  the VPATH provided won't be erased by the pgxs.mk logic.
  
  I still think this is premature.  I have spent some more time trying to
  make it work the way I think it should, so far without success. I think
  we need more discussion about how we'd like VPATH to work for PGXS
  would. There's no urgency about this - we've lived with the current
  situation for quite a while.
  
  Sure...
  I did a quick and dirty patch (I just validate without lot of testing),
  attached to this email to fix json_build (at least for make, make
  install) As you're constructing targets based on commands to execute in
  the srcdir directory, and because srcdir is only set in pgxs.mk, it is
  possible to define srcdir early in the json_build/Makefile and use it.
 
 This still doesn't do what I really want, which is to be able to invoke
 make without anything special in the invocation that triggers VPATH
 processing.

I believe it is the case currently (with my patches applied); we just have to 
invoke the Makefile like we invoke configure for PostgreSQL, except that we don't 
need a configure stage with the contribs.

Obviously it is not exactly the same because 'configure' is a script to 
execute, but 'make' is a command with a configuration file (the Makefile)

For PostgreSQL it is:
$ mkdir build_dir
$ cd build_dir
$ path/to/source/tree/configure [options go here]
$ gmake

For contribs it is:
$ mkdir build_dir
$ cd build_dir
$ gmake -f /path/to/source/tree/Makefile [options go here]

 Here's what I did that works like I think it should.
 
 First the attached patch on top of yours.
 
 Second, I did:
 
  mkdir vpath.json_build
  cd vpath.json_build
  sh /path/to/pgsource/config/prep_buildtree ../json_build .
  ln -s ../json_build/json_build.control .

OK, this creates supposedly required directories in the build process. This 
looks a bit wrong and more work than required: not all the source directories 
are required by the build process. But I understand the point.

Second, config/prep_buildtree is not installed (by make install), so it is not 
suitable to use it with PGXS (PGXS is found with pg_config, when the postgresql 
source tree is not accessible anymore). We may add config/prep_buildtree to 
INSTALL_PROGRAM. 

The 'ln -s' can probably be handled by prep_buildtree.

 Then I created vpath.mk with these contents:
 
  ext_srcdir = ../json_build
  USE_VPATH = $(ext_srcdir)

OK.

 Finally I vpath-ized the Makefile (see attached).

I don't see how this is prettier than the other solution.

 Given all of that, I was able to do, in the vpath directory:
 
  make
  make install
  make installcheck
  make clean
 
 and it all just worked, with exactly the same make invocations as work
 in an in-tree build.

It breaks others:
  mkdir /tmp/auth_delay  cd /tmp/auth_delay
  sh /path/prep_buildtree /path/contrib/auth_delay/ .
  (no .control, it's not an extension)
  make 
  make: *** No rule to make target 'auth_delay.so', needed by 'all'.  Stop.

This works: 
  cd /tmp/auth_delay
  rm -rf *
  make -f /path/contrib/auth_delay/Makefile

PS: I exported USE_PGXS=1 because of the switch case in the makefile.

 So what's lacking to make this nice is a tool to automate the second and
 third steps above.

If the second and third steps are automated, you'll still have to invoke 
them at some point. How?

  mkdir /tmp/json_build  cd /tmp/json_build 
  make
  make install
  make installcheck
  make clean

- this will never work, make won't find anything to do. 
You'll need the equivalent of configure (as we do for PostgreSQL, I agree) to 
trigger prep_buildtree:

  mkdir /tmp/json_build  cd /tmp/json_build 
  $ path/to/source/tree/configure [options go here]
  make
  make install
  make installcheck
  make clean

 

Re: [HACKERS] Bugfix and new feature for PGXS

2013-06-29 Thread Cédric Villemain
On Saturday, June 29, 2013, at 19:54:53, Andrew Dunstan wrote:
 On 06/26/2013 10:52 AM, Andrew Dunstan wrote:
  On 06/25/2013 11:29 AM, Cédric Villemain wrote:
  On Tuesday, June 25, 2013, at 17:18:51, Andrew Dunstan wrote:
  On 06/24/2013 07:24 PM, Cédric Villemain wrote:
  On Tuesday, June 25, 2013, at 00:18:26, Andrew Dunstan wrote:
  On 06/24/2013 04:02 PM, Cédric Villemain wrote:
  WIth extension, we do have to set VPATH explicitely if we want to
  use
  VPATH (note that contribs/extensions must not care that
  postgresql has
  been built with or without VPATH set). My patches try to fix that.
  
  No, this is exactly what I'm objecting to. I want to be able to do:
invoke_vpath_magic
standard_make_commands_as_for_non_vpath
  
  Your patches have been designed to overcome your particular
  issues, but
  they don't meet the general case, IMNSHO. This is why I want to have
  more discussion about how vpath builds could work for PGXS modules.
  
  The patch does not restrict anything, it is not supposed to lead to
  regression.
  The assignment of VPATH and srcdir are wrong in the PGXS case, the
  patch
  correct them. You're still free to do make VPATH=/mypath ... the
  VPATH
  provided won't be erased by the pgxs.mk logic.
  
  I still think this is premature.  I have spent some more time trying to
  make it work the way I think it should, so far without success. I think
  we need more discussion about how we'd like VPATH to work for PGXS
  would. There's no urgency about this - we've lived with the current
  situation for quite a while.
  
  Sure...
  I did a quick and dirty patch (I just validate without lot of testing),
  attached to this email to fix json_build (at least for make, make
  install)
  As you're constructing targets based on commands to execute in the
  srcdir
  directory, and because srcdir is only set in pgxs.mk, it is possible
  to define
  srcdir early in the json_build/Makefile and use it.
  
  This still doesn't do what I really want, which is to be able to
  invoke make without anything special in the invocation that triggers
  VPATH processing.
  
  Here's what I did that works like I think it should.
  
  First the attached patch on top of yours.
  
  Second, I did:
  mkdir vpath.json_build
  cd vpath.json_build
  sh /path/to/pgsource/config/prep_buildtree ../json_build .
  ln -s ../json_build/json_build.control .
  
  Then I created vpath.mk with these contents:
  ext_srcdir = ../json_build
  USE_VPATH = $(ext_srcdir)
  
  Finally I vpath-ized the Makefile (see attached).
  
  Given all of that, I was able to do, in the vpath directory:
  make
  make install
  make installcheck
  make clean
  
  and it all just worked, with exactly the same make invocations as work
  in an in-tree build.
  
  So what's lacking to make this nice is a tool to automate the second
  and third steps above.
  
  Maybe there are other ways of doing this, but this does what I'd like
  to see.
 
 I haven't seen a response to this. One thing we are missing is
 documentation. Given that I'm inclined to commit all of this (i.e.
 cedric's patches 1,2,3, and 4 plus my addition).

I did, so I sent the mail again. I believe your addition needs some changes to 
allow the PGXS+VPATH case, as it currently depends on the source-tree tool. So 
it needs to be in postgresql-server-devel packages, thus installed by 
postgresql's make install.
I'm going to update the doc. It's probably worth having some examples.

 I'm also inclined to backpatch it, since without that it seems to me
 unlikely packagers will be able to make practical use of it for several
 years, and the risk is very low.

Yes, it is valuable to help packagers here, thanks.
-- 
Cédric Villemain +33 (0)6 20 30 22 52
http://2ndQuadrant.fr/
PostgreSQL: Support 24x7 - Développement, Expertise et Formation




Re: [HACKERS] vacuumlo - use a cursor

2013-06-29 Thread Bruce Momjian
On Sat, Jun 29, 2013 at 12:21:11PM -0400, Andrew Dunstan wrote:
 
 On 06/29/2013 11:35 AM, Bruce Momjian wrote:
 On Sat, Jun 29, 2013 at 11:33:54AM -0400, Andrew Dunstan wrote:
 Nobody seemed interested. But I do think it's a good idea still.
 Well, if no one replied, and you thought it was a good idea, then it was
 a good idea.  ;-)
 
 
 
 I try not to assume that even if I think it's a good idea it will be
 generally wanted unless at least one other person speaks up. But now
 that Joshua has I will revive it and add it to the next commitfest.

Great.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] Bugfix and new feature for PGXS

2013-06-29 Thread Andrew Dunstan


On 06/29/2013 02:27 PM, Cédric Villemain wrote:


I haven't seen a response to this. One thing we are missing is
documentation. Given that I'm inclined to commit all of this (i.e.
cedric's patches 1,2,3, and 4 plus my addition).

I did so I sent the mail again. I believe your addition need some changes to
allow  the PGXS+VPATH case as it currently depends on the source-tree tool. So
it needs to be in postgresql-server-devel packages, thus installed by
postgresql make install.
I'm going to update the doc. It's probably worth to have some examples.



I think we can survive for now without an additional tool. What I did 
was a proof of concept, it was not intended as a suggestion that we 
should install prep_buildtree. I am only suggesting that, in addition to 
your changes, if USE_VPATH is set then that path is used instead of a 
path inferred from the name of the Makefile.


A tool to set up a full vpath build tree for extensions or external 
modules seems to be beyond the scope of this.





I'm also inclined to backpatch it, since without that it seems to me
unlikely packagers will be able to make practical use of it for several
years, and the risk is very low.

Yes, it is valuable to help packagers here, thanks.


cheers

andrew




Re: [HACKERS] checking variadic any argument in parser - should be array

2013-06-29 Thread Pavel Stehule
Hello

updated patch - more precise Assert, more comments

Regards

Pavel

2013/6/29 Pavel Stehule pavel.steh...@gmail.com:
 2013/6/28 Jeevan Chalke jeevan.cha...@enterprisedb.com:
 Hi Pavel,


 On Fri, Jun 28, 2013 at 2:53 AM, Pavel Stehule pavel.steh...@gmail.com
 wrote:

 Hello

 2013/6/27 Jeevan Chalke jeevan.cha...@enterprisedb.com:
  Hi Pavel,
 
  I had a look over your new patch and it looks good to me.
 
  My review comments on patch:
 
  1. It cleanly applies with patch -p1 command.
  2. make/make install/make check were smooth.
  3. My own testing didn't find any issue.
  4. I had a code walk-through and I am little bit worried or confused on
  following changes:
 
if (PG_ARGISNULL(argidx))
return NULL;
 
  ! /*
  !  * Non-null argument had better be an array.  The parser
  doesn't
  !  * enforce this for VARIADIC ANY functions (maybe it should?),
  so
  that
  !  * check uses ereport not just elog.
  !  */
  ! arr_typid = get_fn_expr_argtype(fcinfo-flinfo, argidx);
  ! if (!OidIsValid(arr_typid))
  ! elog(ERROR, could not determine data type of concat()
  input);
  !
  ! if (!OidIsValid(get_element_type(arr_typid)))
  ! ereport(ERROR,
  ! (errcode(ERRCODE_DATATYPE_MISMATCH),
  !  errmsg(VARIADIC argument must be an array)));
 
  - /* OK, safe to fetch the array value */
arr = PG_GETARG_ARRAYTYPE_P(argidx);
 
/*
  --- 3820,3828 
if (PG_ARGISNULL(argidx))
return NULL;
 
  ! /* Non-null argument had better be an array */
  !
  Assert(OidIsValid(get_element_type(get_fn_expr_argtype(fcinfo-flinfo,
  argidx;
 
arr = PG_GETARG_ARRAYTYPE_P(argidx);
 
  We have moved checking of array type in parser (ParseFuncOrColumn())
  which
  basically verifies that argument type is indeed an array. Which exactly
  same
  as that of second if statement in earlier code, which you replaced by an
  Assert.
 
  However, what about first if statement ? Is it NO more required now?
  What if
  get_fn_expr_argtype() returns InvalidOid, don't you think we need to
  throw
  an error saying could not determine data type of concat() input?

 yes, if I understand the question correctly, the main difference is the stage
 of the checking. When I do the check at parser stage, then I can expect the
 actual_arg_types array to hold valid values.


 That's fine.



 
  Well, I tried hard to trigger that code, but didn't able to get any test
  which fails with that error in earlier version and with your version.
  And
  thus I believe it is a dead code, which you removed ? Is it so ?

 It is removed in this version :), and it is not a bug, so there is no
 reason to patch previous versions. Probably there should be an
 Assert instead of an error. This code is relatively mature - so I don't
 expect an issue from the SQL level now. On the other hand, these functions can
 be called via the DirectFunctionCall API from custom C server-side
 routines, and there a developer can easily introduce a bug by not filling
 the necessary structs correctly. So, there can be Asserts still.

 
  Moreover, if in any case get_fn_expr_argtype() returns an InvalidOid, we
  will hit an Assert rather than an error, is this what you expect ?
 

 at this moment, yes.

 A small change can help with finding the source of possible issues.

 so instead on line
 Assert(OidIsValid(get_element_type(get_fn_expr_argtype(fcinfo->flinfo, argidx))));

 use two lines

 Assert(OidIsValid(get_fn_expr_argtype(fcinfo->flinfo, argidx)));
 Assert(OidIsValid(get_element_type(get_fn_expr_argtype(fcinfo->flinfo, argidx))));

 what you think?


 Well, I am still not fully convinced about the first Assert; an error
 will be good enough, like what we have now.

 Anyway, splitting it over two lines eases the debugging effort. But please
 put the output of get_fn_expr_argtype(fcinfo->flinfo, argidx) into a separate
 variable so that we will avoid calling the same function twice.

 It is called inside an Assert, so it will be removed in a production
 build. Using a variable for this purpose is useless and less
 maintainable.
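The "removed in production" point is a property of how Assert is defined. PostgreSQL's real macro lives in c.h and is governed by --enable-cassert; a simplified stand-in shows that without assertions the argument expression is never evaluated at all, which is why a variable computed only to feed the Assert would be dead weight:

```c
#include <stdlib.h>

/* Simplified stand-in for PostgreSQL's Assert: compiled out entirely unless
 * the build defines USE_ASSERT_CHECKING (--enable-cassert). */
#ifdef USE_ASSERT_CHECKING
#define Assert(cond) do { if (!(cond)) abort(); } while (0)
#else
#define Assert(cond) ((void) 0)   /* argument tokens are discarded */
#endif

int check_calls = 0;

/* A check with a visible side effect, so we can tell whether it ran. */
int expensive_check(void)
{
    check_calls++;
    return 1;
}

void run_checked_path(void)
{
    Assert(expensive_check());   /* vanishes in a non-cassert build */
}
```

Built without USE_ASSERT_CHECKING, run_checked_path() never calls expensive_check() at all - the argument inside Assert(...) disappears during preprocessing.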


 I think a short comment for these Asserts will be good. At least for the
 second one, as the check is already done by the parser and non-arrays are
 not expected at this point.


 yes, I'll add some comment

 Regards

 Pavel



  5. This patch has a user-visible change, i.e. now we are throwing an error
  when the user only says VARIADIC NULL, like:
 
  select concat(variadic NULL) is NULL;
 
  Previously it was working but now we are throwing an error. Well, we are now
  stricter than earlier with using VARIADIC + ANY, so I have no issue as
  such. But I guess we need to document this user-visible change. I don't
  know exactly where, though. I searched for VARIADIC and all related
  documentation says it needs an array, so nothing harmful as such, so you can
  ignore this 

Re: [HACKERS] Spin Lock sleep resolution

2013-06-29 Thread Jeff Janes
On Fri, Jun 28, 2013 at 2:46 AM, Heikki Linnakangas hlinnakan...@vmware.com
 wrote:

 On 27.06.2013 17:30, Robert Haas wrote:

 On Wed, Jun 26, 2013 at 5:49 PM, Jeff Janesjeff.ja...@gmail.com  wrote:

 If we think the patch has a risk of introducing subtle errors, then it

 probably can't be justified based on the small performance gains you saw.

 But if we think it has little risk, then I think it is justified simply
 based on cleaner code, and less confusion for people who are tracing a
 problematic process.  If we want it to do a random escalation, it should
 probably look like a random escalation to the interested observer.


 I think it has little risk.  If it turns out to be worse for
 performance, we can always revert it, but I expect it'll be better or
 a wash, and the results so far seem to bear that out.  Another
 interesting question is whether we should fool with the actual values
 for minimum and maximum delays, but that's a separate and much harder
 question, so I think we should just do this for now and call it good.


 My thoughts exactly. I wanted to see if David Gould would like to do some
 testing with it, but there's really no need to hold off committing for that,
 I'm not expecting any surprises there. Committed.


Thanks.



 Jeff, in the patch you changed the datatype of cur_delay variable from int
 to long. AFAICS that makes no difference, so I kept it as int. Let me know
 if there was a reason for that change.



I think my thought process at the time was that since the old code
multiplied by 1000L, not 1000 in the expression argument to pg_usleep,
I wanted to keep the spirit of the L in place.  It seemed safer than trying
to prove to myself that it was not necessary.

If int suffices, should the L come out of the defines for MIN_DELAY_USEC
and MAX_DELAY_USEC ?
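For the record, the promotion involved: pg_usleep takes a long count of microseconds, and cur_delay is an int count of milliseconds. With the constant written as 1000L, the usual arithmetic conversions promote the int operand and the multiply happens in long, so on LP64 platforms it cannot overflow int first; with a plain 1000 it would be an int multiply. A sketch (assuming a 64-bit long):

```c
/* cur_delay stays an int; the 1000L constant is what moves the
 * multiplication into long before any overflow can happen (on platforms
 * where long is 64 bits). */
long msec_to_usec(int cur_delay_ms)
{
    return cur_delay_ms * 1000L;
}
```

So keeping the L on the pg_usleep argument matters; on the MIN_DELAY_USEC/MAX_DELAY_USEC defines it is likely harmless either way, since those are small constants.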

Cheers,

Jeff


Re: [HACKERS] New regression test time

2013-06-29 Thread Josh Berkus

 I see two problems with this report:
 1. it creates a new installation for each run,

Yes, I'm running make check

 2. it only uses the serial schedule.

Um, no:

parallel group (19 tests):  limit prepare copy2 plancache xml returning
conversion rowtypes largeobject temp truncate polymorphism with
without_oid sequence domain rangefuncs alter_table plpgsql

Out of curiosity, I tried a serial run (MAX_CONNECTIONS=1), which took
about 39s (with patches).

 I care more about the parallel schedule than the serial one, because
 since it obviously runs in less time, then I can run it often and not
 worry about how long it takes.  On the other hand, the cost of the extra
 initdb obviously means that the percentage is a bit lower than if you
 were to compare test run time without considering the initdb step.

Possibly, but I know I run make check not make installcheck when I'm
testing new code.  And the buildfarm, afaik, runs make check.  And,
for that matter, who the heck cares?

The important thing is that make check still runs in < 30 seconds on a
standard laptop (an ultraportable, even), so it's not holding up a
change-test-change-test cycle.  If you were looking to prove something
else, then go for it.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] Bugfix and new feature for PGXS

2013-06-29 Thread Josh Berkus
On 06/29/2013 11:42 AM, Andrew Dunstan wrote:
 
 I think we can survive for now without an additional tool. What I did
 was a proof of concept, it was not intended as a suggestion that we
 should install prep_buildtree. I am only suggesting that, in addition to
 your changes, if USE_VPATH is set then that path is used instead of a
 path inferred from the name of the Makefile.

So is this patch returned with feedback?


-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] request a new feature in fuzzystrmatch

2013-06-29 Thread Joe Conway
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 06/25/2013 01:37 PM, Liming Hu wrote:
 please remove dameraulevenshteinnocompatible related stuff, I
 just followed the template you created. 
 dameraulevenshteinnocompatible was used in my first testing.

 diff -cNr
 /opt/src/pgsql-git/master/contrib/fuzzystrmatch/fuzzystrmatch--1.0.sql

 
diff -cNr
/opt/src/pgsql-git/master/contrib/fuzzystrmatch/fuzzystrmatch--unpackaged--1.0.sql

Actually, given that this change will create version 1.1 of the
extension, I believe the 1.0 versions of the sql scripts should
probably be removed entirely. Can someone with more knowledge of the
extension facility comment on that?

Joe


- -- 
Joe Conway
credativ LLC: http://www.credativ.us
Linux, PostgreSQL, and general Open Source
Training, Service, Consulting, & 24x7 Support
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.12 (GNU/Linux)
Comment: Using GnuPG with undefined - http://www.enigmail.net/

iQIcBAEBAgAGBQJRz0V5AAoJEDfy90M199hl3sgP/Ai51I56A7luUhRFt2s3eo0p
HWiuxsWU2tXwoGmvYFtSze80mpfBfCGpBrD+uhh2PLZlFt6uS21wpYaezr1Y9wAM
astvWAOXVOg/AUlb5lyoAze5CL3kpRVX7EvUPGrzb6RMK3VfIckRdIoqwh8/Ko/C
j1ECEcWMHsRDUUDs+kU/mXAyRKpnC+VdSwOAU/k3J7Q0KpSvCZ4R+codinR29Ub4
L4W30436IKOr+BktorFc24FYRpVVG2llMPtWUhgKb1MfW5d+NRqID0gTIF4a9TJr
yaEBmmT1eupdje4pwY3R+xDGroCj3AAx7PAfW+CRMrsYvEFVy945l2lbSjvqDlDy
lpFsWWepYNdQhZBKP0Au986aqTKmoKgFoyWV/Yi7BfLx7Gh70Wk8OPcDKuUIdHou
UNfMouAUxGW9ZHZRmDSaYncTnLEcEJS11akddoIKDXCbdTVAQXFGLNqv1pp0KN+J
xU1eOrh+oQVwTayZH+S2Xa8+AT+g9wLH5zynJozDSRitdLIGdN1zzUVVXdO1Nx2i
2anxXdoW4DB+x+G8ea2Wmxfsw3MMS8Ck10VqhN6JMojDP0rczA6+EFrZcBlsWlWx
LX4iPTroUGZ318IC+A2CXci0uXs7+2l2zrescpEdqrqcTwTJHpGFbRbYniGBFgXW
6LeSPHhRDQUykXMvEU9I
=xeVQ
-END PGP SIGNATURE-




Re: [HACKERS] New regression test time

2013-06-29 Thread Andrew Dunstan


On 06/29/2013 03:57 PM, Josh Berkus wrote:

I see two problems with this report:
1. it creates a new installation for each run,

Yes, I'm running make check


2. it only uses the serial schedule.

Um, no:

parallel group (19 tests):  limit prepare copy2 plancache xml returning
conversion rowtypes largeobject temp truncate polymorphism with
without_oid sequence domain rangefuncs alter_table plpgsql

Out of curiosity, I tried a serial run (MAX_CONNECTIONS=1), which took
about 39s (with patches).


I care more about the parallel schedule than the serial one, because
since it obviously runs in less time, then I can run it often and not
worry about how long it takes.  On the other hand, the cost of the extra
initdb obviously means that the percentage is a bit lower than if you
were to compare test run time without considering the initdb step.

Possibly, but I know I run make check not make installcheck when I'm
testing new code.  And the buildfarm, afaik, runs make check.  And,
for that matter, who the heck cares?


It runs both :-) We run make check early in the process to make sure 
we can at least get that far. We run make installcheck later, among 
other things to check that the tests work in different locales.



I think we need to have a better understanding of just what our standard 
regression tests do.


AIUI: They do test feature use and errors that have cropped up in the 
past that we need to beware of. They don't test every bug we've ever 
had, nor do they exercise every piece of code.


Maybe there is a good case for these last two in a different set of tests.

cheers

andrew





Re: [HACKERS] request a new feature in fuzzystrmatch

2013-06-29 Thread Liming Hu

On 6/29/2013 1:37 PM, Joe Conway wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 06/25/2013 01:37 PM, Liming Hu wrote:

please remove dameraulevenshteinnocompatible related stuff, I
just followed the template you created.
dameraulevenshteinnocompatible was used in my first testing.
diff -cNr
/opt/src/pgsql-git/master/contrib/fuzzystrmatch/fuzzystrmatch--1.0.sql



diff -cNr
/opt/src/pgsql-git/master/contrib/fuzzystrmatch/fuzzystrmatch--unpackaged--1.0.sql

Actually, given that this change will create version 1.1 of the
extension, I believe the 1.0 versions of the sql scripts should
probably be removed entirely. Can someone with more knowledge of the
extension facility comment on that?

Joe

Thanks. It is my first contribution to the PostgreSQL community.

Liming








Re: [HACKERS] New regression test time

2013-06-29 Thread Josh Berkus
On 06/29/2013 02:14 PM, Andrew Dunstan wrote:
 AIUI: They do test feature use and errors that have cropped up in the
 past that we need to beware of. They don't test every bug we've ever
 had, nor do they exercise every piece of code.

If we don't have a test for it, then we can break it in the future and
not know we've broken it until .0 is released.  Is that really a
direction we're happy going in?

 
 Maybe there is a good case for these last two in a different set of tests.

If we had a different set of tests, that would be a valid argument.  But
we don't, so it's not.  And nobody has offered to write a feature to
split our tests either.

I have to say, I'm really surprised at the level of resistance people on
this list are showing to the idea of increasing test coverage. I thought
that Postgres was all about reliability?   For a project as mature as we
are, our test coverage is abysmal, and I think I'm starting to see why.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] New regression test time

2013-06-29 Thread Dann Corbit
-Original Message-
From: pgsql-hackers-ow...@postgresql.org 
[mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Josh Berkus
Sent: Saturday, June 29, 2013 3:00 PM
To: Andrew Dunstan
Cc: Alvaro Herrera; pgsql-hackers@postgresql.org; Robins Tharakan
Subject: Re: [HACKERS] New regression test time

On 06/29/2013 02:14 PM, Andrew Dunstan wrote:
 AIUI: They do test feature use and errors that have cropped up in the 
 past that we need to beware of. They don't test every bug we've ever 
 had, nor do they exercise every piece of code.

If we don't have a test for it, then we can break it in the future and not know 
we've broken it until .0 is released.  Is that really a direction we're happy 
going in?

 
 Maybe there is a good case for these last two in a different set of tests.

If we had a different set of tests, that would be a valid argument.  But we 
don't, so it's not.  And nobody has offered to write a feature to split our 
tests either.

I have to say, I'm really surprised at the level of resistance people on this 
list are showing to the idea of increasing test coverage. I thought
that Postgres was all about reliability?   For a project as mature as we
are, our test coverage is abysmal, and I think I'm starting to see why.

An ounce of prevention is worth a pound of cure.
The cost of a bug rises exponentially, starting at requirements gathering and 
ending at the customer.
Where I work, we have two computer rooms full of machines that run tests 
around the clock, 24x365.
Even so, a full regression takes well over a week because we perform hundreds 
of thousands of tests.
All choices of this kind are trade-offs.  But in such situations, my motto is:
Do whatever will make the customer prosper the most.
IMO-YMMV




Re: [HACKERS] New regression test time

2013-06-29 Thread Andrew Dunstan


On 06/29/2013 05:59 PM, Josh Berkus wrote:


Maybe there is a good case for these last two in a different set of tests.

If we had a different set of tests, that would be a valid argument.  But
we don't, so it's not.  And nobody has offered to write a feature to
split our tests either.

I have to say, I'm really surprised at the level of resistance people on
this list are showing to the idea of increasing test coverage. I thought
that Postgres was all about reliability?   For a project as mature as we
are, our test coverage is abysmal, and I think I'm starting to see why.




Dividing the tests into different sections is as simple as creating one 
schedule file per section.


I'm not at all resistant to it. In fact, if someone wants to set up 
separate sections and add new tests to the different sections I'll be 
more than happy to provide buildfarm support for it. Obvious candidates 
could include:


 * code coverage
 * bugs
 * tests too big to run in everyday developer use


cheers

andrew





Re: [HACKERS] New regression test time

2013-06-29 Thread Josh Berkus

 
 Dividing the tests into different sections is as simple as creating one
 schedule file per section.

Oh?  Huh.  I'd thought it would be much more complicated.  Well, by all
means, let's do it then.

I'm not personally convinced that the existing regression tests all
belong in the default section -- I think a lot of what we've chosen to
test or not test to date has been fairly arbitrary -- but we can tinker
with that later.

 I'm not at all resistant to it. In fact, if someone wants to set up
 separate sections and add new tests to the different sections I'll be
 more than happy to provide buildfarm support for it. Obvious candidates
 could include:
 
  * code coverage
  * bugs
  * tests too big to run in everyday developer use

So we'd want codecoverage and codecoverage-parallel then, presumably,
for Robins' tests?  Is there really a reason to supply a non-parallel
version?  Would we want to include the core tests in those?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] New regression test time

2013-06-29 Thread Claudio Freire
On Sat, Jun 29, 2013 at 7:58 PM, Josh Berkus j...@agliodbs.com wrote:


 Dividing the tests into different sections is as simple as creating one
 schedule file per section.

 Oh?  Huh.  I'd thought it would be much more complicated.  Well, by all
 means, let's do it then.

I think I should point out, since it seems to have gone unnoticed,
that there's a thread with a patch about exactly this (splitting
tests): the "big test separation POC".




Re: [HACKERS] [PATCH] pgbench --throttle (submission 7 - with lag measurement)

2013-06-29 Thread Greg Smith

On 6/22/13 12:54 PM, Fabien COELHO wrote:

After some poking around, and pursuing various red herrings, I resorted
to measure the delay for calling PQfinish(), which is really the only
special thing going around at the end of pgbench run...


This wasn't what I was seeing, but it's related.  I've proved to myself 
the throttle change isn't responsible for the weird stuff I'm seeing now. 
 I'd like to rearrange when PQfinish happens based on what I'm 
seeing, but that's not related to this review.


I duplicated the PQfinish problem you found too.  On my Linux system, 
calls to PQfinish are normally about 36 us long.  They will sometimes 
get lost for 15ms before they return.  That's a different problem 
though, because the ones I'm seeing on my Mac are sometimes 150ms. 
PQfinish never takes quite that long.


PQfinish doesn't pause for a long time on this platform.  But it does 
*something* that causes socket select() polling to stutter.  I have 
instrumented everything interesting in this part of the pgbench code, 
and here is the problem event.


1372531862.062236 select with no timeout sleeping=0
1372531862.109111 select returned 6 sockets latency 46875 us

Here select() is called with 0 sleeping processes, 11 that are done, and 
14 that are running.  The running ones have all sent SELECT statements 
to the server, and they are waiting for a response.  Some of them 
received some data from the server, but they haven't gotten the entire 
response back.  (The PQfinish calls could be involved in how that happened)


With that setup, select runs for 47 *ms* before it gets the next byte to 
a client.  During that time 6 clients get responses back to it, but it 
stays stuck in there for a long time anyway.  Why?  I don't know exactly 
why, but I am sure that pgbench isn't doing anything weird.  It's either 
libpq acting funny, or the OS.  When pgbench is waiting on a set of 
sockets, and none of them are returning anything, that's interesting. 
But there's nothing pgbench can do about it.


The cause/effect here is that the randomness in the throttling code 
spreads out when all the connections end a bit: there are more times 
during which you might have 20 connections finished while 5 are still running.


I need to catch up with revisions done to this feature since I started 
instrumenting my copy more heavily.  I hope I can get this ready for 
commit by Monday.  I've certainly beaten on the feature for long enough now.


--
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com




Re: [HACKERS] New regression test time

2013-06-29 Thread Stephen Frost
Josh,

* Josh Berkus (j...@agliodbs.com) wrote:
 If we don't have a test for it, then we can break it in the future and
 not know we've broken it until .0 is released.  Is that really a
 direction we're happy going in?

To be fair, AIUI anyway, certain companies have much larger regression
suites that they run new versions of PG against.  I'm sure they don't
have the platform coverage that the buildfarm does, but I expect no
small number of build farm animals would fall over under the strain of
that regression suite (or at least, they'd have trouble keeping up with
the commit rate).

I'm definitely pro-more-tests when they add more code/feature coverage
and are nominal in terms of additional time taken during 'make check',
but I seriously doubt we're ever going to have anything close to
complete code coverage in such a suite of tests.

 I have to say, I'm really surprised at the level of resistance people on
 this list are showing to the idea of increasing test coverage. I thought
 that Postgres was all about reliability?   For a project as mature as we
 are, our test coverage is abysmal, and I think I'm starting to see why.

I do think having these tests split into different groups would be
valuable.  It might even be good to split them into groups based on what
code they exercise and then devs might be able to decide I'm working in
this area right now, I want the tests that are applicable to that code.
Might be a bit too much effort though too, dunno.  Would be really neat
if it was automated in some way. ;)

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] request a new feature in fuzzystrmatch

2013-06-29 Thread Michael Paquier
On Sun, Jun 30, 2013 at 5:37 AM, Joe Conway m...@joeconway.com wrote:
 Actually, given that this change will create version 1.1 of the
 extension, I believe the 1.0 versions of the sql scripts should
 probably be removed entirely. Can someone with more knowledge of the
 extension facility comment on that?
When upgrading this extension, here is what you should do:
- Remove fuzzystrmatch--1.0.sql
- Add fuzzystrmatch--1.1.sql with the new definitions
- Add fuzzystrmatch--1.0--1.1.sql to be able to perform an upgrade
between 1.0 and 1.1
- Leave fuzzystrmatch--unpackaged--1.0.sql as-is

Then by having a quick glance at the patch you sent...
- fuzzystrmatch--unpackaged--1.0.sql is renamed internally to 1.1.sql
- fuzzystrmatch--1.0.sql remains, and is renamed to 1.1 internally
- fuzzystrmatch--1.0--1.1.sql is not provided

Regards,
--
Michael




[HACKERS] checksum_impl.h fails cpluspluscheck

2013-06-29 Thread Peter Eisentraut
./src/include/storage/checksum_impl.h: In function ‘uint32 
pg_checksum_block(char*, uint32)’:
./src/include/storage/checksum_impl.h:154: warning: comparison between signed 
and unsigned integer expressions

We could exclude that file from the check, but it's also easy to fix by
making the variables unsigned:

diff --git a/src/include/storage/checksum_impl.h 
b/src/include/storage/checksum_impl.h
index ce1b124..7987b04 100644
--- a/src/include/storage/checksum_impl.h
+++ b/src/include/storage/checksum_impl.h
@@ -141,7 +141,7 @@ pg_checksum_block(char *data, uint32 size)
uint32  sums[N_SUMS];
uint32  (*dataArr)[N_SUMS] = (uint32 (*)[N_SUMS]) data;
uint32  result = 0;
-   int i,
+   unsigned int i,
j;

/* ensure that the size is compatible with the algorithm */


Preferences?




Re: [HACKERS] checksum_impl.h fails cpluspluscheck

2013-06-29 Thread Tom Lane
Peter Eisentraut pete...@gmx.net writes:
 ./src/include/storage/checksum_impl.h: In function ‘uint32 
 pg_checksum_block(char*, uint32)’:
 ./src/include/storage/checksum_impl.h:154: warning: comparison between signed 
 and unsigned integer expressions

 We could exclude that file from the check, but it's also easy to fix by
 making the variables unsigned:
 ...
 Preferences?

Possibly use uint32 for consistency with the other vars in the function?
No real objection to unsigned int, though, since it's the same thing in
practice.

regards, tom lane




Re: [HACKERS] checksum_impl.h fails cpluspluscheck

2013-06-29 Thread Peter Geoghegan
On Sat, Jun 29, 2013 at 8:55 PM, Peter Eisentraut pete...@gmx.net wrote:
 ./src/include/storage/checksum_impl.h: In function ‘uint32 
 pg_checksum_block(char*, uint32)’:
 ./src/include/storage/checksum_impl.h:154: warning: comparison between signed 
 and unsigned integer expressions

On the subject of checksum_impl.h, don't you think it's a bit
unfortunate that clients have to do this?:

+ // checksum_impl.h uses Assert, which doesn't work outside the server
+ #undef Assert
+ #define Assert(X)
+
+ #include storage/checksum_impl.h
+

Maybe external utilities ought to include another header that does all
of this for them?

-- 
Peter Geoghegan




Re: [HACKERS] pg_ctl and -h/help

2013-06-29 Thread Michael Paquier
On Sat, Jun 29, 2013 at 10:45 PM, Bruce Momjian br...@momjian.us wrote:
 In studying pg_upgrade's handling of --help, I noticed that pg_ctl
 supports -h for help, but it is the only tool to do so, and -h is not
 documented.  I propose we remove -h for help in pg_ctl, and have it
 support only -? and --help.
I suppose that it doesn't hurt to have it, but yes, for the sake of
consistency with the other binaries it would make sense to remove it.
Btw, it is not only missing from the docs; it is also not listed in the
--help output in the code.
--
Michael

