Re: [HACKERS] [PATCH] PostgreSQL fails to build with 32bit MinGW-w64

2011-11-27 Thread NISHIYAMA Tomoaki
Hi,

 For the win32.h, I really don't understand why _WINSOCKAPI_ was defined
 before winsock2.h.
 Some googling suggests that defining _WINSOCKAPI_ before windows.h prevents
 inclusion of winsock.h, but that has no relation to the inclusion of
 winsock2.h; and if winsock2.h is included first, it should be ok.
 
 If this guess is right, perhaps it could be better to remove the three 
 lines.
 #if !defined(WIN64) || defined(WIN32_ONLY_COMPILER)
 #define _WINSOCKAPI_
 #endif
 
 No, this broke some compilers, IIRC (probably the native mingw compiler, 
 which is in use by several buildfarm members). Getting this right was very 
 tricky and time-consuming when I was adding support for the 64 bit mingw-w64 
 compiler, and there were a couple of rounds of breakage.
 
 I'm therefore much more inclined to go the way of your earlier patch, which 
 seems much less risky.


I agree that the original patch could be less risky.
However, it doesn't match what Microsoft says:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms737629%28v=vs.85%29.aspx

So, I think the standard way is not to define _WINSOCKAPI_, and if any
compiler requires it, which I think unlikely, then it should be defined
as an exceptional case.
At least, native mingw GCC 4.5.2 (20110802 catalogue) and
4.6.1 (latest catalogue) compile happily without the three lines.

 I only changed this for consistency. For me, it works without that define in 
 all test
 environments, too.
 
 +/* __MINGW64_VERSION_MAJOR is related to both 32/64 bit gcc compiles by
 + * mingw-w64, however it gots defined only after
 Why not use __MINGW32__, which is defined without including any headers?
 
 Because it's defined by compilers other than mingw-w64.

I see. That's because of mingw (not -w64).
Should it be OK, then, if mingw is fine with that condition?

 We have a bunch of compilers to support here. There are LOTS of compiler 
 scenarios on Windows (several versions of MSVC, 32bit and 64bit mingw-w64, 
 native mingw gcc, and a couple of Cygwin based compilers), and keeping track 
 of them all and making sure they don't break can be a pain.


Yes, that really is a pain.

The code block
#if (_MSC_VER >= 1400) || defined(WIN64)
#define errcode __msvc_errcode
#include <crtdefs.h>
#undef errcode
#endif

looks as if there is no real need for crtdefs.h itself; rather, they wanted
to prevent the definition of errcode, and therefore put the include in the
first place.

So, I was afraid that moving the include downwards might break things by
including a header that internally includes crtdefs.h.
If this is not problematic for MSVC (perhaps you know better on that),
I have no objection to moving the order.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] odbc_fdw

2011-11-27 Thread Florian Schwendener

Hi there!

I built the current PostgreSQL 9.1.1 sources under Ubuntu 11.04 (in a 
VMware under Win7).
I followed the steps in this guide: 
www.thegeekstuff.com/2009/04/linux-postgresql-install-and-configure-from-source


It seems to work (I can run the server and connect to it with PgAdmin).

Now I'd like to integrate the ODBC_FDW extension into my installation. 
Sadly, the makefile throws errors (no target named ... specified).

Is there any way to do this in a simple way?

I'm a linux newbie, by the way...

Thank you for your help!
Flo




Re: GiST for range types (was Re: [HACKERS] Range Types - typo + NULL string constructor)

2011-11-27 Thread Alexander Korotkov
On Sat, Nov 26, 2011 at 11:11 AM, Jeff Davis pg...@j-davis.com wrote:

 There's been some significant change in rangetypes_gist.c, can you
 please rebase this patch?

OK, rebased with head.

--
With best regards,
Alexander Korotkov.


rangetypegist-0.3.patch.gz
Description: GNU Zip compressed data



Re: [HACKERS] GiST range-contained-by searches versus empty ranges

2011-11-27 Thread Alexander Korotkov

 The first solution that comes to mind is to make the penalty and
 picksplit functions forcibly segregate empty ranges from others, that is
 a split will never put empty ranges together with non-empty ones.  Then,
 we can assume that a non-empty internal node doesn't represent any empty
 leaf entries, and avoid descending to it when it doesn't overlap the
 target range.  Then the equality-search case could be improved too.

 Thoughts, better ideas?


Have you seen my patch about GiST for range types?
https://commitfest.postgresql.org/action/patch_view?id=682
There the mentioned problem is solved by the introduction of a
RANGE_CONTAIN_EMPTY bit in the range flags. This bit is only used in the
GiST index and means that there are underlying empty ranges.

--
With best regards,
Alexander Korotkov.


Re: [HACKERS] vpath builds and verbose error messages

2011-11-27 Thread Peter Eisentraut
On lör, 2011-11-26 at 10:45 -0500, Tom Lane wrote:
 Peter Eisentraut pete...@gmx.net writes:
  On fre, 2011-11-18 at 09:44 -0500, Tom Lane wrote:
  It wouldn't be that hard for elog.c to do strrchr(fname, '/') or
  something like that,
 
  Here is a patch for that.  I would also like to backpatch this.
 
 Hmmm ... is it possible that strrchr could change errno?

It's not documented to do that, and an implementation would have to go
far out of its way to do it anyway.






Re: [HACKERS] Patch: Perl xsubpp

2011-11-27 Thread Mr. Aaron W. Swenson
On Sat, Nov 26, 2011 at 03:28:57PM -0500, Andrew Dunstan wrote:
 On 10/12/2011 08:55 PM, Alex Hunsaker wrote:
  On Wed, Oct 12, 2011 at 17:53, David E.
  Wheelerda...@kineticode.com wrote:
  On Sep 15, 2011, at 3:04 PM, Alex Hunsaker wrote:
 
  Close, seems I was wrong about the typemap ExtUtils::ParseXS does not
  install a new one so we still need to point to the one in privlib.
  Also xsubpp is not executable so the test should be -r or something.
 
  Also don't think we should change the configure switch tests to
  test XSUBPPDIR.
 
  Find those plus some minor typos fixed in the attached.
  xsubpp_v3.patch
  --
  Doesn't look like this has been applied yet. I think it ought to
  be backported, too, frankly. Did I miss it?
  Nah, probably should add it to the next commit fest so it does not get
  forgotten.
 
 
 committed.
 
 cheers
 
 andrew

Has this been backpatched as well?

-- 
Mr. Aaron W. Swenson
Gentoo Linux Developer
Email: titanof...@gentoo.org
GnuPG FP : 2C00 7719 4F85 FB07 A49C  0E31 5713 AA03 D1BB FDA0
GnuPG ID : D1BBFDA0


pgpOvprJOI60g.pgp
Description: PGP signature


Re: [HACKERS] Patch: Perl xsubpp

2011-11-27 Thread Andrew Dunstan



On 11/27/2011 08:25 AM, Mr. Aaron W. Swenson wrote:

On Sat, Nov 26, 2011 at 03:28:57PM -0500, Andrew Dunstan wrote:

On 10/12/2011 08:55 PM, Alex Hunsaker wrote:

On Wed, Oct 12, 2011 at 17:53, David E.
Wheelerda...@kineticode.com  wrote:

On Sep 15, 2011, at 3:04 PM, Alex Hunsaker wrote:


Close, seems I was wrong about the typemap ExtUtils::ParseXS does not
install a new one so we still need to point to the one in privlib.
Also xsubpp is not executable so the test should be -r or something.

Also don't think we should change the configure switch tests to
test XSUBPPDIR.

Find those plus some minor typos fixed in the attached.
xsubpp_v3.patch
--

Doesn't look like this has been applied yet. I think it ought to
be backported, too, frankly. Did I miss it?

Nah, probably should add it to the next commit fest so it does not get
forgotten.


committed.



Has this been backpatched as well?



It has been to 9.1.

cheers

andrew



Re: [HACKERS] [PATCH] PostgreSQL fails to build with 32bit MinGW-w64

2011-11-27 Thread NISHIYAMA Tomoaki
Hi,

 +/* __MINGW64_VERSION_MAJOR is related to both 32/64 bit gcc compiles by
 + * mingw-w64, however it gots defined only after
 Why not use __MINGW32__, which is defined without including any headers?
 
 Because it's defined by compilers other than mingw-w64.
 
 I see. That's because of mingw (not -w64).
 Should it be OK, then, if mingw is fine with that condition?


This really breaks mingw gcc 4.6.1 :( (it does not have crtdefs.h).
If moving it downward does not break MSVC, perhaps that's the good way.
Otherwise, we might check for the presence of crtdefs.h with configure?




[HACKERS] Re: [COMMITTERS] pgsql: Move pg_dump memory routines into pg_dumpmem.c/h and restore com

2011-11-27 Thread Andrew Dunstan



On 11/26/2011 10:36 PM, Bruce Momjian wrote:

Move pg_dump memory routines into pg_dumpmem.c/h and restore common.c
with its original functions.  The previous function migration would
cause too many difficulties in back-patching.



MSVC is still broken with this change, but now I think we've exposed a 
long-standing error in the MSVC build system.


Mkvcbuild.pm has:

   my $pgdumpall = AddSimpleFrontend('pg_dump', 1);
   $pgdumpall->{name} = 'pg_dumpall';
   $pgdumpall->AddIncludeDir('src\backend');
   $pgdumpall->AddFile('src\bin\pg_dump\pg_dumpall.c');
   $pgdumpall->AddFile('src\bin\pg_dump\keywords.c');
   $pgdumpall->AddFile('src\backend\parser\kwlookup.c');


AddSimpleFrontend() calls AddDir(), which harvests the contents of 
$(OBJS) from the Makefile for the target. But pg_dumpall doesn't want 
$(OBJS). We've been benignly but mistakenly building it with them for 
quite a few years, but now we can't do that any more, given Bruce's 
changes. It looks like the fix is to call AddProject() for pg_dumpall 
instead of AddSimpleFrontend() and then do a little more work ourselves 
to select exactly what we need.



I don't have time to do that and test it immediately, as I'll be away 
most of the day, so if anyone else can, please go for it.


cheers

andrew





[HACKERS] Large number of open(2) calls with bulk INSERT into empty table

2011-11-27 Thread Florian Weimer
I noticed that a bulk INSERT into an empty table (which has been
TRUNCATEd in the same transaction, for good measure) results in a
curious number of open(2) calls for the FSM resource fork:

open("base/657862/16554373_fsm", O_RDWR) = -1 ENOENT (No such file or directory)
lseek(17, 0, SEEK_END)                  = 407584768
write(17, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 8192) = 8192
lseek(18, 0, SEEK_END)                  = 333119488
write(18, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 8192) = 8192
lseek(18, 0, SEEK_END)                  = 333127680
write(18, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 8192) = 8192
open("base/657862/16554373_fsm", O_RDWR) = -1 ENOENT (No such file or directory)
lseek(17, 0, SEEK_END)                  = 407592960
write(17, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 8192) = 8192
lseek(18, 0, SEEK_END)                  = 333135872
write(18, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 8192) = 8192
open("base/657862/16554373_fsm", O_RDWR) = -1 ENOENT (No such file or directory)
lseek(17, 0, SEEK_END)                  = 407601152
write(17, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 8192) = 8192
lseek(18, 0, SEEK_END)                  = 333144064
write(18, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 8192) = 8192
lseek(18, 0, SEEK_END)                  = 333152256
write(18, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 8192) = 8192
open("base/657862/16554373_fsm", O_RDWR) = -1 ENOENT (No such file or directory)

I'm not sure if this results in a significant performance hit on Linux
(because the dentry cache covers negative lookups, too), but I suspect
that it could be an issue on other systems.

This happens with PostgreSQL 9.1.0.

-- 
Florian Weimerfwei...@bfk.de
BFK edv-consulting GmbH   http://www.bfk.de/
Kriegsstraße 100  tel: +49-721-96201-1
D-76133 Karlsruhe fax: +49-721-96201-99



[HACKERS] small cleanup pl_exec.c

2011-11-27 Thread Pavel Stehule
Hello,

The function exec_set_found uses PointerGetDatum where it should use BoolGetDatum:

*** pl_exec.c.orig  2011-11-24 12:51:45.0 +0100
--- pl_exec.c   2011-11-27 18:07:46.983118326 +0100
***
*** 5860,5866 
PLpgSQL_var *var;

var = (PLpgSQL_var *) (estate->datums[estate->found_varno]);
!   var->value = PointerGetDatum(state);
var-isnull = false;
  }

--- 5860,5866 
PLpgSQL_var *var;

var = (PLpgSQL_var *) (estate->datums[estate->found_varno]);
!   var->value = BoolGetDatum(state);
var-isnull = false;
  }

Regards

Pavel Stehule



Re: [HACKERS] GiST range-contained-by searches versus empty ranges

2011-11-27 Thread Tom Lane
Alexander Korotkov aekorot...@gmail.com writes:
 The first solution that comes to mind is to make the penalty and
 picksplit functions forcibly segregate empty ranges from others, that is
 a split will never put empty ranges together with non-empty ones.

 Have you seen my patch about GiST for range types?
 https://commitfest.postgresql.org/action/patch_view?id=682
 There mentioned problem is solved by introduction of RANGE_CONTAIN_EMPTY
 bit in range flags. This bit is only used in GiST index and means that
 there are underlying empty ranges.

Yeah, I had been coming around to the idea that we'd need some kluge
like that.  The forcible-segregation idea falls apart as soon as you
start thinking about multiple-column indexes --- if you have several
columns each demanding to segregate a certain kind of data, you can
easily overrun the space available in the root page, whereupon it's no
longer possible to guarantee anything about the contents of child
pages.

I think this is a must fix for 9.2, because once we release it will
be too late to do anything without invalidating existing indexes.

regards, tom lane



Re: [HACKERS] small cleanup pl_exec.c

2011-11-27 Thread Tom Lane
Pavel Stehule pavel.steh...@gmail.com writes:
 The function exec_set_found uses PointerGetDatum where it should use BoolGetDatum

Applied, thanks.

regards, tom lane



Re: [HACKERS] Feature proposal: www_fdw

2011-11-27 Thread Alexander Soudakov
Hello.

I finished developing www_fdw:
https://github.com/cyga/www_fdw/

It has docs/examples:
https://github.com/cyga/www_fdw/wiki

I haven't uploaded it to pgxn or pgfoundry yet.

I want to ask for the following 2 things:
1. testing from community;
2. how can I add my extension to official fdw list:
http://wiki.postgresql.org/wiki/Foreign_data_wrappers
Is there any procedure for it?



On Thu, Sep 29, 2011 at 1:20 AM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
 Florian Pflug f...@phlo.org wrote:
 On Sep28, 2011, at 15:32 , Alexander Soudakov wrote:
 Here you can find www_fdw feature documentation:
 http://wiki.postgresql.org/wiki/WWW_FDW

 Certainly looks useful (as a third-party extension, as others have
 already pointed out)

 Our programmers agree that it is likely to be useful here.  I agree
 that it should be an extension.

 What I didn't quite understand is how one would pass (dynamic)
 parameters for a GET request. For example, not too long ago I
 needed to access the Google Maps API from postgres. I ended up
 using pl/python, and now wonder if your FDW would support that
 use-case.

 I would assume that the usual ? to start parameters and & between
 parameters would be used.  For example, with Google Maps:

 http://maps.google.com/maps?hl=en&ie=UTF8&hq=&hnear=Madison,+Dane,+Wisconsin&ll=43.074684,-89.38188&spn=0.003006,0.00383&t=h&z=18&layer=c&cbll=43.07468,-89.381742&panoid=LhJ-PFHVzxRguJ6h616mmQ&cbp=12,355.53,,0,-1.32

 -Kevin




-- 
Alexander Soudakov
Software Developer
email: cyga...@gmail.com
skype: asudakov



Re: [HACKERS] Feature proposal: www_fdw

2011-11-27 Thread Alexander Soudakov
Also,

in addition to:
 I want to ask for the following 2 things:
 1. testing from community;
 2. how can I add my extension to official fdw list:
 http://wiki.postgresql.org/wiki/Foreign_data_wrappers
 Is there any procedure for it?

Can I have it included in 9.2?

On Sun, Nov 27, 2011 at 10:28 PM, Alexander Soudakov cyga...@gmail.com wrote:
 Hello.

 I finished developing www_fdw:
 https://github.com/cyga/www_fdw/

 It has docs/examples:
 https://github.com/cyga/www_fdw/wiki

 I haven't uploaded it to pgxn or pgfoundry yet.

 I want to ask for the following 2 things:
 1. testing from community;
 2. how can I add my extension to official fdw list:
 http://wiki.postgresql.org/wiki/Foreign_data_wrappers
 Is there any procedure for it?



 On Thu, Sep 29, 2011 at 1:20 AM, Kevin Grittner
 kevin.gritt...@wicourts.gov wrote:
 Florian Pflug f...@phlo.org wrote:
 On Sep28, 2011, at 15:32 , Alexander Soudakov wrote:
 Here you can find www_fdw feature documentation:
 http://wiki.postgresql.org/wiki/WWW_FDW

 Certainly looks useful (as a third-party extension, as others have
 already pointed out)

 Our programmers agree that it is likely to be useful here.  I agree
 that it should be an extension.

 What I didn't quite understand is how one would pass (dynamic)
 parameters for a GET request. For example, not too long ago I
 needed to access the Google Maps API from postgres. I ended up
 using pl/python, and now wonder if your FDW would support that
 use-case.

 I would assume that the usual ? to start parameters and & between
 parameters would be used.  For example, with Google Maps:

 http://maps.google.com/maps?hl=en&ie=UTF8&hq=&hnear=Madison,+Dane,+Wisconsin&ll=43.074684,-89.38188&spn=0.003006,0.00383&t=h&z=18&layer=c&cbll=43.07468,-89.381742&panoid=LhJ-PFHVzxRguJ6h616mmQ&cbp=12,355.53,,0,-1.32

 -Kevin




 --
 Alexander Soudakov
 Software Developer
 email: cyga...@gmail.com
 skype: asudakov




-- 
Alexander Soudakov
Software Developer
email: cyga...@gmail.com
skype: asudakov



Re: GiST for range types (was Re: [HACKERS] Range Types - typo + NULL string constructor)

2011-11-27 Thread Tom Lane
Alexander Korotkov aekorot...@gmail.com writes:
 On Sat, Nov 26, 2011 at 11:11 AM, Jeff Davis pg...@j-davis.com wrote:
 There's been some significant change in rangetypes_gist.c, can you
 please rebase this patch?

 OK, rebased with head.

I looked at this patch a bit.  I agree with the aspect of it that says
let's add a flag bit so we can tell whether an upper GiST item includes
any empty ranges; I think we really need that in order to make
contained_by searches usable.  However, I'm not so happy with the
proposed rewrite of the penalty/picksplit functions.  I see two problems
there:

1. penalty is using both hard-wired penalty values (1.0, 2.0, etc) and
values obtained from subtype_diff.  This is not good, because you have
no idea what scale the subtype differences will be expressed on.  The
hard-wired values could be greatly larger than range widths, or greatly
smaller, resulting in randomly different index behavior.

2. It's too large/complicated.  You're proposing to add nearly a
thousand lines to rangetypes_gist.c, and I do not see any reason to
think that this is so much better than what's there now as to justify
that kind of increment in the code size.  I saw your performance
results, but one set of results on an arbitrary (not-real-world) test
case doesn't prove a lot to me; and in particular it doesn't prove that
we couldn't do as well with a much smaller and simpler patch.

There are a lot of garden-variety coding problems, too, for instance here:

+ *penalty = Max(DatumGetFloat8(FunctionCall2(
+ subtype_diff, orig_lower.val, new_lower.val)), 0.0);

which is going to uselessly call the subtype_diff function twice most of
the time (Max() is only a macro), plus you left off the collation
argument.  But I don't think it's worth worrying about those until the
big picture is correct, which I feel it isn't yet.

Earlier in the thread you wrote:

 Questions:
 1) I'm not sure about whether we need support of <> in GiST, because it
 always produces full index scan (except search for non-empty ranges).

I was thinking the same thing; that opclass entry seems pretty darn
useless.

I propose to pull out and apply the changes related to the
RANGE_CONTAIN_EMPTY flag, and also remove the <> opclass entry,
because I think these are uncontroversial and in the nature of
must fix quickly.  The redesign of the penalty and picksplit
functions should be discussed separately.

regards, tom lane



[HACKERS] information schema/aclexplode doesn't know about default privileges

2011-11-27 Thread Peter Eisentraut
Try this:

create function foo(int) returns int as $$ select $1 $$ language sql;

select * from information_schema.routine_privileges;

This ought to show EXECUTE privilege on the new function, but it
doesn't, because proacl is null, and nothing in the information schema
handles that specially.

I've pondered some ways to fix that.  One would be to add a variant of
aclexplode() that takes a parameter telling which catalog the acl datum
came from, and aclexplode() could then substitute the data received
acldefault() for null values.  The other way would be to handle this
entirely in the information schema SQL (either using some coalesce calls
or perhaps a UNION).  But that would mean duplicating the knowledge of
acldefault() in a second remote place.  So I'm thinking that handling it
in aclexplode() would be better.

Comments?





Re: GiST for range types (was Re: [HACKERS] Range Types - typo + NULL string constructor)

2011-11-27 Thread Alexander Korotkov
On Sun, Nov 27, 2011 at 10:43 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 1. penalty is using both hard-wired penalty values (1.0, 2.0, etc) and
 values obtained from subtype_diff.  This is not good, because you have
 no idea what scale the subtype differences will be expressed on.  The
 hard-wired values could be greatly larger than range widths, or greatly
 smaller, resulting in randomly different index behavior.

The current GiST code only compares penalty values for inserting the same
tuple, and I don't see why that would change. So, values obtained from
subtype_diff and hard-wired values would never be compared with each other.


 2. It's too large/complicated.  You're proposing to add nearly a
 thousand lines to rangetypes_gist.c, and I do not see any reason to
 think that this is so much better than what's there now as to justify
 that kind of increment in the code size.  I saw your performance
 results, but one set of results on an arbitrary (not-real-world) test
 case doesn't prove a lot to me; and in particular it doesn't prove that
 we couldn't do as well with a much smaller and simpler patch.

I've tested the double sorting split algorithm itself pretty thoroughly on
synthetic datasets; see the paper for details. The strategy of separating
different classes of ranges really needs more testing, but obtaining large
enough real-life datasets is pretty problematic for me.


 There are a lot of garden-variety coding problems, too, for instance here:

 + *penalty = Max(DatumGetFloat8(FunctionCall2(
 + subtype_diff, orig_lower.val, new_lower.val)), 0.0);

 which is going to uselessly call the subtype_diff function twice most of
 the time (Max() is only a macro), plus you left off the collation
 argument.  But I don't think it's worth worrying about those until the
 big picture is correct, which I feel it isn't yet.

Oh, I see. It will be fixed.

I propose to pull out and apply the changes related to the
 RANGE_CONTAIN_EMPTY flag, and also remove the <> opclass entry,
 because I think these are uncontroversial and in the nature of
 must fix quickly.  The redesign of the penalty and picksplit
 functions should be discussed separately.

I think the same.

--
With best regards,
Alexander Korotkov.


Re: [HACKERS] Feature proposal: www_fdw

2011-11-27 Thread Kevin Grittner
Alexander Soudakov cyga...@gmail.com wrote:
 
 in addition to:
 I want to ask for the following 2 things:
 1. testing from community;
 2. how can I add my extension to official fdw list:
 http://wiki.postgresql.org/wiki/Foreign_data_wrappers
 Is there any procedure for it?
 
 Can I have it included in 9.2?
 
I think your best bet would be to review patch submission guidelines
and submit it to the open CommitFest:
 
http://wiki.postgresql.org/wiki/Submitting_a_Patch#Patch_submission
 
https://commitfest.postgresql.org/action/commitfest_view/open
 
-Kevin



Re: [HACKERS] Prep object creation hooks, and related sepgsql updates

2011-11-27 Thread Kohei KaiGai
2011/11/26 Dimitri Fontaine dimi...@2ndquadrant.fr:
 Kohei KaiGai kai...@kaigai.gr.jp writes:
 We still don't have a clear direction for the way to implement external
 permission checks at object creation time. So, please consider these
 patches to be at the proof-of-concept stage, using the prep-creation hook
 for permission checks.

 I wonder if you could implement that as an extension given the command
 trigger patch finds its way in.  What do you think?

Unfortunately, it does not solve my point.

My proposition allows an extension to deliver an opaque value, set up at
the prep-creation hook, into the post-creation hook. It shall be used to
deliver a security label to be assigned to the new object; however, the
label cannot be assigned in the prep-creation phase, because the object's
OID is not fixed yet.
(It is not an option to ask the operating system for a default security
label of the new object twice, because the security policy may be reloaded
between prep- and post-.)

It is also the reason why I mentioned the idea of putting the prep-creation
hook on a limited number of object classes only. It requires code
modification to maintain an opaque private value between the prep- and
post- hooks.

Thanks,
-- 
KaiGai Kohei kai...@kaigai.gr.jp



Re: [HACKERS] Patch: Perl xsubpp

2011-11-27 Thread David E. Wheeler
On Nov 27, 2011, at 6:11 AM, Andrew Dunstan wrote:

 Has this been backpatched as well?
 
 It has been to 9.1.

There may be a simple workaround, but it's non-obvious. I think it should be 
back-patched all the way.

Best,

David




Re: [HACKERS] Refactoring on DROP/ALTER SET SCHEMA/ALTER RENAME TO statement

2011-11-27 Thread Kohei KaiGai
2011/11/21 Robert Haas robertmh...@gmail.com:
 Now, what you have here is a much broader reworking.  And that's not
 necessarily bad, but at the moment I'm not really seeing how it
 benefits us.

 From my point of view, if individual object types need their own handlers
 for alter commands, the places in the code that check permissions are also
 distributed across object types. That is a burden for maintaining hooks
 that allow modules (e.g. sepgsql) to perform permission checks.

 Well, it's always nicer if you can just put a call to some hook in one
 place instead of many.  But it's not worth sacrificing everything to
 make that happen.  I think we need to compare the value of only
 needing a hook in one place against the disruption of changing a lot
 of code that is working fine as it is.  In the case of the DROP
 commands, it seems to me that the refactoring you did came out a huge
 win, but this doesn't seem as clear to me.  Note that DROP actually
 does dispatch the actual work of dropping the object to a
 type-specific function, unlike what you're trying to do here.

Yes, I agree with your opinion.
I'm also not sure whether my refactoring efforts on the ALTER commands
will give us more value than the size of the code changes, although we
would be able to consolidate the location of the hooks around them.

If we add the new properties required by AlterObjectNamespace, as
you suggested, it will allow reducing the number of callers of this routine
mechanically with small changes.
Should I try reworking it this way?
At least, AlterObjectNamespace already consolidates the place where
permissions are checked.

I initially thought this was too specific to AlterObjectNamespace, but then
I reconsidered: we may be able to apply the same properties to the
ALTER RENAME TO / SET OWNER commands as well, even though
these commands have completely separate routines for each object type.

Thanks,
-- 
KaiGai Kohei kai...@kaigai.gr.jp



Re: [HACKERS] Prep object creation hooks, and related sepgsql updates

2011-11-27 Thread Dimitri Fontaine
Kohei KaiGai kai...@kaigai.gr.jp writes:
 I wonder if you could implement that as an extension given the command
 trigger patch finds its way in.  What do you think?

 Unfortunately, it does not solve my point.

[...]

 It is also reason why I mentioned about an idea that put prep-creation hook
 on a limited number of object classes only. It requires us code modification
 to maintain an opaque private between prep- and post- hooks.

In my current proposal for command triggers, the trigger procedure is
given schemaname and objectname as separate arguments.  It seems to me
easy enough to use that as a key into some data structure whose value is
whatever opaque data you need, maintained in your extension's trigger
code. You can write the triggers in C.

I don't think schemaname+objectname fails to be unique, so I don't think
you need another kind of Oid in BEFORE creation triggers here.

-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



Re: [HACKERS] Prep object creation hooks, and related sepgsql updates

2011-11-27 Thread Kohei KaiGai
2011/11/27 Dimitri Fontaine dimi...@2ndquadrant.fr:
 Kohei KaiGai kai...@kaigai.gr.jp writes:
 I wonder if you could implement that as an extension given the command
 trigger patch finds its way in.  What do you think?

 Unfortunately, it does not solve my point.

 [...]

 It is also the reason why I mentioned the idea of putting the prep-creation hook
 on a limited number of object classes only. It requires code modification on our
 side to maintain an opaque private datum between the prep- and post- hooks.

 In my current proposal for command triggers, the trigger procedure is
 given schemaname and objectname as separate arguments.  It seems to me
 easy enough to use that as a key to some data structure where the value
 is any opaque data you need, and that you maintain in your extension
 triggers code. You can write them in C.

Sorry, it does not cover all the code paths where I want to apply permission
checks on creation of new tables.

The existing DAC code checks permission on creation of new tables at
DefineRelation() and OpenIntoRel(), and sepgsql also wants to follow
this manner.
However, OpenIntoRel() does not go through ProcessUtility, so it seems
to me the command trigger would not be invoked in this case.

Also, it seems to me the current command trigger proposal does not
support firing triggers on creation of databases, although the
permission check requires the Oid of the source database, which does
not appear in the pg_database catalog either.

 I don't think schemaname+objectname fails to be unique, so I don't think
 you need another kind of Oid in BEFORE creation triggers here.

The pg_seclabel and pg_shseclabel catalogs need an OID to assign a security
label to a particular database object, so the label provider (sepgsql) must
know the Oid of the target object at assignment time.

Thanks,
-- 
KaiGai Kohei kai...@kaigai.gr.jp



Re: [HACKERS] Prep object creation hooks, and related sepgsql updates

2011-11-27 Thread Dimitri Fontaine
Kohei KaiGai kai...@kaigai.gr.jp writes:
 Sorry, it does not cover all the code paths where I want to apply permission
 checks on creation of new tables.

 The existing DAC code checks permission on creation of new tables at
 DefineRelation() and OpenIntoRel(), and sepgsql also wants to follow
 this manner.
 However, OpenIntoRel() does not go through ProcessUtility, so it seems
 to me the command trigger would not be invoked in this case.

We have the same problem in the command trigger patch: we will need to
add specific calls to its functions from code paths other than just
ProcessUtility.

 Also, it seems to me the current command trigger proposal does not
 support firing triggers on creation of databases, although the
 permission check requires the Oid of the source database, which does
 not appear in the pg_database catalog either.

I will have to look at what prevents us from adding support for the
CREATE DATABASE command here.  It seems to be just another branch of the
switch in standard_ProcessUtility().

 I don't think schemaname+objectname fails to be unique, so I don't think
 you need another kind of Oid in BEFORE creation triggers here.

 The pg_seclabel and pg_shseclabel catalogs need an OID to assign a security
 label to a particular database object, so the label provider (sepgsql) must
 know the Oid of the target object at assignment time.

Yes, and you need to refer to things you did in the BEFORE trigger from
the AFTER trigger; I'm just offering you a way to do that.  Then if you
need the Oid in the AFTER trigger, of course you have it.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



Re: [HACKERS] information schema/aclexplode doesn't know about default privileges

2011-11-27 Thread Tom Lane
Peter Eisentraut pete...@gmx.net writes:
 This ought to show EXECUTE privilege on the new function, but it
 doesn't, because proacl is null, and nothing in the information schema
 handles that specially.

 I've pondered some ways to fix that.  One would be to add a variant of
 aclexplode() that takes a parameter telling which catalog the acl datum
 came from, and aclexplode() could then substitute the data received
 from acldefault() for null values.  The other way would be to handle this
 entirely in the information schema SQL (either using some coalesce calls
 or perhaps a UNION).  But that would mean duplicating the knowledge of
 acldefault() in a second remote place.  So I'm thinking that handling it
 in aclexplode() would be better.

+1.  It would be a really bad idea for the acldefault() logic to be
duplicated someplace else, especially in SQL code where grepping for the
relevant macros wouldn't even find it.

regards, tom lane



[HACKERS] hiding variable-length fields from Form_pg_* structs

2011-11-27 Thread Peter Eisentraut
It would be helpful if variable length catalog fields (except the first
one) would not be visible on the C level in the Form_pg_* structs.  We
keep them listed in the include/catalog/pg_*.h files so that the BKI
generating code can see them and for general documentation, but the
fields are meaningless in C, and some catalog files contain free-form
comments advising the reader of that.  If we could hide them somehow, we
would avoid random programming bugs, deconfuse compilers trying to
generate useful warnings, and save occasional stack space.  There are
several known cases of the first and second issue, at least.

I haven't found the ideal way to implement that yet, but the general
idea would be something like:

CATALOG(pg_attribute,1249) BKI_BOOTSTRAP ...
{
	...
	int4		attinhcount;
	Oid			attcollation;
	aclitem		attacl[1];
	CATVARLEN(
	text		attoptions[1];
	text		attfdwoptions[1];
	)
} FormData_pg_attribute;

where CATVARLEN is defined empty in C, and ignored in the BKI generation
code.

The real trick is to find something that pgindent and indenting text
editors handle well.

Ideas?





Re: [HACKERS] pg_upgrade automatic testing

2011-11-27 Thread Peter Eisentraut
On lör, 2011-11-05 at 18:45 +0200, Peter Eisentraut wrote:
 On mån, 2011-09-19 at 07:06 +0300, Peter Eisentraut wrote:
  I found a simpler way to get this working.  Just hack up the catalogs
  for the new path directly.  So I can now run this test suite against
  older versions as well, like this:
  
  contrib/pg_upgrade$ make installcheck oldsrc=somewhere oldbindir=elsewhere
 
 Any comments on how to proceed with this?  I think it has been useful in
 detecting pg_upgrade breakage a few times already, so I'd like to commit
 it and start using it.

I've committed it now, and some buildfarm members are failing with lack
of shared memory, semaphores, or disk space.  Don't know what to do with
that or why so many are failing like that.  We could create a way to
omit the test if it becomes a problem.





Re: [HACKERS] pg_upgrade automatic testing

2011-11-27 Thread Tom Lane
Peter Eisentraut pete...@gmx.net writes:
 I've committed it now, and some buildfarm members are failing with lack
 of shared memory, semaphores, or disk space.  Don't know what to do with
 that or why so many are failing like that.  We could create a way to
 omit the test if it becomes a problem.

I believe the issue is that those BF members have kernel settings that
only support running one postmaster at a time.  The way you've got this
set up, it launches a new private postmaster during a make installcheck;
which is not only problematic from a resource consumption standpoint,
but seems to me to violate the spirit of make installcheck, because
what it's testing is not the installed postmaster but a local instance.

Can you confine the test to only occur in make check mode, not make
installcheck, please?

regards, tom lane



Re: [HACKERS] logging in high performance systems.

2011-11-27 Thread Stephen Frost
* Theo Schlossnagle (je...@omniti.com) wrote:
 I'd like to be able to creatively solve this problem without modifying
 the core, but today I cannot.

I thought a hook already existed for this (there's already a module
which uses a hook to log commands run as superuser, for example).
Perhaps it wasn't in the right place, however.

 So... here's my first whack at solving this with some flexibility.

I was hoping to work on something similar (and have discussed it with
Magnus a few times) but with much more flexibility.  Basically, have a
structure where there's a lot of meta-data available that can easily be
used to direct where a log message goes (or if it's logged at all), all
without having to actually parse the log message (which is painful..).

To be honest, I was considering something like what syslog-ng provides,
but PG-specific, not needing regexps (though perhaps supporting them),
and with multiple different destination types and locations.

Independent of all that, if we don't have hooks already that can be used
for this, I certainly think it makes sense to add some, even if we do
something more later.

Thanks,

Stephen




Re: [HACKERS] hiding variable-length fields from Form_pg_* structs

2011-11-27 Thread Tom Lane
Peter Eisentraut pete...@gmx.net writes:
 CATALOG(pg_attribute,1249) BKI_BOOTSTRAP ...
 {
 ...
 int4		attinhcount;
 Oid			attcollation;
 aclitem		attacl[1];
 CATVARLEN(
 text		attoptions[1];
 text		attfdwoptions[1];
 )
 } FormData_pg_attribute;

 where CATVARLEN is defined empty in C, and ignored in the BKI generation
 code.

 The real trick is to find something that handles well with pgindent and
 indenting text editors.

The low-tech way would be

CATALOG(pg_attribute,1249) BKI_BOOTSTRAP ...
{
	...
	int4		attinhcount;
	Oid			attcollation;
	aclitem		attacl[1];
#ifdef CATALOG_VARLEN_FIELDS
	text		attoptions[1];
	text		attfdwoptions[1];
#endif
} FormData_pg_attribute;

regards, tom lane



Re: GiST for range types (was Re: [HACKERS] Range Types - typo + NULL string constructor)

2011-11-27 Thread Tom Lane
Alexander Korotkov aekorot...@gmail.com writes:
 On Sun, Nov 27, 2011 at 10:43 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 1. penalty is using both hard-wired penalty values (1.0, 2.0, etc) and
 values obtained from subtype_diff.  This is not good, because you have
 no idea what scale the subtype differences will be expressed on.  The
 hard-wired values could be greatly larger than range widths, or greatly
 smaller, resulting in randomly different index behavior.

 The current GiST code only compares penalty values for insertion of the same
 tuple, and I don't see why that would change.  So values obtained from
 subtype_diff and hard-wired values would never be compared with each other.

I see your point that we only need the penalty values to be comparable
for the same new value, but I don't think that really answers my
objection, because you've had to lobotomize the logic.  As an example,
if we have a new empty range to insert, and all the existing root-page
entries are ordinary finite ranges, this code will throw up its hands
and give them all the same 4.0 penalty value.  Surely it would be better
to attempt to pick the smallest (narrowest) existing range.  But to do
that, you have to pay attention to the subdiff value.

regards, tom lane



Re: [HACKERS] pg_upgrade automatic testing

2011-11-27 Thread Andrew Dunstan



On 11/27/2011 06:17 PM, Tom Lane wrote:

Peter Eisentrautpete...@gmx.net  writes:

I've committed it now, and some buildfarm members are failing with lack
of shared memory, semaphores, or disk space.  Don't know what to do with
that or why so many are failing like that.  We could create a way to
omit the test if it becomes a problem.

I believe the issue is that those BF members have kernel settings that
only support running one postmaster at a time.  The way you've got this
set up, it launches a new private postmaster during a make installcheck;
which is not only problematic from a resource consumption standpoint,
but seems to me to violate the spirit of make installcheck, because
what it's testing is not the installed postmaster but a local instance.

Can you confine the test to only occur in make check mode, not make
installcheck, please?





Contrib tests are only run by the buildfarm in installcheck mode, so 
that will probably turn the tests off for the buildfarm; +1 on that 
:-) I think these tests are probably somewhat ill-conceived. Note also 
that shell scripts are not portable, so these tests would not be 
able to run on MSVC buildfarm members, even if they had been enabled in 
the MSVC regression driver, which they have not been. If we need a regression 
driver script it needs to be written in Perl.


Also note that the test as written is likely to add significantly to 
buildfarm run times, as it will run the entire base regression suite 
again, possibly several times.


Finally, I think that this is probably not what we really need. I have 
already started work (as I mentioned some weeks ago) on having the 
buildfarm stash away a successful build and data directory, to be used 
later in cross-version upgrade testing, which seems to me much more what 
we need from something like the buildfarm. Maybe we could discuss how to 
run something like that.


And yes, some buildfarm members run on fairly scarce machine resources, 
of memory, CPU time and disk space, and we need not to increase what our 
tests use without due notice and care.


cheers

andrew



Re: [HACKERS] Feature proposal: www_fdw

2011-11-27 Thread Hitoshi Harada
On Sun, Nov 27, 2011 at 10:28 AM, Alexander Soudakov cyga...@gmail.com wrote:
 Hello.

 I finished developing www_fdw:
 https://github.com/cyga/www_fdw/

 It has docs/examples:
 https://github.com/cyga/www_fdw/wiki

 I haven't uploaded it to PGXN or pgfoundry yet.

 I want to ask for the following 2 things:
 1. testing from community;
 2. how can I add my extension to official fdw list:
 http://wiki.postgresql.org/wiki/Foreign_data_wrappers
 Is there any procedure for it?

You need a community login to edit the wiki. Go to
https://www.postgresql.org/account/login/?next=/account/ to sign up.
Now that you have uploaded it to PGXN, I hope many people will test it.

Regards,
-- 
Hitoshi Harada



Re: [HACKERS] Inlining comparators as a performance optimisation

2011-11-27 Thread Peter Geoghegan
Attached are the results from performing a similar process to the
prior benchmark, but on Greg Smith's high-end server, and with an
orderlines table that has been doubled up until it is 1538 MB,
making the same old query perform a quicksort that's over 3GB. Short
version: HEAD is 20468.0ms, with my patch it's 13689.698ms, so these
gains hold up very well for very large in-memory sorts, possibly even
perfectly.

Note that the doubling up had an interesting and desirable effect
for the purposes of this patch review: It's approaching a worst case
due to the low cardinality of the product + quantity columns, which
crops up with the non-inlined functions only (int4regqsort_arg, etc).
All too frequently, evaluating the first scankey results in equality,
and so we cannot elide the SQL function call machinery. This is going
to dampen the improvement for the multiple scanKey case considerably
(and it looks like a smaller relative improvement than when the
orderlines table was quite a lot smaller). As I said from the outset,
there is no worst case for the single scanKey case that occurs to me.
Multiple scanKeys are likely to be a problem for any effort to offer
user-defined, per-type sort functions. I could probably make
int4regqsort_arg and friends perform a bit better than we see here by
increasing the cardinality of those two columns for the second query,
so the first scanKey comparison would frequently suffice.

Incidentally, I'm pretty sceptical of the idea of any effort being
made to provide user-defined per-type sort functions or anything like
that. No one has come forward with a novel sorting implementation that
looks useful, although there has been some talk of some. It's
relatively easy to hack on tuplesort to build a prototype, so I don't
think the lack of this functionality is the blocker here. Besides, we
will probably end up doing a terrible job of anticipating how invasive
such a facility needs to be to facilitate these novel sorting
implementations.

While I'm very encouraged by these results, I must admit that there
are a few things that I cannot currently explain. I don't know why the
second query on the Optimized sheet outperforms the same query on
the Not inlined page. When you consider how small the standard
deviation is for each set of results, it's fairly obvious that it
cannot be explained by noise. I thought it was weird, so I re-ran both
benchmarks, with much the same outcome. It may be that the compiler
gets lucky for the optimized case, resulting in a bit of an extra
gain, and that effect is ridiculously magnified. In all cases, the
regression tests pass.

The single key/inlining optimisation seems to account for a big part
of the gains, and the Not inlined figures for the first query would
seem to suggest that the inlining can account for more than half of the
gains seen, whereas that was previously assumed to be less. What I
think is happening here is that we're benefiting both from inlining as
well as not having to even consider multiple scanKeys (we have to at
least consider them even if we know in advance from the nScankeys
variable that there'll never be multiple scanKeys). Again, if the
cardinality was higher, we'd probably see Not inlined do better
here.

-- 
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services


results_server.ods
Description: application/vnd.oasis.opendocument.spreadsheet



Re: [HACKERS] Patch: Perl xsubpp

2011-11-27 Thread Mr. Aaron W. Swenson
On Sun, Nov 27, 2011 at 12:12:41PM -0800, David E. Wheeler wrote:
 On Nov 27, 2011, at 6:11 AM, Andrew Dunstan wrote:
 
  Has this been backpatched as well?
  
  It has been to 9.1.
 
 There may be a simple workaround, but it's non-obvious. I think it should be 
 back-patched all the way.
 
 Best,
 
 David

That's my vote, too. It's preventing users of all versions from compiling
against ExtUtils-ParseXS-3.20.0.

-- 
Mr. Aaron W. Swenson
Gentoo Linux
Developer, Proxy Committer
Email: titanof...@gentoo.org
GnuPG FP : 2C00 7719 4F85 FB07 A49C  0E31 5713 AA03 D1BB FDA0
GnuPG ID : D1BBFDA0




Re: GiST for range types (was Re: [HACKERS] Range Types - typo + NULL string constructor)

2011-11-27 Thread Alexander Korotkov
On Mon, Nov 28, 2011 at 3:00 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 I see your point that we only need the penalty values to be comparable
 for the same new value, but I don't think that really answers my
 objection, because you've had to lobotomize the logic.  As an example,
 if we have a new empty range to insert, and all the existing root-page
 entries are ordinary finite ranges, this code will throw up its hands
 and give them all the same 4.0 penalty value.  Surely it would be better
 to attempt to pick the smallest (narrowest) existing range.  But to do
 that, you have to pay attention to the subdiff value.

I believe this is a problem of the current GiST interface. If we use the
subdiff value as the penalty for insertion of an empty range, we have to
return a zero penalty for any entry with the RANGE_CONTAIN_EMPTY flag, and
for a plain empty entry too, with no way to define a priority between them.
In my opinion the solution is for the penalty function to return a vector of
floats instead of a single float. With the current GiST interface we are
stuck with a solution that handles some cases better and some cases worse.
For example, GiST for boxes also suffers from this interface limitation. In
many papers I have met the recommendation to choose the smallest box among
boxes with the same extension (it is not a rare situation to have multiple
boxes with zero extension) for tuple insertion, but with the current
interface we cannot implement that.

--
With best regards,
Alexander Korotkov.


Re: [HACKERS] Patch: add timing of buffer I/O requests

2011-11-27 Thread Greg Smith

On 11/27/2011 04:39 PM, Ants Aasma wrote:

On the AMD I saw about 3% performance drop with timing enabled. On the
Intel machine I couldn't measure any statistically significant change.


Oh no, it's party pooper time again.  Sorry I have to be the one to do 
it this round.  The real problem with this whole area is that we know 
there are systems floating around where the amount of time taken to grab 
timestamps like this is just terrible.  I've been annoyed enough by that 
problem to spend some time digging into why that is--seems to be a bunch 
of trivia around the multiple ways to collect time info on x86 
systems--and after this CommitFest is over I was already hoping to dig 
through my notes and start quantifying that more.  So you can't really 
prove the overhead of this approach is acceptable just by showing two 
examples; we need to find one of the really terrible clocks and test 
there to get a real feel for the worst-case.


I recall a patch similar to this one was submitted by Greg Stark some 
time ago.  It used the info for different reasons--to try and figure out 
whether reads were cached or not--but I believe it withered rather than 
being implemented mainly because it ran into the same fundamental 
roadblocks. My memory could be wrong here; there were also 
concerns about what the data would be used for.


I've been thinking about a few ways to try and cope with this whole 
class of timing problem:


-Document the underlying problem and known workarounds, provide a way to 
test how bad the overhead is, and just throw our hands up and say 
sorry, you just can't instrument like this if someone has a slow system.


-Have one of the PostgreSQL background processes keep track of a time 
estimate on its own, only periodically pausing to sync against the real 
time.  Then most calls to gettimeofday() can use that value instead.  I 
was thinking of that idea for slightly longer running things though; I 
doubt that can be made accurate enough to instrument buffer I/O.


And while I hate to kick off massive bike-shedding in your direction, 
I'm also afraid this area--collecting stats about how long individual 
operations take--will need a much wider ranging approach than just 
looking at the buffer cache ones.  If you step back and ask "what do 
people expect here?", there's a pretty large number who really want 
something like Oracle's v$session_wait and v$system_event interface for 
finding the underlying source of slow things.  There's enough demand for 
that that EnterpriseDB has even done some work in this area too; what 
I've been told about it suggests the code isn't a great fit for 
contribution to community PostgreSQL though.  Like I said, this area is 
really messy and hard to get right.


Something more ambitious like the v$ stuff would also take care of what 
you're doing here; I'm not sure that what you've done helps built it 
though.  Please don't take that personally.  Part of one of my own 
instrumentation patches recently was rejected out of hand for the same 
reason, just not being general enough.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us

