Re: [HACKERS] Notes on implementing URI syntax for libpq

2011-12-09 Thread Alexander Shulgin

Excerpts from Daniel Farina's message of Mon Dec 05 11:56:19 +0200 2011:
> 
> I think the current direction is fine, although as Robert Haas has
> said, I am not really at all inclined to view JDBC compatibility as
> any kind of a plus.  JDBC URLs are weird, and do the drivers actually
> link libpq anyway?   That world is unto itself.

Daniel,

The JDBC driver is special in that it intentionally does not use libpq.  Given 
that every other binding (think Ruby, Python, Perl, Tcl, etc.) does use libpq, 
it makes perfect sense to me to make the syntax compatible with JDBC.

I see this as a two-fold effort: add URI syntax to libpq *and* improve JDBC's 
syntax to support the usual "user:pw@" notation.  This way, not only would the 
URI syntaxes of the above languages' bindings become compatible with one another 
(eventually, with the release of a new libpq and new driver versions), but they 
would also be interchangeable with JDBC's new syntax (also eventually).

> Are there any provisions for choosing X.509/cert authentication?  I
> imagine not, but out-of-band presentation of that information is the
> norm there, and I'm not sure if there is any room for improvement within
> reach.

Since the idea is to parse any supported URI query parameters, this is likely 
going to Just Work(tm) if you add proper "sslcert=&sslkey=" query parameters to 
the connection URI.
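To illustrate the point, here is a minimal sketch of generic query-parameter handling for a connection URI.  The exact URI scheme was still under design in this thread, and the host, database, and file names below are made up; the real libpq parser has its own rules.  The sketch just shows why `sslcert`/`sslkey` need no special-casing when every query parameter is passed through as a keyword:

```python
from urllib.parse import urlsplit, parse_qs

def parse_conn_uri(uri):
    """Split a postgresql:// URI into libpq-style keyword/value pairs.

    A sketch only, not libpq's actual parser; the URI components used in
    the example are hypothetical.
    """
    parts = urlsplit(uri)
    params = {
        "host": parts.hostname,
        "port": parts.port,
        "dbname": parts.path.lstrip("/"),
        "user": parts.username,
    }
    # Any query parameter is passed through as a keyword, so options such
    # as sslcert/sslkey "just work" without dedicated code.
    for key, values in parse_qs(parts.query).items():
        params[key] = values[0]
    return params

uri = "postgresql://alice@db.example.com:5432/appdb?sslcert=client.crt&sslkey=client.key"
print(parse_conn_uri(uri))
```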

> >> If we can decide on this, we should also put reasonable effort into making 
> >> JDBC support the same syntax.
> >
> > What would be our plan on this?  Since the syntax proposed here is strictly 
> > a superset of the existing JDBC syntax, I would think this qualifies as an 
> > improvement and it would be backwards compatible with any previous version 
> > of the JDBC connector.
> 
> I suppose that is nice, but is this designed, or coincidental?  Is
> there any fundamental reason why the JDBC driver will remain so
> similar to libpq in the future?  Will people realistically be able to
> use one URL across their Java and libpq projects in most situations,
> now and in the foreseeable future, including the keyword options?
> Because as soon as one encounters the need to maintain two URLs for
> any reason, the otherwise real convenience regresses into bloat.

See above.  The hope is that URIs will be compatible, except for the 
driver-specific extra query parameters which might not be recognized by the 
other party.

--
Regards,
Alex

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] review: CHECK FUNCTION statement

2011-12-09 Thread Albe Laurenz
Pavel Stehule wrote:
> updated version
>
> changes:
>
> * CHECK FUNCTION ALL; is enabled - in this case functions from
> pg_catalog schema are ignored
>
> I looked at the parser, and I didn't make other changes there - IN SCHEMA and
> FOR ROLE are used in more places there, so our usage will be consistent

> a small addition
> 
> * don't check SQL functions - they are checked well already
> * don't check functions from information_schema either

One hunk in the patch fails due to conflict with
commit d5f23af6bfbc454e86dd16e5c7a0bfc0cf6189d0
(Peter Eisentraut's const patch).

There are also compiler warnings about discarded const
qualifiers in backend/nodes/copyfuncs.c,
backend/nodes/equalfuncs.c and backend/parser/gram.y.

There is a bug when ALL IN SCHEMA or ALL IN LANGUAGE
is used:

test=> CHECK FUNCTION ALL IN LANGUAGE plpgsql;
ERROR:  language "language" does not exist
test=> CHECK FUNCTION ALL IN SCHEMA laurenz;
ERROR:  schema "schema" does not exist

Something gets mixed up here.
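The actual gram.y code isn't shown in this thread, but the symptom is the classic off-by-one in a grammar action: the keyword token ends up where the following identifier should be.  A toy Python reproduction of that class of bug (purely illustrative, not the real parser):

```python
def parse_check_function_all(tokens):
    """Toy parser for 'CHECK FUNCTION ALL IN LANGUAGE <name>' or
    'CHECK FUNCTION ALL IN SCHEMA <name>'.

    Illustrates the reported symptom, not the actual gram.y code: if the
    action picks the wrong token position, the keyword itself ("LANGUAGE"
    or "SCHEMA") lands where the name belongs, producing errors such as
    'language "language" does not exist'.
    """
    # tokens: e.g. ["CHECK", "FUNCTION", "ALL", "IN", "LANGUAGE", "plpgsql"]
    kind = tokens[4].lower()        # "language" or "schema"
    buggy_name = tokens[4].lower()  # off-by-one: reuses the keyword token
    fixed_name = tokens[5]          # the identifier that actually follows
    return kind, buggy_name, fixed_name

kind, buggy, fixed = parse_check_function_all(
    ["CHECK", "FUNCTION", "ALL", "IN", "LANGUAGE", "plpgsql"])
print(f'buggy: {kind} "{buggy}" does not exist; fixed: looks up "{fixed}"')
```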

I like the idea that CHECK FUNCTION ALL without additional
clauses works and ignores pg_catalog and information_schema!

I'm working on some documentation, but it won't be final before
the functionality is agreed upon.

Yours,
Laurenz Albe



Re: [HACKERS] WIP: Collecting statistics on CSV file data

2011-12-09 Thread Etsuro Fujita
Hi Hanada-san,

I updated the patch.  Please find the updated version attached.

Best regards,
Etsuro Fujita

> (2011/11/18 21:00), Shigeru Hanada wrote:
>> (2011/11/18 16:25), Etsuro Fujita wrote:
>>> Thank you for your testing.  I updated the patch according to your
>>> comments.  Attached is the updated version of the patch.
>>
>> I'd like to share the result of my review even though it's not fully
>> finished.  So far I have looked from the viewpoint of API design, code
>> formatting, and documentation.  I'll examine the effectiveness of the patch
>> and the details of the implementation next week, and hopefully try writing
>> an ANALYZE handler for pgsql_fdw :)
>>
>> The new patch has the correct format, and it could be applied cleanly to
>> HEAD of the master branch.  All regression tests, including those of contrib
>> modules, have passed.  It contains the code changes and regression tests
>> related to the issue, and they have enough comments.  IMO the documentation
>> in this patch is not enough to show FDW authors how to write an analyze
>> handler, but it can be enhanced afterward.  With this patch, an FDW author
>> can provide an optional ANALYZE handler which updates the statistics of
>> foreign tables.  The planner would then be able to generate better plans by
>> using those statistics.
>>
>>> Yes.  But in the updated version, I've refactored analyze.c a little bit
>>> to allow FDW authors to simply call do_analyze_rel().
>> 
>>> The updated version enables FDW authors to just write their own
>>> acquire_sample_rows().  On the other hand, by retaining to hook
>>> AnalyzeForeignTable routine at analyze_rel(), higher level than
>>> acquire_sample_rows() as before, it allows FDW authors to write
>>> AnalyzeForeignTable routine for foreign tables on a remote server to ask
>>> the server for its current stats instead, as pointed out earlier by Tom
>>> Lane.
>>
>> IIUC, this patch offers three options to FDWs: a) set
>> AnalyzeForeignTable to NULL to indicate lack of the capability, b) provide
>> an AnalyzeForeignTable which calls do_analyze_rel with a custom
>> sample_row_acquirer, and c) have the FDW create the statistics data from
>> scratch itself, doing something similar to do_analyze_rel with the given
>> arguments or copying statistics from a remote PostgreSQL server.
>>
>> ISTM that this design is well-balanced between simplicity and
>> flexibility.  Maybe these three options would suit web-based wrappers,
>> file-based or RDBMS wrappers, and pgsql_fdw respectively.  I think that
>> adding more details about FdwRoutine, such as the purpose of the new
>> callback function and how it differs from the required ones, would help
>> FDW authors, including me :)
>>
>> I have some random comments:
>>
>> - I think a separate typedef for sample_acquire_rows would make the code
>> more readable.  In addition, the parameters of the function should be
>> described explicitly.
>> - I couldn't see the reason why file_fdw sets the ctid of sample tuples,
>> though I guess it's for Vitter's random sampling algorithm.  If every
>> FDW must set a valid ctid on sample tuples, it should be mentioned in the
>> documentation of AnalyzeForeignTable.  Is this issue related to exporting
>> some functions from analyze.c?
>> - Why does file_fdw skip sample tuples which have a NULL value?  AFAIS the
>> original acquire_sample_rows doesn't do so.
>> - Some comment lines go past 80 columns.
>> - Some headers included in file_fdw.c seem unnecessary.
>>
>> Regards,
> 
> 

*** a/contrib/file_fdw/file_fdw.c
--- b/contrib/file_fdw/file_fdw.c
***
*** 20,25 
--- 20,26 
  #include "commands/copy.h"
  #include "commands/defrem.h"
  #include "commands/explain.h"
+ #include "commands/vacuum.h"
  #include "foreign/fdwapi.h"
  #include "foreign/foreign.h"
  #include "miscadmin.h"
***
*** 101,106  static void fileBeginForeignScan(ForeignScanState *node, int eflags);
--- 102,108 
  static TupleTableSlot *fileIterateForeignScan(ForeignScanState *node);
  static void fileReScanForeignScan(ForeignScanState *node);
  static void fileEndForeignScan(ForeignScanState *node);
+ static void fileAnalyzeForeignTable(Relation onerel, VacuumStmt *vacstmt, int elevel);
  
  /*
   * Helper functions
***
*** 112,118  static List *get_file_fdw_attribute_options(Oid relid);
  static void estimate_costs(PlannerInfo *root, RelOptInfo *baserel,
   const char *filename,
   Cost *startup_cost, Cost *total_cost);
! 
  
  /*
   * Foreign-data wrapper handler function: return a struct with pointers
--- 114,123 
  static void estimate_costs(PlannerInfo *root, RelOptInfo *baserel,
   const char *filename,
   Cost *startup_cost, Cost *total_cost);
! static int acquire_sample_rows(Relation onerel,
!  HeapTuple *rows, int targrows,
!  double *totalrows, double *totaldeadrows,
!  BlockNumber *totalpages, int elevel);
  
  /*
   * Foreign-data wrapper handler function: return a struct with pointers
**
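The review above mentions Vitter's random sampling algorithm, which is what acquire_sample_rows in analyze.c is built around.  As a hedged sketch (analyze.c actually uses Vitter's Algorithm Z, which skips ahead rather than testing every row; the simpler Algorithm R below maintains the same invariant):

```python
import random

def reservoir_sample(rows, targrows, seed=0):
    """Uniform sample of up to targrows items in one pass (Algorithm R).

    A sketch of the sampling invariant behind acquire_sample_rows, not the
    code itself: after processing n rows, each row is in the reservoir
    with probability targrows/n.
    """
    rng = random.Random(seed)
    reservoir = []
    for n, row in enumerate(rows, start=1):
        if n <= targrows:
            reservoir.append(row)      # fill the reservoir first
        else:
            # Replace a random slot with probability targrows/n.
            k = rng.randrange(n)
            if k < targrows:
                reservoir[k] = row
    return reservoir

sample = reservoir_sample(range(10000), 100)
print(len(sample))
```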

Re: [HACKERS] Bug in walsender when calling out to do_pg_stop_backup (and others?)

2011-12-09 Thread Heikki Linnakangas

On 03.12.2011 18:37, Heikki Linnakangas wrote:

> One small change I'd like to make is to treat the loss of connection
> more as a new "top-level" event, rather than as a new reason for query
> cancellation. A lost connection is more like receiving SIGTERM, really.
> Please take a look at the attached patch. I've been testing this by
> doing "SELECT pg_sleep(1), a from generate_series(1,1000) a;", and
> killing the connection with "killall -9 psql".


Ok, committed this.


> One remaining issue I have with this is the wording of the error
> message. The closest existing message we have is in old-mode COPY:
>
> ereport(FATAL,
> (errcode(ERRCODE_CONNECTION_FAILURE),
> errmsg("connection lost during COPY to stdout")));
>
> In the patch, I made the new message just "connection lost", but does
> anyone else have a better opinion on that? Perhaps it should be
> "connection lost while sending response to client". Or "connection lost
> during execution of statement", although that might not be accurate if
> we're not executing a statement at the moment but just sending a "sync"
> message or similar.


I made the error message "connection to client lost" in the end.

I still wonder if it would be safe to simply elog(FATAL) in 
internal_flush(), but didn't dare to do that without more thorough analysis.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] [v9.2] Fix Leaky View Problem

2011-12-09 Thread Robert Haas
On Thu, Dec 8, 2011 at 5:17 PM, Kohei KaiGai  wrote:
> My first impression reminds me of an idea that I proposed before, even
> though it got a negative response due to user-visible changes.
> It requires superuser privilege to create new operators, since we
> assume the superuser does not set up a harmful configuration.

I don't think that's acceptable from a usability point of view; this
feature is important, but not important enough to go start ripping out
other features that people are already using, like non-superuser
operators.  I'm also pretty skeptical that it would fix the problem,
because the superuser might fail to realize that creating an operator
was going to create this type of security exposure.  After all, you
and I also failed to realize that, so it's obviously a fairly subtle
problem.

I feel like there must be some logic in the planner somewhere that is
"looking through" the subquery RTE and figuring out that safe_foo.a is
really the same variable as foo.a, and which therefore feels entitled
to apply !!'s selectivity estimator to foo.a's statistics.  If that's
the case, it might be possible to handicap that logic so that when the
security_barrier flag is set, it doesn't do that, and instead treats
safe_foo.a as a black box.  That would, obviously, degrade the quality
of complex plans involving security views, but I think we should focus
on getting something that meets our security goals first and then try
to improve performance later.
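The "black box" idea can be sketched in a few lines.  This is only an illustration of the policy being proposed, not PostgreSQL's selectivity code; the `stats` histogram stand-in and the function name are made up, and 0.005 is the familiar default-equality-selectivity constant used as a fallback:

```python
DEFAULT_EQ_SEL = 0.005  # fallback estimate when no stats are usable

def eq_selectivity(stats, value, security_barrier):
    """Sketch of the proposal above: when the subquery RTE is marked as a
    security barrier, refuse to look through it to the base column's
    statistics and fall back to a default estimate.

    'stats' is a hypothetical {value: fraction} mapping, not the
    pg_statistic format.
    """
    if security_barrier:
        return DEFAULT_EQ_SEL  # black box: don't peek at foo.a's stats
    return stats.get(value, DEFAULT_EQ_SEL)

stats = {"common": 0.4, "rare": 0.001}
print(eq_selectivity(stats, "common", security_barrier=False))  # uses stats
print(eq_selectivity(stats, "common", security_barrier=True))   # default
```

The cost, as noted above, is degraded plan quality for queries over security views; the benefit is that a leaky selectivity estimator never sees the protected column's statistics.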

(For example, I am fairly certain that only a superuser can install a
new selectivity estimator; so perhaps we could allow selectivity
estimators to be signaled with the information that a security view
interposes or not, and then they can make an estimator-specific
decision on how to punt; but on the other hand that might be a stupid
idea; so for step #1 let's just figure out how to batten down the
hatches.)

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Notes on implementing URI syntax for libpq

2011-12-09 Thread Robert Haas
On Fri, Dec 9, 2011 at 6:03 AM, Alexander Shulgin  
wrote:
> See above.  The hope is that URIs will be compatible, except for the
> driver-specific extra query parameters which might not be recognized by the
> other party.

Yeah.  I am not that concerned with being stupidity-compatible with
anyone else ... but neither am I inclined to go out of our way to be
incompatible.  It seems like the design on the table might allow us to
get the best of both worlds.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] review: CHECK FUNCTION statement

2011-12-09 Thread Albe Laurenz
Pavel Stehule wrote:
> there is fixed version

Here is my attempt at a doc patch.

Could you add it to your patch so that all is in a single patch?

Yours,
Laurenz Albe


check_function_docs.patch
Description: check_function_docs.patch



Re: [HACKERS] WIP: SP-GiST, Space-Partitioned GiST

2011-12-09 Thread Tom Lane
Teodor Sigaev  writes:
> [ spgist patch ]

I've been working through this patch and fixing assorted things.
There are a couple of issues that require some discussion:

1. It took me awhile to realize it, but there are actually three
different datatypes that can be stored in an SPGist index: the prefix
value of an inner tuple, the key value for each node (downlink) in an
inner tuple, and the data value of a leaf tuple.  One reason this was
not obvious is that only two of these types can be configured by the
opclass config function; the leaf tuple datatype is hard-wired to be
the same as the indexed column's type.  Why is that?  It seems to me
to be both confusing and restrictive.  For instance, if you'd designed
the suffix tree opclass just a little differently, it would be wanting
to store "char" not text in leaf nodes.  Shouldn't we change this to
allow the leaf data type to be specified explicitly?
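The three-datatypes point can be made concrete with a small stand-in for the opclass config.  The field names below are illustrative, not the actual spgist config struct, and the explicit leaf type is the *proposed* change, not existing behavior:

```python
from dataclasses import dataclass

@dataclass
class SpGistConfig:
    """Sketch of the opclass configuration discussed above: three
    distinct datatypes can live in an SP-GiST index."""
    prefix_type: str  # prefix value stored in an inner tuple
    label_type: str   # key on each node (downlink) of an inner tuple
    leaf_type: str    # datum in a leaf tuple; currently hard-wired to
                      # the indexed column's type

# A suffix-tree opclass over text might naturally want single characters
# in the leaves, which the hard-wired rule currently forbids:
suffix_tree = SpGistConfig(prefix_type="text", label_type="char",
                           leaf_type="char")
print(suffix_tree)
```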

2. The code is a bit self-contradictory about whether the index is
complete or not.  The top-level ambuild and aminsert code rejects
NULLs (but there seems to be some provision for storing nulls in
leaf tuples --- I'm confused what that's used for, but apparently
not for actual data nulls).  If this is a designed, permanent limitation
then SPGist indexes can never be expected to represent every heap tuple,
which means there is no value in full-index scans.  However there is
code in spgWalk() to support a full-index scan.  It's dead code, because
pg_am.amoptionalkey is not set for spgist and so the planner will never
create a qual-free index scan plan.  Perhaps we should rip out the
full-index-scan code?  Or do you have plans to support indexing nulls
later?

regards, tom lane



[HACKERS] static or dynamic libpgport

2011-12-09 Thread Andrew Dunstan
Recently I attempted to build an external package (pg_bulkload) against 
the latest Fedora packages. Unfortunately this fails, as pgxs adds 
"-lpgport" to any link line for an executable, and the corresponding 
libpgport.a isn't there. And in fact, pg_bulkload does use some of the 
functionality there (e.g. pg_strncasecmp), so just stripping "-lpgport" 
out doesn't work either.


This happened because the Fedora packaging guidelines are strongly against 
shipping static libraries, and so all the 
PostgreSQL static libraries are excluded from the distribution (and I 
believe there are similar restrictions for RHEL). Of these libraries, I 
believe the only one that is *only* built as a static library is libpgport.


Is there any good reason why we shouldn't build and install a dynamic 
libpgport.so?


(Of course, you could say "use the community RPMs", but that would be a 
bit of a cop-out. Some organizations have a perfectly reasonable policy 
of requiring the use of vendor packages wherever possible, since vendors are 
naturally only going to support packages they provide. So either we 
should be arguing to the Fedora/Red Hat people that they should ship the 
static library, or we should be providing them with a dynamic one, ISTM.)


cheers

andrew



Re: [HACKERS] static or dynamic libpgport

2011-12-09 Thread Peter Geoghegan
On 9 December 2011 16:13, Andrew Dunstan  wrote:
> Is there any good reason why we shouldn't build and install a dynamic
> libpgport.so?

+1 in favour of building and installing a dynamic libpgport.so. I
generally agree with your analysis.

I've seen this issue crop up a good few times now. I'm a Fedora user
myself, but about two years ago I got into a "he said, she said"
situation with an openSUSE package maintainer over this, when I had to
build Slony on that platform. I'm a bit hazy on the details now, but
IIRC he thought that it wasn't necessary to ship libpgport.a in
particular (though I don't think they have a beef with static
libraries generally) - maybe they took a cue from Red Hat there?

-- 
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services



Re: [HACKERS] review: CHECK FUNCTION statement

2011-12-09 Thread Pavel Stehule
Hello

2011/12/9 Albe Laurenz :
> Pavel Stehule wrote:
>> there is fixed version
>
> Here is my attempt at a doc patch.
>
> Could you add it to your patch so that all is in a single patch?
>

there is merged patch

Thank you

Regards

Pavel

> Yours,
> Laurenz Albe


check_function-2011-12-09-3.diff.gz
Description: GNU Zip compressed data



Re: [HACKERS] static or dynamic libpgport

2011-12-09 Thread Steve Singer

On 11-12-09 11:13 AM, Andrew Dunstan wrote:
> Recently I attempted to build an external package (pg_bulkload)
> against the latest Fedora packages. Unfortunately this fails, as pgxs
> adds "-lpgport" to any link line for an executable, and the
> corresponding libpgport.a isn't there. And in fact, pg_bulkload does
> use some of the functionality there (e.g. pg_strncasecmp), so just
> stripping "-lpgport" out doesn't work either.
>
> This happened because the Fedora packaging guidelines are strongly
> against shipping static libraries, and so all the PostgreSQL static
> libraries are excluded from the distribution (and I believe there are
> similar restrictions for RHEL). Of these libraries, I believe the only
> one that is *only* built as a static library is libpgport.
>
> Is there any good reason why we shouldn't build and install a dynamic
> libpgport.so?


+1

We've struggled with Slony and pgport because so many users have had 
problems with pgport not being included in some distributions.  It has 
some useful functions; I think recent versions of Slony use it on win32 
but not elsewhere.  We have had at least one patch floating around 
that conditionally includes certain small behaviours in Slony 
depending on whether pgport is available, based on a configure check.


What package would a shared libpgport be installed with?  Slony 
requires a server + headers to build, but slon and slonik only have a 
runtime dependency on libpq.  (I don't know if anyone installs slon/slonik 
on a machine without a PostgreSQL server, but you could.)


Steve




cheers

andrew






Re: [HACKERS] static or dynamic libpgport

2011-12-09 Thread Andrew Dunstan



On 12/09/2011 01:01 PM, Steve Singer wrote:

> On 11-12-09 11:13 AM, Andrew Dunstan wrote:
>> Recently I attempted to build an external package (pg_bulkload)
>> against the latest Fedora packages. Unfortunately this fails, as pgxs
>> adds "-lpgport" to any link line for an executable, and the
>> corresponding libpgport.a isn't there. And in fact, pg_bulkload does
>> use some of the functionality there (e.g. pg_strncasecmp), so just
>> stripping "-lpgport" out doesn't work either.
>>
>> This happened because the Fedora packaging guidelines are strongly
>> against shipping static libraries, and so all the PostgreSQL static
>> libraries are excluded from the distribution (and I believe there are
>> similar restrictions for RHEL). Of these libraries, I believe the only
>> one that is *only* built as a static library is libpgport.
>>
>> Is there any good reason why we shouldn't build and install a dynamic
>> libpgport.so?
>
> +1
>
> We've struggled with Slony and pgport because so many users have had
> problems with pgport not being included in some distributions.  It has
> some useful functions; I think recent versions of Slony use it on
> win32 but not elsewhere.  We have had at least one patch floating
> around that conditionally includes certain small behaviours in Slony
> depending on whether pgport is available, based on a configure check.
>
> What package would a shared libpgport be installed with?  Slony
> requires a server + headers to build, but slon and slonik only have a
> runtime dependency on libpq.  (I don't know if anyone installs
> slon/slonik on a machine without a PostgreSQL server, but you could.)





In the Fedora world, a static lib would go in postgresql-devel, but a 
dynamic lib would go in postgresql-libs, which is also where libpq is 
shipped.


cheers

andrew



Re: [HACKERS] psql line number reporting from stdin

2011-12-09 Thread Peter Eisentraut
On lör, 2011-11-26 at 22:36 +0200, Peter Eisentraut wrote:
> There is a long-standing oddity in psql that running
> 
> psql -f foo.sql
> 
> returns error messages with file name and line number, like
> 
> psql:foo.sql:1: ERROR:  syntax error at or near "foo"
> 
> but running
> 
> psql < foo.sql
> 
> does not.  I suggest we change the latter to print
> 
> psql:<stdin>:1: ERROR:  syntax error at or near "foo"

The problem is, this breaks the regression tests, because first the
actual output changes, and second the line numbers get included, which
will create a mess every time you edit a test.  Not sure whether we can
work around that.  Ideas?





Re: [HACKERS] Core Extensions relocation

2011-12-09 Thread Josh Berkus
All,

This is currently awaiting a check by gsmith that the 7 named extensions
do not add any new dependencies.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



Re: [HACKERS] psql line number reporting from stdin

2011-12-09 Thread Tom Lane
Peter Eisentraut  writes:
> The problem is, this breaks the regression tests, because first the
> actual output changes, and second the line numbers get included, which
> will create a mess every time you edit a test.  Not sure whether we can
> work around that.  Ideas?

Ugh, that's pretty nearly a deal-breaker.  Would it be sane to have a
command line switch the regression test driver could specify to prevent
inclusion of this info?
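A complementary approach on the test-driver side would be to normalize the output before diffing, rather than (or in addition to) a switch suppressing the info.  A sketch, assuming the `psql:<stdin>:N:` prefix format proposed in this thread (not a shipped behavior):

```python
import re

# Collapse the proposed "psql:<stdin>:N:" prefix back to "psql:" so that
# expected-output files stay stable as tests are edited and line numbers
# shift.  Purely illustrative of the idea, not pg_regress code.
LINE_PREFIX = re.compile(r"^psql:<stdin>:\d+:", flags=re.MULTILINE)

def normalize(output):
    return LINE_PREFIX.sub("psql:", output)

raw = 'psql:<stdin>:1: ERROR:  syntax error at or near "foo"\n'
print(normalize(raw))
```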

regards, tom lane



Re: [HACKERS] [PATCH] pg_test_fsync: Delete temporary file when aborted by a signal

2011-12-09 Thread Robert Haas
On Wed, Dec 7, 2011 at 1:40 PM, Marti Raudsepp  wrote:
> Hi list,
>
> I found a 'pg_test_fsync.out' file in my $PGDATA, which was probably
> left around because I aborted pg_test_fsync with ^C back when setting
> up the server.
>
> Here's a patch to delete that file via a signal handler for
> SIGINT/SIGTERM/SIGHUP.
>
> Not tested on Windows, but should work according to MSDN documentation.

I committed this, with a bit of additional paranoia.  Let's see what
the buildfarm thinks.
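For readers following along, the cleanup pattern the patch adds (as described in the thread; the C version installs SIGINT/SIGTERM/SIGHUP handlers) looks roughly like this Python sketch.  The file name and location here are illustrative, not pg_test_fsync's actual behavior of writing into the current directory:

```python
import os
import signal
import sys
import tempfile

# Hypothetical scratch file, standing in for pg_test_fsync.out.
TEST_FILE = os.path.join(tempfile.gettempdir(), "pg_test_fsync.out")

def cleanup_and_exit(signum, frame):
    """On an interrupt, remove the scratch file before exiting, so an
    aborted run doesn't leave debris behind."""
    try:
        os.unlink(TEST_FILE)
    except FileNotFoundError:
        pass  # nothing to clean up
    sys.exit(1)

# Register the handler for the same signals the patch covers (SIGHUP
# does not exist on Windows, where the C code uses other means).
for sig in (signal.SIGINT, signal.SIGTERM, signal.SIGHUP):
    signal.signal(sig, cleanup_and_exit)
```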

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] [PATCH] PostgreSQL fails to build with 32bit MinGW-w64

2011-12-09 Thread Robert Haas
On Thu, Dec 8, 2011 at 12:46 PM, Andrew Dunstan  wrote:
> This is apparently an optimization bug in the compiler. If I turn
> optimization off (CFLAGS=-O0) it goes away. Ick.
>
> So at the moment I'm a bit blocked. I can't really file a bug because the
> compiler can't currently be used to build postgres, I don't have time to
> construct a self-contained test case, and I don't want to commit changes to
> enable the compiler until the issue is solved.

If we're talking about adding support for a previously-unsupported
configuration, it seems to me that it would be fine to commit a patch
that made everything work, but for the compiler bug.  We could refrain
from stating that we officially support that configuration until the
compiler bug is fixed, or even document the existence of the bug.  We
can't be responsible for other people's broken code, but I don't
necessarily see that as a reason not to commit a prerequisite patch.
Otherwise, as you say, there's a chicken-and-egg problem, and who does
that help?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] [PATCH] PostgreSQL fails to build with 32bit MinGW-w64

2011-12-09 Thread Andrew Dunstan



On 12/09/2011 03:11 PM, Robert Haas wrote:

> On Thu, Dec 8, 2011 at 12:46 PM, Andrew Dunstan  wrote:
>> This is apparently an optimization bug in the compiler. If I turn
>> optimization off (CFLAGS=-O0) it goes away. Ick.
>>
>> So at the moment I'm a bit blocked. I can't really file a bug because the
>> compiler can't currently be used to build postgres, I don't have time to
>> construct a self-contained test case, and I don't want to commit changes to
>> enable the compiler until the issue is solved.
>
> If we're talking about adding support for a previously-unsupported
> configuration, it seems to me that it would be fine to commit a patch
> that made everything work, but for the compiler bug.  We could refrain
> from stating that we officially support that configuration until the
> compiler bug is fixed, or even document the existence of the bug.  We
> can't be responsible for other people's broken code, but I don't
> necessarily see that as a reason not to commit a prerequisite patch.
> Otherwise, as you say, there's a chicken-and-egg problem, and who does
> that help?




Yeah, fair enough. I'll work on that.

cheers

andrew




Re: [HACKERS] RangeVarGetRelid()

2011-12-09 Thread Robert Haas
On Wed, Dec 7, 2011 at 8:42 AM, Noah Misch  wrote:
> It narrows the window for race conditions of that genesis, but isn't doing so
> an anti-feature?  Even if not, doing that _only_ in RemoveRelations() is odd.

I dunno.  I was just reluctant to change things without a clear reason
for doing so, and "it's not what we do in other places" didn't seem
like enough.  Really, I'd like to *always* do
AcceptInvalidationMessages() before a name lookup, but I'm not
convinced it's cheap enough for that.
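The race being discussed comes from the gap between the name lookup and the lock acquisition: locking an OID pins the object, not the name, so a concurrent rename can remap the name while we wait.  A Python stand-in for the lookup-then-lock retry loop (the catalog is a plain dict here; RangeVarGetRelid's C logic is considerably more involved):

```python
def range_var_get_relid(catalog, name, lock):
    """Sketch of a lookup/lock/recheck loop for concurrent-DDL safety.

    After acquiring the lock we repeat the lookup; if the name no longer
    maps to the OID we locked, someone changed things while we waited,
    so we start over (a real implementation would also release the now-
    useless lock before retrying).
    """
    while True:
        oid = catalog.get(name)       # name -> OID lookup, no lock yet
        if oid is None:
            raise LookupError(f'relation "{name}" does not exist')
        lock(oid)                     # may block behind concurrent DDL
        if catalog.get(name) == oid:  # does the name still mean this?
            return oid                # yes: name and lock agree
        # no: retry from the top with the current catalog contents

# Simulate a rename landing during the lock wait, exactly once:
catalog = {"foo": 1001}
def lock(oid):
    if catalog.get("foo") == 1001:
        catalog["foo"] = 2002  # concurrent ALTER ... RENAME
print(range_var_get_relid(catalog, "foo", lock))
```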

>> Attached please find a patch with some more fixes on this same general
>> theme.  This one tackles renaming of relations, columns, and triggers;
>> and changing the schema of relations.  In these cases, the current
>> code does a permissions check before locking the table (which is good)
>> and uses RangeVarGetRelid() to guard against "cache lookup failure"
>> errors caused by concurrent DDL (also good).  However, if the referent
>> of the name changes during the lock wait, we don't recheck
>> permissions; we just allow the rename or schema change on the basis
>> that the user had permission to do it to the relation that formerly
>> had that name.  While this is pretty minor as security concerns go, it
>> seems best to clean it up, so this patch does that.
>
> I'd suggest limiting callback functions to checks that benefit greatly from
> preceding the lock acquisition.  In these cases, that's only the target object
> ownership check; all other checks can wait for the lock.  Writing correct
> callback code is relatively tricky; we have no lock, so the available catalog
> entries can change arbitrarily often as the function operates.  It's worth the
> trouble of navigating that hazard to keep the mob from freely locking all
> objects.  However, if the owner of "some_table" writes "ALTER VIEW some_table
> RENAME TO foo", I see no commensurate benefit from reporting the relkind
> mismatch before locking the relation.  This would also permit sharing a
> callback among almost all the call sites you've improved so far.  Offhand, I
> think only ReindexIndex() would retain a bespoke callback.

No, I don't think so.  REINDEX INDEX needs special heap locking and
does not reject operations on system tables, REINDEX TABLE does not
reject operations on system tables, LOCK TABLE has special permissions
requirements and does not reject operations on system tables, DROP
TABLE also has special permissions requirements, RENAME ATTRIBUTE is
"normal" (i.e. must own relation, must not apply to system tables),
and RENAME RELATION has an extra permission check.  It might not be
worth having separate callbacks *just* to check the relkind, but I
don't think we have any instances of that in what's already committed
or in this patch.  It's an issue that is on my mind, though; and
perhaps as I keep cranking through the call sites some things will
turn up that can be merged.  It's amazing how many subtly different
permissions requirements there are here, but none of them seem
terribly ill-founded, and, in any case, changing them is a job for
another day if it's to be done at all.

> This patch removes two of the three callers of CheckRelationOwnership(),
> copying some of its logic to two other sites.  I'd suggest fixing the third
> caller (standard_ProcessUtility() for CREATE INDEX) in this same patch.  That
> way, we can either delete the function now or adjust it to permit continued
> sharing of that code under the new regime.

I had the same thought, but wasn't quite sure how to do it.  That code
seems to be making the rather optimistic assumption that you can look
up the same name as many times as you want and get the same answer.
CheckRelationOwnership() looks it up, and then transformIndexStmt()
looks it up again, and then DefineIndex() looks it up again ... twice,
in the case of a concurrent build.  If someone renames a relation with
the same name into a namespace earlier in our search path during all
this, the chances of totally nutty behavior seem pretty good; what
happens if we do phase 2 of the concurrent index build on a different
relation than phase 1?  So I didn't attempt to tackle the problem just
because it looked to me like the code needed a little more TLC than I
had time to give it, but maybe it deserves a second look.

> I like how this patch reduces the arbitrary division of authorization checks
> between alter.c and tablecmds.c.

Thanks.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Review of VS 2010 support patches

2011-12-09 Thread Andrew Dunstan



On 12/04/2011 12:44 PM, Andrew Dunstan wrote:



On 11/29/2011 04:32 PM, Brar Piening wrote:

Andrew Dunstan wrote:


Some minor nitpicks:

Do we really need to create all those VSProject.pm and 
VSSolution.pm files? They are all always included anyway. Why 
not just stash all the packages in Solution.pm and Project.pm? 

We certainly don't *need* them.
Having different files separates the tasks of generating different 
target file formats into different source files. In my opinion this 
makes it easier to find the code that is actually generating the 
files that get used in a specific build environment.
While the VSSolution.pm and VC200nProject.pm files are indeed not 
much more than stubs that could eventually be extended in future (and 
probably never will) VC2010Project.pm contains the whole code for 
generating the new file format which would significantly bloat up the 
code in Project.pm that currently contains the common code for 
generating the old file formats.


Anyhow - this is just my opinion and my intention is to help 
improving the Windows build process and not forcing my design into 
the project.




Well, I do also dislike the asymmetry of it. Here's what I suggest: 
for the Solution files, we'll just put the object packages in 
Solution.pm. There really doesn't seem like any need for those to have 
tiny files on their own. For the Project files, factor out the 
2005/2008 specific parts from Project.pm into a new file, and have a 
new file for the equivalent parts of your new VC2010Project.pm. Then 
we'll add packages to Project.pm to create objects just like I'm 
suggesting above for Solution.pm. The result is then more symmetrical 
and we'll have three new files instead of seven (counting 
VSObjectFactory.pm).


Perhaps, too, this has all got sufficiently complicated that adding 
some description of what's going on here to README would be in order. 
I suspect some of my fellow committers tend to look at the whole thing 
and scratch their heads a bit, and that means expecting other people 
to make sense of it is probably a bit much ;-)





In the absence of reaction to this I've marked the patch as "waiting on 
author", but if/when I have time I'll work on rearranging things as above.


cheers

andrew



Re: [HACKERS] Notes on implementing URI syntax for libpq

2011-12-09 Thread Daniel Farina
On Fri, Dec 9, 2011 at 3:03 AM, Alexander Shulgin  
wrote:
> The JDBC driver is special in that it intentionally does not use libpq.  
> Given every other binding (think Ruby, Python, Perl, Tcl, etc.) does use 
> libpq, it makes perfect sense to me to make the syntax compatible with JDBC.

I am with you until the reasoning of the last part, which can be
(as I understand it) rephrased as "every other major language
ecosystem uses libpq, so therefore it makes perfect sense to have the
syntax compatible with JDBC."  To which I say, "what?"

I guess if I move the parenthetical grouping of logic around, what you
are probably intending to say is "everyone except this one ecosystem
does the normal thing, so we have an opportunity to Unite The Clans,
by absorbing a unique aspect of one of them"

> I see this as a two-fold effort: add URI syntax to libpq *and* improve JDBC's 
> syntax to support the usual "user:pw@" notation.  This way, not only the 
> above language's bindings URI syntaxes would become compatible with each 
> other (eventually, with release of new libpq and new drivers' versions,) but 
> they would also be interchangeable with JDBC's new syntax (also, eventually.)

Okay. This is how I teased out my interpretation above.

> Since the idea is to parse any supported URI query parameters, this is likely 
> going to Just Work(tm) if you add proper "sslcert=&sslkey=" query parameters 
> to the connection URI.

Hmm. I guess the only downside is one has to have files materialized
to make that work, and one cannot really embed them in the URL (for
files that long, it is madness anyway).  But I do appreciate the exact
symmetry with the libpq options.

> The hope is that URIs will be compatible sans the driver-specific extra query 
> parameters which might be not recognized by either party.

How about a more general quirk of the hypothetical URL parsing code:
because JDBC's URLs are not URLs (schemes cannot contain colons, per
RFC 1738, section 2.1), treat the JDBC prefix specially and ignore it,
and parse the rest as a legitimate URL.  One could be even more
permissive and state that all tokens before the last colon-delimited
part of the scheme are ignored:

i:am:feeling:like:postgresql://(etc)
jdbc:postgresql://(etc)
psycopg2:postgresql://(etc)

Which would reduce to the same thing as:

postgresql://(etc)

What I can't get excited about is:

postgresql:ssl://user:pw@host:port/dbname?sslmode=...

Since this is not actually a URL, and the "scheme" using the above
rule would be "ssl".  If you really want to have SSL be part of the
scheme (given ssl=require exists, I'd prefer One Way that involves no
scheme alterations to denote the transport), then you can use an
RFC-compatible notation like "+":

postgresql+ssl://

For which the "scheme" would be "postgresql+ssl".  Again, I'm not
terribly excited about having a scheme that denotes the transport (in
spite of it being semi-commonly done as in svn+ssh), especially if
redundant with query string options.

-- 
fdr



Re: [HACKERS] Notes on implementing URI syntax for libpq

2011-12-09 Thread Alexander Shulgin

Excerpts from Daniel Farina's message of Fri Dec 09 23:04:26 +0200 2011:
> 
> I guess if I move the parenthetical grouping of logic around, what you
> are probably intending to say is "everyone except this one ecosystem
> does the normal thing, so we have an opportunity to Unite The Clans,
> by absorbing a unique aspect of one of them"

Yes, what I meant is something more or less like that.

> i:am:feeling:like:postgresql://(etc)
> jdbc:postgresql://(etc)
> psycopg2:postgresql://(etc)
> 
> Which would reduce to the same thing as:
> 
> postgresql://(etc)

Well, it wasn't suggested that JDBC people paste their URIs into psql while 
keeping the "jdbc:" prefix; that would be a really weird thing to do.

However, I have just noticed they *do* require that part themselves, like in:

  String url = "jdbc:postgresql://localhost/test";
  Connection conn = DriverManager.getConnection(url);

It is really weird, since as far as I can see from the docs, the "jdbc:" part 
is always discarded by the driver manager.  That must be some true Java way of 
doing things. :-p

> What I can't get excited about is:
> 
> postgresql:ssl://user:pw@host:port/dbname?sslmode=...
> 
> Since this is not actually a URL, and the "scheme" using the above
> rule would be "ssl".  If you really want to have SSL be part of the
> scheme (given ssl=require exists, I'd prefer One Way that involves no
> scheme alterations to denote the transport), then you can use an
> RFC-compatible notation like "+":
> 
> postgresql+ssl://
> 
> For which the "scheme" would be "postgresql+ssl".  Again, I'm not
> terribly excited about having a scheme that denotes the transport (in
> spite of it being semi-commonly done as in svn+ssh), especially if
> redundant with query string options.

Yeah, I was also considering "+ssl", but don't recall if I ever suggested that 
on the list.

My primary motivation behind making SSL stand out in the URI is that it "feels 
wrong" when that is pushed to the query parameters.  In a real-world URI that 
would be impossible, since it's the server which is supposed to parse the 
parameters, not the client, but that can only happen after the connection has 
been established.

However, since we're parsing all of the "query parameters" locally in the client, 
this becomes less of a problem, so I would agree that we don't need a special 
scheme for SSL connections.  Especially since the default SSL mode is "prefer", 
and to override that you still need to add an "sslmode=" query parameter.

--
Alex



Re: [HACKERS] RangeVarGetRelid()

2011-12-09 Thread Noah Misch
On Fri, Dec 09, 2011 at 03:43:19PM -0500, Robert Haas wrote:
> On Wed, Dec 7, 2011 at 8:42 AM, Noah Misch  wrote:
> > It narrows the window for race conditions of that genesis, but isn't doing so
> > an anti-feature?  Even if not, doing that _only_ in RemoveRelations() is odd.
> 
> I dunno.  I was just reluctant to change things without a clear reason
> for doing so, and "it's not what we do in other places" didn't seem
> like enough.  Really, I'd like to *always* do
> AcceptInvalidationMessages() before a name lookup, but I'm not
> convinced it's cheap enough for that.

Okay.

> > I'd suggest limiting callback functions to checks that benefit greatly from
> > preceding the lock acquisition.  In these cases, that's only the target object
> > ownership check; all other checks can wait for the lock.  Writing correct
> > callback code is relatively tricky; we have no lock, so the available catalog
> > entries can change arbitrarily often as the function operates.  It's worth the
> > trouble of navigating that hazard to keep the mob from freely locking all
> > objects.  However, if the owner of "some_table" writes "ALTER VIEW some_table
> > RENAME TO foo", I see no commensurate benefit from reporting the relkind
> > mismatch before locking the relation.  This would also permit sharing a
> > callback among almost all the call sites you've improved so far.  Offhand, I
> > think only ReindexIndex() would retain a bespoke callback.
> 
> No, I don't think so.  REINDEX INDEX needs special heap locking and
> does not reject operations on system tables, REINDEX TABLE does not
> reject operations on system tables, LOCK TABLE has special permissions
> requirements and does not reject operations on system tables, DROP
> TABLE also has special permissions requirements, RENAME ATTRIBUTE is
> "normal" (i.e. must own relation, must not apply to system tables),
> and RENAME RELATION has an extra permission check.  It might not be
> worth having separate callbacks *just* to check the relkind, but I
> don't think we have any instances of that in what's already committed
> or in this patch.  It's an issue that is on my mind, though; and
> perhaps as I keep cranking through the call sites some things will
> turn up that can be merged.

I forgot that you fixed RemoveRelations() and LockTable() in your last patch.
In addition to ReindexIndex(), which I mentioned, those two do need their own
callbacks in any case.

It also seems my last explanation didn't convey the point.  Yes, nearly every
command has a different set of permissions checks.  However, we don't benefit
equally from performing each of those checks before acquiring a lock.
Consider renameatt(), which checks three things: you must own the relation,
the relation must be of a supported relkind, and the relation must not be a
typed table.  To limit opportunities for denial of service, let's definitely
perform the ownership check before taking a lock.  The other two checks can
wait until we hold that lock.  The benefit of checking them early is to avoid
making a careless relation owner wait for a lock before discovering the
invalidity of his command.  That's nice as far as it goes, but let's not
proliferate callbacks for such a third-order benefit.

> > This patch removes two of the three callers of CheckRelationOwnership(),
> > copying some of its logic to two other sites. ?I'd suggest fixing the third
> > caller (standard_ProcessUtility() for CREATE INDEX) in this same patch. 
> > ?That
> > way, we can either delete the function now or adjust it to permit continued
> > sharing of that code under the new regime.
> 
> I had the same thought, but wasn't quite sure how to do it.  That code
> seems to be making the rather optimistic assumption that you can look
> up the same name as many times as you want and get the same answer.
> CheckRelationOwnership() looks it up, and then transformIndexStmt()
> looks it up again, and then DefineIndex() looks it up again ... twice,
> in the case of a concurrent build.  If someone renames a relation with
> the same name into a namespace earlier in our search path during all
> this, the chances of totally nutty behavior seem pretty good; what
> happens if we do phase 2 of the concurrent index build on a different
> relation than phase 1?  So I didn't attempt to tackle the problem just
> because it looked to me like the code needed a little more TLC than I
> had time to give it, but maybe it deserves a second look.

Ah, fair enough.



Re: [HACKERS] static or dynamic libpgport

2011-12-09 Thread Bruce Momjian
Andrew Dunstan wrote:
> >> Is there any good reason why we shouldn't build and install a dynamic 
> >> libpgport.so?
> >
> > +1
> >
> > We've struggled with slony and pgport because so many users have had 
> > problems with pgport not being included in some distributions.  It has 
> > some useful functions, I think recent versions of slony use it on 
> > win32 but don't elsewhere. Wee have had at least one patch floating 
> > around that makes conditionally includes  certain small behaviours in 
> > slony based on if pgport is available or not based on a configure check.
> >
> > What package would a shared static pgport be installed with? Slony 
> > requires a server + headers to build but slon and slonik only have a 
> > runtime dependency on libpq (I don't know if anyone installs 
> > slon/slonik on a machine without a postgresql server but you could)
> >
> >
> 
> In the Fedora world, a static lib would go in postgresql-devel, but a 
> dynamic lib would go in postgresql-libs, which is also where libpq is 
> shipped.

I am not against shipping a dynamic libpgport, but I will just point out
that this was never intended or anticipated.  Are there any symbols in
there that might conflict with other software?

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] Timing overhead and Linux clock sources

2011-12-09 Thread Ants Aasma
On Wed, Dec 7, 2011 at 9:40 AM, Greg Smith  wrote:
>  He estimated 22ns per gettimeofday on the system with fast timing
> calls--presumably using TSC, and possibly faster than I saw because his
> system had less cores than mine to worry about.  He got 990 ns on his slower
> system, and a worst case there of 3% overhead.

Robert's comment about sequential scan performing lots of reads in a tight loop
made me think of a worse worst case: a count(*) with wide rows and/or lots of
bloat. I created a test table with one tuple per page like this:
CREATE TABLE io_test WITH (fillfactor=10) AS
SELECT repeat('x', 1000) FROM generate_series(1,3);
I then timed SELECT COUNT(*) FROM io_test; with track_iotiming on and
off. Averages of 1000 executions, differences significant according to t-test:
 timer | iotiming=off | iotiming=on |   diff
  hpet |     86.10 ms |   147.80 ms | 71.67%
   tsc |     85.86 ms |    87.66 ms |  2.10%

The attached test program (test_gettimeofday_monotonic) shows that one
test loop iteration takes 34ns with tsc and 1270ns with hpet.

I also managed to run the test program on a couple of two-socket Solaris 10
machines. The one with a Xeon X5570 had an iteration time of 220ns and the Xeon
E5620 had 270ns iterations. I'm not sure yet whether this is due to Solaris
gettimeofday just being slower or some hardware issue.

I also tested a more reasonable case of count(*) on pgbench_accounts with
scale factor 50 (61 tuples per page). With tsc, timing was actually 1% faster,
but not statistically significant; with hpet the overhead was 5.6%.

Scaling the overhead for the Solaris machines, it seems that the worst case
for timing all buffer reads would be about 20% slower. Count(*) on medium-length
tuples for an out-of-shared-buffers, in-OS-cache workload would have overhead
between 1 and 2%.

>> One random thought: I wonder if there's a way for us to just time
>> every N'th event or something like that, to keep the overhead low.

This would work only for cases where there's a reasonably uniform distribution
of times or really long sampling periods, otherwise the variability will make
the result pretty much useless. For example in the I/O case, a pretty typical
load can have 1% of timings be 3 orders of magnitude longer than median.

--
Ants Aasma
#include <stdio.h>
#include <sys/time.h>

typedef long long int64;
typedef unsigned long long uint64;
typedef long int32;
typedef unsigned long uint32;

/* microsecond difference between two struct timevals */
#define TIMEDIFF(a, b) (((a).tv_sec - (b).tv_sec)*1000000 + ((a).tv_usec - (b).tv_usec))

int main(int argc, char **argv, char **arge) {
    struct timeval start, end, prev, cur;
    int64 i;
    int64 time_elapsed = 0;
    int32 duration;
    uint64 loop_count = 0;
    static int64 timings[32];

    if (argc != 2 || !sscanf(argv[1], "%ld", &duration) || duration < 0) {
        printf("Usage: test_gettimeofday_monotonic seconds\n");
        return 1;
    }
    uint64 end_time = 1000000ll*duration;

    gettimeofday(&start, 0);
    cur = start;

    while (time_elapsed < end_time) {
        prev = cur;
        gettimeofday(&cur, 0);

        long diff = TIMEDIFF(cur, prev);
        if (diff < 0) {
            printf("Time warp: %ld < 0 => %ld.%ld < %ld.%ld\n",
                   diff, cur.tv_sec, cur.tv_usec, prev.tv_sec, prev.tv_usec);
            return -1;
        }
        /* bucket by number of significant bits in the microsecond delta */
        int bits = 0;
        while (diff) {
            diff >>= 1;
            bits++;
        }
        timings[bits]++;
        loop_count++;
        time_elapsed = TIMEDIFF(cur, start);
    }
    gettimeofday(&end, 0);

    printf("Per loop: %0.2f ns\n",
           ((double) TIMEDIFF(end, start))*1000/loop_count);
    printf("%9s: %10s %9s\n", "usec", "count", "percent");

    int found = 0;
    for (i = 31; i >= 0; i--) {
        if (found || timings[i]) {
            found = 1;
            printf("%9ld: %10lld %8.5f%%\n", 1l << i >> 1, timings[i],
                   ((double)timings[i])*100/loop_count);
        }
    }
    return 0;
}



Re: [HACKERS] Timing overhead and Linux clock sources

2011-12-09 Thread Greg Smith

On 12/09/2011 06:48 PM, Ants Aasma wrote:

The attached test program (test_gettimeofday_monotonic) shows that one
test loop iteration takes 34ns with tsc and 1270ns with hpet.
   


This test program is great, I've wanted this exact sort of visibility 
into this problem for years.  I've toyed with writing something like 
this and then seeing what results it returns on all of the build farm 
animals.  For now I can just run it on all the hardware I have access 
to, which is not a small list.


I think we should bundle this up similarly to test_fsync, document some 
best practices on time sources, and then point the vague warning about 
EXPLAIN overhead toward that.  Then new sources of timing overhead can 
point there too.  Much like low-level fsync timing, there's nothing 
PostgreSQL can do about it; the best we can do is provide a program to 
help users classify a system as likely or unlikely to have 
high-overhead timing.  I can make the needed docs changes, this is 
resolving a long standing issue impacting code I wanted to add.


Rough guideline I'm seeing shape up is that <50ns is unlikely to cause 
clock timing to be a significant problem, >500ns certainly is, and 
values in the middle should concern people but not necessarily 
invalidate the timing data collected.


I just confirmed that switching the clock source by echoing a new value 
into /sys/devices/system/clocksource/clocksource0/current_clocksource 
works (on the 2.6.32 kernel at least).  What we want to see on a good 
server is a result that looks like this, from my Intel i7-860 system:


$ cat /sys/devices/system/clocksource/clocksource0/current_clocksource
tsc
$ ./test_gettimeofday_monotonic 5
Per loop: 39.30 ns
 usec:  count   percent
4:  6  0.0%
2:104  0.8%
1:4999760  3.92983%
0:  16109 96.07009%

Here's how badly that degrades if I use one of the alternate time sources:

# echo acpi_pm > 
/sys/devices/system/clocksource/clocksource0/current_clocksource

$ ./test_gettimeofday_monotonic 5
Per loop: 727.65 ns
 usec:  count   percent
   16:  1  0.1%
8:  0  0.0%
4:   1233  0.01794%
2:699  0.01017%
1:4992622 72.65764%
0:1876879 27.31423%

echo hpet > /sys/devices/system/clocksource/clocksource0/current_clocksource
$ ./test_gettimeofday_monotonic 5
Per loop: 576.96 ns
 usec:  count   percent
8:  2  0.2%
4:   1273  0.01469%
2:767  0.00885%
1:4993028 57.61598%
0:3670977 42.36046%

The Intel T7200 CPU in my T60 laptop only provides the poor-quality time 
sources, not TSC, and results show timing is really slow there:


$ cat /sys/devices/system/clocksource/clocksource0/current_clocksource
hpet
$ ./test_gettimeofday_monotonic 5
Per loop: 1019.60 ns
 usec:  count   percent
  256:  2  0.4%
  128:  3  0.6%
   64: 90  0.00184%
   32: 23  0.00047%
   16: 92  0.00188%
8:   1246  0.02541%
4: 34  0.00069%
2: 136154  2.77645%
1:4700772 95.85818%
0:  65466  1.33498%

# echo acpi_pm > 
/sys/devices/system/clocksource/clocksource0/current_clocksource

$ ./test_gettimeofday_monotonic 5
Per loop: 1864.66 ns
 usec:  count   percent
  256:  2  0.7%
  128:  0  0.0%
   64:  3  0.00011%
   32:  6  0.00022%
   16: 90  0.00336%
8:   1741  0.06493%
4:   2062  0.07690%
2:2260601 84.30489%
1: 416954 15.54952%
0:  0  0.0%

This seems to be measuring exactly the problem I only had a hand-wave 
"it's bad on this hardware" explanation of before.


--
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us




[HACKERS] Why do regression tests now leave "regress_test_role_super" around?

2011-12-09 Thread Tom Lane
As of commit fc6d1006bda783cc002c61a5f072905849dbde4b, the regression
tests leave an unused role sitting around, because that commit removed
DROP ROLE regress_test_role_super;
from foreign_data.sql.  Was that intentional?  If so, why?

regards, tom lane



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2011-12-09 Thread Greg Smith

On 12/07/2011 02:34 AM, Shigeru Hanada wrote:

I think that only the owner of foreign table can keep collation
consistent between foreign and local, like data type of column.  We need to
support per-column-collation on foreign tables too, or should deny pushing
down condition which is collation-sensitive...
   


I am not sure about what next step you were planning here.  Are you 
thinking to block this sort of push-down in an update to your feature 
patch, or does some more serious work on foreign table collation need to 
happen first instead?


It looks like there has been some good discussion of this feature here, 
but there is still some work needed before it will be ready to commit.  
Hanada-san, did you get the feedback you were looking for here yet, or 
are there things you still wanted to discuss?  It is not clear to me 
what happens next; I would appreciate your comment on that.


--
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us

