Re: [HACKERS] [COMMITTERS] pgsql: Add missing format attributes

2011-09-11 Thread Fujii Masao
On Sun, Sep 11, 2011 at 5:17 AM, Peter Eisentraut pete...@gmx.net wrote:
 Add missing format attributes

 Add __attribute__ decorations for printf format checking to the places that
 were missing them.  Fix the resulting warnings.  Add
 -Wmissing-format-attribute to the standard set of warnings for GCC, so these
 don't happen again.

 The warning fixes here are relatively harmless.  The one serious problem
 discovered by this was already committed earlier in
 cf15fb5cabfbc71e07be23cfbc813daee6c5014f.

This commit causes the following warning at compile time.

error.c: In function 'ecpg_raise_backend':
error.c:339: warning: field precision should have type 'int', but
argument 2 has type 'long unsigned int'
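
(For context: gcc requires the '*' field-precision argument of a printf-style
format to be an int, while values such as strlen() results are size_t.  A
minimal sketch of the pattern and the usual fix, with hypothetical names:)

    #include <stdio.h>
    #include <string.h>

    void report(const char *msg)
    {
        size_t  len = strlen(msg);      /* 'long unsigned int' on many platforms */

        /* printf("%.*s\n", len, msg);     would trigger the warning above */
        printf("%.*s\n", (int) len, msg);  /* casting the precision to int fixes it */
    }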

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



Re: [HACKERS] [REVIEW] prepare plans of embedded sql on function start

2011-09-11 Thread Pavel Stehule
        CHECK FUNCTION function_name(arglist);


 I proposed a stored procedure check_function(name, arglist), but
 CHECK FUNCTION is ok for me too. It is easy to implement. Maybe there is
 an issue - CHECK will be a keyword :(


CHECK is a reserved keyword now, so this is an issue.

sorry for noise

Pavel



Re: [HACKERS] [REVIEW] prepare plans of embedded sql on function start

2011-09-11 Thread Dimitri Fontaine
Tom Lane t...@sss.pgh.pa.us writes:
 I'm not that happy with overloading the ANALYZE keyword to mean this
 (especially not since there is already meaning attached to the syntax
 ANALYZE x(y)).  But we could certainly use some other name --- I'm
 inclined to suggest CHECK:

   CHECK FUNCTION function_name(arglist);

This looks as good as it gets, but as we proposed some new behaviors for
ANALYZE in the past, I thought I would bounce them here again for you to
decide about the overall picture.

The idea (which didn't get much traction at the time) was to support
ANALYZE on VIEWS so that we have statistics support for multiple columns
or any user-given join.  The very difficult part about that is being able
to match those stats against a user's SQL query.

But such matching has been talked about in other contexts, it seems to
me, so the day we have that capability we might want to add ANALYZE
support for VIEWS.  ANALYZE could then support tables, indexes, views and
functions, and maybe some more database objects in the future.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



Re: [HACKERS] Thinking about inventing MemoryContextSetParent

2011-09-11 Thread Martijn van Oosterhout
On Sat, Sep 10, 2011 at 06:03:23PM -0400, Tom Lane wrote:
 I'm considering inventing a new mcxt.c primitive,
 
 void MemoryContextSetParent(MemoryContext context, MemoryContext new_parent);
 
 which would have the effect of delinking context from its current
 parent context and attaching it as a child of the new specified parent.
 (Any child contexts that it has would naturally follow along.)
 Because of the way that mcxt.c handles parent/child links, there is no
 palloc required and so the operation cannot fail.

I like this idea. Currently the only way to control object lifetime is
at creation time. With this primitive you could atomically change the
lifetime of a whole collection of objects once it reaches a state you like.

It occurred to me this might be useful in other places where we copy
data into a longer-lived context after checking.  Maybe config file
reading or plan construction.  The only issue I can think of is if
people were allocating in the local context assuming it would be
cleaned up, and this data got kept as well.  So it's probably not
appropriate for things that happen really often.
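
For instance, a minimal sketch of the pattern I have in mind (assuming
Tom's proposed signature; AllocSetContextCreate and CacheMemoryContext are
existing APIs):

    /* Build everything in a scratch context first ... */
    MemoryContext tmp = AllocSetContextCreate(CurrentMemoryContext,
                                              "scratch",
                                              ALLOCSET_DEFAULT_MINSIZE,
                                              ALLOCSET_DEFAULT_INITSIZE,
                                              ALLOCSET_DEFAULT_MAXSIZE);
    /* ... palloc and fill the data structures inside tmp ... */

    if (everything_checks_out)
        /* ... then atomically give the whole collection cache lifetime */
        MemoryContextSetParent(tmp, CacheMemoryContext);
    else
        MemoryContextDelete(tmp);   /* or throw it all away in one step */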

Have a nice day,
-- 
Martijn van Oosterhout   klep...@svana.org   http://svana.org/kleptog/
 He who writes carelessly confesses thereby at the very outset that he does
 not attach much importance to his own thoughts.
   -- Arthur Schopenhauer




Re: [HACKERS] [COMMITTERS] pgsql: Add missing format attributes

2011-09-11 Thread Peter Eisentraut
On sön, 2011-09-11 at 16:11 +0900, Fujii Masao wrote:
 On Sun, Sep 11, 2011 at 5:17 AM, Peter Eisentraut pete...@gmx.net wrote:
  Add missing format attributes
 
  Add __attribute__ decorations for printf format checking to the places that
  were missing them.  Fix the resulting warnings.  Add
  -Wmissing-format-attribute to the standard set of warnings for GCC, so these
  don't happen again.
 
  The warning fixes here are relatively harmless.  The one serious problem
  discovered by this was already committed earlier in
  cf15fb5cabfbc71e07be23cfbc813daee6c5014f.
 
 This commit causes the following warning at compile time.
 
 error.c: In function 'ecpg_raise_backend':
 error.c:339: warning: field precision should have type 'int', but
 argument 2 has type 'long unsigned int'

Fixed, thanks.




Re: [HACKERS] pg_dump.c

2011-09-11 Thread David Fetter
On Thu, Sep 08, 2011 at 03:20:14PM -0400, Andrew Dunstan wrote:
 
 In the refactoring Large C files discussion one of the biggest
 files Bruce mentioned is pg_dump.c. There has been discussion in the
 past of turning lots of the knowledge currently embedded in this
 file into a library, which would make it available to other clients
 (e.g. psql). I'm not sure what a reasonable API for that would look
 like, though. Does anyone have any ideas?

Here's a sketch.

In essence, libpgdump should have the following areas of functionality:

- Discover the user-defined objects in the database.
- Tag each as pre-data, data, or post-data.
- Make available the dependency graph of the user-defined objects in the database.
- Enable the mechanical selection of subgraphs, which may or may not be connected.
- Discover parallelization capability, if available.
- Dump requested objects of an arbitrary subset of the database, optionally using such capability.

Then there are questions of scope, which I'm straddling the fence about.
Should there be separate libraries to transform and restore?

A thing I'd really like to have in libpgdump would be the RDBMS-specific
parts as loadable modules, but that, too, could be way out of scope for
this.
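
To make that a bit more concrete, here's a hedged sketch of what a first
cut at the header might look like (every name below is invented for
illustration):

    /* libpgdump.h -- hypothetical API sketch */
    typedef struct PGdumpObject PGdumpObject;   /* opaque database object */
    typedef struct PGdumpGraph  PGdumpGraph;    /* dependency graph over objects */

    typedef enum { PGDUMP_PRE_DATA, PGDUMP_DATA, PGDUMP_POST_DATA } PGdumpSection;

    PGdumpGraph  *pgdump_discover(PGconn *conn);              /* find user objects */
    PGdumpSection pgdump_object_section(const PGdumpObject *obj);
    PGdumpGraph  *pgdump_subgraph(PGdumpGraph *g, const char **names, int nnames);
    int           pgdump_max_workers(PGconn *conn);           /* parallel capability */
    int           pgdump_dump(PGdumpGraph *g, FILE *out, int nworkers);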

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate



Re: [HACKERS] pg_dump.c

2011-09-11 Thread Robert Haas
On Thu, Sep 8, 2011 at 3:20 PM, Andrew Dunstan and...@dunslane.net wrote:
 In the refactoring Large C files discussion one of the biggest files Bruce
 mentioned is pg_dump.c. There has been discussion in the past of turning
 lots of the knowledge currently embedded in this file into a library, which
 would make it available to other clients (e.g. psql). I'm not sure what a
 reasonable API for that would look like, though. Does anyone have any ideas?

A good start might be to merge together more of pg_dump and pg_dumpall
than is presently the case.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] pg_dump.c

2011-09-11 Thread Andrew Dunstan



On 09/11/2011 10:25 AM, David Fetter wrote:

On Thu, Sep 08, 2011 at 03:20:14PM -0400, Andrew Dunstan wrote:

In the refactoring Large C files discussion one of the biggest
files Bruce mentioned is pg_dump.c. There has been discussion in the
past of turning lots of the knowledge currently embedded in this
file into a library, which would make it available to other clients
(e.g. psql). I'm not sure what a reasonable API for that would look
like, though. Does anyone have any ideas?

Here's a sketch.

In essence, libpgdump should have the following areas of functionality:

- Discover the user-defined objects in the database.
- Tag each as pre-data, data, or post-data.
- Make available the dependency graph of the user-defined objects in the database.
- Enable the mechanical selection of subgraphs, which may or may not be connected.
- Discover parallelization capability, if available.
- Dump requested objects of an arbitrary subset of the database, optionally using such capability.

Then there are questions of scope, which I'm straddling the fence about.
Should there be separate libraries to transform and restore?

A thing I'd really like to have in libpgdump would be the RDBMS-specific
parts as loadable modules, but that, too, could be way out of scope for
this.




In the first place, this isn't an API, it's a description of 
functionality. A C library's API is expressed in its header files.


Also, I think you have seriously misunderstood the intended scope of the 
library. Dumping and restoring, parallelization, and so on are not in 
the scope I was thinking of. I think those are very properly the 
property of pg_dump.c and friends. The only part I was thinking of 
moving to a library was the discovery part, which is in fact a very 
large part of pg_dump.c.


One example of what I'd like to provide is something like this:

    char *pg_get_create_sql(PGconn *conn, Oid object, Oid catalog_class, bool pretty);

which would give you the SQL to create an object, optionally
pretty-printing it.


Another is:

    char *pg_get_select(PGconn *conn, Oid table_or_view, bool pretty, const char *alias);

which would generate a SELECT statement for all the fields in a given
table, with an optional alias prefix.
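
A hedged sketch of how a client might call these (the functions being, of
course, still hypothetical, as is the way the catalog class is passed):

    PGconn *conn = PQconnectdb("dbname=test");
    Oid     table_oid = ...;            /* looked up elsewhere */

    /* 1259 is pg_class's OID, i.e. the catalog the object lives in */
    char *ddl = pg_get_create_sql(conn, table_oid, 1259, true);
    char *sel = pg_get_select(conn, table_oid, true, "t");

    printf("%s\n%s\n", ddl, sel);       /* CREATE statement, then SELECT t.f1, ... */
    free(ddl);
    free(sel);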


For the purposes of pg_dump, perhaps we'd want to move all the getFoo() 
functions in pg_dump.c into the library, along with a couple of bits 
from common.c like getSchemaData().


(Kinda thinking out loud here.)

cheers

andrew






Re: [HACKERS] pg_dump.c

2011-09-11 Thread Tom Lane
Andrew Dunstan and...@dunslane.net writes:
 One example of what I'd like to provide is something this:

  char *pg_get_create_sql(PGconn *conn, Oid object, Oid catalog_class, bool pretty);

 which would give you the SQL to create an object, optionally
 pretty-printing it.

I think the major problem with creating a decent API here is that
the SQL to create an object is only a small part ... almost a trivial
part ... of what pg_dump needs to know about it.  It's also aware of
ownership, permissions, schema membership, dependencies, etc etc.
I'm not sure about a reasonable representation for all that.

In particular, I think that discovering a safe dump order for a selected
set of objects is a pretty key portion of pg_dump's functionality.
Do we really want to assume that that needn't be included in a
hypothetical library?

Other issues include:

* pg_dump's habit of assuming that the SQL is being generated to work
with a current server as target, even when dumping from a much older
server.  It's not clear to me that other clients for a library would
want that behavior ... but catering to multiple output versions would
kick the complexity up by an order of magnitude.

* a lot of other peculiar things that pg_dump does in the name of
backwards compatibility or robustness of the output script, which again
aren't necessarily useful for other purposes.  An example here is the
choice to treat tablespace of a table as a separate property that's
not specified in the base CREATE TABLE command, so that the script
doesn't fail completely if the target database hasn't got such a
tablespace.

* performance.  Getting the data retail per-object, as the above API
implies, would utterly suck.  You have to think a little more carefully
about the integration between the discovery phase and the output phase;
there has to be a good deal of it.

regards, tom lane



[HACKERS] psql additions

2011-09-11 Thread Andrew Dunstan


Here's a couple of ideas I had recently about making psql a bit more
user-friendly.


First, it would be useful to be able to set pager options and possibly 
other settings, so my suggestion is for a \setenv command that could be 
put in a .psqlrc file, something like:


   \setenv PAGER='less'
   \setenv LESS='-imFX4'


Probably other people can think of more uses for such a gadget.

Second, I'd like to be able to set a minimum number of lines below which 
the pager would not be used, something like:


   \pset pagerminlines 200


Thoughts?

cheers

andrew



Re: [HACKERS] pg_dump.c

2011-09-11 Thread Andrew Dunstan



On 09/11/2011 02:50 PM, Tom Lane wrote:

In particular, I think that discovering a safe dump order for a selected
set of objects is a pretty key portion of pg_dump's functionality.
Do we really want to assume that that needn't be included in a
hypothetical library?


Maybe. Who else would need it?



Other issues include:

* pg_dump's habit of assuming that the SQL is being generated to work
with a current server as target, even when dumping from a much older
server.  It's not clear to me that other clients for a library would
want that behavior ... but catering to multiple output versions would
kick the complexity up by an order of magnitude.


Good point. Maybe what we need to think about instead is adding some
backend functions to do the sort of things I want. That would avoid
version issues and have the advantage of being available to all
clients, as well as avoiding the possible performance issues you mention.




cheers

andrew



[HACKERS] Double sorting split patch

2011-09-11 Thread Alexander Korotkov
Hackers,

I've got my patch with the double sorting picksplit implementation for GiST
into more acceptable form. A bit of testing is below. Index creation time is
slightly higher, but search is much faster. The test datasets were the
following:
1) uniform dataset - 10M rows
2) geonames points - 7.6M rows
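
(The dataset generation isn't shown in this mail; for reference, a sketch
of how the uniform one might have been built:)

    create table uniform as
        select point(random(), random()) as point
        from generate_series(1, 10000000);   -- 10M uniformly distributed points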

test=# create index uniform_new_linear_idx on uniform using gist (point);
CREATE INDEX
Time: 397362,915 ms

test=# explain (analyze, buffers) select * from uniform where point <@ box(point(0.5,0.5),point(0.501,0.501));
                                                    QUERY PLAN
-------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on uniform  (cost=433.27..25873.19 rows=1 width=16) (actual time=1.407..1.448 rows=8 loops=1)
   Recheck Cond: (point <@ '(0.501,0.501),(0.5,0.5)'::box)
   Buffers: shared hit=39
   ->  Bitmap Index Scan on uniform_new_linear_idx  (cost=0.00..430.77 rows=1 width=0) (actual time=1.388..1.388 rows=8 loops=1)
         Index Cond: (point <@ '(0.501,0.501),(0.5,0.5)'::box)
         Buffers: shared hit=31
 Total runtime: 1.527 ms
(7 rows)

test=# explain (analyze, buffers) select * from uniform where point <@ box(point(0.3,0.4),point(0.301,0.401));
                                                    QUERY PLAN
-------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on uniform  (cost=433.27..25873.19 rows=1 width=16) (actual time=0.715..0.795 rows=15 loops=1)
   Recheck Cond: (point <@ '(0.301,0.401),(0.3,0.4)'::box)
   Buffers: shared hit=30
   ->  Bitmap Index Scan on uniform_new_linear_idx  (cost=0.00..430.77 rows=1 width=0) (actual time=0.695..0.695 rows=15 loops=1)
         Index Cond: (point <@ '(0.301,0.401),(0.3,0.4)'::box)
         Buffers: shared hit=15
 Total runtime: 0.892 ms
(7 rows)

test=# create index uniform_double_sorting_idx on uniform using gist (point);
CREATE INDEX
Time: 492796,671 ms

test=# explain (analyze, buffers) select * from uniform where point <@ box(point(0.5,0.5),point(0.501,0.501));
                                                    QUERY PLAN
-------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on uniform  (cost=445.39..25885.31 rows=1 width=16) (actual time=0.376..0.417 rows=8 loops=1)
   Recheck Cond: (point <@ '(0.501,0.501),(0.5,0.5)'::box)
   Buffers: shared hit=15
   ->  Bitmap Index Scan on uniform_double_sorting_idx  (cost=0.00..442.89 rows=1 width=0) (actual time=0.357..0.357 rows=8 loops=1)
         Index Cond: (point <@ '(0.501,0.501),(0.5,0.5)'::box)
         Buffers: shared hit=7
 Total runtime: 0.490 ms
(7 rows)

test=# explain (analyze, buffers) select * from uniform where point <@ box(point(0.3,0.4),point(0.301,0.401));
                                                    QUERY PLAN
-------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on uniform  (cost=445.39..25885.31 rows=1 width=16) (actual time=0.189..0.270 rows=15 loops=1)
   Recheck Cond: (point <@ '(0.301,0.401),(0.3,0.4)'::box)
   Buffers: shared hit=19
   ->  Bitmap Index Scan on uniform_double_sorting_idx  (cost=0.00..442.89 rows=1 width=0) (actual time=0.168..0.168 rows=15 loops=1)
         Index Cond: (point <@ '(0.301,0.401),(0.3,0.4)'::box)
         Buffers: shared hit=4
 Total runtime: 0.358 ms
(7 rows)

test=# create index geonames_new_linear_idx on geonames using gist (point);
CREATE INDEX
Time: 279922,518 ms

test=# explain (analyze, buffers) select * from geonames where point <@ box(point(34.4671,126.631),point(34.5023,126.667));
                                                    QUERY PLAN
-------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on geonames  (cost=341.98..19686.88 rows=7604 width=16) (actual time=0.905..0.948 rows=11 loops=1)
   Recheck Cond: (point <@ '(34.5023,126.667),(34.4671,126.631)'::box)
   Buffers: shared hit=25
   ->  Bitmap Index Scan on geonames_new_linear_idx  (cost=0.00..340.07 rows=7604 width=0) (actual time=0.889..0.889 rows=11 loops=1)
         Index Cond: (point <@ '(34.5023,126.667),(34.4671,126.631)'::box)
         Buffers: shared hit=20
 Total runtime: 1.029 ms
(7 rows)

test=# explain (analyze, buffers) select * from geonames where point <@ box(point(46.1384,-104.72), point(46.2088,-104.65));
                                                    QUERY PLAN
-------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on geonames  (cost=341.98..19686.88 rows=7604 

Re: [HACKERS] pg_dump.c

2011-09-11 Thread Rob Wultsch
On Sun, Sep 11, 2011 at 9:18 AM, Andrew Dunstan and...@dunslane.net wrote:


 On 09/11/2011 10:25 AM, David Fetter wrote:

 On Thu, Sep 08, 2011 at 03:20:14PM -0400, Andrew Dunstan wrote:

 In the refactoring Large C files discussion one of the biggest
 files Bruce mentioned is pg_dump.c. There has been discussion in the
 past of turning lots of the knowledge currently embedded in this
 file into a library, which would make it available to other clients
 (e.g. psql). I'm not sure what a reasonable API for that would look
 like, though. Does anyone have any ideas?

 Here's a sketch.

 In essence, libpgdump should have the following areas of functionality:

 - Discover the user-defined objects in the database.
 - Tag each as pre-data, data, or post-data.
 - Make available the dependency graph of the user-defined objects in the database.
 - Enable the mechanical selection of subgraphs, which may or may not be connected.
 - Discover parallelization capability, if available.
 - Dump requested objects of an arbitrary subset of the database, optionally using such capability.

 Then there are questions of scope, which I'm straddling the fence about.
 Should there be separate libraries to transform and restore?

 A thing I'd really like to have in libpgdump would be the RDBMS-specific
 parts as loadable modules, but that, too, could be way out of scope for
 this.



 In the first place, this isn't an API, it's a description of functionality.
 A C library's API is expressed in its header files.

 Also, I think you have seriously misunderstood the intended scope of the
 library. Dumping and restoring, parallelization, and so on are not in the
 scope I was thinking of. I think those are very properly the property of
 pg_dump.c and friends. The only part I was thinking of moving to a library
 was the discovery part, which is in fact a very large part of pg_dump.c.

 One example of what I'd like to provide is something like this:

     char *pg_get_create_sql(PGconn *conn, Oid object, Oid catalog_class, bool pretty);

  which would give you the SQL to create an object, optionally
 pretty-printing it.

 Another is:

     char *pg_get_select(PGconn *conn, Oid table_or_view, bool pretty, const char *alias);

 which would generate a SELECT statement for all the fields in a given
 table, with an optional alias prefix.

 For the purposes of pg_dump, perhaps we'd want to move all the getFoo()
 functions in pg_dump.c into the library, along with a couple of bits from
 common.c like getSchemaData().

 (Kinda thinking out loud here.)

 cheers

 andrew




For whatever it is worth, the SHOW CREATE TABLE command in MySQL is
well loved. Having the functionality to generate SQL in the server can
be very nice.



-- 
Rob Wultsch
wult...@gmail.com



Re: [HACKERS] Alpha 1 for 9.2

2011-09-11 Thread Devrim GÜNDÜZ
On Tue, 2011-09-06 at 16:49 +0300, Devrim GÜNDÜZ wrote:
 Is there a plan to wrap up 9.2 Alpha 1 before the next commitfest?

...

Ok, so if no one is willing to produce alphas (which is sad), we need to
change the text here:
http://www.postgresql.org/developer/alpha

-- 
Devrim GÜNDÜZ
Principal Systems Engineer @ EnterpriseDB: http://www.enterprisedb.com
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
Community: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org  Twitter: http://twitter.com/devrimgunduz




Re: [HACKERS] [WIP] Caching constant stable expressions per execution

2011-09-11 Thread Tom Lane
Marti Raudsepp ma...@juffo.org writes:
 On Sun, Sep 11, 2011 at 01:51, Tom Lane t...@sss.pgh.pa.us wrote:
 The patch as given has a bunch of implementation issues

 This is my first patch that touches the more complicated internals of
 Postgres. I'm sure I have a lot to learn. :)

Well, people seem to think that this is worth pursuing, so here's a
couple of thoughts about what needs to be done to get to something
committable.

First off, there is one way in which you are cheating that does have
real performance implications, so you ought to fix that before trusting
your performance results too much.  You're ensuring that the cached
datum lives long enough by doing this:

+   /* This cache has to persist for the whole query */
+   oldcontext = MemoryContextSwitchTo(econtext->ecxt_per_query_memory);
+
+   fcache->cachedResult = ExecMakeFunctionResult(fcache, econtext, isNull, isDone);
+   fcache->cachedIsNull = *isNull;
+
+   /* Set-returning functions can't be cached */
+   Assert(!isDone || *isDone == ExprSingleResult);
+
+   MemoryContextSwitchTo(oldcontext);

IMO this is no good because it means that every intermediate result
computed within the cacheable expression will be leaked into
per_query_memory.  Yeah, you're only doing it once, but once could still
be too much.  Consider for instance the case where the function
internally generates a lot of cruft over multiple operations, and it
thinks it's cleaning up by resetting ecxt_per_tuple_memory every so
often.  If CurrentMemoryContext isn't pointing to ecxt_per_tuple_memory,
this loses.  I think what you need to do is run the function in the
normal environment and then use datumCopy() to save the value into
per_query_memory.  The reason this is performance-relevant is that the
datum copy step represents real added cycles.  I think it probably
doesn't invalidate the idea, but it'd be good to fix it and recheck
your performance numbers before putting in more work.  Assuming
that passes ...
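
Concretely, something along these lines (a sketch only; it assumes
typByVal/typLen fields are available on the fcache, and null handling is
elided):

    /* Evaluate in the normal, per-tuple environment ... */
    result = ExecMakeFunctionResult(fcache, econtext, isNull, isDone);

    /* ... then copy just the result datum into query-lifespan memory;
     * datumCopy allocates in CurrentMemoryContext for by-ref types */
    oldcontext = MemoryContextSwitchTo(econtext->ecxt_per_query_memory);
    fcache->cachedResult = datumCopy(result, fcache->typByVal, fcache->typLen);
    fcache->cachedIsNull = *isNull;
    MemoryContextSwitchTo(oldcontext);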

The concept you're trying to encapsulate here is not really specific to
FuncExpr or OpExpr nodes.  Rather, the idea you want to implement is
"let's cache the result of any expression tree that contains no Vars,
internal Params, or volatile functions".  An example of this is that
the result of
CASE WHEN f1() > 42 THEN f2() ELSE NULL END
ought to be perfectly cacheable if f1 and f2 (and the > operator) are
stable or immutable.  Now it doesn't seem like a good plan to me to
plaster "stableconst" flags on every expression node type, nor to
introduce logic for handling that into everything in execQual.c.
So what I suggest is that you should invent a new expression node
type CacheExpr (that's just the first name that came to mind, maybe
somebody has a better idea) that takes an expression node as input
and caches the result value.  This makes things simple and clean in
the executor.  The planner would have to figure out where to inject
CacheExpr nodes into expression trees --- ideally only the minimum
number of nodes would be added.  I think you could persuade
eval_const_expressions to do that, but it would probably involve
bubbling additional information back up from each level of recursion.
I haven't thought through the details.
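
For concreteness, the new node might look roughly like this (a sketch; the
names are guesses, not a worked-out design):

    typedef struct CacheExpr
    {
        Expr    xpr;
        Expr   *arg;            /* subexpression to evaluate once and cache */
    } CacheExpr;

    typedef struct CacheExprState
    {
        ExprState   xprstate;
        ExprState  *arg;            /* executor state for the subexpression */
        bool        valueCached;    /* has arg been evaluated yet? */
        Datum       cachedValue;
        bool        cachedIsNull;
    } CacheExprState;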

The other thing that is going to be an issue is that I'm fairly sure
this breaks plpgsql's handling of simple expressions.  (If there's not
a regression test that the patch is failing, there ought to be ...)
The reason is that we build an execution tree for a given simple
expression only once per transaction and then re-use it.  So for
example consider a plpgsql function containing

x := stablefunction();

I think your patch means that stablefunction() would be called only once
per transaction, and the value would be cached and returned in all later
executions.  This would be wrong if the plpgsql function is called in
successive statements that have different snapshots, or contains a loop
around the assignment plus operations that change whatever state
stablefunction() looks at.  It would be legitimate for stablefunction()
to have different values in the successive executions.
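
For instance (stablefunction() and the state it reads are hypothetical):

    create function demo() returns void as $$
    declare
        x int;
    begin
        for i in 1..10 loop
            update some_state set n = n + 1;  -- changes what stablefunction() reads
            x := stablefunction();            -- may legitimately differ each iteration
        end loop;
    end;
    $$ language plpgsql;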

The quick and dirty solution to this would be for plpgsql to pass some
kind of planner flag that disables insertion of CacheExpr nodes, or
alternatively have it not believe that CacheExpr nodes are safe to have
in simple expressions.  But that gives up all the advantage of the
concept for this use-case, which seems a bit disappointing.  Maybe we
can think of a better answer.

regards, tom lane



Re: [HACKERS] superusers are members of all roles?

2011-09-11 Thread Andrew Dunstan



On 09/09/2011 11:34 PM, Bruce Momjian wrote:

Robert Haas wrote:

On Sat, May 7, 2011 at 11:42 PM, Bruce Momjian br...@momjian.us wrote:

Is this a TODO?

I think so.

Added to TODO:

Address problem where superusers are assumed to be members of all groups

http://archives.postgresql.org/pgsql-hackers/2011-04/msg00337.php


This turns out to be a one-liner.

Patch attached.

cheers

andrew
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 1ee030f..1c84a60 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -442,8 +442,13 @@ is_member(Oid userid, const char *role)
 	if (!OidIsValid(roleid))
 		return false;			/* if target role not exist, say no */
 
-	/* See if user is directly or indirectly a member of role */
-	return is_member_of_role(userid, roleid);
+	/* 
+	 * See if user is directly or indirectly a member of role.
+	 * For this purpose, a superuser is not considered to be automatically
+	 * a member of the role, so group auth only applies to explicit
+	 * membership.
+	 */
+	return is_member_of_role_nosuper(userid, roleid);
 }
 
 /*



Re: [HACKERS] superusers are members of all roles?

2011-09-11 Thread Stephen Frost
* Andrew Dunstan (and...@dunslane.net) wrote:
  Address problem where superusers are assumed to be members of all groups
  
  http://archives.postgresql.org/pgsql-hackers/2011-04/msg00337.php
 
 This turns out to be a one-liner.

I really don't know that I agree with removing this, to be honest..  I
haven't got time at the moment to really discuss it, but at the very
least, not being able to 'set role' to any user when connected as
postgres would be REALLY annoying..

Thanks,

Stephen




Re: [HACKERS] superusers are members of all roles?

2011-09-11 Thread Robert Haas
On Sun, Sep 11, 2011 at 10:32 PM, Stephen Frost sfr...@snowman.net wrote:
 * Andrew Dunstan (and...@dunslane.net) wrote:
      Address problem where superusers are assumed to be members of all groups

          http://archives.postgresql.org/pgsql-hackers/2011-04/msg00337.php

 This turns out to be a one-liner.

 I really don't know that I agree with removing this, to be honest..  I
 haven't got time at the moment to really discuss it, but at the very
 least, not being able to 'set role' to any user when postgres would be
 REALLY annoying..

Sure.  But I don't believe anyone has proposed changing that.  What
we're talking about here is that, for example, setting a reject rule
for a certain group in pg_hba.conf will always match superusers, even
though they're not in that group.
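
For example, given a hypothetical pg_hba.conf line:

    # reject connections from members of role "badgroup"
    host    all    +badgroup    192.0.2.0/24    reject

today a superuser matches +badgroup (and is therefore rejected) even though
it was never granted membership; with the patch, only explicit members
would match.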

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] superusers are members of all roles?

2011-09-11 Thread Andrew Dunstan



On 09/11/2011 10:32 PM, Stephen Frost wrote:

* Andrew Dunstan (and...@dunslane.net) wrote:

Address problem where superusers are assumed to be members of all groups

http://archives.postgresql.org/pgsql-hackers/2011-04/msg00337.php

This turns out to be a one-liner.

I really don't know that I agree with removing this, to be honest..  I
haven't got time at the moment to really discuss it, but at the very
least, not being able to 'set role' to any user when connected as
postgres would be REALLY annoying..




It's NOT changing that. All this affects is how +groupname is treated in
pg_hba.conf, i.e. whether we treat every superuser there as being a member
of every group.


cheers

andrew



Re: [HACKERS] psql additions

2011-09-11 Thread Robert Haas
On Sun, Sep 11, 2011 at 2:58 PM, Andrew Dunstan and...@dunslane.net wrote:
 Here's a couple of ideas I had recently about making psql a bit more user
 friendly.

 First, it would be useful to be able to set pager options and possibly other
 settings, so my suggestion is for a \setenv command that could be put in a
 .psqlrc file, something like:

   \setenv PAGER='less'
   \setenv LESS='-imFX4'

Seems useful.

 Second, I'd like to be able to set a minimum number of lines below which the
 pager would not be used, something like:

   \pset pagerminlines 200

 Thoughts?

Gee, why do I feel like we have something like this already?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



[HACKERS] What Would You Like To Do?

2011-09-11 Thread David E. Wheeler
Hackers,

Later this week I'm giving a [brief][] for an audience of what I hope will be 
corporate PostgreSQL users that covers how to get a feature developed for 
PostgreSQL. The idea here is that there are a lot of organizations out there 
with very deep commitments to PostgreSQL, who really take advantage of what it 
has to offer, but also would love additional features PostgreSQL doesn't offer. 
Perhaps some of them would be willing to fund development of the features they 
need.

[brief]: http://postgresopen.org/2011/schedule/presentations/83/

Toward the end of the presentation, I'd like to make some suggestions and offer 
to do some match-making. I'm thinking primarily of listing some of the stuff 
the community would love to see done, along with the names of the folks and/or 
companies who, with funding, might make it happen. My question for you is: What 
do you want to work on?

Here's my preliminary list:

* Integrated partitioning support: Simon/2nd Quadrant
* High-CPU concurrency: Robert/EnterpriseDB
* Multimaster replication and clustering: Simon/2nd Quadrant
* Multi-table indexes: Heikki? Oleg & Teodor?
* Column-level collation support: Peter/EnterpriseDB
* Faster and more fault-tolerant data loading: Andrew/PGX
* Automated postgresql.conf configuration: Greg/2nd Quadrant
* Parallel pg_dump: Andrew/PGX
* SET GLOBAL-style configuration in SQL: Greg/2nd Quadrant
* Track table and index caching to improve optimizer decisions: Robert/EnterpriseDB

Thanks to Greg Smith for adding a few bonus ideas I hadn't thought of. What 
else have you got? I don't think we necessarily have to limit ourselves to core 
features, BTW: projects like PostGIS and pgAdmin are also clearly popular, and 
new projects of that scope (or improvements to those!) would no doubt be 
welcome. Also, I'm highlighting PGXN as an example of how this sort of thing 
might work.

So, what do you want to work on? Let me know, I'll do as much match-making at 
the conference as I can.

Best,

David





Re: [HACKERS] Patch to improve reliability of postgresql on linux nfs

2011-09-11 Thread George Barnett
On 10/09/2011, at 1:30 AM, Bernd Helmle wrote:

 --On 9. September 2011 10:27:22 -0400 Tom Lane t...@sss.pgh.pa.us wrote:
 
 On the whole I think you'd be better off lobbying your NFS implementors
 to provide something closer to the behavior of every other filesystem on
 the planet.  Or checking to see if you need to adjust your NFS
 configuration, as the other responders mentioned.
 
  You really need at least the mount options 'hard' _and_ 'nointr' on NFS
  mounts, otherwise you are out of luck. Oracle and DB2 guys recommend those
  settings, and without them even a millisecond-long network glitch could
  disturb things unreasonably.
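
(For illustration, an fstab entry with those recommended options might look
like this; the server, path, and transfer sizes are hypothetical:)

    nfsserver:/export/pgdata  /var/lib/pgsql  nfs  hard,nointr,rw,rsize=32768,wsize=32768  0  0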

Hi,

My mount options include hard and intr.

George