Re: [HACKERS] Further news on Clang - spurious warnings

2011-08-04 Thread Tom Lane
Peter Geoghegan pe...@2ndquadrant.com writes:
 On 3 August 2011 21:03, Tom Lane t...@sss.pgh.pa.us wrote:
 I mean that it's unclear what you'll get if status has a bitpattern
 equivalent to a negative integer.  If the compiler implements the
 comparison as signed, the test will yield TRUE; if unsigned, it's FALSE.

 On compilers on which the enum value is represented as an unsigned
 int, passing -1 is just the same as doing that with any function with
 an unsigned int argument for that argument - it will convert to a
 large unsigned value. So sure, that isolated part of the test's
 outcome will vary depending on whether or not the compiler opts to
 represent the enum as signed when it can, but I don't look at it as
 you do. I simply consider that to be a violation of the enum's
 contract, or the caller's failure to consider that the enum couldn't
 represent -1 -- they got what they asked for.

This argument is completely missing the point of the test, which is to
verify whether or not the caller adhered to the enum's contract.  You
can *not* assume that he did while arguing about whether the test is
valid or accomplishes its goals.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Transient plans versus the SPI API

2011-08-04 Thread Simon Riggs
On Wed, Aug 3, 2011 at 8:33 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Simon Riggs si...@2ndquadrant.com writes:
 I think it's possible to tell automatically whether we need to replan
 always or not based upon the path we take through selectivity
 functions.

 I don't really believe that, or at least I think it would only detect a
 few cases.  Examples of parameter-value-sensitive decisions that are
 made nowhere near the selectivity functions are constraint exclusion and
 LIKE pattern to index-qual conversion.  And in none of these cases do we
 really know at the bottom level whether a different parameter value will
 lead to a significant change in the finished plan.  For instance, if
 there's no index for column foo, it is a waste of time to force
 replanning just because we have varying selectivity estimates for
 WHERE foo > $1.

 I think we'll be a lot better off with the framework discussed last
 year: build a generic plan, as well as custom plans for the first few
 sets of parameter values, and then observe whether there's a significant
 reduction in estimated costs for the custom plans.

The problem there is which executions we build custom plans for. That
turns the problem into a sampling issue and you'll only fix the
problems that occur with a frequency to match your sampling pattern
and rate. Examples of situations where it won't help:

* plans that vary by table size will be about the same in the first 5
executions; after a large number of executions, things go bad.

* text search where the parameter is provided by user input - sensible
requests have low selectivities, but some users put in a space or 'e'
and then we try to retrieve the whole table by index scan. It is almost
impossible to prevent all potentially high-selectivity inputs from the
user. We could add LIMIT, but frequently ORM-generated queries do not do that.

This isn't my-way-or-your-way - I think we need to look at some form
of safety barriers so we generate a plan but also know when the plan
has outlived its usefulness and force a re-plan.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [HACKERS] cataloguing NOT NULL constraints

2011-08-04 Thread Dean Rasheed
On 3 August 2011 22:26, Alvaro Herrera alvhe...@commandprompt.com wrote:
 Excerpts from Dean Rasheed's message of sáb jul 23 04:37:06 -0400 2011:
 On 22 July 2011 22:28, Robert Haas robertmh...@gmail.com wrote:

  mine was that we need a command such as
 
  ALTER TABLE foo ALTER COLUMN bar SET NOT NULL name_of_notnull_constr
 
  where the last bit is what's new.
 
  Well, if you don't have that, I don't see how you have any chance of
  pg_dump working correctly.

 Ah yes, pg_dump's dumpConstraint() needs a clause to alter a table
 adding a named NOT NULL constraint (and the DOMAIN case should be
 preserving the constraint's name too). So it looks like some new
 syntax for ALTER TABLE to add named NOT NULL constraints is probably
 needed before this can be committed.

 So after writing the code to handle named NOT NULL constraints for
 tables, I'm thinking that dumpConstraints needs to be fixed thusly:

 @@ -12888,6 +12968,27 @@ dumpConstraint(Archive *fout, ConstraintInfo *coninfo)
                         NULL, NULL);
        }
    }
 +   else if (coninfo->contype == 'n' && tbinfo)
 +   {
 +       /* NOT NULL constraint on a table */
 +       if (coninfo->separate)
 +       {
 +           write_msg(NULL, "NOT NULL constraints cannot be dumped separately "
 +                     "from their owning table\n");
 +           exit_nicely();
 +       }
 +   }
 +   else if (coninfo->contype == 'n' && tbinfo == NULL)
 +   {
 +       /* NOT NULL constraint on a domain */
 +       TypeInfo   *tyinfo = coninfo->condomain;
 +
 +       /* Ignore if not to be dumped separately */
 +       if (coninfo->separate)
 +       {
 +           write_msg(NULL, "NOT NULL constraints cannot be dumped separately "
 +                     "from their owning domain\n");
 +           exit_nicely();
 +       }
 +   }
    else
    {
        write_msg(NULL, "unrecognized constraint type: %c\n",
                  coninfo->contype);


 There doesn't seem to be any point in giving pg_dump the ability to dump
 such constraints separately; I don't think there's any situation in
 which dependencies between the table/domain and NOT NULL constraints
 would get circular (which is what causes them to acquire the separate
 flag).  I don't want to write code that is going to be always
 unused, particularly not in a beast as hairy as pg_dump.


Wow. Yes I think you're right - it is a hairy beast :-)

It was the code immediately above that for CHECK constraints that made
me assume NOT NULLs would need similar logic. But I haven't quite
figured out how a CHECK constraint's dependencies could form a loop,
so I don't know if it could happen for a NOT NULL.

Regards,
Dean



Re: [HACKERS] cataloguing NOT NULL constraints

2011-08-04 Thread Nikhil Sontakke
 So after writing the code to handle named NOT NULL constraints for
 tables, I'm thinking that dumpConstraints needs to be fixed thusly:

 @@ -12888,6 +12968,27 @@ dumpConstraint(Archive *fout, ConstraintInfo *coninfo)
                         NULL, NULL);
        }
    }
 +   else if (coninfo->contype == 'n' && tbinfo)
 +   {
 +       /* NOT NULL constraint on a table */
 +       if (coninfo->separate)
 +       {
 +           write_msg(NULL, "NOT NULL constraints cannot be dumped separately "
 +                     "from their owning table\n");
 +           exit_nicely();
 +       }
 +   }
 +   else if (coninfo->contype == 'n' && tbinfo == NULL)
 +   {
 +       /* NOT NULL constraint on a domain */
 +       TypeInfo   *tyinfo = coninfo->condomain;
 +
 +       /* Ignore if not to be dumped separately */
 +       if (coninfo->separate)
 +       {
 +           write_msg(NULL, "NOT NULL constraints cannot be dumped separately "
 +                     "from their owning domain\n");
 +           exit_nicely();
 +       }
 +   }
    else
    {
        write_msg(NULL, "unrecognized constraint type: %c\n",
                  coninfo->contype);


Some nit-picking.

AFAICS above, we seem to be using 'tbinfo' only to identify the object
type here - 'table' vis-à-vis 'domain'. We could probably reduce the
above two else-if branches to a single one, and use the check of tbinfo
being non-NULL to decide which object-type name to spit out.

Although, it's difficult to see how we could ever end up marking NOT
NULL constraints as 'separate'. So this code will rarely, if ever, be
exercised, IMO.

Regards,
Nikhils



Re: [HACKERS] Further news on Clang - spurious warnings

2011-08-04 Thread Peter Geoghegan
On 4 August 2011 07:08, Tom Lane t...@sss.pgh.pa.us wrote:
 Peter Geoghegan pe...@2ndquadrant.com writes:
 On 3 August 2011 21:03, Tom Lane t...@sss.pgh.pa.us wrote:
 I mean that it's unclear what you'll get if status has a bitpattern
 equivalent to a negative integer.  If the compiler implements the
 comparison as signed, the test will yield TRUE; if unsigned, it's FALSE.

 On compilers on which the enum value is represented as an unsigned
 int, passing -1 is just the same as doing that with any function with
 an unsigned int argument for that argument - it will convert to a
 large unsigned value. So sure, that isolated part of the test's
 outcome will vary depending on whether or not the compiler opts to
 represent the enum as signed when it can, but I don't look at it as
 you do. I simply consider that to be a violation of the enum's
 contract, or the caller's failure to consider that the enum couldn't
 represent -1 -- they got what they asked for.

 This argument is completely missing the point of the test, which is to
 verify whether or not the caller adhered to the enum's contract.  You
 can *not* assume that he did while arguing about whether the test is
 valid or accomplishes its goals.

I did not assume anything about the caller or their trustworthiness.
The whole point of my argument is that passing a negative integer
where the enum is represented as unsigned is just another way of
violating the contract (passing a negative integer where the enum is
represented as signed is another), and that it is equally well handled
by the test; the whole test, though, not just the isolated part of it
that you referred to.

-- 
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services



Re: [HACKERS] WIP: Fast GiST index build

2011-08-04 Thread Alexander Korotkov
Uhh, my bad, really stupid bug. Many thanks.

--
With best regards,
Alexander Korotkov.

On Wed, Aug 3, 2011 at 8:31 PM, Heikki Linnakangas 
heikki.linnakan...@enterprisedb.com wrote:

 On 03.08.2011 11:18, Alexander Korotkov wrote:

 I found that in the previous version of the patch I missed PageSetLSN
 and PageSetTLI, but the huge amount of WAL is still here. Also I found
 that the huge amount of WAL appears only with -O2. With -O0 the amount
 of WAL is OK, but messages FATAL: xlog flush request BFF11148/809A600
 is not satisfied --- flushed only to 44/9C518750 appear. It seems
 there is some totally wrong use of WAL if even the optimization level
 matters...


 Try this:

 diff --git a/src/backend/access/gist/gistbuild.c b/src/backend/access/gist/gistbuild.c
 index 5099330..5a441e0 100644
 --- a/src/backend/access/gist/gistbuild.c
 +++ b/src/backend/access/gist/gistbuild.c
 @@ -478,7 +478,7 @@ bufferingbuildinsert(GISTInsertState *state,
        /* Write the WAL record */
        if (RelationNeedsWAL(state->r))
        {
 -          gistXLogUpdate(state->r->rd_node, buffer, oldoffnum, noldoffnum,
 +          recptr = gistXLogUpdate(state->r->rd_node, buffer, oldoffnum, noldoffnum,
                           itup, ntup, InvalidBuffer);
            PageSetLSN(page, recptr);
            PageSetTLI(page, ThisTimeLineID);



 --
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] Postgres / plpgsql equivalent to python's getattr() ?

2011-08-04 Thread Florian Pflug
On Aug4, 2011, at 00:48 , Tom Lane wrote:
 James Robinson jlrob...@socialserve.com writes:
 Python's getattr() allows for dynamic lookup of attributes on an  
 object, as in:
 
  inst = MyClass(x=12, y=24)
  v = getattr(inst, 'x')
  assert v == 12
 
 Oftentimes in writing data validating trigger functions, it'd be real  
 handy to be able to do a similar thing in plpgsql
 
 Is there something in the internals which inherently prevent this? 
 
 plpgsql is strongly typed (much more so than python, anyway).

You've brought forth that argument against dynamic attribute lookup
in the past, but I remain unconvinced. The fact that plpgsql is strongly
(or, more precisely, statically) typed doesn't really get in the way as
long as you require the dynamic attribute accessor's return type to be
determined at compile time (make that prepare time in the case of
plpgsql).

The fact that I was able to implement dynamic accessor functions without
any change to postgres or plpgsql proves that, IMHO. The only bit that's
slightly kludgy about this, in my opinion, is the return-type-determining
defvalue argument of fieldvalue(). But again, this has little to do with
static vs. dynamic typing and more with the limitations of our support
for polymorphic functions.

@OP: Here's my implementation of the feature you desire as a set of
C-language functions: https://github.com/fgp/pg_record_inspect. Other
people have coded up similar things in the past, but I currently cannot
find any links to their work. A little digging in the mailing list
archives should turn them up.

best regards,
Florian Pflug




Re: [HACKERS] cataloguing NOT NULL constraints

2011-08-04 Thread Dean Rasheed
On 4 August 2011 09:23, Nikhil Sontakke nikkh...@gmail.com wrote:
 So after writing the code to handle named NOT NULL constraints for
 tables, I'm thinking that dumpConstraints needs to be fixed thusly:

 @@ -12888,6 +12968,27 @@ dumpConstraint(Archive *fout, ConstraintInfo *coninfo)
                         NULL, NULL);
        }
    }
 +   else if (coninfo->contype == 'n' && tbinfo)
 +   {
 +       /* NOT NULL constraint on a table */
 +       if (coninfo->separate)
 +       {
 +           write_msg(NULL, "NOT NULL constraints cannot be dumped separately "
 +                     "from their owning table\n");
 +           exit_nicely();
 +       }
 +   }
 +   else if (coninfo->contype == 'n' && tbinfo == NULL)
 +   {
 +       /* NOT NULL constraint on a domain */
 +       TypeInfo   *tyinfo = coninfo->condomain;
 +
 +       /* Ignore if not to be dumped separately */
 +       if (coninfo->separate)
 +       {
 +           write_msg(NULL, "NOT NULL constraints cannot be dumped separately "
 +                     "from their owning domain\n");
 +           exit_nicely();
 +       }
 +   }
    else
    {
        write_msg(NULL, "unrecognized constraint type: %c\n",
                  coninfo->contype);


 Some nit-picking.

 AFAICS above, we seem to be only using 'tbinfo' to identify the object
 type here - 'table' visavis 'domain'. We could probably reduce the
 above two elses to a single one and use the check of tbinfo being not
 null to decide which object type name to spit out..

 Although, it's difficult to see how we could end up marking NOT NULL
 constraints as 'separate' ever. So this code will be rarely exercised,
 if ever IMO.


There's a related issue that might affect how this code ends up. I'm
not sure if this has been discussed before, but it seems to be a
problem for CHECK constraints currently, and will affect NOT NULL in
the same way - if the constraint is NOT VALID, and some of the
existing data violates the constraint, then pg_dump needs to dump the
constraint separately, after the table's data, otherwise the restore
will fail.

So it looks like this code will have to support dumping NOT NULLs
ultimately anyway.

BTW, this happens automatically for FK constraints, so I don't think
this is a problem for 9.1.

Regards,
Dean



Re: [HACKERS] error: could not find pg_class tuple for index 2662

2011-08-04 Thread daveg
On Wed, Aug 03, 2011 at 11:18:20AM -0400, Tom Lane wrote:
 Evidently not, if it's not logging anything, but now the question is
 why.  One possibility is that for some reason RelationGetNumberOfBlocks
 is persistently lying about the file size.  (We've seen kernel bugs
 before that resulted in transiently wrong values, so this isn't totally
 beyond the realm of possibility.)  Please try the attached patch, which
 extends the previous one to add a summary line including the number of
 blocks physically scanned by the seqscan.

Ok, I have results from the latest patch and have attached a redacted
server log with the select relfilenode output added inline. This is the
shorter of the logs and shows the sequence pretty clearly. I have additional
logs if wanted.

Summary: the failing process reads 0 rows from 0 blocks from the OLD
relfilenode. 

-dg


-- 
David Gould   da...@sonic.net   510 536 1443   510 282 0869
If simplicity worked, the world would be overrun with insects.
SS.msec  pid   db   user  statement
--  -  ---    ---
03.804  29706  c27  postgres  connection authorized: user=postgres database=c27
...
03.824  29706  c27  postgres  set statement_timeout=60;
03.824  29706  c27  postgres  0.364 ms
03.825  29706  c27  postgres  select current_database() as db, relname, oid, 
pg_relation_filenode(oid) as filenode, clock_timestamp()::time as ts from 
pg_class where relname like 'pg_class%';
03.829  29706  c27  postgres  4.173 ms
  db  |          relname           | oid  |  filenode  |       ts
 -----+----------------------------+------+------------+-----------------
  c27 | pg_class                   | 1259 | 1245833951 | 21:31:03.828293
  c27 | pg_class_oid_index         | 2662 | 1245833955 | 21:31:03.828791
  c27 | pg_class_relname_nsp_index | 2663 | 1259100530 | 21:31:03.828807

03.829  29706  c27  postgres  vacuum full pg_catalog.pg_class;
03.829  29706  c27  postgres  LOCATION: exec_simple_query, postgres.c:900
...
03.845  29707  c27  postgres  disconnection: session time: 0:00:00.041 
user=postgres database=c27 host=bk-0
...
08.856  29706  c27  postgres  process 29706 still waiting for RowExclusiveLock 
on relation 1214 of database 0 after 5000.483 ms
08.856  29706  c27  postgres  LOCATION: ProcSleep, proc.c:1059
08.856  29706  c27  postgres  STATEMENT: vacuum full pg_catalog.pg_class;
09.383  29711  LOG: 0: process 29711 still waiting for AccessShareLock on 
relation 1259 of database 16408 after 5000.331 ms
09.383  29711  LOCATION: ProcSleep, proc.c:1059
...
11.559  28857  c27  apps  SELECT
...
16.560  28857  c27  apps  process 28857 still waiting for AccessShareLock 
on relation 1259 of database 16408 after 5001.209 ms
16.560  28857  c27  apps  LOCATION: ProcSleep, proc.c:1059
16.560  28857  c27  apps  STATEMENT: SELECT
...
19.763  29706  c27  postgres  process 29706 acquired RowExclusiveLock on 
relation 1214 of database 0 after 15907.284 ms
19.763  29706  c27  postgres  LOCATION: ProcSleep, proc.c:1063
19.763  29706  c27  postgres  STATEMENT: vacuum full pg_catalog.pg_class;
...
25.735  29711  LOG: 0: process 29711 acquired AccessShareLock on relation 
1259 of database 16408 after 21352.393 ms
25.735  29711  LOCATION: ProcSleep, proc.c:1063
25.735  28857  c27  apps  process 28857 acquired AccessShareLock on 
relation 1259 of database 16408 after 14176.040 ms
25.735  28857  c27  apps  LOCATION: ProcSleep, proc.c:1063
25.735  28857  c27  apps  STATEMENT: SELECT
25.736  28857  c27  apps  ScanPgRelationDetailed: found 0 tuples with OID 
2662 in 0 blocks of filenode 1245833951
25.736  28857  c27  apps  LOCATION: ScanPgRelationDetailed, relcache.c:372
25.736  28857  c27  apps  STATEMENT: SELECT
25.736  28857  c27  apps  ERROR: XX000: could not find pg_class tuple for 
index 2662
25.736  28857  c27  apps  LOCATION: RelationReloadIndexInfo, relcache.c:1816
25.736  28857  c27  apps  STATEMENT: SELECT
25.736  29706  c27  postgres  21906.865 ms
25.737  29706  c27  postgres  select current_database() as db, relname, oid, 
pg_relation_filenode(oid) as filenode, clock_timestamp()::time as ts from 
pg_class where relname like 'pg_class%';
25.767  29706  c27  postgres  30.902 ms
  db  |          relname           | oid  |  filenode  |       ts
 -----+----------------------------+------+------------+-----------------
  c27 | pg_class                   | 1259 | 1279787837 | 21:31:25.76726
  c27 | pg_class_oid_index         | 2662 | 1279788022 | 21:31:25.767764
  c27 | pg_class_relname_nsp_index | 2663 | 1279788023 | 21:31:25.767782

25.768  29706  c27  postgres  vacuum full pg_catalog.pg_attribute;
25.775  28857  c27  apps  disconnection: session time: 0:07:07.758 
user=apps database=c27 host=op-01
25.775  28857  c27  apps  LOCATION: log_disconnections, postgres.c:4339
...
30.914  29711  LOG: 0: process 29711 still waiting for AccessShareLock on 
relation 1249 of database 16408 

Re: [HACKERS] SYNONYMS (again)

2011-08-04 Thread Boszormenyi Zoltan
On 2011-06-23 20:44, Gurjeet Singh wrote:
 On Wed, Jun 22, 2011 at 3:37 PM, Joshua D. Drake j...@commandprompt.com wrote:

 Per:

 http://archives.postgresql.org/pgsql-hackers/2010-11/msg02043.php

 It seems we did come up with a use case in the procpid discussion: the
 ability to change the names of columns/databases etc., to handle the
 fixing of bad decisions made during development over time.

 Thoughts?


 Instead of just synonyms of columns, why don't we think about implementing
 virtual columns (the feature as named in other RDBMSes)? This is the ability
 to define a column in a table which is derived using an expression over
 other, non-virtual columns. I agree it would be much more difficult, and
 some may even argue it is pointless in the presence of views and expression
 indexes, but I leave that as an exercise for others.

A few years ago I wrote a patch for (I think) 8.2 that implemented
IDENTITY and GENERATED columns. Look it up in the archives.
The virtual column is called GENERATED in the SQL standard
and is part of the table and maintained as a function/expression
of other fields in the same record.

Best regards,
Zoltán Böszörményi

-- 
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/



Re: [HACKERS] Transient plans versus the SPI API

2011-08-04 Thread Yeb Havinga

On 2011-08-03 21:19, Tom Lane wrote:

Robert Haas robertmh...@gmail.com writes:

This seems like a good design.  Now what would be really cool is if
you could observe a stream of queries like this:
SELECT a, b FROM foo WHERE c = 123
SELECT a, b FROM foo WHERE c = 97
SELECT a, b FROM foo WHERE c = 236
...and say, hey, I could just make a generic plan and use it every
time I see one of these.  It's not too clear to me how you'd make
recognition of such queries cheap enough to be practical, but maybe
someone will think of a way...

Hm, you mean reverse-engineering the parameterization of the query?
Interesting thought, but I really don't see a way to make it practical.


See also http://archives.postgresql.org/pgsql-hackers/2010-11/msg00617.php

I don't know if any implementation can be practical - maybe the parser 
could be coerced into emitting some kind of number that's based on 
everything in the query except constants (and whitespace), so it would 
be the same for all the queries Robert described. That could be 
low-cost enough to detect whether a cached plan already exists for a 
query's id, and to do more work only in those cases.


--
Yeb Havinga
http://www.mgrid.net/
Mastering Medical Data




Re: [HACKERS] Postgres / plpgsql equivalent to python's getattr() ?

2011-08-04 Thread Pavel Stehule
2011/8/4 Florian Pflug f...@phlo.org:
 On Aug4, 2011, at 00:48 , Tom Lane wrote:
 James Robinson jlrob...@socialserve.com writes:
 Python's getattr() allows for dynamic lookup of attributes on an
 object, as in:

      inst = MyClass(x=12, y=24)
      v = getattr(inst, 'x')
      assert v == 12

 Oftentimes in writing data validating trigger functions, it'd be real
 handy to be able to do a similar thing in plpgsql

 Is there something in the internals which inherently prevent this?

 plpgsql is strongly typed (much more so than python, anyway).

 You've brought forth that argument against dynamic attribute lookup
 in the past, but I remain unconvinced. The fact that plpgsql is strongly
 (or, more precisely, statically) typed doesn't really get in the way as
 long as you require the dynamic attribute accessor's return type to be
 determined at compile time (make that prepare time in the case of
 plpgsql).

 The fact that I was able to implement dynamic accessor functions without
 any change to postgres or plpgsql proves that, IMHO. The only bit that's
 slightly kludgy about this, in my opinion, is the return-type-determining
 defvalue argument of fieldvalue(). But again, this has little to do with
 static vs. dynamic typing and more with the limitations of our support
 for polymorphic functions.

 @OP: Here's my implementation of the feature you desire as a set of
 C-language functions: https://github.com/fgp/pg_record_inspect. Other
 people have coded up similar things in the past, but I currently cannot
 find any links to their work. A little digging in the mailing list
 archives should turn them up.


it is pltoolbox http://www.pgsql.cz/index.php/PL_toolbox_%28en%29

regards

Pavel

 best regards,
 Florian Pflug






[HACKERS] TRUE/FALSE vs true/false

2011-08-04 Thread Boszormenyi Zoltan
Hi,

I looked at b4fbe392f8ff6ff1a66b488eb7197eef9e1770a4
and I noticed that it's using TRUE, FALSE, true and false
inconsistently:

@@ -248,6 +249,7 @@ CreateSharedInvalidationState(void)
        shmInvalBuffer->procState[i].nextMsgNum = 0;    /* meaningless */
        shmInvalBuffer->procState[i].resetState = false;
        shmInvalBuffer->procState[i].signaled = false;
+       shmInvalBuffer->procState[i].hasMessages = false;
        shmInvalBuffer->procState[i].nextLXID = InvalidLocalTransactionId;
    }
 }

@@ -316,6 +316,7 @@ SharedInvalBackendInit(bool sendOnly)
    stateP->nextMsgNum = segP->maxMsgNum;
    stateP->resetState = false;
    stateP->signaled = false;
+   stateP->hasMessages = false;
    stateP->sendOnly = sendOnly;

    LWLockRelease(SInvalWriteLock);


@@ -459,6 +461,19 @@ SIInsertDataEntries(const SharedInvalidationMessage *data, int n)
        SpinLockRelease(&vsegP->msgnumLock);
    }

+   /*
+    * Now that the maxMsgNum change is globally visible, we give
+    * everyone a swift kick to make sure they read the newly added
+    * messages.  Releasing SInvalWriteLock will enforce a full memory
+    * barrier, so these (unlocked) changes will be committed to memory
+    * before we exit the function.
+    */
+   for (i = 0; i < segP->lastBackend; i++)
+   {
+       ProcState  *stateP = &segP->procState[i];
+       stateP->hasMessages = TRUE;
+   }
+
    LWLockRelease(SInvalWriteLock);
    }
 }

@@ -499,11 +514,36 @@ SIGetDataEntries(SharedInvalidationMessage *data, int datasize)
    ...
+    * Note that, if we don't end up reading all of the messages, we had
+    * better be certain to reset this flag before exiting!
+    */
+   stateP->hasMessages = FALSE;
+

@@ -544,10 +584,16 @@ SIGetDataEntries(SharedInvalidationMessage *data, int datasize)
    ...
    if (stateP->nextMsgNum >= max)
        stateP->signaled = false;
+   else
+       stateP->hasMessages = TRUE;


Also, grepping for checks against or assignments of bool values reveals
that true and false are used far more than TRUE and FALSE:

[zozo@localhost backend]$ find . -name "*.c" | xargs grep -w true | grep -v '"true"' | grep = | wc -l
2446
[zozo@localhost backend]$ find . -name "*.c" | xargs grep -w false | grep -v '"false"' | grep = | wc -l
2745
[zozo@localhost backend]$ find . -name "*.c" | xargs grep -w TRUE | grep -v '"TRUE"' | grep = | wc -l
119
[zozo@localhost backend]$ find . -name "*.c" | xargs grep -w FALSE | grep -v '"FALSE"' | grep = | wc -l
140

Shouldn't these get fixed to be consistent?

Best regards,
Zoltán Böszörményi

-- 
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/




Re: [HACKERS] TRUE/FALSE vs true/false

2011-08-04 Thread Robert Haas
2011/8/4 Boszormenyi Zoltan z...@cybertec.at:
 Shouldn't these get fixed to be consistent?

I believe I already did.  See commit 85b436f7b1f06a6ffa8d2f29b03d6e440de18784.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] TRUE/FALSE vs true/false

2011-08-04 Thread Boszormenyi Zoltan
On 2011-08-04 14:32, Robert Haas wrote:
 2011/8/4 Boszormenyi Zoltan z...@cybertec.at:
 Shouldn't these get fixed to be consistent?
 I believe I already did.  See commit 85b436f7b1f06a6ffa8d2f29b03d6e440de18784.

I meant a mass sed -e 's/TRUE/true/g' -e 's/FALSE/false/g' run,
so all the ~200 occurrences of both TRUE and FALSE get converted
and the whole source tree is consistent.

-- 
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/




Re: [HACKERS] TRUE/FALSE vs true/false

2011-08-04 Thread Robert Haas
On Thu, Aug 4, 2011 at 8:44 AM, Boszormenyi Zoltan z...@cybertec.at wrote:
On 2011-08-04 14:32, Robert Haas wrote:
 2011/8/4 Boszormenyi Zoltan z...@cybertec.at:
 Shouldn't these get fixed to be consistent?
 I believe I already did.  See commit 
 85b436f7b1f06a6ffa8d2f29b03d6e440de18784.

 I meant a mass sed -e 's/TRUE/true/g'  -e 's/FALSE/false/g' run
 so all the ~200 occurrences of both TRUE and FALSE get
 converted so the whole source tree is consistent.

Oh, I see.  Well, I don't care either way, so I'll let others weigh
in.  The way it is doesn't bother me, but fixing it doesn't bother me
either.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



[HACKERS] plperl crash with Debian 6 (64 bit), pl/perlu, libwww and https

2011-08-04 Thread Hannu Krosing
Hi

I have been helping some people to debug a SIGALRM-related crash
induced by using pl/perlu HTTP GET functionality.

I have so far been able to repeat the crash only on Debian 64-bit
computers. A DB create script and instructions for reproducing the
crash are attached.

The crash is related to something leaving behind a bad SIGALRM handler,
as it can be (kind of) fixed by resetting SIGALRM to a no-op using this
perl function:

CREATE OR REPLACE FUNCTION reset_sigalarm() RETURNS VOID
LANGUAGE plperlu
AS $_X$
    $SIG{ALRM} = 'IGNORE';
$_X$;

( unfortunately this hoses deadlock detection and statement_timeout )

Environment where this crash does happen:

Debian GNU/Linux 6.0  - x86-64
openssl 0.9.8o-4squeeze1
postgresql-9.0  9.0.4-1~bpo60+1 
postgresql-plperl-9.0   9.0.4-1~bpo60+1 
libwww-perl 5.836-1  

Postgresql is installed from backports

It does not happen on 32 bit ubuntu


-- 
---
Hannu Krosing
PostgreSQL Infinite Scalability and Performance Consultant
PG Admin Book: http://www.2ndQuadrant.com/books/
/*
Minimal setup for reproducing the bug:
PostgreSQL database
LANGUAGE plpgsql
LANGUAGE plperlu
Perl module: LWP::UserAgent
Perl v5.10.0
LWP::UserAgent v5.813

To reproduce the crash, an HTTPS request must be used.
*/

CREATE TABLE crashtest
(
  id integer,
  val varchar
);

INSERT INTO crashtest (id, val) VALUES (1, '');

CREATE OR REPLACE FUNCTION crashtest()
  RETURNS character varying AS
$BODY$
begin
perform crashtest_request();
update crashtest set val = 'XXX' where id=1;
perform pg_sleep(2);
RETURN 'OK';
END;
$BODY$
  LANGUAGE plpgsql VOLATILE SECURITY DEFINER;

CREATE OR REPLACE FUNCTION crashtest_request()
  RETURNS text AS
$BODY$
use strict;
use LWP::UserAgent;
my $ua = LWP::UserAgent->new;
# my $req = HTTP::Request->new(GET => 'https://www.eesti.ee/est/'); # Must be https!
my $req = HTTP::Request->new(GET => 'https://encrypted.google.com/');
# my $req = HTTP::Request->new(GET => 'https://www.paypal.com/ee/');
my $res = $ua->request($req);
return 'OK';
$BODY$
  LANGUAGE plperlu VOLATILE;

/*
-- Run simultaneously in at least two SQL windows:
select crashtest();
select crashtest();
select crashtest();
select crashtest();
select crashtest();
select crashtest();
select crashtest();
select crashtest();
select crashtest();
select crashtest();
select crashtest();
select crashtest();
*/





Re: [HACKERS] TRUE/FALSE vs true/false

2011-08-04 Thread Peter Geoghegan
On 4 August 2011 13:57, Robert Haas robertmh...@gmail.com wrote:
 Oh, I see.  Well, I don't care either way, so I'll let others weigh
 in.  The way it is doesn't bother me, but fixing it doesn't bother me
 either.

Idiomatic win32 code uses BOOL and TRUE/FALSE. They are macros defined
somewhere or other.

-- 
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services



Re: [HACKERS] Compressing the AFTER TRIGGER queue

2011-08-04 Thread Kevin Grittner
Robert Haas robertmh...@gmail.com wrote:
 On Wed, Aug 3, 2011 at 6:05 PM, Jim Nasby j...@nasby.net wrote:
 Not sure how much this relates to this discussion, but I have
 often wished we had AFTER FOR EACH STATEMENT triggers that
 provided OLD and NEW recordsets you could make use of. Sometimes
 it's very valuable to be able to look at *all* the rows that
 changed in a transaction in one shot.
 
 Yeah, that would be awesome.  I think some of our competitors
 provide exactly that feature...
 
If I remember correctly, MS SQL Server and Sybase ASE provide
INSERTED and DELETED relations in triggers instead of NEW and OLD
records.  In a FOR EACH ROW trigger the relation contains only one
row.
 
This is related to the thread on BEFORE triggers, in that these
products require that you UPDATE the row in the base table to modify
it (normally by joining to the INSERTED relation), making the latest
values available to other trigger code, and providing a clear
distinction between the values coming in to the trigger and the
latest values in the database.
 
-Kevin



Re: [HACKERS] plperl crash with Debian 6 (64 bit), pl/perlu, libwww and https

2011-08-04 Thread Andrew Dunstan



On 08/04/2011 09:07 AM, Hannu Krosing wrote:

Hi

I have been helping some people to debug a SIGALARM related crash
induced by using pl/perlu http get functionality

I have been so far able to repeat the crash only on Debian 64 bit
computers. DB create script and instructions for reproducing the crash
attached

The crash is related to something leaving behind a bad SIGALRM handler,
as it can be (kind of) fixed by resetting sigalarm to nothing using perl
function


So doesn't this look like a bug in the perl module that sets the signal 
handler and doesn't restore it?


What happens if you wrap the calls to the module like this?:

{
local $SIG{ALRM};
# do LWP stuff here
}
return 'OK';


That should restore the old handler on exit from the block.

I think if you use a perl module that monkeys with the signal handlers 
for any signal postgres uses, all bets are off.



cheers

andrew





Re: [HACKERS] plperl crash with Debian 6 (64 bit), pl/perlu, libwww and https

2011-08-04 Thread Hannu Krosing
On Thu, 2011-08-04 at 15:07 +0200, Hannu Krosing wrote:
 Hi
 
 I have been helping some people to debug a SIGALARM related crash
 induced by using pl/perlu http get functionality
 
 I have been so far able to repeat the crash only on Debian 64 bit
 computers. DB create script and instructions for reproducing the crash
 attached

Resending - the previous one was in pre-edit stage with
instructions/comments in Estonian :(

 The crash is related to something leaving behind a bad SIGALRM handler,
 as it can be (kind of) fixed by resetting SIGALRM to nothing using a perl
 function 
 
 CREATE OR REPLACE FUNCTION reset_sigalarm() RETURNS VOID
 LANGUAGE plperlu
 AS $_X$
$SIG{ALRM} = 'IGNORE';
 $_X$;
 
 ( unfortunately this hoses deadlock detection and statement_timeout )
 
 Environment where this crash does happen:
 
 Debian GNU/Linux 6.0  - x86-64
 openssl 0.9.8o-4squeeze1
 postgresql-9.0  9.0.4-1~bpo60+1 
 postgresql-plperl-9.0   9.0.4-1~bpo60+1 
 libwww-perl 5.836-1  
 
 Postgresql is installed from backports
 
 It does not happen on 32 bit ubuntu
 
 

-- 
---
Hannu Krosing
PostgreSQL Infinite Scalability and Performance Consultant
PG Admin Book: http://www.2ndQuadrant.com/books/
/*
Minimal script for reproducing the error

needed languages

LANGUAGE plpgsql
LANGUAGE plperlu
Perl module: LWP::UserAgent
Perl v5.10.0
LWP::UserAgent v5.813

NB! crash happens only with HTTPS: protocol
*/

CREATE TABLE crashtest
(
  id integer,
  val varchar
);

INSERT INTO crashtest (id, val) VALUES (1, '');

CREATE OR REPLACE FUNCTION crashtest()
  RETURNS character varying AS
$BODY$
begin
perform crashtest_request();
update crashtest set val = 'XXX' where id=1;
perform pg_sleep(2);
RETURN 'OK';
END;
$BODY$
  LANGUAGE plpgsql VOLATILE SECURITY DEFINER;

CREATE OR REPLACE FUNCTION crashtest_request()
  RETURNS text AS
$BODY$
use strict;
use LWP::UserAgent;
my $ua = LWP::UserAgent->new;
my $req = HTTP::Request->new(GET => 'https://encrypted.google.com/');
# my $req = HTTP::Request->new(GET => 'https://www.paypal.com/ee/');
my $res = $ua->request($req);
return 'OK';
$BODY$
  LANGUAGE plperlu VOLATILE;

/*
-- now, run in at least 2 psql connections simultaneously
select crashtest();
select crashtest();
select crashtest();
select crashtest();
select crashtest();
select crashtest();
select crashtest();
select crashtest();
select crashtest();
select crashtest();
select crashtest();
select crashtest();
*/





Re: [HACKERS] plperl crash with Debian 6 (64 bit), pl/perlu, libwww and https

2011-08-04 Thread Hannu Krosing
On Thu, 2011-08-04 at 09:42 -0400, Andrew Dunstan wrote:
 
 On 08/04/2011 09:07 AM, Hannu Krosing wrote:
  Hi
 
  I have been helping some people to debug a SIGALARM related crash
  induced by using pl/perlu http get functionality
 
  I have been so far able to repeat the crash only on Debian 64 bit
  computers. DB create script and instructions for reproducing the crash
  attached
 
  The crash is related to something leaving behind a bad SIGALRM handler,
  as it can be (kind of) fixed by resetting SIGALRM to nothing using a perl
  function
 
 So doesn't this look like a bug in the perl module that sets the signal 
 handler and doesn't restore it?
 
 What happens if you wrap the calls to the module like this?:
 
  {
  local $SIG{ALRM};
  # do LWP stuff here
  }
  return 'OK';
 
 
 That should restore the old handler on exit from the block.
 
 I think if you use a perl module that monkeys with the signal handlers 
 for any signal postgres uses all bets are off.

Sure, but how expensive would it be for pl/perl to do this
automatically ?

 cheers
 
 andrew
 
 
 





Re: [HACKERS] Postgres / plpgsql equivalent to python's getattr() ?

2011-08-04 Thread Hannu Krosing
On Wed, 2011-08-03 at 18:48 -0400, Tom Lane wrote:
 James Robinson jlrob...@socialserve.com writes:
  Python's getattr() allows for dynamic lookup of attributes on an  
  object, as in:
 
  inst = MyClass(x=12, y=24)
  v = getattr(inst, 'x')
  assert v == 12
 
  Oftentimes in writing data validating trigger functions, it'd be real  
  handy to be able to do a similar thing in plpgsql
 
  Is there something in the internals which inherently prevent this? 
 
 plpgsql is strongly typed (much more so than python, anyway).

For example, the plpgsql type RECORD is about as strongly typed as (some)
python classes: once assigned, the record itself is typed, but the same
variable can point to different record types.

-- 
---
Hannu Krosing
PostgreSQL Infinite Scalability and Performance Consultant
PG Admin Book: http://www.2ndQuadrant.com/books/




Re: [HACKERS] Postgres / plpgsql equivalent to python's getattr() ?

2011-08-04 Thread James Robinson


On Aug 4, 2011, at 4:55 AM, Florian Pflug wrote:





@OP: Here's my implementation of the feature you desire as a set of  
C-language
functions: https://github.com/fgp/pg_record_inspect. Other people  
did code up
similar things in the past, but I currently cannot find any links to  
their work.
But a little bit of digging in the mailing list archives should turn  
them up.



Many thanks, Florian, we'll be checking that out.

James

James Robinson
Socialserve.com




Re: [HACKERS] cataloguing NOT NULL constraints

2011-08-04 Thread Alvaro Herrera
Excerpts from Nikhil Sontakke's message of jue ago 04 04:23:59 -0400 2011:

 Some nit-picking.
 
 AFAICS above, we seem to be only using 'tbinfo' to identify the object
 type here - 'table' visavis 'domain'. We could probably reduce the
 above two elses to a single one and use the check of tbinfo being not
 null to decide which object type name to spit out..

Yeah, I considered that, but I rejected the idea on the grounds that all
the preceding blocks use this style.  (Also, if I understand you well,
what you suggest would incur a translatability problem; we'd have
to create two separate messages for that purpose anyway.)

 Although, it's difficult to see how we could end up marking NOT NULL
 constraints as 'separate' ever. So this code will be rarely exercised,
 if ever IMO.

Well, as Dean points out, as soon as we have NOT VALID constraints it
will be necessary.  I prefer to leave that out for a later patch.

Thanks for looking.

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] plperl crash with Debian 6 (64 bit), pl/perlu, libwww and https

2011-08-04 Thread Alvaro Herrera
Excerpts from Hannu Krosing's message of jue ago 04 09:53:40 -0400 2011:
 On Thu, 2011-08-04 at 09:42 -0400, Andrew Dunstan wrote:
  
  On 08/04/2011 09:07 AM, Hannu Krosing wrote:

   I have been helping some people to debug a SIGALARM related crash
   induced by using pl/perlu http get functionality
  
   I have been so far able to repeat the crash only on Debian 64 bit
   computers. DB create script and instructions for reproducing the crash
   attached
  
   The crash is related to something leaving behind a bad SIGALRM handler,
   as it can be (kind of) fixed by resetting SIGALRM to nothing using a perl
   function
  
  So doesn't this look like a bug in the perl module that sets the signal 
  handler and doesn't restore it?

I vaguely remember looking in the guts of LWP::UserAgent a few years ago
and being rather annoyed at the way it dealt with sigalrm -- it
interfered with other uses we had for the signal.  I think we had to run
a patched version of that module or something, not sure.

  What happens if you wrap the calls to the module like this?:
  
   {
   local $SIG{ALRM};
   # do LWP stuff here
   }
   return 'OK';
  
  
  That should restore the old handler on exit from the block.
  
 
 Sure, but how expensive would it be for pl/perl to do this
 automatically ?

Probably too much, but then since this is an untrusted pl my guess is
that it's OK to request the user to do it only in functions that need
it.  I wonder if we could have a check on return from a function that
the sighandler is still what we had before the function was called, to
help discover this problem.

  I think if you use a perl module that monkeys with the signal handlers 
  for any signal postgres uses all bets are off.

Yeah.

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] plperl crash with Debian 6 (64 bit), pl/perlu, libwww and https

2011-08-04 Thread Tom Lane
Hannu Krosing ha...@krosing.net writes:
 On Thu, 2011-08-04 at 09:42 -0400, Andrew Dunstan wrote:
 On 08/04/2011 09:07 AM, Hannu Krosing wrote:
 The crash is related to something leaving behind a bad SIGALRM handler,

 So doesn't this look like a bug in the perl module that sets the signal 
 handler and doesn't restore it?
 I think if you use a perl module that monkeys with the signal handlers 
 for any signal postgres uses all bets are off.

 Sure, but how expensive would it be for pl/perl to do this
 automatically ?

How can anything like that possibly work with any reliability
whatsoever?  If the signal comes in, you don't know whether it was
triggered by the event Postgres expected, or the event the perl module
expected, and hence there's no way to deliver it to the right signal
handler (not that the code you're describing is even trying to do that).

What *I'd* like is a way to prevent libperl from touching the host
application's signal handlers at all.  Sadly, Perl does not actually
think of itself as an embedded library, and therefore thinks it owns all
resources of the process and can diddle them without anybody's
permission.

regards, tom lane



Re: [HACKERS] plperl crash with Debian 6 (64 bit), pl/perlu, libwww and https

2011-08-04 Thread Andrew Dunstan



On 08/04/2011 09:53 AM, Hannu Krosing wrote:



What happens if you wrap the calls to the module like this?:

  {
  local $SIG{ALRM};
  # do LWP stuff here
  }
  return 'OK';


That should restore the old handler on exit from the block.

I think if you use a perl module that monkeys with the signal handlers
for any signal postgres uses all bets are off.

Sure, but how expensive would it be for pl/perl to do this
automatically ?




Probably not very. It could possibly be added to 
plc_perlboot.pl::mkfuncsrc() after the prolog, or maybe before.


cheers

andrew



Re: [HACKERS] plperl crash with Debian 6 (64 bit), pl/perlu, libwww and https

2011-08-04 Thread Alexey Klyukin

On Aug 4, 2011, at 5:25 PM, Alvaro Herrera wrote:

 Excerpts from Hannu Krosing's message of jue ago 04 09:53:40 -0400 2011:
 On Thu, 2011-08-04 at 09:42 -0400, Andrew Dunstan wrote:
 
 On 08/04/2011 09:07 AM, Hannu Krosing wrote:
 
 I have been helping some people to debug a SIGALARM related crash
 induced by using pl/perlu http get functionality
 
 I have been so far able to repeat the crash only on Debian 64 bit
 computers. DB create script and instructions for reproducing the crash
 attached
 
 The crash is related to something leaving behind a bad SIGALRM handler,
 as it can be (kind of) fixed by resetting SIGALRM to nothing using a perl
 function
 
 So doesn't this look like a bug in the perl module that sets the signal 
 handler and doesn't restore it?
 
 I vaguely remember looking in the guts of LWP::UserAgent a few years ago
 and being rather annoyed at the way it dealt with sigalrm -- it
 interfered with other uses we had for the signal.  I think we had to run
 a patched version of that module or something, not sure.
 
 What happens if you wrap the calls to the module like this?:
 
 {
 local $SIG{ALRM};
 # do LWP stuff here
 }
 return 'OK';
 
 
 That should restore the old handler on exit from the block.
 
 
 Sure, but how expensive would it be for pl/perl to do this
 automatically ?
 
 Probably too much, but then since this is an untrusted pl my guess is
 that it's OK to request the user to do it only in functions that need
 it.  I wonder if we could have a check on return from a function that
 the sighandler is still what we had before the function was called, to
 help discover this problem.

If we can do that, then why don't we move a step further and restore the
old signal handler on mismatch?

--
Command Prompt, Inc.  http://www.CommandPrompt.com
PostgreSQL Replication, Consulting, Custom Development, 24x7 support






Re: [HACKERS] plperl crash with Debian 6 (64 bit), pl/perlu, libwww and https

2011-08-04 Thread Andrew Dunstan



On 08/04/2011 10:28 AM, Tom Lane wrote:

How can anything like that possibly work with any reliability
whatsoever?  If the signal comes in, you don't know whether it was
triggered by the event Postgres expected, or the event the perl module
expected, and hence there's no way to deliver it to the right signal
handler (not that the code you're describing is even trying to do that).


True.


What *I'd* like is a way to prevent libperl from touching the host
application's signal handlers at all.  Sadly, Perl does not actually
think of itself as an embedded library, and therefore thinks it owns all
resources of the process and can diddle them without anybody's
permission.




I'm not sure how perl (or any loadable library) could restrict that in 
loaded C code, which many perl modules call directly or indirectly. It's 
as open as, say, a loadable C function is in Postgres ;-) You have a 
gun. It's loaded. If you point it at your foot and pull the trigger 
don't blame us. I think you just need to be very careful about what you 
do with plperlu. Don't be surprised if things break.


cheers

andrew



Re: [HACKERS] Postgres / plpgsql equivalent to python's getattr() ?

2011-08-04 Thread Merlin Moncure
On Wed, Aug 3, 2011 at 4:19 PM, James Robinson jlrob...@socialserve.com wrote:
 Hackers,

 Python's getattr() allows for dynamic lookup of attributes on an object, as
 in:

        inst = MyClass(x=12, y=24)
        v = getattr(inst, 'x')
        assert v == 12

 Oftentimes in writing data validating trigger functions, it'd be real handy
 to be able to do a similar thing in plpgsql against column values in a row
 or record type, such as making use of a trigger argument for hint as what
 column to consider in this table's case. Oh, to be able to do something like
 (toy example known to be equivalent to a check):

        CREATE OR REPLACE FUNCTION must_be_positive() RETURNS TRIGGER AS
        $$
        begin
                if getattr(NEW, TG_ARGV[0]) <= 0
                then
                        raise exception(TG_ARGV[0] || ' must be positive');
                end if;

                -- after trigger
                return null;
        end;
        $$ LANGUAGE PLPGSQL;


 A function which takes a row + a text column name, and / or a peer function
 taking row + index within row would really open up plpgsql's expressivity in
 cases where you're writing mainly SQL stuff, not really wanting to go over
 to plpythonu or whatnot (whose description of rows are as dicts).

 Is there something in the internals which inherently prevent this? Or am I
 fool and it already exists?

 Not having to defer to EXECUTE would be attractive.

Aside from the other great solutions mentioned, you can run a record
type through hstore and pull fields dynamically that way.  The hstore
method is a variant of the general 'coerce everything to text'
strategy.  Florian's approach is likely faster, but more verbose?

merlin



Re: [HACKERS] lazy vxid locks, v3

2011-08-04 Thread Jeff Davis
On Mon, 2011-08-01 at 12:12 -0400, Robert Haas wrote:
 I guess you could look at that way.  It just seemed like the obvious
 way to write the code: we do LockRefindAndRelease() only if we have a
 fast-path lock that someone else has pushed into the main table.

OK, looks good to me. Marked ready for committer.

Regards,
Jeff Davis




Re: [HACKERS] WAL logging volume and CREATE TABLE

2011-08-04 Thread Bruce Momjian

Patch applied.

---

Bruce Momjian wrote:
 Alvaro Herrera wrote:
  Excerpts from Bruce Momjian's message of mar ago 02 22:46:55 -0400 2011:
  
   I have created a documentation patch to clarify this, and to mention
   CREATE TABLE AS which also has this optimization.
  
  It doesn't seem particularly better to me.  How about something like
  
  In minimal level, WAL-logging of some operations can be safely skipped,
  which can make those operations much faster (see blah).  Operations on
  which this optimization can be applied include: 
  <simplelist>
   <item>CREATE INDEX</item>
   <item>CLUSTER</item>
   <item>CREATE TABLE AS</item>
   <item>COPY into tables that were created or truncated in the same
   transaction</item>
  </simplelist>
  
  Minimal WAL does not contain enough information to reconstruct the data
  from a base backup and the WAL logs, so either <literal>archive</> or
  <literal>hot_standby</> level must be used to enable ...
 
 Good idea --- updated patch attached.
 
 -- 
   Bruce Momjian  br...@momjian.ushttp://momjian.us
   EnterpriseDB http://enterprisedb.com
 
   + It's impossible for everything to be true. +


 

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] error: could not find pg_class tuple for index 2662

2011-08-04 Thread Tom Lane
daveg da...@sonic.net writes:
 Summary: the failing process reads 0 rows from 0 blocks from the OLD
 relfilenode. 

Hmm.  This seems to mean that we're somehow missing a relation mapping
invalidation message, or perhaps not processing it soon enough during
some complex set of invalidations.  I did some testing with that in mind
but couldn't reproduce the failure.  It'd be awfully nice to get a look
at the call stack when this happens for you ... what OS are you running?

regards, tom lane



Re: [HACKERS] lazy vxid locks, v3

2011-08-04 Thread Robert Haas
On Thu, Aug 4, 2011 at 11:29 AM, Jeff Davis pg...@j-davis.com wrote:
 On Mon, 2011-08-01 at 12:12 -0400, Robert Haas wrote:
 I guess you could look at that way.  It just seemed like the obvious
 way to write the code: we do LockRefindAndRelease() only if we have a
 fast-path lock that someone else has pushed into the main table.

 OK, looks good to me. Marked ready for committer.

Thanks for the review!

Committed.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] plperl crash with Debian 6 (64 bit), pl/perlu, libwww and https

2011-08-04 Thread Hannu Krosing
On Thu, 2011-08-04 at 10:28 -0400, Tom Lane wrote:
 Hannu Krosing ha...@krosing.net writes:
  On Thu, 2011-08-04 at 09:42 -0400, Andrew Dunstan wrote:
  On 08/04/2011 09:07 AM, Hannu Krosing wrote:
  The crash is related to something leaving behind a bad SIGALRM handler,
 
  So doesn't this look like a bug in the perl module that sets the signal 
  handler and doesn't restore it?
  I think if you use a perl module that monkeys with the signal handlers 
  for any signal postgres uses all bets are off.
 
  Sure, but how expensive would it be for pl/perl to do this
  automatically ?
 
 How can anything like that possibly work with any reliability
 whatsoever?  If the signal comes in, you don't know whether it was
 triggered by the event Postgres expected, or the event the perl module
 expected, and hence there's no way to deliver it to the right signal
 handler (not that the code you're describing is even trying to do that).
 
 What *I'd* like is a way to prevent libperl from touching the host
 application's signal handlers at all.  Sadly, Perl does not actually
 think of itself as an embedded library, and therefore thinks it owns all
 resources of the process and can diddle them without anybody's
 permission.

It then seems that it is a good idea to treat any fiddling with
PostgreSQL signal handlers as an error, and raise an ERROR if any signal
handler has changed between calling the function and its return, in the way
suggested by Alvaro.

This at least forces the developer to pay attention and, in the case of a
pl/perl function, use something like the

 {
 local $SIG{ALRM};
 # do LWP stuff here
 }
 return 'OK';

trick suggested by Andrew Dunstan.

I know that this is not the real solution, but at least it is easier to
debug than leaving around signal handlers pointing to non-existent
code, which will trigger the next time the deadlock checker tries to run.

-- 
---
Hannu Krosing
PostgreSQL Infinite Scalability and Performance Consultant
PG Admin Book: http://www.2ndQuadrant.com/books/




Re: [HACKERS] cataloguing NOT NULL constraints

2011-08-04 Thread Peter Eisentraut
On tis, 2011-08-02 at 23:40 -0400, Alvaro Herrera wrote:
 Thanks.  I've done the other changes you suggested, but I don't see that
 it's desirable to have gram.y emit AT_AddConstraint directly.  It seems
 cleaner to be able to turn a NOT NULL constraint into AT_SetNotNull
 in parse_utilcmd instead.  (Maybe I'll have to bite the bullet and make
 AT_AddConstraint work for not null constraints as well, as part of the
 larger patch.  Not sure.)  Currently, the table constraint syntax only
 lets you do a single constraint at a time, but you can do multiple
 constraints with the column constraint syntax.  I am not sure how hard
 it is to rework the grammar so that only a single constraint is
 allowed, but I'm not sure that it's worth the trouble either.
 
 Attached is an updated version, touching the docs and adding a new
 simple regression test.
 
 But ... I just noticed that I need to touch ALTER DOMAIN in a similar
 way as well.

Have you considered just cataloging NOT NULL constraints as CHECK
constraints and teaching the reverse parser to convert x CHECK (x IS
NOT NULL) to x NOT NULL.  It seems to me that we're adding a whole
lot of hoopla here that is essentially identical to the existing CHECK
constraint support (it must be, per SQL standard), for no additional
functionality.




Re: [HACKERS] TRUE/FALSE vs true/false

2011-08-04 Thread Peter Eisentraut
On tor, 2011-08-04 at 14:44 +0200, Boszormenyi Zoltan wrote:
 On 2011-08-04 14:32, Robert Haas wrote:
  2011/8/4 Boszormenyi Zoltan z...@cybertec.at:
  Shouldn't these get fixed to be consistent?
  I believe I already did.  See commit 
  85b436f7b1f06a6ffa8d2f29b03d6e440de18784.
 
 I meant a mass sed -e 's/TRUE/true/g'  -e 's/FALSE/false/g' run
 so all the ~200 occurrences of both TRUE and FALSE get
 converted so the whole source tree is consistent.

I would be in favor of that.




Re: [HACKERS] patch: move dumpUserConfig call in dumpRoles function of pg_dumpall.c

2011-08-04 Thread Phil Sorber
On Tue, Aug 2, 2011 at 5:05 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Phil Sorber p...@omniti.com writes:
 I have included two patches in this email. The first
 (dump_user_config_last_with_set_role.patch) is an extension of my
 first patch. In addition to moving the ALTER ROLE statements after the
 CREATE ROLE statements it also inserts a SET ROLE after every connect.
 It takes the role parameter from the --role command line option. This
 fixes the problem of not being able to restore to a database because
 of lack of permissions. This is similar to the idea proposed here:
 http://archives.postgresql.org/pgsql-hackers/2010-12/msg01046.php

 I don't understand why you think that that will fix anything?

 The problem that Florian originally pointed out is that settings
 established by ALTER DATABASE/ROLE could interfere with the restoration
 script's actions.  That seems to be just as much of a risk for the
 --role role as the one originally used to connect.  I don't see a way
 around that other than not applying those settings until we are done
 reconnecting to the target database.

 Also, given that the --role switch is only defined to select the role
 to be used at *dump* time, I'm unconvinced that forcing it to be used
 at *restore* time is a good idea.  You'd really need to invent a
 separate switch if you were to go down this path.

                        regards, tom lane


Ok, here is the patch that just moves the ALTER/SET pieces to the end.
Can we get this included in the next commit fest?
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
new file mode 100644
index b5f64e8..d3929f0
*** a/src/bin/pg_dump/pg_dumpall.c
--- b/src/bin/pg_dump/pg_dumpall.c
*** static void dropTablespaces(PGconn *conn
*** 41,48 
  static void dumpTablespaces(PGconn *conn);
  static void dropDBs(PGconn *conn);
  static void dumpCreateDB(PGconn *conn);
! static void dumpDatabaseConfig(PGconn *conn, const char *dbname);
! static void dumpUserConfig(PGconn *conn, const char *username);
  static void dumpDbRoleConfig(PGconn *conn);
  static void makeAlterConfigCommand(PGconn *conn, const char *arrayitem,
  	   const char *type, const char *name, const char *type2,
--- 41,48 
  static void dumpTablespaces(PGconn *conn);
  static void dropDBs(PGconn *conn);
  static void dumpCreateDB(PGconn *conn);
! static void dumpDatabaseConfig(PGconn *conn);
! static void dumpUserConfig(PGconn *conn);
  static void dumpDbRoleConfig(PGconn *conn);
  static void makeAlterConfigCommand(PGconn *conn, const char *arrayitem,
  	   const char *type, const char *name, const char *type2,
*** main(int argc, char *argv[])
*** 500,517 
  		/* Dump CREATE DATABASE commands */
  		if (!globals_only && !roles_only && !tablespaces_only)
  			dumpCreateDB(conn);
  
! 		/* Dump role/database settings */
! 		if (!tablespaces_only && !roles_only)
  		{
  			if (server_version >= 90000)
  				dumpDbRoleConfig(conn);
  		}
  	}
  
- 	if (!globals_only && !roles_only && !tablespaces_only)
- 		dumpDatabases(conn);
- 
  	PQfinish(conn);
  
  	if (verbose)
--- 500,524 
  		/* Dump CREATE DATABASE commands */
  		if (!globals_only && !roles_only && !tablespaces_only)
  			dumpCreateDB(conn);
+ 	}
  
! 	if (!globals_only && !roles_only && !tablespaces_only)
! 		dumpDatabases(conn);
! 
! 	if (!data_only && !tablespaces_only && server_version >= 70300)
! 	{
! 		dumpUserConfig(conn);
! 
! 		if (!roles_only)
  		{
+ 			if (!globals_only)
+ 				dumpDatabaseConfig(conn);
+ 
  			if (server_version >= 90000)
  				dumpDbRoleConfig(conn);
  		}
  	}
  
  	PQfinish(conn);
  
  	if (verbose)
*** dumpRoles(PGconn *conn)
*** 804,812 
			 buf, "ROLE", rolename);
  
  		fprintf(OPF, "%s", buf->data);
- 
- 		if (server_version >= 70300)
- 			dumpUserConfig(conn, rolename);
  	}
  
  	PQclear(res);
--- 811,816 
*** dumpCreateDB(PGconn *conn)
*** 1358,1366 
  
  		fprintf(OPF, "%s", buf->data);
  
- 		if (server_version >= 70300)
- 			dumpDatabaseConfig(conn, dbname);
- 
  		free(fdbname);
  	}
  
--- 1362,1367 
*** dumpCreateDB(PGconn *conn)
*** 1375,1418 
   * Dump database-specific configuration
   */
  static void
! dumpDatabaseConfig(PGconn *conn, const char *dbname)
  {
! 	PQExpBuffer buf = createPQExpBuffer();
! 	int			count = 1;
  
! 	for (;;)
  	{
! 		PGresult   *res;
  
! 		if (server_version >= 90000)
! 			printfPQExpBuffer(buf, "SELECT setconfig[%d] FROM pg_db_role_setting WHERE "
! 			  "setrole = 0 AND setdatabase = (SELECT oid FROM pg_database WHERE datname = ", count);
! 		else
! 			printfPQExpBuffer(buf, "SELECT datconfig[%d] FROM pg_database WHERE datname = ", count);
! 		appendStringLiteralConn(buf, dbname, conn);
  
! 		if (server_version >= 90000)
! 			appendPQExpBuffer(buf, ")");
  
! 		appendPQExpBuffer(buf, ";");
  
! 		res = executeQuery(conn, buf->data);
! 		if (PQntuples(res) == 1 &&
! 			!PQgetisnull(res, 0, 0))
! 		{
! 			makeAlterConfigCommand(conn, PQgetvalue(res, 0, 

Re: [HACKERS] mosbench revisited

2011-08-04 Thread Robert Haas
On Wed, Aug 3, 2011 at 9:16 PM, Robert Haas robertmh...@gmail.com wrote:
 Spinlocks seem to have a very ugly tipping point.

And on that note, here are oprofile results from pgbench -n -T 300 -S
-c 64 -j 64 -M prepared on the latest master branch, compiled with
-O2 -fno-omit-frame-pointer.  shared_buffers=8GB, 64-core machine,
RHEL 6.1.  By running with -M prepared, it dodges the lseek()
problem.

960576   23.7580  postgres     postgres     s_lock
562821   13.9203  no-vmlinux   no-vmlinux   /no-vmlinux
321191    7.9440  postgres     postgres     LWLockRelease
317653    7.8565  postgres     postgres     LWLockAcquire
224812    5.5603  postgres     postgres     GetSnapshotData
81156     2.0072  postgres     postgres     _bt_compare
78744     1.9476  postgres     postgres     PinBuffer
58101     1.4370  postgres     postgres     hash_search_with_hash_value
43865     1.0849  postgres     postgres     AllocSetAlloc
25832     0.6389  postgres     postgres     PostgresMain

Since SpinLockAcquire() is an in-line macro that only calls s_lock()
if the initial TAS fails, not only the time directly attributed to
s_lock but also a good chunk of the CPU time attributable to
LWLockAcquire and LWLockRelease() is likely time spent fighting over
spinlocks.  Since I compiled with frame pointers, it's pretty easy to
see where those s_lock calls are coming from.  Here's an excerpt from
opreport -c:

  5            5.0e-04  postgres     postgres     _bt_getbuf
  6            6.0e-04  postgres     postgres     _bt_relandgetbuf
  14           0.0014   postgres     postgres     ReleaseAndReadBuffer
  85           0.0085   postgres     postgres     ReadBuffer_common
  206          0.0207   postgres     postgres     GetSnapshotData
  18344        1.8437   postgres     postgres     UnpinBuffer
  24977        2.5103   postgres     postgres     PinBuffer
  406948      40.9009   postgres     postgres     LWLockRelease
  544376      54.7133   postgres     postgres     LWLockAcquire
994947   23.5746  postgres     postgres     s_lock

It's also fairly easy to track down who is calling LWLockAcquire and
LWLockRelease.  Nearly all of the calls are from just two
contributors:

  241655   27.6830  postgres     postgres     ReadBuffer_common
  566434   64.8885  postgres     postgres     GetSnapshotData
328548    7.7847  postgres     postgres     LWLockAcquire

  176629   23.8917  postgres     postgres     ReadBuffer_common
  524348   70.9259  postgres     postgres     GetSnapshotData
332333    7.8744  postgres     postgres     LWLockRelease

So, most of the s_lock calls come from LWLockAcquire, and most of the
LWLockAcquire calls come from GetSnapshotData.  That's not quite
enough to prove that all the spinning going on here is coming from
contention over the spinlock protecting ProcArrayLock, because it
needn't be the case that all calls to LWLockAcquire are equally likely
to end up in s_lock.  You could speculate that ProcArrayLock isn't
actually responsible for many of those s_lock calls and that some
other lock, like maybe the buffer mapping locks, is disproportionately
responsible for the s_lock calls.  But in fact I think it's exactly
the other way around: the buffer mapping locks are partitioned 16
ways, while there's only one ProcArrayLock.  I'm willing to bet that's
where nearly all of the spinning is happening, and I'll further bet
that that spinning accounts for AT LEAST a third of the total CPU time
on this workload.  And maybe closer to half.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] patch: move dumpUserConfig call in dumpRoles function of pg_dumpall.c

2011-08-04 Thread Robert Haas
On Thu, Aug 4, 2011 at 1:53 PM, Phil Sorber p...@omniti.com wrote:
 Ok, here is the patch that just moves the ALTER/SET pieces to the end.
 Can we get this included in the next commit fest?

Yep, just make yourself an account and add it.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Transient plans versus the SPI API

2011-08-04 Thread Peter Eisentraut
On tis, 2011-08-02 at 16:47 -0400, Tom Lane wrote:
 The most straightforward way to reimplement things within spi.c would
 be to redefine SPI_prepare as just doing the parse-and-rewrite steps,
 with planning always postponed to SPI_execute.  In the case where you
 just prepare and then execute a SPIPlan, this would come out the same
 or better, since we'd still just do one planning cycle, but the
 planner could be given the actual parameter values to use.  However,
 if you SPI_prepare, SPI_saveplan, and then SPI_execute many times, you
 might come out behind.  This is of course the same tradeoff we are
 going to impose at the SQL level anyway, but I wonder whether there
 needs to be a control knob available to C code to retain the old
 plan-once-and-always-use-that-plan approach. 

How about a new function like SPI_parse that has the new semantics?

Note that the SPI functions are more or less directly exposed in PL/Perl
and PL/Python, and there are a number of existing idioms there that make
use of prepared plans.  Changing the semantics of those functions might
upset a lot of code.




Re: [HACKERS] psql: bogus descriptions displayed by \d+

2011-08-04 Thread Peter Eisentraut
On ons, 2011-07-27 at 17:57 -0400, Josh Kupershmidt wrote:
  I think table_name is fine, and if you are very worried, add below
 that
  a table_name also includes views (or whatever).
 
 It includes tables, views, composite types, and foreign tables. Is
 table really an appropriate description for all those objects?

Well, the facts are:  According to the SQL standard, "table" includes
views and foreign tables.  According to scientific-ish database
literature, a table is a relation and vice versa.

So to someone new who doesn't know much about the PostgreSQL jargon,
neither "table" nor "relation" are very precise at all.

But I would suggest that there is more support outside of PostgreSQL
jargon for finding that replacing "table" by "relation" does not
increase precision.

And indeed, even if you know the PostgreSQL jargon, "relation" means
anything stored in pg_class.  And in almost all cases, a given command
does not successfully operate on any kind of pg_class object.  So using
"relation" here creates some kind of illusion that will eventually fail,
forcing the user to manually figure out what actually works.

So the bottom line is, I would avoid the term "relation" and look for
other ways to add clarity and precision to the documentation.




Re: [HACKERS] psql: bogus descriptions displayed by \d+

2011-08-04 Thread Peter Eisentraut
On ons, 2011-07-27 at 18:08 -0400, Robert Haas wrote:
 Also, while it may be true that we haven't used the term specifically
 in SQL synopses, it's been extensively used in other parts of the
 documentation, in the names of system functions such as
 pg_relation_size(),

Well, that thing is just the pinnacle of silliness, because we have
pg_relation_size() and pg_table_size(), which have confusingly different
behaviors.

  and in user-visible error messages (cd
 src/backend/po; git grep relation), so I think you may be trying to
 close the barn door after the horse has got out. 

No, I'm trying to catch the horse. ;-)




Re: [HACKERS] psql: bogus descriptions displayed by \d+

2011-08-04 Thread Robert Haas
On Thu, Aug 4, 2011 at 2:30 PM, Peter Eisentraut pete...@gmx.net wrote:
 On ons, 2011-07-27 at 18:08 -0400, Robert Haas wrote:
 Also, while it may be true that we haven't used the term specifically
 in SQL synopses, it's been extensively used in other parts of the
 documentation, in the names of system functions such as
 pg_relation_size(),

 Well, that thing is just the pinnacle of silliness, because we have
 pg_relation_size() and pg_table_size(), which have confusingly different
 behaviors.

Yeah, I just got flummoxed by that yesterday.  Still, the name's out there...

  and in user-visible error messages (cd
 src/backend/po; git grep relation), so I think you may be trying to
 close the barn door after the horse has got out.

 No, I'm trying to catch the horse. ;-)

Fair enough, but I think you're not running fast enough (yet).

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] psql: bogus descriptions displayed by \d+

2011-08-04 Thread Robert Haas
On Thu, Aug 4, 2011 at 2:26 PM, Peter Eisentraut pete...@gmx.net wrote:
 On ons, 2011-07-27 at 17:57 -0400, Josh Kupershmidt wrote:
  I think table_name is fine, and if you are very worried, add below
 that
  a table_name also includes views (or whatever).

 It includes tables, views, composite types, and foreign tables. Is
 table really an appropriate description for all those objects?

 Well, the facts are:  According to the SQL standard, table includes
 views and foreign tables.  According to scientific-ish database
 literature, a table is a relation and vice versa.

So what are you supposed to call it if you mean, specifically, a table?

 So to someone new who doesn't know much about the PostgreSQL jargon,
 neither table nor relation are very precise at all.

That can be fixed by defining them better, of course...

 And indeed, even if you know the PostgreSQL jargon, relation means
 anything stored in pg_class.  And in almost all cases, a given command
 does not successfully operate on any kind of pg_class object.  So using
 relation here creates some kind of illusion that will eventually fail,
 forcing the user to manually figure out what actually works.

This argument doesn't impress me much, because it would be true of any
word we used here.  If we start using "table" to mean a table, view,
or foreign table, then we're going to have to clarify that CLUSTER
only runs on tables that are actually, uh, tables.  And what about the
error messages that say "x is not a table or view"?

And, moreover, at least in English, it's common to make a statement
about a broader class of objects that does not necessarily apply to
every type of object in the class.  When I tell my wife your cooking
is delicious, my statement is not intended to include her
tomato-and-vodka sauce, which IMHO is really terrible.  She doesn't
react with confusion and say but wait, how can you say you like my
cooking when I know that you don't like my tomato-and-vodka sauce?;
rather, she understands that I'm talking about some probably fairly
broad subset of her cooking and that if she wants to know what I think
of a specific dish, she will need to inquire specifically about that
dish.  Similarly, I believe users will understand that when they see
relation_name, they might need to check the detailed description to
know which relation types are included.

I'm not averse to using some better terminology; I agree that "relation"
is kind of corny.  But if we're going to make an effort to be
consistent here, we need to come up with something that's actually
better, and then hopefully implement it fairly broadly.  We've fallen
into saying relation mostly for lack of a better term, but we can't
start getting rid of it until we have a replacement.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Transient plans versus the SPI API

2011-08-04 Thread Tom Lane
Peter Eisentraut pete...@gmx.net writes:
 How about a new function like SPI_parse that has the new semantics?

Yeah, I'd considered that idea (and even exactly that name for it).
Howver, the disadvantage of inventing a separate entry point is that
it isn't going to be nice for multi-level call chains, of which there
are several inside the core code and probably plenty elsewhere.  The
bottom level would have to do something like

    if (new-behavior-wanted)
        SPI_parse(args...);
    else
        SPI_prepare(args...);

and then invent some way for its callers to signal new-behavior-wanted,
and it won't be pretty if they all pick different ways to do that.

Plus we've already got SPI_prepare_cursor and SPI_prepare_params, each
of which would need a matching SPI_parse_foo entry point.

So if we want a knob here, I think that the sanest way to install it is
to add a couple more flag bits to the existing int cursorOptions
bitmask arguments of the latter two functions, perhaps

CURSOR_OPT_USE_GENERIC_PLAN
CURSOR_OPT_USE_CUSTOM_PLAN

to force generic-plan-always or custom-plan-always respectively.
(The cursor naming of those flag bits is starting to look a bit
unfortunate, but I'm not inclined to rename them now.)

If we set it up like that, then the default behavior with flags == 0
would be to use the heuristic plan-selection approach, and presumably
that is what you would also get from SPI_prepare (which is both coded
and documented as matching SPI_prepare_cursor with flags == 0).

So the question is whether it's okay to change the default behavior...

 Note that the SPI functions are more or less directly exposed in PL/Perl
 and PL/Python, and there are a number of existing idioms there that make
 use of prepared plans.  Changing the semantics of those functions might
 upset a lot of code.

Right, but by the same token, if we don't change the default behavior,
there is going to be a heck of a lot of code requiring manual adjustment
before it can make use of the (hoped-to-be) improvements.  To me it
makes more sense to change the default and then provide ways for people
to lock down the behavior if the heuristic doesn't work for them.

regards, tom lane



Re: [HACKERS] psql: bogus descriptions displayed by \d+

2011-08-04 Thread Kevin Grittner
Peter Eisentraut pete...@gmx.net wrote:
 
 According to scientific-ish database literature, a table is a
 relation and vice versa.
 
I've generally understood the terms more like what is described near
the top of this page:
 
http://en.wikipedia.org/wiki/Relation_%28database%29
 
"In SQL, [...] a relation variable is called a table."
 
I'll admit that how these terms are treated depends very much on the
source, and we should define our terms to avoid confusion.  But
defining a "relation" as a set of records, and a "table" as a variable
which holds a maintainable concrete relation (or something more or
less to that effect) makes some sense to me.
 
-Kevin



Re: [HACKERS] error: could not find pg_class tuple for index 2662

2011-08-04 Thread daveg
On Thu, Aug 04, 2011 at 12:28:31PM -0400, Tom Lane wrote:
 daveg da...@sonic.net writes:
  Summary: the failing process reads 0 rows from 0 blocks from the OLD
  relfilenode. 
 
 Hmm.  This seems to mean that we're somehow missing a relation mapping
 invalidation message, or perhaps not processing it soon enough during
 some complex set of invalidations.  I did some testing with that in mind
 but couldn't reproduce the failure.  It'd be awfully nice to get a look
 at the call stack when this happens for you ... what OS are you running?

cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 5.5 (Tikanga)
Linux version 2.6.18-194.el5

I can use gdb as well if we can get a core or stop the correct process.
Perhaps a long sleep when it hits this?
Or perhaps we could log invalidate processing for pg_class?

-dg

-- 
David Gould   da...@sonic.net   510 536 1443   510 282 0869
If simplicity worked, the world would be overrun with insects.



[HACKERS] possible new feature: asynchronous sql or something like oracles dbms_job.submit

2011-08-04 Thread Vorarlberger
hi

i switched back from oracle xe for many reasons, among them the lack of
pl/java in oracle xe. and postgresql is the best open source db i know! sadly
what i really miss is the possibility of asynchronous sql, which means the
ability to perform an sql statement in a completely different new session, something
i could do in oracle with dbms_job.submit.

there are may reasons for the need of this feature and not because of the
scheduler. just one actual need: assume you have a table and every time
something gets inserted you want to notify another pq-application. lets say
this table is part of your crm application. but every new inserted customer
is an interesting part of your accounting application. instead of time
scheduling or polling you could use asynchronous sql. simply add a trigger
on your customer table and execute the add_customer_to_accounting pgsql
interface in its own session. if it fails it is not the fault of the crm
application and should not raise an exception and roll back. and of course
triggering an interface for a foreign application should not be in the
native transaction. also your crm application does not want to wait on every
insert for an interface to complete.

before someone says this is a queue: no, it is not really a queue, because
you do not need a consumer daemon. there is absolutely no need for an
additional daemon. you simply trigger a procedure and send it to the
background like you would do in a unix shell. you can also use dbms_jobs for
parallel execution or to recalculate materialized views.

so i would like to ask whether you feel comfortable with this idea
and could possibly implement it in postgres 10.x?

thanks and cheers
chris


[HACKERS] PQescapeByteaConn - returns wrong string for PG9.1 Beta3

2011-08-04 Thread Petro Meier
Please let me clarify the bug:

 CREATE TABLE testtable

(

  ID integer NOT NULL,

  BinaryContents bytea

);

 INSERT INTO testtable (ID, BinaryContents) values (1, 
E'\xea2abd8ef3');

 returns invalid byte sequence. 

 '\xea2abd8ef3' is the string delivered by the PG 9.1 Beta3 server 
when calling PQescapeByteaConn(). It cannot be further processed by the 
server itself afterwards! There is a leading '\' missing. 

 When calling the function for a PG 9.0.1 server, then the result 
(correctly) is '\\xea2abd8ef3' (with a double-backslash!), and then the 
insert works fine, both, with PG9.1 Beta3 and PG9.0.1

 It is a serious issue, as it will break all existing PostgreSQL 
applications that deal with binary contents and use PQescapeByteaConn().


 Best regards

Petro
-- 
Empfehlen Sie GMX DSL Ihren Freunden und Bekannten und wir
belohnen Sie mit bis zu 50,- Euro! https://freundschaftswerbung.gmx.de


[HACKERS] Reduce WAL logging of INSERT SELECT

2011-08-04 Thread Bruce Momjian
We currently have reduced WAL logging for wal_level = minimum for these
commands:

CREATE TABLE AS
CREATE INDEX
CLUSTER
COPY into tables that were created or truncated in the same
transaction

One thing we don't optimize is INSERT ... SELECT when the table is
created or truncated in the same transaction.  Seems we could.

We optimize CREATE TABLE AS which is effectively SELECT ... INTO using a
different syntax.  Is this a TODO?

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] error: could not find pg_class tuple for index 2662

2011-08-04 Thread daveg
On Thu, Aug 04, 2011 at 12:28:31PM -0400, Tom Lane wrote:
 daveg da...@sonic.net writes:
  Summary: the failing process reads 0 rows from 0 blocks from the OLD
  relfilenode. 
 
 Hmm.  This seems to mean that we're somehow missing a relation mapping
 invalidation message, or perhaps not processing it soon enough during
 some complex set of invalidations.  I did some testing with that in mind
 but couldn't reproduce the failure.  It'd be awfully nice to get a look
 at the call stack when this happens for you ... what OS are you running?

To recap, a few observations:

When it happens the victim has recently been waiting on a lock for
several seconds.

We create a lot of temp tables, hundreds of thousands a day.

There are catalog vacuum fulls and reindexes running on 30 odd other databases
at the same time. The script estimates the amount of bloat on each table and
index and chooses either reindex on specific indexes or vacuum full as needed.

This is a 32 core (64 with hyperthreading) 512GB host with several hundred
connections.

We are seeing 'cannot read' and 'cannot open' errors too that would be
consistent with trying to use a vanished file.

-dg

-- 
David Gould   da...@sonic.net   510 536 1443   510 282 0869
If simplicity worked, the world would be overrun with insects.



Re: [HACKERS] possible new feature: asynchronous sql or something like oracles dbms_job.submit

2011-08-04 Thread Joe Conway
On 08/03/2011 09:20 AM, Vorarlberger wrote:
 sadly what i realy miss is the possibility of asynchronous sql. which
 means the need to perform an sql in a completely different new session.
 something i could do in oracle with dbms_job.submit.

Would this work for you?
http://www.postgresql.org/docs/9.0/interactive/contrib-dblink-send-query.html

Joe

-- 
Joe Conway
credativ LLC: http://www.credativ.us
Linux, PostgreSQL, and general Open Source
Training, Service, Consulting,  24x7 Support



Re: [HACKERS] cataloguing NOT NULL constraints

2011-08-04 Thread Alvaro Herrera
Excerpts from Peter Eisentraut's message of Thu Aug 04 13:57:54 -0400 2011:
 On tis, 2011-08-02 at 23:40 -0400, Alvaro Herrera wrote:
  Thanks.  I've done the other changes you suggested, but I don't see that
  it's desirable to have gram.y emit AT_AddConstraint directly.  It seems
  cleaner to be able to turn a NOT NULL constraint into AT_SetNotNull
  in parse_utilcmd instead.  (Maybe I'll have to bite the bullet and make
  AT_AddConstraint work for not null constraints as well, as part of the
  larger patch.  Not sure.)  Currently, the table constraint syntax only
  lets you do a single constraint at a time, but you can do multiple
  constraints with the column constraint syntax.  I am not sure how hard
  it is to rework the grammar so that only a single constraint is
  allowed, but I'm not sure that it's worth the trouble either.

 Have you considered just cataloging NOT NULL constraints as CHECK
 constraints and teaching the reverse parser to convert "x CHECK (x IS
 NOT NULL)" to "x NOT NULL"?

Hmm, no, I admit I haven't.  The current approach was suggested very
early in the history of this patch.  (To be honest I didn't know NOT
NULL constraints were special forms of CHECK constraints until you
mentioned the other day regarding the information schema, and then it
didn't occur to me that it might make sense to represent them as such in
the catalog).

 It seems to me that we're adding a whole
 lot of hoopla here that is essentially identical to the existing CHECK
 constraint support (it must be, per SQL standard), for no additional
 functionality.

Yeah, perhaps you're right.  The main reason they were considered
separately is that we wanted to have them to be optimized via
pg_attribute.attnotnull, but my patch does away with the need for that
because it is maintained separately anyway.

Before embarking on rewriting this patch from scratch, I would like to
know what's your opinion (or the SQL standard's) on the fact that this
patch separated the PRIMARY KEY from NOT NULL constraints, so that they
don't act exactly alike (to wit, the not-nullness of a PK does not
inherit while the one from a NOT NULL constraint does).

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] error: could not find pg_class tuple for index 2662

2011-08-04 Thread Tom Lane
daveg da...@sonic.net writes:
 We are seeing 'cannot read' and 'cannot open' errors too that would be
 consistent with trying to use a vanished file.

Yeah, these all seem consistent with the idea that the failing backend
somehow missed an update for the relation mapping file.  You would get
the "could not find pg_class tuple" syndrome if the process was holding
an open file descriptor for the now-deleted file, and otherwise "cannot
open"/"cannot read" type errors.  And unless it later received another
sinval message for the relation mapping file, the errors would persist.

If this theory is correct then all of the file-related errors ought to
match up to recently-vacuumed mapped catalogs or indexes (those are the
ones with relfilenode = 0 in pg_class).  Do you want to expand your
logging of the VACUUM FULL actions and see if you can confirm that idea?

Since the machine is running RHEL, I think we can use glibc's
backtrace() function to get simple stack traces without too much effort.
I'll write and test a patch and send it along in a bit.

regards, tom lane



Re: [HACKERS] PQescapeByteaConn - returns wrong string for PG9.1 Beta3

2011-08-04 Thread k...@rice.edu
On Wed, Aug 03, 2011 at 03:19:06PM +0200, Petro Meier wrote:
 Please let me clarify the bug:
 
  CREATE TABLE testtable
 
 (
 
   ID integer NOT NULL,
 
   BinaryContents bytea
 
 );
 
  INSERT INTO testtable (ID, BinaryContents) values (1, 
 E'\xea2abd8ef3');
 
  returns invalid byte sequence. 
 
  '\xea2abd8ef3' is the string delivered by the PG 9.1 Beta3 server 
 when calling PQescapeByteaConn(). It cannot be further processed by the 
 server itself afterwards! There is a leading '\' missing. 
 
  When calling the function for a PG 9.0.1 server, then the result 
 (correctly) is '\\xea2abd8ef3' (with a double-backslash!), and then the 
 insert works fine, both, with PG9.1 Beta3 and PG9.0.1
 
  It is a serious issue, as it will break all existing PostgreSQL 
 applications that deal with binary contents and use PQescapeByteaConn().
 
 
  Best regards
 
 Petro

That looks correct for the new default of standard_conforming_strings
turned on in 9.1+. The command you should be using is:

INSERT INTO testtable (ID, BinaryContents) values (1, '\xea2abd8ef3');

Regards,
Ken



Re: [HACKERS] Further news on Clang - spurious warnings

2011-08-04 Thread Peter Geoghegan
Can we please commit a fix for this problem?

-- 
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services



Re: [HACKERS] PQescapeByteaConn - returns wrong string for PG9.1 Beta3

2011-08-04 Thread Tom Lane
Petro Meier petr...@gmx.de writes:
  INSERT INTO testtable (ID, BinaryContents) values (1, 
 E'\xea2abd8ef3');
  returns invalid byte sequence. 

  '\xea2abd8ef3' is the string delivered by the PG 9.1 Beta3 server 
 when calling PQescapeByteaConn(). It cannot be further processed by the 
 server itself afterwards! There is a leading '\' missing. 

No, there isn't.  What you are doing wrong is prepending an E to the
literal.  You should not be doing that, neither in 9.1 nor any previous
version.

regards, tom lane



Re: [HACKERS] Reduce WAL logging of INSERT SELECT

2011-08-04 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 One thing we don't optimize is INSERT ... SELECT when the table is
 created or truncated in the same transaction.  Seems we could.

 We optimize CREATE TABLE AS which is effectively SELECT ... INTO using a
 different syntax.  Is this a TODO?

Considering that SELECT INTO is deprecated, I don't think we should be
expending effort to encourage people to use it.

regards, tom lane



Re: [HACKERS] Reduce WAL logging of INSERT SELECT

2011-08-04 Thread Andrew Dunstan



On 08/04/2011 04:55 PM, Tom Lane wrote:

Bruce Momjianbr...@momjian.us  writes:

One thing we don't optimize is INSERT ... SELECT when the table is
created or truncated in the same transaction.  Seems we could.
We optimize CREATE TABLE AS which is effectively SELECT ... INTO using a
different syntax.  Is this a TODO?

Considering that SELECT INTO is deprecated, I don't think we should be
expending effort to encourage people to use it.





Right, but the original point about INSERT ... SELECT seems reasonable, no?

cheers

andrew



Re: [HACKERS] mosbench revisited

2011-08-04 Thread Aidan Van Dyk
On Wed, Aug 3, 2011 at 5:04 PM, Robert Haas robertmh...@gmail.com wrote:

  And hoping that the Linux guys decide to do something about it.
  This isn't really our bug - lseek is quite cheap in the uncontended
 case.

Has anyone tried this on a recent kernel (i.e. 2.6.39 or later), where
they've finally remove the BKL out of VFS/inode?

I mean, complaining about scalability in linux 2.6.18 is like
complaining about scalability in postgresql 8.2 ;-)

a.

-- 
Aidan Van Dyk                                             Create like a god,
ai...@highrise.ca                                       command like a king,
http://www.highrise.ca/                                   work like a slave.



Re: [HACKERS] error: could not find pg_class tuple for index 2662

2011-08-04 Thread daveg
On Thu, Aug 04, 2011 at 04:16:08PM -0400, Tom Lane wrote:
 daveg da...@sonic.net writes:
  We are seeing cannot read' and 'cannot open' errors too that would be
 consistent with trying to use a vanished file.
 
 Yeah, these all seem consistent with the idea that the failing backend
 somehow missed an update for the relation mapping file.  You would get
 the could not find pg_class tuple syndrome if the process was holding
 an open file descriptor for the now-deleted file, and otherwise cannot
 open/cannot read type errors.  And unless it later received another
 sinval message for the relation mapping file, the errors would persist.
 
 If this theory is correct then all of the file-related errors ought to
 match up to recently-vacuumed mapped catalogs or indexes (those are the
 ones with relfilenode = 0 in pg_class).  Do you want to expand your
 logging of the VACUUM FULL actions and see if you can confirm that idea?

At your service, what would you like to see?
 
 Since the machine is running RHEL, I think we can use glibc's
 backtrace() function to get simple stack traces without too much effort.
 I'll write and test a patch and send it along in a bit.

Great.

Any point to try to capture SI events somehow?

-dg

-- 
David Gould   da...@sonic.net  510 536 1443  510 282 0869
If simplicity worked, the world would be overrun with insects.



Re: [HACKERS] error: could not find pg_class tuple for index 2662

2011-08-04 Thread Tom Lane
daveg da...@sonic.net writes:
 On Thu, Aug 04, 2011 at 04:16:08PM -0400, Tom Lane wrote:
 If this theory is correct then all of the file-related errors ought to
 match up to recently-vacuumed mapped catalogs or indexes (those are the
 ones with relfilenode = 0 in pg_class).  Do you want to expand your
 logging of the VACUUM FULL actions and see if you can confirm that idea?

 At your service, what would you like to see?

I was thinking log the before-and-after filenode values each time you do
a VACUUM FULL, and then go through the logs to see if all the
file-related complaints refer to recently obsoleted filenodes.

regards, tom lane



Re: [HACKERS] Reduce WAL logging of INSERT SELECT

2011-08-04 Thread Bruce Momjian
Andrew Dunstan wrote:
 
 
 On 08/04/2011 04:55 PM, Tom Lane wrote:
  Bruce Momjianbr...@momjian.us  writes:
  One thing we don't optimize is INSERT ... SELECT when the table is
  created or truncated in the same transaction.  Seems we could.
  We optimize CREATE TABLE AS which is effectively SELECT ... INTO using a
  different syntax.  Is this a TODO?
  Considering that SELECT INTO is deprecated, I don't think we should be
  expending effort to encourage people to use it.
 
  
 
 
 Right, but the original point about INSERT ... SELECT seems reasonable, no?

Right.  I brought up SELECT INTO because you could make the argument
that INSERT ... SELECT is not a utility command like the other ones and
therefore can't be done easily, but CREATE TABLE AS is internal SELECT
INTO and implemented in execMain.c, which I think is where INSERT ...
SELECT would also be implemented.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] Reduce WAL logging of INSERT SELECT

2011-08-04 Thread Simon Riggs
On Thu, Aug 4, 2011 at 10:46 PM, Bruce Momjian br...@momjian.us wrote:

 Right.  I brought up SELECT INTO because you could make the argument
 that INSERT ... SELECT is not a utility command like the other ones and
 therefore can't be done easily, but CREATE TABLE AS is internal SELECT
 INTO and implemented in execMain.c, which I think is where INSERT ...
 SELECT would also be implemented.

What you should be asking is whether the optimisation would be
effective for INSERT SELECT, or even test it.

My observation is that the optimisation is only effective for very
large loads that cause I/O. As RAM sizes get bigger, I'm inclined to
remove the optimisation and make it optional since it is ineffective
in many cases and negative benefit for smaller cases.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [HACKERS] PQescapeByteaConn - returns wrong string for PG9.1 Beta3

2011-08-04 Thread Florian Pflug
On Aug4, 2011, at 22:54 , Tom Lane wrote:
 Petro Meier petr...@gmx.de writes:
 INSERT INTO testtable (ID, BinaryContents) values (1, 
 E'\xea2abd8ef3');
 returns invalid byte sequence. 
 
 '\xea2abd8ef3' is the string delivered by the PG 9.1 Beta3 server 
 when calling PQescapeByteaConn(). It cannot be further processed by the 
 server itself afterwards! There is a leading '\' missing. 
 
 No, there isn't.  What you are doing wrong is prepending an E to the
 literal.  You should not be doing that, neither in 9.1 nor any previous
 version.

Just to clarify what's going on here, in case the OP is still puzzled.

Postgres supports both a legacy mode where backslashes serve as an escape
character in single-quoted strings, and an SQL standard-compliant mode where
they don't. The mode is chosen by setting the GUC standard_conforming_strings
to either on or off. Independent of the current standard_conforming_strings
setting, one can always force a string to be interpreted with legacy
semantics (i.e. with backslash as an escape character) by prefixing the string
literal with E.

Thus, assuming that standard_conforming_strings is set to on, a string
containing exactly one backslash can be written as either
  '\' or
  E'\\',
while with standard_conforming_strings set to off, you'd have to use
  '\\' or
  E'\\'

PQescapeByteaConn() emits one backslash if it detects that
standard_conforming_strings is set to on for the given connection, and two if
it detects off. The string is thus always correctly interpreted by the
backend as long as you *don't* prefix it with E. If you do, you force the
backend to always interpret it with legacy semantics. Which of course causes
trouble if standard_conforming_strings is set to on, because then
PQescapeByteaConn()'s expectation of the backend's behaviour (standard mode)
and its actual behaviour (legacy mode) no longer match.

The reason that things appeared to work for you on 9.0 is that all versions
before 9.1 have standard_conforming_strings set to off by default. If you try
your code on 9.0, but with standard_conforming_strings set to on, you'll
observe the same breakage you observe on 9.1.

Exactly the same is true for PQescapeStringConn().
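[Editor's note: the hex-format escaping rule described above can be sketched in
Python. This is an illustration only — the real work happens in libpq's C code,
and the function name here is invented for the sketch.]

```python
def escape_bytea_hex(data: bytes, std_conforming_strings: bool) -> str:
    """Mimic what PQescapeByteaConn() emits for a bytea value in hex format
    (PostgreSQL 9.0+).  With standard_conforming_strings=on the backend
    takes backslashes literally, so one backslash suffices; with it off,
    the backslash itself must be doubled to survive escape processing."""
    prefix = "\\x" if std_conforming_strings else "\\\\x"
    return prefix + data.hex()

payload = bytes.fromhex("ea2abd8ef3")
# 9.1 default (standard_conforming_strings = on): single backslash
assert escape_bytea_hex(payload, True) == "\\xea2abd8ef3"
# pre-9.1 default (off): doubled backslash, as the OP saw on 9.0.1
assert escape_bytea_hex(payload, False) == "\\\\xea2abd8ef3"
```

Either result must then be wrapped in plain single quotes in the SQL statement,
never in E'...'.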

best regards,
Florian Pflug




Re: [HACKERS] Reduce WAL logging of INSERT SELECT

2011-08-04 Thread Bruce Momjian
Simon Riggs wrote:
 On Thu, Aug 4, 2011 at 10:46 PM, Bruce Momjian br...@momjian.us wrote:
 
  Right. ?I brought up SELECT INTO because you could make the argument
  that INSERT ... SELECT is not a utility command like the other ones and
  therefore can't be done easily, but CREATE TABLE AS is internal SELECT
  INTO and implemented in execMain.c, which I think is where INSERT ...
  SELECT would also be implemented.
 
 What you should be asking is whether the optimisation would be
 effective for INSERT SELECT, or even test it.
 
 My observation is that the optimisation is only effective for very
 large loads that cause I/O. As RAM sizes get bigger, I'm inclined to
 remove the optimisation and make it optional since it is ineffective
 in many cases and negative benefit for smaller cases.

I am confused how generating WAL traffic that is larger than the heap
file we are fsync'ing can possibly be slower.  Are you just throwing out
an idea to try to make me prove it?

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] plperl crash with Debian 6 (64 bit), pl/perlu, libwww and https

2011-08-04 Thread Alex Hunsaker
On Thu, Aug 4, 2011 at 09:11, Andrew Dunstan and...@dunslane.net wrote:

 What *I'd* like is a way to prevent libperl from touching the host
 application's signal handlers at all.  Sadly, Perl does not actually
 think of itself as an embedded library, and therefore thinks it owns all
 resources of the process and can diddle them without anybody's
 permission.



 I'm not sure how perl (or any loadable library) could restrict that in
 loaded C code, which many perl modules call directly or indirectly. It's as
 open as, say, a loadable C function is in Postgres ;-) You have a gun. It's
 loaded. If you point it at your foot and pull the trigger don't blame us. I
 think you just need to be very careful about what you do with plperlu. Don't
 be surprised if things break.

Well we can't prevent perl XS (aka C) from messing with signals (and
other modules like POSIX that expose things like sigprocmask,
siglongjump etc.) , but we could prevent plperl(u) from playing with
signals on the perl level ala %SIG.

[ IIRC I proposed doing something about this when we were talking
about the whole Safe mess, but I think there was too much other
discussion going on at the time :-) ]

Mainly the options I'm thinking about are:
1) if anyone touches %SIG die
2) turn %SIG into a regular hash so people can set/play with %SIG, but
it has no real effect.
3) local %SIG before we call their trigger function. This lets signals
still work while in trigger scope (like we do for %_TD)
4) if we can't get any of the above to work we can save each %SIG
handler before and restore them after each trigger call. (mod_perl
does something similar so I'm fairly certain we should be able to get
that to work)

Thoughts?
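[Editor's note: option 4 — snapshot the host's handlers, run the untrusted
code, then restore — can be sketched with Python's signal module. The actual
fix would live in plperl's C code; this only illustrates the save/restore
pattern, and all names here are invented.]

```python
import signal

# Handlers the host application cares about (a stand-in for the ones
# Postgres installs; the real list would be larger).
PROTECTED = (signal.SIGALRM, signal.SIGUSR1)

def run_preserving_handlers(func):
    """Snapshot the current signal handlers, let the possibly rogue
    function run (and clobber them), then restore the snapshot."""
    saved = {sig: signal.getsignal(sig) for sig in PROTECTED}
    try:
        return func()
    finally:
        for sig, handler in saved.items():
            signal.signal(sig, handler)

def host_alarm(signum, frame):
    pass

signal.signal(signal.SIGALRM, host_alarm)   # host's handler

def rogue_trigger():
    # like perl code assigning to $SIG{ALRM}
    signal.signal(signal.SIGALRM, signal.SIG_IGN)
    return 42

assert run_preserving_handlers(rogue_trigger) == 42
assert signal.getsignal(signal.SIGALRM) is host_alarm   # restored
```

Note that this only restores handlers afterwards; it does nothing for a signal
delivered while the rogue handler is installed, which is the weakness discussed
further down-thread.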



Re: [HACKERS] mosbench revisited

2011-08-04 Thread Robert Haas
On Thu, Aug 4, 2011 at 5:09 PM, Aidan Van Dyk ai...@highrise.ca wrote:
 On Wed, Aug 3, 2011 at 5:04 PM, Robert Haas robertmh...@gmail.com wrote:

      And hoping that the Linux guys decide to do something about it.
  This isn't really our bug - lseek is quite cheap in the uncontended
 case.

 Has anyone tried this on a recent kernel (i.e. 2.6.39 or later), where
 they've finally remove the BKL out of VFS/inode?

 I mean, complaining about scalability in linux 2.6.18 is like
 complaining about scalability in postgresql 8.2 ;-)

Hmm.  This machine is running 2.6.32-131.6.1.el6.x86_64, not 2.6.18.
Not sure how much the code has changed since then, but the spinlock is
there in the master branch of Linus's repository.
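[Editor's note: the kernel call at issue is the lseek(SEEK_END) that Postgres
uses to learn a relation's current length. A rough Python rendering of the
idea, with an invented function name:]

```python
import os
import tempfile

def nblocks(fd, block_size=8192):
    """Roughly how PostgreSQL's mdnblocks() learns a relation's length:
    seek to the end of the file and divide by the block size.  Cheap per
    call, but on the kernels discussed above every lseek() contended on a
    per-inode lock, which hurts at high concurrency."""
    return os.lseek(fd, 0, os.SEEK_END) // block_size

with tempfile.TemporaryFile() as f:
    f.write(b"\x00" * (3 * 8192))
    f.flush()
    assert nblocks(f.fileno()) == 3
```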

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Transient plans versus the SPI API

2011-08-04 Thread Jeff Davis
On Tue, 2011-08-02 at 16:47 -0400, Tom Lane wrote:
 The most straightforward way to reimplement things within spi.c would be
 to redefine SPI_prepare as just doing the parse-and-rewrite steps, with
 planning always postponed to SPI_execute.  In the case where you just
 prepare and then execute a SPIPlan, this would come out the same or
 better, since we'd still just do one planning cycle, but the planner could
 be given the actual parameter values to use.  However, if you SPI_prepare,
 SPI_saveplan, and then SPI_execute many times, you might come out behind.
 This is of course the same tradeoff we are going to impose at the SQL level
 anyway, but I wonder whether there needs to be a control knob available to
 C code to retain the old plan-once-and-always-use-that-plan approach.

Would there ultimately be a difference between the way SPI_prepare and
PQprepare work? It seems like the needs would be about the same, so I
think we should be consistent.

Also, I assume that SPI_execute and PQexecParams would always force a
custom plan, just like always, right?

A control knob sounds limited. For instance, what if the application
knows that some parameters will be constant over the time that the plan
is saved? It would be nice to be able to bind some parameters to come up
with a generic (but less generic) plan, and then execute it many times.
Right now that can only be done by inlining such constants in the SQL,
which is what we want to avoid.

I'm a little bothered by prepare sometimes planning and sometimes not
(and, by implication, execute_plan sometimes planning and sometimes
not). It seems cleaner to just separate the steps into parse+rewrite,
bind parameters, plan (with whatever parameters are present, giving a
more generic plan when some aren't specified), and execute (which would
require you to specify any parameters not bound yet). Maybe we don't
need to expose all of those steps (although maybe we do), but it would
be nice if the API we do offer resembles those steps.
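[Editor's note: the parse/bind/plan/execute separation described above could
look something like the following sketch. The API is hypothetical — names and
behaviour are invented for illustration, not proposed code.]

```python
class PreparedQuery:
    """Sketch of the pipeline: parse+rewrite once, bind any subset of
    parameters (letting the planner treat them as constants), execute
    with the rest."""
    def __init__(self, sql, param_names):
        self.sql = sql                        # parse + rewrite would happen here
        self.param_names = set(param_names)
        self.bound = {}

    def bind(self, **params):
        # Fix some parameters now; a plan built after this can be less
        # generic (and hopefully better) for the remaining parameters.
        q = PreparedQuery(self.sql, self.param_names)
        q.bound = {**self.bound, **params}
        return q

    def execute(self, **params):
        values = {**self.bound, **params}
        missing = self.param_names - values.keys()
        if missing:
            raise ValueError(f"unbound parameters: {sorted(missing)}")
        # Planning with all known values, then executing, stands in for
        # the real plan/execute steps.
        return ("plan+execute", self.sql, tuple(sorted(values.items())))

q = PreparedQuery("SELECT * FROM t WHERE a = $1 AND b = $2", ["$1", "$2"])
partly = q.bind(**{"$1": 42})            # constant in $1, generic in $2
result = partly.execute(**{"$2": "x"})
assert result[2] == (("$1", 42), ("$2", "x"))
```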

Regards,
Jeff Davis




Re: [HACKERS] plperl crash with Debian 6 (64 bit), pl/perlu, libwww and https

2011-08-04 Thread David E. Wheeler
On Aug 4, 2011, at 3:09 PM, Alex Hunsaker wrote:

 Mainly the options im thinking about are:
 1) if anyone touches %SIG die
 2) turn %SIG into a regular hash so people can set/play with %SIG, but
 it has no real effect.

These would disable stuff like $SIG{__WARN__} and $SIG{__DIE__}, which would be 
an unfortunate side-effect.

 3) local %SIG before we call their trigger function. This lets signals
 still work while in trigger scope (like we do for %_TD)

+1

 4) if we can't get any of the above to work we can save each %SIG
 handler before and restore them after each trigger call. (mod_perl
 does something similar so Im fairly certain we should be able to get
 that to work)

+1

Best,

David




Re: [HACKERS] plperl crash with Debian 6 (64 bit), pl/perlu, libwww and https

2011-08-04 Thread Alex Hunsaker
On Thu, Aug 4, 2011 at 16:34, David E. Wheeler da...@kineticode.com wrote:
 On Aug 4, 2011, at 3:09 PM, Alex Hunsaker wrote:

 Mainly the options im thinking about are:
 1) if anyone touches %SIG die
 2) turn %SIG into a regular hash so people can set/play with %SIG, but
 it has no real effect.

 These would disable stuff like $SIG{__WARN__} and $SIG{__DIE__}, which would 
 be an unfortunate side-effect.

Yeah,  good point.

 3) local %SIG before we call their trigger function. This lets signals
 still work while in trigger scope (like we do for %_TD)

 +1

That seems to be what most people up-thread thought as well. I don't
see it being too expensive. I'll see if I can whip something up today.



Re: [HACKERS] error: could not find pg_class tuple for index 2662

2011-08-04 Thread Tom Lane
Ahh ... you know what, never mind about stack traces, let's just see if
the attached patch doesn't fix it.

I still haven't reproduced the behavior here, but I think I see what
must be happening: we are getting an sinval reset while attempting to
open pg_class_oid_index.  The latter condition causes its refcount to
be above 1, which will cause RelationClearRelation to directly call
RelationReloadIndexInfo, which enables the following sequence of calls:
RelationCacheInvalidate - RelationClearRelation - RelationReloadIndexInfo.
And the problem is that RelationCacheInvalidate intentionally forces
pg_class_oid_index to be updated first.  That was okay when the code was
written, because the relcache entry for pg_class itself never really
needed any updates.  Now it does, so we have to make sure pg_class gets
updated first, *then* pg_class_oid_index (which might result in a
seqscan of pg_class), then everything else (for which we'll try to use
pg_class_oid_index to locate their pg_class tuples).

regards, tom lane

diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 81cea8b..0e4b17c 100644
*** a/src/backend/utils/cache/relcache.c
--- b/src/backend/utils/cache/relcache.c
*** RelationCacheInvalidate(void)
*** 2185,2204 ****
  		{
  			/*
  			 * Add this entry to list of stuff to rebuild in second pass.
! 			 * pg_class_oid_index goes on the front of rebuildFirstList, other
! 			 * nailed indexes on the back, and everything else into
! 			 * rebuildList (in no particular order).
  			 */
! 			if (relation->rd_isnailed &&
! 				relation->rd_rel->relkind == RELKIND_INDEX)
  			{
  				if (RelationGetRelid(relation) == ClassOidIndexId)
- 					rebuildFirstList = lcons(relation, rebuildFirstList);
- 				else
  					rebuildFirstList = lappend(rebuildFirstList, relation);
  			}
  			else
! 				rebuildList = lcons(relation, rebuildList);
  		}
  	}
  
--- 2185,2207 ----
  		{
  			/*
  			 * Add this entry to list of stuff to rebuild in second pass.
! 			 * pg_class goes on the front of rebuildFirstList,
! 			 * pg_class_oid_index goes to the back of rebuildFirstList, other
! 			 * nailed indexes go on the front of rebuildList, and everything
! 			 * else goes to the back of rebuildList.
  			 */
! 			if (RelationGetRelid(relation) == RelationRelationId)
! 				rebuildFirstList = lcons(relation, rebuildFirstList);
! 			else if (relation->rd_isnailed &&
! 					 relation->rd_rel->relkind == RELKIND_INDEX)
  			{
  				if (RelationGetRelid(relation) == ClassOidIndexId)
  					rebuildFirstList = lappend(rebuildFirstList, relation);
+ 				else
+ 					rebuildList = lcons(relation, rebuildList);
  			}
  			else
! 				rebuildList = lappend(rebuildList, relation);
  		}
  	}
  



Re: [HACKERS] plperl crash with Debian 6 (64 bit), pl/perlu, libwww and https

2011-08-04 Thread Tom Lane
Alex Hunsaker bada...@gmail.com writes:
 On Thu, Aug 4, 2011 at 16:34, David E. Wheeler da...@kineticode.com wrote:
 On Aug 4, 2011, at 3:09 PM, Alex Hunsaker wrote:
 3) local %SIG before we call their trigger function. This lets signals
 still work while in trigger scope (like we do for %_TD)

 +1

 That seems to be what most people up-thread thought as well. I dont
 see it being too expensive. Ill see if I can whip something up today.

The scenario I was imagining was:

1. perl temporarily takes over SIGALRM.

2. while perl function is running, statement_timeout expires, causing
SIGALRM to be delivered.

3. perl code is probably totally confused, and even if it isn't,
statement_timeout will not be enforced since Postgres won't ever get the
interrupt.

Even if you don't think statement_timeout is a particularly critical
piece of functionality, similar interference with the delivery of, say,
SIGUSR1 would be catastrophic.

How do you propose to prevent this sort of problem?

regards, tom lane



Re: [HACKERS] Transient plans versus the SPI API

2011-08-04 Thread Jeff Davis
On Wed, 2011-08-03 at 12:19 -0400, Tom Lane wrote:
 Of course we could address the worst cases by providing some mechanism
 to tell the plancache code always use a generic plan for this query
 or always use a custom plan.  I'm not entirely thrilled with that,
 because it's effectively a planner hint and has got the same problems
 as all planner hints, namely that users are likely to get it wrong.

I'm not entirely convinced by that. It's fairly challenging for a human
to choose a good plan for a moderately complex SQL query, and its much
more likely that the plan will become a bad one over time. But, in many
cases, a developer knows if they simply don't care about planning time,
and are willing to always replan.

Also, we have a fairly reasonable model for planning SQL queries, but
I'm not sure that the model for determining whether to replan a SQL
query is quite as clear. Simon brought up some useful points along these
lines.
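[Editor's note: one conceivable shape for a replan heuristic — explicitly an
invented illustration, not PostgreSQL's actual behaviour — is to replan for the
first few executions to gather cost data, then compare against the generic
plan:]

```python
def use_custom_plan(n_custom_plans, total_custom_cost, generic_cost,
                    min_samples=5, slack=1.1):
    """Replan with actual parameter values for the first min_samples
    executions; afterwards, keep replanning only while the generic plan's
    estimated cost is meaningfully worse than the average custom plan."""
    if n_custom_plans < min_samples:
        return True
    avg_custom = total_custom_cost / n_custom_plans
    return generic_cost > avg_custom * slack

assert use_custom_plan(2, 20.0, 5.0) is True      # still sampling
assert use_custom_plan(10, 100.0, 10.5) is False  # generic close enough
assert use_custom_plan(10, 100.0, 50.0) is True   # generic much worse
```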

Regards,
Jeff Davis




Re: [HACKERS] Transient plans versus the SPI API

2011-08-04 Thread Jeff Davis
On Wed, 2011-08-03 at 13:07 -0400, Robert Haas wrote:
 A little OT here, but (as I think Simon said elsewhere) I think we
 really ought to be considering the table statistics when deciding
 whether or not to replan.  It seems to me that the overwhelmingly
 common case where this is going to come up is when (some subset of)
 the MCVs require a different plan than run-of-the-mill values.  It
 would be nice to somehow work that out.

That blurs the line a little bit. It sounds like this might be described
as incremental planning, and perhaps that's a good way to think about
it.

Regards,
Jeff Davis




Re: [HACKERS] plperl crash with Debian 6 (64 bit), pl/perlu, libwww and https

2011-08-04 Thread Alex Hunsaker
On Thu, Aug 4, 2011 at 17:52, Tom Lane t...@sss.pgh.pa.us wrote:
 Alex Hunsaker bada...@gmail.com writes:
 On Thu, Aug 4, 2011 at 16:34, David E. Wheeler da...@kineticode.com wrote:
 On Aug 4, 2011, at 3:09 PM, Alex Hunsaker wrote:
 3) local %SIG before we call their trigger function. This lets signals
 still work while in trigger scope (like we do for %_TD)

 +1

 That seems to be what most people up-thread thought as well. I dont
 see it being too expensive. Ill see if I can whip something up today.

 The scenario I was imagining was:
 [ $SIG{ALRM} + statement timeout-- what happens?]
 
 Even if you don't think statement_timeout is a particularly critical
 piece of functionality, similar interference with the delivery of, say,
 SIGUSR1 would be catastrophic.

Yipes, I see your point.

 How do you propose to prevent this sort of problem?

Well, I think that makes it unworkable.

So back to #1 or #2.

For plperlu it sounds like we are going to need to disallow setting _any_
signals (minus __DIE__ and __WARN__). I should be able to make it so that
when you try, it gives you a warning, something along the lines of
plperl can't set signal handlers, ignoring

For plperl I think we should probably do the same. It seems like
Andrew might disagree though? Anyone else want to chime in on if
plperl lets you muck with signal handlers?

I'm not entirely sure how much of this is workable; I still need to go
through perl's guts and see. At the very worst I think we can mark
each signal handler that is exposed in %SIG readonly (which would mean
we would die instead of warning), but I think I can make the warning
variant workable as well.

I also have not dug deep enough to know how to handle __WARN__ and
__DIE__ (and exactly what limitations allowing those will impose). I
still have some work at $day_job before I can really dig into this.



Re: [HACKERS] Reduce WAL logging of INSERT SELECT

2011-08-04 Thread Jeff Davis
On Thu, 2011-08-04 at 17:46 -0400, Bruce Momjian wrote:
 Right.  I brought up SELECT INTO because you could make the argument
 that INSERT ... SELECT is not a utility command like the other ones and
 therefore can't be done easily, but CREATE TABLE AS is internal SELECT
 INTO and implemented in execMain.c, which I think is where INSERT ...
 SELECT would also be implemented.

The above statement is a little confusing, so let me start from the
beginning:

How could we avoid WAL logging for INSERT ... SELECT?

The way we do it for CREATE TABLE AS is because nobody would even *see*
the table if our transaction doesn't commit. Therefore we don't need to
bother logging it. Same can be said for SELECT INTO.

INSERT ... SELECT is just an insert. It needs just as much logging as
inserting tuples any other way. For instance, it will potentially share
pages with other inserts, and better properly record all such page
modifications so that they return to a consistent state.

Regards,
Jeff Davis






Re: [HACKERS] Reduce WAL logging of INSERT SELECT

2011-08-04 Thread Bruce Momjian
Jeff Davis wrote:
 On Thu, 2011-08-04 at 17:46 -0400, Bruce Momjian wrote:
  Right.  I brought up SELECT INTO because you could make the argument
  that INSERT ... SELECT is not a utility command like the other ones and
  therefore can't be done easily, but CREATE TABLE AS is internal SELECT
  INTO and implemented in execMain.c, which I think is where INSERT ...
  SELECT would also be implemented.
 
 The above statement is a little confusing, so let me start from the
 beginning:
 
 How could we avoid WAL logging for INSERT ... SELECT?
 
 The way we do it for CREATE TABLE AS is because nobody would even *see*
 the table if our transaction doesn't commit. Therefore we don't need to
 bother logging it. Same can be said for SELECT INTO.
 
 INSERT ... SELECT is just an insert. It needs just as much logging as
 inserting tuples any other way. For instance, it will potentially share
 pages with other inserts, and better properly record all such page
 modifications so that they return to a consistent state.

It would act like COPY, meaning the table would have to be truncated or
created in the same transaction.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] psql: bogus descriptions displayed by \d+

2011-08-04 Thread Jeff Davis
On Thu, 2011-08-04 at 14:20 -0500, Kevin Grittner wrote:
 Peter Eisentraut pete...@gmx.net wrote:
  
  According to scientific-ish database literature, a table is a
  relation and vice versa.
  
 I've generally understood the terms more like what is described near
 the top of this page:
  
 http://en.wikipedia.org/wiki/Relation_%28database%29
  
 In SQL, [...] a relation variable is called a table.

The SQL spec also uses table to refer to a *value*. So we certainly
can't turn that around and say a table in SQL is a relation variable.

It's all a bit loose anyway, because SQL tables aren't really relations
or relation variables (for instance, they can contain duplicates).

Regards,
Jeff Davis




Re: [HACKERS] Reduce WAL logging of INSERT SELECT

2011-08-04 Thread Robert Haas
On Thu, Aug 4, 2011 at 8:55 PM, Bruce Momjian br...@momjian.us wrote:
 Jeff Davis wrote:
 On Thu, 2011-08-04 at 17:46 -0400, Bruce Momjian wrote:
  Right.  I brought up SELECT INTO because you could make the argument
  that INSERT ... SELECT is not a utility command like the other ones and
  therefore can't be done easily, but CREATE TABLE AS is internal SELECT
  INTO and implemented in execMain.c, which I think is where INSERT ...
  SELECT would also be implemented.

 The above statement is a little confusing, so let me start from the
 beginning:

 How could we avoid WAL logging for INSERT ... SELECT?

 The way we do it for CREATE TABLE AS is because nobody would even *see*
 the table if our transaction doesn't commit. Therefore we don't need to
 bother logging it. Same can be said for SELECT INTO.

 INSERT ... SELECT is just an insert. It needs just as much logging as
 inserting tuples any other way. For instance, it will potentially share
 pages with other inserts, and better properly record all such page
 modifications so that they return to a consistent state.

 It would act like COPY, meaning the table would have to be truncated or
 created in the same transaction.

It seems to me that, if we know the relation was created or truncated
in the current transaction, and if wal_level = minimal, then we don't
need to WAL-log *anything* until transaction commit (provided we fsync
at commit).

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Reduce WAL logging of INSERT SELECT

2011-08-04 Thread Jeff Davis
On Thu, 2011-08-04 at 20:55 -0400, Bruce Momjian wrote:
 It would act like COPY, meaning the table would have to be truncated or
 created in the same transaction.

Well, in that case it could work for any INSERT. No need for a SELECT to
be involved. For that matter, why not make it work for DELETE and
UPDATE, too?

However, I think this is all just a workaround for not having a faster
loading path. I don't object to applying this optimization to inserts,
but I think it might be more productive to figure out if we can support
loading data efficiently -- i.e. also set hint bits and frozenxid during
the load.

Regards,
Jeff Davis





Re: [HACKERS] plperl crash with Debian 6 (64 bit), pl/perlu, libwww and https

2011-08-04 Thread Andrew Dunstan



On 08/04/2011 08:44 PM, Alex Hunsaker wrote:

On Thu, Aug 4, 2011 at 17:52, Tom Lane t...@sss.pgh.pa.us wrote:

Alex Hunsaker bada...@gmail.com writes:

On Thu, Aug 4, 2011 at 16:34, David E. Wheeler da...@kineticode.com wrote:

On Aug 4, 2011, at 3:09 PM, Alex Hunsaker wrote:

3) local %SIG before we call their trigger function. This lets signals
still work while in trigger scope (like we do for %_TD)

+1

That seems to be what most people up-thread thought as well. I don't
see it being too expensive. I'll see if I can whip something up today.

The scenario I was imagining was:
[ $SIG{ALRM} + statement timeout -- what happens? ]

Even if you don't think statement_timeout is a particularly critical
piece of functionality, similar interference with the delivery of, say,
SIGUSR1 would be catastrophic.

Yipes, I see your point.


How do you propose to prevent this sort of problem?

Well, I think that makes it unworkable.

So back to #1 or #2.

For plperlu it sounds like we are going to need to disallow setting _any_
signals (minus __DIE__ and __WARN__). I should be able to make it so that
when you try, it gives you a warning, something along the lines of
"plperl can't set signal handlers, ignoring".

For plperl I think we should probably do the same. It seems like
Andrew might disagree though? Anyone else want to chime in on if
plperl lets you muck with signal handlers?

I'm not entirely sure how much of this is workable; I still need to go
through perl's guts and see. At the very worst I think we can mark
each signal handler that is exposed in %SIG readonly (which would mean
we would die instead of warning), but I think I can make the warning
variant workable as well.

I also have not dug deep enough to know how to handle __WARN__ and
__DIE__ (and exactly what limitations allowing those will impose). I
still have some work at $day_job before I can really dig into this.


Let's slow down a bit. Nobody that we know of has encountered the 
problem Tom's referring to, over all the years plperlu has been 
available. The changes you're proposing have the potential to downgrade 
the usefulness of plperlu considerably without fixing anything that's 
known to be an actual problem. Instead of fixing a problem caused by 
using LWP you could well make LWP totally unusable from plperlu.


And it still won't do a thing about signal handlers installed by C code.

And plperlu would be the tip of the iceberg. What about all the other 
PLs, not to mention non-PL loadable modules?


But we *can* fix the original problem reported, namely failure to 
restore signal handlers on function exit, with very little downside 
(assuming it's shown to be fairly cheap).



cheers

andrew





Re: [HACKERS] psql: display of object comments

2011-08-04 Thread Josh Kupershmidt
On Thu, Aug 4, 2011 at 12:26 PM, Robert Haas robertmh...@gmail.com wrote:
 On Tue, Jul 26, 2011 at 8:38 PM, Josh Kupershmidt schmi...@gmail.com wrote:
 [new patch]

 I've committed the portion of this that displays comments on languages
 and casts.

Thanks!

 For domains and conversions, I am wondering if we should display the
 comments only when + is specified, since the output is fairly wide
 already.

I wasn't sure whether there was some sort of precedent for whether
comments should be displayed only in verbose mode, but looking through
the existing backslash commands, it seems reasonable to make it
verbose-only if the output is already pushing 80 characters for
typical usage (object names and other column outputs of lengths
typically encountered).

A few existing backslash commands, such as \dn and maybe \db, don't
exactly follow this precedent. Not sure if we want to bother adjusting
the existing commands to be consistent in this regard. Defining
typical usage is pretty wishy-washy, so I'm not real inclined to try
messing with the existing commands.

 For foreign data wrappers, foreign servers, and foreign tables, I am
 wondering if there is any particular rule we should adhere to in terms
 of where the description shows up in the output column list.  It
 doesn't seem entirely consistent the way you've done it here, but
 maybe you've put more thought into it than I have.

Hrm, what wasn't consistent? I intended to just put the Description
column at the end of the outputs for \dew, \des, and \det, which seems
to be the way other commands handle this. Though now that I double
check, I notice that the verbose modes of these commands include extra
columns which come after Description, and it might be better to
force Description to stay at the end in those cases, the way that
\dT[+] and \dFt[+] do. Though perhaps you're complaining about
something different -- \dL isn't forcing Description to the end in
verbose mode.

Josh



Re: [HACKERS] PQescapeByteaConn - returns wrong string for PG9.1 Beta3

2011-08-04 Thread Tom Lane
Florian Pflug f...@phlo.org writes:
 Just to clarify what's going on here, in case the OP is still puzzled.
 [ lots o detail snipped ]

Right.  Thanks for writing out what I didn't have time for today...

regards, tom lane



Re: [HACKERS] plperl crash with Debian 6 (64 bit), pl/perlu, libwww and https

2011-08-04 Thread Alex Hunsaker
On Thu, Aug 4, 2011 at 19:40, Andrew Dunstan and...@dunslane.net wrote:

 Let's slow down a bit. Nobody that we know of has encountered the problem
 Tom's referring to, over all the years plperlu has been available. The
 changes you're proposing have the potential to downgrade the usefulness of
 plperlu considerably without fixing anything that's known to be an actual
 problem. Instead of fixing a problem caused by using LWP you could well make
 LWP totally unusable from plperlu.

Well, I'm not sure about it making LWP totally unusable... You could
always use statement_timeout if you were worried about it blocking
^_^.

 And it still won't do a thing about signal handlers installed by C code.

 And plperlu would be the tip of the iceberg. What about all the other PLs,
 not to mention non-PL loadable modules?

Maybe the answer is to re-issue the appropriate pqsignals() instead of
doing the perl variant?

For PL/Perl(u) we could still disallow any signals the postmaster
uses, from my quick look that would be: HUP, INT, TERM, QUIT, ALRM,
PIPE, USR1, USR2, FPE. All we would need to do is restore ALRM.

Or am I too paranoid about someone shooting themselves in the foot via
USR1? (BTW you can set signals in plperl, but you can't call alarm()
or kill())



[HACKERS] Re: [GENERAL] Odd VACUUM behavior when it is expected to truncate last empty pages

2011-08-04 Thread Sergey Konoplev
Thank you very much, your explanation helped a lot.

This is the tool I needed the solution for
http://code.google.com/p/pc-tools/ if you are interested.

On 4 August 2011 01:10, Pavan Deolasee pavan.deola...@gmail.com wrote:
 On Wed, Aug 3, 2011 at 12:33 PM, Pavan Deolasee
 pavan.deola...@gmail.com wrote:


 The only problem, other than a surprising behavior that you noted,
 that I see with this approach is that we might repeatedly try to
 truncate a relation which in fact does not have anything to truncate.
 The worst thing is we might unnecessarily take an exclusive lock on
 the table.


 So it seems we tried to fix this issue sometime back
 http://archives.postgresql.org/pgsql-hackers/2008-12/msg01994.php

 But I don't quite understand how the fix would really work.
 nonempty_pages would most likely be set at a value lower than relpages
 if the last page in the relation is all-visible according to the
 visibility map. Did we mean to test (nonempty_pages > 0) there ? But
 even that may not work except for the case when there are no dead
 tuples in the relation.

 Thanks,
 Pavan

 --
 Pavan Deolasee
 EnterpriseDB     http://www.enterprisedb.com




-- 
Sergey Konoplev

Blog: http://gray-hemp.blogspot.com /
Linkedin: http://ru.linkedin.com/in/grayhemp /
JID/GTalk: gray...@gmail.com / Skype: gray-hemp



Re: [HACKERS] Reduce WAL logging of INSERT SELECT

2011-08-04 Thread Heikki Linnakangas

On 05.08.2011 04:23, Jeff Davis wrote:

On Thu, 2011-08-04 at 20:55 -0400, Bruce Momjian wrote:

It would act like COPY, meaning the table would have to be truncated or
created in the same transaction.


Well, in that case it could work for any INSERT. No need for a SELECT to
be involved. For that matter, why not make it work for DELETE and
UPDATE, too?


Yep. If we are to expand it, we should make it work for any operation.

However, for small operations it's a net loss - you avoid writing a WAL 
record, but you have to fsync() the heap instead. If you only modify a 
few rows, the extra fsync (or fsyncs if there are indexes too) is more 
expensive than writing the WAL.


We'd need a heuristic to decide whether to write WAL or fsync at the 
end. For regular INSERTs, UPDATEs and DELETEs, you have the planner's 
estimate of number of rows affected. Another thing we should do is move 
the fsync call from the end of COPY (and other such operations) to the 
end of transaction. That way if you do e.g one COPY followed by a bunch 
of smaller INSERTs or UPDATEs, you only need to fsync the files once.



However, I think this is all just a workaround for not having a faster
loading path. I don't object to applying this optimization to inserts,
but I think it might be more productive to figure out if we can support
loading data efficiently -- i.e. also set hint bits and frozenxid during
the load.


Yeah, that would make a much bigger impact in practice.

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
