[HACKERS] CONTINUE error, even though inside a loop

2005-06-22 Thread Michael Fuhr
I'm getting "CONTINUE cannot be used outside a loop" errors even
though it's inside a loop.  The error appears to be happening when
CONTINUE passes control to the beginning of the loop but there's
no more iterating to be done.  I'd expect the loop to end at this
point instead of getting an error.  Or did I miss something in the
discussion?

CREATE FUNCTION foo(x integer) RETURNS integer AS $$
DECLARE
i  integer;
BEGIN
FOR i IN 1 .. x LOOP
RAISE INFO 'before CONTINUE: i = %', i;
CONTINUE WHEN i = 2;
RAISE INFO 'after CONTINUE: i = %', i;
END LOOP;

RETURN x;
END;
$$ LANGUAGE plpgsql IMMUTABLE STRICT;

SELECT foo(1);
INFO:  before CONTINUE: i = 1
ERROR:  CONTINUE cannot be used outside a loop
CONTEXT:  PL/pgSQL function foo

SELECT foo(2);
INFO:  before CONTINUE: i = 1
INFO:  before CONTINUE: i = 2
ERROR:  CONTINUE cannot be used outside a loop
CONTEXT:  PL/pgSQL function foo

SELECT foo(3);
INFO:  before CONTINUE: i = 1
INFO:  before CONTINUE: i = 2
INFO:  before CONTINUE: i = 3
INFO:  after CONTINUE: i = 3
 foo 
-
   3
(1 row)
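
For anyone following along, the semantics Michael expects are ordinary
structured-loop behavior: CONTINUE skips to the next iteration, and when
none remain the loop simply ends. A shell analogue of foo(2) (just an
illustration of the intended control flow, not PL/pgSQL):

```shell
# Mirror of foo(2): "CONTINUE WHEN i = 2" should skip the rest of the
# body; with no iterations left, the loop then ends normally.
for i in 1 2; do
    echo "before CONTINUE: i = $i"
    if [ "$i" -eq 2 ]; then
        continue    # like CONTINUE WHEN i = 2
    fi
    echo "after CONTINUE: i = $i"
done
echo "returned: 2"   # like RETURN x
```

So the expected INFO lines are "before" and "after" for i = 1, only
"before" for i = 2, and no error.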

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] CONTINUE error, even though inside a loop

2005-06-22 Thread Neil Conway

Michael Fuhr wrote:

> I'm getting "CONTINUE cannot be used outside a loop" errors even
> though it's inside a loop.  The error appears to be happening when
> CONTINUE passes control to the beginning of the loop but there's
> no more iterating to be done.


Woops, sorry for missing this. This should be fixed in HEAD; thanks for 
the report.


-Neil

---(end of broadcast)---
TIP 6: Have you searched our list archives?

  http://archives.postgresql.org


Re: [HACKERS] Schedule for 8.1 feature freeze

2005-06-22 Thread Fabien COELHO


Dear Stephen,


> I'd really like to see role support added into 8.1.


I'm also pretty interested in this, and was loosely planning to think
about implementing roles someday. It is even better if it is done by
someone else ;-)


> I've sent Alvaro and Tom versions of the patch in the past and I was
> planning on submitting it to -patches soon.  There are a few remaining
> issues, but I don't think they'll take very long to clean up; I've just
> been unfortunately pretty busy lately.  I'm hopeful that I'll be able to
> spend some time on it this week and next week to address at least the
> remaining issues on my list.


I could not find this patch, although I wanted to have a look. After some
thinking on the subject, ISTM that there are 4 key issues for roles to be
really interesting in postgresql, and I just wanted to state them:


(1) the role rights should *supercede* group/user rights.
    E.g. if an admin chooses to act in the role "readonly-client",
    then they should be restricted to what this role allows.

(2) roles should be per *catalog* (database), not per cluster
    (installation) as users and groups are.

(3) role management should be a natural privilege of the database *owner*.

(4) user/group/role will make "permission denied" errors even harder to
    comprehend and solve than they already are, so instead of
    "permission denied on table foo", give a more precise form such as
    "permission denied on table foo: cannot perform update in role bla".

Otherwise, ISTM that roles would not be an improvement over the current
group concept in postgresql.
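
Point (1) is essentially intersection semantics: assuming a role should
*restrict* the session to the role's rights rather than add to them. A rough
sketch of the intended check (shell pseudocode; the permission names and the
"readonly-client" role are made up for illustration, not PostgreSQL code):

```shell
# A user's own rights, and the rights of the role they have assumed.
user_perms="select insert update delete"
role_perms="select"                  # e.g. a "readonly-client" role

# Effective rights = intersection of user rights and role rights:
# the role supersedes whatever the user could otherwise do.
effective=""
for p in $user_perms; do
    case " $role_perms " in
        *" $p "*) effective="$effective $p" ;;
    esac
done
echo "effective:$effective"          # prints: effective: select
```

Under this rule an admin acting as "readonly-client" ends up with only
"select", regardless of their own broader rights.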


Maybe this is already what was submitted, but I could not check...

Thanks for working on this useful feature, have a nice day,

--
Fabien.

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [HACKERS] pg_terminate_backend idea

2005-06-22 Thread Magnus Hagander
>> But it still requires me to send some data (such as a dummy query) to
>> the backend before it exits. This is because server-side libpq blocks
>> when reading and ignores signals at this time. I believe the fix for
>> this would be to pass a flag down to the libpq routines that we want
>> to abort in case of signal+flag, set only when doing the main call to
>> recv, so we can kill idle processes.
>
> Yech!  That code is messy enough already, let's not pile another kluge
> atop it in order to handle something that's not even being requested
> AFAIR.

While I agree it's a bit of a kludge, saying that it's not requested is
absolutely and totally untrue. It has been requested *a lot*. People
even use a method that is now *known* to be unsafe, simply because we do
not provide another alternative.

Therefore, I prefer a kludge to nothing at all. But a proper solution
is of course better.


> In any case the correct way to solve the problem is to find
> out what's being left corrupt by SIGTERM, rather than install
> more messiness in order to avoid facing the real issue ...

That is unfortunately way over my head. And it doesn't seem like anybody
who actually has what it takes to do the proper solution is interested
in doing it.


//Magnus

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] Server instrumentation patch

2005-06-22 Thread Dave Page
 

> -----Original Message-----
> From: Bruce Momjian [mailto:[EMAIL PROTECTED]
> Sent: 21 June 2005 18:06
> To: Dave Page
> Cc: PostgreSQL-development; Andreas Pflug
> Subject: Server instrumentation patch
>
> OK, let me address this, but you might not like what I have
> to say.  ;-)
>
> Basically, Andreas' approach for 8.0 was to develop a patch (without
> posting a proposal or interface), and then argue why pgAdmin needs it,
> but without addressing the real concerns about the patch.  Saying
> "pgAdmin needs it" just isn't enough to get a patch in.  There are the
> issues of security and maintainability that have to be addressed, and
> in the limited time we had to do this in 8.0, it was clear the patch
> should not be applied.

The reason it happened that way was because we already had the code as a
contrib-style module for pgAdmin. It was posted because we recognised
that it was becoming a PITA for pgAdmin users to compile a new
server-side component, and the functions seemed like they would be useful
to other tools similar to pgAdmin.

Yes, this is not the normal way to propose new features, but I'm sure
you appreciate that, as a picture speaks a thousand words, posting the
*existing* code with minor changes to properly integrate it shows
exactly what is being proposed, in both functional and implementation
detail.

> Now, in 8.1, the same thing has happened.  Two weeks before feature
> freeze, with no discussion, the patch appears, and makes no reference to
> concerns raised during the 8.0 discussion.

OK, first, it was the 10th of June, which is a little more than two weeks;
however, Andreas clearly did reference previous discussions on the
subject - see his message
http://archives.postgresql.org/pgsql-patches/2005-06/msg00226.php in
which he points out that 2 functions are from the logger subprocess patch
from 07/2004, and that the file-related stuff is based on discussions
starting at
http://archives.postgresql.org/pgsql-patches/2004-07/msg00287.php,
including comments from yourself!

> pg_terminate_backend is even in the patch, and there is no mention or
> attempt to address concerns we had in 8.0.

No. I cannot argue with that, and for that reason I suggested that
Andreas repost the patch without that function so it can be properly
discussed and implemented in a safe way in the future. I'm sure you have
seen the reposted patch.

> The move of dbsize into the backend is similar.  He moves the parts of
> dbsize that pgAdmin needs into the backend, but makes no mention of or
> change to /contrib/dbsize to adjust it to the movement of the code. He
> has since posted an updated version that fixes this, I think, but
> again, we have to discuss how this is to be done --- do we move all the
> dbsize functions into the backend, some, or none?  Do the other dbsize
> functions stay in /contrib or get deleted?

Well, as far as I can see, Andreas did respond to all queries about it,
and then posted his updated patch after it became apparent no one else
was going to discuss the issue further -
http://archives.postgresql.org/pgsql-patches/2005-06/msg00309.php. From
what I can see, no one has argued or disagreed with his opinion given a
few days to do so, therefore there was nothing further to discuss.

Unfortunately sometimes people don't respond - either because they don't
care, or because they agree. Either way, *we* cannot force a discussion,
and in this sort of development model we have no choice but to assume
that if discussion of an issue stops and there are no outstanding
queries, concerns or objections, it's because everyone is happy for
the result of those discussions to be accepted into the project.

> This needs discussion, not a patch.  And because there are so many
> assumptions made in the patch, the patch committers look unreasonable
> asking for X changes to his patch, when in fact he made X assumptions in
> the patch and never asked anyone before developing the patch about those
> assumptions.

With the exception of the now-removed pg_terminate_backend, I am unaware
of any issues that are outstanding. If the committers have issues, they
*must* raise them for *any* submitted patch; otherwise developers will
lose faith in the process when their hard work gets ignored.

Now, to try to get this ball rolling again - do the committers or anyone
else have any outstanding issues with the instrumentation or dbsize
patches that haven't been answered in public discussion and addressed in
the patches already?

Regards, Dave.

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] User Quota Implementation

2005-06-22 Thread Yann Michel
Hi again,

On Mon, Jun 13, 2005 at 04:47:20PM -0600, Jonah H. Harris wrote:
> Well... a maximum tablespace size would be much easier to implement and
> would still accomplish this level of quota for larger organizations and
> database systems.
>
> I vote for implementing the maximum tablespace size and revisiting actual
> user/group quotas when the need arises.
>
> Was someone going to implement this?  If not, I can probably get it done
> in a couple days.

are you still working on this, or what has happened to the idea of
MAXSIZE?

Regards,
Yann

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] pg_terminate_backend idea

2005-06-22 Thread Andreas Pflug

Bruce Momjian wrote:

> Tom Lane wrote:
>
>> Magnus Hagander [EMAIL PROTECTED] writes:
>>
>>> But it still requires me to send some data (such as a dummy query) to
>>> the backend before it exits. This is because server-side libpq blocks
>>> when reading and ignores signals at this time. I believe the fix for
>>> this would be to pass a flag down to the libpq routines that we want
>>> to abort in case of signal+flag, set only when doing the main call to
>>> recv, so we can kill idle processes.
>>
>> Yech!  That code is messy enough already, let's not pile another kluge
>> atop it in order to handle something that's not even being requested
>> AFAIR.
>>
>> In any case the correct way to solve the problem is to find out what's
>> being left corrupt by SIGTERM, rather than install more messiness in
>> order to avoid facing the real issue ...
>
> I am confused.  Are you talking about the client SIGTERM or the server?
> I thought we agreed that using the cancel functionality, which we know
> works and is tested,

I've seen cancel *not* working. In 80% of cases this was the reason to use
terminate.


Regards,
Andreas

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] Server instrumentation patch

2005-06-22 Thread Dave Page
 

> -----Original Message-----
> From: Bruce Momjian [mailto:[EMAIL PROTECTED]
> Sent: 22 June 2005 04:08
> To: Andreas Pflug
> Cc: Dave Page; PostgreSQL-development
> Subject: Re: Server instrumentation patch
>
>>> The move of dbsize into the backend is similar.  He moves the parts of
>>> dbsize that pgAdmin needs into the backend, but makes no mention of or
>>> change to /contrib/dbsize to adjust it to the movement of the code. He
>>> has since posted an updated version that fixes this, I think, but
>>> again, we have to discuss how this is to be done --- do we move all the
>>> dbsize functions into the backend, some, or none?  Do the other dbsize
>>> functions stay in /contrib or get deleted?
>>> This needs discussion, not a patch.  And because there are so many
>>> assumptions made in the patch, the patch committers look unreasonable
>>> asking for X changes to his patch, when in fact he made X assumptions in
>>> the patch and never asked anyone before developing the patch about those
>>> assumptions.
>>
>> This was discussed at length starting May 11th, except for the broken
>> dbsize functions. My post is the result of that.
>
> Really?  Where?  I don't remember anything about it.
I imagine that would be this lengthy thread:
http://archives.postgresql.org/pgsql-hackers/2005-05/msg00837.php

Regards, Dave.

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] PROPOSAL FE/BE extension to handle IN/OUT parameters

2005-06-22 Thread Dave Cramer

Tom,

This will work just great; please go ahead, and I'll adjust the
driver accordingly.


Dave
On 21-Jun-05, at 5:49 PM, Tom Lane wrote:


Dave Cramer [EMAIL PROTECTED] writes:


> Yeah, I think that might work if I understand it correctly.
> Assuming I would be able to prepare, and bind all the parameters, and
> the OUT parameters would be ignored.



This is what I've got in mind:

regression=# create function myfunc(f1 int, f2 int, out sum int, out prod int)
regression-# language plpgsql strict immutable as $$
regression$# begin
regression$#   sum := f1 + f2;
regression$#   prod := f1 * f2;
regression$# end$$;
CREATE FUNCTION
regression=# select * from myfunc(11,22);
 sum | prod
-+--
  33 |  242
(1 row)

Using PREPARE/EXECUTE as a SQL-level substitute for Parse/Bind/Execute
messages, the CVS-tip behavior is

regression=# prepare foo(int,int,void,void) as
regression-# select * from myfunc($1,$2,$3,$4);
ERROR:  function myfunc(integer, integer, void, void) does not exist

and with the attached patch you'd get

regression=# prepare foo(int,int,void,void) as
regression-# select * from myfunc($1,$2,$3,$4);
PREPARE
regression=# execute foo(11,22,null,null);
 sum | prod
-+--
  33 |  242
(1 row)

Does that solve your problem?

regards, tom lane


*** src/backend/parser/parse_func.c.orig	Mon May 30 21:03:23 2005
--- src/backend/parser/parse_func.c	Tue Jun 21 17:43:51 2005
***************
*** 64,69 ****
--- 64,70 ----
  	Oid			rettype;
  	Oid			funcid;
  	ListCell   *l;
+ 	ListCell   *nextl;
  	Node	   *first_arg = NULL;
  	int			nargs = list_length(fargs);
  	int			argn;
***************
*** 85,90 ****
--- 86,118 ----
  				 errmsg("cannot pass more than %d arguments to a function",
  						FUNC_MAX_ARGS)));
  
+ 	/*
+ 	 * Extract arg type info in preparation for function lookup.
+ 	 *
+ 	 * If any arguments are Param markers of type VOID, we discard them
+ 	 * from the parameter list.  This is a hack to allow the JDBC driver
+ 	 * to not have to distinguish input and output parameter symbols
+ 	 * while parsing function-call constructs.  We can't use foreach()
+ 	 * because we may modify the list ...
+ 	 */
+ 	argn = 0;
+ 	for (l = list_head(fargs); l != NULL; l = nextl)
+ 	{
+ 		Node	   *arg = lfirst(l);
+ 		Oid			argtype = exprType(arg);
+ 
+ 		nextl = lnext(l);
+ 
+ 		if (argtype == VOIDOID && IsA(arg, Param))
+ 		{
+ 			fargs = list_delete_ptr(fargs, arg);
+ 			nargs--;
+ 			continue;
+ 		}
+ 
+ 		actual_arg_types[argn++] = argtype;
+ 	}
+ 
  	if (fargs)
  	{
  		first_arg = linitial(fargs);
***************
*** 99,105 ****
  	 */
  	if (nargs == 1 && !agg_star && !agg_distinct && list_length(funcname) == 1)
  	{
! 		Oid			argtype = exprType(first_arg);
  
  		if (argtype == RECORDOID || ISCOMPLEX(argtype))
  		{
--- 127,133 ----
  	 */
  	if (nargs == 1 && !agg_star && !agg_distinct && list_length(funcname) == 1)
  	{
! 		Oid			argtype = actual_arg_types[0];
  
  		if (argtype == RECORDOID || ISCOMPLEX(argtype))
  		{
***************
*** 117,134 ****
  	}
  
  	/*
! 	 * Okay, it's not a column projection, so it must really be a
! 	 * function. Extract arg type info in preparation for function lookup.
! 	 */
! 	argn = 0;
! 	foreach(l, fargs)
! 	{
! 		Node	   *arg = lfirst(l);
! 
! 		actual_arg_types[argn++] = exprType(arg);
! 	}
! 
! 	/*
  	 * func_get_detail looks up the function in the catalogs, does
  	 * disambiguation for polymorphic functions, handles inheritance, and
  	 * returns the funcid and type and set or singleton status of the
--- 145,151 ----
  	}
  
  	/*
! 	 * Okay, it's not a column projection, so it must really be a function.
  	 * func_get_detail looks up the function in the catalogs, does
  	 * disambiguation for polymorphic functions, handles inheritance, and
  	 * returns the funcid and type and set or singleton status of the






---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] pl/pgsql: END verbosity

2005-06-22 Thread Andrew Dunstan
Neil Conway said:
> Andrew Dunstan wrote:
>> I'm unkeen. I see no technical advantage - it's just a matter of
>> taste.
>
> There is no technical advantage to case-insensitive keywords, or
> dollar quoting, or a variety of other programming language features
> that don't change functionality but exist to make using the
> programming language easier.



But this doesn't make it easier to use - the users of a language don't just
include those who write it. Ada, the antecedent language from which this
syntax comes, was explicitly designed to be reader-friendly as opposed to
writer-friendly, and this is a part of that. I can tell you from experience
of programming Ada a long time ago that I have been profoundly grateful that
this was required in the language when disentangling a badly written 1000+
line multibranch IF statement. And I still find myself having to hunt
for what sort of block a "}" is closing in C, and I still find it annoying.

>> We advertise that plpgsql is similar to PL/SQL - we should not do
>> anything to make that less so IMNSHO.
>
> Do you *really* mean that? This principle would mean we should reject
> patches like the CONTINUE statement patch I just applied, for example,
> as PL/SQL has no such construct.


Well, perhaps I should have qualified that a bit - we shouldn't do it
gratuitously.

Getting the effect of CONTINUE for nested loops can be sufficiently hard
that it is arguable that implementing it is not just syntactic sugar. I seem
to recall muttering about how implementing GOTO wasn't worth the trouble.


>> Terseness is not always good, redundancy is not always bad.
>
> Granted -- but why is redundancy a good thing here?


see above

cheers

andrew




---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] pg_terminate_backend idea

2005-06-22 Thread Tom Lane
Magnus Hagander [EMAIL PROTECTED] writes:
>> In any case the correct way to solve the problem is to find
>> out what's being left corrupt by SIGTERM, rather than install
>> more messiness in order to avoid facing the real issue ...

> That is unfortunately way over my head. And it doesn't seem like anybody
> who actually has what it takes to do the proper solution is interested
> in doing it.

A test case --- even one that fails only a small percentage of the time
--- would make things far easier.  So far all I've seen are very vague
reports, and it's impossible to do anything about it without more info.

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] pg_terminate_backend idea

2005-06-22 Thread Tom Lane
Andreas Pflug [EMAIL PROTECTED] writes:
>> I thought we agreed that using the cancel functionality, which we know
>> works and is tested,

> I've seen cancel *not* working. In 80% of cases this was the reason to use
> terminate.

Even a moment's perusal of the code will prove that there is no
situation in which a backend will respond to SIGTERM but not SIGINT
--- there is only one InterruptPending flag and both cases are checked
in ProcessInterrupts().  So I don't believe the above argument for
using terminate in the slightest.

I can easily believe that we have missed some places that need a
CHECK_FOR_INTERRUPTS() call added, to ensure the backend can't go too
long without making these checks.  I added one in the planner main
loop just a couple weeks ago, for instance.  If you can identify what
a backend that's ignoring a cancel request is doing, please let us know.
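
The mechanism Tom is describing - signal handlers only set a pending flag,
and both cancel (SIGINT) and die (SIGTERM) are acted on at the next
CHECK_FOR_INTERRUPTS() point - can be sketched with shell traps (an
analogy, not the backend's actual C code):

```shell
#!/bin/sh
# Handlers just record what arrived; nothing is acted on inside them.
pending=""
trap 'pending=cancel' INT      # like QueryCancelPending
trap 'pending=die'    TERM     # like ProcDiePending

kill -TERM $$                  # deliver SIGTERM to ourselves

# The single check point: whichever signal came in is noticed here, so
# any process that reaches its checks responds to *both* kinds.
if [ -n "$pending" ]; then
    echo "interrupt pending: $pending"   # prints: interrupt pending: die
fi
```

This is why a backend that responds to SIGTERM but not SIGINT can't exist
in this design: both signals funnel through the same pending flag, and the
real question is only whether the code reaches a check point often enough.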

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] User Quota Implementation

2005-06-22 Thread Jonah H. Harris
If I recall correctly, I never got a response.  I can still get it done
quickly, and probably before the July 1st feature freeze (if that's still
the date).  Tom, Bruce, Josh, et al. - what are your thoughts if I submit a
patch in the next few days?  Is everyone already too busy reviewing the
current patches?


-Jonah


Yann Michel wrote:


> Hi again,
>
> On Mon, Jun 13, 2005 at 04:47:20PM -0600, Jonah H. Harris wrote:
>
>> Well... a maximum tablespace size would be much easier to implement and
>> would still accomplish this level of quota for larger organizations and
>> database systems.
>>
>> I vote for implementing the maximum tablespace size and revisiting actual
>> user/group quotas when the need arises.
>>
>> Was someone going to implement this?  If not, I can probably get it done
>> in a couple days.
>
> are you still working on this, or what has happened to the idea of
> MAXSIZE?
>
> Regards,
> Yann




---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [HACKERS] pl/pgsql: END verbosity

2005-06-22 Thread Neil Conway

Andrew Dunstan wrote:

> But this doesn't make it easier to use - users don't just include those who
> write it. The antecedent language of these, Ada, from which this syntax
> comes, was explicitly designed to be reader-friendly as opposed to
> writer-friendly, and this is a part of that.


IMHO it is just needless verbiage that makes programs both harder to 
read *and* harder to write, albeit marginally so. I think there is a 
reason why Ada-style block terminators are in the minority among 
block-structured languages :)


But obviously this is a matter of taste -- does anyone else like or 
dislike the current syntax?


-Neil

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] Problem with dblink regression test

2005-06-22 Thread Tom Lane
Andrew Dunstan [EMAIL PROTECTED] writes:
> Tom Lane said:
>> There are several buildfarm machines failing like this.  I think a
>> possible solution is for the postmaster to do putenv("PGPORT=nnn") so
>> that libpq instances running in postmaster children will default to the
>> local installation's actual port rather than some compiled-in default
>> port.

> If this diagnosis were correct, wouldn't every buildfarm member be failing
> at the ContribCheck stage (if they get that far)? They all run on
> nonstandard ports and all run the contrib installcheck suite if they can
> (this is required, not optional). So if they show OK then they do not
> exhibit the problem.

Now that I'm a little more awake ...

I think the difference between the working and not-working machines
probably has to do with dynamic-linker configuration.  You have the
buildfarm builds using configure --prefix=something
--with-pgport=something.  So, the copy of libpq.so installed into
the prefix tree has the right default port.  But on a machine with
a regular installation of Postgres, there is also going to be a copy
of libpq.so in /usr/lib or some such place ... and that copy thinks
the default port is where the regular postmaster lives (eg 5432).
When dblink.so is loaded into the backend, if the dynamic linker chooses
to resolve its requirement for libpq.so by loading /usr/lib/libpq.so,
then the wrong things happen.

In the "make check" case this is masked because pg_regress.sh has set
PGPORT in the postmaster's environment, and that will override the
compiled-in default.  But of course the contrib tests only work in
installcheck mode.

To believe this, you have to assume that psql links to the correct
version (the test version) of libpq.so but dblink.so fails to do so.
So it's only an issue on platforms where rpath works for executables
but not for shared libraries.  I haven't run down exactly which
buildfarm machines have shown this symptom --- do you know offhand?

(Thinks some more...)  Another possibility is that on the failing
machines, there is a system-wide PGPORT environment variable; however,
unless you specify -p on the postmaster command line when you start
the installed postmaster, I'd expect that to change where the
postmaster puts its socket, so that's probably not the right answer.

If this is the correct explanation, then fooling with PGPORT would
mask this particular symptom, but it wouldn't fix the fundamental
problem that we're loading the wrong version of libpq.so.  Eventually
that would come back to bite us (whenever dblink.so requires some
feature that doesn't exist in older libpq.so versions).

regards, tom lane

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] User Quota Implementation

2005-06-22 Thread Tom Lane
Jonah H. Harris [EMAIL PROTECTED] writes:
 If I recall correctly, I never got a response.  I can still get it done 
 quickly and probably before the July 1st feature freeze (if that's still 
 the date).  Tom, Bruce, Josh, et al what are your thoughts if I submit a 
 patch in the next few days?  Is everyone already too busy reviewing the 
 current patches?

I don't actually believe that this can be done at the drop of a hat ---
at least not in a way that will perform acceptably.  I haven't seen a
design proposal that looks like it will work, anyway.  What do you
intend to check exactly, where, and how often?

regards, tom lane

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] pl/pgsql: END verbosity

2005-06-22 Thread Steve Atkins
On Thu, Jun 23, 2005 at 01:41:49AM +1000, Neil Conway wrote:
> Andrew Dunstan wrote:
>> But this doesn't make it easier to use - users don't just include those
>> who write it. The antecedent language of these, Ada, from which this
>> syntax comes, was explicitly designed to be reader-friendly as opposed to
>> writer-friendly, and this is a part of that.
>
> IMHO it is just needless verbiage that makes programs both harder to
> read *and* harder to write, albeit marginally so. I think there is a
> reason why Ada-style block terminators are in the minority among
> block-structured languages :)
>
> But obviously this is a matter of taste -- does anyone else like or
> dislike the current syntax?

"Like" is a bit strong. But it does make functions written in it easier
to read. And given that the primary debugging methodology for pl/pgsql
is "look at it hard and see what might be incorrect", I can't see that
as a bad thing.

I'd trade a whole lot of "harder to write" for even some "likely to
work".

Cheers,
  Steve

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] Problem with dblink regression test

2005-06-22 Thread Jim C. Nasby
On Wed, Jun 22, 2005 at 11:45:09AM -0400, Tom Lane wrote:
> Andrew Dunstan [EMAIL PROTECTED] writes:
>> Tom Lane said:
>>> There are several buildfarm machines failing like this.  I think a
>>> possible solution is for the postmaster to do putenv("PGPORT=nnn") so
>>> that libpq instances running in postmaster children will default to the
>>> local installation's actual port rather than some compiled-in default
>>> port.
>
>> If this diagnosis were correct, wouldn't every buildfarm member be failing
>> at the ContribCheck stage (if they get that far)? They all run on
>> nonstandard ports and all run the contrib installcheck suite if they can
>> (this is required, not optional). So if they show OK then they do not
>> exhibit the problem.
>
> Now that I'm a little more awake ...
>
> I think the difference between the working and not-working machines
> probably has to do with dynamic-linker configuration.  You have the
> buildfarm builds using configure --prefix=something
> --with-pgport=something.  So, the copy of libpq.so installed into
> the prefix tree has the right default port.  But on a machine with
> a regular installation of Postgres, there is also going to be a copy
> of libpq.so in /usr/lib or some such place ... and that copy thinks
> the default port is where the regular postmaster lives (eg 5432).
> When dblink.so is loaded into the backend, if the dynamic linker chooses
> to resolve its requirement for libpq.so by loading /usr/lib/libpq.so,
> then the wrong things happen.
>
> In the "make check" case this is masked because pg_regress.sh has set
> PGPORT in the postmaster's environment, and that will override the
> compiled-in default.  But of course the contrib tests only work in
> installcheck mode.
>
> To believe this, you have to assume that psql links to the correct
> version (the test version) of libpq.so but dblink.so fails to do so.
> So it's only an issue on platforms where rpath works for executables
> but not for shared libraries.  I haven't run down exactly which
> buildfarm machines have shown this symptom --- do you know offhand?
>
> (Thinks some more...)  Another possibility is that on the failing
> machines, there is a system-wide PGPORT environment variable; however,
> unless you specify -p on the postmaster command line when you start
> the installed postmaster, I'd expect that to change where the
> postmaster puts its socket, so that's probably not the right answer.
>
> If this is the correct explanation, then fooling with PGPORT would
> mask this particular symptom, but it wouldn't fix the fundamental
> problem that we're loading the wrong version of libpq.so.  Eventually
> that would come back to bite us (whenever dblink.so requires some
> feature that doesn't exist in older libpq.so versions).

Here's the info I have for my two machines (platypus and cuckoo), both
of which are exhibiting this behavior.

I manually ran the dblink regression test on platypus to see what was going
on. If I added "port=5682" to the connection string, it would properly
connect to the test database. Without that it complained that the
contrib_regression database didn't exist. After adding
contrib_regression to the default postgresql cluster on that machine, it
then errored out saying that there was no buildfarm user, which is true
of the default install on that machine. $PGPORT isn't set globally or in
the buildfarm user account.

ISTM there are a couple of ways a buildfarm machine could pass besides what
Tom mentioned. If the machine doesn't have a default install at all,
it's possible that dblink will act differently. It's also possible that
the default install has both the contrib_regression database and the
user that's running the buildfarm.

Is there a way to confirm which libpq.so psql and/or dblink.so has
linked to? Are there any other tests I could run to shed some light on
this?
-- 
Jim C. Nasby, Database Consultant   [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: Where do you want to go today?
Linux: Where do you want to go tomorrow?
FreeBSD: Are you guys coming, or what?

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


[HACKERS] commit_delay, siblings

2005-06-22 Thread Josh Berkus
Hackers:

I've been trying to get a test result for 8.1 that shows that we can eliminate 
commit_delay and commit_siblings, as I believe that these settings no longer 
have any real effect on performance.  However, the checkpointing performance 
issues have so far prevented me from getting a good test result for this. 

Just a warning, because I might bring it up after feature freeze.

-- 
Josh Berkus
Aglio Database Solutions
San Francisco



Re: [HACKERS] Problem with dblink regression test

2005-06-22 Thread Tom Lane
Jim C. Nasby [EMAIL PROTECTED] writes:
 Is there a way to confirm which libpq.so psql and/or dblink.so has
 linked to? Are there any other tests I could run to shed some light on
 this?

On Linux you use ldd to find out what the linker will do with
dependencies of an executable or shared library.  I don't recall the
equivalent incantation on FreeBSD or OS X but I'm sure there is one.

Note that this is very likely to depend on environment (eg
LD_LIBRARY_PATH) so make sure you do it in the same environment the
buildfarm test has.

regards, tom lane



Re: [HACKERS] Problem with dblink regression test

2005-06-22 Thread Jim C. Nasby
On Tue, Jun 21, 2005 at 08:49:12PM -0700, Joe Conway wrote:
 I think most people would expect that if they don't specify a port, they 
 would be talking to the same postmaster that they are running under on 
 whatever port it is using, not some compiled in default. So your 
 proposal makes perfect sense to me. Then the dblink regression test 
 would not specify a port at all, correct?

Actually, the regression test currently doesn't specify a port. If it
did we wouldn't have found this problem.
-- 
Jim C. Nasby, Database Consultant   [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: Where do you want to go today?
Linux: Where do you want to go tomorrow?
FreeBSD: Are you guys coming, or what?



Re: [HACKERS] commit_delay, siblings

2005-06-22 Thread Tom Lane
Josh Berkus josh@agliodbs.com writes:
 I've been trying to get a test result for 8.1 that shows that we can 
 eliminate 
 commit_delay and commit_siblings, as I believe that these settings no longer 
 have any real effect on performance.

I don't think they ever did :-(.  The theory is good, but useful values
for commit_delay would probably be under a millisecond, and there isn't
any portable way to sleep for such short periods.  We've been leaving
them there just in case somebody can find a use for 'em, but I wouldn't
object to taking them out.

regards, tom lane



Re: [HACKERS] pl/pgsql: END verbosity

2005-06-22 Thread Alvaro Herrera
On Wed, Jun 22, 2005 at 09:23:17AM -0700, Steve Atkins wrote:
 On Thu, Jun 23, 2005 at 01:41:49AM +1000, Neil Conway wrote:
  Andrew Dunstan wrote:
  But this doesn't make it easier to use - users don't just include those who
  write it. The antecedent language of these, Ada, from which this syntax
  comes, was explicitly designed to be reader-friendly as opposed to
  writer-friendly, and this is a part of that.
  
  IMHO it is just needless verbiage that makes programs both harder to 
  read *and* harder to write, albeit marginally so. I think there is a 
  reason why Ada-style block terminators are in the minority among 
  block-structured languages :)
  
  But obviously this is a matter of taste -- does anyone else like or 
  dislike the current syntax?
 
 "Like" is a bit strong. But it does make functions written in it easier
 to read. And given that the primary debugging methodology for pl/pgsql
 is "look at it hard and see what might be incorrect", I can't see that
 as a bad thing.

Yeah, while we don't have good debugging support in pl/pgsql we
shouldn't be making it harder to read.  (FWIW, yes, I think it's useful
for those keywords to be required when you have to look at humongous
functions.)

-- 
Alvaro Herrera (alvherre[a]surnet.cl)
No renuncies a nada. No te aferres a nada.



Re: [HACKERS] commit_delay, siblings

2005-06-22 Thread Hans-Jürgen Schönig

Tom Lane wrote:

Josh Berkus josh@agliodbs.com writes:

I've been trying to get a test result for 8.1 that shows that we can eliminate 
commit_delay and commit_siblings, as I believe that these settings no longer 
have any real effect on performance.



I don't think they ever did :-(.  The theory is good, but useful values
for commit_delay would probably be under a millisecond, and there isn't
any portable way to sleep for such short periods.  We've been leaving
them there just in case somebody can find a use for 'em, but I wouldn't
object to taking them out.

regards, tom lane




We have done extensive testing some time ago.
We could not see any difference on any platform we have tested (AIX, 
Linux, Solaris). I don't think that there is one at all - at least not 
on common systems.


best regards,

hans

--
Cybertec Geschwinde u Schoenig
Schoengrabern 134, A-2020 Hollabrunn, Austria
Tel: +43/664/393 39 74
www.cybertec.at, www.postgresql.at




Re: [HACKERS] commit_delay, siblings

2005-06-22 Thread Josh Berkus
Hans, Tom,

 We have done extensive testing some time ago.
 We could not see any difference on any platform we have tested (AIX,
 Linux, Solaris). I don't think that there is one at all - at least not
 on common systems.

Keen then.  Any objections to removing the GUC?   We desperately need means 
to cut down on GUC options.

-- 
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco



Re: [HACKERS] User Quota Implementation

2005-06-22 Thread Jonah H. Harris

Tom,

You're right, this is going to take more work to make sure all is 
perfect.  Let me work up a formal definition and send it to the group.


Thanks for bringing me back to my senses.

-Jonah


Tom Lane wrote:


Jonah H. Harris [EMAIL PROTECTED] writes:
 

If I recall correctly, I never got a response.  I can still get it done 
quickly and probably before the July 1st feature freeze (if that's still 
the date).  Tom, Bruce, Josh, et al what are your thoughts if I submit a 
patch in the next few days?  Is everyone already too busy reviewing the 
current patches?
   



I don't actually believe that this can be done at the drop of a hat ---
at least not in a way that will perform acceptably.  I haven't seen a
design proposal that looks like it will work, anyway.  What do you
intend to check exactly, where, and how often?

regards, tom lane

 






Re: [HACKERS] pg_terminate_backend idea

2005-06-22 Thread Andrew - Supernews
On 2005-06-22, Tom Lane [EMAIL PROTECTED] wrote:
 Andreas Pflug [EMAIL PROTECTED] writes:
 I thought we agreed that using the cancel functionality, which we know
 works and is tested,

  I've seen cancel *not* working. In 80% of cases this was the reason to use 
 terminate.

 Even a moment's perusal of the code will prove that there is no
 situation in which a backend will respond to SIGTERM but not SIGINT

idle in transaction. (or idle for that matter, but that's usually less
significant.)

-- 
Andrew, Supernews
http://www.supernews.com - individual and corporate NNTP services



Re: [HACKERS] pg_terminate_backend idea

2005-06-22 Thread Tom Lane
Andrew - Supernews [EMAIL PROTECTED] writes:
 On 2005-06-22, Tom Lane [EMAIL PROTECTED] wrote:
 Andreas Pflug [EMAIL PROTECTED] writes:
 I've seen cancel *not* working.
 
 Even a moment's perusal of the code will prove that there is no
 situation in which a backend will respond to SIGTERM but not SIGINT

 idle in transaction. (or idle for that matter, but that's usually less
 significant.)

In that case there's no query to cancel, so I would dispute the claim
that that constitutes not working.  QueryCancel is defined to cancel
the current query, not necessarily to abort your whole transaction.
(Before 8.0 there wasn't much of a difference, but now there is:
QueryCancel is an ordinary error that can be trapped by a savepoint.
Are you arguing it should not be so trappable?)

regards, tom lane



Re: [HACKERS] [PATCHES] O_DIRECT for WAL writes

2005-06-22 Thread Greg Stark
Tom Lane [EMAIL PROTECTED] writes:

 Unfortunately, I cannot believe these numbers --- the near equality of
 fsync off and fsync on means there is something very wrong with the
 measurements.  What I suspect is that your ATA drives are doing write
 caching and thus the fsyncs are not really waiting for I/O at all.

I wonder whether it would make sense to have an automatic test for this
problem. I suspect there are lots of installations out there whose admins
don't realize that their hardware is doing this to them.

It shouldn't be too hard to test a few hundred or even a few thousand fsyncs
and calculate the seek time. If it implies a rotational speed over 15kRPM then
you know the drive is lying and the data storage is unreliable.
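A rough sketch of such a check (Python; illustrative only — the function names and the one-rewrite-per-revolution heuristic are my framing of the idea above, not an official PostgreSQL tool):

```python
import os
import tempfile
import time

def fsync_rate(path, n=200, blocksize=8192):
    """Time n rewrite+fsync cycles of one small file; return fsyncs/second."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        payload = b"x" * blocksize
        start = time.perf_counter()
        for _ in range(n):
            os.lseek(fd, 0, os.SEEK_SET)
            os.write(fd, payload)
            os.fsync(fd)
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    return n / elapsed

def max_honest_rate(rpm):
    """A disk that really commits each write to the platter can rewrite
    the same block at most once per revolution, i.e. rpm/60 times/sec."""
    return rpm / 60.0

if __name__ == "__main__":
    with tempfile.NamedTemporaryFile() as f:
        rate = fsync_rate(f.name, n=50)
        # a rate far above max_honest_rate(15000) == 250/sec suggests the
        # drive acknowledged the writes from its cache, not the platter
        print("fsyncs/sec: %.0f (15kRPM ceiling: %.0f)"
              % (rate, max_honest_rate(15000)))
```

On a write-caching ATA drive this typically reports thousands of fsyncs per second, far above what any real rotation rate allows.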

-- 
greg




Re: [HACKERS] pg_terminate_backend idea

2005-06-22 Thread Andrew - Supernews
On 2005-06-22, Tom Lane [EMAIL PROTECTED] wrote:
 Andrew - Supernews [EMAIL PROTECTED] writes:
 On 2005-06-22, Tom Lane [EMAIL PROTECTED] wrote:
 Andreas Pflug [EMAIL PROTECTED] writes:
 I've seen cancel *not* working.
 
 Even a moment's perusal of the code will prove that there is no
 situation in which a backend will respond to SIGTERM but not SIGINT

 idle in transaction. (or idle for that matter, but that's usually less
 significant.)

 In that case there's no query to cancel, so I would dispute the claim
 that that constitutes not working.

You are totally missing the point.

A backend that is idle in transaction is holding locks and an open xid
that cannot be cleared by anything short of SIGTERM.

Whether the fact that it ignores SIGINT is intentional or not is irrelevant,
the fact is that this is the classic scenario where SIGTERM is effective and
SIGINT is not.

-- 
Andrew, Supernews
http://www.supernews.com - individual and corporate NNTP services



Re: [HACKERS] Why is checkpoint so costly?

2005-06-22 Thread Greg Stark
Josh Berkus josh@agliodbs.com writes:

 Folks,
 
 Going over some performance test results at OSDL, our single greatest 
 performance issue seems to be checkpointing.  No matter how I fiddle 
 with it, checkpoints seem to cost us 1/2 of our throughput while they're 
 taking place.  Overall, checkpointing costs us about 25% of our 
 performance on OLTP workloads.

I think this is a silly statement. *Of course* checkpointing is a big
performance issue. Checkpointing basically *is* the database's job:
it stores data, and checkpointing is the name for the process of storing the data.

Looking at the performance without counting the checkpoint time is cheating,
the database hasn't actually completed processing the data; it's still sitting
in the pipeline of the WAL log.

The question should be why is there any time when a checkpoint *isn't*
happening? For maximum performance the combination of bgwriter (basically
preemptive checkpoint i/o) and the actual checkpoint i/o should be executing
at a more or less even pace throughout the time interval between checkpoints.

I do have one suggestion. Is the WAL log on a separate set of drives from the
data files? If not then the checkpoint (and bgwriter i/o) will hurt WAL log
performance by forcing the drive heads to move away from their sequential
writing of WAL logs.

That said, does checkpointing (and bgwriter i/o) require rereading the WAL
logs? If so then if the buffers aren't found in cache then it'll cause some
increase in seek latency just from that even if it does have a dedicated
set of drives.


-- 
greg




Re: [HACKERS] pl/pgsql: END verbosity

2005-06-22 Thread Pavel Stehule
On Tue, 21 Jun 2005, Andrew Dunstan wrote:

 Neil Conway said:
  In PL/PgSQL, END LOOP is used to terminate loop blocks, and END IF
  is used to terminate IF blocks. This is needlessly verbose: we could
  simply accept END in both cases without syntactic ambiguity. I'd like
   to make this change, so that END can be used to terminate any kind of
  block. There's no need to remove support for the present syntax, of
  course, so there's no backward compatibility concern. Oracle's PL/SQL
  does require END IF and END LOOP, but folks interested in maximum
  compatibility can always use those forms if they like.
 

Hello,

I prefer the current syntax too, Neil. The reason isn't compatibility with 
Oracle, but better readability - it means more work when finishing code 
but less when debugging.

Regards
Pavel




Re: [HACKERS] [PATCHES] O_DIRECT for WAL writes

2005-06-22 Thread Tom Lane
Greg Stark [EMAIL PROTECTED] writes:
 Tom Lane [EMAIL PROTECTED] writes:
 Unfortunately, I cannot believe these numbers --- the near equality of
 fsync off and fsync on means there is something very wrong with the
 measurements.  What I suspect is that your ATA drives are doing write
 caching and thus the fsyncs are not really waiting for I/O at all.

 I wonder whether it would make sense to have an automatic test for this
 problem. I suspect there are lots of installations out there whose admins
 don't realize that their hardware is doing this to them.

Not sure about automatic, but a simple little test program to measure
the speed of rewriting/fsyncing a small test file would surely be a nice
thing to have.

The reason I question automatic is that you really want to test each
drive being used, if the system has more than one; but Postgres has no
idea what the actual hardware layout is, and so no good way to know what
needs to be tested.

regards, tom lane



Re: [HACKERS] Why is checkpoint so costly?

2005-06-22 Thread Tom Lane
Greg Stark [EMAIL PROTECTED] writes:
 The question should be why is there any time when a checkpoint *isn't*
 happening? For maximum performance the combination of bgwriter (basically
 preemptive checkpoint i/o) and the actual checkpoint i/o should be executing
 at a more or less even pace throughout the time interval between checkpoints.

I think Josh's complaint has to do with the fact that performance
remains visibly affected after the checkpoint is over.  (It'd be nice
if those TPM graphs could be marked with the actual checkpoint begin
and end instants, so we could confirm or deny that we are looking at a
post-checkpoint recovery curve and not some very weird behavior inside
the checkpoint.)  It's certainly true that tuning the bgwriter ought to
help in reducing the amount of I/O done by a checkpoint, but why is
there a persistent effect?

 That said, does checkpointing (and bgwriter i/o) require rereading the WAL
 logs?

No.  In fact, the WAL is never read at all, except during actual
post-crash recovery.

regards, tom lane



Re: [HACKERS] [PATCHES] Removing Kerberos 4

2005-06-22 Thread Greg Stark
Tom Lane [EMAIL PROTECTED] writes:

 Last chance for any Kerberos 4 users to speak up --- otherwise I'll
 apply this soon.

If you just want someone to test it I can do that. I don't actually use it
normally though.

As far as security goes, the only issues I'm aware of are a) it uses plain DES,
which is just a 56-bit key and crackable by brute force, and b) cross-domain
authentication is broken. 

But if you just have a single domain it's a lot simpler to set up than the
poster child for second system effect, Kerberos 5.

-- 
greg




Re: [HACKERS] commit_delay, siblings

2005-06-22 Thread Greg Stark

Hans-Jürgen Schönig [EMAIL PROTECTED] writes:

  The theory is good, but useful values for commit_delay would probably be
  under a millisecond, and there isn't any portable way to sleep for such
  short periods.

Just because there's no portable way to be sure it'll work doesn't mean
there's no point in trying. If one user sets it to 5ms and it's effective for
him there's no reason to take out the option for him just because it doesn't
work out as well on all platforms.

Linux, for example, has moved to higher clock speeds precisely because things
like movie and music players need to be able to control their timing to much
more precision than 10ms.

-- 
greg




Re: [HACKERS] [PATCHES] Removing Kerberos 4

2005-06-22 Thread Magnus Hagander
  Last chance for any Kerberos 4 users to speak up --- otherwise I'll 
  apply this soon.
 
 If you just want someone to test it I can do that. I don't 
 actually use it normally though.

I don't think just testing is enough - somebody needs to actually
maintain it... 


 As far as security goes, the only issues I'm aware of are a) it uses
 plain DES, which is just a 56-bit key and crackable by brute force,
 and b) cross-domain authentication is broken. 

Yeah. But it has been declared dead by the Kerberos folks
(http://www.faqs.org/faqs/kerberos-faq/general/section-7.html - and that
document is from 2000, so it was declared dead already then)...


//Magnus



Re: [HACKERS] pl/pgsql: END verbosity

2005-06-22 Thread Merlin Moncure
  Neil Conway said:
   In PL/PgSQL, END LOOP is used to terminate loop blocks, and END IF
   is used to terminate IF blocks. This is needlessly verbose: we could
   simply accept END in both cases without syntactic ambiguity. I'd like
   to make this change, so that END can be used to terminate any kind of
   block. There's no need to remove support for the present syntax, of
   course, so there's no backward compatibility concern. Oracle's PL/SQL
   does require END IF and END LOOP, but folks interested in maximum
   compatibility can always use those forms if they like.
  
 
 Hello,
 
 I prefer the current syntax too, Neil. The reason isn't compatibility with
 Oracle, but better readability - it means more work when finishing code
 but less when debugging.

COBOL, which is a kissing-cousin of pl/sql, allows this.  You can have a
line terminator (a period) or a specialized block terminator.  Based on
my experience here I would suggest not allowing a choice.  It's a famous
source of bugs.

Merlin



Re: [HACKERS] [PATCHES] Removing Kerberos 4

2005-06-22 Thread Tom Lane
Magnus Hagander [EMAIL PROTECTED] writes:
 Yeah. But it has been declared dead by the Kerberos folks
 (http://www.faqs.org/faqs/kerberos-faq/general/section-7.html. And this
 document is from 2000, an dit was declared already then)...

Right.  The real question here is who's going to be using a 2005
database release with a pre-2000 security system?  There's a fair
amount of code there and no evidence that time spent on testing
and maintaining it is going to benefit anyone anymore.

If someone wakes up and says hey, I'm still ACTUALLY using that code,
I'm willing to forbear ... but otherwise I think its time is long gone.

regards, tom lane



Re: [HACKERS] pg_terminate_backend idea

2005-06-22 Thread Magnus Hagander
  In any case the correct way to solve the problem is to find out
  what's being left corrupt by SIGTERM, rather than install more
  messiness in order to avoid facing the real issue ...
 
  That is unfortunately way over my head. And it doesn't seem like
  anybody who actually has what it takes to do the proper solution is
  interested in doing it.
 
 A test case --- even one that fails only a small percentage of the time
 --- would make things far easier.  So far all I've seen are very vague
 reports, and it's impossible to do anything about it without more info.

Very well. Let me try putting it like this, then:

Assuming we don't get such a case, and a chance to fix it, before 8.1
(while still hoping we will get it fixed properly, we can't be sure, can
we? If we were, it'd be fixed already). In this case, will you consider
such a kludgy solution as a temporary fix to resolve a problem that a
lot of users are having? And then plan to have it removed once sending
SIGTERM directly to a backend can be considered safe?


//Magnus



Re: [HACKERS] Problem with dblink regression test

2005-06-22 Thread Chris Campbell

On Jun 22, 2005, at 12:52, Tom Lane wrote:


Jim C. Nasby [EMAIL PROTECTED] writes:


Is there a way to confirm which libpq.so psql and/or dblink.so has
linked to? Are there any other tests I could run to shed some  
light on

this?


On Linux you use ldd to find out what the linker will do with
dependencies of an executable or shared library.  I don't recall the
equivalent incantation on FreeBSD or OS X but I'm sure there is one.


On OS X, use otool -L:

$ otool -L /Library/PostgreSQL/bin/psql
/Library/PostgreSQL/bin/psql:
        /Library/PostgreSQL/lib/libpq.3.dylib (compatibility version 3.0.0, current version 3.2.0)
        /usr/lib/libssl.0.9.7.dylib (compatibility version 0.9.7, current version 0.9.7)
        /usr/lib/libcrypto.0.9.7.dylib (compatibility version 0.9.7, current version 0.9.7)
        /usr/lib/libz.1.dylib (compatibility version 1.0.0, current version 1.0.0)
        /usr/lib/libncurses.5.dylib (compatibility version 5.0.0, current version 5.0.0)
        /usr/lib/libresolv.9.dylib (compatibility version 1.0.0, current version 324.9.0)
        /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 71.1.1)


- Chris




Re: [HACKERS] pg_terminate_backend idea

2005-06-22 Thread Tom Lane
Magnus Hagander [EMAIL PROTECTED] writes:
 Assuming we don't get such a case, and a chance to fix it, before 8.1
 (while still hoping we will get it fixed properly, we can't be sure, can
 we? If we were, it'd be fixed already). In this case, will you consider
 such a kludgy solution as a temporary fix to resolve a problem that a
 lot of users are having? And then plan to have it removed once sending
 SIGTERM directly to a backend can be considered safe?

Kluges tend to become institutionalized, so my reaction is no.  It's
also worth pointing out that with so little understanding of the problem
Rod is reporting, it's tough to make a convincing case that this kluge
will avoid it.  SIGTERM exit *shouldn't* be leaving any corrupted
locktable entries behind; it's not that much different from the normal
case.  Until we find out what's going on, introducing still another exit
path isn't really going to make me feel more comfortable, no matter how
close it's alleged to be to the normal path.

regards, tom lane



[HACKERS] [PATCH] pgcrypto: pgp_encrypt

2005-06-22 Thread Marko Kreen
Finally, here is pgp_encrypt()/pgp_decrypt() - implementation
of password-based encryption from RFC2440 (OpenPGP).

The goal of this code is to be a more featureful encryption solution
than the current encrypt(), whose only functionality is running a cipher
over data.

Compared to encrypt(), pgp_encrypt() does following:

* It uses the equivalent of a random Initial Vector to get the cipher
  into a random state before it processes user data
* Stores SHA-1 of the data into result so any modification
  will be detected.
* Remembers if data was text or binary - thus it can decrypt
  to/from text data.  This was a major nuisance for encrypt().
* Stores info about used algorithms with result, so user needs
  not remember them - more user friendly!
* Uses String2Key algorithms (similar to crypt()) with random salt
  to generate full-length binary key to be used for encrypting.
* Uses standard format for data - you can feed it to GnuPG, if needed.

Optional features (off by default):

* Can use separate session key - user data will be encrypted
  with totally random key, which will be encrypted with S2K
  generated key and attached to result.
* Data compression with zlib.
* Can convert between CRLF-LF line-endings - to get fully
  RFC2440-compliant behaviour.  This is off by default as
  pgcrypto does not know the line-endings of user data.

Interface is simple:

pgp_encrypt(data text, key text) returns bytea
pgp_decrypt(data text, key text) returns text
pgp_encrypt_bytea(data bytea, key text) returns bytea
pgp_decrypt_bytea(data bytea, key text) returns bytea

To change parameters (cipher, compression, mdc):

pgp_encrypt(data text, key text, parms text) returns bytea
pgp_decrypt(data text, key text, parms text) returns text
pgp_encrypt_bytea(data bytea, key text, parms text) returns bytea
pgp_decrypt_bytea(data bytea, key text, parms text) returns bytea

Parameter names I lifted from gpg:

   pgp_encrypt('message', 'key', 'compress-algo=1,cipher-algo=aes256')

For text data, pgp_encrypt simply encrypts the PostgreSQL internal
data.  This maps to RFC2440 data type 't' - 'externally specified
encoding'.  But this may cause problems if data is dumped and reloaded
into a database which has a different internal encoding.  My next goal is
to implement data type 'u' - which means data is in UTF-8 encoding -
by converting the internal encoding to UTF-8 and back.  As there won't
be any compatibility problems with the current code, I think it's OK to
submit this without UTF-8 support.

-- 
marko


PS.  The openssl/3des regression test fails; I'll send a fix once
I understand why it's happening.


pgp.diff.gz
Description: Binary data



[HACKERS] COPY FROM performance improvements

2005-06-22 Thread Alon Goldshuv
This is a second iteration of a previous thread that didn't resolve a few
weeks ago. I made some more modifications to the code to make it compatible
with the current COPY FROM code and it should be more agreeable this time.

The main premise of the new code is that it improves the text data parsing
speed by about 4-5x, resulting in total improvements that lie between 15% to
95% for data importing (higher range gains will occur on large data rows
without many columns - implying more parsing and less converting to internal
format). This is done by replacing char-at-a-time parsing with buffered
parsing, using fast scan routines, and minimizing the amount of
loading/appending into the line and attribute buffers.
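As a toy illustration of that premise (Python, hypothetical helper names; the real patch works on C buffers, not Python strings): char-at-a-time parsing inspects every byte in the inner loop, while buffered parsing lets a fast scan routine jump from delimiter to delimiter and copy whole attribute spans at once.

```python
def split_charwise(line, delim='|'):
    """Char-at-a-time parsing: examine every character individually."""
    fields, cur = [], []
    for ch in line:
        if ch == delim:
            fields.append(''.join(cur))
            cur = []
        else:
            cur.append(ch)
    fields.append(''.join(cur))
    return fields

def split_buffered(line, delim='|'):
    """Buffered parsing: a fast scan routine (here str.find) locates each
    delimiter and whole attribute spans are copied in one step."""
    fields, start = [], 0
    while True:
        pos = line.find(delim, start)
        if pos < 0:
            fields.append(line[start:])
            return fields
        fields.append(line[start:pos])
        start = pos + 1
```

Both produce the same fields; the second form does far less per-character work in the interpreter, which is the analogue of the 4-5x parsing speedup claimed above.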

The new code passes both COPY regression tests (copy, copy2) and doesn't
break any of the others.

It also supports encoding conversions (thanks Peter and Tatsuo and your
feedback) and the 3 line-end types. Having said that, using COPY with
different encodings was only minimally tested. We are looking into creating
new tests and hopefully add them to postgres regression suite one day if
it's desired by the community.

This new code is improving the delimited data format parsing. BINARY and CSV
will stay the same and will be executed separately for now (therefore there
is some code duplication).  In the future I plan to write improvements to the
CSV path too, so that it will be executed without duplication of code.

I am still missing supporting data that uses COPY_OLD_FE (question: what are
the use cases? When will it be used? Please advise)

I'll send out the patch soon. It's basically there to show that there is a
way to load data faster. In future releases of the patch it will be more
complete and elegant.

I'll appreciate any comments/advices.

Alon.





[HACKERS] GiST rtree logic is not right

2005-06-22 Thread Tom Lane
It occurred to me to wonder whether contrib/rtree_gist fixes the rtree
bug documented here:
http://archives.postgresql.org/pgsql-general/2004-03/msg01143.php

The answer is unfortunately no.  In the regression database,
install rtree_gist and do:

regression=# create table gist_emp4000 as select * from slow_emp4000;
SELECT
regression=# create index grect2ind ON gist_emp4000 USING gist (home_base);
CREATE INDEX
regression=# select count(*) from gist_emp4000 where home_base && 
'(35565,5404),(35546,5360)';
 count 
---
  2144
(1 row)

The correct answer is

regression=# select count(*) from slow_emp4000 where home_base && 
'(35565,5404),(35546,5360)';
 count 
---
  2214
(1 row)

Now this is noticeably better than the rtree implementation, which finds
only 1363 rows, but broken is still broken :-(

This is doubtless not as high priority as the concurrency stuff you
are working on, but it'd be good to fix anyway.  I was thinking of
proposing that we move rtree_gist into the core --- but the case for
it would be stronger if it worked correctly ...

regards, tom lane



Re: [HACKERS] commit_delay, siblings

2005-06-22 Thread Qingqing Zhou

Josh Berkus josh@agliodbs.com writes
 Hackers:

 I've been trying to get a test result for 8.1 that shows that we can
eliminate
 commit_delay and commit_siblings, as I believe that these settings no
longer
 have any real effect on performance.  However, the checkpointing
performance
 issues have so far prevented me from getting a good test result for this.


In my understanding, the commit_delay/commit_siblings combination simulates
the background xlog writer mechanism found in some databases like Oracle.

This might be a separate issue. We have code in XLogFlush() like:

 /* done already? */
 if (!XLByteLE(record, LogwrtResult.Flush))
 {
     /* now wait for the write lock */
     LWLockAcquire(WALWriteLock, LW_EXCLUSIVE);
     if (XLByteLE(record, LogwrtResult.Flush))
         LWLockRelease(WALWriteLock); /* if done already, then release the lock */
     else
         /* do it */

If testing turns out that this LWLockRelease(WALWriteLock) path is
actually taken often, then it indicates that we waste some time acquiring
WALWriteLock. Would commit_delay/commit_siblings help, or would a
background xlog writer that notifies us of the completion of the flush
be better (so we don't compete for this lock)?

Regards,
Qingqing





---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] commit_delay, siblings

2005-06-22 Thread Tom Lane
Qingqing Zhou [EMAIL PROTECTED] writes:
 Would commit_delay/commit_siblings helps or we need a
 background xlog writer and notify us the completion of xlogflush is better
 (so we don't compete for this lock)?

The existing bgwriter already does a certain amount of xlog flushing
(since it must flush WAL at least as far as the LSN of any dirty page it
wants to write out).  However I'm not sure that this is very effective
--- in a few strace tests that I've done, it seemed that committing
backends still ended up doing the bulk of the xlog writes, especially
if they were doing small transactions.  It'd be interesting to look into
making the bgwriter (or a new dedicated xlog bgwriter) responsible for
all xlog writes.  You could imagine a loop like

forever do
    if (something new in xlog)
        write and flush it;
    else
        sleep 10 msec;
done

together with some kind of IPC to waken backends once xlog was flushed
past the point they needed.  (Designing that is the hard part.)

But in any case, the existing commit_delay doesn't seem like it's got
anything to do with a path to a better answer, so this is not an
argument against removing it.

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [HACKERS] [PATCHES] O_DIRECT for WAL writes

2005-06-22 Thread Curt Sampson

On Thu, 22 Jun 2005, Greg Stark wrote:


Tom Lane [EMAIL PROTECTED] writes:


Unfortunately, I cannot believe these numbers --- the near equality of
fsync off and fsync on means there is something very wrong with the
measurements.  What I suspect is that your ATA drives are doing write
caching and thus the fsyncs are not really waiting for I/O at all.


I wonder whether it would make sense to have an automatic test for this
problem. I suspect there are lots of installations out there whose admins
don't realize that their hardware is doing this to them.


But is it really a problem? I somewhere got the impression that some
drives, on power failure, will be able to keep going for long enough to
write out the cache and park the heads anyway. If so, the drive is still
guaranteeing the write.

But regardless, perhaps we can add some stuff to the various OSes'
startup scripts that could help with this. For example, in NetBSD you
can run dkctl &lt;device&gt; setcache r on most any disk device (certainly all
SCSI and ATA) to enable the read cache and disable the write cache.

cjs
--
Curt Sampson  [EMAIL PROTECTED]   +81 90 7737 2974   http://www.NetBSD.org
 Make up enjoying your city life...produced by BIC CAMERA

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [HACKERS] [PATCHES] O_DIRECT for WAL writes

2005-06-22 Thread Tom Lane
Curt Sampson [EMAIL PROTECTED] writes:
 But regardless, perhaps we can add some stuff to the various OSes'
 startup scripts that could help with this. For example, in NetBSD you
 can run dkctl &lt;device&gt; setcache r on most any disk device (certainly all
 SCSI and ATA) to enable the read cache and disable the write cache.

[ shudder ]  I can see the complaints now: Merely starting up Postgres
cut my overall system performance by a factor of 10!  I wasn't even
using it!!  What a piece of junk!!!  I can hardly think of a better
way to drive away people with a marginal interest in the database...

This can *not* be default behavior, and unfortunately that limits its
value quite a lot.

regards, tom lane

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] [PATCHES] O_DIRECT for WAL writes

2005-06-22 Thread Curt Sampson

On Wed, 22 Jun 2005, Tom Lane wrote:


[ shudder ]  I can see the complaints now: Merely starting up Postgres
cut my overall system performance by a factor of 10!


Yeah, quite the scenario.


This can *not* be default behavior, and unfortunately that limits its
value quite a lot.


Indeed. Maybe it's best just to document this stuff for the various
OSes, and let the admins deal with configuring their machines.

But you know, it might be a reasonable option switch, or something.

cjs
--
Curt Sampson  [EMAIL PROTECTED]   +81 90 7737 2974   http://www.NetBSD.org
 Make up enjoying your city life...produced by BIC CAMERA

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] [PATCHES] O_DIRECT for WAL writes

2005-06-22 Thread Tom Lane
[ on the other point... ]

Curt Sampson [EMAIL PROTECTED] writes:
 But is it really a problem? I somewhere got the impression that some
 drives, on power failure, will be able to keep going for long enough to
 write out the cache and park the heads anyway. If so, the drive is still
 guaranteeing the write.

If the drives worked that way, we'd not be seeing any problem, but we do
see problems.  Without having a whole lot of data to back it up, I would
think that keeping the platter spinning is no problem (sheer rotational
inertia) but seeking to a lot of new tracks to write randomly-positioned
dirty sectors would require significant energy that just ain't there
once the power drops.  I seem to recall reading that the seek actuators
eat the largest share of power in a running drive...

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[HACKERS] HaveNFreeProcs ?

2005-06-22 Thread Alvaro Herrera
Hackers,

I just noticed the HaveNFreeProcs routine is coded as a loop around the
ProcGlobal struct members.  I wonder if it's possible to use a simple
check of procArray->numBackends against procArray->maxBackends instead?

-- 
Alvaro Herrera (alvherre[a]surnet.cl)
Jason Tesser: You might not have understood me or I am not understanding you.
Paul Thomas: It feels like we're 2 people divided by a common language...

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] [PATCHES] O_DIRECT for WAL writes

2005-06-22 Thread Gavin Sherry
On Thu, 23 Jun 2005, Tom Lane wrote:

 [ on the other point... ]

 Curt Sampson [EMAIL PROTECTED] writes:
  But is it really a problem? I somewhere got the impression that some
  drives, on power failure, will be able to keep going for long enough to
  write out the cache and park the heads anyway. If so, the drive is still
  guaranteeing the write.

 If the drives worked that way, we'd not be seeing any problem, but we do
 see problems.  Without having a whole lot of data to back it up, I would
 think that keeping the platter spinning is no problem (sheer rotational
 inertia) but seeking to a lot of new tracks to write randomly-positioned
 dirty sectors would require significant energy that just ain't there
 once the power drops.  I seem to recall reading that the seek actuators
 eat the largest share of power in a running drive...

I've seen discussion about disks behaving this way. There's no magic:
they're battery backed.

Thanks,

Gavin

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] HaveNFreeProcs ?

2005-06-22 Thread Tom Lane
Alvaro Herrera [EMAIL PROTECTED] writes:
 I just noticed the HaveNFreeProcs routine is coded as a loop around the
 ProcGlobal struct members.  I wonder if it's possible to use a simple
 check of procArray->numBackends against procArray->maxBackends instead?

It used to look like that, but now that numBackends includes prepared
transactions that's no longer a useful test.  I think that the existing
coding is OK, because it's written to not loop more than
superuser_reserved_connections times, and it's hard to imagine anyone
would set that to more than half a dozen or so.

Also, that routine will disappear entirely if we agree to remove
commit_siblings (see nearby thread), so right at the moment I'm not very
concerned about improving it.  If it is still there forty-eight hours
from now, let's talk about it then.

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] HaveNFreeProcs ?

2005-06-22 Thread Tom Lane
I wrote:
 Also, that routine will disappear entirely if we agree to remove
 commit_siblings (see nearby thread), so right at the moment I'm not very
 concerned about improving it.  If it is still there forty-eight hours
 from now, let's talk about it then.

Oh, never mind that, I was momentarily confusing it with
CountActiveBackends.  But the other point stands.

regards, tom lane

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] [PATCHES] O_DIRECT for WAL writes

2005-06-22 Thread Gregory Maxwell
On 6/23/05, Gavin Sherry [EMAIL PROTECTED] wrote:

  inertia) but seeking to a lot of new tracks to write randomly-positioned
  dirty sectors would require significant energy that just ain't there
  once the power drops.  I seem to recall reading that the seek actuators
  eat the largest share of power in a running drive...
 
 I've seen discussion about disks behaving this way. There's no magic:
 they're battery backed.

Nah, this isn't always the case.  For example, some of the IBM Deskstars
had a few tracks at the start of the disk reserved: if the power failed,
the head retracted all the way and the drive used the rotational energy
to power itself long enough to write out the cache.  At startup the
drive would read it back in and finish flushing it.

Unfortunately, firmware bugs meant it didn't always wait until the head
returned to the start before it began writing...

I'm not sure what other drives do this (er, well do it correctly :) ).

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] [PATCHES] O_DIRECT for WAL writes

2005-06-22 Thread Tom Lane
Gavin Sherry [EMAIL PROTECTED] writes:
 Curt Sampson [EMAIL PROTECTED] writes:
 But is it really a problem? I somewhere got the impression that some
 drives, on power failure, will be able to keep going for long enough to
 write out the cache and park the heads anyway. If so, the drive is still
 guaranteeing the write.

 I've seen discussion about disks behaving this way. There's no magic:
 they're battery backed.

Oh, sure, then it's easy ;-)

The bottom line here seems to be the same as always: you can't run an
industrial strength database on piece-of-junk consumer grade hardware.
Our problem is that because the software is free, people expect to run
it on bottom-of-the-line Joe Bob's Bait And PC Shack hardware, and then
they blame us when they don't get the same results as the guy running
Oracle on million-dollar triply-redundant server hardware.  Oh well.

regards, tom lane

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] HaveNFreeProcs ?

2005-06-22 Thread Tom Lane
I wrote:
 ... because it's written to not loop more than
 superuser_reserved_connections times, and it's hard to imagine anyone
 would set that to more than half a dozen or so.

We could help keep people on the correct path if guc.c enforced a sane
upper limit on superuser_reserved_connections.  I'm thinking somewhere
around 10.

Any thoughts about that?

regards, tom lane

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] [PATCHES] O_DIRECT for WAL writes

2005-06-22 Thread Gavin Sherry
On Thu, 23 Jun 2005, Tom Lane wrote:

 Gavin Sherry [EMAIL PROTECTED] writes:
  Curt Sampson [EMAIL PROTECTED] writes:
  But is it really a problem? I somewhere got the impression that some
  drives, on power failure, will be able to keep going for long enough to
  write out the cache and park the heads anyway. If so, the drive is still
  guaranteeing the write.

  I've seen discussion about disks behaving this way. There's no magic:
  they're battery backed.

 Oh, sure, then it's easy ;-)

 The bottom line here seems to be the same as always: you can't run an
 industrial strength database on piece-of-junk consumer grade hardware.
 Our problem is that because the software is free, people expect to run
 it on bottom-of-the-line Joe Bob's Bait And PC Shack hardware, and then
 they blame us when they don't get the same results as the guy running
 Oracle on million-dollar triply-redundant server hardware.  Oh well.

If you ever need a second job, I recommend stand up comedy :-).

Gavin

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] [PATCHES] O_DIRECT for WAL writes

2005-06-22 Thread Curt Sampson

On Thu, 23 Jun 2005, Tom Lane wrote:


The bottom line here seems to be the same as always: you can't run an
industrial strength database on piece-of-junk consumer grade hardware.


Sure you can, though it may take several bits of piece-of-junk
consumer-grade hardware. It's far more about how you set up your system
and implement recovery policies than it is about hardware.

I ran an ISP back in the '90s on old PC junk, and we had far better
uptime than most of our competitors running on expensive Sun gear. One
ISP was completely out for half a day because the tech. guy bent and
broke a hot-swappable circuit board while installing it, bringing down
the entire machine. (Pretty dumb of them to be running everything on a
single, irreplaceable high-availability system.)


...they blame us when they don't get the same results as the guy
running Oracle on...


Now that phrase irritates me a bit. I've been using all this stuff for
a long time (Postgres on and off since QUEL, before SQL was dropped
in instead) and at this point, for the (perhaps slim) majority of
applications, I would say that PostgreSQL is a better database than
Oracle. It requires much, much less effort to get a system and its test
framework up and running under PostgreSQL than it does under Oracle,
PostgreSQL has far fewer stupid limitations, and in other areas, such
as performance, it competes reasonably well in a lot of cases. It's a
pretty impressive piece of work, thanks in large part to efforts put in
over the last few years.

cjs
--
Curt Sampson  [EMAIL PROTECTED]   +81 90 7737 2974   http://www.NetBSD.org
 Make up enjoying your city life...produced by BIC CAMERA

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match