Re: [HACKERS] Problem with windows installer

2007-01-05 Thread Dave Page
Magnus Hagander wrote:
 I discussed this briefly with Robert on IM yesterday - he told me the
 account was installer created. Without a PC at the time I couldn't look
 into it further :-(
 
 The faq still applies as the most likely reason. It can certainly happen 
 with the installer created account as well, for example due to a group
 policy on the computer or domain.

Of course, but what was also not mentioned is that this is a standalone
machine with auto-login, so GP would not be an issue. Local policy
could be, I guess, but I would imagine Andreas hasn't tweaked that given
that he's got no access control in place.

Regards, Dave.


---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


Re: [HACKERS] Problem with windows installer

2007-01-05 Thread Magnus Hagander
On Fri, Jan 05, 2007 at 08:30:58AM +, Dave Page wrote:
 Magnus Hagander wrote:
  I discussed this briefly with Robert on IM yesterday - he told me the
  account was installer created. Without a PC at the time I couldn't look
  into it further :-(
  
  The faq still applies as the most likely reason. It can certainly happen 
  with the installer created account as well, for example due to a group
  policy on the computer or domain.
 
 Of course, but what was also not mentioned is that this is a standalone
 machine with auto-login, so GP would not be an issue. Local policy
 could be, I guess, but I would imagine Andreas hasn't tweaked that given
 that he's got no access control in place.
 

Point. I'm back to referring to the FAQ suggestion to enable auditing to
see exactly what it's failing on. ;-)

//Magnus



Re: [HACKERS] proposal - new SPI cursor function

2007-01-05 Thread Zeugswetter Andreas ADI SD

   I am working on support for scrollable cursors in plpgpsm. Scrollable
   cursors are in the ToDo for plpgsql too. I need a new function
   SPI_cursor_fetch_with_direction(Portal portal, int direction, long count)
  
  Is int a reasonable domain for directions?  I'd think there would be
  at most a few values.

 enum is better, true

I don't think we want an enum here (maybe short).

 three fields: FORWARD, BACKWARD, ABSOLUTE

I think for fetch absolute you are missing an offset argument.
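The point about the missing offset can be sketched with a toy model — not the actual SPI interface; the enum, function, and names here are all illustrative — showing why FETCH_ABSOLUTE needs an explicit target while FORWARD/BACKWARD get by with a count:

```c
/* Hypothetical direction enum -- a sketch, not the proposed SPI API. */
typedef enum FetchDirection
{
    FETCH_FORWARD,
    FETCH_BACKWARD,
    FETCH_ABSOLUTE
} FetchDirection;

/* Compute the row the cursor should land on.  FORWARD/BACKWARD are
 * relative moves driven by "count"; ABSOLUTE ignores the current
 * position entirely and needs its own offset argument. */
static long
target_position(long current, FetchDirection dir, long count, long offset)
{
    switch (dir)
    {
        case FETCH_FORWARD:
            return current + count;
        case FETCH_BACKWARD:
            return current - count;
        case FETCH_ABSOLUTE:
            return offset;      /* "count" alone cannot express this move */
    }
    return current;
}
```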

Andreas



Re: [HACKERS] Deadline-Based Vacuum Delay

2007-01-05 Thread Galy Lee

Tom Lane wrote:
 I think the context for this is that you have an agreed-on maintenance
 window, say extending from 2AM to 6AM local time, and you want to get
 all your vacuuming done in that window without undue spikes in the
 system load (because you do still have live users then, just not as many
 as during prime time).  If there were a decent way to estimate the
 amount of work to be done then it'd be possible to spread the work
 fairly evenly across the window.  What I do not see is where you get
 that estimate from --- especially since you probably have more than one
 table to vacuum in your window.

It is true that there is no decent way to estimate the amount of work 
to be done. But the purpose here is not "spread the vacuum over 6 
hours exactly"; it is "finish vacuum within 6 hours, and spread the 
spikes as much as possible". So a maximum estimate of the work is 
enough to confine the vacuum to the window, and it is fine if vacuum 
runs more quickly than scheduled. Also, we don't need to estimate the 
duration of vacuum; we only need to compare the actual progress through 
the time window with the progress of the work, and then adjust them to 
the same pace at each delay point.


The maximum amount of vacuum work can be estimated from the size of the 
heap, the size of the indexes, and the number of dead tuples. For 
example, lazy vacuum consists of the following phases:

 1. scan heap
 2. vacuum index
 3. vacuum heap
 4. truncate heap

Although 2 and 4 are quite unpredictable, the total amount of work 
across 1, 2, 3, and 4 can be estimated.
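The pacing idea described above could be sketched as standalone C — this is not PostgreSQL code; the function name and the exact adjustment policy are assumptions. At each delay point, compare the fraction of the time window consumed with the fraction of the (maximum-estimated) work done, and stretch or shrink the cost delay so the two advance at the same pace:

```c
/* Sketch of deadline-based pacing (illustrative, not PostgreSQL code):
 * returns the next cost delay given progress through the window and
 * progress through the estimated work. */
static double
next_delay_ms(double elapsed_secs, double window_secs,
              double work_done, double work_total, double base_delay_ms)
{
    double time_frac = elapsed_secs / window_secs;
    double work_frac = work_done / work_total;

    if (time_frac <= 0.0)
        return base_delay_ms;   /* window not started yet: default pace */
    if (work_frac > time_frac)  /* ahead of schedule: sleep longer */
        return base_delay_ms * (work_frac / time_frac);
    return base_delay_ms * 0.5; /* behind schedule: finishing on time wins */
}
```

Note the asymmetry: running ahead only stretches the delay, while falling behind shrinks it, since the stated goal is "finish within the window" rather than "fill the window exactly".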


 The other problem is that vacuum only during a maintenance window
 doesn't seem all that compelling a policy anyway.  We see a lot of
 examples of tables that need to be vacuumed much more often than once
 a day.  So I'd rather put effort into making sure that vacuum can be run
 in the background even under high load, instead of designing around a
 maintenance-window assumption.

This feature does not necessarily assume a maintenance window. For 
example, if a table needs to be vacuumed every 3 hours to sweep out the 
garbage, then instead of laboriously tuning the cost-delay GUCs to 
confine vacuum to 3 hours, we can make vacuum finish within the time 
frame with a "VACUUM IN time" feature.


If we can find a good way to tune the cost-delay GUCs so that vacuum 
keeps up with the speed of garbage generation in a high-frequency 
update system, then we won't need this feature. For example, the 
interval between two vacuums can be estimated by tracking the speed of 
dead tuple generation, but how can you tune the vacuum run time to fit 
within that interval? It seems that it is not easy to tune the delay 
time of vacuum correctly.


Best Regards
--
Galy Lee lee.galy _at_ oss.ntt.co.jp
NTT Open Source Software Center





Re: [HACKERS] ideas for auto-processing patches

2007-01-05 Thread Stefan Kaltenbrunner

Andrew Dunstan wrote:

Gavin Sherry wrote:

With PLM, you could test patches against various code branches. I'd
guessed Mark would want to provide this capability. Pulling branches from
anoncvs regularly might be burdensome bandwidth-wise. So, like you say, a
local mirror would be beneficial for patch testing.



I think you're missing the point. Buildfarm members already typically have
or can get very cheaply a copy of each branch they build (HEAD and/or
REL*_*_STABLE).  As long as the patch feed is kept to just patches which
they can apply there should be no great bandwidth issues.


yeah - another thing to consider is that switching to a different scm 
repository would put quite a burden on the buildfarm admins (most scms 
are not that easily available for the more esoteric platforms, for 
example).
I'm also not sure how useful it would be to test patches against 
branches other than HEAD - new and complex patches will only get applied 
to HEAD anyway ...



Stefan



Re: [HACKERS] [PATCHES] wal_checksum = on (default) | off

2007-01-05 Thread Zeugswetter Andreas ADI SD

   Recovery can occur with/without the same setting of wal_checksum, to
   avoid complications from crashes immediately after turning the GUC on.
   
   Surely not.  Otherwise even the on setting is not really a defense.
  
   Only when the CRC is exactly zero, which happens very very rarely.
  
  "It works most of the time" doesn't exactly satisfy me.  What's the

Agreed

  use-case for changing the variable on the fly anyway?  Seems a better
  solution is just to lock down the setting at postmaster start.

I guess that the use case is more for a WAL-based replica that
has/wants a different setting. Maybe we want a WAL entry for the change,
or force a log switch (so you can interrupt the replica, change its
setting, and proceed with the next log)?

Maybe a 3rd mode for replicas that ignores 0 CRCs?

Andreas



Re: [HACKERS] [PATCHES] wal_checksum = on (default) | off

2007-01-05 Thread Simon Riggs
On Fri, 2007-01-05 at 11:01 +0100, Zeugswetter Andreas ADI SD wrote:

   What's the use-case for changing the variable on the fly anyway?  Seems a
   better solution is just to lock down the setting at postmaster start.
 
 I guess that the use case is more for a WAL-based replica that
 has/wants a different setting. Maybe we want a WAL entry for the change,
 or force a log switch (so you can interrupt the replica, change its
 setting, and proceed with the next log)?
 
 Maybe a 3rd mode for replicas that ignores 0 CRCs?

Well, wal_checksum allows you to have this turned ON for the main server
and OFF on a Warm Standby. 

The recovery process doesn't check for postgresql.conf reloads, so
setting it at server start is effectively the same thing in that case.

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com





Re: [HACKERS] [PATCHES] wal_checksum = on (default) | off

2007-01-05 Thread Zeugswetter Andreas ADI SD

   What's the use-case for changing the variable on the fly anyway?
   Seems a better solution is just to lock down the setting at
   postmaster start.
  
  I guess that the use case is more for a WAL-based replica that
  has/wants a different setting. Maybe we want a WAL entry for the
  change, or force a log switch (so you can interrupt the replica,
  change its setting, and proceed with the next log)?
  
  Maybe a 3rd mode for replicas that ignores 0 CRCs?
 
 Well, wal_checksum allows you to have this turned ON for the main
 server and OFF on a Warm Standby.

Ok, so when you need CRCs on a replica (but not on the master) you
turn it off during standby replay, but turn it on when you start the
replica for normal operation.

Andreas



Re: [HACKERS] Deadline-Based Vacuum Delay

2007-01-05 Thread Tom Lane
Galy Lee [EMAIL PROTECTED] writes:
 It is true that there is no decent way to estimate the amount of work 
 to be done. But the purpose here is not "spread the vacuum over 6 
 hours exactly"; it is "finish vacuum within 6 hours, and spread the 
 spikes as much as possible". So a maximum estimate of the work is 
 enough to confine the vacuum to the window, and it is fine if vacuum 
 runs more quickly than scheduled.

Is it?  If I tell the thing to take 6 hours and it finishes in 5
minutes, why would I be happy?  It could obviously have spread out the
work more, and presumably if I'm using this feature at all then I want
the least possible load added from vacuum while it's running.

But this is all academic, because there's no way to produce a
trustworthy maximum estimate either.

regards, tom lane



Re: [HACKERS] ideas for auto-processing patches

2007-01-05 Thread Andrew Dunstan

Tino Wildenhain wrote:

[EMAIL PROTECTED] schrieb:

On 1/4/07, Gavin Sherry [EMAIL PROTECTED] wrote:

On Thu, 4 Jan 2007, Andrew Dunstan wrote:

...

Pulling branches from
anoncvs regularly might be burdensome bandwidth-wise. So, like you
say, a local mirror would be beneficial for patch testing.


Right some sort of local mirror would definitely speed things up.


An easier speedup in this regard would be using subversion instead
of cvs. It transfers only diffs to your working copy (or rather,
to your last checkout) so it's really saving on bandwidth.



cvs update isn't too bad either. I just did a substantial update on a 
tree that had not been touched for nearly 6 months, and ethereal tells 
me that total traffic was 7343004 bytes in 7188 packets. Individual 
buildfarm updates are going to be much lower than that, by a couple of 
orders of magnitude, I suspect.


If we were to switch to subversion we should do it for the right reason 
- this isn't one.


cheers

andrew



Re: [HACKERS] [PATCHES] wal_checksum = on (default) | off

2007-01-05 Thread Tom Lane
Zeugswetter Andreas ADI SD [EMAIL PROTECTED] writes:
 Ok, so when you need CRC's on a replicate (but not on the master) you
 turn it
 off during standby replay, but turn it on when you start the replicate
 for normal operation.

Thought: even when it's off, the CRC had better be computed for
shutdown-checkpoint records.  Else there's no way to turn it on even
with a postmaster restart --- unless we accept the idea of poking a hole
in the normal mode.  (Which I still dislike, and even more so if the
special value is zero.  Almost any other value would be safer than zero.)
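The hazard with a zero sentinel can be illustrated with a toy check — not the actual WAL code; the sentinel convention and names are assumptions made for the illustration:

```c
#include <stdint.h>

/* If a stored CRC of zero means "checksum disabled", then a record whose
 * CRC field was wiped to zero (e.g. by a partial write of zero-filled
 * blocks) is indistinguishable from a legitimately unchecksummed record
 * and is accepted unverified.  A rarely-occurring non-zero sentinel
 * narrows that hole. */
#define CRC_DISABLED 0u                 /* the problematic sentinel */

static int
record_passes_check(uint32_t stored_crc, uint32_t computed_crc)
{
    if (stored_crc == CRC_DISABLED)
        return 1;                       /* checksum "off": no verification */
    return stored_crc == computed_crc;
}
```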

On the whole, though, I still don't want to put this in.  I don't think
Simon has thought it through sufficiently, and we haven't even seen any
demonstration of a big speedup.

(Another hole in the implementation as given: pg_resetxlog.)

regards, tom lane



Re: [HACKERS] [PATCHES] Last infomask bit

2007-01-05 Thread Tom Lane
Heikki Linnakangas [EMAIL PROTECTED] writes:
  I believe it would actually be even better to combine the t_natts and 
 t_infomask fields to a single 32-bit infomask field.

That's not happening, because the alignment is wrong ... unless maybe
we switch this field to fall before t_ctid, but that would screw up
the MinimalTuple hack.

regards, tom lane



Re: [HACKERS] [PATCHES] wal_checksum = on (default) | off

2007-01-05 Thread Zeugswetter Andreas ADI SD

  Ok, so when you need CRCs on a replica (but not on the master) you
  turn it off during standby replay, but turn it on when you start the
  replica for normal operation.
 
 Thought: even when it's off, the CRC had better be computed for
 shutdown-checkpoint records.  Else there's no way to turn it on even
 with a postmaster restart --- unless we accept the idea of poking a
 hole in the normal mode.  (Which I still dislike, and even more so if
 the special value is zero.  Almost any other value would be safer
 than zero.)
 
 On the whole, though, I still don't want to put this in.  I don't
 think Simon has thought it through sufficiently, 

Well, the part that we do not really want a special value (at least
not 0) is new, and makes things a bit more complicated.

 and we haven't even seen any demonstration of a big speedup.

Yes, iirc the demonstration was with the 64-bit CRC instead of the
sufficient 32-bit one (or a bad CRC compiler optimization?).
But I do think it can be shown to provide a significant speedup
(at least in peak burst performance).

Especially on target hardware WAL write IO is extremely fast
(since it is write cached), so the CPU cost should show.
Andreas



[HACKERS] -f output file option for pg_dumpall

2007-01-05 Thread Dave Page
In pgAdmin we use pg_dump's -f option to write backup files. The IO
streams are redirected to display status and errors etc. in the GUI.

In order to enhance the interface to allow backup of entire clusters as
well as role and tablespace definitions, we need to be able to get
pg_dumpall to write its output directly to a file in the same way,
because we cannot redirect the child pg_dump IO streams (which also
means we may miss errors, but I need to think about that some more).

As far as I can see, adding a -f option to pg_dumpall should be
straightforward, the only issue being that we'd need to pass pg_dump an
additional (undocumented?) option to tell it to append to the output
file instead of writing it as normal.

Any thoughts or better ideas?

Regards, Dave.



Re: [HACKERS] -f output file option for pg_dumpall

2007-01-05 Thread Joshua D. Drake

 As far as I can see, adding a -f option to pg_dumpall should be
 straightforward, the only issue being that we'd need to pass pg_dump an
 additional (undocumented?) option to tell it to append to the output
 file instead of writing it as normal.
 
 Any thoughts or better ideas?

Yeah :) Getting rid of pg_dumpall entirely and merging it into pg_dump.
It is kind of silly that we have two different versions. Although that
may be more work than you were considering.

Joshua D. Drake


 
 Regards, Dave.
 
 
-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate






Re: [HACKERS] -f output file option for pg_dumpall

2007-01-05 Thread Andrew Dunstan

Dave Page wrote:

In pgAdmin we use pg_dump's -f option to write backup files. The IO
streams are redirected to display status and errors etc. in the GUI.

In order to enhance the interface to allow backup of entire clusters as
well as role and tablespace definitions, we need to be able to get
pg_dumpall to write its output directly to a file in the same way,
because we cannot redirect the child pg_dump IO streams (which also
means we may miss errors, but I need to think about that some more).

As far as I can see, adding a -f option to pg_dumpall should be
straightforward, the only issue being that we'd need to pass pg_dump an
additional (undocumented?) option to tell it to append to the output
file instead of writing it as normal.

Any thoughts or better ideas?

  



This seems a bit like piecemeal reform. Here are some things I'd like to 
see that affect this area:


. merge pg_dump and pg_dumpall (e.g. add a flag to pg_dump that says do 
the lot)

. multi-db non-text dumps

And while we're about it, can we teach pg_restore to handle text dumps? 
Even if it just transparently called psql it would be nice.


cheers

andrew






Re: [HACKERS] -f output file option for pg_dumpall

2007-01-05 Thread Darcy Buskermolen
On Friday 05 January 2007 09:40, Andrew Dunstan wrote:
 Dave Page wrote:
  In pgAdmin we use pg_dump's -f option to write backup files. The IO
  streams are redirected to display status and errors etc. in the GUI.
 
  In order to enhance the interface to allow backup of entire clusters as
  well as role and tablespace definitions, we need to be able to get
  pg_dumpall to write its output directly to a file in the same way,
  because we cannot redirect the child pg_dump IO streams (which also
  means we may miss errors, but I need to think about that some more).
 
  As far as I can see, adding a -f option to pg_dumpall should be
  straightforward, the only issue being that we'd need to pass pg_dump an
  additional (undocumented?) option to tell it to append to the output
  file instead of writing it as normal.
 
  Any thoughts or better ideas?

 This seems a bit like piecemeal reform. Here are some things I'd like to
 see that affect this area:

 . merge pg_dump and pg_dumpall (e.g. add a flag to pg_dump that says do
 the lot)
 . multi-db non-text dumps

 And while we're about it, can we teach pg_restore to handle text dumps?
 Even if it just transparently called psql it would be nice.

+1 on having pg_restore work with text dumps.


 cheers

 andrew







[HACKERS] PGCon 2007 Program Committee

2007-01-05 Thread Dan Langille
I have the pleasure of announcing your PGCon 2007 program committee. 

Bruce Momjian
Christopher Browne
Josh Berkus
Robert Treat
Luke Lonergan
Neil Conway
Robert Bernier

These people are responsible for reviewing your submissions and 
selecting the presentations for PGCon 2007.

Speaking of presentations, now that the major holidays are over, 
please submit your proposal now.  Instructions for submissions are at 
http://www.pgcon.org/2007/submissions.php

The original call for papers: http://www.pgcon.org/2007/papers.php

NOTE: Please get your proposal in by 19 Jan (that is in two weeks).

-- 
Dan Langille : Software Developer looking for work
my resume: http://www.freebsddiary.org/dan_langille.php
PGCon - The PostgreSQL Conference - http://www.pgcon.org/





Re: [HACKERS] -f output file option for pg_dumpall

2007-01-05 Thread David Fetter
On Fri, Jan 05, 2007 at 12:40:20PM -0500, Andrew Dunstan wrote:
 Dave Page wrote:
 In pgAdmin we use pg_dump's -f option to write backup files. The IO
 streams are redirected to display status and errors etc. in the GUI.
 
 In order to enhance the interface to allow backup of entire clusters as
 well as role and tablespace definitions, we need to be able to get
 pg_dumpall to write its output directly to a file in the same way,
 because we cannot redirect the child pg_dump IO streams (which also
 means we may miss errors, but I need to think about that some more).
 
 As far as I can see, adding a -f option to pg_dumpall should be
 straightforward, the only issue being that we'd need to pass pg_dump an
 additional (undocumented?) option to tell it to append to the output
 file instead of writing it as normal.
 
 Any thoughts or better ideas?
 
 
 This seems a bit like piecemeal reform. Here are some things I'd like to 
 see that affect this area:
 
 . merge pg_dump and pg_dumpall (e.g. add a flag to pg_dump that says do 
 the lot)
 . multi-db non-text dumps
 
 And while we're about it, can we teach pg_restore to handle text dumps? 
 Even if it just transparently called psql it would be nice.

+1 on all of this :)

Cheers,
D (who can contribute code once we've agreed on what it should do)
-- 
David Fetter [EMAIL PROTECTED] http://fetter.org/
phone: +1 415 235 3778AIM: dfetter666
  Skype: davidfetter

Remember to vote!



Re: [HACKERS] Reverse-sort indexes and NULLS FIRST/LAST sorting

2007-01-05 Thread Michael Glaesemann


On Jan 4, 2007, at 13:33 , Tom Lane wrote:


Another possible objection is that in the proposed CREATE INDEX syntax

index-column-id [ opclass-name ] [ DESC ] [ NULLS {FIRST|LAST} ]

DESC must be a fully reserved word else it can't be distinguished from
an opclass name.  But guess what, it already is.


A point in favor of using DESC over REVERSE as you had earlier
proposed is that DESC is already a reserved word, while REVERSE isn't
even in the list of key words. As DESC is quite closely associated
with its antonym ASC wrt ordering, any thoughts on allowing ASC as an
optional noise word? Users may be surprised if ASC were to throw an
error.


Michael Glaesemann
grzm seespotcode net





Re: [HACKERS] -f output file option for pg_dumpall

2007-01-05 Thread Dave Page

David Fetter wrote:
This seems a bit like piecemeal reform. Here are some things I'd like to 
see that affect this area:


. merge pg_dump and pg_dumpall (e.g. add a flag to pg_dump that says do 
the lot)

. multi-db non-text dumps

And while we're about it, can we teach pg_restore to handle text dumps? 
Even if it just transparently called psql it would be nice.


+1 on all of this :)


Hmm, well I have no interest in the latter at present, but assuming the 
powers that be will allow me some time to do so, I will look at merging 
pg_dump and pg_dumpall as that seems to be the way people want to go. 
I'd also like to tweak the options for global objects to allow roles and 
tablespaces to be dumped separately from each other if no-one objects.



Cheers,
D (who can contribute code once we've agreed on what it should do)


Cool - fancy picking up the multi-db non-text and pg_restore bits if I 
do the former?


Regards, Dave.



Re: [HACKERS] -f output file option for pg_dumpall

2007-01-05 Thread Dave Page

Andreas Pflug wrote:

Dave Page wrote:

In pgAdmin we use pg_dump's -f option to write backup files. The IO
streams are redirected to display status and errors etc. in the GUI.

In order to enhance the interface to allow backup of entire clusters as
well as role and tablespace definitions, we need to be able to get
pg_dumpall to write its output directly to a file in the same way,
because we cannot redirect the child pg_dump IO streams (which also
means we may miss errors, but I need to think about that some more).

As far as I can see, adding a -f option to pg_dumpall should be
straightforward, the only issue being that we'd need to pass pg_dump an
additional (undocumented?) option to tell it to append to the output
file instead of writing it as normal.

Any thoughts or better ideas?
  

Use pgAdmin's create script function on the server.


We said long ago we weren't going to replicate pg_dump functionality in 
pgAdmin - a brief discussion on the pgadmin-hackers list earlier 
indicates that people still feel the same way, and that time would be 
better spent fixing pg_dump/pg_dumpall.


Regards, Dave.



Re: [HACKERS] InitPostgres and flatfiles question

2007-01-05 Thread Ron Mayer
Tom Lane wrote:
 Bruce Momjian [EMAIL PROTECTED] writes:
 What value is there in allowing multiple queries via PQexec()?
 
 The only argument I can think of is that it allows applications to be
 sloppy about parsing a SQL script into individual commands before they
 send it.  (I think initdb may be guilty of exactly that BTW...)  At the
 same time you could argue that such sloppiness is inherently a Bad Idea.

Doesn't it also avoid some network(?) overhead when you have
a large number of small inserts or updates?

I seem to recall a previous company where we had a major performance
gain by concatenating a bunch of updates with ;s in between and sending
them to postgresql as a single command.



Re: [HACKERS] Reverse-sort indexes and NULLS FIRST/LAST sorting

2007-01-05 Thread Tom Lane
Michael Glaesemann [EMAIL PROTECTED] writes:
 On Jan 4, 2007, at 13:33 , Tom Lane wrote:
 index-column-id [ opclass-name ] [ DESC ] [ NULLS {FIRST|LAST} ]
 
 DESC must be a fully reserved word else it can't be distinguished from
 an opclass name.  But guess what, it already is.

 A point in favor of using DESC over REVERSE as you had earlier
 proposed is that DESC is already a reserved word, while REVERSE isn't
 even in the list of key words.

Right, that's what convinced me not to use REVERSE.  Also, the
parallelism of this construct to what is allowed in ORDER BY seems a
bit pleasing.

 As DESC is quite closely associated  
 with its antonym ASC wrt ordering, any thoughts of allowing ASC as an  
 optional noise word? Users may be surprised if ASC were to throw an  
 error.

Yup, I'd come to the same plan.  Actually ASC will not be a complete
noise word: if you specify it (or a NULLS clause) on an index type that
doesn't have a sort order, you'll get an error.

regards, tom lane



Re: [HACKERS] [COMMITTERS] pgsql: Stamp major release 8.3.0, and increment

2007-01-05 Thread Stefan Kaltenbrunner
Bruce Momjian wrote:
 Log Message:
 ---
 Stamp major release 8.3.0, and increment library version numbers.

this commit broke the buildfarm (ECPG checks):

http://www.pgbuildfarm.org/cgi-bin/show_status.pl


Stefan



Re: [HACKERS] [COMMITTERS] pgsql: Stamp major release 8.3.0, and

2007-01-05 Thread Bruce Momjian
Stefan Kaltenbrunner wrote:
 Bruce Momjian wrote:
  Log Message:
  ---
  Stamp major release 8.3.0, and increment library version numbers.
 
 this commit broke the buildfarm (ECPG checks):
 
 http://www.pgbuildfarm.org/cgi-bin/show_status.pl

Thanks, fixed.

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] -f output file option for pg_dumpall

2007-01-05 Thread Tom Lane
Dave Page [EMAIL PROTECTED] writes:
 As far as I can see, adding a -f option to pg_dumpall should be
 straightforward, the only issue being that we'd need to pass pg_dump an
 additional (undocumented?) option to tell it to append to the output
 file instead of writing it as normal.

Wouldn't it be easier/better to re-point stdout at the -f file, and not
touch pg_dump at all?
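Tom's approach could look roughly like this inside pg_dumpall — a minimal sketch, with the helper name and error handling assumed, not the actual patch:

```c
#include <stdio.h>

/* pg_dumpall re-points its own stdout at the -f target; each child
 * pg_dump it forks then inherits the descriptor and its output lands in
 * the file, with no pg_dump changes needed. */
static int
redirect_stdout(const char *path)
{
    if (freopen(path, "w", stdout) == NULL)
        return -1;              /* report failure to the caller */
    return 0;
}
```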

regards, tom lane



Re: [HACKERS] -f output file option for pg_dumpall

2007-01-05 Thread Tom Lane
Dave Page [EMAIL PROTECTED] writes:
 Hmm, well I have no interest in the latter at present, but assuming the 
 powers that be will allow me some time to do so, I will look at merging 
 pg_dump and pg_dumpall as that seems to be the way people want to go. 

I think this will be an exercise in time-wasting, and very possibly
destabilize *both* tools.  pg_dump has never been designed to reconnect
to a different database; for instance there isn't any code for resetting
all the internal state that it gathers.  I think forking a separate
pg_dump for each database is a perfectly fine arrangement, and should be
left alone.

regards, tom lane



Re: [HACKERS] [COMMITTERS] pgsql: Stamp major release 8.3.0, and

2007-01-05 Thread Tom Lane
Bruce Momjian [EMAIL PROTECTED] writes:
 Stefan Kaltenbrunner wrote:
 Bruce Momjian wrote:
 Stamp major release 8.3.0, and increment library version numbers.
 
  this commit broke the buildfarm (ECPG checks):
 http://www.pgbuildfarm.org/cgi-bin/show_status.pl

 Thanks, fixed.

The idea of having to do this at every version number bump is pretty
unappetizing.  Shouldn't we fix things so that the version number
doesn't appear in the ecpg regression files?

regards, tom lane



Re: [HACKERS] [COMMITTERS] pgsql: Stamp major release 8.3.0,

2007-01-05 Thread Bruce Momjian
Tom Lane wrote:
 Bruce Momjian [EMAIL PROTECTED] writes:
  Stefan Kaltenbrunner wrote:
  Bruce Momjian wrote:
  Stamp major release 8.3.0, and increment library version numbers.
  
  this commit broke the buildfarm(ECPG-checks):
  http://www.pgbuildfarm.org/cgi-bin/show_status.pl
 
  Thanks, fixed.
 
 The idea of having to do this at every version number bump is pretty
 unappetizing.  Shouldn't we fix things so that the version number
 doesn't appear in the ecpg regression files?

It would be nice, yea, or we can strip out that line when doing the diff.

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] -f output file option for pg_dumpall

2007-01-05 Thread Tino Wildenhain

Tom Lane schrieb:

Dave Page [EMAIL PROTECTED] writes:

As far as I can see, adding a -f option to pg_dumpall should be
straightforward, the only issue being that we'd need to pass pg_dump an
additional (undocumented?) option to tell it to append to the output
file instead of writing it as normal.


Wouldn't it be easier/better to re-point stdout at the -f file, and not
touch pg_dump at all?


Yeah, and maybe have a modifier like %n or something which would instead
write different files and replaces %n with the name of the database...

Regards
Tino



Re: [HACKERS] Tabs or Spaces

2007-01-05 Thread Bruce Momjian
Simon Riggs wrote:
 On Thu, 2007-01-04 at 16:01 -0500, Bruce Momjian wrote:
  Tom Lane wrote:
   Simon Riggs [EMAIL PROTECTED] writes:
Minor request for clarification:
I read in the Developer's FAQ that tabs should be interpreted as 4
spaces. ...and also that pgindent replaces tabs as spaces.
   
   No, it does the opposite (or I would hope so anyway).
  
  Sorry, the mistake was in a reference to entab.  It now reads:
  
  entab   converts spaces to tabs, used by pgindent
 
 S'OK, just thought it was funny I'd been doing it backwards all this
 time: Actively been replacing tabs with spaces. :-)

It was a little more complex because entab does conversion to/from tabs,
but pgindent uses it in the "to tabs" mode.

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] -f output file option for pg_dumpall

2007-01-05 Thread Andreas Pflug
Dave Page wrote:
 In pgAdmin we use pg_dump's -f option to write backup files. The IO
 streams are redirected to display status and errors etc. in the GUI.

 In order to enhance the interface to allow backup of entire clusters as
 well as role and tablespace definitions, we need to be able to get
 pg_dumpall to write its output directly to a file in the same way,
 because we cannot redirect the child pg_dump IO streams (which also
 means we may miss errors, but I need to think about that some more).

 As far as I can see, adding a -f option to pg_dumpall should be
 straightforward, the only issue being that we'd need to pass pg_dump an
 additional (undocumented?) option to tell it to append to the output
 file instead of writing it as normal.

 Any thoughts or better ideas?
   
Use pgAdmin's create script function on the server.

Regards,
Andreas



Re: [HACKERS] Status of Fix Domain Casting TODO

2007-01-05 Thread elein
On Tue, Jan 02, 2007 at 10:04:43AM -0500, Andrew Dunstan wrote:
 Jim C. Nasby wrote:
 On Mon, Jan 01, 2007 at 06:30:40PM -0600, Andrew Dunstan wrote:
   
 Tom Lane wrote:
 
 Jim C. Nasby [EMAIL PROTECTED] writes:
   
 FWIW, I'm running into this trying to create a 'raw' domain that would
 automagically convert hex strings into actual binary data for storage in
 a bytea.
 
 I think you've got 0 chance of implementing that as a domain rather than
 an independent type.  With or without revisions in the casting rules,
 a domain has not got its own I/O functions, and never will.
   
 This might be less of an issue if we allowed such IO functions to be
 written in a loadable PL rather than in C.
 
 
 I'm confused... couldn't I just write a cast function? Or is that what's
 meant by I/O functions?
 
 And yes, in this case I should be able to accomplish what I'm looking
 for just using encode() and decode().
   
 
 The I/O functions are set up by the INPUT and OUTPUT params of the 
 CREATE TYPE statement. They convert to and from the type 'cstring'. If 
 you want to change the way a piece of data is read/produced (e.g. 
 automatically encode/decode the value) these are what you would need. A 
 domain is in effect a constrained type, but it inherits the I/O 
 functions of its base type. Constraints are not what you want - you 
 want to deal with representation, which is the property dealt with by 
 I/O functions - their fundamental purpose is to convert between external 
 and internal representation.
 
You can fake out the input function by putting a check clause on
the type definition.  I agree there should be hooks allowing
input/output functions to be written in PLs.

late to the thread, again,

--elein



[HACKERS] Parsing ambiguity for ORDER BY ... NULLS FIRST/LAST

2007-01-05 Thread Tom Lane
I've run into an annoying parsing problem while working on the NULLS
FIRST/LAST stuff.  It turns out this is ambiguous:

... ORDER BY foo! NULLS ...

It could be that the ! is a postfix operator and the NULLS is the
start of a NULLS FIRST/LAST clause, or it could be that the ! is an
infix operator and NULLS is just a column name.  Bison needs to know
which before shifting the NULLS, since in the former case it has to
reduce a_expr at this point.

The problem would go away if we made NULLS a fully reserved word, but
I don't think that's acceptable from a backward-compatibility or
spec-compliance standpoint, considering it's not a keyword at all in any
pre-2003 SQL spec and even in SQL2003 it's a non-reserved keyword.

The only other solution I can see is to make use of the lookahead filter
we already have in filtered_base_yylex() to combine NULLS FIRST and
NULLS LAST into single tokens.  This is not an ideal solution: consider

SELECT * FROM nulls first;

This should be considered a valid selection from a relation named "nulls"
with alias "first", but if it's reduced to a single token the grammar
will not see it that way, and will give an error.  However, that's a
sufficiently unlikely scenario that maybe we can just ignore it.  (It's
possible to work around the case by inserting AS, of course.)  We could
probably fix it if we really had to, but it would involve some pretty
ugly coding AFAICS.

BTW: the existing lookahead hack for WITH CASCADED etc. has the
identical problem.

BTW^2: the draft patch Teodor submitted awhile back tries to paper over
this problem by attaching a precedence to NULLS.  But that does not
fix it, it just makes the parser ignore the possibility of infix !
in the example.  This would result in bizarre behavior anytime someone
tried to use a column named nulls in arithmetic expressions.

Thoughts?  Anyone see a better way?

regards, tom lane



Re: [HACKERS] InitPostgres and flatfiles question

2007-01-05 Thread Tom Lane
Ron Mayer [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 Bruce Momjian [EMAIL PROTECTED] writes:
 What value is allowing multiple queries via PQexec()
 
 The only argument I can think of is that it allows applications to be
 sloppy about parsing a SQL script into individual commands before they
 send it.  (I think initdb may be guilty of exactly that BTW...)  At the
 same time you could argue that such sloppiness is inherently a Bad Idea.

 Doesn't it also avoid some network(?) overhead when you have
 a large number of small inserts or updates?

 I seem to recall a previous company where we had a major performance
 gain by concatenating a bunch of updates with ;s in between and sending
 them to postgresql as a single command.

These days you'd probably be better off using a multi-row VALUES() list
if relevant.  Also, if you really want to send multiple statements like
that, there's a cleaner way to do it: use the extended query protocol
and don't Sync or wait for a reply until you've sent them all.

regards, tom lane



Re: [HACKERS] Tabs or Spaces

2007-01-05 Thread Jim Nasby

On Jan 4, 2007, at 7:54 PM, Simon Riggs wrote:
 On Thu, 2007-01-04 at 21:41 -0300, Alvaro Herrera wrote:
  Simon Riggs wrote:
   S'OK, just thought it was funny I'd been doing it backwards all this
   time: Actively been replacing tabs with spaces. :-)

  That explains why your patches always seemed to be so weird to me

 LOL

  spacing-wise :-)

 Nobody ever said... laughs/rolls eyes/sighs

I just figured you were using British tabs instead of American ones. ;P
--
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)





Re: [HACKERS] InitPostgres and flatfiles question

2007-01-05 Thread Jim Nasby

On Jan 4, 2007, at 11:30 PM, Bruce Momjian wrote:
  No, psql isn't the point: we can certainly make its behavior match the
  backend's.  What I'm wondering about is the effect on random PG-using
  applications: should we forbid them from sending multiple SQL commands
  per PQexec (or equivalent in other client library APIs)?

  Backwards compatibility says no, but you can make some decent arguments
  for forbidding it anyway.

 Yea, I was trying to separate the psql case from the PQexec() case.  For
 psql, I think it is clear that -c _should_ act like a normal stdin
 query.  That would eliminate confusion, and I don't see a large loss of
 functionality.


Heh, something I hadn't expected to work:

decibel=# select 1
decibel-# ; select 2
 ?column?
----------
        1
(1 row)

decibel-# ;
 ?column?
----------
        2
(1 row)


 For the PQexec() case, the problem is we don't know who is using
 multi-statement PQexec() calls, and users can't always add BEGIN/END to
 fix them if they are embedded in applications.

 What we could do is do both and see what pushback we get during beta.
 We could always revert it before the final release.

 There is one (weak) argument for allowing multiple commands in a
 single call to the backend; it's going to perform better in an OLTP
 environment because of fewer round-trips between the client and server.


Actually, there are some cases there that might not fit well into
wrapping them into a function, ie: multiple selects issued in one go.
So maybe the argument isn't that weak after all...

--
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)





Re: [HACKERS] ideas for auto-processing patches

2007-01-05 Thread Jim Nasby

On Jan 5, 2007, at 10:24 AM, Andrew Dunstan wrote:
 cvs update isn't too bad either. I just did a substantial update on a
 tree that had not been touched for nearly 6 months, and ethereal tells
 me that total traffic was 7343004 bytes in 7188 packets.  Individual
 buildfarm updates are going to be much lower than that, by a couple of
 orders of magnitude, I suspect.

More important, I see no reason to tie applying patches to pulling from
CVS. In fact, I think it's a bad idea: you want to build just what's in
CVS first, to make sure that it's working, before you start testing any
patches against it. So if this were added to buildfarm, presumably it
would build plain CVS, then start testing patches. It could try a CVS up
between each patch to see if anything changed, and possibly start back
at the top at that point.

--
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)





Re: [HACKERS] [PATCHES] wal_checksum = on (default) | off

2007-01-05 Thread Jim Nasby

On Jan 5, 2007, at 6:30 AM, Zeugswetter Andreas ADI SD wrote:
 Ok, so when you need CRC's on a replicate (but not on the master) you
 turn it off during standby replay, but turn it on when you start the
 replicate for normal operation.

Which sounds to me like a good reason to allow the option in
recovery.conf as well...

--
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)





Re: [HACKERS] Dead Space Map for vacuum

2007-01-05 Thread Jim Nasby

On Jan 3, 2007, at 11:42 PM, ITAGAKI Takahiro wrote:
 BTW, if we want to achieve the index-only scan, we might have to do
 more aggressive VACUUM FREEZE. There were many comments that we should
 avoid vacuuming pages that contain only unfrozen tuples or a few dead
 tuples.  I think it is true for efficient VACUUM, but the index-only
 scan does not work for the unfrozen pages. Which should we give
 priority?

Unless I'm mistaken, as soon as vacuum decides to dirty a page, the only
cost involved in freezing the page is CPU - and vacuum isn't exactly a
CPU-intensive process.

--
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)





Re: [HACKERS] [PATCHES] Patch to log usage of temporary files

2007-01-05 Thread Jim Nasby

On Jan 3, 2007, at 4:20 PM, Bill Moran wrote:
 * trace_temp_files is now an int: -1 disables, 0 and up equate to
   log if the file is this size or larger

Another thought is to allow ignoring files over a certain size. The
reason is that if you end up creating 10MB of temp files, you can
probably avoid that by adjusting work_mem. But if you just created
10GB, you probably have no choice in the matter.

--
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)





Re: [HACKERS] [PATCHES] A patch to pg_regress for Windows port

2007-01-05 Thread Tom Lane
Gurjeet Singh [EMAIL PROTECTED] writes:
 This patch removes double-quotes from around the listen_addresses=%s part; I
 couldn't find a way of doing that. But then, the question is, can the %s
 (hostname) have spaces embedded in it?

Yes, because it can be more than one hostname.  Why do you want to
remove the quotes?

regards, tom lane

PS: there wasn't any patch attached, but I doubt we'd have accepted it
anyway ...



Re: [HACKERS] InitPostgres and flatfiles question

2007-01-05 Thread Andrew Dunstan

 Tom Lane wrote:
 Bruce Momjian [EMAIL PROTECTED] writes:
 What value is allowing multiple queries via PQexec()

 The only argument I can think of is that it allows applications to be
 sloppy about parsing a SQL script into individual commands before they
 send it.  (I think initdb may be guilty of exactly that BTW...)  At the
 same time you could argue that such sloppiness is inherently a Bad
 Idea.


initdb doesn't use libpq at all ... are you saying it should only pass
individual statements to the backend? Adding smarts to enable that would
add complexity without any benefit I can see.

cheers

andrew




Re: [HACKERS] ideas for auto-processing patches

2007-01-05 Thread Andrew Dunstan


Jim Nasby wrote:
 On Jan 5, 2007, at 10:24 AM, Andrew Dunstan wrote:
 cvs update isn't too bad either. I just did a substantial update on
 a tree that had not been touched for nearly 6 months, and ethereal
 tells me that total traffic was 7343004 bytes in 7188 packets.
 Individual buildfarm updates are going to be much lower than that,
 by a couple of orders of magnitude, I suspect.

 More important, I see no reason to tie applying patches to pulling
 from CVS. In fact, I think it's a bad idea: you want to build just
 what's in CVS first, to make sure that it's working, before you start
 testing any patches against it. So if this were added to buildfarm,
 presumably it would build plain CVS, then start testing patches. It
 could try a CVS up between each patch to see if anything changed, and
 possibly start back at the top at that point.


Actually, I think a patch would need to be designated against a particular
branch and timestamp, and the buildfarm member would need to update to
that on its temp copy before applying the patch.

Certainly patch processing would be both optional and something done
separately from standard CVS branch processing.

cheers

andrew





Re: [HACKERS] [PATCHES] wal_checksum = on (default) | off

2007-01-05 Thread Tom Lane
Jim Nasby [EMAIL PROTECTED] writes:
 On Jan 5, 2007, at 6:30 AM, Zeugswetter Andreas ADI SD wrote:
 Ok, so when you need CRC's on a replicate (but not on the master) you

 Which sounds to me like a good reason to allow the option in  
 recovery.conf as well...

Actually, I'm not seeing the use-case for a slave having a different
setting from the master at all?

My backup server is less reliable than the primary.

My backup server is more reliable than the primary.

Somehow, neither of these statements seem likely to be uttered by
a sane DBA ...

regards, tom lane



Re: [HACKERS] [PATCHES] wal_checksum = on (default) | off

2007-01-05 Thread Joshua D. Drake

 Actually, I'm not seeing the use-case for a slave having a different
 setting from the master at all?
 
   My backup server is less reliable than the primary.
 
   My backup server is more reliable than the primary.
 
 Somehow, neither of these statements seem likely to be uttered by
 a sane DBA ...

My backup server is actually my dev machine.
My backup server is just a reporting machine.
My backup machine is using SATA just because it is an absolute
emergency machine.
My backup machine is also my web server.

Real world dictates differently. Let's not forget that not every company
can spend 100k on two identical machines, yet many companies can spend
50k + 5k for a backup machine based on SATA or secondary services.

Sincerely,

Joshua D. Drake

-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate






Re: [HACKERS] ideas for auto-processing patches

2007-01-05 Thread Tom Lane
Andrew Dunstan [EMAIL PROTECTED] writes:
 Jim Nasby wrote:
 More important, I see no reason to tie applying patches to pulling
 from CVS. In fact, I think it's a bad idea: you want to build just
 what's in CVS first, to make sure that it's working, before you start
 testing any patches against it.

 Actually, I think a patch would need to be designated against a particular
 branch and timestamp, and the buildfarm member would need to update to
 that on its temp copy before applying the patch.

I think I like Jim's idea better: you want to find out if some other
applied patch has broken the patch-under-test, so I cannot see a reason
for testing against anything except branch tip.

There certainly is value in being able to test against a non-HEAD branch
tip, but I don't see the point in testing against a back timestamp.

regards, tom lane



Re: [HACKERS] [PATCHES] A patch to pg_regress for Windows port

2007-01-05 Thread Gurjeet Singh

On 1/6/07, Tom Lane [EMAIL PROTECTED] wrote:
 Gurjeet Singh [EMAIL PROTECTED] writes:
  This patch removes double-quotes from around the listen_addresses=%s
  part; I couldn't find a way of doing that. But then, the question is,
  can the %s (hostname) have spaces embedded in it?

 Yes, because it can be more than one hostname.

But the code in postmaster.c expects the list to be comma-separated.

    if (!SplitIdentifierString(rawstring, ',', &elemlist))
    {
        /* syntax error in list */
        ereport(FATAL,
                (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                 errmsg("invalid list syntax for \"listen_addresses\"")));
    }

 Why do you want to remove the quotes?

No, I didn't want to, but I was getting "FATAL:  the database system is
starting up" in the log if I surrounded that var=val with double-quotes.
Later I remedied that by attaching a -w option to the pg_ctl command.

 PS: there wasn't any patch attached,

Please refer
http://archives.postgresql.org/pgsql-patches/2007-01/msg00056.php

 but I doubt we'd have accepted it anyway ...

Any particular reason?

A few minutes after the submission, after I read more code, I wouldn't
have either! I noticed (and Magnus also pointed it out) that we use the
returned PID to kill the postmaster if it doesn't respond. There's no
easy way of doing that if we use pg_ctl to start the postmaster! Let's
wait for Magnus' patch to make pg_regress independent of MinGW.

Best regards,

--
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | yahoo }.com


Re: [HACKERS] ideas for auto-processing patches

2007-01-05 Thread Andrew Dunstan
Tom Lane wrote:
 Andrew Dunstan [EMAIL PROTECTED] writes:
 Jim Nasby wrote:
 More important, I see no reason to tie applying patches to pulling
 from CVS. In fact, I think it's a bad idea: you want to build just
 what's in CVS first, to make sure that it's working, before you start
 testing any patches against it.

 Actually, I think a patch would need to be designated against a
 particular
 branch and timestamp, and the buildfarm member would need to update to
 that on its temp copy before applying the patch.

 I think I like Jim's idea better: you want to find out if some other
 applied patch has broken the patch-under-test, so I cannot see a reason
 for testing against anything except branch tip.

 There certainly is value in being able to test against a non-HEAD branch
 tip, but I don't see the point in testing against a back timestamp.


OK, if the aim is to catch patch bitrot, then you're right, of course.

cheers

andrew




Re: [HACKERS] [PATCHES] A patch to pg_regress for Windows port

2007-01-05 Thread Tom Lane
Gurjeet Singh [EMAIL PROTECTED] writes:
 On 1/6/07, Tom Lane [EMAIL PROTECTED] wrote:
 Yes, because it can be more than one hostname.

 But the code in postmaster.c expects the list to be comma-separated.

Sure, but SplitIdentifierString allows for whitespace, eg

'host1, host2, host3'

regards, tom lane
