Re: [HACKERS] [COMMITTERS] pgsql: Stamp major release 8.3.0,

2007-01-12 Thread Joachim Wieland
On Fri, Jan 12, 2007 at 01:45:10AM +0100, Chris Mair wrote:
 I just wanted to mention that the latest release of OpenBSD i386
 (4.0) is still broken too. So the ecpg-check failure would apply
 (at least) to 3.8, 4.0, and likely 3.9.

OK, but then we have some hosts in the buildfarm that run the updated
versions, like zebra and spoonbill. In this case we can't decide based on
the OS version number and cannot provide alternative files. Any idea other
than actively replacing the offending line via sed from the regression script?
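A sketch of what that sed workaround could look like (the pattern, function name, and file paths are invented for illustration, not the actual ecpg regression files):

```shell
# Hypothetical sketch of the sed workaround: normalize the
# OS-dependent error text before diffing actual vs. expected output.
normalize() {
    sed -e 's/Invalid argument/<OS-DEPENDENT-ERROR>/'
}

# a regression driver could run, e.g.:
#   normalize < results/test.stderr | diff expected/test.stderr -
echo 'could not connect: Invalid argument' | normalize
```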


Joachim




---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] [COMMITTERS] pgsql: Stamp major release 8.3.0,

2007-01-12 Thread Stefan Kaltenbrunner

Joachim Wieland wrote:

On Fri, Jan 12, 2007 at 01:45:10AM +0100, Chris Mair wrote:

I just wanted to mention that the latest release of OpenBSD i386
(4.0) is still broken too. So the ecpg-check failure would apply
(at least) to 3.8, 4.0, and likely 3.9.


ok, but then we have some hosts in the buildfarm that run the updated
versions like zebra and spoonbill. In this case we can't decide on the
OS version number and cannot provide alternative files. Any idea except for
actively replacing the offending line via sed from the regression script?


That is incorrect - neither zebra (4.0) nor spoonbill (3.9) is affected
by this bug - the libc issue in question only affects i386 and m68k with
OpenBSD 4.0 and older.

So neither Spoonbill (Sparc64) nor Zebra (amd64/x86_64) ever had that issue.


Stefan



Re: [HACKERS] [COMMITTERS] pgsql: Stamp major release 8.3.0,

2007-01-12 Thread Chris Mair
On Fri, 12 Jan 2007 10:09:34 +0100 Joachim Wieland [EMAIL PROTECTED] wrote:
 On Fri, Jan 12, 2007 at 09:29:36AM +0100, Stefan Kaltenbrunner wrote:
  ok, but then we have some hosts in the buildfarm that run the updated
  versions like zebra and spoonbill. In this case we can't decide on the
  OS version number and cannot provide alternative files. Any idea except for
  actively replacing the offending line via sed from the regression script?
 
  that is incorrect - both zebra(4.0) and spoonbill(3.9) are not affected 
  by this bug - the libc issue in question only affects i386 and m68k with 
  OpenBSD 4.0 and older.
  So neither Spoonbill (Sparc64) nor Zebra (amd64/x86_64) ever had that issue.
 
 Okay, so it also depends on the platform... In this case I suggest adding
 the special expected/ files only for guppy, i.e. only for
 i386-unknown-openbsd3.8. If we get another i386 or m68k host that runs one
 of the affected systems, we have to update the check. However, if guppy got
 upgraded to 4.0 we'd have the problem that both guppy and emu would return
 i386-unknown-openbsd4.0 (while emu is running 4.0-current and hence is
 not affected)...
 
 Attached patch enables the special expected files only for
 i386-unknown-openbsd3.8.

Ok,
I'm sorry guppy is causing so much trouble :|

I guess then I'm going to upgrade her only when 4.1-stable comes out (in May).

(I keep having this idea that if we all run current/unstable versions
of our OSes we might overlook other issues ...)

Bye, Chris.



Re: [HACKERS] [GENERAL] Checkpoint request failed on version 8.2.1.

2007-01-12 Thread Zeugswetter Andreas ADI SD

  I find it very unlikely that you would during normal operations end up
  in a situation where you would first have permissions to create files in
  a directory, and then lose them.
  What could be is that you have a directory where you never had
  permissions to create the file in the first place.
 
  Any chance to differentiate between these?
 
 The cases we're concerned about involve access to an existing file, not
 attempts to create a new one, so I'm not clear what your point is.

I am wondering if we can delete the file by opening it with
FILE_FLAG_DELETE_ON_CLOSE, and immediately closing it again.
The semantics should be clear if we let the OS delete the file after the
last handle on it is closed.
Until all handles are closed another process can still open it with
FILE_SHARE_DELETE (according to the docs), but not without the flag.
This seems to be what we want.
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/fileio/fs/createfile.asp

If this fails (see the loop in dirmod.c) we could try to move it to
the recycle bin with SHFileOperation with FO_DELETE.

It seems the Windows unlink is not implemented correctly and we need to
replace it.
I don't feel easy with the ignore-EACCES idea.

Should I try to supply a patch along these lines?
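A rough, untested sketch of that idea (the function name is invented; the error mapping and the retry loop from dirmod.c are omitted):

```c
#include <windows.h>

/*
 * Sketch: ask the OS to delete the file once the last handle on it
 * is closed.  Opening with FILE_FLAG_DELETE_ON_CLOSE and closing the
 * handle immediately marks the file for deletion; other processes
 * can still open it in the meantime, but only if they themselves
 * pass FILE_SHARE_DELETE.
 */
static int
win32_delete_on_close(const char *path)
{
    HANDLE h = CreateFile(path,
                          DELETE,
                          FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                          NULL,
                          OPEN_EXISTING,
                          FILE_FLAG_DELETE_ON_CLOSE,
                          NULL);

    if (h == INVALID_HANDLE_VALUE)
        return -1;          /* caller would map GetLastError() to errno */

    CloseHandle(h);         /* file goes away when the last handle closes */
    return 0;
}
```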

Andreas



Re: [HACKERS] share info between backends

2007-01-12 Thread Zeugswetter Andreas ADI SD

   comments?
 
  Why not make it probabilistic by using, say, MyProcPid % n where n is
  the number of tablespaces? Then you don't need anything shared.

You still want a (local) counter, so that temp files for one session are
spread out.

Andreas



Re: [HACKERS] Problem linking libecpg.5.3.dylib on OS X

2007-01-12 Thread Michael Meskes
On Thu, Jan 11, 2007 at 10:47:32PM -0500, Tom Lane wrote:
 Actually I'd suggest that using an exported variable at all is probably
 bad style.  I'd suggest that libecpg export a set() function instead:
 ...

I think I found an easier solution. With my latest commit ecpg uses a
#define to set a different debug level and the lib sets the regression
flags according to that level. This still needs a global variable in the
library but it is not accessed from the outside.

Hopefully this works on all archs.

Michael
-- 
Michael Meskes
Email: Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
ICQ: 179140304, AIM/Yahoo: michaelmeskes, Jabber: [EMAIL PROTECTED]
Go SF 49ers! Go Rhein Fire! Use Debian GNU/Linux! Use PostgreSQL!



Re: [HACKERS] [COMMITTERS] pgsql: Stamp major release 8.3.0,

2007-01-12 Thread Michael Meskes
On Fri, Jan 12, 2007 at 01:20:15AM +0100, Joachim Wieland wrote:
 Attached is a patch to get guppy green again (hopefully).

Applied.

Michael
-- 
Michael Meskes
Email: Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
ICQ: 179140304, AIM/Yahoo: michaelmeskes, Jabber: [EMAIL PROTECTED]
Go SF 49ers! Go Rhein Fire! Use Debian GNU/Linux! Use PostgreSQL!



Re: [HACKERS] O_DIRECT, or madvise and/or posix_fadvise

2007-01-12 Thread Martijn van Oosterhout
On Thu, Jan 11, 2007 at 02:35:13PM -0800, [EMAIL PROTECTED] wrote:
 I caught this thread about O_DIRECT on kerneltrap.org:
  http://kerneltrap.org/node/7563
 
 It sounds like there is much to be gained here in terms of reducing
 the number of user/kernel space copies in the operating system.  I got
 the impression that posix_fadvise in the Linux kernel isn't as good as
 it could be.  I noticed in xlog.c that the use of posix_fadvise is
 disabled.  Maybe it's time to do some more experimenting and working
 with the Linux kernel developers.  Or perhaps there is another OS that
 would be better to experiment with?

Postgres doesn't use O_DIRECT and probably never will. The system is
designed to use the system cache, not bypass it.

What recent discussions have highlighted is the need to more accurately
control the flow of data to disk. Apparently kernels currently try to
hold data back much longer than is useful.

Not that I'm volunteering to deal with this.

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.




[HACKERS] to_char not IMMUTABLE?

2007-01-12 Thread Mario Weilguni
I had a problem when upgrading a database from 8.1.4 to 8.2.1:
Sorry, the error messages are in German.

pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 1531; 1259 3477393 INDEX 
idx_inspektionen_dat_inspektion 
pg_restore: [archiver (db)] could not execute query: FEHLER:  Funktionen im 
Indexausdruck muessen als IMMUTABLE markiert sein
(translation: ERROR: functions in index expression must be marked IMMUTABLE)
Command was: CREATE INDEX idx_inspektionen_dat_inspektion ON inspektionen 
USING btree (to_char(dat_inspektion, ''::text));
WARNING: errors ignored on restore: 1

to_char(timestamp, '') should be constant and marked immutable, or am I 
wrong here? Or is it not marked immutable because of possible changes to 
the date format?

Regards,
Mario Weilguni



Re: [HACKERS] share info between backends

2007-01-12 Thread Andrew Dunstan
Zeugswetter Andreas ADI SD wrote:

   comments?
 
  Why not make it probabilistic by using, say, MyProcPid % n where n is
  the number of tablespaces? Then you don't need anything shared.

 You still want a (local) counter, so that temp files for one session are
 spread out.



Yes, but this can be used as a seed so not all backends start with the
same counter value.
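Putting the per-backend seed and the local counter together, a minimal sketch (all names are invented; this is not the actual backend code):

```c
/*
 * Hypothetical sketch: pick the next tablespace for a temp file.
 * The backend's pid seeds the sequence so different backends start
 * on different tablespaces, and a backend-local counter spreads the
 * files of a single session across all tablespaces.
 */
static int next_temp_counter = 0;

static int
choose_temp_tablespace(int my_pid, int num_tablespaces)
{
    return (my_pid + next_temp_counter++) % num_tablespaces;
}

/* e.g. a backend with pid 12345 and 4 tablespaces cycles 1, 2, 3, 0, ... */
```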

cheers

andrew





Re: [HACKERS] to_char not IMMUTABLE?

2007-01-12 Thread Martijn van Oosterhout
On Fri, Jan 12, 2007 at 11:55:07AM +0100, Mario Weilguni wrote:
 to_char(timestamp, '') should be constant and marked immutable,
 or am I wrong here? Or is it not marked immutable because of possible
 changes on date_format?

AIUI, to_char is not immutable because it can be affected by external
variables, like LC_TIME.

As it is though, I'm not sure why you're using to_char here; surely
extract or date_trunc would be more appropriate?
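For example, a sketch of IMMUTABLE-safe alternatives for the index from the original report (assuming dat_inspektion is a plain timestamp column, as the report suggests):

```sql
-- Instead of to_char(), which is only STABLE, index an IMMUTABLE
-- expression (names taken from the report; adjust as needed):
CREATE INDEX idx_inspektionen_dat_inspektion
    ON inspektionen (date_trunc('day', dat_inspektion));

-- or, to index just the year:
CREATE INDEX idx_inspektionen_jahr
    ON inspektionen (date_part('year', dat_inspektion));
```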

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.




Re: [HACKERS] [COMMITTERS] pgsql: Stamp major release 8.3.0,

2007-01-12 Thread Tom Lane
Joachim Wieland [EMAIL PROTECTED] writes:
 Attached patch enables the special expected files only for
 i386-unknown-openbsd3.8.

This seems the wrong approach; we do not have anywhere near that good a
handle on which platforms have this behavior.  I'd vote for treating it
like a locale difference, ie, just accept either result on any platform.

regards, tom lane



Re: [HACKERS] to_char not IMMUTABLE?

2007-01-12 Thread Mario Weilguni
Am Freitag, 12. Januar 2007 14:48 schrieb Martijn van Oosterhout:
 On Fri, Jan 12, 2007 at 11:55:07AM +0100, Mario Weilguni wrote:
  to_char(timestamp, '') should be constant and marked immutable,
  or am I wrong here? Or is it not marked immutable because of possible
  changes on date_format?

 AIUI, to_char is not immutable because it can be affected by external
 variables, like LC_TIME.

 As it is though, I'm not sure why you're using to_char here; surely
 extract or date_trunc would be more appropriate?

Thanks for the info. Changing this to use extract is no real problem; I was 
just curious whether this is intended behaviour.

Best regards,
Mario Weilguni



Re: [HACKERS] [GENERAL] Checkpoint request failed on version 8.2.1.

2007-01-12 Thread Magnus Hagander
On Fri, Jan 12, 2007 at 10:49:53AM +0100, Zeugswetter Andreas ADI SD wrote:
 
   I find it very unlikely that you would during normal operations end up
   in a situation where you would first have permissions to create files in
   a directory, and then lose them.
   What could be is that you have a directory where you never had
   permissions to create the file in the first place.
  
   Any chance to differentiate between these?
  
  The cases we're concerned about involve access to an existing file, not
  attempts to create a new one, so I'm not clear what your point is.
 
 I am wondering if we can delete the file by opening it with
 FILE_FLAG_DELETE_ON_CLOSE, and immediately closing it again.
 The semantics should be clear if we let the OS delete the file after the
 last handle on it is closed.
 Until all handles are closed another process can still open it with
 FILE_SHARE_DELETE (according to the docs), but not without the flag.
 This seems to be what we want.
 http://msdn.microsoft.com/library/default.asp?url=/library/en-us/fileio/fs/createfile.asp
 
 If this fails (see the loop in dirmod.c) we could try to move it to
 the recycle bin with SHFileOperation with FO_DELETE.
 
 It seems the Windows unlink is not implemented correctly and we need to
 replace it.
 I don't feel easy with the ignore-EACCES idea.
 
 Should I try to supply a patch along these lines?

Doesn't sound unreasonable, so yes, let's give it a try at least.

//Magnus



Re: [HACKERS] [GENERAL] Checkpoint request failed on version 8.2.1.

2007-01-12 Thread Magnus Hagander
On Thu, Jan 11, 2007 at 10:39:47PM -0500, Tom Lane wrote:
 Jim C. Nasby [EMAIL PROTECTED] writes:
  On Thu, Jan 11, 2007 at 03:14:37PM -0500, Tom Lane wrote:
  ... And anyway there should never
  *be* a real permissions problem; if there is then the user's been poking
  under the hood sufficient to void the warranty anyway ;-)
 
  Or some other helpful process such as a virus scanner has been poking
  under the hood for you... :(
 
 One point worth making is that I'm not really convinced anymore that
 we have proof that antivirus code has been creating any such problems.

We do. I have positive proof of this being caused by AV software.

I don't know that it has been the problem in *all cases*, certainly, but
I've had kernel stacktraces pointing into AV filter drivers more than
once.


 We have several anecdotal cases where someone reported erratic
 permission denied problems on Windows, and we suggested getting rid
 of any AV code, and it seemed to fix their problem --- but how long did
 they test?  This problem is inherently very timing-sensitive, and so the
 fact that you don't see it for a little while is hardly proof that it's
 gone.  See the report that started this thread for examples of apparent
 correlations that are really quite spurious, like whether the test case
 is being driven locally or not.  It could easily be that every report
 we've heard really traces to the not-yet-deleted-file problem.

No, not all of them. But certainly a fair share of them can have been.

 So basically what we'd have is that if you manually remove permissions
 on a database file or directory you'd be risking data loss; but heck,
 if you manually move, rename, delete such a file you're risking
 (guaranteeing) data loss.

That was the point I was trying to make earlier :-)

//Magnus



Re: [HACKERS] [GENERAL] Checkpoint request failed on version 8.2.1.

2007-01-12 Thread Tom Lane
Zeugswetter Andreas ADI SD [EMAIL PROTECTED] writes:
 It seems the win unlink is not implemented correctly and we need to
 replace it.

Easier said than done ...

regards, tom lane



Re: [HACKERS] to_char not IMMUTABLE?

2007-01-12 Thread Tom Lane
Mario Weilguni [EMAIL PROTECTED] writes:
 Thanks for the info. Changing this to use extract is no real problem, I was 
 just curious if this is intendend behaviour.

From the CVS logs:

2006-11-28 14:18  tgl

* src/include/catalog/: pg_proc.h (REL7_3_STABLE), pg_proc.h
(REL7_4_STABLE), pg_proc.h (REL8_1_STABLE), pg_proc.h
(REL8_0_STABLE): Mark to_number() and the numeric-type variants of
to_char() as stable, not immutable, because their results depend on
lc_numeric; this is a longstanding oversight.  We cannot force
initdb for this in the back branches, but we can at least provide
correct catalog entries for future installations.

2006-11-28 14:18  tgl

* src/include/catalog/pg_proc.h: Mark to_char(timestamp without
timezone) as stable, not immutable, since its result now depends on
the lc_messages setting, as noted by Bruce.  Also, mark to_number()
and the numeric-type variants of to_char() as stable, because their
results depend on lc_numeric; this is a longstanding oversight. 
Also, mark to_date() and to_char(interval) as stable; although
these appear not to depend on any GUC variables as of CVS HEAD,
that seems a property unlikely to survive future improvements.  It
seems best to mark all the formatting functions stable and be done
with it.  catversion not bumped, because this does not seem
critical enough to force a post-RC1 initdb, and anyway we cannot do
so in the back branches.

regards, tom lane



Re: [HACKERS] to_char not IMMUTABLE?

2007-01-12 Thread Albe Laurenz
Mario Weilguni wrote:
 
 I had a problem when upgrading a database from 8.1.4 to 8.2.1:
 Sorry, the error messages are in german.
 
 pg_restore: [archiver (db)] Error while PROCESSING TOC:
 pg_restore: [archiver (db)] Error from TOC entry 1531; 1259 
 3477393 INDEX idx_inspektionen_dat_inspektion 
 pg_restore: [archiver (db)] could not execute query: FEHLER:  
 Funktionen im Indexausdruck muessen als IMMUTABLE markiert sein
 Command was: CREATE INDEX idx_inspektionen_dat_inspektion 
 ON inspektionen USING btree (to_char(dat_inspektion, ''::text));
 WARNING: errors ignored on restore: 1
 
 to_char(timestamp, '') should be constant and marked 
 immutable, or am I wrong here? Or is it not marked immutable 
 because of possible changes on date_format?

At some point, the configuration parameter lc_time should have
an influence on the output of to_char(timestamp, text), although
this behaviour is not yet implemented.

I guess that is why the function is STABLE and not IMMUTABLE.

Maybe you can use date_part('YEAR', dat_inspektion)::bpchar

Yours,
Laurenz Albe



Re: [HACKERS] [COMMITTERS] pgsql: Stamp major release 8.3.0,

2007-01-12 Thread Chris Mair

 Attached is a patch to get guppy green again (hopefully).
 
 The two new files go into src/interfaces/ecpg/test/expected

Hi,

I just wanted to mention that the latest release of OpenBSD i386
(4.0) is still broken too. So the ecpg-check failure would apply
(at least) to 3.8, 4.0, and likely 3.9.

Bye :)
Chris.

PS: OpenBSD 4.0 current is fixed, but I was reluctant to update to
current...





Re: [HACKERS] [GENERAL] Checkpoint request failed on version 8.2.1.

2007-01-12 Thread Tom Lane
Magnus Hagander [EMAIL PROTECTED] writes:
 On Thu, Jan 11, 2007 at 10:39:47PM -0500, Tom Lane wrote:
 One point worth making is that I'm not really convinced anymore that
 we have proof that antivirus code has been creating any such problems.

 We do. I have positive proof of this being caused by AV software.
 I don't know that it has been the problem in *all cases*, certainly, but
 I've had kernel stacktraces pointing into AV filter drivers more than
 once.

No, I didn't claim that Windows AV software is bug-free ;-).  What I
said was that I'm not certain it's related to the permission denied
reports, as opposed to other problems.  Or are your stack traces
specifically for permission denied failures?

regards, tom lane



Re: [HACKERS] SPAR Simple PostgreSQL AddOn Replication System

2007-01-12 Thread Jonah H. Harris

On 1/11/07, Gurjeet Singh [EMAIL PROTECTED] wrote:

Also, the sources are not available.


Do you plan to make the sources available?

--
Jonah H. Harris, Software Architect | phone: 732.331.1324
EnterpriseDB Corporation            | fax: 732.331.1301
33 Wood Ave S, 3rd Floor            | [EMAIL PROTECTED]
Iselin, New Jersey 08830            | http://www.enterprisedb.com/



Re: [HACKERS] [COMMITTERS] pgsql: Stamp major release 8.3.0,

2007-01-12 Thread Stefan Kaltenbrunner
Tom Lane wrote:
 Joachim Wieland [EMAIL PROTECTED] writes:
 Attached patch enables the special expected files only for
 i386-unknown-openbsd3.8.
 
 This seems the wrong approach; we do not have anywhere near that good a
 handle on which platforms have this behavior.  I'd vote for treating it
 like a locale difference, ie, just accept either result on any platform.

Well, the information I have from the OpenBSD developers is that only i386
and m68k are affected (and fixed in
http://archives.neohapsis.com/archives/openbsd/cvs/2006-10/0006.html, but
only for -current). If you are concerned about other OSes, however ...


Stefan



Re: [HACKERS] [GENERAL] Checkpoint request failed on version 8.2.1.

2007-01-12 Thread Zeugswetter Andreas ADI SD
I wrote:
   I find it very unlikely that you would during normal operations end up
   in a situation where you would first have permissions to create files in
   a directory, and then lose them.
   What could be is that you have a directory where you never had
   permissions to create the file in the first place.
  
   Any chance to differentiate between these?
  
  The cases we're concerned about involve access to an existing file, not
  attempts to create a new one, so I'm not clear what your point is.
 
 I am wondering if we can delete the file by opening it with
 FILE_FLAG_DELETE_ON_CLOSE, and immediately closing it again.
 The semantics should be clear if we let the OS delete the file after the
 last handle on it is closed.
 
 Until all handles are closed another process can still open it with
 FILE_SHARE_DELETE (according to docs), but not without the flag.

So say the docs, but Win2000 gives EACCES :-(

 This seems to be what we want.
 http://msdn.microsoft.com/library/default.asp?url=/library/en-us/fileio/fs/createfile.asp

Seems we don't get what we want :-(

 If this fails (see the loop in dirmod.c) we could try to move it to
 the recycle bin with SHFileOperation with FO_DELETE.

This does not seem to work either.

Andreas



Re: [HACKERS] [GENERAL] Checkpoint request failed on version 8.2.1.

2007-01-12 Thread Magnus Hagander
On Fri, Jan 12, 2007 at 09:47:55AM -0500, Tom Lane wrote:
 Magnus Hagander [EMAIL PROTECTED] writes:
  On Thu, Jan 11, 2007 at 10:39:47PM -0500, Tom Lane wrote:
  One point worth making is that I'm not really convinced anymore that
  we have proof that antivirus code has been creating any such problems.
 
  We do. I have positive proof of this being caused by AV software.
  I don't know that it has been the problem in *all cases*, certainly, but
  I've had kernel stacktraces pointing into AV filter drivers more than
  once.
 
 No, I didn't claim that Windows AV software is bug-free ;-).  What I
 said was that I'm not certain it's related to the permission denied
 reports, as opposed to other problems.  Or are your stack traces
 specifically for permission denied failures?

I have at least two cases specifically for the permission denied
failures in postgres.

//Magnus



Re: [HACKERS] Request for review: tsearch2 patch

2007-01-12 Thread Teodor Sigaev

Yeah, it's a workaround. Since there's no concept other than
alpha/numeric/latin in tsearch2, Asian characters have to fall into
one of them.


Ok, I see.

Please test the attached patch - if it is good then I'll commit it on Monday
to HEAD and the 8.2 branches.


PS. Magnus, may I ask you to test under Windows? Thank you.

--
Teodor Sigaev   E-mail: [EMAIL PROTECTED]
   WWW: http://www.sigaev.ru/
diff -c -r -N ../tsearch2.orig/ts_locale.c ./ts_locale.c
*** ../tsearch2.orig/ts_locale.cFri Jan 12 10:53:11 2007
--- ./ts_locale.c   Fri Jan 12 18:10:27 2007
***
*** 12,24 
  size_t
  wchar2char(char *to, const wchar_t *from, size_t len)
  {
if (GetDatabaseEncoding() == PG_UTF8)
{
int r;
  
-   if (len == 0)
-   return 0;
- 
r = WideCharToMultiByte(CP_UTF8, 0, from, -1, to, len,
NULL, NULL);
  
--- 12,24 
  size_t
  wchar2char(char *to, const wchar_t *from, size_t len)
  {
+   if (len == 0)
+   return 0;
+ 
if (GetDatabaseEncoding() == PG_UTF8)
{
int r;
  
r = WideCharToMultiByte(CP_UTF8, 0, from, -1, to, len,
NULL, NULL);
  
***
*** 34,50 
  
return wcstombs(to, from, len);
  }
  
  size_t
  char2wchar(wchar_t *to, const char *from, size_t len)
  {
if (GetDatabaseEncoding() == PG_UTF8)
{
int r;
  
-   if (len == 0)
-   return 0;
- 
r = MultiByteToWideChar(CP_UTF8, 0, from, len, to, len);
  
if (!r)
--- 34,52 
  
return wcstombs(to, from, len);
  }
+ #endif   /* WIN32 */
  
  size_t
  char2wchar(wchar_t *to, const char *from, size_t len)
  {
+   if (len == 0)
+   return 0;
+ 
+ #ifdef WIN32
if (GetDatabaseEncoding() == PG_UTF8)
{
int r;
  
r = MultiByteToWideChar(CP_UTF8, 0, from, len, to, len);
  
if (!r)
***
*** 60,88 
  
return r;
}
  
return mbstowcs(to, from, len);
  }
- #endif   /* WIN32 */
  
  int
  _t_isalpha(const char *ptr)
  {
!   wchar_t character;
  
!   char2wchar(character, ptr, 1);
  
!   return iswalpha((wint_t) character);
  }
  
  int
  _t_isprint(const char *ptr)
  {
!   wchar_t character;
  
!   char2wchar(character, ptr, 1);
  
!   return iswprint((wint_t) character);
  }
  #endif   /* TS_USE_WIDE */
  
--- 62,105 
  
return r;
}
+   else 
+ #endif /* WIN32 */
+   if ( lc_ctype_is_c() )
+   {
+   /*
+* pg_mb2wchar_with_len always adds trailing '\0', so 
+* 'to' should be allocated with sufficient space 
+*/
+   return pg_mb2wchar_with_len(from, (pg_wchar *)to, len);
+   }
  
return mbstowcs(to, from, len);
  }
  
  int
  _t_isalpha(const char *ptr)
  {
!   wchar_t character[2];
! 
!   if (lc_ctype_is_c())
!   return isalpha(TOUCHAR(ptr));
  
!   char2wchar(character, ptr, 1);
  
!   return iswalpha((wint_t) *character);
  }
  
  int
  _t_isprint(const char *ptr)
  {
!   wchar_t character[2];
! 
!   if (lc_ctype_is_c())
!   return isprint(TOUCHAR(ptr));
  
!   char2wchar(character, ptr, 1);
  
!   return iswprint((wint_t) *character);
  }
  #endif   /* TS_USE_WIDE */
  
***
*** 126,132 
if ( wlen  0 )
ereport(ERROR,

(errcode(ERRCODE_CHARACTER_NOT_IN_REPERTOIRE),
!errmsg(transalation failed from 
server encoding to wchar_t)));
  
Assert(wlen=len);
wstr[wlen] = 0;
--- 143,149 
if ( wlen  0 )
ereport(ERROR,

(errcode(ERRCODE_CHARACTER_NOT_IN_REPERTOIRE),
!errmsg(translation failed from server 
encoding to wchar_t)));
  
Assert(wlen=len);
wstr[wlen] = 0;
***
*** 152,158 
if ( wlen  0 )
ereport(ERROR,

(errcode(ERRCODE_CHARACTER_NOT_IN_REPERTOIRE),
!errmsg(transalation failed from 
wchar_t to server encoding %d, errno)));
Assert(wlen=len);
out[wlen]='\0';
}
--- 169,175 
if ( wlen  0 )
ereport(ERROR,
   

Re: [HACKERS] [GENERAL] Checkpoint request failed on version 8.2.1.

2007-01-12 Thread Tom Lane
Magnus Hagander [EMAIL PROTECTED] writes:
 On Fri, Jan 12, 2007 at 09:47:55AM -0500, Tom Lane wrote:
 No, I didn't claim that Windows AV software is bug-free ;-).  What I
 said was that I'm not certain it's related to the permission denied
 reports, as opposed to other problems.  Or are your stack traces
 specifically for permission denied failures?

 I have at least two cases specifically for the permission denied
 failures in postgres.

Actually, it could still be the same problem, with the AV software only
involved to the extent that it's trying to scan files for viruses.
That could result in the AV code holding a table file open for a little
bit (or not such a little bit, if it's a big table) after it's nominally
been deleted, and that's exactly the situation we see checkpoints
failing in.

regards, tom lane



Re: [HACKERS] copy table from file: with row replacement?

2007-01-12 Thread Bruce Momjian
Michael Enke wrote:
 Hello all,
 I have a feature request as I think it is not possible with the actual 
 version:
 
 I want to load a huge amount of data and I know that COPY is much faster
 than doing inserts.
 But in my case I have an already filled table, and rows (not all, only some)
 from this table should be replaced. The table has a primary key on one
 column.
 If I do a COPY table FROM file and the key value already exists, PostgreSQL
 tells me that the import is not possible because of the violation of the PK.
 
 If postgres is aware of such a violation, couldn't there be an option to
 the COPY command to delete such existing rows, so that a COPY table FROM
 file will never generate a PK violation message but replaces existing rows?
 
 If this is not possible, would it be the next fastest solution to create a
 before trigger and to delete rows in this trigger? Or is this not different
 from issuing an insert for every line and, if this fails (because of the
 PK), then an update?

I would just COPY into another table, remove any duplicates by joining
the two tables, and then do an INSERT INTO ... SELECT.
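One way to spell that approach out, as a sketch (table, column, and file names are invented; the original poster's goal of replacing existing rows is assumed):

```sql
BEGIN;
CREATE TEMP TABLE staging (LIKE target);
COPY staging FROM '/path/to/datafile';

-- replace existing rows: drop the old versions first,
-- then insert everything from the staging table
DELETE FROM target WHERE id IN (SELECT id FROM staging);
INSERT INTO target SELECT * FROM staging;
COMMIT;
```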

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] [GENERAL] Checkpoint request failed on version 8.2.1.

2007-01-12 Thread Magnus Hagander
Tom Lane wrote:
 Magnus Hagander [EMAIL PROTECTED] writes:
 On Fri, Jan 12, 2007 at 09:47:55AM -0500, Tom Lane wrote:
 No, I didn't claim that Windows AV software is bug-free ;-).  What I
 said was that I'm not certain it's related to the permission denied
 reports, as opposed to other problems.  Or are your stack traces
 specifically for permission denied failures?
 
 I have at least two cases specifically for the permission denied
 failures in postgres.
 
 Actually, it could still be the same problem, with the AV software only
 involved to the extent that it's trying to scan files for viruses.
 That could result in the AV code holding a table file open for a little
 bit (or not such a little bit, if it's a big table) after it's nominally
 been deleted, and that's exactly the situation we see checkpoints
 failing in.

Partially the same, but I've seen AV software keeping it open for
hours... Basically until reboot.

//Magnus



Re: [HACKERS] [GENERAL] Checkpoint request failed on version 8.2.1.

2007-01-12 Thread Tom Lane
Magnus Hagander [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 Actually, it could still be the same problem, with the AV software only
 involved to the extent that it's trying to scan files for viruses.

 Partially the same, but I've seen AV software keeping it open for
 hours... Basically until reboot.

Well, the bug report that just went by proves there's another problem:

: select version();
: 
: PostgreSQL 8.2.1 on i686-pc-mingw32, compiled by GCC gcc.exe (GCC)
: 3.4.2 (mingw-special)
: 
: pg_log:
: 
: 2007-01-12 17:23:16 PANIC:  could not open control file
: global/pg_control: Permission denied

pg_control is certainly not ever deleted or renamed, and in fact I
believe there's an LWLock enforcing that only one PG process at a time
is even touching it.  So we need another theory to explain this one :-(
... anyone have a better one than "Windows is a piece of junk"?

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] ideas for auto-processing patches

2007-01-12 Thread markwkm

On 1/11/07, Andrew Dunstan [EMAIL PROTECTED] wrote:

[EMAIL PROTECTED] wrote:

 I am not clear about what is being proposed. Currently buildfarm syncs
 against (or pulls a fresh copy from, depending on configuration) either
 the main anoncvs repo or a mirror (which you can get using cvsup or
 rsync,
 among other mechanisms). I can imagine a mechanism in which we pull
 certain patches from a patch server (maybe using an RSS feed, or a SOAP
 call?) which could be applied before the run. I wouldn't want to couple
 things much more closely than that.

 I'm thinking that a SOAP call might be easier to implement?  The RSS
 feed seems like it would be more interesting as I am imagining that a
 buildfarm system might be able to react to new patches being added to
 the system.  But maybe that's a trivial thing for either SOAP or an
 RSS feed.

I'd be quite happy with SOAP. We can make SOAP::Lite an optional load
module, so if you don't want to run patches you don't need to have the
module available.


 The patches would need to be vetted first, or no sane buildfarm owner
 will
 want to use them.

 Perhaps as a first go it can pull any patch that can be applied
 without errors?  The list of patches to test can be eventually
 restricted by name and who submitted them.



This reasoning seems unsafe. I am not prepared to test arbitrary patches
on my machine - that seems like a perfect recipe for a trojan horse. I
want to know that they have been vetted by someone I trust. That means
that in order to get into the feed in the first place there has to be a
group of trusted submitters. Obviously, current postgres core committers
should be in that group, and I can think of maybe 5 or 6 other people
that could easily be on it. Perhaps we should leave the selection to the
core team.


That's an excellent point; I didn't think of the trojan horse
scenario.  What do you think about setting up the buildfarm clients
with the users they are willing to test patches for, as opposed to
having the patch system track who the trusted users are?  My thought
is that the former is easier to implement and that it allows anyone to use
the buildfarm to test a patch for anyone, each buildfarm client
owner permitting.

Regards,
Mark

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

   http://www.postgresql.org/about/donate


Re: [HACKERS] ideas for auto-processing patches

2007-01-12 Thread Andrew Dunstan

[EMAIL PROTECTED] wrote:

What do you think about setting up the buildfarm clients
with the users they are willing to test patches for, as opposed to
having the patch system track who the trusted users are?  My thought
is that the former is easier to implement and that it allows anyone to use
the buildfarm to test a patch for anyone, each buildfarm client
owner permitting.


We can do this, but the utility will be somewhat limited. The submitters 
will still have to be known and authenticated on the patch server. I 
think you're also overlooking one of the virtues of the buildfarm, 
namely that it does its thing unattended. If there is a preconfigured 
set of submitters/vetters then we can rely on them all to do their 
stuff. If it's more ad hoc, then when Joe Bloggs submits a spiffy new 
patch every buildfarm owner that wanted to test it would need to go and 
add him to their configured list of patch submitters. This doesn't seem 
too workable.


cheers

andrew





Regards,
Mark





---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [HACKERS] [GENERAL] Checkpoint request failed on version 8.2.1.

2007-01-12 Thread Magnus Hagander
Tom Lane wrote:
 Magnus Hagander [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 Actually, it could still be the same problem, with the AV software only
 involved to the extent that it's trying to scan files for viruses.
 
 Partially the same, but I've seen AV software keeping it open for
 hours... Basically until reboot.
 
 Well, the bug report that just went by proves there's another problem:
 
 : select version();
 : 
 : PostgreSQL 8.2.1 on i686-pc-mingw32, compiled by GCC gcc.exe (GCC)
 : 3.4.2 (mingw-special)
 : 
 : pg_log:
 : 
 : 2007-01-12 17:23:16 PANIC:  could not open control file
 : global/pg_control: Permission denied
 
 pg_control is certainly not ever deleted or renamed, and in fact I
 believe there's an LWLock enforcing that only one PG process at a time
 is even touching it.  So we need another theory to explain this one :-(
 ... anyone have a better one than "Windows is a piece of junk"?

Right. What we need is a list of which processes have handles open to
the file, which can be dumped using Process Explorer (there are other
sysinternals tools to do it as well, but PE is probably the easiest).

//Magnus

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] [GENERAL] Checkpoint request failed on version 8.2.1.

2007-01-12 Thread Tom Lane
Magnus Hagander [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 pg_control is certainly not ever deleted or renamed, and in fact I
 believe there's an LWLock enforcing that only one PG process at a time
 is even touching it.  So we need another theory to explain this one :-(

 Right. What we need is a list of which processes have handles open to
 the file, which can be dumped using Process Explorer (there are other
 sysinternals tools to do it as well, but PE is probably the easiest)-

Hmm, are you just assuming that the underlying error is
ERROR_SHARING_VIOLATION?  One of the things that's bothered me all along
is that there are a dozen different Windows error codes that we map to
EACCES ... perhaps it's time to think about disambiguating that a bit
better?

regards, tom lane



[HACKERS] Idea for fixing the Windows fsync problem

2007-01-12 Thread Tom Lane
I just had a thought about fixing those Windows permission denied
problems.  The case that we believe we understand is where the bgwriter
is trying to execute a previously-logged fsync request against a table
file that is pending delete --- that is, actually has been unlink()'d,
but some other process is holding an open file reference to it.  The
problem is only for fsync, not for write(), because the table drop
sequence always invalidates every shared buffer for the table before
trying to unlink it.

So: maybe the solution is to add a step to the drop sequence, namely
revoking any pending fsync request, before unlink.  This would not only
clean up the Windows issue, it'd also let us remove the current hack in
md.c to not complain about an ENOENT failure (which is really hardly any
safer than ignoring EACCES would be, if you want to be honest about it).

The problem is that the ForwardFsyncRequest() mechanism is asynchronous:
currently, a backend could see pending fsync requests that are still in
the shared-memory queue, but there's no way to tell whether the bgwriter
has already absorbed some requests into its private memory.  How can a
backend tell the bgwriter to forget about it, and then delay until it
can be sure that the bgwriter won't try it later?

We could have backends put revoke fsync requests into the shared queue
and then sleep until they see the queue has been drained ... but there's
not a convenient way to implement that delay, and I hardly want to just
sleep and retry during every table drop.  It'd probably take at least
one more LWLock, and noticeably more complicated ForwardFsyncRequest()
logic, to make this work.

Thoughts?  Is this a reasonable solution path, or is it likely to be a
waste of time?  We know that there are causes of "permission denied"
that are not explained by the pending-delete problem.

regards, tom lane



Re: [HACKERS] SPAR Simple PostgreSQL AddOn Replication System

2007-01-12 Thread Gurjeet Singh

On 1/12/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:



Hi Gurjeet, glad you copied me, else I would have missed it; I'm no longer
subscribed.



I also almost missed your mail. I was monitoring just the conversation
(gmail) that I sent you mail in, but since you changed the subject, it
was almost lost in the other traffic!!!

Maybe you want to post this reply to the group. I'm now messing around

with search engines... moved on.



CC'ing the list

Looks like you almost got it right, just missing one step.


1) You use CONFIGURATION to set up the dB connections, and yes, if they're
green... good.
Then you clicked on ADD SCRIPTS, and yes, this puts all the ***supporting***
scripts on ALL the dB's for you.

ALSO just make sure the MONITOR (M) selection is checked; leave all the
others UNCHECKED.

BUT... SPAR ***doesn't*** assume you want to replicate all your tables...
so
have a look at the TABLE SETUP on the configuration screen.



You mean the link (Table Setup) on the homepage? Yeah, I did not notice
it, and expected it to work automatically!!!

2) Setup the tables you want to replicate.

When you click this, it will show you all the tables in your dB. If they're
RED... they can't be replicated... structure or data wrong... BUT if they're
GREEN, then click the INSTALL BOTH option and submit; that table will
start replicating.
What this actually does is activate the trigger on that table across
ALL dB's for you.



For the tables created by the script I sent you earlier (one int
primary-key column, one normal int column), the entries are red; I did try the
databases individually, but they still didn't turn green. One thing though, when I
selected the databases individually, the background turned white, which,
according to comments on that page, means the rep-server was not able to
determine the status of this table.

I stopped the replication server from the control-panel and then fired the
following to create no-primary-key (identical) tables:

psql.exe -p 6543 test -c create table test1( a int )
psql.exe -p 5432 test -c create table test1( a int )
psql.exe -p 4321 test -c create table test1( a int )

And then again went to the 'Table Setup' page and now the new table (test1)
does show up there, but it is also painted red!!!

Can you tell what is the problem here? Or can you give us a setup, tables
and some data, which can prove that it actually works!

3) START server you got that one.


You'll see, the dB is still lightning fast.
Then just for fun take one server down for a while... make changes to the
others then bring it back... and see what happens.

Thanks for trying Spar... and email me anytime... remember I'm not
subscribed anymore.

Regards Johnny



One more thing Johnny, I asked for sources, others (Jonah) are also asking
for the same. I am sure the community won't show any interest in it if they
don't have the sources. So please consider sending the sources too; and as I
mentioned earlier, you can stick some liberal (preferably BSD) license in
the sources.

Best regards,

--
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | yahoo }.com


Re: [HACKERS] [COMMITTERS] pgsql: Stamp major release 8.3.0,

2007-01-12 Thread Magnus Hagander
Joachim Wieland wrote:
 On Thu, Jan 11, 2007 at 01:15:56PM +0100, Magnus Hagander wrote:
 Can't comment on that one, since I just noticed it existed. How similar
 was this one to the standard regression tests? Those were moved into a
 C executable so they'd run on a Windows system without a shell, could
 the same be done relatively easily with this one?
 
 (Obviously we can't run the ecpg regression tests on msvc builds now -
 oops, didn't know those had their own script)
 
 The ecpg regression tests came in when you started to rewrite the old
 regression script. Actually we exchanged some e-mails about this topic at
 that time :-)

Crappy memory then :-) I don't even recall it now that you mention it ;-)
Too bad you didn't have a ready-made solution...

//Magnus




Re: [HACKERS] [GENERAL] Autovacuum Improvements

2007-01-12 Thread Simon Riggs
On Fri, 2006-12-29 at 20:25 -0300, Alvaro Herrera wrote:
 Christopher Browne wrote:
 
  Seems to me that you could get ~80% of the way by having the simplest
   2 queue implementation, where tables with size < some threshold get
  thrown at the little table queue, and tables above that size go to
  the big table queue.
  
  That should keep any small tables from getting vacuum-starved.
 
 Hmm, would it make sense to keep 2 queues, one that goes through the
 tables in smaller-to-larger order, and the other one in the reverse
 direction?
 
 I am currently writing a design on how to create vacuum queues but I'm
 thinking that maybe it's getting too complex to handle, and a simple
 idea like yours is enough (given sufficient polish).

Sounds good to me. My colleague Pavan has just suggested multiple
autovacuums and then prototyped something almost as a side issue while
trying to solve other problems. I'll show him this entry, maybe he saw
it already? I wasn't following this discussion until now.

The 2 queue implementation seemed to me to be the most straightforward
implementation, mirroring Chris' suggestion. A few aspects that haven't
been mentioned are:
- if you have more than one VACUUM running, we'll need to watch memory
management. Having different queues based upon table size is a good way
of doing that, since the smaller queues have a naturally limited memory
consumption.
- with different size-based queues, the larger VACUUMs can be delayed so
they take much longer, while the small tables can go straight through.

Some feedback from initial testing is that 2 queues probably isn't
enough. If you have tables with 100s of blocks and tables with millions
of blocks, the tables in the mid-range still lose out. So I'm thinking
of a design with 3 queues based upon size ranges, plus the idea that
when a queue is empty it will scan for tables slightly above/below its
normal range. That way we wouldn't need to specify the cut-offs with a
difficult-to-understand new set of GUC parameters, define them exactly,
and then have them be wrong when databases grow.

The largest queue would be the one reserved for Xid wraparound
avoidance. No table would be eligible for more than one queue at a time,
though it might change between queues as it grows.

Alvaro, have you completed your design?

Pavan, what are your thoughts?

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com





Re: [HACKERS] O_DIRECT, or madvise and/or posix_fadvise

2007-01-12 Thread markwkm

On 1/12/07, Martijn van Oosterhout kleptog@svana.org wrote:

On Thu, Jan 11, 2007 at 02:35:13PM -0800, [EMAIL PROTECTED] wrote:
 I caught this thread about O_DIRECT on kerneltrap.org:
  http://kerneltrap.org/node/7563

 It sounds like there is much to be gained here in terms of reducing
 the number of user/kernel space copies in the operating system.  I got
 the impression that posix_fadvise in the Linux kernel isn't as good as
 it could be.  I noticed in xlog.c that the use of posix_fadvise is
 disabled.  Maybe it's time to do some more experimenting and working
 with the Linux kernel developers.  Or perhaps there is another OS that
 would be better to experiment with?

Postgres doesn't use O_DIRECT and probably never will. The system is
designed to use the system cache, not bypass it.

What recent discussions have highlighted is the need to more accurately
control the flow of data to disk. Apparently the kernel currently tries to
hold data back much longer than is useful.


Right, so my understanding is that PostgreSQL needs to provide the OS
with information about how it wants the flow controlled, via
posix_fadvise, and it sounds like the Linux folks believe their
implementation of posix_fadvise needs some work.


Not that I'm volunteering to deal with this.

Have a nice day,


Regards,
Mark

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[HACKERS] TODO items for removal

2007-01-12 Thread Simon Riggs
These two items are complete in 8.2, IIRC

Allow constraint_exclusion to work for UNIONs like it does for
inheritance, allow it to work for UPDATE and DELETE statements, and
allow it to be used for all statements with little performance impact

Fix memory leak from exceptions 
http://archives.postgresql.org/pgsql-performance/2006-06/msg00305.php


This item was rejected by Tom, since a workaround exists

Add estimated_count(*) to return an estimate of COUNT(*) 
This would use the planner ANALYZE statistics to return an estimated
count. http://archives.postgresql.org/pgsql-hackers/2005-11/msg00943.php

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com





Re: [HACKERS] [GENERAL] Autovacuum Improvements

2007-01-12 Thread Simon Riggs
On Fri, 2007-01-12 at 19:33 -0300, Alvaro Herrera wrote:

  Alvaro, have you completed your design?
 
 No, I haven't, and the part that's missing is precisely the queues
 stuff.  I think I've been delaying posting it for too long, and that is
 harmful because it makes other people waste time thinking on issues that
 I may already have resolved, and delays the bashing that yet others will
 surely inflict on my proposal, which is never a good thing ;-)  So maybe
 I'll put in a stub about the queues stuff and see how people like the
 whole thing.

I've not read a word spoken against the general idea, so I think we
should pursue this actively for 8.3. It should be straightforward to
harvest the good ideas, though there will definitely be many.

Perhaps we should focus on the issues that might result, so that we
address those before we spend time on the details of the user interface.
Can we deadlock or hang from running multiple autovacuums?

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com





Re: [HACKERS] [GENERAL] Checkpoint request failed on version 8.2.1.

2007-01-12 Thread Jim C. Nasby
On Thu, Jan 11, 2007 at 06:04:56PM -0500, Andrew Dunstan wrote:
 Please don't. At least not on the PostgreSQL web site nor in the docs. 
 And no, I don't run my production servers on Windows either.
 
 For good or ill, we made a decision years ago to do a proper Windows 
 port. I think that it's actually worked out reasonably well. All 
 operating systems have warts. Not long ago I tended to advise people not 
 to run mission critical Postgresql on Linux unless they were *very* 
 careful, due to the over-commit issue.
 
Yes, and IIRC we documented the overcommit stuff as well.

This isn't about OS holy wars, it's about providing information so that
people can make an informed decision about what OS to run their database
on.
-- 
Jim Nasby    [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)



Re: [HACKERS] copy table from file: with row replacement?

2007-01-12 Thread Michael Enke

This works for small amount of data. But for large amount of data
the join takes a lot of time.

Regards,
Michael

Bruce Momjian wrote:

Michael Enke wrote:


Hello all,
I have a feature request, as I think this is not possible with the current version:

I want to load huge amount of data and I know that COPY is much faster than 
doing inserts.
But in my case I have an already filled table and rows (not all, only partly) 
from this table
should be replaced. The table has a primary key for one column.
If I do a COPY table FROM file and the key value already exists, postgresql 
tells me
that the import is not possible because of the violation of the PK.

If postgres is aware of such a violation, couldn't there be an option to the 
COPY command
to delete such existing rows so that a COPY table FROM file will never generate 
a PK violation message
but replace existing rows instead?

If this is not possible, would the next fastest solution be to create a 
before trigger and to
delete rows in this trigger? Or is this no different from issuing, for every 
line, an insert
and, if this fails (because of the PK), then an update?



I would just COPY into another table, remove any duplicates by joining
the two tables, and then do an INSERT INTO ... SELECT.





Re: [HACKERS] copy table from file: with row replacement?

2007-01-12 Thread Florian G. Pflug

Michael Enke wrote:

This works for small amount of data. But for large amount of data
the join takes a lot of time.


It certainly is faster than any algorithm that checks for duplicates
for each line of copy input could ever be. Especially for joins, doing
them in one large batch allows you to use better algorithms than looping
over one table and searching for matching rows in the other - which is
exactly what copy would need to do if it had a replace-on-duplicate
flag.

I think the fastest way to join two large tables would be a mergejoin.
Try doing an explain select (or explain delete) to see what algorithm
postgres chooses. Check if you actually declared your primary key
in both tables - it might help postgres to know that the column you're joining
on is unique. Also check your work_mem setting - if this is set too low,
it often forces postgres to use inferior plans because it tries to save memory.

greetings, Florian Pflug




Re: [HACKERS] Request for review: tsearch2 patch

2007-01-12 Thread Tatsuo Ishii
  Yeah, it's a workaround. Since there's no concept other than
  alpha/numeric/latin in tsearch2, Asian characters have to fall into
  one of them.
 
 Ok, I see.
 
 Pls, test the attached patch - if it is good then I'll commit it on Monday to 
 HEAD 
 and 8.2 branches.

I have tested on a Linux box running PostgreSQL 8.2.1 (C locale,
EUC_JP encoding), and it worked great!

BTW, is your patch supposed to work with PostgreSQL 8.1?
--
Tatsuo Ishii
SRA OSS, Inc. Japan

 PS. Magnus, may I ask you to test under Windows? Thank you.
 
 -- 
 Teodor Sigaev   E-mail: [EMAIL PROTECTED]
 WWW: http://www.sigaev.ru/



Re: [HACKERS] [GENERAL] Autovacuum Improvements

2007-01-12 Thread Christopher Browne
In an attempt to throw the authorities off his trail, [EMAIL PROTECTED] (Alvaro 
Herrera) transmitted:
 Simon Riggs wrote:

 Some feedback from initial testing is that 2 queues probably isn't
 enough. If you have tables with 100s of blocks and tables with
 millions of blocks, the tables in the mid-range still lose out. So
 I'm thinking that a design with 3 queues based upon size ranges,
 plus the idea that when a queue is empty it will scan for tables
 slightly above/below its normal range.

 Yeah, eventually it occurred to me the fact that as soon as you have
 2 queues, you may as well want to have 3 or in fact any number.
 Which in my proposal is very easily achieved.

Adding an extra attribute to reflect a different ordering or a
different policy allows having as many queues in one queue table as
you might need.

 Alvaro, have you completed your design?

 No, I haven't, and the part that's missing is precisely the queues
 stuff.  I think I've been delaying posting it for too long, and that
 is harmful because it makes other people waste time thinking on
 issues that I may already have resolved, and delays the bashing that
 yet others will surely inflict on my proposal, which is never a good
 thing ;-) So maybe I'll put in a stub about the queues stuff and
 see how people like the whole thing.

Seems like a good idea to me.

Implementing multiple queues amounts to having different worker
processes/threads that operate on the queue table using varying
policies.
-- 
output = reverse(gro.mca @ enworbbc)
http://linuxdatabases.info/info/lisp.html
Rules of the  Evil Overlord #60. My five-year-old  child advisor will
also  be asked to  decipher any  code I  am thinking  of using.  If he
breaks the code  in under 30 seconds, it will not  be used. Note: this
also applies to passwords. http://www.eviloverlord.com/



Re: [HACKERS] [PERFORM] unusual performance for vac following 8.2upgrade

2007-01-12 Thread Richard Huxton

Tom Lane wrote:

Alvaro Herrera [EMAIL PROTECTED] writes:

Can we actually get rid of pg_class entries for temp tables.  Maybe
creating a temp pg_class which would be local to each session?  Heck,
it doesn't even have to be an actual table -- it just needs to be
somewhere from where we can load entries into the relcache.


A few things to think about:

1. You'll break a whole lotta client-side code if temp tables disappear
from pg_class.



2. How do you keep the OIDs for temp tables (and their associated
rowtypes) from conflicting with OIDs for real tables?



3. What about dependencies on user-defined types, functions, etc?


Is there not some gain from just a standard partitioning of pg_class 
into: (system-objects, user-objects, temp-objects)? I'd expect them to 
form a hierarchy of change+vacuum rates (if you see what I mean).


--
  Richard Huxton
  Archonet Ltd



Re: [HACKERS] [PERFORM] unusual performance for vac following 8.2upgrade

2007-01-12 Thread Gregory Stark

Tom Lane [EMAIL PROTECTED] writes:

 3. What about dependencies on user-defined types, functions, etc?
 How will you get things to behave sanely if one backend tries to drop a
 type that some other backend is using in a column of a temp table?  Even
 if you put entries into pg_depend, which would kind of defeat the point
 of not having on-disk catalog entries for temp tables, I don't see how
 the other backend figures out what the referencing object is.

We could just lock the object it depends on. Only really makes sense for very
temporary tables though, not tables a session expects to use for a long series
of transactions.

Another direction to go to address the same problem would be to implement the
standard temporary table concept of a permanent table definition for which
each session gets a different actual set of data which is reset frequently.
Then the meta-data isn't changing frequently.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com



Re: [HACKERS] [PERFORM] unusual performance for vac following 8.2upgrade

2007-01-12 Thread Jim C. Nasby
On Thu, Jan 11, 2007 at 09:51:39PM -0500, Tom Lane wrote:
 Alvaro Herrera [EMAIL PROTECTED] writes:
  Can we actually get rid of pg_class entries for temp tables.  Maybe
  creating a temp pg_class which would be local to each session?  Heck,
  it doesn't even have to be an actual table -- it just needs to be
  somewhere from where we can load entries into the relcache.
 
 A few things to think about:
 
 1. You'll break a whole lotta client-side code if temp tables disappear
 from pg_class.  This is probably solvable --- one thought is to give
 pg_class an inheritance child that is a view on a SRF that reads out the
 stored-in-memory rows for temp pg_class entries.  Likewise for
 pg_attribute and everything else related to a table definition.
 
 2. How do you keep the OIDs for temp tables (and their associated
 rowtypes) from conflicting with OIDs for real tables?  Given the way
 that OID generation works, there wouldn't be any real problem unless a
 temp table survived for as long as it takes the OID counter to wrap all
 the way around --- but in a database that has WITH OIDS user tables,
 that might not be impossibly long ...
 
 3. What about dependencies on user-defined types, functions, etc?
 How will you get things to behave sanely if one backend tries to drop a
 type that some other backend is using in a column of a temp table?  Even
 if you put entries into pg_depend, which would kind of defeat the point
 of not having on-disk catalog entries for temp tables, I don't see how
 the other backend figures out what the referencing object is.
 
 I don't really see any solution to that last point :-(

Perhaps it would be better to partition pg_class and _attributes based
on whether an object is temporary or not. Granted, that still means
vacuuming is a consideration, but at least it wouldn't be affecting
pg_class itself. Separating temp objects out would also make it more
reasonable to have the system automatically vacuum those tables after
every X number of dropped objects.

Unfortunately, that still wouldn't help with the OID issue. :( Unless
there was a SERIAL column in pg_class_temp and other parts of the system
could differentiate between temp and non-temp objects.
-- 
Jim Nasby    [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)



Re: [HACKERS] [PATCHES] Patch to log usage of temporary files

2007-01-12 Thread Guillaume Smet

Hi all,

Sorry for arriving so late into the discussion.

I don't know if it's possible but it could be useful to have the text
of the query which required the creation of the temporary files as an
additional DETAIL line. At least, if it's possible to have it in this
part of the code.

Thoughts?

--
Guillaume



Re: [HACKERS] [PATCHES] wal_checksum = on (default) | off

2007-01-12 Thread Martijn van Oosterhout
On Thu, Jan 11, 2007 at 11:10:38PM +, Simon Riggs wrote:
 On Thu, 2007-01-11 at 17:06 +, Gregory Stark wrote:
  Having a CRC in WAL but not in the heap seems kind of pointless. 
 
 Yes...
 
  If your
  hardware is unreliable the corruption could anywhere. 
 
 Agreed.

I thought the point was that the WAL protects against unexpected power
failure, that sort of thing. In that situation, the memory is the first
to be corrupted, and an active DMA transfer will thus be corrupted
also. We don't need to worry about the data, because the WAL is known
to be accurate.

The WAL does not protect against random data corruption; in normal
operation it is never read. If we want to detect random corruption,
we'd need checksums everywhere, yes. But that's not the goal here.
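If the goal were detecting random corruption, each block or record would carry its own checksum, verified on read. A toy sketch of that idea (Python with zlib's CRC-32; purely illustrative, not PostgreSQL's actual WAL CRC code):

```python
import zlib

def with_crc(payload: bytes) -> bytes:
    """Append a 4-byte CRC-32 to the payload as a record trailer."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def verify(record: bytes) -> bool:
    """Recompute the CRC over the payload and compare with the trailer."""
    payload, trailer = record[:-4], record[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == trailer

rec = with_crc(b"example WAL record")
assert verify(rec)                 # intact record passes
assert not verify(b"X" + rec[1:])  # a flipped byte is caught
```

A CRC catches random bit flips with high probability, but it is not cryptographic and says nothing about deliberate tampering.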

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.




Re: [pgsql-patches] [HACKERS] [PATCHES] Patch to log usage of

2007-01-12 Thread Bruce Momjian
Simon Riggs wrote:
 On Thu, 2007-01-11 at 12:37 -0500, Bruce Momjian wrote:
 
  The trace probe was incorrect 
 
 Yes, incomplete, no doubt. On that point you were 100% right to reject.
 
  and kind of at an odd place.  I don't
  think we want to go down the road of throwing trace in everwhere, do we?
  I would like to see a more systematic approach to it.
 
 I guess my systematic approach was to add PG_TRACE to all new log points
 from now on, so we have a choice of which trace/log mechanism to use.

I think the right approach is to look at our existing code and come up
with places we want them, and add them in one shot.  Doing things
in small parts doesn't work too well with a project this size.

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDB    http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [pgsql-patches] [HACKERS] [PATCHES] Patch to log usage of temporary files

2007-01-12 Thread Simon Riggs
On Fri, 2007-01-12 at 11:44 -0500, Bruce Momjian wrote:

 I think the right approach is to look at our existing code and come up
 with places we want them, and add them in one shot.  Doing things
 in small parts doesn't work too well with a project this size.

Will do.

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com





Re: [HACKERS] [pgsql-patches] [PATCHES] Patch to log usage of temporary

2007-01-12 Thread Bruce Momjian
Guillaume Smet wrote:
 Hi all,
 
 Sorry for arriving so late into the discussion.
 
 I don't know if it's possible but it could be useful to have the text
 of the query which required the creation of the temporary files as an
 additional DETAIL line. At least, if it's possible to have it in this
 part of the code.
 
 Thoughts?

We have the ability to conditionally print statements based on error
level, but LOG isn't a valid level for log_min_error_statement.

We could add a parameter that few people would use, but the right way to
do this is to log all queries.

We do not allow unlimited logging control or the system would be too
complex.

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDB    http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] [pgsql-patches] [PATCHES] Patch to log usage of temporary files

2007-01-12 Thread Guillaume Smet

Hi Bruce,

Thanks for your answer.

On 1/12/07, Bruce Momjian [EMAIL PROTECTED] wrote:

We have the ability to conditionally print statements based on error
level, but LOG isn't a valid level for log_min_error_statement.

We could add a parameter that few people would use, but the right way to
do this is to log all queries.

We do not allow unlimited logging control or the system would be too
complex.


That's not what I had in mind. I was asking if the text of the query
was available when logging the temp file usage. If so it could be good
to add a DETAIL line with it directly and systematically when logging
the temp file usage.

Sure, if you log every query, you could find which query was
responsible for that temp file, but I suspect that this new log
feature will be used on production servers and so usually without
statement logging enabled.

IMHO, it's really important to know which queries are responsible for
the temp file usage.

If the text of the query is not available when logging the temp file
usage then I agree that we cannot do anything.

--
Guillaume



Re: [HACKERS] [pgsql-patches] [PATCHES] Patch to log usage of temporary files

2007-01-12 Thread Tom Lane
Guillaume Smet [EMAIL PROTECTED] writes:
 That's not what I had in mind. I was asking if the text of the query
 was available when logging the temp file usage. If so it could be good
 to add a DETAIL line with it directly and systematically when logging
 the temp file usage.

(1) you could make that argument about *any* log message whatsoever.

(2) there is already a generalized solution to this, it's called
log_min_error_statement.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] [pgsql-patches] [PATCHES] Patch to log usage of temporary

2007-01-12 Thread Bruce Momjian
Guillaume Smet wrote:
 Hi Bruce,
 
 Thanks for your answer.
 
 On 1/12/07, Bruce Momjian [EMAIL PROTECTED] wrote:
  We have the ability to conditionally print statements based on error
  level, but LOG isn't a valid level for log_min_error_statement.
 
  We could add a parameter that few people would use, but the right way to
  do this is to log all queries.
 
  We do not allow unlimited logging control or the system would be too
  complex.
 
 That's not what I had in mind. I was asking if the text of the query
 was available when logging the temp file usage. If so it could be good
 to add a DETAIL line with it directly and systematically when logging
 the temp file usage.
 
 Sure, if you log every query, you could find which query was
 responsible for that temp file, but I suspect that this new log
 feature will be used on production servers and so usually without
 statement logging enabled.
 
 IMHO, it's really important to know which queries are responsible for
 the temp file usage.
 
 If the text of the query is not available when logging the temp file
 usage then I agree that we cannot do anything.

Usually people don't want the query unless they ask for it.  One nifty
trick would be to print the query as DETAIL unless they are already
logging queries, but that just seems too complex.  If you want the
query, why not just log them all?

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDB    http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] [pgsql-patches] [PATCHES] Patch to log usage of temporary files

2007-01-12 Thread Guillaume Smet

On 1/12/07, Bruce Momjian [EMAIL PROTECTED] wrote:

Usually people don't want the query unless they ask for it.  One nifty
trick would be to print the query as DETAIL unless they are already
logging queries, but that just seems too complex.  If you want the
query, why not just log them all?


Because they can't? On a highly loaded production server, people
usually don't log all the queries.

Anyway, if it's too complicated to implement it, perhaps it's not
worth it. I'm just curious to see how people will use this information
if they don't know why the temp file was created.

--
Guillaume

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] [pgsql-patches] [PATCHES] Patch to log usage of temporary files

2007-01-12 Thread Guillaume Smet

On 1/12/07, Tom Lane [EMAIL PROTECTED] wrote:

Guillaume Smet [EMAIL PROTECTED] writes:
 That's not what I had in mind. I was asking if the text of the query
 was available when logging the temp file usage. If so it could be good
 to add a DETAIL line with it directly and systematically when logging
 the temp file usage.

(1) you could make that argument about *any* log message whatsoever.


Yes, probably. The fact is I can't figure out how I will use this
information if I don't know why the file was created, but perhaps I
should just wait for this release and see how I can use it in a
production environment.


(2) there is already a generalized solution to this, it's called
log_min_error_statement.


I didn't think of that when posting my message but Bruce seems to say
that we can't use it in this case.

--
Guillaume



Re: [HACKERS] [pgsql-patches] [PATCHES] Patch to log usage of temporary files

2007-01-12 Thread Tom Lane
Guillaume Smet [EMAIL PROTECTED] writes:
 On 1/12/07, Tom Lane [EMAIL PROTECTED] wrote:
 (2) there is already a generalized solution to this, it's called
 log_min_error_statement.

 I didn't think of that when posting my message but Bruce seems to say
 that we can't use it in this case.

Dunno why he thinks that.  But there is a point here that could use
improvement: shouldn't log_min_error_statement be measured on the same
scale as log_min_messages, ie, LOG is relatively high priority rather
than relatively low priority?  As the code stands, you'd have to knock
it down to DEBUG1 in order to see the statement generating a LOG
message.  This might be harmless (since messages below log_min_messages
won't generate log output at all), but it's surely a bit confusing.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] [pgsql-patches] [PATCHES] Patch to log usage of temporary

2007-01-12 Thread Bruce Momjian
Guillaume Smet wrote:
 On 1/12/07, Bruce Momjian [EMAIL PROTECTED] wrote:
  Usually people don't want the query unless they ask for it.  One nifty
  trick would be to print the query as DETAIL unless they are already
  logging queries, but that just seems too complex.  If you want the
  query, why not just log them all?
 
 Because they can't? On a highly loaded production server, people
 usually don't log all the queries.
 
 Anyway, if it's too complicated to implement it, perhaps it's not
 worth it. I'm just curious to see how people will use this information
 if they don't know why the temp file was created.

We have to balance functionality and ease of use.  That is the way I
analyze these issues.  We usually wait for a few people to request
additional functionality of this type, and then figure out the cleanest
way to implement it.

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDB    http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] [pgsql-patches] [GENERAL] Corrupt database? 8.1/FreeBSD6.0

2007-01-12 Thread Alvaro Herrera
Alvaro Herrera wrote:
 Tom Lane wrote:
  Alvaro Herrera [EMAIL PROTECTED] writes:
 
   Maybe we could forcibly activate the freeze mode on a template database?
  
  Might not be a bad idea.  And even more to the point, forcibly disable
  analyze.
 
 Patch implementing this (albeit untested!) attached.  I'll try to
 reproduce the problem without the patch, and then test with the patch
 applied.

Ok, it does what it's intended to do.  But in testing it I also
confirmed that a database-wide vacuum creates a pgstat entry for it and
for all tables in it.  Is this something we want to prevent?

I'll apply this patch later today to the 8.1 branch unless somebody
objects.

-- 
Alvaro Herrera                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [HACKERS] [pgsql-patches] [GENERAL] Corrupt database? 8.1/FreeBSD6.0

2007-01-12 Thread Tom Lane
Alvaro Herrera [EMAIL PROTECTED] writes:
 Ok, it does what it's intended to do.  But in testing it I also
 confirmed that a database-wide vacuum creates a pgstat entry for it and
 for all tables in it.  Is this something we want to prevent?

That's odd, because I didn't see any such thing when I tested in CVS tip
the other day.  Or did you have stats_block_level turned on?

regards, tom lane



Re: [HACKERS] [pgsql-patches] [PATCHES] Patch to log usage of

2007-01-12 Thread Bruce Momjian
Tom Lane wrote:
 Guillaume Smet [EMAIL PROTECTED] writes:
  On 1/12/07, Tom Lane [EMAIL PROTECTED] wrote:
  (2) there is already a generalized solution to this, it's called
  log_min_error_statement.
 
  I didn't think of that when posting my message but Bruce seems to say
  that we can't use it in this case.
 
 Dunno why he thinks that.  But there is a point here that could use
 improvement: shouldn't log_min_error_statement be measured on the same
 scale as log_min_messages, ie, LOG is relatively high priority rather
 than relatively low priority?  As the code stands, you'd have to knock
 it down to DEBUG1 in order to see the statement generating a LOG
 message.  This might be harmless (since messages below log_min_messages
 won't generate log output at all), but it's surely a bit confusing.

I assume LOG wasn't supported for log_min_error_statement because it isn't
listed in the postgresql.conf file as a valid value.  Let me look at adding LOG
in there in the place you suggest.

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDB    http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] [pgsql-patches] [GENERAL] Corrupt database? 8.1/FreeBSD6.0

2007-01-12 Thread Alvaro Herrera
Tom Lane wrote:
 Alvaro Herrera [EMAIL PROTECTED] writes:
  Ok, it does what it's intended to do.  But in testing it I also
  confirmed that a database-wide vacuum creates a pgstat entry for it and
  for all tables in it.  Is this something we want to prevent?
 
 That's odd, because I didn't see any such thing when I tested in CVS tip
 the other day.  Or did you have stats_block_level turned on?

Yes, I turned it on for this test.

-- 
Alvaro Herrera                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] [pgsql-patches] [PATCHES] Patch to log usage of temporary files

2007-01-12 Thread Bill Moran
In response to Guillaume Smet [EMAIL PROTECTED]:

 On 1/12/07, Bruce Momjian [EMAIL PROTECTED] wrote:
  Usually people don't want the query unless they ask for it.  One nifty
  trick would be to print the query as DETAIL unless they are already
  logging queries, but that just seems too complex.  If you want the
  query, why not just log them all?
 
 Because they can't? On a highly loaded production server, people
 usually don't log all the queries.
 
 Anyway, if it's too complicated to implement it, perhaps it's not
 worth it. I'm just curious to see how people will use this information
 if they don't know why the temp file was created.

I can only speak for myself but:
* I'm already using the patch in our lab.  Since the lab is the same
  hardware/config/etc as production, I can use the information to fine
  tune configs that then get migrated to production after careful testing.
  Since it's a lab environment, I'm free to turn on and off all sorts of
  stuff that would be unwise in production.  Thus the lab frequently has
  full query logging turned on.
* Currently, our production systems have plenty of spare IO.  The result
  is that I _do_ log queries on production servers, and will continue
  to do so until it becomes an issue.  Additionally, we have lots of
  room to grow with this hardware, so I can use the data collected about
  temp file usage to justify additional RAM.  Don't know how long I'll be
  able to leave query logging enabled on the production systems, but I'm
  taking advantage of it as long as possible.
* This variable can be tweaked per-session, which means if I've got queries
  that I suspect are causing unwarranted temp files on a production server,
  I can enable it on a per-connection basis to track down the problem and
  work on a specific query, on production systems, without too much
  disruption of the rest of the work that's going on:
  set log_temp_files = 0;
  run suspect query
  set log_temp_files = -1;
  investigate logs

At least, those are my current plans ...

-- 
Bill Moran
Collaborative Fusion Inc.



Re: [HACKERS] [pgsql-patches] [GENERAL] Corrupt database? 8.1/FreeBSD6.0

2007-01-12 Thread Tom Lane
Alvaro Herrera [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 Alvaro Herrera [EMAIL PROTECTED] writes:
 Ok, it does what it's intended to do.  But in testing it I also
 confirmed that a database-wide vacuum creates a pgstat entry for it and
 for all tables in it.  Is this something we want to prevent?
 
 That's odd, because I didn't see any such thing when I tested in CVS tip
 the other day.  Or did you have stats_block_level turned on?

 Yes, I turned it on for this test.

Well, the vacuums certainly accounted for I/O, so I suppose this is
reasonable behavior.

regards, tom lane



Re: [HACKERS] NaN behavior

2007-01-12 Thread Neil Conway
On Thu, 2007-01-11 at 21:04 -0500, Neil Conway wrote:
 Comments? I'll write up a doc patch, barring any objections.

I'll apply the attached doc patch to CVS tomorrow, barring any
objections.

-Neil

Index: doc/src/sgml/datatype.sgml
===================================================================
RCS file: /home/neilc/postgres/cvs_root/pgsql/doc/src/sgml/datatype.sgml,v
retrieving revision 1.183
diff -c -p -r1.183 datatype.sgml
*** doc/src/sgml/datatype.sgml	22 Dec 2006 22:09:31 -0000	1.183
--- doc/src/sgml/datatype.sgml	12 Jan 2007 21:52:22 -0000
***************
*** 438,447 ****
     <sect2 id="datatype-numeric-decimal">
      <title>Arbitrary Precision Numbers</title>
  
!    <indexterm zone="datatype-numeric-decimal">
      <primary>numeric (data type)</primary>
     </indexterm>
  
     <indexterm>
      <primary>decimal</primary>
      <see>numeric</see>
--- 438,451 ----
     <sect2 id="datatype-numeric-decimal">
      <title>Arbitrary Precision Numbers</title>
  
!    <indexterm>
      <primary>numeric (data type)</primary>
     </indexterm>
  
+    <indexterm>
+     <primary>arbitrary precision numbers</primary>
+    </indexterm>
+ 
     <indexterm>
      <primary>decimal</primary>
      <see>numeric</see>
*************** NUMERIC
*** 515,520 ****
--- 519,529 ----
      plus eight bytes overhead.
     </para>
  
+    <indexterm>
+     <primary>not a number</primary>
+     <secondary>numeric (data type)</secondary>
+    </indexterm>
+ 
     <para>
      In addition to ordinary numeric values, the <type>numeric</type>
      type allows the special value <literal>NaN</>, meaning 
*************** NUMERIC
*** 525,530 ****
--- 534,551 ----
      the string <literal>NaN</> is recognized in a case-insensitive manner.
     </para>
  
+    <note>
+     <para>
+      In most implementations of the <quote>not-a-number</> concept,
+      <literal>NaN</> is not considered equal to any other numeric
+      value (including <literal>NaN</>).  In order to allow
+      <type>numeric</> values to be sorted and used in tree-based
+      indexes, <productname>PostgreSQL</> treats <literal>NaN</>
+      values as equal, and greater than all non-<literal>NaN</>
+      values.
+     </para>
+    </note>
+ 
     <para>
      The types <type>decimal</type> and <type>numeric</type> are
      equivalent.  Both types are part of the <acronym>SQL</acronym>
*************** NUMERIC
*** 613,618 ****
--- 634,644 ----
      from zero will cause an underflow error.
     </para>
  
+    <indexterm>
+     <primary>not a number</primary>
+     <secondary>double precision</secondary>
+    </indexterm>
+ 
     <para>
      In addition to ordinary numeric values, the floating-point types
      have several special values:
*************** NUMERIC
*** 631,636 ****
--- 657,673 ----
      these strings are recognized in a case-insensitive manner.
     </para>
  
+    <note>
+     <para>
+      IEEE754 specifies that <literal>NaN</> should not compare equal
+      to any other floating-point value (including <literal>NaN</>).
+      In order to allow floating-point values to be sorted and used
+      in tree-based indexes, <productname>PostgreSQL</> treats
+      <literal>NaN</> values as equal, and greater than all
+      non-<literal>NaN</> values.
+     </para>
+    </note>
+ 
     <para>
      <productname>PostgreSQL</productname> also supports the SQL-standard
      notations <type>float</type> and
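The behavior the patch's notes describe is easy to demonstrate outside the database; a small Python illustration (not part of the patch) of why an explicit NaN ordering is needed:

```python
import math

nan = float("nan")

# IEEE 754: NaN compares unequal to everything, including itself.
assert nan != nan

# That property breaks naive comparison-based ordering, which is why
# PostgreSQL defines NaN as equal to itself and greater than every
# non-NaN value.  The same total order can be mimicked with a sort key:
vals = [2.5, nan, -1.0]
ordered = sorted(vals, key=lambda v: (math.isnan(v), 0.0 if math.isnan(v) else v))
assert math.isnan(ordered[-1])    # NaN sorts last, i.e. greatest
assert ordered[:2] == [-1.0, 2.5]
```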