Re: [HACKERS] Data loss, vacuum, transaction wrap-around

2005-02-20 Thread Bruno Wolff III
On Sat, Feb 19, 2005 at 18:04:42 -0500,
 
 Now, let's imagine PostgreSQL is being developed by a large company. QA
 announces it has found a bug that will cause all the users' data to
 disappear if they don't run a maintenance program correctly. Vacuuming one
 or two tables is not enough; you have to vacuum all tables in all
 databases.

Except that Postgres isn't a large company, and the work of back-patching
and testing old versions would be done at the expense of more important
work.

 This bug would get marked as a critical error and a full scale effort
 would be made to contact previous users to upgrade or check their
 procedures.

I don't think all commercial companies would do that. I doubt that even
most of them would.
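For background, the failure mode under discussion comes from PostgreSQL's 32-bit transaction IDs being compared circularly: once an unfrozen tuple's XID falls more than 2^31 transactions behind the current counter, it suddenly looks like it is in the future and vanishes from view. A rough sketch of the signed-difference comparison (a simplified illustration in the spirit of the backend's TransactionIdPrecedes, not the actual C code):

```python
def xid_precedes(a: int, b: int) -> bool:
    """Circular comparison of 32-bit transaction IDs: a is "older" than b
    if the signed 32-bit difference (a - b) is negative."""
    diff = (a - b) & 0xFFFFFFFF
    if diff >= 0x80000000:          # reinterpret as signed 32-bit
        diff -= 0x100000000
    return diff < 0

# Within half the XID space, ordering behaves normally ...
print(xid_precedes(100, 200))                 # True
# ... and it survives wraparound: an XID just before the wrap point
# still precedes one just after it.
print(xid_precedes(0xFFFFFFF0, 10))           # True
# But an XID more than 2^31 transactions in the past flips to the "future":
print(xid_precedes(10, 0x8000000B))           # False
```

That last case is exactly why vacuum must freeze old tuples in every table of every database before the counter advances 2^31 past them.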

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Data loss, vacuum, transaction wrap-around

2005-02-20 Thread pgsql
 On Sat, Feb 19, 2005 at 18:04:42 -0500,

 Now, let's imagine PostgreSQL is being developed by a large company. QA
 announces it has found a bug that will cause all the users' data to
 disappear if they don't run a maintenance program correctly. Vacuuming one
 or two tables is not enough; you have to vacuum all tables in all
 databases.

 Except that Postgres isn't a large company, and the work of back-patching
 and testing old versions would be done at the expense of more important
 work.

PostgreSQL is an open source project that plays with the big guys. Look at
the Linux kernel. Imagine their file system guys thinking this way. Linux
would still be Linus' hobby.


 This bug would get marked as a critical error and a full scale effort
 would be made to contact previous users to upgrade or check their
 procedures.

 I don't think all commercial companies would do that. I doubt that even
 most of them would.

Database companies? You bet they would.

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [HACKERS] Data loss, vacuum, transaction wrap-around

2005-02-20 Thread Robert Treat
On Sunday 20 February 2005 00:30, Tom Lane wrote:
 Mark Kirkwood [EMAIL PROTECTED] writes:
  To be fair to Mark, there does seem to be an increasing number of
  reports of this issue. In spite of the in-the-works fix for 8.1, it
  would be a pity to see customers losing data from xid wrap-around.

 The question is whether we are willing to back-patch a fairly large
 amount of not-very-well-tested code into 8.0.  See
 http://archives.postgresql.org/pgsql-patches/2005-02/msg00123.php
 http://archives.postgresql.org/pgsql-committers/2005-02/msg00127.php
 http://archives.postgresql.org/pgsql-committers/2005-02/msg00131.php

 I personally don't think it's worth the risk.  The code works well
 enough to commit to development tip, but it's fundamentally alpha
 quality code.


I would lean away from putting it in 8.0; however, aren't we planning an 8.0.x 
release that will have a beta and/or RC testing period for the ARC-related 
changes? If so, I might be open to putting it in that release (though the bits 
requiring initdb are a killer).

-- 
Robert Treat
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [HACKERS] Data loss, vacuum, transaction wrap-around

2005-02-20 Thread Stephan Szabo
On Sun, 20 Feb 2005 [EMAIL PROTECTED] wrote:

  On Sat, Feb 19, 2005 at 18:04:42 -0500,
 
  Now, let's imagine PostgreSQL is being developed by a large company. QA
  announces it has found a bug that will cause all the users' data to
  disappear if they don't run a maintenance program correctly. Vacuuming one
  or two tables is not enough; you have to vacuum all tables in all
  databases.
 
  Except that Postgres isn't a large company, and the work of back-patching
  and testing old versions would be done at the expense of more important
  work.

 PostgreSQL is an open source project that plays with the big guys. Look at
 the Linux kernel. Imagine their file system guys thinking this way. Linux
 would still be Linus' hobby.

So, you are certain that every Linux file system bug has been patched all
the way back to say kernel version 1.0 then?  Do you have any evidence of
this claim?

  This bug would get marked as a critical error and a full scale effort
  would be made to contact previous users to upgrade or check their
  procedures.
 
  I don't think all commercial companies would do that. I doubt that even
  most of them would.

 Database companies? You bet they would.

Do you have any evidence or are you merely spouting an opinion as fact?



Re: [HACKERS] Data loss, vacuum, transaction wrap-around

2005-02-20 Thread pgsql
 On Sun, 20 Feb 2005 [EMAIL PROTECTED] wrote:

  On Sat, Feb 19, 2005 at 18:04:42 -0500,
 
  Now, let's imagine PostgreSQL is being developed by a large company. QA
  announces it has found a bug that will cause all the users' data to
  disappear if they don't run a maintenance program correctly. Vacuuming one
  or two tables is not enough; you have to vacuum all tables in all
  databases.
 
  Except that Postgres isn't a large company, and the work of back-patching
  and testing old versions would be done at the expense of more important
  work.

 PostgreSQL is an open source project that plays with the big guys. Look at
 the Linux kernel. Imagine their file system guys thinking this way. Linux
 would still be Linus' hobby.

 So, you are certain that every Linux file system bug has been patched all
 the way back to say kernel version 1.0 then?  Do you have any evidence of
 this claim?

No one is suggesting going back to version 1.0, but critical data-loss bugs
that are present and relevant in prior versions still in use do get fixed.


  This bug would get marked as a critical error and a full scale effort
  would be made to contact previous users to upgrade or check their
  procedures.
 
  I don't think all commercial companies would do that. I doubt that
  even most of them would.

 Database companies? You bet they would.

 Do you have any evidence or are you merely spouting an opinion as fact?

With Oracle and DB2, yes I have some personal experience.



Re: [HACKERS] Get rid of system attributes in pg_attribute?

2005-02-20 Thread Robert Treat
On Sunday 20 February 2005 00:25, Tom Lane wrote:
 Robert Treat [EMAIL PROTECTED] writes:
  If I am understanding this correctly, they could only be displayed if
  selected explicitly right?

 That's always been true.  The behavior at the level of SQL commands
 wouldn't change.  The question is whether any apps out there examine
 pg_attribute and expect these rows to be present.  Most of the code
 I've seen that examines pg_attribute explicitly disregards rows with
 attnum < 0 ...


One of us is not understanding the other :-)  I'm asking: if I have a piece of 
code that does something like select attname from pg_attribute where attrelid 
= 'stock'::regclass::oid with the intent of displaying all those attnames, 
then the system atts will no longer show up in that list, correct?  I'm 
asking because I have some code at work that does something like this, so I'm 
wondering if I need to do some further investigating come Tuesday morning.

-- 
Robert Treat
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] Data loss, vacuum, transaction wrap-around

2005-02-20 Thread Stephan Szabo
On Sun, 20 Feb 2005 [EMAIL PROTECTED] wrote:

  On Sun, 20 Feb 2005 [EMAIL PROTECTED] wrote:
 
   On Sat, Feb 19, 2005 at 18:04:42 -0500,
  
   Now, let's imagine PostgreSQL is being developed by a large company. QA
   announces it has found a bug that will cause all the users' data to
   disappear if they don't run a maintenance program correctly. Vacuuming one
   or two tables is not enough; you have to vacuum all tables in all
   databases.
  
   Except that Postgres isn't a large company, and the work of back-patching
   and testing old versions would be done at the expense of more important
   work.
 
  PostgreSQL is an open source project that plays with the big guys. Look at
  the Linux kernel. Imagine their file system guys thinking this way. Linux
  would still be Linus' hobby.
 
  So, you are certain that every Linux file system bug has been patched all
  the way back to say kernel version 1.0 then?  Do you have any evidence of
  this claim?

 No one is suggesting going back to version 1.0, but critical data-loss bugs
 that are present and relevant in prior versions still in use do get fixed.

I still doubt your claim about patching, and you haven't given any evidence,
but let's just assume it's true, because otherwise even trying
to hold a discussion is fruitless.

I also dispute your claim, based on the back-patching claim, that Linux would
be Linus' hobby if the file system guys thought this way.  Given that
stable Linux branches often aren't, if there weren't aggregators who
provide upgrades that are at least supposedly tested and reasonably stable,
and who sometimes do their own back-patching, Linux wouldn't have the sort of
success it does, because people would have to do a lot more choosing between
getting bug fix X and doing huge amounts of testing to make sure nothing
else is broken. Thus, I believe you are greatly overstating the effect
that your first claim has on your second, to the point of making an
invalid argument.

Personally, I'd generally wish that 8.0 got a fix for this, because that
way we could (after sufficient testing) push an 8.0 version that we
considered stable and suggest people move to it.  However, I don't have a
whole lot of time to do such a patch nor to do sufficient testing, and
I'm not arrogant enough to believe I can order around volunteers and
companies I'm not a customer of.

   This bug would get marked as a critical error and a full scale effort
   would be made to contact previous users to upgrade or check their
   procedures.
  
   I don't think all commercial companies would do that. I doubt that
   even most of them would.
 
  Database companies? You bet they would.
 
  Do you have any evidence or are you merely spouting an opinion as fact?
 
 With Oracle and DB2, yes I have some personal experience.

My last company's experience with Oracle support still leaves me
questioning that claim.  They basically got don't do that, then or move to
the newest major revision when they had a construct that caused the
server to stop responding.  It's not the same conditions (although I
believe the DBA did reload from backup because no one could guarantee that
there couldn't possibly have been data loss), but it's certainly not
indicative of the sort of full-scale effort you're describing.

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [pgsql-hackers-win32] [HACKERS] win32 performance - fsync question

2005-02-20 Thread Magnus Hagander
 Magnus prepared a trivial patch which added the O_SYNC flag for Windows
 and mapped it to FILE_FLAG_WRITE_THROUGH in win32_open.c.

Attached is this trivial patch. As Merlin says, it needs some more
reliability testing. But the numbers are at least reasonable - it
*seems* like it's doing the right thing (as long as you turn off write
cache). And it's certainly a significant performance increase - it
brings the speed almost up to the same as linux.

For testing, I have built and uploaded binaries from the 8.0 stable
branch with this patch applied. They are available from
http://www.hagander.net/pgsql/. Install the 8.0.1 version first (from
MSI or manually, your choice), then replace postmaster.exe and
postgres.exe with the ones in the ZIP file. If you're running as a
service, make sure to stop the service first.

To make sure it uses the new code, change wal_sync_method to open_sync
in postgresql.conf and restart the service.
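What the open_sync method amounts to can be sketched as follows (an illustration, not the patch itself; on Unix the O_SYNC open flag plays the role that FILE_FLAG_WRITE_THROUGH plays on Windows — each write() returns only after the data has been pushed through to the device):

```python
import os
import tempfile

# Sketch: open a "WAL" file O_SYNC-style instead of write()+fsync().
# getattr() is used because os.O_SYNC is not defined on every platform.
path = os.path.join(tempfile.mkdtemp(), "wal")
fd = os.open(path, os.O_WRONLY | os.O_CREAT | getattr(os, "O_SYNC", 0), 0o600)
os.write(fd, b"commit record\n")   # synchronous: no separate fsync() needed
os.close(fd)
```

The trade-off discussed elsewhere in this thread is that fsync() lets the OS batch several WAL writes into one flush, while O_SYNC forces a device round-trip per write.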

The kind of testing we need help with is pull-the-plug reliability
testing. For this, make sure you have write caching turned off (it's on
the disk's properties page in the Device Manager), run a bunch of
transactions on the db, and then pull the plug on the machine in the
middle. It should come up with all acknowledged transactions still
applied, and all others not.


//Magnus

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] Get rid of system attributes in pg_attribute?

2005-02-20 Thread Tom Lane
Robert Treat [EMAIL PROTECTED] writes:
 One of us is not understanding the other :-)  I'm asking if I have a piece of
 code that does something like select attname from pg_attribute where attrelid
 = 'stock'::regclass::oid with the intent of displaying all those attnames, 
 then the system atts will no longer show up in that list, correct?

Correct.  What I'm asking is whether that's a problem for anyone.
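For code that needs the same answer whether or not the system-attribute rows are present, filtering explicitly is the usual defensive pattern; a sketch (the attisdropped test additionally hides dropped columns):

```sql
-- User columns only, regardless of whether rows for system
-- attributes (attnum < 0) exist in pg_attribute
SELECT attname
  FROM pg_attribute
 WHERE attrelid = 'stock'::regclass
   AND attnum > 0
   AND NOT attisdropped;
```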

regards, tom lane

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] Fwd: Apple Darwin disabled fsync?

2005-02-20 Thread Greg Stark

Peter Bierman [EMAIL PROTECTED] writes:

  In most cases you do not need such a heavy handed operation and fsync() is
  good enough.

Really? Can you think of a single application for which this definition of
fsync is useful?

Kernel buffers are transparent to the application, just as the disk buffer is.
It doesn't matter to an application whether the data is sitting in a kernel
buffer, or a buffer in the disk, it's equivalent. If fsync doesn't guarantee
the writes actually end up on non-volatile disk then as far as the application
is concerned it's just an expensive noop.

-- 
greg




[HACKERS] SMP buffer management test question

2005-02-20 Thread lsunley
There was a mention of a context-swap-storm test in the posts on the ARC
patent. Where might I obtain a copy of this test procedure?

Thanks

-- 
---
[EMAIL PROTECTED]
---




Re: [HACKERS] win32 performance - fsync question

2005-02-20 Thread Magnus Hagander
 Portability, or rather the complete lack of it.  Stuff that isn't in the
 Single Unix Spec is a hard sell.

O_DIRECT is reasonably common among modern Unixen (it is supported by
Linux, FreeBSD, and probably a couple of the commercial variants like
AIX or IRIX); it should also be reasonably easy to check for support at
configure time. It's on my TODO list to take a gander at adding support
for O_DIRECT for WAL, I just haven't gotten around to it yet.

Let me know when you do, and if you need some pointers on the win32
parts of it :-) I'll happily leave the main changes alone.

//Magnus



Re: [HACKERS] SMP buffer management test question

2005-02-20 Thread Tom Lane
[EMAIL PROTECTED] writes:
 There was a mention of a context-swap-storm test in the posts on the ARC
 patent. Where might I obtain a copy of this test procedure?

http://archives.postgresql.org/pgsql-performance/2004-04/msg00280.php

(The archiver seems to have lost the message's separation into parts,
but the setup script is the part down through the checkpoint command.)

Read the whole thread for context.

regards, tom lane



Re: [HACKERS] Query optimizer 8.0.1 (and 8.0)

2005-02-20 Thread Bruce Momjian
Jim C. Nasby wrote:
 On Mon, Feb 14, 2005 at 09:55:38AM -0800, Ron Mayer wrote:
  
  I still suspect that the correct way to do it would not be
  to use the single correlation, but 2 stats - one for estimating
  how sequential/random accesses would be; and one for estimating
  the number of pages that would be hit.  I think the existing
  correlation does well for the first estimate; but for many data
  sets, poorly for the second type.
  
 Should this be made a TODO? Is there some way we can estimate how much
 this would help without actually building it?

I guess I am confused how we would actually do that or if it is
possible.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



Re: [HACKERS] win32 performance - fsync question

2005-02-20 Thread Magnus Hagander
 One point that I no longer recall the reasoning behind is that xlog.c
 doesn't think O_SYNC is a preferable default over fsync.  

For larger (>8k) transactions O_SYNC|O_DIRECT is only good with the recent
pending patch to group WAL writes together. The fsync method gives the OS a
chance to do the grouping. (Of course it does not matter if you have small
tx < 8k WAL.)

This would be true for fdatasync() but not for fsync(), I think. 

On win32 (which started this discussion), fsync will sync the directory
entry as well, which will lead to *at least* two seeks on the disk.
Writing two blocks after each other to an O_SYNC opened file should give
exactly two seeks.

Of course, this only moves the breakpoint up to n blocks, where n > 2 (3
or 4 depending on how many seeks the filesystem will require).

//Magnus
 

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Data loss, vacuum, transaction wrap-around

2005-02-20 Thread Mark Kirkwood
Tom Lane wrote:
The question is whether we are willing to back-patch a fairly large
amount of not-very-well-tested code into 8.0.  See
http://archives.postgresql.org/pgsql-patches/2005-02/msg00123.php
http://archives.postgresql.org/pgsql-committers/2005-02/msg00127.php
http://archives.postgresql.org/pgsql-committers/2005-02/msg00131.php
I personally don't think it's worth the risk.  The code works well
enough to commit to development tip, but it's fundamentally alpha
quality code.
I think this makes the most sense. If we are going to do an extended 
testing period for 8.0-without-ARC, then bundling it in there might be 
worth considering.

regards
Mark


Re: [HACKERS] Query optimizer 8.0.1 (and 8.0)

2005-02-20 Thread Mark Kirkwood
Bruce Momjian wrote:
Jim C. Nasby wrote:
On Mon, Feb 14, 2005 at 09:55:38AM -0800, Ron Mayer wrote:

I still suspect that the correct way to do it would not be
to use the single correlation, but 2 stats - one for estimating
how sequential/random accesses would be; and one for estimating
the number of pages that would be hit.  I think the existing
correlation does well for the first estimate; but for many data
sets, poorly for the second type.

Should this be made a TODO? Is there some way we can estimate how much
this would help without actually building it?

I guess I am confused how we would actually do that or if it is
possible.
I spent a while on the web looking for some known way to calculate
local correlation or clumping in some manner analogous to how we do
correlation currently. As yet I have only seen really specialized
examples that were tangentially relevant. We need a pet statistician to ask.
regards
Mark


[HACKERS] Can we remove SnapshotSelf?

2005-02-20 Thread Tom Lane
As of CVS tip, there is no code in the system that uses SnapshotSelf.
I am wondering if we can get rid of it and thereby save one test in
the heavily used HeapTupleSatisfiesVisibility() macro.

There is one place in the foreign-key triggers that uses the underlying
HeapTupleSatisfiesItself() test directly.  So it seems possible that we
might someday need SnapshotSelf again as the representation of the
appropriate snapshot test for a foreign-key check.  But I suspect that
any future changes in the FK stuff will go in the direction of using
dirty reads instead.  In any case we could always put back SnapshotSelf
if we needed it.

Comments?

regards, tom lane



Re: [HACKERS] SMP buffer management test question

2005-02-20 Thread lsunley
Thanks

Well, I tried it out, and on OS/2 using 8.0 it has the same problems as on
Linux: with two clients running on a dual PIII 750, both CPUs go to 100%
and the system is madly switching between the two processes. With a
single client, about 35% CPU is used.

Should the new buffer manager patch eliminate this behaviour? That is what
I gathered from the discussions.

Lorne

In [EMAIL PROTECTED], on 02/20/05 
   at 02:01 PM, Tom Lane [EMAIL PROTECTED] said:

[EMAIL PROTECTED] writes:
 There was a mention of a context-swap-storm test in the posts on the ARC
 patent. Where might I obtain a copy of this test procedure?

http://archives.postgresql.org/pgsql-performance/2004-04/msg00280.php

(The archiver seems to have lost the message's separation into parts, but
the setup script is the part down through the checkpoint command.)

Read the whole thread for context.

   regards, tom lane


-- 
---
[EMAIL PROTECTED]
---




Re: [HACKERS] SMP buffer management test question

2005-02-20 Thread Tom Lane
[EMAIL PROTECTED] writes:
 Well I tried it out and on OS/2 using 8.0 it has the same problems as on
 Linux, with two clients running on a dual PIII 750 both CPUs go to 100%
 and the system is madly switching between the two processes. With a 
 single client, about 35% CPU is used.

 Should the new buffer manager patch eliminate this behaviour?

It does for me ...

regards, tom lane



Re: [HACKERS] Fwd: Apple Darwin disabled fsync?

2005-02-20 Thread Peter Bierman
At 12:38 AM -0500 2/20/05, Tom Lane wrote:
Dominic Giampaolo [EMAIL PROTECTED] writes:
 I believe that what the above comment refers to is the fact that
 fsync() is not sufficient to guarantee that your data is on stable
 storage and on MacOS X we provide a fcntl(), called F_FULLFSYNC,
 to ask the drive to flush all buffered data to stable storage.
I've been looking for documentation on this without a lot of luck
(man fcntl on OS X 10.3.8 has certainly never heard of it).
It's not completely clear whether this subsumes fsync() or whether
you're supposed to fsync() and then use the fcntl.
My understanding is that you're supposed to fsync() and then use the 
fcntl, but I'm not the filesystems expert. (Dominic, who wrote the 
original message that I forwarded, is.)

I've filed a bug report asking for better documentation about this to 
be placed in the fsync man page. radar://4012378


Also, isn't it fundamentally at the wrong level?  One would suppose that
the drive flush operation is going to affect everything the drive
currently has queued, not just the one file.  That makes it difficult
if not impossible to use efficiently.
I think the intent is to make the fcntl more accurate in time, as the 
ability to do so appears in hardware.

One of the advantages Apple has is the ability to set very specific 
requirements for our hardware. So if a block specific flush command 
becomes part of the ATA spec, Apple can require vendors to support 
it, and support it correctly, before using those drives.

On the other hand, as Dominic described, once the hardware is 
external (like a firewire enclosure), we lose that leverage.

At 12:42 PM -0500 2/20/05, Greg Stark wrote:
Dominic Giampaolo [EMAIL PROTECTED] writes:
  In most cases you do not need such a heavy handed operation and fsync() is
  good enough.
Really? Can you think of a single application for which this definition of
fsync is useful?
Kernel buffers are transparent to the application, just as the disk buffer is.
It doesn't matter to an application whether the data is sitting in a kernel
buffer, or a buffer in the disk, it's equivalent. If fsync doesn't guarantee
the writes actually end up on non-volatile disk then as far as the application
is concerned it's just an expensive noop.
I think the intent of fsync() is closer to what you describe, but the 
convention is that fsync() hands responsibility to the disk hardware. 
That's how every other Unix seems to handle fsync() too. This gives 
you good performance, and if you combine a smart fsync()ing 
application with reliable storage hardware (like an XServe RAID that 
battery-backs its own write caches), you get the best combination.

If you know you have unreliable hardware, and critical reliability 
issues, then you can use the fcntl, which seems to be more control 
than other OSes give.
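The pattern Dominic describes can be sketched in a few lines (an illustration, not PostgreSQL's code; Python's fcntl module exposes F_FULLFSYNC only on Darwin, so the sketch falls back to fsync() elsewhere):

```python
import fcntl
import os
import tempfile

def flush_to_disk(fd: int) -> None:
    """Push written data toward stable storage. On Darwin, plain fsync()
    only reaches the drive's write cache, so use F_FULLFSYNC where it
    exists; everywhere else, fall back to ordinary fsync()."""
    if hasattr(fcntl, "F_FULLFSYNC"):
        fcntl.fcntl(fd, fcntl.F_FULLFSYNC)
    else:
        os.fsync(fd)

# Demo: write a file and force it out.
path = os.path.join(tempfile.mkdtemp(), "datafile")
fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
os.write(fd, b"critical bytes")
flush_to_disk(fd)
os.close(fd)
```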

-pmb


[HACKERS] Time Zone Names Problem

2005-02-20 Thread Curt Sampson
I'm running PostgreSQL 8.0.1 from the NetBSD pkgsrc package. I'm getting some
odd behavior with time zone names:
SET timezone TO 'JST';
psql:z.sql:1: ERROR:  unrecognized time zone name: JST
SET timezone TO 'EST';
psql:z.sql:2: ERROR:  unrecognized time zone name: EST
SET timezone TO 'Asia/Tokyo';
psql:z.sql:3: ERROR:  unrecognized time zone name: Asia/Tokyo
SET timezone TO 'US/Eastern';
psql:z.sql:4: ERROR:  unrecognized time zone name: US/Eastern
SET timezone TO 'Asia/Tokyo-9';
SET
SELECT '2004-08-22 18:42:12' AT TIME ZONE 'Asia/Tokyo';
psql:z.sql:7: ERROR:  time zone asia/tokyo not recognized
SELECT '2004-08-22 18:42:12' AT TIME ZONE 'Asia/Tokyo-9';
psql:z.sql:8: ERROR:  time zone asia/tokyo-9 not recognized
SELECT '2004-08-22 18:42:12' AT TIME ZONE 'JST';
  timezone
-
 2004-08-22 18:42:12
(1 row)
Anybody have any idea what's going on here? The only patch pkgsrc makes
is related to shared library version numbers for a couple of operating
systems, so I don't think that this is package-specific. It may be
NetBSD-specific, however.
cjs
--
Curt Sampson  [EMAIL PROTECTED]   +81 90 7737 2974   http://www.NetBSD.org
 Make up enjoying your city life...produced by BIC CAMERA


Re: [HACKERS] postgres crashing on a seemingly good query

2005-02-20 Thread Neil Conway
On Sat, 2005-02-19 at 14:35 -0500, Tom Lane wrote:
 Good catch.  I've applied the attached patch (this is against 8.0/CVS
 tip but applies with some fuzz to 7.4).

Is there a way to repro this via SQL? (It would be nice to have a
regression test...)

-Neil





Re: [HACKERS] postgres crashing on a seemingly good query

2005-02-20 Thread Tom Lane
Neil Conway [EMAIL PROTECTED] writes:
 On Sat, 2005-02-19 at 14:35 -0500, Tom Lane wrote:
 Good catch.  I've applied the attached patch (this is against 8.0/CVS
 tip but applies with some fuzz to 7.4).

 Is there a way to repro this via SQL? (It would be nice to have a
 regression test...)

No, because there's no way to specify a parameter of unknown type at
the SQL level.

The V3 protocol has a whole lot of behavior that cannot be tested by
psql scripts.  Maybe we ought to think about adding some other kind
of test mechanism for it.

regards, tom lane



Re: [HACKERS] Time Zone Names Problem

2005-02-20 Thread Tom Lane
Curt Sampson [EMAIL PROTECTED] writes:
 I'm running PostgreSQL 8.0.1 from the NetBSD pkgsrc package. I'm getting some
 odd behavior with time zone names:

  SET timezone TO 'EST';
  psql:z.sql:2: ERROR:  unrecognized time zone name: EST
  SET timezone TO 'Asia/Tokyo';
  psql:z.sql:3: ERROR:  unrecognized time zone name: Asia/Tokyo
  SET timezone TO 'US/Eastern';
  psql:z.sql:4: ERROR:  unrecognized time zone name: US/Eastern

Hmm, all of those should work (and do work here).  Are the corresponding
files present in the appropriate installation directory?  Look under
.../share/postgresql/timezone/
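That check can be scripted (a sketch; the tzdir path varies by install prefix, and missing_zone_files is a hypothetical helper, not part of PostgreSQL):

```python
import os
import tempfile

def missing_zone_files(tzdir, zones=("EST", "Asia/Tokyo", "US/Eastern")):
    """Return the zone names with no corresponding file under tzdir."""
    return [z for z in zones if not os.path.exists(os.path.join(tzdir, z))]

# Demo against a scratch directory containing only Asia/Tokyo:
demo = tempfile.mkdtemp()
os.makedirs(os.path.join(demo, "Asia"))
open(os.path.join(demo, "Asia", "Tokyo"), "w").close()
missing = missing_zone_files(demo)
print(missing)   # EST and US/Eastern are absent
```

Point tzdir at something like /usr/local/pgsql/share/postgresql/timezone to check a real installation.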

regards, tom lane



Re: [HACKERS] Fwd: Apple Darwin disabled fsync?

2005-02-20 Thread Greg Stark

Peter Bierman [EMAIL PROTECTED] writes:

 I think the intent of fsync() is closer to what you describe, but the
 convention is that fsync() hands responsibility to the disk hardware.

The convention was also that the hardware didn't confirm the command until
it had actually been executed...

None of this matters to the application. A specification for fsync(2) that
says it forces the data to be shuffled around under the hood but fundamentally
doesn't change the semantics (that the data isn't guaranteed to be in
non-volatile storage) means that fsync didn't really do anything.

-- 
greg




Re: [HACKERS] Time Zone Names Problem

2005-02-20 Thread Curt Sampson
On Sun, 20 Feb 2005, Tom Lane wrote:
Hmm, all of those should work (and do work here).  Are the corresponding
files present in the appropriate installation directory?  Look under
.../share/postgresql/timezone/
They are not; it's evidently a problem with the way the NetBSD package
installs things. I'll look into that. Thanks for the tip.
cjs
--
Curt Sampson  [EMAIL PROTECTED]   +81 90 7737 2974   http://www.NetBSD.org
 Make up enjoying your city life...produced by BIC CAMERA


Re: [HACKERS] Design notes for BufMgrLock rewrite

2005-02-20 Thread Bruce Momjian
Tom Lane wrote:
 Jim C. Nasby [EMAIL PROTECTED] writes:
  The advantage of using a counter instead of a simple active
  bit is that buffers that are (or have been) used heavily will be able to
  go through several sweeps of the clock before being freed. Infrequently
  used buffers (such as those from a vacuum or seq.  scan), would get
  marked as inactive the first time they were hit by the clock hand.
 
 Hmm.  It would certainly be nearly as easy to adjust a counter as to
 manipulate the RECENTLY_USED flag bit that's in the patch now.  (You
 could imagine the RECENTLY_USED flag bit as a counter with max value 1.)
 
 What I'm envisioning is that pinning (actually unpinning) a buffer
 increments the counter (up to some limit), and the clock sweep
 decrements it (down to zero), and only buffers with count zero are taken
 by the sweep for recycling.  That could work well, but I think the limit
 needs to be relatively small, else we could have the clock sweep having
 to go around many times before it finally frees a buffer.  Any thoughts
 about that?  Anyone seen any papers about this sort of algorithm?

One idea would be for the clock to check X% of the buffer cache and just
recycle the page it saw with the lowest usage count.
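The counter scheme Tom describes above can be sketched like this (a toy model, not the committed patch; MAX_USAGE is an assumed cap, and choosing it is exactly the open question):

```python
MAX_USAGE = 5  # assumed cap on the usage counter

class Buffer:
    def __init__(self) -> None:
        self.usage = 0
        self.pinned = False

def on_unpin(buf: Buffer) -> None:
    # Unpinning a buffer bumps its usage count, saturating at the cap.
    buf.usage = min(buf.usage + 1, MAX_USAGE)

def clock_sweep(buffers, hand):
    """Advance the clock hand until a buffer with usage == 0 is found,
    decrementing counters along the way. (Assumes not every buffer is
    pinned; otherwise this loops forever, as would a real sweep.)"""
    while True:
        buf = buffers[hand]
        hand = (hand + 1) % len(buffers)
        if buf.pinned:
            continue
        if buf.usage == 0:
            return buf, hand        # victim found
        buf.usage -= 1              # hot buffers survive several sweeps

# Demo: a hot buffer outlives cold neighbors for a few sweeps.
bufs = [Buffer() for _ in range(4)]
for _ in range(3):
    on_unpin(bufs[0])               # buffer 0 is hot: usage == 3
victim, hand = clock_sweep(bufs, 0)
print(bufs.index(victim))           # buffer 1 is the first with usage 0
```

With a large cap, a heavily used buffer can force the hand around many times before any victim appears, which is the cost Tom is worried about.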

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



Re: [HACKERS] Query optimizer 8.0.1 (and 8.0)

2005-02-20 Thread pgsql
 Jim C. Nasby wrote:
 On Mon, Feb 14, 2005 at 09:55:38AM -0800, Ron Mayer wrote:

  I still suspect that the correct way to do it would not be
  to use the single correlation, but 2 stats - one for estimating
  how sequential/random accesses would be; and one for estimating
  the number of pages that would be hit.  I think the existing
  correlation does well for the first estimate; but for many data
  sets, poorly for the second type.

 Should this be made a TODO? Is there some way we can estimate how much
 this would help without actually building it?

 I guess I am confused how we would actually do that or if it is
 possible.

Bruce, we didn't get much time to talk at LinuxWorld (it was nice to meet
you).

I'm not sure how you would go about it, but in my post One big trend ..
(I don't even remember anymore), I talk about tables that are physically
sorted on multiple keys, e.g. the addresses:

streetname, typename, state, zip.

maple, st, ma, 02186
maple, st, ma, 02186
maple, rd, ma, 02186
maple, ave, ma, 02186
maple, st, me, ??

Assuming the table is physically sorted by state, town (zip), streetname,
streettype.

If one were to type:

select * from locations where streetname = 'maple';


The analysis of that query improperly underestimates the correlation of the
street name for the table as a whole.
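As a toy illustration of that point (invented street data, stdlib only; this
is not the planner's actual computation): with the table sorted by (state,
streetname), the single whole-table correlation statistic for streetname comes
out low, even though the matching rows are packed into a handful of contiguous
runs on disk:

```python
from itertools import groupby

streets = ["ash", "maple", "oak"]

def corr(xs, ys):
    """Plain Pearson correlation, the kind of single statistic at issue."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Physically sorted by (state, streetname): the street cycle repeats per state.
rows = [(st, s) for st in ["ma", "me", "nh"] for s in streets for _ in range(100)]
codes = [streets.index(s) for _, s in rows]
pos = list(range(len(rows)))

whole_table = corr(pos, codes)            # low: looks nearly "random"
per_state = corr(pos[:300], codes[:300])  # high: each state's prefix is well sorted

# Yet the 300 'maple' rows occupy only 3 contiguous runs in the heap:
runs = sum(1 for is_maple, _ in groupby(rows, key=lambda r: r[1] == "maple")
           if is_maple)
```

So a query on streetname alone touches far fewer pages than the whole-table
correlation would suggest, which is exactly the underestimate described above.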



[HACKERS] Transaction Aborted

2005-02-20 Thread Edwin S. Ramirez
Hello,
   This may sound familiar since there was a discussion before...
I created an importing tool which allows users to load data into their 
tables using a graphical interface.
The import operations occur within a transaction so that the user gets a 
chance to change their mind.
After the import is finished the user is presented with a choice to 
abort, commit or interact with the
database.  Interacting with the database allows the user to submit 
queries to verify the data load.

Everything works great except that if the user makes any typos the 
entire transaction is aborted,
requiring the import process to be done again.

I was under the impression that the nested transaction feature would 
make this go away. 
I played a little with it and it seems that the entire transaction is 
aborted even if a checkpoint is
created.

Could someone please help me with this?
-Thanks,
Edwin S. Ramirez


Re: [HACKERS] Strange RETURN NEXT behaviour in Postgres 8.0

2005-02-20 Thread Sergey E. Koposov
 Sergey E. Koposov [EMAIL PROTECTED] writes:
  LOOP
  FETCH cur into rec;
  RETURN NEXT rec;
  EXIT WHEN NOT FOUND;
  END LOOP;
  RETURN;
 
 Don't you think you should have the EXIT *above* the RETURN NEXT?
 I would expect this to emit a bogus row of nulls after the last row
 returned by the cursor.  (At least that's what I get with current
 sources.  Pre-8.0 it might return the last row twice.)

Yes, surely the EXIT should be written before the RETURN NEXT; it was my error
(thanks, but I had found that error by myself after posting my message).
But that small bug does not affect the original problem.
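For the record, the two loop orderings can be modeled in Python, with None
standing in for the bogus all-nulls row past the end of the cursor (a sketch
of the control flow only, not plpgsql semantics):

```python
def fetch_loop_buggy(rows):
    """Mirrors the original plpgsql: RETURN NEXT before EXIT WHEN NOT FOUND."""
    out, it = [], iter(rows)
    while True:
        rec = next(it, None)   # FETCH cur INTO rec; None = nothing found
        out.append(rec)        # RETURN NEXT rec  -- runs once too often
        if rec is None:        # EXIT WHEN NOT FOUND
            break
    return out

def fetch_loop_fixed(rows):
    """EXIT moved above RETURN NEXT, as Tom suggests."""
    out, it = [], iter(rows)
    while True:
        rec = next(it, None)   # FETCH cur INTO rec
        if rec is None:        # EXIT WHEN NOT FOUND  -- check first
            break
        out.append(rec)        # RETURN NEXT rec
    return out
```

The buggy ordering emits one trailing null row after the last real one; the
fixed ordering returns exactly the cursor's rows.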

 Running it on a 500-million-row table would quite possibly run out of
 memory or disk space, too, because RETURN NEXT accumulates all the
 results before the function is actually exited.

Yes, that's right, but I did not wait until the whole table was loaded in
the function. The error which is the subject of the current thread occurred
immediately after select * from yyy(), so it surely was not caused by
running out of memory.

Concerning the exact form of my functions (using cursors, but still
collecting all the data in memory): as I understand it, this is the only
way (or just the simplest way?)
to execute fully dynamic queries returned by a C function in PL/pgSQL.
For the real functions which I use, instead of 

query = ''SELECT * FROM usno''; 

I have 

query = my_C_function(some_args);

(see full code in my first message)



Sergey E. Koposov
Sternberg Astronomical Institute, Moscow University (Russia)
Max-Planck Institute for Astronomy (Germany) 
Internet: [EMAIL PROTECTED], http://lnfm1.sai.msu.su/~math/





Re: [HACKERS] Transaction Aborted

2005-02-20 Thread Jeff Davis
I think that the feature you're looking for is a SAVEPOINT.

After the import, create a savepoint, and then if the user makes a typo
revert to that savepoint.
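A minimal sketch of that pattern (sqlite3 is used here only so the example
runs standalone; PostgreSQL uses the same SAVEPOINT name / ROLLBACK TO
SAVEPOINT name syntax, and there the rollback is mandatory because an error
otherwise leaves the whole transaction aborted):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # manage transactions by hand
cur = conn.cursor()
cur.execute("CREATE TABLE import_target (id INTEGER, val TEXT)")

cur.execute("BEGIN")
cur.execute("INSERT INTO import_target VALUES (1, 'bulk-loaded row')")  # the import
cur.execute("SAVEPOINT after_import")   # mark the point to fall back to

try:
    cur.execute("SELEC * FROM import_target")              # the user's typo
except sqlite3.OperationalError:
    cur.execute("ROLLBACK TO SAVEPOINT after_import")      # undo only the failed part

cur.execute("SELECT count(*) FROM import_target")
survived = cur.fetchone()[0]            # the imported row is still there
cur.execute("COMMIT")
```

So the import survives any number of interactive typos: each failed statement
is rolled back to the savepoint instead of aborting the whole load.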

Regards,
Jeff Davis

On Sun, 2005-02-20 at 22:05 -0500, Edwin S. Ramirez wrote:
 Hello,
 
 This may sound familiar since there was a discussion before...
 
 I created an importing tool which allows users to load data into their 
 tables using a graphical interface.
 The import operations occur within a transaction so that the user gets a 
 chance to change their mind.
 After the import is finished the user is presented with a choice to 
 abort, commit or interact with the
 database.  Interacting with the database allows the user to submit 
 queries to verify the data load.
 
 Everything works great except that if the user makes any typos the 
 entire transaction is aborted,
 requiring the import process to be done again.
 
 I was under the impression that the nested transaction feature would 
 make this go away. 
 I played a little with it and it seems that the entire transaction is 
 aborted even if a checkpoint is
 created.
 
 Could someone please help me with this?
 
 -Thanks,
 Edwin S. Ramirez
 
 

