RE : [HACKERS] Stability problems

2002-11-12 Thread Nicolas VERGER
Scott you're right, it was a hardware problem.
Thanks for your help.

Nicolas VERGER

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:pgsql-hackers-
 [EMAIL PROTECTED]] On behalf of scott.marlowe
 Sent: Wednesday, November 6, 2002 21:38
 To: Nicolas VERGER
 Cc: 'PostgreSQL Hackers Mailing List'
 Subject: Re: [HACKERS] Stability problems
 
 I would recommend checking your memory (look for memtest86 online
 somewhere.  Good tool.)  Anytime a machine seems to act flaky, there's a
 better than even chance it has a bad bit of memory in it.
 
 On Wed, 6 Nov 2002, Nicolas VERGER wrote:
 
  Hi,
  I have strange stability problems.
  I can't access a table (the table is different each time I get the
  problem; it could be a system table (pg_am) or a user-defined one):
  I can't SELECT * from the whole table, but I can SELECT * LIMIT x
  OFFSET y, so it appears that only one tuple is in a bad state. I can't
  vacuum or pg_dump this table either.
  The error disappears after waiting some time.
 
  I get the following error in the log when selecting the 'bad' line:
 

  
  2002-11-05 11:26:42 [3062]   DEBUG:  server process (pid 4551) was terminated by signal 11
  2002-11-05 11:26:42 [3062]   DEBUG:  terminating any other active server processes
  2002-11-05 11:26:42 [4555]   FATAL 1:  The database system is in recovery mode
  2002-11-05 11:26:42 [3062]   DEBUG:  all server processes terminated; reinitializing shared memory and semaphores
  2002-11-05 11:26:42 [4557]   DEBUG:  database system was interrupted at 2002-11-05 11:23:00 CET
 
  I get the following error in the log when vacuuming the 'bad' table:
 

  
  2002-11-05 14:46:44 [5768]   FATAL 2:  failed to add item with len = 191 to page 150 (free space 4294967096, nusd 0, noff 0)
  2002-11-05 14:46:44 [5569]   DEBUG:  server process (pid 5768) exited with exit code 2
  2002-11-05 14:46:44 [5569]   DEBUG:  terminating any other active server processes
  2002-11-05 14:46:44 [5771]   NOTICE:  Message from PostgreSQL backend:
  The Postmaster has informed me that some other backend
  died abnormally and possibly corrupted shared memory.
  I have rolled back the current transaction and am
  going to terminate your database system connection and exit.
  Please reconnect to the database system and repeat your query.
  2002-11-05 14:46:44 [5772]   NOTICE:  Message from PostgreSQL backend:
  The Postmaster has informed me that some other backend
  died abnormally and possibly corrupted shared memory.
  I have rolled back the current transaction and am
  going to terminate your database system connection and exit.
  Please reconnect to the database system and repeat your query.
  2002-11-05 14:46:44 [5569]   DEBUG:  all server processes terminated; reinitializing shared memory and semaphores
  2002-11-05 14:46:44 [5774]   DEBUG:  database system was interrupted at 2002-11-05 14:46:40 CET
 

  
 
  template1=# select version();
  PostgreSQL 7.2.1 on i686-pc-linux-gnu, compiled by GCC 2.96
 
  Is it a lock problem? Is there a way to log it?
 
 
  Thanks to all for doing such a good job.
 
  Nicolas VERGER
 
 


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: Geometry regression tests (was Re: [HACKERS] Float output formatting

2002-11-12 Thread Pedro M. Frazao F. Ferreira
Tom Lane wrote:

Pedro M. Ferreira [EMAIL PROTECTED] writes:


[ patch for extra_float_digits ]



I've applied this patch along with followup changes to pg_dump (it sets
extra_float_digits to 2 to allow accurate dump/reload) and the geometry
regression test (it sets extra_float_digits to -3).

I find that two geometry 'expected' files are now sufficient to cover
all the platforms I have available to test.  (We'd only need one, if
everyone displayed minus zero as '-0', but some platforms print '0'.)
I tested on HPUX 10.20 (HPPA), Red Hat Linux 8.0 (Intel), Mac OS X 10.2.1
and LinuxPPC (PPC).

I'd be interested to hear results of testing CVS tip (now 7.4devel)
on other platforms.  Does geometry pass cleanly for you?


Yes!   :)
All tests passed on a dual AMD Athlon MP with Debian GNU/Linux 3.0 (Woody), 
kernel 2.4.18-5.

Tested with a snapshot downloaded yesterday.

Best regards,
Pedro M. Ferreira



			regards, tom lane




Re: [HACKERS] RC1?

2002-11-12 Thread Vince Vielhaber
On Tue, 12 Nov 2002, Bruce Momjian wrote:

 Are we ready for RC1 yet?

This is Tuesday, you can only ask on Fridays :)

Vince.
-- 
   http://www.meanstreamradio.com   http://www.unknown-artists.com
 Internet radio: It's not file sharing, it's just radio.





Re: [HACKERS] RC1?

2002-11-12 Thread Tatsuo Ishii
 Are we ready for RC1 yet?

I'm waiting for Jenny Wang to confirm the fix regarding GB18030
support. In the meantime, I'll commit the fix anyway, since the current
GB18030 support is so badly broken (I have checked that all regression
tests pass).
--
Tatsuo Ishii




Re: [HACKERS] need a current cvsup file

2002-11-12 Thread Thomas Lockhart
Does any know the location of a good cvsup file for grabbing the
various releases? 

There is none; when the configuration was changed the docs were not.

Use the appendix in the current docs (not whatever TODO is) and replace 
the "pgsql" project name with "repository". Your CVS area will then 
contain a few top-level CVS projects, including pgsql-server.

   - Thomas




Re: [HACKERS] 500 tpsQL + WAL log implementation

2002-11-12 Thread Curtis Faith
tom lane wrote:
 What can you do *without* using a raw partition?

 I dislike that idea for two reasons: portability and security.  The
 portability disadvantages are obvious.  And in ordinary system setups
 Postgres would have to run as root in order to write on a raw partition.

 It occurs to me that the same technique could be used without any raw
 device access.  Preallocate a large WAL file and apply the method within
 it.  You'll have more noise in the measurements due to greater
 variability in the physical positioning of the blocks --- but it's
 rather illusory to imagine that you know the disk geometry with any
 accuracy anyway.  Modern drives play a lot of games under the hood.

A write to a raw disk file is immediate and completes with minimal system
overhead. I'll try to test a file-based approach using a write followed by an
immediate fdatasync and see if that approaches the speed of the raw
partition access. I suspect we'll get decent performance, only perhaps 10% to
15% slower.

As you mention, there is nothing exact about the technique, so we should be
able to get similar improvements with a file-based system. I've been able to
get over 1,500 raw writes confirmed to disk using raw partition writes, each
slightly offset ahead of the other, yet only somewhere between 500 and 650
on a sustained basis using the technique I described, because of the noise in
the geometry measurements and variable timing for the writes themselves.
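The write-followed-by-immediate-fdatasync test described above can be sketched outside PostgreSQL. The following Python timing harness is illustrative only (the block size and write count are arbitrary assumptions, not anything from the patch under discussion):

```python
import os
import tempfile
import time

def timed_fsync_writes(n_writes=200, block_size=512):
    """Measure sustained synchronous write rate: each write is forced to
    disk before the next one starts, mimicking a WAL commit record."""
    fd, path = tempfile.mkstemp()
    try:
        block = b"\x00" * block_size
        # Prefer fdatasync where available: it skips flushing inode
        # timestamps, which is all a WAL append needs.
        sync = getattr(os, "fdatasync", os.fsync)
        start = time.time()
        for _ in range(n_writes):
            os.write(fd, block)
            sync(fd)
        elapsed = time.time() - start
        return n_writes / elapsed  # confirmed writes per second
    finally:
        os.close(fd)
        os.unlink(path)

print(round(timed_fsync_writes(50)))
```

On a single disk without write caching, the rate this prints should approach one write per platter rotation, which is the ceiling the thread is discussing.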


 This scares me quite a bit too.  The reason that the existing
 implementation maxes out at one WAL write per rotation is that for small
 transactions it's having to repeatedly write the same disk sector.  You
 could only get around that by writing multiple versions of the same WAL
 page at different disk locations.  Reliably reconstructing what data to
 use is not something that I'm prepared to accept on a handwave...

I'm pretty sure this could be done very reliably but at the cost of slightly
slower reading after a failure for redo.

I figured that whenever a transaction wrote to the log it would set the log
offset marker for new transactions to force the next transaction to use a new
block. This would result in space waste which could be partially offset by
using writes smaller than the 8K block size (along disk block size
boundaries, 512 bytes for my disk). This has the advantage of making it
fairly easy to make sure that the log can be reconstructed in order since
there would be no partial block writes to worry about.

I believe that 4 to 8 full rotations worth of usable blocks could be
maintained and blocks would be written to the lowest offset tracks first
unless there were no free blocks of sufficient size. This would probably
result in 90% to 95% utilization of the blocks (disregarding waste inside the
blocks themselves). When the lowest offset track filled up sufficiently,
another empty track would be added to the usable blocks list and the lowest
offset track taken off the unused list.

This would ensure that a read of 4 to 8 tracks, which needs to be a fixed
number for any given installation, could reconstruct the order of the WAL log
since at no time would blocks be out of order beyond that range.
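The ordering guarantee above can be shown with a toy model: if each log block carries a monotonically increasing sequence number and blocks can land out of physical order only within a bounded window, redo recovers the logical order by sorting what it reads back. This Python sketch uses an invented record format purely for illustration:

```python
def reconstruct_wal_order(blocks):
    """blocks: list of (sequence_number, payload) tuples in on-disk order.
    Because out-of-order placement is bounded (the 4-8 tracks above),
    sorting by sequence number restores the exact logical write order.
    A sliding-window min-heap would do the same incrementally for a
    large log, reading only the fixed number of tracks at a time."""
    return sorted(blocks, key=lambda b: b[0])

# Blocks written slightly out of order within a small window:
on_disk = [(2, "b"), (1, "a"), (3, "c"), (5, "e"), (4, "d")]
print(reconstruct_wal_order(on_disk))
```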

Disk space is much cheaper than CPU and memory, so I think that a logging
system that used as much as three or four times the space but is three or
four times faster would be a worthwhile improvement for those systems where
update or insert volume is very heavy. Obviously, this needs to be an
option, not the default configuration.

- Curtis





Re: [HACKERS] RC1?

2002-11-12 Thread Marc G. Fournier

'K, looks like we need two things confirmed ... the change that Tom made
concerning mktime(), which we need someone on AIX to test ... and the
following ...

I've been following the commit messages closely, and haven't seen anything
go in that makes me edgy, so if we can get validation on those two, I think
we're good to go ...



On Tue, 12 Nov 2002, Tatsuo Ishii wrote:

  Are we ready for RC1 yet?

 I'm waiting for Jenny Wang to confirm the fix regarding GB18030
 support. In the meantime, I'll commit the fix anyway, since the current
 GB18030 support is so badly broken (I have checked that all regression
 tests pass).
 --
 Tatsuo Ishii







Re: [HACKERS] RC1?

2002-11-12 Thread Zeugswetter Andreas SB SD

  Are we ready for RC1 yet?
 
 I think so.  The NO_MKTIME_BEFORE_1970 issue was bothering me, but I
 feel that's resolved now.  (It'd be nice to hear a crosscheck from
 some AIX users though...)

abstime, tinterval and horology fail on AIX. 
The rest is now working (AIX 4.3.2 xlc 5.0.0.2).

I am just now rebuilding after removing #define NO_MKTIME_BEFORE_1970.
My feeling is that there is no difference. Can that be?
Attached are the regression diffs for vanilla 7.3b5.

Andreas



regression.diffs.gz
Description: regression.diffs.gz




Re: RE : [HACKERS] Stability problems

2002-11-12 Thread scott.marlowe
On Tue, 12 Nov 2002, Nicolas VERGER wrote:

 Scott you're right, it was a hardware problem.
 Thanks for your help.
 

Glad to be of help.  What was the problem?  Bad memory or bad hard drive?  
Just curious.





Re: [HACKERS] 500 tpsQL + WAL log implementation

2002-11-12 Thread scott.marlowe

Curtis, have you considered comparing raw writes versus file system writes 
on a raw multi-disk partition?

I always set up my machines to store data on a mirror set (RAID1) or RAID5 
set, and it seems your method should be tested there too.

P.S.: Tom, the postgresql user would NOT need to run as root to do this; 
you can just do a 'chown postgres.postgres /dev/sda1' or 'chown 
postgres.postgres /dev/md0' to give postgresql permission to write to / read 
from a raw partition.

That said, I am not sure how much we are likely to gain from this method 
either.  But if no one tries it out we'll never know.





Re: [HACKERS] Problem with 7.3 on Irix with dates before 1970

2002-11-12 Thread Zeugswetter Andreas SB SD

   I have removed the NO_MKTIME_BEFORE_1970 symbol from irix5.h,
 rebuilt 7.3b2, and reran the regression. The three time tests
 (tinterval, horology, abstime) now match the Solaris expected files.
   I checked the timezone files, and the system does not appear to
 have savings time defined for 1947, but it does report it as such
 in the PostgreSQL regression tests.

I think that is because both IRIX and AIX seem to use TZ or the current 
year's DST rules for dates before 1970. Can that be?

Andreas




[HACKERS] ecpg problem ...

2002-11-12 Thread Marc G. Fournier


if (ic_flag == 1) {
/* only select those non-IC/Spyder nodes that have the full update set */
EXEC SQL DECLARE full_dyn_node CURSOR FOR
 SELECT node_name FROM NODE
 WHERE dynamic_community = 'f' AND ic_flag='n' AND machine_type!=22
 AND node_id != 0 AND NODE_NAME != :nodename;
}
else{
EXEC SQL DECLARE full_dyn_node CURSOR FOR
 SELECT node_name FROM NODE
 WHERE dynamic_community = 'f'
 AND node_id != 0 AND NODE_NAME != :nodename; (line#493)
}

the above code generates the following error:

The compiler complains:
../subapi.pgc:493: ERROR: cursor full_dyn_node already defined

since it's wrapped in an if/else clause, shouldn't it work?







[HACKERS] Prepare enabled pgbench

2002-11-12 Thread Curtis Faith
Tatsuo, are you or anyone else working on adding PREPARE, EXECUTE support to
pgbench?

If not, I can do it myself and if you are interested, I'll send you the
patch.

- Curtis





Re: [HACKERS] Idea for better handling of cntxDirty

2002-11-12 Thread Tom Lane
Mikheev, Vadim [EMAIL PROTECTED] writes:
 Wouldn't it work for cntxDirty to be set not by LockBuffer, but by
 XLogInsert for each buffer that is included in its argument list?

 I thought to add separate call to mark context dirty but above
 should work if all callers to XLogInsert always pass all
 modified buffers - please check.

AFAICT it is safe.  There are some places (in sequences and btree)
where not all the modified buffers are explicitly listed in XLogInsert's
arguments, but redo of those types of WAL records will always reinit the
affected pages anyway.  So we don't need to worry about forcing
checkpoint to write the pages early.

In general I don't think this adds any fragility to the system.  A WAL
record that is not set up to restore all buffers modified by the logged
operation would be broken by definition, no?

regards, tom lane




Re: [HACKERS] RC1?

2002-11-12 Thread Tom Lane
Zeugswetter Andreas SB SD [EMAIL PROTECTED] writes:
 abstime, tinterval and horology fail on AIX.

I would expect them now (without NO_MKTIME_BEFORE_1970) to match the
solaris-1947 comparison files for these tests.  Could you confirm that?

regards, tom lane




Re: [HACKERS] RC1?

2002-11-12 Thread Zeugswetter Andreas SB SD
  I think so.  The NO_MKTIME_BEFORE_1970 issue was bothering me, but I
  feel that's resolved now.  (It'd be nice to hear a crosscheck from
  some AIX users though...)
 
 abstime, tinterval and horology fail on AIX. 
 The rest is now working (AIX 4.3.2 xlc 5.0.0.2).
 
 I am just now rebuilding with removing the #define NO_MKTIME_BEFORE_1970.

Ok, when #define NO_MKTIME_BEFORE_1970 is removed from aix.h, then the results 
match the Solaris files.

Attached is a patch to make AIX match Solaris. Please apply and add AIX to
the supported platforms.

Thank you
Andreas

PS: what should we do with the rest of the resultmap entries for no-DST-before-1970?



aix.mktimerm.patch.gz
Description: aix.mktimerm.patch.gz




Re: [HACKERS] ecpg problem ...

2002-11-12 Thread Jakub Ouhrabka
Hi,

I think that ecpg is only a text preprocessor. It doesn't understand the C
semantics - it goes from the top to the end of the file row by row and
sees your declaration twice.

kuba

On Tue, 12 Nov 2002, Marc G. Fournier wrote:



 if (ic_flag == 1) {
 /* only select those non-IC/Spyder nodes that have the full update set */
 EXEC SQL DECLARE full_dyn_node CURSOR FOR
  SELECT node_name FROM NODE
  WHERE dynamic_community = 'f' AND ic_flag='n' AND machine_type!=22
  AND node_id != 0 AND NODE_NAME != :nodename;
 }
 else{
 EXEC SQL DECLARE full_dyn_node CURSOR FOR
  SELECT node_name FROM NODE
  WHERE dynamic_community = 'f'
  AND node_id != 0 AND NODE_NAME != :nodename; (line#493)
 }

 the above code generates the following error:

 The compiler complains:
 ../subapi.pgc:493: ERROR: cursor full_dyn_node already defined

 since it's wrapped in an if/else clause, shouldn't it work?










Re: [HACKERS] RC1?

2002-11-12 Thread Tom Lane
Zeugswetter Andreas SB SD [EMAIL PROTECTED] writes:
 Ok, when #define NO_MKTIME_BEFORE_1970 is removed from aix.h, then the
 results match the Solaris files.

Great!

 Attached is a patch to make AIX match Solaris. Please apply and add AIX
 to the supported platforms.

Patch applied to 7.3 and CVS tip --- Bruce, you're maintaining the
supported-platforms list, right?

 PS: what should we do with the rest of the resultmap entries for
 no-DST-before-1970 ?

I can tell you that the hppa entry is correct.  I presume the cygwin
folks would've mentioned it by now if theirs wasn't.

I suspect we are looking at two different behaviors for systems with no
old DST data: either assume all before 1970 is standard time (hppa does
this) or assume that years before 1970 use the same transition rule as
1970 (I'll bet that's what Solaris, AIX, etc are doing).

regards, tom lane




[HACKERS] Collation and case mapping thoughts (long)

2002-11-12 Thread Peter Eisentraut
I have been doing some research about how to create new routines for
string collation and character case mapping that would allow us to break
out of the one-locale-per-process scheme.  I have found that the Unicode
standard provides a lot of useful specifications and data for this.  The
Unicode data can be mapped to the other character sets (so you don't
actually have to use Unicode), and it should also be future-proof, in case
one day the entire world uses Unicode.

I am looking for replacements for the following C functions:

isalpha(), isdigit(), etc. -- character properties
toupper(), tolower()   -- case mapping
strcoll(), strxfrm()   -- string collation

(Those should be all the cases relying on LC_CTYPE and LC_COLLATE that
we're interested in.)

What we basically need is an API that allows passing locale and character
encoding as parameters, so they can be used flexibly in a server that
might be using many locales and encodings.

(These musings do not cover how to actually implement per-column or
per-datum locales, but they represent prerequisite work.)

Character properties are easy to handle, because they aren't
locale-dependent at all.  (A letter is a letter and a digit is a digit any
way you look at it.)  The Unicode standard defines a character category
for each character which can be mapped to the POSIX categories (alpha,
digit, punct, blank, etc.).  (Some details on the exact mapping are a bit
fuzzy to me, because the POSIX standard is a bit vague on these points,
but that isn't a terribly hard problem to resolve.)

I imagine that for each encoding supported by the PostgreSQL server we
create a simple lookup array indexed by character code.  Those arrays can
be created from Unicode data and conversion mapping files using a bit of
Perl.
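As a rough illustration of such a per-encoding lookup array, here is a Python sketch driven by Unicode category data. The category-to-class mapping is my guess at one plausible POSIX correspondence (exactly the fuzzy point noted above), not a settled table:

```python
import unicodedata

# Assumed mapping from Unicode general categories to POSIX-style classes.
CATEGORY_TO_CLASS = {
    "Lu": "alpha", "Ll": "alpha", "Lt": "alpha", "Lm": "alpha", "Lo": "alpha",
    "Nd": "digit",
    "Zs": "blank",
    "Pc": "punct", "Pd": "punct", "Ps": "punct", "Pe": "punct",
    "Pi": "punct", "Pf": "punct", "Po": "punct",
}

def build_lookup(encoding):
    """Build a lookup array indexed by character code for a single-byte
    server encoding: decode each code point to Unicode, then classify it
    via its Unicode category."""
    table = []
    for code in range(256):
        try:
            ch = bytes([code]).decode(encoding)
            table.append(CATEGORY_TO_CLASS.get(unicodedata.category(ch)))
        except UnicodeDecodeError:
            table.append(None)  # code point unused in this encoding
    return table

latin1 = build_lookup("latin-1")
print(latin1[ord("A")], latin1[ord("7")])
```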

Case mapping is only minimally locale-dependent.  For most languages, the
correspondence between lower-case and upper-case letters is the same (even
if the language wouldn't normally use many of those letters).  The Unicode
standard only enumerates a handful of exceptions, which can easily be
hard-coded.  (The only exception we will really be interested in is the
mapping of the Turkish i and I.  The other exceptions mostly apply to
esoteric Unicode features about which way accents are combined --
something we don't support yet, to my knowledge.)

Thus, we can create for each supported encoding a pair of functions
(tolower/toupper) that maps an input character to the corresponding
lower/upper-case character.  The function only needs to cover the Turkish
exception if all the involved characters are contained in the respective
character set (which they aren't, for example, in ISO 8859-1).  Again, we
can create these functions from Unicode data and conversion map files
using a bit of Perl.
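A minimal sketch of such a case-mapping function, with the Turkish exception hard-coded. The `lang` parameter is an assumed API standing in for a per-call locale (rather than the process-wide LC_CTYPE), and the function covers only the exception discussed above:

```python
def pg_tolower(ch, lang=None):
    """Locale-parameterized tolower(): case mapping is the same for most
    languages, so only the handful of exceptions need special handling.
    The one that matters in practice is the Turkish dotted/dotless i."""
    if lang == "tr":
        if ch == "I":
            return "\u0131"  # LATIN SMALL LETTER DOTLESS I
        if ch == "\u0130":   # LATIN CAPITAL LETTER I WITH DOT ABOVE
            return "i"
    return ch.lower()

# Default mapping vs. the Turkish exception:
print(pg_tolower("I"), pg_tolower("I", lang="tr"))
```

As noted, an encoding like ISO 8859-1 lacks the dotless/dotted characters, so its generated function would simply omit the exception branch.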

I've already created prototypes for the mentioned character property and
case mapping tables.  The only thing that remains to be done is figuring
out a reasonable space/time tradeoff.

The Unicode standard also defines a collation algorithm.  The collation
algorithm essentially converts a string (of Unicode characters) into a
sequence of numbers which can be compared with, say, memcmp -- much like
strxfrm() does.  The assignment of numbers for characters is the tricky
part.  The Unicode standard defines a default collation order, which is a
nice compromise but not appropriate for all languages.  Thus, making up
various linguistically correct collation tables is the laborious part of
this proposal.

There are a couple of possible implementation approaches for the collation
algorithm:

We can follow the earlier ideas and preprocess collation tables for each
combination of language and character set.  Considering that my system
knows 69 different languages and PostgreSQL supports 28 server-side
character sets, this would give far more than a thousand combinations.

Or we could pick out the combinations that are actually distinct and
deemed to be useful.  (For example, Finnish collation with a character set
typically used for Vietnamese does not seem useful, although it's
perfectly possible.)  I can't really judge what volume of data and work
this would give us.

Finally, we could transcode a given character string to Unicode on the fly
before computing the collation transformation.  This would simply require
two transformations instead of the one we need anyway, and it would keep
everything quite manageable since we'd only need one routine to do it all
and only one set of collation tables.  (Note that the collation problem
does not require round trip transcoding.  We only need conversion *to*
Unicode, which should always be possible for those characters that matter
in collation.)
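The third approach can be sketched as follows. Plain code-point order stands in here for the real Unicode collation algorithm (which would assign language-specific weights); the point illustrated is only the transcode-then-collate flow:

```python
def sort_key(raw, encoding):
    """Transcode a server-encoding byte string to Unicode on the fly,
    then derive a comparable key from the Unicode text.  Comparing
    Python str values compares code points, standing in for the
    weight strings a real collation table would produce."""
    return raw.decode(encoding)

# Strings stored in two different server encodings collate through
# one routine and one (stand-in) collation table:
values = [("caf\xe9".encode("latin-1"), "latin-1"),
          ("cafe".encode("utf-8"), "utf-8")]
ordered = sorted(values, key=lambda v: sort_key(*v))
print([v[0] for v in ordered])
```

This is the managability argument in code form: one transformation to Unicode plus one collation routine, instead of a preprocessed table per language/encoding pair.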

So, any thoughts on these ideas?

-- 
Peter Eisentraut   [EMAIL PROTECTED]





Re: [HACKERS] RC1?

2002-11-12 Thread Bruce Momjian
Tom Lane wrote:
 Zeugswetter Andreas SB SD [EMAIL PROTECTED] writes:
  Ok, when #define NO_MKTIME_BEFORE_1970 is removed from aix.h, then the
  results match the Solaris files.
 
 Great!
 
  Attached is a patch to make AIX match Solaris. Please apply and add AIX
  to the supported platforms.
 
 Patch applied to 7.3 and CVS tip --- Bruce, you're maintaining the
 supported-platforms list, right?

AIX updated in 7.3 and CVS.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073




Re: [HACKERS] Prepare enabled pgbench

2002-11-12 Thread Tatsuo Ishii
 Tatsuo, are you or anyone else working on adding PREPARE, EXECUTE support to
 pgbench?

As far as I know, no one is working on that.

 If not, I can do it myself and if you are interested, I'll send you the
 patch.

Thanks. I can commit it for 7.4. BTW, it would be nice if we could
have a switch to turn PREPARE/EXECUTE on or off in pgbench so that we
could see how PREPARE/EXECUTE improves the performance...
--
Tatsuo Ishii





Re: [HACKERS] RC1?

2002-11-12 Thread Peter Eisentraut
Bruce Momjian writes:

 Are we ready for RC1 yet?

Questionable.  We don't even have 50% confirmation coverage for the
supported platforms yet.

-- 
Peter Eisentraut   [EMAIL PROTECTED]





Re: [HACKERS] RC1?

2002-11-12 Thread Tom Lane
Peter Eisentraut [EMAIL PROTECTED] writes:
 Bruce Momjian writes:
 Are we ready for RC1 yet?

 Questionable.  We don't even have 50% confirmation coverage for the
 supported platforms yet.

We can't just wait around indefinitely for port reports that may or may
not ever appear.  In any case, most of the 7.3 entries in the list
seem to be various flavors of *BSD; I think it's unlikely we broke
those ...

regards, tom lane




Re: [HACKERS] RC1?

2002-11-12 Thread Robert Treat
On Tue, 2002-11-12 at 16:27, Tom Lane wrote:
 Peter Eisentraut [EMAIL PROTECTED] writes:
  Bruce Momjian writes:
  Are we ready for RC1 yet?
 
  Questionable.  We don't even have 50% confirmation coverage for the
  supported platforms yet.
 
 We can't just wait around indefinitely for port reports that may or may
 not ever appear.  In any case, most of the 7.3 entries in the list
 seem to be various flavors of *BSD; I think it's unlikely we broke
 those ...

Why not send an email to the folks who last reported a supported
platform and ask for an update? Probably won't get through to everyone,
but it might help pare down the list of unconfirmed.

Robert Treat







Re: [HACKERS] RC1?

2002-11-12 Thread scott.marlowe
On 12 Nov 2002, Robert Treat wrote:

 On Tue, 2002-11-12 at 16:27, Tom Lane wrote:
  Peter Eisentraut [EMAIL PROTECTED] writes:
   Bruce Momjian writes:
   Are we ready for RC1 yet?
  
   Questionable.  We don't even have 50% confirmation coverage for the
   supported platforms yet.
  
  We can't just wait around indefinitely for port reports that may or may
  not ever appear.  In any case, most of the 7.3 entries in the list
  seem to be various flavors of *BSD; I think it's unlikely we broke
  those ...
 
 Why not send an email to the folks who last reported a supported
 platform and ask for an update? Probably won't get through to everyone,
 but it might help pare down the list of unconfirmed.

I'm testing x86 solaris right now.  It's turning into a giant pain because 
of how the box I'm on is configured.





Re: [HACKERS] RC1?

2002-11-12 Thread scott.marlowe
On 12 Nov 2002, Robert Treat wrote:

 On Tue, 2002-11-12 at 16:27, Tom Lane wrote:
  Peter Eisentraut [EMAIL PROTECTED] writes:
   Bruce Momjian writes:
   Are we ready for RC1 yet?
  
   Questionable.  We don't even have 50% confirmation coverage for the
   supported platforms yet.
  
  We can't just wait around indefinitely for port reports that may or may
  not ever appear.  In any case, most of the 7.3 entries in the list
  seem to be various flavors of *BSD; I think it's unlikely we broke
  those ...
 
 Why not send an email to the folks who last reported a supported
 platform and ask for an update? Probably won't get through to everyone,
 but it might help pare down the list of unconfirmed.

I get this for gmake check:

(Lotsa messages deleted):

== removing existing temp installation==
== creating temporary installation==
== initializing database system   ==
== starting postmaster==
running on port 65432 with pid 19771
== creating database regression ==
CREATE DATABASE
ALTER DATABASE
== dropping regression test user accounts ==
== installing PL/pgSQL==
== running regression test queries==
parallel group (13 tests):  float4 int8 text int2 oid int4 char boolean varchar name float8 bit numeric
 boolean  ... ok
 char ... ok
 name ... ok
 varchar  ... ok
 text ... ok
 int2 ... ok
 int4 ... ok
 int8 ... ok
 oid  ... ok
 float4   ... ok
 float8   ... ok
 bit  ... ok
 numeric  ... ok
== shutting down postmaster   ==

==
 All 13 tests passed.
==


rm regress.o
gmake[2]: Leaving directory 
`/home/smarlowe/postgresql-7.3b5/src/test/regress'
gmake[1]: Leaving directory `/home/smarlowe/postgresql-7.3b5/src/test'

(END QUOTE)

And then it stops.  Anyone know why it doesn't run the rest of the 
regression tests?  





Re: [HACKERS] RC1?

2002-11-12 Thread Tom Lane
scott.marlowe [EMAIL PROTECTED] writes:
 And then it stops.  Anyone know why it doesn't run the rest of the 
 regression tests?  

Somebody else just reported the same thing on Solaris.  Must be
something about the pg_regress script that doesn't play nicely with
Solaris' shell.  Can you poke into it and try to figure out what?
(Perhaps running the script with +x would help.)

regards, tom lane




Re: [HACKERS] RC1?

2002-11-12 Thread scott.marlowe
On Tue, 12 Nov 2002, Tom Lane wrote:

 scott.marlowe [EMAIL PROTECTED] writes:
  And then it stops.  Anyone know why it doesn't run the rest of the 
  regression tests?  
 
 Somebody else just reported the same thing on Solaris.  Must be
 something about the pg_regress script that doesn't play nicely with
 Solaris' shell.  Can you poke into it and try to figure out what?
 (Perhaps running the script with +x would help.)

will do.





Re: [HACKERS] RC1?

2002-11-12 Thread scott.marlowe
On Tue, 12 Nov 2002, Tom Lane wrote:

 scott.marlowe [EMAIL PROTECTED] writes:
  And then it stops.  Anyone know why it doesn't run the rest of the 
  regression tests?  
 
 Somebody else just reported the same thing on Solaris.  Must be
 something about the pg_regress script that doesn't play nicely with
 Solaris' shell.  Can you poke into it and try to figure out what?
 (Perhaps running the script with +x would help.)

OK, make -x check fails; is there some other way to use -x that I'm not 
thinking of here?





Re: [HACKERS] RC1?

2002-11-12 Thread Tom Lane
scott.marlowe [EMAIL PROTECTED] writes:
 OK, make -x check fails; is there some other way to use -x that I'm not 
 thinking of here?

I was thinking of running the script by hand, not via make:

/bin/sh -x ./pg_regress --temp-install --top-builddir=../../.. 
--schedule=./parallel_schedule --multibyte=SQL_ASCII

regards, tom lane




Re: [HACKERS] RC1?

2002-11-12 Thread scott.marlowe
On Tue, 12 Nov 2002, Tom Lane wrote:

 scott.marlowe [EMAIL PROTECTED] writes:
  OK, make -x check fails; is there some other way to use -x that I'm not 
  thinking of here?
 
 I was thinking of running the script by hand, not via make:
 
 /bin/sh -x ./pg_regress --temp-install --top-builddir=../../.. 
--schedule=./parallel_schedule --multibyte=SQL_ASCII

Ok, now that I've run it that way, the last couple of pages of output 
look like this:

formatted=numeric
+ echo  numeric  ... \c
EXPECTED=./expected/numeric
 numeric  ... + expr abstime=abstime-solaris-1947 : 
numeric=
+ [ 0 -ne 0 ]
+ expr geometry=geometry-solaris-i386-pc : numeric=
+ [ 0 -ne 0 ]
+ expr horology=horology-solaris-1947 : numeric=
+ [ 0 -ne 0 ]
+ expr tinterval=tinterval-solaris-1947 : numeric=
+ [ 0 -ne 0 ]
bestfile=
bestdiff=
result=2
+ [ ! -r ./expected/numeric.out ]
+ diff -w ./expected/numeric.out ./results/numeric.out
result=0
+ break
+ echo ok
ok
+ read line
+ [ 0 -ne 0 ]
+ [ -n 22844 ]
+ message shutting down postmaster
_dashes===
_spaces=
+ cut -c 1-38
+ echo shutting down postmaster
_msg=shutting down postmaster
+ echo == shutting down postmaster   
==
== shutting down postmaster   ==
+ kill -15 22844
+ unset postmaster_pid
+ rm -f /tmp/pg_regress.19030
+ cat ./regression.out
+ grep \.\.\.
+ sed s/ //g
+ wc -l
count_total=13
+ cat ./regression.out
+ grep \.\.\. ok
+ + wc -l sed
 s/ //g
count_ok=13
+ cat ./regression.out
+ sed s/ //g
+ wc -l
+ grep \.\.\. FAILED
count_failed=0
+ cat ./regression.out
+ grep \.\.\. failed (ignored)
+ sed s/ //g
+ wc -l
count_ignored=0
+ echo

+ [ 13 -eq 13 ]
msg=All 13 tests passed.
result=0
+ sed s/./=/g
+ echo  All 13 tests passed.
dashes===
+ echo ==
==
+ echo  All 13 tests passed.
 All 13 tests passed.
+ echo ==
==
+ echo

+ [ -s ./regression.diffs ]
+ rm -f ./regression.diffs ./regression.out
+ exit 0
+ exit
savestatus=0
+ [ -n  ]
+ rm -f /tmp/pg_regress.19030
+ exit 0

Hope that helps.





Re: [HACKERS] RC1?

2002-11-12 Thread Tom Lane
scott.marlowe [EMAIL PROTECTED] writes:
 Ok, now that I've run it that way, the last couple of pages of output 
 look like this:

Hm.  So the "while read line" loop is iterating only once.

I was thinking to myself that something within the while loop must be
eating up stdin, so that there's nothing left for the while read to
read when control returns to the top of the loop.  This strengthens that
theory.  Now, exactly what is reading stdin?

My suspicion falls on the very-recently-added awk calls.  Try changing

(echo "SET autocommit TO 'on';"; awk 'BEGIN {printf "\\set ECHO all\n"}'; cat $inputdir/sql/$1.sql) |

to

(echo "SET autocommit TO 'on';"; awk 'BEGIN {printf "\\set ECHO all\n"}' </dev/null; cat $inputdir/sql/$1.sql) |

(there are two places to do this)
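The stdin-eating failure Tom describes can be reproduced with a few lines of
shell.  This is a hypothetical sketch, not the actual pg_regress code: `cat`
stands in for any command inside the loop that reads stdin, and the demo data
is invented.

```shell
# Broken: `cat` inherits the loop's stdin (the pipe) and silently drains
# the remaining lines, so `read` gets EOF and the loop runs only once.
printf 'one\ntwo\nthree\n' | while read line; do
    echo "got: $line"
    cat > /dev/null
done

# Fixed: point the inner command's stdin at /dev/null, as in the suggested
# awk change, and all three lines come through.
printf 'one\ntwo\nthree\n' | while read line; do
    echo "got: $line"
    cat < /dev/null > /dev/null
done
```

Some pre-POSIX awk implementations reportedly read stdin even when the
program contains only a BEGIN block, which would explain why the loop dies
only with Solaris' tools; the </dev/null redirection makes the behavior
uniform everywhere.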

regards, tom lane




Re: [HACKERS] Prepare enabled pgbench

2002-11-12 Thread Bruce Momjian
Tatsuo Ishii wrote:
  Tatsuo, are you or anyone else working on adding PREPARE, EXECUTE support to
  pgbench?
 
 As far as I know, no one is working on that.
 
  If not, I can do it myself and if you are interested, I'll send you the
  patch.
 
 Thanks. I can commit it for 7.4. BTW, it would be nice if we could
 have a switch to turn on/off PREPARE/EXECUTE in pgbench so that we
 could see how PREPARE/EXECUTE could improve the performance...

We could probably just run before-after patch tests to see the
performance change.  I am afraid adding that switch into the code may
make it messy.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073




Re: [HACKERS] 500 tpsQL + WAL log implementation

2002-11-12 Thread Justin Clift
Bravo Curtis,

This is all excellent research.

:-)

Regards and best wishes,

Justin Clift


Curtis Faith wrote:
snip
 Disk space is much cheaper than CPU and memory so I think that a logging
 system that used as much as three or four times the space but is three or
 four times faster would be a worthwhile improvement for those systems where
 updates or insert volume are very heavy. Obviously, this needs to be an
 option, not the default configuration.
 
 - Curtis

-- 
My grandfather once told me that there are two kinds of people: those
who work and those who take the credit. He told me to try to be in the
first group; there was less competition there.
   - Indira Gandhi




Re: [HACKERS] CREATE TABLE/AS does not allow WITH OIDS?

2002-11-12 Thread Bruce Momjian
Tom Lane wrote:
 Thomas Lockhart [EMAIL PROTECTED] writes:
  What would be required to have OIDs for all SELECT/INTO product tables 
  for this release?
 
 It *might* work to just reverse the default assumption in
 ExecAssignResultTypeFromTL().  But I will vote against making such a
 change at this late hour.  Quite possibly we'd introduce bugs in
 features that are much more critical than whether a table created by
 SELECT INTO has OIDs or not.

With no one concerned about this except Thomas and myself, added to
TODO:

* Fix SELECT ... INTO and CREATE TABLE AS to have appropriate OID column

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073




Re: [HACKERS] Prepare enabled pgbench

2002-11-12 Thread Tatsuo Ishii
  Thanks. I can commit it for 7.4. BTW, it would be nice if we could
  have a switch to turn on/off PREPARE/EXECUTE in pgbench so that we
  could see how PREPARE/EXECUTE could improve the performance...
 
 We could probably just run before-after patch tests to see the
 performance change.  I am afraid adding that switch into the code may
 make it messy.

But one of the purposes of pgbench is to examine performance in
different environments, isn't it? I'm afraid hard-coded
PREPARE/EXECUTE makes that harder.
--
Tatsuo Ishii




Re: [HACKERS] Prepare enabled pgbench

2002-11-12 Thread Bruce Momjian
Tatsuo Ishii wrote:
   Thanks. I can commit it for 7.4. BTW, it would be nice if we could
   have a switch to turn on/off PREPARE/EXECUTE in pgbench so that we
   could see how PREPARE/EXECUTE could improve the performance...
  
  We could probably just run before-after patch tests to see the
  performance change.  I am afraid adding that switch into the code may
  make it messy.
 
 But one of the purposes of pgbench is to examine performance in
 different environments, isn't it? I'm afraid hard-coded
 PREPARE/EXECUTE makes that harder.

I was just thinking that pgbench is for measuring code changes, not for
testing changes _in_ pgbench.  Once we know the performance difference
for PREPARE, would we still keep the code in pgbench?  Maybe to test
later, I guess.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073




Re: [HACKERS] Prepare enabled pgbench

2002-11-12 Thread Tom Lane
Tatsuo Ishii [EMAIL PROTECTED] writes:
 Thanks. I can commit it for 7.4. BTW, it would be nice if we could
 have a switch to turn on/off PREPARE/EXECUTE in pgbench so that we
 could see how PREPARE/EXECUTE could improve the performance...

That is a *must*.  Otherwise, you've simply made an arbitrary change
in the benchmark ... which is no benchmark at all.

regards, tom lane




[HACKERS] pg_dump in 7.4

2002-11-12 Thread Christopher Kings-Lynne
Hi,

Has anyone given much thought to improving pg_dump's object order algorithm
for 7.4?  It seems that, now that we have dependencies, it should just be a
matter of doing a breadth-first or depth-first search over the pg_depend
table to generate a valid order of OIDs.
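What Chris is describing amounts to a topological sort of the dependency
graph.  As a rough illustration (the object names below are invented; real
pg_depend rows are OID triples, not names), the stock Unix tsort(1) utility
performs exactly this ordering, given one "prerequisite dependent" pair per
line:

```shell
# A made-up dependency chain: a tsearch type used by a table, and both
# used by a view.  tsort prints a valid creation order, prerequisites
# first -- here: tsearch_type, my_table, my_view (the only valid order).
printf '%s\n' \
    'tsearch_type my_table' \
    'my_table my_view' \
    'tsearch_type my_view' | tsort
```

A pg_dump fix would do the equivalent walk over pg_depend itself, with the
fallback bucket mentioned below appended for any objects that are missing
from the dependency table.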

To allow for mess-ups in that table, the next step would be to add to the
end of the list of oids any objects that for whatever reason aren't in the
dependency system.  (Is this possible?  Manual hacking can do it
methinks...)

Does this sound like an idea?

I've just become rather frustrated trying to do a test reload of our 7.2.3
dump into 7.3b5.  The problem is all the tsearch types are declared after
the tables that actually use them!

Chris





Re: [HACKERS] pg_dump in 7.4

2002-11-12 Thread Philip Warner
At 01:33 PM 13/11/2002 +0800, Christopher Kings-Lynne wrote:

Does this sound like an idea?


It does, but in keeping with allowing pg_restore to be quite flexible, I'd 
like to see the dependency data stored in the dump file, then processed at 
restore-time.


I've just become rather frustrated trying to do a test reload of our 7.2.3
dump into 7.3b5.  The problem is all the tsearch types are declared after
the tables that actually use them!


pg_dump already has rudimentary dependency tracking (one level deep); each 
item can have a list of OIDs it depends on. You *could* patch it to add 
the types to the table dependencies.

In the future I'd imagine we'll just dump the OIDs of all first level 
dependencies for each object, then at restore-time, process them in 
whatever order the user requests (defaulting to dependency-order).




Philip Warner| __---_
Albatross Consulting Pty. Ltd.   |/   -  \
(A.B.N. 75 008 659 498)  |  /()   __---_
Tel: (+61) 0500 83 82 81 | _  \
Fax: (+61) 03 5330 3172  | ___ |
Http://www.rhyme.com.au  |/   \|
 |----
PGP key available upon request,  |  /
and from pgp5.ai.mit.edu:11371   |/




Re: [HACKERS] pg_dump in 7.4

2002-11-12 Thread Christopher Kings-Lynne
 pg_dump already has rudimentary dependency tracking (one level
 deep); each
 item can have a list of oid's it depends on. You *could* patch it to add
 the types to the table dependencies.

 In the future I'd imagine we'll just dump the OIDs of all first level
 dependencies for each object, then at restore-time, process them in
 whatever order the user requests (defaulting to dependency-order).

Well, the problem is that you can add a new type and then add a column to a
really old table that uses that type - that causes pain.  Lots of other
people have also reported the "view dumped before the table it is based on"
problem.

Chris





Re: [HACKERS] Prepare enabled pgbench

2002-11-12 Thread Tatsuo Ishii
  But one of the purposes of pgbench is to examine performance in
  different environments, isn't it? I'm afraid hard-coded
  PREPARE/EXECUTE makes that harder.
 
 I was just thinking that pgbench is for measuring code changes, not for
 testing changes _in_ pgbench.  Once we know the performance difference
 for PREPARE, would we still keep the code in pgbench?  Maybe to test
 later, I guess.

My concern is that PREPARE/EXECUTE may NOT always improve
performance. I guess we have very little data to judge whether
PREPARE/EXECUTE is good or not. Moreover, PREPARE/EXECUTE might be
improved in the future. If that happens, keeping that switch would
help in examining the effect, no?
--
Tatsuo Ishii




Re: [HACKERS] Prepare enabled pgbench

2002-11-12 Thread Bruce Momjian
Tatsuo Ishii wrote:
   But one of the purposes of pgbench is to examine performance in
   different environments, isn't it? I'm afraid hard-coded
   PREPARE/EXECUTE makes that harder.
  
  I was just thinking that pgbench is for measuring code changes, not for
  testing changes _in_ pgbench.  Once we know the performance difference
  for PREPARE, would we still keep the code in pgbench?  Maybe to test
  later, I guess.
 
 My concern is that PREPARE/EXECUTE may NOT always improve
 performance. I guess we have very little data to judge whether
 PREPARE/EXECUTE is good or not. Moreover, PREPARE/EXECUTE might be
 improved in the future. If that happens, keeping that switch would
 help in examining the effect, no?

It would.  I was just concerned that having both in there would be a
maintenance headache and would perhaps double the amount of code and
make it complicated.  Let's see what the author does and we can decide
then.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073




[HACKERS] performance regression, 7.2.3 -> 7.3b5 w/ VIEW

2002-11-12 Thread Ross J. Reedstrom
Hey Hackers - 
I was testing beta5 and found a performance regression involving
application of constraints into a VIEW - I've got a view that is fairly
expensive, involving a subselect and an aggregate.  When the query is
rewritten in 7.2.3, the toplevel constraint is used to filter before
the subselect - in 7.3b5, it comes after.

For this query, the difference is 160 ms vs. 2 sec. Any reason for this
change?

Here's the view def., and explain analyzes for the view, and two hand
rewritten versions (since the explain analyze in 7.2.3 doesn't display
the filter parameters)

Ross

CREATE VIEW current_modules AS
    SELECT * FROM modules m
    WHERE module_ident =
        (SELECT max(module_ident) FROM modules
         WHERE m.moduleid = moduleid GROUP BY moduleid);

repository=# explain analyze select * from current_modules where name ~ 'Fourier';
                                                     QUERY PLAN
--------------------------------------------------------------------------------------------------------------------
 Seq Scan on modules m  (cost=0.00..116090.23 rows=1 width=135) (actual time=18.74..1968.01 rows=37 loops=1)
   Filter: ((module_ident = (subplan)) AND (name ~ 'Fourier'::text))
   SubPlan
     ->  Aggregate  (cost=0.00..25.57 rows=1 width=13) (actual time=0.41..0.41 rows=1 loops=4534)
           ->  Group  (cost=0.00..25.55 rows=6 width=13) (actual time=0.08..0.37 rows=10 loops=4534)
                 ->  Index Scan using moduleid_idx on modules  (cost=0.00..25.54 rows=6 width=13) (actual time=0.06..0.27 rows=10 loops=4534)
                       Index Cond: ($0 = moduleid)
 Total runtime: 1968.65 msec
(8 rows)

repository=# explain analyze select module_ident from modules m where m.name ~ 'Fourier' and m.module_ident = (SELECT max(modules.module_ident) as max from modules where (m.moduleid=moduleid) group by modules.moduleid);
                                                     QUERY PLAN
--------------------------------------------------------------------------------------------------------------------
 Seq Scan on modules m  (cost=0.00..116090.23 rows=1 width=4) (actual time=2.46..158.33 rows=37 loops=1)
   Filter: ((name ~ 'Fourier'::text) AND (module_ident = (subplan)))
   SubPlan
     ->  Aggregate  (cost=0.00..25.57 rows=1 width=13) (actual time=0.35..0.35 rows=1 loops=270)
           ->  Group  (cost=0.00..25.55 rows=6 width=13) (actual time=0.07..0.31 rows=9 loops=270)
                 ->  Index Scan using moduleid_idx on modules  (cost=0.00..25.54 rows=6 width=13) (actual time=0.06..0.22 rows=9 loops=270)
                       Index Cond: ($0 = moduleid)
 Total runtime: 158.81 msec
(8 rows)

repository=# explain analyze select module_ident from modules m where m.module_ident = (SELECT max(modules.module_ident) as max from modules where (m.moduleid=moduleid) group by modules.moduleid) and m.name ~ 'Fourier';
                                                     QUERY PLAN
--------------------------------------------------------------------------------------------------------------------
 Seq Scan on modules m  (cost=0.00..116090.23 rows=1 width=4) (actual time=18.66..1959.31 rows=37 loops=1)
   Filter: ((module_ident = (subplan)) AND (name ~ 'Fourier'::text))
   SubPlan
     ->  Aggregate  (cost=0.00..25.57 rows=1 width=13) (actual time=0.41..0.41 rows=1 loops=4534)
           ->  Group  (cost=0.00..25.55 rows=6 width=13) (actual time=0.08..0.37 rows=10 loops=4534)
                 ->  Index Scan using moduleid_idx on modules  (cost=0.00..25.54 rows=6 width=13) (actual time=0.06..0.27 rows=10 loops=4534)
                       Index Cond: ($0 = moduleid)
 Total runtime: 1959.84 msec
(8 rows)




Re: [HACKERS] Prepare enabled pgbench

2002-11-12 Thread Tatsuo Ishii
  My concern is that PREPARE/EXECUTE may NOT always improve
  performance. I guess we have very little data to judge whether
  PREPARE/EXECUTE is good or not. Moreover, PREPARE/EXECUTE might be
  improved in the future. If that happens, keeping that switch would
  help in examining the effect, no?
 
 It would.  I was just concerned that having both in there would be a
 maintenance headache and would perhaps double the amount of code and
 make it complicated.  Let's see what the author does and we can decide
 then.

Ok.
--
Tatsuo Ishii




Re: [HACKERS] performance regression, 7.2.3 -> 7.3b5 w/ VIEW

2002-11-12 Thread Mike Mascari
Ross J. Reedstrom wrote:

Hey Hackers - 
I was testing beta5 and found a performance regression involving
application of constraints into a VIEW - I've got a view that is fairly
expensive, involving a subselect and an aggregate.  When the query is
rewritten in 7.2.3, the toplevel constraint is used to filter before
the subselect - in 7.3b5, it comes after.

For this query, the difference is 160 ms vs. 2 sec. Any reason for this
change?

I could be way off base, but here's a shot in the dark:

http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&oe=UTF-8&threadm=3D0885E1.8F369ACA%40mascari.com&rnum=3&prev=/groups%3Fq%3DMike%2BMascari%2Bsecurity%2BTom%2BLane%26ie%3DUTF-8%26oe%3DUTF-8%26hl%3Den

At the time I thought PostgreSQL was doing something naughty by 
allowing user functions to be invoked on data that would 
ultimately not be returned. Now I know how Oracle uses VIEWS for 
row security: Oracle functions invoked in DML statements can't 
record any changes to the database. So if the above is the 
cause, I wouldn't have any problems with the patch being 
reversed. Maybe separate privileges for read-only vs. read-write 
functions are in order at some point in the future though...

Mike Mascari
[EMAIL PROTECTED]





Re: [HACKERS] [SQL] Inconsistent or incomplete behavior observed in where

2002-11-12 Thread Josh Berkus
Paul,

 Unable to identify an operator '=' for types 'numeric' and 'double
 precision' You will have to retype this query using an explicit cast

This is due, as you surmised, to decimal values defaulting to floats.
While there is little problem with an = operator for numeric and
float, you would not want an implicit cast for a / operator with
numeric and float.  As a result, I believe that all numeric-to-float
operators have been left undefined.

 I am aware of the use of type casting to force the desired 
 behavior in these situations.  I have also started to go down 
 the road of creating functions and operators to force numeric 
 to numeric comparison operations when comparing numeric to float, 
 but realize that this approach is fraught with pitfalls, in fact 
 it is interesting to us to note that with an operator in place 
 to force numeric = float comparisons to parse as numeric = numeric, 
 we started getting the opposite behavior.  Queries with 'column 
 reference' = 0.0 worked fine, but queries with 'column reference' = 0
 
 threw a variant of the previous exception:
 
 Unable to identify an operator '=' for types 'numeric' and
 'integer'

Now, that's interesting.   Why would defining a numeric = float have
broken numeric = integer?   There's no reason I can think of.
  Perhaps I will try this myself and see if I encounter the same
problem, or if your team modified the numeric = integer operator by
mistake.

 Overall, this behavior appears to be inconsistent and is not 
 the same behavior I have experienced with many other DBMS's.
 Specifically, it seems strange that the parser does not treat 
 values 0.0 or 77.5 as numeric(s[,p]) when comparing the values
 to a column reference known to be of type numeric (s,[p]).  
 
 Is an unquoted number in the form of NN.N always treated as a 
 float?  

Yes.   I believe that this is from the SQL 92 spec; hopefully someone
on this list with a copy of the Guide to the SQL Standard can quote it
for you.

 If the planner could somehow recognize that the constant/
 literal value was being compared to a column reference of the
 type numeric (s,p) and treat the value accordingly, then would
 operator identification no longer be a problem?

It's an interesting idea, and would be wonderful if it could be made to
work.  However, the challenge of getting the program to correctly
recognize the context for all literal values *without* making any wrong
assumptions that would affect the data could be substantial.

Most other RDBMSs deal with this, not by any kind of data type
context-sensitivity, but simply by supporting a large number of
implicit casts.  This approach can have its own perils, as I have
experienced with MS SQL Server, where the average of splits for 120,000
transactions is significantly different if you accidentally let the
database implicitly cast the values as Float instead of Numeric.

As such, there was talk on the Hackers list at one time of *reducing*
the number of implicit casts instead of increasing them.   This would
obviously make your particular problem even worse, but the proponents
of reduction point out that implicit casts can get you into real
trouble if you're not aware of them, whereas forcing explicit casts just
gets you error messages.

Hmmm ... in fact, I'd think the perfect solution would be a
compile-time option or contrib package which allows you to
enable/disable implicit casts for many data types.

 We are looking to maintain a high degree of portability in our 
 application code, and while CAST ( expression as type ) is 
 fairly portable, no one here feels that it is as portable as
 column reference = literal/constant value.   If someone knows
 of a better approach, or can point us to documentation of build or
 run-time configuration that affects the query planner where this 
 issue is concerned, it would be much appreciated.

Hopefully someone else will respond to your message as well.   I'll
re-phrase one of your questions for the Hackers list:

QUESTION:  Is there any way we could distinguish between literals and
column references when processing operators?   That is, while we would
*not* want to implicitly convert a float column to numeric for equality
comparison, would it be possible to convert a literal value to match
the column to which it is compared?  Or is literal processing completed
before any expressions are evaluated?

-Josh Berkus








Re: [HACKERS] [SQL] Inconsistent or incomplete behavior observed in where

2002-11-12 Thread Tom Lane
Josh Berkus [EMAIL PROTECTED] writes:
 Now, that's interesting.   Why would defining a numeric = float have
 broken numeric = integer?   There's no reason I can think of.

The problem probably is that the parser now finds two possible
interpretations that look equally good to it, so it can't choose.
It could coerce the integer constant to numeric (and use numeric=numeric)
or to float (and use the added numeric=float operator), and there's no
rule that can break the tie.

In 7.3 and 7.4 we are actually going in the direction of removing
cross-data-type operators, not adding them, because they tend to create
too many options for the parser to choose from.

regards, tom lane
