[HACKERS] lwlocks and starvation

2004-11-24 Thread Neil Conway
LWLockRelease() currently does something like (simplifying a lot):

acquire lwlock spinlock
decrement lock count
if lock is free
  if first waiter in queue is waiting for exclusive lock,
  awaken him; else, walk through the queue and awaken
  all the shared waiters until we reach an exclusive waiter
end if
release lwlock spinlock

This has the nice property that locks are granted in FIFO order. Is it
essential that we maintain that property? If not, we could instead walk
through the wait queue and awaken *all* the shared waiters, and get a
small improvement in throughput.

I can see that this might starve exclusive waiters; however, we allow
the following:

Proc A = LWLockAcquire(lock, LW_SHARED); -- succeeds
Proc B = LWLockAcquire(lock, LW_EXCLUSIVE); -- blocks
Proc C = LWLockAcquire(lock, LW_SHARED); -- succeeds

i.e. we don't *really* follow strict FIFO order anyway.

-Neil



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] Bitmap index

2004-11-24 Thread Neil Conway
On Mon, 2004-11-22 at 07:57 +, Paweł Niewiadomski wrote:
 I saw discussion about bitmap indexes a few weeks ago. I wonder if
 any of you is working on it (in secret)?

For what it's worth, I don't know of anyone working on them.

 I will be choosing the subject
 of my master's thesis and thought about implementing bitmap indexes.

No objection here :)

-Neil



---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] [JDBC] Strange server error with current 8.0beta driver

2004-11-24 Thread Oliver Jowett
Barry Lind wrote:
I also have the test case (in java) down to the bare minimum that
generated the following output (that test case is attached).  (Note that
if the FETCH in the test case is not executed then the backend crashes;
with the FETCH you get an error: ERROR: unrecognized node type: 0)
I narrowed this down to:
  while (true) {
      l_stmtDeclare.execute();
  }
producing:
 FE=> Parse(stmt=S_1,query="BEGIN",oids={})
 FE=> Bind(stmt=S_1,portal=null)
 FE=> Execute(portal=null,limit=0)
 FE=> Parse(stmt=S_2,query="DECLARE CUR CURSOR FOR SELECT 1",oids={})
 FE=> Bind(stmt=S_2,portal=null)
 FE=> Describe(portal=null)
 FE=> Execute(portal=null,limit=0)
 FE=> Sync
 <=BE ParseComplete [S_1]
 <=BE BindComplete [null]
 <=BE CommandStatus(BEGIN)
 <=BE ParseComplete [S_2]
 <=BE BindComplete [null]
 <=BE NoData
 <=BE CommandStatus(DECLARE CURSOR)
 <=BE ReadyForQuery(T)
simple execute, [EMAIL PROTECTED], maxRows=0, fetchSize=0, flags=0
 FE=> Bind(stmt=S_2,portal=null)
 FE=> Describe(portal=null)
 FE=> Execute(portal=null,limit=0)
 FE=> Sync
 <=BE BindComplete [null]
 <=BE NoData
 <=BE ErrorMessage(ERROR: unrecognized node type: 2139062143
  Location: File: clauses.c, Routine: expression_tree_mutator, Line: 3237
  Server SQLState: XX000)
Valgrind says this is the culprit:
==26451== Invalid read of size 4
==26451==at 0x8185C86: eval_const_expressions_mutator (clauses.c:1185)
==26451==by 0x8185C32: eval_const_expressions (clauses.c:1152)
==26451==by 0x817D1A6: preprocess_expression (planner.c:415)
==26451==by 0x817CEBF: subquery_planner (planner.c:240)
==26451==by 0x817CD59: planner (planner.c:129)
==26451==by 0x810DF03: PerformCursorOpen (portalcmds.c:87)
==26451==by 0x81C1402: PortalRunUtility (pquery.c:934)
==26451==by 0x81C1762: PortalRunMulti (pquery.c:1001)
==26451==by 0x81C0D8E: PortalRun (pquery.c:617)
==26451==by 0x81BDDA7: exec_execute_message (postgres.c:1673)
==26451==by 0x81BF6E1: PostgresMain (postgres.c:3035)
==26451==by 0x818FC39: BackendRun (postmaster.c:2817)
==26451==by 0x818F642: BackendStartup (postmaster.c:2453)
==26451==by 0x818D989: ServerLoop (postmaster.c:1198)
==26451==by 0x818CDBA: PostmasterMain (postmaster.c:917)
==26451==by 0x81570F4: main (main.c:268)
==26451==  Address 0x1BBBF704 is 260 bytes inside a block of size 1024 free'd
==26451==at 0x1B905460: free (vg_replace_malloc.c:153)
==26451==by 0x8245706: AllocSetDelete (aset.c:466)
==26451==by 0x82468B8: MemoryContextDelete (mcxt.c:193)
==26451==by 0x8247BCF: PortalDrop (portalmem.c:384)
==26451==by 0x82475B5: CreatePortal (portalmem.c:179)
==26451==by 0x81BD735: exec_bind_message (postgres.c:1369)
==26451==by 0x81BF4EF: PostgresMain (postgres.c:3023)
==26451==by 0x818FC39: BackendRun (postmaster.c:2817)
==26451==by 0x818F642: BackendStartup (postmaster.c:2453)
==26451==by 0x818D989: ServerLoop (postmaster.c:1198)
==26451==by 0x818CDBA: PostmasterMain (postmaster.c:917)
==26451==by 0x81570F4: main (main.c:268)
With a bit of gdb work, I think what is happening is this:
The first Execute of S_2, running in portal context, calls the planner 
on the query contained in S_2's DeclareCursorStmt. The planner modifies 
the query tree in the course of planning it (specifically, it modifies 
parse->targetList). Memory allocated for the modified query comes from 
the portal context.

The portal context is freed implicitly by the second Bind of S_2 (second 
stack trace above).

The second Execute of S_2 then tries to use parse->targetList when 
planning (first stack trace above), but that's now pointing to freed 
memory. Boom.

Perhaps PerformCursorOpen should copy the query tree before planning, or 
plan in a different memory context?

-O
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] [JDBC] Strange server error with current 8.0beta driver

2004-11-24 Thread Oliver Jowett
Oliver Jowett wrote:
Perhaps PerformCursorOpen should copy the query tree before planning, or 
plan in a different memory context?
Patch attached. It moves query planning inside the new portal's memory 
context. With this applied I can run Barry's testcase without errors, 
and valgrind seems OK with it too.

-O
Index: src/backend/commands/portalcmds.c
===
RCS file: /projects/cvsroot/pgsql/src/backend/commands/portalcmds.c,v
retrieving revision 1.36
diff -c -r1.36 portalcmds.c
*** src/backend/commands/portalcmds.c   16 Sep 2004 16:58:28 -  1.36
--- src/backend/commands/portalcmds.c   24 Nov 2004 09:28:34 -
***
*** 67,73 
 * query, so we are not expecting rule rewriting to do anything
 * strange.
 */
!   rewritten = QueryRewrite((Query *) stmt->query);
if (list_length(rewritten) != 1 || !IsA(linitial(rewritten), Query))
elog(ERROR, "unexpected rewrite result");
query = (Query *) linitial(rewritten);
--- 67,86 
 * query, so we are not expecting rule rewriting to do anything
 * strange.
 */
! 
!   /* Create a new portal, and do all query planning on a copy of
!    * the query allocated in the new portal's memory context. The
!    * planner may modify the query, and it is not safe to have
!    * those modifications persist as we are ourselves running in a
!    * transient portal context.
!    */
!   portal = CreatePortal(stmt->portalname, false, false);
! 
!   oldContext = MemoryContextSwitchTo(PortalGetHeapMemory(portal));
! 
!   query = copyObject(stmt->query);
! 
!   rewritten = QueryRewrite(query);
if (list_length(rewritten) != 1 || !IsA(linitial(rewritten), Query))
elog(ERROR, "unexpected rewrite result");
query = (Query *) linitial(rewritten);
***
*** 86,102 
  
plan = planner(query, true, stmt->options, NULL);
  
-   /*
-* Create a portal and copy the query and plan into its memory
-* context.
-*/
-   portal = CreatePortal(stmt->portalname, false, false);
- 
-   oldContext = MemoryContextSwitchTo(PortalGetHeapMemory(portal));
- 
-   query = copyObject(query);
-   plan = copyObject(plan);
- 
PortalDefineQuery(portal,
  NULL, /* unfortunately don't have sourceText */
  "SELECT", /* cursor's query is always a SELECT */
--- 99,104 

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


[HACKERS] valgrind complaints in pgstat_write_statsfile

2004-11-24 Thread Oliver Jowett
Seen in passing when running valgrind against a CVS HEAD build:
==28598== Syscall param write(buf) contains uninitialised or unaddressable 
byte(s)
==28598==at 0x1BABC558: write (in /lib/libc-2.3.4.so)
==28598==by 0x1BA7165D: (within /lib/libc-2.3.4.so)
==28598==by 0x1BA715FE: _IO_do_write (in /lib/libc-2.3.4.so)
==28598==by 0x1BA70E61: _IO_file_close_it (in /lib/libc-2.3.4.so)
==28598==by 0x1BA67B07: _IO_fclose (in /lib/libc-2.3.4.so)
==28598==by 0x819369B: pgstat_write_statsfile (pgstat.c:2275)
==28598==by 0x8192A44: PgstatCollectorMain (pgstat.c:1576)
==28598==by 0x8192368: PgstatBufferMain (pgstat.c:1398)
==28598==by 0x8191656: pgstat_start (pgstat.c:617)
==28598==by 0x818EB81: reaper (postmaster.c:2096)
==28598==by 0x52BFEFFF: ???
==28598==by 0x818CDAA: PostmasterMain (postmaster.c:917)
==28598==by 0x81570E4: main (main.c:268)
Anything to be concerned about? I guess the uninitialized bytes were 
actually supplied in an earlier buffered write, but it's only at fclose() 
that they get flushed out and noticed.

-O
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] patch: plpgsql - access records with rec.(expr)

2004-11-24 Thread Neil Conway
Mike Rylander wrote:
As an alternative, what would be the possibility of creating a new PL
as a contrib module, say PLPGSQL_NG, to move forward with extensions
like this and perhaps EVALUATE?
I think the idea of rewriting PL/PgSQL from scratch has merit (and it's 
something that I think would be well worth doing). IMHO it's not really 
worth the trouble to fork the existing code base and add new features to 
something that, hopefully, has a limited life span.

-Neil
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] -V, --version -- deprecated?

2004-11-24 Thread Peter Eisentraut
Neil Conway wrote:
 The --help output for most of the binaries we install does not
 include the -V option (just its alias, --version). Is this
 intentional?

 (Note that we still document this option in the reference pages for
 some commands, and initdb's help output does include -V.)

--help and --version are the standard options that are supported 
everywhere.  In the era before we had long options everywhere, we 
implemented -V as an alternative in some programs, in particular those 
in and around initdb, because of the version cross-checking it does 
using those options.

At one point, long options were broken on some BSD versions.  I don't 
know what became of that, but if we don't have new information it might 
be safest to leave things where they are.

Hence, the -V option is not the preferred public interface, so it's not 
prominently documented, which may or may not be reasonable in minds 
other than my own.

 Speaking of command-line options, --version and --help aren't
 documented in a lot of command reference pages. Is there a good
 reason why?

I am not aware of one.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] patch: plpgsql - access records with rec.(expr)

2004-11-24 Thread Joachim Wieland
On Tue, Nov 23, 2004 at 10:33:50AM -0500, Tom Lane wrote:
 We do need to do something about the fact that EXECUTE can't access
 plpgsql variables, though I'm afraid that fixing that is going to
 require a rather complete overhaul of plpgsql :-(.  But it needs one
 anyway.

Why do you think that it would be difficult to do it with the existing code?
Actually I wanted to implement this for 8.1. I've already had a quick look
at it. My idea was to allow something like EXECUTE 'SELECT INTO var1, var2
col1, col2 FROM ...'. The code would have to parse the list of variables and
check whether they match PL/pgSQL variables, execute the query (without the
INTO stuff, of course) via SPI, and copy back the results, checking for type
mismatches or a mismatch in the number of columns. I haven't thought about
more complex types, however. Did I miss something?

Which other limitations are there in the current implementation of pl/pgsql?

Joachim




---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] Trouble with plpgsql on 7.4.6

2004-11-24 Thread D'Arcy J.M. Cain
On Tue, 23 Nov 2004 07:25:17 -0500
D'Arcy J.M. Cain [EMAIL PROTECTED] wrote:
 
 #0  0x483cafeb in kill () from /usr/lib/libc.so.12
 #1  0x483cd0af in __libc_mutex_catchall_stub (m=1212478892)
 at /usr/src/lib/libc/thread-stub/thread-stub.c:112
 #2  0x4843f0f7 in free (ptr=incomplete type)
 at /usr/src/lib/libc/stdlib/malloc.c:1149
 #3  0x081b3efc in AllocSetDelete (context=error type) at aset.c:464
 #4  0x081b468a in MemoryContextDelete (context=error type) at
 #mcxt.c:192
 
 In fact it is calling the thread stubs but the application is
 threaded. At least, that's what I see from this code in NetBSD.
 
 #define CHECK_NOT_THREADED_ALWAYS() \
 do {\
 if (__isthreaded)   \
 DIE();  \
 } while (/*CONSTCOND*/0)
 
 #if 1
 #define CHECK_NOT_THREADED()CHECK_NOT_THREADED_ALWAYS()
 #else
 #define CHECK_NOT_THREADED()/* nothing */
 #endif 
 
 I am going to follow up with the NetBSD team but I pass this on in
 case it suggests anything.  Note that this still only causes problems
 on 7.4.6 and not under 7.4.3.

OK, I have found the problem.  It turns out to be in the way that it is
built under the NetBSD pkgsrc system.  As soon as a module is loaded
that uses pthreads it sets a variable (__isthreaded) that causes the
above error.  The answer is to simply link PostgreSQL with -lpthread to
begin with.  We are making some changes to the pkgsrc system to do this
correctly with this and some other packages that could potentially have
the same problem.

Thanks for the help in finding this.

-- 
D'Arcy J.M. Cain [EMAIL PROTECTED] |  Democracy is three wolves
http://www.druid.net/darcy/|  and a sheep voting on
+1 416 425 1212 (DoD#0082)(eNTP)   |  what's for dinner.

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] lwlocks and starvation

2004-11-24 Thread Bruce Momjian
Neil Conway wrote:
 LWLockRelease() currently does something like (simplifying a lot):
 
 acquire lwlock spinlock
 decrement lock count
 if lock is free
   if first waiter in queue is waiting for exclusive lock,
   awaken him; else, walk through the queue and awaken
   all the shared waiters until we reach an exclusive waiter
 end if
 release lwlock spinlock
 
 This has the nice property that locks are granted in FIFO order. Is it
 essential that we maintain that property? If not, we could instead walk
 through the wait queue and awaken *all* the shared waiters, and get a
 small improvement in throughput.
 
 I can see that this might starve exclusive waiters; however, we allow
 the following:
 
 Proc A = LWLockAcquire(lock, LW_SHARED); -- succeeds
 Proc B = LWLockAcquire(lock, LW_EXCLUSIVE); -- blocks
 Proc C = LWLockAcquire(lock, LW_SHARED); -- succeeds
 
 i.e. we don't *really* follow strict FIFO order anyway.

My guess is the existing behavior was designed to allow waking of
multiple waiters _sometimes_ without starving exclusive waiters.
There should be a comment in the code explaining this usage, and I bet
it was intentional.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] lwlocks and starvation

2004-11-24 Thread Neil Conway
Bruce Momjian wrote:
My guess is the existing behavior was designed to allow waking of
multiple waiters _sometimes_ without starving exclusive waiters.
Well, I think the current algorithm *does* allow starvation, at least in 
some situations. Consider a workload in which a new shared reader 
arrives every 50 ms, and holds the lock for, say, 500 ms. If an 
exclusive waiter arrives, they will starve with the current algorithm.

There should be a comment in the code explaining this usage and I bet it
was intentional.
Oh, I bet it was intentional as well :) I'm mostly curious to see 
exactly what the reasoning was, and whether it is necessary that we 
preserve the FIFO behavior while considering optimizations.

-Neil
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [HACKERS] lwlocks and starvation

2004-11-24 Thread Bruce Momjian
Neil Conway wrote:
 Bruce Momjian wrote:
  My guess is the existing behavior was designed to allow waking of
  multiple waiters _sometimes_ without starving exclusive waiters.
 
 Well, I think the current algorithm *does* allow starvation, at least in 
 some situations. Consider a workload in which a new shared reader 
 arrives every 50 ms, and holds the lock for, say, 500 ms. If an 
 exclusive waiter arrives, they will starve with the current algorithm.

I thought the new readers will sit after the writer in the FIFO queue so
the writer will not starve.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] lwlocks and starvation

2004-11-24 Thread Neil Conway
Bruce Momjian wrote:
I thought the new readers will sit after the writer in the FIFO queue so
the writer will not starve.
AFAICS, that is not the case. See lwlock.c, circa line 264: in LW_SHARED
mode, we check if "exclusive" is zero; if so, we acquire the lock
(increment the shared lock count and do not block). And "exclusive" is
set non-zero only when we _acquire_ a lock in exclusive mode, not when
we add an exclusive waiter to the wait queue.

(Speaking of which, the "exclusive" field is declared as a "char"; I
wonder if it wouldn't be clearer to declare it as "bool" and treat
it as a boolean field. The storage/alignment requirements should be the
same ("bool" is a typedef for "char", at least with a C compiler), but
IMHO it would be more logical.)

-Neil
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] -V, --version -- deprecated?

2004-11-24 Thread Neil Conway
Peter Eisentraut wrote:
--help and --version are the standard options that are supported 
everywhere.  In the era before we had long options everywhere, we 
implemented -V as an alternative in some programs, in particular those 
in and around initdb, because of the version cross-checking it does 
using those options.
Ok, good to know. FWIW -V is almost universal among the client 
binaries (not just those in and around initdb).

At one point, long options were broken on some BSD versions.  I don't 
know what became of that, but if we don't have new information it might 
be safest to leave things where they are.
Can anyone confirm this? (If this actually affects any modern platforms 
it means that --help doesn't work at the very least, which seems a Bad 
Thing. So I'm a little skeptical that this is still a problem.)

Hence, the -V option is not the preferred public interface, so it's not 
prominently documented, which may or may not be reasonable in minds 
other than my own.
Fair enough, but I think it's inconsistent to document it in some places 
but not in others. I think we ought to either declare -V deprecated 
(and perhaps remove the docs for it), or accept that we need to live 
with it because of long-options silliness and document -V as a valid 
alternate.

-Neil
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [HACKERS] patch: plpgsql - access records with rec.(expr)

2004-11-24 Thread Matt
 I think the idea of rewriting PL/PgSQL from scratch has merit (and it's 
 something that I think would be well worth doing). IMHO it's not really 
 worth the trouble to fork the existing code base and add new features to 
 something that, hopefully, has a limited life span.

I dunno, I kind of like the idea.

There's always going to be the age-old conflict between people who are
basically users like me, and want to see a needed feature asap, and
developers who want to see it done right. And of course doing it right
is the only way to go long term.

But so long as I can be fairly sure the syntax is agreed (EXECUTE SELECT
INTO ... in this case) and eventually will make it into the main code
base, I'd be willing to live on the edge for a while.

There'd have to be big 'experimental, everything may change' warnings
all over the contrib version. My only concern is if it would actually
delay the discussed rewrite of plpgsql by splitting the effort.

That's my two one-hundredths of a euro, anyway.

Matt


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] patch: plpgsql - access records with rec.(expr)

2004-11-24 Thread Tom Lane
Joachim Wieland [EMAIL PROTECTED] writes:
 On Tue, Nov 23, 2004 at 10:33:50AM -0500, Tom Lane wrote:
 We do need to do something about the fact that EXECUTE can't access
 plpgsql variables, though I'm afraid that fixing that is going to
 require a rather complete overhaul of plpgsql :-(.  But it needs one
 anyway.

 Why do you think that it would be difficult to do it with the existing code?

Because the parsing context all gets thrown away, in particular the
namespace stack; so you can't tell what set of variables ought to be
visible to the EXECUTE.

 Which other limitations are there in the current implementation of pl/pgsql?

Its memory management really sucks.  Function parsetrees ought to be put
into private contexts, not malloc'd, so that they can reasonably be
freed when the function is deleted or redefined.  Likewise for the
cached query plans.  Also we need a way to discard cached query plans
when relevant object definitions change (though this is not plpgsql's
fault in particular, it's a generic problem).

Another issue is that we need a better solution for recursive plpgsql
functions.  Function parsetrees have to be read-only (with the
exception of adding cached plans that weren't there before) during
execution, else recursive functions don't work right.  We have a really
ugly and inefficient answer to this, and I'm not even sure that it
covers 100% of the cases (at one time it definitely didn't, but I can't
recall right now if that got fixed completely).  I'm not entirely sure
what a cleaner answer would look like, but I know I don't like it now.

regards, tom lane

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [Testperf-general] Re: [HACKERS] ExclusiveLock

2004-11-24 Thread Bort, Paul
 From: Kenneth Marshall [mailto:[EMAIL PROTECTED]]
[snip]
 The simplest idea I had was to pre-layout the WAL logs in a 
 contiguous fashion
 on the disk. Solaris has this ability given appropriate FS 
 parameters and we
 should be able to get close on most other OSes. Once that has 
 happened, use
 something like the FSM map to show the allocated blocks. The 
 CPU can keep track
 of its current disk rotational position (approx. is okay) 
 then when we need to
 write a WAL block start writing at the next area that the 
 disk head will be
 sweeping. Give it a little leaway for latency in the system 
 and we should be
 able to get very low latency for the writes. Obviously, there 
 would be wasted
 space but you could intersperse writes to the granularity of 
 space overhead
 that you would like to see. As far as implementation, I was reading an
 interesting article that used a simple theoretical model to 
 estimate disk head
 position to avoid latency.

Ken,

That's a neat idea, but I'm not sure how much good it will do. As bad as rotational latency is, seek time is worse. Pre-allocation isn't going to do much for rotational latency if the heads also have to seek back to the WAL. 

OTOH, pre-allocation could help two other performance aspects of the WAL: First, if the WAL was pre-allocated, steps could be taken (by the operator, based on their OS) to make the space allocated to the WAL contiguous. Statistics on how much WAL is needed in 24 hours would help with that sizing. This would reduce seeks involved in writing the WAL data.

The other thing it would do is reduce seeks and metadata writes involved in extending WAL files.


All of this is moot if the WAL doesn't have its own spindle(s).


This almost leads back to the old-fashioned idea of using a raw partition, to avoid the overhead of the OS and file structure. 

Or I could be thoroughly demonstrating my complete lack of understanding of PostgreSQL internals. :-)


Maybe I'll get a chance to try the flash drive WAL idea in the next couple of weeks. Need to see if the hardware guys have a spare flash drive I can abuse.

Paul

Re: [HACKERS] lwlocks and starvation

2004-11-24 Thread Tom Lane
Neil Conway [EMAIL PROTECTED] writes:
 LWLockRelease() currently does something like (simplifying a lot):
 ...
 This has the nice property that locks are granted in FIFO order. Is it
 essential that we maintain that property?

Not really, although avoiding indefinite starvation is important.

 If not, we could instead walk
 through the wait queue and awaken *all* the shared waiters, and get a
 small improvement in throughput.

I think this would increase the odds of starvation for exclusive waiters
significantly.  It would also complicate the list manipulation in
LWLockRelease, and the net loss of cycles to that might outweigh any
possible speedup anyway.

 I can see that this might starve exclusive waiters; however, we allow
 the following:

 Proc A = LWLockAcquire(lock, LW_SHARED); -- succeeds
 Proc B = LWLockAcquire(lock, LW_EXCLUSIVE); -- blocks
 Proc C = LWLockAcquire(lock, LW_SHARED); -- succeeds

 i.e. we don't *really* follow strict FIFO order anyway.

Uh, no; proc C will queue up behind proc B.  See LWLockAcquire.  Were
this not so, again we'd be risking starving proc B, since there could
easily be an indefinitely long series of people acquiring and releasing
the lock in shared mode.

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [HACKERS] lwlocks and starvation

2004-11-24 Thread Tom Lane
Neil Conway [EMAIL PROTECTED] writes:
 AFAICS, that is not the case. See lwlock.c, circa line 264: in LW_SHARED 
 mode, we check if exclusive is zero; if so, we acquire the lock 
 (increment the shared lock count and do not block). And exclusive is 
 set non-zero only when we _acquire_ a lock in exclusive mode, not when 
 we add an exclusive waiter to the wait queue.

Ooh, you are right.  I think that may qualify as a bug ...

regards, tom lane

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [HACKERS] lwlocks and starvation

2004-11-24 Thread Tom Lane
Neil Conway [EMAIL PROTECTED] writes:
 (Speaking of which, the exclusive field is declared as a char; I 
 wonder if it wouldn't be more clear to declare it as bool, and treat 
 it as a boolean field.

I coded it that way because I was thinking of it as a count (0 or 1),
for symmetry with the count of shared holders.  You could argue it
either way I suppose.

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[HACKERS] Encrypt data type LO

2004-11-24 Thread Alberto Piña
Hi fellows, I need to encrypt fields of data type LO (LO is included in
the contrib section of PostgreSQL) and I don't know if pgcrypto is the
way to do that or if there is another way. If anyone knows the answer to
my problem or knows an alternative way to do this, I'll appreciate it if
you can share it with me.

Regards.

Ing. Carlos Alberto Piña Uribe
Consultoría y Desarrollo
KEBIR Infraestructura Tecnológica
[EMAIL PROTECTED]

Re: [HACKERS] valgrind complaints in pgstat_write_statsfile

2004-11-24 Thread Tom Lane
Oliver Jowett [EMAIL PROTECTED] writes:
 Seen in passing when running valgrind against a CVS HEAD build:
 ==28598== Syscall param write(buf) contains uninitialised or unaddressable 
 byte(s)

 Anything to be concerned about?

AFAIK this represents valgrind complaining because we have written out
structs containing unused/uninitialized pad bytes.

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Encrypt data type LO

2004-11-24 Thread James Robinson
We do all of our encryption in the middleware:
1) translate our data which requires encryption into an XML string
2) compress + encrypt, yielding byte [].
3) Store byte [] as a bytea column.
The resulting byte arrays are relatively small in our case (1 - 3K), 
so bytea has seemed to suit us just fine.


James Robinson
Socialserve.com
---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] OpenBSD/Sparc status

2004-11-24 Thread Darcy Buskermolen
On November 23, 2004 06:18 pm, Michael Fuhr wrote:
 On Tue, Nov 23, 2004 at 12:47:28PM -0800, Darcy Buskermolen wrote:
  I'm guessing we need to add some more configure logic to detect gcc
  versions 3.4 on sparc trying to produce 64bit code and disable
  optimizations, or else bail out and ask them to upgrade.

 Shouldn't that be gcc versions 3.3?

My bad. It should have read "prior to 3.4".

-- 
Darcy Buskermolen
Wavefire Technologies Corp.
ph: 250.717.0200
fx:  250.763.1759
http://www.wavefire.com

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] Beta5 now Available

2004-11-24 Thread Gavin M. Roy
No, you cannot, but the tracker isn't very resource intensive in my
past experience.  I can host it if needed.

Gavin
Marc G. Fournier wrote:
On Wed, 24 Nov 2004, Thomas Hallgren wrote:
Gaetano Mendola wrote:
...so the very first client is the real server
that must be run 24/24.
I don't think this is correct. You need a tracker for downloaders to 
be able to find each other but no client is more important than the 
others.

can there be multiple trackers?  for instance, if we ran 
bt.postgresql.org on two different servers, could they both run 
trackers at the same time?


Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] Beta5 now Available

2004-11-24 Thread Marc G. Fournier
On Wed, 24 Nov 2004, Gavin M. Roy wrote:
No you can not, but the tracker isn't very resource intensive from my past 
experience.  I can host it if needed.
It wasn't that that I was thinking of ... just wondering if there was some 
way of having it redundant, instead of centralized ... nice thing about 
ftp mirrors, we have about 60 of them, so if one goes down, it doesn't 
really affect anything ... from what everyone is saying, if the tracker 
goes down, it affects everything ... seems odd to have new technology 
still having single points of failure :(

   Gavin 
Marc G. Fournier wrote:
On Wed, 24 Nov 2004, Thomas Hallgren wrote:
Gaetano Mendola wrote:
...so the very first client is the real server
that must be run 24/24.
I don't think this is correct. You need a tracker for downloaders to be 
able to find each other but no client is more important than the others.

can there be multiple trackers?  for instance, if we ran bt.postgresql.org 
on two different servers, could they both run trackers at the same time?


Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly



Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] Beta5 now Available

2004-11-24 Thread Joshua D. Drake
Marc G. Fournier wrote:
On Wed, 24 Nov 2004, Gavin M. Roy wrote:
No you cannot, but the tracker isn't very resource intensive from my 
past experience.  I can host it if needed.

It wasn't that that I was thinking of ... just wondering if there was 
some way of having it redundant, instead of centralized ... nice thing 
about ftp mirrors, we have about 60 of them, so if one goes down, it 
doesn't really affect anything ... from what everyone is saying, if 
the tracker goes down, it affects everything ... seems odd to have 
new technology still having single points of failure :(

O.k. I know nothing of bittorrent, but couldn't we just have two machines 
that are identically configured that have a round-robin DNS thing going on?

That would help with load.
If the machines were on the same network we could even heartbeat or 
service check between them.

J


   Gavin 
Marc G. Fournier wrote:
On Wed, 24 Nov 2004, Thomas Hallgren wrote:
Gaetano Mendola wrote:
...so the very first client is the real server
that must be run 24/24.
I don't think this is correct. You need a tracker for downloaders 
to be able to find each other but no client is more important than 
the others.

can there be multiple trackers?  for instance, if we ran 
bt.postgresql.org on two different servers, could they both run 
trackers at the same time?


Marc G. Fournier   Hub.Org Networking Services 
(http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 
7615664

---(end of 
broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly



Marc G. Fournier   Hub.Org Networking Services 
(http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 
7615664

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]

--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com
PostgreSQL Replicator -- production quality replication for PostgreSQL


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] Plperl Safe version check fails for Safe 2.09

2004-11-24 Thread Andrew Dunstan
Tom Lane said:
 Mark Kirkwood [EMAIL PROTECTED] writes:
 It seems that the check in src/pl/plperl/plperl.c
  eval_pv((safe_version < 2.09 ? safe_bad : safe_ok), FALSE);
 is not working quite as expected (CVS HEAD from today):

 Yah know, I looked at that on Monday and said to myself Self, that
 looks like a rounding problem waiting to happen ... but in the absence
 of a trouble report didn't want to mess with it.

 Part of the problem is that Perl NV is double, not float, and so the
 declaration of safe_version is wrong on its face.  But even with it
 properly declared, exact comparison of double values is playing with
 fire.  I'd be inclined to change it to something like

   double safe_version;
   ...
  if (safe_version < 2.089) ...



Since there was no released version between 2.08 and 2.09, comparing to
2.085 should do the trick (or any of 2.08[1-9]).

cheers

andrew






---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] [JDBC] Strange server error with current 8.0beta driver

2004-11-24 Thread Barry Lind
Oliver,

The patch works for me.  Thanks.  Things look good now against an 8.0
server.  (I still have a lot more testing to do though).

However I still have problems against a 7.4 server with the 8.0 jdbc
driver.  (ERROR: no value found for parameter 1).  You mentioned that
you had found this bug and fixed it in 8.0 of the server.  Any chance of
getting a backport?  Or is my only option to run with protocolVersion=2
on the jdbc connection.

Thanks,
--Barry 


-Original Message-
From: Oliver Jowett [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, November 24, 2004 1:38 AM
To: Barry Lind
Cc: Tom Lane; [EMAIL PROTECTED]
Subject: Re: [HACKERS] [JDBC] Strange server error with current 8.0beta
driver

Oliver Jowett wrote:

 Perhaps PerformCursorOpen should copy the query tree before planning, 
 or plan in a different memory context?

Patch attached. It moves query planning inside the new portal's memory
context. With this applied I can run Barry's testcase without errors,
and valgrind seems OK with it too.
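The shape of that fix is roughly the following (a simplified sketch using 8.0-era backend names, not the actual patch text):

```c
/* Sketch only: plan the cursor's query inside the new portal's own
 * memory context, so planner scribbling cannot corrupt or outlive
 * structures the portal needs later. */
oldContext = MemoryContextSwitchTo(PortalGetHeapMemory(portal));

query = copyObject(query);          /* keep the original tree intact */
plan = pg_plan_query(query, NULL);  /* allocations land in the portal */

MemoryContextSwitchTo(oldContext);
```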

-O


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [HACKERS] MAX/MIN optimization via rewrite (plus query rewrites

2004-11-24 Thread Mark Kirkwood
I think a summary of where the discussion went might be helpful 
(especially for me, after a week or so away doing Perl).

There were a number of approaches suggested, which I will attempt to 
summarize in a hand-wavy fashion (apologies for any misrepresentation 
caused):

i)   Rewrite max/min queries using ORDER BY in the presence of a suitable
     index.
ii)  Provide alternate (i.e. rewritten) queries for consideration along with
     the original, letting the planner use its costing methods to choose as
     usual.
iii) Provide alternate plans based on the presence of certain aggregate types
     in the query, letting the planner use its costing methods to choose as
     usual.
iv)  Create short-cut evaluations for certain aggregates that don't actually
     need to see all the (to-be-aggregated) data.
v)   Create a mechanism for defining per-aggregate optimization operators.

Note that some of these ideas may overlap one another to some extent.

Some critiques of the various approaches are:

i)   Too simple; the rewrite may not be better than the original, and only
     simple queries can be handled this way. Probably reasonably easy to
     implement.
ii)  Simple queries will be well handled, but very complex transformations are
     needed to handle even slightly more complex ones. Probably medium to
     difficult to implement.
iii) Rules for creating alternate plans will mimic the issues with ii).
     Probably medium to difficult to implement.
iv)  May need different short cuts for each aggregate/datatype combination.
     Implies conventional < and > operators, or the existence of similar
     user-definable ones (or a way of finding suitable ones). Guessing medium
     to implement.
v)   Is kind of a generalization of iv). The key areas of difficulty are the
     specification of said optimization operators and the definition of an API
     for constructing/calling them. Guessing difficult to implement.

I am leaning towards ii) or iv) as the most promising approaches - what 
do people think?
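For reference, approach i) amounts to a source-level rewrite along these lines (table and index names are illustrative):

```sql
-- With an index available:
CREATE INDEX t_x_idx ON t (x);

-- the aggregate form, which today requires a full scan:
SELECT max(x) FROM t;

-- is equivalent to an index-friendly form:
SELECT x FROM t
 WHERE x IS NOT NULL
 ORDER BY x DESC
 LIMIT 1;
```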

regards
Mark
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


[HACKERS] Intermittent bug

2004-11-24 Thread Thomas Hallgren
Hi,
I have an intermittent bug in PL/Java that only shows up on Win32. Is 
there any way of debugging the postgres.exe process that corresponds to 
your connection on a win32 (MinGW) platform?

I have a MinGW environment with gdb but I can't figure out how to make 
gdb attach to a running process it seems to happily accept any PID and 
claim that it's attached although it isn't.

I have a Visual Studio installed too. It's able to attach but it fails 
to see any symbols.

Help greatly appreciated,
Thomas Hallgren

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[HACKERS] plpgsql lacks generic identifier for record in triggers...

2004-11-24 Thread Sean Chittenden
Now that pgmemcache is getting more use, I've heard a couple of groans 
regarding the need to have two functions with exactly the same code 
body.  This is necessary because there is no generic way of handling 
NEW/OLD.  For example:

db=# CREATE FUNCTION schma.tbl_ins_upd() RETURNS TRIGGER AS 'BEGIN
EXECUTE public.mc_init();
EXECUTE public.mc_delete(''mc_key'');
RETURN NEW;
END;' LANGUAGE 'plpgsql';
db=# CREATE FUNCTION schma.tbl_del() RETURNS TRIGGER AS 'BEGIN
EXECUTE public.mc_init();
EXECUTE public.mc_delete(''mc_key'');
RETURN OLD;
END;' LANGUAGE 'plpgsql';
db=# CREATE TRIGGER tbl_ins_upd_trg AFTER INSERT OR UPDATE ON schma.tbl 
FOR EACH ROW EXECUTE PROCEDURE schma.tbl_ins_upd();
db=# CREATE TRIGGER tbl_del_trg AFTER DELETE ON schma.tbl FOR EACH ROW 
EXECUTE PROCEDURE schma.tbl_del();

It'd be nice if there was a generic return type so that one could 
collapse those two functions and trigger creation statements into one 
function and one trigger.  Something like:

db=# CREATE FUNCTION schma.tbl_inval() RETURNS TRIGGER AS 'BEGIN
EXECUTE public.mc_init();
EXECUTE public.mc_delete(''mc_key'');
RETURN ROW;
END;' LANGUAGE 'plpgsql';
db=# CREATE TRIGGER tbl_inval_trg AFTER INSERT OR UPDATE OR DELETE ON 
schma.tbl FOR EACH ROW EXECUTE PROCEDURE schma.tbl_inval();

pgmemcache has pushed this to the surface as a problem that otherwise 
wouldn't exist.  That said, plpgsql's semantics are clearly the issue 
here as it's a syntax problem.  ROW being an alias for NEW in the 
INSERT and UPDATE case, and OLD in the DELETE case.  Thoughts?  Would a 
patch be accepted that modified plpgsql's behavior to include a new 
predefined alias?  Better yet, could TRIGGER functions be allowed to 
return nothing (ala VOID)?  For example:

db=# CREATE FUNCTION schma.tbl_inval() RETURNS TRIGGER AS 'BEGIN
EXECUTE public.mc_init();
EXECUTE public.mc_delete(''mc_key'');
RETURN;
END;' LANGUAGE 'plpgsql';
db=# CREATE TRIGGER tbl_inval_trg AFTER INSERT OR UPDATE OR DELETE ON 
schma.tbl FOR EACH ROW EXECUTE PROCEDURE schma.tbl_inval();

Which would tell the backend to assume that the row wasn't changed and 
proceed with its handling.  This is the preferred approach, IMHO... but 
I think is the hardest to achieve (I haven't looked to see what'd be 
involved yet).

Enjoy your T-Day commute if you haven't yet.  -sc
--
Sean Chittenden
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] plpgsql lacks generic identifier for record in triggers...

2004-11-24 Thread Sean Chittenden
Now that pgmemcache is getting more use, I've heard a couple of groans 
regarding the need to have two functions with exactly the same code 
body.  This is necessary because there is no generic way of handling 
NEW/OLD.  For example:
[snip]  Err... wait, this is a classic case of send first then 
finishing to pondering the gripe.

db=# CREATE FUNCTION schma.tbl_inval() RETURNS TRIGGER AS 'BEGIN
EXECUTE public.mc_init();
EXECUTE public.mc_delete(''mc_key'');
RETURN ROW;
END;' LANGUAGE 'plpgsql';
db=# CREATE TRIGGER tbl_inval_trg AFTER INSERT OR UPDATE OR DELETE ON 
schma.tbl FOR EACH ROW EXECUTE PROCEDURE schma.tbl_inval();
A statement trigger should be used instead since the return value is 
ignored (and NULL can be used to satisfy the need for return to 
actually return something).  When updating dynamic keys, you always 
need to be explicit regarding NEW/OLD to get the data version, but for 
static keys, statement triggers are the way to go.  Ex:

db=# CREATE FUNCTION schma.tbl_inval() RETURNS TRIGGER AS 'BEGIN
EXECUTE public.mc_init();
EXECUTE public.mc_delete(''mc_key'');
RETURN NULL;
END;' LANGUAGE 'plpgsql';
db=# CREATE TRIGGER tbl_inval_trg AFTER INSERT OR UPDATE OR DELETE ON 
schma.tbl FOR EACH STATEMENT EXECUTE PROCEDURE schma.tbl_inval();
Very nice.  -sc
--
Sean Chittenden
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [HACKERS] -V, --version -- deprecated?

2004-11-24 Thread Bruce Momjian
Neil Conway wrote:
 Peter Eisentraut wrote:
  --help and --version are the standard options that are supported 
  everywhere.  In the era before we had long options everywhere, we 
  implemented -V as an alternative in some programs, in particular those 
  in and around initdb, because of the version cross-checking it does 
  using those options.
 
 Ok, good to know. FWIW -V is almost universal among the client 
 binaries (not just those in and around initdb).
 
  At one point, long options where broken on some BSD versions.  I don't 
  know what became of that, but if we don't have new information it might 
  be safest to leave things where they are.
 
 Can anyone confirm this? (If this actually affects any modern platforms 
 it means that --help doesn't work at the very least, which seems a Bad 
 Thing. So I'm a little skeptical that this is still a problem.)

FreeBSD had a problem with double-dash args but I thought that related
to getopt, and I can't remember how that fits in.  Maybe we defined '-'
in getopt and said it took an argument and tested for '-help' and
'-verbose', but now we just check argv right inside main.  I can't
remember totally.

  Hence, the -V option is not the preferred public interface, so it's not 
  prominently documented, which may or may not be reasonable in minds 
  other than my own.
 
 Fair enough, but I think it's inconsistent to document it in some places 
 but not in others. I think we ought to either declare -V deprecated 
 (and perhaps remove the docs for it), or accept that we need to live 
 with it because of long-options silliness and document -V as a valid 
 alternate.

Agreed.  psql --help certainly looks inconsistent --- only --help and
--version are long.

---


Usage:
  psql [OPTIONS]... [DBNAME [USERNAME]]

General options:
  -d DBNAME   specify database name to connect to (default:
postgres)
  -c COMMAND  run only single command (SQL or internal) and exit
  -f FILENAME execute commands from file, then exit
  -l  list available databases, then exit
  -v NAME=VALUE   set psql variable NAME to VALUE
  -X  do not read startup file (~/.psqlrc)
  --help  show this help, then exit
  --version   output version information, then exit


-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] How to make @id or $id as parameter name in plpgsql,

2004-11-24 Thread Francisco Figueiredo Jr.
Arnold.Zhu wrote:
| I want to use @id, @name as plpgsql's parameter, then I've no need to
change C# source,
| only change Npgsql driver not to trim @ and stored procedure to plpgsql.
|
Hi Arnold,
Npgsql already supports parameter names starting with @. This was a
feature users were requesting exactly for the purpose of helping port
code from SQL Server to PostgreSQL.
Please let me know of any modifications you may still need to make to get
it working. I did some tests and it was working well.

--
Regards,
Francisco Figueiredo Jr.
Membro Fundador do Projeto MonoBrasil - MonoBrasil Project Founder Member
http://monobrasil.softwarelivre.org

- -
Science without religion is lame;
religion without science is blind.
~  ~ Albert Einstein
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [HACKERS] How to make @id or $id as parameter name in plpgsql,is it available?

2004-11-24 Thread Arnold.Zhu
Hello, Francisco Figueiredo Jr.

plpgsql does not support @id as a parameter name (it needs double-quoting), so 
somebody suggested that I use vid for the parameter. When vid is used as the 
parameter in plpgsql, the C# program uses @vid; Npgsql deletes the @ and then 
passes vid to plpgsql. So I want to change Npgsql so that it does not delete 
the @ in the program.


Thanks & Regards!
 
Arnold.Zhu
2000-11-25




---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [HACKERS] Postresql 8.0 Beta 3 - SELECT ... FOR UPDATE

2004-11-24 Thread Bruce Momjian
ronzo wrote:
 Hi
 
 Has the timeout on SELECT ... FOR UPDATE (a non-blocking lock) been 
 implemented yet, and/or is it possible to know whether a lock exists on 
 the specified ROW before executing the SELECT?
 
 Please note: our need is for the timeout/check at the ROW level, not at 
 the table level. 
 
 Is it already OK? Is it in the TODO list?
 May you suggest an alternative method?

We have discussed this at length, and no one could state why having a
timeout per lock is any better than using a statement_timeout.

We cannot do a NOWAIT on a single SELECT because there are a lot of
locks used even for a select, and having them fail randomly would be
useless.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] plpgsql lacks generic identifier for record in triggers...

2004-11-24 Thread Tom Lane
Sean Chittenden [EMAIL PROTECTED] writes:
 ... Better yet, could TRIGGER functions be allowed to 
 return nothing (ala VOID)?
 Which would tell the backend to assume that the row wasn't changed and 
 proceed with its handling.  This is the preferred approach, IMHO... but 
 I think is the hardest to achieve (I haven't looked to see what'd be 
 involved yet).

plperl is doing it that way, so I don't see why plpgsql couldn't allow
it.

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [HACKERS] Postresql 8.0 Beta 3 - SELECT ... FOR UPDATE

2004-11-24 Thread Rod Taylor
On Wed, 2004-11-24 at 22:13 -0500, Bruce Momjian wrote:
 
  We have discussed this at length, and no one could state why having a
  timeout per lock is any better than using a statement_timeout.

Actually, I hit one.

I have a simple queue and a number of processes pulling jobs out of the
queue. Due to transactional requirements, the database is appropriate
for a first cut.

Anyway, a statement_timeout of 100ms is usually plenty to determine that
the job is being processed and for one of the pollers to move on, but
every once in a while a large job (a 4 to 5MB chunk of data) would find
itself in the queue, which takes more than 100ms to pull out.

Not a big deal, just bump the timeout in this case.

Anyway, it shows a situation where it would be nice to differentiate
between statement_timeout and lock_timeout OR it demonstrates that I
should be using userlocks...
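A sketch of the polling pattern described here (table and column names are illustrative, not from Rod's actual schema):

```sql
-- Give up quickly if another poller already holds the row lock.
SET statement_timeout = 100;    -- milliseconds

BEGIN;
SELECT id, payload
  FROM job_queue
 WHERE claimed = false
 ORDER BY id
 LIMIT 1
   FOR UPDATE;                  -- blocks while another worker holds the row
-- ...process the job, mark it claimed, then COMMIT;
```

The timeout here cancels the whole statement, including the legitimate data transfer, which is why a separate lock timeout (or NOWAIT) would be preferable.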

-- 


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [HACKERS] Postresql 8.0 Beta 3 - SELECT ... FOR UPDATE

2004-11-24 Thread Bruce Momjian
Rod Taylor wrote:
 On Wed, 2004-11-24 at 22:13 -0500, Bruce Momjian wrote:
  
   We have discussed this at length, and no one could state why having a
   timeout per lock is any better than using a statement_timeout.
 
 Actually, I hit one.
 
 I have a simple queue and a number of processes pulling jobs out of the
 queue. Due to transactional requirements, the database is appropriate
 for a first cut.
 
 Anyway, a statement_timeout of 100ms is usually plenty to determine that
 the job is being processed, and for one of the pollers to move on, but
 every once in a while a large job (4 to 5MB chunk of data) would find
 itself in the queue which takes more than 100ms to pull out.
 
 Not a big deal, just bump the timeout in this case.
 
 Anyway, it shows a situation where it would be nice to differentiate
 between statement_timeout and lock_timeout OR it demonstrates that I
 should be using userlocks...

Wouldn't a LOCK NOWAIT be a better solution?  That is new in 8.0.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Postresql 8.0 Beta 3 - SELECT ... FOR UPDATE

2004-11-24 Thread Tom Lane
Bruce Momjian [EMAIL PROTECTED] writes:
 Rod Taylor wrote:
 Anyway, it shows a situation where it would be nice to differentiate
 between statement_timeout and lock_timeout OR it demonstrates that I
 should be using userlocks...

 Wouldn't a LOCK NOWAIT be a better solution?  That is new in 8.0.

LOCK NOWAIT is only helpful if you can express your problem as not
wanting to wait for a table-level lock.  When you're trying to grab a
row-level lock via SELECT FOR UPDATE, there isn't any provision for
NOWAIT.

The notion of a global lock_timeout setting is bogus IMHO, because
every proposed application of it has failed to consider the locks taken
internally by the system.  But that objection wouldn't apply to a SELECT
FOR UPDATE NOWAIT command where the "no wait" behavior only applied to
the row lock being explicitly grabbed.

I thought I remembered someone working on such a thing just recently.
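A sketch of what such a command might look like (row-level NOWAIT is not in 8.0; the syntax here is speculative):

```sql
BEGIN;
SELECT * FROM accounts
 WHERE id = 42
   FOR UPDATE NOWAIT;   -- would error out at once instead of waiting
```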

regards, tom lane

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] Postresql 8.0 Beta 3 - SELECT ... FOR UPDATE

2004-11-24 Thread Rod Taylor
On Wed, 2004-11-24 at 22:47 -0500, Bruce Momjian wrote:
 Rod Taylor wrote:
  On Wed, 2004-11-24 at 22:13 -0500, Bruce Momjian wrote:
   
    We have discussed this at length, and no one could state why having a
    timeout per lock is any better than using a statement_timeout.
  
  Actually, I hit one.
  
  I have a simple queue and a number of processes pulling jobs out of the
  queue. Due to transactional requirements, the database is appropriate
  for a first cut.
  
  Anyway, a statement_timeout of 100ms is usually plenty to determine that
  the job is being processed, and for one of the pollers to move on, but
  every once in a while a large job (4 to 5MB chunk of data) would find
  itself in the queue which takes more than 100ms to pull out.
  
  Not a big deal, just bump the timeout in this case.
  
  Anyway, it shows a situation where it would be nice to differentiate
  between statement_timeout and lock_timeout OR it demonstrates that I
  should be using userlocks...
 
 Wouldn't a LOCK NOWAIT be a better solution?  That is new in 8.0.

On a for update?

-- 


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] lwlocks and starvation

2004-11-24 Thread Tom Lane
I wrote:
 Neil Conway [EMAIL PROTECTED] writes:
 AFAICS, that is not the case. See lwlock.c, circa line 264: in LW_SHARED 
 mode, we check if exclusive is zero; if so, we acquire the lock 
 (increment the shared lock count and do not block). And exclusive is 
 set non-zero only when we _acquire_ a lock in exclusive mode, not when 
 we add an exclusive waiter to the wait queue.

 Ooh, you are right.  I think that may qualify as a bug ...

I am thinking of addressing this issue by applying the attached patch.
The patch prevents a would-be shared locker from cutting in front of a
waiting exclusive locker.  It is not a 100% solution because it does not
cover the case where a waiting exclusive locker is released, then a new
shared locker arrives at the lock before the exclusive locker is given
any cycles to acquire the lock.  However I don't see any cure for the
latter problem that's not worse than the disease (granting the lock to
the released waiter immediately is definitely not better).

On the other hand we might consider that this isn't a big problem and
just leave things as they are.  We haven't seen any indication that
starvation is a real problem in practice, and so it might be better to
avoid extra trips through the kernel scheduler.  In particular, I think
the existing behavior may fit in better with the idea that a shared
locker should be able to acquire and release the lock multiple times
during his timeslice, regardless of whether there is contention.

And on the third hand, I think the heavily-contended LWLocks are only
taken in exclusive mode anyway, so this may be mostly a moot point.

Comments?

regards, tom lane

*** src/backend/storage/lmgr/lwlock.c.orig	Sun Aug 29 01:06:48 2004
--- src/backend/storage/lmgr/lwlock.c	Wed Nov 24 23:13:19 2004
***************
*** 261,267 ****
  	}
  	else
  	{
! 		if (lock->exclusive == 0)
  		{
  			lock->shared++;
  			mustwait = false;
--- 261,273 ----
  	}
  	else
  	{
! 		/*
! 		 * If there is anyone waiting for the lock, we don't take it.
! 		 * This could only happen if an exclusive locker is blocked
! 		 * by existing shared lockers; we want to queue up behind him
! 		 * rather than risk starving him indefinitely.
! 		 */
! 		if (lock->exclusive == 0 && lock->head == NULL)
  		{
  			lock->shared++;
  			mustwait = false;
***************
*** 376,382 ****
  	}
  	else
  	{
! 		if (lock->exclusive == 0)
  		{
  			lock->shared++;
  			mustwait = false;
--- 382,394 ----
  	}
  	else
  	{
! 		/*
! 		 * If there is anyone waiting for the lock, we don't take it.
! 		 * This could only happen if an exclusive locker is blocked
! 		 * by existing shared lockers; we want to queue up behind him
! 		 * rather than risk starving him indefinitely.
! 		 */
! 		if (lock->exclusive == 0 && lock->head == NULL)
  		{
  			lock->shared++;
  			mustwait = false;

---(end of broadcast)---
TIP 8: explain analyze is your friend


[HACKERS] pg_dump for values inserted in last day

2004-11-24 Thread vertigo
Hello
I need to pg_dump my database, but I only want to receive the SQL commands 
that will insert the rows that were inserted into the database today. Is 
there any way to do it?
I have a large database and I want to make incremental backups.

Thanx
Michal
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [HACKERS] pg_dump for values inserted in last day

2004-11-24 Thread Christopher Browne
Oops! vertigo [EMAIL PROTECTED] was seen spray-painting on a wall:
 Hello
 I need to pg_dump my database, but I only want to receive the SQL
 commands that will insert the rows that were inserted into the
 database today. Is there any way to do it?
 I have a large database and I want to make incremental backups.

This is a feature planned for addition fairly shortly to the Slony-I
replication system; it is pretty plausible that this will be the
quickest route to it...
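In the meantime, pg_dump itself has no row filter, so a poor man's incremental dump needs the application to record insertion times. A sketch, assuming you can add a timestamp column (names are illustrative):

```sql
-- Workable only if each table records when its rows arrived:
ALTER TABLE orders
    ADD COLUMN inserted_at timestamptz NOT NULL DEFAULT now();

-- "Rows inserted today", ready to be re-emitted as INSERTs by a script:
SELECT *
  FROM orders
 WHERE inserted_at >= date_trunc('day', now());
```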
-- 
output = reverse(gro.gultn @ enworbbc)
http://linuxfinances.info/info/x.html
Language shapes the  way we think   and determines what we can  think
about. -- Benjamin Whorf

---(end of broadcast)---
TIP 8: explain analyze is your friend


[HACKERS] Solaris 8 regression test failure with 8.0.0beta5

2004-11-24 Thread Kenneth Marshall
Here are the diffs for the regression test failures on Solaris 8.
The tests work fine on Redhat9 and Redhat Enterprise Linux 3.

Ken Marshall

*** ./expected/errors.out   Sat Mar 13 22:25:17 2004
--- ./results/errors.outTue Nov 23 14:09:45 2004
***
*** 297,303 
  -- Check that division-by-zero is properly caught.
  --
  select 1/0;
! ERROR:  division by zero
  select 1::int8/0;
  ERROR:  division by zero
  select 1/0::int8;
--- 297,304 
  -- Check that division-by-zero is properly caught.
  --
  select 1/0;
! ERROR:  floating-point exception
! DETAIL:  An invalid floating-point operation was signaled. This probably 
means an out-of-range result or an invalid operation, such as division by zero.
  select 1::int8/0;
  ERROR:  division by zero
  select 1/0::int8;

==

*** ./expected/transactions.out Mon Sep 13 15:09:51 2004
--- ./results/transactions.out  Tue Nov 23 14:09:50 2004
***
*** 290,296 
  BEGIN;
SAVEPOINT one;
SELECT 0/0;
! ERROR:  division by zero
SAVEPOINT two;-- ignored till the end of ...
  ERROR:  current transaction is aborted, commands ignored until end of 
transaction block
RELEASE SAVEPOINT one;  -- ignored till the end of ...
--- 290,297 
  BEGIN;
SAVEPOINT one;
SELECT 0/0;
! ERROR:  floating-point exception
! DETAIL:  An invalid floating-point operation was signaled. This probably 
means an out-of-range result or an invalid operation, such as division by zero.
SAVEPOINT two;-- ignored till the end of ...
  ERROR:  current transaction is aborted, commands ignored until end of 
transaction block
RELEASE SAVEPOINT one;  -- ignored till the end of ...
***
*** 364,370 
DECLARE c CURSOR FOR SELECT unique2/0 FROM tenk1;
SAVEPOINT two;
FETCH 10 FROM c;
! ERROR:  division by zero
ROLLBACK TO SAVEPOINT two;
-- c is now dead to the world ...
FETCH 10 FROM c;
--- 365,372 
DECLARE c CURSOR FOR SELECT unique2/0 FROM tenk1;
SAVEPOINT two;
FETCH 10 FROM c;
! ERROR:  floating-point exception
! DETAIL:  An invalid floating-point operation was signaled. This probably 
means an out-of-range result or an invalid operation, such as division by zero.
ROLLBACK TO SAVEPOINT two;
-- c is now dead to the world ...
FETCH 10 FROM c;

==

*** ./expected/plpgsql.out  Tue Nov 16 12:10:15 2004
--- ./results/plpgsql.out   Tue Nov 23 14:10:17 2004
***
*** 1826,1837 
  
  select trap_zero_divide(0);
  NOTICE:  should see this
! NOTICE:  caught division_by_zero
!  trap_zero_divide 
! --
!-1
! (1 row)
! 
  select trap_zero_divide(10);
  NOTICE:  should see this
  NOTICE:  should see this only if 10  0
--- 1826,1834 
  
  select trap_zero_divide(0);
  NOTICE:  should see this
! ERROR:  floating-point exception
! DETAIL:  An invalid floating-point operation was signaled. This probably 
means an out-of-range result or an invalid operation, such as division by zero.
! CONTEXT:  PL/pgSQL function trap_zero_divide line 6 at assignment
  select trap_zero_divide(10);
  NOTICE:  should see this
  NOTICE:  should see this only if 10  0

==


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [Testperf-general] Re: [HACKERS] ExclusiveLock

2004-11-24 Thread Kenneth Marshall
On Wed, Nov 24, 2004 at 11:00:30AM -0500, Bort, Paul wrote:
  From: Kenneth Marshall [mailto:[EMAIL PROTECTED]
 [snip]
  The simplest idea I had was to pre-layout the WAL logs in a 
  contiguous fashion
  on the disk. Solaris has this ability given appropriate FS 
  parameters and we
  should be able to get close on most other OSes. Once that has 
  happened, use
  something like the FSM map to show the allocated blocks. The 
  CPU can keep track
  of its current disk rotational position (approx. is okay) 
  then when we need to
  write a WAL block start writing at the next area that the 
  disk head will be
  sweeping. Give it a little leeway for latency in the system 
  and we should be
  able to get very low latency for the writes. Obviously, there 
  would be wasted
  space but you could intersperse writes to the granularity of 
  space overhead
  that you would like to see. As far as implementation, I was reading an
  interesting article that used a simple theoretical model to 
  estimate disk head
  position to avoid latency.
  
 
 Ken, 
 
 That's a neat idea, but I'm not sure how much good it will do. As bad as
 rotational latency is, seek time is worse. Pre-allocation isn't going to do
 much for rotational latency if the heads also have to seek back to the WAL. 
 
 OTOH, pre-allocation could help two other performance aspects of the WAL:
 First, if the WAL was pre-allocated, steps could be taken (by the operator,
 based on their OS) to make the space allocated to the WAL contiguous.
 Statistics on how much WAL is needed in 24 hours would help with that
 sizing. This would reduce seeks involved in writing the WAL data.
 
 The other thing it would do is reduce seeks and metadata writes involved in
 extending WAL files.
 
 All of this is moot if the WAL doesn't have its own spindle(s).
 
 This almost leads back to the old-fashioned idea of using a raw partition,
 to avoid the overhead of the OS and file structure. 
 
 Or I could be thoroughly demonstrating my complete lack of understanding of
 PostgreSQL internals. :-)
 
 Maybe I'll get a chance to try the flash drive WAL idea in the next couple
 of weeks. Need to see if the hardware guys have a spare flash drive I can
 abuse.
 
 Paul
 

Obviously, this whole process would be much more effective on systems with
separate WAL drives. But even on less busy systems, the lock-step of
write-a-WAL/wait-for-heads/write-a-WAL can dramatically decrease your
effective throughput to the drive. For example, the worst case would be
write one WAL block to disk. Then schedule another WAL block to be written
to disk. This block will need to wait for 1 full disk rotation to perform
the write. On a 10k drive, you will be able to log in this scenario 166
TPS assuming no piggy-backed syncs. Now look at the case where we can use
the preallocated WAL and write immediately. Assuming a 100% sequential disk
layout, if we can start writing within 25% of the full rotation we can now
support 664 TPS on the same hardware. Now look at a typical hard drive on
my desktop system with 150M sectors/4 heads/5 tracks - 3000 blocks/track
or 375 8K blocks. If we can write the next block within 10 8K blocks we can
perform 6225 TPS, within 5 8K blocks = 12450 TPS, within 2 8K blocks =
31125 TPS. This is just on a simple disk drive. As you can see, even small
improvements can make a tremendous difference in throughput. My analysis
is very simplistic and whether we can model the I/O quickly enough to be
useful is still to be determined. Maybe someone on the mailing list with
more experience in how disk drives actually function can provide more
definitive information.

Ken


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] Beta5 now Available

2004-11-24 Thread Marc G. Fournier
On Wed, 24 Nov 2004, Joshua D. Drake wrote:
Marc G. Fournier wrote:
On Wed, 24 Nov 2004, Gavin M. Roy wrote:
No you can not, but the tracker isn't very resource intensive from my past 
experience.  I can host it if needed.

It wasn't that that I was thinking of ... just wondering if there was some 
way of having it redundant, instead of centralized ... nice thing about ftp 
mirrors, we have about 60 of them, so if one goes down, it doesn't really 
affect anything ... from what everyone is saying, if the tracker goes down, 
it affects everything ... seems odd to have new technology still having 
single points of failure :(

O.k. I know nothing of bittorrent but couldn't we just have two machines that 
are identically configured that have a round robin DNS thing going on?
we're not talking load issues this time ... the way I understand it, 
bittorrent has a 'tracker' process that only one can be running on the BT 
Distributed Network at once ... so, if the bt central server goes down, 
the whole bt network goes down with it ...

At least, this is my understanding, someone please correct me if I'm wrong 
...


Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [HACKERS] Beta5 now Available

2004-11-24 Thread Gavin M. Roy
To a degree you are correct.  AFAIK new downloads could not start if the 
tracker crashed. The tracker is the traffic cop that tells peer nodes 
about each other.  I don't believe the tracker that comes from the main 
bit torrent author allows for multiple trackers with a common data 
repository, but if we're really interested, maybe we could hack up the 
code to talk to a central pgsql database allowing multiple trackers on a 
dns rr.

Gavin
Marc G. Fournier wrote:
On Wed, 24 Nov 2004, Joshua D. Drake wrote:
Marc G. Fournier wrote:
On Wed, 24 Nov 2004, Gavin M. Roy wrote:
No you can not, but the tracker isn't very resource intensive from 
my past experience.  I can host it if needed.

It wasn't that that I was thinking of ... just wondering if there 
was some way of having it redundant, instead of centralized ... nice 
thing about ftp mirrors, we have about 60 of them, so if one goes 
down, it doesn't really affect anything ... from what everyone is 
saying, if the tracker goes down, it affects everything ... seems 
odd to have new technology still having single points of failure :(

O.k. I know nothing of bittorrent but couldn't we just have two 
machines that are identically configured that have a round robin DNS 
thing going on?

we're not talking load issues this time ... the way I understand it, 
bittorrent has a 'tracker' process that only one can be running on the 
BT Distributed Network at once ... so, if the bt central server 
goes down, the whole bt network goes down with it ...

At least, this is my understanding, someone please correct me if I'm 
wrong ...


Marc G. Fournier   Hub.Org Networking Services 
(http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 
7615664

---(end of broadcast)---
TIP 8: explain analyze is your friend


[HACKERS] follow-up to previous build problem for 8.0.0beta5 on SPARC

2004-11-24 Thread Kenneth Marshall
The failure that I posted earlier for 8.0.0beta5 on Solaris 8/SPARC
with gcc-3.4.0 and -O3 can be worked around by disabling the interblock
scheduling. I used the following gcc options and 8.0.0beta5 built fine
on the SPARC Solaris 8 machine:

gcc -O3 -fno-sched-interblock ...

The Redhat 9 and Redhat Enterprise Linux builds worked with a plain -O3.
Note, all the previous 7.4.x versions built using a plain -O3 too
on SPARC/Solaris 8.

Ken Marshall

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [HACKERS] lwlocks and starvation

2004-11-24 Thread Neil Conway
On Wed, 2004-11-24 at 23:30 -0500, Tom Lane wrote:
 It is not a 100% solution because it does not
 cover the case where a waiting exclusive locker is released, then a new
 shared locker arrives at the lock before the exclusive locker is given
 any cycles to acquire the lock.  However I don't see any cure for the
 latter problem that's not worse than the disease

Yeah, I don't think this is a problem -- eventually the exclusive waiter
will win the coin flip anyway.

 On the other hand we might consider that this isn't a big problem and
 just leave things as they are.  We haven't seen any indication that
 starvation is a real problem in practice, and so it might be better to
 avoid extra trips through the kernel scheduler.

Yes, I'm a little concerned about applying a patch to address what is,
so far, an entirely academic concern -- especially if it might hurt
performance.

-Neil



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[HACKERS] Help!

2004-11-24 Thread ElayaRaja S
Hi,
While configuring OpenCRX using PostgreSQL I am facing a problem.
The problem occurs while creating the db using the command ( createdb -h
localhost -E utf8 -U system crx-CRX ).
Error:
createdb: could not connect to database template1: could not connect
to server:
 Connection refused
  Is the server running on host localhost and accepting
  TCP/IP connections on port 5432?

Details of the platform that I am using:
1) Linux 9
2) PostgreSQL 7.4.5
Note: I suspect the error may be due to my existing PostgreSQL 7.3;
the Linux installation came with a built-in PostgreSQL 7.3, but I
installed PostgreSQL 7.4.5. Please let me know, it's very urgent.

-- 
Warm Regards,

S.ElayaRaja
Mobile:  98450 59540
E-Mail:   [EMAIL PROTECTED]
 [EMAIL PROTECTED]

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] Solaris 8 regression test failure with 8.0.0beta5

2004-11-24 Thread Tom Lane
Kenneth Marshall <[EMAIL PROTECTED]> writes:
 Here are the diffs for the regression test failures on Solaris 8.
 The tests work fine on Redhat9 and Redhat Enterprise Linux 3.

... and most other platforms ...

   select 1/0;
 ! ERROR:  floating-point exception
 ! DETAIL:  An invalid floating-point operation was signaled. This probably 
 means an out-of-range result or an invalid operation, such as division by 
 zero.

This query would invoke the following code in int4div:

if (arg2 == 0)
ereport(ERROR,
(errcode(ERRCODE_DIVISION_BY_ZERO),
 errmsg("division by zero")));

result = arg1 / arg2;

I suggest directing some strong words to your compiler vendor.  If the
if test doesn't complete before the subsequent division causes a trap,
that is not our problem.
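For illustration only (this is a sketch, not PostgreSQL source), one defensive rewrite that keeps an over-aggressive scheduler from hoisting the division above the zero check is to route the divisor through a volatile temporary, which forces a re-load after the branch:

```c
#include <stdio.h>

/*
 * Sketch only -- not PostgreSQL source. safe_int4div is a hypothetical
 * stand-in for int4div: the volatile qualifier forces the divisor to be
 * re-read after the branch, so the divide cannot legally be scheduled
 * speculatively ahead of the zero check.
 */
static int safe_int4div(int arg1, int arg2)
{
    if (arg2 == 0)
    {
        /* stand-in for ereport(ERROR, errcode(ERRCODE_DIVISION_BY_ZERO), ...) */
        fprintf(stderr, "ERROR:  division by zero\n");
        return 0;
    }

    {
        volatile int divisor = arg2;

        return arg1 / divisor;
    }
}
```

Whether such a workaround belongs in the source tree is exactly the question here; as stated above, a compiler that traps before the `if` test completes is arguably broken, and the cleaner fix is the compiler vendor's.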

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org