Re: [HACKERS] subquery returning array

2005-05-25 Thread Greg Stark

Zeljko Vrba [EMAIL PROTECTED] writes:

 Is there yet another way of making WHERE field = ANY (subselect
 returning an array) work? Or make postgres use the index?

You could use the int_array_enum() function from the contrib/int_agg module.

 Also, what is the limit on the number of elements in the IN (...) condition
 before the database resorts to sequential scan?

It depends on the table. If, for example, there's little free space in your
table and the records are very narrow, then sequential scans will be especially
efficient, since each I/O will read in many records. An index scan of any
significant size would likely have to read in nearly every page multiple
times.

The problem in your case is that Postgres has no idea how large an array it's
going to find when it's doing the planning. I'm not sure what the fallback
logic is; apparently it's assuming a fairly low selectivity for the =ANY
constraint.

Actually I think there was some discussion a while back about making =ANY a
little less pessimistic about the selectivity. Perhaps 8.1 will be better for
this. If you're still in development with a long timeframe you could try a CVS
build to see if that's the case.


-- 
greg


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] PseudoPartitioning and agregates

2005-05-25 Thread Greg Stark

Tom Lane [EMAIL PROTECTED] writes:

 The EXPLAIN ANALYZE overhead for the Append is still pretty heavy,
 but when comparing actual runtimes for the two queries, they are
 now very nearly the same.

How hard would it be to have Postgres actually remove the gettimeofday
overhead from the EXPLAIN ANALYZE output? 

It seems like it ought to be able to time a couple hundred gettimeofday calls
and get a perfectly usable figure. The actual amount of overhead per call
should be very consistent and it should be easy to keep track of how many
gettimeofday calls were needed.

For queries that don't do much i/o, especially on loaded machines, there could
still be a problem in that the added syscalls would cause most unix schedulers
to behave differently. But at least it would be basically right.

-- 
greg




Re: [HACKERS] IN/OUT parameters

2005-05-25 Thread Dave Cramer
I think this is the driver's problem. It appears that we are following
the spec, so let's leave this alone.


Regarding what an OUT parameter might mean. We already have some 
facility in the FE/BE to indicate the types of the returning columns. 
Directionality is implied by where in the bind message these parameters 
are sent. I can see the extra overhead of putting directionality in for 
each column.
I'd agree that changing the FE/BE at this point is not a good idea. 
However, if we ever do get real stored procs, this may be an issue.


I've looked at the driver some more and it appears we parse the SQL more
than once, due to the fact that we support both the v2 and v3 backend
protocols. At this point I've got to consider the TODO on the jdbc list
of putting a proper parser in. Are there any javacc wizards around?


Dave
Tom Lane wrote:


Dave Cramer [EMAIL PROTECTED] writes:
 At this point I'd just like the backend to ignore the OUT parameter if
 it were set that way, but allow it to be sent.

I think you're thinking at the wrong level.  AIUI the issue occurs at
the Parse stage, long before any parameter value is sent (or not sent).

What you really want, if I'm understanding correctly, is to be able to
send SQL that looks like this:

SELECT * FROM myfunc($1,$2,$3);

and then have some of the Param symbols be simply ignored while looking
up myfunc().

This is pretty ugly in itself, and I'd not want to add a protocol change
into the mix to make it happen.  But possibly we could do it without any
protocol-level change.  What would you say to specifying that Params
that are declared as type VOID are ignored in a function lookup?  What
this would mean is that you'd need to do the following:

1. The SQL string can look as above.

2. When sending the Parse message, you'd specify the parameter types
  as, say, INT4/VOID/VOID (or possibly UNKNOWN/VOID/VOID).

3. When sending Bind, you'd have to specify dummy values (probably
  nulls) for the VOID parameter positions.

While I haven't looked at the backend code yet, I think we could drop
VOID-type parameters out of the argument list of a function during
parse analysis without too much trouble.  This would affect *all*
function calls, not only those appearing in SELECT * FROM, but
I don't see any legitimate use of VOID-type values that this would
interfere with.

Klugy, isn't it?  Anyone have a better idea?  Or should we just tell
Dave that the JDBC driver ought to handle it?

regards, tom lane

--
Dave Cramer
http://www.postgresintl.com
519 939 0336
ICQ#14675561




Re: [HACKERS] PseudoPartitioning and agregates

2005-05-25 Thread Tom Lane
Greg Stark [EMAIL PROTECTED] writes:
 How hard would it be to have Postgres actually remove the gettimeofday
 overhead from the EXPLAIN ANALYZE output? 

Personally, I dislike measurement tools that lie to you under the flag
of producing more-easily-interpreted results.

As an example of why this would be a bad idea, the total time would no
longer be closely related to the actual elapsed time (as measured by
psql's \timing for instance) so you would be entirely unable to tell
whether there was some significant factor not being measured.

regards, tom lane



[HACKERS] WAL replay failure after file truncation(?)

2005-05-25 Thread Tom Lane
We've seen two recent reports:
http://archives.postgresql.org/pgsql-admin/2005-04/msg8.php
http://archives.postgresql.org/pgsql-general/2005-05/msg01143.php
of postmaster restart failing because the WAL contains a reference
to a page that no longer exists.

I can think of a couple of possible explanations:
1. filesystem corruption, ie the page should exist in the file but the
   kernel has forgotten about it;
2. we truncated the file subsequent to the WAL record that causes
   the panic.

However, neither of these theories is entirely satisfying, because
the WAL replay logic has always acted like this; why haven't we
seen similar reports ever since 7.1?  And why are both of these
reports connected to btrees, when file truncation probably happens
far more often on regular tables?

But, setting those nagging doubts aside, theory #2 seems like a definite
bug that we ought to do something about.

The only really clean answer I can see is for file truncation to force a
checkpoint just before issuing the ftruncate call.  That way, no WAL
records referencing the to-be-deleted pages would need to be replayed in
a subsequent crash.  However, checkpoints are expensive enough to make
this solution very unattractive from a performance point of view.  And
I fear it's not a 100% solution anyway: what about the PITR scenario,
where you need to replay a WAL log that was made concurrently with a
filesystem backup being taken?  The backup might well include the
truncated version of the file, but you can't avoid replaying the
beginning portion of the WAL log.

Plan B is for WAL replay to always be willing to extend the file to
whatever record number is mentioned in the log, even though this
may require inventing the contents of empty pages; we trust that their
contents won't matter because they'll be truncated again later in the
replay sequence.  This seems pretty messy though, especially for
indexes.  The major objection to it is that it gives up error detection
in real filesystem-corruption cases: we'll just silently build an
invalid index and then try to run with it.  (Still, that might be better
than refusing to start; at least you can REINDEX afterwards.)

Any thoughts?

regards, tom lane



Re: [HACKERS] PseudoPartitioning and agregates

2005-05-25 Thread Greg Stark
Tom Lane [EMAIL PROTECTED] writes:

 Greg Stark [EMAIL PROTECTED] writes:

  How hard would it be to have Postgres actually remove the gettimeofday
  overhead from the EXPLAIN ANALYZE output? 
 
 Personally, I dislike measurement tools that lie to you under the flag
 of producing more-easily-interpreted results.

This is pretty standard practice for profilers in other contexts.

 As an example of why this would be a bad idea, the total time would no
 longer be closely related to the actual elapsed time (as measured by
 psql's \timing for instance) so you would be entirely unable to tell
 whether there was some significant factor not being measured.

Well that would be easily remedied by printing the total overhead subtracted
from all the nodes after the plan.

-- 
greg




Re: [HACKERS] WAL replay failure after file truncation(?)

2005-05-25 Thread Bruce Momjian
Tom Lane wrote:
 Plan B is for WAL replay to always be willing to extend the file to
 whatever record number is mentioned in the log, even though this
 may require inventing the contents of empty pages; we trust that their
 contents won't matter because they'll be truncated again later in the
 replay sequence.  This seems pretty messy though, especially for
 indexes.  The major objection to it is that it gives up error detection
 in real filesystem-corruption cases: we'll just silently build an
 invalid index and then try to run with it.  (Still, that might be better
 than refusing to start; at least you can REINDEX afterwards.)

Should we add a GUC to allow recovery in such cases, but not mention
it in postgresql.conf?  That way we could give people a recovery
solution, track the cases where it happens, and not accidentally
trigger the recovery path.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



[HACKERS] logging sql from JDBC

2005-05-25 Thread David Parker



It appears that SQL submitted via the latest JDBC driver won't get logged
even if log_min_duration_statement is set appropriately. From what I've
found in the archives, this is because the driver uses an "extended"
version of the protocol.

Could somebody point me at the area of the source where this decision gets
made, and/or how difficult it would be to enable this logging?

Thanks!
--
David Parker
Tazz Networks
(401) 709-5130



[HACKERS] Source Code Help Needed

2005-05-25 Thread Vikram Kalsi
Hello,

I've been using Postgresql-8.0.1 (Release date: 2005-01-31) for my
research work and I guess I finally need some help with it...

I'm not trying to modify the existing functionality, but I want to add
few things. In particular, I'm calculating two new Cost values and I
need to use them while the query is executing. Please See code
snippets below-

1.) New Variables ADDED to src/include/nodes/plannodes.h
typedef struct Plan
{
    /* ... existing fields ... */
    Cost    hutz_tbl_benefit;   /* Benefit for TableAccess */
    Cost    hutz_idx_benefit;   /* Benefit for IndexScan */
}


2.)  New Variables ADDED to src/include/nodes/relation.h
typedef struct Path
{
    /* ... existing fields ... */
    Cost    hutz_tbl_benefit;   /* Benefit for TableAccess */
    Cost    hutz_idx_benefit;   /* Benefit for IndexScan */
}


3.) ADDITIONS to costsize.c

void cost_seqscan(Path *path, Query *root,
                  RelOptInfo *baserel)
{
    path->hutz_tbl_benefit = x;
    path->hutz_idx_benefit = x;
}


void cost_index(Path *path, Query *root, RelOptInfo *baserel,
                IndexOptInfo *index, List *indexQuals, bool is_injoin)
{
    path->hutz_tbl_benefit = x;
    path->hutz_idx_benefit = x;
}


However, after these modifications the server process crashes on
running a Join query like
select s_suppkey, c_custkey from supplier, customer where s_suppkey > 125
and s_suppkey < 128 and c_custkey > 100 and c_custkey < 103 and
c_custkey = s_suppkey

But this query runs fine: select s_suppkey from supplier where
s_suppkey > 125 and s_suppkey < 128

I'm tracing the point at which the process crashes, and at this point
it seems to be inside make_rels_by_joins() in
src/backend/optimizer/path/joinrels.c

So, my question is: after adding the two new variables, what other
modifications do I need to make for the code to work? And later on, what
changes are required so that I can access these variables while executing
the query plan in, say, ExecutePlan() and its sub-functions like
ExecProcNode()...

Thanks to everybody on this group,
-Vikram Kalsi
MSEE PennStateUniv



Re: [HACKERS] WAL replay failure after file truncation(?)

2005-05-25 Thread Manfred Koizar
On Wed, 25 May 2005 11:02:11 -0400, Tom Lane [EMAIL PROTECTED]
wrote:
Plan B is for WAL replay to always be willing to extend the file to
whatever record number is mentioned in the log, even though this
may require inventing the contents of empty pages; we trust that their
contents won't matter because they'll be truncated again later in the
replay sequence.

Another idea: WAL replay does not apply changes to nonexistent blocks,
but it keeps a list (hash table, file, whatever) of those blocks.
When a truncate WAL record is found, all entries for blocks affected
by the truncation are removed from the list.  Is it sufficient to
remember just the relation and the block number or do we need the
contents as well?

If the list is non-empty at the end of WAL replay, this is evidence of
a serious problem (file system corruption or Postgres bug).

Servus
 Manfred




Re: [HACKERS] Cost of XLogInsert CRC calculations

2005-05-25 Thread Manfred Koizar
On Wed, 18 May 2005 13:50:22 +0200, I wrote:
The most important figure is, that at MaxSpeed (/O2) 2x32 is almost
twice as fast as CRC32 while only being marginally slower than CRC32.
  ^
Silly typo!  That should have been:
The most important figure is, that at MaxSpeed (/O2) 2x32 is almost
twice as fast as CRC64 while only being marginally slower than CRC32.

Servus
 Manfred




Re: [HACKERS] Source Code Help Needed

2005-05-25 Thread Tom Lane
Vikram Kalsi [EMAIL PROTECTED] writes:
 1.) New Variables ADDED to src/include/nodes/plannodes.h
 2.)  New Variables ADDED to src/include/nodes/relation.h
 ...
 However, after these modifications the server process crashes on
 running a Join query like
 select s_suppkey, c_custkey from supplier, customer where s_suppkey > 125
 and s_suppkey < 128 and c_custkey > 100 and c_custkey < 103 and
 c_custkey = s_suppkey

Did you do a full recompile (make clean and rebuild) after modifying
these widely-known structures?

Unless you configured with --enable-depend, you can't expect that plain
make will recompile everything that needs recompiled.

regards, tom lane



Re: [HACKERS] WAL replay failure after file truncation(?)

2005-05-25 Thread Tom Lane
Manfred Koizar [EMAIL PROTECTED] writes:
 Another idea: WAL replay does not apply changes to nonexistent blocks,
 but it keeps a list (hash table, file, whatever) of those blocks.
 When a truncate WAL record is found, all entries for blocks affected
 by the truncation are removed from the list.  Is it sufficient to
 remember just the relation and the block number or do we need the
 contents as well?

We don't *have* the contents ... that's exactly why it's panicking ...

 If the list is non-empty at the end of WAL replay, this is evidence of
 a serious problem (file system corruption or Postgres bug).

That seems like a good idea --- it covers the problem, and what's more,
it won't complain until after it finishes replay.  Which means that if
you do get the PANIC, you can get out of it with pg_resetxlog and not
need to worry that you are throwing away whatever good data is available
from the WAL log.  (This assumes that we go ahead and checkpoint out
the working buffers before we make the check for empty list.)

regards, tom lane



[HACKERS] adding a function to pg_proc.h

2005-05-25 Thread Jaime Casanova
I have added a function in the sources and added the appropriate lines in pg_proc.h

DATA(insert OID = 2560 ( get_view_column_default  PGNSP PGUID 12 f f
t f i 1 2283 "25 25 21" _null_ _null_ _null_ get_view_column_default -
_null_ ));
DESCR("get default value for view column");

then make; make install; and initdb the data directory.

When I start postgres, I look for the function using:

\df get_view*

                                List of functions
   Schema   |          Name           | Result data type | Argument data types
------------+-------------------------+------------------+----------------------
 pg_catalog | get_view_column_default | anyelement       | text, text, smallint
(1 row)

So far, so good... I will do a test

create table foo (
    col1    serial,
    col2    int2
);
NOTICE:  CREATE TABLE will create implicit sequence "foo_col1_seq" for
serial column "foo.col1"

create view v_foo as select * from foo;
NOTICE:  CREATE VIEW will create implicit INSERT/UPDATE/DELETE rules

alter table v_foo alter col1 set default
get_view_column_default('public'::text, 'foo'::text, 1::smallint);
ERROR:  function get_view_column_default(text, text, smallint) does not exist
HINT:  No function matches the given name and argument types. You may
need to add explicit type casts.


So the question is: why does the ERROR message say the function doesn't
exist, when as you can see the function actually exists?
I changed the return data type to bigint and got the same error, so it
seems it is not a problem of returning anyelement.

any comments will be appreciated.

-- 
regards,
Jaime Casanova
(DBA: DataBase Aniquilator ;)



Re: [HACKERS] adding a function to pg_proc.h

2005-05-25 Thread Tom Lane
Jaime Casanova [EMAIL PROTECTED] writes:
 I have added a function in the sources and added the appropriate lines in pg_proc.h
 DATA(insert OID = 2560 ( get_view_column_default  PGNSP PGUID 12 f f
 t f i 1 2283 "25 25 21" _null_ _null_ _null_ get_view_column_default -
 _null_ ));
 DESCR("get default value for view column");

Try putting the right value for pronargs ;-)

Also, it seems highly unlikely that this function should be marked
as immutable.  Stable sounds more likely.

Also, defining the return type as ANYELEMENT will definitely NOT work,
since none of the input arguments are polymorphic.

regards, tom lane



Re: [HACKERS] Source Code Help Needed

2005-05-25 Thread Vikram Kalsi
Thanks Tom, that solved it... I added the new variables one at a time
and did do a make clean on the first occasion, but I must have
forgotten to do it the second time...

I have another question-
The new variables that I've added to Plan and Path i.e.
hutz_tbl_benefit and hutz_idx_benefit are being assigned values inside
cost_seqscan() and cost_index() in costsize.c and this is done
alongside startup_cost and total_cost.

But when I read the value of hutz_tbl_benefit and hutz_idx_benefit
inside pg_plan_query(), or later at the execution stage inside
ExecProcNode(), the value is absent, while startup_cost and total_cost
retain their values.

So, I suppose that during the query planning and optimization stage,
the values of the original variables in the Path are somehow copied to
the Plan which is finally returned inside pg_plan_query().

Could somebody direct me to the appropriate code/function(s) that do
this copying, so that I can add hutz_tbl_benefit and hutz_idx_benefit
there as well?

Thanks in anticipation,
Regards,





On 5/25/05, Tom Lane [EMAIL PROTECTED] wrote:
 Vikram Kalsi [EMAIL PROTECTED] writes:
  1.) New Variables ADDED to src/include/nodes/plannodes.h
  2.)  New Variables ADDED to src/include/nodes/relation.h
  ...
  However, after these modifications the server process crashes on
  running a Join query like
  select s_suppkey, c_custkey from supplier, customer where s_suppkey > 125
  and s_suppkey < 128 and c_custkey > 100 and c_custkey < 103 and
  c_custkey = s_suppkey
 
 Did you do a full recompile (make clean and rebuild) after modifying
 these widely-known structures?
 
 Unless you configured with --enable-depend, you can't expect that plain
 make will recompile everything that needs recompiled.
 
regards, tom lane




Re: [HACKERS] adding a function to pg_proc.h

2005-05-25 Thread Jaime Casanova
On 5/25/05, Tom Lane [EMAIL PROTECTED] wrote:
 Jaime Casanova [EMAIL PROTECTED] writes:
  I have added a function in the sources and added the appropriate lines in
 pg_proc.h
  DATA(insert OID = 2560 ( get_view_column_default  PGNSP PGUID 12 f f
  t f i 1 2283 "25 25 21" _null_ _null_ _null_ get_view_column_default -
  _null_ ));
  DESCR("get default value for view column");
 
 Try putting the right value for pronargs ;-)
 
hehe... silly of me

 Also, it seems highly unlikely that this function should be marked
 as immutable.  Stable sounds more likely.
 
I don't know when I changed this; I had marked it volatile, but well...
again, it's my fault.

 Also, defining the return type as ANYELEMENT will definitely NOT work,
 since none of the input arguments are polymorphic.
 
mmm... This is a problem. Is there a way to make a function that can
return different datatypes depending on the column?

   regards, tom lane
 

-- 
regards,
Jaime Casanova
(DBA: DataBase Aniquilator ;)



Re: [HACKERS] Network write errors (was: Re: Feature freeze date for

2005-05-25 Thread Bruce Momjian
Andrew - Supernews wrote:
 On 2005-05-01, Peter Eisentraut [EMAIL PROTECTED] wrote:
  The problem, as I understand it, is that if you have a long-running 
  query and the client process disappears, the query keeps running and 
  holds whatever resources it may have until it finishes.  In fact, it 
  keeps sending data to the client and keeps ignoring the SIGPIPE it gets 
  (in case of a Unix-domain socket connection).
 
 Ignoring the SIGPIPE is exactly the right thing to do.
 
 What's _not_ a good idea is ignoring the EPIPE error from write(), which
 seems to currently be reported via ereport(COMMERROR) which doesn't try
 and abort the query as far as I can tell.

Where are you seeing this?  I looked from PostgresMain() to
ReadCommand() to SocketBackend() to pq_getbyte() which returns EOF, and
PostgresMain checks that and does a proc_exit(0).

I think the main problem is that a long-running query never tries to
interact with the client during the query.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



Re: [HACKERS] Source Code Help Needed

2005-05-25 Thread Tom Lane
Vikram Kalsi [EMAIL PROTECTED] writes:
 So, I suppose that during the query planning and optimization stage,
 the value of the original variables in the plan are somehow copied to
 the plan which is finally returned inside pg_plan_query().

Look in createplan.c --- there are a couple places in there you need to
fix.

regards, tom lane



Re: [HACKERS] WAL replay failure after file truncation(?)

2005-05-25 Thread Christopher Kings-Lynne

Plan B is for WAL replay to always be willing to extend the file to
whatever record number is mentioned in the log, even though this
may require inventing the contents of empty pages; we trust that their
contents won't matter because they'll be truncated again later in the
replay sequence.  This seems pretty messy though, especially for
indexes.  The major objection to it is that it gives up error detection
in real filesystem-corruption cases: we'll just silently build an
invalid index and then try to run with it.  (Still, that might be better
than refusing to start; at least you can REINDEX afterwards.)


You could at least log some sort of warning during the PITR process. 
Anyone running a PITR not paying attention to their logs is in trouble 
anyway...


Chris



Re: [HACKERS] WAL replay failure after file truncation(?)

2005-05-25 Thread Tom Lane
Christopher Kings-Lynne [EMAIL PROTECTED] writes:
 Plan B is for WAL replay to always be willing to extend the file to
 whatever record number is mentioned in the log, even though this
 may require inventing the contents of empty pages; we trust that their
 contents won't matter because they'll be truncated again later in the
 replay sequence.  This seems pretty messy though, especially for
 indexes.  The major objection to it is that it gives up error detection
 in real filesystem-corruption cases: we'll just silently build an
 invalid index and then try to run with it.  (Still, that might be better
 than refusing to start; at least you can REINDEX afterwards.)

 You could at least log some sort of warning during the PITR process. 
 Anyone running a PITR not paying attention to their logs is in trouble 
 anyway...

I'm more worried about the garden variety restart-after-power-failure
scenario.  As long as the postmaster starts up, it's unlikely people
will inspect the postmaster log too closely.  I think we have a choice
of PANICking and refusing to start, or assuming that no one will notice
that we did something dubious.

regards, tom lane



Re: [HACKERS] Network write errors (was: Re: Feature freeze date for

2005-05-25 Thread Tom Lane
Bruce Momjian pgman@candle.pha.pa.us writes:
 Andrew - Supernews wrote:
 What's _not_ a good idea is ignoring the EPIPE error from write(), which
 seems to currently be reported via ereport(COMMERROR) which doesn't try
 and abort the query as far as I can tell.

 Where are you seeing this?  I looked from PostgresMain() to
 ReadCommand() to SocketBackend() to pq_getbyte() which returns EOF, and
 PostgresMain checks that and does a proc_exit(0).

It sounds like you were following the input-from-client logic.  Andrew
is complaining about the output-to-client side.

We deliberately don't abort on write-to-client failure.  There have
been periodic discussions about changing that, but I'm not convinced
that the advocates for a change have made a good case.  Right now,
it's predictable that the backend only fails due to loss of connection
when it waits for a new command.  The behavior would become much less
predictable if we allowed write failure to abort the query.  As an
example: whether an UPDATE command completes might depend on whether
any invoked triggers try to issue NOTICEs.

regards, tom lane



[HACKERS] Josh Narins' Subcountry System

2005-05-25 Thread David Fetter
Folks,

Josh Narins asked me to post this as he was unable to post it himself.

http://narins.net:4321/blog/subcountry.html

Cheers,
D
-- 
David Fetter [EMAIL PROTECTED] http://fetter.org/
phone: +1 510 893 6100   mobile: +1 415 235 3778

Remember to vote!
