[GENERAL] SQL question on chunking aggregates

2014-03-04 Thread Owen Hartnett
Hi all:

I have a table that has multiple records for a single owner_id.  I'm able to 
use array_agg to combine the records into a single row, which works fine.  I'm 
using this sql:

select owner_id, array_agg(trim(maplot)), array_agg(revallandvalue + 
revalbuildingvalues) from parcel group by owner_id;

Which results in the following (sometimes there's only one record per 
aggregate, sometimes multiple):

1030600;{154191};{244690}
1030900;{22202};{217210}
1031130;{113135,113138,113132,113130,113133,113127,113126,113131,113129,113136,113125,113137,113134,113128};{7700,7700,7700,7700,7700,7700,7700,7700,7700,191770,7700,7700,7700,7700}

What I want to do, where more than 5 rows are involved in the aggregate (as in 
the last example), is split it into multiple rows of at most 5 aggregated 
values each.  It's for a mailing list, and I want to combine like addresses 
into one record, but if I'm over 5, I have to print the rest on a separate 
letter.

1031130;{113135,113138,113132,113130,113133};{7700,7700,7700,7700,7700}
1031130;{113127,113126,113131,113129,113136};{7700,7700,7700,7700,191770}
1031130;{113125,113137,113134,113128};{7700,7700,7700,7700}

It looks like I should be able to use a window function to do this, but I've 
been unsuccessful.  The following runs, but doesn't seem to have any effect:

select owner_id, array_agg(trim(maplot)), array_agg(revallandvalue + 
revalbuildingvalues) from parcel group by owner_id
window mywindow as (rows between current row and 5 following);

Does anyone have any suggestions on what I should try?

-Owen
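
One approach that should work (a sketch, untested against the original parcel 
table): number each owner's rows with row_number() and fold the chunk number 
into the grouping:

select owner_id,
       array_agg(trim(maplot)),
       array_agg(revallandvalue + revalbuildingvalues)
from (select owner_id, maplot, revallandvalue, revalbuildingvalues,
             (row_number() over (partition by owner_id order by maplot) - 1) / 5 as chunk
      from parcel) as numbered
group by owner_id, chunk
order by owner_id, chunk;

Each output row then aggregates at most five parcels, so no letter lists more 
than five.  (This needs 8.4 or later.  The WINDOW clause in the attempted 
query has no effect because nothing in the query actually references 
mywindow, and a window frame can't change the grouping anyway.)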

Re: [GENERAL] Re: PG fails on Windows Server 2008: could not reattach to shared memory ... : 487

2010-02-04 Thread Owen Hartnett

On Aug 9, 2009, at 3:10 PM, Magnus Hagander wrote:

 On Sun, Aug 9, 2009 at 10:22, Abraham, Danny danny_abra...@bmc.com wrote:
 Does anybody have a proven way to reproduce this problem?
 
 So that I can tell that the patch really does fix it?
 
 No. That's the reason it has taken so long to figure out how to fix it
 (which we hopefully have done now).

Has a consensus been arrived at?  Has this patch been rolled into the current release?

I have a client who wants to install W2K8 Server Release 2, and he's asking if 
there's any conflict.  I can update him to the latest version when he does.

Thanks,

-Owen
 
 
 -- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/
 
 




Re: [GENERAL] Re: PG fails on Windows Server 2008: could not reattach to shared memory ... : 487

2010-02-04 Thread Owen Hartnett

On Feb 4, 2010, at 3:47 PM, Magnus Hagander wrote:

 2010/2/4 Owen Hartnett o...@clipboardinc.com:
 
 On Aug 9, 2009, at 3:10 PM, Magnus Hagander wrote:
 
  On Sun, Aug 9, 2009 at 10:22, Abraham, Danny danny_abra...@bmc.com wrote:
  Does anybody have a proven way to reproduce this problem?
  
  So that I can tell that the patch really does fix it?
 
 No. That's the reason it has taken so long to figure out how to fix it
 (which we hopefully have done now).
 
  Has a consensus been arrived at?  Has this patch been rolled into the 
  current release?
 
  The patch is included in the latest versions, yes. I don't recall
  seeing any reports of the problem remaining after the patch, but that's
  not 100% certain, of course.

Thanks, Magnus, I can see no reason why we shouldn't attempt to migrate to 
Server 2008 now.

-Owen


Re: [GENERAL] Functions returning multiple rowsets

2009-09-28 Thread Owen Hartnett


On Sep 28, 2009, at 3:31 PM, Mike Christensen wrote:

One thing I like about Microsoft SQL is you can write a sproc that  
does:


SELECT * FROM TableA
SELECT * FROM TableB

And in .NET, you'll have a DataSet object with two DataTables, one for
each table.  Do either of the techniques outlined below provide this
functionality?  Though I suppose in .NET you'd be using the Npgsql
adapter instead...


I use the NpgSql interface for just this type of transparent .NET  
stuff, and it works plenty fine for my uses.


-Owen




Mike

On Mon, Sep 28, 2009 at 11:17 AM, Merlin Moncure  
mmonc...@gmail.com wrote:
On Mon, Sep 28, 2009 at 2:05 PM, Merlin Moncure  
mmonc...@gmail.com wrote:
On Mon, Sep 28, 2009 at 12:49 PM, Thom Brown thombr...@gmail.com  
wrote:

Hi,

Is it possible to create a function using 'SQL' as language which could
return multiple rowsets, such as SELECT * FROM TABLE1; SELECT * FROM
TABLE2; where both results are returned in the output?  I know this can
be done in stored procedures in other RDBMS, but can this be done in a
function?


you have a couple of approaches:
*) declare refcursors inside the function and reference them later in
the transaction
*) make temp tables
*) arrays:

create function two_sets(_foos out foo[], _bars out bar[]) returns record as
$$
  select array(select foo from foo), array(select bar from bar);
$$ language sql;

with s as (select * from two_sets()),
foo as (select unnest(_foos) from s),
bar as (select unnest(_bars) from s)
select
 (select count(*) from foo) as no_foos,
 (select count(*) from bar) as no_bars;


I should mention the query above only works in 8.4+.  The array
approach generally only works as of 8.3 and has limits (don't return a
billion records).  Also, it's not good style (IMO) to name 'with'
expressions the same as actual tables:

with s as (select * from two_sets()),
f as (select unnest(_foos) from s),
b as (select unnest(_bars) from s)
select
 (select count(*) from f) as no_foos,
 (select count(*) from b) as no_bars;

is cleaner.

merlin
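
For completeness, the refcursor approach mentioned above would look roughly 
like this (a sketch only, against the same assumed foo and bar tables; the 
cursor names are arbitrary):

create function two_sets_cursors() returns setof refcursor as $$
declare
  c1 refcursor := 'foo_cur';
  c2 refcursor := 'bar_cur';
begin
  open c1 for select * from foo;
  open c2 for select * from bar;
  return next c1;
  return next c2;
end;
$$ language plpgsql;

begin;
select two_sets_cursors();
fetch all from foo_cur;
fetch all from bar_cur;
commit;

The cursors only live until the end of the surrounding transaction, which is 
why the fetches have to happen inside the begin/commit.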










Re: [GENERAL] Attempting to connect

2009-02-17 Thread Owen Hartnett

At 3:34 PM -0800 2/15/09, Bob Pawley wrote:

Finally Success

Thanks everyone



Here's one I ran into today - connection with the server was blocked 
by Kaspersky Anti-Virus.  The clue was that other machines on the 
network could connect, but this one couldn't.  Funny thing was that 
we had originally had it connecting; then the next day it wouldn't 
(anti-virus update, maybe).  People should add this to their 
toolkit when they're looking for connection problems.


-Owen



[GENERAL] libpq on iPhone?

2009-02-02 Thread Owen Hartnett


I've been able to use the libpq libraries on the iPhone simulator, 
but I need to compile them for the ARM processor to actually run them 
on an iPhone.  It seems my choices are:


1) add the relevant .c files to the iPhone XCode project and try to 
get the conditional macros and configuration correct.


or

2) Try to recompile it from the original source distribution, telling 
the makefiles that I want it built for the ARM processor instead of 
the Intel processor the simulator uses.


I started along path 1, but wonder if path 2 is a better idea.  You 
guys seem to have a mechanism for compiling on new processors, and 
maybe that might be the strategy to take.


Any recommendations?

-Owen



[GENERAL] pg_dump restore as transaction?

2008-12-01 Thread Owen Hartnett


If my perusal of the sql generated by pg_dump is correct, it doesn't 
appear to be wrapped in a transaction, and thus a failed restore might 
complete only partially?


Or does

psql myDatabase < mypg_dumpfile

wrap the file stream in a transaction?

If not, is there a reason why it can't be (some process inside the file 
that cannot be run as part of a transaction)?


Or should I just add begin and commit statements at the beginning and 
end of file?


I want to provide a mechanized daily update of one schema into a 
differently named database, and I'd like it to roll back if it fails.


-Owen



Re: [GENERAL] pg_dump restore as transaction?

2008-12-01 Thread Owen Hartnett

At 12:37 PM -0500 12/1/08, Tom Lane wrote:

Owen Hartnett [EMAIL PROTECTED] writes:

 If my perusal of the sql generated by pg_dump is correct, it doesn't
 appear to be wrapped in a transaction, and thus a failed restore might
 complete only partially?


That's correct, and intentional.  You can use pg_restore's -1 switch
or add begin/end manually if you don't want it to work that way.

regards, tom lane
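
(For a plain SQL dump fed through psql, the -1/--single-transaction switch 
used together with -f should give the same wrapping, e.g. 
psql -1 -f mypg_dumpfile myDatabase.)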


Thanks to everybody for their help.  You guys are great.

-Owen



Re: [GENERAL] Question on libpq parameters

2008-11-30 Thread Owen Hartnett

At 1:26 PM -0500 11/30/08, Tom Lane wrote:

Owen Hartnett [EMAIL PROTECTED] writes:

 Yes, it did.  I'm confused.  My first parameter is a string, but the
 following two are integers.  I thought the paramTypes parameter
 indicated the type.  Do the integers need to be sprintf'd to strings?


Yes.

Alternatively, you could pass the integers as binary, but that's not
notation-free either (you need htonl or some such).

regards, tom lane


Thanks, that did it.  I got confused by the binary parameters, and by 
PQgetvalue returning binary values through a char *.


-Owen



[GENERAL] Question on libpq parameters

2008-11-29 Thread Owen Hartnett


The following libpq code chokes on me with "invalid input syntax for 
integer" (state == PGRES_FATAL_ERROR, and aPtr holds the error message 
text).  It fails on the call to PQexecPrepared.  I suspect I'm not 
doing the parameters right.  Can anyone spot anything wrong?


Thanks,

-Owen

char * getPicture(PGconn * myconnection, char * maplot, int unitno, int bldgno)
{
    PGresult * resultant;
    Oid paramTypes[3] = { 25, 23, 23 };
    ExecStatusType state;

    char * sqlquery = "Select image from images where maplot = $1 "
                      "and unitno = $2 and imageno = $3";

    const char * myparamValues[3];
    char * aPtr;

    myparamValues[0] = maplot;
    myparamValues[1] = unitno;
    myparamValues[2] = bldgno;

    resultant = PQprepare(myconnection, "pictureplan", sqlquery, 3, paramTypes);

    if (PQresultStatus(resultant) == PGRES_COMMAND_OK)
    {
        resultant = PQexecPrepared(myconnection, "pictureplan", 3,
                                   myparamValues, NULL, NULL, 1);

        if (PQresultStatus(resultant) == PGRES_TUPLES_OK)
        {
            aPtr = PQgetvalue(resultant, 0, 0);
            return aPtr;
        }
        else
        {
            state = PQresultStatus(resultant);
            aPtr = PQresultErrorMessage(resultant);
        }
    }
    else
    {
        state = PQresultStatus(resultant);
        aPtr = PQresultErrorMessage(resultant);
    }
    return NULL;
}




Re: [GENERAL] Question on libpq parameters

2008-11-29 Thread Owen Hartnett

At 11:45 PM -0500 11/29/08, Tom Lane wrote:

Owen Hartnett [EMAIL PROTECTED] writes:

 The following libpq code chokes on me with "invalid input syntax for
 integer" (state == PGRES_FATAL_ERROR, and aPtr holds the error message
 text).  It fails on the call to PQexecPrepared.  I suspect I'm not
 doing the parameters right.  Can anyone spot anything wrong?


You can't just point to integers as if they were strings.  Didn't
your compiler complain about that?

regards, tom lane


Yes, it did.  I'm confused.  My first parameter is a string, but the 
following two are integers.  I thought the paramTypes parameter 
indicated the type.  Do the integers need to be sprintf'd to strings? 
There's only one paramValues array, and its type is a const char * 
array.


-Owen



Re: [GENERAL] postgresql and Mac OS X

2008-11-08 Thread Owen Hartnett

At 7:51 PM -0500 11/7/08, Tom Allison wrote:

adam_pgsql wrote:
When I do the install script in contrib it says I have no rights 
to the directory.  It was in /usr/local/pgsql/data/log and I 
changed it to /usr/local/pgsql/log.  It was set as root.wheel with 
755 permissions so I suspect it's mad at me because the postgres 
user was left in the cold.


Have you switched on logging in postgresql.conf?



doh!

There's no postgresql.conf file, just a postgresql.conf.sample.

Guess I have to start from .sample and work my way up...



Mine was installed as part of the distribution.

-Owen



Re: [GENERAL] Strange Postgresql behavior solved

2008-07-26 Thread owen hartnett


On Jul 26, 2008, at 2:32 AM, Leif B. Kristensen  
[EMAIL PROTECTED] wrote:



On Saturday 26. July 2008, Owen Hartnett wrote:

Probably some funky stuff with the router (not one of their expensive
ones) that caused all the consternation, but I originally thought
corrupt database (because I could get 117 records to come out fine,
but not the 118th).  Also, I had narrowed it down to failing only
when accessing the last three fields of that 118th record, the first
40 fields were fine.


That sounds a lot like the game mode router bug:

http://www.azureuswiki.com/index.php/Torrents_stop_at_99_percent


Yes, it looks like exactly that behavior.  The read failed in the exact 
same record every time, even at the same column, and the server is 
sitting in a DMZ.


-Owen




[GENERAL] Strange Postgresql behavior solved

2008-07-25 Thread Owen Hartnett


I spent a day on this, and it's really not a PostgreSQL issue, but I 
thought I'd post it in case someone else comes down with it.


Scenario:

I moved the physical location and networking environment of the 
server.  It's a Mac OS X Xserve, but that isn't germane to the 
story.  Originally, the server was the DHCP router for the network; 
now it sits in a demilitarized zone off a D-Link router that's 
providing DHCP and NAT.


Symptoms:

Postgres was unable to resolve *some* simple queries, like "Select * 
from salestable where thekey = 118", although it would work for 
thekey values of 1 all the way to 117.  The connection would just 
freeze, and time out after a couple of minutes.


My application worked this way, and so did pgAdmin, but Navicat LE didn't!

Solution:

I finally realized that my application and pgAdmin were both 
accessing the server using the domain name, and Navicat was using the 
IP number.  Indeed, replacing the connection data with the IP number 
on the app and pgAdmin made the world safe again.


Probably some funky stuff with the router (not one of their expensive 
ones) that caused all the consternation, but I originally thought 
corrupt database (because I could get 117 records to come out fine, 
but not the 118th).  Also, I had narrowed it down to failing only 
when accessing the last three fields of that 118th record, the first 
40 fields were fine.


-Owen



[GENERAL] Finding records that are not there

2008-05-22 Thread Owen Hartnett


Hi:

This has gotta be elementary SQL 101, but I'm having a mental block as 
to why this doesn't work.


I have two tables whose indexes cover the same two columns, maplot and 
unitno, and I want to find all the records in the commcost table that 
don't have a corresponding record in the bldg table.


The SQL I've tried is:

select commcost.maplot, commcost.unitno from commcost
where not exists(select 1 from commcost, bldg
where commcost.maplot = bldg.maplot and
commcost.unitno = bldg.unitno)
order by commcost.maplot

It returns no records although I know that there are records in 
commcost which do not match keys with records from bldg.


Help!  What am I doing wrong?

-Owen



Re: [GENERAL] Finding records that are not there

2008-05-22 Thread Owen Hartnett

At 6:47 PM +0200 5/22/08, hubert depesz lubaczewski wrote:

On Thu, May 22, 2008 at 12:21:35PM -0400, Owen Hartnett wrote:

 The SQL I've tried is:
 select commcost.maplot, commcost.unitno from commcost
 where not exists(select 1 from commcost, bldg
 where commcost.maplot = bldg.maplot and
 commcost.unitno = bldg.unitno)
 order by commcost.maplot


change it to:


 select commcost.maplot, commcost.unitno from commcost
 where not exists(select 1 from bldg
 where commcost.maplot = bldg.maplot and
 commcost.unitno = bldg.unitno)
 order by commcost.maplot


or simply write:

select * from commcost except select * from bldg;

depesz
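
For what it's worth, the plain EXCEPT form only works if both tables have 
identical column lists.  An equivalent anti-join spelling (assuming maplot is 
never null in bldg) would be:

select c.maplot, c.unitno
from commcost c
left join bldg b on b.maplot = c.maplot and b.unitno = c.unitno
where b.maplot is null
order by c.maplot;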


Thank you very much for your quick response!

-Owen



Re: [GENERAL] Linux v.s. Mac OS-X Performance

2007-12-01 Thread Owen Hartnett

At 6:15 PM -0500 11/30/07, Greg Smith wrote:

On Fri, 30 Nov 2007, Guido Neitzer wrote:

Actually - in our tests, when used with a load similar to pgbench 
(e.g. typical web applications), Mac OS X 10.4.7 performed better 
than Yellow Dog Linux on the same G5 hardware as soon as more than 
about 90 concurrent clients were simulated.


At this point, that's just an interesting historical note.  Yellow 
Dog is not a particularly good Linux compared with the ones that 
have gotten years worth of performance tuning for Intel/AMD 
processors.  And you really can't extrapolate anything useful today 
from how it ran on a G5--that's two layers of obsolete.  The 
comparisons that matter now are Intel+Mac OS vs. Intel+a popular 
Linux aimed at servers.


As an unrelated note, I'm curious what you did with pgbench that you 
consider it a reasonable simulation of a web application.  The 
default pgbench transaction is very write-heavy, and the read-only 
option available is way too simple to be realistic.  You'd need to 
pass in custom scripts to execute to get something that acted like a 
web app.  pgbench is an unruly tool, and there are many ways to run it 
that give results that aren't so useful.
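
(A custom run would look something like pgbench -c 90 -t 1000 -f webapp.sql, 
with webapp.sql holding the statement mix a typical page view issues; the 
file name here is just an assumption.)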


If this is any help to anyone, I'm running PostgreSQL on an Intel 
Xserve under Mac OS X.  Performance is more than fine for my usage.  If 
anyone would like me to run some benchmark code to test comparisons, 
I'd be happy to do so.


-Owen



Re: [GENERAL] POLL: Women-sized t-shirts for PostgreSQL

2007-11-22 Thread Owen Hartnett

At 3:30 PM -0800 11/21/07, Selena Deckelmann wrote:

Hello,

I have been looking into getting some woman-sized PostgreSQL shirts,
in addition to the more typical men-sized tshirts. The particular
style is American Apparel shirts, style # 2102. They are normal
looking t-shirts, just proportioned a bit more correctly for a
woman's body. The shirt says 'PostgreSQL' in small print on the front,
and has the big elephant logo on the back. We'll most likely ask for a
$20 donation to the PostgreSQL non-profit for each shirt.

So, I'm conducting a small poll:

If you are interested in a woman-sized shirt, please EMAIL ME DIRECTLY
with a quantity, and size (XS, S, M, L, XL). If you would not wear a
PostgreSQL woman-sized shirt yourself, please consider whether you
would maybe get one as a gift for someone else this holiday season :)


In answer to your query, I believe the following SQL would do it:

Select count(*), size from postgresqlusers
where gender = 'female' or hasFemaleAcquaintance group by size;

If this doesn't work, what version are you using?

Oh... you're running a POLL!  Never mind.

-Owen



Re: [GENERAL] Generating subtotal reports direct from SQL

2007-10-10 Thread Owen Hartnett

At 1:32 AM +0100 10/10/07, Gregory Stark wrote:

Owen Hartnett [EMAIL PROTECTED] writes:

 Such that the final table has additional subtotal rows with the 
 aggregate sum of the amounts.  I'm thinking I can generate two tables 
 and merge them, but is there an easier way using a fancy Select 
 statement?


Unfortunately the fancy SQL feature you're looking for is ROLLUP, which
Postgres doesn't support.

I think you would have to do

(
select id, code, amount from foo where code < 10
union all
select null, code, sum(amount) from foo where code < 10 group by code
) order by code, id
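
For readers on PostgreSQL 9.5 or later: ROLLUP is now supported, so the same 
report can be written directly (a sketch against the same foo table):

select id, code, sum(amount) as amount
from foo
where code < 10
group by rollup (code, id)
order by code, id;

Since each id appears once, grouping by (code, id) reproduces the detail 
rows, while the rollup adds the per-code subtotals and the grand total.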



Perfect.  Just what I was looking for.

Thanks!

-Owen



[GENERAL] Generating subtotal reports direct from SQL

2007-10-09 Thread Owen Hartnett


I'm hoping there's a real easy way of doing this that I'm just missing:

Given a Select statement such as:

Select ID, code, amount from foo where code < 10;

that gives me a table like this:

ID   code   amount
------------------
 1      4       20
 2      3       10
 3      4       15
 4      2       10
 5      3        9
 6      3        8

I want to generate a report table like the following (group by code):

ID   code   amount
------------------
 4      2       10
        2       10
 2      3       10
 5      3        9
 6      3        8
        3       27
 1      4       20
 3      4       15
        4       35
                72

Such that the final table has additional subtotal rows with the 
aggregate sum of the amounts.  I'm thinking I can generate two tables 
and merge them, but is there an easier way using a fancy Select 
statement?


-Owen



[GENERAL] Request for feature: pg_dump schema masquerade flag

2007-09-28 Thread Owen Hartnett


I don't think this would be too hard to effect:

When pg_dumping a schema, have an additional flag -m newschemaname, 
that would convert all references in the dump from the original 
schema to the new schema name.


Thus the command:

pg_dump -c -n myoldschemaname -m mynewschemaname mydatabase -f foo

would generate a dump file such that all the data that was in 
myoldschemaname would, upon psql mydatabase < foo, be in a new 
(or replaced) schema mynewschemaname.


The present strategy is to go through the dump and manually change 
the schema names in the dump text.  This is not a reliable mechanism, 
as there may be name collisions between the schema name and other names, 
and there's always the possibility that you might miss one when 
you're hand-modifying the code.


I'd be happy to help on the effort if that makes sense.  I don't know 
what the code to pg_dump is like, though.


-Owen



Re: [GENERAL] problem with transactions in VB.NET using npgsql

2007-09-05 Thread Owen Hartnett

At 10:14 AM -0400 8/28/07, Owen Hartnett wrote:

At 7:05 PM -0400 8/27/07, Tom Lane wrote:

Owen Hartnett [EMAIL PROTECTED] writes:

 I assign the transaction object to each of the commands, but it seems
 that some tables will get updated, even when I call rollback.  Is
 something I'm calling secretly calling commit somewhere?


Dunno anything about vb.net, but this sounds like an autocommit feature
that's not doing what you expect.

If nothing else comes to mind, try setting the DB to log all statements
(see log_statement), and compare the resulting trace to what you think
your code is doing.  That should at least narrow it down a lot.

regards, tom lane


Thanks, I'll give that a try.



I've been able to turn on statement logging (I've set log_statement 
to 'all'), but it doesn't seem to show the begin transaction - commit 
- rollback statements.  Is there another way to have them show up in 
the log?


-Owen



Re: [GENERAL] problem with transactions in VB.NET using npgsql

2007-09-05 Thread Owen Hartnett

At 11:32 AM -0400 9/5/07, Tom Lane wrote:

Alvaro Herrera [EMAIL PROTECTED] writes:

 Owen Hartnett wrote:

 I've been able to turn on statement logging (I've set log_statement to
 'all'), but it doesn't seem to show the begin transaction - commit -
 rollback statements.  Is there another way to have them show up in the log?



 If they don't show up, they are not being executed at all; which
 explains why your transactions don't roll back, because there are no
 transaction blocks defined at all.


In PG 8.2 I'd agree, but older versions are not so good about logging
execution of prepared statements.  What's the server version exactly,
and is there any indication of use of prepared statements in the log?

regards, tom lane


That's what I thought at first, but then I went to pgAdmin's SQL page 
and typed in "Begin" and ran it.  No log shows up, but when I run it 
again from pgAdmin (and from my application), I get a "no nested 
transactions allowed" error (which I know Npgsql doesn't support), 
which indicates there's a transaction there that is not getting 
logged.


If I run transactions with ADO's normal Execute* functions, they work 
fine.  It's when I'm trying to use the dataset write-back routines - 
with an NpgsqlTransaction and an NpgsqlCommandBuilder, using Update, 
Insert and Delete commands on multiple tables one after another - that 
it behaves as if the transaction isn't there, though it blithely 
accepts the BeginTransaction and Commit commands without complaining.


I'm running the latest, 8.2.4.  The server is Mac OS X Server, with 
Windows XP (of course) clients running the ADO software and Npgsql 
stuff.


-Owen



Re: [GENERAL] problem with transactions in VB.NET using npgsql

2007-09-05 Thread Owen Hartnett

At 12:41 PM -0400 9/5/07, Owen Hartnett wrote:

At 11:32 AM -0400 9/5/07, Tom Lane wrote:

Alvaro Herrera [EMAIL PROTECTED] writes:

 Owen Hartnett wrote:

 I've been able to turn on statement logging (I've set log_statement to
 'all'), but it doesn't seem to show the begin transaction - commit -
 rollback statements.  Is there another way to have them show up 
in the log?



 If they don't show up, they are not being executed at all; which
 explains why your transactions don't roll back, because there are no
 transaction blocks defined at all.


In PG 8.2 I'd agree, but older versions are not so good about logging
execution of prepared statements.  What's the server version exactly,
and is there any indication of use of prepared statements in the log?

regards, tom lane


That's what I thought at first, but then I went to pgAdmin's SQL 
page and typed in "Begin" and ran it.  No log shows up, but when I 
run it again from pgAdmin (and from my application), I get a "no 
nested transactions allowed" error (which I know Npgsql doesn't 
support), which indicates there's a transaction there that is not 
getting logged.


If I run transactions with ADO's normal Execute* functions, they 
work fine.  It's when I'm trying to use the dataset write-back 
routines - with an NpgsqlTransaction and an NpgsqlCommandBuilder, 
using Update, Insert and Delete commands on multiple tables one after 
another - that it behaves as if the transaction isn't there, though it 
blithely accepts the BeginTransaction and Commit commands without 
complaining.


I'm running the latest, 8.2.4.  The server is Mac OS X Server, with 
Windows XP (of course) clients running the ADO software and Npgsql 
stuff.





Looking at the problem some more - I've been looking at the logs for 
some updates.  What I do is update most of the tables with the ADO 
dataset write-back (as used in McManus & Goldstein's book 
Database Access with Visual Basic .NET), but I use a simple 
ExecuteNonQuery to update another table, all wrapped up in the same 
transaction.  The transaction affects the ExecuteNonQuery updates, but 
doesn't affect the ADO DataSet write-backs.  In fact, in the logfiles, 
each log line prints "IDLE in transaction" for the 
ExecuteNonQuery statements, but just "IDLE" for the other updates. 
It's almost like the ADO has an independent path, even though I'm 
assigning the same NpgsqlTransaction object to it as I am to the 
ExecuteNonQuery command.


I suppose I'll have to be stepping through that Npgsql code to see 
just what's going on.


-Owen



Re: [GENERAL] postgresql connection string to Vb.NET

2007-08-29 Thread Owen Hartnett

At 8:49 AM -0700 8/29/07, smithveg wrote:

Hi,

I got a connection string for PostgreSQL from this page: 
http://www.connectionstrings.com/default.aspx?carrier=postgresql
I can't test it because it seems I haven't added a reference in Visual 
Studio 2005.

Where can I download the ODBC or .NET data provider in order to 
successfully connect to PostgreSQL from VB.NET?




I recommend Npgsql if you're using ADO and DataSets.  Otherwise, the 
ODBC stuff.  The OLE interface seems slower than Npgsql, but I haven't 
compared it with ODBC.


You can find all the interfaces at:

http://www.postgresql.org/download/

-Owen



Re: [GENERAL] problem with transactions in VB.NET using npgsql

2007-08-28 Thread Owen Hartnett

At 7:05 PM -0400 8/27/07, Tom Lane wrote:

Owen Hartnett [EMAIL PROTECTED] writes:

 I assign the transaction object to each of the commands, but it seems
 that some tables will get updated, even when I call rollback.  Is
 something I'm calling secretly calling commit somewhere?


Dunno anything about vb.net, but this sounds like an autocommit feature
that's not doing what you expect.

If nothing else comes to mind, try setting the DB to log all statements
(see log_statement), and compare the resulting trace to what you think
your code is doing.  That should at least narrow it down a lot.

regards, tom lane


Thanks, I'll give that a try.

-Owen



[GENERAL] problem with transactions in VB.NET using npgsql

2007-08-27 Thread Owen Hartnett


Maybe someone here can figure it out.  Everything updates fine with 
this code, except that when there's an exception, the transaction 
doesn't roll things back.  What I'm trying to do:


Begin a transaction.
Do the update, insert, and delete checks on each of the data tables, 
using a different NpgsqlCommandBuilder for each of the tables.
Commit.
If any failure happens, roll back all the changes to the transaction beginning.

I assign the transaction object to each of the commands, but it seems 
that some tables will get updated, even when I call rollback.  Is 
something I'm calling secretly calling commit somewhere?


My code follows.  Thanks for checking it over.  Sorry about the 
length, but I wanted you to see that I'm updating multiple tables 
with multiple dataadapters.


-Owen

Option Explicit On
Imports System.Windows.Forms
Imports npgsql
Imports System.Xml.Serialization
Imports System.IO
Imports System.Collections.Generic
Imports System.Configuration
' Note: some controls, in the forms designer, cover other controls,
' i.e. CommUsageCB covers styleCB

Public Class ParcelDisplayFrm

    Public Sub WriteAllData()
        Dim trans As NpgsqlTransaction = Nothing
        Dim cmd As NpgsqlCommandBuilder
        Dim i As Integer
        Dim success As Boolean

        Try
            If Not statusReadOnly Then
                i = vbCancel
                success = priceIt(Me, full_DataSet, True, True, pc)
                dt = full_DataSet.Tables(currentSchema & ".parcel")
                dt.Rows(0).EndEdit()
                dt = full_DataSet.Tables(currentSchema & ".accounts")
                dt.Rows(0).EndEdit()
                dt = full_DataSet.Tables(currentSchema & ".bldg")
                For i = 0 To dt.Rows.Count - 1
                    dt.Rows(i).EndEdit()
                Next i
                dt = full_DataSet.Tables(currentSchema & ".commcost")
                For i = 0 To dt.Rows.Count - 1
                    dt.Rows(i).EndEdit()
                Next i
                dt = full_DataSet.Tables(currentSchema & ".outbuildings")
                For i = 0 To dt.Rows.Count - 1
                    If dt.Rows(i).RowState = DataRowState.Added Then
                        dt.Rows(i).Item("maplot") = Form1.currentMapLot
                    End If
                    Debug.Print(dt.Rows.Count)
                    dt.Rows(i).EndEdit()
                Next i
                If Not dirtySketch And Not full_DataSet.HasChanges Then
                    Exit Sub    ' Nothing to change
                End If

                Dim dg As New SaveChangesDlog
                If dg.ShowDialog = Windows.Forms.DialogResult.Cancel Then Exit Sub ' don't save

                writeFinalize()
                dt = full_DataSet.Tables(currentSchema & ".parcel")
                m_SqlConnection.Open()
                ' create a transaction for the rest of all the changes
                trans = m_SqlConnection.BeginTransaction

                cmd = New NpgsqlCommandBuilder(parcel_DataAdapter)

                Dim parcelchanges As DataTable = dt.GetChanges(DataRowState.Modified)

                If parcelchanges IsNot Nothing Then
                    parcel_DataAdapter.UpdateCommand = cmd.GetUpdateCommand(dt.Rows(0))
                    parcel_DataAdapter.UpdateCommand.Transaction = trans
                    parcel_DataAdapter.Update(parcelchanges)
                End If
                parcelchanges = dt.GetChanges(DataRowState.Deleted)
                If parcelchanges IsNot Nothing Then
                    parcel_DataAdapter.DeleteCommand = cmd.GetDeleteCommand(dt.Rows(0))
                    parcel_DataAdapter.DeleteCommand.Transaction = trans
                    parcel_DataAdapter.Update(parcelchanges)
                End If
                parcelchanges = dt.GetChanges(DataRowState.Added)
                If parcelchanges IsNot Nothing Then
                    parcel_DataAdapter.InsertCommand = cmd.GetInsertCommand(dt.Rows(0))
                    parcel_DataAdapter.InsertCommand.Transaction = trans
                    parcel_DataAdapter.Update(parcelchanges)
                End If

                ' accounts table
                cmd = New NpgsqlCommandBuilder(accts_DataAdapter)
                dt = full_DataSet.Tables(currentSchema & ".accounts")
                Dim acctchanges As DataTable = dt.GetChanges(DataRowState.Modified)

                If acctchanges IsNot Nothing Then
                    accts_DataAdapter.UpdateCommand = cmd.GetUpdateCommand(dt.Rows(0))
                    accts_DataAdapter.UpdateCommand.Transaction = trans
                    accts_DataAdapter.Update(acctchanges)
                End If
                acctchanges = dt.GetChanges(DataRowState.Deleted)
                If acctchanges IsNot Nothing Then
                    accts_DataAdapter.DeleteCommand = cmd.GetDeleteCommand(dt.Rows(0))
                    accts_DataAdapter.DeleteCommand.Transaction = trans
                    accts_DataAdapter.Update(acctchanges)

[GENERAL] Take your PostgreSQL on the road, and live to tell of it.

2007-08-07 Thread Owen Hartnett


Here's what I want to do:

Checkpoint the database in whatever way is appropriate.

Make copies of the database on several laptops for use in the field 
(in automobiles) to do database changes.  Record all the changes made 
since the checkpoint as the user makes them.


Periodically bring all the changes back into the office and apply the 
changes made out in the field to the main database.


Repeat the process.

Notes:

1) Unless a user makes a mistake, there should be no changes to the 
same records by multiple users (i.e. any concurrency violations 
should be registered as an exception).


2) I'd prefer it to just record the sql commands executed by the 
database as text, then use psql < myFieldcommands to update the 
database.  This will also help me isolate any concurrency exceptions, 
and I'd like to wrap the whole update in a transaction, so I can roll 
the whole thing back if it does detect concurrency problems anywhere 
in the process (then I can edit out the offending lines).


3) There's no particular rush to update the database - I don't need 
this real-time.


4) Users might make their checkpoint at a different time from other users.

Since I'm relatively new to Postgres, (and I apologize if this has 
come up before), I'm hoping some respondents will provide me with the 
correct strategy.


-Owen



Re: [GENERAL] Take your PostgreSQL on the road, and live to tell of it.

2007-08-07 Thread Owen Hartnett

At 2:15 PM -0700 8/7/07, Ben wrote:
How many users do you have? Have you considered giving each user a 
schema in which to make their changes? It sounds like you don't 
really have a multi-master replication issue, which makes things 
easier.


Maybe I'm not understanding the strategy, but I don't see what this 
buys me, as I have to end up with a single database schema that has 
incorporated all the changes.  If I can record all the SQL a user 
executes from the checkpoint on, then I can psql < it into the main 
database.  Once I've combined their data into the database that sits 
on the server, I don't need their database copies anymore.


-Owen




Re: [GENERAL] Take your PostgreSQL on the road, and live to tell of it.

2007-08-07 Thread Owen Hartnett

At 5:13 PM -0500 8/7/07, Scott Marlowe wrote:

On 8/7/07, Owen Hartnett [EMAIL PROTECTED] wrote:


 Here's what I want to do:

 Checkpoint the database in whatever way is appropriate.

 Make copies of the database on several laptops for use in the field
 (in automobiles) to do database changes.  Record all the changes made
 since the checkpoint as the user makes them.

 Periodically take all the changes back into the office, take the
 changes made out in the field and apply them to the main database.

 Repeat the process.

 Notes:

 1) Unless an user makes a mistake, there should be no changes to the
 same records by multiple users.  (i.e. any concurrency violations
 should be registered as an exception.)

 2) I'd prefer it to just record the sql commands executed by the
 database as text, then use psql  myFieldcommands to update the
 database.  This will also help me isolate any concurrency exceptions,
 and I'd like to wrap the whole update in a transaction, so I can roll
 the whole thing back if it does detect concurrency problems anywhere
 in the process (then I can edit out the offending lines).

 3) There's no particular rush to update the database - I don't need
 this real-time.

 4) Users might make their checkpoint at a different time from other users.


Given that each person is likely to be operating only on their
own data set, I'd use an integer range for each person.  Make an int
field in each table, and give each user a 1,000,000-id range to play
in, or something like that.  You can even set it up so that the app
uses sequences, with each one starting at the user's first id, not
cycling, and stopping when it reaches the end, to keep them from
bumping into the next person's range.

Heck, go with bigint and give each person a 1,000,000,000 range.  Then
you could still handle 9,223,372,035 or so users before you'd run out
of sequences for each.

Heck, you could even write a system of update functions that checked
the userid against their numeric range and only updated the data if it
was in their range.  Send it to a coworker for approval if it's not.
I'm having a few too many mad scientist moments right about now.  Got to
get back to my data mining project...
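
A sketch of that range idea (table, column, and sequence names are assumptions):

-- on laptop 1:
create sequence parcel_id_seq minvalue 1000000 maxvalue 1999999 no cycle;
alter table parcel alter column id set default nextval('parcel_id_seq');
-- laptop 2 gets minvalue 2000000 / maxvalue 2999999, and so on;
-- hitting the maxvalue then raises an error instead of colliding.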


This would probably work, but it seems like overkill...I'll have to 
think about it some more...


-Owen



Re: [GENERAL] Linux distro

2007-08-01 Thread Owen Hartnett

At 4:52 PM +0200 8/1/07, Leif B. Kristensen wrote:

On Wednesday 1. August 2007 16:15, Madison Kelly wrote:


/Personally/, I love Debian on servers.

It's not quite as 'hardcore' as Gentoo (a great distro, but not one to
start with!). It's the foundation of many of the popular distros
(Ubuntu, Mepis, Knoppix, etc) and the Debian crew is very careful
 about what they put into the 'stable' repositories.


I agree totally. Debian in a server configuration is quite easy to get
started with, and is rock solid. My first Linux test server (my old
Pentium 133 MHz desktop) way back in 2002 ran Debian Woody. I kept it
running until it died from old age a couple of years ago. Later I fell
in love with Gentoo. But if I'd have to run a server with maximum
stability and uptime, I think that I'd still prefer Debian.


As an alternative viewpoint, I've been running the latest Postgres on 
Mac OS X Server 10.4, and it's been great for me.  It was my first 
time running a server, and my first serious use of Postgres (although I 
have had a lot of previous unix experience).  All the power of unix, 
all the ease of the Macintosh - and its server installation gives you 
lots of great things for free and already installed.  Granted, most of 
it is publicly available, but having it already installed and ready for 
use is the big advantage.  Not only that, but I can run windoze in 
Parallels (or even Boot Camp if I desired).


-Owen



Re: [GENERAL] PostGreSQL for a small Desktop Application

2007-06-13 Thread Owen Hartnett

At 12:34 PM +0200 6/13/07, Magnus Hagander wrote:

On Mon, Jun 11, 2007 at 10:44:38AM -0700, Gabriele wrote:

 I'm going to develop a medium-sized business desktop client-server
 application which will be deployed mostly on small networks and
 later, eventually, hopefully, on medium-sized networks.
 It will probably be developed using C#.

 I do need a solid DBMS which can work with the .Net framework.  I know
 PostgreSQL is a good DBMS in general (it sports most of the advanced
 DBMS features, transactions and stored procedures included) but I
 wonder if it is suited for my application.

 The knowledge base of my users is very low, and the servers will be
 standard desktop-class computers, most probably running Windows XP
 (and Vista later on, I suspect).
 The service should be lightweight enough to run on such a server,
 and I need silent installation and configuration because I can't
 expect my users to be able to configure a DBMS.


Silent installation is available and works fine. There will be a bit of
overhead, especially in disk usage, compared to an embedded database. But
it's much better than most commercial ones, like MSDE.
See http://pginstaller.projects.postgresql.org/silent.html.



 Additionally, I need a passable-to-good data provider to interface
 PostgreSQL with .Net, which ideally provides better performance than
 ODBC (I don't know if such a thing exists, and I hope it is free).


npgsql (http://pgfoundry.org/projects/npgsql) works very well for .net.



 Anyway, performance shouldn't be a big issue; I expect a low
 concurrency level (fewer than 10 users) and a low-to-medium volume of
 rows and queries.  If more users and more data are needed for an
 especially big customer, I can simply suggest a bigger, dedicated
 server.  (Different problems will arise for the aggregated data which
 will feed the web application, but for that we will have a real server.)

 Is PostgreSQL suited for such use?  If not, which alternatives are
 there?  When using PostgreSQL in such a way, are there any suggestions
 to be followed?  Links to sources which I may find interesting (how to
 do a silent install, basic hardware requirements, and so on)?


Your other option would be to use an embedded database like SQLite. It has
a much smaller footprint, but is of course also a lot less capable. But it
could be worthwhile to check it out.

As for suggestions, well, don't forget to run autovacuum and to schedule
your backups properly. You can certainly not expect your users to do that
:) Also, plan and test in advance a method for upgrading the installations
- you'll always want to be at the latest release in a branch as soon as
possible after the release.


I'm in almost exactly the original poster's boat.  I introduced 
PostgreSQL into a rewrite of an old application that ran with an 
Access backend, and I'm certainly glad I moved up.

There are a couple of .Net interfaces.  The OLE one is OK, but seemed 
a lot slower than Npgsql, which really gives pretty good performance, at 
least for my usage.  I've used the ODBC interface with Crystal 
Reports, but not enough to measure its performance.  I'd 
recommend Npgsql.


Right now my backend is running on a Macintosh Xserve, which runs a 
backup cron task every night and rsyncs the backups over to an 
offsite location.  Since my database isn't that big (~30MB), this 
isn't a big deal.

I've run Postgres as a server on both Windows and Mac, and it seems 
to put a bigger strain on Windows performance than on Mac 
performance (probably because the Mac's multi-tasking scheduler 
suits it better).  I'll probably use Windows as the installed 
backend, but I might just drop in a Mac Mini if that's problematic 
(not because of Windows itself, but because of trying to get space on 
a client's server).


-Owen



Re: NULLS and User Input WAS Re: [GENERAL] multimaster

2007-06-04 Thread Owen Hartnett

At 12:37 AM +0200 6/4/07, PFC wrote:

Yeah, it is awful ;^)  However the existing system is equally awful
because there is no way to enter NULL!


Consider this form :

First name :Edgar
Middle name :   J.
Last name : Hoover

Now, if someone has no middle name, like John Smith, should we use 
NULL or "" for the middle name?
NULL usually means "unknown" or "not applicable", so I believe we 
have to use the empty string here.  It makes sense to be able to 
concatenate the three parts of the name without having to put 
COALESCE() everywhere.


Now consider this form :

City:
State   :
Country :

If the user doesn't live in the US, State makes no sense, so it 
should be NULL, not the empty string.  There is no unnamed state. 
Also, if the user does not enter his city name, this does not mean 
he lives in a city whose name is "".  So NULL should be used, too.


It is very context-dependent.


My take on the NULL philosophy is that NULL should indicate that no 
data has been entered.  If the data for the record is not applicable, 
then it should have a zero length string, indicating that the field 
has been considered by the user, and that a blank value is 
appropriate.  A NULL field on an entered record should indicate an 
error condition, rather than that the field is not appropriate to the 
context.


Thus, NULL fields on a completed record would mean either that they 
were never presented to the user (thus, did not appear in the UI), or 
an error condition.


The advantage to this is that, if enforced, a count of the non-null 
records will show those operated on by a user vs. those untouched by 
a user.


-Owen



[GENERAL] Turning on logging

2007-06-02 Thread Owen Hartnett


I know this is in the docs somewhere, and it's probably staring me in 
the face, but I haven't been able to find it:


I'm running 8.2.4 through npgsql - how do I log:

1) connections to the database

2) updates, deletes, adds


Is this set in the configuration file, or on the startup command line? 
I'm running on Mac OS X Server using the Ray Aspeitia/David Wheeler 
startup script with log rotation turned on (using 
/usr/sbin/rotatelogs, the Apache log-rotation utility).


Thanks for the help!

-Owen
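
For the record, both of these are postgresql.conf settings rather than 
startup-command options (a sketch; a config reload should be enough to pick 
them up):

log_connections = on
log_statement = 'mod'   # logs inserts, updates, deletes, and other data-changing statements

'mod' captures data modifications only; 'all' would also log every query.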



Re: [GENERAL] Schema as versioning strategy

2007-04-26 Thread Owen Hartnett

At 9:23 AM +0100 4/26/07, Richard Huxton wrote:

Jonathan Vanasco wrote:


On Apr 25, 2007, at 2:05 PM, Richard Huxton wrote:


Owen Hartnett wrote:
I want to freeze a snapshot of the database every year (think 
of end of year tax records).  However, I want this frozen version 
(and all the previous frozen versions) available to the database 
user as read-only.  My thinking is to copy the entire public 
schema (which is where all the current data lives) into a new 
schema, named 2007 (2008, etc.)


Sounds perfectly reasonable. You could either do it as a series of:
  CREATE TABLE archive2007.foo AS SELECT * FROM public.foo;
or do a pg_dump of schema public, tweak the file to change the 
schema names and restore it.


the create table method won't copy the constraints + fkeys .


Shouldn't matter for an archive though, since you'd not want anyone 
to have permissions. Still, pg_dump is my preference. Apart from 
anything else, you can keep a copy of the dump around too.



Thanks to everyone for all the replies.  You've been most helpful. 
It looks like pg_dump is the way to go, though I'll have to think 
about it, because I'm ultimately looking for a mechanical process that 
will automatically tweak the schema names.  I don't want to have to 
visit clients every year to archive their data.  Since the pg_dump 
file format might change, my program may have to be version-dependent.
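
A rename-based alternative that avoids rewriting the dump at all (a sketch; 
schema names assumed, and note that DDL is transactional in PostgreSQL):

BEGIN;
ALTER SCHEMA public RENAME TO archive2007;
CREATE SCHEMA public;
COMMIT;

Afterwards, restore a current dump into the fresh public schema, and revoke 
INSERT/UPDATE/DELETE on the archive2007 tables to make the snapshot 
effectively read-only.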


-Owen



[GENERAL] Schema as versioning strategy

2007-04-25 Thread Owen Hartnett


Hi:

I'm a new user of PostgreSQL (8.2.3), and I'm very happy with both 
the performance and operation of the system.  My compliments to the 
many authors who keep this database running and useful.


My question is:

I want to freeze a snapshot of the database every year (think of 
end of year tax records).  However, I want this frozen version (and 
all the previous frozen versions) available to the database user as 
read-only.  My thinking is to copy the entire public schema (which is 
where all the current data lives) into a new schema, named 2007 
(2008, etc.)


Is this a valid plan?  I had thought of using a different database, 
but that would require multiple opens.  I looked to see if there is 
an easy way to script an exact schema copy, but I haven't found 
anything like it in the docs.


This is not heavy usage, nor is there a large amount of data (current 
pg_dump backups are around 30 megabytes).


Am I on the right track, or would you suggest a different strategy?

-Owen

Clipboard, Inc.
