Eric:

On Tuesday, November 18, 2003 12:18 AM, Eric Soroos wrote:

> On Nov 17, 2003, at 7:20 AM, Gary Murphy wrote:
<snip>
> Correct, although you don't actually need to roll back a transaction in
> postgres if there's an error reported from the database, as it puts the
> entire transaction into an abort state. You can (effectively) either
> commit or rollback to clear that transaction and start another one. Of
> course, you really want to be trapping for that sort of error so that
> you can report back to the outer levels.

You are absolutely right.  This applies to any transaction-aware database I
am familiar with.  I just like to add the rollback statement (for my own
awareness) and then skip the rest of the transaction steps for performance.
<snip>

> If the message size is small,
> then it isn't that bad to read it into memory. Either it's going to
> work, or you're already over quota. For large messages, it's more
> dangerous to read into memory, especially depending on the largest
> message that is supported through the mta. Here if maxMessage <
> freeQuota, then quota is not an issue. It's only where
> maxMessage > freeQuota and maxMessage is too much RAM to burn that
> there's a problem. I don't know of many systems where the max mail
> message size is over 10 megs.

> The other possibility is that if you look at it as softQuota =
> enteredQuota and hardQuota = enteredQuota+maxMessage, then you could
> just look to see if they're already over quota before the insert, just
> preventing any further messages past the one that went over.  If that's
> coupled with reading in the entire message to get the size, then you
> already have all of the information for the uniqueid hash and the
> message size, so it's no longer an insert + update on the message
> table.

The current quota implementation is definitely costly.  Establishing a
transaction with failure and/or over-quota control for the message inserts
(rolling back inserted data) could be a first step, but I agree a more
elegant solution is desirable.
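The softQuota/hardQuota scheme quoted above can be sketched as a single
pre-insert check (a toy sketch; the names are illustrative, not DBMail's):

```python
def may_accept(used, entered_quota, max_message):
    """Pre-insert check for the soft/hard quota scheme.

    softQuota = enteredQuota; hardQuota = enteredQuota + maxMessage.
    A message is accepted only while usage is still under the soft
    quota, so at most one message can push usage past it, and usage
    can never exceed the hard quota.
    """
    return used < entered_quota


# Worst case: a maxMessage-sized message arrives just under quota.
entered_quota, max_message = 10_000_000, 2_000_000
used = entered_quota - 1
assert may_accept(used, entered_quota, max_message)
used += max_message                           # the one message that goes over
assert not may_accept(used, entered_quota, max_message)
assert used <= entered_quota + max_message    # never past the hard quota
```

This avoids any per-message size bookkeeping inside the transaction: one
comparison before the insert replaces the insert + update dance.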

Limiting the message to a given amount of memory, and dealing with it
piece-by-piece otherwise, sounds like a reasonable approach.  With this
approach, all messages < XXXbytes (1-2MB?) could be checked for quota
violations in db_insert_message via a conditional check of a new size
parameter before an insert ever happens.  This would also eliminate the
update for those messages, which should statistically represent a very high
percentage of messages received.  Larger messages would still have to be
inserted one piece at a time or spooled to a tmp file for statistics
gathering before being inserted into the database, but there shouldn't be
many of these to handle.
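That routing decision could look roughly like this (a sketch only — the
threshold value is a guess from the text, and plan_delivery is an invented
stand-in, not the real db_insert_message call):

```python
SMALL_MESSAGE_LIMIT = 2 * 1024 * 1024   # the 1-2MB threshold from above

def plan_delivery(message_size, quota_free):
    """Decide how to handle a message of known size (illustrative)."""
    if message_size > quota_free:
        return "reject"                 # over quota: no insert, no update
    if message_size <= SMALL_MESSAGE_LIMIT:
        return "insert_whole"           # one insert, size known up front
    return "spool_then_insert"          # stream/spool large messages


assert plan_delivery(500_000, 1_000_000) == "insert_whole"
assert plan_delivery(5 * 1024 * 1024, 100 * 1024 * 1024) == "spool_then_insert"
assert plan_delivery(3_000_000, 1_000_000) == "reject"
```

The common (small-message) path does exactly one quota comparison and one
insert, which is where the statistical savings would come from.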

<snip>
For a more radical storage-saving approach, we could consider storing only
one copy of a message for all recipients within the database.  This is the
approach Oracle uses for its email package.  My users are insisting on this,
so I am already working to make it happen.  However, the change will require
a large number of the base db_ methods to be changed, one or two new
recipient-delivery db_ methods created, and one table added (though
physmessage can be removed), along with delivery code modified to store only
one copy of the message in messages and messageblks.  Finally, dbmail-smtp
needs to be called only once, either with a list of all database recipients
or with a generic database-to address, in which case the recipients
contained within the header will have to be parsed.
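The single-copy idea can be sketched with an in-memory store (hypothetical
names throughout — the real change would of course live in the db_ layer and
the new recipient-delivery table, not in Python):

```python
import hashlib

class SingleCopyStore:
    """Toy single-instance message store: one body, many recipients."""

    def __init__(self):
        self.bodies = {}      # content hash -> message body (one copy)
        self.mailboxes = {}   # recipient -> list of hashes (references)

    def deliver(self, body, recipients):
        key = hashlib.sha256(body).hexdigest()
        self.bodies.setdefault(key, body)        # stored at most once
        for r in recipients:
            self.mailboxes.setdefault(r, []).append(key)
        return key


store = SingleCopyStore()
msg = b"From: a@example.com\r\n\r\nhello"
store.deliver(msg, ["alice", "bob", "carol"])
assert len(store.bodies) == 1    # one physical copy for three recipients
assert len(store.mailboxes["alice"]) == 1
```

Per-recipient state (read flags, mailbox, deletion) would hang off the
reference rows, while the body itself is reference-counted and removed only
when the last recipient deletes it.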
   .... Gary ...
