On Nov 18, 2003, at 10:46 AM, Gary Murphy wrote:

> The current quota implementation is definitely costly. Establishing a
> transaction with failure and/or over-quota control for the message inserts
> (rolling back inserted data) could be a first step, but I agree a more
> elegant solution is desirable.

Making transactions fail on quota is going to be hard, since one failure will make all of the recipients roll back (unless each individual recipient is a separate transaction).

> Limiting the message to fit into a given amount of memory, else dealing with
> it piece-by-piece, sounds like a reasonable approach.

That's pretty much how it happens now, with a maximum chunk size of 512k (which ends up using 3x 512k in the code):

    db.h:#define READ_BLOCK_SIZE (512ul*1024ul) /* be carefull, MYSQL has a limit */

That leaves messages between 512k and ~10 megs as requiring extra attention (10 megs being postfix's apparent default maximum message size).

> With this approach, all messages < XXXbytes (1-2MB?) could be checked for
> quota violations in db_insert_message via a conditional check of a new size
> parameter before an insert ever happens. This would also eliminate the
> update for those messages, which should statistically represent a very high
> percentage of the messages received.

Correct. The current read order would support checking whether the user was already over quota, so the worst you could do is go exactly one partial message over the quota. But if you're set up with a default 10 meg max message and one meg quotas, then you could hit 10.99 megs before the hard limit. Kind of a silly way to set things up, but I've seen worse.

We could check, before insertion, any message up to 512k + headers, or any delivery where the free quota was < 512k.

> Larger messages would still have to be inserted one piece at a time or
> spooled to a tmp file for statistics gathering before being inserted into
> the database, but there shouldn't be many of these to handle.

There might not be many, but they'd make a good DoS attack. Of course, it depends on the number of dbmail-smtpd processes that could run at once and the maximum amount held in memory. Spooling to disk seems like a loser to me in terms of both complexity and performance.

eric
