From: David Doucette <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
X-Sender: [EMAIL PROTECTED]
X-Mailer: ELM [version 2.5 PL2]
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
> David Doucette wrote:
> > How many msgs/sec have you seen with James for a given size message?
>
> I think you should ask the original poster; I haven't done any
> benchmark.
Sorry. When you asked him about message size, I thought maybe you had
done some benchmarking.
> > Have you found a good number to set this at? 300? 1000? I'm sure it
> > depends on the type of system, etc.
>
> Sure, it would depend on the hardware used. But in my case, I'm limited
> by the maximum number of connections MySQL can handle. Somewhere on
> the Net, I read that the number was 50. Recently I changed the James
> config so that it uses 60 connections; let's see what happens.
I'm new to James, so I didn't realize each delivery thread reads from
the database. I assumed the message was passed to the thread. What is
passed to the thread so it knows which message to get out of the
database? Or does it just grab the next one that hasn't already been
sent?
My first impression is that this performance bottleneck could be avoided
by having one thread (or several, to improve throughput against the
database) read messages from the database and put them in an in-memory
queue. The delivery threads would then pull messages from that shared
queue instead of reading the database directly. The advantage is that
not every delivery thread needs its own database connection, so you
could run more delivery threads.
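The shape I have in mind is a plain producer/consumer setup. Here is a minimal sketch in Java, assuming only the reader thread holds a database connection; the class name, the simulated spool, and the thread counts are all invented for illustration, this is not James code:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch only: one reader thread stands in for the single database
// connection that pulls pending messages from the spool; several
// delivery threads drain a shared in-memory queue. All names invented.
public class SpoolQueueSketch {

    static int runSketch(int messageCount) {
        // Bounded queue so the reader cannot load the whole spool into memory.
        BlockingQueue<String> spoolQueue = new ArrayBlockingQueue<>(100);
        CountDownLatch delivered = new CountDownLatch(messageCount);
        AtomicInteger sent = new AtomicInteger();

        // Single reader: the only place that would touch the database.
        Thread reader = new Thread(() -> {
            try {
                for (int i = 1; i <= messageCount; i++) {
                    spoolQueue.put("message-" + i); // blocks if the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        reader.start();

        // Delivery threads: no database connection needed; they just
        // take messages from the shared queue and "send" them.
        for (int t = 0; t < 3; t++) {
            Thread delivery = new Thread(() -> {
                try {
                    while (true) {
                        spoolQueue.take();      // blocks until a message arrives
                        // ... SMTP delivery would happen here ...
                        sent.incrementAndGet();
                        delivered.countDown();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            delivery.setDaemon(true);
            delivery.start();
        }

        try {
            delivered.await();                  // wait until every message is handled
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return sent.get();
    }

    public static void main(String[] args) {
        System.out.println("delivered " + runSketch(5) + " messages");
    }
}
```

The bounded queue is the important part: it applies back-pressure, so the reader stops pulling from the database when the delivery threads fall behind.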
The delivery threads would need to somehow flag the message in the
database as having been sent (I don't even know how this works now), but
that could also be done through a shared queue. Queuing up the writes
would slightly increase the chance that a message gets resent if James
goes down hard after sending the message but before the flag is set in
the database. That window must exist in James now too, but it is a
little narrower when each delivery thread sets the flag right after
sending the message.
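The write-back half could be queued the same way: delivery threads push the key of each sent message onto a queue, and a single writer thread drains it and flags the rows in batches. Again a sketch with invented names and no real database; the SQL is only a comment, and the gap between a message being sent and its batch being written is exactly the crash window just described:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch only: a single writer drains "message sent" acknowledgements
// from a shared queue and would flag them in the database in batches.
// All names are invented; no real database is involved.
public class SentFlagSketch {

    static List<String> flagMessagesSent(BlockingQueue<String> sentQueue, int expected) {
        List<String> flagged = new ArrayList<>();
        List<String> batch = new ArrayList<>();
        try {
            while (flagged.size() < expected) {
                batch.add(sentQueue.take()); // wait for at least one acknowledgement
                sentQueue.drainTo(batch);    // then grab everything else waiting
                // ... one "UPDATE spool SET sent = 1 WHERE key IN (...)" per batch ...
                flagged.addAll(batch);
                batch.clear();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return flagged;
    }

    public static void main(String[] args) {
        BlockingQueue<String> sentQueue = new LinkedBlockingQueue<>();
        for (int i = 1; i <= 4; i++) {
            sentQueue.add("key-" + i);       // the delivery threads would do this
        }
        System.out.println(flagMessagesSent(sentQueue, 4).size() + " keys flagged");
    }
}
```

Batching with drainTo() is what makes a single writer connection keep up: one UPDATE can flag many messages at once.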
If I'm totally misunderstanding how James works, someone who is
knowledgeable about all this should correct me! ;)
In general, however, connection pooling is a good thing! I've been
immersed in coding such things for a week straight now, so it is all
very fresh in my mind. :)
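For anyone who hasn't written one: the essence of a pool is just a fixed set of connections created up front and handed out through a blocking queue, so the 51st caller waits instead of opening a 51st connection. A toy version, purely for illustration (real pools, and whatever James uses internally, also validate connections, time out, grow, and so on; every name here is invented):

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Toy pool, for illustration only: a fixed set of "connections" handed
// out and returned through a FIFO blocking queue.
public class TinyPool<T> {
    private final BlockingQueue<T> idle = new LinkedBlockingQueue<>();

    public TinyPool(Iterable<T> connections) {
        for (T c : connections) {
            idle.add(c);
        }
    }

    // Blocks until a connection is free instead of opening a new one.
    public T borrow() {
        try {
            return idle.take();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("interrupted waiting for a connection", e);
        }
    }

    public void release(T connection) {
        idle.add(connection); // hand it back for the next caller
    }

    public int available() {
        return idle.size();
    }

    public static void main(String[] args) {
        TinyPool<String> pool = new TinyPool<>(List.of("conn-1", "conn-2"));
        String c = pool.borrow();
        System.out.println("borrowed " + c + ", " + pool.available() + " still idle");
        pool.release(c);
        System.out.println(pool.available() + " idle again");
    }
}
```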
> > Could you clarify this a bit more? I'm not sure I understand about the
> > removal of messages in the spool and the log files issue.
>
> Once I had a problem with the log files after I removed some of the
> messages: James kept writing exceptions to the log files while trying
> to find the missing messages, and it just kept trying and trying until
> the filesystem filled up. Then I switched to using the database, and
> when I removed messages (from the spool or mailboxes), the exceptions
> printed were not so massive.
Did you remove messages that had already been sent? If so, I'm curious
why James is looking for messages that have already been sent.
> > When you say it couldn't deliver more than 1MB, what do you mean?
>
> It just meant that my email was rejected: I pressed the "Send" button
> in the mail client and then received an error message back in the
> client.
>
> > Do you mean you couldn't store more than 1MB in a single column in the
> > database?
>
> It's the limitation imposed by the MySQL JDBC driver that I use
> (mm.mysql201.jar), and not by MySQL (I guess).
Okay. I was just curious. I didn't know one way or the other.
> >Does this mean that James doesn't store the attachments in
> > the file system when storing messages in the database?
>
> If you use a database for the repositories, then everything is stored
> in the database, viewed from James' perspective. How it is laid out
> inside the database is implementation dependent.
Yes, I meant from James' perspective.
> On MySQL, that's the way it is.
> On Postgres, blobs are stored on separate tables. Problem is, Town
> doesn't support Postgres; or maybe it's just that the JDBC driver
> supplied by Postgres is not quite compatible with the JDBC
> specification. Once I tried to use Postgres as the backend,
> unfortunately, Town had problems in dealing with attachments; Postgres
> uses "object ID" as a pointer to the real blob and Town apparently
> didn't handle that properly.
I thought I heard of others using Postgres, but maybe not.
> BTW, I'd like to eat my own words; PostgresSpoolRepository.class,
> somebody please...?
> But it might not be a cure for the attachment problem. I think the POP
> implementation in James is the one that needs some fixing up. Try
> putting about 1400 messages in your database, with some of them fairly
> large (around 300KB). Fetch them via James POP, and you'd see your
> email client just time out (you'd then need to manually remove the
> messages from the database table and restart James).
I'm initially interested in James for SMTP more than POP3, but since I'm
planning on using it for POP3 down the road, this kind of troubles me.
Is it merely the size of some of the messages that is the problem, or
does the number of messages in a mailbox/in the database matter?
To the rest of the group, is this a known problem?
> Well, having big attachments on email is not quite wise; but sometimes,
> it could be needed, say, your customers insist.
I understand completely. I was speaking in a perfect world where I got
to control such things. :)
Even if you can control whether you put attachments on your outgoing
emails, you can't control whether someone sends you a large one. That's
what makes the POP3 problem worse than not being able to send large
attachments (when using the database, not the file system).
David
> Oki