Hi,
On 29.07.2009, at 17:51, Mathieu Bruneau wrote:
On Wed, Jul 29, 2009 at 3:33 AM, Alexander Malysh
<[email protected]> wrote:
Hi,
On 28.07.2009, at 22:18, Mathieu Bruneau wrote:
On Tue, Jul 28, 2009 at 12:03 PM, Alexander Malysh <[email protected]
> wrote:
just add store support to inmemory DLR and you are done...
but I'm -1 on keeping DLRs in memory _and_ in storage. You chose to
use external storage to keep the
memory footprint low...
If you say "I will keep DLRs in memory while the DB is not available", then
you need to start a thread that checks DB availability
and adds the pending DLRs to the DB. And what will you do if Kannel is told
to shut down while some DLRs are still in memory and the DB is not available?
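To make the shutdown problem concrete, here is a minimal sketch of what draining in-memory DLRs at shutdown could look like. The `db_store` and `disk_spool` interfaces are purely illustrative assumptions, not Kannel's actual API:

```python
# Hypothetical sketch: on shutdown, try to push every in-memory DLR to the
# database, retrying a few times; if the DB is still down, spool the DLR to
# disk instead so it is not lost. Names are illustrative, not Kannel's API.
def flush_dlrs_on_shutdown(pending, db_store, disk_spool, max_retries=3):
    """Drain `pending` DLRs into the DB; fall back to a disk spool."""
    stored = spooled = 0
    while pending:
        dlr = pending.pop(0)
        for _attempt in range(max_retries):
            if db_store(dlr):          # returns True when the DB accepted it
                stored += 1
                break
        else:
            disk_spool.append(dlr)     # DB still down: persist locally
            spooled += 1
    return stored, spooled
```

The point of the fallback is that shutdown must terminate even when the DB never comes back, which is exactly the case the paragraph above worries about.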
I don't like the idea of dual storage either. That could be
interesting if you want some kind of in-memory cache, but if
you want to preserve the data you need to write it anyway... I
don't think the DLR code would benefit much from this approach.
In fact, my issue is simply that MySQL drops connections too fast
for the usage Kannel gets (if I had a DLR to write every 5 seconds I
wouldn't have spotted this issue). On another server the reconnect
could happen only once an hour if it was not receiving many DLRs.
Since wait_timeout exists in every MySQL instance and cannot be
deactivated (only the value it is set to changes), maybe
having a timer send a mysql_ping to maintain the connection is
actually the valid fix... Basically it means adding an option for
Kannel to send keepalives over the MySQL connection, effectively
turning it into a persistent connection, which would be valid
since Kannel uses only a controlled number of connections.
This very much resembles the keep-alive on an SMPP connection (or
any other type of keep-alive), so I don't see what is really wrong
with it. I'm ready to hear other thoughts before I dig into
what would be needed to add this :)
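The keep-alive idea above can be sketched with a small timer thread. This is only an illustration under assumptions: `ping` stands in for whatever keepalive call the connection supports (mysql_ping in the MySQL C API case); it is not Kannel code:

```python
import threading

# Sketch of a periodic keepalive: a background thread fires every `interval`
# seconds and calls ping() on the connection, so the server-side wait_timeout
# never expires. `ping` is a placeholder for e.g. mysql_ping().
class ConnectionKeepalive:
    def __init__(self, ping, interval):
        self._ping = ping
        self._interval = interval
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        # Event.wait() doubles as an interruptible sleep: it returns True
        # (and we exit) as soon as stop() sets the event.
        while not self._stop.wait(self._interval):
            self._ping()

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()
```

As the reply below notes, the cost of this design is one extra thread per kept-alive connection, which is exactly the trade-off debated in the rest of the thread.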
because to maintain a keep-alive you need one more thread running ->
wasting resources. This thread can be avoided (and already is avoided)
with
Well, let's take my case and check some of the logs. I must say I
don't know in detail how much overhead a thread adds, but if I
look at my current Kannel I have this:
[mbrun...@xxxxx backup]$ ps ax -T | grep bearerbox | wc -l
66
So my bearerbox already handles ~60 threads. I don't think that
adding a thread that wakes up every N seconds would add significant
overhead on the system? Guess it depends... Let's look a bit more.
My servers get some DLRs, just not enough to have one every 10
seconds. So in the current setup Kannel reconnects every time a DLR
comes in after more than 10 seconds without one. Let's check how
many times we reconnected:
you got it... if we always think "ahh, only one more thread", we
would be at 200 already ;)
[mbrun...@xxxxxx backup]$ zcat bearerbox.log.2.gz | grep 'database
check failed' | wc -l
2072
So there were ~2000 reconnects to the database per 24h that could
have been avoided. Now the question is which approach would have
been more overhead. (I leave this to the purists)
yes, but reconnects are made only when Kannel needs them... I think the
only issue is with your setup...
a simple approach: check and reconnect the connection only before
it is used.
The only issue you have is the max allowed connections in MySQL, not
Kannel itself.
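The check-before-use approach mentioned above can be sketched as a thin wrapper. This is an assumed, simplified interface (`connect` returns an object with an `is_alive()` check, standing in for something like mysql_ping before handing out the connection), not Kannel's actual pool code:

```python
# Sketch of check-before-use: reconnect lazily, only at the moment the
# connection is actually needed, instead of keeping it alive with a timer.
class CheckedConnection:
    def __init__(self, connect):
        self._connect = connect   # factory that opens a new connection
        self._conn = None

    def get(self):
        """Return a live connection, reconnecting only when needed."""
        if self._conn is None or not self._conn.is_alive():
            self._conn = self._connect()   # the only reconnect point
        return self._conn
```

The design trade-off versus the keepalive thread: no extra thread is running, but every burst of traffic after an idle period pays one reconnect, which is where the ~2000 "database check failed" reconnects per day in the log above come from.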
Of course, I know the issue all comes from Kannel losing its
connection to the database. I was just thinking that
having this "persistent" connection would be something nice to
let users opt into, and it would also have cleaned the log of an
"error" that isn't really an error...
--
Math
aka ROunofF
[email protected]