[sniffer] Re: FW: Memory Usage of MessageSniffer 3

2008-08-20 Thread Peer-to-Peer (Support)
Just following up: We've been running the upper limit at 100 MB for 3 weeks
now and have not seen any further St9bad_alloc errors. At 150 MB we were
seeing the St9bad_alloc error daily.

Regards,
--Paul


-Original Message-
From: Message Sniffer Community [mailto:[EMAIL PROTECTED]
Behalf Of Pete McNeil
Sent: Friday, August 01, 2008 12:40 PM
To: Message Sniffer Community
Subject: [sniffer] Re: FW: Memory Usage of MessageSniffer 3


<snip/>



[sniffer] Re: FW: Memory Usage of MessageSniffer 3

2008-08-01 Thread Pete McNeil
Hello Peer-to-Peer,

Thursday, July 31, 2008, 10:05:15 PM, you wrote:

 Would it be correct to say the higher we can increase the size-trigger
 'megabytes' value, the better filtering results (accuracy) we will achieve?
 In other words, would it be beneficial for us to purchase more memory on our
 server (say an additional 2GB), then increase the 'megabytes' value to 400
 or 800?

 Several of our servers are hitting the upper limit of 150 MB (159,383,552 bytes).

I don't think so. A quick look at your telemetry indicates that your
systems are typically rebooted once per day. This is actually
preempting your daily condensation.

One result of this is that many of your GBUdb nodes only condense when
they reach their size limit. From what I can see, when this happens a
significant portion of your GBUdb data is dropped. For example,
several of the systems I looked at have not condensed in months. Here
is some data from one of them:


<timers>
<run started='20080801081753' elapsed='19637'/>
<sync latest='20080801134415' elapsed='55'/>
<save latest='20080801131823' elapsed='1607'/>
<condense latest='20080406160144' elapsed='10100606'/>
</timers>

<gbudb>
<size bytes='50331648'/>
<records count='214313'/>
<utilization percent='91.1357'/>
</gbudb>

This one has not condensed since 200804 most likely due to restarts
that prevented the daily condensation timer from expiring.

If this is the case with your other systems as well, it is likely that
they are occasionally condensing when they reach their size threshold,
but if they were allowed to condense daily they would never reach that
limit.

In that case, adding additional memory for GBUdb would probably not
improve performance significantly.
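
For illustration, here is a minimal Python sketch (not part of SNF; the
<status> wrapper element is an assumption, but the <timers> attribute names
come from the telemetry above) that flags a node whose last condensation is
overdue:

import xml.etree.ElementTree as ET

DAY_SECONDS = 86400  # matches the default <time-trigger seconds='86400'/>

def condensation_overdue(status_xml, limit=2 * DAY_SECONDS):
    # True when the last recorded condensation is older than `limit` seconds.
    root = ET.fromstring(status_xml)
    condense = root.find(".//timers/condense")
    if condense is None:
        return True  # no condensation recorded at all
    return int(condense.get("elapsed", "0")) > limit

sample = """
<status>
  <timers>
    <run started='20080801081753' elapsed='19637'/>
    <condense latest='20080406160144' elapsed='10100606'/>
  </timers>
</status>
"""
print(condensation_overdue(sample))  # True: roughly 117 days since the last condense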

The default settings are conservative even for very large message
loads. For example, our spamtrap processing systems typically handle
3000-4000 msg/minute continuously and typically have timer and GBUdb
telemetry like this:

<timers>
<run started='20080717205939' elapsed='1270156'/>
<sync latest='20080801134844' elapsed='11'/>
<save latest='20080801134721' elapsed='94'/>
<condense latest='20080801132958' elapsed='1137'/>
</timers>

<gbudb>
<size bytes='117440512'/>
<records count='568867'/>
<utilization percent='99.6626'/>
</gbudb>

Note that this SNF node has not been restarted since 20080717 and that
its last condensation was in the early hours today-- most likely due
to its daily timer.

Note also that its GBUdb size is only 117 MBytes. It is unlikely that
this system will reach 150 MBytes before the day is finished.

Since most systems we see are handling traffic rates significantly
smaller than 4.75M/day it is safe to assume that most systems would
also be unlikely to reach their default GBUdb size limit during any
single day... So, the default of 150 MBytes is likely more than
sufficient for most production systems.
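
For reference, the arithmetic behind that daily figure is just rate times
minutes per day (a back-of-the-envelope check, not an SNF utility):

# Convert the spamtrap node's message rate to a daily total.
low, high = 3000, 4000                  # messages per minute
print(low * 60 * 24, high * 60 * 24)    # 4320000 5760000, i.e. roughly 4.3M-5.8M msg/day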

---

All that said, if you want to intentionally run larger GBUdb data sets
on your systems there is no harm in that. Your system will be more
aware of habitual bot IPs etc at the expense of memory. Since all
GBUdb nodes receive reflections on IP encounters within one minute, it
is likely that the benefit would be the ability to reject the first
message from a bad IP more frequently... Subsequent messages from bad
IPs would likely be rejected by all GBUdb nodes based on reflected
data.

It is likely that increasing the amount of RAM you assign to your
GBUdb nodes will have diminishing returns past the defaults currently
set... but it might be fun to try it and see :-)

---

If you are looking for better capture rates you may be able to achieve
those more readily by adjusting your GBUdb envelopes. The default
envelopes are set to avoid false positives on large filtering systems
with a diverse client base.

It is likely that more restricted systems could afford to use more
aggressive envelopes without creating false positives because their
traffic would be more specific to their systems.

In a hypothetical case: If your system generally never receives
legitimate messages from Russian or Chinese ISPs, then it is likely
that your system would begin to learn very negative statistics for IPs
belonging to those ISPs. A slight adjustment to your black-range GBUdb
envelope might be just enough to capture those IPs without creating
false positives for other ISPs where you do receive legitimate
messages.

In any case, since the default ranges are extremely conservative and
tuned for large scale filtering systems it is worth experimenting with
them to boost your capture rates on nodes that have a more restricted
client base.
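
To make the envelope idea concrete, here is a purely illustrative Python
sketch; the threshold values and the in_black_range name are hypothetical
(not SNF's actual envelope settings) and only show the shape of the tuning
decision:

# Illustrative only: classify an IP's GBUdb-style statistics (a probability
# that its messages are spam, and a confidence in that probability) against
# a hypothetical "black" envelope. The numbers are made up for illustration.
def in_black_range(probability, confidence,
                   min_probability=0.8, min_confidence=0.2):
    return probability >= min_probability and confidence >= min_confidence

# An IP that nearly always sends spam, seen often enough to be confident about:
print(in_black_range(0.95, 0.35))   # True
# A slightly more aggressive envelope also catches borderline IPs:
print(in_black_range(0.85, 0.15, min_probability=0.8, min_confidence=0.1))   # True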

If you have a larger system and you use a clustering deployment
methodology then you might still take advantage of these statistics by
grouping similar clients on the same node(s) based on where they get
their messages. Even if you don't adjust your envelopes this
clustering will have the effect of increasing the signal to noise
ratio for GBUdb as it learns which IPs to trust and which ones to
suspect.

Hope this helps,

_M

-- 
Pete McNeil
Chief Scientist,
Arm Research Labs, LLC.



[sniffer] Re: FW: Memory Usage of MessageSniffer 3

2008-08-01 Thread Peer-to-Peer (Support)
Hmmm, sorry: just before posting my question last night I lowered the upper
limit to 100 MB, which is why you're now seeing more normal numbers on your
end. Six servers were at 150 MB last night and today the numbers are half
the size.

Here's an example from server #1 (LAST NIGHT):
<gbudb>
<size bytes='159383552'/>
<records count='781184'/>
<utilization percent='97.3916'/>
</gbudb>


Here's an example from server #1 (TODAY):
<gbudb>
<size bytes='75497472'/>
<records count='300560'/>
<utilization percent='91.2028'/>
</gbudb>


I lowered the upper limit because, since installing 3.0, I'm now seeing a
dramatic increase in St9bad_alloc (out-of-memory) errors on a daily basis
again.  As you know, when that error occurs all mail is allowed to pass and
none is filtered, so my server reboots automatically when the St9bad_alloc
error occurs.  I also have a scheduled reboot every night since we did
confirm w/ Arvel at MDaemon there is a memory leak in MDaemon.exe (if
heavily utilizing their Gateway feature).  Have yet to hear anything from
AltN regarding a fix on the MDaemon.exe leak.


In any case, do you think lowering the upper limit will help the
St9bad_alloc error, or am I fishing in the wrong area?


Thanks,
--Paul



-Original Message-
From: Message Sniffer Community [mailto:[EMAIL PROTECTED]
Behalf Of Pete McNeil
Sent: Friday, August 01, 2008 10:04 AM
To: Message Sniffer Community
Subject: [sniffer] Re: FW: Memory Usage of MessageSniffer 3


<snip/>

[sniffer] Re: FW: Memory Usage of MessageSniffer 3

2008-08-01 Thread Pete McNeil
Hello Peer-to-Peer,

Friday, August 1, 2008, 10:49:52 AM, you wrote:

<snip/>

 I also have a scheduled reboot every night since we did
 confirm w/ Arvel at MDaemon there is a memory leak in MDaemon.exe (if
 heavily utilizing their Gateway feature).  Have yet to hear anything from
 AltN regarding a fix on the MDaemon.exe leak.

 In any case, do you think lowering the upper limit will help the
 St9bad_alloc error, or am I fishing in the wrong area?

That will help your memory leak issue because it will leave more room
for the leak to expand before causing allocation failures.

You shouldn't see a significant drop-off in GBUdb performance after
you reduce your upper RAM limit because your message rates are low
enough that GBUdb should be able to function quite well with fewer
entries-- Also there is a shared memory effect that emerges from the
interaction of GBUdb nodes and the cloud... When records are condensed
they are more likely to be bounced off the cloud and get new data so
what you might lose in fewer records you will gain in more frequent
reflections.

Hope this helps,

_M

-- 
Pete McNeil
Chief Scientist,
Arm Research Labs, LLC.



[sniffer] Re: FW: Memory Usage of MessageSniffer 3

2008-07-31 Thread Peer-to-Peer (Support)
Would it be correct to say the higher we can increase the size-trigger
'megabytes' value, the better filtering results (accuracy) we will achieve?
In other words, would it be beneficial for us to purchase more memory on our
server (say an additional 2GB), then increase the 'megabytes' value to 400
or 800?

Several of our servers are hitting the upper limit of 150 MB (159,383,552 bytes).


Thanks,
--Paul



-Original Message-
From: Message Sniffer Community [mailto:[EMAIL PROTECTED]
Behalf Of Pete McNeil
Sent: Wednesday, July 30, 2008 8:23 AM
To: Message Sniffer Community
Subject: [sniffer] Re: FW: Memory Usage of MessageSniffer 3


<snip/>


[sniffer] Re: FW: Memory Usage of MessageSniffer 3

2008-07-30 Thread Pete McNeil
Hello Ian,

The new (V3) SNF does use more RAM than the old SNF (V2).

GBUdb adds records over time as it learns new IP data.

The amount of RAM that will be used by GBUdb depends on how quickly it
is learning new IPs and how frequently the database is condensed.

You can set an upper limit on the size of GBUdb in the configuration
file:

<condense minimum-seconds-between='600'>
<time-trigger on-off='on' seconds='86400'/>
<posts-trigger on-off='off' posts='120'/>
<records-trigger on-off='off' records='60'/>
<size-trigger on-off='on' megabytes='150'/>
</condense>

By default GBUdb will condense once per day or when it reaches
150 MBytes. Roughly twice as much RAM is needed for the condensing
process since the GBUdb data must be copied to a new location.
Condensing the GBUdb data is relatively expensive, so if sufficient
RAM is not released by the first pass GBUdb will condense again every
10 minutes (600 seconds above) until GBUdb is below the size limit you
have set.

I recommend you determine how much RAM you want to make available for
SNF and then set your <size-trigger/> to 40% of that size. This should
leave room for GBUdb to condense and for the rest of SNF to fit
inside your memory limit.
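
As a quick sketch of that rule of thumb (the helper name is made up; the 40%
figure comes from the recommendation above):

# Suggested <size-trigger megabytes='...'/> value for a given SNF RAM budget.
def size_trigger_megabytes(snf_ram_budget_mb, fraction=0.40):
    return int(snf_ram_budget_mb * fraction)

print(size_trigger_megabytes(375))   # 150, roughly the shipped default
print(size_trigger_megabytes(256))   # 102, close to a 100 MB limit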

You can monitor your GBUdb status in your status.minute or
status.second reports. Here is some sample data from one of our
spamtrap processors. It has been stable for months so this should
be indicative of what you would see on a busy machine that's been up
for a while:

<gbudb>
<size bytes='142606336'/>
<records count='650314'/>
<utilization percent='95.8431'/>
</gbudb>

For information on reading your status reports:

http://www.armresearch.com/support/articles/software/snfServer/logFiles/statusLogs.jsp
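
If you want to automate that check, here is a minimal Python sketch (the
<status> wrapper element is an assumption; the <gbudb> attribute names match
the sample above) that warns when GBUdb approaches the configured size
trigger:

import xml.etree.ElementTree as ET

SIZE_TRIGGER_BYTES = 150 * 1024 * 1024   # the default <size-trigger megabytes='150'/>

def gbudb_usage(status_xml):
    # GBUdb size as a fraction of the configured size trigger.
    root = ET.fromstring(status_xml)
    size = root.find(".//gbudb/size")
    return int(size.get("bytes", "0")) / float(SIZE_TRIGGER_BYTES)

sample = """
<status>
  <gbudb>
    <size bytes='142606336'/>
    <records count='650314'/>
    <utilization percent='95.8431'/>
  </gbudb>
</status>
"""
usage = gbudb_usage(sample)
if usage > 0.9:
    print("GBUdb is at %.0f%% of its size trigger; condensation is imminent" % (usage * 100))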

Hope this helps,

_M

Tuesday, July 29, 2008, 10:31:23 PM, you wrote:

 This is from one of our engineers.  Anybody else had this sort of issue?

 Ian


 -Original Message-
 Does the new sniffer stuff have a higher memory requirement than
 the old?  Sebastian pointed out to me today that a number of our
 gate servers were using a ton of swap space.  Restarting snfctrl frees up a 
 few hundred megs.

 Our newer gate servers (all with 2 or more GB of RAM) seem to be
 doing alright, but we have 16 gates at IAD with 1 GB of RAM that are
 being affected by this.  It looks like the memory usage increases
 progressively over the course of a couple days, so I don't know if
 it's a memory leak or what.  Is there anything we should do, or should we
 just add a snfctrl restart to our nightly cron jobs and live with it for now?






-- 
Pete McNeil
Chief Scientist,
Arm Research Labs, LLC.

