feed them to 'spamassassin -r'
i do that when i get them
... do you use SOUGHT rules?
I don't use these rules, no - is there a howto regarding these, as Google
is letting me down a bit?
thanks
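For reference (not from the thread itself): the SOUGHT rules are Justin Mason's automatically-generated ruleset, distributed as an sa-update channel. A sketch of the usual setup, with the channel name and GPG key ID as documented on the SpamAssassin wiki - verify both before use:

```shell
# Import the signing key for the sought channel, then update from
# both the sought channel and the stock SpamAssassin channel.
wget http://yerp.org/rules/GPG.KEY
sa-update --import GPG.KEY
sa-update --gpgkey 6C6191E3 \
          --channel sought.rules.yerp.org \
          --channel updates.spamassassin.org
```

Remember to restart spamd afterwards so the new rules are actually loaded.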
On Mon, 2008-12-08 at 11:38 +, Tom Brown wrote:
feed them to 'spamassassin -r'
i do that when i get them
... do you use SOUGHT rules?
I don't use these rules, no - is there a howto regarding these, as Google
is letting me down a bit?
Sorry - I must be going mad, as I do already use these rules.
On 08.12.08 10:51, Tom Brown wrote:
I am getting a few mails through with the same format as the one that's
attached. Is there much I can do to increase the scores on this type of
mail? It triggered these scores as it stands.
X-Spam-Status: No, score=3.2 tagged_above=-100 required=5
On Sat, 2008-12-06 at 18:22 -0500, Theo Van Dinter wrote:
On Sat, Dec 06, 2008 at 11:16:03PM +0100, Wolfgang Zeikat wrote:
Could you describe more elaborately how you did that?
You may wish to take a look at cpan2rpm, fwiw.
Deprecated. Look at cpan2dist if you are running Perl 5.10.
--
On 08.12.08 13:13, Tom Brown wrote:
I don't use these rules, no - is there a howto regarding these, as Google
is letting me down a bit?
Sorry - I must be going mad, as I do already use these rules.
And do you use SPF, URIBL, DCC, Razor, Pyzor and optionally other network
rules (e.g. iXhash)?
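For context, a sketch of where those network checks get switched on in a stock SpamAssassin 3.2.x install. Plugin and option names are from the standard .pre files and Mail::SpamAssassin::Conf; exact file paths vary by distribution, and iXhash is a third-party plugin installed separately:

```
# v310.pre / v312.pre -- make sure the network plugins are loaded
loadplugin Mail::SpamAssassin::Plugin::SPF
loadplugin Mail::SpamAssassin::Plugin::URIDNSBL
loadplugin Mail::SpamAssassin::Plugin::Razor2
loadplugin Mail::SpamAssassin::Plugin::Pyzor
loadplugin Mail::SpamAssassin::Plugin::DCC    # commented out by default

# local.cf
skip_rbl_checks 0
use_razor2 1
use_pyzor 1
use_dcc 1
```

Razor, Pyzor and DCC also need their client programs installed and configured; the plugins only call out to them.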
--
Hi, I was hoping someone on this list could help me with a custom rule for
SpamAssassin. I'm not an expert at Perl regexps at all, and spent a lot
of time trying to come up with a working match, all to no avail...
What I would like to match on is URLs that do _not_ start with a third-level
domain entry, and end with .com, .biz, .info, etc.
On Mon, 8 Dec 2008, Dennis Hardy wrote:
What I would like to match on is URLs that do _not_ start with a third level
domain entry, and end with .com, .biz, .info, etc. For example,
"http://hello.com/" (followed by more stuff) would match, and
"http://www.hello.com/{...}" would _not_ match.
Some
On Mon, Dec 08, 2008 at 08:52:46AM -0800, John Hardin wrote:
On Mon, 8 Dec 2008, Dennis Hardy wrote:
What I would like to match on is URLs that do _not_ start with a third level
domain entry, and end with .com, .biz, .info, etc. For example,
"http://hello.com/" (followed by more stuff) would
How about:
/:\/\/[^.\/]+\.[^\.\/]+\//
Hi John, sweet, this seems to work! Could you help me with how to add a
list of com|net|info|biz|etc before the closing /, so it will match
against a list of known TLDs?
Many thanks, you are awesome :-)
.dh
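Dennis's follow-up question (restricting the match to a list of known TLDs) goes unanswered above. A sketch of one way to do it, checked in Python since SpamAssassin rule regexes are Perl-compatible; the TLD alternation here is illustrative, not a complete list:

```python
import re

# John Hardin's suggestion from the thread: "://", exactly two host
# labels (no dots or slashes inside a label), then a slash.
base = re.compile(r'://[^./]+\.[^./]+/')

# Hypothetical TLD-restricted variant: only match when the second
# label is one of a fixed set of TLDs.
tld = re.compile(r'://[^./]+\.(?:com|net|org|info|biz)/')

assert tld.search("http://hello.com/more/stuff")         # two labels, known TLD
assert not tld.search("http://www.hello.com/more/stuff") # third-level label
assert not tld.search("http://hello.example/")           # TLD not in the list
assert base.search("http://hello.example/")              # base version doesn't care
```

In an actual rule this might look like `uri NO_THIRD_LEVEL m{://[^./]+\.(?:com|net|org|info|biz)/}` - mass-check it against ham before assigning a score, since plenty of legitimate sites have no www label.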
On Mon, 8 Dec 2008, Dennis Hardy wrote:
How about:
/:\/\/[^.\/]+\.[^\.\/]+\//
Hi John, sweet, this seems to work! Could you help me with how to add a
list of com|net|info|biz|etc before the closing /, so it will match
against a list of known TLDs?
Henrik K wrote:
To be more specific:
The hostname may optionally end with a dot, with :port, /slash, or nothing following
m{^https?://[^.:/]+\.[^.:/]+\.?(?:$|[:/])}
Could anyone please provide a reference or explanation of the use of
m{blah} in spamassassin uri rules?
Thanks
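For reference: in Perl (and hence in SpamAssassin rules, which accept Perl-style regexes), m{...} is simply the match operator with braces chosen as delimiters, so literal slashes in the pattern don't need backslash-escaping; m{^https?://...} is equivalent to /^https?:\/\/.../. A quick sketch of what Henrik's pattern accepts, transcribed into Python for checking (Python has no delimiter syntax, so the slashes appear unescaped anyway):

```python
import re

# Henrik's pattern, transcribed verbatim from the thread: scheme, a
# two-label hostname, an optional trailing dot, then end-of-string,
# a ":port", or a "/path".
pat = re.compile(r'^https?://[^.:/]+\.[^.:/]+\.?(?:$|[:/])')

assert pat.search("http://hello.com")           # bare host, end of string
assert pat.search("https://hello.com/path")     # slash after the host
assert pat.search("http://hello.com.:8080/")    # trailing dot, then a port
assert not pat.search("http://www.hello.com/")  # third-level label present
```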
On Mon, Dec 01, 2008 at 03:42:05PM -0500, Dan Barker wrote:
This issue, apparently, has been a problem for me for several SpamAssassin
releases, but I just now figured out what may be happening. I've been
closing spamd once per hour, just to make it read new local.cf, notice sa-update
--On Sunday, December 07, 2008 7:45 AM -0500 Michael Scheidell
[EMAIL PROTECTED] wrote:
Thanks for the uri rule. It is tighter than the one I cobbled together.
I'm successfully using an even tighter one posted by Daryl C. W. O'Shea on
October 18, with a minor adjustment:
On Mon, 8 Dec 2008, Kenneth Porter wrote:
uri KP_LIVE_SPACES_CID /^http:\/\/cid-.{10,20}\.spaces\.live\.com\//
The variant part is a string of hex digits, so this could be even tighter.
Nothing else? Here are two versions:
uri KP_LIVE_SPACES_CID /^http:\/\/cid-\w{10,20}\.spaces\.live\.com\//
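Picking up John's hex-digit point: a sketch (in Python, for easy checking) comparing the \w version from the thread with a hypothetical hex-only variant, on the assumption - as John suggests - that the cid token in those spaces.live.com URLs is pure hexadecimal:

```python
import re

# \w admits all word characters (letters, digits, underscore), so it
# also matches tokens that could never be a hex ID.
loose = re.compile(r'^http://cid-\w{10,20}\.spaces\.live\.com/')

# Hypothetical tightening: hex digits only, case-insensitive.
tight = re.compile(r'^http://cid-[0-9a-f]{10,20}\.spaces\.live\.com/', re.I)

hex_url  = "http://cid-1a2b3c4d5e6f7a8b.spaces.live.com/blog/"
word_url = "http://cid-notreallyhexx.spaces.live.com/blog/"

assert loose.search(hex_url) and tight.search(hex_url)
assert loose.search(word_url) and not tight.search(word_url)
```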
Hi all, I've run into a weird situation where spamassassin will (seemingly
randomly) only do certain RBL checks.
The following are all the same spam message (1.txt), executed ~30 seconds
apart:
$ spamc -y 1.txt
On 08/12/2008 7:09 PM, James Grant wrote:
Has anyone seen any updates to the sought rules lately? It seems like it's
been about 4 or 5 days now since I've seen any via sa-update.
--
Chris
KeyID 0xE372A7DA98E6705C
Has anyone tried this?
http://prag.diee.unica.it/n3ws1t0/imageCerberus
On Friday 28 November 2008 09:52:22 Karsten Bräckelmann wrote:
On Thu, 2008-11-27 at 22:44 -0600, Luis Daniel Lucio Quiroz wrote:
I wonder if there is any module for SA to detect pornographic photos, not
only OCR.
Not