Re: timeouts on processing some messages, started October 24

2021-11-02 Thread Greg Troxel

>   postfix is waiting 300s
>   SA thinks it can spend 300s processing
>   postfix gives up 1s before SA is done

The default spamd child timeout is 300s.
The default postfix content milter timeout is 300s.
Each is a reasonable choice, but really postfix's timeout should be
longer.

I set "milter_content_timeout = 330s" in postfix main.cf, and I still
get spamd child timeouts, but things are better.

So probably we should set the default spamd child timeout to 270s.
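
In config terms, one possible combination (spamd's flag is per spamd(1);
how it gets passed at startup depends on the OS/init setup):

  # postfix main.cf: give the milter more headroom than spamd
  milter_content_timeout = 330s

  # spamd startup flags: cap each child at 270s so spamd gives up
  # before postfix does
  spamd --timeout-child=270 ...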

A wrinkle is that I realized I had a learn process running, in which I
go over my ham folders and spam folders and run sa-learn -L.  I used to
run that often, and it would take some number of minutes, but this one
had been running for days.  My guess is that it took so long as a symptom
of the same bug rather than being a cause, but that remains to be seen.




timeouts on processing some messages, started October 24

2021-11-02 Thread Greg Troxel
I have a system with postfix and spamassassin 3.4.6 via spamd.  It's
been generally running well.  I noticed mail from one of my other
systems timing out with 471, and that caused me to look at the logs.  I
have KAM rules, some RBL adjustments, and a bunch of local rules for my
spam, but really nothing I consider unusual.

I realized I had DCC enabled, perhaps not correctly, and I just took
that out, since I've never really been clear on how it works and if I
want to use it.


My logs go back to October 3, but starting on the 24th I have lots of lines like:

  Oct 24 03:23:13 bar spamd[25868]: check: exceeded time limit in Mail::SpamAssassin::Plugin::Check::_eval_tests_type9_pri1000_set1, skipping further tests

Looking further, I see

  Nov  1 12:02:01 bar postfix/cleanup[18861]: 6E2D74106C3: message-id=<20211031071804.b221b16...@bar.example.com>
  Nov  1 12:07:01 bar postfix/cleanup[18861]: warning: milter unix:/var/run/spamass.sock: can't read SMFIC_BODYEOB reply packet header: Connection timed out
  Nov  1 12:07:01 bar postfix/cleanup[18861]: 6E2D74106C3: milter-reject: END-OF-MESSAGE from foo.example.com[10.0.0.2]: 4.7.1 Service unavailable - try again later ; from= to= proto=ESMTP helo=
  Nov  1 12:07:02 bar spamd[23510]: check: exceeded time limit in Mail::SpamAssassin::Plugin::Check::_eval_tests_type9_pri1000_set1, skipping further tests
  Nov  1 12:07:02 bar spamd[13194]: spamd: clean message (-1.0/1.0) for fred:10853 in 300.2 seconds, 2064 bytes.
  Nov  1 12:07:02 bar spamd[13194]: spamd: result: . 0 - ALL_TRUSTED,KAM_DMARC_STATUS,TIME_LIMIT_EXCEEDED scantime=300.2,size=2064,user=fred,uid=10853,required_score=1.0,rhost=::1,raddr=::1,rport=56983,mid=<20211031071804.b221b16...@foo.example.com>,autolearn=unavailable

so it sort of looks like:

  postfix is waiting 300s
  SA thinks it can spend 300s processing
  postfix gives up 1s before SA is done

  something is causing a delay

and thus I have two problems:

  need to have the postfix timeout be longer than the spamassassin timeout, plus some margin

  need to figure out why there is a timeout at all

The first is surely just a matter of reading the manuals, but I wonder why the defaults aren't set up that way.

On the second, I wonder if anyone else is seeing this; clues appreciated.

Thanks,
Greg



Re: page.link spam

2021-11-02 Thread Matus UHLAR - fantomas

On 2021-11-02 12:20, Matus UHLAR - fantomas wrote:


I have tried again, but despite it being listed in
kam_sa-channels_mcgrail_com/nonKAMrules.cf, SA does not accept that
directive.


On 02.11.21 18:25, Benny Pedersen wrote:

problem is that util_rb_2tld is global while kam rules need per-rule 2tld

make spamassassin change so 2tld can be per rule, not just global; think
of tflags rule_name util_rb_2tld=page.link


if that ever happens it's up to developers


I have already posted an update: the list is cleared in the SA rules.


at least not SA 3.4.4 (debian 10 backports)


is it not really debian 11 ? :)


not yet
--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
I just got lost in thought. It was unfamiliar territory.


Re: Decoding Google URL redirections and check VS URI Blacklists

2021-11-02 Thread Axb



Benoit had already confirmed that the redirector_pattern worked as expected.

On 11/2/21 6:07 PM, Bill Cole wrote:

On 2021-11-02 at 04:52:17 UTC-0400 (Tue, 2 Nov 2021 09:52:17 +0100)
Benoit Panizzon 
is rumored to have said:


Hi SA Community

In the last couple of weeks, I see a massive increase of spam mails
which make use of google site redirection and dodge all our attempts at
filtering.

That is, the google redirector is about the only common thing in those
emails. Source IP, text content etc. are quite random.

Such an example URI looks like (two spaces added to prevent this
triggering other filters)

https://www.goo gle.com/url?q=https%3A%2F%2Fkissch 
icksrr.com%2F%3Futm_source%3DbDukb6xHEYDF2%26amp%3Butm_campaign%3DKirka2sa=Dsntz=1usg=AFQjCNGkpnVKLl8I1IP9aQXtTha-jCnt3A 



google.com of course is whitelisted.


Why "of course?"

Have you tested what happens if you add "clear_uridnsbl_skip_domain 
google.com" to your config?




Creating a rule to match the string "google.com/url?q=" also is a no go
as this would create way too many false positives.


Do not be scared by SA rules matching non-spam. That is a design 
feature, not an inadvertent bug. All of the most useful rules match some 
ham.


It's only really a "false positive" if the total score for a non-spam 
message goes over your local threshold. The fact that the automated 
re-scorer assigns scores well below the default threshold is a clue.




So if I could somehow extract the domain "kissch icksrr.com"
and check it against URI blacklists, we would probably solve that issue.

Has anyone already come up with a way how to do that?


I do not believe there's a means of doing that currently. It may be 
possible to work something up using the existing internal blocklisting 
tools (HashBL, enlist*, etc) but I think it will require new code.


It would be an interesting addition to have a way to define arbitrary 
extractor patterns to pull elements out of a string to check against 
hostname blocklists or other specific classes of patterns.





Re: page.link spam

2021-11-02 Thread Benny Pedersen

On 2021-11-02 12:20, Matus UHLAR - fantomas wrote:


I have tried again, but despite it being listed in
kam_sa-channels_mcgrail_com/nonKAMrules.cf, SA does not accept that
directive.


problem is that util_rb_2tld is global while kam rules need per-rule 2tld

make spamassassin change so 2tld can be per rule, not just global; think
of tflags rule_name util_rb_2tld=page.link


if that ever happens it's up to developers


at least not SA 3.4.4 (debian 10 backports)


is it not really debian 11 ? :)


Re: Decoding Google URL redirections and check VS URI Blacklists

2021-11-02 Thread Bill Cole

On 2021-11-02 at 04:52:17 UTC-0400 (Tue, 2 Nov 2021 09:52:17 +0100)
Benoit Panizzon 
is rumored to have said:


Hi SA Community

In the last couple of weeks, I see a massive increase of spam mails
which make use of google site redirection and dodge all our attempts at
filtering.

That is, the google redirector is about the only common thing in those
emails. Source IP, text content etc. are quite random.

Such an example URI looks like (two spaces added to prevent this
triggering other filters)

https://www.goo gle.com/url?q=https%3A%2F%2Fkissch 
icksrr.com%2F%3Futm_source%3DbDukb6xHEYDF2%26amp%3Butm_campaign%3DKirka2sa=Dsntz=1usg=AFQjCNGkpnVKLl8I1IP9aQXtTha-jCnt3A


google.com of course is whitelisted.


Why "of course?"

Have you tested what happens if you add "clear_uridnsbl_skip_domain 
google.com" to your config?



Creating a rule to match the string "google.com/url?q=" also is a no go
as this would create way too many false positives.


Do not be scared by SA rules matching non-spam. That is a design 
feature, not an inadvertent bug. All of the most useful rules match some 
ham.


It's only really a "false positive" if the total score for a non-spam 
message goes over your local threshold. The fact that the automated 
re-scorer assigns scores well below the default threshold is a clue.




So if I could somehow extract the domain "kissch icksrr.com"
and check it against URI blacklists, we would probably solve that issue.


Has anyone already come up with a way how to do that?


I do not believe there's a means of doing that currently. It may be 
possible to work something up using the existing internal blocklisting 
tools (HashBL, enlist*, etc) but I think it will require new code.


It would be an interesting addition to have a way to define arbitrary 
extractor patterns to pull elements out of a string to check against 
hostname blocklists or other specific classes of patterns.



--
Bill Cole
b...@scconsult.com or billc...@apache.org
(AKA @grumpybozo and many *@billmail.scconsult.com addresses)
Not Currently Available For Hire


Re: Decoding Google URL redirections and check VS URI Blacklists

2021-11-02 Thread Benoit Panizzon
Hi Alex

> So what redirector_pattern rule did you use?

Turned out, the shipped one matched:

redirector_pattern m'^https?:/*(?:\w+\.)?google(?:\.\w{2,3}){1,2}/url\?.*?(?<=[?&])q=(.*?)(?:$|[&\#])'i

But when I first tested, the URI was not yet blacklisted, so this escaped
my attention.
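
For anyone wanting to check the extraction by hand, here is a minimal
standalone sketch of what that shipped pattern captures (the real spam
domain is replaced with example.com, and SA's own handling of the
capture is more involved than this):

  #!/usr/bin/perl
  use strict;
  use warnings;
  use URI::Escape qw(uri_unescape);

  # sample redirector URL, with the spam domain swapped for example.com
  my $url = 'https://www.google.com/url?q=https%3A%2F%2Fspammy.example.com%2F%3Futm_source%3Dabc';

  # the shipped redirector_pattern, applied directly
  if ($url =~ m'^https?:/*(?:\w+\.)?google(?:\.\w{2,3}){1,2}/url\?.*?(?<=[?&])q=(.*?)(?:$|[&\#])'i) {
      print uri_unescape($1), "\n";   # prints: https://spammy.example.com/?utm_source=abc
  }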

Mit freundlichen Grüssen

-Benoît Panizzon-
-- 
I m p r o W a r e   A G-Leiter Commerce Kunden
__

Zurlindenstrasse 29 Tel  +41 61 826 93 00
CH-4133 Pratteln        Fax  +41 61 826 93 01
Schweiz Web  http://www.imp.ch
__


Re: page.link spam

2021-11-02 Thread Matus UHLAR - fantomas

verified with spamassassin -D that this file is loaded.

...maybe because local.cf is parsed before URI rules are defined?


There are over 500 page[.]link subdomains inside SURBL right now, so if
you run the latest code it also has fixes to automatically look up the
subdomains of those.


(The mentioned page is also listed on SURBL)



good to know - unfortunately SA seems not to check for those 3rd level
domains until page.link is listed in util_rb_2tld...

I have tried again, but despite it being listed in
kam_sa-channels_mcgrail_com/nonKAMrules.cf, SA does not accept that 
directive.


at least not SA 3.4.4 (debian 10 backports)


this looks like an issue with:

/var/lib/spamassassin/3.004004/updates_spamassassin_org/20_aux_tlds.cf:clear_util_rb

Nov  2 12:45:25.419 [9317] dbg: config: read file 
/var/lib/spamassassin/3.004004/kam_sa-channels_mcgrail_com/nonKAMrules.cf
[...]
Nov  2 12:45:25.455 [9317] dbg: config: read file 
/var/lib/spamassassin/3.004004/updates_spamassassin_org/20_aux_tlds.cf
Nov  2 12:45:25.456 [9317] dbg: config: cleared tld lists

On 02.11.21 12:24, Raymond Dijkxhoorn wrote:

That's added with 4.0.0-rsv


ehm?
--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
I drive way too fast to worry about cholesterol.


Re: Decoding Google URL redirections and check VS URI Blacklists

2021-11-02 Thread Benoit Panizzon
Hi Alex

> you're looking to use a redirector_pattern rule - weird that this hasn't
> yet been added to SA's default ruleset
> Please submit a bug with a sample message

Thank you, that sounds promising. Digging into how to use it.

Mit freundlichen Grüssen

-Benoît Panizzon-
-- 
I m p r o W a r e   A G-Leiter Commerce Kunden
__

Zurlindenstrasse 29 Tel  +41 61 826 93 00
CH-4133 Pratteln        Fax  +41 61 826 93 01
Schweiz Web  http://www.imp.ch
__


Re: page.link spam

2021-11-02 Thread Raymond Dijkxhoorn

Hi!


verified with spamassassin -D that this file is loaded.

...maybe because local.cf is parsed before URI rules are defined?


There are over 500 page[.]link subdomains inside SURBL right now, so if
you run the latest code it also has fixes to automatically look up the
subdomains of those.


(The mentioned page is also listed on SURBL)



good to know - unfortunately SA seems not to check for those 3rd level
domains until page.link is listed in util_rb_2tld...

I have tried again, but despite it being listed in
kam_sa-channels_mcgrail_com/nonKAMrules.cf, SA does not accept that 
directive.


at least not SA 3.4.4 (debian 10 backports)


That's added with 4.0.0-rsv

Bye, Raymond


Re: Decoding Google URL redirections and check VS URI Blacklists

2021-11-02 Thread Benoit Panizzon
Hi Martin

> You can find out quite a lot about a spamming site with a few common
> commandline tools:
> 
> - 'ping' tells you if the hostname part of the URL is valid
> - 'host hostname' should get the sender's IP
> - 'host ip'   IOW a reverse host lookup, tells you if the first
>   sender address was an alias
> - 'lynx hostname' lets you see if there's a website there, which is
>   often useful (when prompted to accept cookies hit
>   'V' to never accept them. This is IMO safer than
>   using Firefox etc because lynx shows all pages as
>   plaintext.)

Yes, of course. The SWINOG spamtrap does this in a slightly more sophisticated way:

We check whether there is an SOA record for the URI's hostname. If not,
we strip the leftmost label (the part before the first dot) and repeat,
as long as the name still contains at least one dot. If no SOA is ever
found, we discard it.

So we end up with a list of valid 'base' domains rather than TLDs.
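
For illustration, a minimal sketch of that SOA walk (not the actual
spamtrap code; it assumes Net::DNS is installed):

  #!/usr/bin/perl
  use strict;
  use warnings;
  use Net::DNS;

  my $res = Net::DNS::Resolver->new;

  # strip labels from the left until a name with an SOA record is found;
  # give up (discard) once no dot is left
  sub base_domain {
      my ($name) = @_;
      while ($name =~ /\./) {
          my $reply = $res->query($name, 'SOA');
          return $name if $reply && grep { $_->type eq 'SOA' } $reply->answer;
          $name =~ s/^[^.]+\.//;
      }
      return;
  }

  my $base = base_domain('www.example.com');
  print defined $base ? $base : 'discarded', "\n";   # typically prints: example.com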

I do this also for the extracted redirection target in case of google
redirectors.

BUT, my question was: I would need SpamAssassin to ALSO extract the
target URI when encountering such a google redirector URL, and check
that against URI blacklists. Is there already a module or easy way to do
so?

Mit freundlichen Grüssen

-Benoît Panizzon-
-- 
I m p r o W a r e   A G-Leiter Commerce Kunden
__

Zurlindenstrasse 29 Tel  +41 61 826 93 00
CH-4133 Pratteln        Fax  +41 61 826 93 01
Schweiz Web  http://www.imp.ch
__


Re: page.link spam

2021-11-02 Thread Matus UHLAR - fantomas

any idea/tip what to do with it next?



as I said, I added it to my local domain-based blocklist.
After adding:
util_rb_2tld page[.]link

it started hitting, which is strange because this directive is contained in:

/var/lib/spamassassin/3.004004/kam_sa-channels_mcgrail_com/nonKAMrules.cf

verified with spamassassin -D that this file is loaded.

...maybe because local.cf is parsed before URI rules are defined?


On 31.10.21 20:12, Raymond Dijkxhoorn wrote:
There are over 500 page[.]link subdomains inside SURBL right now, so if
you run the latest code it also has fixes to automatically look up the
subdomains of those.


(The mentioned page is also listed on SURBL)


good to know - unfortunately SA seems not to check for those 3rd level
domains until page.link is listed in util_rb_2tld...

I have tried again, but despite it being listed in
kam_sa-channels_mcgrail_com/nonKAMrules.cf, 
SA does not accept that directive.


at least not SA 3.4.4 (debian 10 backports)

--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Linux - It's now safe to turn on your computer.
Linux - Teraz mozete pocitac bez obav zapnut.


Re: Decoding Google URL redirections and check VS URI Blacklists

2021-11-02 Thread Martin Gregorie
On Tue, 2021-11-02 at 09:52 +0100, Benoit Panizzon wrote:
> Hi SA Community
> 
You can find out quite a lot about a spamming site with a few common
commandline tools:

- 'ping' tells you if the hostname part of the URL is valid
- 'host hostname' should get the sender's IP
- 'host ip'   IOW a reverse host lookup, tells you if the first
  sender address was an alias
- 'lynx hostname' lets you see if there's a website there, which is
  often useful (when prompted to accept cookies hit
  'V' to never accept them. This is IMO safer than
  using Firefox etc because lynx shows all pages as
  plaintext.)

Generally using those in the sequence I've listed them tells me enough
to decide whether to treat the site as a spam source.

In this case, either feed that URL to your favourite blacklist or write
a local rule that fires if the URL you spotted is in the body text.

I've recently started to see regular Google gmail spam. This looks like
boring sex spam, but that's probably a disguise since it contains
attachments with suspicious (i.e. executable) file types. Fortunately, a
more complex rule, built from a set of subrules, that I wrote years ago
to trap mail with this sort of attachment is catching them now.
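
For illustration, the general shape of that kind of subrule + meta
construction (not the actual rules referred to above) could look like
this, using the stock MIMEHeader plugin to spot executable-looking
attachment names:

  mimeheader __EXE_NAME_CT  Content-Type =~ /name=[^;]*\.(?:exe|scr|com|pif|bat|cmd|js|vbs)/i
  mimeheader __EXE_NAME_CD  Content-Disposition =~ /filename=[^;]*\.(?:exe|scr|com|pif|bat|cmd|js|vbs)/i
  meta       LOCAL_EXE_ATTACH  (__EXE_NAME_CT || __EXE_NAME_CD)
  describe   LOCAL_EXE_ATTACH  Attachment with an executable-looking filename
  score      LOCAL_EXE_ATTACH  3.0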

Martin





Re: Decoding Google URL redirections and check VS URI Blacklists

2021-11-02 Thread Axb
you're looking to use a redirector_pattern rule - weird that this hasn't
yet been added to SA's default ruleset

Please submit a bug with a sample message

On 11/2/21 9:52 AM, Benoit Panizzon wrote:

Hi SA Community

In the last couple of weeks, I see a massive increase of spam mails
which make use of google site redirection and dodge all our attempts at
filtering.

That is, the google redirector is about the only common thing in those
emails. Source IP, text content etc. are quite random.

Such an example URI looks like (two spaces added to prevent this
triggering other filters)

https://www.goo gle.com/url?q=https%3A%2F%2Fkissch 
icksrr.com%2F%3Futm_source%3DbDukb6xHEYDF2%26amp%3Butm_campaign%3DKirka2sa=Dsntz=1usg=AFQjCNGkpnVKLl8I1IP9aQXtTha-jCnt3A

google.com of course is whitelisted.

Creating a rule to match the string "google.com/url?q=" also is a no go
as this would create way too many false positives.

So if I could somehow extract the domain "kissch icksrr.com"
and check it against URI blacklists, we would probably solve that issue.

Has anyone already come up with a way how to do that?

Mit freundlichen Grüssen

-Benoît Panizzon-






Re: Decoding Google URL redirections and check VS URI Blacklists

2021-11-02 Thread Benoit Panizzon
Hi Raymond

> If you could check that it would help a lot
> 
> Some rules to translate commonly used services, and your example is a good
> one. If you would check the specific domain it would have hit SURBL.

Yes, and future hits to the SWINOG Spamtrap (uribl.swinog.ch) will also
extract such target URIs:

Part of the Spamtrap code that looks at the decoded email content:

my @uris;
my $finder = URI::Find->new(sub {
    my($uri) = shift;
    print "FOUND $uri\n" if ($debug);
    if ($uri =~ m|^https://www.google.com/url\?q=https\%3A\%2F\%2F([\w\.-]*\w)|) {
        print "GOOGLE REDIR to $1\n" if ($debug);
        push @uris, $1;
    }
    return $uri;    # leave the scanned text unchanged
});
$finder->find(\$content);   # closing lines and $content (the decoded body) added here; not in the original excerpt


Mit freundlichen Grüssen

-Benoît Panizzon-
-- 
I m p r o W a r e   A G-Leiter Commerce Kunden
__

Zurlindenstrasse 29 Tel  +41 61 826 93 00
CH-4133 Pratteln        Fax  +41 61 826 93 01
Schweiz Web  http://www.imp.ch
__


Decoding Google URL redirections and check VS URI Blacklists

2021-11-02 Thread Benoit Panizzon
Hi SA Community

In the last couple of weeks, I see a massive increase of spam mails
which make use of google site redirection and dodge all our attempts at
filtering.

That is, the google redirector is about the only common thing in those
emails. Source IP, text content etc. are quite random.

Such an example URI looks like (two spaces added to prevent this
triggering other filters)

https://www.goo gle.com/url?q=https%3A%2F%2Fkissch 
icksrr.com%2F%3Futm_source%3DbDukb6xHEYDF2%26amp%3Butm_campaign%3DKirka2sa=Dsntz=1usg=AFQjCNGkpnVKLl8I1IP9aQXtTha-jCnt3A

google.com of course is whitelisted.

Creating a rule to match the string "google.com/url?q=" also is a no go
as this would create way too many false positives.

So if I could somehow extract the domain "kissch icksrr.com"
and check it against URI blacklists, we would probably solve that issue.

Has anyone already come up with a way how to do that?

Mit freundlichen Grüssen

-Benoît Panizzon-
-- 
I m p r o W a r e   A G-Leiter Commerce Kunden
__

Zurlindenstrasse 29 Tel  +41 61 826 93 00
CH-4133 Pratteln        Fax  +41 61 826 93 01
Schweiz Web  http://www.imp.ch
__