Re: txrep duplicated key with postgresql

2019-12-09 Thread Daniel J. Luke
I uploaded a patch for postgresql on
https://bz.apache.org/SpamAssassin/show_bug.cgi?id=7218 a while ago - but I
haven't had time to clean it up into something that could be included in a
release.

It might serve as inspiration for someone else before I end up having time to 
get to it, though.

> On Dec 9, 2019, at 4:00 PM, Martin Gregorie  wrote:
> 
> On Mon, 2019-12-09 at 11:41 -0800, John Hardin wrote:
>> This sounds more like the "does that tuple already exist?" logic is 
>> failing, causing it to think it needs to create a new entry, which
>> the unique key is (correctly) preventing.
>> 
>> You don't lightly bypass unique keys. They are there for a reason.
>> 
> Fair enough. Since this is the first reference I remember seeing to
> using PostgreSQL with TxRep I assumed that Benny's cry for help was due
> to a difference in the way it handled duplicate keys compared with the
> database that normally supports it.
> 
> Martin
> 

-- 
Daniel J. Luke



Re: txrep duplicated key with postgresql

2019-12-09 Thread Martin Gregorie
On Mon, 2019-12-09 at 11:41 -0800, John Hardin wrote:
> This sounds more like the "does that tuple already exist?" logic is 
> failing, causing it to think it needs to create a new entry, which
> the unique key is (correctly) preventing.
> 
> You don't lightly bypass unique keys. They are there for a reason.
> 
Fair enough. Since this is the first reference I remember seeing to
using PostgreSQL with TxRep I assumed that Benny's cry for help was due
to a difference in the way it handled duplicate keys compared with the
database that normally supports it.

Martin




Re: txrep duplicated key with postgresql

2019-12-09 Thread John Hardin

On Mon, 9 Dec 2019, RW wrote:

> On Mon, 09 Dec 2019 13:14:45 +
> Martin Gregorie wrote:
>
>> The primary key for the public.txrep table must be unique, and
>> evidently you already had a row with the same primary key. It seems
>> likely that the combination [username, email, signedby and ip] will
>> very often be duplicated, like every time you get another email from
>> that person.
>
> TxRep reuses a lot of AWL, often leaving things mislabelled. For per
> message tracking entries I think 'email' would be a message identifier
> - if such a row already exists it ought to be handled. There's no reason
> for duplicate reputation entries.
>
>> Try this:
>
> Unless you know of a good reason for having duplicates, making it
> possible will just conceal a bug.

It's a reputation rating. I'd presume that each tuple *should* only have
one entry, updated with more stats (message count, total score, etc.)
once it's created.

This sounds more like the "does that tuple already exist?" logic is
failing, causing it to think it needs to create a new entry, which the
unique key is (correctly) preventing.

You don't lightly bypass unique keys. They are there for a reason.

--
 John Hardin KA7OHZhttp://www.impsec.org/~jhardin/
 jhar...@impsec.orgFALaholic #11174 pgpk -a jhar...@impsec.org
 key: 0xB8732E79 -- 2D8C 34F4 6411 F507 136C  AF76 D822 E6E6 B873 2E79
---
  Any time law enforcement becomes a revenue center, the system
  becomes corrupt.
---
 6 days until Bill of Rights day


Re: txrep duplicated key with postgresql

2019-12-09 Thread RW
On Mon, 09 Dec 2019 13:14:45 +
Martin Gregorie wrote:

> The primary key for the public.txrep table must be unique, and
> evidently you already had a row with the same primary key. It seems
> likely that the combination [username, email, signedby and ip] will
> very often be duplicated, like every time you get another email from
> that person.


TxRep reuses a lot of AWL, often leaving things mislabelled. For per
message tracking entries I think 'email' would be a message identifier
- if such a row already exists it ought to be handled. There's no reason
for duplicate reputation entries.

> Try this:

Unless you know of a good reason for having duplicates, making it
possible will just conceal a bug.
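For what it's worth, the usual way to make the "row already exists" case
safe in PostgreSQL 9.5+ is an upsert (INSERT ... ON CONFLICT ... DO UPDATE)
keyed on the same tuple as txrep_pkey. The sketch below is not the TxRep
code; it uses SQLite (which accepts the same ON CONFLICT syntax from 3.24
on) so it can be run standalone, and the schema is a simplified subset of
public.txrep:

```python
import sqlite3

# Sketch only, not the actual TxRep code.  SQLite (3.24+) accepts the
# same INSERT ... ON CONFLICT syntax as PostgreSQL 9.5+, so this runs
# standalone; the schema is a simplified subset of public.txrep.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE txrep (
        username TEXT NOT NULL,
        email    TEXT NOT NULL,
        ip       TEXT NOT NULL,
        signedby TEXT NOT NULL,
        count    INTEGER NOT NULL DEFAULT 0,
        totscore REAL    NOT NULL DEFAULT 0,
        PRIMARY KEY (username, email, signedby, ip)
    )
""")

def record_hit(username, email, ip, signedby, score):
    # One reputation row per tuple; a second hit updates the stats
    # instead of violating txrep_pkey.
    conn.execute("""
        INSERT INTO txrep (username, email, ip, signedby, count, totscore)
        VALUES (?, ?, ?, ?, 1, ?)
        ON CONFLICT (username, email, signedby, ip)
        DO UPDATE SET count    = count + 1,
                      totscore = totscore + excluded.totscore
    """, (username, email, ip, signedby, score))

# The duplicated tuple from Benny's error message: two messages, one row.
record_hit("u...@example.org", "u...@example.com", "none", "example.com", 2.5)
record_hit("u...@example.org", "u...@example.com", "none", "example.com", 1.0)

count, totscore = conn.execute("SELECT count, totscore FROM txrep").fetchone()
print(count, totscore)  # 2 3.5
```

In PostgreSQL the same statement works against the real public.txrep table,
provided the conflict target lists the primary-key columns.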



Re: __UNPARSEABLE_RELAY_COUNT: which one?

2019-12-09 Thread RW
On Mon, 9 Dec 2019 10:05:47 +0100
hg user wrote:

> Yes, I'm still at 3.4.1... can't update now.
> 

I wouldn't bother; an unparseable relay only really matters if it's on
the edge of the trusted or internal network - mainly the internal. If
SA can't parse your own MTA's header, all kinds of things are broken,
but an internal relay is mostly harmless.

IMO the parsing shouldn't count any header that has "\bwith LMTP\b" as
an unparseable relay.



Re: Way To Block A Specific Word From Image

2019-12-09 Thread Brent Clark

Spin up a vagrant instance and test.

I recommend setting the scan size to 1024, as I found anything less was
getting missed.


# Increase scanning size of FuzzyOCR to 1024
sed -i 's/#focr_max_height 800/focr_max_height 1024/' /etc/spamassassin/FuzzyOcr.cf
sed -i 's/#focr_max_width 800/focr_max_width 1024/' /etc/spamassassin/FuzzyOcr.cf


I would like to add that I don't use FuzzyOCR in production.
In my team's testing, we found the Sanesecurity signatures (along with
KAM) were sufficient.


If I can ask: can you share your feedback and experience with the
community on the spam you are analyzing?


HTH
Brent

On 2019/12/09 12:30, Matus UHLAR - fantomas wrote:

On 09.12.19 15:56, KADAM, SIDDHESH wrote:
Is there any possibility of blocking a specific word from an image
using SpamAssassin?


I think the FuzzyOCR plugin can do something similar: OCR-scan the image
and blacklist on words.



Re: txrep duplicated key with postgresql

2019-12-09 Thread Martin Gregorie
The primary key for the public.txrep table must be unique, and evidently
you already had a row with the same primary key. It seems likely that
the combination [username, email, signedby and ip] will very often be
duplicated, like every time you get another email from that person.

Try this:
- redefine txrep_pkey as a data-retrieval index (dups allowed)
- use last_hit as the primary key. This should work provided that
  CURRENT_TIMESTAMP ticks faster than new rows can arrive (this may be
  hardware-dependent).
- Otherwise, create a sequence object and use that as the source of
  primary key values. Used this way it will generate primary keys in
  data-arrival sequence and will not return duplicate values.
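A runnable sketch of the surrogate-key idea, with SQLite's INTEGER PRIMARY
KEY standing in for a PostgreSQL sequence (illustration only; other replies
in this thread argue the unique key should be kept rather than bypassed,
since dropping it can hide the real bug):

```python
import sqlite3

# Illustration only: other replies in this thread argue the unique key
# should be kept, since dropping it can hide the real bug.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE txrep (
        id       INTEGER PRIMARY KEY,  -- stands in for a PostgreSQL sequence
        username TEXT NOT NULL,
        email    TEXT NOT NULL,
        ip       TEXT NOT NULL,
        signedby TEXT NOT NULL
    )
""")
# The old primary-key tuple becomes a plain, non-unique retrieval index.
conn.execute("CREATE INDEX txrep_tuple ON txrep (username, email, signedby, ip)")

row = ("u...@example.org", "u...@example.com", "none", "example.com")
for _ in range(2):  # inserting the same tuple twice no longer raises
    conn.execute(
        "INSERT INTO txrep (username, email, ip, signedby) VALUES (?, ?, ?, ?)",
        row)

n = conn.execute("SELECT COUNT(*) FROM txrep").fetchone()[0]
print(n)  # 2
```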

Martin


On Mon, 2019-12-09 at 13:28 +0100, Benny Pedersen wrote:
> 
> 2019-12-09 12:07:53.477 UTC [16458] DETAIL:  Key (username, email, 
> signedby, ip)=(u...@example.org, u...@example.com, example.com, none) 
> already exists.
> 2019-12-09 12:07:53.477 UTC [16458] STATEMENT:  INSERT INTO txrep 
> (username,email,ip,count,totscore,signedby) VALUES ($1,$2,$3,$4,$5,$6)
> 2019-12-09 12:07:53.479 UTC [16459] ERROR:  duplicate key value violates
> unique constraint "txrep_pkey"
> 
> --
> -- PostgreSQL database dump
> --
> 
> -- Dumped from database version 11.4
> -- Dumped by pg_dump version 11.4
> 
> SET statement_timeout = 0;
> SET lock_timeout = 0;
> SET idle_in_transaction_session_timeout = 0;
> SET client_encoding = 'UTF8';
> SET standard_conforming_strings = on;
> SELECT pg_catalog.set_config('search_path', '', false);
> SET check_function_bodies = false;
> SET xmloption = content;
> SET client_min_messages = warning;
> SET row_security = off;
> 
> SET default_tablespace = '';
> 
> SET default_with_oids = false;
> 
> --
> -- Name: txrep; Type: TABLE; Schema: public; Owner: spamassassin
> --
> 
> CREATE TABLE public.txrep (
>  username character varying(100) DEFAULT ''::character varying NOT NULL,
>  email character varying(255) DEFAULT ''::character varying NOT NULL,
>  ip character varying(40) DEFAULT ''::character varying NOT NULL,
>  count bigint DEFAULT '0'::bigint NOT NULL,
>  totscore double precision DEFAULT '0'::double precision NOT NULL,
>  signedby character varying(255) DEFAULT ''::character varying NOT NULL,
>  last_hit timestamp without time zone DEFAULT CURRENT_TIMESTAMP NOT NULL
> )
> WITH (fillfactor='95');
> 
> 
> ALTER TABLE public.txrep OWNER TO spamassassin;
> 
> --
> -- Name: txrep txrep_pkey; Type: CONSTRAINT; Schema: public; Owner: 
> spamassassin
> --
> 
> ALTER TABLE ONLY public.txrep
>  ADD CONSTRAINT txrep_pkey PRIMARY KEY (username, email, signedby, ip);
> 
> 
> --
> -- Name: txrep_last_hit; Type: INDEX; Schema: public; Owner: 
> spamassassin
> --
> 
> CREATE INDEX txrep_last_hit ON public.txrep USING btree (last_hit);
> 
> 
> --
> -- Name: txrep update_txrep_update_last_hit; Type: TRIGGER; Schema: 
> public; Owner: spamassassin
> --
> 
> CREATE TRIGGER update_txrep_update_last_hit BEFORE UPDATE ON 
> public.txrep FOR EACH ROW EXECUTE PROCEDURE 
> public.update_txrep_last_hit();
> 
> 
> --
> -- PostgreSQL database dump complete
> --
> 
> 
> how to solve this ?



txrep duplicated key with postgresql

2019-12-09 Thread Benny Pedersen




2019-12-09 12:07:53.477 UTC [16458] DETAIL:  Key (username, email, 
signedby, ip)=(u...@example.org, u...@example.com, example.com, none) 
already exists.
2019-12-09 12:07:53.477 UTC [16458] STATEMENT:  INSERT INTO txrep 
(username,email,ip,count,totscore,signedby) VALUES ($1,$2,$3,$4,$5,$6)
2019-12-09 12:07:53.479 UTC [16459] ERROR:  duplicate key value violates 
unique constraint "txrep_pkey"


--
-- PostgreSQL database dump
--

-- Dumped from database version 11.4
-- Dumped by pg_dump version 11.4

SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET row_security = off;

SET default_tablespace = '';

SET default_with_oids = false;

--
-- Name: txrep; Type: TABLE; Schema: public; Owner: spamassassin
--

CREATE TABLE public.txrep (
    username character varying(100) DEFAULT ''::character varying NOT NULL,
    email character varying(255) DEFAULT ''::character varying NOT NULL,
    ip character varying(40) DEFAULT ''::character varying NOT NULL,
    count bigint DEFAULT '0'::bigint NOT NULL,
    totscore double precision DEFAULT '0'::double precision NOT NULL,
    signedby character varying(255) DEFAULT ''::character varying NOT NULL,
    last_hit timestamp without time zone DEFAULT CURRENT_TIMESTAMP NOT NULL
)
WITH (fillfactor='95');


ALTER TABLE public.txrep OWNER TO spamassassin;

--
-- Name: txrep txrep_pkey; Type: CONSTRAINT; Schema: public; Owner: spamassassin
--

ALTER TABLE ONLY public.txrep
    ADD CONSTRAINT txrep_pkey PRIMARY KEY (username, email, signedby, ip);



--
-- Name: txrep_last_hit; Type: INDEX; Schema: public; Owner: spamassassin
--

CREATE INDEX txrep_last_hit ON public.txrep USING btree (last_hit);


--
-- Name: txrep update_txrep_update_last_hit; Type: TRIGGER; Schema: public; Owner: spamassassin
--

CREATE TRIGGER update_txrep_update_last_hit BEFORE UPDATE ON 
public.txrep FOR EACH ROW EXECUTE PROCEDURE 
public.update_txrep_last_hit();



--
-- PostgreSQL database dump complete
--


How to solve this?


Re: Way To Block A Specific Word From Image

2019-12-09 Thread Matus UHLAR - fantomas

On 09.12.19 15:56, KADAM, SIDDHESH wrote:
Is there any possibility of blocking a specific word from an image
using SpamAssassin?


I think the FuzzyOCR plugin can do something similar: OCR-scan the image
and blacklist on words.

--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
You have the right to remain silent. Anything you say will be misquoted,
then used against you.


Way To Block A Specific Word From Image

2019-12-09 Thread KADAM, SIDDHESH

Hi Folks,

Is there any possibility of blocking a specific word from an image
using SpamAssassin?


Regards,
Siddhesh



Re: SA memory (Re: ".*" in body rules)

2019-12-09 Thread Henrik K
On Mon, Dec 09, 2019 at 10:54:00AM +0100, Matus UHLAR - fantomas wrote:
> 
> I'm afraid I can't provide clients' file.
> 
> I can only repeat:
> - the mail is 20424329 bytes
> - the mail contains single uuencoded .rar file inline.
> 
> -rw-rw-rw- 1 root root 14818832 Dec  9 10:50 'redacted.rar'
> 
> I have tried to run it again; it took about 20 minutes to scan and memory
> usage slowly increased up to:
> 
>  PID USER  PR  NIVIRTRESSHR S  %CPU  %MEM TIME+ COMMAND
> 1924 amavis20   0 3916332   2.8g   1468 D   1.0  72.2   3:08.08 
> spamassassin
> 
> note the "amavis" is the spamassassin command line client running under
> amavis user to use amavis' bayes database:
> 
> amavis1924 24.8 72.9 3916332 2923964 ? D10:23   3:08 
> /usr/bin/perl -T -w /usr/bin/spamassassin -x
> 
> -rw--- 1 amavis amavis 10584064 Dec  9 10:45 bayes_seen
> -rw--- 1 amavis amavis 10760192 Dec  9 10:45 bayes_toks
> 
> I tried to attach to the process using strace; after a while it produced
> output (only 2 rules hit) and exited.  I hope this didn't cause a premature
> exit of the SA client.

And what output does running spamassassin in debug mode directly from the
command line give? Where does it hang?

spamassassin -t -D < message >/dev/null



Re: SA memory (Re: ".*" in body rules)

2019-12-09 Thread Matus UHLAR - fantomas

>On Thu, 5 Dec 2019 17:07:05 +0100
>Matus UHLAR - fantomas wrote:
>>seems some big mails were too long to scan, and SA even got killed.
>>
>>[2146809.213586] Out of memory: Kill process 3660 (spamassassin)
>>score 365 or sacrifice child [2146809.213613] Killed process 3660
>>(spamassassin) total-vm:2960664kB, anon-rss:2921892kB, file-rss:0kB,
>>shmem-rss:0kB [2146809.270342] oom_reaper: reaped process 3660
>>(spamassassin), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
>>
>>I see the mail body contains nearly 20MB uuencoded text (don't ask).



On 05.12.19 17:21, RW wrote:
>In rawbody rules the text is broken into chunks of 1024 to 2048 bytes,
>so the worst case isn't all that much worse than with {0,1000}.
>
>Also /m means that .* won't cross a line boundary in the decoded text
>and ^ can match in the middle of the chunk. This makes the average
>processing time less sensitive to any upper limit on .*.



On Fri, Dec 06, 2019 at 10:23:15AM +0100, Matus UHLAR - fantomas wrote:

so it is not the quantifiers that cause SA to take too much memory?

any idea how to debug that?


On 06.12.19 13:16, Henrik K wrote:

Scanning a generic 20MB mail will normally eat ~700MB of memory.  3GB implies
something is buggy.  Feel free to send a sample if you can.


I'm afraid I can't provide clients' file.

I can only repeat:
- the mail is 20424329 bytes
- the mail contains single uuencoded .rar file inline.

-rw-rw-rw- 1 root root 14818832 Dec  9 10:50 'redacted.rar'

I have tried to run it again; it took about 20 minutes to scan and memory
usage slowly increased up to:

 PID USER  PR  NIVIRTRESSHR S  %CPU  %MEM TIME+ COMMAND
1924 amavis20   0 3916332   2.8g   1468 D   1.0  72.2   3:08.08 spamassassin

note the "amavis" is the spamassassin command line client running under
amavis user to use amavis' bayes database:

amavis1924 24.8 72.9 3916332 2923964 ? D10:23   3:08 /usr/bin/perl 
-T -w /usr/bin/spamassassin -x

-rw--- 1 amavis amavis 10584064 Dec  9 10:45 bayes_seen
-rw--- 1 amavis amavis 10760192 Dec  9 10:45 bayes_toks

I tried to attach to the process using strace; after a while it produced
output (only 2 rules hit) and exited.  I hope this didn't cause a premature
exit of the SA client.

--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
How does cat play with mouse? cat /dev/mouse


Re: __UNPARSEABLE_RELAY_COUNT: which one?

2019-12-09 Thread Henrik K


If it starts to bother you, simply patch in the small change:

https://svn.apache.org/viewvc/spamassassin/branches/3.4/lib/Mail/SpamAssassin/Message/Metadata/Received.pm?r1=1686458=1686457=1686458=patch


On Mon, Dec 09, 2019 at 10:05:47AM +0100, hg user wrote:
> Yes, I'm still at 3.4.1... can't update now.
> 
> Thank you
> Francesco
> 
> 
> 
> On Mon, Dec 9, 2019 at 9:55 AM Henrik K <h...@hege.li> wrote:
> 
> 
> I guess you are using old SA version, it was fixed in 3.4.2
> 
>     # Received: from mail-backend..com (LHLO
> mail-backend..com) (10.2.2.20) by mail-backend..com with LMTP;
> Thu, 18 Jun 2015 16:50:56 -0700 (PDT)
>     if (/^(\S+) \(LHLO (\S*)\) \((${IP_ADDRESS})\) by (\S+) with LMTP/) {
>       $rdns = $1; $helo = $2; $ip = $3; $by = $4; goto enough;
>     }
> 
> 
> On Mon, Dec 09, 2019 at 09:33:39AM +0100, hg user wrote:
> > Here the line, redacted:
> > Dec  9 09:30:36.700 [10600] dbg: received-header: unparseable: from
> > host4.domain.it (LHLO host4.domain.it) (1.1.1.1) by host3.domain.it with
> > LMTP; Tue, 3 Dec 2019 11:55:44 +0100 (CET)
> >
> > Thank you
> > Francesco
> >
> > On Mon, Dec 9, 2019 at 9:22 AM Henrik K <h...@hege.li> wrote:
> >
> > > On Mon, Dec 09, 2019 at 09:10:25AM +0100, hg user wrote:
> > > > Investigating why a message scored X when arrived and Y now
> (recovered
> > > from
> > > > user inbox), I realized that UNPARSEABLE_RELAY_COUNT rule fires on
> all
> > > messages
> > > > recovered from user inbox.
> > > > In almost all cases this is not a problem, except for
> XPRIO_SHORT_SUBJ:
> > > it
> > > > fired on X and didn't fire on Y due to UNPARSEABLE_RELAY_COUNT...
> > > >
> > > > How can I understand why UNPARSEABLE_RELAY_COUNT rule fires?
> > >
> > > Use debugging
> > >
> > > spamassassin -t -D -L < message | grep unparseable:
> > >
> > > Show the output here if you can, it might need to be added in SA code.
> 
> 


Re: __UNPARSEABLE_RELAY_COUNT: which one?

2019-12-09 Thread hg user
Yes, I'm still at 3.4.1... can't update now.

Thank you
Francesco



On Mon, Dec 9, 2019 at 9:55 AM Henrik K  wrote:

>
> I guess you are using old SA version, it was fixed in 3.4.2
>
> # Received: from mail-backend..com (LHLO mail-backend..com)
> (10.2.2.20) by mail-backend..com with LMTP; Thu, 18 Jun 2015 16:50:56
> -0700 (PDT)
> if (/^(\S+) \(LHLO (\S*)\) \((${IP_ADDRESS})\) by (\S+) with LMTP/) {
>   $rdns = $1; $helo = $2; $ip = $3; $by = $4; goto enough;
> }
>
>
> On Mon, Dec 09, 2019 at 09:33:39AM +0100, hg user wrote:
> > Here the line, redacted:
> > Dec  9 09:30:36.700 [10600] dbg: received-header: unparseable: from
> > host4.domain.it (LHLO host4.domain.it) (1.1.1.1) by host3.domain.it with
> > LMTP; Tue, 3 Dec 2019 11:55:44 +0100 (CET)
> >
> > Thank you
> > Francesco
> >
> > On Mon, Dec 9, 2019 at 9:22 AM Henrik K  wrote:
> >
> > > On Mon, Dec 09, 2019 at 09:10:25AM +0100, hg user wrote:
> > > > Investigating why a message scored X when arrived and Y now
> (recovered
> > > from
> > > > user inbox), I realized that UNPARSEABLE_RELAY_COUNT rule fires on
> all
> > > messages
> > > > recovered from user inbox.
> > > > In almost all cases this is not a problem, except for
> XPRIO_SHORT_SUBJ:
> > > it
> > > > fired on X and didn't fire on Y due to UNPARSEABLE_RELAY_COUNT...
> > > >
> > > > How can I understand why UNPARSEABLE_RELAY_COUNT rule fires?
> > >
> > > Use debugging
> > >
> > > spamassassin -t -D -L < message | grep unparseable:
> > >
> > > Show the output here if you can, it might need to be added in SA code.
>


Re: __UNPARSEABLE_RELAY_COUNT: which one?

2019-12-09 Thread Henrik K


I guess you are using an old SA version; it was fixed in 3.4.2

# Received: from mail-backend..com (LHLO mail-backend..com) 
(10.2.2.20) by mail-backend..com with LMTP; Thu, 18 Jun 2015 16:50:56 -0700 
(PDT)
if (/^(\S+) \(LHLO (\S*)\) \((${IP_ADDRESS})\) by (\S+) with LMTP/) {
  $rdns = $1; $helo = $2; $ip = $3; $by = $4; goto enough;
}
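For anyone who wants to check a header against that pattern without
patching, here is a rough Python translation. The IP_ADDRESS stand-in is a
simplified IPv4-only pattern (SA's real ${IP_ADDRESS} also covers IPv6),
and the leading "from " is assumed to be already stripped, as Received.pm
does before matching:

```python
import re

# Simplified IPv4-only stand-in for SpamAssassin's ${IP_ADDRESS}.
IP_ADDRESS = r"\d{1,3}(?:\.\d{1,3}){3}"

# Rough translation of the 3.4.2 Received.pm pattern quoted above.
lmtp_re = re.compile(
    rf"^(\S+) \(LHLO (\S*)\) \(({IP_ADDRESS})\) by (\S+) with LMTP")

# The redacted header from this thread, with the leading "from " stripped.
header = ("host4.domain.it (LHLO host4.domain.it) (1.1.1.1) "
          "by host3.domain.it with LMTP; Tue, 3 Dec 2019 11:55:44 +0100 (CET)")

m = lmtp_re.match(header)
rdns, helo, ip, by = m.groups()
print(rdns, helo, ip, by)  # host4.domain.it host4.domain.it 1.1.1.1 host3.domain.it
```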


On Mon, Dec 09, 2019 at 09:33:39AM +0100, hg user wrote:
> Here the line, redacted:
> Dec  9 09:30:36.700 [10600] dbg: received-header: unparseable: from
> host4.domain.it (LHLO host4.domain.it) (1.1.1.1) by host3.domain.it with
> LMTP; Tue, 3 Dec 2019 11:55:44 +0100 (CET)
> 
> Thank you
> Francesco
> 
> On Mon, Dec 9, 2019 at 9:22 AM Henrik K  wrote:
> 
> > On Mon, Dec 09, 2019 at 09:10:25AM +0100, hg user wrote:
> > > Investigating why a message scored X when arrived and Y now (recovered
> > from
> > > user inbox), I realized that UNPARSEABLE_RELAY_COUNT rule fires on all
> > messages
> > > recovered from user inbox.
> > > In almost all cases this is not a problem, except for XPRIO_SHORT_SUBJ:
> > it
> > > fired on X and didn't fire on Y due to UNPARSEABLE_RELAY_COUNT...
> > >
> > > How can I understand why UNPARSEABLE_RELAY_COUNT rule fires?
> >
> > Use debugging
> >
> > spamassassin -t -D -L < message | grep unparseable:
> >
> > Show the output here if you can, it might need to be added in SA code.


Re: __UNPARSEABLE_RELAY_COUNT: which one?

2019-12-09 Thread hg user
Here is the line, redacted:
Dec  9 09:30:36.700 [10600] dbg: received-header: unparseable: from
host4.domain.it (LHLO host4.domain.it) (1.1.1.1) by host3.domain.it with
LMTP; Tue, 3 Dec 2019 11:55:44 +0100 (CET)

Thank you
Francesco

On Mon, Dec 9, 2019 at 9:22 AM Henrik K  wrote:

> On Mon, Dec 09, 2019 at 09:10:25AM +0100, hg user wrote:
> > Investigating why a message scored X when arrived and Y now (recovered
> from
> > user inbox), I realized that UNPARSEABLE_RELAY_COUNT rule fires on all
> messages
> > recovered from user inbox.
> > In almost all cases this is not a problem, except for XPRIO_SHORT_SUBJ:
> it
> > fired on X and didn't fire on Y due to UNPARSEABLE_RELAY_COUNT...
> >
> > How can I understand why UNPARSEABLE_RELAY_COUNT rule fires?
>
> Use debugging
>
> spamassassin -t -D -L < message | grep unparseable:
>
> Show the output here if you can, it might need to be added in SA code.
>
>


Re: __UNPARSEABLE_RELAY_COUNT: which one?

2019-12-09 Thread Henrik K
On Mon, Dec 09, 2019 at 09:10:25AM +0100, hg user wrote:
> Investigating why a message scored X when arrived and Y now (recovered from
> user inbox), I realized that UNPARSEABLE_RELAY_COUNT rule fires on all 
> messages
> recovered from user inbox.
> In almost all cases this is not a problem, except for XPRIO_SHORT_SUBJ: it
> fired on X and didn't fire on Y due to UNPARSEABLE_RELAY_COUNT...
> 
> How can I understand why UNPARSEABLE_RELAY_COUNT rule fires?

Use debugging

spamassassin -t -D -L < message | grep unparseable:

Show the output here if you can, it might need to be added in SA code.



__UNPARSEABLE_RELAY_COUNT: which one?

2019-12-09 Thread hg user
Investigating why a message scored X when arrived and Y now (recovered from
user inbox), I realized that UNPARSEABLE_RELAY_COUNT rule fires on all
messages recovered from user inbox.
In almost all cases this is not a problem, except for XPRIO_SHORT_SUBJ: it
fired on X and didn't fire on Y due to UNPARSEABLE_RELAY_COUNT...

How can I understand why UNPARSEABLE_RELAY_COUNT rule fires?

Thanks
Francesco