>time 7,663 seconds, data 486.61 Mbyte

This is very slow. To be honest - I'm lost for words!

My rebuild results are:

Jan-05-18 04:00:00 Maxbytes: 20,000 
Jan-05-18 04:20:21 start populating Spamdb with 969,781 records - Bayesian 
check is now disabled!
Jan-05-18 04:24:36 start populating Hidden Markov Model with 1,870,535 
records!
Jan-05-18 04:26:43 Total processing time: 1,603 second(s)
Jan-05-18 04:26:43 Total processing data: 518.11 MByte
Jan-05-18 04:26:43 Rebuild processed 13.15 files per second.
Jan-05-18 04:26:45 Uploading Griplist via Direct Connection

Even with MaxBytes set to 50,000, the rebuild stays above 11 files per second.

My two MX servers are connected via VPN - so my corpus is not shared, it 
is permanently synchronized with rsync (one direction). MySQL is in 
bidirectional master-slave sync.
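The one-way corpus sync mentioned above could be driven by something like the following minimal Python wrapper. This is only a sketch: the paths, host name, and rsync flags are made-up illustrations, not my actual setup.

```python
import subprocess

def corpus_sync_cmd(src: str, dest: str, dry_run: bool = False) -> list[str]:
    """Build a one-way rsync command: mirror src onto dest, never back."""
    cmd = ["rsync", "-az", "--delete"]   # archive mode, compressed, mirror deletions
    if dry_run:
        cmd.append("--dry-run")
    return cmd + [src, dest]

# Hypothetical example: push the local corpus to the second MX over the VPN.
cmd = corpus_sync_cmd("/var/assp/corpus/", "mx2.example.com:/var/assp/corpus/")
# subprocess.run(cmd, check=True)  # uncomment on a real system
print(" ".join(cmd))
```

Run from cron (or a scheduled task on Windows), this keeps the slave's corpus a strict mirror of the master's.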

>and it is likely in my setup rather than an ASSP issue

I don't agree 100%. I'll show you.

Hardware + Hypervisor:

VMware ESXi, 6.5.0, 6765664
IBM System x3650 M3 -[7945N2G]-
Intel(R) Xeon(R) CPU X5680 @ 3.33GHz
12 Cores at 6.4GT/s
64GB DDR3-RAM at 1333 MHz
SSD Intel 6000p 512GB m.2 NVMe at PCIe 2.0 x4 lanes
SSD Intel 600p 256GB m.2 NVMe at PCIe 2.0 x4 lanes
IBM M5015/LSI 9260-8i 512MB R/W BBU-cache SAS 6Gbit RAID Controller with a 
14 Disk (146GB 10Krpm) RAID 10 array

virtual machine:

Microsoft Windows Server 2016 (64-bit)
ESXi 6.5 and higher (VM-Version 13)
10GB RAM / ~8GB in use
4 vCPUs - below 5% load, typically 0-2% (26% during rebuild)
all disks on the SSD Intel 6000p
[using the IBM/LSI RAID controller disks for this VM results in nearly 
the same rebuild speed, because of the 512MB controller cache and read-ahead]

applications:

OS uses ~1GB RAM
ASSP uses ~1 GB RAM (Perl 5.26.1 x64, 5 workers, all features enabled) - 
stats for 1623 days - 1023 mails per day - blocking correctness 99.853% - 
CPU usage 1.56% avg
MySQL uses ~1GB RAM
ClamAV uses ~1GB RAM
other apps use ~4GB RAM (dccifd, Domino, PHP, Apache, Unifi-Ctl.....)

You see, this is an "eierlegende Wollmilchsau" - in English an "all-in-one 
device suitable for every purpose" (I hope Google translated that 
correctly) :):)


Except for the Intel 6000p and 600p m.2 NVMe drives, this is really outdated 
hardware. But it has two advantages: 1. it is cheap; 2. the CPUs have a 
high clock speed.
Any modern server hardware will beat this system by a factor of four or 
more - and by much more using Samsung's very fast 960 Pro/SM961 m.2 NVMe 
SSDs at PCIe 3.0, instead of the cheap "slow" Intel SSDs at PCIe 2.0.

From my point of view, the real bottleneck for the rebuild task is that 
only one core (thread) is used by this task, even though 12 or more are 
available.
Because of this (my bad) software design, the speed of a single core 
matters too much. I have been thinking about changing this for a while. I 
hope I'll get this fixed/improved in 2018.

Thomas 




From:   "Colin Waring" <co...@dolphinict.co.uk>
To:     "ASSP development mailing list" <assp-test@lists.sourceforge.net>
Date:   05.01.2018 16:01
Subject:        Re: [Assp-test] Meltdown/Spectre



Hi Thomas,
 
Thank you for the input – I do recall previously discussing ISP mode and 
realising that it was for bigger deployments than ours.
 
We have three servers. Two handling inbound and one specifically for 
Office 365 relaying. The two inbound probably do about 50,000 messages per 
day between them according to infostats.
 
CPU usage on both frontends is 1.62% avg and 1.49% avg respectively. I 
only have a single MySQL db (general load average is around 0.1) and I've 
been watching the hypervisor reports on its performance. I did set up a 
Gluster sync between the two frontends so they have access to the same 
corpus without having to do it over the network - that helped with 
performance; however, I've never been able to get the rebuild run to be 
particularly quick (last night's was a total processing time of 7,663 
seconds, data 486.61 Mbyte). I haven't brought it up here because it 
doesn't really have much of an effect, and it is likely in my setup rather 
than an ASSP issue.
 
So I think I’ll get away with it on my setup, hopefully this information 
will be helpful to other people who are trying to figure out if they’ll be 
impacted.
 
All the best,
Colin Waring.
 
From: Thomas Eckardt [mailto:thomas.ecka...@thockar.com] 
Sent: 05 January 2018 13:49
To: ASSP development mailing list <assp-test@lists.sourceforge.net>
Subject: Re: [Assp-test] Meltdown/Spectre
 
I remember an issue at an ISP who used 10 assp instances with one 
enterprise MySQL backend cluster, sharing all tables across all instances. 
In heavy workload times (100,000 or even more mails per hour), the MySQL 
server was brought to its knees - no matter how many physical resources 
were made available. Even holding the complete assp DB in the DB server's 
RAM did not solve the problem. 
With 100,000 mails per hour and ~50 DB queries per mail (HMMdb and 
spamDB), the DB server has to process at least 5 million queries per 
hour.
Excluding HMMdb and spamDB, there can be, depending on the configuration, 
an additional 10 to 20 DB queries per mail (for all the other DB tables). 
Even this can lead to a very high DB workload! 
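As a sanity check on the numbers above, here is the arithmetic spelled out. The per-mail query counts are the estimates from the text, not measured values:

```python
mails_per_hour = 100_000
hmm_spamdb_queries_per_mail = 50        # HMMdb + spamDB, per the estimate above
other_queries_per_mail = (10, 20)       # remaining DB tables, config dependent

core = mails_per_hour * hmm_spamdb_queries_per_mail
extra_lo, extra_hi = (mails_per_hour * q for q in other_queries_per_mail)

print(f"HMMdb/spamDB load: {core:,} queries/hour")            # -> 5,000,000
print(f"other tables:      {extra_lo:,} to {extra_hi:,} queries/hour")
```

So the "other" tables alone can add one to two million queries per hour on top of the five million from HMMdb/spamDB.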
The URIBL check can also be very resource expensive (read and write!). 
Assume a mail with 100 different URIs is seen for the first time - 100 
unsuccessful cache DB queries, followed by 100 DNS queries, followed by 
100 cache DB writes. 
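The read-then-write cache pattern described above can be sketched like this. It is a toy TTL cache, not assp's actual implementation, and `fake_dns` stands in for the real URIBL DNS lookup:

```python
import time

class URIBLCache:
    """Toy TTL cache in front of a URIBL/DNS lookup."""
    def __init__(self, ttl=3600):
        self.ttl = ttl
        self.store = {}                       # uri -> (verdict, stored_at)

    def lookup(self, uri, dns_query):
        hit = self.store.get(uri)
        if hit and time.time() - hit[1] < self.ttl:
            return hit[0]                     # cache hit: no DNS traffic
        verdict = dns_query(uri)              # cache miss: one DNS round trip
        self.store[uri] = (verdict, time.time())   # one cache write
        return verdict

calls = []
def fake_dns(uri):                            # stand-in for the real URIBL DNS query
    calls.append(uri)
    return "listed" if "bad" in uri else "clean"

cache = URIBLCache()
uris = [f"host{i}.example" for i in range(100)]
for u in uris:
    cache.lookup(u, fake_dns)                 # first sighting: 100 misses, 100 DNS queries
for u in uris:
    cache.lookup(u, fake_dns)                 # second sighting: all hits, 0 DNS queries
print(len(calls))                             # -> 100
```

The point is the cost profile of a first sighting: every URI pays a failed cache read, a DNS query, and a cache write before anything is saved.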

To prevent this issue, assp V2 has a built-in ISP mode for HMMdb and 
spamDB. 
In short: 
- the corpus of all instances is synchronized to a master instance (rsync 
for example) 
- HMMdb and spamDB are held in memory in each instance and each worker 
- HMMdb and spamDB are built on the master system and are distributed as 
files to all other instances using an external script (method of your 
choice) 
- all other tables are shared traditionally - but each instance uses a 
configurable DB cache to prevent repeated DB queries for the same results 
(for example IP checks, HELO ....) 

This ISP mode requires at least 16GB of RAM per instance if a maximum of 
15 SMTP workers is used. Using more than 15 workers in an instance 
produces a large overhead without any performance improvement. 

Colin, I don't know the workload and configuration of your systems - but 
the math is simple. 

A possible solution between the standard mode and the ISP mode could be: 
- each assp instance has its own DB backend 
- all DB backends are synchronized bidirectionally (asynchronously) to a 
DB master server cluster 

Depending on the overall workload, the DB master server cluster must be an 
enterprise cluster or something like that. 
If we assume 10 assp instances, each record change in one instance will 
lead to one store and nine write-sync operations at the master cluster! 
  
If we assume five DB write ops per mail -> 100,000 mails/h across all 
instances -> 500,000 store ops/h + 4.5M sync ops/h at the master cluster. 
Yes - the workload at the cluster will be very high, but it is no longer 
time critical and will balance out over time. 
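The figures above follow directly from the assumptions; as a quick check, with the same illustrative numbers as in the text:

```python
instances = 10
writes_per_mail = 5
mails_per_hour = 100_000          # total across all instances

store_ops = mails_per_hour * writes_per_mail        # one store per write
sync_ops = store_ops * (instances - 1)              # replicated to the other 9

print(f"{store_ops:,} store ops/h + {sync_ops:,} sync ops/h at the master cluster")
# -> 500,000 store ops/h + 4,500,000 sync ops/h at the master cluster
```

Note that the sync traffic grows linearly with the number of instances, so each instance added multiplies the replication load on the cluster.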
The disadvantage is that the tables in all instances are never 100% in 
sync, and the last instance "wins" when writing the same DB record. The 
async state of the tables in all DB backends increases with the overall 
workload. 

You may also think about a ring synchronization between the 10 assp 
DB backends. The cluster will not be required and the DB backends will 
have a manageable workload - but the delay of syncing a single record and 
the data inconsistency across all instances will increase. 

Thomas 






From:   "Colin Waring" <co...@dolphinict.co.uk> 
To:     "ASSP development mailing list" <assp-test@lists.sourceforge.net> 
Date:   05.01.2018 10:45 
Subject:        [Assp-test] Meltdown/Spectre 

 
Hi All,
 
I’m wondering if anyone has updated their ASSP/db backends and monitored 
the performance impact yet.
 
I’m currently working on assessing just how bad this is going to be with 
how many systems I’ve got to coordinate hypervisor/OS/microcode updates on 
so I’m checking around with everyone to see who’s already got some 
answers.
 
All the best,
Colin Waring. 
 

------------------------------------------------------------------------------
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot 
_______________________________________________
Assp-test mailing list
Assp-test@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/assp-test





DISCLAIMER:
*******************************************************
This email and any files transmitted with it may be confidential, legally 
privileged and protected in law, and are intended solely for the use of 
the individual to whom it is addressed.
This email was scanned multiple times for viruses. There should be no 
known virus in this email!
*******************************************************