On 25/01/2020 17:46, Jeremy Harris via Exim-users wrote:
> On 22/01/2020 12:43, Graeme Fowler via Exim-users wrote:
>> It took 35 seconds to do the initial non-delivering queue run while parsing 
>> and routing each message on the queue, then started delivering them. 
> So that's 35ms per message.

> I guess in the longterm, that's a possible point of attack.  We could
> parallelize the scan over that fsync time.

Development testing of a patch gives:

Testcase: 80 messages queued, "exim -d+all" instrumented for the time
until completion of the 1st phase.

4-core x86_64, consumer SSD:
Baseline: approx 1 sec (12.5 ms/msg)
2-ply:           613ms (7.66 ms/msg, +63% speed)
10-ply:          207ms (2.59 ms/msg, +380% or 4.8x speed)

8-core aarch64, rotating disk:
Baseline:        4.8 sec (60 ms/msg)
2-ply:           3.5 sec (44 ms/msg, +36% speed)
10-ply:          3.4 sec (42.5 ms/msg, +42% speed)
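For reference, the per-message figures above are just the total time divided
by the 80-message queue, and the speedup is the ratio of baseline to new
per-message time.  A quick sketch (helper names are mine, purely for
illustration):

```python
def per_msg_ms(total_ms, n_msgs=80):
    """Per-message latency for a run over n_msgs queued messages."""
    return total_ms / n_msgs

def speedup(baseline_ms, new_ms):
    """Ratio of baseline to improved time (same units for both)."""
    return baseline_ms / new_ms

# x86_64 figures from the table above:
print(round(per_msg_ms(613), 2))    # 2-ply: 7.66 ms/msg
print(round(speedup(1000, 207), 1)) # 10-ply: 4.8x
```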


Looks like the aarch64 system is into diminishing returns; unfortunately,
because both the disk and the CPU differ between the two test platforms,
we can't tell which was the limiting factor.  However, the speedup looks
worthwhile, so I think I'll make a guess at 4-ply for an initial commit.

It can't be used when queue_run_in_order is active.
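The shape of the idea, as discussed upthread, is to overlap the per-message
fsync waits by handing messages to N concurrent workers ("N-ply").  A rough
sketch in Python, purely illustrative (Exim itself is C and forks child
processes; all names here are mine, not Exim's):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def route_message(msg_id):
    """Stand-in for the per-message parse/route step of the first
    pass, whose cost is dominated by an fsync() at the end."""
    time.sleep(0.005)  # simulated fsync latency
    return msg_id

def first_pass(queue_ids, ply):
    """Run the non-delivering first pass with `ply` workers.
    The workers finish out of order, which is why a scheme like
    this can't be combined with queue_run_in_order."""
    with ThreadPoolExecutor(max_workers=ply) as pool:
        return list(pool.map(route_message, queue_ids))
```

With ply=1 this degenerates to the serial baseline; larger ply values
overlap more of the fsync waits, up to whatever the disk can absorb.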



Bleeding-edge self-builders may also be interested in 9438970c97:
"ACL: control = queue/first_pass_route".  It provides, for SMTP-sourced
messages, the same facility that "-oqds" provides for command-line
submissions.
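For anyone wanting to try it, usage would look something like the
following in a RCPT ACL; a minimal sketch, assuming the usual ACL
wiring (the acl name and hostlist are mine):

```
acl_smtp_rcpt = acl_check_rcpt

begin acl

acl_check_rcpt:
  accept hosts   = +relay_from_hosts
         control = queue/first_pass_route
```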
-- 
Cheers,
  Jeremy
