Re: Best practice - concurrent delivery to remote sites

2018-12-06 Thread Andrey Repin
Greetings, Viktor Dukhovni!

>> On Dec 6, 2018, at 2:19 PM, Andrey Repin  wrote:
>> 
>> In other words, if I have multiple different messages to the same destination,
>> I can't know if they will be delivered through a single connection?
>> And can't control it?

> If the inter-message spacing exceeds either of:

> http://www.postfix.org/postconf.5.html#connection_cache_ttl_limit
>
> http://www.postfix.org/postconf.5.html#smtp_connection_cache_time_limit

> then any cached connections would be closed before it is time to send
> another message.  Generally, with serialized deliveries, you should not
> cache connections.  Keeping idle connections open is anti-social: you're
> consuming remote resources.

> A transport with a destination rate delay will not do demand caching,
> which IIRC requires either concurrent or closely spaced deliveries
> before it is enabled.

> Bottom line, with rate delays, each delivery would be expected to use a new
> connection.  On demand connection re-use is not compatible with rate delays.

Thank you for your assistance.
I see how lingering connections could be a worse problem than multiple
simultaneous connections.
I'll have to think this issue through again, and probably start a new topic
for it.


-- 
With best regards,
Andrey Repin
Friday, December 7, 2018 0:12:45

Sorry for my terrible english...



Re: Best practice - concurrent delivery to remote sites

2018-12-06 Thread Viktor Dukhovni
> On Dec 6, 2018, at 2:19 PM, Andrey Repin  wrote:
> 
> In other words, if I have multiple different messages to the same destination,
> I can't know if they will be delivered through a single connection?
> And can't control it?

If the inter-message spacing exceeds either of:

http://www.postfix.org/postconf.5.html#connection_cache_ttl_limit
http://www.postfix.org/postconf.5.html#smtp_connection_cache_time_limit

then any cached connections would be closed before it is time to send
another message.  Generally, with serialized deliveries, you should not
cache connections.  Keeping idle connections open is anti-social: you're
consuming remote resources.

A transport with a destination rate delay will not do demand caching,
which IIRC requires either concurrent or closely spaced deliveries
before it is enabled.

Bottom line, with rate delays, each delivery would be expected to use a new
connection.  On demand connection re-use is not compatible with rate delays.
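
To make the interplay concrete, here is a minimal main.cf sketch; the
cache-related values below are the stock defaults (assuming this installation
did not change them), and the 15s delay is the value discussed in this thread:

    # main.cf (sketch -- main.cf does not allow trailing comments,
    # so the explanations sit on their own lines)
    #
    # Demand caching only kicks in for concurrent or closely spaced
    # deliveries to the same destination:
    smtp_connection_cache_on_demand = yes
    # An unused cached connection is closed after at most ~2 seconds,
    # and the scache(8) server caps the time-to-live similarly:
    smtp_connection_cache_time_limit = 2s
    connection_cache_ttl_limit = 2s
    # A 15s inter-delivery delay exceeds both limits above, so every
    # delivery ends up on a fresh connection anyway:
    default_transport_rate_delay = 15s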

-- 
Viktor.



Re: Best practice - concurrent delivery to remote sites

2018-12-06 Thread Andrey Repin
Greetings, Viktor Dukhovni!

>>> The default amount of delay that is inserted between individual deliveries
>>> over the same message delivery transport, regardless of destination. If
>>> non-zero, all deliveries over the same message delivery transport will
>>> happen one at a time.
>> 
>> To me, it is unclear:
>> - what is considered "individual deliveries"? Individual messages? Individual
>> connections to the destination?

> One delivery at a time.  A delivery is a handoff of a message with a set of
> recipients of that message to a delivery agent for processing.

>> - what does "one at a time" mean exactly?

> Less than two or more in parallel.

>> Will the queue manager connect and disconnect for each message in the queue?

> The queue manager does not connect to remote destinations; delivery
> agents make connections.  The queue manager asks delivery agents to
> perform work, and collects the results.

>> Will it try to deliver multiple messages
>> to the same destination in parallel, over multiple connections?

> One delivery at a time, with the configured delay between deliveries.
> [ where one is less than two. ]

In other words, if I have multiple different messages to the same destination,
I can't know if they will be delivered through a single connection?
And can't control it?


-- 
With best regards,
Andrey Repin
Thursday, December 6, 2018 22:07:44

Sorry for my terrible english...



Re: Best practice - concurrent delivery to remote sites

2018-12-06 Thread Viktor Dukhovni
> On Dec 6, 2018, at 1:28 PM, Andrey Repin  wrote:
> 
>> The default amount of delay that is inserted between individual deliveries
>> over the same message delivery transport, regardless of destination. If
>> non-zero, all deliveries over the same message delivery transport will
>> happen one at a time.
> 
> To me, it is unclear:
> - what is considered "individual deliveries"? Individual messages? Individual
> connections to the destination?

One delivery at a time.  A delivery is a handoff of a message with a set of
recipients of that message to a delivery agent for processing.

> - what does "one at a time" mean exactly?

Less than two or more in parallel.

> Will the queue manager connect and disconnect for each message in the queue?

The queue manager does not connect to remote destinations; delivery
agents make connections.  The queue manager asks delivery agents to
perform work, and collects the results.

> Will it try to deliver multiple messages
> to the same destination in parallel, over multiple connections?

One delivery at a time, with the configured delay between deliveries.
[ where one is less than two. ]
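
To put that division of labor in file terms, a hedged sketch (the master.cf
line below is the usual "smtp" transport entry; exact column values can differ
between installations):

    # master.cf
    # service  type  private unpriv chroot wakeup maxproc command
    smtp       unix  -       -      n      -      -       smtp
    #
    # qmgr(8) never opens SMTP connections itself; it hands each delivery
    # (one message plus its recipients for this transport) to one of the
    # smtp(8) delivery agents defined above.  The "maxproc" column
    # ("-" = default_process_limit) caps how many of those agents may run
    # in parallel, and a non-zero rate delay serializes them further.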

-- 
Viktor.



Re: Best practice - concurrent delivery to remote sites

2018-12-06 Thread Andrey Repin
Greetings, Wietse Venema!

>> default_transport_rate_delay = 15s

I'd like to ask for clarification, as the man page wording is not clear to me.

The original wording is

> The default amount of delay that is inserted between individual deliveries
> over the same message delivery transport, regardless of destination. If
> non-zero, all deliveries over the same message delivery transport will
> happen one at a time.

To me, it is unclear:
- what is considered "individual deliveries"? Individual messages? Individual
connections to the destination?
- what does "one at a time" mean exactly? Will the queue manager connect and
disconnect for each message in the queue? Will it try to deliver multiple
messages to the same destination in parallel, over multiple connections?


-- 
With best regards,
Andrey Repin
Thursday, December 6, 2018 21:23:52

Sorry for my terrible english...



Re: Best practice - concurrent delivery to remote sites

2018-12-06 Thread Andrey Repin
Greetings, Wietse Venema!

> Wietse:
>> > I don't think that there is a 'standard' policy that 'works' for
>> > delivery from every site to every site.
>> >
>> > Nowadays you get a policy exception from 'big' receivers, and you
>> > come up with transport_maps with different 'classes' of delivery
>> > agents that are configured with different rate_delay (no concurrency),
>> > with limited concurrency, and/or with different source IP address,
>> > then pick the agent depending on destination.
>> >
>> > Or you just pay a mail sending company for doing the job.

> Stefan Bauer:
>> Thank you Wietse,
>> 
>> wouldn't default_transport_rate_delay = 15s
>> 
>> be a safe setting to relax the whole transport a bit?

> It is a big sledgehammer that allows 4 deliveries per minute per
> destination (or per transport). If that works for you, great.
> Just keep in mind that 'postfix reload' will reset the rate
> delay timers.

Given his use case (hundreds of mails could be generated per minute to the
same destination), this seems appropriate.
I'm considering doing the same on my relay systems, to limit the rate at which
they talk to the smarthost.
My use case is not "hundreds", but I'd rather have this level of throttling
than leave a wide-open gap for brute-force attacks.
(I'm not a huge fan of fail2ban; I prefer more direct approaches.)


-- 
With best regards,
Andrey Repin
Thursday, December 6, 2018 21:16:00

Sorry for my terrible english...



Re: Best practice - concurrent delivery to remote sites

2018-12-06 Thread Wietse Venema
Wietse:
> > I don't think that there is a 'standard' policy that 'works' for
> > delivery from every site to every site.
> >
> > Nowadays you get a policy exception from 'big' receivers, and you
> > come up with transport_maps with different 'classes' of delivery
> > agents that are configured with different rate_delay (no concurrency),
> > with limited concurrency, and/or with different source IP address,
> > then pick the agent depending on destination.
> >
> > Or you just pay a mail sending company for doing the job.

Stefan Bauer:
> Thank you Wietse,
> 
> wouldn't default_transport_rate_delay = 15s
> 
> be a safe setting to relax the whole transport a bit?

It is a big sledgehammer that allows 4 deliveries per minute per
destination (or per transport). If that works for you, great.
Just keep in mind that 'postfix reload' will reset the rate
delay timers.
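
Spelled out, the arithmetic behind that number, assuming the 15s value
proposed above:

    default_transport_rate_delay = 15s
    # one delivery, then a 15s pause, over the whole transport:
    #   60s / 15s = 4 deliveries per minute, serialized,
    #   no matter how many destinations the transport serves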

Wietse


Re: Best practice - concurrent delivery to remote sites

2018-12-06 Thread Stefan Bauer
Thank you Wietse,

Wouldn't default_transport_rate_delay = 15s

be a safe setting to relax the whole transport a bit?

From a receiver's perspective, that's something I would like to see instead
of a constant stream of deliveries.

On Thu, Dec 6, 2018 at 14:41, Wietse Venema <wie...@porcupine.org> wrote:

> Stefan Bauer:
> > stuff/best practice that makes the process more effective.
> >
> > I'm certain that remote sites prefer one way over the other.
>
> I don't think that there is a 'standard' policy that 'works' for
> delivery from every site to every site.
>
> Nowadays you get a policy exception from 'big' receivers, and you
> come up with transport_maps with different 'classes' of delivery
> agents that are configured with different rate_delay (no concurrency),
> with limited concurrency, and/or with different source IP address,
> then pick the agent depending on destination.
>
> Or you just pay a mail sending company for doing the job.
>
> Wietse
>


Re: Best practice - concurrent delivery to remote sites

2018-12-06 Thread Wietse Venema
Stefan Bauer:
> stuff/best practice that makes the process more effective.
> 
> I'm certain that remote sites prefer one way over the other.

I don't think that there is a 'standard' policy that 'works' for
delivery from every site to every site.

Nowadays you get a policy exception from 'big' receivers, and you
come up with transport_maps with different 'classes' of delivery
agents that are configured with different rate_delay (no concurrency),
with limited concurrency, and/or with different source IP address,
then pick the agent depending on destination.
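
For illustration only, a hedged sketch of what such a split could look like;
the transport names ("turtle", "bulk"), the delay, the bind address and the
example domains are all made up:

    # master.cf -- two extra classes of smtp delivery agents
    turtle    unix  -       -       n       -       1       smtp
        -o syslog_name=postfix-turtle
        -o smtp_destination_rate_delay=15s
        -o smtp_destination_concurrency_limit=1
    bulk      unix  -       -       n       -       20      smtp
        -o syslog_name=postfix-bulk
        -o smtp_bind_address=203.0.113.25

    # main.cf
    transport_maps = hash:/etc/postfix/transport

    # /etc/postfix/transport (run "postmap /etc/postfix/transport" after edits)
    picky-receiver.example     turtle:
    friendly-receiver.example  bulk: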

Or you just pay a mail sending company for doing the job.

Wietse


Re: Best practice - concurrent delivery to remote sites

2018-12-06 Thread Andrey Repin
Greetings, Stefan Bauer!

> Ack, but I was looking for advice, e.g.:

> initially defer mail delivery for, let's say, a minute, to be able to send out
> a bunch of mails to the same recipient in a single session instead of having
> 100 independent sessions.

For queue management, look at http://www.postfix.org/qmgr.8.html
I can't provide exact solutions, as I'm solving a similar problem myself ATM.
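
One knob that may get close to the "single session" part of the idea is
per-destination connection caching; a tentative main.cf sketch, with
"busy-customer.example" standing in for the destination in question:

    # main.cf
    # Always reuse SMTP connections for this destination, rather than
    # relying on the on-demand caching heuristics:
    smtp_connection_cache_destinations = busy-customer.example
    # How long an unused cached connection is kept open before it is closed:
    smtp_connection_cache_time_limit = 2s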

> stuff/best practice that makes the process more effective.

> I'm certain that remote sites prefer one way over the other.

Sure they do. To each their own.


-- 
With best regards,
Andrey Repin
Thursday, December 6, 2018 15:52:37

Sorry for my terrible english...



Re: Best practice - concurrent delivery to remote sites

2018-12-06 Thread Stefan Bauer
Ack, but I was looking for advice, e.g.:

initially defer mail delivery for, let's say, a minute, to be able to send out
a bunch of mails to the same recipient in a single session instead of having
100 independent sessions.

stuff/best practice that makes the process more effective.

I'm certain that remote sites prefer one way over the other.

Stefan

On Thursday, December 6, 2018, Andrey Repin wrote:
> Greetings, Stefan Bauer!
>
>>>> we're running a small relay-service and looking for best practice to
>>>> deliver mails to remote sites regarding concurrent delivery and so on.
>>>>
>>>> Sometimes, we have customers that are sending several mails per second to
>>>> the same recipients.
>>>>
>>>> What is best practice to handle this?
>>>>
>>>> We would like to avoid getting blacklisted or throttled by remote sites due
>>>> to sending too many mails or in a non-compliant way. How should this be
>>>> handled/configured in postfix?
>>>
>>> This has nothing to do with postfix itself.
>>> Social issues can't be solved by technical means.
>>>
>>>> so far all settings are default in postfix.
>>>>
>>>> thank you.
>>
>> It's no user issue. It's a real and legal use case that customers send
>> several mails per second to the same recipient over a long period
>> (software tests or whatever).
>
> Did I say anything about reality or legality?
> The decision of the remote host to block you is 100% social, not technical
> or legal.
> How they judge you is entirely up to them; as long as you conform to the
> standards, you can't do anything short of communicating with the owners
> and solving any arising issues as they happen.
>
>
> --
> With best regards,
> Andrey Repin
> Thursday, December 6, 2018 15:00:05
>
> Sorry for my terrible english...
>


Re: Best practice - concurrent delivery to remote sites

2018-12-06 Thread Andrey Repin
Greetings, Stefan Bauer!

>>> we're running a small relay-service and looking for best practice to
>>> deliver mails to remote sites regarding concurrent delivery and so on.
>>>
>>> Sometimes, we have customers that are sending several mails per second to
>>> the same recipients.
>>>
>>> What is best practice to handle this?
>>>
>>> We would like to avoid getting blacklisted or throttled by remote sites due
>>> to sending too many mails or in a non-compliant way. How should this be
>>> handled/configured in postfix?
>>
>> This has nothing to do with postfix itself.
>> Social issues can't be solved by technical means.
>>
>>> so far all settings are default in postfix.
>>>
>>> thank you.

> It's no user issue. It's a real and legal use case that customers send
> several mails per second to the same recipient over a long period
> (software tests or whatever).

Did I say anything about reality or legality?
The decision of the remote host to block you is 100% social, not technical
or legal.
How they judge you is entirely up to them; as long as you conform to the
standards, you can't do anything short of communicating with the owners and
solving any arising issues as they happen.


-- 
With best regards,
Andrey Repin
Thursday, December 6, 2018 15:00:05

Sorry for my terrible english...



Re: Best practice - concurrent delivery to remote sites

2018-12-06 Thread Stefan Bauer
It's no user issue. It's a real and legal use case that customers send
several mails per second to the same recipient over a long period
(software tests or whatever).

On Thu, Dec 6, 2018 at 12:50, Andrey Repin wrote:

> Greetings, Stefan Bauer!
>
> > Hi,
>
>
> > we're running a small relay-service and looking for best practice to
> > deliver mails to remote sites regarding concurrent delivery and so on.
>
>
> > Sometimes, we have customers that are sending several mails per second to
> > the same recipients.
>
>
> > What is best practice to handle this?
>
>
> > We would like to avoid getting blacklisted or throttled by remote sites due
> > to sending too many mails or in a non-compliant way. How should this be
> > handled/configured in postfix?
>
>
> This has nothing to do with postfix itself.
> Social issues can't be solved by technical means.
>
> > so far all settings are default in postfix.
>
>
>
> > thank you.
>
>
> --
> With best regards,
> Andrey Repin
> Thursday, December 6, 2018 14:39:20
>
> Sorry for my terrible english...
>
>


Re: Best practice - concurrent delivery to remote sites

2018-12-06 Thread Andrey Repin
Greetings, Stefan Bauer!

> Hi,


> we're running a small relay-service and looking for best practice to
> deliver mails to remote sites regarding concurrent delivery and so on.


> Sometimes, we have customers that are sending several mails per second to 
> the same recipients.


> What is best practice to handle this?


> We would like to avoid getting blacklisted or throttled by remote sites due
> to sending too many mails or in a non-compliant way. How should this be
> handled/configured in postfix?


This has nothing to do with postfix itself.
Social issues can't be solved by technical means.

> so far all settings are default in postfix.



> thank you.


-- 
With best regards,
Andrey Repin
Thursday, December 6, 2018 14:39:20

Sorry for my terrible english...