Dear Peter,

Please check my comments below. Thanks.


--



Regards,
Yuan Man
Trouble is a Friend.


At 2017-10-27 22:02:21, "Peter Booth" <peter_bo...@me.com> wrote:


There are a few approaches to this but they depend upon what you’re trying to 
achieve.

Are your requests POSTs or GETs? Why do you have the mirroring configured?

<Angus: Both POST and GET methods are possible.>

 If the root cause is that your mirror site cannot support the same workload as 
your primary site, what do you want to happen when your mirror site is 
overloaded? 
<Angus: Yes, what we observed is that the mirror site cannot support the same 
workload as the primary. We are using the test results to estimate the resources 
we need to add to the mirror site. But we need the testing not to affect the 
primary site's handling of client requests, since it is a production 
environment. In other words, whether the mirror site is overloaded or goes down 
completely, it should not impact the primary environment. >

One approach, using nginx, is to use rate limiting and connection limiting on 
your mirror server. This is described on the nginx website as part of the ddos 
mitigation section. 
Or, if your bursts of activity are typically for the same resource then you can 
use caching with the proxy_cache_use_stale directive.
<Angus: We have tried disabling keep-alive and setting the timeouts to very 
short values, but it does not seem to help. >
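
For reference, a minimal sketch of the rate/connection limiting Peter describes, 
roughly along the lines of the nginx DDoS-mitigation guidance. It would sit on 
the nginx instance in front of the mirror backend; the zone names, rates, port 
and the mirror_backend upstream below are only illustrative, not from this 
thread:

    # shared memory zones for per-client request and connection limits
    limit_req_zone  $binary_remote_addr zone=mirror_req:10m rate=100r/s;
    limit_conn_zone $binary_remote_addr zone=mirror_conn:10m;

    server {
        listen 8080;

        location / {
            limit_req  zone=mirror_req burst=50 nodelay;  # allow short bursts, reject the excess with 503
            limit_conn mirror_conn 20;                    # cap concurrent connections per client
            proxy_pass http://mirror_backend;
        }
    }

If the bursts are mostly for the same resource, proxy_cache together with 
proxy_cache_use_stale (Peter's second suggestion) could be layered into the same 
location, but that only helps for cacheable responses.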

Another approach could be to use lua / openresty to implement a work-shedding 
interceptor (within nginx) that sits in front of your slow web server. Within 
lua you would need code that guesses whether or not your web server is 
overloaded and, if it is, it simply returns a 503 and doesn’t forward the 
request.
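
A rough sketch of that idea, assuming OpenResty (or nginx built with the lua 
module) is available; the shared-dict name, the in-flight threshold and the 
slow_backend upstream are made up for illustration:

    lua_shared_dict shed 1m;

    server {
        location / {
            access_by_lua_block {
                -- crude load shedding: count in-flight requests, shed above a threshold
                local dict = ngx.shared.shed
                local inflight = dict:incr("inflight", 1, 0) or 0
                ngx.ctx.counted = true
                if inflight > 100 then        -- illustrative threshold
                    return ngx.exit(503)      -- shed the request instead of queueing it
                end
            }
            log_by_lua_block {
                -- request finished (or was shed): release its slot
                if ngx.ctx.counted then
                    ngx.shared.shed:incr("inflight", -1)
                end
            }
            proxy_pass http://slow_backend;
        }
    }

A smarter version might look at recent upstream response times instead of a raw 
in-flight count, but the shape is the same: decide in the access phase, return 
503 early, and never hand the request to the overloaded backend.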


<Angus: Is there some parameter in nginx that can keep the mirror site from 
impacting nginx itself? For example, if the mirror site is overloaded, have 
nginx simply give up on the new mirror requests. Thanks in advance for your 
response. >
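
There does not seem to be a single switch for that, but one mitigation often 
suggested is to give the mirror location very short timeouts, so a slow or dead 
mirror backend fails fast instead of holding things up. A sketch, assuming the 
mirror traffic goes through its own internal location (the timeout values and 
the mirror_backend name are illustrative):

    location = /mirror {
        internal;
        proxy_pass http://mirror_backend$request_uri;

        # fail the mirror copy fast if the mirror backend is slow or overloaded
        proxy_connect_timeout 100ms;
        proxy_send_timeout    1s;
        proxy_read_timeout    1s;

        # don't retry the mirror copy on another upstream server
        proxy_next_upstream off;
    }

The mirror subrequest still runs, but its worst case is then bounded by the 
timeouts rather than by the mirror backend's queue, which also bounds the delay 
Roman describes for the next request on the same keep-alive connection.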


Sent from my iPhone

On Oct 27, 2017, at 2:24 AM, 安格 <yuan...@163.com> wrote:


Dear Roman,


Thanks for your valuable response.
So, does that mean that if we tune the keep-alive parameters to avoid keep-alive 
connections, we can avoid this kind of performance issue? Or, if the mirror 
subrequest is slower than the original subrequest, is this kind of performance 
issue unavoidable? Thanks in advance.
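
For context, the keep-alive parameters in question are the ordinary client-side 
keepalive_timeout / keepalive_requests directives in the server block that both 
proxies and mirrors the traffic. A minimal sketch of that setup, with 
illustrative upstream names:

    server {
        listen 80;

        # with 'keepalive_timeout 0;' each client request arrives on its own
        # connection, so a slow mirror subrequest only delays its own request
        keepalive_timeout 65;

        location / {
            mirror /mirror;                      # copy every request to the mirror location
            proxy_pass http://primary_backend;
        }
        # /mirror is the usual internal location that proxies to the mirror backend
    }

Turning keep-alive off (or down) trades connection reuse for isolation between 
consecutive requests from the same client; it does not make the mirror backend 
itself any faster.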


--



Regards,
Yuan Man
Trouble is a Friend.



At 2017-10-26 20:22:13, "Roman Arutyunyan" <a...@nginx.com> wrote:
>Hi,
>
>On Thu, Oct 26, 2017 at 03:15:02PM +0800, 安格 wrote:
>> Dear All,
>> 
>> 
>> I have run into an issue with the nginx "ngx_http_mirror_module" mirror 
>> function and would like to discuss it with you.
>> 
>> 
>> The situation is as follows:
>> We copy requests from the original environment to the mirror side. The 
>> original application can process 600 requests per second, but the mirror 
>> environment can only process 100 requests per second. Normally, even if the 
>> mirror environment cannot process all of the requests in time, that should 
>> not affect nginx forwarding requests to the original environment. But we 
>> observed that when the mirror environment cannot keep up, nginx runs into 
>> trouble and the original environment cannot return results to the client in 
>> time. From the client side it then looks as if nginx is down. 
>> Have you faced the same issue before? Any suggestions?
>
>A mirror request is executed in parallel with the main request and does not
>directly affect the main request execution time.  However, if you send another
>request on the same client connection, it will not be processed until the
>previous request and all its subrequests (including mirror subrequests) finish.
>So if you use keep-alive client connections and your mirror subrequests are
>slow, you may experience some performance issues.
>
>-- 
>Roman Arutyunyan
>_______________________________________________
>nginx mailing list
>nginx@nginx.org
>http://mailman.nginx.org/mailman/listinfo/nginx



_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
