I have seen issues where the write buffers fill up and the flush then 
stalls the process.  This happened to me when doing large file transfers 
on Linux.
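One way to see this happening is to watch the kernel's dirty-page counters while the transfer runs (a quick sketch; it assumes a Linux /proc filesystem):

```shell
# Snapshot of data waiting to be written back to disk.
# A "Dirty:" value that grows large and then drains all at once,
# while the workload pauses, points at writeback stalls.
grep -E '^(Dirty|Writeback):' /proc/meminfo
```

Running it under `watch -n1` during the transfer makes the fill-and-drain pattern easy to spot.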

I ended up changing the value of vm.dirty_writeback_centisecs to 1500.

$ sudo sysctl -a | grep writeback
vm.dirty_writeback_centisecs = 1500
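Note that `sysctl -w` only changes the running kernel; to keep the value across reboots I also dropped it into sysctl.d (the file name below is just an example):

```shell
# Apply to the running kernel right away:
sudo sysctl -w vm.dirty_writeback_centisecs=1500

# Persist across reboots (file name is an example):
echo 'vm.dirty_writeback_centisecs = 1500' | \
    sudo tee /etc/sysctl.d/99-writeback.conf

# Reload all sysctl config and confirm the value stuck:
sudo sysctl --system
sysctl vm.dirty_writeback_centisecs
```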

Here is an article on the issue:
https://lonesysadmin.net/2013/12/22/better-linux-disk-caching-performance-vm-dirty_ratio/
 

-Bryan




> On Aug 18, 2016, at 10:56 PM, Hiroaki Nakamura <[email protected]> wrote:
> 
> Hi,
> 
> I forgot to write traffic server version.
> I'm using ATS 6.1.1 and using the raw partition for the cache storage.
> 
> I also noticed vmstat disk/writ values remain 0 for one minute,
> and then non-zero values follow and slow responses happen.
> 
> 
> 2016-08-19 14:27 GMT+09:00 Hiroaki Nakamura <[email protected]>:
>> Hi,
>> 
>> I'm having trouble where sometimes a request takes 3 to 5 seconds,
>> approximately every 39 minutes.
>> 
>> I turned on the slow log and found that cache_open_read_end is the most
>> time-consuming milestone.
>> 
>> status: 200 unique id:  redirection_tries: 0 bytes: 43 fd: 0 client
>> state: 0 server state: 9 ua_begin: 0.000 ua_first_read: 0.000
>> ua_read_header_done: 0.000 cache_open_read_begin: 0.000
>> cache_open_read_end: 2.536 dns_lookup_begin: 2.536 dns_lookup_end:
>> 2.536 server_connect: 2.537 server_first_read: 2.589
>> server_read_header_done: 2.589 server_close: 2.589 ua_close: 2.589
>> sm_finish: 2.589 plugin_active: 0.000 plugin_total: 0.000
>> 
>> Could you give me advice on why cache_open_read_end takes so long,
>> and how to fix the slow responses?
>> 
>> Thanks,
>> Hiroaki
