Hi Mikhail,

I didn't see this error in my log. Here is my OS/kernel info:
CentOS:   8.1.1911
Kernel:   5.7.19
liburing: liburing-1.0.7-3.el8.x86_64, liburing-devel-1.0.7-3.el8.x86_64 (from
the yum repo)

Regarding the error (11: Resource temporarily unavailable): it is probably
caused by too many concurrent reads of "/usr/local/html/64746" while the file
is still locked by a previous read. I tried to reproduce the error with a
single file, but nginx seems to keep that single file in memory automatically,
so I don't see the error. How do you perform the test? I'd like to reproduce
this if possible.

My nginx reported this error before:
2021/01/04 05:04:29 [alert] 50769#50769: *11498 pread() read only 7101 of 15530 
from "/mnt/cache1/17/68aae9d816ec02340ee617b7ee52a117", client: 11.11.11.3, 
server: _, request: "GET /_100kobject?version=cdn003191&thread=64 HTTP/1.1", 
host: "11.11.11.1:8080"
This was already fixed by my second patch (Jan 25).

BR,
Ping

-----Original Message-----
From: nginx-devel <nginx-devel-boun...@nginx.org> On Behalf Of Mikhail 
Isachenkov
Sent: Wednesday, February 3, 2021 10:11 PM
To: nginx-devel@nginx.org
Subject: Re: [PATCH] Add io_uring support in AIO(async io) module

Hi Ping Zhao,

When I tried to repeat this test, I got a huge number of these errors:

2021/02/03 10:22:48 [crit] 30018#30018: *2 aio read "/usr/local/html/64746" 
failed (11: Resource temporarily unavailable) while sending response to client, 
client: 127.0.0.1, server: localhost,
request: "GET /64746 HTTP/1.1", host: "localhost"

I tested this patch on Ubuntu 20.10 (5.8.0-1010-aws kernel version) and Fedora 
33 (5.10.11-200.fc33.x86_64) with the same result.

Did you get any errors in the error log with the patch applied? Which OS/kernel
did you use for testing? Did you perform any specific tuning before running?

25.01.2021 11:24, Zhao, Ping пишет:
> Hello, here is a small update to correct the length calculation when part of
> the request was already received previously.
> This case may happen when using io_uring, since throughput is increased.
> 
> # HG changeset patch
> # User Ping Zhao <ping.z...@intel.com>
> # Date 1611566408 18000
> #      Mon Jan 25 04:20:08 2021 -0500
> # Node ID f2c91860b7ac4b374fff4353a830cd9427e1d027
> # Parent  1372f9ee2e829b5de5d12c05713c307e325e0369
> Correct length calculation when part of request received.
> 
> diff -r 1372f9ee2e82 -r f2c91860b7ac src/core/ngx_output_chain.c
> --- a/src/core/ngx_output_chain.c     Wed Jan 13 11:10:05 2021 -0500
> +++ b/src/core/ngx_output_chain.c     Mon Jan 25 04:20:08 2021 -0500
> @@ -531,6 +531,14 @@
>   
>       size = ngx_buf_size(src);
>       size = ngx_min(size, dst->end - dst->pos);
> +#if (NGX_HAVE_FILE_IOURING)
> +    /*
> +     * check if part of the request was already received previously,
> +     * and calculate the remaining length
> +     */
> +    if (dst->last > dst->pos && size > (dst->last - dst->pos))
> +        size = size - (dst->last - dst->pos);
> +#endif
>   
>       sendfile = ctx->sendfile && !ctx->directio;
> 
> -----Original Message-----
> From: nginx-devel <nginx-devel-boun...@nginx.org> On Behalf Of Zhao, 
> Ping
> Sent: Thursday, January 21, 2021 9:44 AM
> To: nginx-devel@nginx.org
> Subject: RE: [PATCH] Add io_uring support in AIO(async io) module
> 
> Hi Vladimir,
> 
> No special/extra configuration is needed, but please check that 'aio on' and
> 'sendfile off' are set correctly. This is my nginx config for reference:
> 
> user nobody;
> daemon off;
> worker_processes 1;
> error_log error.log ;
> events {
>      worker_connections 65535;
>      use epoll;
> }
> 
> http {
>      include mime.types;
>      default_type application/octet-stream;
>      access_log on;
>      aio on;
>      sendfile off;
>      directio 2k;
> 
>      # Cache Configurations
>      proxy_cache_path /mnt/cache0 levels=2 keys_zone=nginx-cache0:400m
>      max_size=1400g inactive=4d use_temp_path=off;
>      ......
> 
> 
> To better measure the disk I/O performance, I use the following steps:
> 1. Exclude other impacts and focus on the disk I/O part (this patch only
> affects the disk AIO read path). Use cgroups to limit nginx memory usage;
> otherwise nginx may also use memory as cache storage, which makes the test
> results less clear (most cache hits are then served from memory and disk I/O
> bandwidth is low, as in my previous mail, which didn't exclude the memory
> cache impact).
>       echo 2G > memory.limit_in_bytes
>       Use 'cgexec -g memory:nginx' to start nginx.
> 
> 2. Use wrk -t 100 -c 1000, with 25000 random HTTP requests.
>       My previous test used -c 200 connections; when the connection count
> increases from 200 to 1000, libaio performance drops noticeably, but
> io_uring's doesn't. This is another advantage of io_uring.
> 
> 3. First clean the cache disk, then run the test for 30 minutes to let nginx
> store as many cache files as possible on the NVMe disk.
> 
> 4. Rerun the test. This time nginx uses ngx_file_aio_read to read the
> cache files from the NVMe cache disk. Use iostat to track the I/O data;
> it should align with the NIC bandwidth, since all data should come from
> the cache disk (the memory-as-cache impact must be excluded).
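The four steps above can be collected into a rough driver script. This is a
sketch of the environment setup, not a tested harness: the nginx path, server
URL, wrk script name, and cgroup-v1 layout are all assumptions.

```shell
#!/bin/sh
# Sketch of the benchmark procedure above (cgroup v1 layout assumed).

# Step 1: cap nginx memory so the page cache cannot act as cache storage.
mkdir -p /sys/fs/cgroup/memory/nginx
echo 2G > /sys/fs/cgroup/memory/nginx/memory.limit_in_bytes
cgexec -g memory:nginx /usr/local/nginx/sbin/nginx

# Steps 2-3: warm the cache with random requests for 30 minutes so nginx
# writes the cache files out to the NVMe disk.
wrk -t 100 -c 1000 -d 30m -s random-25000.lua http://127.0.0.1:8080/

# Step 4: rerun while watching disk reads; throughput should track the
# NIC bandwidth once every hit is served from the cache disk.
iostat -xm 1 > iostat.log &
wrk -t 100 -c 1000 -d 5m -s random-25000.lua http://127.0.0.1:8080/
kill %1
```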
> 
> Following is the test result:
> 
> Nginx worker_processes 1:
>               4k        100k      1M
> io_uring      220MB/s   1GB/s     1.3GB/s
> libaio        70MB/s    250MB/s   600MB/s (with -c 200: 1.0GB/s)
> 
> Nginx worker_processes 4:
>               4k        100k      1M
> io_uring      800MB/s   2.5GB/s   2.6GB/s (my NVMe disk's maximum bandwidth)
> libaio        250MB/s   900MB/s   2.0GB/s
> 
> So for small requests, io_uring shows a huge improvement over libaio. In my
> previous mail, because I didn't exclude the memory cache impact, most cache
> files were served from memory and very few from disk in the 4k/100k cases,
> so that data was not correct. (For 1M, the cache was too big to fit in
> memory, so it was on disk.) I also enabled the directio option
> ("directio 2k") this time to avoid this.
> 
> Regards,
> Ping
> 
> -----Original Message-----
> From: nginx-devel <nginx-devel-boun...@nginx.org> On Behalf Of 
> Vladimir Homutov
> Sent: Wednesday, January 20, 2021 12:43 AM
> To: nginx-devel@nginx.org
> Subject: Re: [PATCH] Add io_uring support in AIO(async io) module
> 
> On Tue, Jan 19, 2021 at 03:32:30AM +0000, Zhao, Ping wrote:
>> It depends on whether disk I/O is the performance hotspot or not. If it
>> is, io_uring shows an improvement over libaio. With 4KB/100KB objects and
>> 1 nginx worker, it's hard to see a performance difference, because iostat
>> shows only around ~10MB/100MB per second; disk I/O is not the bottleneck,
>> and libaio and io_uring perform the same. If you increase the request size
>> or the number of nginx workers, for example to 1MB objects or 4 workers,
>> disk I/O becomes the bottleneck, and you will see the io_uring performance
>> improvement.
> 
> Can you please provide full test results with specific nginx configuration?
> 
> _______________________________________________
> nginx-devel mailing list
> nginx-devel@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel
> 

--
Best regards,
Mikhail Isachenkov
NGINX Professional Services
_______________________________________________
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel
