On 01.06.2022 at 20:42, Henrik Holst via curl-library 
<curl-library@lists.haxx.se> wrote:
> 
> I wonder if you in that specific example (the stackexchange one) are hitting 
> some limit in the DNS resolver, since "domains/s=161.98", the max value 
> there, would amount to something like ~486 epoll events per second (1 event 
> for the DNS reply, 1 for connect, 1 for write is what I'm guesstimating 
> here), which is not many events at all.
> 
> I wouldn't be at all surprised if public DNS servers such as the Cloudflare 
> and Google one used in your example have some kind of rate limiting.

I'm running a similar usage pattern, accessing random domain names with 
epoll-based I/O (behind libev), and, according to ifstat, I see around 82 
Mbit/s peak incoming and 20 Mbit/s outgoing bandwidth when using 2 threads 
with 1024 parallel requests each. I'm using c-ares as the resolver and unbound 
as a DNS cache. In addition, there's some sysctl tuning involved:

net.core.somaxconn = 256
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 30
net.ipv4.ip_local_port_range = 1024 65535
net.core.netdev_max_backlog = 2500
net.core.rmem_max = 26214400
net.core.rmem_default = 26214400
net.core.wmem_max = 26214400
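
For reference, a sketch of making the buffer-related settings persistent 
across reboots (the file name under /etc/sysctl.d is my own choice, not 
something from my setup; values as above):

```shell
# /etc/sysctl.d/99-dns-tuning.conf  (hypothetical file name)
# Larger socket buffers so curl <-> unbound UDP traffic is not dropped
net.core.rmem_max = 26214400
net.core.rmem_default = 26214400
net.core.wmem_max = 26214400

# Reload without rebooting:
#   sysctl --system
```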

This [1] is running on a pretty weak virtual server. The rmem/wmem max 
settings in particular were crucial for coping with the UDP traffic between 
curl and unbound; without them, UDP packets seem to get lost before they are 
processed, which leads to DNS timeouts.
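
As a quick way to check whether UDP datagrams are actually being dropped due 
to full receive buffers, here is a sketch assuming Linux's /proc/net/snmp 
layout (the RcvbufErrors counter is standard there):

```shell
# Print the kernel's count of UDP datagrams dropped because the
# socket receive buffer was full (the failure mode described above).
awk '/^Udp:/ {
    if (!seen) { for (i = 1; i <= NF; i++) name[i] = $i; seen = 1 }
    else for (i = 1; i <= NF; i++)
        if (name[i] == "RcvbufErrors") print "UDP RcvbufErrors:", $i
}' /proc/net/snmp
```

If this counter grows while the load test is running, raising 
net.core.rmem_max/rmem_default is the right knob; `netstat -su` reports the 
same counter as "receive buffer errors".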

Just posting this here because it seems I'm getting much better performance 
than the roughly 5-6 Mbit/s mentioned in the Stack Overflow thread.

Maybe this is helpful.

Best,

Patrick

[1] https://github.com/pschlan/cron-job.org/blob/master/chronos/CurlWorker.cpp 
+ https://github.com/pschlan/cron-job.org/blob/master/chronos/HTTPRequest.cpp 
-- 
Unsubscribe: https://lists.haxx.se/listinfo/curl-library
Etiquette:   https://curl.haxx.se/mail/etiquette.html
