Hi Maxim!

On 8/24/22 18:00, Maxim Dounin wrote:
> I would rather say "it's a common misconception that it's faster
> to communicate through Unix sockets".  While Unix sockets can
> be beneficial for some microbenchmarks, in most production setups
> it makes no difference, yet used to introduce various issues.
On 8/25/22 01:14, Alejandro Colomar wrote:
> Anyway, if you're not convinced by the feature, let me come back to it
> after I do some performance testing in Unit to see how significant the
> numbers are.
I got some numbers (not with nginx, but with Unit; if you want me to try with nginx, I will).  I think they are relevant enough to revive this discussion.
I set up the following: Unit, running two apps (written in C), one being the backend and one the frontend.  The user curl(1)s the frontend, which itself calls the backend (through either localhost or an abstract UDS) and then responds to the user.
The exact code of the apps is here:
<http://www.alejandro-colomar.es/src/alx/alx/nginx/unit-c-app.git/>

I used the 'main' branch for the backend, which is a simple hello-world app.  The 'front' branch is a modified version of the backend which calls the backend 1000 times in a loop, reading the first few bytes of each response, and then responds to the user.  The 'ip' branch is the same as 'front', but calls the backend through localhost.
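In case you don't want to dig through the repo, the relevant difference between the 'front' and 'ip' branches boils down to something like the sketch below.  It is simplified (function names, the request text, and the buffer sizes are illustrative, and error checking is omitted); the abstract socket name "back" and port 8080 come from the config further down.

#include <stddef.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/un.h>

static const char  req[] =
    "GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n";

/* 'front' branch: connect to the abstract UDS "@back".  */
static void call_backend_uds(void)
{
    struct sockaddr_un  sun = { .sun_family = AF_UNIX };
    socklen_t           len;
    char                buf[64];
    int                 fd;

    /* Abstract socket: leading NUL byte, no filesystem entry.  */
    memcpy(sun.sun_path, "\0back", 5);
    len = offsetof(struct sockaddr_un, sun_path) + 5;

    fd = socket(AF_UNIX, SOCK_STREAM, 0);
    connect(fd, (struct sockaddr *) &sun, len);
    write(fd, req, sizeof(req) - 1);
    read(fd, buf, sizeof(buf));        /* just the first few bytes */
    close(fd);
}

/* 'ip' branch: same thing over loopback TCP.  */
static void call_backend_ip(void)
{
    struct sockaddr_in  sin = { .sin_family = AF_INET };
    char                buf[64];
    int                 fd;

    sin.sin_port = htons(8080);
    inet_pton(AF_INET, "127.0.0.1", &sin.sin_addr);

    fd = socket(AF_INET, SOCK_STREAM, 0);
    connect(fd, (struct sockaddr *) &sin, sizeof(sin));
    write(fd, req, sizeof(req) - 1);
    read(fd, buf, sizeof(buf));
    close(fd);
}

int main(void)
{
    /* What the front app does, roughly ('ip' calls call_backend_ip()
     * instead), before writing its own response to the client.
     */
    for (int i = 0; i < 1000; i++)
        call_backend_uds();

    (void) call_backend_ip;            /* unused in this variant */
    return 0;
}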
The unit config file is here:

$ cat /opt/local/unit/state/conf.json
{
    "listeners": {
        "*:8080": {
            "pass": "applications/back"
        },
        "unix:@back": {
            "pass": "applications/back"
        },
        "unix:@front": {
            "pass": "applications/front"
        }
    },

    "applications": {
        "back": {
            "type": "external",
            "executable": "/opt/local/unit/libexec/app_back"
        },
        "front": {
            "type": "external",
            "executable": "/opt/local/unit/libexec/app_front"
        }
    }
}

The times are as follows (I only copied a sample of each, but all of the tests that I ran --around 10 each-- only varied +/- 2 seconds):
IPv4:

$ time curl --output - --abstract-unix-sock front localhost/ >/dev/null 2>&1

real    0m0.122s
user    0m0.006s
sys     0m0.000s

UDS:

$ time curl --output - --abstract-unix-sock front localhost/ >/dev/null 2>&1

real    0m0.078s
user    0m0.006s
sys     0m0.000s

I think these numbers are enough to make some users go for UDS instead of localhost where possible.  Also, I think my test is mostly measuring the latency of the sockets; throughput should differ even more between UDS and localhost, from what I've read.  Of course, if the app is slow, the socket might not be the bottleneck, but from what I've tested, Unit is fast enough that the socket speed is still measurable, so fast apps may legitimately want to pair with UDS.
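I haven't measured throughput here, but to make concrete what I mean by a throughput test, something like the sketch below is what I have in mind: a forked sink reading from an abstract UDS while the parent times how long it takes to push a fixed amount of data through it.  Everything here is made up for illustration (the socket name "thr", the 1 GiB total, the 64 KiB chunks), error checking is omitted, and swapping the AF_UNIX address for an AF_INET socket to 127.0.0.1 would give the loopback figure to compare against.

#include <stddef.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <sys/wait.h>

#define TOTAL   (1L << 30)   /* 1 GiB */
#define CHUNK   (1 << 16)    /* 64 KiB writes */

int main(void)
{
    struct sockaddr_un  sun = { .sun_family = AF_UNIX };
    struct timespec     t0, t1;
    socklen_t           len;
    double              secs;
    long                left;
    char                buf[CHUNK] = {0};
    int                 lfd, cfd, sfd;

    /* Abstract socket "@thr".  */
    memcpy(sun.sun_path, "\0thr", 4);
    len = offsetof(struct sockaddr_un, sun_path) + 4;

    lfd = socket(AF_UNIX, SOCK_STREAM, 0);
    bind(lfd, (struct sockaddr *) &sun, len);
    listen(lfd, 1);

    if (fork() == 0) {                 /* child: read and discard */
        sfd = accept(lfd, NULL, NULL);
        while (read(sfd, buf, sizeof(buf)) > 0)
            ;
        _exit(0);
    }

    cfd = socket(AF_UNIX, SOCK_STREAM, 0);
    connect(cfd, (struct sockaddr *) &sun, len);

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (left = TOTAL; left > 0; left -= CHUNK)
        if (write(cfd, buf, CHUNK) != CHUNK)
            return 1;
    close(cfd);                        /* EOF; the child exits */
    wait(NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.0f MiB/s\n", TOTAL / secs / (1 << 20));
    return 0;
}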
Cheers,

Alex

--
Alejandro Colomar
<http://www.alejandro-colomar.es/>