> Hello Roberto,
>
> also I found that the same thing happens even without ugreen, just less
> visibly: performance is worse, but not by as much. So here is my
> question: did you think about DYNAMIC async/ugreen thread allocation,
> so that the command line only sets a MAX LIMIT? Because right now it
> seems that when async is too high, you cycle through all the threads
> even when they are not needed or not used?
>
> ab -c 15 -n 40000 http://test.dev/
> --async=100 - Requests per second:    7216.98 [#/sec] (mean)
> --async=40000 - Requests per second:    4684.86 [#/sec] (mean)
>
>
>

Generic async is better than ugreen because it is based on file descriptor
monitoring, so I do not have to iterate over the cores list all the time.

The problem is finding the core that is monitoring a specific file
descriptor. When epoll fires, I iterate over the cores list until I find
the core that is monitoring that specific fd.

Normally this is fast because it is a simple integer comparison (and in
fact the slowdown between 100 and 40000 is less than 50%).

If I used dynamic core allocation, you would lose time calling malloc()
all the time (and worse things happen if you are running out of heap, as
the process must issue the mmap()/brk() syscall).

Preallocation is the fastest way for sure. What I am investigating is
having a second array where the keys are the fds and the values are the
core ids.

So

ready_fd = epoll_wait()

ready_core_id = fd_array[ready_fd];

Instead of

ready_fd = epoll_wait()

foreach(core in cores)
    if core.monitoring_fd == ready_fd
        ready_core_id = core.id
        break
    end
end

In your case you would lose about 160K of memory (40000 fds * 4 bytes per
core id), but you will not need to scan the whole cores array.

I will send you a patch soon so you can run some tests with 0.9.6.6 (along
with the postgresql example I promised).


-- 
Roberto De Ioris
http://unbit.it
_______________________________________________
uWSGI mailing list
[email protected]
http://lists.unbit.it/cgi-bin/mailman/listinfo/uwsgi
