Hi,
Thanks for all the input.
I had a discussion with a few Nginx developers, and running the nginx
workers under a thread model is not advisable. Work was done to support a
thread model, but it was dropped because of various issues, and the dead
code is still there.
Now I am thinking of running nginx so that, for all TCP/IP communication,
it talks to another process (say, APP) which uses rump-netmaptcpip. I am
not sure whether this APP process will become a bottleneck, and I am
wondering whether this approach will give better results than running
Nginx directly on the kernel. Any thoughts?
I read about "rumpclient" and I am thinking it might not help me.
Let me share my thoughts here so that you can shed some light on them
(I am hoping for some light :-) )
1. Say that from Nginx I want to connect to a server. I will ask the APP
process to perform the connect; APP will do the connect and return the fd
to me. Since the APP process will be communicating with more than one
Nginx worker process, it needs to maintain a hash table mapping each fd to
its worker process, so that any further notification on that fd is
conveyed to the correct process.
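The fd-to-worker bookkeeping in step 1 could start out as simple as the
following sketch. All names here are mine, purely illustrative; it assumes
fds are small nonnegative integers, and a real version would use a proper
hash table or resize on demand:

```c
/* Illustrative fd -> worker map for the APP process.  Assumes fds are
 * small nonnegative integers; a real implementation would hash or grow
 * the table instead of using a fixed array. */
#include <stddef.h>

#define FD_MAP_SIZE 1024

static int fd_owner[FD_MAP_SIZE];          /* worker id per fd; -1 = unused */

static void fd_map_init(void)
{
    for (size_t i = 0; i < FD_MAP_SIZE; i++)
        fd_owner[i] = -1;
}

static int fd_map_set(int fd, int worker)  /* record who owns this fd */
{
    if (fd < 0 || fd >= FD_MAP_SIZE)
        return -1;
    fd_owner[fd] = worker;
    return 0;
}

static int fd_map_lookup(int fd)           /* returns worker id, or -1 */
{
    return (fd < 0 || fd >= FD_MAP_SIZE) ? -1 : fd_owner[fd];
}

static void fd_map_clear(int fd)           /* call when the fd is closed */
{
    if (fd >= 0 && fd < FD_MAP_SIZE)
        fd_owner[fd] = -1;
}
```

The table must be cleared on close(), or a recycled fd number would be
routed to the wrong worker.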
2. Similarly, when APP receives a data-arrival notification on a socket,
it will read the data from the socket (the buffer address passed to the
read comes from memory shared between the Nginx worker process and the APP
process) and then inform the Nginx worker process that the data has
arrived.
Similarly for the other TCP function calls...
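The data-arrival path in step 2 could look like the sketch below: the
payload stays in the shared region, and only a small descriptor (fd,
offset, length) crosses the IPC channel. The struct and function names are
my own assumptions, and a pipe stands in for whatever channel is chosen:

```c
/* Sketch: APP reads socket data into shared memory, then sends a small
 * descriptor to the owning worker.  Names are illustrative assumptions. */
#include <stddef.h>
#include <unistd.h>

struct notify_msg {
    int    fd;   /* rump kernel fd the data arrived on            */
    size_t off;  /* offset of the payload in the shared region    */
    size_t len;  /* number of bytes the APP process placed there  */
};

/* APP side: after read(fd, shm_base + off, ...) succeeds, tell the worker. */
static int notify_worker(int chan_wfd, int fd, size_t off, size_t len)
{
    struct notify_msg m = { fd, off, len };
    return write(chan_wfd, &m, sizeof m) == (ssize_t)sizeof m ? 0 : -1;
}

/* Worker side: receive the descriptor; the payload is already readable
 * at shm_base + m->off, no copy through the channel needed. */
static int wait_notify(int chan_rfd, struct notify_msg *m)
{
    return read(chan_rfd, m, sizeof *m) == (ssize_t)sizeof *m ? 0 : -1;
}
```

Messages of sizeof(struct notify_msg) bytes are small enough that pipe
writes are atomic, so multiple notifications do not interleave.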
Will the above idea have any issues?
Now I am wondering what kind of IPC I can set up between the APP process
and the Nginx processes. All IPC goes through the kernel, and that has
overhead. Any suggestions?
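One common way to keep the kernel off the per-message path is a
single-producer/single-consumer ring in the shared mapping, with a syscall
(pipe, eventfd, kqueue) used only to wake a sleeping consumer. A minimal
sketch using C11 atomics, under the assumption of exactly one producer and
one consumer per ring (none of these names come from existing code):

```c
/* Sketch of an SPSC ring living in a shared mapping: enqueue/dequeue are
 * plain memory operations, no kernel crossing per message. */
#include <stdatomic.h>
#include <stdint.h>

#define RING_SLOTS 256u                 /* must be a power of two */

struct ring {
    _Atomic uint32_t head;              /* next slot the producer writes */
    _Atomic uint32_t tail;              /* next slot the consumer reads  */
    uint32_t slot[RING_SLOTS];          /* message payload, e.g. an fd   */
};

static int ring_push(struct ring *r, uint32_t v)
{
    uint32_t h = atomic_load_explicit(&r->head, memory_order_relaxed);
    uint32_t t = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (h - t == RING_SLOTS)
        return -1;                      /* full */
    r->slot[h & (RING_SLOTS - 1)] = v;
    atomic_store_explicit(&r->head, h + 1, memory_order_release);
    return 0;
}

static int ring_pop(struct ring *r, uint32_t *v)
{
    uint32_t t = atomic_load_explicit(&r->tail, memory_order_relaxed);
    uint32_t h = atomic_load_explicit(&r->head, memory_order_acquire);
    if (t == h)
        return -1;                      /* empty */
    *v = r->slot[t & (RING_SLOTS - 1)];
    atomic_store_explicit(&r->tail, t + 1, memory_order_release);
    return 0;
}
```

With one ring per worker in both directions, the only syscalls left on the
data path are the wakeups when one side would otherwise spin.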
Regards, Santos
On Fri, Apr 25, 2014 at 5:02 PM, Antti Kantee <[email protected]> wrote:
> On 25/04/14 11:00, Antti Kantee wrote:
>
>> So if you don't do anything, all local threads will run in the context
>> of process 1. Now, unless you copy the fd by "forking", and assuming
>> the nginx event loop is something like the following, you'd have a race
>> condition for the socket:
>>
>
> Actually, let's expand on that a bit more. If you use a separate rump
> kernel process context, all resources managed by the rump kernel will
> automatically get freed when you release that context, including other file
> descriptors etc. opened by the rump kernel while handling the request. So
> for those resources you get the true semantics of a fork-based worker model.
>
> However, what will _not_ get released is any resource you allocated from
> the host instead of from the rump kernel. This includes e.g. malloc()'d
> memory. Unless you can successfully patch the code to free those resources
> (and perhaps convince nginx upstream to take your patches), you still have
> the option of using the inherently slower remote clients, where a host
> process fork is possible.
>
_______________________________________________
rumpkernel-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/rumpkernel-users