Hi,

Thank you for your input. Following your steps, I am able to run the
rumpkernel-netmap TCP/IP stack with a multi-threaded application.

I have a small question: what is the difference between creating the thread
with just "pthread_create" and doing a join, versus the steps you mentioned
above (copied below)?

        mylwp = rump_pub_lwproc_curlwp();
        rump_pub_lwproc_rfork(RUMP_RFFDG);
        newlwp = rump_pub_lwproc_curlwp();
        pthread_create(..., newlwp);
        rump_pub_lwproc_switch(mylwp);

and then in the worker:

        rump_pub_lwproc_switch(arg);
        [do work]
        rump_pub_lwproc_releaselwp();
        return NULL;
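
For reference, here is how I have assembled the dance into a full program. This
is only a minimal, untested sketch of my understanding: it assumes a rump
kernel built with the lwproc facility, adds an extra rfork so that main() has
an explicit context to switch back to (the snippet above uses whatever the
current context is), and omits all error handling.

```c
/*
 * Sketch of the lwproc dance: emulate fork()'s file descriptor
 * copy semantics with pthread_create().  Assumptions: lwproc is
 * available in the rump kernel; error handling omitted.
 */
#include <pthread.h>

#include <rump/rump.h>
#include <rump/rump_syscalls.h>

static void *
worker(void *arg)
{

	/* Bind this host thread to the rump process created for it. */
	rump_pub_lwproc_switch(arg);

	/* [do work, e.g. rump_sys_socket(), rump_sys_read(), ...] */

	/* Release the rump process context when done. */
	rump_pub_lwproc_releaselwp();
	return NULL;
}

int
main(void)
{
	struct lwp *mylwp, *newlwp;
	pthread_t pt;

	rump_init();

	/* Assumption: give main() an explicit context to return to. */
	rump_pub_lwproc_rfork(RUMP_RFFDG);
	mylwp = rump_pub_lwproc_curlwp();

	/*
	 * Create a new rump process; RUMP_RFFDG copies the file
	 * descriptor table, which is what emulates fork().
	 */
	rump_pub_lwproc_rfork(RUMP_RFFDG);
	newlwp = rump_pub_lwproc_curlwp();

	/* Hand the new context to the worker, then switch back. */
	pthread_create(&pt, NULL, worker, newlwp);
	rump_pub_lwproc_switch(mylwp);

	pthread_join(pt, NULL);
	return 0;
}
```

Is this roughly right, or is the join itself the part that differs?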



Could you please let me know.

Thank you.


Regards, Santos


On Wed, Apr 23, 2014 at 5:18 PM, Antti Kantee <[email protected]> wrote:

> [somewhat repeating what others said already, hopefully with some new
> insight]
>
>
> On 19/04/14 09:35, Santos Das wrote:
>
>> Before I start, I have a basic question: does the current rump kernel
>> support multiple processes? As you know, in Nginx we have a master process
>> which creates several worker processes. The master process creates the
>> listen socket, all the workers listen on that socket, and synchronization
>> is achieved through a mutex.
>>
>
> Well, yes and no ;)
>
> That's actually a fairly difficult question to answer without getting into
> much detail.  I suggest skimming chapter 2 of the book that Martin linked.
>
>
>>  Now, can I link the rump kernel to the main process? Since the worker
>> processes fork from the parent, they will get the linked rump kernel
>> library in their process address space during the fork system call.
>>
>
> Right, you can't fork the kernel.  It's easy to imagine why not: after a
> fork you'd have two network stacks using the same IP addresses.  (ok,
> strictly speaking, you can fork the kernel, but undesirable things will
> happen as a result)
>
> There is a "remote" mode, where the application isn't in the same host
> process with the rump kernel and communication is done via IPC.  That, of
> course, is much slower than a function call.  Furthermore, for userspace, I
> really wrote it for convenience instead of performance. I'd say using the
> remote mode for high performance is sensible only if using a rump kernel as
> an offload engine.
>
> So, that leaves threads.  Generally, it's not too difficult to replace
> fork with pthread_create().  The approximate replacement, off the top of my
> head, is something like:
>
>         mylwp = rump_pub_lwproc_curlwp();
>         rump_pub_lwproc_rfork(RUMP_RFFDG);
>         newlwp = rump_pub_lwproc_curlwp();
>         pthread_create(..., newlwp);
>         rump_pub_lwproc_switch(mylwp);
>
> and then in the worker:
>
>         rump_pub_lwproc_switch(arg);
>         [do work]
>         rump_pub_lwproc_releaselwp();
>         return NULL;
>
> That dance ensures that you emulate the file descriptor copy properties of
> fork().
>
> (see, there is multiprocess support in a rump kernel, just in a slightly
> different way.  again, see chapter 2)
>
>
>>  The parent process will open the listen 'socket' (here it is no longer a
>> socket, just a call to the rump kernel socket abstraction) and do a
>> select/epoll/kqueue (again an event loop abstraction). Each forked worker
>> process will inherit this listen 'socket'. The zero-copy driver will
>> deliver packets to a user-space FIFO, which will result in a read-activity
>> notification in the rump kernel select abstraction (this is no different
>> from the typical process of getting a packet on the NIC, servicing a
>> network hardware interrupt, and going through the TCP state machine; only
>> here all of that happens in user space and interrupts are replaced by
>> polling the FIFO).
>>
>> Is this supported ? Please let me know if I am wrong in my assumption.
>>
>
> Sockets are sockets; I'm not sure why you wouldn't consider the sockets
> provided by a rump kernel to be sockets.
>
> Yes, generally it will work like that, though sockets are of course a poor
> abstraction if you want zero-copy.
>
> There's still some planned work to optimize the ingress path; currently,
> there's both a "hard" interrupt and a soft interrupt for network packet
> delivery.  That doesn't really make sense in a rump kernel, since a rump
> kernel only supports soft interrupts.  I have a plan, just haven't gotten
> around to doing the work yet:
> https://github.com/rumpkernel/wiki/wiki/Performance%3A-optimizing-networking-performance
> (I wrote the page originally for DPDK, but of course the same thing
> applies to netmap also)
>
>
>>  Another thing I don't fully understand is how the rump kernel needs to
>> interface with the real kernel for other system calls. System calls like
>> gettimeofday(), fork, setbrk, other timer-related calls, or file system
>> calls cannot be serviced by the rump kernel, since it is only a library,
>> and it will need to hand them over to the real kernel.
>>
>
> I'm not sure if that's a question or not.
>
>   - antti
>
_______________________________________________
rumpkernel-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/rumpkernel-users
