On 25/04/14 10:25, Santos Das wrote:
> Hi,
>
> Thank you for your input. I am able to run the rumpkernel-netmap TCP/IP
> stack with a multithreaded application following your steps. Thank you.
great!

> I have a small question: what is the difference between creating the
> thread with just "pthread_create" and doing a join, vs. the steps you
> mentioned above (putting it below)
>
> mylwp = rump_pub_lwproc_curlwp();
> rump_pub_lwproc_rfork(RUMP_RFFDG);
> newlwp = rump_pub_lwproc_curlwp();
> pthread_create(..., newlwp);
> rump_pub_lwproc_switch(mylwp);
>
> and then in the worker:
>
> rump_pub_lwproc_switch(arg);
> [do work]
> rump_pub_lwproc_releaselwp();
> return NULL;

Process and thread handling in a rump kernel is different from what you'd
expect from an OS. The difference follows from the key idea of rump
kernels: using the host's thread scheduler. Your question is explained,
for example, in the now-famous chapter 2 and on this man page:

http://netbsd.gw.com/cgi-bin/man-cgi?rump_lwproc++NetBSD-current

I'll give the short version here; refer to the material for details.

"In the rump kernel model, each host thread (implemented for example with
pthreads) is either bound to a rump kernel lwp or accesses the rump kernel
with an implicit thread context associated with pid 1."

So if you don't do anything, all host threads will run in the context of
process 1. Now, unless you copy the fd table by "forking", and assuming
the nginx event loop is something like the following, you'd have a race
condition for the socket:

	for (;;) {
		s = accept();
		if (fork() == 0)
			handle(s);
		else
			close(s);
	}

That's why you need to fork your rump kernel process context.

_______________________________________________
rumpkernel-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/rumpkernel-users
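For reference, the whole pattern from the quoted snippet can be sketched
end to end as below. This is only a sketch of the lwproc recipe under
discussion, not a drop-in implementation: it assumes a rump kernel built
with the usual `rump` libraries, omits error handling, and the "[do work]"
part is left as a placeholder.

```c
/* Sketch: per-worker rump kernel process contexts via lwproc. */
#include <rump/rump.h>
#include <rump/rumpdefs.h>	/* RUMP_RFFDG */

#include <pthread.h>
#include <stddef.h>

static void *
worker(void *arg)
{

	/* Bind this host thread to the lwp handed over by the parent. */
	rump_pub_lwproc_switch(arg);

	/* [do work: the thread now runs in its own rump kernel process] */

	/* Release the lwp (and its process context) when done. */
	rump_pub_lwproc_releaselwp();
	return NULL;
}

int
main(void)
{
	struct lwp *mylwp, *newlwp;
	pthread_t pt;

	rump_init();

	/* Remember the current context, then fork a new rump kernel
	 * process context; RUMP_RFFDG copies the file descriptor table,
	 * so the worker gets its own copy of the fds. */
	mylwp = rump_pub_lwproc_curlwp();
	rump_pub_lwproc_rfork(RUMP_RFFDG);
	newlwp = rump_pub_lwproc_curlwp();

	/* Hand the new context to the worker, then switch this host
	 * thread back to its original context. */
	pthread_create(&pt, NULL, worker, newlwp);
	rump_pub_lwproc_switch(mylwp);

	pthread_join(pt, NULL);
	return 0;
}
```

With plain pthread_create alone, both threads would run with the implicit
pid 1 context and share one fd table; the rfork/switch dance above is what
gives each worker its own process context.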
