> -----Original Message-----
> From: [email protected] [mailto:[email protected]]
> On Behalf Of danx0r
> Sent: Sunday, December 21, 2008 6:04 PM
> To: realXtend
> Subject: [REX] Re: new bsd-viewer
>
>
> and here's the missing post:
>
> -----------------------------
> one thing to keep in mind -- an issue with libomv is it runs on its
> own thread.  Since we're thinking about issues of concurrence,
> parallelism, performance etc (we've been talking on IRC about that,
> should move to rex-arch at some point) -- I'd like to understand how
> the C++ port of libomv will be dealing with that issue.


Technically, libomv runs in many threads. When you instantiate a GridClient 
object it exists in the thread it was created in. If you call 
client.Network.Login(), that sequence runs in the same thread. However, the 
packet processing in libomv is entirely asynchronous, so every packet handled 
or event fired from the library happens in an IOCP thread. .NET uses a hybrid 
system for IOCP threads, so some of these will be actual system threads that 
were sitting in a threadpool, while some of them are actually microthreads 
(fibers). The main takeaway is that any libomv function whose name starts with 
Begin*() will do its heavy lifting in a callback or on a worker thread, and any 
callback or packet handler invoked by libomv will likewise execute in a 
different thread context.
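The Begin*()/callback pattern described above can be sketched in Python (a conceptual illustration only, not the libomv API; `begin_fetch` is a hypothetical stand-in) using a thread pool, showing that the callback fires on a different thread than the caller:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a libomv-style Begin*() method: the work and
# the callback both run on a pool thread, never on the caller's thread.
_pool = ThreadPoolExecutor(max_workers=4)

def begin_fetch(request, callback):
    """Start the work asynchronously; fire `callback` on a pool thread."""
    def worker():
        result = f"response-to-{request}"  # pretend network I/O happened here
        callback(result, threading.get_ident())
    _pool.submit(worker)

done = threading.Event()
seen = {}

def on_done(result, cb_thread_id):
    # Analogous to a libomv event handler: runs in the pool's context.
    seen["result"] = result
    seen["thread"] = cb_thread_id
    done.set()

begin_fetch("login", on_done)
done.wait(timeout=5)
print(seen["result"])                           # response-to-login
print(seen["thread"] != threading.get_ident())  # True: callback ran elsewhere
```

The same structural point holds in .NET: code inside the callback must not assume it is on the thread that initiated the call.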

No single programming model is ideal for every language, every operating 
system, etc. This model has been shown to perform well with .NET on Windows 
(see especially the recent high-performance socket improvements in .NET 3.5 
SP1), and Mono has been steadily improving its IOCP performance. The tradeoff, 
as Dan mentioned, is the overhead of managing data shared between threads, 
both in potentially buggy code and in lock contention.
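That shared-data overhead is easy to illustrate with a generic sketch (Python here, not libomv code): any state touched from both the main thread and callback threads needs a lock, and getting this wrong is exactly the "potentially buggy code" referred to above.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# Shared state mutated from callback threads must be lock-protected;
# without the lock, concurrent increments can interleave and lose updates.
lock = threading.Lock()
counters = {"packets": 0}

def on_packet(_packet):
    # Runs on a pool thread, analogous to a libomv packet handler.
    with lock:
        counters["packets"] += 1

with ThreadPoolExecutor(max_workers=8) as pool:
    for i in range(1000):
        pool.submit(on_packet, i)
# Leaving the with-block waits for all submitted handlers to finish.
print(counters["packets"])  # 1000
```

Every such lock is also a contention point: with many handler threads hammering one lock, throughput drops, which is the performance side of the tradeoff.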

John

--~--~---------~--~----~------------~-------~--~----~
this list: http://groups.google.com/group/realxtend
realXtend home page: http://www.realxtend.org/
-~----------~----~----~----~------~----~------~--~---