Shmuel Fomberg wrote:

I read the documentation of Thread::Apartment, and it's one impressive module.
However, I didn't quite understand what the 'urgent' methods are, and how
they are different from regular ones?

Urgent methods queue their method call requests at the head of the
target object's method request queue, rather than the tail, so they'll
get serviced before any other pending requests.
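The head-of-queue behavior can be sketched like this (a conceptual Python stand-in, not Thread::Apartment's actual Perl internals; the `MethodRequestQueue` class and its method names are made up for illustration):

```python
from collections import deque

class MethodRequestQueue:
    """Toy model of an apartment's method request queue."""
    def __init__(self):
        self._queue = deque()

    def enqueue(self, request, urgent=False):
        if urgent:
            # Urgent requests jump to the head of the queue...
            self._queue.appendleft(request)
        else:
            # ...while normal requests line up at the tail.
            self._queue.append(request)

    def next_request(self):
        # The servicing thread always takes from the head.
        return self._queue.popleft()

q = MethodRequestQueue()
q.enqueue("normal_call_1")
q.enqueue("normal_call_2")
q.enqueue("urgent_call", urgent=True)
print(q.next_request())  # urgent_call is serviced before the normal calls
```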

Also, is Thread::Apartment->new creating objects inside the thread pool?

Sort of. A "constructor" request is sent to a thread to tell it to install
an object in that thread and start servicing method calls for the object.

That is not clear from the docs. If so, how do I create such an object in the
current thread?

Normally, you don't. You install all the apartments, giving them all
proxy references to each other, and then let them all run their own
thread environments, communicating via the proxies. The "main" thread
then just sits around waiting for threads to exit. However, it is possible
to use the main thread as an object, but it needs to create its own
proxy object to pass to the other apartments.
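The install-an-object-and-service-its-calls model can be sketched like so (again a Python sketch of the general apartment-threading pattern; the `Apartment`, `Proxy`, and `Counter` names are hypothetical, not Thread::Apartment's API):

```python
import queue
import threading

class Apartment:
    """Toy apartment: one thread owns the object and services all its calls."""
    def __init__(self, obj):
        self._obj = obj
        self._requests = queue.Queue()
        threading.Thread(target=self._serve, daemon=True).start()

    def _serve(self):
        # Only this thread ever touches the installed object.
        while True:
            method, args, reply = self._requests.get()
            reply.put(getattr(self._obj, method)(*args))

    def proxy(self):
        return Proxy(self._requests)

class Proxy:
    """Handed to other apartments; forwards method calls to the owning thread."""
    def __init__(self, requests):
        self._requests = requests

    def call(self, method, *args):
        reply = queue.Queue(maxsize=1)
        self._requests.put((method, args, reply))
        return reply.get()  # block until the owning thread replies

class Counter:
    def __init__(self):
        self.n = 0
    def incr(self):
        self.n += 1
        return self.n

apt = Apartment(Counter())
p = apt.proxy()
print(p.call("incr"))  # 1 -- executed in the apartment's thread, not the caller's
```

Other threads only ever see the proxy; the object itself never leaves its apartment, which is what makes the model thread-safe without the object doing its own locking.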

You might want to review Microsoft's COM/DCOM architecture, and
look up their definition of apartment threading; it's where I got
many of the notions in Thread::Apartment.


Btw, does every access to a shared variable really take a global
lock? It's crazy. Why does it do that?

threads::shared is really a tie() module that maps scalar, array, and hash
accesses on a thread-private proxy version of a variable to the "real"
instance of the shared variable that exists in a global shared Perl interpreter
context. Needless to say, a Perl interpreter
has a *lot* of internal state. And since any reference to the "real" version
of a variable requires a refcount bump (and eventually, a refcount drop), pretty
much any time an app touches a shared variable, the shared interpreter has to
lock everything down to avoid major internal chaos (esp. on multicore systems)
due to scrambled internal state. It's similar to the locking needed by many
stock C runtime heaps to avoid scrambled heaps...but those tend to use quick
spinlocks, rather than full-blown semaphores (and there are per-thread caching
heap managers, e.g., Hoard and Google's TCMalloc, that try to avoid that
locking too).
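The tie-and-lock pattern can be sketched in Python (a conceptual stand-in for threads::shared's tied proxies, not its actual implementation; `SharedScalar` and `_GLOBAL_LOCK` are invented names):

```python
import threading

_GLOBAL_LOCK = threading.Lock()  # stand-in for the shared interpreter's lock

class SharedScalar:
    """Toy tied scalar: every single access round-trips through the global lock."""
    def __init__(self, value=0):
        with _GLOBAL_LOCK:
            self._value = value

    def get(self):
        with _GLOBAL_LOCK:  # even a plain read locks everything down
            return self._value

    def set(self, value):
        with _GLOBAL_LOCK:
            self._value = value

counter = SharedScalar()

def bump():
    for _ in range(1000):
        # Two lock round-trips per increment -- this is where the cost adds up.
        counter.set(counter.get() + 1)

threads = [threading.Thread(target=bump) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.get())
```

Note that even though every individual access is serialized, the get-then-set in bump() is still not atomic as a pair, so updates can be lost; that's why threads::shared also provides lock() for compound operations. The sketch pays the per-access locking cost without even buying atomicity.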

Thread::Sociable attempts to avoid the refcounting and the shared interpreter,
and takes locks only rarely, so it is (and hopefully someday, will be) much
faster.


But the overhead issues still apply.

Every inter-thread operation is expensive.

Thanks,
Shmuel.


HTH,
Dean Arnold
Presicient Corp.
