> 
> Each process is a Pharo object (an instance of ERringElement) that contains a 
> counter, a reference to the next ERringElement, and an "ErlangProcess" that 
> is a Process that contains a reference to an instance of SharedQueue (its 
> "mailbox").
> 
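For readers without the SqueakSource code handy, the ring described above can be sketched as a rough Python analog: threads standing in for Pharo Processes, `queue.Queue` for `SharedQueue` mailboxes. The names `RingElement` and `run_ring` are mine, not from the Erlang project, and this is a sketch of the structure, not a port:

```python
import queue
import threading

class RingElement:
    """Analog of ERringElement: a counter, a reference to the next
    element, and a blocking mailbox standing in for SharedQueue."""
    def __init__(self):
        self.counter = 0
        self.next_element = None       # set once the ring is closed
        self.mailbox = queue.Queue()   # each element blocks here

    def run(self):
        # Block on the mailbox until a message arrives, then forward it;
        # as in the Pharo version, exactly one element is runnable at a time.
        while True:
            n = self.mailbox.get()
            if n is None:              # shutdown token: pass it on and stop
                self.next_element.mailbox.put(None)
                return
            self.counter += 1
            if n == 0:                 # hop budget spent: shut the ring down
                self.next_element.mailbox.put(None)
            else:
                self.next_element.mailbox.put(n - 1)

def run_ring(size, hops):
    """Build a ring of `size` elements, inject one message that makes
    `hops` hops, and return the total number of receives (hops + 1)."""
    ring = [RingElement() for _ in range(size)]
    for i, elem in enumerate(ring):
        elem.next_element = ring[(i + 1) % size]
    threads = [threading.Thread(target=e.run) for e in ring]
    for t in threads:
        t.start()
    ring[0].mailbox.put(hops)          # inject the circulating message
    for t in threads:
        t.join()
    return sum(e.counter for e in ring)
```

Every element increments its counter once per message received, so `run_ring(size, hops)` returns `hops + 1` regardless of ring size.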
> The good news is that up to 50k processes, it didn't crash.  But it did run 
> with increasing sloth.
> 
> I can imagine that the increasing process-creation time is due to beating on 
> the memory manager.  But why the increasing message-sending time as the 
> number of processes increases?  (Recall that exactly one process is runnable 
> at any given time).  I'm wondering if the scheduler is somehow getting 
> overwhelmed by all of the non-runnable processes that are blocked on 
> Semaphores in SharedQueue.  Any ideas?
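One way to probe that question empirically, at least in analog form: park a growing number of threads on empty queues (so they are permanently blocked, like the non-runnable ring members) and time a simple two-thread ping-pong. This exercises the host runtime's scheduler rather than Cog's, so it only suggests the shape of the effect; `measure_send_time` is a hypothetical helper of mine:

```python
import queue
import threading
import time

def measure_send_time(blocked_count, messages=10_000):
    """Time a two-thread ping-pong while `blocked_count` other threads
    sit blocked on empty queues (never runnable). Returns mean seconds
    per round trip."""
    parked = [queue.Queue() for _ in range(blocked_count)]
    sleepers = [threading.Thread(target=q.get, daemon=True) for q in parked]
    for t in sleepers:
        t.start()                      # each blocks forever on an empty queue

    ping, pong = queue.Queue(), queue.Queue()

    def echo():
        while True:
            m = ping.get()
            if m is None:              # shutdown sentinel
                return
            pong.put(m)

    echoer = threading.Thread(target=echo)
    echoer.start()
    start = time.perf_counter()
    for i in range(messages):
        ping.put(i)                    # send ...
        pong.get()                     # ... and wait for the echo
    elapsed = time.perf_counter() - start
    ping.put(None)
    echoer.join()
    return elapsed / messages
```

Comparing, say, `measure_send_time(0)` against `measure_send_time(10_000)` shows whether per-message time grows with the number of blocked threads on a given runtime; if it does not, the slowdown is more likely in the VM specifics (such as the context-to-stack mapping discussed below) than in scheduling per se.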
> 
> If you're using Cog then one reason performance falls off with the number 
> of processes is context-to-stack mapping; see post 08, "Under Cover 
> Contexts and the Big Frame-Up".  Once there are more processes than stack 
> pages, every process 
> switch faults out a(t least one) frame to a heap context and faults in a heap 
> context to a frame.  You can experiment by changing the number of stack pages 
> (see vmAttributeAt:put:) but you can't have thousands of stack pages; it uses 
> too much C stack memory.  I think the default is 64 pages and Teleplace uses 
> ~ 112.  Each stack page can hold up to approximately 50 activations.
> 
> But to be sure what the cause of the slowdown is one could use my VMProfiler. 
>  Has anyone ported this to Pharo yet?

Not that I know of.
Where is the code? :)

> 
> 
> (My code is on Squeaksource in project Erlang.  But be warned that there is a 
> simulation of the Erlang "universal server" in there too.  To run this code, 
> look for class ErlangRingTest.)
> 
> 
> 
> -- 
> best,
> Eliot
> 

