I just tried to reproduce the hang on 1.1.1 (using a traditional Linux VM) and on a 1.4 image with a Cog VM (also Linux). No problems, but I do have questions that might be important to others trying to reproduce it:
(1) How fast do you do this? (2) Do you inspect the instances, or just let them get GC'd immediately?

Bill

________________________________
From: [email protected] [[email protected]] on behalf of Larry White [[email protected]]
Sent: Thursday, December 01, 2011 12:56 PM
To: [email protected]
Subject: Re: [Pharo-project] VM freezes sending #basicNew to Stream subclass

I can do it with Stream basicNew, but I have to invoke it twice. The first time it works OK.

On Thu, Dec 1, 2011 at 12:48 PM, Stéphane Ducasse <[email protected]<mailto:[email protected]>> wrote:

Gary, can you post the smallest code that makes the system hang?

Stef

On Dec 1, 2011, at 4:48 PM, Larry White wrote:

> Hi,
>
> Throwing this out there because it may be a bug.
>
> I'm running the Seaside one-click install on OS X Lion.
> Pharo 1.3
> Latest update: #13302
>
> I can reliably cause my VM to freeze up and need to Force-Quit it from the OS.
>
> I'm implementing (copying) the probability logic from the Blue Book. When I
> tried to create an instance of the Binomial class, the system hung. I can
> replicate the problem by sending the message #basicNew to
> ProbabilityDistribution. ProbabilityDistribution is a direct subclass of
> Stream, and I haven't overridden or modified #basicNew.
>
> What's happening is that it fails in the BlockClosure [anObject doit], but
> only when I instantiate a member of this particular class hierarchy. In the
> probability classes, a #doIt in a Workspace hits the line "self suspend" in
> the #terminate method of Process, and the VM hangs there.
>
> I believe they had ProbabilityDistribution subclass from Stream because
> sampling from a distribution is like reading from a Stream, but I don't think
> there's any actual shared code, so I switched the superclass of
> ProbabilityDistribution to Object and the code works fine now.
>
> Thanks.
>
> Larry
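[Editor's note: collecting the reproduction steps from the thread above into one Workspace-sized sketch. This only restates what the reporters describe on Pharo 1.3 (update #13302); the class names and the second-send behavior are taken from their messages, and the hang reportedly does not occur on 1.1.1 or 1.4/Cog.]

```smalltalk
"Reported minimal reproduction (evaluate each line as a #doIt in a Workspace):"
Stream basicNew.  "first evaluation reportedly succeeds"
Stream basicNew.  "second evaluation reportedly hangs the VM at 'self suspend'
                   inside Process>>terminate"

"Same symptom via the Blue Book probability classes, where
 ProbabilityDistribution is a direct subclass of Stream:"
ProbabilityDistribution basicNew.

"Workaround described in the thread: reparent the class so it no longer
 subclasses Stream, after which instantiation works fine:"
Object subclass: #ProbabilityDistribution
    instanceVariableNames: ''
    classVariableNames: ''
    package: 'Probability'
```

The `Object subclass: #ProbabilityDistribution ...` form is the standard Pharo class-definition template; the empty variable lists and the package name are placeholders, not the actual definitions from the Blue Book code.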
