Hi Igor,
On Tue, Oct 8, 2013 at 2:27 AM, Igor Stasenko <[email protected]> wrote:

> On 7 October 2013 18:36, Norbert Hartl <[email protected]> wrote:
>>
>> Am 07.10.2013 um 16:36 schrieb Igor Stasenko <[email protected]>:
>>
>> One thing: can you tell me what the given expression yields for your
>> VM/image:
>>
>>   Smalltalk vm maxExternalSemaphores
>>
>> (if it gives you a number less than 10000000 then I think I know what
>> your problem is :)
>>
>> It is 10000000.
>>
>> What would the problem be if it were smaller?
>>
> That just means your VM doesn't have an external object size cap.
> I changed the implementation to not have a hard limit (the arbitrarily
> large number is there just to be "compatible" with the previous
> implementation).

If you've really done this, why haven't you pushed the changes back to me? You think I like the limit?!? ;-)

But is your new implementation lock-free? I went to some lengths to make sure that the Cog implementation is thread-safe, by making signalling lock-free. But making it lock-free while allowing growth was too much work. If your new implementation isn't lock-free and/or isn't thread-safe, then IMO the cure is worse than the disease, because signals can get lost, and that is much harder to diagnose than dealing with a limit that can only be set at startup.

> This means that you can actually change the check in your image,
> completely ignore the limits, and just keep growing as necessary.
>
> Now, since you are using a VM which doesn't have a limit, but the problem
> still persists, it seems like it is somewhere else.. :/
>
> I just found that after one merge my changes got lost. We just plugged
> them back in, and they should be back again with newer VMs.. but the
> problem could be more than just semaphores.. if the merge broke this, it
> may break many other things, so we need time to check.
>
>> I'll try to look at it some more. I'm using the pharo-vm from the
>> launchpad build. Are the changes supposed to be in this one?
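To make the lock-free point above concrete, a fixed-table signalling scheme along these lines can be sketched in C. This is just an illustration, not the VM's actual source; all names are invented. Each external semaphore slot gets a request tally that any thread may bump with a single atomic add, and the VM thread later consumes the tallies at its interrupt check:

```c
#include <stdatomic.h>

#define MAX_EXT_SEMAPHORES 256   /* fixed at startup: this is what makes lock-freedom cheap */

/* One tally pair per external semaphore slot.  Any thread may bump
   `requests` (a single atomic add, no lock, no allocation); only the VM
   thread advances `responses` when it delivers the signal. */
typedef struct {
    atomic_int requests;
    int        responses;
} SemTally;

static SemTally semTable[MAX_EXT_SEMAPHORES];

/* Callable from any thread (FFI callback, socket plugin, ...).  Returns 1
   if the request was recorded, 0 if the index is out of range -- with a
   fixed table the failure is explicit rather than a silent loss. */
int signalSemaphoreWithIndex(int index)
{
    if (index < 0 || index >= MAX_EXT_SEMAPHORES)
        return 0;
    atomic_fetch_add_explicit(&semTable[index].requests, 1, memory_order_release);
    return 1;
}

/* Run by the VM thread at its next interrupt check: consume outstanding
   requests.  In a real VM each consumed request would signal the
   Smalltalk Semaphore registered at that slot. */
int checkSignalledSemaphores(void)
{
    int delivered = 0;
    for (int i = 0; i < MAX_EXT_SEMAPHORES; i++) {
        int req = atomic_load_explicit(&semTable[i].requests, memory_order_acquire);
        while (semTable[i].responses < req) {
            semTable[i].responses++;
            delivered++;
        }
    }
    return delivered;
}
```

Growing the table would mean swapping `semTable` for a bigger one while other threads may be mid-increment, which is exactly where signals can leak; hence the preference for a limit that is fixed at startup.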
>>
>> Norbert

> Launchpad? You mean ppa? I can't say I remember all the details of how
> changes to the VM source get into the ppa distro, and how fast they get
> there. @Damien, can you enlighten us?
>
> Well, the VM which I downloaded recently using the zero-conf script has
> the limit back at 256. Just a merge mistake, which is now fixed.. it
> means that a couple of builds will use the limit-based implementation..
> but then it will be back to my implementation.
>
> On 7 October 2013 12:31, Norbert Hartl <[email protected]> wrote:
>>
>> Am 07.10.2013 um 11:28 schrieb Henrik Johansen <[email protected]>:
>>
>> On Oct 7, 2013, at 11:16 , Norbert Hartl <[email protected]> wrote:
>>
>> As I need an image that runs longer than 24 hours, I'm looking at some
>> stuff and wondering. Can anybody explain to me the rationale for code
>> like this:
>>
>> maxExternalSemaphores: aSize
>> 	"This method should never be called as a result of normal program
>> 	execution. If it is, however, handle it differently:
>> 	- In development, signal an error to prompt the user to set a bigger
>> 	  size at startup immediately.
>> 	- In production, accept the cost of potentially unhandled interrupts,
>> 	  but log the action for later review.
>> 	See the comment in maxExternalObjectsSilently: for why this behaviour
>> 	is desirable."
>> 	"Can't find a place where development/production is decided.
>> 	Suggest Smalltalk image inProduction, but use an overridable temp
>> 	meanwhile."
>> 	| inProduction |
>> 	self maxExternalSemaphores ifNil: [^ 0].
>> 	inProduction := false.
>> 	^ inProduction
>> 		ifTrue: [self maxExternalSemaphoresSilently: aSize.
>> 			self crTrace: 'WARNING: Had to increase size of semaphore signal handling table due to many external objects concurrently in use';
>> 				crTrace: 'You should increase this size at startup using #maxExternalObjectsSilently:';
>> 				crTrace: 'Current table size: ', self maxExternalSemaphores printString]
>> 		ifFalse: ["Smalltalk image"
>> 			self error: 'Not enough space for external objects, set a larger size at startup!']
>>
>> I have reported this once but got no feedback, so I would like to have
>> a few opinions.
>>
>> The report is here: https://pharo.fogbugz.com/f/cases/10839/
>>
>> Norbert
>>
>> The rationale is that inProduction would be some global setting, not
>> yet in place when the code was written…
>> Excessive simultaneous Semaphore usage is something that should be
>> caught during development, in which case it's better to get an active
>> notification than to have it logged somewhere.
>>
>> Agreed. But that didn't work in my case, because it needed roughly 20
>> hours and an unstable remote backend to trigger the problem. And
>> somehow I forgot to install my logger as Transcript, so there was no
>> warning message. I saw only dead images in the morning.
>> This is not satisfactory, but on the other hand this type of problem is
>> hard to solve anyway. My feeling tells me there is more to discover.
>> Socket resources get unregistered at finalization time, but this didn't
>> work either. I would have said that the unlikely situation that no
>> garbage collection ran could be the case. But it can't be, because in
>> ExternalSemaphoreTable>>#freedSlotsIn:ratherThanIncreaseSizeTo: there
>> is an explicit garbage collection.
>> If I've understood correctly, it's moot on newer Pharo VMs, where
>> there's no limit on the semtable size, but for legacy code a startup
>> item setting the size using maxExternalObjectsSilently: (as suggested
>> in the warning text) is still a more proper fix than setting
>> inProduction to true and crossing your fingers, hoping no signals will
>> be lost during table growth.
>>
>> Ah, I didn't know about the risk of losing signals while resizing the
>> table. Thanks for that. Don't get me wrong, I wasn't proposing to set
>> inProduction in effect. I don't think that automatically growing
>> resource management is a proper way to design a system. There is always
>> a range of resources you need for your use case. Not setting an upper
>> bound just covers up leaking behaviour.
>>
>> Norbert

> --
> Best regards,
> Igor Stasenko.

--
best,
Eliot
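P.S. A single-threaded C illustration of how a signal can be lost when the table is grown without synchronization. Again a sketch with invented names, not actual VM code: a signaller that loaded the table pointer just before a grow ends up bumping a counter in the orphaned copy.

```c
#include <stdatomic.h>
#include <stdlib.h>

/* The live table of pending-signal counters.  Signalling threads load this
   pointer, then increment a slot in whatever table they saw. */
static _Atomic(atomic_int *) semTable;
static int semTableSize;

atomic_int *currentSemTable(void) { return atomic_load(&semTable); }

void installSemTable(int size)
{
    atomic_store(&semTable, (atomic_int *)calloc(size, sizeof(atomic_int)));
    semTableSize = size;
}

/* Grow by copy-and-swap, with no coordination with signallers. */
void growSemTable(int newSize)
{
    atomic_int *old   = atomic_load(&semTable);
    atomic_int *fresh = (atomic_int *)calloc(newSize, sizeof(atomic_int));
    for (int i = 0; i < semTableSize; i++)
        atomic_store(&fresh[i], atomic_load(&old[i]));  /* snapshot the tallies */
    atomic_store(&semTable, fresh);  /* the old table is now orphaned... */
    semTableSize = newSize;
    /* ...and an increment still landing in `old` is silently dropped. */
}

/* A signaller's two steps are: load the pointer, then bump a slot.  The
   race window lies between them; here the window is made explicit by
   passing in a possibly stale pointer. */
void signalIndexVia(atomic_int *tableSeenBySignaller, int index)
{
    atomic_fetch_add(&tableSeenBySignaller[index], 1);
}

int pendingSignalsAt(int index)
{
    return atomic_load(&currentSemTable()[index]);
}
```

Replaying the race in one thread: load the pointer, grow, then signal through the stale pointer; the new table sees nothing, i.e. the signal is gone. Avoiding this without locks (e.g. with epoch-style reclamation of the old table) is substantial extra work, which is the trade-off behind preferring a size fixed at startup.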
