Nice to know about Smalltalk session. I didn't know about that.

BTW, I debugged it and it froze my image in BlockClosure>>newProcess

Phil

2013/2/21 Igor Stasenko <[email protected]>:
> On 21 February 2013 12:36, Sven Van Caekenberghe <[email protected]> wrote:
>> Yes, I think you are right: the ambiguity is plain wrong, and even then not 
>> saving and continuing them sounds reasonable.
>>
>> But what about the inverse: you schedule something that should happen every 
>> hour or every day, you save/restart the image and with the new approach 
>> these would all run immediately, right ?
>>
>
> well, but in this case, once you know that all delays are released
> on image restart, you can always check for a session change before doing
> any action..
> for example:
>
> [
>   | session |
>   session := Smalltalk session.
>
>   1 hour asDelay wait.
>
>   "is session changed?"
>   session == Smalltalk session ifFalse: [ "perhaps we should abandon
> the loop here and reload/reinitialize stuff in some higher layers of
> code" ].
>
>   self doSomething.
>
> ] repeat.
>
>
> Why this "perhaps we should abandon the loop here and
> reload/reinitialize stuff in some higher layers of code"?
> Because any resident code (like a forked process with an infinite loop) is
> a common source of nasty problems and unreliable behavior, and is
> usually hard to debug (especially across multiple sessions, or when you
> change the code it is supposed to run)..
>
> And to my thinking, writing proper session-aware code is the way to
> go, instead of relying on ambiguous things.
>
>> So not only would you kill delays that are too short (reasonable), but 
>> also collapse the longer ones.
>>
>> In terms of design: I think the different behaviors should be implemented 
>> with different objects.
>>
>> On 21 Feb 2013, at 12:28, Igor Stasenko <[email protected]> wrote:
>>
>>> Hi.
>>>
>>> There is one thing which is IMO an over-engineering artifact:
>>> - when the system goes down (image shutdown), all currently scheduled
>>> delays are "saved",
>>> and then when the image starts up they are rescheduled again to keep
>>> waiting for whatever time is left on the delay..
>>>
>>> But the problem is that it does not take into account the total time
>>> the image was frozen, and the requirement is quite ambiguous:
>>>
>>> - if you put a process on a delay for 5 minutes, then immediately
>>> save the image, and then restart it 10 minutes (or 1 year) later,
>>> should this delay keep waiting for the 4+x minutes which are left? Or
>>> should we consider this delay utterly expired?
>>> (and as you can see, the answer is different if we count time
>>> using real, physical time, or just image uptime).
>>>
>>> And why count image uptime? Consider use cases like connection
>>> timeouts.. they are all about
>>> real time, right here, right now.. will it matter to get a socket
>>> connection timeout error when you restart some image 1 year later?
>>> Please give me a scenario which illustrates that we cannot live
>>> without it and should count image uptime for delays, because I can't
>>> find one.
>>>
>>> If not, then in my opinion, and to simplify all the logic inside the
>>> delay code, I would go straight and declare the following:
>>> - when a new image session starts, all delays, no matter for how long
>>> they are scheduled to wait, are considered expired (and therefore all
>>> waiting processes
>>> are automatically resumed).
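
A rough sketch of the proposed rule, again in Python for brevity (the registry class and its names are assumptions for illustration, not Pharo's actual delay scheduler): on session start, every pending waiter is released, no matter how much time remains.

```python
import threading
import time

class SessionDelayRegistry:
    """Illustrative-only sketch: on a new session, every pending delay
    is considered expired and its waiting process is resumed."""
    def __init__(self):
        self._events = []
        self._lock = threading.Lock()

    def wait(self, seconds):
        ev = threading.Event()
        with self._lock:
            self._events.append(ev)
        # Resumed either by the timeout elapsing or by a session restart.
        ev.wait(timeout=seconds)
        with self._lock:
            if ev in self._events:
                self._events.remove(ev)

    def session_started(self):
        # Image (re)start: release everything, no matter how long remains.
        with self._lock:
            pending, self._events = self._events, []
        for ev in pending:
            ev.set()

registry = SessionDelayRegistry()
t = threading.Thread(target=registry.wait, args=(3600,))  # a "1 hour" wait
t.start()
time.sleep(0.2)              # give the waiter a moment to register
registry.session_started()   # simulate an image restart
t.join(timeout=1)
print(t.is_alive())          # False: the waiter was resumed immediately
```

The forked process then re-checks `Smalltalk session` on wake-up, as in the loop earlier in this thread, and decides whether to continue or reinitialize.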
>>>
>>> Because, as I tried to demonstrate, the meaning of a delay which
>>> spans multiple image sessions is really fuzzy, and I would be really
>>> surprised to find code
>>> which relies on such behavior.
>>>
>>> This change can also help with terminating all processes
>>> which were put on a wait for too long (6304550344559763 milliseconds)
>>> by mistake or such.
>>>
>>>
>>> --
>>> Best regards,
>>> Igor Stasenko.
>>>
>>
>>
>
>
>
> --
> Best regards,
> Igor Stasenko.
>
