Yes, I think you are right: the ambiguity is plainly wrong, and in that light, not saving and continuing them sounds reasonable.
But what about the inverse: you schedule something that should happen every hour or every day, you save/restart the image, and with the new approach these would all run immediately, right? So not only would you kill delays that are too short (reasonable), you would also collapse the longer ones. In terms of design, I think the different behaviors should be implemented with different objects (a rough sketch follows below the quoted message).

On 21 Feb 2013, at 12:28, Igor Stasenko <[email protected]> wrote:

> Hi.
>
> There is one thing which is IMO an over-engineering artifact:
> - when the system goes down (image shutdown), all currently scheduled
> delays are "saved", and when the image starts up again they are
> rescheduled to keep waiting for whatever time is left.
>
> But the problem is that this does not take into account the total time
> the image was frozen, and the requirement is quite ambiguous:
>
> - if you put a process on a delay for 5 minutes, then immediately
> save the image, and then restart it 10 minutes (or 1 year) later,
> should this delay keep waiting for the time that is left, or should
> we consider it utterly expired?
> (And as you can see, the answer differs depending on whether we count
> time using real, physical time or just image uptime.)
>
> And why count image uptime at all? Consider use cases like connection
> timeouts: they are all about real time, right here, right now. Will it
> matter to get a socket connection timeout error when you restart some
> image a year later? Please give me a scenario which illustrates that
> we cannot live without counting image uptime for delays, because I
> can't find one.
>
> If not, then in my opinion, and to simplify all the logic inside the
> delay code, I would go straight ahead and declare the following:
> - when a new image session starts, all delays, no matter how long they
> are scheduled to wait, are considered expired (and therefore all
> waiting processes are automatically resumed).
>
> Because, as I tried to demonstrate, the meaning of a delay which spans
> multiple image sessions is really fuzzy, and I would be really
> surprised to find code which relies on such behavior.
>
> This change can also be helpful for terminating processes which were
> put on a wait that is far too long (6304550344559763 milliseconds) by
> mistake or the like.
>
>
> --
> Best regards,
> Igor Stasenko.
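
To make the "different objects" idea a bit more concrete, here is a rough sketch. Apart from Delay forSeconds:/wait and the DateAndTime/Duration arithmetic, everything is made up for illustration: WallClockDelay, ScheduledDelays, signalWaitingProcess, rescheduleFor:, setDeadline: and onStartUp are hypothetical names, not the current Delay code.

	"Plain Delay: relative to the running session. On a new session every
	outstanding delay is simply signalled, i.e. considered expired, which
	is Igor's proposed rule."
	Delay class >> startUp: resuming
		resuming ifTrue: [
			ScheduledDelays do: [ :each | each signalWaitingProcess ].
			ScheduledDelays := OrderedCollection new ]

	"WallClockDelay: anchored to an absolute point in real time. On a new
	session it keeps waiting for whatever real time is left, or expires at
	once if the deadline has already passed. This would be the object for
	the 'every hour / every day' case."
	WallClockDelay class >> until: aDateAndTime
		^ self new setDeadline: aDateAndTime

	WallClockDelay >> onStartUp
		"Hypothetical hook, run for each outstanding instance when a new
		session begins."
		| remaining |
		remaining := deadline - DateAndTime now.
		remaining > Duration zero
			ifTrue: [ self rescheduleFor: remaining ]
			ifFalse: [ self signalWaitingProcess ]

	"Usage:"
	(Delay forSeconds: 5) wait.                              "session-relative"
	(WallClockDelay until: DateAndTime now + 1 hour) wait.   "real, wall-clock time"

With that split the scheduler never has to guess what the user meant: a Delay never survives a session, and a WallClockDelay is always measured against real time.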
