I guess your style is more subtle than mine.

I think I would just record everything whether or not it changed. 
Hmmm... if we store 1000 bytes at each step for a million-line (executed) 
program, that's "only" a gig. Small potatoes by today's standards.

If you use fixed-length records, you can even access them randomly. Add 
some indexing, and you can go directly to a line number.
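
Something like this is all I have in mind -- just a sketch; StepRecord, 
CheckpointLog and the snapshot layout are names I'm making up here, not 
anything that exists in the tree:

    #include <cstdio>

    // fixed size, so record N lives at byte offset N * sizeof(StepRecord)
    struct StepRecord {
        long   line_number;     // executed NGC line
        double state[120];      // flattened interp state snapshot (illustrative)
    };

    class CheckpointLog {
        FILE *f;
    public:
        explicit CheckpointLog(const char *path) : f(std::fopen(path, "w+b")) {}
        ~CheckpointLog() { if (f) std::fclose(f); }

        void append(const StepRecord &r) {
            std::fseek(f, 0, SEEK_END);
            std::fwrite(&r, sizeof r, 1, f);
        }
        // random access: seek straight to step N, no scanning required
        bool read(long step, StepRecord &r) {
            std::fseek(f, step * (long)sizeof r, SEEK_SET);
            return std::fread(&r, sizeof r, 1, f) == 1;
        }
    };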

Ken

On 4/12/2012 3:05 PM, Michael Haberler wrote:
> Ken,
>
> ...
>>> Let's look at 1): Interp state as per _setup currently has some 140+ 
>>> variables, including maps, sets and arrays. Plus, there's some implicit 
>>> state in canon static variables, the CRC canon queue, and probably some 
>>> other places I haven't thought of. Keeping state differences for all of 
>>> these so we can roll them back checkpoint by checkpoint is conceivable, but 
>>> I think it will be very easy to miss an aspect and create a mess. It would 
>>> be cool from a UI perspective, I admit.
>> I don't think the situation is as grim as you describe it. The reason is
>> that once we develop a general mechanism to do this, adding state
>> variables should be pretty easy. That means we could fix any mess we create.
> I was thinking 'if after every step (whatever unit that may be) I need to 
> sift through _setup and try to figure out what changed, I'll be retired 
> before I have any tangible results'.
>
> I've rethought this a bit. Since I am working on and off on refactoring 
> interpreter state into separate classes (modal state, execution state, world 
> model, config), it would be a relatively small step to modify the current 
> approach of directly setting/getting variables from _setup to go through 
> getter/setter methods of the relevant state class. Once you have state 
> access/modification isolated into getters and setters, it is relatively 
> straightforward to tack on change recording within that class without 
> changing the external interface.
>
> Example:
>
> current code:
>
>      _setup.sequence_number = foo;
>      bar = _setup.sequence_number;
>
> Now assume all interpreter execution state (line number, call level, etc.) 
> is factored out into a class ExecState.
> The above then becomes:
>
>      exec_state.set_sequence_number(foo);
>      bar = exec_state.get_sequence_number();
>
> where exec_state is a reference to an ExecState instance.
>
> Once you have the set_<something>() methods, you can track changes within 
> the class without any externally visible changes.
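>
> Roughly like this -- only a sketch; ChangeRecord and the journal member are 
> invented here for illustration, not anything in the current interp:
>
>      #include <vector>
>
>      struct ChangeRecord { const char *field; long old_value, new_value; };
>
>      class ExecState {
>          long sequence_number;
>          std::vector<ChangeRecord> journal;   // per-checkpoint change log
>      public:
>          ExecState() : sequence_number(0) {}
>
>          long get_sequence_number() const { return sequence_number; }
>          void set_sequence_number(long v) {
>              if (v != sequence_number) {
>                  ChangeRecord c = { "sequence_number", sequence_number, v };
>                  journal.push_back(c);        // record the delta, then apply
>              }
>              sequence_number = v;
>          }
>      };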
>
> (And the compiler helps a lot with the mods: start with all members public, 
> like _setup.sequence_number. Then add a getter/setter and make the 
> sequence_number member variable private. Bang, you get all the references to 
> sequence_number which need to be changed into getter/setter calls. It's even 
> conceivable that some of this becomes mechanically translatable.) Also, 
> there wouldn't be any performance penalty if one didn't use checkpointing - 
> it could be a macro or inlined in that case. So the refactoring wouldn't be 
> a drag factor per se.
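>
> For instance (INTERP_CHECKPOINTING and record_change() are made-up names):
>
>      // build-time switch: with checkpointing disabled the setters collapse
>      // to plain inline assignments, so there is no runtime cost
>      #ifdef INTERP_CHECKPOINTING
>      #define RECORD_CHANGE(field, oldv, newv) record_change(field, oldv, newv)
>      #else
>      #define RECORD_CHANGE(field, oldv, newv) ((void)0)
>      #endif
>
>      // each set_<something>() body then just becomes:
>      //     RECORD_CHANGE("sequence_number", sequence_number, v);
>      //     sequence_number = v;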
>
> Maybe there's even some C++ stunt with overloading the assignment operator 
> which does it without changes to the source. Where's Jeff when we need him ;)
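>
> Something along these lines, perhaps -- just a sketch, with Tracked<> and 
> note_change() being invented names:
>
>      void note_change(const char *field);  // appends to the journal; defined elsewhere
>
>      // Wraps a member so that existing code like
>      //     _setup.sequence_number = foo;
>      // keeps compiling unchanged, while every write is recorded as a side effect.
>      template <typename T>
>      class Tracked {
>          T value;
>          const char *name;
>      public:
>          Tracked(const char *n, const T &v = T()) : value(v), name(n) {}
>          Tracked &operator=(const T &v) {
>              if (!(v == value)) note_change(name);
>              value = v;
>              return *this;
>          }
>          operator const T &() const { return value; }  // reads stay as before
>      };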
>
> That said, I still think this 'let's record from the inside' approach is a 
> heavyweight project.
>
>
>>> Now 2): if it were just a linear program with no queuebusters, it would be 
>>> easy - run-to-line and stop, because execution is completely predictable.
>>>
>>> Now, queuebusters like reading HAL pins, probing and tool changes make 
>>> execution unpredictable. However, it occurs to me that recording the 
>>> state of 'past queuebusters' is entirely feasible, because that affects 
>>> only a very limited aspect of the state (the HAL pin vector and timeouts, 
>>> tool geometry, offsets, the probe trip point). If that is a valid 
>>> assumption, then rolling forward would become possible even with 
>>> queuebusters, at relatively low cost. The queuebuster operations would 
>>> actually be interesting optional breakpoints, for instance to correct tool 
>>> geometry or other past decisions and continue with a CRC move (not 
>>> entirely sure that's possible, but it would be nice to have).
>>>
>>> Let's call these queuebuster responses 'sync records' for now, so they have 
>>> a name. Technically they contain:
>>>
>>> - probe: trip flag and trip position
>>> - HAL pin read: the vector of pin values, and timeouts or in-time
>>> - tool change: the geometry of the tool post-change.
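>>>
>>> As a data structure that could be as small as something like this (only a 
>>> sketch; the field names and array sizes are placeholders):
>>>
>>>      struct SyncRecord {
>>>          enum Kind { PROBE, HAL_READ, TOOL_CHANGE } kind;
>>>          union {
>>>              struct { int tripped; double trip_pos[9]; } probe;
>>>              struct { double pins[64]; int timed_out; } hal_read;
>>>              struct { double offsets[9]; double diameter; } tool;
>>>          } u;
>>>      };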
>>>
>>> I think the fundamental question really is:
>>>
>>> Is it a safe assumption given:
>>> - any conceivable NGC program
>>> - a constant starting state
>>> - a constant sequence of 'sync records'
>>>
>>> that the end state of the program must be identical?
>>>
>>> If the answer to that is yes, the roll-forward approach looks feasible at 
>>> relatively little cost compared to 1).
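>>>
>>> Stated as code, the assumption is just this invariant (replay(), Program 
>>> and InterpState are placeholders here):
>>>
>>>      struct Program;         // a parsed NGC program (placeholder)
>>>      struct InterpState;     // the complete interpreter state (placeholder)
>>>      struct SyncRecord;      // as sketched above
>>>
>>>      // Replaying the same program from the same start state, answering
>>>      // every queuebuster from the recorded sync records instead of the
>>>      // real machine, must always produce the same end state.
>>>      InterpState replay(const Program &program, const InterpState &start,
>>>                         const SyncRecord *sync_log, int n_sync);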
>>>
>>> Let me clarify why the answer to the above question isn't "yes, of course": 
>>> it could very well be that parameters like feed override, which may 
>>> eventually affect the path of the machine but are NOT recorded in the 
>>> interpreter, have an impact on the values of the sync records. I haven't 
>>> fully thought this through; my gut feeling is the answer is "yes", but I'm 
>>> not sure.
>> That's my gut feeling, too. But if we are wrong, we are back to the same
>> sort of mess we would have with approach one. The big thing going for
>> this approach is that there is much less data to record. But quantity of
>> data should not be an issue with modern computers and disk drives.
>>> ---
>>>
>>> I'm completely leaving out the aspect of checkpointing and restarting 
>>> the motion and TP queues here, which is a nontrivial issue in itself. Let's 
>>> think this through piecemeal.
>>>
>>> I'd be interested to hear any thoughts on this.
>> To me, the question is: how would the user describe what he wants to do?
>> I think the answer is that he "wants to go back to (for example) just
>> before the tool broke". I don't think a user would describe what he
>> wants as "skip the parts that worked up until the tool broke and continue
>> from there".
> I'm not sure if that particular task can easily be modelled as a set of 
> requirements.
>
> But it might help to consider what 'checkpoint units' (for lack of a better 
> term) could be. For going back and forth in execution we have, in theory 
> and in practice:
>
> 1. on the Canon/interp_list queue
> 2. pretty much the same on the motion queue, although location tracking is 
> currently limited to moves (arc, line, tap, probe)
> 3. in the trajectory planner: each position sample along primitives in 2)
> 4. within the interpreter, checkpoints could actually be arbitrary units - 
> whatever an expression can evaluate to could define a checkpoint unit (or 
> 'step' for that matter).
>
> Now, 4) is readahead time, which is why I currently have the watchpoint 
> results travel down the interp_list and maybe the motion queue, so they can 
> be acted upon when chips are made.
>
> RFL is currently using 1). Motion stepping uses 2). Feedhold (I think) works 
> at the 3) level. Watchpoints are a 4) affair. Interpreter line stepping 
> ('execute next line') is 4), too.
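>
> As a rough data model (names invented here), a checkpoint reference could 
> then simply be a (level, index) pair:
>
>      struct Checkpoint {
>          enum Level {
>              CANON_QUEUE,     // 1) interp_list / canon item
>              MOTION_QUEUE,    // 2) motion segment (arc, line, tap, probe)
>              TP_SAMPLE,       // 3) position sample within a segment
>              INTERP_STEP      // 4) arbitrary interpreter step / expression
>          } level;
>          long index;          // position within that level's sequence
>      };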
>
> "Now she wants to go back to (for example) just before the tool broke": for 
> "going back", we need to employ one or several of the above mechanisms in 
> combination.
>
> 1), 2) and 4) are used here and there, but rarely in combination. I don't 
> think 3) has been touched or even considered yet. It would be a very 
> versatile approach (and a tough one to do, I guess).
>
> regards,
>
> Michael
