On 1/27/2009 4:47 PM, P T Withington wrote:
On 2009-01-27, at 09:46 EST, André Bargull wrote:


On 1/27/2009 1:03 AM, P T Withington wrote:
On 2009-01-26, at 18:24 EST, André Bargull wrote:
Oh wait, you just said nothing in the LFC depends on the current way things work. I really believed you!
But wait, I said:
"where a child node _needs_ to access its parent when the parent is in the process of being deleted" -- so, there may be cases where it _does_ access the parent, but does it _need_ to? I don't think it does. So it is just a matter of fixing those cases to be careful.

How do you plan to find those cases?

One way would be to set parent to null and then fix the bugs it reveals.

It'd be great to have JS2 getters in this case: you could generate a debugger warning in one release, giving people the chance to update their sources, and then apply the real proposed change in the next release. That way user applications continue to work.
Like:
---
private var _parent:LzNode;
public function get parent ():LzNode {
  if ($debug) {
    // return the parent, but warn when it has already been destroyed
    if (this._parent && this._parent.__LZdeleted) {
      Debug.warn("accessing the parent of a node after the parent has been destroyed");
    }
  }
  return this._parent;
}
---



For example, what is going to happen for constraints? The example [2] isn't a real-life example (it makes no sense to do anything like that), but through replication [1] and other means you can end up destroying the parent node, and then it's really difficult to track down the source of the resulting runtime errors.

This is exactly what led me down this path. I was looking at replication in nav.lzx and realized that it was performing hundreds of useless operations: as each child was destroyed, it told the parent to update its layout, which caused the parent to adjust the position of every remaining child (even though those children would themselves be destroyed soon).


Adding the same if-condition to "__LZresolveReferences" as in "__LZapplyArgs" ('bail if deleted') would presumably help reduce the useless work done when applying constraints to destroyed nodes.
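A minimal sketch of such a guard, assuming a `__LZdeleted` flag that `destroy` sets (only `__LZdeleted` follows the actual LFC naming; the function shape and return values here are illustrative, not the real method):

```javascript
// Hypothetical 'bail if deleted' guard, modeled on the check in
// __LZapplyArgs. Returns false without doing any work for a node
// that has already been destroyed.
function __LZresolveReferences (node) {
  // A destroyed node must not have constraints applied; doing so
  // wastes work (or crashes) on its dangling references.
  if (node.__LZdeleted) return false;
  // ... resolve $once/$always constraints against node here ...
  return true;
}

// Example nodes carrying only the deletion flag.
var live = { __LZdeleted: false };
var dead = { __LZdeleted: true };
```

The point is simply that every entry path into constraint resolution checks the flag first, so destroyed nodes are skipped instead of half-processed.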


Doesn't this just indicate a bug in the interaction of replication and the focus manager? My take is that you are updating your dataset, so the replicator destroys and re-allocates nodes to represent the updated dataset, but the focusmanager is still hanging on to the destroyed node.

When the new data node is added, replication simply runs again. It's more about pooling; see below.


Either the focus manager needs to store its state in terms of "dom position" (i.e., a path to the node that has focus, rather than a reference to the actual node), or the replication manager has to cooperate with the focus manager to tell it when it has replaced the node that has focus.
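The "dom position" idea could be sketched roughly like this (the helpers and the `parent`/`children` node shape are hypothetical, not LFC API; this only illustrates storing a path instead of a node reference):

```javascript
// Record focus as a path of child indices from the root instead of a
// direct node reference, so the stored state survives node replacement.
function pathTo (node) {
  var path = [];
  while (node.parent) {
    path.unshift(node.parent.children.indexOf(node));
    node = node.parent;
  }
  return path;
}

// Re-resolve a stored path against the (possibly re-replicated) tree.
// Returns null when the position no longer exists.
function nodeAt (root, path) {
  var node = root;
  for (var i = 0; i < path.length; i++) {
    node = node.children[path[i]];
    if (!node) return null;
  }
  return node;
}
```

After replication replaces a node, re-resolving the same path yields the replacement node at that position rather than a dangling reference to the destroyed one.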

Why does it work for lazy replication? Because the lazy replicator is much more careful not to destroy a node that maps to a data element that is still in the range when it is updated -- it leaves those nodes in place. This would be one way to fix the bug for normal replication, and it would presumably make it more efficient too!

Almost right: in normal replication (without pooling!), the initial node is destroyed. But in lazy replication, pooling is automatically enabled, so the initial node is not destroyed and therefore no runtime exception is generated. And the bug is not caused by the replication manager; it's a bug in LzView. In its destroy method, the view must check whether it currently has focus and, if so, clear it. (I just used that replication example because I know how replication works, so it's easy for me to force runtime exceptions under certain conditions.)
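The proposed LzView fix might look roughly like this (a sketch; `FocusManager` and its methods stand in for whatever the real focus manager exposes, and only `__LZdeleted` follows actual LFC naming):

```javascript
// Hypothetical focus manager holding a single focused-node slot.
var FocusManager = {
  focused: null,
  getFocus: function () { return this.focused; },
  setFocus: function (node) { this.focused = node; },
  clearFocus: function () { this.focused = null; }
};

// Sketch of the proposed fix in the view's destroy path: before
// tearing the view down, clear focus if this view currently holds it,
// so the focus manager never retains a reference to a destroyed view.
function destroyView (view) {
  if (FocusManager.getFocus() === view) {
    FocusManager.clearFocus();
  }
  view.__LZdeleted = true;
  // ... rest of the normal destroy work would follow here ...
}
```

Destroying a view that does not hold focus leaves the focus manager untouched, so the check is cheap in the common case.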
