On 2009-01-27, at 09:46EST, André Bargull wrote:


On 1/27/2009 1:03 AM, P T Withington wrote:
On 2009-01-26, at 18:24EST, André Bargull wrote:
Oh wait, you just said nothing in the LFC depends on the current way things work. I really believed in you,
But wait, I said:
"where a child node _needs_ to access its parent when the parent is in the process of being deleted" so, there may be cases where it _does_ access the parent, but does it _need_ to? I don't think it does. So it is just a matter of fixing those cases to be careful.

How do you plan to find those cases?

One way would be to set parent to null and then fix the bugs it reveals.

Whenever (immediate)parent is referenced, you need to add an extra if-condition to protect against null-pointer dereferencing.

The alternative is to make __LZdeleted public and say:

"Everywhere you would have had to check for parent being null, instead check to see if __LZdeleted is true".

It works out to the same amount of work, but in the latter case, bugs go undetected.
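To make the trade-off concrete, here is a minimal sketch in plain JavaScript (a hypothetical `Node` class, not the actual LFC classes): nulling the back-pointer makes forgotten cases fail fast, whereas a public flag only helps callers who remember to check it.

```javascript
// Hypothetical minimal sketch (NOT the real LFC API) contrasting the two
// strategies for detecting a destroyed parent.
class Node {
  constructor(parent) {
    this.parent = parent;      // immediate parent, or null at the root
    this.__LZdeleted = false;  // set once destroy() has run
  }
  destroy() {
    this.__LZdeleted = true;
    // Strategy 1: null the back-pointer, so any code that still
    // dereferences `parent` throws immediately and the bug is revealed.
    this.parent = null;
  }
  // Strategy 2: leave `parent` intact and rely on every caller
  // remembering to check the flag; a forgotten check goes undetected.
  parentIsGone() {
    return this.parent === null || this.parent.__LZdeleted;
  }
}

const p = new Node(null);
const c = new Node(p);
p.destroy();
console.log(c.parentIsGone()); // true: either check catches it
```

With strategy 1, the places that still reach for a destroyed parent announce themselves as null-dereference errors; with strategy 2, a missed check silently operates on a dead node.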

For example, what is going to happen for constraints? The example [2] isn't a real-life example (it makes no sense to do anything like that), but through replication [1] and other means, you can end up destroying the parent node, and then it's really difficult to find the source of possible runtime errors.

This is exactly what led me down this path. I was looking at replication in nav.lzx and realized that it was making hundreds of useless operations because as each child was destroyed, it told the parent to update its layout, which caused the parent to adjust the position of each child (even though that child would be destroyed soon).
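A toy sketch of that wasted work (hypothetical names, not the actual nav.lzx code): destroying N children one at a time triggers N layout passes over the survivors, even though only the final layout matters.

```javascript
// Hypothetical illustration of per-deletion layout churn during bulk
// teardown -- plain objects standing in for LFC views.
let layoutRuns = 0;
const container = {
  children: [],
  updateLayout() { layoutRuns++; /* repositions every remaining child */ },
};
function destroyChildNaive(child) {
  container.children.splice(container.children.indexOf(child), 1);
  container.updateLayout(); // runs once per destroyed child
}

for (let i = 0; i < 100; i++) container.children.push({ id: i });
while (container.children.length) destroyChildNaive(container.children[0]);
console.log(layoutRuns); // 100 layout passes for one logical teardown
```

If a destroyed child could no longer reach its parent, it could not schedule these useless layout updates in the first place.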

Both of your examples remind me of the chainsaw user who cuts the branch he is standing on and then complains that the chainsaw did not protect him from a horrible accident. Are there really nodes that believe they can delete their own parent (knowing that will result in their own deletion)?

For example, a one-way message dialog: if you close the dialog, it should be destroyed too. So you add a button whose onclick handler calls destroy on the parent node.

And I can also make the examples less obvious [1]:
- when replication is set to lazy, everything works
- but when replication is "normal", the app breaks and produces the same error as in my earlier example

Doesn't this just indicate a bug in the interaction of replication and the focus manager? My take is that you are updating your dataset, so the replicator destroys and re-allocates nodes to represent the updated dataset, but the focus manager is still hanging on to the destroyed node.

In the long run, the focus manager will leak nodes... which is a bug, to me.

Either the focus manager needs to store its state in terms of "dom position" (i.e., a path to the node that has focus, rather than a reference to the actual node), or the replication manager has to cooperate with the focus manager to tell it when it has replaced the node that has focus.
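The first option ("dom position") can be sketched like this, in plain JavaScript with hypothetical `pathTo`/`resolve` helpers and plain objects standing in for LFC views (this is not actual focus-manager code): store focus as a path of child indices from the root, so a node replaced at the same position still resolves.

```javascript
// Hedged sketch: record focus as a path of child indices rather than a
// node reference, so replication can swap the node out from under us.
function pathTo(node) {
  const path = [];
  for (let n = node; n.parent; n = n.parent) {
    path.unshift(n.parent.children.indexOf(n));
  }
  return path;
}
function resolve(root, path) {
  let n = root;
  for (const i of path) {
    if (!n.children[i]) return null; // that position no longer exists
    n = n.children[i];
  }
  return n;
}

// Usage: record the path, let a "replicator" replace the node, resolve again.
const root = { parent: null, children: [] };
const a = { parent: root, children: [], name: 'a' };
root.children.push(a);
const focusPath = pathTo(a); // [0]
root.children[0] = { parent: root, children: [], name: 'a2' }; // replaced
console.log(resolve(root, focusPath).name); // 'a2' -- focus survives
```

Holding a path instead of a reference also means the focus manager never pins a destroyed node in memory, which addresses the leak above.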

Why does it work for lazy replication? Because the lazy replicator is much more careful not to destroy a node that maps to a data element that is still in the range when it is updated -- it leaves those nodes in place. This would be one way to fix the bug for normal replication, and it would presumably make it more efficient too! Change your example to show the node IDs and you will see what I mean:

<canvas debug="true" >
  <dataset name="ds" ><item /></dataset>
  <view width="100%" height="100%" layout="axis:y" >
    <view>
      <datapath xpath="ds:/item" replication="lazy" />
      <view width="180" height="40" bgcolor="#eaeaea" clickable="true" focusable="true" >
        <text text="${this.__LZUID + (this.__LZdeleted ? ' (deleted)' : '')}" />
        <handler name="onclick" >
          canvas.ds.appendChild(new lz.DataElement("item"));
        </handler>
      </view>
    </view>
  </view>
</canvas>
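The keep-nodes-in-place idea can be sketched like this (my reading of the lazy-replication behavior, with hypothetical names; this is not the actual replicator code): reuse existing nodes for data items still in range, and only create or destroy the difference.

```javascript
// Hypothetical diffing update: existing replicas are rebound in place;
// only the surplus is destroyed and only the missing tail is created.
function updateReplicas(replicas, data, makeNode, destroyNode) {
  const keep = Math.min(replicas.length, data.length);
  // Reuse nodes in place for items that still exist in the range.
  for (let i = 0; i < keep; i++) replicas[i].data = data[i];
  // Destroy only the surplus nodes...
  for (let i = data.length; i < replicas.length; i++) destroyNode(replicas[i]);
  replicas.length = keep;
  // ...and create only the missing tail.
  for (let i = keep; i < data.length; i++) replicas.push(makeNode(data[i]));
  return replicas;
}

// Usage: growing the dataset creates one node and reuses the rest.
let created = 0, destroyed = 0;
const make = (d) => { created++; return { data: d }; };
const kill = () => { destroyed++; };
let nodes = updateReplicas([], ['a'], make, kill);       // creates 1
nodes = updateReplicas(nodes, ['a', 'b'], make, kill);   // reuses 1, creates 1
console.log(created, destroyed); // 2 0
```

Because surviving nodes are never destroyed, stale references held elsewhere (e.g., by a focus manager) keep pointing at live nodes, and the destroy/recreate churn disappears.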

In a perfect world, we would not need to destroy nodes, because we have a garbage-collected runtime, but we have to at least know how to unhook a node from the node hierarchy so that the garbage collector can collect it. This is what destroy is all about. The change I propose will help us to find the places in the runtime that are hanging on to nodes that we intended to garbage collect (destroyed), and help reduce leaks.

