> In the meantime a new version of the spec arrived...

@Araq, Thanks for the heads-up on the new version of the spec! I like it very 
much and look forward to being able to test the implementation.

Meanwhile, I wonder if the following considerations have some merit:

1\. I see that your ref is reference counted rather than GC'ed, and that owned 
ref is a kind of "sink" optimization for ref; but it looks like we also need 
shared (or shared ref) as a thread-shared version of the RC'ed ref, where owned 
shared ref would relate to shared ref just as the owned modifier relates to ref.
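Something like the following minimal sketch shows what I have in mind (the 
names SharedCell/SharedRc/newSharedRc are invented for illustration, not from 
the spec): the only essential difference from the non-shared ref is that the 
count is manipulated atomically and the cell lives on the shared heap.

```nim
# Invented illustration only - SharedCell/SharedRc/newSharedRc are not spec
# names. Compile with --threads:on; destruction assumes ARC-style hooks.
import std/atomics

type
  SharedCell[T] = object
    rc: Atomic[int]       # atomic count, unlike the plain int a local ref needs
    value: T
  SharedRc[T] = object
    cell: ptr SharedCell[T]

proc newSharedRc[T](v: T): SharedRc[T] =
  result.cell = createShared(SharedCell[T])  # allocate on the shared heap
  result.cell.rc.store(1)
  result.cell.value = v

proc `=destroy`[T](s: var SharedRc[T]) =
  if s.cell != nil:
    if s.cell.rc.fetchSub(1) == 1:  # we held the last reference
      reset(s.cell.value)           # run the payload's destructor
      deallocShared(s.cell)
    s.cell = nil
```

A copy hook would atomically fetchAdd the count; I omit it here for brevity.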

2\. I see that your current proposal makes ref a doubly indirected pointer: a 
ptr to an rcdpointer, where rcdpointer is implemented as an object. I wonder if 
it would be simpler to work directly with the rcdpointer object as the ref and 
leave the double indirection up to the use case? As well as saving some cycles, 
this has the advantage that ref/shared ref would use all the rewrite rules from 
the earlier sections of the spec, merely applying the special "hooks" as 
applicable to their use as pointers.

3\. It would then seem that we also need a new shared seq as well as a new 
non-shared seq, where the two implementations could just be distinct types of 
each other, so that one could be cast to the other when fully atomic access to 
the shared contents isn't required (as when exterior barriers already prevent 
concurrent access).
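A tiny sketch of what I mean by distinct-but-castable (SharedSeq is an invented 
name; a plain seq stands in for the shared layout here, the point being only 
the distinct-type conversion):

```nim
# Invented illustration - SharedSeq is not a spec name. A distinct type and
# its base share one layout, so conversion between them is allowed.
type
  SharedSeq[T] = distinct seq[T]   # imagined thread-shared variant

var numbers = SharedSeq[int](@[1, 2, 3])
# When exterior barriers already prevent concurrent access, view it as the
# non-atomic type:
let localView = seq[int](numbers)
doAssert localView == @[1, 2, 3]
```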

4\. It seems to me that we need a weak reference type so we can do such things 
as point into the interior of ref and shared ref types without causing 
destruction when moved/copied and without introducing data races. This would 
not be a primitive pointer but rather a variation of ref/shared ref (and the 
owned qualifications of those types) that is marked to make destruction a 
no-op.
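A rough shape of the idea (Weak and its fields are invented names; in today's 
Nim a raw ptr field would not be destroyed anyway, so this only illustrates the 
marked-no-op destruction):

```nim
# Invented illustration - Weak/weak/target are not spec names. The destroy
# hook is deliberately a no-op, so moving/copying or dropping a Weak never
# destroys the object it views.
type
  Node = object
    data: int
  Weak[T] = object
    target: ptr T          # non-owning view into a ref/shared ref payload

proc `=destroy`[T](w: var Weak[T]) =
  discard                  # marked no-op: a weak view owns nothing

proc weak[T](p: ptr T): Weak[T] =
  Weak[T](target: p)

var n = Node(data: 7)
let w = weak(addr n)
doAssert w.target.data == 7   # reads through the view; n is untouched
```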

5\. As you say in the spec, whether destruction is done on a (conceptual) 
scoped-block basis or a proc basis should work either way; my only concern is 
how the destructors are stacked at the end of either method, so that we avoid 
current Swift's problem of stack overflow - in other words, do nested 
try/finally pairs build stack? If the following works, I'm good, as the 
equivalent doesn't work in current Swift:
    
    
    type
      Thunk[T] = proc(): T {.closure.}
      CIS[T] = object  # co-inductive stream
        head: T
        tail: Thunk[CIS[T]]

    proc makeCIS[T](hd: T; clsr: Thunk[CIS[T]]): CIS[T] =
      CIS[T](head: hd, tail: clsr)

    iterator consumeCIS[T](cis: var CIS[T]): T =
      while true:
        yield cis.head
        cis = cis.tail()

    proc numbersFrom(x: int): CIS[int] =
      makeCIS(x, proc(): CIS[int] = numbersFrom(x + 1))

    var cntr = numbersFrom(0)  # starting point chosen for illustration
    for n in consumeCIS(cntr):
      if n > 1000000: break
    
    

6\. I am considering the last limitation you express, namely "Objects that 
contain pointers that point to the same object are not supported by Nim's 
model. Objects can be swapped and end up in an inconsistent state." It seems to 
me this comes about because of the current patterns for =destroy, =move, and =, 
which I reproduce here:
    
    
    proc `=destroy`(x: var T) =
      if x.field != nil:
        dealloc(x.field)
        x.field = nil
    
    proc `=move`(dest, source: var T) =
      # protect against self-assignments:
      if dest.field != source.field:
        `=destroy`(dest)
        dest.field = source.field
        source.field = nil
    
    proc `=`(dest: var T; source: T) =
      # protect against self-assignments:
      if dest.field != source.field:
        `=destroy`(dest)
        dest.field = copy source.field
    
    

For these, your =destroy seems to assume that it is destroying fields that are 
pointers to types rather than the types themselves (hence the nil checks), and 
since it calls dealloc directly to free those fields, any destroyable types 
such primitive pointers point to would not be properly destroyed in turn. For 
your consideration: what if the pattern for =destroy handled both fields that 
are ptr SomeType and fields that are plain SomeObject, calling destroy on each 
as appropriate (custom overrides could even take care of cases where a given 
field is a primitive pointer or whatever)? The patterns for = and =move would 
then change to match, something like the following:
    
    
    proc `=destroy`(x: var T) =
      # for a field that is an object, just call `=destroy` and the sub
      # `=destroy`'s will handle it...
      `=destroy`(x.fieldobj)
      # for the case where a field is a "ptr obj"
      if x.fieldptrobj != nil:
        `=destroy`(x.fieldptrobj[])
        dealloc(x.fieldptrobj)
        x.fieldptrobj = nil
      # for the case where a field is a primitive "pointer"
      if x.fieldpointer != nil:
        dealloc(x.fieldpointer)
        x.fieldpointer = nil

    proc `=move`(dest, source: var T) =
      # don't do a bulk `=destroy` on dest as some pointers might be equal...
      # do the following for every field;
      # for fields that are objects, just call `=move` and the sub `=move`'s
      # will handle it
      `=move`(dest.fieldobj, source.fieldobj)
      # for fields that are ptr obj, protect against self-assignments:
      if dest.fieldptrobj != source.fieldptrobj and dest.fieldptrobj != nil:
        `=destroy`(dest.fieldptrobj[])
        dealloc(dest.fieldptrobj)
      dest.fieldptrobj = source.fieldptrobj  # steal the pointer rather than
                                             # copying, avoiding whatever
                                             # behavior `=` implies
      source.fieldptrobj = nil
      # for fields that are primitive pointers...
      if dest.fieldpointer != source.fieldpointer and dest.fieldpointer != nil:
        dealloc(dest.fieldpointer)
      dest.fieldpointer = source.fieldpointer
      source.fieldpointer = nil

    proc `=`(dest: var T; source: T) =
      # don't do a bulk `=destroy` on dest as some pointers might be equal...
      # for fields that are objects, just call `=` and the sub `=`'s will
      # handle it
      `=`(dest.fieldobj, source.fieldobj)
      # for fields that are ptr obj, protect against self-assignments:
      if dest.fieldptrobj != source.fieldptrobj:
        dest.fieldptrobj[] = source.fieldptrobj[]  # the pointee's `=` handles
                                                   # destroying the old value
      # for fields that are primitive pointers...
      if dest.fieldpointer != source.fieldpointer and dest.fieldpointer != nil:
        dealloc(dest.fieldpointer)
      dest.fieldpointer = source.fieldpointer

Since these are overrides of a default version for each object type, the 
default version can do whatever is appropriate for things that need no special 
treatment - e.g. =destroy doing nothing for primitive types, and the default 
versions of =move and = just doing simple copies.

The above pattern would seem to remove the limitation that the model can't 
handle pointers to the same object or location.
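As a sanity check of the pattern, here is a concrete single-field instance that 
compiles in today's Nim (Buf and initBuf are invented names; the hooks use 
today's spellings =copy/=sink in place of the spec's =/=move; build with 
--mm:arc):

```nim
# Invented illustration - Buf/initBuf are not from the spec; hooks use the
# current names (=copy/=sink) for the spec's `=`/`=move`.
type
  Buf = object
    data: ptr int          # stands in for "fieldpointer"/"fieldptrobj"

proc `=destroy`(b: var Buf) =
  if b.data != nil:
    dealloc(b.data)
    b.data = nil

proc `=sink`(dest: var Buf; source: Buf) =
  # protect against self-assignment, then steal the pointer
  if dest.data != source.data and dest.data != nil:
    dealloc(dest.data)
  dest.data = source.data

proc `=copy`(dest: var Buf; source: Buf) =
  if dest.data == source.data: return   # self-assignment is a no-op
  `=destroy`(dest)
  if source.data != nil:
    dest.data = create(int)
    dest.data[] = source.data[]         # deep copy

proc initBuf(v: int): Buf =
  result.data = create(int)
  result.data[] = v

var a = initBuf(1)
var b = a                  # invokes `=copy`
b.data[] = 2               # deep copy: does not affect a
doAssert a.data[] == 1 and b.data[] == 2
```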

To me, this scheme seems to have few limitations: it supports all kinds of 
tuning for speed via the owned and sink qualifiers, and - with the suggested 
ability to cast between structurally equivalent object types - it can reduce or 
eliminate the expensive lock/atomic operations even for types allocated on the 
shared heap.
