Ah right, indeed. Well, let's say that it's a non-supported use case *shrugs* and 
that Arraymancer tensors are the wrong container for that. I'm not aware of any 
scientific/numerical computing situation where an operation depends not only on 
the tensor's size but also on the values it wraps.

Now, regarding copy-on-write in Arraymancer: after tinkering a bit, I am 
convinced that it is not suitable and that plain reference semantics (or even 
value semantics) are better.

I've explored using a refcounter and a simpler "shared_bit" boolean that just 
tracks whether the value was shared at any point (= assignment) or moved (= sink); 
a rough sketch of the shared-bit variant is at the end of this message. Neither 
cuts it, for the following reasons:

1. Tensors wrapped in containers: in neural networks, you create a tensor and then 
wrap it in a container that is used in a graph which keeps track of all the 
operations applied to it.
    
    
    import ../src/arraymancer, sequtils
    
    # Context that will track the operations applied to the tensor
    # so that gradients can be computed in the future
    let ctx = newContext Tensor[int]
    
    # What is the refcount? --> it's 0
    let W = ctx.variable toSeq(1..8).toTensor.reshape(2,4)
    
    let x = toSeq(11..22).toTensor.reshape(4,3)
    # What is the refcount? It's 1 until x goes out of scope
    let X = ctx.variable x

Working around this will probably lead to an overengineered solution.

2. Predictability: when everything deep-copies or everything shallow-copies, 
everything is easier to reason about. If there is a perf or a sharing issue, you 
just add a shallow copy or a clone at the offending site and you're done.
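
    For example (just a sketch; I'm assuming a `clone` proc for the explicit deep 
    copy and Nim's `shallowCopy` for the explicit aliasing):
    
    
    import ../src/arraymancer, sequtils
    
    let a = toSeq(1..6).toTensor.reshape(2,3)
    
    # Sharing is the problem? Make the copy explicit:
    let b = a.clone()   # b gets its own buffer
    
    # Copying is the problem? Make the sharing explicit:
    var c: Tensor[int]
    shallowCopy(c, a)   # c aliases a's buffer, no element is copied
    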

3. Workaroundability: since copy-on-write must overload assignment, if you want 
to ensure a shallow copy, for example, you have to use: 
    
    
    let foo = toCowObj(1, 2, 3, 4, 5)
    var bar: CowObject[int]
    system.`=`(bar, foo) # bypass the CoW `=` overload and force the built-in assignment
    
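For reference, the shared-bit variant I tinkered with looks roughly like this. It 
is only a sketch, not actual Arraymancer code: `CowObject`, `toCowObj` and 
`shared_bit` are the names used above, but the body (the `CowBuffer` ref, the `=` 
overload and the `[]=` accessor) is purely illustrative:
    
    
    type
      CowBuffer[T] = ref object
        data: seq[T]
    
      CowObject*[T] = object
        buf: CowBuffer[T]
        shared_bit: bool   # set as soon as the value is shared through an assignment
    
    proc toCowObj*[T](vals: varargs[T]): CowObject[T] =
      result.buf = CowBuffer[T](data: @vals)
    
    proc `=`*[T](dst: var CowObject[T], src: CowObject[T]) =
      # Overloaded assignment: share the buffer (only the ref is copied, not the data)
      # and remember that sharing happened.
      # (Flagging the source as well would need extra machinery since src is not `var`.)
      dst.buf = src.buf
      dst.shared_bit = true
    
    proc `[]=`*[T](obj: var CowObject[T], i: int, val: T) =
      if obj.shared_bit:
        # First mutation after sharing: take a private copy of the buffer
        # (the "copy" part of copy-on-write).
        obj.buf = CowBuffer[T](data: obj.buf.data)
        obj.shared_bit = false
      obj.buf.data[i] = val
    

With this in place a plain `bar = foo` goes through the overloaded `=` and sets 
the flag, which is exactly why the workaround in point 3 has to call `system.`=`` 
to get the built-in, flag-free assignment.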
