You can implement atomic refcounting with destructors. See what I do in my 
[multithreading 
runtime](https://github.com/mratsim/weave/blob/33a446ca4ac6294e664d26693702e3eb1d9af326/weave/cross_thread_com/flow_events.nim#L176-L201):
    
    
    type
      FlowEvent* = object
        e: EventPtr
      
      EventPtr = ptr object
        refCount: Atomic[int32]
        kind: EventKind
        union: EventUnion
    
    # Refcounting is started from 0 and we avoid fetchSub with release semantics
    # in the common case of only one reference being live.
    
    proc `=destroy`*(event: var FlowEvent) =
      if event.e.isNil:
        return
      
      let count = event.e.refCount.load(moRelaxed)
      fence(moAcquire)
      if count == 0:
        # We have the last reference
        if event.e.kind == Iteration:
          wv_free(event.e.union.iter.singles)
        # Return memory to the memory pool
        recycle(event.e)
      else:
        discard fetchSub(event.e.refCount, 1, moRelease)
      event.e = nil
    
    proc `=sink`*(dst: var FlowEvent, src: FlowEvent) {.inline.} =
      # Don't pay for atomic refcounting when the compiler
      # can prove there is no refcount change
      `=destroy`(dst)
      system.`=sink`(dst.e, src.e)
    
    proc `=`*(dst: var FlowEvent, src: FlowEvent) {.inline.} =
      if dst.e != src.e:   # guard against self-assignment
        `=destroy`(dst)
        discard fetchAdd(src.e.refCount, 1, moRelaxed)
        dst.e = src.e
    
    

Multithreaded garbage collection is a very hard problem, and it took years for 
Java to nail it.

The `parallel` statement has a proof-of-concept array-bounds-checking algorithm 
that can prove at compile time that array/seq accesses are safe to perform 
concurrently, because no cell is updated by more than one thread. It's mostly a 
proof of concept though, and as far as I know the planned Z3 integration (an 
SMT solver) is in part motivated by extending this capability.
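
A minimal sketch of the kind of loop that checker accepts (assuming the experimental `parallel`/`spawn` API from `std/threadpool`; `double` is a made-up worker proc):

```nim
{.experimental: "parallel".}
# Compile with: nim c --threads:on parmap.nim
import std/threadpool

proc double(x: int): int = 2 * x

var a = newSeq[int](10)
parallel:
  # The compiler can prove every iteration writes a distinct cell of `a`,
  # so the loop bodies are safe to run on different threads.
  for i in 0 ..< a.len:
    a[i] = spawn double(i)

echo a
```

If the disjointness can't be proven (say, indexing with `a[i mod 3]`), the program is rejected at compile time instead of racing at runtime.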

If you compile with `--gc:boehm` or `--gc:arc`, memory is allocated on a shared 
heap. Though I'm currently having trouble with `--gc:arc` + `--threads:on`, its 
main motivation is easing multithreading.
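
For reference, the invocations look like this (assuming a file named `demo.nim`):

```shell
# Both collectors allocate on a heap visible to all threads:
nim c --gc:boehm --threads:on demo.nim
nim c --gc:arc --threads:on demo.nim
```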

Note that data structures shared between threads will still need to be handled 
via destructors + atomic refcounting; there is no plan to add atomic overhead 
to all types managed by the GC.

Furthermore, Nim encourages message passing (i.e. share memory by 
communicating instead of communicating by sharing).
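
A minimal sketch of that style using the built-in `Channel[T]` (requires `--threads:on`; the worker proc and the negative stop sentinel are my own invention):

```nim
# Channels must live in memory visible to all threads, e.g. as globals.
var toWorker: Channel[int]
var fromWorker: Channel[int]

proc worker() {.thread.} =
  while true:
    let x = toWorker.recv()
    if x < 0: break            # hypothetical stop sentinel
    fromWorker.send(x * x)

var t: Thread[void]
toWorker.open()
fromWorker.open()
createThread(t, worker)

var results: seq[int]
for i in 1 .. 3:
  toWorker.send(i)             # payloads are deep-copied into the channel
  results.add fromWorker.recv()
echo results                   # @[1, 4, 9]

toWorker.send(-1)              # ask the worker to stop
joinThread(t)
toWorker.close()
fromWorker.close()
```

Because `send` deep-copies its payload, neither thread ever aliases the other's data, which is exactly what makes this style race-free.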
