Regarding multithreading, it really depends on your workload, but here are three kinds of code architectures that allow mixing async with threads:

1. The part of your code that you want threaded is stateless and only allows the following types to cross threads:

  * plain old data types/variants
  * ref, seq and string types, but only if they are created and destroyed within the task and are not sent across threads
  * pointers to buffers (raw or Nim sequences) that outlive the task (for example pointers to matrices)



You can use a threadpool and just `spawn myFunctionCall(a, b, c)` (or Weave if dynamic load balancing, parallel-for, or producer-consumer task dependencies are needed).

2. Your threads are long-lived, maintain (independent) state, and work as a service that needs to communicate with other services. Then use channels to communicate between them. Nim channels support sending seq and string via deep copy.
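A minimal sketch of this service pattern, assuming a single long-lived worker and two channels (the names `worker`, `jobs`, `results` are illustrative). Strings are deep-copied on `send`, so each side owns its own copy:

```nim
# Compile with: nim c --threads:on channel_example.nim
var jobs: Channel[string]     # requests to the worker
var results: Channel[string]  # replies from the worker

proc worker() {.thread.} =
  while true:
    let msg = jobs.recv()     # blocks until a message arrives (deep copy)
    if msg == "quit":
      break
    results.send("processed: " & msg)

var t: Thread[void]
jobs.open()
results.open()
createThread(t, worker)

jobs.send("task 1")
let reply = results.recv()
echo reply                    # processed: task 1
jobs.send("quit")
joinThread(t)
jobs.close()
results.close()
```

Because the worker owns its state and only messages cross the thread boundary, the async event loop on the main thread never shares memory with it.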

3. You need shared state, for example a shared hash table. Now you have a problem :P. If it has an infinite lifetime, you can allocate it on the heap and pass a pointer to it; if not, you need to implement atomic refcounting, which is not too difficult with destructors. See for example my [refcounted events in Weave](https://github.com/mratsim/weave/blob/17257c2f95594b566abaa7b5c1a875f1a77f3536/weave/cross_thread_com/flow_events.nim#L176-L201)
    
    
    type
      FlowEvent* = object
        e: EventPtr
      
      EventPtr = ptr object
        refCount: Atomic[int32]
        kind: EventKind
        union: EventUnion
    
    # Internal
    # ----------------------------------------------------
    # Refcounting is started from 0 and we avoid fetchSub with release semantics
    # in the common case of only one reference being live.
    
    proc `=destroy`*(event: var FlowEvent) =
      if event.e.isNil:
        return
      
      let count = event.e.refCount.load(moRelaxed)
      fence(moAcquire)
      if count == 0:
        # We have the last reference
        if not event.e.isNil:
          if event.e.kind == Iteration:
            wv_free(event.e.union.iter.singles)
          # Return memory to the memory pool
          recycle(event.e)
      else:
        discard fetchSub(event.e.refCount, 1, moRelease)
      event.e = nil
    
    proc `=sink`*(dst: var FlowEvent, src: FlowEvent) {.inline.} =
      # Don't pay for atomic refcounting when the compiler can prove there is no ref change
      `=destroy`(dst)
      system.`=sink`(dst.e, src.e)
    
    proc `=`*(dst: var FlowEvent, src: FlowEvent) {.inline.} =
      if dst.e == src.e:
        return  # self-assignment: don't destroy what we're about to copy
      `=destroy`(dst)
      discard fetchAdd(src.e.refCount, 1, moRelaxed)
      dst.e = src.e
    
    
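For the simpler option in (3), shared state with program-long lifetime can be heap-allocated once and handed to threads by pointer. A minimal sketch, assuming a lock-protected counter (`SharedCounter` and the worker setup are illustrative, not from Weave):

```nim
# Compile with: nim c --threads:on shared_state_example.nim
import std/locks

type
  SharedCounter = object
    lock: Lock
    value: int

proc newSharedCounter(): ptr SharedCounter =
  # Manually managed shared-heap allocation: outlives any task that uses it.
  result = createShared(SharedCounter)
  initLock(result.lock)

proc increment(arg: (ptr SharedCounter, int)) {.thread.} =
  let (c, times) = arg
  for _ in 1 .. times:
    withLock c.lock:
      inc c.value

let counter = newSharedCounter()
var workers: array[4, Thread[(ptr SharedCounter, int)]]
for i in 0 .. workers.high:
  createThread(workers[i], increment, (counter, 1000))
joinThreads(workers)

let total = counter.value
echo total                 # 4000
deinitLock(counter.lock)
freeShared(counter)
```

Since the allocation never moves and is freed only after all threads have joined, passing the raw `ptr` around is safe without any refcounting.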

So for the first two architectures it's easy to "mix": threading and async live in separate domains.

I've also added facilities this weekend for Weave to run as a background 
service so that [long-lived threads can also submit jobs to 
Weave](https://github.com/mratsim/weave#foreign-thread--background-service-experimental)
 to ease interaction with async.
