> How does it work with threads?

It works the same with threads as with everything else: when you need to pass 
anything that is subject to being destroyed as a thread argument, you need to 
"move" it into a sink parameter, with everything that implies, including that 
it must then be an owned ref rather than a plain ref. Threads don't return 
anything, but if you are returning something through a let/var variable 
somewhere (atomically protected if necessary), anything passed out must be 
"moved" as sink, and if it is a ref it must be an owned ref. The same applies 
to Channels, which may contain only sink values (owned refs if they are refs), 
with all the usual qualifications: owned refs can only be moved.
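
To make that concrete, here is a pseudocode sketch against the proposed model 
(nothing here compiles today; the `owned` annotation on the Thread/parameter 
types and a sink-checked `createThread` are assumptions of mine):

    type Node = ref object
      val: int

    proc worker(arg: sink owned Node) {.thread.} =
      echo arg.val # this thread now owns the Node and destroys it on exit

    var t: Thread[owned Node]
    var n = Node(val: 42)           # construction yields an owned ref
    createThread(t, worker, move n) # must be "moved" in; using `n` afterwards is an error
    joinThread(t)

The same shape applies to Channels: `chan.send(move n)` transfers ownership to 
whichever thread eventually calls `recv`.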

> Can data structures like seq and tables be shared now?

You may as well include strings, and yes, they can be shared between threads if 
the main thread or a controlling thread creates them and passes them into the 
threads as non-sink parameters, as long as the controlling thread doesn't go 
out of scope during the using threads' lifetimes. However, they can't be 
simultaneously modified across threads: if that were built in, every seq, 
table, and string would pay a performance penalty whenever the threads switch 
is on. If simultaneous access including mutation is desired, one must override 
the `[]`/`[]=` accessors to make them atomic.
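
As a sketch of what such an override might look like (hypothetical: the 
`SharedSeq` wrapper and its per-container lock are my illustration, not 
anything in the standard library):

    import locks

    type SharedSeq[T] = object
      lock: Lock   # guards all element access
      data: seq[T]

    proc initSharedSeq[T](data: sink seq[T]): SharedSeq[T] =
      initLock result.lock
      result.data = data

    proc `[]`[T](s: var SharedSeq[T]; i: int): T =
      acquire s.lock; result = s.data[i]; release s.lock

    proc `[]=`[T](s: var SharedSeq[T]; i: int; v: T) =
      acquire s.lock; s.data[i] = v; release s.lock

Every indexed read and write now pays for a lock round trip, which is exactly 
the blanket cost the language avoids by not building this in.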

When one wants to be able to arbitrarily create these in any thread to be used 
and destroyed in some other thread, one can accomplish this by passing them 
through Channels as sink. But then we need special versions that use 
allocShared instead of alloc to create them and deallocShared instead of 
dealloc to release their heap storage, again with the accessor overrides if 
they are to be simultaneously mutated. We could improve the syntax for 
creating and destroying whichever of them we want with something like a 
{.shared.} pragma at creation that is only effective when threads are on, and 
we could bypass the atomic accessor operations by casting to the non-shared 
versions for access when atomic operations aren't needed, but that's likely 
about the extent of the performance tuning we can do.

Note that the standard library doesn't yet support the features I describe in 
the previous paragraph, but it could.
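
If the standard library did grow such support, usage might look something like 
this (pure speculation: neither the {.shared.} pragma nor an 
allocShared-backed seq exists yet):

    var s = newSeq[int]() {.shared.} # heap storage via allocShared/deallocShared
    s.add(1)                         # goes through the atomic accessors: safe but slower

    # when this thread is known to have exclusive access,
    # cast away the shared-ness to bypass the atomic accessors:
    var fast = cast[seq[int]](s)
    fast[0] = 42                     # plain, non-atomic access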

> Does threaded code become simpler to write?

I would say yes, with some qualifications. Using closures across threads can 
currently be done, but it is very ugly, requiring protect/dispose, with dribs 
and drabs of closure environments scattered across the heap space of each 
thread that creates/uses them. That should get a lot better under the new 
model, as closures will own their environments, and part of that will be 
automatic copy/move/destroy management of the captured values.
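
Under the new model, something like the following should become both legal and 
clean (a sketch assuming closures get owned, movable environments; none of 
this compiles today):

    proc makeCounter(start: int): owned (proc (): int) =
      var count = start # captured into the closure's owned environment
      result = proc (): int =
        count.inc
        count

    proc runIt(f: sink owned (proc (): int)) {.thread.} =
      echo f() # the environment (with `count`) now lives, and dies, in this thread

    var t: Thread[owned (proc (): int)]
    var counter = makeCounter(100)
    createThread(t, runIt, move counter) # moving the closure moves its whole environment
    joinThread(t)

No protect/dispose dance, and no environment fragments left behind on the 
spawning thread's heap.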

One will have to know when to use sink parameters and variables as described 
above, but that isn't unique to threads. In some respects it gets easier, 
because the new memory model makes memory management just work as long as one 
stays within those (quite simple) rules, and we no longer have to consider 
that each thread's heap has its own GC allocator. Threading is never entirely 
easy, but I would think this change improves things rather than making them 
more difficult, mostly because the conditions are well defined and the passing 
of ownership is at least partially checked by the compiler. It still likely 
isn't as easy as with a multi-threading GC as in the JVM, DotNet, or even 
Haskell, where one doesn't have to consider ownership at all.

> Are there any examples of the simplifications?

Regarding threading? Well, we can't have working examples yet because we don't 
yet have a working implementation of the new memory management system. Trivial 
examples such as those on RosettaCode: 
[https://rosettacode.org/wiki/Synchronous_concurrency#Nim](https://rosettacode.org/wiki/Synchronous_concurrency#Nim)
 and 
[https://rosettacode.org/wiki/Concurrent_computing#Nim](https://rosettacode.org/wiki/Concurrent_computing#Nim)
 will just work; the modifications come when things that have 
destroy/copy/move semantics come into play. For example, let's consider 
multi-threading an operation that involves manipulating seq's which may need 
to grow for any given thread's use. This requires that the threads own the 
seq's, and since it would be slow to keep creating new seq's unnecessarily, we 
need to pass ownership to the threads and get it back out along with the 
result. The following code shows one way it might look: 
    from sequtils import newSeqWith
    from cpuinfo import countProcessors

    let NUMPROCS = countProcessors()

    type Args = (int, seq[int] {.shared.})

    # hypothetical {.shared.} pragma to create a shared-heap seq
    var seqs = newSeqWith(NUMPROCS, sink newSeq[int]() {.shared.})
    for i in 0 ..< NUMPROCS: seqs[i].add(i * 100) # put one element into each seq

    var results: Channel[sink (int, seq[int])] # or Channel contents are automatically sink
    results.open

    var thrds: array[0 .. NUMPROCS - 1, Thread[Args]]

    proc doit(arg: sink Args) {.thread.} =
      let id = arg[0]
      var nasq: seq[int]; move(nasq, cast[seq[int]](arg[1])) # cast to non-atomic for faster use
      let last = nasq[nasq.high]
      nasq.add(last + 1) # modifying the seq is safe: this thread owns it, no other does
      results.send(move (id, cast[seq[int] {.shared.}](nasq))) # back to shared for safety between threads

    for i in 0 ..< NUMPROCS: createThread[Args](thrds[i], doit, (i, seqs[i])) # start some threads...

    var numdone = 0
    while true:
      var arg: Args; move(arg, results.recv)
      let id = arg[0]
      var sq: seq[int] {.shared.}; move(sq, arg[1]) # leave as shared
      if sq.len < 100: createThread[Args](thrds[id], doit, move (id, sq))
      else: move(seqs[id], sq); numdone.inc
      if numdone == NUMPROCS: break

    # at this point we own the seqs and their contents and could concatenate them or whatever

The above shows the general idea; in short, the only real complexities are 
knowing when to use sink and move (as for any use of the new memory model), 
knowing when shared versions are needed for thread safety and performance, and 
knowing how and when to transfer ownership in the special case of threads. 
Here, synchronization was handled simply through the use of Channels, but it 
would get a little more complex if we were to implement it with Locks and 
Conds...
