I think this pattern, as you described it, could be written with Option<T>
types. What you call Splittable<T> is basically just Option<T>. To
split off a part, you can use Option's swap_unwrap() method. You would
then use futures or channels to receive the result back and simply
assign it (e.g., `event.partA = Some(future.get())`).
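For illustration, a minimal sketch of this pattern in present-day Rust, where
`Option::take` plays the role of the old `swap_unwrap` (the `Event` struct and
`String` payload here are placeholders, not anything from the thread):

```rust
// `Option::take` moves the value out and leaves `None` behind,
// which is exactly the "split off" operation being discussed.
struct Event {
    part_a: Option<String>,
}

fn main() {
    let mut event = Event { part_a: Some(String::from("payload")) };

    // Split off: take ownership of the part, leaving None in its place.
    let part = event.part_a.take();
    assert!(event.part_a.is_none());

    // ...process `part` elsewhere (e.g. on another thread)...
    let processed = part.map(|s| s.to_uppercase());

    // Reunite: simply assign the result back.
    event.part_a = processed;
    assert_eq!(event.part_a.as_deref(), Some("PAYLOAD"));
}
```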
Niko
Stefan Plantikow wrote:
Hi,
On 21.02.2013 at 15:21, Niko Matsakis<[email protected]> wrote:
The problem with allocating memory in another task is that it's a kind of
"rendezvous" operation: you must pause both tasks so that neither
performs a GC etc. during the copy. Personally, I think we should stick with
serialization unless it becomes a true burden. We'll probably have to improve the
serializer to handle cyclic data structures at some point, though.
If there is a true performance problem, we could build an API that lets you
allocate arbitrary data structures in an arena, which can then be sent to
other tasks as a unit. In this way one could send an arbitrary graph in
constant time. This would take a bit of thought and care and may require
some minor extensions to the type system.
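One way to approximate such a sendable arena today, with no type-system
extensions, is to keep all nodes in a single Vec and link them by index rather
than by pointer; the whole Vec is then Send and moves between threads in
constant time. A minimal sketch (all names are illustrative, not an existing
API):

```rust
use std::sync::mpsc;
use std::thread;

// Nodes live in one contiguous Vec (the "arena") and reference each
// other by index, so arbitrary graphs -- even cyclic ones -- need no
// pointers and can be moved as a unit.
struct Node {
    value: i32,
    edges: Vec<usize>, // indices into the arena
}

fn main() {
    // Build a small cyclic graph: node 0 -> node 1 -> node 0.
    let arena = vec![
        Node { value: 10, edges: vec![1] },
        Node { value: 20, edges: vec![0] },
    ];

    let (tx, rx) = mpsc::channel();
    // Sending moves the Vec: constant time, no per-node copy.
    tx.send(arena).unwrap();

    let handle = thread::spawn(move || {
        let arena = rx.recv().unwrap();
        arena.iter().map(|n| n.value).sum::<i32>()
    });
    assert_eq!(handle.join().unwrap(), 30);
}
```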
I keep thinking about a feature like that in relation to the Disruptor
(factory line) concurrency pattern, which is one of the fastest methods for
CPU cache-conscious, high-speed concurrent processing.
Disruptor summary: a Disruptor is a ring buffer of event objects. The ring
buffer keeps track of free event slots and of events ready for consumption
using two atomically incremented event slot pointers. Events are processed by
different tasks/threads, which read directly off the ring buffer and thus
benefit massively from cache locality. Concurrency between tasks is managed
either by ensuring that tasks work on different parts of an event, or by
ordering tasks that work on the same part of an event via the atomic event
slot pointers (i.e. task B only works on field 'a' of an event after task A
has worked on it).
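The two-counter mechanism can be sketched in safe, present-day Rust as a tiny
single-producer/single-consumer ring; this is my own minimal illustration of
the sequence-pointer idea, not the real Disruptor (which adds batching,
cache-line padding, multiple consumers, etc.):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

const SIZE: usize = 8; // ring capacity

// `head` is the next slot the producer will publish; `tail` is the
// next slot the consumer will read. head - tail = events in flight.
struct Ring {
    slots: Vec<AtomicUsize>,
    head: AtomicUsize,
    tail: AtomicUsize,
}

fn main() {
    let ring = Arc::new(Ring {
        slots: (0..SIZE).map(|_| AtomicUsize::new(0)).collect(),
        head: AtomicUsize::new(0),
        tail: AtomicUsize::new(0),
    });

    let producer = {
        let ring = Arc::clone(&ring);
        thread::spawn(move || {
            for i in 1..=100usize {
                // Spin until a slot is free (ring not full).
                while ring.head.load(Ordering::Acquire)
                    - ring.tail.load(Ordering::Acquire) == SIZE
                {
                    std::hint::spin_loop();
                }
                let head = ring.head.load(Ordering::Relaxed);
                ring.slots[head % SIZE].store(i, Ordering::Relaxed);
                // Publishing = bumping the head sequence pointer.
                ring.head.store(head + 1, Ordering::Release);
            }
        })
    };

    let mut sum = 0;
    for _ in 0..100 {
        // Spin until an event is ready (ring not empty).
        while ring.tail.load(Ordering::Acquire)
            == ring.head.load(Ordering::Acquire)
        {
            std::hint::spin_loop();
        }
        let tail = ring.tail.load(Ordering::Relaxed);
        sum += ring.slots[tail % SIZE].load(Ordering::Relaxed);
        // Consuming = bumping the tail sequence pointer.
        ring.tail.store(tail + 1, Ordering::Release);
    }
    producer.join().unwrap();
    assert_eq!(sum, 5050); // 1 + 2 + ... + 100
}
```

The Release store on `head` paired with the Acquire load in the consumer is
what makes the slot contents visible without any locks, which is the essence
of the pattern.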
I've been wondering how to implement the Disruptor in Rust without resorting
to unsafe pointers. To make this convenient, I think it would be necessary
to have partial transfer of ownership of data.
For unique pointers, it should be possible to have that without needing cycle-detecting GC by introducing a
"split-off" operation that transforms a unique pointer into the original pointer with access to the sub-part removed
(or replaced with a default value, like option::None), plus a new unique pointer to the torn-off sub-part. The second
pointer could then be handed off to another task for parallel processing without incurring copying costs. The parent of
such a "splittable pointer" would have to keep a reference count for the torn-off parts that is incremented on
"split" and decremented on "union". When a sub-part goes out of scope, it would automatically
reunite with its parent. The parent can be freed when its refcount is zero. There would be no cross-task cycles, since
all involved pointers would still be unique (owned by a single task at a single point in time).
The whole point of this would be to have good CPU cache locality, as the
sub-pointer would live at a memory address close to its parent pointer.
I guess what I am thinking about is /something/ like this:
struct Event {
    partA: Splittable<A>,
    partB: Splittable<B>
}

fn handleEvent(event: ~Event)
{
    let (eventWithoutA, partA) = event.partA.split();
    // send unique partA somewhere for processing
    let (eventWithoutB, partB) = eventWithoutA.partB.split();
    // send unique partB somewhere for processing and retrieve it back
    let eventWithUpdatedB = eventWithoutB.partB.union(receivedPartB);
    // blocking union (?)
    let eventWithUpdatedA = eventWithUpdatedB.partA.blockingUnion();
    // continue
}
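For comparison, here is one way that sketch could be approximated in
present-day Rust, modeling Splittable with Option and using a channel to
retrieve the processed part back; the field types and the "processing" step
are placeholders of my own:

```rust
use std::sync::mpsc;
use std::thread;

// `Option` stands in for `Splittable`: `take()` is the split,
// reassignment is the union.
struct Event {
    part_a: Option<Vec<u8>>,
    part_b: Option<Vec<u8>>,
}

fn handle_event(mut event: Event) -> Event {
    let (tx, rx) = mpsc::channel();

    // Split off part_b and send it somewhere for processing.
    let part_b = event.part_b.take().expect("part_b already split off");
    let worker = thread::spawn(move || {
        let mut part = part_b;
        part.push(42); // stand-in for real processing
        tx.send(part).unwrap();
    });

    // Blocking "union": wait for the processed part and reattach it.
    event.part_b = Some(rx.recv().unwrap());
    worker.join().unwrap();
    event
}

fn main() {
    let event = Event {
        part_a: Some(vec![1, 2]),
        part_b: Some(vec![3]),
    };
    let event = handle_event(event);
    assert_eq!(event.part_b, Some(vec![3, 42]));
}
```

Unlike the proposed Splittable, this copies nothing back by refcount magic --
the part is physically moved out and moved back in -- so it loses the cache
locality the proposal is after, but it does keep everything uniquely owned.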
Is there a better way to achieve this in Rust today?
Cheers,
Stefan
_______________________________________________
Rust-dev mailing list
[email protected]
https://mail.mozilla.org/listinfo/rust-dev