Hi Simon,

You may want to test the throughput of tasks first to set a baseline.
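Something like this would do as a rough baseline (a sketch in present-day
Rust, using std::sync::mpsc as a stand-in for the pipes API; the message
count of one million is an arbitrary choice):

use std::sync::mpsc;
use std::thread;
use std::time::Instant;

fn main() {
    // Arbitrary message count for the baseline run.
    const N: u64 = 1_000_000;
    let (tx, rx) = mpsc::channel::<u64>();

    // Sender task: push N items as fast as possible.
    let sender = thread::spawn(move || {
        for i in 0..N {
            tx.send(i).unwrap();
        }
    });

    // Receiver side: time how long it takes to drain all N items.
    let start = Instant::now();
    let mut checksum = 0u64;
    for _ in 0..N {
        checksum = checksum.wrapping_add(rx.recv().unwrap());
    }
    let elapsed = start.elapsed();
    sender.join().unwrap();

    println!(
        "{} items in {:?} -> {:.0} items/sec (checksum {})",
        N,
        elapsed,
        N as f64 / elapsed.as_secs_f64(),
        checksum
    );
}

Run the same receive loop against your Disruptor port and against pipes
and you get directly comparable items-per-second numbers.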

The Disruptor is faster, but it normally runs in a tight loop (or a yield
loop), so it pretty much ties up a core. Even when yielding it will run
again very soon; it's the fact that it's always running that makes it so
fast, since there is no contention or communication. You can't run 100
tasks like that with real work, as opposed to micro-benchmarks. While I
like the Disruptor approach, it doesn't fit well with Rust tasks, which
are more of an actor model. (Well, the Disruptor's handlers are like
tasks, it's just that they are much bigger.)
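To make that concrete, the wait side looks roughly like this (a sketch,
not the LMAX code; the spin threshold of 100 is made up):

use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;

// Spin (then yield) until the producer has published at least `wanted`,
// and return the sequence actually observed. The consumer never blocks in
// the kernel, which is why it stays fast but keeps a core busy.
fn wait_for(published: &AtomicU64, wanted: u64) -> u64 {
    let mut spins = 0u32;
    loop {
        let seen = published.load(Ordering::Acquire);
        if seen >= wanted {
            return seen;
        }
        spins += 1;
        if spins < 100 {
            std::hint::spin_loop(); // stay hot on the core, no syscall
        } else {
            // Even this is effectively "always running": the scheduler puts
            // the thread straight back, so latency stays low but the core is
            // still tied up.
            std::thread::yield_now();
        }
    }
}

fn main() {
    let published = Arc::new(AtomicU64::new(0));
    let p = Arc::clone(&published);
    std::thread::spawn(move || p.store(42, Ordering::Release));
    println!("saw sequence {}", wait_for(&published, 42));
}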

Anyway, the Disruptor really rests on the old ring buffer: take the ring
buffer, think about scheduling/sequencing, and test it so that it doesn't
peg the CPU at 100%. (Note: their catch-up extensions to the ring buffer
may or may not be relevant.)
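For example, a stripped-down single-producer/single-consumer version of
that idea could look like the following (the names, sizes, and backoff
numbers are mine for illustration, not the Disruptor's or your code):

use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

const SIZE: u64 = 1024; // power of two so wrapping the sequence is cheap

struct Ring {
    slots: Vec<AtomicU64>, // payload; atomics keep the sketch free of unsafe
    published: AtomicU64,  // last sequence the producer has made visible
    consumed: AtomicU64,   // last sequence the consumer has finished with
}

impl Ring {
    fn new() -> Self {
        Ring {
            slots: (0..SIZE).map(|_| AtomicU64::new(0)).collect(),
            published: AtomicU64::new(0),
            consumed: AtomicU64::new(0),
        }
    }

    fn publish(&self, seq: u64, value: u64) {
        // Wait until the consumer has freed the slot we are about to reuse.
        while seq > self.consumed.load(Ordering::Acquire) + SIZE {
            thread::yield_now();
        }
        self.slots[(seq % SIZE) as usize].store(value, Ordering::Relaxed);
        self.published.store(seq, Ordering::Release);
    }

    fn consume(&self, seq: u64) -> u64 {
        // Back off progressively instead of spinning forever: a little extra
        // latency in exchange for not pinning a core while idle.
        let mut spins = 0u32;
        while self.published.load(Ordering::Acquire) < seq {
            spins += 1;
            if spins < 100 {
                std::hint::spin_loop();
            } else {
                thread::sleep(Duration::from_micros(50));
            }
        }
        let value = self.slots[(seq % SIZE) as usize].load(Ordering::Relaxed);
        self.consumed.store(seq, Ordering::Release);
        value
    }
}

fn main() {
    let ring = Arc::new(Ring::new());
    let producer = {
        let ring = Arc::clone(&ring);
        thread::spawn(move || {
            for seq in 1..=100_000u64 {
                ring.publish(seq, seq * 2);
            }
        })
    };
    let mut last = 0;
    for seq in 1..=100_000u64 {
        last = ring.consume(seq);
    }
    producer.join().unwrap();
    println!("last value seen: {}", last);
}

The real Disruptor puts a lot more machinery around this (batching,
multiple consumers, the catch-up extensions), but the sequence counters
plus the wait/backoff policy are the heart of it.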

I'm sure the performance of tasks can be improved; have a look at IPC in
the Barrelfish OS. It uses a ring buffer like the Disruptor and plays
around with cache prefetching to give either low latency, or higher
latency with better throughput (it's simple). It can be improved further
with non-temporal SIMD moves, but that is trickier to do because of the
compiler / marshalling of the types.
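As a rough illustration of the prefetch trick (x86_64 only; the slot
layout, the lookahead of 4, and the function names are assumptions for
the sketch, not Barrelfish's or the Disruptor's actual code):

#[cfg(target_arch = "x86_64")]
fn process_batch(slots: &[u64], first: usize, last: usize) -> u64 {
    use core::arch::x86_64::{_mm_prefetch, _MM_HINT_T0};
    // Hypothetical lookahead distance: larger favours throughput, smaller
    // favours latency.
    const LOOKAHEAD: usize = 4;
    let mut sum = 0u64;
    for seq in first..=last {
        // Ask the CPU to start pulling a later slot into cache while we are
        // still working on the current one.
        let ahead = (seq + LOOKAHEAD).min(slots.len() - 1);
        unsafe { _mm_prefetch::<_MM_HINT_T0>(slots.as_ptr().add(ahead) as *const i8) };
        sum = sum.wrapping_add(slots[seq]);
    }
    sum
}

#[cfg(target_arch = "x86_64")]
fn main() {
    let slots: Vec<u64> = (0..1024).collect();
    println!("checksum: {}", process_batch(&slots, 0, slots.len() - 1));
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}

A bigger lookahead (or batching several slots per wakeup) buys throughput
at the cost of latency; non-temporal moves are the same idea taken
further, bypassing the cache on the copy itself, which is where the
compiler / marshalling trouble comes in.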

Ben


On Tue, Oct 29, 2013 at 1:02 PM, Simon Ruggier <[email protected]> wrote:

> Greetings fellow Rustians!
>
> First of all, thanks for working on such a great language. I really like
> the clean syntax, increased safety, separation of data from function
> definitions, and freedom from having to declare duplicate method prototypes
> in header files.
>
> I've been working on an alternate way to communicate between tasks in
> Rust, following the same approach as the LMAX Disruptor.[1] I'm hoping to
> eventually offer a superset of the functionality in the pipes API, and
> replace them as the default communication mechanism between tasks. Just as
> with concurrency in general, my main motivation in implementing this is to
> improve performance. For more information about the disruptor approach,
> there's a lot of information linked from their home page, in a variety of
> formats.
>
> This is my first major contribution of new functionality to an open-source
> project, so I didn't want to discuss it in advance until I had a working
> system to demonstrate. I currently have a very basic proof of concept that
> achieves almost two orders of magnitude better performance than the pipes
> API. On my hardware[2], I currently see throughput of about 27 million
> items per second when synchronizing with a double-checked wait condition
> protocol between sender and receivers, 80+ million items with no blocking
> (i.e. busy waiting), and anywhere from 240,000 to 600,000 when using pipes.
> The LMAX Disruptor library gets up to 110 million items per second on the
> same hardware (using busy waiting and yielding), so there's definitely
> still room for significant improvement.
>
> I've put the code up on GitHub (I'm using rustc from master).[3]
> Currently, single and multi-stage pipelines of receivers are supported,
> while many features are missing, like multiple concurrent senders, multiple
> concurrent receivers, or mutation of the items as they pass through the
> pipeline. However, given what I have so far, now is probably the right time
> to start soliciting feedback and advice. I'm looking for review,
> suggestions/constructive criticism, and guidance about contributing this to
> the Rust codebase.
>
> Thanks,
> Simon
>
> [1] http://lmax-exchange.github.io/disruptor/
> [2] A 2.66GHz Intel P8800 CPU running in a Thinkpad T500 on Linux x86_64
> [3] https://github.com/sruggier/rust-disruptor
>
_______________________________________________
Rust-dev mailing list
[email protected]
https://mail.mozilla.org/listinfo/rust-dev
