Maybe I'm not understanding things, but is there any reason why the
standard library couldn't provide both types of channels? It's really not
a question of "which should be the default": these are two
abstractions/data structures that differ substantially in when they're
used in practice!

Looking at the docs at
http://static.rust-lang.org/doc/master/std/comm/index.html, a send
operation like the following (non-blocking / no wait) would make sense for
bounded channels (the type is pseudocode; I'm still learning Rust, so it's
likely slightly wrong):

*fn sendBoundedNoWait(&self, t: T) -> Result<int, T>*,

so that on a successful send to a bounded channel you learn the current
depth of the channel, and on failure (i.e., the send can't complete right
away and would otherwise have to wait) you get back the value you were
trying to send. Something like
*fn sendBoundedNoWait(&self, t: T) -> Result<int, (int, T)>*
might be even better, because in certain channel models you may have to
wait even when the queue isn't full, so congestion information should
probably be part of the info in a failed send. The ordinary blocking send
would then block when the channel queue is full and retry once there's a
spot.

Point being: bounded channels don't have to be blocking, but the
non-blocking variant needs an API that gives the sender enough information
to implement its own error-handling/retry logic.
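To make the straw man concrete, here's a minimal sketch of that API in
runnable Rust (all names are mine, not std's; a real implementation would
use condition variables rather than a bare mutex, and today's types rather
than the old `int`):

```rust
use std::collections::VecDeque;
use std::sync::Mutex;

// Hypothetical bounded sender: the non-blocking send reports the queue
// depth on success and hands the rejected value back on failure.
struct BoundedSender<T> {
    queue: Mutex<VecDeque<T>>,
    capacity: usize,
}

impl<T> BoundedSender<T> {
    fn new(capacity: usize) -> BoundedSender<T> {
        BoundedSender { queue: Mutex::new(VecDeque::new()), capacity: capacity }
    }

    // Ok(depth after insert) on success; Err(t) returns the value so the
    // caller can decide what to do: retry, drop, or apply back pressure.
    fn send_bounded_no_wait(&self, t: T) -> Result<usize, T> {
        let mut q = self.queue.lock().unwrap();
        if q.len() >= self.capacity {
            Err(t)
        } else {
            q.push_back(t);
            Ok(q.len())
        }
    }
}

fn main() {
    let tx = BoundedSender::new(2);
    assert_eq!(tx.send_bounded_no_wait("a"), Ok(1));
    assert_eq!(tx.send_bounded_no_wait("b"), Ok(2));
    assert_eq!(tx.send_bounded_no_wait("c"), Err("c")); // full: value handed back
    println!("ok");
}
```

The key design point is the error type: getting the value back on failure
means a full channel never loses data behind the sender's back.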

I work with, and have the pleasure of knowing, a number of people who
build some pretty nontrivial distributed systems. What they all explain to
me (on multiple occasions) is that unbounded queues (in the context of
asynchronous messages) are a nightmare, both for debugging the composite
system and for providing good performance characteristics under heavy
load. If you don't get feedback that you're producing far faster than can
be consumed, that's a bug! Having no back pressure is a scary problem when
you're processing terabytes of real-time event data a day.

With a bounded channel, you're actually forced to have a policy for
dealing with "I'm producing more than is being consumed", and when you
have a composed system of channels, that makes it possible to debug
throughput issues, rather than wondering "why is there a space leak?".
This idea is often called backflow or back pressure.

With unbounded channels, you have no way to build a policy for handling
heavy workloads, and any latency-sensitive computation turns into a
high-latency space leak. Unbounded channels are definitely a bit easier to
work with, in the sense of "oh, I can send a message to another
computation running in parallel and it just works", but you can very
easily manufacture examples where this blows up in your face with respect
to performance characteristics and space usage.

Point being: BOTH are valuable, and they're fundamentally very different
critters in terms of when they're appropriate. Talking about which should
be the default is a pretty dangerous thing :).


That said, if you have to choose a "default", and if the "spirit of Rust"
is good performance with predictable resource-usage characteristics, then
it should be bounded channels, plus something like the straw-man
non-blocking *sendBoundedNoWait* as part of the API.

That said, there shouldn't be a default; they're not the same thing at all.

cheers
-Carter


On Thu, Dec 19, 2013 at 12:36 AM, Patrick Walton <[email protected]> wrote:

> On 12/18/13 8:48 PM, Kevin Ballard wrote:
>
>  By that logic, you'd want to drop the oldest unprocessed events, not the
>> newest.
>>
>
> Right.
>
> To reiterate, there is a meta-point here: Blessing any communications
> primitive as the One True Primitive never goes well for high-performance
> code. I think we need multiple choices. The hard decision is what should be
> the default.
>
> Patrick
>
>
> _______________________________________________
> Rust-dev mailing list
> [email protected]
> https://mail.mozilla.org/listinfo/rust-dev
>