Re: [rust-dev] Mutability and borrowing

2013-06-02 Thread Daniel Micay
On Sun, Jun 2, 2013 at 12:33 AM, Ziad Hatahet hata...@gmail.com wrote:
 I have the following function:

 fn add_equal(x: &mut Complex, y: &Complex) {
 x.real += y.real;
 x.imag += y.imag;
 }

 Calling the function with the same variable being passed to both arguments
 (i.e. add_equal(&mut c, &c)) results in the compile error:

 error: cannot borrow `c` as immutable because it is also borrowed as mutable

 I am guessing this is to avoid aliasing issues? What is the way around this?

 Thanks

 --
 Ziad

You can currently use `&const Complex` for the second parameter, but
it may or may not be removed in the future. At the very least it will
probably be renamed.

An `&` pointer guarantees that the value it points to is immutable,
but `&const` is allowed to alias `&mut` since the compiler restricts
it much more. Ideally you would use a separate function for doubling
the components of a value and another for adding two together.
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Mutability and borrowing

2013-06-02 Thread Ziad Hatahet
On Sun, Jun 2, 2013 at 6:56 AM, Daniel Micay danielmi...@gmail.com wrote:

 You can currently use `const Complex` for the second parameter, but
 it may or may not be removed in the future. At the very least it will
 probably be renamed.



Excellent. Thanks all for your replies :)

--
Ziad


Re: [rust-dev] Scheduler and I/O work items for the summer

2013-06-02 Thread Brian Anderson

On 06/01/2013 12:08 AM, Vadim wrote:


On Fri, May 31, 2013 at 3:45 PM, Brian Anderson bander...@mozilla.com wrote:



With this problem in general I think the obvious solutions amount
to taking one of two approaches: translate I/O events into pipe
events; translate pipe events into I/O events. Solving the problem
efficiently for either one is rather simpler than solving both.
The example you show is a promising model that looks like it could
naively be implemented by buffering I/O into pipes. bblum and I
talked about an implementation that would work well for this
approach, but it has costs. I imagine it working like this.

1) The resolve_xxx function partitions the elements into pipesy
types and I/O types.
2) For each of the I/O types it creates a new pipe, and registers
a uv event. Note that because of I/O-scheduler affinity some of
these may cause the task to migrate between threads.
3) Now we just wait on all the pipes.


What if futures were treated as one-shot pipes with one element queue 
capacity, and real pipes were used only when there's a need for 
buffering?  That would help to reduce per-operation costs (also see 
notes below about allocation).


oneshot pipes already behave this way but they do incur one allocation 
that is shared between both endpoints. I expect futures to be oneshot 
pipes with some extra promise semantics on the sender side.




I think we would want to do this too for efficiency reasons. The
above outline has two major costs: the first is the extra pipes
and the second is the buffering of the I/O. For example, the
synchronous read method looks like `read(buf: &mut [u8])` where
the buf is typically allocated on the stack. In the future
scenario presumably it would be more like `read() ->
Future<~[u8]>`, forcing a heap allocation, but maybe `read(buf:
&mut [u8]) -> Future<&mut [u8]>` is workable.


Not necessarily.  In fact, .NET's signature for the read method is 
something like this: fn read(buf: &mut [u8]) -> ~Future<int>;  It 
returns just the count of bytes read.  This is perfectly fine for simple 
usage because the caller still has a reference to the buffer.


Now, if you want to send it over to another task, this is indeed a 
problem; however, composition of futures comes to the rescue.   Each 
.NET future has a method that allows attaching a continuation, which 
yields a new future of the type of the continuation's result:


trait Future<T>
{
    fn continue_with<T1>(self, cont: fn (T) -> T1) -> Future<T1>;
}

let buf = ~[0, ..1024];
let f1 = stream.read(buf);
let f2 : Future<(~[u8], int)> = f1.continue_with(|read| return (buf, read));

Now f2 contains all information needed to process received data in 
another task.


I think there is at least one problem with this formulation that makes 
it unsafe. To start with, for maximum efficiency you want buf to be on 
the stack, `[0, ..1024]`. If one is ok with using a heap buffer then 
there aren't any lifetime issues. So I would want to write:


// A stack buffer
let buf = [0, ..1024];
// A future that tells us when the stack buffer is written
let f = stream.read(buf);

But because `buf` is on the stack, it must stay valid until `f` is 
resolved, which would require a fairly intricate application of 
borrowing to guarantee. To make borrowck enforce this invariant you 
would need the returned future to contain a pointer borrowed from `buf`:


let f: Future<some_type_that_keeps_the_buf_lifetime_borrowed> = 
stream.read(buf);


This would prevent `buf` from going out of scope and `f` from being 
moved out of the `buf` region.


Once you do that, though, the future is unsendable. To make it sendable we 
can possibly implement some fork/join abstraction that lets you borrow 
sendable types into subtasks that don't outlive the parent task's region 
(I'm working on some abstractions for simple fork/join type stuff 
currently).





2. Each I/O operation now needs to allocate heap memory for the
future object.   This has been known to create GC performance
problems for .NET web apps which process large numbers of small
requests.  If these can live on the stack, though, maybe this
wouldn't be a problem for Rust.


Haha, yep that's a concern.


I know that Rust doesn't currently support this, but what if 
futures could use a custom allocator?   Then it could work like this:


1. Futures use a custom free-list allocator for performance.
2. The I/O request allocates a new future object, registers a uv event, 
then returns a unique pointer to the future to its caller.  However the I/O 
manager retains an internal reference to the future, so that it can be 
resolved once I/O completes.
3. The future object also has a flag indicating that there's an 
outstanding I/O, so if the caller drops the reference to it, it won't be 
returned to the free list until I/O completes.
4. When I/O is complete, the future gets resolved and all attached 

Re: [rust-dev] Scheduler and I/O work items for the summer

2013-06-02 Thread Brian Anderson

On 06/01/2013 11:49 PM, Matthieu Monrocq wrote:




On Sat, Jun 1, 2013 at 8:35 PM, Vadim vadi...@gmail.com wrote:



On Sat, Jun 1, 2013 at 7:47 AM, Matthieu Monrocq
matthieu.monr...@gmail.com wrote:


1. Futures use a custom free-list allocator for
performance.


I don't see why Futures could not be allocated on the stack?

Since Rust is move aware and has value types, it seems to me
this should be possible.


Because the I/O manager needs to know where that future is in order to
fill in the result.

Perhaps it's possible to have stack-allocated future objects
that consist of just a raw pointer to a block owned by the I/O
manager.  But these would need to have by-move semantics in order
to emulate the behavior of unique pointers.   I am not entirely sure
how by-move vs by-copy is decided, but according to
http://static.rust-lang.org/doc/rust.html#moved-and-copied-types
Rust would choose by-copy.

Vadim


Actually, I was more thinking of reserving space on the stack for the 
return value and having the I/O layer write directly into that space (akin 
to C++'s Return Value Optimization).


However I might be stumbling on ABI issues here, since it essentially 
means that the compiler transforms Args... -> ~Future into Args..., 
&mut Future -> ().


In some scenarios it could be an important optimization to let async 
results be written directly into the stack, but this requires further 
safety guarantees that the receiver can't move from the stack, as you 
say. That strikes me more as a fork/join style computation - which may 
be an appropriate solution here - but I wouldn't want to couple 
general-purpose futures to stack-discipline.


Re: [rust-dev] Scheduler and I/O work items for the summer

2013-06-02 Thread Brian Anderson

On 06/02/2013 03:44 PM, Brian Anderson wrote:


One other note. I expect Future return values to still be written `-> 
Future`. The allocation they require is an internal implementation 
detail - they don't need an extra ~.


Re: [rust-dev] Traits mod

2013-06-02 Thread Sanghyeon Seo
 This is fixed by adding use some_mod::SomeTrait at the start of
 some_use.rs. It's as though traits need to be in the same scope as
 code that expects to make use of their behaviour (where I'd expect the
 behaviour would be associated with the implementation for the self
 type).

 My question is: is this intended behaviour? If not, what's the
 expected behaviour, and is there an outstanding issue for this?

Yes, I believe this is fully intended behaviour.