Re: [rust-dev] Bring Back Type State

2014-06-05 Thread Eric Reed
I'm not going to claim canonicity, but I used the type system to encode the
socket state machine (see std::io::net::{tcp,udp}).
TcpListener consumes itself when you start listening and becomes a
TcpAcceptor.
UdpSocket can connect (i.e. ignore messages from other sources) and
become a UdpStream, which can disconnect (i.e. stop ignoring) and become
a UdpSocket again.

It's actually very easy to do. Make every state a distinct affine type.
Implement state transitions as methods that take self by value (consume old
state) and return the new state.
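In today's syntax, that recipe looks roughly like this (a minimal sketch with invented `Idle`/`Listening` states rather than the real socket types):

```rust
// Each state is a distinct type; transitions take `self` by value,
// so the old state is consumed and can never be used again.
struct Idle;
struct Listening;

impl Idle {
    fn name(&self) -> &'static str { "idle" }
    // Consuming transition: Idle -> Listening.
    fn listen(self) -> Listening { Listening }
}

impl Listening {
    fn name(&self) -> &'static str { "listening" }
    // Consuming transition: Listening -> Idle.
    fn stop(self) -> Idle { Idle }
}
```

Calling `listen` twice on the same value is a compile-time "use of moved value" error, which is exactly the typestate guarantee.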


On Wed, Jun 4, 2014 at 10:40 PM, Cameron Zwarich zwar...@mozilla.com
wrote:

 Is there a canonical example of encoding a state machine into Rust's
 substructural types?

 Cameron

 On Jun 4, 2014, at 10:14 PM, Brian Anderson bander...@mozilla.com wrote:

 Thank you for your suggestion, but typestate is not coming back. There is
 no room in the complexity budget for another major piece of type system,
 and linear types can serve much the same purpose.

 On 06/04/2014 10:11 PM, Suminda Dharmasena wrote:

  Hi,

  The initial Type State implementation in Rust was not a great way to go
 about it. Please reconsider adding type state like it has been done in the
 Plaid language.

  Basically, you could use the traits mechanism to mix in and remove traits
 when methods are marked as having state transitions.

  Suminda

  Plaid: http://www.cs.cmu.edu/~aldrich/plaid/


 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev




___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Function overloading is necessary

2014-05-29 Thread Eric Reed
Rust *does* have function overloading. That's *exactly* what traits are for.
If you want to overload a function, then make it a trait and impl the trait
for all the types you want to overload it with.
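A minimal sketch of that approach in current syntax (the `Describe` trait and impls here are invented for illustration):

```rust
trait Describe {
    fn describe(&self) -> String;
}

// One impl per "overload".
impl Describe for i32 {
    fn describe(&self) -> String { format!("the integer {}", self) }
}

impl Describe for &str {
    fn describe(&self) -> String { format!("the string {:?}", self) }
}

// A single generic entry point now behaves like an overloaded function.
fn describe<T: Describe>(x: T) -> String {
    x.describe()
}
```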


On Thu, May 29, 2014 at 4:01 PM, Tommi rusty.ga...@icloud.com wrote:

 Function overloading is a language feature that is necessary in order to
 prevent implementation details from becoming a part of an interface.

 Example:

 // Assuming function overloading exists...

 trait A {
 // Basic functionality:
 ...
 }

 trait B : A {
 // Extended functionality:
 ...
 }

 fn foo<T: A>(t: T) {
 // Algorithm 1:
 ...
 }

 fn foo<T: B>(t: T) {
 // Algorithm 2:
 ...
 }

 The fundamental requirement for the Algorithm 1 is that the type of the
 argument `t` must implement trait `A`. But, if the argument `t` implements
 trait `B` (as well as `A`), then the extended functionality in `B` makes
 possible an optimization that the Algorithm 2 is able to use. Whether the
 optimized algorithm is used or not is an implementation detail that the
 caller of `foo` shouldn't need to be bothered with.

 The lack of function overloading forces us to have two differently named
 functions, say `foo_a` and `foo_b`, and the programmer has to keep in mind
 that if he wants the optimized algorithm, then he needs to call `foo_b`
 (instead of `foo_a`) if his argument implements `B`. With function
 overloading, the programmer gets the optimization for free (no extra mental
 burden and no chance of calling the wrong function).

 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Function overloading is necessary

2014-05-29 Thread Eric Reed
You have to make the varying type the type implementing the trait.

trait Foo {
fn foo(arg: Self, some_int: int);
}

impl<T: Iterator> Foo for T {
fn foo(arg: T, some_int: int) { ... /* arg implements Iterator */ }
}

impl<T: RandomAccessIterator> Foo for T {
fn foo(arg: T, some_int: int) { ... /* arg implements
RandomAccessIterator */ }
}

Although traits are the tool you're supposed to use to do this, there are a
couple issues at the moment:
- The compiler can sometimes get scared and confused when you start mixing
generic impls and concrete impls (or multiple generic impls). This should
be fixed eventually.
- You can only vary on one type. Long term, this can be fixed by making our
traits multi-parameter type classes instead of just regular type classes.
Short term, you can avoid this by impl'ing on a tuple of the varying types.
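The tuple workaround can be sketched like this (trait and impls invented for illustration; Rust's traits did eventually gain extra type parameters, so this is mainly of historical interest):

```rust
trait Combine {
    fn combine(self) -> String;
}

// "Vary on two types" by implementing the trait for the pair.
impl Combine for (i32, f64) {
    fn combine(self) -> String { format!("{} and {}", self.0, self.1) }
}

impl Combine for (&str, bool) {
    fn combine(self) -> String { format!("{} and {}", self.0, self.1) }
}
```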


On Thu, May 29, 2014 at 5:52 PM, Tommi rusty.ga...@icloud.com wrote:

 On 2014-05-30, at 3:42, Eric Reed ecr...@cs.washington.edu wrote:

  Rust *does* have function overloading. That's *exactly* what traits are
 for.
  If you want to overload a function, then make it a trait and impl the
 trait for all the types you want to overload it with.

 I've been trying to figure out how exactly to do this. How would I write a
 function that's overloaded based on whether its argument's type implements
 Iterator or RandomAccessIterator?


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Function overloading is necessary

2014-05-29 Thread Eric Reed
That was what I was referencing in my comment about the compiler getting
scared and confused. Theoretically, it should be allowed and the compiler
would just require you to specify, but rustc may not be there yet.
Note that this problem is present in either formulation of function
overloading. If we had the style of function overloading Tommi used in the
first post, rustc still wouldn't know which function to call for a type
that implements both A and B.


On Thu, May 29, 2014 at 6:05 PM, Oleg Eterevsky o...@eterevsky.com wrote:

 If a type implements both Iterator and RandomAccessIterator, wouldn't
 it lead to a conflict?

 On Thu, May 29, 2014 at 6:02 PM, Eric Reed ecr...@cs.washington.edu
 wrote:
  You have to make the varying type the type implementing the trait.
 
  trait Foo {
  fn foo(arg: Self, some_int: int);
  }
 
  impl<T: Iterator> Foo for T {
  fn foo(arg: T, some_int: int) { ... /* arg implements Iterator */ }
  }
 
  impl<T: RandomAccessIterator> Foo for T {
  fn foo(arg: T, some_int: int) { ... /* arg implements
  RandomAccessIterator */ }
  }
 
  Although traits are the tool you're supposed to use to do this, there
 are a
  couple issues at the moment:
  - The compiler can sometimes get scared and confused when you start
 mixing
  generic impls and concrete impls (or multiple generic impls). This
 should be
  fixed eventually.
  - You can only vary on one type. Long term, this can be fixed by making
 our
  traits MultiParameterTypeClasses instead of just regular Type Classes.
 Short
  term, you can avoid this by impl'ing on a tuple of the varying types.
 
 
  On Thu, May 29, 2014 at 5:52 PM, Tommi rusty.ga...@icloud.com wrote:
 
  On 2014-05-30, at 3:42, Eric Reed ecr...@cs.washington.edu wrote:
 
   Rust *does* have function overloading. That's *exactly* what traits
 are
   for.
   If you want to overload a function, then make it a trait and impl the
   trait for all the types you want to overload it with.
 
  I've been trying to figure out how exactly to do this. How would I
 write a
  function that's overloaded based on whether its argument's type
 implements
  Iterator or RandomAccessIterator?
 
 
 
  ___
  Rust-dev mailing list
  Rust-dev@mozilla.org
  https://mail.mozilla.org/listinfo/rust-dev
 

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Seattle Rust Meetup interest?

2014-05-15 Thread Eric Reed
I'm down for a meetup. I may be able to bring some others from UW CSE with
me.


On Thu, May 15, 2014 at 1:46 PM, Paul Nathan pnathan.softw...@gmail.com wrote:

 Hi,

 It looks like two people have expressed interest in this. I think that's
 enough to get together and talk.

 My suggestion for scheduling is next Thursday (May 22nd) at 7-9pm at
 Remedy Teas[1] on Capitol Hill.

 Proposed topics:

 - meet & greet
 - talk about extant Rust projects (if any)
 - planned targets for projects.


 Does this seem convenient & work for everyone interested?


 [1] http://remedyteas.com
 Remedy Teas
 345 15th Ave E
 Seattle, WA 98112


 On Mon, May 12, 2014 at 5:05 PM, Eli Lindsey e...@siliconsprawl.com wrote:

 +1

 Somewhere around Capitol Hill would be very convenient.

 On May 12, 2014, at 11:31 AM, Paul Nathan pnathan.softw...@gmail.com
 wrote:

 Remedy Teas on Capitol Hill if interest is < 7 people.

 I am open to other alternatives, but don't usually prefer places with
 alcohol. I.e., I like coffee and tea shops.
  On May 12, 2014 10:14 AM, benjamin adamson adamson.benja...@gmail.com
 wrote:

 +1 where in Seattle are you thinking?
 On May 11, 2014 11:08 PM, Paul Nathan pnathan.softw...@gmail.com
 wrote:

 Hi,

 This email is to gauge interest in doing a Rust meetup in Seattle.  If
 there's interest, I'll coordinate finding a place & time that accommodates
 people.

  Regards,
 Paul

 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev




 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Specifying lifetimes in return types of overloaded operators

2014-04-15 Thread Eric Reed
Could you provide a code sample that causes this error?


On Tue, Apr 15, 2014 at 6:28 AM, Artella Coding 
artella.cod...@googlemail.com wrote:


 Currently if I try to specify lifetimes in the return types of overloaded
 operators like Index ([]), I get an error message:

 "method `index` has an incompatible type for trait: expected concrete
 lifetime, but found bound lifetime parameter"

 Why has this restriction been placed, given that I can write custom
 functions which can have bounded lifetimes specifications in the return
 type?

 Thanks

 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] impl num::Zero and std::ops::Add error

2014-04-09 Thread Eric Reed
In addition, mathematicians typically use the symbol '0' to refer to the
additive identity of a ring anyway.
On Apr 9, 2014 10:47 AM, Kevin Ballard ke...@sb.org wrote:

 Why? Zero is the additive identity. It's only bad if you want to denote a
 value that contains zeros that doesn't support addition, but that's only
 bad because of a misconception that Zero should mean a default value when
 we have Default for that. For reference, the Zero trait lives in std::num,
 which should be a good indication that this is a property of numeric types.

 AdditiveIdentity is the only reasonable alternative, but that's a mouthful
 of a name and I think changing the name to this would be more confusing.
 Someone who needs a numeric zero isn't going to go looking for
 AdditiveIdentity, they're going to look for Zero.

 -Kevin

 On Apr 9, 2014, at 6:29 AM, Liigo Zhuang com.li...@gmail.com wrote:

 Zero is a bad name here, it should be renamed or removed
 On Apr 9, 2014, at 1:20 AM, Kevin Ballard ke...@sb.org wrote:

 On Apr 7, 2014, at 1:02 AM, Tommi Tissari rusty.ga...@icloud.com wrote:

 On 07 Apr 2014, at 08:44, Nicholas Radford nikradf...@googlemail.com
 wrote:

 I think the original question was, why does the zero trait require the
 add trait.

 If that was the original question, then my answer would be that
 std::num::Zero requires the Add trait because of the way it is
 specified: "Defines an additive identity element for Self." Then the
 question becomes: "why is Zero specified like that?", and I would answer:
 because then you can use it in generic algorithms which require their
 argument(s) to have an additive identity.


 If you want a zero value for a type that doesn't support addition,
 std::default::Default may be a good choice to use. Semantically, that
 actually returns the default value for a type instead of the zero
 value, but in a type without addition, how do you define zero value?

 -Kevin

 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev



 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] impl num::Zero and std::ops::Add error

2014-04-09 Thread Eric Reed
If you implement Add on a type, then you should implement Zero to specify
the identity of the + operation on that type.

If you simply want to specify a default value, then you should implement
Default.
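The distinction can be sketched like this (`Zero` here is a stand-in for the old `std::num::Zero`, which no longer exists in std; `Meters` is an invented example type):

```rust
use std::ops::Add;

// Additive identity: zero() + m == m for every m.
trait Zero: Add<Output = Self> + Sized {
    fn zero() -> Self;
}

#[derive(Debug, PartialEq, Clone, Copy, Default)]
struct Meters(f64);

impl Add for Meters {
    type Output = Meters;
    fn add(self, rhs: Meters) -> Meters { Meters(self.0 + rhs.0) }
}

impl Zero for Meters {
    fn zero() -> Meters { Meters(0.0) }
}
```

Here the two values coincide, but `Zero` is tied to `Add` while `Default` carries no algebraic promise at all.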
On Apr 9, 2014 11:25 AM, Tommi Tissari rusty.ga...@icloud.com wrote:

  On 09 Apr 2014, at 20:46, Kevin Ballard ke...@sb.org wrote:
 
  For reference, the Zero trait lives in std::num, which should be a good
 indication that this is a property of numeric types.

 Am I not supposed to use std::num::Zero for defining things like zero
 vector or zero matrix? Those are neither numbers nor zeroes.

 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] impl num::Zero and std::ops::Add error

2014-04-09 Thread Eric Reed
I think part of the confusion here is that matrix addition isn't actually
a binary operator, but rather a family of binary operators parametrized
over the matrix dimensions. There's +<2,2> for 2x2 matrices, +<2,3> for 2
x 3 matrices, etc. Similarly, the zero matrix is actually parametrized
over dimensions. 0<2,2> is different from 0<2,3>. For any n,m: +<n,m> has
the identity 0<n,m>. If we wanted to properly represent that in Rust, we
would need type-level naturals that we could parametrize Matrix over.
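Type-level naturals did eventually land as const generics, so the parametrized zero can now be written directly (a sketch, not anything from the 2014 libraries):

```rust
// Matrix<N, M> is a distinct type for each dimension pair, so the
// 2x2 zero and the 2x3 zero really are values of different types.
#[derive(Debug, PartialEq)]
struct Matrix<const N: usize, const M: usize>([[f64; M]; N]);

impl<const N: usize, const M: usize> Matrix<N, M> {
    // The additive identity for N x M matrices.
    fn zero() -> Self {
        Matrix([[0.0; M]; N])
    }
}
```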

Regarding the range thing, I thought for a minute that it might make sense
if we required Mul+One+Add+Zero to be a ring (which is the intention I
think), but I don't think that's actually true in general for rings (i.e.
that 1 is a generating set of the underlying group).


On Wed, Apr 9, 2014 at 1:42 PM, Kevin Ballard ke...@sb.org wrote:

 The number 0 is the additive identity for numbers. But informally, the
 additive identity for other things can be called zero without problem.
 Heck, even the wikipedia page on Additive Identity uses this example for
 groups:

  Let (G, +) be a group and let 0 and 0' in G both denote additive
 identities, so for any g in G,
 
  0 + g = g = g + 0 and 0' + g = g = g + 0'
  It follows from the above that
 
  0' = 0' + 0 = 0

 Look at that, an additive identity for something other than a number, and
 zero (0) is used to denote this additive identity.

 The only issue comes in when you define addition in multiple different
 ways for a single type. Of course, right now I believe compiler bugs
 prevent you from actually using multiple implementations of Add with
 different type parameters for a given type, so this isn't actually a
 problem right now. And when that bug is fixed, it's still reasonable to
 consider Zero to be the additive identity for any addition where the
 receiver type is the right-hand side of the addition. In other words, if
 you define Add<uint, Matrix> for Matrix, then the additive identity here is
 the Zero for uint, not the Zero for Matrix.

 Regarding You can't assign a zero to a 2x2 matrix, additive identity
 does not require the ability to assign. And this is only a problem when
 considering addition between disparate types. If you consider matrix
 addition (e.g. 2x2 matrix + 2x2 matrix) then you certainly can assign the
 additive identity back to one of the matrix values.

 let m: Matrix = Zero::zero();

 looks fine to me. It produces a matrix m that, when added to any other
 Matrix m', produces the same matrix m'. This is presumably a Matrix where
 every element is 0. But again, this only makes sense if you've actually
 defined Add<Matrix, Matrix> for Matrix.

 Regardless, we've already made the decision not to go down numeric type
 hierarchy hell. We're trying to keep a reasonable simple numeric hierarchy.
 And part of that means using straightforward lay-person terms instead of
 perhaps more precise mathematical names. As such, we have std::num::Zero as
 the additive identity and std::num::One as the multiplicative identity.

 If you really want to complain about something, complain about
 std::num::One being used for things other than multiplicative identity,
 e.g. std::iter::range() uses Add and One to produce the next value in the
 range.

 -Kevin

 On Apr 9, 2014, at 1:25 PM, Tommi Tissari rusty.ga...@icloud.com wrote:

  On 09 Apr 2014, at 20:46, Kevin Ballard ke...@sb.org wrote:
 
  Why? Zero is the additive identity.
 
  Zero is _an_ additive identity for numbers, but not for vectors or
 matrices.
 
  use std::slice::Items;
  use std::iter::RandomAccessIterator;
  use std::num::Zero;
 
  Items is a RandomAccessIterator, but a RandomAccessIterator is not an
 Items. 0 is an additive identity, but an additive identity is not 0. You
 can't assign a zero to a 2x2 matrix, and therefore this trait is
 incorrectly named. The following just looks wrong:
 
  let m: Matrix = Zero::zero();
 
  AdditiveIdentity is the only reasonable alternative, but that's a
 mouthful of a name and I think changing the name to this would be more
 confusing.
 
  Naming a trait something that it's not is even more confusing. I don't
 think we should give an incorrect name to this trait on the grounds of the
 correct name being longer. Just look at RandomAccessIterator.
 

 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Rust automation downtime

2014-03-13 Thread Eric Reed
The Mountain View office outgrew their old location and they're moving to a
larger one.


On Thu, Mar 13, 2014 at 6:21 AM, Thad Guidry thadgui...@gmail.com wrote:

 Curious, Whole Mozilla moving ? or just some teams ?  and why ?  making
 room for others ?  kicked out by grumpy landlord or mayor ? :-)


 On Wed, Mar 12, 2014 at 11:08 PM, Brian Anderson bander...@mozilla.com wrote:

 This weekend Mozilla's Mountain View office is moving, and along with it
 some of Rust's build infrastructure. There will be downtime.

 Starting tonight bors is not gated on the mac or android builders and
 those machines are turned off. Sometime this weekend other build machines,
 including the build master and bors, will be moved and things will stop
 working.

 The Monday triage email will be delayed.

 We'll sort everything out Monday. Sorry for the inconvenience.

 Regards,
 Brian
 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev




 --
 -Thad
 +ThadGuidry https://www.google.com/+ThadGuidry
 Thad on LinkedIn http://www.linkedin.com/in/thadguidry/

 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Work week minutes and upcoming RFCs

2014-03-10 Thread Eric Reed
Wow! Great job all.

I think the only major concern I had after one read-through is how we could
make downcasting safe. I can't see a way without runtime type tags on
structs, which is a non-starter. I guess I'll wait for the RFC on that one.


On Sun, Mar 9, 2014 at 8:27 PM, Brian Anderson bander...@mozilla.com wrote:

 Hi.

 Last week a number of us got together to hash out designs for the
 remaining features in Rust 1.0, with the goal of producing RFC's for each
 in the upcoming weeks.

 I'm very optimistic about how it's all going to come together, and that
 the quantity of work to be completed is reasonable.

 I've put the minutes for the week up on the wiki[1] for the curious, but I
 warn you that they are sometimes inscrutable.

 [1]: https://github.com/mozilla/rust/wiki/Meeting-workweek-2014-03-03

 As I mentioned, folks will be writing RFC's on all the major topics to get
 feedback, and they are going to do so according to a tweaked RFC process
 aimed at introducing new features into Rust in a more controlled way than
 we have in the past. More about that later.

 Regards,
 Brian
 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Alternative to Option types

2014-03-03 Thread Eric Reed
Tobi, good question!
I see Vladimir already answered, but I'll throw my own answer in too.

I think you can encode HKTs in MPTCs (MPTCs are just relations on types,
HKTs are just functions on types, and functions are specific kind of
relation [you might run into a problem with HKTs being themselves types
whereas MPTCs are not, but I think that's avoidable]), but it would be very
difficult to use (or ensure that usage is correct).
Monads require some notion of HKTs to make sense. It's misleading to talk
about monad as a type since it's not a type [see note].
Monad is a sort of behavior a type could exhibit (demonstrated by being
an instance of the Monad type class).
It so happens that to exhibit the behavior of Monad requires a type to
have kind * -> * (because Monads are generic in their result type).

I would consider HKTs a much better choice than MPTCs (even if MPTCs
theoretically subsume HKTs). They're *way* easier to use.

[note] Strictly speaking, you can have a type monad if you have a
sufficiently powerful type system (i.e. some level of dependent types).


On Fri, Feb 28, 2014 at 2:54 PM, Tobias Müller trop...@bluewin.ch wrote:

 Eric Reed ecr...@cs.washington.edu wrote:
  In general, monads require higher-kinded types because for a type to be a
  monad it must take a type variable. That is, Option<T> and List<T> could
  be monads, but int and TcpSocket can't be monads. So imagine we wanted to
  define a trait Monad in Rust.

 Just for my understanding. Is there an inherent reason that a monad has to
 be a higher kinded type (type constructor)? Couldn't it also be represented
 somehow as a multiparam trait/typeclass?
 AFAIK, higher kinded types are standard haskell, while MPTCs are not, so
 it's the obvious choice for haskell. Is it also for rust?

 Tobi

 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] RFC: Stronger aliasing guarantees about mutable borrows

2014-02-25 Thread Eric Reed
I'm in favor.

I guess if there's some giant use case for the current &mut then we could
keep it and add this new version as &move, but I agree that that probably
isn't the case.


On Tue, Feb 25, 2014 at 10:32 AM, Niko Matsakis n...@alum.mit.edu wrote:

 I wrote up an RFC. Posted on my blog at:


 http://smallcultfollowing.com/babysteps/blog/2014/02/25/rust-rfc-stronger-guarantees-for-mutable-borrows/

 Inlined here:

 Today, if you do a mutable borrow of a local variable, you lose the
 ability to *write* to that variable except through the new reference
 you just created:

 let mut x = 3;
 let p = &mut x;
 x += 1;  // Error
 *p += 1; // OK

 However, you retain the ability to *read* the original variable:

 let mut x = 3;
 let p = &mut x;
 print(x);  // OK
 print(*p); // OK

 I would like to change the borrow checker rules so that both writes
 and reads through the original path `x` are illegal while `x` is
 mutably borrowed. This change is not motivated by soundness, as I
 believe the current rules are sound. Rather, the motivation is that
 this change gives strong guarantees to the holder of an `&mut`
 pointer: at present, they can assume that an `&mut` referent will not
 be changed by anyone else.  With this change, they can also assume
 that an `&mut` referent will not be read by anyone else. This enables
 more flexible borrowing rules and a more flexible kind of data
 parallelism API than what is possible today. It may also help to
 create more flexible rules around moves of borrowed data. As a side
 benefit, I personally think it also makes the borrow checker rules
 more consistent (mutable borrows mean original value is not usable
 during the mutable borrow, end of story). Let me lead with the
 motivation.

 ### Brief overview of my previous data-parallelism proposal

 In a previous post I outlined a plan for
 [data parallelism in Rust][dp] based on closure bounds. The rough idea
 is to leverage the checks that the borrow checker already does for
 segregating state into mutable-and-non-aliasable and
 immutable-but-aliasable. This is not only the recipe for creating
 memory safe programs, but it is also the recipe for data-race freedom:
 we can permit data to be shared between tasks, so long as it is
 immutable.

 The API that I outlined in that previous post was based on a `fork_join`
 function that took an array of closures. You would use it like this:

 fn sum(x: &[int]) {
     if x.len() == 0 {
         return 0;
     }

     let mid = x.len() / 2;
     let mut left = 0;
     let mut right = 0;
     fork_join([
         || left = sum(x.slice(0, mid)),
         || right = sum(x.slice(mid, x.len())),
     ]);
     return left + right;
 }

 The idea of `fork_join` was that it would (potentially) fork into N
 threads, one for each closure, and execute them in parallel. These
 closures may access and even mutate state from the containing scope --
 the normal borrow checker rules will ensure that, if one closure
 mutates a variable, the other closures cannot read or write it. In
 this example, that means that the first closure can mutate `left` so
 long as the second closure doesn't touch it (and vice versa for
 `right`). Note that both closures share access to `x`, and this is
 fine because `x` is immutable.
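The `fork_join` sketched above closely matches what later shipped as `std::thread::scope` (stable since Rust 1.63); a rough modern equivalent of the example, not code from the original post:

```rust
use std::thread;

fn sum(x: &[i64]) -> i64 {
    if x.len() <= 1 {
        return x.first().copied().unwrap_or(0);
    }
    let mid = x.len() / 2;
    let (mut left, mut right) = (0, 0);
    // Each closure mutably borrows exactly one of `left`/`right`
    // and shares `x` immutably; the borrow checker enforces this.
    thread::scope(|s| {
        s.spawn(|| left = sum(&x[..mid]));
        s.spawn(|| right = sum(&x[mid..]));
    });
    left + right
}
```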

 This kind of API isn't safe for all data though. There are things that
 cannot be shared in this way. One example is `Cell`, which is Rust's
 way of cheating the mutability rules and making a value that is
 *always* mutable. If we permitted two threads to touch the same
 `Cell`, they could both try to read and write it and, since `Cell`
 does not employ locks, this would not be race free.

 To avoid these sorts of cases, the closures that you pass to
 `fork_join` would be *bounded* by the builtin trait `Share`. As I
 wrote in [issue 11781][share], the trait `Share` indicates data that
 is threadsafe when accessed through an `&T` reference (i.e., when
 aliased).

 Most data is sharable (let `T` stand for some other sharable type):

 - POD (plain old data) types are forkable, so things like `int` etc.
 - `&T` and `&mut T`, because both are immutable when aliased.
 - `~T` is sharable, because it is not aliasable.
 - Structs and enums that are composed of sharable data are sharable.
 - `ARC`, because the reference count is maintained atomically.
 - The various thread-safe atomic integer intrinsics and so on.

 Things which are *not* sharable include:

 - Many types that are unsafely implemented:
   - `Cell` and `RefCell`, which have non-atomic interior mutability
   - `Rc`, which uses non-atomic reference counting
 - Managed data (`Gc<T>`) because we do not wish to
   maintain or support a cross-thread garbage collector

 There is a wrinkle though. With the *current* borrow checker rules,
 forkable data is only safe to access from a parallel thread if the
 *main thread* is suspended. Put another way, forkable closures can
 

Re: [rust-dev] RFC: Stronger aliasing guarantees about mutable borrows

2014-02-25 Thread Eric Reed
Well, you can if you can construct a dummy value to swap in temporarily.
I'm pretty sure that's not always possible without venturing into unsafe
code (like making an uninit block of the proper size or something).
Moreover, nothing stops you from forgetting to swap the value back in so
you could just leave the dummy value in there. Finally, it's just way nicer
to write than the swap/replace version.
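For comparison, the swap/replace version being discussed looks like this (`bump` and `consume` are invented names):

```rust
use std::mem;

// Consumes a value by move; stands in for any transition that
// needs ownership of the old value.
fn consume(s: String) -> String {
    s + "!"
}

fn bump(slot: &mut String) {
    // Swap a cheap dummy in so we can move the old value out.
    // Forgetting to write back would silently leave the dummy
    // behind, which is exactly the hazard mentioned above.
    let old = mem::replace(slot, String::new());
    *slot = consume(old);
}
```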

Would a &mut that could move enable us to write insertion into a growable
data structure that might reallocate itself without unsafe code? Something
like OwnedVector.push() for instance.


On Tue, Feb 25, 2014 at 3:18 PM, Kevin Ballard ke...@sb.org wrote:

 If you can construct the new value independently of the old, sure. But if
 constructing the new value
 requires consuming the old, then you can't.

 -Kevin

 On Feb 25, 2014, at 3:14 PM, Corey Richardson co...@octayn.net wrote:

  Is this not already expressible with swap/replace? Is there a big
  improvement here that I'm missing?
 
  On Tue, Feb 25, 2014 at 4:23 PM, Kevin Ballard ke...@sb.org wrote:
  I too was under the impression that you could not read from a
 mutably-borrowed location.
 
  I am looking forward to the ability to move out of a &mut (as long as
 the value is replaced again),
  if the issues around task failure and destructors can be solved.
 
  -Kevin
 
  On Feb 25, 2014, at 12:19 PM, Michael Woerister 
 michaelwoeris...@posteo.de wrote:
 
   I'm all for it. In fact, I thought the proposed new rules *already*
  were the case :-)
 
  On 25.02.2014 19:32, Niko Matsakis wrote:
  I wrote up an RFC. Posted on my blog at:
 
 
 http://smallcultfollowing.com/babysteps/blog/2014/02/25/rust-rfc-stronger-guarantees-for-mutable-borrows/
 
  Inlined here:
 
  Today, if you do a mutable borrow of a local variable, you lose the
  ability to *write* to that variable except through the new reference
  you just created:
 
 let mut x = 3;
 let p = mut x;
 x += 1;  // Error
 *p += 1; // OK
 However, you retain the ability to *read* the original variable:
 
 let mut x = 3;
 let p = mut x;
 print(x);  // OK
 print(*p); // OK
 I would like to change the borrow checker rules so that both writes
  and reads through the original path `x` are illegal while `x` is
  mutably borrowed. This change is not motivated by soundness, as I
  believe the current rules are sound. Rather, the motivation is that
  this change gives strong guarantees to the holder of an `mut`
  pointer: at present, they can assume that an `mut` referent will not
  be changed by anyone else.  With this change, they can also assume
  that an `mut` referent will not be read by anyone else. This enable
  more flexible borrowing rules and a more flexible kind of data
  parallelism API than what is possible today. It may also help to
  create more flexible rules around moves of borrowed data. As a side
  benefit, I personally think it also makes the borrow checker rules
  more consistent (mutable borrows mean original value is not usable
  during the mutable borrow, end of story). Let me lead with the
  motivation.
 
  ### Brief overview of my previous data-parallelism proposal
 
  In a previous post I outlined a plan for
  [data parallelism in Rust][dp] based on closure bounds. The rough idea
  is to leverage the checks that the borrow checker already does for
  segregating state into mutable-and-non-aliasable and
  immutable-but-aliasable. This is not only the recipe for creating
  memory safe programs, but it is also the recipe for data-race freedom:
  we can permit data to be shared between tasks, so long as it is
  immutable.
 
  The API that I outlined in that previous post was based on a
 `fork_join`
  function that took an array of closures. You would use it like this:
 
 fn sum(x: [int]) {
 if x.len() == 0 {
 return 0;
 }
  let mid = x.len() / 2;
 let mut left = 0;
 let mut right = 0;
 fork_join([
 || left = sum(x.slice(0, mid)),
 || right = sum(x.slice(mid, x.len())),
 ]);
 return left + right;
 }
 The idea of `fork_join` was that it would (potentially) fork into N
  threads, one for each closure, and execute them in parallel. These
  closures may access and even mutate state from the containing scope --
  the normal borrow checker rules will ensure that, if one closure
  mutates a variable, the other closures cannot read or write it. In
  this example, that means that the first closure can mutate `left` so
  long as the second closure doesn't touch it (and vice versa for
  `right`). Note that both closures share access to `x`, and this is
  fine because `x` is immutable.
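This capture discipline still holds in today's Rust. A minimal sketch in modern syntax (the post above predates several syntax changes): two closures may each mutably capture a disjoint variable while sharing an immutable one.

```rust
fn main() {
    let x = vec![1, 2, 3]; // shared, immutable: both closures may read it
    let mut left = 0;
    let mut right = 0;
    {
        // Each closure mutably borrows a *distinct* variable; both take
        // shared borrows of `x`. The borrow checker accepts this split.
        let mut f = || left += x[0];
        let mut g = || right += x[1];
        f();
        g();
        // Had `g` also written to `left`, the two closures could not coexist.
    }
    assert_eq!((left, right), (1, 2));
}
```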
 
  This kind of API isn't safe for all data though. There are things that
  cannot be shared in this way. One example is `Cell`, which is Rust's
  way of cheating the mutability rules and making a value that is
  *always* mutable. If we permitted two threads to touch 

Re: [rust-dev] RFC: Stronger aliasing guarantees about mutable borrows

2014-02-25 Thread Eric Reed
True. I guess I was thinking "less unsafe code" as opposed to "no unsafe code".


On Tue, Feb 25, 2014 at 4:47 PM, Kevin Ballard ke...@sb.org wrote:

  On Feb 25, 2014, at 4:04 PM, Eric Reed ecr...@cs.washington.edu wrote:
 
  Would a &mut that could move enable us to write insertion into a
 growable data structure that might reallocate itself without unsafe code?
 Something like OwnedVector.push() for instance.

 The problem with that is you need uninitialized memory that you can move
 in to (without running drop glue). I don't see how moving from &mut will
 help. Even if rustc can avoid the drop glue when writing to a &mut that it
 already moved out of, there's no way to construct a pre-moved &mut that
 points to the uninitialized memory (and no way to even create uninitialized
 memory without unsafe).

 -Kevin
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Alternative to Option types

2014-02-25 Thread Eric Reed
Turns out Rust's Option type already has all this behavior, so I think
we're all on to something :)

Option is a little more powerful than nullable pointers because you can
have Options of non-pointer values. IIRC, Option<~T> is actually compressed
to be a nullable pointer. I actually really like the ?T syntax, but I'm not
sure it's worth special-casing Options. I think it's something macros could
handle (convert ?T into Option<T>).

Like your Hack type system, Rust's type system stops you from using an
Option<T> in place of a T (they are different types after all). The basic
way to convert is a match expression, which is equivalent to:

if($maybe_car) { /* Some / non-null case here */ }
else { /* None / null case here */ }

Leaving the else branch off, i.e. leaving None as None, actually
corresponds to either functorial map (Option.map) or monadic bind
(Option.and_then) depending on the return type of Some branch. So your
example could become:

fn demo(maybe_car: Option<&mut Car>, car: &mut Car) {
    car.start();
    maybe_car.map(|car| car.start()); // ignore the resulting option
}

Rust's Option has its own version of your invariant function:
Option.expect.
If the Option is Some, then it returns the value therein. If the Option is
None, then it fails and displays a message. Option.unwrap is the same, but
with a default message.

fn demo(car: Option<&mut Car>) {
    let car = car.expect("Expected non-null car for the demo");
    car.start();
}

Your edge case presents an interesting difference between Hack and Rust.
In Hack, you know $car is non-null inside the if's consequent, but $car is
*still* a nullable pointer ?Car. In Rust, you know car is non-null in the
Some branch of a match, but it's not an Option<Car> anymore! It's just a
Car, so a smashCar method either isn't applicable (it's a method on
Option<Car>s) or it creates a new Option<Car> for us and sets it to None
(an example of a monadic function for Option). In the latter case, we'd use
.and_then() to call smashCar and then chain the start call with .map(). The
.map() call would safely evaluate to None.
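To make the chaining concrete, here is a sketch in today's Rust syntax; the `half` helper is hypothetical, standing in for any Option-returning step such as a monadic smashCar:

```rust
// Hypothetical Option-returning step: succeeds only for even numbers.
fn half(n: i32) -> Option<i32> {
    if n % 2 == 0 { Some(n / 2) } else { None }
}

fn main() {
    // and_then chains an Option-returning step; map transforms a present value.
    assert_eq!(Some(8).and_then(half).map(|n| n + 1), Some(5));
    // A None anywhere in the chain makes the whole expression None, safely.
    assert_eq!(Some(3).and_then(half).map(|n| n + 1), None);
}
```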

Thanks for the input!


On Tue, Feb 25, 2014 at 7:24 PM, Aran Donohue a...@fb.com wrote:

  Hey,

  I'm not sure how people feel about Option types but there seem to be a
 few hundred uses in the rust codebase. Wanted to share an idea we use in
 our custom type system (Hack) at Facebook to make it a bit nicer to
 safely deal with null references. We don't use Option types directly. I
 don't think this adds any theoretical power or new possible efficiency
 gains, it's mostly for making the code a bit simpler.

  Type declarations prefixed with a question mark represent references
 which might be null. So '?Foo' is somewhat like a shorthand for
 'Option<Foo>'.

  function demo(?Car $maybe_car, Car $car) {...}

  We use these like possibly-null pointers. Usually you write an if
 statement prior to using a value.

  function demo(?Car $maybe_car, Car $car) {
   $car->start();
   if($maybe_car) { $maybe_car->start(); }
 }

  Sometimes we use these in combination with a special function
 invariant, an assertion function that throws an exception if a condition
 is not met:

  function demo(?Car $car) {
   invariant($car, 'Expected non-null car for the demo');
   $car->start();
 }

  If you forget to check for null one way or the other, the type-checker
 complains. This is a static, ahead-of-time check.

  function demo(?Car $car) {
   $car->start(); // error
 }

  There are some natural annoying edge cases to be covered.

  class Smash {
   private ?Car $car;

function demo() {
 if ($this->car) {
   $this->smashCar();
   $this->car->start(); // error
 }
   }
 }

  A downside of this approach vs. Option is that code written using
 pattern matching over Option is more easily upgraded to code using pattern
 matching over Result (or something else).

  Anyway, we like this feature and I'd be happy to see it adopted
 elsewhere. Tossing it out there as I don't know anything about the Rust
 compiler or how language design decisions get made for it :)

  -Aran

 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Alternative to Option types

2014-02-25 Thread Eric Reed
Turns out Option.and_then is literally Option's monadic bind. Both
do-notation and Haskell's list comprehensions (which generalize to monads
and I think are equivalent to Scala's for-comprehensions) actually just
desugar into calls to the monad's bind and return functions (Option's
return is just |x| Some(x) in Rust). I'm fairly certain Rust's macros can
handle both notational forms.
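As a sketch of that desugaring (the do-notation surface syntax in the comment is hypothetical): each bound step becomes a call to and_then, and the final return is just Some.

```rust
// Hypothetical step: parse a string, None on failure (monadic failure).
fn parse_i32(s: &str) -> Option<i32> {
    s.parse().ok()
}

fn main() {
    // do { x <- parse_i32("2"); y <- parse_i32("40"); return (x + y) }
    // desugars into nested binds (and_then) ending in return (Some):
    let sum = parse_i32("2").and_then(|x|
              parse_i32("40").and_then(|y|
              Some(x + y)));
    assert_eq!(sum, Some(42));

    // Failure in any step short-circuits the whole computation.
    assert_eq!(parse_i32("2").and_then(|_| parse_i32("oops")), None);
}
```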

In general, monads require higher-kinded types because for a type to be a
monad it must take a type variable. That is, Option<T> and List<T> could be
monads, but int and TcpSocket can't be monads. So imagine we wanted to
define a trait Monad in Rust. It'd look something like:

trait Monad<T> {
    fn return(t: T) -> Self<T>;
    fn bind<U>(mt: Self<T>, f: |T| -> Self<U>) -> Self<U>;
}

Notice how Self takes a type parameter? That's not legal in current Rust.
We'd need to make sure that types implemented Monad take the right number
of type variables. This is where 'kinds' come in. Kinds are to types as
types are to values. Concrete types (those that don't take parameters) have
kind *. A type that has one type variable has kind * -> *, and so on. Kinds
like * -> * are called 'higher-kinds' and the types with those kinds are
'higher-kinded' types. Turns out Monad requires types of kind * -> *.
That's why Rust can't have general monads until we add higher kinded types.


On Tue, Feb 25, 2014 at 8:01 PM, Ziad Hatahet hata...@gmail.com wrote:

 On Tue, Feb 25, 2014 at 7:24 PM, Aran Donohue a...@fb.com wrote:


  Anyway, we like this feature and I'd be happy to see it adopted
 elsewhere.


 There are few languages out there that take an approach like this,
 including Kotlin and Fantom. I agree it is a cool feature; however, the
 Option type is more general, and pattern matching is not the only way to
 deal with Option variables.

 map/map_or, and/and_then, or/or_else are some of the methods that can be
 called on Option in Rust, while still avoiding pattern matching. Referring
 to your `demo` function:

 // Rust syntax
 fn demo(c: Car, maybe_car: Option<Car>) {
     c.start();
     maybe_car.map(|car| car.start());
 }

 I am personally in favor of having something like Scala's monadic `for`
 construct. Apparently this feature needs Higher Kinded Types to be
 implemented in the compiler first. There has been a couple of Rust macro
 implementations that offer a stop gap though:
 https://mail.mozilla.org/pipermail/rust-dev/2013-May/004176.html

 --
 Ziad


 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax

2014-02-03 Thread Eric Reed
Actually this isn't the case.

fn foo<T: Any>(t: T) -> TypeId {
    t.get_type_id()
}

compiles just fine, but

fn bar<T>(t: T) -> TypeId {
    t.get_type_id()
}

fails with the error "instantiating a type parameter with incompatible type
`T`, which does not fulfill `'static`". A bare `T` does not imply `T:
'static`, so parametricity is not violated.
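In today's Rust the same point can be shown with Any's type-test API: under a `T: Any` bound, behavior can vary by type, but only through explicit tests that are visible in the function body (`describe` is an illustrative function, not part of std).

```rust
use std::any::Any;

// Behavior varies by type ONLY through the explicit is::<..>() tests below;
// without them, the body could not distinguish one T from another.
fn describe<T: Any>(t: &T) -> &'static str {
    let t: &dyn Any = t; // T: Any implies T: 'static, so this coercion is fine
    if t.is::<i32>() {
        "an i32"
    } else if t.is::<String>() {
        "a String"
    } else {
        "something else"
    }
}

fn main() {
    assert_eq!(describe(&5i32), "an i32");
    assert_eq!(describe(&String::from("hi")), "a String");
    assert_eq!(describe(&3.14f64), "something else");
}
```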

I had the same thought about making size_of and friends unsafe functions. I
think that might be a reasonable idea.


On Mon, Feb 3, 2014 at 5:35 AM, Gábor Lehel glaebho...@gmail.com wrote:

 Just because Any is a trait doesn't mean it doesn't break parametricity.
 Look at this:


 http://static.rust-lang.org/doc/master/src/std/home/rustbuild/src/rust-buildbot/slave/doc/build/src/libstd/any.rs.html#37-63

 Because we have `impl<T: 'static> Any for T`, it can be used with *any
 type* (except borrowed data), including type parameters, whether or not
 they declare the `T: Any` bound explicitly (which is essentially redundant
 in this situation).

 The proper thing would be for the compiler to generate an `impl Any for
 MyType` for each individual type separately, rather than a single generic
 impl which is valid for all types.

 I also think we should guarantee parametricity for safe code and make
 `size_of` an unsafe fn. Its legitimate uses in unsafe code (e.g. smart
 pointers) are well encapsulated and don't expose parametricity violations,
 and I don't believe safe code has a legitimate reason to use it (does it?).


 On Sun, Feb 2, 2014 at 3:27 AM, Eric Reed ecr...@cs.washington.eduwrote:

 I'm going to respond to Any and size_of separately because there's a
 significant difference IMO.

 It's true that Any and trait bounds on type parameters in general can let
 function behavior depend on the passed type, but only in the specific
 behavior defined by the trait. Everything that's not a trait function is
 still independent of the passed type (contrast this with a setup where this
 wasn't true. `fn foo<A>() -> int` could return 2i for int and spin up a
 tetris game then crash for uint). Any just happens to be powerful enough to
 allow complete variance, which is expected since it's just dynamic typing,
 but there's an important distinction still: behavior variance because of
 Any *is* part of the function because you need to do explicit type tests.

 I wasn't aware of mem::size_of before, but I'm rather annoyed to find out
 we've started adding bare A -> B functions since it breaks parametricity.
 I'd much rather put size_of in a trait, at which point it's just a weaker
 version of Any.
 Being able to tell how a function's behavior might vary just from the
 type signature is a very nice property, and I'd like Rust to keep it.

 Now, onto monomorphization.
 I agree that distinguishing static and dynamic dispatch is important for
 performance characterization, but static dispatch != monomorphization (or
 if it currently does, then it probably shouldn't) because not all
 statically dispatched code needs to be monomorphizied. Consider a function
 like this:

 fn foo<A, B>(ox: Option<~A>, f: |~A| -> ~B) -> Option<~B> {
     match ox {
         Some(x) => Some(f(x)),
         None => None,
     }
 }

 It's quite generic, but AFAIK there's no need to monomorphize it for
 static dispatch. It uses a constant amount of stack space (not counting
 what `f' uses when called) and could run the exact same code for any types
 A or B (check discriminant, potentially call a function pointer, and
 return). I would guess most cases require monomorphization, but I consider
 universal monomorphization a way of implementing static dispatch (as
 opposed to partial monomorphization).
 I agree that understanding monomorphization is important for
 understanding the performance characteristics of code generated by *rustc*,
 but rustc != Rust.
 Unless universal monomorphization for static dispatch makes its way into
 the Rust language spec, I'm going to consider it an implementation detail
 for rustc.



 On Sat, Feb 1, 2014 at 3:31 PM, Corey Richardson co...@octayn.netwrote:

 On Sat, Feb 1, 2014 at 6:24 PM, Eric Reed ecr...@cs.washington.edu
 wrote:
  Responses inlined.
 
 
  Hey all,
 
  bjz and I have worked out a nice proposal[0] for a slight syntax
  change, reproduced here. It is a breaking change to the syntax, but it
  is one that I think brings many benefits.
 
  Summary
  ===
 
  Change the following syntax:
 
  ```
  struct Foo<T, U> { ... }
  impl<T, U> Trait<T> for Foo<T, U> { ... }
  fn foo<T, U>(...) { ... }
  ```
 
  to:
 
  ```
  forall<T, U> struct Foo { ... }
  forall<T, U> impl Trait<T> for Foo<T, U> { ... }
  forall<T, U> fn foo(...) { ... }
  ```
 
  The Problem
  ===
 
  The immediate, and most pragmatic, problem is that in today's Rust one
  cannot
  easily search for implementations of a trait. Why? `grep 'impl
 Clone'` is
  itself not sufficient, since many types have parametric polymorphism.
 Now
  I
  need to come up with some sort of regex that can handle this. An easy
  first-attempt is `grep 'impl

Re: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax

2014-02-01 Thread Eric Reed
I'm going to respond to Any and size_of separately because there's a
significant difference IMO.

It's true that Any and trait bounds on type parameters in general can let
function behavior depend on the passed type, but only in the specific
behavior defined by the trait. Everything that's not a trait function is
still independent of the passed type (contrast this with a setup where this
wasn't true. `fn foo<A>() -> int` could return 2i for int and spin up a
tetris game then crash for uint). Any just happens to be powerful enough to
allow complete variance, which is expected since it's just dynamic typing,
but there's an important distinction still: behavior variance because of
Any *is* part of the function because you need to do explicit type tests.

I wasn't aware of mem::size_of before, but I'm rather annoyed to find out
we've started adding bare A -> B functions since it breaks parametricity.
I'd much rather put size_of in a trait, at which point it's just a weaker
version of Any.
Being able to tell how a function's behavior might vary just from the type
signature is a very nice property, and I'd like Rust to keep it.

Now, onto monomorphization.
I agree that distinguishing static and dynamic dispatch is important for
performance characterization, but static dispatch != monomorphization (or
if it currently does, then it probably shouldn't) because not all
statically dispatched code needs to be monomorphizied. Consider a function
like this:

fn foo<A, B>(ox: Option<~A>, f: |~A| -> ~B) -> Option<~B> {
    match ox {
        Some(x) => Some(f(x)),
        None => None,
    }
}

It's quite generic, but AFAIK there's no need to monomorphize it for static
dispatch. It uses a constant amount of stack space (not counting what `f'
uses when called) and could run the exact same code for any types A or B
(check discriminant, potentially call a function pointer, and return). I
would guess most cases require monomorphization, but I consider universal
monomorphization a way of implementing static dispatch (as opposed to
partial monomorphization).
I agree that understanding monomorphization is important for understanding
the performance characteristics of code generated by *rustc*, but rustc !=
Rust.
Unless universal monomorphization for static dispatch makes its way into
the Rust language spec, I'm going to consider it an implementation detail
for rustc.
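The distinction can be sketched in modern syntax (trait and types here are illustrative): a generic function is monomorphized per concrete type by rustc today, while a trait-object function compiles to a single body that dispatches through a vtable.

```rust
trait Greet {
    fn hi(&self) -> &'static str;
}

struct En;
struct Fr;

impl Greet for En { fn hi(&self) -> &'static str { "hello" } }
impl Greet for Fr { fn hi(&self) -> &'static str { "bonjour" } }

// Static dispatch: rustc emits one specialized copy per concrete T used.
fn greet_static<T: Greet>(g: &T) -> &'static str {
    g.hi()
}

// Dynamic dispatch: one compiled body; the call goes through a vtable.
fn greet_dyn(g: &dyn Greet) -> &'static str {
    g.hi()
}

fn main() {
    assert_eq!(greet_static(&En), "hello");
    assert_eq!(greet_dyn(&Fr), "bonjour");
}
```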



On Sat, Feb 1, 2014 at 3:31 PM, Corey Richardson co...@octayn.net wrote:

 On Sat, Feb 1, 2014 at 6:24 PM, Eric Reed ecr...@cs.washington.edu
 wrote:
  Responses inlined.
 
 
  Hey all,
 
  bjz and I have worked out a nice proposal[0] for a slight syntax
  change, reproduced here. It is a breaking change to the syntax, but it
  is one that I think brings many benefits.
 
  Summary
  ===
 
  Change the following syntax:
 
  ```
  struct Foo<T, U> { ... }
  impl<T, U> Trait<T> for Foo<T, U> { ... }
  fn foo<T, U>(...) { ... }
  ```
 
  to:
 
  ```
  forall<T, U> struct Foo { ... }
  forall<T, U> impl Trait<T> for Foo<T, U> { ... }
  forall<T, U> fn foo(...) { ... }
  ```
 
  The Problem
  ===
 
  The immediate, and most pragmatic, problem is that in today's Rust one
  cannot
  easily search for implementations of a trait. Why? `grep 'impl Clone'`
 is
  itself not sufficient, since many types have parametric polymorphism.
 Now
  I
  need to come up with some sort of regex that can handle this. An easy
  first-attempt is `grep 'impl(<.*?>)? Clone'` but that is quite
  inconvenient to
  type and remember. (Here I ignore the issue of tooling, as I do not find
  the
  argument of But a tool can do it! valid in language design.)
 
 
  I think what I've done in the past was just `grep impl | grep Clone`.
 
 
  A deeper, more pedagogical problem, is the mismatch between how `struct
  Foo... { ... }` is read and how it is actually treated. The
  straightforward,
  left-to-right reading says "There is a struct Foo which, given the types
  ... has the members ..." This might lead one to believe that `Foo` is a
  single type, but it is not. `Foo<int>` (that is, type `Foo` instantiated
  with type `int`) is not the same type as `Foo<uint>` (that is, type `Foo`
  instantiated with type `uint`). Of course, with a small amount of
  experience or a very simple explanation, that becomes obvious.
 
 
  I strongly disagree with this reasoning.
  There IS only one type Foo. It's a type constructor with kind * -> *
  (where * means proper type).
  Foo<int> and Foo<uint> are two different applications of Foo and are
  proper types (i.e. *) because Foo is * -> * and both int and uint are *.
  Regarding people confusing Foo, Foo<int> and Foo<uint>, I think the
  proposed forall<T> struct Foo {...} syntax is actually more confusing.
  With the current syntax, it's never legal to write Foo without type
  parameters, but with the proposed syntax it would be.
 

 I've yet to see a proposal for HKT, but with them that interpretation
 would be valid and indeed make this proposal's argument weaker.

 
  Something less

Re: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax

2014-02-01 Thread Eric Reed
Well there's only 260 uses of the string size_of in rustc's src/
according to grep and only 3 uses of size_of in servo according to
GitHub, so I think you may be overestimating its usage.

Either way, I'm not proposing we get rid of size_of. I just think we should
put it in an automatically derived trait instead of defining a function on
all types.
Literally the only thing that would change would be code like this:

fn foo<T>(t: T) {
    let size = mem::size_of(t);
}

would have to be changed to:

fn foo<T: SizeOf>(t: T) {
    let size = SizeOf::size_of(t); // or t.size_of()
}

Is that really so bad?
Now the function's type signature documents that the function's behavior
depends on the size of the type.
If you see a signature like `fn foo<T>(t: T)`, then you know that it
doesn't.
There's no additional performance overhead and it makes size_of like other
intrinsic operators (+, ==, etc.).

I seriously don't see what downside this could possibly have.
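A sketch of that design in modern syntax. `SizeOf` is hypothetical; the blanket impl below merely stands in for the compiler deriving it automatically (a real design would derive it per type so the bound stays meaningful).

```rust
// Hypothetical trait: the bound documents size-dependence in the signature.
trait SizeOf {
    fn size_of() -> usize;
}

// Stand-in for an automatically derived impl; a real design would have the
// compiler derive this per type rather than use a blanket impl.
impl<T> SizeOf for T {
    fn size_of() -> usize {
        std::mem::size_of::<T>()
    }
}

// The `T: SizeOf` bound tells the reader this function depends on T's size.
fn foo<T: SizeOf>() -> usize {
    T::size_of()
}

fn main() {
    assert_eq!(foo::<u32>(), 4);
    assert_eq!(foo::<[u8; 10]>(), 10);
}
```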



On Sat, Feb 1, 2014 at 6:43 PM, Daniel Micay danielmi...@gmail.com wrote:

 On Sat, Feb 1, 2014 at 9:27 PM, Eric Reed ecr...@cs.washington.edu
 wrote:
 
  I wasn't aware of mem::size_of before, but I'm rather annoyed to find out
  we've started adding bare A -> B functions since it breaks parametricity.
  I'd much rather put size_of in a trait, at which point it's just a weaker
  version of Any.

 You do realize how widely used size_of is, right? I don't this it
 makes sense to say we've *started* adding this stuff when being able
 to get the size/alignment has pretty much always been there.



Re: [rust-dev] Compile-time function evaluation in Rust

2014-01-28 Thread Eric Reed
That's what I figured. Forbidding unsafe is definitely a good way to keep
things simple starting out. Compile time evaluation can always be extended
later on.


On Tue, Jan 28, 2014 at 3:21 PM, Pierre Talbot ptal...@hyc.io wrote:

 On 01/28/2014 11:26 PM, Eric Reed wrote:

 Looks pretty reasonable to me at first glance.
 Out of curiosity, what's the rationale behind forbidding unsafe
 functions/blocks?

  In the reference manual we can read things such as: Mutating an
 immutable value/reference, if it is not marked as non-freeze. This would
 be impossible at compile-time.
 But I agree that we could relax this constraint and specify more
 precisely in which cases we disallow this.

 On Tue, Jan 28, 2014 at 2:15 PM, Pierre Talbot ptal...@hyc.io mailto:
 ptal...@hyc.io wrote:

 Hi,

 The Mozilla foundation proposes research internships [1] and the
 CTFE optimization in the Rust compiler seems to be a really
 exciting project. I wrote a proposal [2] that I'll send with my
 application and so I'd like to share it with you and discuss about
 bringing CTFE inside Rust.

 Here a non-exhaustive summary of key points in my proposal.

 First of all, we need to establish when CTFE is triggered, I found
 two contexts (denoted as a hole []):

 * Inside an immutable static variable (static ident ':' type '='
 [] ';').
 * In a vector expression ('[' expr ',' .. [] ']').

 Next in a similar way than with inline attributes we might want
 to add these new attributes:

 * #[ctfe] hints the compiler to perform CTFE.
 * #[ctfe(always)] asks the compiler to always perform CTFE
 resulting in a
 compiler error if it's impossible.
 * #[ctfe(never)] asks the compiler to never perform CTFE resulting
 in a compiler
 error if this function is called in a CTFE context.

 The rationale behind this is that some functions might want to
 disallow CTFE, for example if they manipulate machine-dependent
 data (such as playing with endianness). Some might want to be
 designed only for compile-time and so we want to disable run-time
 execution. Finally, others might hint the compiler to try to
 optimize whenever it can; of course, if the function contains an
 infinite loop for some input, the compilation might not terminate.

 I propose some requirements on function eligible for CTFE (see the
 proposal for references to the Rust manual):

 1. Its parameters are evaluable at compile-time.
 2. It isn't a diverging function.
 3. It isn't an unsafe function.
 4. It doesn't contain unsafe block.
 5. It doesn't perform I/O actions.
 6. The function source code is available to the compiler. It
 mustn't be in an external
 block, however it can be an extern function.

 In this proposal, you'll also find a pseudo-coded algorithm,
 related work (in D and C++), and much more :-)

 If you have any suggestions or corrections, do not hesitate. Also,
 feel free to ask questions.

 Regards,
 Pierre Talbot

 [1] https://careers.mozilla.org/en-US/position/oZO7XfwB
 [2] http://hyc.io/rust-ctfe-proposal.pdf






Re: [rust-dev] RFC: New Rust channel proposal

2014-01-14 Thread Eric Reed
How would that make us lose stack allocated return values?


On Tue, Jan 14, 2014 at 5:22 PM, Jack Moffitt j...@metajack.im wrote:

  Good point. Make `Chan` a trait with implementers `UniqueChan` and
  `SharedChan`?

 I suppose the main downside of that solution is that you lose stack
 allocated return values.

 jack.



Re: [rust-dev] RFC: New Rust channel proposal

2014-01-14 Thread Eric Reed
fn foo<T: Trait>() -> T


On Tue, Jan 14, 2014 at 9:20 PM, Jack Moffitt j...@metajack.im wrote:

 You can't do `foo() -> Trait`. It would have to be `foo() -> ~Trait`.
 Well, unless DST fixes this. I assume this is the same reason we
 return specific instances of iterators instead of an Iteratable trait
 object.

 jack.

 On Tue, Jan 14, 2014 at 10:10 PM, Eric Reed ecr...@cs.washington.edu
 wrote:
  How would that make us lose stack allocated return values?
 
 
  On Tue, Jan 14, 2014 at 5:22 PM, Jack Moffitt j...@metajack.im wrote:
 
   Good point. Make `Chan` a trait with implementers `UniqueChan` and
   `SharedChan`?
 
  I suppose the main downside of that solution is that you lose stack
  allocated return values.
 
  jack.
 
 



Re: [rust-dev] RFC: New Rust channel proposal

2014-01-14 Thread Eric Reed
As a follow up, what situation would arise where you'd have to actually
return a Chan trait object?
Constructors are going to return the concrete type UniqueChan/SharedChan.
Functions acting on channels can just use generics, which will allow
returning.
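A sketch of that generic pattern with a hypothetical `Chan` trait (modern syntax, not the actual channel API of the time): callers stay generic over the concrete channel type, so no trait object is needed.

```rust
// Hypothetical trait abstracting over channel flavors.
trait Chan<T> {
    fn send(&mut self, t: T);
}

// Illustrative concrete channel backed by a Vec.
struct UniqueChan<T> {
    buf: Vec<T>,
}

impl<T> Chan<T> for UniqueChan<T> {
    fn send(&mut self, t: T) {
        self.buf.push(t);
    }
}

// Generic over any Chan implementation: statically dispatched, no boxing,
// and the caller keeps the concrete channel type throughout.
fn send_all<T, C: Chan<T>>(chan: &mut C, items: Vec<T>) {
    for item in items {
        chan.send(item);
    }
}

fn main() {
    let mut c = UniqueChan { buf: Vec::new() };
    send_all(&mut c, vec![1, 2, 3]);
    assert_eq!(c.buf, vec![1, 2, 3]);
}
```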


On Tue, Jan 14, 2014 at 9:21 PM, Eric Reed ecr...@cs.washington.edu wrote:

 fn foo<T: Trait>() -> T


 On Tue, Jan 14, 2014 at 9:20 PM, Jack Moffitt j...@metajack.im wrote:

 You can't do `foo() -> Trait`. It would have to be `foo() -> ~Trait`.
 Well, unless DST fixes this. I assume this is the same reason we
 return specific instances of iterators instead of an Iteratable trait
 object.

 jack.

 On Tue, Jan 14, 2014 at 10:10 PM, Eric Reed ecr...@cs.washington.edu
 wrote:
  How would that make us lose stack allocated return values?
 
 
  On Tue, Jan 14, 2014 at 5:22 PM, Jack Moffitt j...@metajack.im wrote:
 
   Good point. Make `Chan` a trait with implementers `UniqueChan` and
   `SharedChan`?
 
  I suppose the main downside of that solution is that you lose stack
  allocated return values.
 
  jack.
 
 





Re: [rust-dev] RFC: New Rust channel proposal

2014-01-14 Thread Eric Reed
I was working under the assumption that we'd add UniqueChan -> SharedChan
promotion back. I assumed that would be possible since a unified Chan would
be doing it internally.

Does an auto-promoting Chan really do that much for reducing cognitive
load?
The only thing the programmer can skip understanding with a unified Chan is
the whether to choose between cloneable and uncloneable channels, which
isn't very much.
It seems like a pretty significant departure from how Rust approaches this
kind of thing elsewhere (i.e. memory allocation and threading model being
the two things that come to mind).
I'm just skeptical that the cognitive load reduction outweighs the
information loss.


On Tue, Jan 14, 2014 at 9:41 PM, Jack Moffitt j...@metajack.im wrote:

 You can't promote a Chan to a SharedChan currently (like you could
 with the old API), so if the caller needs to make the decision, all
 the APIs will have to return SharedChan to be flexible. I don't think
 traits help with that problem (they do help with passing in Chans). If
 we want flexible APIs that returns Chans, we need promotion of some
 kind.

 I think I misspoke before. The core issue is cognitive load of 3
 channel types. Auto-promoting Chan to shared Chan reduces the number
 of channels (less API surface) and makes channels just work (never
 have to think about whether Chans are clonable). That seems a
 compelling combo. I was just focused more on the latter use case.

 jack.

 On Tue, Jan 14, 2014 at 10:26 PM, Eric Reed ecr...@cs.washington.edu
 wrote:
  As a follow up, what situation would arise where you'd have to actually
  return a Chan trait object?
  Constructors are going to return the concrete type UniqueChan/SharedChan.
  Functions acting on channels can just use generics, which will allow
  returning.
 
 
  On Tue, Jan 14, 2014 at 9:21 PM, Eric Reed ecr...@cs.washington.edu
 wrote:
 
  fn foo<T: Trait>() -> T
 
 
  On Tue, Jan 14, 2014 at 9:20 PM, Jack Moffitt j...@metajack.im wrote:
 
  You can't do `foo() -> Trait`. It would have to be `foo() -> ~Trait`.
  Well, unless DST fixes this. I assume this is the same reason we
  return specific instances of iterators instead of an Iteratable trait
  object.
 
  jack.
 
  On Tue, Jan 14, 2014 at 10:10 PM, Eric Reed ecr...@cs.washington.edu
  wrote:
   How would that make us lose stack allocated return values?
  
  
   On Tue, Jan 14, 2014 at 5:22 PM, Jack Moffitt j...@metajack.im
 wrote:
  
Good point. Make `Chan` a trait with implementers `UniqueChan` and
`SharedChan`?
  
   I suppose the main downside of that solution is that you lose stack
   allocated return values.
  
   jack.
  
  
 
 
 



Re: [rust-dev] RFC: Iterator naming convention

2013-12-21 Thread Eric Reed
I prefer the 'no suffix' option and generally agree with Alex.
Iterators aren't special and their iterator behavior is already denoted by
implementing the Iterator trait.

Frankly, aside from documentation where it is clear that something is an
iterator, I'm not sure when a user would even see concrete iterator types.
I can't think of a reason why you'd ever want to restrict the type of a
variable to a specific kind of iterator (i.e. Map or Union).

Acceptor and Port could implement Iterator directly, but currently they
create a struct containing only a mutable borrow of themselves to prevent
anything else from using the socket/channel in the meantime.
Reader could directly implement an Iterator that does one byte reads, but
currently it does the same trick as Acceptor and Port.
You only need a separate Iterator object if you actually require additional
state or if you want to prevent others from changing the underlying source
(mutable borrow of self).
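The borrow-the-source pattern described above, sketched in modern syntax (`Counter` and `Ticks` are illustrative, not std types): the iterator struct holds only a mutable borrow of its source, so nothing else can touch the source while iteration is in progress.

```rust
struct Counter {
    n: u32,
}

// The iterator's only state is a mutable borrow of the source.
struct Ticks<'a> {
    counter: &'a mut Counter,
}

impl<'a> Iterator for Ticks<'a> {
    type Item = u32;
    fn next(&mut self) -> Option<u32> {
        self.counter.n += 1;
        Some(self.counter.n)
    }
}

impl Counter {
    // Mutably borrows self for as long as the returned iterator lives,
    // locking out all other access to the counter in the meantime.
    fn ticks(&mut self) -> Ticks<'_> {
        Ticks { counter: self }
    }
}

fn main() {
    let mut c = Counter { n: 0 };
    let first: Vec<u32> = c.ticks().take(3).collect();
    assert_eq!(first, vec![1, 2, 3]);
    assert_eq!(c.n, 3); // the borrow has ended; the source is usable again
}
```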



On Sat, Dec 21, 2013 at 4:35 PM, Kevin Cantu m...@kevincantu.org wrote:

 Rust's standard libs are still pretty thin on their trait hierarchies,
 but I'm sure this will change.

 Kevin

 On Sat, Dec 21, 2013 at 1:30 PM, Palmer Cox palmer...@gmail.com wrote:
  Are there examples of structs that implement Iterator that also implement
  other non-Iterator related traits? Although its possible to do that, I
 can't
  think of a use case for it off the top of my head. An Iterator basically
  represents the state of an ongoing computation and once that computation
 is
  completed, the object is mostly uselss. It seems like it would be
 awkward to
  implement other traits for such an object. Maybe I'm not thinking of
  something, however.
 
  -Palmer Cox
 
 
 
  On Sat, Dec 21, 2013 at 4:24 PM, Kevin Cantu m...@kevincantu.org wrote:
 
  Iterators are just structs which implement the Iterator or a related
  trait, right?
 
  These structs which do can also implement lots of other traits, too:
  no reason to make -Iter special.
 
 
  Kevin
 
 
 
  On Sat, Dec 21, 2013 at 12:50 PM, Palmer Cox palmer...@gmail.com
 wrote:
   I'm not a big fan of Hungarian notation either. I'm not sure that
 having
   a
   naming convention for Iterators is Hungarian notation, however. For
   example,
   if you are doing Windows programming, you'll see stuff like:
  
   DWORD dwFoo = 0;
  
   In this case, the dw prefix on the variable indicates that we have a
   DWORD
   variable. However, the Iterator suffix that I'm proposing here is a
   suffix
   on the type names, not the actual variable names. So, if you are
 writing
   Rust code, you'd write something like this:
  
   let chunks = some_vector.chunks(50);
  
   So, the actual variable name doesn't have the Hungarian notation and
 the
   types aren't even generally visible since the compiler infers much of
   that.
   However, someone reading through the documentation and/or code will
 see
   a
   struct named ChunkIterator and instance know how the struct behaves -
 as
   an
   Iterator. So, I think the suffix serves less to describe the datatype
   and
   more to describe class of behavior that the struct implements.
  
   Anyway, as I said, I prefer #1. But, I also have done lots of Java
   programming so I'm probably much more used to verbosity than others.
 I'm
   not
   horribly against some sort of other naming convention, either, of
   course,
   but I would like to see some consistency.
  
   My main motivation for opening the request was because I created
   MutChunkIter and then realized that it was named differently than
   majority
   of other Iterators. I don't want to be responsible for someone reading
   through the docs and seeing something thats inconsistent for no good
   reason!
   Also, I was reading through some code and happened upon a Map and
 was
   exceptionally confused about it, until I realized it was iter::Map as
   opposed to container::Map. I figured I probably wasn't the only person
   that
   was going to be confused by something like this.
  
   -Palmer Cox
  
  
  
  
  
   On Sat, Dec 21, 2013 at 3:14 PM, Kevin Cantu m...@kevincantu.org
 wrote:
  
   IMHO Hungarian notation is for things the type system and tooling
   cannot deal with.  I'm not sure this is one of them...
  
  
   Kevin
  
  
 
 
 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Unbounded channels: Good idea/bad idea?

2013-12-21 Thread Eric Reed
IMO the best alternative for a non-blocking send on a bounded channel is
returning an Option.
If you send successfully, you return None.
If you can't send because the channel is full, you return Some(message).
This lets the sender recover the message (important for moveable objects)
and decide how to handle it (retry, fail, drop, etc.).
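A toy single-threaded sketch of that signature (Bounded and try_send are made-up names, not std's API; for comparison, the channel API Rust eventually shipped returns a Result whose error variant also hands the message back):

```rust
use std::collections::VecDeque;

// Illustrative bounded queue: try_send returns None on success and
// Some(msg) when full, so the caller recovers the moved message.
struct Bounded<T> {
    queue: VecDeque<T>,
    cap: usize,
}

impl<T> Bounded<T> {
    fn new(cap: usize) -> Self {
        Bounded { queue: VecDeque::new(), cap }
    }

    fn try_send(&mut self, msg: T) -> Option<T> {
        if self.queue.len() < self.cap {
            self.queue.push_back(msg);
            None // sent successfully
        } else {
            Some(msg) // channel full: give the message back
        }
    }
}

fn main() {
    let mut ch = Bounded::new(1);
    assert!(ch.try_send("a").is_none());
    assert_eq!(ch.try_send("b"), Some("b")); // full, message recovered
}
```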

Personally, I lean toward providing unbounded channels as the primitive and
implementing bounded channels on top of them OR providing both as
primitives.


On Fri, Dec 20, 2013 at 4:19 PM, Carter Schonwald 
carter.schonw...@gmail.com wrote:

 actually, you're right, in Go they're fixed-size buffers
 (http://golang.org/src/pkg/runtime/chan.c). I can understand (and agree!)
 that this is not a good default if a more dynamic data structure can work
 well.

 in haskell / ghc , bounded channels are dynamically sized, and merely have
 a max size thats enforced by the provided api,and I've been speaking with
 that sort of memory usage model in mind.


 On Fri, Dec 20, 2013 at 4:15 PM, Carter Schonwald 
 carter.schonw...@gmail.com wrote:

 i'd be very very surprised if bounded channels in go don't dynamically
 resize their queues and then atomically insert / remove elements while
 checking the bound.  I'd actually argue that such would be a bug.


 On Fri, Dec 20, 2013 at 4:09 PM, Kevin Ballard ke...@sb.org wrote:

 I haven’t profiled it, but my belief is that under normal circumstances,
 messages come in slow enough that the consumer is always idle and ready to
 process the next message as soon as it’s sent. However, I expect it does
 occasionally back up a bit, e.g. when I get a burst of traffic such as
 during a netsplit when I’m sent a large batch of “user has quit” or
 “user has joined” (when the netsplit is over). I don’t know how much the
 channel backs up at that point, probably not too much.

 For this particular use-case, a channel that’s bounded at e.g. 100,000
 elements would be indistinguishable from an infinite channel, as long as it
 still dynamically allocates (I don’t *think* Go channels dynamically
 allocate, which is why I can’t just use a 100,000 element channel for real).

 However, my overall point about large bounds being indistinguishable
 from infinite is that if your goal is to pick a bound large enough to
 appear infinite to the program, without actually risking OOM, then there’s
 no automated way to do this. Different environments have differing amounts
 of available resources, and there’s no good way to pick a bound that is
 sufficiently high but is definitively lower than the resource bounds. This
 is why I’m recommending that we have truly infinite channels, for users who
 don’t want to have to think about bounds (e.g. my irc program), as well as
 bounded channels, where the user has to explicitly pick a bound (with no
 “default” provided).

 -Kevin

 On Dec 20, 2013, at 12:55 PM, Carter Schonwald 
 carter.schonw...@gmail.com wrote:

 kevin, what sort of applications and workloads are you speaking about.
 Eg in your example irc server, whats the typical workload when you've used
 it?

 cheers
 -Carter


 On Fri, Dec 20, 2013 at 12:54 PM, Kevin Ballard ke...@sb.org wrote:

 On Dec 20, 2013, at 8:59 AM, Carter Schonwald 
 carter.schonw...@gmail.com wrote:

 agreed! Applications that lack explicit logic for handling heavy
 workloads (ie producers outpacing consumers for a sustained period) are the
 most common culprit for unresponsive desktop applications that become
 completely unusable.


 That’s a pretty strong claim, and one I would have to disagree with
 quite strongly. Every time I’ve sampled an unresponsive application, I
 don’t think I’ve *ever* seen a backtrace that suggests a producer
 outpacing a consumer.

 -Kevin

 relatedly: would not bounded but programmatically growable channels
 also make it trivial to provide a unbounded style channel abstraction?
 (not that i'm advocating that, merely that it seems like it would turn the
 unbounded channel abstraction into an equivalent one that is resource usage
 aware)


 On Fri, Dec 20, 2013 at 8:52 AM, György Andrasek jur...@gmail.com wrote:

 On 12/19/2013 11:13 PM, Tony Arcieri wrote:

 So I think that entire line of reasoning is a red herring. People
 writing toy programs that never have their channels fill beyond a
 small
 number of messages won't care either way.

 However, overloaded programs + queues bounded by system resources are
 a
 production outage waiting to happen. What's really important here is
 providing a means of backpressure so overloaded Rust programs don't
 grow
 until they consume system resources and OOM.


 While I disagree with the notion that all programs which don't have
 their bottlenecks right here are toys, we should definitely strive for
 the invariant that task failure does not cause independent tasks to fail.

 Also, OOM is not free. If you manage to go OOM on a desktop, you'll
 get a *very* unhappy user, regardless of their expectations wrt your 
 memory
 usage. 

Re: [rust-dev] Unbounded channels: Good idea/bad idea?

2013-12-21 Thread Eric Reed
I disagree. Option is precisely the right type because all Option means is
you can get something or nothing.
Sometimes it can make sense to consider 'something' success and 'nothing'
failure, like attempting to parse a string.
Sometimes it can make sense to consider 'nothing' success and 'something'
failure, like a potential error code from a side effecting operation.
Sometimes it doesn't really make sense either way, like looking up a value
for a key in a map. If the key isn't present, is that failure? Only if you
expected it to be there, but the lookup function doesn't care. It just
tells you if the value exists.

Either just means you can get something of one kind or something of a
different kind. No meaning of failure or success is implicit (consider
Either<char, int>).
Conventionally, if you use Either for success/failure, then you use Left
for failure and Right for success, but this is mostly just because of the
pun with Right and that the default monad for Either allows the Right to
change type.
Result is just Either with actual success/failure meaning built in.

Yeah, we could use Result<(), Err> for a side-effecting operation that
could return an error, but that's just a more verbose way of writing
Option<Err>.
Option<Err> is very clear about what it does. It might return an error code
or it might return nothing. If it returns nothing, then clearly there
wasn't an error code, so it must have been successful.
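A small sketch of that Option<Err> convention for a side-effecting operation (flush and its parameters are hypothetical, just to show the shape):

```rust
#[derive(Debug, PartialEq)]
enum Err {
    Full,
}

// Hypothetical side-effecting operation: None means success,
// Some(error) means failure -- the Option<Err> pattern from the post.
fn flush(buffer_len: usize, cap: usize) -> Option<Err> {
    if buffer_len <= cap {
        None
    } else {
        Some(Err::Full)
    }
}

fn main() {
    assert_eq!(flush(3, 10), None);          // succeeded, nothing to report
    assert_eq!(flush(20, 10), Some(Err::Full)); // failed, error returned
}
```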



On Sat, Dec 21, 2013 at 10:48 PM, Carter Schonwald 
carter.schonw...@gmail.com wrote:

 Eric, that's exactly why I suggested the use of the Result or Either type.
  Some is a bit misleading, because the Nothing case usually means a
 failure rather than a success.


 On Sat, Dec 21, 2013 at 9:33 PM, Eric Reed ecr...@cs.washington.edu wrote:

 IMO the best alternative for a non-blocking send on a bounded channel is
 returning an Option.
 If you send successfully, you return None.
 If you can't send because the channel is full, you return Some(message).
 This lets the sender recover the message (important for moveable objects)
 and decide how to handle it (retry, fail, drop, etc.).

 Personally, I lean toward providing unbounded channels as the primitive
 and implementing bounded channels on top of them OR providing both as
 primitives.


 On Fri, Dec 20, 2013 at 4:19 PM, Carter Schonwald 
 carter.schonw...@gmail.com wrote:

  actually, you're right, in Go they're fixed-size buffers
  (http://golang.org/src/pkg/runtime/chan.c). I can understand (and
  agree!) that this is not a good default if a more dynamic data structure
 can work well.

 in haskell / ghc , bounded channels are dynamically sized, and merely
 have a max size thats enforced by the provided api,and I've been speaking
 with that sort of memory usage model in mind.


 On Fri, Dec 20, 2013 at 4:15 PM, Carter Schonwald 
 carter.schonw...@gmail.com wrote:

 i'd be very very surprised if bounded channels in go don't dynamically
 resize their queues and then atomically insert / remove elements while
 checking the bound.  I'd actually argue that such would be a bug.


 On Fri, Dec 20, 2013 at 4:09 PM, Kevin Ballard ke...@sb.org wrote:

 I haven’t profiled it, but my belief is that under normal
 circumstances, messages come in slow enough that the consumer is always
 idle and ready to process the next message as soon as it’s sent. However, 
 I
 expect it does occasionally back up a bit, e.g. when I get a burst of
 traffic such as during a netsplit when I’m sent a large batch of “user
 has quit” or “user has joined” (when the netsplit is over). I don’t know
 how much the channel backs up at that point, probably not too much.

 For this particular use-case, a channel that’s bounded at e.g. 100,000
 elements would be indistinguishable from an infinite channel, as long as 
 it
 still dynamically allocates (I don’t *think* Go channels dynamically
 allocate, which is why I can’t just use a 100,000 element channel for 
 real).

 However, my overall point about large bounds being indistinguishable
 from infinite is that if your goal is to pick a bound large enough to
 appear infinite to the program, without actually risking OOM, then there’s
 no automated way to do this. Different environments have differing amounts
 of available resources, and there’s no good way to pick a bound that is
 sufficiently high but is definitively lower than the resource bounds. This
 is why I’m recommending that we have truly infinite channels, for users 
 who
 don’t want to have to think about bounds (e.g. my irc program), as well as
 bounded channels, where the user has to explicitly pick a bound (with no
 “default” provided).

 -Kevin

 On Dec 20, 2013, at 12:55 PM, Carter Schonwald 
 carter.schonw...@gmail.com wrote:

 kevin, what sort of applications and workloads are you speaking about.
 Eg in your example irc server, whats the typical workload when you've used
 it?

 cheers
 -Carter


 On Fri, Dec 20, 2013 at 12:54 PM, Kevin Ballard ke...@sb.org wrote:

 On Dec

Re: [rust-dev] Trait container return types

2013-12-15 Thread Eric Reed
I'm on a phone so I haven't tested this, but I'd suggest removing the T
parameter of Field and replacing uses of T with Self. In case you don't
already know, Self is an implicit type parameter representing the type of
self, i.e. the type you impl the trait for. Would that work for your use
case?
On Dec 15, 2013 2:40 AM, Andres Osinski andres.osin...@gmail.com wrote:

 I have not gotten around to examining the ownership issues of @-boxes -
 I've used them because they're mentioned as the only way to do runtime
 polymorphism - but I will definitely be looking at the Any type.

 The essential point is that, for a set of Field<T> containers, I want to
 invoke a method whose signature does not have generic type parameters,
 namely the is_valid() method, which would return a bool.

 The thing is, the specialization for Field is something that I want to
 leave open to the user, so an Enum solution or any solution which places a
 constraint on T is not good for my use case. I'm open to doing whatever
 unsafe manipulations would be necessary, but unfortunately there isn't
 much existing code around to look at for an example.


 On Sun, Dec 15, 2013 at 7:24 AM, Chris Morgan m...@chrismorgan.info wrote:

 The problem there is that `@Field` is not a type, because you haven't
 specified the value for the generic constraint T. That is, the
 pertinent trait object would be something like `@Field<int>`. It's not
 possible to have a field without the type being specified; that is,
 `get_fields()` can only be designed to return fields of one type
 (think of it this way—what will the type checker think of the value of
 `model.get_fields()[0].get()`? It's got to be exactly one type, but
 it's not possible to infer it).

 You'd need to deal with something like std::any::Any to achieve what
 it looks likely that you're trying to do. Because I wouldn't encourage
 designing something in that way as a starting point, I won't just now
 give you code covering how you would implement such a thing; see if
 it's possible for you to design it in such a way that this constraint
 doesn't cause you trouble. Using enums instead of traits is one way
 that can often—though certainly not always—get around this problem.
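A sketch of that enum alternative in modern Rust syntax (FieldValue and its variants are illustrative names): a closed set of field types lets one Vec hold heterogeneous fields without trait objects.

```rust
// Each variant is one possible field type; the set is closed,
// which is the trade-off versus an open trait-based design.
enum FieldValue {
    Int(i64),
    Text(String),
}

impl FieldValue {
    fn is_valid(&self) -> bool {
        match self {
            FieldValue::Int(_) => true,
            FieldValue::Text(s) => !s.is_empty(),
        }
    }
}

fn main() {
    let fields = vec![FieldValue::Int(3), FieldValue::Text(String::new())];
    let ok: Vec<bool> = fields.iter().map(|f| f.is_valid()).collect();
    assert_eq!(ok, vec![true, false]);
}
```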

 One final note—avoid using @-boxes if possible; is it possible for you
 to give owned pointers or references?

 On Sun, Dec 15, 2013 at 7:24 PM, Andres Osinski
 andres.osin...@gmail.com wrote:
  Hi everyone, I'm doing a bit of Rust coding and I'm trying to build a
  library to manage some common business object behavior.
 
   trait Field<T> {
   fn name() -> ~str;
   fn get_validators() -> ~[Validator<T>];
   fn get(self) -> T;
   fn is_valid(self) -> bool;
   }
 
   trait Model {
   fn get_fields(self) -> ~[@Field];
   fn validate(self) -> Option<HashMap<~str, ~[FieldError]>> {
   }
 
  The code fails with the following compiler error:
 
  models.rs:80:35: 80:40 error: wrong number of type arguments: expected
 1 but
  found 0
   models.rs:80 fn get_fields(self) -> ~[@Field];
 
  The reason for the get_fields() method is to return a list of
 heterogenous
  trait-upcasted objects, and for each of them I'd be invoking the
 is_valid()
  method.
 
  I would understand that the compiler may not understand the notion of
 trait
  return types (which would make sense) but I'd be interested to know
 whether
  this is a bug or a design limitation, and in the second case, whether
  there's a sensible alternative.
 
  Thanks
 
  --
  Andrés Osinski
 
  ___
  Rust-dev mailing list
  Rust-dev@mozilla.org
  https://mail.mozilla.org/listinfo/rust-dev
 




 --
 Andrés Osinski
 http://www.andresosinski.com.ar/

 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Let’s avoid having both foo() and foo_opt()

2013-12-06 Thread Eric Reed
FYI, there's already a method on Option that is unwrap() with an error
message: expect().

Personally, I prefer making functions that don't fail and use Option or
Result and then composing them with functions that fail for certain
outputs, but I think I'm in the minority there.


On Fri, Dec 6, 2013 at 1:45 PM, Simon Sapin simon.sa...@exyr.org wrote:

 On 06/12/2013 20:55, Léo Testard wrote:

 Hi,

 Just a suggestion, don't know what it's worth...

 For the not helpful error message thing, couldn't we extend the
 option API, to be able to specify at the creation of a None value
 the error string that will be displayed if one calls unwrap() on this
 value ? This may be useful in several situations.


 That would require making the memory representation of every Option
 bigger. Just for the (hopefully) uncommon case of task failure, it’s not
 worth the cost in my opinion.

 We could instead have a version of .unwrap() that takes an error message,
 but that leaves the responsibility to the user of the API.


 --
 Simon Sapin
 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Placement new and the loss of `new` to keywords

2013-12-02 Thread Eric Reed
I think the 'new(place) expr' or 'box(place) expr' is pretty confusing
syntax. To me, it's not at all clear that new(varA) varB means eval varB
and put it into varA instead of eval varA and put it into varB.
I'd much prefer syntax that makes it very clear which is the expression and
which is the place. Personally, I like ~ so I'd like ~expr in place, but
if people absolutely insist on scrapping ~ then I'd suggest put expr in
place.
In either case, I think keeping ~ as sugar for allocating on the exchange
heap would be nice (i.e. ~expr is sugar for ~expr in Unique or put
expr in Unique).
I guess we could use new or box instead of put, but I like put over
either.



On Sun, Dec 1, 2013 at 6:31 AM, Tiffany Bennett tiff...@stormbit.net wrote:

 I agree with the `box` name, it's far less jarring than `new (1+1)`.


 On Sun, Dec 1, 2013 at 9:06 AM, Tim Kuehn tku...@cmu.edu wrote:

 On Sun, Dec 1, 2013 at 8:04 AM, spir denis.s...@gmail.com wrote:

  On 12/01/2013 02:51 AM, Florian Zeitz wrote:

 If I may chime in here.
 I agree with Kevin that the different semantics of `new` are more likely
 to create confusion, than alleviate it.

 Personally I would suggest calling this operator `box`, since it boxes
 its argument into a newly allocated memory box.

 After all, these are different semantics from C++'s `new` (and also Go's
 `make` AFAICT), therefore, presuming that a sigil is not a sufficient
 indicator of a non-stack allocation, using an unprecedented keyword
 seems the way to go to me.


 +++ to all 3 points

 Denis



 I, too, am in favor of the `box` proposal. Short, intuitive, not
 already commonly used. What's not to like?

 Cheers,
 Tim






 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev



 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev



 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Placement new and the loss of `new` to keywords

2013-12-02 Thread Eric Reed
I don't think it introduces any ambiguity. The optional in is similar to
the optional else in if.
I'm pretty sure this grammar would suffice:

alloc_expr : ~ expr in_tail ? ;
in_tail : in expr ;

The expr of in_tail would need to be considered a 'place' (some sort of
trait I assume) by the typechecker.
Ending in_tail with an expr shouldn't be a problem (lambda_expr does it).
The parser can unambiguously tell if there is an in_tail present by simply
checking for the in keyword.


On Mon, Dec 2, 2013 at 12:56 AM, Ziad Hatahet hata...@gmail.com wrote:

 On Mon, Dec 2, 2013 at 12:43 AM, Eric Reed ecr...@cs.washington.edu wrote:

 In either case, I think keeping ~ as sugar for allocating on the exchange
 heap would be nice (i.e. ~expr is sugar for ~expr in Unique or put
 expr in Unique).


 `box expr in place` reads nicely too. I don't know about any ambiguity in
 the syntax though.

 --
 Ziad


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Can traits define instance vars?

2013-12-02 Thread Eric Reed
Hi Oliver,

Glad you're liking Rust so far :)

Currently traits can only have methods (functions that have a self
parameter) and associated functions (functions without a self parameter).
There's a proposal to add inheritance on structs to Rust, which would allow
a trait to extend a struct and gain access to its fields.
This (very) long mailing list thread
(https://mail.mozilla.org/pipermail/rust-dev/2013-November/006465.html) is
the most recent discussion I'm aware of. There are a couple ideas in
there about how to get similar behavior using just existing Rust constructs
(I'm still biased towards the solution I gave in there).
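As a sketch of how close you can already get without trait fields: an abstract getter mirrors the Kotlin example below, with a default method calling a method each implementer must provide (names follow the Kotlin sample, in modern Rust syntax).

```rust
trait Trait {
    // "abstract field": every implementer must supply it
    fn property(&self) -> i32;

    // default body uses the abstract getter, like Kotlin's foo()
    fn foo(&self) -> i32 {
        self.property()
    }
}

struct C;

impl Trait for C {
    fn property(&self) -> i32 {
        239
    }
}

fn main() {
    assert_eq!(C.foo(), 239);
}
```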

I'm not aware of any plans for a rust-users forum. Maybe spinning a
rust-users mailing list off from rust-dev would make sense? We already did
a similar thing with our IRC channels.

Eric




On Mon, Dec 2, 2013 at 2:12 AM, jeti...@web.de wrote:

 Hello,

 I lately had a look at Rust and really liked it in many ways. Rust has
 really everything I'm missing in Go ;-). There is something about traits I
 would like to ask that I can't see from the manual. The question is whether
 you can define instance variables in traits, or abstract variables like
 Ceylon and Kotlin have them. Abstract trait vars in Kotlin look like this
 (sample
 code taken from here:
 http://blog.jetbrains.com/kotlin/2011/08/multiple-inheritance-part-2-possible-directions
 ):

  trait Trait {
    val property : Int // abstract
    fun foo() {
      print(property)
    }
  }
  class C() : Trait {
    override val property : Int = 239
  }

 Can something like this be done for traits in Rust as well?

 Then I would like to ask whether you are planning to have some Rust forum
 for users. The question I was asking in this mail on rust-dev really
 doesn't belong into the dev mailing list. So something like a google
 Rust-users newsgroup would be a good thing to have. I know several people
 that are interested in Rust. I think it would give Rust a little push if a
 Rust-users forum existed.

 Regards, Oliver

 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Placement new and the loss of `new` to keywords

2013-12-02 Thread Eric Reed
This is my new favorite idea, especially expr@place. It's concise. It reads
expr at place, which is exactly what it does. It goes along with Rust's
putting the type after the value. expr in place could be good too.


On Mon, Dec 2, 2013 at 2:57 AM, Kevin Ballard ke...@sb.org wrote:

 With @ going away another possibility is to leave ~ as the normal
 allocation operator and to use @ as the placement operator. So ~expr stays
 the same and placement looks either like `@place expr` or `expr@place`

 -Kevin Ballard
 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Placement new and the loss of `new` to keywords

2013-12-02 Thread Eric Reed
But it's nothing like a pointer sigil. @ doesn't appear in the types at
all. It's just the placement allocation operator.


On Mon, Dec 2, 2013 at 10:23 AM, Patrick Walton pwal...@mozilla.com wrote:

 Anything with @ feels like it goes too close to pointer sigils for my
 taste.

 Patrick

 spir denis.s...@gmail.com wrote:

 On 12/02/2013 11:57 AM, Kevin Ballard wrote:

 With @ going away another possibility is to leave ~ as the normal 
 allocation operator and to use @ as the placement operator. So ~expr stays 
 the same and placement looks either like `@place expr` or `expr@place`


 I like that, with expr@place. Does this give:
  let foo = ~ bar;
  let placed_foo = bar @ place;
 ?

 Yet another solution, just for fun, using the fact that pointers are 
 supposed to
 point to:

   let foo = -> bar;
   let placed_foo = bar -> place;

 Denis
 --

 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev


 --
 Sent from my Android phone with K-9 Mail. Please excuse my brevity.

 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Placement new and the loss of `new` to keywords

2013-12-02 Thread Eric Reed
I think the idea was to have the syntax desugar into method calls just like
other existing operators.
There'd be a trait like:

trait Box<T> {
    fn box(val: T) -> Self;
}

and something like box expr in place would desugar into
place::box(expr).
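A sketch of that desugaring in today's Rust (BoxIn, box_in, and Heap are hypothetical names; `box` itself is a reserved word now, so the method is renamed):

```rust
// The place's type provides the allocation method; `box EXPR in PLACE`
// would desugar into a call on that type.
trait BoxIn<T> {
    type Boxed;
    fn box_in(val: T) -> Self::Boxed;
}

// One possible place: the default heap.
struct Heap;

impl<T> BoxIn<T> for Heap {
    type Boxed = Box<T>;
    fn box_in(val: T) -> Box<T> {
        Box::new(val)
    }
}

fn main() {
    // `box 5 in Heap` would desugar into roughly:
    let b: Box<i32> = <Heap as BoxIn<i32>>::box_in(5);
    assert_eq!(*b, 5);
}
```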

One question this poses is why are we requiring the place to be specified
all the time?
Why not let type inference handle deciding the place most of the time?


On Mon, Dec 2, 2013 at 10:46 AM, Erick Tryzelaar
erick.tryzel...@gmail.com wrote:

 Is there any way we can use a method and move semantics for this? This
 feels pretty natural to me:

 let foo = gc_allocator.box(bar);


  On Mon, Dec 2, 2013 at 10:23 AM, Patrick Walton pwal...@mozilla.com wrote:

 Anything with @ feels like it goes too close to pointer sigils for my
 taste.

 Patrick

 spir denis.s...@gmail.com wrote:

 On 12/02/2013 11:57 AM, Kevin Ballard wrote:

 With @ going away another possibility is to leave ~ as the normal 
 allocation operator and to use @ as the placement operator. So ~expr stays 
 the same and placement looks either like `@place expr` or `expr@place`



 I like that, with expr@place. Does this give:
  let foo = ~ bar;
  let placed_foo = bar @ place;
 ?

 Yet another solution, just for fun, using the fact that pointers are 
 supposed to
 point to:


   let foo = -> bar;
   let placed_foo = bar -> place;

 Denis
 --

 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev


 --
 Sent from my Android phone with K-9 Mail. Please excuse my brevity.

 ___

 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev



 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Placement new and the loss of `new` to keywords

2013-12-02 Thread Eric Reed
And @ even makes sense for what it's doing (placing something somewhere)
when compared with most operators.


On Mon, Dec 2, 2013 at 12:04 PM, Kevin Ballard ke...@sb.org wrote:

 ~ would still be the unique default. @ would require a place (because
 there's no placement without a place). Just because C++ uses the same
 operator for regular allocation and for placement doesn't mean we have to
 do the same. As it's been pointed out already, C++'s use of `new` for
 placement is kind of quite strange, since it doesn't actually allocate
 anything.

 As for too punctuation heavy, why the hate on punctuation? Operators
 have a long history of use in programming languages to great effect. I
 don't get why operators are now suddenly bad. User-overloadable operators
 are contentious, certainly, but this isn't an overloadable operator.

 -Kevin

 On Dec 2, 2013, at 11:38 AM, Patrick Walton pwal...@mozilla.com wrote:

 Besides, unless you remove the unique default (which I think would be too
 verbose) the default allocator reduces to a pointer sigil.

 Patrick Walton pwal...@mozilla.com wrote:

 Still too punctuation heavy.

 Kevin Ballard ke...@sb.org wrote:

 What do you mean? This suggestion uses @ as an operator, not as a sigil.

 -Kevin

 On Dec 2, 2013, at 10:23 AM, Patrick Walton pwal...@mozilla.com wrote:

 Anything with @ feels like it goes too close to pointer sigils for my
 taste.

 Patrick

 spir denis.s...@gmail.com wrote:

 On 12/02/2013 11:57 AM, Kevin Ballard wrote:

 With @ going away another possibility is to leave ~ as the normal 
 allocation operator and to use @ as the placement operator. So ~expr 
 stays the same and placement looks either like `@place expr` or 
 `expr@place`


 I like that, with expr@place. Does this give:
  let foo = ~ bar;
  let placed_foo = bar @ place;
 ?

 Yet another solution, just for fun, using the fact that pointers are 
 supposed to
 point to:

   let foo = -> bar;
   let placed_foo = bar -> place;

 Denis
 --

 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev


 --
 Sent from my Android phone with K-9 Mail. Please excuse my brevity.
 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev




 --
 Sent from my Android phone with K-9 Mail. Please excuse my brevity.



 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Rust forum

2013-12-02 Thread Eric Reed
Well there's always r/rust (http://www.reddit.com/r/rust/). It usually
works pretty well.


On Mon, Dec 2, 2013 at 9:45 PM, David Piepgrass qwertie...@gmail.com wrote:

 On 02/12/2013 16:21, David Piepgrass wrote:

  That would be so. much. better. than a mailing list.

 Hi. Could you expand on this? I don't necessarily disagree, but as the
 one proposing change it's up to you to convince everyone else :)

 --
 Simon Sapin


 Okay, well, I've never liked mailing lists at all, because:

 1. In non-digest mode, My inbox gets flooded.
 2. In digest mode, it's quite inconvenient to write a reply, having to cut
 out all the messages that I don't want to reply to and manually edit the
 subject line. Also, unrelated messages are grouped together while threads
 are broken apart, making discussions harder to follow.
 3. In email I don't get a threaded view. If I go to mailing list archives
 to see a threaded view, I can't reply.
 4. I have to manually watch for replies to my messages or to threads I'm
 following. If someone mentions my name (not that they would), I won't be
 notified.

 In contrast, Discourse has a variety of email notification options. I
 don't know if those options are enough to please everybody, but you can
 probably configure it to notify you about all posts, which makes it
 essentially equivalent to a mailing list. It supports reply by email, so
 those that prefer a mailing list can still pretend it's a mailing list.
 Currently I'm getting a shrunken digest of Discourse Meta--by email I only
 get a subset of all messages, auto-selected by Discourse, whatever it
 thinks is interesting. That's good for me: I really don't want to see every
 message.

 Plus, a mailing list offers less privacy as it mandates publishing your
 email address. That's not a big deal for me personally, but do you really
 want to require that from every Rust user?

 (btw, if I'm wrong about any of the above points, I promise there are lots
 of other netizens out there who have the same misconception(s), so many of
 them will avoid mailing lists. The fact that y'all are talking to me on a
 mailing list suggests that the disadvantages of a mailing list are not a
 big deal *to you*, but as for those who aren't participating, you can't
 conclude *they* prefer mailing lists.)

 And like mailing lists, Discourse also supports private messages.

 I don't understand why Paul mentioned GPG. You want to encrypt messages to
 a public mailing list? You can sign messages, but surely almost no one
 actually checks the signature, and I'd be surprised if Discourse didn't
 offer some built-in evidence of identity (surely it's not like email in
 letting you spoof the sender name easily?).

 I heard Discourse supports attachments, just that you may have to go to
 the forum to attach or download them (rather than doing it by email).


 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev




Re: [rust-dev] Implementation Inheritance / mixins

2013-11-13 Thread Eric Reed
Here's how I would do it using just existing Rust (assuming this hasn't all
changed under me in the past couple months).
NB: I haven't actually tried compiling this, but I'm pretty sure it (or
something like it) would work.

Nice properties over this solution:
- Doesn't require language extensions (although syntax sugar wouldn't be
unwelcome)
- Doesn't require trait objects (i.e. static dispatch is possible)
- Only need to implement one method for each derived type (super) in
addition to overridden methods
- Supports multiple inheritance in two ways (and avoids the diamond problem
I think -- not a C++ expert so I may have misunderstood that)
  + no default parent and programmer must select which parent to use
before calling
  + implementer-chosen default parent and programmer can chose to use a
different parent if desired

Neutral?:
- Doesn't enforce or care about the prefix property. Not sure if that still
matters so much w/o dynamic dispatch.

Downsides:
- Performance of delegation depends on LLVM's ability to inline (I think).
- Does require repeating all the methods once (for delegating default
implementations)

// The base type
struct Base {
    data: i32,
}

// Characterize its extensible behavior in a trait
trait Trait {
    fn method(self);
}

// Implement the base behavior
impl Trait for Base {
    fn method(self) { ... }
}

// Extension of Trait that supports upcasting to existing implementations
trait DerivingTrait<P: Trait> : Trait {
    // one extra method for accessing a parent's implementation.
    // ideally this would be inlined by the compiler
    fn super(self) -> P;
    // default implementations for all the methods in Trait let us avoid
    // writing delegation everywhere manually
    fn method(self) {
        self.super().method() // just delegate to parent
    }
}

// Single inheritance
struct Single {
    parent: Base,
    moreData: i32,
}

impl DerivingTrait<Base> for Single {
    fn super(self) -> Base { self.parent }
}

// Overriding behavior
struct Override {
    parent: Base,
    otherData: u8,
}

impl DerivingTrait<Base> for Override {
    fn super(self) -> Base { self.parent }
    fn method(self) { ... }
}

// Multiple inheritance
struct Multiple {
    single: Single,
    override: Override,
    evenMoreData: ~str,
}

// must specify which parent's implementation we want (could hide wrapping
// inside of as_* methods impl'd on Multiple if you like)
// if we want one of them as the default, then we can impl DerivingTrait
// on Multiple directly
struct MultipleAsSingle(Multiple);
struct MultipleAsOverride(Multiple);

impl DerivingTrait<Single> for MultipleAsSingle {
    fn super(self) -> Single { self.single }
}

impl DerivingTrait<Override> for MultipleAsOverride {
    fn super(self) -> Override { self.override }
}

fn main() {
    let base = Base { ... };
    let single = Single { ... };
    let override = Override { ... };
    let multiple = Multiple { ... };

    base.method();
    base.super(); // compile time error

    single.method(); // ==inline delegation==> single.super().method()
                     // ==inline upcast==> single.base.method()
    override.method(); // done! no delegating
    MultipleAsSingle(multiple).method();
        // ==delegate==> MAS(multiple).super().method()
        // ==upcast==> multiple.single.method()
        // ==delegate==> multiple.single.super().method()
        // ==upcast==> multiple.single.base.method()
    MultipleAsOverride(multiple).method();
        // ==delegate==> MAO(multiple).super().method()
        // ==upcast==> multiple.override.method()
}

Thoughts?

Eric


On Tue, Nov 12, 2013 at 10:30 PM, Kevin Ballard ke...@sb.org wrote:

 On Nov 12, 2013, at 10:26 PM, Kevin Ballard ke...@sb.org wrote:

  And even that restriction could be lifted if ~Trait objects could be
 represented using an array of pointers (one to each inherited struct), e.g.
 ([*A,*B,*C],*vtable) instead of just (*A,*vtable), though I suspect this is
 not worth doing.

 Upon further reflection, this  would need to be done anyway because of the
 ability to combine traits. If I have

 trait TraitA : A {}
 trait TraitB : B {}

 and I want to use ~TraitA+TraitB then I would need a fat trait. Although
 in this case the number of value pointers is equal to the number of
 combined traits, so it's a bit more sensible to allow for fat trait
 pointers here.

 -Kevin


Re: [rust-dev] Greenlets in Rust (was: Abandoning segmented stacks in Rust)

2013-11-13 Thread Eric Reed
The big issue I see right away (assuming I read this correctly and
greenlets can still access the stack that existed when they were created),
is that now mutable state on the stack is *shared* between greenlets and
therefore can experience *data races* (impossible for tasks b/c they don't
share memory), so they sound wildly unsafe to me.

There may be some other issues that arise from the shared stack prefix
property:
- If a greenlet moves something on the stack, then other greenlets now have
access to invalidated memory
- You can no longer assume that you have sole access to things pointed to
by unique pointers, which would probably invalidate a lot of existing
assumptions.

Eric


On Wed, Nov 13, 2013 at 12:02 AM, Vadim vadi...@gmail.com wrote:

 Hi,
 I would like to float a proposal (or three :-), regarding greenlets in
 Rust.  For those unfamiliar with greenlets, they are a tool for writing
 concurrent code, similar to Rust's tasks, but much more light-weight in
 terms of memory consumption (especially now that segmented stacks are no
 more).

 I think there are some scenarios where low memory consumption per-task is
 still important, 64-bit address spaces notwithstanding.   A typical one
 would be a pub-sub server, which needs to maintain a massive number of
 simple I/O workflows, where I/O channels are idle most of the time.

 So here we go (in the order of increasing craziness):

 1. Recently I've learned how Python greenlets
 (http://greenlet.readthedocs.org/) are implemented
 (http://stackoverflow.com/a/17447308).  I believe that the
 same approach could work in Rust:

 Basically, greenlets are spawned using the same stack as the parent
 greenlet, just like a normal function call.  When a greenlet is suspended,
 it copies the portion of the stack used up since its spawning to the
 heap.  When one is re-activated, the saved memory is copied back where it
 came from (having first saved stack of the previous active greenlet,- if
 they overlap).

 Since greenlets don't need to save red zone of the stack, the amount of
 data per instance is precisely what is actually used.

 There are also downsides, of course:
 - greenlets are bound to the thread that spawned them,
 - two memcpy's are needed when switching between them.

 In the case of Python, though, there's one further optimization: since
 Python's stack frames live on the heap, in most cases, there is nothing on
 the hardware stack that a greenlet needs to save!   As a bonus, it can now be
 resumed at any stack position, so no saving of previous greenlet's stack is
 needed.  The only time when a full save occurs is when there are foreign
 stack frames on the stack.


 2. Well, can Rust do the same?   What if we came up with an attribute,
 say, #[stackless], which causes a function to allocate its stack frame on
 the heap and put all local vars there?  The only things on the actual
 hardware stack would then be the function's arguments, the return address,
 the saved base pointer and the pointer to that heap-allocated frame.
 With the exception of base pointers, all these things are
 position-independent, I believe.   And base pointer chain can be easily
 fixed up if/when stack is moved.

 So if we had that, and the whole greenlet's stack consisted of such
 functions, and there was a way for the switch_to_greenlet() function to
 detect that, then such greenlet's stack would be relocatable and could be
 resumed at any position in the thread's stack (or even in another thread!)
 with minimal memory copying, just like in Python.

 Of course, the #[stackless] functions would be slower than the normal
 ones, but in the scenario I've outlined in the beginning, it shouldn't be a
 problem.


 3.  Unfortunately, in order for the above scheme to work, all I/O methods,
 (which are typically where yields happen), would need to be marked as
 #[stackless]...  This would affect the performance of normal code using
 the same API, which is undesirable.

 Okay, but usually there are not *that* many things that point into the
 stack in a typical program.  I can think of only three things: references
 to stack-allocated buffers, base pointer chains and references to
 caller-allocated return values.
 - We can live without the first one - just allocate buffers on the heap.
 - The second one - see above.
 - The last one is more tricky, but for the sake of argument, let's assume
 that we restricted function signatures such that only register-allocated
 types can be returned.

 Let's say we came up with a way to mark up functions that may yield to
 another greenlet, and also with a way to prohibit taking address of
 stack-allocated variables for the duration of calls to yielding functions.
 These restrictions would be annoying, but not overly so, as long as you
 had to obey them only in functions that are intended to be run in a
 greenlet.
 On the plus side, the hardware stack contents would now be relocatable.

 In this setup, everything could proceed as usual, using the 

Re: [rust-dev] Implementation Inheritance / mixins

2013-11-13 Thread Eric Reed
I'm not clear on why LLVM wouldn't be able to inline super() calls. It's
static dispatch, so it knows exactly what function is being called.


On Wed, Nov 13, 2013 at 1:25 AM, Oren Ben-Kiki o...@ben-kiki.org wrote:

 This is probably as good as we can get in the current system (I do
 something similar in my code today).

 I also think you probably need both super and mut_super, so it would
 be two methods to implement instead of one (still pretty good). I wonder
 whether it is possible to write a macro that automates writing these
 boilerplate methods?

 The key weakness is that (I think) the compiler can't inline the accesses
 to super so that you end up with a chain of virtual function calls every
 time it is accessed, so performance would be pretty bad.


 On Wed, Nov 13, 2013 at 10:27 AM, Eric Reed ecr...@cs.washington.edu wrote:

 [quoted text elided; message appears in full above]

Re: [rust-dev] Implementation Inheritance / mixins

2013-11-13 Thread Eric Reed
I'm not sure I follow.
My implementation doesn't use any trait pointers, so the only time there
would be a trait pointer is if you cast to ~Trait yourself.
In that case, only the original method call would be dynamic dispatch; all
the internal calls (delegating and upcasting) are still static dispatch.
So my version doesn't pay for any dynamic dispatch over what the programmer
is already paying for.


On Wed, Nov 13, 2013 at 3:46 AM, Oren Ben-Kiki o...@ben-kiki.org wrote:

 The call isn't statically dispatched when I invoke a method via a trait
 pointer. So it seems when I invoke any trait function, I pay double the
 cost of a virtual function call instead of one... I suppose it isn't _too_
 bad, but it still hurts.



 On Wed, Nov 13, 2013 at 12:21 PM, Eric Reed ecr...@cs.washington.edu wrote:

 [quoted text elided; messages appear in full above]

Re: [rust-dev] Implementation Inheritance / mixins

2013-11-13 Thread Eric Reed
Indeed. I haven't looked at what it would generate at all. I'm just working
off my understanding of the semantics. What LLVM actually does is an
entirely different question.


On Wed, Nov 13, 2013 at 10:28 AM, Oren Ben-Kiki o...@ben-kiki.org wrote:

 Hmmm. Perhaps I was too hasty. It would be interesting to look at the
 generated binary code and see if it actually works this way...

 On Wed, Nov 13, 2013 at 8:08 PM, Eric Reed ecr...@cs.washington.edu wrote:

 [quoted text elided; message appears in full above]




Re: [rust-dev] copying pointers

2013-11-12 Thread Eric Reed
I'd suggest extra::list, but it looks a little dated.


On Tue, Nov 12, 2013 at 6:21 AM, Oren Ben-Kiki o...@ben-kiki.org wrote:

 For linked lists with no cycles, why not use Option<Rc<T>> (or RcMut)?


 On Tue, Nov 12, 2013 at 4:06 PM, spir denis.s...@gmail.com wrote:

 PS: What would be, in fact, the rusty way for a dead-simple linked list? I
 use Option<~Cell> for now, to have something clean (None) to end the list,
 since Rust looks rather functional. But as always with Option, this way
 rather obscures and complicates the code (Some() expressions, match
 expressions...). I'd rather just use a NULL pointer, for here it would be
 fully safe. But that does not look rusty at all, I guess.
 What is your view?


 Denis








Re: [rust-dev] Is our UdpSocket misbehaving?

2013-09-16 Thread Eric Reed
I left an XXX about this here:
https://github.com/mozilla/rust/blob/master/src/libstd/rt/uv/uvio.rs#L958
I'm pretty sure libuv drops the remainder of the packet, but I haven't
confirmed that.
I think the best way to deal with this is to raise a PartialPacketRead
condition.


On Mon, Sep 16, 2013 at 3:18 PM, Brian Anderson bander...@mozilla.com wrote:

 On 09/16/2013 02:06 PM, Maik Klein wrote:

 https://gist.github.com/MaikKlein/6586333

 Basically what happens is that a packet is read partially if the buffer
 is too small for the packet. According to
 http://gafferongames.com/networking-for-game-programmers/sending-and-receiving-packets/
 this should not happen.

 In my case I send [99u8,99u8] which should be of size 512 and my buffer
 is [0u8,..1] which should be of size 256. But it still receives the first
 99.


 I'm not sure what is correct, but this behavior appears to be inherited
 from libuv. I wonder if the rest of the packet is delivered in subsequent
 reads or if part of the packet is just discarded.
