Re: [rust-dev] minor problem with make install

2014-09-05 Thread Kevin Ballard
I submitted a PR yesterday in response to this thread that fixes `sudo make 
install` to drop the root privs for everything except the actual installation: 
https://github.com/rust-lang/rust/pull/17009

-Kevin

 On Sep 5, 2014, at 5:00 AM, John McKown john.archie.mck...@gmail.com wrote:
 
 On Thu, Sep 4, 2014 at 8:59 PM, Brian Anderson bander...@mozilla.com wrote:
 Thanks for the report. It's probably
 https://github.com/rust-lang/rust/issues/13728. This is an easy first bug to
 fix if somebody wants to jump on it.
 
 
 Thanks for that link. I had some minor surgery yesterday, in the
 morning, and was a bit tipsy the rest of the day. Including when I
 sent the message. Which, somewhat, explains the poor grammar and
 spelling. But only somewhat. I will look at those issues in github
 before asking questions again, just to be sure that I'm not wasting
 anybody's time. In this particular case, the solution, for me, is a
 simple sudo find to chown the files. I might even be good enough to
 figure out how to _properly_ do that during the make install
 processing. If I do, I'll look up how to submit a change. I imagine it
 is in the documentation somewhere. And, obviously from posting a gist,
 I have a github account that I can use so that the maintainers could
 just do a pull from my copy.
 
 OOPS, time to go.
 
 
 -- 
 There is nothing more pleasant than traveling and meeting new people!
 Genghis Khan
 
 Maranatha! 
 John McKown
 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev





Re: [rust-dev] [ANN] Iobuf

2014-09-04 Thread Kevin Ballard
I’m still seeing bad transmutes.

  fn from_str<'a>(s: &'a str) -> RawIobuf<'a> {
    unsafe {
      let bytes: &mut [u8] = mem::transmute(s.as_bytes());
      RawIobuf::of_buf(BorrowedBuffer(bytes))
    }
  }

This is taking a `&str`, converting to `&[u8]`, and then transmuting to `&mut 
[u8]`. Besides being undefined behavior, I have to assume it's also possible for 
other code later on to end up attempting to actually mutate this data, which 
will either a) be really bad, or b) not even be possible if it's a string 
constant in read-only memory.
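For what it's worth, the sanctioned shape of the fix (interior mutability via UnsafeCell, as discussed below) can be sketched in modern Rust; `OwnedBuf` and its methods here are illustrative, not Iobuf's actual API:

```rust
use std::cell::UnsafeCell;

// A buffer that hands out mutable slices from a shared reference.
// UnsafeCell is the sanctioned primitive for this; transmuting & to &mut
// is undefined behavior even when the data is never actually mutated.
struct OwnedBuf {
    data: UnsafeCell<Vec<u8>>,
}

impl OwnedBuf {
    fn new(v: Vec<u8>) -> OwnedBuf {
        OwnedBuf { data: UnsafeCell::new(v) }
    }

    // Caller must guarantee no aliasing mutable access; that invariant is
    // the library's responsibility, but at least the compiler now knows
    // mutation may happen behind the shared reference.
    unsafe fn as_mut_slice(&self) -> &mut [u8] {
        (*self.data.get()).as_mut_slice()
    }
}

fn main() {
    let buf = OwnedBuf::new(vec![1u8, 2, 3]);
    unsafe { buf.as_mut_slice()[0] = 9; }
    assert_eq!(unsafe { (*buf.data.get())[0] }, 9);
}
```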

-Kevin

 On Sep 4, 2014, at 1:15 AM, Clark Gaebel cg.wowus...@gmail.com wrote:
 
 I think you’re right! Thanks for pointing at UnsafeCell. That seems like 
 exactly what I want. It has been fixed.
 
 Thanks a ton for the catch!
   - Clark
 
 
 On Thu, Sep 4, 2014 at 12:46 AM, Vladimir Matveev dpx.infin...@gmail.com 
 wrote:
 
 Hi! 
 
 I’ve noticed this piece of code in your library: 
 
 #[inline] 
 fn as_mut_slice(&self) -> &mut [u8] { 
     unsafe { 
         match self { 
             &OwnedBuffer(ref v) => { 
                 let mut_v: &mut Vec<u8> = mem::transmute(v); 
                 mut_v.as_mut_slice() 
             }, 
             &BorrowedBuffer(ref s) => { 
                 let mut_s: &mut &mut [u8] = mem::transmute(s); 
                 mut_s.as_mut_slice() 
             }, 
         } 
     } 
 } 
 
 I was under the impression that transmuting `&` to `&mut` is undefined behavior 
 in Rust, and you need to use RefCell (or UnsafeCell) for this. Am I wrong? 
 
 On 04 сент. 2014 г., at 9:17, Clark Gaebel cg.wowus...@gmail.com wrote: 
 
  Hey everyone! 
  
  Have you ever needed to communicate with the outside world from a rust 
  application? Do you need to send data through a network interface, or touch 
  a disk? Then you need Iobufs! 
  
  An Iobuf is a nifty abstraction over an array of bytes, which makes writing 
  things like highly efficient zero-copy speculative network protocol parsers 
  easy! Any time I need to do I/O, I reach for an Iobuf to do the heavy 
  lifting. 
  
  https://github.com/cgaebel/iobuf 
  
  Enjoy, 
  - Clark 


Re: [rust-dev] Dynamic format template

2014-08-24 Thread Kevin Ballard
It’s technically possible, but horribly unsafe. The only thing that makes it 
safe to do normally is that the syntax extension that implements `format!()` 
ensures all the types match. If you really think you need this, you can look at 
the implementation of core::fmt. But it’s certainly not appropriate for 
localization or template engines.
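For anyone needing the template-engine use case, a safe runtime substitution needs no unsafe code at all; a minimal sketch in modern Rust (`render` and its placeholder syntax are invented for illustration):

```rust
// Replaces "{key}" placeholders in a runtime-provided template.
// This sidesteps format!'s compile-time checking entirely, trading
// type safety for runtime flexibility.
fn render(template: &str, vars: &[(&str, String)]) -> String {
    let mut out = template.to_string();
    for &(key, ref value) in vars {
        out = out.replace(&format!("{{{}}}", key), value);
    }
    out
}

fn main() {
    let tpl = "Hello, {name}! You have {n} messages.";
    let s = render(tpl, &[("name", "Vadim".to_string()),
                          ("n", 3.to_string())]);
    assert_eq!(s, "Hello, Vadim! You have 3 messages.");
}
```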

-Kevin Ballard

 On Aug 24, 2014, at 2:48 PM, Vadim Chugunov vadi...@gmail.com wrote:
 
 Hi,
 Is there any way to make Rust's fmt module to consume format template 
 specified at runtime? 
 This might be useful for localization of format!'ed strings, or, if one wants 
 to use format! as a rudimentary template engine.


Re: [rust-dev] Opt-In Built-In Traits (was: Mutable files)

2014-07-24 Thread Kevin Ballard
On Wed, Jul 23, 2014, at 12:52 PM, David Henningsson wrote:
 
 
 On 2014-07-21 19:17, Patrick Walton wrote:
  On 7/21/14 8:49 AM, Tobias Müller wrote:
  Patrick Walton pcwal...@mozilla.com wrote:
  On 7/20/14 8:12 PM, David Henningsson wrote:
   From a language design perspective, maybe it would be more intuitive to
   have different syntaxes for copy and move, like:
 
   As a rust newbie, that aspect always makes me a bit nervous. Two quite
   different operations with the same syntax, and simply changing a
   detail in the struct can be enough to switch between the two.
 
  This is the reason for Opt-In Built-In Traits.
 
  * Causing a move when you thought you were copying results in a compiler
  error.
 
  * Causing a copy when you thought you were moving is harmless, as any
  implicit copy in Rust has *exactly the same runtime semantics* as a
  move, except that the compiler prevents you from using the value again.
 
  Again, we had that world before. It was extremely annoying to write
  move all over the place. Be careful what you wish for.
 
  I find these arguments compelling, but if what we want to accomplish is 
  a conscious choice between copy and move every time somebody makes a new 
  struct, maybe `#[deriving(Data)] struct Foo` vs `struct Foo` is not 
  first-class enough.
 
 Maybe the move vs copy should be done by using different keywords, a few 
 brainstorming examples:
 
   * datastruct for copy, struct for move
   * simplestruct for copy, complexstruct for move
   * struct for copy, class or object for move

What would this solve? Nobody who’s using a type is going to care about
the keyword used to introduce the type; they’re only going to care about
the behavior of the type. Using `datastruct` instead of `struct` will
have zero impact on the people writing

let x: Foo = y;

Actually, the whole notion of having to intentionally describe on every
struct whether you want it to be Copy is my biggest objection to opt-in
traits. The API Stability / documentation aspect is great, but it does
seem like a burden to people writing once-off structs.
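For readers skimming this thread later: the opt-in design did ship, and the copy/move distinction under discussion can be sketched in modern Rust syntax (`#[derive(Copy)]` rather than the 2014 `#[deriving]`); the types below are illustrative only:

```rust
// Copy is opt-in: a type is copied implicitly only if it derives Copy.
#[derive(Clone, Copy)]
struct Point { x: i32, y: i32 }

// No Copy derive: assignment moves the value.
struct Buffer { data: Vec<u8> }

fn main() {
    let p = Point { x: 1, y: 2 };
    let q = p;                  // implicit copy; `p` stays usable
    assert_eq!(p.x + q.x, 2);

    let b = Buffer { data: vec![1, 2, 3] };
    let c = b;                  // move; using `b` after this would not compile
    assert_eq!(c.data.len(), 3);
}
```

As Patrick notes above, the copy and the move have identical runtime semantics; the only difference is whether the compiler lets you keep using the source binding.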

What I’d actually like to see is for private structs to infer things
like Copy and for public structs to then require it to be explicitly
stated. I don’t know how to do this in a way that’s not confusing
though.

-Kevin


Re: [rust-dev] [ANN] Initial Alpha of Cargo

2014-06-24 Thread Kevin Ballard
This is pretty awesome. I notice that http://crates.io doesn’t link to the 
GitHub repo though. Seems like that might be a useful thing to add.

-Kevin

 On Jun 23, 2014, at 10:50 PM, Yehuda Katz wyc...@gmail.com wrote:
 
 Folks,
 
 I'm happy to announce that Cargo is now ready to try out!
 
 The Cargo repository is now at https://github.com/rust-lang/cargo and you can 
 learn all about it at http://crates.io/. Don't forget to check out the FAQ at 
 http://crates.io/faq.
 
 You can build Cargo from master using the latest `rustc` and running `make 
 install`. It assumes a `rustc` and `git` on the path, so you won't need to 
 recompile Cargo every time you update the nightly.
 
 Cargo is still under heavy development and features are coming quickly. At 
 the moment, all dependencies are downloaded from Github, but we are working 
 on a Cargo registry that you will be able to publish packages to. There are 
 more details about that in the FAQ.
 
 The next features we're planning on working on are:
 `cargo package name` to create a new package skeleton
 Supporting refs other than `master` from git packages
 Support for environments (such as development, production and test) as well 
 as a `cargo test` command. This includes per-environment dependencies.
 Support for per-platform configuration.
 More deterministic builds using a shrinkwrap file (like the bundler 
 Gemfile.lock or shrinkwrap.json in npm).
 Since people have asked often, we plan to transparently support duplicates of 
 the same package name and version in the following conditions:
 From different git repositories or different branches of the same git 
 repository
 In versions less than 1.0 for packages from the Cargo registry
 For different major versions for packages from the Cargo registry
 By default, we will encourage package authors to comply with semantic 
 versioning and not introduce breaking changes in minor versions by using the 
 single highest available minor version for each depended-on major version of 
 a package from the Cargo registry.
 
 For example, if I have three packages:
 uno depends on json 1.3.6
 dos depends on json 1.4.12
 tres depends on json 2.1.0
 Cargo will use json 1.4.12 for uno and dos, and json 2.1.0 for tres. This 
 makes good use of Rust's symbol mangling support, while also avoiding 
 unnecessary code bloat.
 
 This will tend to produce significantly smaller binary sizes than encouraging 
 libraries to depend on precise versions of published packages. We tried to 
 strike a good balance between isolating unstable code and avoiding binary 
 bloat in stable libraries. As the ecosystem grows, we'll watch carefully and 
 see if any tweaks are necessary.
 
 Yehuda Katz
 (ph) 718.877.1325


Re: [rust-dev] Preserving formatting for slice's Show impl

2014-06-13 Thread Kevin Ballard
I would not expect this to be “mapped” over the slice. I encourage you to come 
up with an appropriate syntax to describe that and submit an RFC, although I 
wonder how you plan on dealing with things like key vs value in Maps, and 
further nesting (e.g. slices of slices, etc).

As for applying it to the slice as a whole, that would be the appropriate way 
to handle this format parameter. The problem is doing that requires building an 
intermediate string first, because you have to know the length of the full 
output before you can know how to pad it, and it’s generally considered to be 
undesired work to do that. As far as I’m aware, the only types right now that 
actually support the various padding-related formatting controls are the ones 
that can be printed in a single operation (such as numbers, or strings).
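The workaround is easy to do by hand: format the slice into an intermediate string first, then pad that string as a whole. In modern Rust syntax:

```rust
fn main() {
    let v = [1, 2, 3];
    // Debug-format the slice first, then pad the finished string as a
    // whole; the fill/width on the outer format! now apply to "[1, 2, 3]".
    let inner = format!("{:?}", v);
    let padded = format!("{:_>12}", inner);
    assert_eq!(padded, "___[1, 2, 3]");
}
```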

-Kevin

On Jun 9, 2014, at 8:50 PM, Tom Jakubowski t...@crystae.net wrote:

 I would expect that `println!("{:_4}", [1].as_slice());` would print either 
 `[___1]` (where the format is mapped over the slice) or `_[1]` (where the 
 format is applied to the slice as a whole), but instead no formatting is 
 applied at all and it simply prints `[1]`. 
 
 I can see uses and arguments for both the “mapping” and “whole” 
 interpretations of the format string on slices. On the one hand this 
 ambiguity makes a case for leaving the behavior as-is for backwards 
 compatibility. On the other hand it would be useful to be able to format 
 slices (and other collections, of course). Would it be appropriate to expand 
 the syntax for format strings to allow for nested format strings, so that 
 separate formatting can be applied to the entire collection and to its 
 contents? I assume this would require an RFC.
 
 (The mapped variant can be very easily implemented, by the way, by 
 replacing `try!(write!(f, "{}", x))` with `try!(x.fmt(f))` in the 
 `impl<T: Show> Show for [T]`.)
 
 Tom
 


Re: [rust-dev] cannot borrow `st` as mutable more than once at a time

2014-05-30 Thread Kevin Ballard
On May 30, 2014, at 12:12 AM, Vladimir Matveev dpx.infin...@gmail.com wrote:

 2014-05-30 5:37 GMT+04:00 Kevin Ballard ke...@sb.org:
 
 It shouldn't.
 
 The for-loop desugaring looks like
 
 match &mut st.execute_query() {
     __i => loop {
         match __i.next() {
             None => break,
             Some(mut __value) => {
                 let i = __value;
                 {
                     // for loop body goes here
                 }
             }
         }
     }
 }
 
 It's done with a match statement like this specifically to make the &mut 
 borrow of the iterator end after the for loop.
 
  Great, didn't know that. Last time I asked (on StackOverflow, I think;
  that was some time ago though) there was no `match`. Then from that
  code alone it does look like a bug to me. Note that it refers to
  `st.set_string("%e%")` and the `for` loop ten lines above, that is, the
  first one. If the mutable borrow of the iterator isn't escaping the loop,
  then this error should not appear, right?

The errors you printed are slightly malformed, and you only listed some of your 
code. Is this a database library you're writing yourself? My best guess here is 
that you've accidentally used the wrong lifetime on your `execute_query()` 
method, tying the lifetime of the `self` reference to a lifetime on the value 
itself. Something like this:

impl<'a> Statement<'a> {
    pub fn execute_query(&'a mut self) { ... }
}

By using `'a` on `&'a mut self` here, you've explicitly tied the reference to 
the lifetime of the value. This causes the mutable reference to live much 
longer than you expected it to, which means it's still alive when you try to 
subsequently borrow it on your call to `.set_string()`.

-Kevin



Re: [rust-dev] cannot borrow `st` as mutable more than once at a time

2014-05-30 Thread Kevin Ballard
I'm assuming that Statement has its own lifetime parameter? And that's the 'a 
you're using here?

Try using a new lifetime.

pub fn execute_query<'b>(&'b mut self) -> ResultSet<'b>;

-Kevin

On May 30, 2014, at 1:54 AM, Christophe Pedretti 
christophe.pedre...@gmail.com wrote:

 Hi All,
 
 sorry for my late reply, i am UTC+2
 
  Won't wrapping the first `for` loop into curly braces help?
 no
 
  is this a database library you're writing yourself?
 yes
 
  My best guess here is that you've accidentally used the wrong lifetime on
  your `execute_query()` method, tying the lifetime of the `self` reference to
  a lifetime on the value itself
 
 yes, but after removing the lifetime reference on the self, compiling my 
 library gives
 
 sql\connection.rs:57:2: 64:3 note: consider using an explicit lifetime 
 parameter as shown: fn execute_query<'a>(&'a mut self) -> ResultSet<'a>
 sql\connection.rs:57    pub fn execute_query(&mut self) -> ResultSet<'a> {
 sql\connection.rs:58        match self.pCon.dbType {
 sql\connection.rs:59            SQLITE3 => {
 sql\connection.rs:60                if self.exec { unsafe { sqlite3_reset(self.pStmt) }; } else { self.exec = true; }
 sql\connection.rs:61                ResultSet { pStmt : self, error : false }
 sql\connection.rs:62            }
  ...
 sql\connection.rs:61:23: 61:27 error: cannot infer an appropriate lifetime 
 for automatic coercion due to conflicting requirements
 sql\connection.rs:61                ResultSet { pStmt : self, error : false }
                                     ^~~~
 
 execute_query can be used only for the loop body, and if there is no variable 
 referencing it, there is no reason for the execute_query result to live 
 outside the loop (as in my example)
 
 or, with code like this :
 
 let query_result = st.execute_query();
 for i in query_result {
 ...
 
 and in this case, the query_result lives outside the loop
 
 can the compiler not distinguish these two usages?
 
 Thanks
 
 2014-05-30 9:17 GMT+02:00 Kevin Ballard ke...@sb.org:
 On May 30, 2014, at 12:12 AM, Vladimir Matveev dpx.infin...@gmail.com wrote:
 
  2014-05-30 5:37 GMT+04:00 Kevin Ballard ke...@sb.org:
 
  It shouldn't.
 
  The for-loop desugaring looks like
 
  match &mut st.execute_query() {
      __i => loop {
          match __i.next() {
              None => break,
              Some(mut __value) => {
                  let i = __value;
                  {
                      // for loop body goes here
                  }
              }
          }
      }
  }
 
  It's done with a match statement like this specifically to make the &mut 
  borrow of the iterator end after the for loop.
 
   Great, didn't know that. Last time I asked (on StackOverflow, I think;
   that was some time ago though) there was no `match`. Then from that
   code alone it does look like a bug to me. Note that it refers to
   `st.set_string("%e%")` and the `for` loop ten lines above, that is, the
   first one. If the mutable borrow of the iterator isn't escaping the loop,
   then this error should not appear, right?
 
 The errors you printed are slightly malformed, and you only listed some of 
 your code. Is this a database library you're writing yourself? My best guess 
 here is that you've accidentally used the wrong lifetime on your 
 `execute_query()` method, tying the lifetime of the `self` reference to a 
 lifetime on the value itself. Something like this:
 
  impl<'a> Statement<'a> {
      pub fn execute_query(&'a mut self) { ... }
  }
 
  By using `'a` on `&'a mut self` here, you've explicitly tied the reference to 
  the lifetime of the value. This causes the mutable reference to live much 
  longer than you expected it to, which means it's still alive when you try to 
  subsequently borrow it on your call to `.set_string()`.
 
 -Kevin
 


Re: [rust-dev] cannot borrow `st` as mutable more than once at a time

2014-05-30 Thread Kevin Ballard
If I'm interpreting this right, you also need to add a second lifetime 
parameter to your ResultSet object. This way the lifetime used for its 
reference to the Statement can be different than the lifetime on the Statement 
type itself (I assume Statement has a lifetime to refer to the database).

I whipped up an equivalent bit of code. The following reproduces your error:

struct Root {
    s: String
}

struct One<'a> {
    root: &'a mut Root
}

struct Two<'a> {
    one: &'a mut One<'a>
}

impl Root {
    pub fn make_one<'a>(&'a mut self) -> One<'a> {
        One { root: self }
    }
}

impl<'a> One<'a> {
    pub fn make_two(&'a mut self) -> Two<'a> {
        Two { one: self }
    }

    pub fn foo(&mut self) {
        println!("foo");
    }
}

fn main() {
    let mut r = Root { s: "root".to_string() };
    let mut one = r.make_one();
    match one.make_two() {
        x => {
            println!("root: {}", x.one.root.s);
        }
    }

    one.foo();
}

The equivalent change here that I'm proposing is updating Two to

struct Two<'a, 'b> {
    one: &'b mut One<'a>
}

and the definition of make_two() to

pub fn make_two<'b>(&'b mut self) -> Two<'a, 'b> {
    Two { one: self }
}

-Kevin

On May 30, 2014, at 3:12 PM, Kevin Ballard ke...@sb.org wrote:

 I'm assuming that Statement has its own lifetime parameter? And that's the 'a 
 you're using here?
 
 Try using a new lifetime.
 
  pub fn execute_query<'b>(&'b mut self) -> ResultSet<'b>;
 
 -Kevin
 
 On May 30, 2014, at 1:54 AM, Christophe Pedretti 
 christophe.pedre...@gmail.com wrote:
 
 Hi All,
 
  sorry for my late reply, i am UTC+2
 
  Won't wrapping the first `for` loop into curly braces help?
 no
 
  is this a database library you're writing yourself?
 yes
 
  My best guess here is that you've accidentally used the wrong lifetime on
  your `execute_query()` method, tying the lifetime of the `self` reference 
  to
  a lifetime on the value itself
 
  yes, but after removing the lifetime reference on the self, compiling my 
  library gives
 
  sql\connection.rs:57:2: 64:3 note: consider using an explicit lifetime 
  parameter as shown: fn execute_query<'a>(&'a mut self) -> ResultSet<'a>
  sql\connection.rs:57    pub fn execute_query(&mut self) -> ResultSet<'a> {
  sql\connection.rs:58        match self.pCon.dbType {
  sql\connection.rs:59            SQLITE3 => {
  sql\connection.rs:60                if self.exec { unsafe { sqlite3_reset(self.pStmt) }; } else { self.exec = true; }
  sql\connection.rs:61                ResultSet { pStmt : self, error : false }
  sql\connection.rs:62            }
   ...
  sql\connection.rs:61:23: 61:27 error: cannot infer an appropriate lifetime 
  for automatic coercion due to conflicting requirements
  sql\connection.rs:61                ResultSet { pStmt : self, error : false }
                                      ^~~~
 
  execute_query can be used only for the loop body, and if there is no 
  variable referencing it, there is no reason for the execute_query result to 
  live outside the loop (as in my example)
 
 or, with code like this :
 
  let query_result = st.execute_query();
 for i in query_result {
 ...
 
 and in this case, the query_result lives outside the loop
 
  can the compiler not distinguish these two usages?
 
 Thanks
 
 2014-05-30 9:17 GMT+02:00 Kevin Ballard ke...@sb.org:
 On May 30, 2014, at 12:12 AM, Vladimir Matveev dpx.infin...@gmail.com 
 wrote:
 
  2014-05-30 5:37 GMT+04:00 Kevin Ballard ke...@sb.org:
 
  It shouldn't.
 
  The for-loop desugaring looks like
 
   match &mut st.execute_query() {
       __i => loop {
           match __i.next() {
               None => break,
               Some(mut __value) => {
                   let i = __value;
                   {
                       // for loop body goes here
                   }
               }
           }
       }
   }
 
   It's done with a match statement like this specifically to make the &mut 
   borrow of the iterator end after the for loop.
 
   Great, didn't know that. Last time I asked (on StackOverflow, I think;
   that was some time ago though) there was no `match`. Then from that
   code alone it does look like a bug to me. Note that it refers to
   `st.set_string("%e%")` and the `for` loop ten lines above, that is, the
   first one. If the mutable borrow of the iterator isn't escaping the loop,
   then this error should not appear, right?
 
 The errors you printed are slightly malformed, and you only listed some of 
 your code. Is this a database library you're writing yourself? My best guess 
 here is that you've accidentally used the wrong lifetime on your 
 `execute_query()` method, tying the lifetime of the `self` reference to a 
 lifetime on the value itself. Something like this:
 
  impl<'a> Statement<'a> {
      pub fn execute_query(&'a mut self) { ... }
  }
 
  By using `'a` on `&'a mut self` here, you've explicitly tied the reference to 
  the lifetime of the value. This causes the mutable reference to live much 
  longer than you expected it to, which means it's still alive when you try to 
  subsequently borrow it on your call

Re: [rust-dev] The meaning of 'box ref foo' ?

2014-05-30 Thread Kevin Ballard
Not only this, but match patterns are also extremely often used intentionally 
to move values. The trivial example is something like

match some_opt_val {
    Some(x) => do_something_with(x),
    None => default_behavior()
}

By-ref matching is actually the more infrequent type of matching in my 
experience.
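Both styles side by side, sketched in modern Rust (match ergonomics now let you match on a reference instead of writing `ref`):

```rust
fn main() {
    // By-move: the String is moved out of the Option into `s`.
    let opt = Some("hello".to_string());
    match opt {
        Some(s) => assert_eq!(s.len(), 5), // `opt` can no longer be used
        None => {}
    }

    // By-ref: matching on a reference only borrows, so the original
    // value stays usable afterwards.
    let opt2 = Some("world".to_string());
    match &opt2 {
        Some(s) => assert_eq!(s.len(), 5),
        None => {}
    }
    assert!(opt2.is_some());
}
```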

-Kevin

On May 30, 2014, at 9:05 AM, Benjamin Striegel ben.strie...@gmail.com wrote:

 What you're overlooking is that patterns are used for more than just `match` 
 expressions. They can also be used in both assignment statements and in 
 function/closure signatures. For example, note that `x` and `y` are the same 
 type in the following program:
 
 fn main() {
 let ref x = 3;
 let y = 3;
 foo(x);
 foo(y);
 }
 
 fn foo(x: int) {
 println!({:i}, *x);
 }
 
 
 Removing the `ref` keyword and making patterns reference by default would 
 make `let x = 3;` declare a reference to an integer. Then you'd need a new 
 keyword to express when you don't want this, and you're back at square one.
 
 
 On Fri, May 30, 2014 at 9:56 AM, Emmanuel Surleau 
 emmanuel.surl...@gmail.com wrote:
 I think the 'ref' keyword removal is a very good idea. It has bitten
 me several times, and the idea that pattern matching something
 essentially performs a side effect (moving the value) leaves me
 uncomfortable.
 
 Cheers,
 
 Emm


Re: [rust-dev] How to find Unicode string length in rustlang

2014-05-30 Thread Kevin Ballard
This is a very long bikeshed about something for which there's no evidence of 
an actual problem. I propose that we terminate this thread now.

If you believe that .len() needs to be renamed, please go gather evidence 
that's compelling enough to warrant breaking tradition with practically every 
programming language out there (e.g. that strings have a defined length that 
can be queried using that term). Until then, this thread is serving no purpose.

-Kevin



Re: [rust-dev] Detection of early end for TakeIterator

2014-05-30 Thread Kevin Ballard
I suspect a more generally interesting solution would be a Counted iterator 
adaptor that keeps track of how many non-None values it's returned from next(). 
You could use this to validate that your Take iterator returned the expected 
number of values.

pub struct Counted<T> {
    iter: T,
    /// Incremented by 1 every time `next()` returns a non-`None` value
    pub count: uint
}

impl<A, T: Iterator<A>> Iterator<A> for Counted<T> {
    fn next(&mut self) -> Option<A> {
        match self.iter.next() {
            x@Some(_) => {
                self.count += 1;
                x
            }
            None => None
        }
    }

    fn size_hint(&self) -> (uint, Option<uint>) {
        self.iter.size_hint()
    }
}

// plus various associated traits like DoubleEndedIterator
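A modern-Rust rendering of the same adaptor, for anyone reading this archive today (the `Counted` name and fields mirror the sketch above; only the syntax differs):

```rust
// Wraps any iterator and counts how many non-None values it has yielded.
struct Counted<I> {
    iter: I,
    count: usize,
}

impl<I: Iterator> Iterator for Counted<I> {
    type Item = I::Item;

    fn next(&mut self) -> Option<I::Item> {
        let x = self.iter.next();
        if x.is_some() {
            self.count += 1;
        }
        x
    }
}

fn main() {
    // Take 5 from an iterator that only has 3 elements; detect the early
    // end by comparing count to the number requested.
    let mut it = Counted { iter: (0..3).take(5), count: 0 };
    let collected: Vec<i32> = it.by_ref().collect();
    assert_eq!(collected, vec![0, 1, 2]);
    assert!(it.count < 5); // underlying iterator ran dry early
}
```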

-Kevin

On May 30, 2014, at 9:31 AM, Andrew Poelstra apoels...@wpsoftware.net wrote:

 Hi guys,
 
 
 Take is an iterator adaptor which cuts off the contained iterator after
 some number of elements, thereafter always returning None.
 
 I find that I need to detect whether I'm getting None from a Take
 iterator because I've read all of the elements I expected or because the
 underlying iterator ran dry unexpectedly. (Specifically, I'm parsing
 some data from the network and want to detect an early EOM.)
 
 
 This seems like it might be only me, so I'm posing this to the list: if
 there was a function `Take::is_done(&self) -> bool`, which returned whether
 or not the Take had returned as many elements as it could, would that be
 generally useful?
 
 I'm happy to submit a PR but want to check that this is appropriate for
 the standard library.
 
 
 
 Thanks
 
 Andrew
 
 
 
 -- 
 Andrew Poelstra
 Mathematics Department, University of Texas at Austin
 Email: apoelstra at wpsoftware.net
 Web:   http://www.wpsoftware.net/andrew
 
 If they had taught a class on how to be the kind of citizen Dick Cheney
 worries about, I would have finished high school.   --Edward Snowden
 


Re: [rust-dev] How to find Unicode string length in rustlang

2014-05-29 Thread Kevin Ballard
On May 28, 2014, at 11:37 PM, Aravinda VK hallimanearav...@gmail.com wrote:

 I wonder if chars() could be made available on String itself, so that we can 
 avoid writing as_slice().chars()

This is a temporary issue. Once DST lands we will likely implement `Deref<str>` 
for String, which will make all str methods work transparently on String.
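This is indeed how it turned out: in modern Rust, `String` implements `Deref` with `Target = str`, so `str` methods apply to a `String` directly:

```rust
fn main() {
    let s = String::from("héllo");
    // Deref coercion: chars() and len() are str methods, but they work
    // on String without an explicit as_slice()/as_str() call.
    assert_eq!(s.chars().count(), 5); // code points
    assert_eq!(s.len(), 6);           // bytes: 'é' is two bytes in UTF-8
}
```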

-Kevin



Re: [rust-dev] cannot borrow `st` as mutable more than once at a time

2014-05-29 Thread Kevin Ballard
On May 29, 2014, at 11:22 AM, Vladimir Matveev dpx.infin...@gmail.com wrote:

 Hi, Christophe,
 
 Won't wrapping the first `for` loop into curly braces help? I suspect
 this happens because of the `for` loop desugaring, which kind of leaves
 the iterator created by `execute_query()` in scope (not really, but
 only for the borrow checker).

It shouldn't.

The for-loop desugaring looks like

match &mut st.execute_query() {
    __i => loop {
        match __i.next() {
            None => break,
            Some(mut __value) => {
                let i = __value;
                {
                    // for loop body goes here
                }
            }
        }
    }
}

It's done with a match statement like this specifically to make the &mut 
borrow of the iterator end after the for loop.

-Kevin



Re: [rust-dev] How to find Unicode string length in rustlang

2014-05-28 Thread Kevin Ballard
It's .len() because slicing and other related functions work on byte indexes.

We've had this discussion before in the past. People expect there to be a 
.len(), and the only sensible .len() is byte length (because char length is not 
O(1) and not appropriate for use with most string-manipulation functions).

Since Rust strings are UTF-8 encoded text, it makes sense for .len() to be the 
number of UTF-8 code units. Which happens to be the number of bytes.

-Kevin

On May 28, 2014, at 7:07 AM, Benjamin Striegel ben.strie...@gmail.com wrote:

 I think that the naming of `len` here is dangerously misleading. Naive 
 ASCII-users will be free to assume that this is counting codepoints rather 
 than bytes. I'd prefer the name `byte_len` in order to make the behavior here 
 explicit.
 
 
 On Wed, May 28, 2014 at 5:55 AM, Simon Sapin simon.sa...@exyr.org wrote:
 On 28/05/2014 10:46, Aravinda VK wrote:
 Thanks. I didn't know about char_len.
 `unicode_str.as_slice().char_len()` is giving number of code points.
 
  Sorry for the confusion, I was referring to codepoints as characters in my
  mail. char_len gives the correct output for my requirement. I have
  written a javascript script to convert from string length to grapheme
  cluster length for the Kannada language.
 
 Be careful, JavaScript’s String.length counts UCS-2 code units, not code 
 points…
 
 
 -- 
 Simon Sapin


Re: [rust-dev] How to find Unicode string length in rustlang

2014-05-28 Thread Kevin Ballard
Breaking with established convention is a dangerous thing to do. Being too 
opinionated (regarding opinions that deviate from the norm) tends to put people 
off the language unless there's a clear benefit to forcing the alternative 
behavior.

In this case, there's no compelling benefit to naming the thing .byte_len() 
over merely documenting that .len() is in code units. Everything else on 
strings that doesn't explicitly say char is in code units too, so it's sensible 
that .len() is too. But having strings that don't have an inherent length is 
confusing to anyone who hasn't already memorized this difference.

Today we only need to teach the simple concept that strings are utf-8 encoded, 
and the corresponding notion that all of the accessor methods on strings 
(including indexing using []) use code units unless they specify otherwise 
(e.g. unless they contain the word char).

-Kevin

On May 28, 2014, at 10:54 AM, Benjamin Striegel ben.strie...@gmail.com wrote:

  People expect there to be a .len()
 
 This is the assumption that I object to. People expect there to be a .len() 
 because strings have been fundamentally broken since time immemorial. Make 
 people type .byte_len() and be explicit about their desire to index via code 
 units.
 
 
 On Wed, May 28, 2014 at 1:12 PM, Kevin Ballard ke...@sb.org wrote:
 It's .len() because slicing and other related functions work on byte indexes.
 
 We've had this discussion before in the past. People expect there to be a 
 .len(), and the only sensible .len() is byte length (because char length is 
 not O(1) and not appropriate for use with most string-manipulation functions).
 
 Since Rust strings are UTF-8 encoded text, it makes sense for .len() to be 
 the number of UTF-8 code units. Which happens to be the number of bytes.
 
 -Kevin
 
 On May 28, 2014, at 7:07 AM, Benjamin Striegel ben.strie...@gmail.com wrote:
 
 I think that the naming of `len` here is dangerously misleading. Naive 
 ASCII-users will be free to assume that this is counting codepoints rather 
 than bytes. I'd prefer the name `byte_len` in order to make the behavior 
 here explicit.
 
 
 On Wed, May 28, 2014 at 5:55 AM, Simon Sapin simon.sa...@exyr.org wrote:
 On 28/05/2014 10:46, Aravinda VK wrote:
 Thanks. I didn't know about char_len.
 `unicode_str.as_slice().char_len()` is giving number of code points.
 
 Sorry for the confusion, I was referring codepoint as character in my
 mail. char_len gives the correct output for my requirement. I have
 written javascript script to convert from string length to grapheme
 cluster length for Kannada language.
 
 Be careful, JavaScript’s String.length counts UCS-2 code units, not code 
 points…
 
 
 -- 
 Simon Sapin


Re: [rust-dev] How to find Unicode string length in rustlang

2014-05-28 Thread Kevin Ballard
On May 28, 2014, at 11:55 AM, Benjamin Striegel ben.strie...@gmail.com wrote:

  Being too opinionated (regarding opinions that deviate from the norm) tends 
  to put people off the language unless there's a clear benefit to forcing 
  the alternative behavior.
 
 We have already chosen to be opinionated by enforcing UTF-8 in our strings. 
 This is an extension of that break with tradition.

There's no clear tradition regarding strings. Some languages treat strings as 
just blobs of binary data with no associated encoding (and obviously, operate 
on bytes). Some languages use an associated encoding with every string, but 
those are pretty rare. Some languages, such as JavaScript and Obj-C, use UCS-2 
(well, Obj-C tries to be UTF-16 but all of its accessors that operate on 
characters actually operate on UTF-16 code units, which is effectively 
equivalent to UCS-2).

  Today we only need to teach the simple concept that strings are utf-8 
  encoded
 
 History has shown that understanding Unicode is not a simple concept. Asking 
 for the length of a Unicode string is not a well-formed question, and we 
 must express this in our API. I also don't agree with accessor functions that 
 work on code units without warning, and for this reason I strongly disagree 
 with supporting the [] operator on strings.

Unicode is not a simple concept. UTF-8 on the other hand is a pretty simple 
concept. And string accessors that operate at the code unit level are very 
common (in fact, I can't think of a single language that doesn't operate on 
code units by default[1][2]). Pretty much the only odd part about Rust's 
behavior here is that the slicing methods (with the exception of slice_chars()) 
will fail if the byte index isn't on a character boundary, but that's a natural 
extension of the fact that Rust strings are guaranteed to be valid utf-8. And 
it's unrelated to the naming (even if it were called .byte_slice() it would 
still fail with the same input; and honestly, .byte_slice() looks like it will 
return a &[u8]).

Of course, we haven't mentioned .byte_slice() before, but if you're going to 
rename .len() to .byte_len() you're going to have to add .byte_ prefixes to all 
of the other methods that take byte indexes.

In any case, the core idea here is that .len() returns the length of the 
string. And the length is the number of code units. This matches the behavior 
of other languages.

-Kevin

[1]: Even Haskell can be said to operate on code units, as its built-in string 
is a linked list of UTF-32 characters, which means the code unit is the 
character. Although I don't know offhand how Data.Text or Data.ByteString work.

[2]: Python 2.7 operates on bytes, but I just did some poking around in Python3 
and it seems to use characters for length and indexing. I don't know what the 
internal representation of a Python3 string is, though, so I don't know if 
they're using O(n) operations, or if they're using UTF-16/UTF-32 internally as 
necessary.
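The three measures being debated can be checked directly; a minimal sketch in modern Rust (the 2014 API spelled these .len(), .char_len(), and .slice(), but the semantics are the same):

```rust
fn main() {
    let s = "héllo"; // 'é' occupies two bytes in UTF-8
    assert_eq!(s.len(), 6);           // UTF-8 code units (bytes)
    assert_eq!(s.chars().count(), 5); // Unicode scalar values
    // Slicing is by byte index and fails off a character boundary:
    assert_eq!(&s[0..3], "hé");       // byte 3 is a valid boundary
    assert!(!s.is_char_boundary(2));  // byte 2 falls inside 'é'
}
```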

 On Wed, May 28, 2014 at 2:42 PM, Kevin Ballard ke...@sb.org wrote:
 Breaking with established convention is a dangerous thing to do. Being too 
 opinionated (regarding opinions that deviate from the norm) tends to put 
 people off the language unless there's a clear benefit to forcing the 
 alternative behavior.
 
 In this case, there's no compelling benefit to naming the thing .byte_len() 
 over merely documenting that .len() is in code units. Everything else that 
 doesn't explicitly say char on strings is in code units too, so it's 
 sensible that .len() is too. But having strings that don't have an inherent 
 length is confusing to anyone who hasn't already memorized this difference.
 
 Today we only need to teach the simple concept that strings are utf-8 
 encoded, and the corresponding notion that all of the accessor methods on 
 strings (including indexing using []) use code units unless they specify 
 otherwise (e.g. unless they contain the word char).
 
 -Kevin
 
 On May 28, 2014, at 10:54 AM, Benjamin Striegel ben.strie...@gmail.com 
 wrote:
 
  People expect there to be a .len()
 
 This is the assumption that I object to. People expect there to be a .len() 
 because strings have been fundamentally broken since time immemorial. Make 
 people type .byte_len() and be explicit about their desire to index via code 
 units.
 
 
 On Wed, May 28, 2014 at 1:12 PM, Kevin Ballard ke...@sb.org wrote:
 It's .len() because slicing and other related functions work on byte indexes.
 
 We've had this discussion before in the past. People expect there to be a 
 .len(), and the only sensible .len() is byte length (because char length is 
 not O(1) and not appropriate for use with most string-manipulation 
 functions).
 
 Since Rust strings are UTF-8 encoded text, it makes sense for .len() to be 
 the number of UTF-8 code units. Which happens to be the number of bytes.
 
 -Kevin
 
 On May 28, 2014, at 7:07 AM, Benjamin Striegel ben.strie...@gmail.com 
 wrote:
 
 I think

Re: [rust-dev] How to find Unicode string length in rustlang

2014-05-28 Thread Kevin Ballard
On May 28, 2014, at 1:26 PM, Benjamin Striegel ben.strie...@gmail.com wrote:

  Unicode is not a simple concept. UTF-8 on the other hand is a pretty simple 
  concept.
 
 I don't think we can fully divorce these two ideas. Understanding UTF-8 still 
 implies understanding the difference between code points, code units, and 
 grapheme clusters. If we have a single unadorned `len` function, that implies 
 the existence of a default length to a UTF-8 string, which is a lie. It 
 also *fails* to suggest the existence of alternative measures of length of a 
 UTF-8 string. Finally, the choice of byte length as the default length metric 
 encourages the horrid status quo, which is the perpetuation of code that is 
 tested and works in ASCII environments but barfs as soon as anyone from a 
 sufficiently-foreign culture tries to use it. Dedicating ourselves to Unicode 
 support does us no good if the remainder of our API encourages the 
 depressingly-typical ASCII-ism that pervades nearly every other language.

Do you honestly believe that calling it .byte_len() will do anything besides 
confusing anyone who expects .len() to work, and resulting in code that looks 
any different than just using .byte_len() everywhere people use .len() today?

Forcing more verbose, annoying, unconventional names on people won't actually 
change how they process strings. It will just confuse and annoy them.

-Kevin


Re: [rust-dev] How to find Unicode string length in rustlang

2014-05-28 Thread Kevin Ballard
On May 28, 2014, at 3:24 PM, Huon Wilson dbau...@gmail.com wrote:

 Changing the names of methods on strings seems very similar how Path does not 
 implement Show (except with even stronger motivation, because strings have at 
 least 3 sensible interpretations of what the length could be).

I disagree completely.

Path does not implement Show, because of ToStr (and just generally, because 
Show can be used to convert to a string representation). This isn't a problem 
for most types, but paths are special in that a lot of people think that they 
can be represented with strings, and therefore will try to do that. Because 
Path does not implement Show it's difficult to incorrectly convert it to a 
string (it exposes methods for getting an Option<&str> which is the correct way 
to do it).

This is about preventing the user from doing something incorrectly, and forcing 
them to use the correct method.

Meanwhile, renaming .len() to .byte_len() doesn't actually prevent anything. It 
will just confuse people (and cause a lot of unnecessary typing of byte_), 
but people will still end up calling the exact same method. They did the same 
operation, they just got annoyed in the process.

It's important to note here that in most cases .len() actually is the correct 
method to call. This has been discussed before, but basically, string 
manipulation needs to use byte indexes (well, code unit indexes) to be at all 
efficient, and that's why the character-based methods have special names. This 
means that the byte-based methods are the ones we're expecting people to use. 
Renaming them doesn't change that fact.

If someone doesn't give any thought to non-ASCII text, putting byte in the 
method name isn't going to change that. And if they do give thought to 
non-ASCII text, leaving byte out of the name doesn't cause any issues.

Don't forget that renaming .byte_len() only makes sense if we rename 
.slice()/.slice_from()/.slice_to() to 
.byte_slice()/.byte_slice_from()/.byte_slice_to(). And besides being extremely 
verbose, these methods imply that they return a byte slice, or a &[u8].

-Kevin


Re: [rust-dev] How to find Unicode string length in rustlang

2014-05-28 Thread Kevin Ballard
On May 28, 2014, at 6:00 PM, Benjamin Striegel ben.strie...@gmail.com wrote:

 To reiterate, it simply doesn't make sense to ask what the length of a string 
 is. You may as well ask what color the string is, or where the string went to 
 high school, or how many times the string rode the roller coaster that one 
 year on the first day of summer vacation when the string's parents took the 
 string to that amusement park and the weather said that it was going to rain 
 so there were almost no crowds that day but then it didn't rain and all the 
 rides were open with absolutely no lines whatsoever.

As amusing as this imagery is, you're still arguing from a faulty premise, which 
is that the concept of a string has not been well-defined. The nebulous 
string, as it applies to the general category of programming languages, does 
indeed not have a well-defined length. But Rust's strings (both String and str) 
are very explicitly defined as a utf-8 encoded sequence. And when dealing with 
a sequence in a precise encoding, the natural unit to work with is the code 
unit (and this has precedent in other languages, such as JavaScript, Obj-C, 
and Go).

---

My interpretation of your arguments is that your real objection is that you 
think that calling it len() will mean people won't even think about the fact 
that there's a difference between byte length and character length, because 
they'll be too used to working with ASCII data, and that they'll write code 
that breaks when forced to confront the difference. This is true regardless of 
how len() is defined (whether it's in bytes, in UTF-16 characters, in unicode 
scalar values, etc).

My assertion is that calling the method .byte_len() will not force anyone to 
deal with non-ASCII data if they don't want to, it will only annoy everyone by 
being overly verbose, even more so when you rename .slice() to .byte_slice(), 
etc.

I also believe that renaming .slice() to .byte_slice() is unambiguously wrong, 
as the name implies that it returns &[u8] when it doesn't. And similarly, that 
renaming just .len() to .byte_len() without renaming .slice() to .byte_slice() 
is also wrong. This means you cannot rename .len() to .byte_len() without 
introducing unambiguously wrong naming elsewhere.

---

Does this accurately represent your argument? And do you have any rebuttal to 
my argument that hasn't already been said? If the answers are yes and no 
respectively, then I agree, we will have to simply live with being in 
disagreement.

 Oh and while we're belligerently bikeshedding, we should rename `to_str` to 
 `to_string` once we rename `StrBuf` to `String`. :)

We've already renamed StrBuf to String, but I agree that .to_str() makes more 
sense as .to_string(). I was assuming that would eventually get renamed, 
although I just realized that it would then conflict with StrAllocating's 
.to_string() method, which is rather unfortunate.

-Kevin



Re: [rust-dev] A few random questions

2014-05-28 Thread Kevin Ballard
On May 28, 2014, at 5:38 PM, Oleg Eterevsky o...@eterevsky.com wrote:

 4. It looks like vectors can be concatenated with + operations, but
 strings can't. Is it deliberate?

Partially. It's fallout of all the changes that strings have been going through 
lately. And [PR #14482][] reintroduces + for strings, but it hasn't been 
accepted yet.

[PR #14482]: https://github.com/mozilla/rust/pull/14482

-Kevin



Re: [rust-dev] How to find Unicode string length in rustlang

2014-05-28 Thread Kevin Ballard
On May 28, 2014, at 9:16 PM, Bardur Arantsson s...@scientician.net wrote:

 Rust:
 
  $ cat
  fn main() {
  let l = "hï".len(); // Note the accent
  println!("{:u}", l);
  }
  $ rustc hello.rs
  $ ./hello
  3
 
 No matter how defective the notion of length may be, personally I
 think that people will expect the former, but will be very surprised by
 the latter. There are certainly cases where the JavaScript version is
 wrong, but I conjecture that it works for the vast majority of cases
 that people and programs are likely to encounter.

The JavaScript version is quite wrong. Isaac points out that NFC vs NFD can 
change the result, although that's really an issue with grapheme clusters vs 
codepoints. More interestingly, JavaScript's idea of string length is wrong for 
anything outside of the BMP:

$ node
 .length
2

This is because it was designed for UCS-2 instead of UTF-16, so .length 
actually returns the number of UCS-2 code units in the string.

Incidentally, that means that JavaScript and Rust do have the same fundamental 
definition of length (which is to say, number of code units). They just have a 
different code unit. In JavaScript it's confusing because you can learn to use 
JavaScript quite well without ever realizing that it's UCS-2 code units (i.e. 
that it's not codepoints). In Rust, we're very clear that our strings are utf-8 
sequences, so it should surprise nobody when the length turns out to be the 
number of utf-8 code units.

FWIW, Go uses utf-8 code units as well, and nobody seems to be confused about 
that.

-Kevin
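The divergence Kevin describes is easy to demonstrate; a sketch in modern Rust (encode_utf16 is today's API name, not the 2014 one):

```rust
fn main() {
    let s = "𝌆"; // U+1D306, a code point outside the BMP
    assert_eq!(s.chars().count(), 1);        // one Unicode scalar value
    assert_eq!(s.encode_utf16().count(), 2); // UTF-16 code units: a surrogate
                                             // pair (JavaScript's .length)
    assert_eq!(s.len(), 4);                  // UTF-8 code units (Rust's .len())
}
```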



Re: [rust-dev] include_sized_bin!(type, file)

2014-05-27 Thread Kevin Ballard
What's the use-case for this?

-Kevin

On May 27, 2014, at 3:24 AM, Tommi rusty.ga...@icloud.com wrote:

 Could we add to the standard library a macro, say 'include_sized_bin', that 
 would be similar to std::macros::builtin::include_bin except that you'd also 
 give it a sized type to return (instead of a slice of u8's) and you'd get a 
 compile time error if the size of the file is different from the size of the 
 type you specified.
 


Re: [rust-dev] StrBuf and regular expressions

2014-05-27 Thread Kevin Ballard
On May 27, 2014, at 6:05 AM, Benjamin Striegel ben.strie...@gmail.com wrote:

  The use of taking S: Str is for taking things like [S], where you want 
  to take both str and String (such as in the other post about getops()).
 
 I wouldn't be bothered with leaving the API as-is, but I don't understand 
 where this guideline is coming from. Can you elaborate?

I don't know if there's any authoritative source for this, but it's been 
suggested in the past for other APIs. The basic issue is that &[String] cannot 
be freely converted to &[&str]; you need to allocate a new Vec for that. So if 
you have a &[String] and try to call an API that takes a &[&str] then you're 
doing extra work just to satisfy the type system. But the function in question 
doesn't actually need &[&str], it just needs a slice of things it can convert 
to &str. So if it takes <S: Str> &[S] then you can hand it your &[String] 
without conversion, or you can hand it a &[&str], and it will work with both.

This doesn't necessarily mean you need to use <S: Str> &[S] everywhere. But 
it's a nice thing to do if you think about it.

-Kevin
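The `Str` trait discussed here is long gone; in modern Rust the same pattern is written with `AsRef<str>`. A hedged sketch of the `<S: Str> &[S]` idea (the function name is made up for illustration):

```rust
// Accept any slice of string-like values instead of forcing &[String]:
// both &[String] and &[&str] satisfy S: AsRef<str>.
fn count_nonempty<S: AsRef<str>>(opts: &[S]) -> usize {
    opts.iter().filter(|s| !s.as_ref().is_empty()).count()
}

fn main() {
    let owned = vec![String::from("b"), String::from("x")];
    let borrowed = ["b", "x", ""];
    assert_eq!(count_nonempty(&owned), 2);    // a slice of Strings...
    assert_eq!(count_nonempty(&borrowed), 2); // ...or of &strs, no conversion
}
```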


Re: [rust-dev] include_sized_bin!(type, file)

2014-05-27 Thread Kevin Ballard
Apparently include_bin!() returns a binary literal, not an actual slice. This 
eventually gets treated as a &[u8] during type-checking, but it's apparently 
treated specially by trans.

Given this, I think creating an include_sized_bin!() would require changing 
LitBinary to include an optional size, and changing typeck to treat it as an 
array instead of a slice when it has a size. Certainly doable, but not quite as 
trivial as just defining a new syntax extension.

I also wonder if perhaps this would be better done as just a variant of 
include_bin!(), e.g. `include_bin!(lut_stuff.bin, 1000)` or 
`include_bin!(lut_stuff.bin, size=1000)`.

-Kevin

On May 27, 2014, at 10:11 AM, Tommi Tissari rusty.ga...@icloud.com wrote:

 I would use it for large, immutable, static lookup tables. I could use 
 'include_bin' and wrap any use of it in a 'std::mem::transmute', but I'd 
 rather not. Also, I'd appreciate the sanity check for knowing the file size 
 matches the size of the lookup table's type. E.g. 
 
 static lut: [[MyStruct, ..1000], ..1000] = include_sized_bin!([[MyStruct, 
 ..1000], ..1000], "lut_stuff.bin");
 
 I've tried initializing my large lookup table using the normal fixed size 
 vector literal syntax, but it killed the compilation time and my computer ran 
 out of memory.
 
 On 27 May 2014, at 19:50, Kevin Ballard ke...@sb.org wrote:
 
 What's the use-case for this?
 
 -Kevin
 
 On May 27, 2014, at 3:24 AM, Tommi rusty.ga...@icloud.com wrote:
 
 Could we add to the standard library a macro, say 'include_sized_bin', that 
 would be similar to std::macros::builtin::include_bin except that you'd 
 also give it a sized type to return (instead of a slice of u8's) and you'd 
 get a compile time error if the size of the file is different from the size 
 of the type you specified.
 
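For reference, modern Rust ended up covering this use case without a new macro: `include_bytes!` has type `&'static [u8; N]` with N fixed at compile time, so the size check the thread asks for can be a `const` assertion. A sketch, with a literal array standing in for the file so the example is self-contained:

```rust
// include_bytes!("lut_stuff.bin") would have type &'static [u8; N]; a
// literal array stands in for the file contents here.
const DATA: &[u8; 4] = &[1, 2, 3, 4];
// Compile-time size check, analogous to the proposed include_sized_bin!():
const _: () = assert!(DATA.len() == 4);

fn main() {
    assert_eq!(DATA.len(), 4);
    assert_eq!(DATA[0], 1);
}
```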


Re: [rust-dev] box ref foo?

2014-05-27 Thread Kevin Ballard
It actually makes radius a &f32.

-Kevin

On May 27, 2014, at 10:42 AM, Tommi Tissari rusty.ga...@icloud.com wrote:

 Thanks. I had failed to realize that 'ref radius' would make 'radius' a 
 &Box<f32> value.
 
 
 On 27 May 2014, at 20:29, Oleg Eterevsky o...@eterevsky.com wrote:
 
 As far as I understand (I'm a newbie too), it means that 'radius' is the 
 reference to the value in the box.
 
 
 On Tue, May 27, 2014 at 10:27 AM, Tommi rusty.ga...@icloud.com wrote:
 What is the meaning of this 'box ref foo' syntax found in the tutorial over 
 at __http://doc.rust-lang.org/tutorial.html#references
 
 (Sorry for uglifying the link, my posts seem to get flagged as spam if they 
 contain links)
 
 In short, it's:
 
 enum Shape { Sphere(Box<f32>) }
 
 let shape = Sphere(box 1.0f32);
 let r = match shape {
    Sphere(box ref radius) => *radius
 };
 
 I thought the 'box' keyword meant: allocate on heap and wrap into a Box. 
 That doesn't make sense to me in the context of 'box ref radius'.
 


Re: [rust-dev] StrBuf and regular expressions

2014-05-27 Thread Kevin Ballard
On May 27, 2014, at 1:55 PM, Benjamin Striegel ben.strie...@gmail.com wrote:

 What I was specifically curious about is why you seem to be against the usage 
 of the `Str` trait as a bound on the argument to the `captures` method.

It adds unnecessary complexity, bloats the crate metadata with the function 
AST, and complicates the type signature, with no real benefit. The only thing 
that taking `S: Str` for an `arg: S` argument does is allowing a String to be 
passed without calling .as_slice(). But the S: Str solution moves the String 
into the function, rather than slicing it, which means the calling function 
gives up ownership of the String. This is rarely desired, and is likely to be 
confusing.

We're also almost certainly going to implement Deref<str> on String post-DST, 
which means slicing a String is as simple as &*foo. And there's a good chance 
we'll add auto-deref+autoref for function arguments, specifically so String 
will auto-slice to str the same way ~str used to (although someone still needs 
to write up an RFC for this).

-Kevin

 On Tue, May 27, 2014 at 1:01 PM, Kevin Ballard ke...@sb.org wrote:
 On May 27, 2014, at 6:05 AM, Benjamin Striegel ben.strie...@gmail.com wrote:
 
  The use of taking S: Str is for taking things like [S], where you want 
  to take both str and String (such as in the other post about getops()).
 
 I wouldn't be bothered with leaving the API as-is, but I don't understand 
 where this guideline is coming from. Can you elaborate?
 
 I don't know if there's any authoritative source for this, but it's been 
 suggested in the past for other APIs. The basic issue is that [String] 
 cannot be freely converted to [str]; you need to allocate a new Vec for 
 that. So if you have a [String] and try to call an API that takes a [str] 
 then you're doing extra work just to satisfy the type system. But the 
 function in question doesn't actually need [str], it just needs a slice of 
 things it can convert to str. So if it takes S:Str [S] then you can hand 
 it your [String] without conversion, or you can hand it a [str], and it 
 will work with both.
 
 This doesn't necessarily mean you need to use S: Str [S] everywhere. But 
 it's a nice thing to do if you think about it.
 
 -Kevin
 
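The Deref implementation predicted in this thread did land: String implements Deref<Target = str>, so a &String coerces to &str at call sites. A small sketch (the function name is hypothetical):

```rust
// With Deref<Target = str>, &String coerces to &str automatically,
// which is the "auto-slicing" behavior this thread anticipates.
fn count_words(s: &str) -> usize {
    s.split_whitespace().count()
}

fn main() {
    let owned = String::from("fearless concurrency");
    assert_eq!(count_words(&owned), 2);  // deref coercion: &String -> &str
    assert_eq!(count_words(&*owned), 2); // the explicit reborrow form
}
```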


Re: [rust-dev] Array of heap allocated strings for opts_present?

2014-05-26 Thread Kevin Ballard
I think getopts has an old API. All the methods that take &[String] should 
probably be rewritten to be generic with <S: Str> and take &[S] instead, which 
will allow taking either a slice of Strings or a slice of &strs.

-Kevin

On May 26, 2014, at 12:16 PM, Benjamin Striegel ben.strie...@gmail.com wrote:

 I'm not familiar with the getopts module, but on the surface that behavior 
 sounds wrong to me.
 
 As for the verbosity of the repeated `to_owned` calls, this sounds like the 
 perfect application for macros:
 
 #![feature(macro_rules)]
 
  macro_rules! owned(
      ($($e:expr),*) => ([$($e.to_owned()),*])
  )
  
  fn main() {
      let x = owned!["b", "c", "d"];
  }
 
 
 On Mon, May 26, 2014 at 2:11 PM, Gulshan Singh gsingh_2...@yahoo.com wrote:
 Why does getopts::Matches::opts_present() take an array of heap allocated 
 strings? Unless I'm missing something, it doesn't seem like it needs to: 
 https://github.com/mozilla/rust/blob/7d76d0ad44e1ec203d235f22eb3514247b8cbfe5/src/libgetopts/lib.rs#L302
 
 Currently, my code to use it looks like this:
 
 if matches.opts_present(&["b".to_owned(), "x".to_owned(), "s".to_owned(), 
 "w".to_owned()]) { /* */ }
 
 1. Should the function be converted to take a list of borrowed strings?
 2. Regardless of what this function should take as an argument, is the way 
 I'm constructing a list of StrBufs the correct way to do it? It seems a bit 
 verbose.
 
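Benjamin's `owned!` macro quoted above still works in spirit; a modern `macro_rules!` version (a sketch, using today's brace syntax and `to_string`):

```rust
// Expands owned!["b", "x"] into an array of owned Strings, replacing the
// repeated .to_owned() calls at the opts_present() call site.
macro_rules! owned {
    ($($e:expr),* $(,)?) => { [$($e.to_string()),*] };
}

fn main() {
    let opts = owned!["b", "x", "s", "w"];
    assert_eq!(opts.len(), 4);
    assert_eq!(opts[0], "b");
}
```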


Re: [rust-dev] StrBuf and regular expressions

2014-05-26 Thread Kevin Ballard
captures() should not take S: Str. That's unnecessary; it should just take 
&str as it does now. The use of taking S: Str is for taking things like &[S], 
where you want to take both &str and String (such as in the other post about 
getopts()).

For the time being, the answer is to use .as_slice() explicitly. The proposed 
Deref implementation post-DST has been talked about, and seems reasonably 
likely (though no decision has been made about it).

The remaining bit to this is that, AFAIK, we don't auto-deref function 
arguments. We had auto-borrow of ~str into &str, but that doesn't apply to 
String. It has been suggested that maybe we should auto-deref+autoref function 
arguments such that String could then automatically coerce into &str, but this 
has had very little discussion so far.

My suggestion is to wait until we have DST, then file an RFC suggesting the 
autoderef behavior for function arguments (and suggesting Deref on String if we 
haven't already done that).

-Kevin

On May 26, 2014, at 12:10 PM, Benjamin Striegel ben.strie...@gmail.com wrote:

 I don't think any of these will be necessary.
 
 We already have a trait called `Str` (which is a bad name, btw) in the 
 std::str module. This trait has exactly one method: `as_slice`. This trait is 
 already implemented on both `str` and `StrBuf` (the former in std::str, the 
 latter in std::strbuf). All that would need to be done is to make the 
 `captures` method generic on any type that implements `Str`, and then have it 
 call the `as_slice` method before doing exactly what it does today. This 
 extra operation would compile to a no-op for slices, and the usual cheap 
 slice operation for heap-allocated strings. Then you would be able to call 
 `re.captures(foo)` regardless of whether `foo` was a `str` or a `StrBuf`.
 
 
 On Mon, May 26, 2014 at 3:46 AM, Igor Bukanov i...@mir2.org wrote:
 Perhaps Rust should provide something like BorrowAsStr trait allowing
 to convert automatically to &str. &* is just too ugly...
 
 On 26 May 2014 08:58, Vladimir Matveev dpx.infin...@gmail.com wrote:
  My suspicion is that the automatic conversion will come back at some
  point, but I'm not sure.
 
  I think it will be possible to make `String` implement `Deref<str>`
  when DST land. Then it will be possible to convert from `String` to
  `str` using explicit reborrowing:
 
  let sgf_slice = &*sgf;
 
  I'm not sure this will be fully automatic when `String` is an
  arbitrary actual argument to arbitrary function, however.
 
  2014-05-26 10:36 GMT+04:00 Andrew Gallant jams...@gmail.com:
  Try using `self.sgf.as_slice()` instead.
 
  The change is necessary, AFAIK, because `~str` would automatically be
  converted to a borrowed reference without having to explicitly call the
  `as_slice` method. This doesn't happen for the StrBuf (and what is now
  String, I think) type.
 
  My suspicion is that the automatic conversion will come back at some
  point, but I'm not sure.
 
  - Andrew
 
 
  On Mon, May 26, 2014 at 2:32 AM, Urban Hafner cont...@urbanhafner.com 
  wrote:
  Hello there,
 
  I just updated the compiler (I use the git master branch) and now when I
  read in a file I get a StrBuf instead of a ~str. That is easy enough to
  change, but how do I use regular expressions now? I have the following in 
  my
  code:
 
  let re = regex!(r"SZ\[(\d+)\]");
  let captures = re.captures(self.sgf).unwrap();
 
  And it fails now because self.sgf is a StrBuf instead of a &str. Do I 
  have
  just a Rust compiler that is somewhere in between (i.e. not everything has
  been changed to StrBuf) or is this intentional? And if so, what's the best
  way to use regular expressions now?
 
  Urban
  --
  Freelancer
 
  Available for hire for Ruby, Ruby on Rails, and JavaScript projects
 
  More at http://urbanhafner.com
 


Re: [rust-dev] Array of heap allocated strings for opts_present?

2014-05-26 Thread Kevin Ballard
How is that any better than just using strings? ["a", "b", "c"] is no more 
typing than ['a', 'b', 'c'], but it doesn't require defining new traits.

-Kevin

On May 26, 2014, at 4:54 PM, Sean McArthur smcart...@mozilla.com wrote:

 Considering many options would be single characters, it'd seem nice to also 
 be able to pass a slice of chars. It doesn't look like char implements Str. 
 getopts could define a new trait, Text or something, for all 3...
 
 
 On Mon, May 26, 2014 at 2:56 PM, Kevin Ballard ke...@sb.org wrote:
 I think getopts has an old API. All the methods that take [String] should 
 probably be rewritten to be generic with S: Str and take [S] instead, 
 which will allow taking either a slice of Strings or a slice of str's.
 
 -Kevin
 
 On May 26, 2014, at 12:16 PM, Benjamin Striegel ben.strie...@gmail.com 
 wrote:
 
 I'm not familiar with the getopts module, but on the surface that behavior 
 sounds wrong to me.
 
 As for the verbosity of the repeated `to_owned` calls, this sounds like the 
 perfect application for macros:
 
 #![feature(macro_rules)]
 
 macro_rules! owned(
 ($($e:expr),*) => ([$($e.to_owned()),*])
 )
 
 fn main() {
 let x = owned!["b", "c", "d"];
 }
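For readers on current Rust: the feature gate is gone and macro rules use `=>` with a trailing semicolon; a sketch of the same macro as it would be written today (the `owned_demo` wrapper is illustrative, not part of the original message):

```rust
// Modern spelling of the `owned!` macro from the message above:
// expands a list of string literals into an array of owned Strings.
macro_rules! owned {
    ($($e:expr),*) => { [$($e.to_owned()),*] };
}

// Illustrative helper showing the macro in use.
fn owned_demo() -> [String; 3] {
    owned!["b", "c", "d"]
}
```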
 
 
 On Mon, May 26, 2014 at 2:11 PM, Gulshan Singh gsingh_2...@yahoo.com wrote:
 Why does getopts::Matches::opts_present() take an array of heap allocated 
 strings? Unless I'm missing something, it doesn't seem like it needs to: 
 https://github.com/mozilla/rust/blob/7d76d0ad44e1ec203d235f22eb3514247b8cbfe5/src/libgetopts/lib.rs#L302
 
 Currently, my code to use it looks like this:
 
 if matches.opts_present(["b".to_owned(), "x".to_owned(), "s".to_owned(), 
 "w".to_owned()]) { /* */ }
 
 1. Should the function be converted to take a list of borrowed strings?
 2. Regardless of what this function should take as an argument, is the way 
 I'm constructing a list of StrBufs the correct way to do it? It seems a bit 
 verbose.
 


Re: [rust-dev] Array of heap allocated strings for opts_present?

2014-05-26 Thread Kevin Ballard
Sure, that seems like a pretty easy job to tackle. You should audit the whole 
getopts API to see where it uses String inappropriately. Any &[String] 
parameters should probably be &[S] with <S: Str>, and any bare String 
parameters (if any) should probably be &str.

-Kevin

On May 26, 2014, at 6:29 PM, Gulshan Singh gsingh2...@gmail.com wrote:

 On Mon, May 26, 2014 at 2:56 PM, Kevin Ballard ke...@sb.org wrote:
 All the methods that take &[String] should probably be rewritten to be 
 generic with <S: Str> and take &[S] instead, which will allow taking either a 
 slice of Strings or a slice of &str's.
 
 I've been wanting to contribute to Rust for a while. This seems like the 
 right thing to do and I don't think it's a hard change. Should I go ahead and 
 make it?
 
 -- 
 Gulshan Singh
 University of Michigan, Class of 2015
 College of Engineering, Computer Science Major
 guls...@umich.edu | 248.961.6317
 Alternate E-mail: gsingh2...@gmail.com

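The generic bound proposed in this thread corresponds to today's `AsRef<str>`; a sketch of such an API (a hypothetical helper, not the real getopts signature):

```rust
// Hypothetical helper in the spirit of the proposed `<S: Str>` bound:
// generic over anything string-like, so callers can pass &["a", "b"]
// or a slice of Strings without any .to_owned() noise.
fn any_opt_present<S: AsRef<str>>(given: &[String], wanted: &[S]) -> bool {
    wanted
        .iter()
        .any(|w| given.iter().any(|g| g.as_str() == w.as_ref()))
}
```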


Re: [rust-dev] Something like generics, but with ints

2014-05-25 Thread Kevin Ballard
Matthieu's code shows subtraction. It works via addition by making the input 
the type that uses addition, and the output the base type.

-Kevin

On May 25, 2014, at 10:55 AM, Isak Andersson cont...@bitpuffin.com wrote:

 Hey, thanks for the reply!
 
 (minor correction for myself, I meant to say submatrix rather than cofactor)
 
 Using Peano numbers is quite an interesting solution..
 
 The point of it would be to make arbitrarily sized matrices that will detect 
 as
 many errors as possible at compile time..
 
 However. With Peano numbers you can really only add numbers right? Which
 makes it problematic in the case of submatrix, where you actually end up with
 a matrix that has one row and one column less than the original.
 
 I guess you could work around it somehow by adding a Prev type or something
 but then you run in to issues when you compare types that got to the same 
 dimension
 through different ways. So you'd have to like make a complicated macro or 
 something
 to find what the number represented by the Peano ish number is..
 
 
 
 On Sun, May 25, 2014 at 7:37 PM, Matthieu Monrocq 
 matthieu.monr...@gmail.com wrote:
 It's been discussed, but there is still discussion on the best way to achieve 
 this.
 
 At the moment, you should be able to get around it using Peano numbers [1]:
 
 struct Zero;
 
 struct Succ<T>;
 
 struct Matrix<T, M, N> {
 data: Vec<T>,
 }
 
 fn cofactor<T, M, N>(
 m: Matrix<T, Succ<M>, Succ<N>>,
 row: int,
 col: int
 ) -> Matrix<T, M, N>
 {
 Matrix::<T, M, N>{ data: vec!() }
 }
 
 
 Of course, I would dread seeing the error message should you need more than a 
 couple rows/columns...
 
 [1] http://www.haskell.org/haskellwiki/Peano_numbers
 
 
 On Sun, May 25, 2014 at 7:25 PM, Isak Andersson cont...@bitpuffin.com wrote:
 Hello!
 
 I was asking in IRC if something like this:
 
 fn cofactor(m: Matrix<T, R, C>, row, col: int) -> Matrix<T, R-1, C-1> {...}
 
 was possible. I quickly got the response that generics doesn't work with
 integers. So my question is, is there anyway to achieve something similar?
 
 Or would it be possible in the future to do generic instantiation based on 
 more
 than just types.
 
 Thanks!
 
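Rust later grew exactly this feature: const generics. A sketch of the submatrix operation under the assumption of a hypothetical `Matrix` type; on stable Rust the `R-1`/`C-1` arithmetic still cannot appear in the signature, so the output dimensions are separate parameters that the caller must choose correctly:

```rust
// Matrices carry their dimensions in the type, as the question asked for.
struct Matrix<const R: usize, const C: usize> {
    data: [[f64; C]; R],
}

// Stable Rust cannot write `Matrix<R - 1, C - 1>` as the return type,
// so RO/CO are passed explicitly (the caller must pass R-1 and C-1).
fn submatrix<const R: usize, const C: usize, const RO: usize, const CO: usize>(
    m: &Matrix<R, C>,
    row: usize,
    col: usize,
) -> Matrix<RO, CO> {
    let mut out = Matrix { data: [[0.0; CO]; RO] };
    // Copy every element whose row/column is not the one being dropped.
    for (oi, i) in (0..R).filter(|&i| i != row).enumerate() {
        for (oj, j) in (0..C).filter(|&j| j != col).enumerate() {
            out.data[oi][oj] = m.data[i][j];
        }
    }
    out
}
```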


Re: [rust-dev] Interaction between private fields and auto-dereferencing

2014-05-23 Thread Kevin Ballard
This looks like a legitimate problem. Have you filed an issue on the GitHub 
issues page? https://github.com/mozilla/rust/issues/new

-Kevin

On May 23, 2014, at 6:04 AM, Paulo Sérgio Almeida pssalme...@gmail.com wrote:

 Hi all, (resending from different email address; there seems to be a 
 problem with my other address) 
 
 I don't know if this has been discussed, but I noticed an unpleasant 
 interaction between private fields in the implementation of things like 
 pointer types and auto-dereferencing. 
 
 The example I noticed is: if I want to store a struct with field x inside 
 an Arc, and then auto-dereference it I get the error: 
 
 error: field `x` of struct `sync::arc::Arc` is private 
 
 A program showing this, if comments are removed, where the ugly form (*p).x 
 must be used to solve it: 
 
 --- 
 extern crate sync; 
 use sync::Arc; 
 
 struct Point { x: int, y: int } 
 
 fn main() { 
 let p = Arc::new(Point { x: 4, y: 8 }); 
 let p1 = p.clone(); 
 spawn(proc(){ 
 println!("task v1: {}", p1.y); 
 //println!("task v1: {}", p1.x); 
 println!("task v1: {}", (*p1).x); 
 }); 
 println!("v: {}", p.y); 
 //println!("v: {}", p.x); 
 println!("v: {}", (*p).x); 
 } 
 -- 
 
 The annoying thing is that a user of the pointer-type should not have to know 
 or worry about what private fields the pointer implementation contains. 
 
 A better user experience would be if, if in a context where there is no 
 corresponding public field and auto-deref is available, auto-deref is 
 attempted, ignoring private-fields of the pointer type. 
 
 If this is too much of a hack or with complex or unforeseen consequences, a 
 practical almost-solution without changing the compiler would be renaming 
 private fields in pointer implementations, like Arc, so as to minimize the 
 collision probability, e.g., use something like __x__ in arc.rs: 
 
 pub struct Arc<T> { 
 priv __x__: *mut ArcInner<T>, 
 } 
 
 Regards, 
 Paulo 
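This interaction was eventually resolved: field access now looks through `Deref`, so `p1.x` works on an `Arc` even though `Arc`'s own fields are private. A sketch with today's `std::sync::Arc` and `std::thread` (the `Point`/`arc_field_demo` names are illustrative):

```rust
use std::sync::Arc;
use std::thread;

struct Point {
    x: i32,
    y: i32,
}

fn arc_field_demo() -> (i32, i32) {
    let p = Arc::new(Point { x: 4, y: 8 });
    let p1 = Arc::clone(&p);
    // Field access auto-derefs through Arc: no (*p1).x needed.
    let handle = thread::spawn(move || p1.x);
    (handle.join().unwrap(), p.y)
}
```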


Re: [rust-dev] method on generic type parameters

2014-05-17 Thread Kevin Ballard
Rust doesn't like this because datatype() is not a method on T. It's a method 
on MySendable. There are various proposals that will fix this (including UFCS, 
and a proposal to make T::datatype() work as you just tried).

But for the time being, you basically cannot have a trait method that does not 
use Self anywhere in its type signature (which is to say, it doesn't take a 
`self` parameter, and it does not use the Self pseudo-type).

A common workaround here is to add a parameter of type Option<Self>, so you can 
pass None in order to provide the correct type info:

trait MySendable {
    fn datatype(_: Option<Self>) -> *c_void;
}

impl MySendable for int {
    fn datatype(_: Option<int>) -> *c_void { DT_INT32 }
}

fn process<T: MySendable>(data: &[T]) {
    unsafe {
        ffi_process(data.as_ptr(), data.len(), MySendable::datatype(None::<T>));
    }
}

I do have to wonder, though, how you intend to map from the type pointer back 
to the original type. Or do you not need to go in that direction? If you do, I 
think you'll need to ditch the trait and instead just use an enum:

enum MySendable<'a> {
    SendableInt(&'a [int]),
    SendableFoo(&'a [Foo]),
    ...
}

impl<'a> MySendable<'a> {
    fn datatype(&self) -> *c_void {
        match *self {
            SendableInt(_) => DT_INT32,
            SendableFoo(_) => DT_FOO,
            ...
        }
    }
}

This approach lets you produce a reverse mapping as well.

However, if you don't need the reverse mapping, then the trait approach seems 
like a reasonable way to handle things.

-Kevin

On May 17, 2014, at 7:58 AM, Noah Watkins jayh...@cs.ucsc.edu wrote:

 I'm running into a problem creating a generic wrapper around an FFI
 interface of the following form:
 
  fn ffi_process(data: *c_void, count: c_int, type: *c_void);
 
 where `data` points to an array of `count` elements of each of type
 `type`. Each `type` is represented by a constant opaque pointer to
 some fixed symbol exported by the C library and corresponding to a
 particular type (e.g. DT_INT32, DT_FLOAT32, etc...).
 
 My approach so far has been to use a trait to map Rust data types to
 data types in the library as follows:
 
 trait MySendable {
  fn datatype() -> *c_void;
 }
 
 impl MySendable for int {
  fn datatype() -> *c_void { DT_INT32 }
 }
 
 fn process<T: MySendable>(data: &[T]) {
  unsafe {
    ffi_process(data.as_ptr(), data.len(), T::datatype());
  }
 }
 
 But, Rust does not like this at all. I have seen many many references
 on github issues and SO about methods on the types, but it isn't clear
 what the current / future approach is / will be.
 
 Is this (1) a reasonable approach to the problem (even if not
 currently possible), and (2) are there other ways to bend the type
 system to my needs here?
 
 Thanks!
 Noah
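Today's Rust accepts the `T::datatype()` call the poster tried, since traits may declare associated functions without a `self` parameter. A sketch with stand-in names (`Sendable`, and a `u32` tag in place of the `*c_void` type pointer):

```rust
// An associated function with no `self` parameter: callable as T::datatype().
trait Sendable {
    fn datatype() -> u32; // stand-in for the opaque *c_void type tag
}

impl Sendable for i32 {
    fn datatype() -> u32 {
        1 // pretend 1 identifies DT_INT32
    }
}

// Returns (element count, type tag) instead of calling into C,
// but the T::datatype() call is the point being demonstrated.
fn process<T: Sendable>(data: &[T]) -> (usize, u32) {
    (data.len(), T::datatype())
}
```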


Re: [rust-dev] Why explicit named lifetimes?

2014-05-16 Thread Kevin Ballard
On May 15, 2014, at 9:54 PM, Daniel Micay danielmi...@gmail.com wrote:

 On 16/05/14 12:48 AM, Tommi wrote:
 On 2014-05-16, at 7:35, Steven Fackler sfack...@gmail.com wrote:
 
 Type annotations are not there for the compiler; they're there for people 
 reading the code. If I want to use some function I don't want to be forced 
 to read the entire implementation to figure out what the lifetime of the 
 return value is.
 
 Just to be sure... I take it you are in fact implying that yes, it is 
 possible for the compiler to figure out those lifetimes without the 
 programmer explicitly specifying them in the function signature?
 
 Lifetimes are based on subtyping, and AFAIK that presents huge problems
 for type inference. You would need to ask Niko how feasible it would be
 to infer it globally. I used to run into all sorts of problems with the
 *local* inference but it got a lot better.

I don't think global inference even is possible for this, because the lifetimes 
affect borrowing. Example:

pub trait Foo {
    fn foo(&self) -> &str;
}

This is a trait that defines a method `foo()` that returns a reference. I've 
elided the lifetimes. Now tell me, if I'm calling this method through a generic 
(or a trait object), what is the lifetime of the returned value? Is it the same 
as `self`, meaning does it need to borrow the receiver? Or is it &'static 
(which won't borrow the receiver)?

-Kevin
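The two readings distinguished above can be written out explicitly in today's syntax; a sketch with hypothetical trait names:

```rust
// Reading 1: the returned &str borrows the receiver.
trait Borrowing {
    fn name<'a>(&'a self) -> &'a str;
}

// Reading 2: the returned &str does not borrow the receiver at all.
trait Static {
    fn name(&self) -> &'static str;
}

struct S(String);

impl Borrowing for S {
    fn name<'a>(&'a self) -> &'a str {
        &self.0 // tied to the receiver's lifetime
    }
}

impl Static for S {
    fn name(&self) -> &'static str {
        "fixed" // independent of the receiver
    }
}
```

Global inference cannot pick between these two signatures for you: they impose different borrow restrictions on every caller.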


Re: [rust-dev] how to capture de-reference to ~int

2014-05-16 Thread Kevin Ballard
If you want that, you'll need to use a custom pointer type instead of ~. You 
can use ~ internally, and implement Deref/DerefMut to do the desired capturing.

pub struct Ptr<T> {
    value: ~T // assuming 0.10 here, master would be Box<T>
}

impl<T> Ptr<T> {
    pub fn new(val: T) -> Ptr<T> {
        Ptr { value: ~val }
    }
}

impl<T> Deref<T> for Ptr<T> {
    fn deref<'a>(&'a self) -> &'a T {
        // do any capturing of the dereference here
        &*self.value
    }
}

impl<T> DerefMut<T> for Ptr<T> {
    fn deref_mut<'a>(&'a mut self) -> &'a mut T {
        // do any capturing of the mutable dereference here
        &mut *self.value
    }
}

fn main() {
    let x = Ptr::new(3i);
    println!("x: {}", *x);
}

-Kevin

On May 13, 2014, at 9:16 PM, Noah Watkins jayh...@cs.ucsc.edu wrote:

 On Tue, May 13, 2014 at 2:19 PM, Alex Crichton a...@crichton.co wrote:
 The ~int type has since moved to Boxint, which will one day be a
 library type with Deref implemented on it (currently it is implemented
 by the compiler).
 
 Thanks for the note. I'm using a slightly older version that doesn't
 have BoxT, but it sounds like ~ is also implemented with the
 compiler. I was actually looking into this because I was interested in
 intercepting the deref (as well as allocate/free) for some distributed
 shared-memory experiments. Some of the safety checks rust performs
 simplifies the coherency requirements needed of the storage layer
 (e.g. expensive locking).
 
 -Noah
 
 
 On Mon, May 12, 2014 at 11:50 AM, Noah Watkins jayh...@cs.ucsc.edu wrote:
 I am trying to capture the reference to type `~int` with the following
 code. I can change it to apply to bare `int` and it works fine.
 
  #[lang="deref"]
  pub trait Deref<Result> {
     fn deref<'a>(&'a self) -> &'a Result;
  }
  
  impl Deref<~int> for ~int {
     fn deref<'a>(&'a self) -> &'a ~int {
         println!("deref caught");
         self
     }
  }
  
  fn main() {
   let x: ~int = 3;
   *x
  }
 
 Thanks,
 Noah
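In current Rust, `Deref` uses an associated `Target` type rather than a type parameter, and `Box<T>` replaced `~T`; a sketch of the same capturing wrapper updated accordingly (`Ptr` is the illustrative name from the message):

```rust
use std::ops::{Deref, DerefMut};

pub struct Ptr<T> {
    value: Box<T>, // ~T in 2014 Rust
}

impl<T> Ptr<T> {
    pub fn new(val: T) -> Ptr<T> {
        Ptr { value: Box::new(val) }
    }
}

impl<T> Deref for Ptr<T> {
    type Target = T;
    fn deref(&self) -> &T {
        // capture/instrument the immutable dereference here
        &self.value
    }
}

impl<T> DerefMut for Ptr<T> {
    fn deref_mut(&mut self) -> &mut T {
        // capture/instrument the mutable dereference here
        &mut self.value
    }
}
```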


Re: [rust-dev] Error: cannot borrow `***test` as mutable because it is also borrowed as immutable in match

2014-04-26 Thread Kevin Ballard
This fixed code is wrong. Instead of taking a reference to the borrow, it 
actually destructs it and copies the referenced Test. Since it's not keeping a 
reference anymore, that's why it works.

The issue with the original code is that test.match_fn() borrows test, and Some(mut 
borrow_test) keeps that borrow alive in the borrow_test binding. Since it's 
used in the match arm, that means the borrow is alive, which prevents 
test.test_mutable() from being called.

I don't know how to fix this code when I have no idea what you're actually 
trying to accomplish here.

-Kevin

On Apr 25, 2014, at 12:55 AM, Philippe Delrieu philippe.delr...@free.fr wrote:

 Thanks for your help. It works on the test but not in my main program.
 
 I'll try to update the test to make it works like the main program but I have 
 not yet found what make the base code different.
 
 Philippe
 
 On 24/04/2014 at 23:06, Artella Coding wrote:
 Hi, the following modified program seems to work (I am using rustc 
 0.11-pre-nightly (d35804e 2014-04-18 00:01:22 -0700) :
 
 
 **
 use std::vec::Vec;
 use std::rc::Rc;
 use std::cell::RefCell;
 
 struct Test;
 
 impl Test {
  fn match_fn<'a>(&'a self) -> Option<&'a Test> {
  None
  }
 
  fn test_mutable<'a>(&'a mut self, test: &'a mut Test) {}
 }
 
 fn TestMatchBorrow(){
  let mut viewList: Vec<~Test> = Vec::new();
 
  for ref mut test in viewList.mut_iter(){
  match test.match_fn()   {
 Some(mut borrow_test) => test.test_mutable(&mut borrow_test),
 None => {},
  }
  }
 
 }
 
 #[main]
 fn main() {
  TestMatchBorrow();
 }
 **
 
 
 
 On Thu, Apr 24, 2014 at 9:23 PM, Philippe Delrieu philippe.delr...@free.fr 
 wrote:
 Hello,
 
 I have a problem in my code and I can't find a solution. I develop a test 
 case that generate the same error. Any idea?
 
 use std::vec::Vec;
 use std::rc::Rc;
 use std::cell::RefCell;
 
 struct Test;
 
 impl Test {
 fn match_fn<'a>(&'a self) -> Option<&'a Test> {
 None
 }
 
 fn test_mutable<'a>(&'a mut self, test: &'a Test) {}
 }
 
 fn TestMatchBorrow(){
 let mut viewList: Vec<~Test> = Vec::new();
 
 for ref mut test in viewList.mut_iter(){
 match test.match_fn()   {
 Some(mut borrow_test) => test.test_mutable(borrow_test),
 None => {},
 }
 }
 
 }
 
 #[main]
 fn main() {
 TestMatchBorrow();
 }
 
 The test struct can't be changed.
 If I don't put the borrow_test in test.test_mutable(borrow_test) it compiles.
 
 The error :
 test_match.rs:22:38: 22:42 error: cannot borrow `***test` as mutable because 
 it is also borrowed as immutable
 test_match.rs:22 Some(mut borrow_test) => 
 test.test_mutable(borrow_test),
 ^~~~
 test_match.rs:21:15: 21:19 note: previous borrow of `***test` occurs here; 
 the immutable borrow prevents subsequent moves or mutable borrows of 
 `***test` until the borrow ends
 test_match.rs:21 match test.match_fn()   {
 ^~~~
 test_match.rs:24:10: 24:10 note: previous borrow ends here
 test_match.rs:21 match test.match_fn()   {
 test_match.rs:22 Some(mut borrow_test) => 
 test.test_mutable(borrow_test),
 test_match.rs:23 None => {},
 test_match.rs:24 }
 
 Philippe


Re: [rust-dev] impl num::Zero and std::ops::Add error

2014-04-10 Thread Kevin Ballard
On Apr 9, 2014, at 11:25 PM, Tommi Tissari rusty.ga...@icloud.com wrote:

 On 10 Apr 2014, at 07:55, Corey Richardson co...@octayn.net wrote:
 
 range doesn't return a forward iterator. Range<A> also implements
 DoubleEndedIterator.
 
 Ok, I didn't realize that. But it still shouldn't require Add<A, A> when all 
 it needs is a way to get to the next and previous values. 

Any such trait for this would really need to be designed expressly for Range, 
and then reimplemented for every single numeric type.

-Kevin



Re: [rust-dev] impl num::Zero and std::ops::Add error

2014-04-09 Thread Kevin Ballard
Why? Zero is the additive identity. It's only bad if you want to denote a value 
that contains zeros that doesn't support addition, but that's only bad because 
of a misconception that Zero should mean a default value when we have Default 
for that. For reference, the Zero trait lives in std::num, which should be a 
good indication that this is a property of numeric types.

AdditiveIdentity is the only reasonable alternative, but that's a mouthful of a 
name and I think changing the name to this would be more confusing. Someone who 
needs a numeric zero isn't going to go looking for AdditiveIdentity, they're 
going to look for Zero.

-Kevin

On Apr 9, 2014, at 6:29 AM, Liigo Zhuang com.li...@gmail.com wrote:
 Zero is a bad name here, it should be renamed or removed
 
 On Apr 9, 2014, at 1:20 AM, Kevin Ballard ke...@sb.org wrote:
 On Apr 7, 2014, at 1:02 AM, Tommi Tissari rusty.ga...@icloud.com wrote:
 
 On 07 Apr 2014, at 08:44, Nicholas Radford nikradf...@googlemail.com wrote:
 
 I think the original question was, why does the zero trait require the add 
 trait.
 
 If that was the original question, then my answer would be that 
 std::num::Zero requires the Add trait because of the way it is specified: 
 "Defines an additive identity element for Self". Then the question becomes: 
 "why is Zero specified like that?", and I would answer: because then you can 
 use it in generic algorithms which require their argument(s) to have an 
 additive identity. 
 
 If you want a zero value for a type that doesn't support addition, 
 std::default::Default may be a good choice to use. Semantically, that 
 actually returns the default value for a type instead of the zero value, 
 but in a type without addition, how do you define zero value?
 
 -Kevin
 
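A minimal sketch of the idea in modern syntax, assuming a local `Zero` trait in the spirit of (but not identical to) the old `std::num::Zero`:

```rust
use std::ops::Add;

// Zero as the additive identity: requires Add so that
// T::zero() + x == x is meaningful for every implementor.
trait Zero: Sized + Add<Self, Output = Self> {
    fn zero() -> Self;
}

impl Zero for i32 {
    fn zero() -> i32 {
        0
    }
}

// The kind of generic algorithm the specification enables:
// a sum needs the additive identity as its starting value.
fn sum_all<T: Zero + Copy>(xs: &[T]) -> T {
    xs.iter().fold(T::zero(), |acc, &x| acc + x)
}
```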


Re: [rust-dev] impl num::Zero and std::ops::Add error

2014-04-09 Thread Kevin Ballard
FWIW, my point about range is it relies on One being the number 1, rather than 
being the multiplicative identity. AFAIK there's nothing special about 1 in a 
ring outside of its status as a multiplicative identity. Certainly it's not 
considered some special value for addition.

As an example as to why this usage is weird, range(0f32, 10f32) actually is 
defined, and will produce [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]. Similarly, 
range(0.5f32, 10f32) is defined and will produce [0.5, 1.5, 2.5, 3.5, 4.5, 5.5, 
6.5, 7.5, 8.5, 9.5]. This is technically a mis-use of One, but it just happens 
to produce reasonable values.

Of course, if you use it on, say, a 2x2 matrix that defines Add<Matrix,Matrix>, 
defines One as {{1,0},{0,1}} (the multiplicative identity), and also has some 
arbitrary definition of Ord, then range() would operate over this matrix and 
produce very weird results. For example, range(matrix![[1,2],[3,4]], 
matrix![[10,10],[10,10]]) might, depending on the Ord definition, produce 
[matrix![[1,2],[3,4]], matrix![[2,2],[3,5]], matrix![[3,2],[3,6]], ...]. You 
can see that this is a nonsensical range (not that I think there is a way to 
define range for matrices that makes any sense).

In any case, my overall point is Zero and One are more practical names than 
AdditiveIdentity and MultiplicativeIdentity. If we wanted a proper, accurate 
numeric hierarchy, we'd use the latter. But libstd wants a practical, shallow 
hierarchy. And there's certainly nothing stopping you from defining a separate 
libnumerics that provides an accurate mathematical numeric hierarchy (this is 
probably something that would be useful to have, but libstd doesn't want it 
because it's really easy to get wrong, e.g. I believe Haskell thinks their 
numeric hierarchy was a mistake).

-Kevin

On Apr 9, 2014, at 2:10 PM, Eric Reed ecr...@cs.washington.edu wrote:

 I think part of the confusion here is that matrix addition isn't actually a 
 binary operator, but rather a family of binary operators parametrized over 
 the matrix dimensions. There's +_{2,2} for 2 x 2 matrices, +_{2,3} for 2 x 3 
 matrices, etc. Similarly, the zero matrix is actually parametrized over 
 dimensions. 0_{2,2} is different from 0_{2,3}. For any n,m: +_{n,m} has the 
 identity 0_{n,m}. If we wanted to properly represent that in Rust, we would 
 need type level naturals that we could parametrize Matrix over.
 
 Regarding the range thing, I thought for a minute that it might make sense if 
 we required Mul+One+Add+Zero to be a ring (which is the intention I think), 
 but I don't think that's actually true in general for rings (i.e. that 1 is a 
 generating set of the underlying group).
 
 
 On Wed, Apr 9, 2014 at 1:42 PM, Kevin Ballard ke...@sb.org wrote:
 The number 0 is the additive identity for numbers. But informally, the 
 additive identity for other things can be called zero without problem. 
 Heck, even the wikipedia page on Additive Identity uses this example for 
 groups:
 
  Let (G, +) be a group and let 0 and 0' in G both denote additive 
  identities, so for any g in G,
 
  0 + g = g = g + 0 and 0' + g = g = g + 0'
  It follows from the above that
 
  0' = 0' + 0 = 0
 
 Look at that, an additive identity for something other than a number, and 
 zero (0) is used to denote this additive identity.
 
 The only issue comes in when you define addition in multiple different ways 
 for a single type. Of course, right now I believe compiler bugs prevent you 
 from actually using multiple implementations of Add with different type 
 parameters for a given type, so this isn't actually a problem right now. And 
 when that bug is fixed, it's still reasonable to consider Zero to be the 
 additive identity for any addition where the receiver type is the right-hand 
 side of the addition. In other words, if you define Adduint, Matrix for 
 Matrix, then the additive identity here is Zero::for uint, not Zero::for 
 Matrix.
 
 Regarding You can't assign a zero to a 2x2 matrix, additive identity does 
 not require the ability to assign. And this is only a problem when 
 considering addition between disparate types. If you consider matrix addition 
 (e.g. 2x2 matrix + 2x2 matrix) then you certainly can assign the additive 
 identity back to one of the matrix values.
 
 let m: Matrix = Zero::zero();
 
 looks fine to me. It produces a matrix m that, when added to any other Matrix 
 m', produces the same matrix m'. This is presumably a Matrix where every 
 element is 0. But again, this only makes sense if you've actually defined 
 AddMatrix,Matrix for Matrix.
 
 Regardless, we've already made the decision not to go down numeric type 
 hierarchy hell. We're trying to keep a reasonable simple numeric hierarchy. 
 And part of that means using straightforward lay-person terms instead of 
 perhaps more precise mathematical names. As such, we have std::num::Zero as 
 the additive identity and std::num::One as the multiplicative identity.
 
 If you really want to complain about

Re: [rust-dev] impl num::Zero and std::ops::Add error

2014-04-09 Thread Kevin Ballard
On Apr 9, 2014, at 9:50 PM, Tommi Tissari rusty.ga...@icloud.com wrote:

 On 10 Apr 2014, at 00:22, Kevin Ballard ke...@sb.org wrote:
 
 FWIW, my point about range is it relies on One being the number 1, rather 
 than being the multiplicative identity. AFAIK there's nothing special about 
 1 in a ring outside of its status as a multiplicative identity. Certainly 
 it's not considered some special value for addition.
 
 Another problem with std::iter::range is that it requires too much from its 
 argument type A by saying A must implement Add<A, A> while it only returns a 
 forward iterator.
 
 Perhaps, in order to make a more sensible implementation of iter::range, a 
 new concept, a trait, is needed to be able to specify that a certain type T 
 implements a method 'increment' that modifies a variable of type T from value 
 x to value y such that:
 1) x < y
 2) there is no valid value z of type T satisfying x < z < y
 
 For integral types there would an implementation of this trait in stdlib with 
 'increment' doing x += 1;
 
 Then, a natural extension to this trait would be a trait that has a method 
 'advance(n: uint)' that would, at constant time, conceptually call the 
 'increment' method n times.
 
 Then there would also be a 'decrement' method for going the other direction.
 
 There probably needs to be some other use cases for this new trait to carry 
 its weight though.

This trait would disallow range(0f32, 10f32) because there are quite a lot of 
valid values z of type f32 satisfying 0f32 < z < 1f32.

-Kevin
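A sketch of the proposed successor trait, with hypothetical names (`Increment`, `advance_by`) that are not an actual std API:

```rust
// `increment` moves x to the next value, with nothing in between
// (condition 2 from the message: no z with x < z < y).
trait Increment {
    fn increment(&mut self);
}

impl Increment for i32 {
    fn increment(&mut self) {
        *self += 1; // for integers, the successor is simply x + 1
    }
}

// The proposed `advance(n)` extension, written naively in O(n).
fn advance_by(mut x: i32, n: u32) -> i32 {
    for _ in 0..n {
        x.increment();
    }
    x
}
```

As the reply notes, no such impl could exist for f32: between any two floats in range there are many more floats.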


Re: [rust-dev] impl num::Zero and std::ops::Add error

2014-04-08 Thread Kevin Ballard
On Apr 7, 2014, at 1:02 AM, Tommi Tissari rusty.ga...@icloud.com wrote:

 On 07 Apr 2014, at 08:44, Nicholas Radford nikradf...@googlemail.com wrote:
 
 I think the original question was, why does the zero trait require the add 
 trait.
 
 If that was the original question, then my answer would be that 
 std::num::Zero requires the Add trait because of the way it is specified: 
 "Defines an additive identity element for Self". Then the question becomes: 
 "why is Zero specified like that?", and I would answer: because then you can 
 use it in generic algorithms which require their argument(s) to have an 
 additive identity. 

If you want a zero value for a type that doesn't support addition, 
std::default::Default may be a good choice to use. Semantically, that actually 
returns the default value for a type instead of the zero value, but in a 
type without addition, how do you define zero value?

-Kevin


Re: [rust-dev] Reminder: ~[T] is not going away

2014-04-02 Thread Kevin Ballard
On Apr 2, 2014, at 8:35 AM, Alex Crichton a...@crichton.co wrote:

 As a concrete example, I'll take the read_to_end() method on io's Reader 
 trait.
 This type must use a Vec<T> internally to read data into the vector, but it 
 will
 return a ~[T] because the contents are conceptually frozen after they have 
 been
 read.

This concrete example is great, because it precisely illustrates a major 
objection I have to returning ~[T].

Reader.read_to_end() internally uses a 64k-byte vector. It reserves 64k bytes, 
then pushes onto this vector until it hits EOF. Every time it fills up the 64k 
capacity it reserves another chunk and keeps reading (this, btw, is I think 
almost certainly unintended behavior and is fixed by #13127, which changes it 
to always keep 64k of space available for each read rather than potentially 
requesting smaller and smaller reads). Note that because it uses 
reserve_at_least() it may actually have more than 64k available. When EOF is 
reached, this vector is returned to the caller.

The problem I have with returning ~[T] here is that both choices for how to 
deal with this wasted space are terrible:

1. Shrink-to-fit before returning. If I'm going to keep the vector around for a 
long time this is a good idea, but if I'm just going to process the vector and 
throw it away, the reallocation was completely unnecessary.
2. Convert to ~[T] without shrinking. The caller has no way to know about the 
potentially massive amount of wasted space. If I'm going to just process the 
vector and throw it away that's fine, but if I'm going to keep it around for a 
while then this is terrible.

The only reasonable solution is to return the Vec<T> and let the caller decide 
if they want to shrink-to-fit or not.

-Kevin Ballard
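This is where the API eventually landed: today's `read_to_end` appends into a caller-supplied `Vec<u8>`, and the caller decides whether to shrink. A small sketch of that decision (the buffer size and `shrink_demo` name are illustrative):

```rust
// The caller sees the real capacity and opts into shrink_to_fit()
// only when the buffer will be kept around long-term.
fn shrink_demo() -> (usize, usize) {
    let mut buf: Vec<u8> = Vec::with_capacity(64 * 1024); // big read buffer
    buf.extend_from_slice(b"hello"); // only 5 bytes actually arrived
    let before = buf.capacity();
    buf.shrink_to_fit(); // the caller's choice, not the library's
    (before, buf.capacity())
}
```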


Re: [rust-dev] Reminder: ~[T] is not going away

2014-04-02 Thread Kevin Ballard
On Apr 2, 2014, at 3:01 PM, Huon Wilson dbau...@gmail.com wrote:

 On 03/04/14 08:54, Patrick Walton wrote:
 
 What about strings? Should we be using `StrBuf` as well?
 
 I don't see why not. The same arguments apply.

I agree. I was actually quite surprised to see that the type was named StrBuf, 
I assumed it was going to be Str just as Vec is not VecBuf.

I'm in full agreement with Huon on this matter. The standard libraries should 
return VecT instead of ~[T] in pretty much every case (the only real 
exception I can think of is Vec~[T] because of the ability to convert to 
Vec[T] or [T]] for free). Similarly I think we should be returning StrBuf 
instead of ~str in all cases. And finally, I think we should just name it Str 
instead of StrBuf.

If developers want to use ~[T] and ~str in their own code, that's fine, but the 
standard libraries should err on the side of preserving information (e.g. 
capacity) and providing a consistent experience. If there's one thing I really 
want to avoid above all else, it's confusing people about whether they should 
be using ~[T] or Vec<T>, because some standard library code uses one and some 
code uses the other.

-Kevin


Re: [rust-dev] 0.10 prerelease testing

2014-04-02 Thread Kevin Ballard

On Apr 2, 2014, at 2:08 PM, Simon Sapin simon.sa...@exyr.org wrote:

 On 02/04/2014 18:43, Corey Richardson wrote:
 On Wed, Apr 2, 2014 at 1:34 PM, Steve Klabnik st...@steveklabnik.com wrote:
 I compiled from source just yesterday, but everything's been going 
 swimmingly!
 
 I just have one comment on 0.10: It seems like println was removed
 from the prelude. While I can totally appreciate that most people will
 use println!, which is automatically use-able, it _is_ making my
 'hello world' examples significantly more complex, since basically
 every one of them needs to either import println or use println!("{}",
 foo);
 
 I'm not sure if this is a good or bad thing, just wanted to raise that
 as a possible issue.
 
 
 It has been raised, as an extension to the macro, that invocation with
 a single, non-string literal, could expand into `println!("{}",
 $that_arg)` rather than requiring the `{}`.
 
 This sounds even better than having both println() and println!() (in the 
 prelude) with non-obvious differences.

This was discussed a while ago. I am very strongly opposed to this change. The 
primary reason being that

println!("hello world");

and

let s = "hello world";
println!(s);

should have the same semantics. I don't believe we have any precedent right 
now for a semantic behavior change when using an identifier in place of an 
expression. Similarly,

println!("hello world");

and

println!("hello " + "world");

should behave the same. As with the previous, I don't believe we have any 
precedent for a semantic behavior change when replacing a constant string with 
a non-constant expression.

-Kevin Ballard


Re: [rust-dev] 0.10 prerelease testing

2014-04-02 Thread Kevin Ballard
On Apr 2, 2014, at 10:14 PM, Daniel Micay danielmi...@gmail.com wrote:

 Perhaps we should have `print` and `println` back in the prelude and
 call these `printf!` and `printfln!`. I think it would be a lot clearer,
 as people always ask how these are different from `print` and `println`.

I would not be opposed to putting print() and println() back in the prelude, 
but printf!() and printfln!() are not good names. Our format syntax does not 
match printf(), and any attempt to use the name printf would only sow confusion.

Ultimately, though, I think things are fine as they are. In practice I haven't 
had any issue with the requirement to say println!("{}", s) if I want to print 
a variable. And most of the time it turns out I want to print more than just a 
variable anyway.

-Kevin


Re: [rust-dev] How to end a task blocked doing IO

2014-03-07 Thread Kevin Ballard
AFAIK there is no solution to this at the moment.

One proposal was to add a `close()` method to `TcpStream` that would close it 
immediately without waiting for it to go out of scope. This seems like the 
simplest solution, if someone wants to implement it.

A better (but much more complicated) solution is to have a Select functionality 
that allows for handling Channels and TcpStreams at the same time. That’s 
something a bunch of us want, but it’s more complicated to implement.

-Kevin

On Mar 7, 2014, at 3:04 PM, Rodrigo Rivas rodrigorivasco...@gmail.com wrote:

 Hello!
 
 I'm writing my first non trivial program in Rust and I'm facing now a
 blocking issue (pun intended): I have to read/write data from/to a
 TcpStream, so I create a task for reading and a task for writing.
 
 fn do_tcp_stuff(sck : TcpStream) {
let mut wtr = ~BufferedWriter::new(sck.clone());
let mut rdr = ~BufferedReader::new(sck);
 
let (inport, inchan) = Chan::new();
let (outport, outchan) = Chan::new();
 
spawn(proc() { do_tcp_write(wtr, outport); });
spawn(proc() { do_tcp_read(rdr, inchan); });
 
loop {
   // do interesting things, select!() and such
}
 
 }
 
 fn do_tcp_write(mut wtr : ~Writer, port : Port<~[u8]>) -> IoResult<()> {
loop {
let data = port.recv();
try!(wtr.write(data));
wtr.flush();
}
Ok(())
 }
 
 fn do_tcp_read(mut rdr : ~Reader, chan : Chan<~[u8]>) -> IoResult<()> {
loop {
let block = try!(rdr.read_bytes(1024));
chan.send(block);
}
Ok(())
 }
 
 And all works perfectly... until I want to close the connection and
 kill the tasks:
 
 - The do_tcp_write() function is blocked in port.recv(), so if I
 close the outchan it will finish automatically. Nice!
 - But the do_tcp_read() function is blocked in rdr.read_bytes() so
 closing inport will not affect it, unless it happens to receive some
 data.
 
 I've read that in older iterations of the library I could use linked
 tasks or something like that. But in master that seems to have
 disappeared. I tried also closing the connection, but I couldn't find
 how.
 
 Is there any way to do what I want? Or am I doing something fundamentally 
 wrong?
 
 Thank you in advance for your help!
 -- 
 Rodrigo


Re: [rust-dev] RFC: Conventions for well-behaved iterators

2014-03-04 Thread Kevin Ballard
On Mar 4, 2014, at 5:23 AM, Tommi rusty.ga...@icloud.com wrote:

 I agree with the spirit of your proposal. But I would change that first 
 clause above to read:
 
 An iterator is said to be well-behaved when its .next() method always 
 returns None if the iterator logically has no elements to iterate over.
 
 And all iterators should, by convention, be well-behaved. Otherwise it's 
 impossible to pinpoint what exactly is the bug in the following code:
 
 struct Digits {
 n: int
 }
 
 impl Iterator<int> for Digits {
 fn next(&mut self) -> Option<int> {
 self.n += 1;
 
 if self.n == 10 {
 None
 }
 else {
 Some(self.n)
 }
 }
 }
 
 fn main() {
 let mut itr = Digits { n: -1 };
 
 for i in itr { // for-loop consumes all items in itr
 println!("{}", i);
 }
 
 let sum = itr.fold(0, |a, b| a + b); // Infinite loop
 println!("{}", sum);
 }
 
 Given the current std::iter::Iterator specification [1], the implementation 
 of the .next() method of Digits is valid. Also, the fold method of Iterator 
 trait should return the initial state (the first argument) when fold is 
 called on an empty iterator, but the call gets stuck in an infinite loop 
 instead.

The bug is pretty obvious. You're using an iterator after it's been exhausted. 
This means you're relying on behavior specific to that implementation of the 
iterator.

One reason why the iterator protocol allows this is precisely to allow things 
like an iterator that yields multiple distinct sequences. And that's what your 
Digits iterator is. It yields two sequences, the first is the finite sequence 
[0, 10), and the second is the infinite sequence [11, ...]. So the fold() runs 
forever because the second sequence is infinite. If Digits only ever yielded a 
single infinite sequence [0, ...] then your fold() would still run forever. 
Alternatively, if your Digits was implemented to return multiple finite 
sequences, your fold() would work. For example

impl Iterator<int> for Digits {
fn next(&mut self) -> Option<int> {
self.n += 1;
if self.n % 10 == 0 {
None
} else {
Some(self.n)
}
}
}

This variant yields successive 9-element sequences, skipping every value 
divisible by 10.

If you do need to touch an iterator after it's been exhausted, and you want the 
post-exhaustion behavior to be always returning None, that's what .fuse() is 
for.

fn main() {
let mut itr = Digits { n: -1 }.fuse();

for i in itr {
println!("{}", i);
}

let sum = itr.fold(0, |a, b| a + b);
println!("{}", sum);
}

But of course this still has a bug, which is that the fold is now guaranteed to 
return 0, because you exhausted the iterator already.

-Kevin



Re: [rust-dev] About RFC: A 30 minute introduction to Rust

2014-03-03 Thread Kevin Ballard
On Mar 3, 2014, at 8:44 PM, Nathan Myers n...@cantrip.org wrote:

 On 03/03/2014 07:46 PM, comex wrote:
 On Mon, Mar 3, 2014 at 10:17 PM, Nathan Myers n...@cantrip.org wrote:
 It's clear that we need someone fully competent in C++ to
 code any comparisons.  In C++, one is only ever motivated to
 (unsafely) extract the raw pointer from a smart pointer when
 only a raw pointer will do.  This is exactly as likely to happen
 in Rust as in C++, and in exactly the same places.  Where it is
 needed, Rust offers no safer alternative.
 
 This is simply wrong.
 
 I assume you take issue not with the leading sentence above,
 but with those following.
 
  Most C++ code I've seen frequently uses raw
 pointers in order to pass around temporary references to objects that
 are not reference counted (or even objects that are reference counted,
 to avoid the overhead for simple copies).  ...
 
 For temporary references in C++ code, I prefer to use references.  But
 we do need actual raw pointers to managed (sub)objects as arguments to
 foreign C functions.  There, C++ and Rust coders are in the same boat.
 Both depend on the C function not to keep a copy of the unsafely-issued
 borrowed pointer.  C++ does allow a reference to last longer than the
 referent, and that's worth calling attention to.
 
 In Rust, many of the situations where C++ uses raw pointers allow use
 of borrowed pointers, which are safe and have no overhead.
 
 There are certainly cases in either language where nothing but a
 pointer will do.  The problem here is to present examples that are
 simple enough for presentation, not contrived, and where Rust has
 the clear advantage in safety and (ideally) clarity.  For such examples
 I'm going to insist on a competent C++ coder if we are not to drive
 away our best potential converts.

You seem to be arguing that C++ written correctly by a highly-skilled C++ coder
is just as good as Rust code, and therefore the inherent safety of Rust does not
give it an advantage over C++. And that's ridiculous.

Yes, it's certainly possible to write safe C++ code, and properly sticking to 
things
like shared_ptr and unique_ptr make that easier. But it still relies on you 
doing the
right thing 100% of the time, and never making a mistake and never trying to 
take a
shortcut.

Recently I've had the pleasure of dealing with a relatively new C++ codebase
(that unfortunately is stuck in C++98 due to Windows support), written by 
competent
C++ programmers. And it has memory bugs. Not only have I already identified 
multiple
memory leaks caused by unclear ownership semantics, but even now there's a 
subtle
memory corruption bug that causes my app to crash on launch once every few 
days. I have
no idea what it is, and I don't think it's going to get sorted out until 
someone takes
valgrind to the code. But what I am sure is that, in the absence of `unsafe`, 
Rust would
have prevented the crash and prevented the memory leaks that I've identified. 
And if we
allow for using `unsafe`, that dramatically cuts down on the amount of code 
that needs
to be vetted for safety.

So yes, it's possible to write C++ code that's just as safe as Rust, but it's 
significantly
harder, and errors are much harder to catch.

-Kevin



Re: [rust-dev] RFC: Opt-in builtin traits

2014-02-28 Thread Kevin Ballard
On Feb 28, 2014, at 8:10 PM, Kang Seonghoon some...@mearie.org wrote:

 2014-03-01 6:24 GMT+09:00 John Grosen jmgro...@gmail.com:
 On Friday, February 28, 2014 at 11:15 AM, Matthieu Monrocq wrote:
 
 Maybe one way of preventing completely un-annotated pieces of data would be
 a lint that just checks that at least one property (Send, Freeze, ...) or a
 special annotation denoting their absence has been selected for each
 public-facing type. By having a #[deriving(...)] mandatory, it makes it
 easier for the lint pass to flag un-marked types without even having to
 reason whether or not the type would qualify.
 
 I generally like this idea; however, I find it a bit strange `deriving`
 would still be implemented as an attribute given its essential nature in the
 language. Haskell, of course, has `deriving` implemented as a first-class
 feature — might Rust be interested in something like that?
 
 Food for thought, at least.
 
 I second to this. Indeed, we already have similar concerns about
 externally-implemented `#[deriving]` (#11813, and somewhat tangently,
 #11298), as syntax extensions don't have any clue about paths.

I actually rather like the fact that deriving is implemented as an attribute, 
because it's one less bit of syntax. Right now it's still implemented in the 
compiler, but this could theoretically eventually move into libstd entirely as 
a #[macro_registrar].

My main concern with this proposal overall is that types will forget to derive 
things. I know I almost always forget to derive Eq and Clone for my own structs 
until I run into an error due to their lack. A lint to warn about missing 
derivations would mitigate this a lot, although I'm worried that if someone 
opts out of a single trait by using #[allow(missing_traits)] and a new trait is 
added to the set, the author will never realize they're missing the new trait. 
I'm also concerned that if you need to opt out of a single trait from 
#[deriving(Data)] then you can't use #[deriving(Data)] and must instead list 
all of the remaining traits. Perhaps for both problems we could introduce the 
idea of #[deriving(!Send)], which would let me say 
#[deriving(Data,!Send,!Freeze)] to opt out of those two.

I'm also slightly concerned that #[deriving(Data)] gives the impression that 
there's a trait Data, so maybe that should be lowercased as in 
#[deriving(data)], or even just #[deriving(builtin)], but this is a lesser 
concern and somewhat bike-sheddy.

-Kevin


Re: [rust-dev] RFC: Stronger aliasing guarantees about mutable borrows

2014-02-25 Thread Kevin Ballard
I too was under the impression that you could not read from a mutably-borrowed 
location.

I am looking forward to the ability to move out of a mut (as long as the value 
is replaced again),
if the issues around task failure and destructors can be solved.

-Kevin

On Feb 25, 2014, at 12:19 PM, Michael Woerister michaelwoeris...@posteo.de 
wrote:

 I'm all for it. In fact,  I thought the proposed new rules *already* where 
 the case :-)
 
 On 25.02.2014 19:32, Niko Matsakis wrote:
 I wrote up an RFC. Posted on my blog at:
 
 http://smallcultfollowing.com/babysteps/blog/2014/02/25/rust-rfc-stronger-guarantees-for-mutable-borrows/
 
 Inlined here:
 
 Today, if you do a mutable borrow of a local variable, you lose the
 ability to *write* to that variable except through the new reference
 you just created:
 
 let mut x = 3;
 let p = &mut x;
 x += 1;  // Error
 *p += 1; // OK
 However, you retain the ability to *read* the original variable:
 
 let mut x = 3;
 let p = &mut x;
 print(x);  // OK
 print(*p); // OK
 I would like to change the borrow checker rules so that both writes
 and reads through the original path `x` are illegal while `x` is
 mutably borrowed. This change is not motivated by soundness, as I
 believe the current rules are sound. Rather, the motivation is that
 this change gives strong guarantees to the holder of an `&mut`
 pointer: at present, they can assume that an `&mut` referent will not
 be changed by anyone else.  With this change, they can also assume
 that an `&mut` referent will not be read by anyone else. This enables
 more flexible borrowing rules and a more flexible kind of data
 parallelism API than what is possible today. It may also help to
 create more flexible rules around moves of borrowed data. As a side
 benefit, I personally think it also makes the borrow checker rules
 more consistent (mutable borrows mean original value is not usable
 during the mutable borrow, end of story). Let me lead with the
 motivation.
 
 ### Brief overview of my previous data-parallelism proposal
 
 In a previous post I outlined a plan for
 [data parallelism in Rust][dp] based on closure bounds. The rough idea
 is to leverage the checks that the borrow checker already does for
 segregating state into mutable-and-non-aliasable and
 immutable-but-aliasable. This is not only the recipe for creating
 memory safe programs, but it is also the recipe for data-race freedom:
 we can permit data to be shared between tasks, so long as it is
 immutable.
 
 The API that I outlined in that previous post was based on a `fork_join`
 function that took an array of closures. You would use it like this:
 
 fn sum(x: &[int]) {
 if x.len() == 0 {
 return 0;
 }
  let mid = x.len() / 2;
 let mut left = 0;
 let mut right = 0;
 fork_join([
 || left = sum(x.slice(0, mid)),
 || right = sum(x.slice(mid, x.len())),
 ]);
 return left + right;
 }
 The idea of `fork_join` was that it would (potentially) fork into N
 threads, one for each closure, and execute them in parallel. These
 closures may access and even mutate state from the containing scope --
 the normal borrow checker rules will ensure that, if one closure
 mutates a variable, the other closures cannot read or write it. In
 this example, that means that the first closure can mutate `left` so
 long as the second closure doesn't touch it (and vice versa for
 `right`). Note that both closures share access to `x`, and this is
 fine because `x` is immutable.
 
 This kind of API isn't safe for all data though. There are things that
 cannot be shared in this way. One example is `Cell`, which is Rust's
 way of cheating the mutability rules and making a value that is
 *always* mutable. If we permitted two threads to touch the same
 `Cell`, they could both try to read and write it and, since `Cell`
 does not employ locks, this would not be race free.
 
 To avoid these sorts of cases, the closures that you pass to to
 `fork_join` would be *bounded* by the builtin trait `Share`. As I
 wrote in [issue 11781][share], the trait `Share` indicates data that
 is threadsafe when accessed through an `&T` reference (i.e., when
 aliased).
 
 Most data is sharable (let `T` stand for some other sharable type):
 
 - POD (plain old data) types are forkable, so things like `int` etc.
 - `&T` and `&mut T`, because both are immutable when aliased.
 - `~T` is sharable, because it is not aliasable.
 - Structs and enums that are composed of sharable data are sharable.
 - `ARC`, because the reference count is maintained atomically.
 - The various thread-safe atomic integer intrinsics and so on.
 
 Things which are *not* sharable include:
 
 - Many types that are unsafely implemented:
   - `Cell` and `RefCell`, which have non-atomic interior mutability
   - `Rc`, which uses non-atomic reference counting
 - Managed data (`Gc<T>`) because we do not wish to
  

Re: [rust-dev] RFC: Stronger aliasing guarantees about mutable borrows

2014-02-25 Thread Kevin Ballard
If you can construct the new value independently of the old, sure. But if 
constructing the new value
requires consuming the old, then you can't.

-Kevin

On Feb 25, 2014, at 3:14 PM, Corey Richardson co...@octayn.net wrote:

 Is this not already expressible with swap/replace? Is there a big
 improvement here that I'm missing?
 
 On Tue, Feb 25, 2014 at 4:23 PM, Kevin Ballard ke...@sb.org wrote:
 I too was under the impression that you could not read from a 
 mutably-borrowed location.
 
 I am looking forward to the ability to move out of a mut (as long as the 
 value is replaced again),
 if the issues around task failure and destructors can be solved.
 
 -Kevin
 
 On Feb 25, 2014, at 12:19 PM, Michael Woerister michaelwoeris...@posteo.de 
 wrote:
 
 I'm all for it. In fact,  I thought the proposed new rules *already* where 
 the case :-)
 
 On 25.02.2014 19:32, Niko Matsakis wrote:
 I wrote up an RFC. Posted on my blog at:
 
 http://smallcultfollowing.com/babysteps/blog/2014/02/25/rust-rfc-stronger-guarantees-for-mutable-borrows/
 
 Inlined here:
 
 [...]

Re: [rust-dev] RFC: Stronger aliasing guarantees about mutable borrows

2014-02-25 Thread Kevin Ballard
 On Feb 25, 2014, at 4:04 PM, Eric Reed ecr...@cs.washington.edu wrote:
 
 Would a &mut that could move enable us to write insertion into a growable 
 data structure that might reallocate itself without unsafe code? Something 
 like OwnedVector.push() for instance.

The problem with that is you need uninitialized memory that you can move in to 
(without running drop glue). I don't see how moving from &mut will help. Even 
if rustc can avoid the drop glue when writing to a &mut that it already moved 
out of, there's no way to construct a pre-moved &mut that points to the 
uninitialized memory (and no way to even create uninitialized memory without 
unsafe).

-Kevin


Re: [rust-dev] Possible bug? os::args() then split then print

2014-02-25 Thread Kevin Ballard
This definitely seems to be a bug. If you can, you should file this at 
https://github.com/mozilla/rust/issues.

-Kevin

On Feb 25, 2014, at 10:12 PM, Phil Dawes rustp...@phildawes.net wrote:

 Hi Ashish,
 
 Yes that works fine. Splitting out 'args' into a separate variable fixes the 
 behaviour.
 So this is a lifetime issue and the latest compiler isn't picking it up?
 
 Thanks,
 
 Phil
 
 
 On Wed, Feb 26, 2014 at 4:23 AM, Ashish Myles marci...@gmail.com wrote:
 On Tue, Feb 25, 2014 at 11:00 AM, Phil Dawes rustp...@phildawes.net wrote:
 fn main() {
 let arr : ~[&str] = std::os::args()[1].split_str("::").collect();
 std::io::println("first " + arr[0]);
 std::io::println("first again " + arr[0]);
 }
 
 I am working on an older version of the compiler that fails to compile this 
 code, giving an error about the reference to the return value of 
 std::os::args() not being valid for the duration of its use.
 Does
 let args = std::os::args();
 let arr : ~[&str] = args[1].split_str("::").collect();
 work properly?
 
 Ashish
 
 


Re: [rust-dev] RFC: About the library stabilization process

2014-02-21 Thread Kevin Ballard
On Feb 21, 2014, at 12:20 PM, Brian Anderson bander...@mozilla.com wrote:

 On 02/19/2014 02:37 AM, György Andrasek wrote:
 On Wed, Feb 19, 2014 at 1:40 AM, Brian Anderson bander...@mozilla.com 
 wrote:
 Backwards-compatibility is guaranteed.
 Does that include ABI compatibility?
 
 Second, the AST is traversed and stability index is propagated downward to 
 any indexable node that isn't explicitly tagged.
 Should it be an error to use lower stability internally?
 
 By default all nodes are *stable* - library authors have to opt-in to 
 stability index tracking. This may end up being the wrong default and we'll 
 want to revisit.
 Oh dear god no. `stable` should be *earned* over time, otherwise it's
 meaningless. The compiler should treat untagged code as `unstable`,
 `experimental` or a special `untagged` stability and accept that level
 by default.
 
 
 OK, I agree let's start all code at `#[experimental]`. It's not too much 
 burden for authors that don't want part of it to put an attribute on their 
 crates.

What's the default behavior with regards to calling #[experimental] APIs? If 
the default behavior is warn or deny, then I don't think we should default any 
crate to #[experimental]. I'm also worried that even if we default the behavior 
to allow(), that using #[experimental] is still problematic because anyone who 
turns on #[warn(unstable)] in order to avoid the unstable bits of libstd will 
be bitten by warnings in third-party crates that don't bother to specify 
stability.

Could we perhaps make the default to be no stability index whatsoever, so 
third-party library authors aren't required to deal with stability in their own 
APIs if they don't want to? This would have the same effect as defaulting to 
#[stable], which was the original suggestion, except that it won't erroneously 
indicate that APIs are stable when the author hasn't made any guarantees at 
all. If we do this, I would then also suggest that we default to either 
#[warn(unstable)] or #[warn(experimental)], which would then only complain 
about first-party APIs unless the third-party library author has opted in to 
stability.

It's worth noting that I have zero experience with node.js's use of stability, 
so I don't know how they handle defaults.

-Kevin



Re: [rust-dev] reader.lines() swallows io errors

2014-02-19 Thread Kevin Ballard
My understanding is that .lines() exists primarily to make quick-and-dirty I/O 
as easy as it is in, say, a scripting language. How do scripting languages 
handle I/O errors when iterating lines? Do they raise an exception? Perhaps 
.lines() should fail!() if it gets a non-EOF error. Then we could introduce a 
new struct to wrap any Reader that translates non-EOF errors into EOF 
specifically to let you say “I really don’t care about failure”.

That said, I’m comfortable with things as they are now. Making .lines() provide 
an IoResult would destroy much of the convenience of the function. I know I 
personally have used .lines() with stdin(), which is an area where I truly 
don’t care about any non-EOF error, because, heck it’s stdin. All I care about 
is when stdin is closed.

-Kevin

On Feb 19, 2014, at 9:40 AM, Palmer Cox palmer...@gmail.com wrote:

 I think the Lines iterator could translate an EOF error into a None to abort 
 iteration and pass all other errors through. I don't see a WouldBlock error 
 code (is that the same as IoUnavailable), but that only applies to 
 non-blocking IO and I don't think it makes sense to use use the Lines 
 iterator in non-blocking mode.
 
 -Palmer Cox
 
 
 
 On Wed, Feb 19, 2014 at 12:35 PM, Corey Richardson co...@octayn.net wrote:
 Keep in mind that end of file and would block are considered errors...
 
 
 On Wed, Feb 19, 2014 at 12:26 PM, Palmer Cox palmer...@gmail.com wrote:
 Why not just modify the Lines iterator to return values of IoResult<~str>? 
 All the caller has to do to unwrap that is to use if_ok!() or try!() on the 
 returned value, so, its basically just as easy to use and it means that 
 errors are handled consistently. I don't see why this particular use case 
 calls for a completely different error handling strategy than any other IO 
 code.
 
 -Palmer Cox
 
 
 
 On Wed, Feb 19, 2014 at 6:31 AM, Michael Neumann mneum...@ntecs.de wrote:
 
 Am 19.02.2014 08:52, schrieb Phil Dawes:
 
 Is that not a big problem for production code? I think I'd prefer the default 
 case to be to crash the task than deal with a logic bug.
 
 The existence of library functions that swallow errors makes reviewing code 
 and reasoning about failure cases a lot more difficult.
 
 This is why I proposed a FailureReader: 
 https://github.com/mozilla/rust/issues/12368
 
 Regards,
 
 Michael
 


Re: [rust-dev] Improving our patch review and approval process (Hopefully)

2014-02-19 Thread Kevin Ballard
On Feb 19, 2014, at 12:28 PM, Corey Richardson co...@octayn.net wrote:

 This is a pretty bad idea, allowing *arbitrary unreviewed anything* to
 run on the buildbots. All it needs to do is remove the contents of its
 home directory to put the builder out of commission, afaik. It'd
 definitely be nice to have it run tidy etc first, but there needs to
 be a check that the patch can't modify tidy or any of its deps.

This is a very good point. And it could do more than that too. It could use a 
local privilege escalation exploit (if one exists) to take over the entire 
machine. Or it could start sending out spam emails. Or maybe it starts mining 
bitcoins.

Code should not be run that is not at least read first by a reviewer.

-Kevin


Re: [rust-dev] reader.lines() swallows io errors

2014-02-19 Thread Kevin Ballard
On Feb 19, 2014, at 2:34 PM, Lee Braiden leebr...@gmail.com wrote:

 Then we could introduce a new struct to wrap any Reader that translates 
 non-EOF errors into EOF specifically to let you say “I really don’t care 
 about failure”.
 
 It sounds like a very specific way to handle a very general problem.  People 
 like (modern, complete) scripting languages because they handle this sort of 
 intricacy in elegant ways, not because they gloss over it and make 
 half-baked programs that don't handle errors.  It's just that you can, say, 
 handle IOErrors in one step, at the top of your script, except for one 
 particular issue that you know how to recover from, six levels into the call 
 stack.  Exceptions (so long as there isn't a lot of boilerplate around them) 
 let you do that, easily.  Rust needs a similarly generic approach to 
 propagating errors and handling them five levels up, whether that's 
 exceptions or fails (I don't think they currently are flexible enough), or 
 monads, or something else.

In my experience, exceptions are actually a very inelegant way to handle this 
problem. The code 5 levels higher that catches the exception doesn’t have 
enough information about the problem in order to recover. Maybe it just 
discards the entire computation, or perhaps restarts it. But it can’t recover 
and continue.

We already tried conditions for this, which do let you recover and continue, 
except that turned out to be a dismal failure. Code that didn’t touch 
conditions was basically just hoping nothing went wrong, and would fail!() if 
it did. Code that did try to handle errors was very verbose because conditions 
were a PITA to work with.

As for what we’re talking about here. lines() is fairly unique right now in its 
discarding of errors. I can’t think of another example offhand that will 
discard errors. As I said before, I believe that .lines() exists to facilitate 
I/O handling in a fashion similar to scripting languages, primarily because one 
of the basic things people try to do with new languages is read from stdin and 
handle the input, and it’s great if we can say our solution to that is:

fn main() {
    for line in io::stdin().lines() {
        print!(“received: {}”, line);
    }
}

It’s a lot more confusing and off-putting if our example looks like

fn main() {
    for line in io::stdin().lines() {
        match line {
            Ok(line) => print!(“received: {}”, line),
            Err(e) => {
                println!(“error: {}”, e);
                break;
            }
        }
    }
}

or alternatively

fn main() {
    for line in io::stdin().lines() {
        let line = line.unwrap(); // new user says “what is .unwrap()?” and is still not handling errors here
        print!(“received: {}”, line);
    }
}

Note that we can’t even use try!() (née if_ok!()) here because main() doesn’t 
return an IoResult.

The other thing to consider is that StrSlice also exposes a .lines() method and 
it may be confusing to have two .lines() methods that yield different types.

Given that, the only reasonable solutions appear to be:

1. Keep the current behavior. .lines() already documents its behavior; anyone 
who cares about errors should use .read_line() in a loop

2. Change .lines() to fail!() on a non-EOF error. Introduce a new wrapper type 
IgnoreErrReader (name suggestions welcome!) that translates all errors into 
EOF. Now the original sample code will fail!() on a non-EOF error, and there’s 
a defined way of turning it back into the version that ignores errors for 
people who legitimately want that. This could be exposed as a default method on 
Reader called .ignoring_errors() that consumes self and returns the new wrapper.

3. Keep .lines() as-is and add the wrapper struct that fail!()s on errors. This 
doesn’t make a lot of sense to me because the struct would only ever be used 
with .lines(), and therefore this seems worse than:

4. Change .lines() to fail!() on errors and add a new method 
.lines_ignoring_errs() that behaves the way .lines() does today. That’s kind of 
verbose though, and is a specialized form of suggestion #2 (and therefore less 
useful).

5. Remove .lines() entirely and live with the uglier way of reading stdin that 
will put off new users.

6. Add some way to retrieve the ignored error after the fact. This would 
require uglifying the Buffer trait to have .err() and .set_err() methods, as 
well as expanding all the implementors to provide a field to store that 
information.

I’m in favor of solutions #1 or #2.


Re: [rust-dev] reader.lines() swallows io errors

2014-02-19 Thread Kevin Ballard
On Feb 19, 2014, at 3:40 PM, Jason Fager jfa...@gmail.com wrote:

 Can you point to any scripting langs whose lines equivalent just silently 
 ignores errors?  I'm not aware of any; even perl will at least populate $!.   
  

No, because I typically don’t think about errors when writing quick scripts. If 
the script blows up because I had a stdin error, that’s fine, it was never 
meant to be robust.

I just commented on #12368 saying that now I’m leaning towards suggestion #2 
(make .lines() fail on errors and provide an escape hatch to squelch them). 
This will more closely match how scripting languages behave by default (where 
an exception will kill the script).

 I opened https://github.com/mozilla/rust/issues/12130 a little while ago 
 about if_ok!/try! not being usable from main and the limitations for simple 
 use cases that can cause.  Forgive a possibly dumb question, but is there a 
 reason main has to return ()?  Could Rust provide an 'ExitCode' trait that 
 types could implement that would provide the exit code that the process would 
 spit out if it were returned from main?  IoResult's impl would just be `match 
 self { Ok(_) => 0, Err(_) => 1 }` and your example would look like
 
 fn main() -> IoResult<~str> {
     for line in io::stdin().lines() {
         print!(“received: {}”, try!(line));
     }
 }

There is no precedent today for having a function whose return type must 
conform to a trait, without making the function generic. Furthermore, a 
function that is generic on return value picks its concrete return type by the 
type constraints of its call site, rather than by the implementation of that 
function. I also question whether this will work from an implementation 
standpoint. Today the symbol for the main() function is predictable and is the 
same for all main functions. With your suggested change, the symbol would 
depend on the return type. I don’t know if this matters to rustc; the “start” 
lang item function is passed a pointer to the main function, but I don’t know 
how this pointer is created.

But beyond that, there’s still issues here. Unlike in C, a Rust program does 
not terminate when control falls out of the main() function. It only terminates 
when all tasks have ended. Terminating the program sooner than that requires 
`unsafe { libc::abort() };`. Furthermore, the main() function has no return 
value, and does not influence the exit code. That’s set by 
`os::set_exit_status()`. If the return value of main() sets the error code, that 
will overwrite any error code that’s already been set.

Perhaps a better approach is to define a macro that calls a function that 
returns an IoResult and sets the error code to 1 (and calls libc::abort()) in 
the Err case, and does nothing in the Ok case. That would allow me to write

fn main() {
    abort_on_err!(main_());

    fn main_() -> IoResult<()> {
        something_that_returns_io_result()
    }
}

---

While writing the above code sample, I first tried actually writing the 
read_line() loop, and it occurs to me that it’s more complicated than 
necessary. This is due to the need for detecting EOF, which prevents using 
try!(). We may actually need some other macro that converts EOF to None, 
returns other errors, and Ok to Some. That makes things a bit simpler for 
reading, as I can do something like

fn handle_stdin() -> IoResult<()> {
    let mut r = BufferedReader::new(io::stdin());
    loop {
        let line = match check_eof!(r.read_line()) {
            None => break,
            Some(line) => line
        };
        handle_line(line);
    }
}

Still not great, but at least this is better than

let line = match r.read_line() {
    Ok(line) => line,
    Err(IoError{ kind: EndOfFile, .. }) => break,
    Err(e) => return Err(e)
};
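For contrast, modern `std::io` resolved this exact annoyance by signalling EOF in-band: `BufRead::read_line` returns `Ok(0)` at end of input, so no EOF-vs-error pattern match or `check_eof!`-style macro is needed. A sketch under today's API:

```rust
use std::io::{BufRead, Cursor};

// Count lines, propagating real I/O errors with `?` and treating Ok(0)
// (zero bytes read) as EOF.
fn count_lines<R: BufRead>(mut r: R) -> std::io::Result<usize> {
    let mut n = 0;
    let mut line = String::new();
    loop {
        line.clear();
        if r.read_line(&mut line)? == 0 {
            break; // Ok(0) means EOF; errors already returned via `?`
        }
        n += 1;
    }
    Ok(n)
}

fn main() {
    // Cursor gives us an in-memory BufRead for demonstration.
    assert_eq!(count_lines(Cursor::new("a\nb\n")).unwrap(), 2);
}
```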

-Kevin


Re: [rust-dev] idiomatic conversion from Optionstr to *libc::c_char

2014-02-17 Thread Kevin Ballard
No, this is likely to crash. `.to_c_str()` constructs a CString, which you are 
then promptly throwing away. So your subsequent access to `c_path` is actually 
accessing freed memory.

Try something like this:

pub fn read_file(&self, path: Option<&str>) -> int {
    let path = path.map(|s| s.to_c_str());
    let c_path = path.map_or(ptr::null(), |p| p.with_ref(|x| x));
    unsafe {
        native_read_file(self.ch, c_path) as int
    }
}

This variant will keep the CString alive inside the `path` variable, which will 
mean that your `c_path` pointer is still valid until you return from the 
function.
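The same trap exists with today's `std::ffi::CString`: a temporary `CString` is dropped at the end of its statement, so the raw pointer must be borrowed from a named binding that outlives every use of the pointer. A sketch in modern Rust (the FFI call itself is omitted):

```rust
use std::ffi::CString;
use std::os::raw::c_char;
use std::ptr;

fn main() {
    let path: Option<&str> = Some("/tmp/example");

    // Keep the CString alive in a named binding for the whole scope...
    let c_path: Option<CString> = path.map(|s| CString::new(s).unwrap());

    // ...and only borrow a raw pointer from it (null for the None case).
    let raw: *const c_char = c_path.as_ref().map_or(ptr::null(), |c| c.as_ptr());

    // `c_path` is still live here, so `raw` points at valid memory.
    assert!(!raw.is_null());
}
```

Writing `path.map(|s| CString::new(s).unwrap().as_ptr())` instead would reproduce the original bug: the `CString` is freed before the pointer is used.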

-Kevin

On Feb 17, 2014, at 1:06 PM, Noah Watkins jayh...@cs.ucsc.edu wrote:

 When calling a native function that takes a char* string or NULL for
 the default behavior I treat the option like this
 
 pub fn read_file(&self, path: Option<&str>) {
   let c_path = match path {
     Some(ref path) => path.to_c_str().with_ref(|x| x),
     None => ptr::null()
   };
   unsafe {
     native_read_file(self.ch, c_path) as int
   };
 }
 
 Is this the best way to handle the situation?
 `path.to_cstr().with_ref(|x| x)` seems a bit verbose. On the other
 hand, `unwrap` says it forgets the ownership which I'm assuming means
 that the buffer won't be freed.





Re: [rust-dev] issue numbers in commit messages

2014-02-17 Thread Kevin Ballard
This is not going to work in the slightest.

Most PRs don't have an associated issue. The pull request is the issue. And 
that's perfectly fine. There's no need to file an issue separate from the PR 
itself. Requiring a referenced issue for every single commit would be extremely 
cumbersome, serve no real purpose aside from aiding an unwillingness to learn 
how source control works, and would probably slow down the rate of development 
of Rust.

-Kevin

On Feb 17, 2014, at 3:50 PM, Nick Cameron li...@ncameron.org wrote:

 At worst you could just use the issue number for the PR. But I think all 
 non-trivial commits _should_ have an issue associated. For really tiny 
 commits we could allow no issue or '#0' in the message. Just so long as the 
 author is being explicit, I think that is OK.
 
 
 On Tue, Feb 18, 2014 at 12:16 PM, Scott Lawrence byt...@gmail.com wrote:
 Maybe I'm misunderstanding? This would require that all commits be 
 specifically associated with an issue. I don't have actual stats, but briefly 
 skimming recent commits and looking at the issue tracker, a lot of commits 
 can't be reasonably associated with an issue. This requirement would either 
 force people to create fake issues for each commit, or to reference 
 tangentially-related or overly-broad issues in commit messages, neither of 
 which is very useful.
 
 Referencing any conversation that leads to or influences a commit is a good 
 idea, but something this inflexible doesn't seem right.
 
 My 1.5¢.
 
 
 On Tue, 18 Feb 2014, Nick Cameron wrote:
 
 How would people feel about a requirement for all commit messages to have
 an issue number in them? And could we make bors enforce that?
 
 The reason is that GitHub is very bad at being able to trace back a commit
 to the issue it fixes (sometimes it manages, but not always). Not being
 able to find the discussion around a commit is extremely annoying.
 
 Cheers, Nick
 
 
 -- 
 Scott Lawrence
 





Re: [rust-dev] Need help implementing some complex parent-child task behavior.

2014-02-14 Thread Kevin Ballard
What if the state's fields are private, and in a different module than the 
players, but exposes getters to query the state? Then the players can't modify 
it, but if the component that processes the actions has visibility into the 
state's fields, it can modify them just fine.
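In modern Rust the module-privacy idea looks roughly like this (the names `game`, `State`, and `apply_action` are illustrative, not from the original program):

```rust
// State's fields are private to the `game` module; players only get
// read-only access through getters, and mutation lives inside `game`.
mod game {
    pub struct State {
        score: i32, // private field: invisible outside this module
    }

    impl State {
        pub fn new() -> State {
            State { score: 0 }
        }

        pub fn score(&self) -> i32 {
            self.score // read-only getter for players
        }

        pub fn apply_action(&mut self, delta: i32) {
            self.score += delta; // only `game` code can reach the field
        }
    }
}

fn main() {
    let mut s = game::State::new();
    s.apply_action(3);
    assert_eq!(s.score(), 3);
    // s.score = 99; // would not compile: the field is private here
}
```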

-Kevin

On Feb 14, 2014, at 12:22 PM, Damien Radtke damienrad...@gmail.com wrote:

 I'm trying to write what is essentially a card game simulator in Rust, but 
 I'm running into a bit of a roadblock with Rust's memory management. The gist 
 of what I want to accomplish is:
 
 1. In the program's main loop, iterate over several players and call their 
 play method in turn.
 2. Each play method should be able to send requests back to the parent in 
 order to take certain actions, who will validate that the action is possible 
 and update the player's state accordingly.
 
 The problem I'm running into is that, in order to let a player play and 
 have the game validate actions for them, I would need to run each player in 
 their own task, (I considered implementing it as each function call 
 indicating a request for action [e.g. by returning Some(action), or None when 
 finished] and calling it repeatedly until none are taken, but this makes the 
 implementation for each player needlessly complex) but this makes for some 
 tricky situations.
 
 My current implementation uses a DuplexStream to communicate back and forth, 
 the child sending requests to the parent and the parent sending responses, 
 but then I run into the issue of how to inform the child of their current 
 state, but don't let them modify it outside of sending action requests. 
 
 Ideally I'd like to be able to create an (unsafe) immutable pointer to the 
 state held by the parent as mutable, but that gives me a values differ in 
 mutability error. Other approaches so far have failed as well; Arcs don't 
 work because I need to have one-sided mutability; standard borrowed pointers 
 don't work because the child and parent need to access it at the same time 
 (though only the parent should be able to modify it, ensuring its safety); 
 even copying the state doesn't work because the child then needs to update 
 its local state with a new copy sent by the parent, which is also prone to 
 mutability-related errors.
 
 Any tips on how to accomplish something like this?





Re: [rust-dev] Need help implementing some complex parent-child task behavior.

2014-02-14 Thread Kevin Ballard
Depends. If the string or the vectors are `&` (borrowed) instead of `~` (owned), that 
would do it. Also, if the element type of the vector does not fulfill Send. Oh, and the 
function pointer is a function pointer, not a closure, right?
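In today's Rust the same diagnosis applies: a struct of owned data plus a plain `fn` pointer is automatically `Send`, while a non-`Send` field (e.g. `std::rc::Rc`) would remove the auto impl. A sketch with hypothetical names:

```rust
// A struct resembling the one in the thread: a string, a vector,
// and a plain function pointer. All fields are Send, so the struct is Send.
struct PlayerState {
    name: String,
    vals: Vec<i32>,
    f: fn(i32) -> i32, // fn pointer, not a closure
}

// Compile-time probe: only accepts values whose type fulfills Send.
fn assert_send<T: Send>(t: T) -> T {
    t
}

fn double(x: i32) -> i32 {
    x * 2
}

fn main() {
    let s = assert_send(PlayerState {
        name: "p1".to_string(),
        vals: vec![1, 2, 3],
        f: double,
    });
    assert_eq!((s.f)(21), 42);
    assert_eq!(s.vals.len(), 3);

    // By contrast, this would fail to compile, since Rc is not Send:
    // assert_send(std::rc::Rc::new(0));
}
```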

-Kevin

On Feb 14, 2014, at 12:59 PM, Damien Radtke damienrad...@gmail.com wrote:

 Unfortunately, the type that maintains the state apparently doesn't fulfill 
 Send, which confuses me because it's a struct that consists of a string, 
 function pointer, and a few dynamically-sized vectors. Which of these types 
 makes the struct as a whole violate Send?
 
 
 On Fri, Feb 14, 2014 at 2:47 PM, Kevin Ballard ke...@sb.org wrote:
 What if the state's fields are private, and in a different module than the 
 players, but exposes getters to query the state? Then the players can't 
 modify it, but if the component that processes the actions has visibility 
 into the state's fields, it can modify them just fine.
 
 -Kevin
 
 On Feb 14, 2014, at 12:22 PM, Damien Radtke damienrad...@gmail.com wrote:
 
  I'm trying to write what is essentially a card game simulator in Rust, but 
  I'm running into a bit of a roadblock with Rust's memory management. The 
  gist of what I want to accomplish is:
 
  1. In the program's main loop, iterate over several players and call 
  their play method in turn.
  2. Each play method should be able to send requests back to the parent in 
  order to take certain actions, who will validate that the action is 
  possible and update the player's state accordingly.
 
  The problem I'm running into is that, in order to let a player play and 
  have the game validate actions for them, I would need to run each player in 
  their own task, (I considered implementing it as each function call 
  indicating a request for action [e.g. by returning Some(action), or None 
  when finished] and calling it repeatedly until none are taken, but this 
  makes the implementation for each player needlessly complex) but this makes 
  for some tricky situations.
 
  My current implementation uses a DuplexStream to communicate back and 
  forth, the child sending requests to the parent and the parent sending 
  responses, but then I run into the issue of how to inform the child of 
  their current state, but don't let them modify it outside of sending action 
  requests.
 
  Ideally I'd like to be able to create an (unsafe) immutable pointer to the 
  state held by the parent as mutable, but that gives me a values differ in 
  mutability error. Other approaches so far have failed as well; Arcs don't 
  work because I need to have one-sided mutability; standard borrowed 
  pointers don't work because the child and parent need to access it at the 
  same time (though only the parent should be able to modify it, ensuring its 
  safety); even copying the state doesn't work because the child then needs 
  to update its local state with a new copy sent by the parent, which is also 
  prone to mutability-related errors.
 
  Any tips on how to accomplish something like this?
 
 





Re: [rust-dev] RFC: Conventions for well-behaved iterators

2014-02-13 Thread Kevin Ballard
On Feb 13, 2014, at 11:56 AM, Daniel Micay danielmi...@gmail.com wrote:

 On Thu, Feb 13, 2014 at 10:05 AM, Simon Sapin simon.sa...@exyr.org wrote:
 Hi,
 
 The Rust documentation currently makes iterators behavior undefined after
 .next() has returned None once.
 
 http://static.rust-lang.org/doc/master/std/iter/trait.Iterator.html
 
 The Iterator protocol does not define behavior after None is
 returned. A concrete Iterator implementation may choose to behave
 however it wishes, either by returning None infinitely, or by doing
 something else.
 
 
 http://static.rust-lang.org/doc/master/guide-container.html
 
 In general, you cannot rely on the behavior of the next() method
 after it has returned None. Some iterators may return None forever.
 Others may behave differently.
 
 
 
 This is unfortunate. Code that accepts any iterator as input and does with
 it anything more complicated than a single 'for' loop will have to be
 defensive in order to not fall into undefined behavior.
 
 The type system can not enforce anything about this, but I’d like that we
 consider having conventions about well-behaved iterators.
 
 ---
 
 Proposal:
 
 0. An iterator is said to be well-behaved if, after its .next() method has
 returned None once, any subsequent call also returns None.
 
 1. Iterators *should* be well-behaved.
 
 2. Iterators in libstd and other libraries distributed with rustc *must* be
 well-behaved. (I.e. not being well-behaved is a bug.)
 
 3. When accepting an iterator as input, it’s ok to assume it’s well-behaved.
 
 4. For iterator adaptors in particular, 3. means that 1. and 2. only apply
 for well-behaved input. (So that, eg. std::iter::Map can stay as
 straightforward as it is, and does not need to be coded defensively.)
 
 ---
 
 Does the general idea sound like something y’all want? I’m not overly
 attached to the details.
 
 --
 Simon Sapin
 
 Enforcing this invariant makes many adaptors more complex. For
 example, the `filter` adaptor would need to maintain a boolean flag
 and branch on it. I'm fine with the current solution of a `fuse`
 adaptor because it moves all of the responsibility to a single
 location, and user-defined adaptors don't need to get this right.

This was the main reasoning behind the current logic. The vast majority of users
of iterators don't care about next() behavior after the iterator has returned
None, so there was no need to make the iterator adaptors track extra state in
the general case. Any client who does need it can just call `.fuse()` to get a
Fuse adaptor that adds the necessary checks.
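Modern std kept exactly this design: `Iterator::fuse` wraps any iterator and guarantees `None` forever after the first `None`. A small demonstration with a deliberately ill-behaved iterator:

```rust
// An iterator that is NOT well-behaved: it "resurrects" after yielding None.
struct Flaky {
    n: u32,
}

impl Iterator for Flaky {
    type Item = u32;
    fn next(&mut self) -> Option<u32> {
        self.n += 1;
        if self.n % 2 == 0 { None } else { Some(self.n) }
    }
}

fn main() {
    let mut raw = Flaky { n: 0 };
    assert_eq!(raw.next(), Some(1));
    assert_eq!(raw.next(), None);
    assert_eq!(raw.next(), Some(3)); // yields again after None

    // fuse() adds the state tracking so None is final:
    let mut fused = Flaky { n: 0 }.fuse();
    assert_eq!(fused.next(), Some(1));
    assert_eq!(fused.next(), None);
    assert_eq!(fused.next(), None); // stays None forever
}
```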

-Kevin



Re: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax

2014-02-01 Thread Kevin Ballard
On Feb 1, 2014, at 2:39 PM, Corey Richardson co...@octayn.net wrote:

 The immediate, and most pragmatic, problem is that in today's Rust one cannot
 easily search for implementations of a trait. Why? `grep 'impl Clone'` is
 itself not sufficient, since many types have parametric polymorphism. Now I
 need to come up with some sort of regex that can handle this. An easy
 first-attempt is `grep 'impl(<.*?>)? Clone'` but that is quite inconvenient to
 type and remember. (Here I ignore the issue of tooling, as I do not find the
 argument of But a tool can do it! valid in language design.)

Putting your other arguments aside, I am not convinced by the grep argument.
With the syntax as it is today, I use `grep 'impl.*Clone'` if I want to find 
Clone
impls. Yes, it can match more than just Clone impls. But that's true too even 
with this
change. At the very least, any sort of multiline comment or string can contain 
text that
matches even the most rigorously specified grep. The only way to truly 
guarantee you're
only matching real impls is to actually parse the file with a real parser.

-Kevin


Re: [rust-dev] Futures in Rust

2014-01-29 Thread Kevin Ballard
Any number of things.

The use case here is an interactive protocol where writes go in both 
directions, and can be initiated in response to external events. The classic 
example is an IRC bot. The server can send a message at any time, so the IRC 
bot needs to be constantly reading, but writes back to the socket are not 
necessarily initiated directly in response to reads, they may be initiated due 
to asynchronous operations, or keyboard input at the terminal, or any number of 
other external stimuli.

-Kevin

On Jan 29, 2014, at 2:19 PM, Vadim vadi...@gmail.com wrote:

 What event initiates the write?
 
 
 On Wed, Jan 29, 2014 at 2:11 PM, Kevin Ballard ke...@sb.org wrote:
 This solution will not work for what I need stream splitting for. Namely, I 
 need to be in the middle of reading from a socket when I decide that I need 
 to write. I cannot be waiting on the read future at that time or I won't be 
 able to start writing. And if I don't wait on the read future, I won't know 
 when I have data available.
 
 -Kevin
 
 On Jan 29, 2014, at 2:03 PM, Vadim vadi...@gmail.com wrote:
 
  After reading about the simultaneous stream reading/writing issue discussed 
  in the last meeting, I want to ask a question:  Maybe it's time to consider 
  using explicitly async I/O and futures?
 
  Futures sort of already exist in the libextra, but they still rely on 
  pushing async operation into a separate task.  I wonder if Rust could 
  support in-task asynchronicity.   If we had that, the simultaneous 
  read/write example could be written as:
 
   let buffer1 = [0u8, ..1024];
   let buffer2 = [0u8, ..1024];
  ...
  let future1 = stream.read(buffer1);
  let future2 = stream.write(buffer2);
  let combined = wait_any(future1, future2); // new future that resolves 
  once any of its' parameters does
  combined.wait(); // wait till the combined future resolves
  if future1.is_complete() {
  let value = future1.get_result();
  }
  ...
  Current proposals, such as stream splitting might work for that particular 
  case, but what about stuff like read stream with a timeout?   With 
  futures, that'd be easy - just combine the read future with a timer future 
  similarly to the above.  I am sure there are tons of other useful scenarios 
  that would be simplified with futures.
 
  I know that currently there is a problem with preventing un-synchronized 
  access to local resources involved in the async operation.  In my example 
  above, the state of buffers is undefined until async operation is complete, 
  so they should be roped off for the duration.
  But maybe Rust type system could grow a new type of borrow that prevents 
  all object access while it is in scope, similarly to how iterators prevent 
  mutation of the container being iterated?
 
  Vadim
 
 
 



Re: [rust-dev] Futures in Rust

2014-01-29 Thread Kevin Ballard
On Jan 29, 2014, at 4:16 PM, Vadim vadi...@gmail.com wrote:

 
 
 
 On Wed, Jan 29, 2014 at 3:55 PM, Kevin Ballard ke...@sb.org wrote:
 Any number of things.
 
 The use case here is an interactive protocol where writes go in both 
 directions, and can be initiated in response to external events. The classic 
 example is an IRC bot. The server can send a message at any time, so the IRC 
 bot needs to be constantly reading, but writes back to the socket are not 
 necessarily initiated directly in response to reads, they may be initiated 
 due to asynchronous operations, or keyboard input at the terminal, or any 
 number of other external stimuli.
 
 In this case you'd be waiting on futures from those external events as well.  
  I am assuming that all of I/O would be future-ized, not just TCP streams.

external events does not necessarily mean I/O. It could also mean some 
asynchronous computation inside the app.

And if you're going to try and claim that this can be represented by futures 
too in this same scheme, well, it pretty much sounds like you're reinventing 
the generic Select. Or perhaps I should call it a run loop.

-Kevin


Re: [rust-dev] let mut - var

2014-01-29 Thread Kevin Ballard
On Jan 29, 2014, at 6:43 PM, Brian Anderson bander...@mozilla.com wrote:

 On 01/29/2014 06:35 PM, Patrick Walton wrote:
 On 1/29/14 6:34 PM, Samuel Williams wrote:
 Perhaps this has been considered already, but when I'm reading rust code
 let mut just seems to stick out all over the place. Why not add a
 var keyword that does the same thing? I think there are lots of good
 and bad reasons to do this or not do it, but I just wanted to propose
 the idea and see what other people are thinking.
 
 `let` takes a pattern. `mut` is a modifier on variables in a pattern. It is 
 reasonable to write `let (x, mut y) = ...`, `let (mut x, y) = ...`, `let 
 (mut x, mut y) = ...`, and so forth.
 
 Having a special var syntax would defeat this orthogonality.
 
 `var` could potentially just be special-case sugar for `let mut`.

To what end? Users still need to know about `mut` for all the other uses of 
patterns. This would reserve a new keyword and appear to duplicate 
functionality for no gain.
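The orthogonality argument is easy to demonstrate: `mut` attaches to individual bindings inside one pattern, which a whole-statement `var` keyword could not express:

```rust
fn main() {
    // `mut` is per-binding within the pattern: only `b` is mutable.
    let (a, mut b) = (1, 2);
    b += a; // fine
    // a += 1; // would not compile: `a` is immutable
    assert_eq!((a, b), (1, 3));
}
```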

-Kevin



Re: [rust-dev] Deprecating rustpkg

2014-01-28 Thread Kevin Ballard
Keeping it around means maintaining it, and it means tempting people to use it 
even though it's deprecated.

My suggestion would be, if you really need rustpkg, then extract it into a 
separate repo and maintain it there. But get it out of the mozilla/rust tree.

-Kevin

On Jan 28, 2014, at 11:28 AM, Ian Daniher explodingm...@gmail.com wrote:

 Lots of good points in this thread, but I wanted to request deprecation, but 
 not removal until a better alternative is documented and made available. 
 Rustpkg works for my needs - I use it every day -  but it definitely needs 
 some TLC.
 
 Thanks!
 --
 Ian
 
 
 On Tue, Jan 28, 2014 at 11:46 AM, SiegeLord slab...@aim.com wrote:
 On 01/27/2014 11:53 PM, Jeremy Ong wrote:
 I'm somewhat new to the Rust dev scene. Would anybody care to summarize
 roughly what the deficiencies are in the existing system in the interest
 of forward progress? It may help seed the discussion for the next effort
 as well.
 
 I can only speak for myself, but here are some reasons why I abandoned 
 rustpkg and switched to CMake.
 
 Firstly, and overarchingly, it was the attitude of the project development 
 with respect to issues. As a comparison, let me consider Rust the language. 
 It is a pain to make my code pass the borrow check sometimes, the lifetimes 
 are perhaps the most frustrating aspect of Rust. I put up with them however, 
 because they solve a gigantic problem and are the keystone of Rust's 
 safety-without-GC story. rustpkg also has many incredibly frustrating 
 aspects, but they are there (in my opinion) arbitrarily and not as a solution 
 to any real problem. When I hit them, I do not get the same sense of 
 purposeful sacrifice I get with Rust's difficult points. Let me outline the 
 specific issues I personally hit (I know of other ones, but I haven't 
 encountered them personally).
 
 Conflation of package id and source. That fact combined with the fact that to 
 depend on some external package you have to write extern mod = pkgid meant 
 that you needed to create bizarre directory structures to depend on locally 
 developed packages (e.g. you'd have to put your locally developed project in 
 a directory tree like so: github.com/SiegeLord/Project). This is not 
 something I was going to do.
 
 The package dependencies are written in the source file, which makes it 
 onerous to switch between versions/forks. A simple package script would have 
 solved it, but it wasn't present by design.
 
 My repositories have multiple crates, and rustpkg is woefully under-equipped 
 to handle that case. You cannot build them without dealing with pkg.rs, and 
 using them from other projects seemed impossible too (the extern mod syntax 
 wasn't equipped to handle multiple crates per package). This is particularly 
 vexing when you have multiple example programs alongside your library. I was 
 not going to split my repository up just because rustpkg wasn't designed to 
 handle that case.
 
 All of those points would be solved by having an explicit package description 
 file/script which was THE overarching design non-goal of rustpkg. After that 
 was made clear to me, I just ditched it and went to C++ style package 
 management and a CMake build system.
 
 -SL
 
 



Re: [rust-dev] static mut and owning pointers

2014-01-28 Thread Kevin Ballard
Your code is moving the contents of the Option<~MyStruct> into the match arm. It 
just so happens that this seems to be zeroing out the original pointer in 
memory, and that happens to be the same representation that None has for the 
type Option<~MyStruct> (since ~ pointers are non-nullable), so the act of 
moving the value just happens to be transforming it into a None.

Normally you couldn't do this, but mutable statics are weird (which is why you 
need the unsafe block to access it).

When you remove the ~, the lines end up printing the same because MyStruct is 
implicitly copyable, so your match arm is now copying instead of moving.

The correct fix here is to use `Some(ref data)` instead of `Some(data)`. This 
will take a reference to the data instead of moving it, and the static will 
remain unchanged.
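A modern-Rust sketch of that fix, with `Box` standing in for the old `~` pointer and the static dropped (today's compiler rejects moving out of a static outright, so a local suffices to show the binding modes):

```rust
// `Some(ref data)` borrows the boxed value instead of moving it,
// so the Option is left intact after the match.
struct MyStruct { val: i32 }

fn main() {
    let global_data: Option<Box<MyStruct>> = Some(Box::new(MyStruct { val: 42 }));
    match global_data {
        Some(ref data) => assert_eq!(data.val, 42),
        None => unreachable!(),
    }
    // The value was only borrowed, not moved: still Some.
    assert!(global_data.is_some());
}
```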

-Kevin

On Jan 28, 2014, at 11:48 AM, Alexander Stavonin a.stavo...@gmail.com wrote:

 Hi all! I’m not sure is it an error or static mut variables 
 misunderstanding from my side. The source:
 
 struct MyStruct {
 val: int
 }
 
 static mut global_data: Option<~MyStruct> = None;
 
 fn test_call() {
 unsafe {
 match global_data {
 Some(data) => { println!("We have data {:?}", data); }
 None => { println!("We don't have data"); }
 }
 }
 }
 
 fn main() {
 
 unsafe {
 global_data = Some(~MyStruct{val: 42});
 }
 
 test_call();
 test_call();
 }
 
 and output:
 
 We have data ~MyStruct{val: 42}
 We don't have data
 
 But if I’m changing global_data from Option<~MyStruct> to Option<MyStruct>, 
 the output changes as well:
 
 We have data ~MyStruct{val: 42}
 We have data ~MyStruct{val: 42}
 
 Is this normal behaviour, i.e. that owning pointers cannot be stored in global 
 variables, or is it an error?


Re: [rust-dev] Compile-time function evaluation in Rust

2014-01-28 Thread Kevin Ballard
It sounds like you're proposing that arbitrary functions may be eligible for 
CTFE if they happen to meet all the requirements, without any special 
annotations. This seems like a bad idea to me. I understand why it's 
attractive, but it means that seemingly harmless changes to a function's 
implementation (but not its signature) can cause compiler errors in other 
modules, or even other crates if the AST for the function happens to be made 
extern.

A more conservative approach would be to require the #[ctfe] annotation, which 
then imposes all the given restrictions on the function. The downside is such a 
function then is restricted to only calling other CTFE functions, so we'd have 
to go in to the standard libraries and add this annotation whenever we think 
it's both useful and possible.

This approach mirrors the approach being used by C++11/C++14.

-Kevin

On Jan 28, 2014, at 2:15 PM, Pierre Talbot ptal...@hyc.io wrote:

 Hi,
 
 The Mozilla foundation proposes research internships [1] and the CTFE 
 optimization in the Rust compiler seems to be a really exciting project. I 
 wrote a proposal [2] that I'll send with my application, so I'd like to 
 share it with you and discuss bringing CTFE into Rust.
 
 Here a non-exhaustive summary of key points in my proposal.
 
 First of all, we need to establish when CTFE is triggered; I found two 
 contexts (denoted as a hole []):
 
 * Inside an immutable static variable (static ident ’:’ type ’=’ [] ’;’).
 * In a vector expression (’[’ expr ’,’ .. [] ’]’).
 
 Next, in a similar way as with inline attributes, we might want to add 
 these new attributes:
 
 * #[ctfe] hints the compiler to perform CTFE.
 * #[ctfe(always)] asks the compiler to always perform CTFE resulting in a
 compiler error if it’s impossible.
 * #[ctfe(never)] asks the compiler to never perform CTFE resulting in a 
 compiler
 error if this function is called in a CTFE context.
 
 The rationale behind this is that some functions might want to disallow CTFE, 
 for example if they manipulate machine-dependent data (such as playing with 
 endianness). Some might want to be designed only for compile-time use, and so we 
 want to disable run-time execution. Finally, others might hint the compiler 
 to try to optimize whenever it can; of course, if the function contains an 
 infinite loop for some input, the compilation might not terminate.
 
 I propose some requirements on functions eligible for CTFE (see the proposal 
 for references to the Rust manual):
 
 1. Its parameters are evaluable at compile-time.
 2. It isn’t a diverging function.
 3. It isn’t an unsafe function.
 4. It doesn’t contain unsafe block.
 5. It doesn’t perform I/O actions.
 6. The function source code is available to the compiler. It mustn’t be in an 
 external
 block, however it can be an extern function.
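For reference, the direction this proposal eventually took in Rust is `const fn`; a minimal sketch in today's syntax (which postdates this thread and differs from the `#[ctfe]` attributes proposed above):

```rust
// A function restricted enough to be evaluated at compile time.
const fn square(x: i32) -> i32 {
    x * x
}

// Evaluated by the compiler, roughly the "static" CTFE context above.
const N: i32 = square(6);

fn main() {
    assert_eq!(N, 36);
    // const fns remain ordinary functions at run time.
    assert_eq!(square(7), 49);
}
```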
 
 In this proposal, you'll also find a pseudo-coded algorithm, related work (in 
 D and C++), and much more :-)
 
 If you have any suggestions or corrections, do not hesitate. Also, feel free 
 to ask questions.
 
 Regards,
 Pierre Talbot
 
 [1] https://careers.mozilla.org/en-US/position/oZO7XfwB
 [2] http://hyc.io/rust-ctfe-proposal.pdf


Re: [rust-dev] Rust-ci updates (project categories and documentation)

2014-01-20 Thread Kevin Ballard
That's pretty cool. http://www.rust-ci.org/kballard/rust-lua/doc/lua now 
contains documentation for rust-lua!

Although there are two issues with the documentation as it exists now. Both are 
caused by the fact that the docs link actually embeds the real documentation in 
an iframe.

The first issue is that the title of the page is always Rust CI.

The second is that clicking a link in the documentation will only navigate that 
iframe, and therefore the URL bar doesn't get updated and the browser back 
button (and history list) is broken.

-Kevin

On Jan 20, 2014, at 4:01 PM, Hans Jørgen Hoel hansj...@gmail.com wrote:

 Hi,
 
 Rust-ci (http://www.rust-ci.org/) has been updated with some new features
 
 * documentation can be uploaded during Travis CI builds (see project
 page - owner actions - get config for docs upload)
 * categorization of projects
 * projects can now be edited and deleted by owners (aka Web 2.0 compliance)
 
 For a view of projects by category see:
 
 http://www.rust-ci.org/projects/
 
 I've added likely categories to projects based on name and
 description, but I've probably missed a few so please take a look at
 your own project (owner actions - edit project to change).
 
 Categories are fixed for now. Give me a ping if you want to have a
 category added or changed.
 
 Projects on the frontpage with a padlock in the status column are
 missing Travis CI authentication due to an earlier bug. To fix this,
 go to the project page and select Authenticate.
 
 If you encounter any other issues, please report it here:
 
 https://github.com/hansjorg/rust-ci
 
 Next up:
 
 * benchmarks upload (and graphing)
 
 cheers,
 
 Hans Jørgen


Re: [rust-dev] Something odd I just noticed with mut and pattern matching

2014-01-16 Thread Kevin Ballard
On Jan 16, 2014, at 8:13 AM, Tommy M. McGuire mcgu...@crsr.net wrote:

let file = File::open_mode(Path::new("test"), Truncate, Write);
 
match file {
Some(mut f) => f.write_str("hello"),
None => fail!("not a file")
}

This is fine, because you’re consuming `file` and moving the contained value 
into a new mutable `f` variable. It’s basically the same as

if file.is_some() {
let mut f = file.unwrap();
…
} else {
fail!("…");
}

 works, while
 
let mut file = File::open_mode(Path::new("test"), Truncate, Write);
 
match file {
Some(mut f) => f.write_str("hello"),
None => fail!("not a file")
}
 
 results in a "variable does not need to be mutable" warning.

The warning here is because `file` doesn’t need to be mutable.

 Shouldn't the third option also fail (and possibly the second option
 succeed)?

The first two failed because the variable that needed to be mutable, `f`, was 
not mutable; the second two succeeded because it was.

As Simon has already pointed out, if you tried to use a by-ref binding it would 
fail, because you can’t take a mutable borrow of an immutable variable. But you 
didn’t do that, you moved the value, and moving from an immutable variable into 
a mutable one is perfectly legal.
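A modern-Rust sketch of that distinction (a `String` stands in for the file handle here, purely for illustration):

```rust
fn main() {
    // `file` itself is immutable...
    let file = Some(String::from("hello"));
    match file {
        // ...but the match arm moves the value into a *new* mutable
        // binding, which is perfectly legal.
        Some(mut s) => {
            s.push_str(" world");
            assert_eq!(s, "hello world");
        }
        None => panic!("not a file"),
    }
}
```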

-Kevin


Re: [rust-dev] ASCII character literals

2014-01-15 Thread Kevin Ballard
The relevant issue for this is https://github.com/mozilla/rust/issues/4334.

-Kevin

On Jan 15, 2014, at 7:26 AM, Michael Neumann mneum...@ntecs.de wrote:

 
 Am 15.01.2014 16:23, schrieb Evan G:
 I'm not weighing in on whether this is something rust should do or not, but 
 we could mimic the 16i, 10u numeric literal system to achieve this in 
 syntax. An ascii literal would have a suffix, for example 'x'a or 'x'u to 
 explicitly specify unicode (which would still be the default). This could 
 probably work for string literals too.
 
 Something like 'x'a would be very nice to have!
 
 Regards,
 
   Michael
 
 
 
 On Wed, Jan 15, 2014 at 8:37 AM, Michael Neumann mneum...@ntecs.de wrote:
 Hi,
 
 There are lots of protocols based on ASCII character representation. In 
 Rust, the natural way to represent them is
 by a u8 literal (optionally wrapped within std::ascii::Ascii).
 What I am missing is a simple way to represent those literals in code. What 
 I am doing most of the time is:
 
 fn read_char() -> Option<char> {
    match io.read_byte() {
      Some(b) => Some(b as char),
      None => None
    }
 }
 
 And then use character literals in pattern matching. What I'd highly prefer 
 is a way to directly represent ASCII characters 
 in the code, like:
 
 match io.read_byte().unwrap() {
 'c'_ascii => ...
 }
 
 If macros worked in patterns, something like:
 
match ... {
ascii!('a') => ...
}
 
 would work for me too. Ideally that would work with range patterns as well, 
 but of course an ascii_range!() macro would
 do the same.
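What this thread asks for later landed in Rust as byte literals (`b'x'`), which work directly in patterns and ranges; a sketch in today's syntax (postdating the thread):

```rust
// b'x' is a u8 literal with the ASCII value of 'x'.
fn classify(b: u8) -> &'static str {
    match b {
        b'a'..=b'z' => "lowercase",
        b'0'..=b'9' => "digit",
        _ => "other",
    }
}

fn main() {
    assert_eq!(classify(b'c'), "lowercase");
    assert_eq!(classify(b'7'), "digit");
    assert_eq!(classify(b'!'), "other");
}
```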
 
 Is this useful to anyone?
 
 Regards,
 
 Michael


Re: [rust-dev] returning functions in rust

2014-01-10 Thread Kevin Ballard
On Jan 10, 2014, at 10:18 AM, Patrick Walton pcwal...@mozilla.com wrote:

 On 1/10/14 7:20 AM, Nakamura wrote:
 I'm new to rust, and tried going through some of the examples from the
 OS class[0] that was taught in rust.  However, I got tripped up by the
 exercise, make a function that takes an integer n and another function
 |x| -> x, and returns a function that is n applications of the original
 function.
 
 I've been trying to see what the limits of rust are if you are using it
 without the runtime/managed pointers etc, and it seems like I stumble
 against one of the limits when trying to return functions.  The big
 question is where to allocate the new function I want to return.
 
 Rust won't automatically allocate closed-over variables on the heap. This 
 limits the ability to write code like this naturally, but you get the benefit 
 that all allocations are immediately visible and under the control of the 
 programmer.
 
 I would not suggest trying to use managed pointers for this. Instead I would 
 suggest `proc`, which is allocated on the exchange heap and can close over 
 variables. This should be sufficient for the question as posed, as long as 
 the other function has a `'static` bound (`'static |x| -> x`).

Procs can only be called once, though, which is a bit of a limitation.

Does ~|| -> T no longer exist?
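For reference, modern Rust resolved this with `impl Fn` return types and `move` closures, so the exercise no longer needs `proc` or `~` closures; a sketch in today's syntax (not available at the time of this thread):

```rust
// Return a closure that applies `f` n times.
fn compose_n(n: u32, f: impl Fn(i32) -> i32) -> impl Fn(i32) -> i32 {
    move |x| (0..n).fold(x, |acc, _| f(acc))
}

fn main() {
    let add3 = compose_n(3, |x| x + 1);
    // Callable any number of times, unlike a one-shot proc.
    assert_eq!(add3(0), 3);
    assert_eq!(add3(10), 13);
}
```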

-Kevin


Re: [rust-dev] 0.9 prerelease testing

2014-01-09 Thread Kevin Ballard
That path does not exist in Xcode 5.0 or Xcode 5.1 DP3. Are you sure you aren't 
looking at an older Xcode (say, Xcode 4.6)?

-Kevin

On Jan 9, 2014, at 4:04 AM, Alexander Stavonin a.stavo...@gmail.com wrote:

 FYI: Apple didn't remove GDB in 10.9. They removed the symlink for it, so 
 you can find GDB in 
 /Applications/Xcode.app/Contents/Developer/usr/libexec/gdb/gdb-i386-apple-darwin
  and create the symlink manually.
 
 Best,
 Alex
 
 
 2014/1/9 Jack Moffitt j...@metajack.im
  We've got a little 0.9 release candidate here. I've given it the most
  cursory test, but if you have the inclination go ahead and install these on
  your system of choice and see how it fares. These days we generally claim to
  support Mac OS X 10.6+, Windows 7 and 2008 RC2, a variety of Linuxes, and
  Android, but the intrepid may have luck on other platforms as well. If
  things go reasonably well with this RC then we'll sign and tag and release
  it to the world tomorrow.
 
 Servo is moving almost to this version and it's been working pretty
 well even with lightly mixed native and green tasks. Debugging in
 particular seems much improved (except on OS X 10.9 where Apple has
 removed gdb).
 
 Some of my favorite things from 0.9 are the dead code warnings and the
 almost total removal of the option dance. Servo's code gets a little
 cleaner every time we move Rust forward.
 
 Happy testing!
 jack.





Re: [rust-dev] Unbounded channels: Good idea/bad idea?

2014-01-01 Thread Kevin Ballard
On Dec 31, 2013, at 7:40 PM, Jason Fager jfa...@gmail.com wrote:

 If you're pushing to an unbounded vec in a tight loop you've got fundamental 
 design issues.

Or you’re processing user input. A rather trivial example here is parsing a 
file into lines. If I have 2GB of RAM and I throw a 4GB file at that parser, 
I’m going to have issues.

-Kevin


Re: [rust-dev] on quality success

2013-12-31 Thread Kevin Ballard
Some of what you said I agree with, and some I don’t. But in particular, I 
disagree with your thesis. A language is successful if it attracts enough 
programmers. It may very well be true that the best way to do that is to 
produce a high-quality language, but the ineffable “quality” of a language is 
not the only thing that attracts programmers.

You make a good point that programmers check out new languages because of the 
differences, rather than in spite of them. But I think that you’ve turned this 
into thinking that differences are inherently good (for example, in your 
suggestion Why not use if to mean 'try' or 'switch' or 'foreach', and ifif 
to mean 'if'? What do you think?”), and that’s simply not true. Differences can 
be good, they can be bad, and they can also be meaningless. And I would argue 
that meaningless differences are bad, because it’s yet one more thing for a 
programmer to have to learn when switching from an existing language.

There is an important balancing act going on when creating a new language. If 
you make it too different, even if you believe every single change is for the 
good, then the alien nature of the language will serve to make it harder for 
existing programmers to jump to your new language, and therefore you will get 
fewer programmers making the attempt. And too few programmers will, of course, 
render your language unsuccessful.

-Kevin

On Dec 31, 2013, at 6:56 AM, spir denis.s...@gmail.com wrote:

 Holà!
 
 [This is a rather personal and involved post. Press del if you feel like 
 it...]
 [also, it is long]
 [copy to rust-dev mailing list: actually the post is inspired by a thread 
 there, “Thoughts on the Rust Roadmap”]
 
 There is a point obvious to me; apparently most people including many 
 language designers don't share it, or act as if they did not:
 
a language should be successful iff it is of high quality
 
 A kind of symmetric statement also holds;
 
let us hope low quality languages have no success!
 
 There are various reasons to hope this, the most stupid being that 
 successful languages influence others, present & future. This is in my view a 
 symptom of our civilisation's funny spirit (read: madness), and related to 
 the actual points I intend to state (if, for once, I manage to express my 
 thought).
 
 Apparently, many language designers proceed more or less the following way: 
 there are a few key points (for them) they consider mis-designed or missing 
 or wrong in some way in existing languages (not all the same for every 
 language). Thus, they want to make a language that repairs these points, all 
 together. Then, certainly in fear that too many changes may repel potential 
 adopters of their language, in hope to maximise its chances of success 
 *despite* it breaking habits on the key points more important to them, they 
 won't change anything else, or only the bare minimum they can. They want 
 instead to remain as mainstream as possible on everything else. [4]
 
 I consider this spirit bad; I mean, very bad. This is the way basic design 
 errors propagate from successful languages to others, for instance. [1] 
 Apparently, it takes a great dose of courage to break any existing practice 
 in a _new_ language: tell me why, I do not understand.
 
 Note that I am here talking of wrong design points in the opinion of a given 
 language designer. Choices he (since it's mostly men) would not do if 
 programming were a new field, open to all explorations. (There are indeed 
 loads of subjective or ideological design points; see also [1] & [3]) 
 However, while programming is not a new field anymore, it is indeed open to 
 all explorations, for you, for me, if you or me wants it. Nothing blocks us 
 but our own bloackages, our own fears, and, probably, wrong rationales, 
 perhaps non-fully-conscious ones.
 
 Deciding to reuse wrong, but mainstream, design decisions in one's own 
 language is deciding to intentionally make it of lower quality. !!! Funny 
 (read: mad), isn't it? It is thus also intentionally deciding to make it not 
 worth success. This, apparently, to make its actual chances of success 
 higher. (Isn't our culture funny?)
 Then, why does one _actually_ make a new language? For the joy of making 
 something good? To contribute to a better world, since languages and 
 programming are a common good? [2] For the joy of offering something of as 
 high a quality as humanly possible? Else, why? For fame, honour, status, 
 money, power? To mentally masturbate on the idea of having made something 
 sucessful (sic!)?
 
 We are not in need of yet another language trying, or pretending, to improve 
 on a handful of disparate points, leaving all the rest as is, meaning in bad 
 state. And, as an example, we are not in need of yet another failed trial for 
 a successor to C as major low-level lang.
 Differences, thought of by their designer as significant quality 
 improvements, are the *reasons* for programmers to adopt a new 

Re: [rust-dev] Auto-borrow/deref (again, sorry)

2013-12-28 Thread Kevin Ballard
We have to say `&mut i` in main() because `&i` is non-mutable. We’re explicitly 
taking a mutable borrow.

But once it’s in foo(), it’s already mutable. The type `&mut int` carries its 
mutability with it. Having to say `&mut` again makes no sense and is nothing but 
pure noise.
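A sketch of the point in today's syntax (`i32` replacing the old `int`):

```rust
fn main() {
    let mut i: i32 = 0;
    foo(&mut i); // explicit &mut at the original borrow site
    assert_eq!(i, 1);
}

fn foo(i: &mut i32) {
    // No extra `&mut` needed: `i` already has type `&mut i32`,
    // which carries its mutability with it.
    bar(i);
}

fn bar(i: &mut i32) {
    *i += 1;
}
```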

-Kevin

On Dec 27, 2013, at 4:59 PM, Vadim vadi...@gmail.com wrote:

 For the same reason we currently have to say `&mut i` in main() - to 
 explicitly acknowledge that the callee may mutate i.  By the same logic, this 
 should be done everywhere.
 
 
 On Wed, Dec 25, 2013 at 3:11 PM, Kevin Ballard ke...@sb.org wrote:
 On Dec 25, 2013, at 5:17 PM, Vadim vadi...@gmail.com wrote:
 
 I agree that unexpected mutation is undesirable, but:
 - requiring 'mut' is orthogonal to requiring '&' sigil, IMHO,
 - as currently implemented, Rust does not always require &mut when callee 
 mutates the argument, for example:
 
 fn main() {
 let mut i: int = 0;
 foo(&mut i);
 println!("{}", i);
 }
 fn foo(i: &mut int) {
 bar(i); // no &mut!
 }
 fn bar(i: &mut int) {
 *i = *i + 1;
 }
 
 Note that invocation of bar() inside foo() does not forewarn reader by 
 requiring '&mut'.  Wouldn't you rather see this?:
 
 fn main() {
 let mut i: int = 0;
 foo(&mut i);
 println!("{}", i);
 }
 fn foo(i: &mut int) {
 bar(&mut i);
 }
 fn bar(i: &mut int) {
 *i = *i + 1;
 }
 
 What is the point of adding `&mut` here? bar() does not need `&mut` because 
 calling bar(i) does not auto-borrow i. It’s already a `&mut int`.
 
 -Kevin
 



Re: [rust-dev] Auto-borrow/deref (again, sorry)

2013-12-28 Thread Kevin Ballard
On Dec 28, 2013, at 1:53 PM, Ashish Myles marci...@gmail.com wrote:

 I think I see the confusion (as I suffered from the same point of confusion). 
 So let me restate your answer and please correct me of I am wrong.
 1. `mut int` and `&mut int` are different types and the former doesn't 
 automatically convert to the latter.
 
Effectively correct. `mut int` isn’t actually a type; `int` is a type and the 
`mut` here means that the slot is mutable. `&mut int` is a type. You can in 
fact have `mut i: &mut int` to have a mutable slot containing a `&mut int`. 
This would allow you to replace the pointer stored in `i` with a different 
pointer. If the slot is not mutable, the pointer is fixed, but because it’s a 
`&mut int` the data that’s being pointed to can be modified.
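A sketch of the slot-vs-reference distinction in today's syntax (`i32` for `int`):

```rust
fn main() {
    let mut a = 1;
    let mut b = 2;
    // Mutable slot (`mut r`) holding a mutable reference (`&mut i32`).
    let mut r: &mut i32 = &mut a;
    *r += 10;   // mutate the pointee through the reference
    r = &mut b; // repoint the slot itself
    *r += 10;
    assert_eq!((a, b), (11, 12));
}
```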
 2. The way to get the latter from the former is to say `&mut i` since `&i` is 
 defined as taking a non-mut borrow even if i is mut. (This was the point of 
 confusion I believe.)
 
Correct.

There is in fact one way to automatically borrow `mut i` to `&mut i`, and 
that’s when calling a method on `i` that takes `&mut self`. But I believe 
that’s the only way to automatically perform this borrowing.
 3. No explicit conversion is needed within foo() since the type of i is 
 already `&mut int`.
 
Correct.

-Kevin
 Ashish
 
 On Dec 28, 2013 1:33 PM, Kevin Ballard ke...@sb.org wrote:
 We have to say `&mut i` in main() because `&i` is non-mutable. We’re 
 explicitly taking a mutable borrow.
 
 But once it’s in foo(), it’s already mutable. The type `&mut int` carries its 
 mutability with it. Having to say `&mut` again makes no sense and is nothing 
 but pure noise.
 
 -Kevin
 


Re: [rust-dev] Auto-borrow/deref (again, sorry)

2013-12-28 Thread Kevin Ballard
On Dec 28, 2013, at 7:10 PM, Vadim vadi...@gmail.com wrote:

 You could have said “Well, I've already declared my variable as mutable, i.e. 
 `let mut i = 0`. Since it is already mutable, why do I have to say mut again 
 when borrowing? The compiler could have easily inferred that.” I believe 
 the answer is “To help readers of this code realize that the called function 
 is [most likely] going to mutate the variable.” I believe the same logic 
 should apply to &mut references.

The answer is because `&T` and `&mut T` are distinct types, with distinct behavior 
(notably, mutable borrows must be unique).

-Kevin


Re: [rust-dev] Unbounded channels: Good idea/bad idea?

2013-12-25 Thread Kevin Ballard
On Dec 24, 2013, at 10:23 PM, Daniel Micay danielmi...@gmail.com wrote:

 3. Having a bound of 1 by default. The default should allow for parallelism.
 
 A bound of 1 by default seems pretty stupid. I've never understood why
 Go does this... it belongs in a separate type.

Go actually has a default bound of 0. When you send on a channel that wasn’t 
allocated with a given bound, it blocks until something else reads. A bound of 
1 allows for a single item to be stored in the channel without blocking.
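Rust's std later exposed the same semantics: `sync_channel(0)` is a Go-style rendezvous channel where every send blocks until a receiver is ready. A minimal sketch (postdating this thread):

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

fn main() {
    // Bound 0: every send must rendezvous with a waiting receiver.
    let (tx, rx) = sync_channel::<i32>(0);
    // No receiver waiting yet, so a non-blocking send fails.
    assert!(tx.try_send(1).is_err());

    let handle = thread::spawn(move || rx.recv().unwrap());
    tx.send(7).unwrap(); // blocks until the receiver is ready
    assert_eq!(handle.join().unwrap(), 7);
}
```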

-Kevin


Re: [rust-dev] Auto-borrow/deref (again, sorry)

2013-12-25 Thread Kevin Ballard
On Dec 25, 2013, at 5:17 PM, Vadim vadi...@gmail.com wrote:

 I agree that unexpected mutation is undesirable, but:
 - requiring 'mut' is orthogonal to requiring '&' sigil, IMHO,
 - as currently implemented, Rust does not always require &mut when callee 
 mutates the argument, for example:
 
 fn main() {
 let mut i: int = 0;
 foo(&mut i);
 println!("{}", i);
 }
 fn foo(i: &mut int) {
 bar(i); // no &mut!
 }
 fn bar(i: &mut int) {
 *i = *i + 1;
 }
 
 Note that invocation of bar() inside foo() does not forewarn reader by 
 requiring '&mut'.  Wouldn't you rather see this?:
 
 fn main() {
 let mut i: int = 0;
 foo(&mut i);
 println!("{}", i);
 }
 fn foo(i: &mut int) {
 bar(&mut i);
 }
 fn bar(i: &mut int) {
 *i = *i + 1;
 }

What is the point of adding `&mut` here? bar() does not need `&mut` because 
calling bar(i) does not auto-borrow i. It’s already a `&mut int`.

-Kevin


Re: [rust-dev] See pull request #11129

2013-12-23 Thread Kevin Ballard
Even with refutable `let`s, I think there's still a good case for having 
`.unwrap()`-style APIs on enums, which is that often you need to unwrap an enum 
inside a larger expression. A refutable `let` only works if you're actually 
using a `let`-binding to begin with.
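A small modern-Rust sketch of that point: the `.unwrap()`-style accessor composes mid-expression, where a refutable `let` could not help:

```rust
fn main() {
    // Unwrapping inline inside a larger expression.
    let n: i32 = "21".parse::<i32>().unwrap() * 2;
    assert_eq!(n, 42);
}
```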

-Kevin

On Dec 23, 2013, at 1:24 PM, Simon Sapin simon.sa...@exyr.org wrote:

 On 23/12/2013 22:19, Liigo Zhuang wrote:
 Code full of .unwrap() is not good smell I think.
 
 I agree, and I wrote in the first email that it is one of the downsides.
 
 But doubling the API (from_utf8 and from_utf8_opt) is also not a good smell, 
 so it’s a compromise to find.
 
 -- 
 Simon Sapin


Re: [rust-dev] Unbounded channels: Good idea/bad idea?

2013-12-20 Thread Kevin Ballard
On Dec 20, 2013, at 8:59 AM, Carter Schonwald carter.schonw...@gmail.com 
wrote:

 agreed! Applications that lack explicit logic for handling heavy workloads 
 (ie producers outpacing consumers for a sustained period) are the most common 
 culprit for unresponsive desktop applications that become completely 
 unusable. 

That’s a pretty strong claim, and one I would have to disagree with quite 
strongly. Every time I’ve sampled an unresponsive application, I don’t think 
I’ve ever seen a backtrace that suggests a producer outpacing a consumer.

-Kevin

 relatedly: would not bounded but programmatically growable channels also make 
 it trivial to provide a unbounded style channel abstraction? (not that i'm 
 advocating that, merely that it seems like it would turn the unbounded 
 channel abstraction into an equivalent one that is resource usage aware)
 
 
 On Fri, Dec 20, 2013 at 8:52 AM, György Andrasek jur...@gmail.com wrote:
 On 12/19/2013 11:13 PM, Tony Arcieri wrote:
 So I think that entire line of reasoning is a red herring. People
 writing toy programs that never have their channels fill beyond a small
 number of messages won't care either way.
 
 However, overloaded programs + queues bounded by system resources are a
 production outage waiting to happen. What's really important here is
 providing a means of backpressure so overloaded Rust programs don't grow
 until they consume system resources and OOM.
 
 While I disagree with the notion that all programs which don't have their 
 bottlenecks right here are toys, we should definitely strive for the 
 invariant that task failure does not cause independent tasks to fail.
 
 Also, OOM is not free. If you manage to go OOM on a desktop, you'll get a 
 *very* unhappy user, regardless of their expectations wrt your memory usage. 
 Linux with a spinning disk and swap for example will degrade to the point 
 where they'll reboot before the OOM killer kicks in.
 
 Can we PLEASE not do that *by default*?
 


Re: [rust-dev] Unbounded channels: Good idea/bad idea?

2013-12-20 Thread Kevin Ballard
I haven’t profiled it, but my belief is that under normal circumstances, 
messages come in slow enough that the consumer is always idle and ready to 
process the next message as soon as it’s sent. However, I expect it does 
occasionally back up a bit, e.g. when I get a burst of traffic such as during a 
netsplit when I’m sent a large batch of “user has quit” or “user has 
joined” (when the netsplit is over). I don’t know how much the channel backs up 
at that point, probably not too much.

For this particular use-case, a channel that’s bounded at e.g. 100,000 elements 
would be indistinguishable from an infinite channel, as long as it still 
dynamically allocates (I don’t think Go channels dynamically allocate, which is 
why I can’t just use a 100,000 element channel for real).

However, my overall point about large bounds being indistinguishable from 
infinite is that if your goal is to pick a bound large enough to appear 
infinite to the program, without actually risking OOM, then there’s no 
automated way to do this. Different environments have differing amounts of 
available resources, and there’s no good way to pick a bound that is 
sufficiently high but is definitively lower than the resource bounds. This is 
why I’m recommending that we have truly infinite channels, for users who don’t 
want to have to think about bounds (e.g. my irc program), as well as bounded 
channels, where the user has to explicitly pick a bound (with no “default” 
provided).
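The two flavors Kevin recommends are roughly what std later shipped: `std::sync::mpsc::channel` (unbounded, sends never block) and `sync_channel` (caller-chosen bound). A minimal sketch (postdating this thread):

```rust
use std::sync::mpsc::{channel, sync_channel};

fn main() {
    // Unbounded: the sender never blocks; the queue grows as needed.
    let (tx, rx) = channel();
    for i in 0..1000 {
        tx.send(i).unwrap();
    }
    assert_eq!(rx.iter().take(3).collect::<Vec<i32>>(), vec![0, 1, 2]);

    // Bounded at 1: the user picked the bound explicitly.
    let (stx, _srx) = sync_channel(1);
    assert!(stx.try_send(1).is_ok());
    assert!(stx.try_send(2).is_err()); // buffer full
}
```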

-Kevin

On Dec 20, 2013, at 12:55 PM, Carter Schonwald carter.schonw...@gmail.com 
wrote:

 kevin, what sort of applications and workloads are you speaking about. Eg in 
 your example irc server, whats the typical workload when you've used it?
 
 cheers
 -Carter
 
 
 On Fri, Dec 20, 2013 at 12:54 PM, Kevin Ballard ke...@sb.org wrote:
 On Dec 20, 2013, at 8:59 AM, Carter Schonwald carter.schonw...@gmail.com 
 wrote:
 
 agreed! Applications that lack explicit logic for handling heavy workloads 
 (ie producers outpacing consumers for a sustained period) are the most 
 common culprit for unresponsive desktop applications that become completely 
 unusable. 
 
 That’s a pretty strong claim, and one I would have to disagree with quite 
 strongly. Every time I’ve sampled an unresponsive application, I don’t think 
 I’ve ever seen a backtrace that suggests a producer outpacing a consumer.
 
 -Kevin
 
 relatedly: would not bounded but programmatically growable channels also 
 make it trivial to provide a unbounded style channel abstraction? (not 
 that i'm advocating that, merely that it seems like it would turn the 
 unbounded channel abstraction into an equivalent one that is resource usage 
 aware)
 
 
 On Fri, Dec 20, 2013 at 8:52 AM, György Andrasek jur...@gmail.com wrote:
 On 12/19/2013 11:13 PM, Tony Arcieri wrote:
 So I think that entire line of reasoning is a red herring. People
 writing toy programs that never have their channels fill beyond a small
 number of messages won't care either way.
 
 However, overloaded programs + queues bounded by system resources are a
 production outage waiting to happen. What's really important here is
 providing a means of backpressure so overloaded Rust programs don't grow
 until they consume system resources and OOM.
 
 While I disagree with the notion that all programs which don't have their 
 bottlenecks right here are toys, we should definitely strive for the 
 invariant that task failure does not cause independent tasks to fail.
 
 Also, OOM is not free. If you manage to go OOM on a desktop, you'll get a 
 *very* unhappy user, regardless of their expectations wrt your memory usage. 
 Linux with a spinning disk and swap for example will degrade to the point 
 where they'll reboot before the OOM killer kicks in.
 
 Can we PLEASE not do that *by default*?
 
 
 

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Unbounded channels: Good idea/bad idea?

2013-12-19 Thread Kevin Ballard
On Dec 18, 2013, at 10:49 PM, Daniel Micay danielmi...@gmail.com wrote:

 On Thu, Dec 19, 2013 at 1:23 AM, Kevin Ballard ke...@sb.org wrote:
 In my experience using Go, most of the time when I use a channel I don't 
 particularly care about the size, as long as it has a size of at least 1 (to 
 avoid blocking on the send). However, if I do care about the size, usually I 
 want it to be effectively infinite (and I have some code in my IRC bot that 
 uses a separate goroutine in order to implement an infinite channel). Upon 
 occasion I do want an explicitly bounded channel, but, at least in my code, 
 that tends to be rarer than wanting effectively infinite.
 
 It's not effectively infinite, because you can run out of memory. The
 difference between a bounded queue and an unbounded queue is whether
 running out of space blocks or aborts the process. The maximum
 capacity doesn't also have to be the minimum capacity - that's just an
 optimization used by some specific implementations and doesn't apply
 to all bounded channels.
 
 I also believe that unbounded should be the default, because it's the most 
 tolerant type of channel when you don't want to have to think about bounding 
 limits. It also means async send is the default, which I think is a good 
 idea.
 
 -Kevin
 
 You do have to think about bounding limits. The limits just have to be
 externally implemented instead of being enforced by the queue. It's
 not a matter of whether send is synchronous or asynchronous but
 whether or not the data structure ignores the possibility of running
 out of resources.

Running out of memory can certainly be a problem with unbounded channels, but 
it's not a problem unique to unbounded channels. I'm not sure why it deserves 
so much extra thought here to the point that we may default to bounded. We 
don't default to bounded in other potential resource-exhaustion scenarios. For 
example, ~[T] doesn't default to having a maximum capacity that cannot be 
exceeded. The only maximum there is the limits of memory. I can write a loop 
that calls .push() on a ~[T] until I exhaust all my resources, but nobody 
thinks that's a serious issue.
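(A trivial modern-Rust illustration of the same point, with Vec standing in for ~[T]: nothing asks the caller for a capacity up front, and only memory limits the growth.)

```rust
fn main() {
    // Vec (the descendant of ~[T]) has no built-in capacity cap:
    // it simply grows until the allocator refuses.
    let mut v: Vec<u32> = Vec::new();
    for i in 0..100_000 {
        v.push(i); // no bound anywhere; only memory limits apply
    }
    assert_eq!(v.len(), 100_000);
}
```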

There is definitely a use-case for bounded channels. But I don't think it 
should be the default. If bounded channels are the default, then everyone who 
uses a channel needs to have to think about what an appropriate bound is, and 
in practice will probably just throw some small number at the channel and call 
it a day. I expect most uses of channels aren't going to grow the channel 
infinitely, and as such there's no need to require the programmer to try and 
come up with a bound for it, especially because if they come up with a bound 
that's too low then it will cause problems (e.g. performance problems, if the 
failure case is blocking on send).

If the channel does have the potential to grow infinitely, then the programmer 
needs to recognize this case and handle it explicitly (e.g. by opting into a 
bounded channel and determining an appropriate bound to use). No default 
behavior will handle the need for bounded channels correctly for everyone.

-Kevin
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Unbounded channels: Good idea/bad idea?

2013-12-19 Thread Kevin Ballard
Here’s an example from where I use an infinite queue.

I have an IRC bot, written in Go. The incoming network traffic of this bot is 
handled in one goroutine, which parses each line into its components, and 
enqueues the result on a channel. The channel is very deliberately made 
infinite (via a separate goroutine that stores the infinite buffer in a local 
slice). The reason it’s infinite is because the bot needs to be resilient 
against the case where either the consumer unexpectedly blocks, or the network 
traffic spikes. The general assumption is that, under normal conditions, the 
consumer will always be able to keep up with the producer (as the producer is 
based on network traffic and not e.g. a tight CPU loop generating messages as 
fast as possible). Backpressure makes no sense here, as you cannot put 
backpressure on the network short of letting the socket buffer fill up, and 
letting the socket buffer fill up will cause the IRC network to disconnect you. 
So the overriding goal here is to prevent network disconnects, while assuming 
that the consumer will be able to catch up if it ever gets behind.

This particular use case very explicitly wants a dynamically-sized infinite 
channel. I suppose an absurdly large channel would be acceptable, because if 
the consumer ever gets e.g. 100,000 lines behind then it’s in trouble already, 
but I’d rather not have the memory overhead of a statically-allocated gigantic 
channel buffer.

-Kevin

On Dec 19, 2013, at 10:04 AM, Jason Fager jfa...@gmail.com wrote:

 Okay, parallelism, of course, and I'm sure others.  Bad use of the word 
 'only'.  The point is that if your consumers aren't keeping up with your 
 producers, you're screwed anyways, and growing the queue indefinitely isn't a 
 way to get around that.  Growing queues should only serve specific purposes 
 and make it easy to apply back pressure when the assumptions behind those 
 purposes go awry.
 
 
 On Thursday, December 19, 2013, Patrick Walton wrote:
 On 12/19/13 6:31 AM, Jason Fager wrote:
 I work on a system that handles 10s of billions of events per day, and
 we do a lot of queueing.  Big +1 on having bounded queues.  Unbounded
 in-memory queues aren't, they just have a bound you have no direct
 control over and that blows up the world when its hit.
 
 The only reason to have a queue size greater than 1 is to handle spikes
 in the producer, short outages in the consumer, or a bit of
 out-of-phaseness between producers and consumers.
 
 Well, also parallelism.
 
 Patrick
 

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Unbounded channels: Good idea/bad idea?

2013-12-19 Thread Kevin Ballard
On Dec 19, 2013, at 10:25 AM, Matthieu Monrocq matthieu.monr...@gmail.com 
wrote:

 On Thu, Dec 19, 2013 at 7:23 PM, Kevin Ballard ke...@sb.org wrote:
 Here’s an example from where I use an infinite queue.
 
 I have an IRC bot, written in Go. The incoming network traffic of this bot is 
 handled in one goroutine, which parses each line into its components, and 
 enqueues the result on a channel. The channel is very deliberately made 
 infinite (via a separate goroutine that stores the infinite buffer in a local 
 slice). The reason it’s infinite is because the bot needs to be resilient 
 against the case where either the consumer unexpectedly blocks, or the 
 network traffic spikes. The general assumption is that, under normal 
 conditions, the consumer will always be able to keep up with the producer (as 
 the producer is based on network traffic and not e.g. a tight CPU loop 
 generating messages as fast as possible). Backpressure makes no sense here, 
 as you cannot put backpressure on the network short of letting the socket 
 buffer fill up, and letting the socket buffer fill up will cause the IRC 
 network to disconnect you. So the overriding goal here is to prevent network 
 disconnects, while assuming that the consumer will be able to catch up if it 
 ever gets behind.
 
 This particular use case very explicitly wants a dynamically-sized infinite 
 channel. I suppose an absurdly large channel would be acceptable, because if 
 the consumer ever gets e.g. 100,000 lines behind then it’s in trouble 
 already, but I’d rather not have the memory overhead of a 
 statically-allocated gigantic channel buffer.
 
 I feel the need to point out that the producer could locally queue the 
 messages before sending over the channel if it were bounded.

No it can’t. Most of the time, the producer is blocked waiting to read from the 
socket. If it’s locally queued the messages, and the channel empties out, the 
producer will still be blocked on the socket and won’t be able to send any of 
the queued messages.

-Kevin
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Unbounded channels: Good idea/bad idea?

2013-12-19 Thread Kevin Ballard
On Dec 19, 2013, at 10:38 AM, Matthieu Monrocq matthieu.monr...@gmail.com 
wrote:

 Furthermore, it is relatively easy to build an unbounded channel over a 
 bounded one: just have the producer queue things. Depending on whether 
 sequencing from multiple producers is important or not, this queue can be 
 either shared or producer-local, with relative ease.

This is incorrect. The producer cannot queue messages and preserve the 
appearance of an unbounded channel. Except when sending a message on the 
channel, the producer is busy doing something else. It’s producing. That’s why 
it’s called a producer. This means that if the channel empties out, the 
consumer will run out of things to consume until the producer has finished 
producing another value. At this point, the producer can send as many enqueued 
values as fits in the channel, but it’s too late, the consumer has already 
stalled out.

The only type of producer that can enqueue the messages locally is one that 
produces by selecting on channels, as it can add the sending channel to the mix 
(but even this assumes that its processing of the other channels is fast enough 
to avoid letting the channel go empty).

It’s for this very reason that in Go, the way to produce an infinite channel 
looks like this:

// run this in its own goroutine
// replace Type with the proper channel type
func makeInfiniteChan(in <-chan Type, next chan<- Type) {
	defer close(next)

	// pending events (this is the infinite part)
	pending := []Type{}

recv:
	for {
		// Ensure that pending always has values so the select can
		// multiplex between the receiver and sender properly
		if len(pending) == 0 {
			v, ok := <-in
			if !ok {
				// in is closed, flush values
				break
			}

			// We now have something to send
			pending = append(pending, v)
		}

		select {
		// Queue incoming values
		case v, ok := <-in:
			if !ok {
				// in is closed, flush values
				break recv
			}
			pending = append(pending, v)

		// Send queued values
		case next <- pending[0]:
			pending = pending[1:]
		}
	}

	// After in is closed, we may still have events to send
	for _, v := range pending {
		next <- v
	}
}

It’s a bit complicated, and requires a separate goroutine just to do the 
buffering. It works, but it shouldn’t be necessary. And it’s not going to be 
viable in Rust because of 1:1 scheduling.
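(For comparison, a sketch of the same producer/consumer shape with a genuinely unbounded channel, using the std::sync::mpsc::channel that Rust's standard library later settled on: the buffering task disappears entirely, because send is already asynchronous.)

```rust
use std::sync::mpsc::channel;
use std::thread;

fn main() {
    // With an unbounded channel, no buffering helper task is needed:
    // send() queues the value and returns immediately.
    let (tx, rx) = channel();
    let producer = thread::spawn(move || {
        for i in 0..1_000 {
            tx.send(i).unwrap(); // never blocks
        }
        // tx is dropped here, which closes the channel
    });
    producer.join().unwrap();
    // rx.iter() yields queued values until the channel is closed.
    assert_eq!(rx.iter().count(), 1_000);
}
```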

-Kevin
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Unbounded channels: Good idea/bad idea?

2013-12-19 Thread Kevin Ballard
On Dec 19, 2013, at 11:08 AM, Jason Fager jfa...@gmail.com wrote:

 So what do you do when you OOM?  A network traffic spike beyond a particular 
 threshold is exactly why you want a bounded queue, b/c it gives you an 
 opportunity to actually handle it and recover, even if the recovery is just 
 drop messages I can't handle.
 
 Backpressure doesn't make sense on an edge server handling traffic you don't 
 control, but spill-to-disk or discarding messages does.

Dropping messages is unacceptable, because that corrupts the internal state of 
the client such that it no longer reflects the state of the server.

OOM is not a worry here. If the channel gets backed up so much that it actually 
causes memory issues, then I've got something seriously wrong with my program, 
something that a bounded channel won't fix. The only thing I can do with a 
bounded channel is disconnect if I run out of space, which is also 
unacceptable. Running OOM and being terminated by the system is the best-case 
behavior here, because anything else means effectively giving up while I still 
had available resources.

 Having a bound on your queue size and statically allocating a gigantic 
 channel buffer are orthogonal issues.  You can bound a linked list.

Correct, but my understanding is that Go's channels do allocate the buffer up 
front.

-Kevin

 On Thu, Dec 19, 2013 at 1:23 PM, Kevin Ballard ke...@sb.org wrote:
 Here’s an example from where I use an infinite queue.
 
 I have an IRC bot, written in Go. The incoming network traffic of this bot is 
 handled in one goroutine, which parses each line into its components, and 
 enqueues the result on a channel. The channel is very deliberately made 
 infinite (via a separate goroutine that stores the infinite buffer in a local 
 slice). The reason it’s infinite is because the bot needs to be resilient 
 against the case where either the consumer unexpectedly blocks, or the 
 network traffic spikes. The general assumption is that, under normal 
 conditions, the consumer will always be able to keep up with the producer (as 
 the producer is based on network traffic and not e.g. a tight CPU loop 
 generating messages as fast as possible). Backpressure makes no sense here, 
 as you cannot put backpressure on the network short of letting the socket 
 buffer fill up, and letting the socket buffer fill up will cause the IRC 
 network to disconnect you. So the overriding goal here is to prevent network 
 disconnects, while assuming that the consumer will be able to catch up if it 
 ever gets behind.
 
 This particular use case very explicitly wants a dynamically-sized infinite 
 channel. I suppose an absurdly large channel would be acceptable, because if 
 the consumer ever gets e.g. 100,000 lines behind then it’s in trouble 
 already, but I’d rather not have the memory overhead of a 
 statically-allocated gigantic channel buffer.
 
 -Kevin
 
 On Dec 19, 2013, at 10:04 AM, Jason Fager jfa...@gmail.com wrote:
 
 Okay, parallelism, of course, and I'm sure others.  Bad use of the word 
 'only'.  The point is that if your consumers aren't keeping up with your 
 producers, you're screwed anyways, and growing the queue indefinitely isn't 
 a way to get around that.  Growing queues should only serve specific 
 purposes and make it easy to apply back pressure when the assumptions behind 
 those purposes go awry.
 
 
 On Thursday, December 19, 2013, Patrick Walton wrote:
 On 12/19/13 6:31 AM, Jason Fager wrote:
 I work on a system that handles 10s of billions of events per day, and
 we do a lot of queueing.  Big +1 on having bounded queues.  Unbounded
 in-memory queues aren't, they just have a bound you have no direct
 control over and that blows up the world when its hit.
 
 The only reason to have a queue size greater than 1 is to handle spikes
 in the producer, short outages in the consumer, or a bit of
 out-of-phaseness between producers and consumers.
 
 Well, also parallelism.
 
 Patrick
 
 
 

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Unbounded channels: Good idea/bad idea?

2013-12-19 Thread Kevin Ballard
On Dec 19, 2013, at 11:23 AM, Gábor Lehel glaebho...@gmail.com wrote:

  - From a semantic perspective the only distinction is between bounded and 
 unbounded queues. The capacity of a bounded queue only makes a difference 
 with regards to performance.

While this may be true in most cases, I can come up with ways to use a channel 
that require a capacity greater than 1 to avoid a deadlock (assuming 
block-on-send).
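(One concrete shape of such a deadlock, sketched as a hypothetical Go toy: a task that must enqueue two messages before anything reads them deadlocks at capacity 0 or 1 under block-on-send, but runs fine at capacity 2.)

```go
package main

import "fmt"

func main() {
	// A single goroutine enqueues two values before anything
	// receives. With make(chan int) or make(chan int, 1) the
	// second send would block forever; capacity 2 avoids that.
	ch := make(chan int, 2)
	ch <- 1
	ch <- 2
	fmt.Println(<-ch + <-ch) // 3
}
```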

-Kevin
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Unbounded channels: Good idea/bad idea?

2013-12-19 Thread Kevin Ballard
On Dec 19, 2013, at 11:23 AM, Gábor Lehel glaebho...@gmail.com wrote:

  - Having only one type of queue, which is bounded,
  
  - and whose default capacity is just small enough that it would be hit 
 before exhausting resources, but is otherwise still ridiculously large 
 (effectively unbounded) (so basically what Kevin wrote),

For people who don't want to think about failure cases, I don't see how this is 
any better than a genuinely unbounded queue. And for people who do want to 
think about failure causes, they have to think about the bounds anyway so some 
sort of ridiculously high default isn't very usable.

I also am not sure how you can come up with an appropriate ridiculously-high 
default that is guaranteed to be small enough to fit into available resources 
for everyone.

My feeling here is that we should have a genuinely unbounded queue, and we 
should have a bounded queue that requires setting a bound instead of providing 
a default.

-Kevin
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Joe Armstrong's universal server

2013-12-18 Thread Kevin Ballard
That's cute, but I don't really understand the point. The sample program he 
gave:

test() ->
Pid = spawn(fun universal_server/0),
Pid ! {become, fun factorial_server/0},
Pid ! {self(), 50},
receive
X -> X
end.

will behave identically if you remove universal_server from the equation:

test() ->
Pid = spawn(fun factorial_server/0),
Pid ! {self(), 50},
receive
X -> X
end.

The whole point of universal_server, AFAICT, is to just demonstrate something 
clever about Erlang's task communication primitives. The equivalent in Rust 
would require passing channels back and forth, because factorial_server needs 
to receive different data than universal_server. The only alternative that I 
can think of would be to have a channel of ~Any+Send objects, which isn't very 
nice.

To that end, I don't see the benefit of trying to reproduce the same 
functionality in Rust, because it's just not a good fit for Rust's task 
communication primitives.
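(That said, a rough modern-Rust approximation is possible — a hypothetical sketch, not idiomatic code of the time: instead of a channel of ~Any+Send, send the "become" behavior itself as a boxed closure, which carries its own typed reply channel with it.)

```rust
use std::sync::mpsc::channel;
use std::thread;

fn main() {
    // The "universal server" just waits for a closure and becomes it.
    let (become_tx, become_rx) = channel::<Box<dyn FnOnce() + Send>>();
    let server = thread::spawn(move || {
        let behavior = become_rx.recv().unwrap();
        behavior(); // now acting as the factorial server
    });

    // Tell it to become a factorial server; the closure captures the
    // reply channel, so no ~Any-style dynamic typing is needed.
    let (reply_tx, reply_rx) = channel();
    become_tx
        .send(Box::new(move || {
            let fact: u64 = (1..=20).product();
            reply_tx.send(fact).unwrap();
        }))
        .unwrap();

    assert_eq!(reply_rx.recv().unwrap(), 2_432_902_008_176_640_000);
    server.join().unwrap();
}
```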

-Kevin

On Dec 18, 2013, at 6:26 AM, Benjamin Striegel ben.strie...@gmail.com wrote:

 Hello rusties, I was reading a blog post by Joe Armstrong recently in which 
 he shows off his favorite tiny Erlang program, called the Universal Server:
 
 http://joearms.github.io/2013/11/21/My-favorite-erlang-program.html
 
 I know that Rust doesn't have quite the same task communication primitives as 
 Erlang, but I'd be interested to see what the Rust equivalent of this program 
 would look like if anyone's up to the task of translating it.

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Unbounded channels: Good idea/bad idea?

2013-12-18 Thread Kevin Ballard
On Dec 18, 2013, at 7:55 PM, Nathan Myers n...@cantrip.org wrote:

 On 12/18/2013 07:07 PM, Patrick Walton wrote:
 (dropping messages, or exploding in memory consumption, or
 introducing subtle deadlocks) are all pretty bad. It may well
  be that dropping the messages is the least bad option, because
 the last two options usually result in a crashed app...
 
 As I understand it, getting into a state where the channel would
 drop messages is a programming error.  In that sense, terminating
 the task in such a case amounts to an assertion failure.
 
 In the case of Servo, somebody needs to drop excess events
 because it makes no sense to queue more user-interface actions
 than the user can remember.

By that logic, you'd want to drop the oldest unprocessed events, not the newest.
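(A sketch of what drop-the-oldest looks like — a hypothetical helper over a deque, in modern Rust for concreteness.)

```rust
use std::collections::VecDeque;

// Keep at most `cap` events; when full, discard the *oldest*
// unprocessed event rather than the newest one.
fn push_capped<T>(buf: &mut VecDeque<T>, cap: usize, event: T) {
    if buf.len() == cap {
        buf.pop_front(); // the oldest event is the stalest one
    }
    buf.push_back(event);
}

fn main() {
    let mut buf = VecDeque::new();
    for ev in 1..=5 {
        push_capped(&mut buf, 3, ev);
    }
    // Only the three newest events survive.
    assert_eq!(buf.iter().copied().collect::<Vec<_>>(), vec![3, 4, 5]);
}
```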

-Kevin
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Unbounded channels: Good idea/bad idea?

2013-12-18 Thread Kevin Ballard
In my experience using Go, most of the time when I use a channel I don't 
particularly care about the size, as long as it has a size of at least 1 (to 
avoid blocking on the send). However, if I do care about the size, usually I 
want it to be effectively infinite (and I have some code in my IRC bot that 
uses a separate goroutine in order to implement an infinite channel). Upon 
occasion I do want an explicitly bounded channel, but, at least in my code, 
that tends to be rarer than wanting effectively infinite.

My general feeling is that providing both bounded and unbounded channels would 
be good. Even better would be allowing for different ways of handling bounded 
channels (e.g. block on send, drop messages, etc.), but I imagine that 
providing only one type of bounded channel (using block on send if it's full 
and providing a .try_send() that avoids blocking) would be sufficient 
(especially as e.g. dropping messages can be implemented on top of this type of 
channel).
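(A sketch of that layering, using the sync_channel/try_send pair Rust's standard library later shipped: a drop-on-full policy is just try_send plus discarding the rejected message.)

```rust
use std::sync::mpsc::{sync_channel, TrySendError};

fn main() {
    // Bounded channel with blocking send() and non-blocking
    // try_send(); drop-on-full is layered on top of try_send().
    let (tx, rx) = sync_channel(1);
    tx.try_send("first").unwrap();

    // Channel is full: a drop-messages policy just discards it.
    if let Err(TrySendError::Full(dropped)) = tx.try_send("second") {
        println!("dropped: {}", dropped);
    }

    assert_eq!(rx.recv().unwrap(), "first");
}
```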

I also believe that unbounded should be the default, because it's the most 
tolerant type of channel when you don't want to have to think about bounding 
limits. It also means async send is the default, which I think is a good idea.

-Kevin

On Dec 18, 2013, at 9:36 PM, Patrick Walton pcwal...@mozilla.com wrote:

 On 12/18/13 8:48 PM, Kevin Ballard wrote:
 
 By that logic, you'd want to drop the oldest unprocessed events, not the 
 newest.
 
 Right.
 
 To reiterate, there is a meta-point here: Blessing any communications 
 primitive as the One True Primitive never goes well for high-performance 
 code. I think we need multiple choices. The hard decision is what should be 
 the default.
 
 Patrick
 

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Let’s avoid having both foo() and foo_opt()

2013-12-17 Thread Kevin Ballard
On Dec 17, 2013, at 11:37 AM, Stefan Plantikow stefan.planti...@gmail.com 
wrote:

 Hi,
 
 Am 17.12.2013 um 20:10 schrieb Corey Richardson co...@octayn.net:
 
 On Tue, Dec 17, 2013 at 2:06 PM, Stefan Plantikow
 stefan.planti...@gmail.com wrote:
 Hi,
 
 
 Am 09.12.2013 um 16:53 schrieb Damien Radtke damienrad...@gmail.com:
 
 I have no idea if it would be feasible in the standard library, but 
 wouldn't the ideal solution be having one function (e.g. from_utf8()) that 
 could return two possible values, a bare result and an Option? Letting the 
 compiler decide which version to use based on type inference like this:
 
let result: ~str = from_utf8(...);
let result: Option<~str> = from_utf8(...);
 
 Assuming both of them are passed invalid UTF8, then the first version 
 would fail, but the second version would just return None.
 
 
 
 We already have pattern matching in `let` (the LHS is a pattern), but
 it's only for irrefutable patterns. IOW, `let` can never fail, and
 that's a very very useful property IMO.
 
 oh ok I haven’t kept up on the syntax then. Given the utility of 
 destructuring bind for error handling, wouldn't it make sense to have a 
 variant of let that can fail? 
 
 Now syntax is a matter of practicality and taste but spontaneously this comes 
 to mind:
 
let opt Some(~result) = from_utf8(..)
 
 comes to mind.

You can do it with a bit more verbosity, which I think is perfectly fine as it 
makes failure much more obvious.

let result = match from_utf8(..) {
    Some(~result) => result,
    _ => fail!("b0rk b0rk b0rk")
};

Of course, in this particular example, you'd probably just write

let result = from_utf8(..).unwrap();

but the longer match form will work for other enums.

-Kevin

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


[rust-dev] Alternative proposal for `use crate`

2013-12-17 Thread Kevin Ballard
In today's meeting[1], it appears as though `extern mod foo` may become `use 
crate foo`. I have a minor worry about this, which is reserving yet another 
keyword for a very limited usage. My general feeling is we shouldn't be adding 
keywords unnecessarily, especially if their scope is extremely limited. And 
unlike the `in` from `for _ in _`, this one can't be made contextual (if we 
ever decide to go that route), because doing so would allow `mod crate`, which 
would then make `use crate` ambiguous.

One suggestion I didn't see in the meeting, that seems reasonable to me, is 
`extern use`, as in

extern use foo;

This doesn't reserve any new keywords, and it also ties in nicely with the idea 
that we're linking to something external. It also seems to emphasize the right 
thing, which is that we're trying to pull in something external. The fact that 
the thing we're pulling in is a crate seems less important than the fact that 
it's an external thing that we need to link to.

-Kevin

[1]: https://github.com/mozilla/rust/wiki/Meeting-weekly-2013-12-17
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Alternative proposal for `use crate`

2013-12-17 Thread Kevin Ballard
After chatting with Alex Crichton on IRC, it turns out `use crate` was actually 
rejected. It just wasn't captured properly in the notes.

Apparently the current leading proposal is `extern crate foo`. This still has 
the problem of defining a new limited-scope keyword, but it's better than `use 
crate foo`.

-Kevin

On Dec 17, 2013, at 12:19 PM, Kevin Ballard ke...@sb.org wrote:

 In today's meeting[1], it appears as though `extern mod foo` may become `use 
 crate foo`. I have a minor worry about this, which is reserving yet another 
 keyword for a very limited usage. My general feeling is we shouldn't be 
 adding keywords unnecessarily, especially if their scope is extremely 
 limited. And unlike the `in` from `for _ in _`, this one can't be made 
 contextual (if we ever decide to go that route), because doing so would allow 
 `mod crate`, which would then make `use crate` ambiguous.
 
 One suggestion I didn't see in the meeting, that seems reasonable to me, is 
 `extern use`, as in
 
extern use foo;
 
 This doesn't reserve any new keywords, and it also ties in nicely with the 
 idea that we're linking to something external. It also seems to emphasize the 
 right thing, which is that we're trying to pull in something external. The 
 fact that the thing we're pulling in is a crate seems less important than the 
 fact that it's an external thing that we need to link to.
 
 -Kevin
 
 [1]: https://github.com/mozilla/rust/wiki/Meeting-weekly-2013-12-17

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] This Week in Rust

2013-12-09 Thread Kevin Ballard
On Dec 9, 2013, at 7:19 PM, Corey Richardson co...@octayn.net wrote:

 - `Path::new` has been [renamed](https://github.com/mozilla/rust/pull/10796)
 back to `Path::init`.

Other way around. `Path::init` has been renamed back to `Path::new`, along with 
extra::json and std::rt::deque

-Kevin

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev

