Re: [rust-dev] Macros expanding to multiple statements

2014-01-11 Thread Ashish Myles
Ah, I didn't realize the distinction.  I am comparing the code in the first
comment in the bug you linked against the test suite example I linked.  I
guess the distinction between items and statements is that items correspond
to code outside any method, whereas statements are defined as code within a
method, and macro expansions in the latter case seem to be broken even in
the case of a single statement.  Please correct me if I am wrong.

Ashish



On Sat, Jan 11, 2014 at 9:43 PM, Huon Wilson  wrote:

>  That test is for multiple *items*, not statements.
>
> For the moment, you just have to wrap the interior of the macro in a set of
> braces, so that it expands to a single expression (and hence a single statement).
>
>
> macro_rules! my_print(
>     ($a:expr, $b:expr) => (
>         {
>             println!("{:?}", $a);
>             println!("{:?}", $b);
>         }
>     );
> )
>
> Multi-statement macros are covered by
> https://github.com/mozilla/rust/issues/10681 .
>
>
> Huon
>
>
>
> On 12/01/14 13:40, Ashish Myles wrote:
>
>  The Rust 0.9 release indicates that macros can now expand into multiple
> statements, and the following example from the test suite works for me.
>
> https://github.com/mozilla/rust/blob/master/src/test/run-pass/macro-multiple-items.rs
>
>  However, I receive an error for the following code
>
> #[feature(macro_rules)];
>
> macro_rules! my_print(
> ($a:expr, $b:expr) => (
> println!("{:?}", a);
> println!("{:?}", b);
> );
> )
>
> fn main() {
> let a = 1;
> let b = 2;
> my_print!(a, b);
> }
>
>  (Note that the ^~~ below points at println.)
>
> $ rustc macro_ignores_second_line.rs
> macro_ignores_second_line.rs:6:9: 6:16 error: macro expansion ignores
> token `println` and any following
> macro_ignores_second_line.rs:6 println!("{:?}", b);
>^~~
> error: aborting due to previous error
> task 'rustc' failed at 'explicit failure',
> /home/marcianx/devel/rust/checkout/rust/src/libsyntax/diagnostic.rs:75
> task '' failed at 'explicit failure',
> /home/marcianx/devel/rust/checkout/rust/src/librustc/lib.rs:453
>
>
>  What's the right way to do this?
>
> Ashish
>
>
> ___
> Rust-dev mailing list
> Rust-dev@mozilla.org
> https://mail.mozilla.org/listinfo/rust-dev
>
>
>
> ___
> Rust-dev mailing list
> Rust-dev@mozilla.org
> https://mail.mozilla.org/listinfo/rust-dev
>
>
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Macros expanding to multiple statements

2014-01-11 Thread Huon Wilson

That test is for multiple *items*, not statements.

For the moment, you just have to wrap the interior of the macro in a set of
braces, so that it expands to a single expression (and hence a single statement).


macro_rules! my_print(
    ($a:expr, $b:expr) => (
        {
            println!("{:?}", $a);
            println!("{:?}", $b);
        }
    );
)

Multi-statement macros are covered by 
https://github.com/mozilla/rust/issues/10681 .
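
For completeness, the whole program with the braced macro might look like
this (untested, against the 0.9 syntax above):

#[feature(macro_rules)];

macro_rules! my_print(
    ($a:expr, $b:expr) => (
        {
            // The braces make the expansion a single block expression,
            // so it is accepted anywhere one statement is expected.
            println!("{:?}", $a);
            println!("{:?}", $b);
        }
    );
)

fn main() {
    let a = 1;
    let b = 2;
    my_print!(a, b);   // prints 1, then 2
}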



Huon


On 12/01/14 13:40, Ashish Myles wrote:
The Rust 0.9 release indicates that macros can now expand into multiple
statements, and the following example from the test suite works for me.

https://github.com/mozilla/rust/blob/master/src/test/run-pass/macro-multiple-items.rs

However, I receive an error for the following code

#[feature(macro_rules)];

macro_rules! my_print(
($a:expr, $b:expr) => (
println!("{:?}", a);
println!("{:?}", b);
);
)

fn main() {
let a = 1;
let b = 2;
my_print!(a, b);
}

(Note that the ^~~ below points at println.)

$ rustc macro_ignores_second_line.rs 
macro_ignores_second_line.rs:6:9: 6:16 error: macro expansion ignores 
token `println` and any following
macro_ignores_second_line.rs:6 println!("{:?}", b);
                               ^~~
error: aborting due to previous error
task 'rustc' failed at 'explicit failure', 
/home/marcianx/devel/rust/checkout/rust/src/libsyntax/diagnostic.rs:75 

task '' failed at 'explicit failure', 
/home/marcianx/devel/rust/checkout/rust/src/librustc/lib.rs:453 




What's the right way to do this?

Ashish


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


[rust-dev] Macros expanding to multiple statements

2014-01-11 Thread Ashish Myles
The Rust 0.9 release indicates that macros can now expand into multiple
statements, and the following example from the test suite works for me.
https://github.com/mozilla/rust/blob/master/src/test/run-pass/macro-multiple-items.rs

However, I receive an error for the following code

#[feature(macro_rules)];

macro_rules! my_print(
($a:expr, $b:expr) => (
println!("{:?}", a);
println!("{:?}", b);
);
)

fn main() {
let a = 1;
let b = 2;
my_print!(a, b);
}

(Note that the ^~~ below points at println.)

$ rustc macro_ignores_second_line.rs
macro_ignores_second_line.rs:6:9: 6:16 error: macro expansion ignores token
`println` and any following
macro_ignores_second_line.rs:6 println!("{:?}", b);
   ^~~
error: aborting due to previous error
task 'rustc' failed at 'explicit failure',
/home/marcianx/devel/rust/checkout/rust/src/libsyntax/diagnostic.rs:75
task '' failed at 'explicit failure',
/home/marcianx/devel/rust/checkout/rust/src/librustc/lib.rs:453


What's the right way to do this?

Ashish
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Appeal for CORRECT, capable, future-proof math, pre-1.0

2014-01-11 Thread Nathan Myers

On 01/11/2014 03:14 PM, Daniel Micay wrote:
> On Sat, Jan 11, 2014 at 6:06 PM, Nathan Myers  wrote:
>> A big-integer type that uses small-integer
>> arithmetic until overflow is a clever trick, but it's purely
>> an implementation trick.  Architecturally, it makes no sense
>> to expose the trick to users.
>
> I didn't suggest exposing it to users. I suggested defining a wrapper
> around the big integer type with better performance characteristics
> for small integers.

Your wrapper sounds to me like THE big-integer type.  The thing you
called a "big integer" doesn't need a name.

>> No single big-integer or
>> overflow-trapping type can meet all needs. (If you try, you
>> fail users who need it simple.)  That's OK, because anyone
>> can code another, and a simple default can satisfy most users.
>
> What do you mean by default? If you don't know the bounds, a big
> integer is clearly the only correct choice. If you do know the
> bounds, you can use a fixed-size integer. I don't think any default
> other than a big integer is sane, so I don't think Rust needs a
> default inference fallback.

As I said,

>> In fact, i64 satisfies almost all users almost all the time.

No one would complain about a built-in "i128" type.  The thing
about a fixed-size type is that there are no implementation
choices to leak out.  Overflowing an i128 variable is quite
difficult, and 128-bit operations are still lots faster than on
any variable-precision type. I could live with "int" == "i128".

Nathan Myers
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Properly licensing Rust documentation and wiki

2014-01-11 Thread Thad Guidry
I touched the wiki.  All "Thad Guidry" edits are Public Domain, of course.
 Or MIT/ASL2 license if you so desire.


On Fri, Jan 10, 2014 at 9:44 PM, Brian Anderson wrote:

> Hey.
>
> Time for more legal stuff. Per https://github.com/mozilla/rust/issues/5831
> the licensing of our documentation is not clear. Like all things Rust we
> want to make our doc license as permissive as possible, so after getting
> some legal advice here is what I intend to do:
>
> * Rust documentation will be MIT/ASL2 licensed like everything else.
> * Add the license as a *footer* to existing in-tree documentation, under
> the argument that it is already licensed according to the same terms as the
> rest of the repo.
> * Gather new statements from wiki contributors asserting that they
> contributed under the MIT/ASL2, as we did when we relicensed Rust.
> * Put the license as footers on all pages of the wiki.
>
> For the most part this should not affect anybody, though if you've ever
> touched the wiki you may receive an email from me about this in the future.
>
> Regards,
> Brian
> ___
> Rust-dev mailing list
> Rust-dev@mozilla.org
> https://mail.mozilla.org/listinfo/rust-dev
>



-- 
-Thad
+ThadGuidry 
Thad on LinkedIn 
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Appeal for CORRECT, capable, future-proof math, pre-1.0

2014-01-11 Thread Patrick Walton

On 1/10/14 10:08 PM, Daniel Micay wrote:

I don't think failure on overflow is very useful. It's still a bug if
you overflow when you don't intend it.


Of course it's useful. It prevents attackers from weaponizing 
out-of-bounds reads and writes in unsafe code.


Patrick

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Appeal for CORRECT, capable, future-proof math, pre-1.0

2014-01-11 Thread Daniel Micay
On Sat, Jan 11, 2014 at 6:06 PM, Nathan Myers  wrote:
> On 01/10/2014 10:08 PM, Daniel Micay wrote:
>>
>> I don't think failure on overflow is very useful. It's still a bug if
>> you overflow when you don't intend it. If we did have a fast big
>> integer type, it would make sense to wrap it with an enum heading down
>> a separate branch for small and large integers, and branching on the
>> overflow flag to expand to a big integer. I think this is how Python's
>> integers are implemented.
>
> Failure on overflow *can* be useful in production code, using
> tasks to encapsulate suspect computations.  Big-integer types
> can be useful, too.  A big-integer type that uses small-integer
> arithmetic until overflow is a clever trick, but it's purely
> an implementation trick.  Architecturally, it makes no sense
> to expose the trick to users.

I didn't suggest exposing it to users. I suggested defining a wrapper
around the big integer type with better performance characteristics
for small integers.

> The fundamental error in the original posting was saying machine
> word types are somehow not "CORRECT".  Such types have perfectly
> defined behavior and performance in all conditions. They just
> don't pretend to model what a mathematician calls an "integer".
> They *do* model what actual machines actually do. It makes
> sense to call them something else than "integer", but "i32"
> *is* something else.

Rings, fields and modular arithmetic are certainly very real
mathematical concepts. Unsigned fixed-size integers behave as a
mathematician would model them, while signed ones do not really have
sane high-level semantics.

> It also makes sense to make a library that tries to emulate
> an actual integer type.  That belongs in a library because it's
> purely a construct: nothing in any physical machine resembles
> an actual integer.  Furthermore, since it is an emulation,
> details vary for practical reasons. No single big-integer or
> overflow-trapping type can meet all needs. (If you try, you
> fail users who need it simple.)  That's OK, because anyone
> can code another, and a simple default can satisfy most users.

What do you mean by default? If you don't know the bounds, a big
integer is clearly the only correct choice. If you do know the bounds,
you can use a fixed-size integer. I don't think any default other than
a big integer is sane, so I don't think Rust needs a default inference
fallback.
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Appeal for CORRECT, capable, future-proof math, pre-1.0

2014-01-11 Thread Patrick Walton
I think failure may have quite different inlining costs once we move to 
libunwind-based backtraces instead of hardcoding file/line number information 
into the generated code. The file and line number information tends to pollute 
generated code a lot and it's basically unnecessary with proper DWARF info and 
a functioning set of libunwind bindings, which we now have thanks to a couple 
of awesome contributions from you all. :)

Patrick

Owen Shepherd  wrote:
>On 11 January 2014 21:42, Daniel Micay  wrote:
>
>> On Sat, Jan 11, 2014 at 4:31 PM, Owen Shepherd 
>> wrote:
>> > So I just did a test. Took the following rust code:
>> > pub fn test_wrap(x : u32, y : u32) -> u32 {
>> > return x.checked_mul(&y).unwrap().checked_add(&16).unwrap();
>> > }
>> >
>> > And got the following blob of assembly out. What we have there, my friends,
>> > is a complete failure of the optimizer (N.B. it works for the simple case of
>> > checked_add alone)
>> >
>> > Preamble:
>> >
>> > __ZN9test_wrap19hc4c136f599917215af4v0.0E:
>> >     .cfi_startproc
>> >     cmpl    %fs:20, %esp
>> >     ja      LBB0_2
>> >     pushl   $12
>> >     pushl   $20
>> >     calll   ___morestack
>> >     ret
>> > LBB0_2:
>> >     pushl   %ebp
>> > Ltmp2:
>> >     .cfi_def_cfa_offset 8
>> > Ltmp3:
>> >     .cfi_offset %ebp, -8
>> >     movl    %esp, %ebp
>> > Ltmp4:
>> >     .cfi_def_cfa_register %ebp
>> >
>> > Align stack (for what? We don't do any SSE)
>> >
>> >     andl    $-8, %esp
>> >     subl    $16, %esp
>>
>> The compiler aligns the stack for performance.
>>
>
>Oops, I misread and thought there was 16 byte alignment going on there, not 8.
>
>
>> > Multiply x * y
>> >
>> >     movl    12(%ebp), %eax
>> >     mull    16(%ebp)
>> >     jno     LBB0_4
>> >
>> > If it didn't overflow, stash a 0 at top of stack
>> >
>> >     movb    $0, (%esp)
>> >     jmp     LBB0_5
>> >
>> > If it did overflow, stash a 1 at top of stack (we are building an
>> > Option here)
>> > LBB0_4:
>> >     movb    $1, (%esp)
>> >     movl    %eax, 4(%esp)
>> >
>> > Take pointer to &this for __thiscall:
>> > LBB0_5:
>> >     leal    (%esp), %ecx
>> >     calll   __ZN6option6Option6unwrap21h05c5cb6c47a61795Zcat4v0.0E
>> >
>> > Do the addition to the result
>> >
>> >     addl    $16, %eax
>> >
>> > Repeat the previous circus
>> >
>> >     jae     LBB0_7
>> >     movb    $0, 8(%esp)
>> >     jmp     LBB0_8
>> > LBB0_7:
>> >     movb    $1, 8(%esp)
>> >     movl    %eax, 12(%esp)
>> > LBB0_8:
>> >     leal    8(%esp), %ecx
>> >     calll   __ZN6option6Option6unwrap21h05c5cb6c47a61795Zcat4v0.0E
>> >     movl    %ebp, %esp
>> >     popl    %ebp
>> >     ret
>> >     .cfi_endproc
>> >
>> >
>> > Yeah. It's not fast because it's not inlining through option::unwrap.
>>
>> The code to initiate failure is gigantic and LLVM doesn't do partial
>> inlining by default. It's likely far above the inlining threshold.
>>
>>
>Right, why I suggested explicitly moving the failure code out of line into
>a separate function.
>
>
>> A purely synthetic benchmark only executing the unchecked or checked
>> instruction isn't interesting. You need to include several
>> optimizations in the loop as real code would use, and you will often
>> see a massive drop in performance from the serialization of the
>> pipeline. Register renaming is not as clever as you'd expect.
>>
>>
>Agreed. The variability within that tiny benchmark tells me that it can't
>really glean any valuable information.
>
>
>> The impact of trapping is known, because `clang` and `gcc` expose
>> `-ftrapv`.
>>  Integer-heavy workloads like cryptography and video codecs are
>> several times slower with the checks.
>>
>
>What about other workloads?
>
>As I mentioned: What I'd propose is trapping by default, with non-trapping
>math along the lines of a single additional character on a type declaration
>away.
>
>Also, I did manage to convince Rust + LLVM to optimize things cleanly, by
>defining an unwrap which invoked libc's abort() -> !, so there's that.
>
>
>
>
>___
>Rust-dev mailing list
>Rust-dev@mozilla.org
>https://mail.mozilla.org/listinfo/rust-dev

-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Appeal for CORRECT, capable, future-proof math, pre-1.0

2014-01-11 Thread Nathan Myers

On 01/10/2014 10:08 PM, Daniel Micay wrote:

I don't think failure on overflow is very useful. It's still a bug if
you overflow when you don't intend it. If we did have a fast big
integer type, it would make sense to wrap it with an enum heading down
a separate branch for small and large integers, and branching on the
overflow flag to expand to a big integer. I think this is how Python's
integers are implemented.


Failure on overflow *can* be useful in production code, using
tasks to encapsulate suspect computations.  Big-integer types
can be useful, too.  A big-integer type that uses small-integer
arithmetic until overflow is a clever trick, but it's purely
an implementation trick.  Architecturally, it makes no sense
to expose the trick to users.

The fundamental error in the original posting was saying machine
word types are somehow not "CORRECT".  Such types have perfectly
defined behavior and performance in all conditions. They just
don't pretend to model what a mathematician calls an "integer".
They *do* model what actual machines actually do. It makes
sense to call them something else than "integer", but "i32"
*is* something else.

It also makes sense to make a library that tries to emulate
an actual integer type.  That belongs in a library because it's
purely a construct: nothing in any physical machine resembles
an actual integer.  Furthermore, since it is an emulation,
details vary for practical reasons. No single big-integer or
overflow-trapping type can meet all needs. (If you try, you
fail users who need it simple.)  That's OK, because anyone
can code another, and a simple default can satisfy most users.

In fact, i64 satisifies almost all users almost all the time.

Nathan Myers
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Appeal for CORRECT, capable, future-proof math, pre-1.0

2014-01-11 Thread Carter Schonwald
Excellent point, Owen. I'd agree myself, seeing how those exact same
platform-dependent int/uint size gotchas (wrapping-style semantics) are a
recurrent source of surprise in GHC Haskell and other languages. In my own
applications I like wrapping semantics, but for most people, a signed
counter wrapping into negative numbers isn't a welcome surprise!


On Sat, Jan 11, 2014 at 5:38 PM, Owen Shepherd  wrote:

> On 11 January 2014 22:22, Daniel Micay  wrote:
>
>> On Sat, Jan 11, 2014 at 5:13 PM, Owen Shepherd 
>> wrote:
>> >
>> > What about other workloads?
>>
>> It just depends on how much of it is doing integer arithmetic. Many
>> applications are bounded by I/O and memory bandwidth and wouldn't be
>> hurt by integer arithmetic resulting in significantly slower code.
>>
>> > As I mentioned: What I'd propose is trapping by default, with
>> > non-trapping math along the lines of a single additional character on a type
>> > declaration away.
>>
>> Why would it be a language feature? It's not an operation Rust needs
>> to expose at a language level because it can be implemented as a
>> library type.
>>
>
> I agree, however, I feel that the names like "i32" and "u32" should be
> trap-on-overflow types. The non overflow ones should be "i32w" (wrapping)
> or similar.
>
> Why? Because I expect that otherwise people will default to the wrapping
> types. Less typing. "It'll never be a security issue", or "Looks safe to
> me", etc, etc. Secure by default is a good thing, IMO.
>
> So I agree, no reason it couldn't be implemented in libstd. Just... there
> are currently type names in the way.
>
> (I note that there has been a mixed opinion in this thread each way)
>
> Owen Shepherd
> http://owenshepherd.net | owen.sheph...@e43.eu
>
> ___
> Rust-dev mailing list
> Rust-dev@mozilla.org
> https://mail.mozilla.org/listinfo/rust-dev
>
>
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Appeal for CORRECT, capable, future-proof math, pre-1.0

2014-01-11 Thread Owen Shepherd
 On 11 January 2014 22:22, Daniel Micay  wrote:

> On Sat, Jan 11, 2014 at 5:13 PM, Owen Shepherd 
> wrote:
> >
> > What about other workloads?
>
> It just depends on how much of it is doing integer arithmetic. Many
> applications are bounded by I/O and memory bandwidth and wouldn't be
> hurt by integer arithmetic resulting in significantly slower code.
>
> > As I mentioned: What I'd propose is trapping by default, with
> > non-trapping math along the lines of a single additional character on a type
> > declaration away.
>
> Why would it be a language feature? It's not an operation Rust needs
> to expose at a language level because it can be implemented as a
> library type.
>

I agree, however, I feel that the names like "i32" and "u32" should be
trap-on-overflow types. The non overflow ones should be "i32w" (wrapping)
or similar.

Why? Because I expect that otherwise people will default to the wrapping
types. Less typing. "It'll never be a security issue", or "Looks safe to
me", etc, etc. Secure by default is a good thing, IMO.

So I agree, no reason it couldn't be implemented in libstd. Just... there
are currently type names in the way.

(I note that there has been a mixed opinion in this thread each way)

Owen Shepherd
http://owenshepherd.net | owen.sheph...@e43.eu
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] NewType change in 0.9

2014-01-11 Thread Marijn Haverbeke
> What does wrapping the 'name' of the variable with its type on the LHS of
> the :, as well as having it on the RHS, do?

It's a destructuring pattern, extracting the content of the Row/Column
values and binding a variable to it.
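
For example (a small, untested sketch assuming a uint newtype like the one
in your grid code):

struct Row(uint);

// Writing `Row(row): Row` in an argument list performs the same
// destructuring as doing it explicitly in the body:
fn row_index_explicit(r: Row) -> uint {
    let Row(row) = r;   // bind `row` to the uint inside the newtype
    row
}

fn row_index(Row(row): Row) -> uint {
    row
}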
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Appeal for CORRECT, capable, future-proof math, pre-1.0

2014-01-11 Thread Daniel Micay
On Sat, Jan 11, 2014 at 5:13 PM, Owen Shepherd  wrote:
>
> What about other workloads?

It just depends on how much of it is doing integer arithmetic. Many
applications are bounded by I/O and memory bandwidth and wouldn't be
hurt by integer arithmetic resulting in significantly slower code.

> As I mentioned: What I'd propose is trapping by default, with non-trapping 
> math along the lines of a single additional character on a type declaration 
> away.

Why would it be a language feature? It's not an operation Rust needs
to expose at a language level because it can be implemented as a
library type.
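
For instance, something along these lines (a rough, untested sketch in
0.9-era syntax; the exact operator trait shape may differ):

struct Trapping(u32);

impl Add<Trapping, Trapping> for Trapping {
    // Add the wrapped values with an overflow check, failing the task
    // instead of silently wrapping around.
    fn add(&self, other: &Trapping) -> Trapping {
        let Trapping(a) = *self;
        let Trapping(b) = *other;
        match a.checked_add(&b) {
            Some(sum) => Trapping(sum),
            None => fail!("u32 addition overflowed"),
        }
    }
}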
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Appeal for CORRECT, capable, future-proof math, pre-1.0

2014-01-11 Thread Owen Shepherd
On 11 January 2014 21:42, Daniel Micay  wrote:

> On Sat, Jan 11, 2014 at 4:31 PM, Owen Shepherd 
> wrote:
> > So I just did a test. Took the following rust code:
> > pub fn test_wrap(x : u32, y : u32) -> u32 {
> > return x.checked_mul(&y).unwrap().checked_add(&16).unwrap();
> > }
> >
> > And got the following blob of assembly out. What we have there, my
> friends,
> > is a complete failure of the optimizer (N.B. it works for the simple
> case of
> > checked_add alone)
> >
> > Preamble:
> >
> > __ZN9test_wrap19hc4c136f599917215af4v0.0E:
> >     .cfi_startproc
> >     cmpl    %fs:20, %esp
> >     ja      LBB0_2
> >     pushl   $12
> >     pushl   $20
> >     calll   ___morestack
> >     ret
> > LBB0_2:
> >     pushl   %ebp
> > Ltmp2:
> >     .cfi_def_cfa_offset 8
> > Ltmp3:
> >     .cfi_offset %ebp, -8
> >     movl    %esp, %ebp
> > Ltmp4:
> >     .cfi_def_cfa_register %ebp
> >
> > Align stack (for what? We don't do any SSE)
> >
> >     andl    $-8, %esp
> >     subl    $16, %esp
>
> The compiler aligns the stack for performance.
>
>

Oops, I misread and thought there was 16 byte alignment going on there, not
8.


> > Multiply x * y
> >
> >     movl    12(%ebp), %eax
> >     mull    16(%ebp)
> >     jno     LBB0_4
> >
> > If it didn't overflow, stash a 0 at top of stack
> >
> >     movb    $0, (%esp)
> >     jmp     LBB0_5
> >
> > If it did overflow, stash a 1 at top of stack (we are building an
> > Option here)
> > LBB0_4:
> >     movb    $1, (%esp)
> >     movl    %eax, 4(%esp)
> >
> > Take pointer to &this for __thiscall:
> > LBB0_5:
> >     leal    (%esp), %ecx
> >     calll   __ZN6option6Option6unwrap21h05c5cb6c47a61795Zcat4v0.0E
> >
> > Do the addition to the result
> >
> >     addl    $16, %eax
> >
> > Repeat the previous circus
> >
> >     jae     LBB0_7
> >     movb    $0, 8(%esp)
> >     jmp     LBB0_8
> > LBB0_7:
> >     movb    $1, 8(%esp)
> >     movl    %eax, 12(%esp)
> > LBB0_8:
> >     leal    8(%esp), %ecx
> >     calll   __ZN6option6Option6unwrap21h05c5cb6c47a61795Zcat4v0.0E
> >     movl    %ebp, %esp
> >     popl    %ebp
> >     ret
> >     .cfi_endproc
> >
> >
> > Yeah. It's not fast because it's not inlining through option::unwrap.
>
> The code to initiate failure is gigantic and LLVM doesn't do partial
> inlining by default. It's likely far above the inlining threshold.
>
>
Right, why I suggested explicitly moving the failure code out of line into
a separate function.


> A purely synthetic benchmark only executing the unchecked or checked
> instruction isn't interesting. You need to include several
> optimizations in the loop as real code would use, and you will often
> see a massive drop in performance from the serialization of the
> pipeline. Register renaming is not as clever as you'd expect.
>
>
Agreed. The variability within that tiny benchmark tells me that it can't
really glean any valuable information.


> The impact of trapping is known, because `clang` and `gcc` expose
> `-ftrapv`.
>  Integer-heavy workloads like cryptography and video codecs are
> several times slower with the checks.
>

What about other workloads?

As I mentioned: What I'd propose is trapping by default, with non-trapping
math along the lines of a single additional character on a type declaration
away.

Also, I did manage to convince Rust + LLVM to optimize things cleanly, by
defining an unwrap which invoked libc's abort() -> !, so there's that.
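
Roughly this (untested sketch):

extern {
    // libc's abort(), declared as diverging
    fn abort() -> !;
}

// An unwrap that aborts instead of unwinding, so the failure path stays
// tiny and the rest of the code inlines cleanly.
fn unwrap_abort<T>(x: Option<T>) -> T {
    match x {
        Some(v) => v,
        None => unsafe { abort() },
    }
}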
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] NewType change in 0.9

2014-01-11 Thread benjamin adamson
Thanks! That did work. However I have no idea what this is doing:
Row(row): Row, Column(column): Column
The way I understood variable declaration, is that it goes:

name : type.

What does wrapping the 'name' of the variable with its type on the LHS of
the :, as well as having it on the RHS, do? Is this some special syntax
related to 'NewTypes'?




On Sat, Jan 11, 2014 at 2:06 PM, Steven Fackler  wrote:

> Something like this should work:
>
>
> pub fn cell_alive(&self, Row(row): Row, Column(column): Column) -> uint {
>     return match self.inner[row][column].value {
>         dead  => 0,
>         alive => 1
>     };
> }
>
>
> Steven Fackler
>
>
> On Sat, Jan 11, 2014 at 2:03 PM, benjamin adamson <
> adamson.benja...@gmail.com> wrote:
>
>> Hello Rust community!
>>
>> I've been busying myself over the past few weeks learning the different
>> features of rust, and I have been working on an implementation of Conway's
>> game of life (while trying to explore different features of rust).
>>
>> In 0.9, it was changed so that you cannot dereference haskell-like
>> "NewTypes" with the * operator. In the 0.9 documentation, it says we can
>> use pattern matching to extract the underlying type.
>>
>> Right here in my 0.8 code I dereferenced the row parameter:
>>
>> https://github.com/ShortStomp/ConwayGameOfLife-RUST/blob/master/grid.rs#L42
>>
>> which is a simple 'NewType', with underlying type uint.
>>
>> My question is, instead of dereferencing the 'Row' and 'Column' types,
>> how can I use pattern matching here, to get the underlying uint to index
>> the array with the code I just linked?
>>
>> Thanks in advance! :)
>>
>> ___
>> Rust-dev mailing list
>> Rust-dev@mozilla.org
>> https://mail.mozilla.org/listinfo/rust-dev
>>
>>
>
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] NewType change in 0.9

2014-01-11 Thread Steven Fackler
Something like this should work:

pub fn cell_alive(&self, Row(row): Row, Column(column): Column) -> uint {
    return match self.inner[row][column].value {
        dead  => 0,
        alive => 1
    };
}


Steven Fackler


On Sat, Jan 11, 2014 at 2:03 PM, benjamin adamson <
adamson.benja...@gmail.com> wrote:

> Hello Rust community!
>
> I've been busying myself over the past few weeks learning the different
> features of rust, and I have been working on an implementation of Conway's
> game of life (while trying to explore different features of rust).
>
> In 0.9, it was changed so that you cannot dereference haskell-like
> "NewTypes" with the * operator. In the 0.9 documentation, it says we can
> use pattern matching to extract the underlying type.
>
> Right here in my 0.8 code I dereferenced the row parameter:
> https://github.com/ShortStomp/ConwayGameOfLife-RUST/blob/master/grid.rs#L42
>
> which is a simple 'NewType', with underlying type uint.
>
> My question is, instead of dereferencing the 'Row' and 'Column' types, how
> can I use pattern matching here, to get the underlying uint to index the
> array with the code I just linked?
>
> Thanks in advance! :)
>
> ___
> Rust-dev mailing list
> Rust-dev@mozilla.org
> https://mail.mozilla.org/listinfo/rust-dev
>
>
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


[rust-dev] NewType change in 0.9

2014-01-11 Thread benjamin adamson
Hello Rust community!

I've been busying myself over the past few weeks learning the different
features of rust, and I have been working on an implementation of Conway's
game of life (while trying to explore different features of rust).

In 0.9, it was changed so that you cannot dereference haskell-like
"NewTypes" with the * operator. In the 0.9 documentation, it says we can
use pattern matching to extract the underlying type.

Right here in my 0.8 code I dereferenced the row parameter:
https://github.com/ShortStomp/ConwayGameOfLife-RUST/blob/master/grid.rs#L42

which is a simple 'NewType', with underlying type uint.

My question is, instead of dereferencing the 'Row' and 'Column' types, how
can I use pattern matching here, to get the underlying uint to index the
array with the code I just linked?

Thanks in advance! :)
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Appeal for CORRECT, capable, future-proof math, pre-1.0

2014-01-11 Thread Daniel Micay
On Sat, Jan 11, 2014 at 4:31 PM, Owen Shepherd  wrote:
> So I just did a test. Took the following rust code:
> pub fn test_wrap(x : u32, y : u32) -> u32 {
> return x.checked_mul(&y).unwrap().checked_add(&16).unwrap();
> }
>
> And got the following blob of assembly out. What we have there, my friends,
> is a complete failure of the optimizer (N.B. it works for the simple case of
> checked_add alone)
>
> Preamble:
>
> __ZN9test_wrap19hc4c136f599917215af4v0.0E:
>     .cfi_startproc
>     cmpl    %fs:20, %esp
>     ja      LBB0_2
>     pushl   $12
>     pushl   $20
>     calll   ___morestack
>     ret
> LBB0_2:
>     pushl   %ebp
> Ltmp2:
>     .cfi_def_cfa_offset 8
> Ltmp3:
>     .cfi_offset %ebp, -8
>     movl    %esp, %ebp
> Ltmp4:
>     .cfi_def_cfa_register %ebp
>
> Align stack (for what? We don't do any SSE)
>
>     andl    $-8, %esp
>     subl    $16, %esp

The compiler aligns the stack for performance.

> Multiply x * y
>
>     movl    12(%ebp), %eax
>     mull    16(%ebp)
>     jno     LBB0_4
>
> If it didn't overflow, stash a 0 at top of stack
>
>     movb    $0, (%esp)
>     jmp     LBB0_5
>
> If it did overflow, stash a 1 at top of stack (we are building an
> Option here)
> LBB0_4:
>     movb    $1, (%esp)
>     movl    %eax, 4(%esp)
>
> Take pointer to &this for __thiscall:
> LBB0_5:
>     leal    (%esp), %ecx
>     calll   __ZN6option6Option6unwrap21h05c5cb6c47a61795Zcat4v0.0E
>
> Do the addition to the result
>
>     addl    $16, %eax
>
> Repeat the previous circus
>
>     jae     LBB0_7
>     movb    $0, 8(%esp)
>     jmp     LBB0_8
> LBB0_7:
>     movb    $1, 8(%esp)
>     movl    %eax, 12(%esp)
> LBB0_8:
>     leal    8(%esp), %ecx
>     calll   __ZN6option6Option6unwrap21h05c5cb6c47a61795Zcat4v0.0E
>     movl    %ebp, %esp
>     popl    %ebp
>     ret
>     .cfi_endproc
>
>
> Yeah. It's not fast because it's not inlining through option::unwrap.

The code to initiate failure is gigantic and LLVM doesn't do partial
inlining by default. It's likely far above the inlining threshold.

> I'm not sure what can be done for this, and whether it's on the LLVM side or
> the Rust side of things. My first instinct: find out what happens when fail!
> is moved out-of-line from unwrap() into its own function (especially if
> that function can be marked noinline!), because optimizers often choke
> around EH.

I was testing with `rust-core` and calling `abort`, as it doesn't use unwinding.

> I tried to test the "optimal" situation in a synthetic benchmark:
> https://gist.github.com/oshepherd/8376705
> (In C for expediency. N.B. you must set core affinity before running this
> benchmark because I hackishly just read the TSC. i386 only.)
>
>
> but the results are really bizarre and seem to have a multitude of affecting
> factors (For example, if you minimally unroll and have the JCs jump straight
> to abort, you get vastly different performance from jumping to a closer
> location and then onwards to abort. Bear in mind that the overflow case
> never happens during the test). It would be interesting to do a test in
> which a "trivial" implementation of trap-on-overflow is added to rustc
> (read: the overflow case just jumps straight to abort or similar, to
> minimize optimizer influence and variability) to see how defaulting to
> trapping ints affects real world workloads.
>
> I wonder what level of performance impact would be considered "acceptable"
> for improved safety by default?
>
> Mind you, I think that what I'd propose is that i32 = Trapping, i32w =
> wrapping, i32s = saturating, or something similar

A purely synthetic benchmark only executing the unchecked or checked
instruction isn't interesting. You need to include several
optimizations in the loop as real code would use, and you will often
see a massive drop in performance from the serialization of the
pipeline. Register renaming is not as clever as you'd expect.

The impact of trapping is known, because `clang` and `gcc` expose `-ftrapv`.
 Integer-heavy workloads like cryptography and video codecs are
several times slower with the checks.
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Appeal for CORRECT, capable, future-proof math, pre-1.0

2014-01-11 Thread Owen Shepherd
So I just did a test. Took the following rust code:
pub fn test_wrap(x : u32, y : u32) -> u32 {
return x.checked_mul(&y).unwrap().checked_add(&16).unwrap();
}

And got the following blob of assembly out. What we have there, my friends,
is a complete failure of the optimizer (N.B. it works for the simple case
of checked_add alone)

Preamble:

__ZN9test_wrap19hc4c136f599917215af4v0.0E:
    .cfi_startproc
    cmpl    %fs:20, %esp
    ja      LBB0_2
    pushl   $12
    pushl   $20
    calll   ___morestack
    ret
LBB0_2:
    pushl   %ebp
Ltmp2:
    .cfi_def_cfa_offset 8
Ltmp3:
    .cfi_offset %ebp, -8
    movl    %esp, %ebp
Ltmp4:
    .cfi_def_cfa_register %ebp

Align stack (for what? We don't do any SSE)

    andl    $-8, %esp
    subl    $16, %esp

Multiply x * y

    movl    12(%ebp), %eax
    mull    16(%ebp)
    jno     LBB0_4

If it didn't overflow, stash a 0 at top of stack

    movb    $0, (%esp)
    jmp     LBB0_5

If it did overflow, stash a 1 at top of stack (we are building an
Option here)
LBB0_4:
    movb    $1, (%esp)
    movl    %eax, 4(%esp)

Take pointer to &this for __thiscall:
LBB0_5:
    leal    (%esp), %ecx
    calll   __ZN6option6Option6unwrap21h05c5cb6c47a61795Zcat4v0.0E

Do the addition to the result

    addl    $16, %eax

Repeat the previous circus

    jae     LBB0_7
    movb    $0, 8(%esp)
    jmp     LBB0_8
LBB0_7:
    movb    $1, 8(%esp)
    movl    %eax, 12(%esp)
LBB0_8:
    leal    8(%esp), %ecx
    calll   __ZN6option6Option6unwrap21h05c5cb6c47a61795Zcat4v0.0E
    movl    %ebp, %esp
    popl    %ebp
    ret
    .cfi_endproc


Yeah. It's not fast because it's not inlining through option::unwrap.


I'm not sure what can be done for this, and whether it's on the LLVM side
or the Rust side of things. My first instinct: find out what happens when
fail! is moved out-of-line from unwrap() into its own function (especially
if that function can be marked noinline!), because optimizers often choke
around EH.
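
Something like this is what I have in mind (untested sketch):

// Keep the failure path in its own never-inlined function so the caller's
// fast path stays small and branch-predictor friendly.
#[inline(never)]
fn overflow_fail() -> ! {
    fail!("integer overflow")
}

pub fn test_wrap_noinline_fail(x: u32, y: u32) -> u32 {
    match x.checked_mul(&y) {
        Some(p) => match p.checked_add(&16) {
            Some(v) => v,
            None => overflow_fail(),
        },
        None => overflow_fail(),
    }
}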



I tried to test the "optimal" situation in a synthetic benchmark:
https://gist.github.com/oshepherd/8376705
(In C for expediency. N.B. you must set core affinity before running this
benchmark because I hackishly just read the TSC. i386 only.)


but the results are really bizarre and seem to have a multitude of
affecting factors (For example, if you minimally unroll and have the JCs
jump straight to abort, you get vastly different performance from jumping
to a closer location and then onwards to abort. Bear in mind that the
overflow case never happens during the test). It would be interesting to do
a test in which a "trivial" implementation of trap-on-overflow is added to
rustc (read: the overflow case just jumps straight to abort or similar, to
minimize optimizer influence and variability) to see how defaulting to
trapping ints affects real world workloads.

I wonder what level of performance impact would be considered "acceptable"
for improved safety by default?

Mind you, I think that what I'd propose is that i32 = Trapping, i32w =
wrapping, i32s = saturating, or something similar

Owen Shepherd
http://owenshepherd.net | owen.sheph...@e43.eu


On 11 January 2014 19:33, Daniel Micay  wrote:

> On Sat, Jan 11, 2014 at 11:54 AM, Owen Shepherd 
> wrote:
> > On 11 January 2014 06:20, Daniel Micay  wrote:
> >>
> >> The branch on the overflow flag results in a very significant loss in
> >> performance. For example, I had to carefully write the vector `push`
> >> method for my `Vec` type to only perform one overflow check. With
> >> two checks, it's over 5 times slower due to failed branch predictions.
> >
> >
> > What did the generated code look like? I suspect that LLVM wasn't generating
> > optimal code, perhaps because Rust wasn't giving it appropriate hints or
> > because of optimizer bugs. For reference, on AMD64 the code should look
> > something like the following hypothetical code:
> >
> > vec_allocate:
> > MOV $SIZE, %eax
> > MUL %rsi
> > JC Lerror
> > ADD $HEADER_SIZE, %rax
> > JC Lerror
> > MOV %rax, %rsi
> > JMP malloc
> > Lerror:
> > // Code to raise error here
> >
> > Note that the ordering is EXTREMELY important! x86 doesn't give you any
> > separate branch hints (excluding two obsolete ones which only the Pentium IV
> > ever cared about) so your only clue to the optimizer is the branch
> > direction.
> >
> > I suspect your generated code had forward branches for the no overflow case.
> > That's absolutely no good (codegen inserting "islands" of failure case code);
> > it will screw up the branch predictor.
> >
> > x86 defaults to predicting all (conditional) forward jumps not taken, all
> > conditional backwards jumps taken (Loops!). If the optimizer wasn't informed
> > correctly, it will probably not have obeyed that.
> >
> > Being as the overflow case should basically be never hit, there is no reason
> > for it to ever be loaded into the optimizer, so that is good
> >
> > (P.S. If the rust compiler is really good it'll convince LLVM to put the
> > error

Re: [rust-dev] Appeal for CORRECT, capable, future-proof math, pre-1.0

2014-01-11 Thread Vadim
> > Hitting a slow path unexpectedly on overflow seems to me like a recipe for
> > unpredictable performance, which doesn't seem in line with Rust's usual
> > goals.
>
> It's certainly better than the process exiting, which is what's going
> to happen in real systems when failure occurs. Either that, or they're
> going to lose a bunch of data from the task it caused to unwind. The
> only way to make overflow not a bug is to expand to a big integer or
> use a big integer from the start.
>

IMHO, integer overflow detection should be considered strictly a security
feature.  I can think of very few cases where, after expansion to a bigint,
the program wouldn't have bombed out anyway a few lines later - on some
array access or a system API call.
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Appeal for CORRECT, capable, future-proof math, pre-1.0

2014-01-11 Thread Daniel Micay
On Sat, Jan 11, 2014 at 1:18 AM, Brian Anderson  wrote:
> On 01/10/2014 10:08 PM, Daniel Micay wrote:
>>
>> On Sat, Jan 11, 2014 at 1:05 AM, Huon Wilson  wrote:
>>>
>>> On 11/01/14 16:58, Isaac Dupree wrote:

>>>> Scheme's numeric tower is one of the best in extant languages.  Take a
>>>> look at it.  Of course, its dynamic typing is poorly suited for Rust.
>>>>
>>>> Arbitrary-precision arithmetic can get you mathematically perfect integers
>>>> and rational numbers, but not real numbers.  There are an uncountably
>>>> infinite number of real numbers, and sophisticated computer algebra systems
>>>> are devoted to the problem (or estimates are used, or you become unable to
>>>> compare two real numbers for equality).  The MPFR C library implements
>>>> arbitrarily high precision floating point, but that still has all the
>>>> pitfalls of floating-point that you complain about. For starters, try
>>>> representing sqrt(2) and testing its equality with e^(0.5 ln 2).
>>>>
>>>> In general, Rust is a systems language, so fixed-size integral types are
>>>> important to have.  They are better-behaved than in C and C++ in that signed
>>>> types are modulo, not undefined behaviour, on overflow.  It could be nice to
>>>> have integral types that are task-failure on overflow as an option too.
>>>
>>>
>>> We do already have some Checked* traits (using the LLVM intrinsics
>>> internally), which let you have task failure as one possibility on
>>> overflow.
>>> e.g. http://static.rust-lang.org/doc/master/std/num/trait.CheckedAdd.html
>>> (and Mul, Sub, Div too).
>>
>> I don't think failure on overflow is very useful. It's still a bug if
>> you overflow when you don't intend it. If we did have a fast big
>> integer type, it would make sense to wrap it with an enum heading down
>> a separate branch for small and large integers, and branching on the
>> overflow flag to expand to a big integer. I think this is how Python's
>> integers are implemented.
>
>
> I do think it's useful and is potentially a good compromise for the
> performance of the default integer type. Overflow with failure is a bug that
> tells you there's a bug. Wrapping is a bug that pretends it's not a bug.

This is why `clang` exposes sanitize options for integer overflow. You
could use `-ftrapv` in production... but why bother using C if you're
taking a significant performance hit like that?

> Hitting a slow path unexpectedly on overflow seems to me like a recipe for
> unpredictable performance, which doesn't seem in line with Rust's usual
> goals.

It's certainly better than the process exiting, which is what's going
to happen in real systems when failure occurs. Either that, or they're
going to lose a bunch of data from the task it caused to unwind. The
only way to make overflow not a bug is to expand to a big integer or
use a big integer from the start.
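
Roughly the shape I have in mind (sketch only, untested; `BigInt` and
`to_big` are stand-ins for a real big-integer type and a conversion to it):

enum Integer {
    Small(i64),
    Large(BigInt),
}

// Adding two small values: stay on the fast path unless the overflow
// check fires, then promote to the big representation.
fn add_small(a: i64, b: i64) -> Integer {
    match a.checked_add(&b) {
        Some(sum) => Small(sum),
        None => Large(to_big(a) + to_big(b)),
    }
}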
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Appeal for CORRECT, capable, future-proof math, pre-1.0

2014-01-11 Thread Daniel Micay
On Sat, Jan 11, 2014 at 11:54 AM, Owen Shepherd  wrote:
> On 11 January 2014 06:20, Daniel Micay  wrote:
>>
>> The branch on the overflow flag results in a very significant loss in
>> performance. For example, I had to carefully write the vector `push`
>> method for my `Vec` type to only perform one overflow check. With
>> two checks, it's over 5 times slower due to failed branch predictions.
>
>
> What did the generated code look like? I suspect that LLVM wasn't generating
> optimal code, perhaps because Rust wasn't giving it appropriate hints or
> because of optimizer bugs. For reference, on AMD64 the code should look
> something like the following hypothetical code:
>
> vec_allocate:
> MOV $SIZE, %eax
> MUL %rsi
> JC Lerror
> ADD $HEADER_SIZE, %rax
> JC Lerror
> MOV %rax, %rsi
> JMP malloc
> Lerror:
> // Code to raise error here
>
> Note that the ordering is EXTREMELY important! x86 doesn't give you any
> separate branch hints (excluding two obsolete ones which only the Pentium IV
> ever cared about) so your only clue to the optimizer is the branch
> direction.
>
> I suspect your generated code had forward branches for the no overflow case.
> That's absolutely no good (codegen inserting "islands" of failure case code);
> it will screw up the branch predictor.
>
> x86 defaults to predicting all (conditional) forward jumps not taken, all
> conditional backwards jumps taken (Loops!). If the optimizer wasn't informed
> correctly, it will probably not have obeyed that.
>
> Being as the overflow case should basically be never hit, there is no reason
> for it to ever be loaded into the optimizer, so that is good
>
> (P.S. If the rust compiler is really good it'll convince LLVM to put the
> error case branch code in a separate section so it can all be packed
> together far away from useful cache lines and TLB entries)

Rust directly exposes the checked overflow intrinsics so these are
what was used. It already considers branches calling a `noreturn`
function to be colder, so adding an explicit branch hint (which is
easy enough via `llvm.expect`) doesn't help. Feel free to implement it
yourself if you think you can do better. The compiler work is already
implemented.  I doubt you'll get something performing in the same
ballpark as plain integers.
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Failure

2014-01-11 Thread Corey Richardson
What type is `3`? There's no way to know. Use `3i` for int, `3u` for
uint, etc.
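
e.g. (untested, against your snippet):

fn main()
{
    let x = 3i;              // `3i` pins the literal to int
    println(x.to_str());     // only one to_str() candidate now applies
}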

On Sat, Jan 11, 2014 at 12:17 PM, Renato Lenzi  wrote:
> The code is trivial:
>
> fn main()
> {
> let x = 3;
> println(x.to_str());
> }
>
> the error is this (on Win7)
>
> d:\Rust09\bin>rustc 00025.rs
> 00025.rs:4:11: 4:22 error: multiple applicable methods in scope
> 00025.rs:4  println(x.to_str());
> ^~~
> 00025.rs:4:11: 4:22 note: candidate #1 is `std::int::ToStr$int::to_str`
> 00025.rs:4  println(x.to_str());
> ^~~
> 00025.rs:4:11: 4:22 note: candidate #2 is `std::i8::ToStr$i8::to_str`
> 00025.rs:4  println(x.to_str());
> ^~~
> 00025.rs:4:11: 4:22 note: candidate #3 is `std::i16::ToStr$i16::to_str`
> 00025.rs:4  println(x.to_str());
> ^~~
> 00025.rs:4:11: 4:22 note: candidate #4 is `std::i32::ToStr$i32::to_str`
> 00025.rs:4  println(x.to_str());
> ^~~
> 00025.rs:4:11: 4:22 note: candidate #5 is `std::i64::ToStr$i64::to_str`
> 00025.rs:4  println(x.to_str());
> ^~~
> 00025.rs:4:11: 4:22 note: candidate #6 is `std::uint::ToStr$uint::to_str`
> 00025.rs:4  println(x.to_str());
> ^~~
> 00025.rs:4:11: 4:22 note: candidate #7 is `std::u8::ToStr$u8::to_str`
> 00025.rs:4  println(x.to_str());
> ^~~
> 00025.rs:4:11: 4:22 note: candidate #8 is `std::u16::ToStr$u16::to_str`
> 00025.rs:4  println(x.to_str());
> ^~~
> 00025.rs:4:11: 4:22 note: candidate #9 is `std::u32::ToStr$u32::to_str`
> 00025.rs:4  println(x.to_str());
> ^~~
> 00025.rs:4:11: 4:22 note: candidate #10 is `std::u64::ToStr$u64::to_str`
> 00025.rs:4  println(x.to_str());
>
>   ^~~
> error: aborting due to previous error
> task 'rustc' failed at 'explicit failure',
> C:\bot\slave\dist2-win\build\src\libs
> yntax\diagnostic.rs:75
> task '' failed at 'explicit failure',
> C:\bot\slave\dist2-win\build\src\lib
> rustc\lib.rs:453
>
> any idea?
> thanks.
>
> ___
> Rust-dev mailing list
> Rust-dev@mozilla.org
> https://mail.mozilla.org/listinfo/rust-dev
>
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


[rust-dev] Failure

2014-01-11 Thread Renato Lenzi
The code is trivial:

fn main()
{
let x = 3;
println(x.to_str());
}

the error is this (on Win7)

d:\Rust09\bin>rustc 00025.rs
00025.rs:4:11: 4:22 error: multiple applicable methods in scope
00025.rs:4  println(x.to_str());
^~~
00025.rs:4:11: 4:22 note: candidate #1 is `std::int::ToStr$int::to_str`
00025.rs:4  println(x.to_str());
^~~
00025.rs:4:11: 4:22 note: candidate #2 is `std::i8::ToStr$i8::to_str`
00025.rs:4  println(x.to_str());
^~~
00025.rs:4:11: 4:22 note: candidate #3 is `std::i16::ToStr$i16::to_str`
00025.rs:4  println(x.to_str());
^~~
00025.rs:4:11: 4:22 note: candidate #4 is `std::i32::ToStr$i32::to_str`
00025.rs:4  println(x.to_str());
^~~
00025.rs:4:11: 4:22 note: candidate #5 is `std::i64::ToStr$i64::to_str`
00025.rs:4  println(x.to_str());
^~~
00025.rs:4:11: 4:22 note: candidate #6 is `std::uint::ToStr$uint::to_str`
00025.rs:4  println(x.to_str());
^~~
00025.rs:4:11: 4:22 note: candidate #7 is `std::u8::ToStr$u8::to_str`
00025.rs:4  println(x.to_str());
^~~
00025.rs:4:11: 4:22 note: candidate #8 is `std::u16::ToStr$u16::to_str`
00025.rs:4  println(x.to_str());
^~~
00025.rs:4:11: 4:22 note: candidate #9 is `std::u32::ToStr$u32::to_str`
00025.rs:4  println(x.to_str());
^~~
00025.rs:4:11: 4:22 note: candidate #10 is `std::u64::ToStr$u64::to_str`
00025.rs:4  println(x.to_str());

  ^~~
error: aborting due to previous error
task 'rustc' failed at 'explicit failure',
C:\bot\slave\dist2-win\build\src\libs
yntax\diagnostic.rs:75
task '' failed at 'explicit failure',
C:\bot\slave\dist2-win\build\src\lib
rustc\lib.rs:453

any idea?
thanks.
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] RFC: Future of the Build System

2014-01-11 Thread George Makrydakis
Which is why the argument that packaging is easier because tool X is used by
person Y advocating it is meaningless from both a theoretical and a
practical point of view. In the end, it is just an application of graph
theory, which is widely used in dependency resolution. Another option would
be a SAT solver producing the same result, as is the case for libzypp in
openSUSE, for example.

Again, it is a matter of combining any of these solutions with a reliable
and simple-to-understand format in order to produce a build system that is
consistent first and fool-proof most of the time. Surely, doing this in Rust
should be quite easily doable, given that it is designed to be a systems
language. Handling entire package life cycles would be a natural fit for a
language with Rust's problem domain, even beyond the purposes of Rust
itself. Think big, then bigger.

G.



On Sat, Jan 11, 2014 at 1:59 PM, james  wrote:

> On 11/01/2014 07:56, George Makrydakis wrote:
>
>> There is little reason to believe that having a build system in Rust
>> would make it harder for people to package.
>>
> Surely you just need an alternate that is a script generated as a
> from-clean dry run with -j1?  It gives you the commands needed, in an order
> that works.
>
>
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Appeal for CORRECT, capable, future-proof math, pre-1.0

2014-01-11 Thread Owen Shepherd
On 11 January 2014 06:20, Daniel Micay  wrote:

> The branch on the overflow flag results in a very significant loss in
> performance. For example, I had to carefully write the vector `push`
> method for my `Vec` type to only perform one overflow check. With
> two checks, it's over 5 times slower due to failed branch predictions.
>

What did the generated code look like? I suspect that LLVM wasn't
generating optimal code, perhaps because Rust wasn't giving it appropriate
hints or because of optimizer bugs. For reference, on AMD64 the code should
look something like the following hypothetical code:

vec_allocate:
MOV $SIZE, %eax
MUL %rsi
JC Lerror
ADD $HEADER_SIZE, %rax
JC Lerror
MOV %rax, %rsi
JMP malloc
Lerror:
// Code to raise error here

Note that the ordering is EXTREMELY important! x86 doesn't give you any
separate branch hints (excluding two obsolete ones which only the Pentium
IV ever cared about) so your only clue to the optimizer is the branch
direction.

I suspect your generated code had forward branches for the no overflow
case. That's absolutely no good (codegen inserting "islands" of failure case
code); it will screw up the branch predictor.

x86 defaults to predicting all (conditional) forward jumps not taken, all
conditional backwards jumps taken (Loops!). If the optimizer wasn't
informed correctly, it will probably not have obeyed that.

Being as the overflow case should basically be never hit, there is no
reason for it to ever be loaded into the optimizer, so that is good

(P.S. If the rust compiler is really good it'll convince LLVM to put the
error case branch code in a separate section so it can all be packed
together far away from useful cache lines and TLB entries)
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] RFC: Future of the Build System

2014-01-11 Thread George Makrydakis
I would not exclude ninja from the candidates for an in-between solution
for the time being. Imagine the power of ninja but implementing its
conceptual machinery in expressive Rust code. It would be an interesting
combination indeed. That could bring even more advantages since it would be
implemented in Rust.


On Sat, Jan 11, 2014 at 2:03 PM, james  wrote:

> On 10/01/2014 08:54, Jan Niklas Hasse wrote:
>
>> Also cmake still depends on make (or even worse Visual Studio / Xcode).
>>
> We use cmake with ninja at work, and that seems to work pretty well.
>
> I suggest if you want to write a tool, then you first write something that
> generates ninja files.  Then you immediately have a bootstrap mechanism,
> plus something that will do builds rather quickly thereafter.  scons can be
> (very) slow.  My experience with waf was better, but I think generating a
> ninja script is superior and there are several tools to do that.  You could
> look at gyp, for example.
>
>
> ___
> Rust-dev mailing list
> Rust-dev@mozilla.org
> https://mail.mozilla.org/listinfo/rust-dev
>
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] returning functions in rust

2014-01-11 Thread Gábor Lehel
On Fri, Jan 10, 2014 at 8:01 PM, Daniel Micay  wrote:

> On Fri, Jan 10, 2014 at 1:57 PM, Patrick Walton 
> wrote:
> > It doesn't exist, outside of traits. Unboxed closures will probably make
> it
> > possible to express once again though.
> >
> > Patrick
>
> The tricky part is the need to infer the return type if it's defined
> inside the function since it's different per-closure.
>

I wrote up a small proposal for a feature that could help with this:

https://github.com/mozilla/rust/issues/11455



> ___
> Rust-dev mailing list
> Rust-dev@mozilla.org
> https://mail.mozilla.org/listinfo/rust-dev
>
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] RFC: Future of the Build System

2014-01-11 Thread james

On 10/01/2014 08:54, Jan Niklas Hasse wrote:

Also cmake still depends on make (or even worse Visual Studio / Xcode).

We use cmake with ninja at work, and that seems to work pretty well.

I suggest if you want to write a tool, then you first write something 
that generates ninja files.  Then you immediately have a bootstrap 
mechanism, plus something that will do builds rather quickly 
thereafter.  scons can be (very) slow.  My experience with waf was 
better, but I think generating a ninja script is superior and there are 
several tools to do that.  You could look at gyp, for example.


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] RFC: Future of the Build System

2014-01-11 Thread james

On 11/01/2014 07:56, George Makrydakis wrote:

There is little reason to believe that having a build system in Rust would make 
it harder for people to package.
Surely you just need an alternate that is a script generated as a 
from-clean dry run with -j1?  It gives you the commands needed, in an 
order that works.


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Appeal for CORRECT, capable, future-proof math, pre-1.0

2014-01-11 Thread Diggory Hardy
There is static analysis (i.e. determine ahead of time exactly what values 
variables may take), but it's certainly not a panacea: the analysis step is 
slow (probably too slow to fully integrate into a compiler), not everything 
can be solved, and most existing solvers are not free software as far as I am 
aware.

It could perhaps be used for a little optimisation and for proofs that overflow 
doesn't occur in some cases, but integrating a static analysis system with a 
compiler would be no easy task. Leon is the most advanced version I'm aware of 
(though it's not really my field): http://lara.epfl.ch/w/leon

On Saturday 11 January 2014 11:18:41 Marijn Haverbeke wrote:
> I am not aware of an efficient way to provide
> automatic-overflow-to-bignum semantics in a non-garbage-collected
> language, without also imposing the burden of references/move
> semantics/etc on users of small integers. I.e. integers, if they may
> hold references to allocated memory can no longer sanely be considered
> a simple value type, which doesn't seem like it'd be a good idea for
> Rust.
> 
> If there is a good solution to this, I'd love to find out about it.
> ___
> Rust-dev mailing list
> Rust-dev@mozilla.org
> https://mail.mozilla.org/listinfo/rust-dev

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] RFC: Future of the Build System

2014-01-11 Thread George Makrydakis
Indeed. I fully agree with your apt foreshadowing of events.

If it is not feasible to have a Rust-based tool now, then as long as no other 
tool is formally given privileged status, using whatever ready means are 
appropriate is a strategy that works well for a limited period of time.

From this thread it seems a reasonably acceptable compromise - until a Rust 
tool is given priority; but it is not clear whether this is the actual plan. I 
think that discussing the merits of other build systems should not be 
transmuted into an agenda of using them as the blessed defaults.

Specifying this is very important for Rust to be a modern, coherent, 
platform-level solution with easy exchange of libraries among users, relying on 
a common environment that is compatible with the goals of Rust itself. This is 
why it should be fully controlled by the Rust community, and thus written in Rust.

Think about the merit of having such a system in rust, eventually deployed by 
other projects, unrelated to rust, because it ends up being *that* good.

This is a matter that should be settled definitively after Rust 1.0, when the 
language starts being reliably backwards-compatible to a considerable extent.

G.

Michael Neumann  wrote:
>rustc is "just" another regular Rust application. So use the tools that
>any other Rust application (will) use ;-)
>
>I think at some point in time there will be a capable build tool written
>in Rust (like there is for all the other languages). Then it would make
>sense to switch using it for the compiler as well.
>
>Michael
>
>On 11.01.2014 08:56, George Makrydakis wrote:
>> There is little reason to believe that having a build system in Rust
>> would make it harder for people to package.
>>
>> I do understand the pre-dependency argument, but the Rust compiler
>> itself has pre-dependencies in order to compile anyway, as does any
>> similar project. Therefore the decisional weight of choosing a
>> non-Rust-based solution over a Rust one because Debian packagers have
>> problems packaging a compiler is not adequately justified.
>>
>> Using a well known build system as a means to appeal to programmers
>> is seemingly an advantage, but it does not exonerate them from having
>> to be competent in Rust before they write useful programs. And that
>> has a learning curve superior to that of a build system.
>>
>> As for boost's jam I have nothing to say other than that boost having
>> its own build system makes things easy for boost first; this does not
>> mean that their needs are those of everybody else, and boost is a
>> library, not a programming language itself. So, again, a decision to
>> pick a popular solution on the basis of such a comparison rests on a
>> flawed premise.
>>
>> Lastly, imagine the irony of Rust proposing to use Python-, C-, or
>> C++-based build tools for simple packages. That would make packagers
>> more frustrated because of a wider set of dependencies. While end
>> users would also have to deal with a known system, its eventual
>> inadequacies could not be addressed directly by Rust devs unless they
>> start amending that system in order to deal with them. Therefore,
>> maintenance overhead is inescapable either way, with the pessimization
>> of relying on another non-Rust project in order to make it worth your
>> while to enjoy programming in Rust.
>>
>> The only valid argument against having a build system written in Rust
>> proposed as the official, de facto, cross-platform way of building
>> Rust packages is its development and maintenance overhead for the Rust
>> core team itself.
>>
>> That problem is easily circumvented by not proposing one right now and
>> leaving the decision to the end developer. If, however, an official
>> build system is to be proposed, Rust developers merit having it done
>> on their own platform, thus proving Rust's worth. It is 2014, after
>> all.
>>
>> G.
>>
>>
>>
>> Lee Braiden  wrote:
>>> On 10/01/14 08:16, Gaetan wrote:
 I am not in favor of a customized build system. For instance boost
 library use their jam build system, and i never figured how to use it
 in my projects.

 I push to use standard and well proved build system like cmake or
 scons, at least for major components. This would give a nice example
 of how to use it in any projects.

>>> I'd agree with that on both counts: the principle of using something
>>> standard, and the two recommendations.
>>>
>>> CMake would probably get my vote, because it's not so much a build tool,
>>> as a meta tool for whichever system you prefer, so it would fit in well
>>> with various platform-specific IDEs, unusual platforms (android,
>>> embedded, ...), etc.  That said, scons is also a strong contender, and
>>> which of the two is more open to integrating patches and working with
>>> new languages is very much worth considering.
>>>
>>> I think Rust will be contributing to the wider community by lending its
>>> support (and patches) to a common, modern build system, AND it will get
>>> something back in terms of users who already know the build system.

Re: [rust-dev] RFC: Future of the Build System

2014-01-11 Thread Michael Neumann
rustc is "just" another regular Rust application. So use the tools that 
any other Rust application (will) use ;-)


I think at some point in time there will be a capable build tool written 
in Rust (like there is for all the other languages). Then it would make

sense to switch using it for the compiler as well.

Michael

On 11.01.2014 08:56, George Makrydakis wrote:

There is little reason to believe that having a build system in Rust would make 
it harder for people to package.

I do understand the pre-dependency argument, but the Rust compiler itself has 
pre-dependencies in order to compile anyway, as does any similar project. 
Therefore the decisional weight of choosing a non-Rust-based solution over a 
Rust one because Debian packagers have problems packaging a compiler is not 
adequately justified.

Using a well known build system as a means to appeal to programmers is 
seemingly an advantage, but it does not exonerate them from having to be 
competent in Rust before they write useful programs. And that has a learning 
curve superior to that of a build system.

As for boost's jam I have nothing to say other than that boost having its own 
build system makes things easy for boost first; this does not mean that their 
needs are those of everybody else, and boost is a library, not a programming 
language itself. So, again, a decision to pick a popular solution on the basis 
of such a comparison rests on a flawed premise.

Lastly, imagine the irony of Rust proposing to use Python-, C-, or C++-based 
build tools for simple packages. That would make packagers more frustrated 
because of a wider set of dependencies. While end users would also have to 
deal with a known system, its eventual inadequacies could not be addressed 
directly by Rust devs unless they start amending that system in order to deal 
with them. Therefore, maintenance overhead is inescapable either way, with the 
pessimization of relying on another non-Rust project in order to make it worth 
your while to enjoy programming in Rust.

The only valid argument against having a build system written in Rust proposed 
as the official, de facto, cross-platform way of building Rust packages is its 
development and maintenance overhead for the Rust core team itself.

That problem is easily circumvented by not proposing one right now and leaving 
the decision to the end developer. If, however, an official build system is to 
be proposed, Rust developers merit having it done on their own platform, thus 
proving Rust's worth. It is 2014, after all.

G.



Lee Braiden  wrote:

On 10/01/14 08:16, Gaetan wrote:

I am not in favor of a customized build system. For instance boost
library use their jam build system, and i never figured how to use it
in my projects.

I push to use standard and well proved build system like cmake or
scons, at least for major components. This would give a nice example
of how to use it in any projects.


I'd agree with that on both counts: the principle of using something
standard, and the two recommendations.

CMake would probably get my vote, because it's not so much a build tool,
as a meta tool for whichever system you prefer, so it would fit in well
with various platform-specific IDEs, unusual platforms (android,
embedded, ...), etc.  That said, scons is also a strong contender, and
which of the two is more open to integrating patches and working with
new languages is very much worth considering.

I think Rust will be contributing to the wider community by lending its
support (and patches) to a common, modern build system, AND it will get
something back in terms of users who already know the build system.



 On Friday, January 10, 2014, George Makrydakis wrote:


 Hello,

 Having a build system entirely dependent on Rust alone would make the
 entire experience of deploying the language extremely coherent. The only
 counter-argument is indeed that it would require some work to get this
 to fruition. I would like to know if this has any chance of getting
 priority soon enough.


Bear in mind that Debian are having a lot of issues packaging Rust
already, because it self-compiles.  If the build tool also had a Rust
pre-dependency, that would be a big step backwards.

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Appeal for CORRECT, capable, future-proof math, pre-1.0

2014-01-11 Thread Marijn Haverbeke
I am not aware of an efficient way to provide
automatic-overflow-to-bignum semantics in a non-garbage-collected
language, without also imposing the burden of references/move
semantics/etc on users of small integers. I.e. integers, if they may
hold references to allocated memory can no longer sanely be considered
a simple value type, which doesn't seem like it'd be a good idea for
Rust.

If there is a good solution to this, I'd love to find out about it.
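[Editor's note: as a concrete illustration of the trade-off described above -- my sketch in present-day Rust, not code from the thread; Box<i128> merely stands in for a real heap-allocated bignum.]

enum AutoInt {
    Small(i64),
    // The promoted variant owns heap memory, so AutoInt needs Drop, Clone
    // and move semantics -- it is no longer a trivially copyable value type,
    // which is exactly the burden described above.
    Big(Box<i128>),
}

fn add(a: i64, b: i64) -> AutoInt {
    match a.checked_add(b) {
        Some(s) => AutoInt::Small(s),
        None => AutoInt::Big(Box::new(a as i128 + b as i128)), // promote on overflow
    }
}

fn main() {
    match add(i64::MAX, 1) {
        AutoInt::Small(n) => println!("stayed small: {}", n),
        AutoInt::Big(n) => println!("promoted to heap: {}", n),
    }
}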
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Exporting macros: #[macro_escape] usage

2014-01-11 Thread Vladimir Matveev
Oh, thanks. It does work now. Are macro scoping rules documented
anywhere other than the compiler source code?
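[Editor's note: a minimal sketch of the working layout Chris describes below, adapted to present-day Rust syntax rather than the 0.9 syntax used in the original example, with the expansion simplified to a function so it is self-contained. The key point is unchanged: macro_rules! scoping is textual, so the definition must come before the `pub mod submod;` declaration.]

// lib.rs
pub mod m1;

// m1/mod.rs
macro_rules! example_rule {
    () => {
        pub fn generated() -> u32 { 42 }
    };
}

// The macro is defined above this declaration, so the child module sees it.
pub mod submod;

// m1/submod.rs
example_rule!();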

2014/1/11 Chris Morgan :
> The macro is being defined after the module is defined. You need to move the
> macro definition before the "pub mod submod;" line. Also due to the scoping
> rules of macros, you don't need #[macro_escape] there---it's a child, so it
> gets the macro. Only siblings, parents, uncles, aunts, cousins, &c. would
> need it.
>
> On Jan 11, 2014 9:46 AM, "Vladimir Matveev"  wrote:
>>
>> Hi,
>>
>> As far as I understand, the current way to export macros is to
>> annotate the module with macro_rules definition with #[macro_escape]
>> annotation. But I just can't get it right, and my macro is not visible
>> in other module :(
>>
>> Here is what I have:
>>
>> - START -
>> /lib.rs:
>> #[feature(macro_rules)];
>>
>> pub mod m1;
>>
>> --
>> /m1/mod.rs:
>> #[macro_escape];
>>
>> pub mod submod;
>>
>> macro_rules! example_rule(
>> () => (mod test;)
>> )
>>
>> --
>> /m1/submod.rs:
>> use m1;
>>
>> example_rule!()
>> - END -
>>
>> I have assumed that putting #[macro_escape] annotation to a module
>> makes all macros from that module available in all modules which
>> import this module, but apparently I'm wrong because the code above
>> does not work with 'macro undefined' error.
>>
>> Could please someone explain how #[macro_escape] works in detail? I
>> couldn't find any documentation on it, and looking through standard
>> libs was not helpful.
>>
>> Thanks,
>> Vladimir.
>> ___
>> Rust-dev mailing list
>> Rust-dev@mozilla.org
>> https://mail.mozilla.org/listinfo/rust-dev
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev