Re: [rust-dev] std::rand::Rng

2014-09-18 Thread Felix S. Klock II

On 17 Sep 2014, at 23:33, Sean McArthur smcart...@mozilla.com wrote:

 On Wed, Sep 17, 2014 at 2:26 PM, Evan Davis cptr...@gmail.com wrote:
 
 The problem is that you're trying to use a trait as a type.
 
 That shouldn't be a problem. You can use a `&mut Trait`, and you'll get 
 dynamic dispatch.

You cannot mix dynamic dispatch with generic methods.  (This is because we 
implement generic methods by monomorphization, i.e. by creating separate copies 
of the implementation for each instantiation -- but with dynamic dispatch, we 
do not know a priori what all of the calls to the method will be.)

I would link a relevant portion of the Rust manual here, but I cannot actually 
find documentation of this restriction.



Pete’s original question was actually answered in his followup email.  It is 
true that you cannot do:

 fn print_numbers(r: &mut Rng) {
     for _ in range(0u, 10) {
         println!("{}", r.gen::<uint>());
     }
 }

due to the restriction I described above, but you can do:

 fn print_numbers<R: Rng>(r: &mut R) {
     for _ in range(0u, 10) {
         println!("{}", r.gen::<uint>());
     }
 }

where the trait is now being used as a bound on a generic type, rather than as 
the basis of an object type.
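
For completeness, a small sketch of a caller of the generic version (assuming the era's `std::rand::task_rng()`, which returns a concrete `Rng` implementor that `print_numbers` can be monomorphized against):

    use std::rand;

    fn main() {
        // task_rng() gives a concrete Rng implementor, so print_numbers is
        // monomorphized for it; no trait object is involved.
        let mut rng = rand::task_rng();
        print_numbers(&mut rng);
    }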

Cheers,
-Felix

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Right list?

2014-08-14 Thread Felix S. Klock II
Philippe (cc’ing rust-dev)-

Hi!  I think that we are currently encouraging users to post questions about 
using the language (or tools, etc) to StackOverflow, using the “rust” tag so 
that the community can find them easily.

http://stackoverflow.com/questions/tagged/rust

You can see Brian’s recent overview of our various communication channels and 
where we hope to go with them here:

https://mail.mozilla.org/pipermail/rust-dev/2014-July/010979.html

Cheers,
-Felix

On 14 Aug 2014, at 11:50, Philippe de Rochambeau phi...@free.fr wrote:

 Hi,
 
 is this the right list to ask « newbie » questions about Rust?
 
 Many thanks.
 
 Philippe
 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Why no @Override analogy?

2014-07-17 Thread Felix S. Klock II
Note that we are well aware that the issue Christoph mentioned is a wart for 
some uses of traits.  E.g. some trait implementors really would like to be told 
when they missed a method because it has a default implementation already.

One way to resolve this would be a lint; that has been previously filed as 
Issue #14220:

  add attribute on impl and associated lint for no default methods
  https://github.com/rust-lang/rust/issues/14220
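
For concreteness, a minimal sketch of the kind of silent miss such a lint would catch (the trait and type names here are invented for illustration):

    trait Visitor {
        // default implementation: used silently if an implementor forgets to override it
        fn visit_item(&self) { /* fall through, visit nothing */ }
    }

    struct MyVisitor;

    // Compiles today without any warning, even though visit_item was never
    // overridden; the proposed attribute/lint would flag this impl.
    impl Visitor for MyVisitor {}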

Cheers,
-Felix
  
On 16 Jul 2014, at 21:16, Christoph Husse thesaint1...@googlemail.com wrote:

 doh. okay. it has a lot to do with it but it is enabled by default then :D. 
 slowly climbing the learning curve lol
 
 On Wednesday, July 16, 2014, Steven Fackler sfack...@gmail.com wrote:
 I don't see what this has to do with @Override. @Override causes the compiler 
 to check to make sure that the method signature matches a supertype method. 
 Rust always enforces that check inside of a trait implementation block. This 
 kind of thing is always illegal:
 
 impl SomeTrait for Foo {
     fn some_random_method_that_isnt_part_of_some_trait(&self) { ... }
 }
 
 The visitor trait is huge, and 99% of use cases don't need to visit every 
 possible part of the AST, so there are default implementations of all of the 
 methods that simply fall through to visit all of the subparts. That comment 
 is just saying that if you *do* want to visit everything, you have to 
 manually check to make sure you're overriding everything.
 
 Steven Fackler
 
 
 On Wed, Jul 16, 2014 at 11:59 AM, Christoph Husse 
 thesaint1...@googlemail.com wrote:
 This comment from syntax::visit::Visitor really gives me a headache:
 
 /// If you want to ensure that your code handles every variant
 /// explicitly, you need to override each method.  (And you also need
 /// to monitor future changes to `Visitor` in case a new method with a
 /// new default implementation gets introduced.)
 
 I kindof thought we would have passed this :(.
 I need to check for future changes :O? How? Closed source 3rd party,
 just to name one example, or simply oversight. Okay, an IDE could warn
 too. But we don't have one right now and in the past it didn't seem 
 like this would have helped much.
 
 What's the rationale behind this decision?
 
 Why no: #[Impl] attribute or something?
 
 Sry, if I bring up old discussions but it was kinda hard to come up
 with a search term for this.
 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev
 
 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Clarification about RFCs

2014-06-16 Thread Felix S. Klock II
Gabor (cc’ing rust-dev)-

I have filed an issue to track incorporating the answers to these questions 
into the RFC process documentation.

  https://github.com/rust-lang/rfcs/issues/121

Here are some of my opinionated answers to the questions:

1. When an RFC PR is merged as accepted into the repository, then that implies 
that we should implement it (or accept a community provided implementation) 
whenever we feel it best.  This could be a matter of scratching an itch, or it 
could be to satisfy a 1.0 requirement; so there is no hard and fast rule about 
when the implementation for an RFC will actually land.

2. An RFC closed with “postponed” is being marked as such because we do not 
want to think about the proposal further until post-1.0, and we believe that we 
can afford to wait until then to do so.  “Evaluate” is a funny word; usually 
something marked as “postponed” has already passed an informal first round of 
evaluation, namely the round of “do we think we would ever possibly consider 
making this change, as outlined here or some semi-obvious variation of it.”  
(When the answer to that question is “no”, then the appropriate response is to 
close the RFC, not postpone it.)

3. We strive to write each RFC in a manner that it will reflect the final 
design of the feature; but the nature of the process means that we cannot 
expect every merged RFC to actually reflect what the end result will be when 
1.0 is released.  The intention, I believe, is to try to keep each RFC document 
somewhat in sync with the language feature as planned.  But just because an RFC 
has been accepted does not mean that the design of the feature is set in stone; 
one can file pull-requests to change an RFC if there is some change to the 
feature that we want to make, or need to make, (or have already made, and are 
going to keep in place).

4. If an RFC is accepted, the RFC author is of course free to submit an 
implementation, but it is not a requirement that an RFC author drive the 
implementation of the change.  Each time an RFC PR is accepted and merged into 
the repository, a corresponding tracking issue is supposed to be opened up on 
the rust repository.  A large point of the RFC process is to help guide 
community members in selecting subtasks to work on, where each member can 
be reasonably confident that their efforts will not be totally misguided.  So, 
it would probably be best if anyone who plans to work on implementing a feature 
actually write a comment *saying* that they are planning such implementation on 
the tracking issue on the rust github repository.  Having said that, I do not 
think we have been strictly following the latter process; I think currently you 
would need to also review the meeting notes to determine if someone might have 
already claimed responsibility for implementation.

5. The choice of which RFC’s get reviewed is somewhat ad-hoc at the moment.  We 
do try to post each agenda topic ahead of time in a bulleted list at the top of 
the shared etherpad ( https://etherpad.mozilla.org/Rust-meeting-weekly ) , and 
RFC’s are no different in this respect.  But in terms of how they are selected, 
I think it is largely driven by an informal estimate of whether the comment 
thread has reached a steady state (i.e. either died out or not showing any sign 
of providing further insight or improvement feedback to the RFC itself).  Other 
than that, we basically try to make sure that any RFC that we accept is 
accepted at the Tuesday meeting, so that there is a formal record of discussion 
regarding acceptance.  So we do not accept RFC’s at the Thursday meeting.  We 
may reject RFC’s at either meeting; in other words, the only RFC activity on 
Thursdays is closing the ones that have reached a steady state and that the 
team agrees we will not be adopting.

I want to call special attention to the question of “What if the author of the 
reviewed RFC isn't a participant in the meetings?”  This is an important issue, 
since one might worry that the viewpoint of the author will not be represented 
at the meeting itself.  In general, we try to only review RFC’s that at least a 
few people have taken the time to read the corresponding discussion thread and 
are prepared to represent the viewpoints presented there.  

Ideally at least one meeting participant would act as a champion for the 
feature (and hopefully also have read the discussion thread).  Such a person 
need not *personally* desire the feature; they just need to act to represent 
its virtues and the community’s desire for it.  (I think of it like a criminal 
defense attorney; they may not approve of their client’s actions, but they want 
to ensure their client gets proper legal representation.)

But I did have the qualifier “Ideally” there, since our current process does 
not guarantee that such a champion exists.  If no champion exists, it is either 
because not enough people have read the RFC (and thus we usually try to 
postpone making a 

Re: [rust-dev] Why explicit named lifetimes?

2014-05-22 Thread Felix S. Klock II
Michael (cc'ing rust-dev)-

On 22 May 2014, at 16:32, Michael Woerister michaelwoeris...@posteo.net wrote:

 Lately I've been thinking that it might be nice if one could omit the 
 lifetimes from the list of generic parameters, as in:
 
 fn foo<T>(x: &'a T, y: &'b MyStruct) -> (&'b int, &'a uint)
 
 instead of
 
 fn foo<'a, 'b, T>(x: &'a T, y: &'b MyStruct) -> (&'b int, &'a uint)
 
 Especially for otherwise non-generic functions, having to explicitly list 
 lifetimes seems a bit redundant, as they are unambiguously defined in the 
 function signature anyway (as opposed to type parameters, which don't have 
 the special `'` marker).

Note that this is not true in the general case.  Consider e.g. methods within a 
type impl or a trait impl: in that scenario, one could bind lifetimes on the 
method itself, or in the type/trait being implemented.  This distinction 
matters (in terms of what limits it imposes on the clients of those 
types/traits).

(I provide a concrete example after my signature.)

There are other changes one could make to still accommodate a change like you 
suggest in the general case, such as inferring some binding site (e.g. the 
nearest one) if a lifetime is otherwise found to be unbound.  But I personally 
do not think such changes improve the character of the language (IMHO); I'd 
rather have a straight-forward rule that one applies in every context.

Cheers,
-Felix

Concrete Example:

The variants below are quite different.

#[deriving(Show)]
struct S<'a> {
    p: &'a int
}

impl<'a> S<'a> {
    #[cfg(variant1)]
    fn foo(&mut self, arg: &'a int) -> &'a int {
        let old = self.p;
        self.p = arg;
        old
    }

    #[cfg(variant2)]
    fn foo<'a>(&mut self, arg: &'a int) -> &'a int {
        let old = self.p;
        self.p = arg;
        old
    }

    #[cfg(variant3)]
    fn foo<'a>(&mut self, arg: &'a int) -> &'a int {
        arg
    }

    #[cfg(variant4)]
    fn foo<'a>(&mut self, arg: &'a int) -> &'a int {
        self.p
    }

    #[cfg(variant5)]
    fn foo<'a>(&self, arg: &'a int) -> &'a int {
        self.p
    }

    #[cfg(variant6)]
    fn foo<'a>(&'a self, arg: &'a int) -> &'a int {
        let _ = arg;
        self.p
    }
}

#[allow(unused_mut)]
fn main() {
    let x = 3;
    let mut s = S{ p: &x };
    let y = 4;
    println!("begin: {:?}", s);
    let z = s.foo(&y);
    println!("later: {:?} z: {:?}", s, z);
}

Half of them do not compile (for differing reasons).  Variants 1, 3, and 6 do 
run (and each produces a different output), but the interesting points come 
from understanding why the other cases do not compile.

(FYI: It is probably easier to talk about the example if you first alpha-rename 
the method-bound lifetimes to something other than `'a`.)

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Private trait items

2014-04-22 Thread Felix S. Klock II
Tommi (cc'ing rust-dev)-

I recommend you make a small fake github repository of your own, and learn the 
github workflow directly by forking that (perhaps with a separate fresh dummy 
github user account).

I am not being facetious; I did a lot of that when I was first getting my 
bearings using github (as well as git itself).

Of course, learning methodologies that worked for me may not work for everyone 
else, but that's my two cents.

Cheers,
-Felix

On 22 Apr 2014, at 23:57, Tommi rusty.ga...@icloud.com wrote:

 On 2014-04-22, at 21:44, Brian Anderson bander...@mozilla.com wrote:
 
 I'm not sure what you are asking for here. Have you submitted this as a pull 
 request to http://github.com/rust-lang/rfcs?
 
 No, I haven't made the pull request, because I don't know how to do that (or 
 perhaps I would know how to do that, if I knew how to create a fork for the 
 second time of the same thing). I'm not even sure of what exactly it is that 
 I'm not capable of doing.
 
 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Why mod.rs files?

2014-04-17 Thread Felix S. Klock II
Tommi (cc'ing rust-dev)-

I don't know if this is considered an anti-pattern or not, but if you want to 
structure your files in the manner you describe, you can do so without 
resorting to `#[path=…]`, assuming you're willing to add a bit of boilerplate 
to the foo.rs and bar.rs to pull in the modules that are in the subdirectories.

I've included a transcript describing a tiny example of your layout 
(variant1) versus the mod.rs-based layout (variant2).  The main difference 
is that one needs to put declarations of the modules in the subdirectory into a 
nested private mod (`mod foo { pub mod lut; }`) and a reexport (`pub use lut = 
self::foo::lut`) within foo.rs in the parent directory.

(In case it's not obvious from the context, it is legal to put the `foo` mod at 
either dir/foo.rs or at dir/foo/mod.rs; the compiler will look in both 
places for it.  If you put a file for the `foo` mod in both places, the 
compiler signals an error since that is an ambiguity.)

Cheers,
-Felix

Transcript illustrating directory layout and how to make your code accommodate 
either layout.

% find variant1 -type file
variant1/foo/lut.rs
variant1/foo.rs
variant1/main.rs
% find variant2 -type file
variant2/foo/lut.rs
variant2/foo/mod.rs
variant2/main.rs
% rustc variant1/main.rs && ./main
m/variant1/foo.rs m/variant1/foo/lut.rs
% rustc variant2/main.rs && ./main
m/variant2/foo/mod.rs m/variant2/foo/lut.rs
% find variant1 -type file -exec echo "== {} ==" \; -exec cat {} \;
== variant1/foo/lut.rs ==
pub fn lut() -> ~str { ~"m/variant1/foo/lut.rs" }
== variant1/foo.rs ==
pub use lut = self::foo::lut;
pub fn foo() -> ~str { ~"m/variant1/foo.rs" }
mod foo {
    pub mod lut;
}
== variant1/main.rs ==
mod foo;
fn main() {
    println!("{} {}", foo::foo(), foo::lut::lut());
}
% find variant2 -type file -exec echo "== {} ==" \; -exec cat {} \;
== variant2/foo/lut.rs ==
pub fn lut() -> ~str { ~"m/variant2/foo/lut.rs" }
== variant2/foo/mod.rs ==
pub fn foo() -> ~str { ~"m/variant2/foo/mod.rs" }
pub mod lut;
== variant2/main.rs ==
mod foo;
fn main() {
    println!("{} {}", foo::foo(), foo::lut::lut());
}
% 


On 17 Apr 2014, at 16:39, Tommi rusty.ga...@icloud.com wrote:

 Can someone explain to me why the module system maps to the file system in the 
 way it does? The problem is that you can end up with these modules named 
 mod.rs instead of the more informative names. If you have the following 
 modules:
 
 foo
 foo::lut
 bar
 bar::lut
 
 ...that maps to files and folders as such:
 
 foo/mod.rs
 foo/lut.rs
 bar/mod.rs
 bar/lut.rs
 
 ...but why not map such modules to files and folders as the following:
 
 foo.rs
 foo/lut.rs
 bar.rs
 bar/lut.rs
 
 ...and have each module informatively named.
 
 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Improving our patch review and approval process (Hopefully)

2014-02-19 Thread Felix S. Klock II


On 19/02/2014 21:12, Flaper87 wrote:

2. Approval Process

[...] For example, requiring 2 r+ from 2 different reviewers instead 
of 1. This might seem a bit drastic now, however as the number of 
contributors grows, this will help with making sure that patches are 
reviewed at least by 2 core reviewers and they get enough attention.


I mentioned this on the #rust-internals irc channel but I figured I 
should broadcast it here as well:


regarding fractional r+, someone I was talking to recently described 
their employer's process, where the first reviewer (who I think is 
perhaps part of a privileged subgroup) assigns the patch the number of 
reviewers it needs, so that it isn't a flat "every patch needs two 
reviewers" but instead someone says "this looks like something 
big/hairy enough that it needs K reviewers".


just something to consider, if we're going to look into strengthening 
our review process.


Cheers,
-Felix

On 19/02/2014 21:12, Flaper87 wrote:

Hi all,

I'd like to share some thoughts with regard to our current test and 
approval process. Let me break these thoughts into 2 separate sections:


1. Testing:

Currently, all patches are being tested after they are approved. 
However, I think it would be of great benefit for contributors - and 
reviewers - to test patches before and after they're approved. Testing 
the patches before approval will allow folks proposing patches - 
although they're expected to test the patches before submitting them - 
and reviewers to know that the patch is indeed mergeable. Furthermore, 
it will help spot corner cases and regressions that would benefit 
from a good discussion while the PR is hot.


I think we don't need to run all jobs, perhaps just Windows, OSx and 
Linux should be enough for a first test phase. It would also be nice 
to run lint checks, stability checks etc. IIRC, GH's API should allow 
us to notify about these checks' failures.


2. Approval Process

I'm very happy about how patches are reviewed. The time a patch waits 
before receiving the first comment is almost 0 seconds and we are 
spread in many patches. If we think someone else should take a look at 
some patch, we always make sure to mention that person.


I think the language would benefit from a more strict approval 
process. For example, requiring 2 r+ from 2 different reviewers 
instead of 1. This might seem a bit drastic now, however as the number 
of contributors grows, this will help with making sure that patches 
are reviewed at least by 2 core reviewers and they get enough attention.



I think both of these points are very important now that we're moving 
towards 1.0 and the community keeps growing.


Thoughts? Feedback?

--
Flavio (@flaper87) Percoco
http://www.flaper87.com
http://github.com/FlaPer87


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev



--
irc: pnkfelix on irc.mozilla.org
email: {fklock, pnkfelix}@mozilla.com

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Fwd: Problems building rust on OSX

2014-02-10 Thread Felix S. Klock II

Martin (cc'ing rust-dev)-

I recommend you file a fresh bug with a transcript of your own build 
attempt.


I infer you are pointing us to issue #11162 because of some similarity 
in the log output you see between that and your own build issue, but 
issue #11162 is fundamentally related to a local LLVM install (at least 
according to its current title) and has been verified as fixed by 
others, so I believe you are probably better off making a fresh bug (and 
perhaps linking to #11162 from it).


Cheers,
-Felix

On 09/02/2014 22:56, Martin Koch wrote:

Thanks for your reply.

I built by cloning from github, so I am on master. Also, I don't have 
llvm installed, so that must come from the rust build somehow?


GCC is

 gcc --version
i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. 
build 5658) (LLVM build 2336.11.00)




On Sun, Feb 9, 2014 at 10:30 PM, Alex Crichton a...@crichton.co wrote:


This problem has been fixed on master, so I would recommend using
master or uninstalling LLVM temporarily from the system (a
non-standard gcc in the path may also mess with compilation)

On Sun, Feb 9, 2014 at 1:15 PM, Martin Koch m...@issuu.com wrote:
 Hi List

 I'm trying to get rust to compile, but I'm apparently running
into this bug:

 https://github.com/mozilla/rust/issues/11162

 So my question is: How do I manually download and use this snapshot:



rust-stage0-2014-01-20-b6400f9-macos-x86_64-6458d3b46a951da62c20dd5b587d44333402e30b.tar.bz2

 Thanks,

 /Martin Koch


 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev





___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev



--
irc: pnkfelix on irc.mozilla.org
email: {fklock, pnkfelix}@mozilla.com

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Joe Armstrong's universal server

2013-12-19 Thread Felix S. Klock II

rust-dev-

From reading the article, I thought the point was that a universal 
server could be deployed and initiated before the actual service it 
would offer had actually been *written*.


I agree with Kevin that the exercise would be pretty much pointless for 
Rust, unless you enjoy writing interpreters and/or JIT compilers and 
want to implement one for Rust.  (I know we had rusti at one point, keep 
reading...)


In particular, I assume this works in Erlang because Erlang programs are 
compiled to an interpreted bytecode representation (for an abstract BEAM 
machine) that can then be JIT compiled for the target architecture when 
it is run.


But Rust does not have an architecture-independent target code 
representation; I do not think LLVM bitcode counts, at least not the 
kind we generate, since I believe that has assumptions about the target 
architecture baked into the generated code.


Cheers,
-Felix

On 18/12/2013 19:17, Kevin Ballard wrote:
That's cute, but I don't really understand the point. The sample 
program he gave:


test() ->
    Pid = spawn(fun universal_server/0),
    Pid ! {become, fun factorial_server/0},
    Pid ! {self(), 50},
    receive
        X -> X
    end.

will behave identically if you remove universal_server from the equation:

test() ->
    Pid = spawn(fun factorial_server/0),
    Pid ! {self(), 50},
    receive
        X -> X
    end.

The whole point of universal_server, AFAICT, is to just demonstrate 
something clever about Erlang's task communication primitives. The 
equivalent in Rust would require passing channels back and forth, 
because factorial_server needs to receive different data than 
universal_server. The only alternative that I can think of would be to 
have a channel of ~Any+Send objects, which isn't very nice.


To that end, I don't see the benefit of trying to reproduce the same 
functionality in Rust, because it's just not a good fit for Rust's 
task communication primitives.


-Kevin

On Dec 18, 2013, at 6:26 AM, Benjamin Striegel ben.strie...@gmail.com wrote:


Hello rusties, I was reading a blog post by Joe Armstrong recently in 
which he shows off his favorite tiny Erlang program, called the 
Universal Server:


http://joearms.github.io/2013/11/21/My-favorite-erlang-program.html

I know that Rust doesn't have quite the same task communication 
primitives as Erlang, but I'd be interested to see what the Rust 
equivalent of this program would look like if anyone's up to the task 
of translating it.

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev




___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev



--
irc: pnkfelix on irc.mozilla.org
email: {fklock, pnkfelix}@mozilla.com

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] [whoami] crate, package and module confused me!

2013-12-18 Thread Felix S. Klock II

Gaetan (cc'ing rust-dev)-

On 18/12/2013 09:48, Gaetan wrote:

but I wouldn't understand why the 'mod' keyword would stay

Because a Rust module is not the same thing as a crate.

Changes to the syntax for linking to an external crate is orthogonal to 
the concept of a module.  So as long as we want to be able to define 
modules (which are not crates), we probably want a `mod` keyword.





Quick tutorial
--

The keyword `mod` (without extern) is used to define modules.

A module gathers together a set of names (either by defining each name 
as some item in the module, or by importing them from the outside via 
`use`).  Modules can be nested within one another:


mod a {
    pub mod b {
        pub fn f(x: i32) -> i32 { 3 * x * x }
    }
    pub mod c {
        pub fn f(x: f32) -> f32 { 3.14 * x * x }
    }
}

fn main() {
    use a::{b, c};
    println!("b::f(2) = {} c::f(2) = {}", b::f(2), c::f(2.0))
}

One could write a whole slew of modules in a single file by nesting 
them.  But since many developers would prefer to be able to break their 
modules into multiple files, the `mod` form also lets you define an inner 
module but put its body into another file.  So here's another way to 
write the `mod a` from above:


mod a {
    pub mod b; // (moved definition of pub fn f: i32 -> i32 to file at a/b.rs)
    pub mod c; // (moved definition of pub fn f: f32 -> f32 to file at a/c.rs)
}
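
The files referenced by those declarations then just hold the moved definitions, e.g.:

    // a/b.rs
    pub fn f(x: i32) -> i32 { 3 * x * x }

    // a/c.rs
    pub fn f(x: f32) -> f32 { 3.14 * x * x }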

Modules are for namespace and static-access control.

On the other hand, a crate is a unit of compilation, composed of one or 
more modules nested together in a hierarchical structure. The rust 
compiler works on a whole crate at a time; this allows it to perform 
code analyses and transformations that cross module boundaries (such as 
enforcing the coherence rule for trait implementations).




I believe all of this is spelled out in the "Crates and the module 
system" section [1] of the tutorial.


(There is always a single *root module* for each crate; it is the root 
of the tree hierarchy of modules.  This correspondence may be the reason 
for your confusion differentiating the two concepts.)


The use of `extern mod` as a way to import a crate is on its way out, as 
discussed in another message.  But that has nothing to do with the 
Rust's concept of a module.


Cheers,
-Felix

[1] 
http://static.rust-lang.org/doc/master/tutorial.html#crates-and-the-module-system


On 18/12/2013 09:48, Gaetan wrote:

I am in favor of replacing the mod keyword by crate.
#[package_id = "whoami"];
#[package_type = "lib"];
...
use crate whoamiextern

but I wouldn't understand why the 'mod' keyword would stay


-
Gaetan



2013/12/18 Liigo Zhuang com.li...@gmail.com

`use crate foo; ` looks good to me. i always think it should be
optional. rustc can deduce which crate will be used, from use mods
lines, in most situations.


2013/12/18 Brian Anderson bander...@mozilla.com

We discussed this some in the meeting today:
https://github.com/mozilla/rust/wiki/Meeting-weekly-2013-12-17


On 12/16/2013 06:41 PM, Liigo Zhuang wrote:


2013/12/16 Brian Anderson bander...@mozilla.com

My feeling is that it is a crate, since that's the name
we've historically used. There's already been agreement
to remove extern mod in favor of crate.


IMO, package is used in several languages; maybe it's more
familiar and friendly to rust newbies:

``` 


package newpkg; // pkgid is newpkg, compile to dynamic
library (.so)

package main; // pkgid main means compile to executable
program (.exe)

static package newpkg; // pkgid is newpkg, compile to
static library

extern package newpkg; // or ...

extern package newpkg = "http://github.com/liigo/rust-newpkg#newpkg:1.0";

```

But I'm OK if crate is used here.

Liigo, 2013-12-17.






-- 
by *Liigo*, http://blog.csdn.net/liigo/

Google+ https://plus.google.com/105597640837742873343/

___
Rust-dev mailing list
 Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev




___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev



--
irc: pnkfelix on irc.mozilla.org
email: {fklock, pnkfelix}@mozilla.com

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Rust crates and the module system

2013-12-13 Thread Felix S. Klock II

On 13/12/2013 12:53, spir wrote:
I think this is a good possibility, make the module/crate organisation 
mirror the filesystem (or the opposite):

* 1 module = 1 file of code
* 1 package = 1 dir
This may be limiting at times, possibility one may want multi-module 
files and multi-file modules. 
Yes, one may indeed want those things.  In particular, *I* want 
multi-module files.


I do not want to move towards a Java-style approach where the package 
nesting structure needs to match the file/directory nesting structure.  
Being able to declare nested modules within a file is very useful for 
flexible namespace control.
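
As a small illustration (the names here are invented for the example), the nesting exists purely to organize names, with no extra files or directories required:

    mod geometry {
        pub mod flat {
            pub fn area(w: f64, h: f64) -> f64 { w * h }
        }
        pub mod solid {
            pub fn volume(w: f64, h: f64, d: f64) -> f64 { w * h * d }
        }
    }

    fn main() {
        println!("{}", geometry::flat::area(2.0, 3.0));
    }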


I like our current support for nesting modules in files, and breaking 
them out into separate files as one wants.


But then again, I also think that the current approach of { `extern 
mod`... `use`... `mod`... } is pretty understandable once you, well, 
understand it.  My main complaint has been about the slightly 
context-dependent interpretation of paths [1], but that's pretty minor.  
So perhaps I have the wrong point-of-view for interpreting these 
suggestions for change.


Cheers,
-Felix

[1] https://github.com/mozilla/rust/issues/10910

On 13/12/2013 12:53, spir wrote:

On 12/13/2013 11:43 AM, Diggory Hardy wrote:

What would you do?

Have no structure (no mod)? Or automatically create it from the file 
structure?


I think this is a good possibility, make the module/crate organisation 
mirror the filesystem (or the opposite):

* 1 module = 1 file of code
* 1 package = 1 dir
This may be limiting at times, possibility one may want multi-module 
files and multi-file modules. But this forms a good, simple base 
(anyway, we have mini & maxi modules & code files, whatever the 
logical & physical organisations). Another point is that very often we 
have package (I mean crate ;-) sub-dirs which are not packages 
themselves. Then, as usual, we'd have a special code file representing 
a package at its top dir (the same name as the package, or a magic 
name like 'main').


Then, module sharing is code file sharing, and package management is 
dir management (trivially .zip-ed or .tar.gz-ed, and then the name 
package is here for something).


Denis
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev



--
irc: pnkfelix on irc.mozilla.org
email: {fklock, pnkfelix}@mozilla.com

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Error casting to trait: value may contain borrowed pointers

2013-12-02 Thread Felix S. Klock II

rust-dev-

In general, we need to ensure, for an expression `source as 
target`, that any borrowed pointers in the type of source are not 
obscured [1] by the cast.


A collection of conditions sufficient to enforce this are listed in a 
comment in librustc/middle/kind.rs that I think is apropos here:


https://github.com/mozilla/rust/blob/master/src/librustc/middle/kind.rs#L488

However, there are probably other conditions that would also suffice 
that we might add to that set.


In particular, I do not see anything immediately wrong with your 
example; the type-expression `&'a V` should ensure that `V` does not 
contain any lifetimes that are shorter than 'a, and therefore it should 
be safe to cast `v: &'a V` to a `&'a T`.


I have filed this as issue #10766 [2].

Cheers,
-Felix

[1] https://github.com/mozilla/rust/issues/5723

[2] https://github.com/mozilla/rust/issues/10766

On 30/11/2013 23:22, Christian Ohler wrote:

Hi all,

I'm trying to learn rust and ran into an error message I don't
understand, and would appreciate some help.  This code:

trait T {}

fn f<'a, V: T>(v: &'a V) -> &'a T {
    v as &'a T
}


is rejected with this error message:

trait-cast.rs:4:4: 4:5 error: value may contain borrowed pointers; add
`'static` bound
trait-cast.rs:4     v as &'a T


I'm trying to upcast from V to T so that I can put v in a container of
element type T (in code not shown here).  The suggestion to add a
'static bound doesn't sound like what I'm looking for.

What is the concern about borrowed pointers here?  What would an
implementation of T and a caller of f look like to lead to a safety
problem?

I'm using a version of rust that is a few days old.

Thanks,
Christian.
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev



--
irc: pnkfelix on irc.mozilla.org
email: {fklock, pnkfelix}@mozilla.com

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


[rust-dev] Plz separate mail threads for separated compilation [was Re: Separated/Incremential compilation]

2013-11-29 Thread Felix S. Klock II
First off, the topic of rustc slowness and long bootstrap times has 
indeed been discussed many times.  If you have not yet tried skimming 
the archives, I recommend doing so, e.g. via


http://lmgtfy.com/?q=+site%3Amail.mozilla.org+rust+crate+compilation+speed

(I provide a (surely incomplete) list of previous-post links below.)



Now, a request: when discussing this topic, please try to avoid conflating:

   1. librustc+libstd bootstrap time,

from the somewhat separate issues of

   2. models for incremental compilation, and

   3. compilation times when running rustc on projects other than rustc 
itself.



In particular, incremental compilation alone is not going to solve (1).  
(At least, not as long as one is using the default make rule that 
rebuilds all of librustc+libstd atop the newly built rustc, and rustc 
itself draws upon a (large) libstd.  Under the latter two constraints, 
you *have* to redo the build for all of librustc+libstd: the compiler 
itself was changed.  Incremental compilation does not solve this.)


I am concerned that we will waste time debating tasks/designs related to 
(2) and then people will be disappointed when it does not provide the 
gains that they were expecting for issues like (1).




In case its not clear from the comments above: the team is well aware 
that rustc itself runs more slowly than it should; it is a common topic 
of discussion.


The team is also well aware that the time to bootstrap librustc+libstd 
is longer than many developers can cope with.


I am not sure I agree with the assertion that the approach of breaking a 
project into multiple crates is not a solution.  Yes, we may need better 
tools here (though I don't know how much rustpkg could help with this 
problem).




As promised, here are some relevant links to previous posts.  In *all* 
of the cases below, the *whole thread* is often worth review.


  * A great overview from pcwalton
Thread subject:  Why does Rust compile slowly?
https://mail.mozilla.org/pipermail/rust-dev/2012-October/002462.html

  * In early 2013 there was discussion of easing crate decomposition:
Thread subject:  Plans for improving compiler performance
https://mail.mozilla.org/pipermail/rust-dev/2013-January/002878.html

  * strcat and graydon each had good points in this discussion:
Thread subject: Please tell me about making rustc faster
https://mail.mozilla.org/pipermail/rust-dev/2013-May/004326.html
https://mail.mozilla.org/pipermail/rust-dev/2013-May/004328.html

  * The team internally discussed whether to break librustc into 
multiple subcrates here:

Thread subject:  code generation and rustc speed
https://mail.mozilla.org/pipermail/rust-dev/2013-June/004493.html


Cheers,
-Felix

On 29/11/2013 12:22, Guillaume HERVIER wrote:
+1 for this issue. I think that compilation time is really important 
if we want Rust to be used as production language.


For example, I think that if we can reduce significantly the Rust 
compiler's compilation time, it could allow more developers to 
contribute to the Rust language (as they won't have to wait 30min for 
each small modifications in the compiler).
Personally, it's the only thing which blocks me when I want to 
contribute to Rust, because I like to often compile code when I do 
small modifications to test each of these small modifications, partly 
because I don't know the language very well.


On 11/29/2013 12:01 PM, Léo Testard wrote:


Hello,

I think everyone here will agree to say that compilation times in 
Rust are problematic. Recently, there was an argument on IRC about 
reducing compilation times by reducing the use of GC and failures. 
Although I agree it's good to reduce Rustc's overhead, I think there 
are more important problems. The total duration of a build matters 
only because you have to recompile the whole crate on each 
modification. In C++, the duration of the complete build of a project 
matters less because when you compile incrementally, you only have to 
rebuild a couple of files - those you modified. I know the 1 crate = 
1 compilation unit is the model chosen by Rust, but this is a major 
issue for production. Nobody will ever use Rust in production if they 
have to recompile thousands of lines of code on each modification.


On some of my personal projects, I solved this problem by splitting 
the codebase into several crates, that I compile statically, and then 
link together using extern mod. This is not really a solution, 
because this implies that there is no cyclic dependency between each 
of the small crates, or I end up with issues trying to compile it, 
because using extern mod requires that the library corresponding to 
that mod exists before compiling the crate that depends on it.


But strictly speaking, a compiled crate is nothing more than a module 
hierarchy, and so is a single Rust source file, so we should be able 
to compile a single file to some sort of .o and then link all 
together to form a crate. 

Re: [rust-dev] RFC: Put Unicode identifiers behind a feature gate

2013-11-22 Thread Felix S. Klock II

On 22/11/2013 19:32, Brian Anderson wrote:

On 11/21/2013 08:53 PM, Patrick Walton wrote:
There are several issues in the backwards-compatible milestone 
related to Unicode identifiers:


#4928: XID_Start/XID_Continue might not be correct
 #2253: Do NFKC normalization in lexer

Given the extreme lack of use of Unicode identifiers and the fact 
that we have much more pressing issues for 1.0, I propose putting 
support for identifiers that don't match 
/^(?:[A-Za-z][A-Za-z0-9]*|_[A-Za-z0-9]+)$/ behind a feature gate.


Thoughts?


This is ok by me. Let's keep looking for rough corners like this to 
scale back or remove.

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev

I agree with the sentiments expressed by Patrick and Brian.

But Patrick, are you also suggesting that in the default case outside 
the feature gate, we would allow underscore solely as a leading 
character, and not as an embedded one?  Or were you just whipping up a 
quick regexp on the fly and left out the potential for a non-leading 
underscore?


% js
js> var r = /^(?:[A-Za-z][A-Za-z0-9]*|_[A-Za-z0-9]+)$/;
js> r.exec("a3b")
["a3b"]
js> r.exec("_a3b")
["_a3b"]
js> r.exec("a_b")
null
js> r.exec("_a_b")
null
js>


Cheers,
-Felix

--
irc: pnkfelix on irc.mozilla.org
email: {fklock, pnkfelix}@mozilla.com

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Struct members in trait definitions

2013-09-20 Thread Felix S. Klock II

Andres (cc'ing rust-dev)-

An initial question, since I'm not clear on one thing:

What is your goal in proposing this change?

That is, is your primary concern that you dislike writing either method 
invocations or method definitions?  Or are you concerned about the 
ability of the compiler to optimize the generated code if one uses 
methods instead of struct fields?




Justifications for why traits should be expressed in terms of associated 
methods, not associated fields (and thus why Rust does things this way):


1.) Method definitions are strictly more general than fields, in terms 
of allowing other implementations to dynamically compute the value, read 
it from a database or an input stream, etc.  I assume you already 
are aware of this, and just want to know why we don't provide special 
handling for Trait designers willing to rule out such generality up-front.


2.) Associated struct-fields would either disallow mixing traits whose 
names collide, or would require extending the `impl` item syntax with a 
struct-field renaming feature.


Elaboration of point 2:

Traits are designed to be mixed together; the language should discourage 
patterns that make mixing traits on a single type difficult.


The fields of a struct are written down with the `struct` definition.

The associated methods for an implementation are written down with the 
`impl` item.


If two traits require the same kind of associated state, right now you 
would give them identical method names, and the one struct could 
implement both traits (i.e. mixing them) with no ambiguity.


If traits were to define struct names, to get the same amount of 
generality we would need to provide some way to map the field name of 
the struct to the name expected by the trait within `impl` items.  But 
why do that?  A method definition is a perfectly reasonable way to 
express this.




Concrete illustration of point 2 above: How would you express the below 
in your proposal, assuming that *both* T1 and T2 are revised to require 
`state` to be a member field rather than a method?


```rust
trait T1 { fn state(&self) -> int; }

trait T2 { fn state(&self) -> int; }

struct one_int { state: int }

struct two_ints { state: int, state2: int }

impl T1 for one_int { fn state(&self) -> int { self.state } }
impl T2 for one_int { fn state(&self) -> int { self.state } }

impl T1 for two_ints { fn state(&self) -> int { self.state } }
impl T2 for two_ints { fn state(&self) -> int { self.state2 } }
```



Again, to be clear: I'm not saying it's impossible to express the example 
above via hypothetical Traits with fields.  But I think it would add 
unnecessary complexity (e.g. extensions to `impl` item syntax).  So 
that's why I wanted to know what the driving motivation here is.


If the motivation is concern over syntactic overhead: method invocations 
vs field dereference seems trivial.  The method definitions are more 
annoying boilerplate code, but I imagine that one could write an easy 
macro_rules! for the common case where the Trait method name is the same 
as the struct field name.
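
For example, a rough sketch of such a macro for that common case (the macro name and invocation form here are hypothetical, not an existing library):

```rust
macro_rules! field_accessor {
    // implement $trait_ for $type_ with a method that just returns the field $field
    ($trait_:ident for $type_:ty, $field:ident -> $ret:ty) => {
        impl $trait_ for $type_ {
            fn $field(&self) -> $ret { self.$field }
        }
    }
}

// e.g., instead of writing the impls above by hand:
// field_accessor!(T1 for one_int, state -> int);
// field_accessor!(T2 for one_int, state -> int);
```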


If the motivation is concern over the quality of the generated code: I 
assume that LLVM does a good job inlining these things. (If I'm wrong 
about that, I'd like to know.)


Cheers,
-Felix


On 20/09/2013 13:02, Andres Osinski wrote:
Hi all, I have a question which I'm sure must have already been 
discussed and dealt with, but I wanted to understand the design 
rationale:


A lot of trait-level functionality would be enhanced if a trait could 
specify members to be included in the struct which implements the 
trait. This can be solved in practice by wrapping member access in 
accessor methods, but I fail to see why that would be preferable.


The reason I'm asking is because I'm trying to design data structures 
which contain a couple of arrays, and I wanted to define the trait by 
not only a set of supported operations but by the existence of both 
arrays so that a default method could deal with any struct which 
implements the trait, instead of having to define for every struct an 
accessor method for each structure and then have to call the accessors 
in the trait to do anything.


Thanks

--
Andrés Osinski
http://www.andresosinski.com.ar/


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev



--
irc: pnkfelix on irc.mozilla.org
email: {fklock, pnkfelix}@mozilla.com

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] implicit vs explicit generic bounds

2013-09-17 Thread Felix S. Klock II

Gokcehan-

My understanding is that C++, due to its policy of SFINAE [1], usually 
provides error feedback after the template has been fully instantiated 
and the fully instantiated version fails to compile. This can lead to 
hard to debug compile-time failures.


Good compilers can help with dissecting the output you get in this case, 
but it is still a real pain to decode in my experience, especially when 
you have nests of template code written in terms of other template 
code.  Since the template parameters do not have bounds, it is not 
possible for a compiler to provide any error feedback given a template 
definition alone (with no client code), and its difficult for the 
compiler to provide good error feedback that is described solely in 
terms of the definition: you are doomed to thinking about the 
intermingling of the definition with the particular instantiation.


Requiring explicit bounds means that the compiler can provide good error 
messages at the point where the parameterized class is *defined* (even 
in the absence of client code!) rather than delaying to the point where 
the parameterized class is fully instantiated.  This provides me with 
more confidence that the parametric code I write is actually going to 
compose properly with other implementations of the trait-bounds in terms 
of which I have written my code.
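
To make that concrete, a small sketch (the trait and method names are invented for illustration); the body of a bounded generic function is checked against its declared bounds when the definition itself is compiled, before any caller exists:

    trait Print { fn print(&self); }

    fn print_all<T: Print>(things: &[T]) {
        for thing in things.iter() {
            thing.print();      // fine: guaranteed by the T: Print bound
            // thing.launch();  // rejected here, at the definition site
        }
    }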


That advantage alone is enough to justify this choice for *me*. There 
may be other justifications that I am overlooking.


(It may also be the case that I'm ignorant of what the best C++ tools 
today do, though I'm pretty sure that these drawbacks are consequences 
of core C++ design.)


Cheers,
-Felix

[1] SFINAE: 
http://en.wikipedia.org/wiki/Substitution_failure_is_not_an_error


On 17/09/2013 13:34, Gokcehan Kara wrote:

Hello,

I have met rust a few days ago and let me pay my respects first for 
making such a powerful language. I really hope to succeed making some 
contribution in the upcoming days.


I was reading the tutorial 
(http://static.rust-lang.org/doc/tutorial.html) specifically the 
section 16.3 and I was wondering if there's a rationale behind making 
generic bounds explicit. For example in C++ I can do:


// clang++ asd.cc -std=c++11 -Weverything -Wno-c++98-compat -g && ./a.out

#include <iostream>
#include <vector>
using namespace std;
class Klass {
public:
  void print() { cout << "printing the thing" << endl; }
};
template <typename T> void print_all(vector<T> things) {
  for (auto thing : things) {
    thing.print();
  }
}
int main() {
  vector<Klass> v1;
  v1.push_back(Klass());
  v1.push_back(Klass());
  v1.push_back(Klass());
  print_all(v1);  // no errors
  vector<int> v2;
  v2.push_back(1);
  v2.push_back(2);
  v2.push_back(3);
  print_all(v2); // /tmp/asd.cc:18:10: error: member reference base type 'int'
                 // is not a structure or union
                 // /tmp/asd.cc:37:3: note: in instantiation of function template
                 // specialization 'draw_all<int>' requested here
  return 0;
}

and it gives me the necessary error at compile time. To my limited 
knowledge, this is also statically dispatched so should not cause any 
overhead.


I haven't used Haskell much but I know a little bit of Scala. In Scala 
you need to be explicit because generics are compiled once to run with 
different types. As far as I understand, rust compiles different 
copies for each type (monomorphizing?) just like C++ so it might be 
possible to be implicit in rust as well.


Having said that, I'm not sure if being explicit is necessarily a bad 
thing. It sure looks good for documenting and I haven't thought of any 
cases where it falls short except maybe when a function of the same 
name is implemented in different traits.


Am I failing to see the obvious?

Thanks in advance,
Gokcehan


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev



--
irc: pnkfelix on irc.mozilla.org
email: {fklock, pnkfelix}@mozilla.com

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] implicit vs explicit generic bounds

2013-09-17 Thread Felix S. Klock II

Gokcehan (cc'ing rust-dev)-

Correct, rust generics are not Turing-complete.  (To my knowledge; I 
don't think we've written the termination proof for the type-checker 
yet.  :)


Cheers,
-Felix

On 17/09/2013 15:25, Gokcehan Kara wrote:
Felix, C++ template errors are indeed a joy(!) to deal with. I guess I 
also would rather be explicit than to have cryptic error messages that 
are pages long. You're also right about the absence of errors without 
the client code which makes sense in a context of safety.


It was the first time I heard about SFINAE. It reminds me of 
turing-completeness proofs of C++ templates. Does that mean rust 
generics are not turing complete?


Gokcehan


On Tue, Sep 17, 2013 at 3:28 PM, Felix S. Klock II 
pnkfe...@mozilla.com wrote:


Gokcehan-

My understanding is that C++, due to its policy of SFINAE [1],
usually provides error feedback after the template has been fully
instantiated and the fully instantiated version fails to compile. 
This can lead to hard to debug compile-time failures.


Good compilers can help with dissecting the output you get in this
case, but it is still a real pain to decode in my experience,
especially when you have nests of template code written in terms
of other template code. Since the template parameters do not have
bounds, it is not possible for a compiler to provide any error
feedback given a template definition alone (with no client code),
and it's difficult for the compiler to provide good error feedback
that is described solely in terms of the definition: you are
doomed to thinking about the intermingling of the definition with
the particular instantiation.

Requiring explicit bounds means that the compiler can provide good
error messages at the point where the parameterized class is
*defined* (even in the absence of client code!) rather than
delaying to the point where the parameterized class is fully
instantiated.  This provides me with more confidence that the
parametric code I write is actually going to compose properly with
other implementations of the trait-bounds in terms of which I have
written my code.

That advantage alone is enough to justify this choice for *me*. 
There may be other justifications that I am overlooking.


(It may also be the case that I'm ignorant of what the best C++
tools today do, though I'm pretty sure that these drawbacks are
consequences of core C++ design.)

Cheers,
-Felix

[1] SFINAE:
http://en.wikipedia.org/wiki/Substitution_failure_is_not_an_error


On 17/09/2013 13:34, Gokcehan Kara wrote:

Hello,

I have met rust a few days ago and let me pay my respects first
for making such a powerful language. I really hope to succeed
making some contribution in the upcoming days.

I was reading the tutorial
(http://static.rust-lang.org/doc/tutorial.html) specifically the
section 16.3 and I was wondering if there's a rationale behind
making generic bounds explicit. For example in C++ I can do:

// clang++ asd.cc -std=c++11 -Weverything -Wno-c++98-compat -g && ./a.out
#include <iostream>
#include <vector>
using namespace std;
class Klass {
public:
  void print() { cout << "printing the thing" << endl; }
};
template <typename T> void print_all(vector<T> things) {
  for (auto thing : things) {
    thing.print();
  }
}
int main() {
  vector<Klass> v1;
  v1.push_back(Klass());
  v1.push_back(Klass());
  v1.push_back(Klass());
  print_all(v1);  // no errors
  vector<int> v2;
  v2.push_back(1);
  v2.push_back(2);
  v2.push_back(3);
  print_all(v2); // /tmp/asd.cc:18:10: error: member reference base type 'int'
                 // is not a structure or union
                 // /tmp/asd.cc:37:3: note: in instantiation of function template
                 // specialization 'draw_all<int>' requested here
  return 0;
}

and it gives me the necessary error at compile time. To my
limited knowledge, this is also statically dispatched so should
not cause any overhead.

I haven't used Haskell much but I know a little bit of Scala. In
Scala you need to be explicit because generics are compiled once
to run with different types. As far as I understand, rust
compiles different copies for each type (monomorphizing?) just
like C++ so it might be possible to be implicit in rust as well.

Having said that, I'm not sure if being explicit is necessarily a
bad thing. It sure looks good for documenting and I haven't
thought of any cases where it falls short except maybe when a
function of the same name is implemented in different traits.

Am I failing to see the obvious?

Thanks

Re: [rust-dev] Compiler assertion and trait impl question

2013-09-12 Thread Felix S. Klock II

Carl, David (cc'ing rust-dev)-

Note that #6396 is about the 'self lifetime.

That may or may not be related to the rustc assertion failure that David 
mentions in one of his comments, but I think the bulk of his example 
does not use lifetimes at all.  (So I'm assuming that his main issues 
about `impl Inner for @Inner` are something else.)


-Felix

On 12/09/2013 17:49, Carl Eastlund wrote:
This looks like bug 6396; function lifetimes don't work when named 
self apparently.  Try naming it something else.


https://github.com/mozilla/rust/issues/6396

Carl Eastlund


On Thu, Sep 12, 2013 at 11:16 AM, David Brown dav...@davidb.org wrote:


Consider the following code
--
pub trait Inner {
   fn secret(&self);
}

// This doesn't seem to help.
impl Inner for @Inner {
   fn secret(&self) { self.secret(); }
}

pub trait Wrapper {
   fn blort(&self);
}

impl<T: Inner> Wrapper for T {
   fn blort(&self) { self.secret(); }
}

// This function causes an assertion failure in rustc:
// task <unnamed> failed at 'assertion failed: rp.is_none()',
// /home/davidb/rust/rust/src/librustc/middle/typeck/collect.rs:1108
// fn blort<'self, T: Inner>(item: &'self @T) {
   // item.secret();
// }

struct Client;

impl Inner for Client {
   fn secret(&self) { }
}

fn main() {
   let buf = @Client;
   buf.secret(); // Works

   // error: failed to find an implementation of trait Inner for
   // @Client
   buf.blort();
}
--

This fails to compile:
 wrap.rs:32:4: 41:5 error: type `Client` does not implement any
method in scope named `with_data`

I'm modeling this after looking at the code in libstd/io.rs, but I'm
not sure what I'm missing.

I seem to be able to make it work by using 'impl Inner for @Client',
but I'm not sure why that is required.

Also, the commented out function causes an assertion failure in the
compiler (it was a workaround attempt).

Thanks,
David
___
Rust-dev mailing list
 Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev




___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev



--
irc: pnkfelix on irc.mozilla.org
email: {fklock, pnkfelix}@mozilla.com

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Structural enums for datasort refinements

2013-08-28 Thread Felix S. Klock II

Bill (cc'ing rust-dev)-

[Executive summary: the provided proposal needs further work.]

I do not know which discussion of data sort refinements you had been 
reading; I would have liked context about where you were coming from.  I 
assume it was either Niko's blog post [1] or issue #1679 [2], but I 
would prefer not to make any assumption at all.


[1]: 
http://smallcultfollowing.com/babysteps/blog/2012/08/24/datasort-refinements/

[2]: https://github.com/mozilla/rust/issues/1679

Some immediate thoughts:

* This strikes me as an extreme change to the language, but perhaps my 
gut is overly conservative.


-- (At first I thought you were suggesting adding headers to every 
struct, but then I realized that the compiler should be able to insert 
the tags at the points where a struct is passed into an evaluation 
context expecting a structural-enum.  So it's not as extreme a change as 
I had initially worried; but I still worry.)


-- I think one of Niko's points in his blog post was that his proposal 
was not an extreme change.


* You have not addressed in your proposal how you would change the match 
syntax to deal with non-struct variants, such as ~str or int.


-- (I would probably just sidestep this by including the hypothetical 
stipulation that you mentioned, where only structs can be part of a 
structural enum; then I think the match syntax can remain largely 
unchanged, but see caveats with next bullet.)


* Finally, I think your note at the end about generic instantiation is a 
bigger problem than you make it out to be.


-- For example, can I actually expect to be able to write code that 
processes arguments of type A | B | S<Y>?


  struct S<Y> { y: Y }
  fn <A,B,X>(x: A | B | S<Y>) {
    match x {
      ... what could go here ? ...,  // early case clauses, maybe to handle A

      S{ y: the_y } => { ... handle the_y ... },
      ... and what goes here ? ...   // late case clauses, maybe to handle B
    }
  }

There is the issue you already pointed out, where a type variable might 
be instantiated to S<Y>.  But could they also be instantiated to S<Z>?  
(Do the tags on the variants need to encode the whole type, and not just 
which struct it is?)  And what about the poor user who didn't think 
about the fact that they might alias each other, and thought that all 
the clauses in the code for A | B | S<Y> were disjoint, but in fact they 
potentially overlap due to the potential aliasing, and thus the order of 
the cases in the above is now crucial, yes?


-- Another example: Can/should I now just throw seemingly unrelated 
structs into a match, in order to anticipate that the parameters will 
be instantiated with that struct?  Consider the following:


  struct S<Y> { y: Y }
  struct T<Z> { z: Z }

  fn <A,B,X>(x: A | B | S<Y>, f(ab: A | B) -> int) -> int {
    match x {
      T{ z: the_z } => { who knows, maybe A or B were instantiated with 
T<Z>, handle it },

      S{ y: the_y } => { ... handle the_y ... },
      other => return f(other)
  }

-- Perhaps I am misunderstanding your proposal, and your hypothetical 
type system would reject the T clause in the latter example, and the 
*only* option for handling parametric variants in structural-enums is 
via a catch all clause (that can pass the problem off to another 
function, as illustrated by the final clause in the latter example).



I do not want to spend too much time trying to infer the fine details of 
what you propose; this e-mail may be prohibitively long as it is.  I 
just wanted to put down my initial thoughts.


It is possible that a more conservative approach would be easier for me 
to swallow.  (And it is also possible that other developers will be 
enthused about tackling these issues, rather than worried.)


Cheers,
-Felix

On 28/08/2013 00:58, Bill Myers wrote:
I was reading a proposal about adding datasort refinements to make 
enum variants first-class types, and it seems to me there is a simpler 
and more effective way of solving the problem.


The idea is that if A, B and C are types, then "A | B | C" is a 
structural enum type that can be either A, B or C.


In addition, "A" can be implicitly converted to "A | B", "A | B" can be 
implicitly converted to "A | B | C", and also "(A | B) | C" and "A | 
(B | C)" are equivalent to "A | B | C", and finally "C | B | A" is 
equivalent to "A | B | C" (to support the latter, the implementation 
needs to sort variants in some arbitrary total order before assigning 
tag numbers).


Furthermore, a way to bind variables to an "or" pattern is introduced 
to allow converting "A | B | C" to "A | B" in the case that it holds 
an A or a B.


This way, one can rewrite Option as a type alias like this:
struct Some<T>(T);
struct None;

type Option<T> = None | Some<T>;

Which is like the current Option, but also makes None and Some<T> 
first-class types.


The current enum syntax can remain as syntax sugar for the above code.
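
For comparison, the nominal enum form that the alias above would be sugar 
for is the ordinary declaration (a sketch mirroring the std definition):

```
// Some and None here are variants of Option<T>, not first-class types of
// their own -- which is the gap the structural-enum proposal targets.
enum Option<T> {
    None,
    Some(T),
}
```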

The only issue I see is what to do for code such as `let mut x = 
Some(3); x = None;`: with this proposal, Some and None are separate 

Re: [rust-dev] Rust on bare metal ARM - sample project

2013-07-16 Thread Felix S. Klock II

On 16/07/2013 19:52, Svetoslav Neykov wrote:


On 15.07.13 23:37, Patrick Walton wrote:

On 7/14/13 1:04 PM, Svetoslav Neykov wrote:

 · `unsafe` is not a normal block and doesn't return a value, can't be
 used as an expression. This made it impossible to wrap the unsafe casts
 from fixed memory locations to borrowed pointers in macros. Instead I
 had to use inline functions and assign the resulting value to a local
 function variable.

This should not be the case. There may be a bug in the way macros
interact with unsafe checking. Do you have a test case by any chance?

Yes, my bad. I double checked, there is no problem using unsafe as 
an expression or wrapping it inside of a macro.


The real problem, it turns out, is that the macro can't be used as an 
expression unless wrapped inside parentheses.


GPIOD!().MODER //doesn't compile (error: unexpected token: `.`)

(GPIOD!()).MODER //OK

Svetoslav.



___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev

Yes, I believe this is filed under the following ticket:

   Syntax extensions (and macros) that start a line want to be a whole 
statement

  https://github.com/mozilla/rust/issues/5941

(I noted on that bug that we could probably provide a better error 
message here.)
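
For readers who want to reproduce the shape of the workaround, here is a 
small self-contained sketch; the register-like macro is a made-up stand-in 
for the GPIOD!() macro in the report, written in current macro_rules! syntax:

```
struct GpioPort {
    moder: u32,
}

// Hypothetical stand-in for GPIOD!(): it just yields a struct value.
macro_rules! gpiod {
    () => {
        GpioPort { moder: 0 }
    };
}

fn main() {
    // The workaround from the thread: parenthesize the macro invocation
    // before selecting a field.
    let mode = (gpiod!()).moder;
    println!("{}", mode);

    // The unparenthesized form, gpiod!().moder, is what the 2013 parser
    // rejected with "error: unexpected token: `.`" (see issue #5941).
}
```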


Cheers,
-Felix

--
irc: pnkfelix on irc.mozilla.org
email: {fklock, pnkfelix}@mozilla.com

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Incremental and generational GC

2013-07-11 Thread Felix S. Klock II

On 10/07/2013 22:04, Graydon Hoare wrote:

On 13-07-10 11:32 AM, Patrick Walton wrote:


I've been thinking about what will be needed to support incremental and
generational GC in Rust. To be clear, I don't think we should block on
it now, but we should not make it unduly difficult for us to implement
in the future.

I figured we'd be doing the incremental-generational variant of
mostly-copying (once we get mostly-copying working _at all_ -- we're a
long ways from there). Trap on @-writes to heap, mask to page bits,
compare generation numbers. With MMU or without, depending on environment.

I assume you've read this:

http://www.hpl.hp.com/techreports/Compaq-DEC/WRL-91-8.pdf


This is also relevant, I think (and more recent):


Antony L. Hosking. 2006. Portable, mostly-concurrent, mostly-copying 
garbage collection for multi-processors. In /Proceedings of the 5th 
international symposium on Memory management/ (ISMM '06). ACM, New York, 
NY, USA, 40-51. DOI=10.1145/1133956.1133963 
http://doi.acm.org/10.1145/1133956.1133963


http://dl.acm.org/citation.cfm?id=1133963

Cheers,
-Felix (who has not yet caught up with the rest of the thread)

--
irc: pnkfelix on irc.mozilla.org
email: {fklock, pnkfelix}@mozilla.com

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Incremental and generational GC

2013-07-11 Thread Felix S. Klock II

On 11/07/2013 13:30, Felix S. Klock II wrote:

On 10/07/2013 22:04, Graydon Hoare wrote:
I assume you've read this: 
http://www.hpl.hp.com/techreports/Compaq-DEC/WRL-91-8.pdf 

This is also relevant, I think (and more recent):


Antony L. Hosking. 2006. Portable, mostly-concurrent, mostly-copying 
garbage collection for multi-processors. In /Proceedings of the 5th 
international symposium on Memory management/ (ISMM '06). ACM, New 
York, NY, USA, 40-51. DOI=10.1145/1133956.1133963 
http://doi.acm.org/10.1145/1133956.1133963


http://dl.acm.org/citation.cfm?id=1133963

Caveat: The latter paper that I cited is for a Modula-3 compiler/runtime 
(i.e., compiler support for the GC is definitely provided; in other 
words: cooperative software environment).


The former paper that Graydon cited is for the explicitly uncooperative 
environment of C++; no hardware nor compiler support.


I presume Rust falls somewhere between these two extremes, so both may 
be relevant.


Cheers,
-Felix, (still playing catchup)

--
irc: pnkfelix on irc.mozilla.org
email: {fklock,pnkfelix}@mozilla.com

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] howto: c++ reference in rust

2013-06-25 Thread Felix S. Klock II

Rémi (cc'ing Philipp and rust-dev list)-

We track bugs, work-items, and (some) feature requests here:

  https://github.com/mozilla/rust/issues

Cheers,
-Felix

On 25/06/2013 12:18, Rémi Fontan wrote:
good to know. should I be reporting the bug to some sort of 
bug-report-system?


Rémi


On Mon, Jun 24, 2013 at 11:48 PM, Philipp Brüschweiler 
ble...@gmail.com wrote:


Hi Rémi,

Yes, this looks like a compiler bug to me.

Cheers,
Philipp


On Mon, Jun 24, 2013 at 1:07 PM, Rémi Fontan remifon...@yahoo.fr wrote:

I have another example that puzzles me.

struct Vec { x:float, y:float, z:float }
impl Vec {
    pub fn getRef<'a>(&'a mut self, i:uint) -> &'a mut float {
        // if(i==0) { &mut self.x }
        // else if(i==1) { &mut self.x }
        // else { &mut self.x }
        match(i) {
            0 => &mut self.x,
            1 => &mut self.y,
            _ => &mut self.z
        }
    }
}

when compiling with match I get the following errors:

rustc test.rs -o test-test --test
test.rs:122:17: 122:28 error: cannot infer an appropriate
lifetime due to conflicting requirements
test.rs:122             _ => &mut self.z
                             ^~~~~~~~~~~
test.rs:121:17: 121:28 note: first, the lifetime must be
contained by the expression at 121:17...
test.rs:121             1 => &mut self.y,
                             ^~~~~~~~~~~
test.rs:121:17: 121:28 note: ...due to the following expression
test.rs:121             1 => &mut self.y,
                             ^~~~~~~~~~~
test.rs:120:17: 120:28 note: but, the lifetime must also be
contained by the expression at 120:17...
test.rs:120             0 => &mut self.x,
                             ^~~~~~~~~~~
test.rs:120:17: 120:28 note: ...due to the following expression
test.rs:120             0 => &mut self.x,
                             ^~~~~~~~~~~
error: aborting due to previous error
make: *** [test-test] Error 101



but it compiles correctly with the if version. I don't
understand why it's not behaving the same way. Actually I
don't understand why the compiler is not capable of figuring
out the lifetime on its own for this particular
example. Could the lifetime of the return type of getRef be
inferred from the intersection of the lifetimes of {self.x,
self.y, self.z}?

cheers,

Rémi
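
The shape Rémi is after does not need a separate lifetime per field: every 
arm borrows from self, so a single lifetime tied to the &mut self borrow 
covers all three arms.  A sketch with illustrative names, in later syntax:

```
struct Vec3 {
    x: f64,
    y: f64,
    z: f64,
}

impl Vec3 {
    // Each arm returns a borrow of a field of `self`, so the returned
    // reference simply shares the lifetime of the `&mut self` borrow.
    fn get_mut(&mut self, i: usize) -> &mut f64 {
        match i {
            0 => &mut self.x,
            1 => &mut self.y,
            _ => &mut self.z,
        }
    }
}

fn main() {
    let mut v = Vec3 { x: 0.0, y: 0.0, z: 0.0 };
    *v.get_mut(1) = 5.0;
    assert_eq!(v.y, 5.0);
}
```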




On Mon, Jun 24, 2013 at 2:32 AM, Philipp Brüschweiler
ble...@gmail.com wrote:

Hi Rémi,

Yes, the documentation of Rust is not very good at the
moment. The concept and syntax of lifetimes is explained
in this and the following chapter:


http://static.rust-lang.org/doc/tutorial-borrowed-ptr.html#returning-borrowed-pointers

Single quotes are only used to declare literal
characters, e.g. 'c', and lifetime variables.

Cheers,
Philipp


On Sun, Jun 23, 2013 at 12:03 PM, Rémi Fontan
remifon...@yahoo.fr wrote:

thanks, it works.

however it is not yet very clear to me how lifetimes
work in Rust. Is there a bit of doc that explains
the concept of lifetimes and the syntax?

is using a quote before the lifetime variable
necessary? I realised that the first time I read the
Rust tutorial I saw a lot of variables with a quote in
front of their name; I simply ignored that detail and
went ahead. Were those all lifetime variables?

cheers,

Rémi


On Sat, Jun 22, 2013 at 8:47 PM, Philipp Brüschweiler
ble...@gmail.com wrote:

Hi Rémi,

The problem in your code was that you have to
return a &mut reference. Here's a version of your
code that works:

```

struct Mat {
    data: [float, ..16]
}

impl Mat {
    pub fn new() -> Mat {
        Mat { data: [0.0, ..16] }
    }
    pub fn Get<'a>(&'a mut self, r:int, c:int) -> &'a mut float {
        &mut self.data[r+c*4]
    }
}

fn main() {
    let mut a = Mat::new();
    *a.Get(0, 0) = 5.0;
    println(fmt!("%?", 

Re: [rust-dev] Using new I/O error handling

2013-06-03 Thread Felix S. Klock II

On 01/06/2013 06:10, Brian Anderson wrote:

On 05/31/2013 01:44 AM, Sanghyeon Seo wrote:
Is it actually possible to use new I/O error handling at the moment? 
It seems to me that
it is not possible to get at std::rt::io::io_error at all, because 
conditions are private and

do not work cross-crate.

https://github.com/mozilla/rust/issues/5446
https://github.com/mozilla/rust/issues/6009
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


You are right that it is not possible because of those errors. I 
believe that the fix for #6009 is 1cbf0a8 and has been snapshotted, so 
io_error and read_error can have their /*pub*/ uncommented.

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev
I had thought that conditions were currently public, due to commit 
fe13b865.  (Commit 1cbf0a8 is indeed related to all this, but it did not 
revert conditions to being private-by-default; that is planned for 
later, as described on pull request #6271)


But certainly the ICE described by #5446 sounds serious.  I'll poke at 
it (and probably take care of the rest of #6009 while I am at it).


Cheers,
-Felix


-- irc: pnkfelix on irc.mozilla.org email: {fklock, pnkfelix}@mozilla.org
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] PSA: use #[inline(always)] with care

2013-05-30 Thread Felix S. Klock II

On 30/05/2013 05:15, James Miller wrote:

This meant that a fairly simple, standard pass was being called on over 100x 
more code than it
needed to be. If you want to know why a full build takes so long, that is why.
Since I've been curious about the build times this week, I thought I'd 
experiment with this assertion in a brute force way: compare the times 
between a baseline build and a build with every occurrence of 
#[inline(always)] replaced with #[inline].


The above is not ideal, especially if uint::range and vec::each are not 
being inlined without an inline(always) directive, as pcwalton 
mentioned.  But it was something easy to try.
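
For anyone unfamiliar with the two attributes, the swap amounts to the 
difference below; the functions are toy stand-ins, only the attributes are 
real:

```
// #[inline] is a hint: the body is made available for inlining, but the
// optimizer decides whether each call site is worth it.
#[inline]
pub fn scale(x: u64) -> u64 {
    x * 3
}

// #[inline(always)] requests inlining at every call site, so every caller
// gets (and re-optimizes) a fresh copy of the body -- which is how a
// small, hot helper ends up multiplying the code the passes must process.
#[inline(always)]
pub fn scale_always(x: u64) -> u64 {
    x * 3
}
```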


This is on a system that's making use of a ccache for the C/C++ portions 
of the runtime (and I checked it was getting used as the builds ran, via 
ccache --show-stats), so hopefully it's just rustc invocations that are 
taking the bulk of the time below; but I have not carefully verified 
that claim.


% cd baseline-reference-repository

% time make > ../../rust-baseline-build-log 2>&1
real    23m38.869s
user    22m50.757s
sys     0m55.156s

% cd repo-with-inline-always-hacked-out

% time make > ../../rust-reinline-build-log 2>&1
real    20m52.833s
user    20m1.642s
sys     0m56.104s

It's just two data points, but I thought I would share.  Two and a half 
minutes may not seem like a lot, but it's quite possible that better 
tuning would reap more gains here.


Cheers,
-Felix

--
irc: pnkfelix on irc.mozilla.org
email: {fklock, pnkfelix}@mozilla.org

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] RFC: Pattern matching binding operator

2013-05-03 Thread Felix S. Klock II

Patrick (cc'ing rust-dev)-

Between the two options Patrick presented, my vote is for bifurcating 
the grammar into irrefutable and refutable variants. I like having one 
operator to denote binding (even if it also sometimes means mutation).


However, my (potentially-wrong) intuition is that the problem Patrick 
describes is stemming from a not-particularly useful special case of 
pattern binding.


In particular, I wonder whether the problem could be resolved by not 
allowing a binding `=` at the topmost level of the pattern.


I mentioned this on IRC last night, but it was late and I'm not 
convinced I explained myself properly.


More concretely, what I'm suggesting is the following:

What we now write as:

fn main() {
    enum Foo { A((int, int)), B((&'static str, &'static str)) };
    fn visit (x:Foo) {
        match x {
            i @ A(j@(k,l)) => io::println(fmt!("an A %? %? %? %?", i, j, k, l)),
            m @ B(n@(o,p)) => io::println(fmt!("a  B %? %? %? %?", m, n, o, p))
        }
    }

    visit(A((1,2)));
    visit(B(("three", "four")));
}

would become illegal.  In particular, the bindings for `i` and `m` would 
be disallowed.  But the other bindings would continue to be allowed, and 
we would switch to the `=` operator for binding, yielding:


fn main() {
    enum Foo { A((int, int)), B((&'static str, &'static str)) };
    fn visit (x:Foo) {
        match x {
            A(j=(k,l)) => io::println(fmt!("an A %? %? %? %?", x, j, k, l)),
            B(n=(o,p)) => io::println(fmt!("a  B %? %? %? %?", x, n, o, p))
        }
    }

    visit(A((1,2)));
    visit(B(("three", "four")));
}


patrick: Does this get rid of the problem, since the `=`'s could only 
occur beneath pattern structure?  Or does it leave the grammar just as 
ugly as bifurcating it with irrefutable and refutable variants?  
(Although the grammar is a little more complex, it at least might be 
*consistent* across both let and match.)


Cheers,
-Felix

On 03/05/2013 03:12, Patrick Walton wrote:

Hi everyone,

There's consensus that `@` (imported from Haskell) is a bad binding 
operator for patterns, because it leads to the confusing-looking `@@` 
in, for example:


struct Foo {
    field: int
}

...

match foo {
    foo@@Foo { field: x } => ...
}

However, there is no consensus as to what to change it to. 
Suggestions are `=` and `as`.


The problem with `=` is that, if implemented naively, it makes our 
grammar ambiguous:


let x = y = 3; // is x the result of evaluating `y = 3` (i.e. unit)
   // or are x and y bound to 3?
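
To spell out the current reading: the initializer parses as an assignment 
expression, which evaluates to unit, so the line already has a single 
meaning under today's grammar -- a runnable sketch:

```
fn main() {
    let mut y = 0;

    // Parsed as `let x = (y = 3);`: the initializer is an assignment
    // expression, and assignment evaluates to `()`.
    let x = y = 3;

    assert_eq!(y, 3);
    assert_eq!(x, ());

    // A naive `=`-as-binding reading of the same line would instead bind
    // both x and y to 3, which is the ambiguity described above.
}
```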

The easiest way to fix this problem is to forbid `=` in irrefutable 
patterns, such as those introduced by `let`. However, this bifurcates 
the pattern grammar into the irrefutable-pattern grammar and the 
refutable-pattern grammar, with some conceptually-ugly overlap.


The alternative is `as`, like OCaml. However, this conflicts with `as` 
in the expression grammar. A subset of the expression grammar is part 
of the pattern grammar in order to permit matching against constants. 
Removing `as` expressions from the subset of expression productions 
permitted in patterns would mean that this would no longer do what you 
expect:


match 22.0f32 / 7.0f32 {
    math::PI as f32 => println("Good grief!"),
    _ => {}
}

So both `=` and `as` have drawbacks.

I don't really have any preference at all; I just need to know what to 
implement. Opinions?


Patrick
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev



--
irc: pnkfelix on irc.mozilla.org
email: {fklock, pnkfelix}@mozilla.org

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


[rust-dev] continue and related keyword bikeshed [was Re: LL(1) problems]

2013-04-26 Thread Felix S. Klock II

On 26/04/2013 17:22, Alex Bradbury wrote:

On 26 April 2013 16:15, Erik S sw...@earthling.net wrote:

For how rarely used continue is, four extra characters don't much matter.
The time savings from not having to check the documentation to confirm what
the keyword is will outweigh four characters of typing.

Indeed, Lua for example doesn't even feature continue. It is an issue
of much consternation though.

Alex
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


(Felix-Hulk would appreciate it if further discussion on this fork of 
the thread had a subject line that reflected its content.)


--
irc: pnkfelix on irc.mozilla.org
email: {fklock, pnkfelix}@mozilla.org

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] LL(1) problems

2013-04-25 Thread Felix S. Klock II

On 25/04/2013 18:12, Graydon Hoare wrote:
I've been relatively insistent on LL(1) since it is a nice 
intersection-of-inputs, practically guaranteed to parse under any 
framework we retarget it to.
I'm a fan of this choice too, if only because the simplest efficient 
parser-generators and/or parser-composition methodologies I know of take 
an LL(1) grammar as input.


However, Paul's earlier plea on this thread ("Please don't do this 
[grammar factoring] to the official parser!") raised the following 
question in my mind:


Are we allowing for the possibility of choosing the semi-middle ground 
of: "There *exists* an LL(1) grammar for Rust that is derivable from the 
non-LL(1)-but-official grammar for Rust"?  Or do we want to go all the 
way to ensuring that the grammar we ourselves use, e.g. for defining the 
syntactic classes of the macro system, is strictly LL(1) (or perhaps 
LL(k) for some small but known fixed k)?


(I'd have to go back to my compiler textbooks at home to see how much 
this would actually buy us.)


If we've already discussed the latter, mea culpa.  :)

Cheers,
-Felix

On 25/04/2013 18:12, Graydon Hoare wrote:

On 13-04-25 08:37 AM, John Clements wrote:

FWIW, I'm (mildly) concerned too.  In particular, I'm disappointed to 
discover that in its present form (and using my present caveman-like 
invocation style), ANTLR parses source files so slowly that it's 
impossible to use directly as a validation tool; I would very much 
like to directly validate the grammar used for documentation purposes 
against the existing sources.  I haven't yet asked for help from the 
ANTLR folks, because I don't yet feel like I've finished due 
diligence on RTFMing ANTLR, which I would prefer to do before dumping 
the problem in their lap.


I'm sorry for the confusion; I don't think Patrick's work here 
represents a divergence from yours so much as a continuation of it, in 
a direction that answers a question I've asked repeatedly while you 
were working on your grammar: "can we actually find an LL(1) 
factoring?"  Also, can any other non-antlr tools consume this grammar?


Since you chose antlr4 rather than antlr3, the LL(1) question in 
particular was obscured under the "antlr4 will parse anything!" sales 
pitch[1]. Which is fine as far as getting the grammar roughed out and 
running -- I'm not criticizing that decision, it was yours to make, as 
was the choice of antlr in the first place. But it _did_ dodge a 
question I've been quite persistent about asking; one which I wanted 
to have an answer for before considering the grammar done.


Longer term, I would like whatever grammar we wind up denoting as 
canonical / documented / spec'ed to be as (re)target-able as possible. 
I've been relatively insistent on LL(1) since it is a nice 
intersection-of-inputs, practically guaranteed to parse under any 
framework we retarget it to. IOW I do _not_ want to force anyone 
working with rust grammars in the future to use antlr (3, 4, or 
anything else). That's too tool-specific[2]. A grammar that is 
trivally translatable between antlr4, antlr3, yapp2, llgen, llnextgen, 
coco, javacc, parsec, spirit, some rust parser-generator, and so 
forth is my eventual goal here.


-Graydon

[1]: "We parse any grammar" is unfortunately common in 
parser-generator sales pitches these days, with a profusion of GLR and 
LL(*) things. As a downstream consumer of parser-generator technology, 
let me point out that while I appreciate broad guarantees by 
tool-makers, I very much _dislike_ a tool that offers broad guarantees 
at the expense of being able to make promises about efficiency, 
grammar class and algorithmic complexity. IOW I actually prefer tools 
that can tell me what I need to leave out (or change) in my grammar in 
order to arrive at an efficient parser in a given complexity class. 
"Don't worry about it" is the wrong answer here. I want to worry about 
it.


[2]: we also seem to be most-invested in python for in-tree 
maintainer-mode tools associated with rust development; it seems 
like a lot to ask to install a JDK in order to verify the grammar. If 
the grammar-check _can_ be done in a python module, I'm happy to shift 
over to using it. Unless antlr-ness is an important part of the 
grammar in some way I'm not perceiving; do you have a strong 
preference for keeping the java + antlr dependency?

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev



--
irc: pnkfelix on irc.mozilla.org
email: {fklock, pnkfelix}@mozilla.org

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] legality/utility of modules inside functions

2013-04-06 Thread Felix S. Klock II

John (cc'ing rust-dev)-

You can still use the module in other contexts.

pub fn baz() {
    mod z {
        pub fn f () -> int { 19 }
    }
    fn g() -> int { use z; z::f() }
    g();
}

Cheers,
-Felix

On 06/04/2013 03:50, John Clements wrote:

Our grammar currently parses modules inside functions just fine. However, it 
doesn't look like it's possible to call any functions defined in one. For 
instance:

fn main () {
    use z;
    mod z {
        fn f () -> int { 19 }
    }
    z::f();
}

Note that the `use` has to be at the beginning of the block--that's enforced. 
This gives the error:

/tmp/foo.rs:2:8: 2:10 error: failed to resolve import: z
/tmp/foo.rs:2 use z;
   ^~
error: failed to resolve imports
error: aborting due to 2 previous errors

It looks to me like modules are allowed here just because it's consistent with allowing 
other item types--struct and enum decls, for instance. Am I missing something obvious? In 
particular, I'm bad with the resolver; perhaps there's a way to write the `use` 
to make this work. If not, it would seem to me like removing them would be the sensible 
choice.

Thoughts?

John Clements

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev



--
irc: pnkfelix on irc.mozilla.org
email: {fklock, pnkfelix}@mozilla.org

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] JFYI: ANTLR grammar for Rust tokens

2013-03-26 Thread Felix S. Klock II

Cool, John beat me to the punch and posted a grammar to a repository!

(And better still, John's grammar looks way more complete than my own.)

I went ahead and shoved my version of Rust.g4 into a branch on a fork 
of your repo, so that people can skim and compare the approaches.  
(E.g. the use of named terminals such as LPAREN versus hard-coding them 
as literals in the productions.)


 
https://github.com/pnkfelix/rust-antlr/blob/pnkfelix-draft-from-rust-manual/Rust.g4


Note the above link is not expected to actually work on real rust 
expressions of any size.  I was working by transcribing from the 
manual; I infer that John took a much different tack, perhaps using the 
parser source as a basis...?  In any case, I am posting this mostly to 
stimulate thought as to what tack to take that will best integrate both 
with our code and also with our documentation.


Cheers,
-Felix


On Mon Mar 25 21:23:32 2013, John Clements wrote:

Following an offhand comment of Patrick's on Friday, I decided to take a look 
and see just how easy it is to formulate an ANTLR grammar. The answer is: very, 
very easy. I wrote up a simple grammar for Rust tokens; it's up at

https://github.com/jbclements/rust-antlr/

The tokens work great, and so does a simple token-tree parser; it can now parse 
large rust files into token trees. I started adding an AST-level parser, but it's 
just got bits  pieces of the grammar, for now.

John

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev




--
irc: pnkfelix on irc.mozilla.org
email: {fklock, pnkfelix}@mozilla.org

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Compilation Error in Rust in Nesting of Modules

2013-02-04 Thread Felix S Klock II

Ranvijay (cc'ing rust-dev)-

Since you put your source code in the file c/c.rs, that code belongs to 
a module named `c` *inside* of the `c` module you have already put into 
your orig.rs crate.


The git diff in the transcript below shows how I fixed your code.


Cheers,
-Felix

% rustc --version
rustc 0.5 (09bb07b 2012-12-24 18:29:02 -0800)
host: x86_64-apple-darwin

% git diff
diff --git a/orig.rs b/orig.rs
index 8a81117..decb7fe 100644
--- a/orig.rs
+++ b/orig.rs
@@ -7,6 +7,9 @@ pub mod a ;
 pub mod b ;
 pub mod c {
 pub mod inner_mod ;
+
+// Fixed code:
+pub mod c;
 }


diff --git a/test_orig.rs b/test_orig.rs
index 9b0bd63..babf26f 100644
--- a/test_orig.rs
+++ b/test_orig.rs
@@ -5,7 +5,12 @@ orig::c::inner_mod::inner_mod();
 orig::dummy();
 orig::a::a_func();
 orig::b::b_func();
-orig::c::c_func();
+
+// // Original code:
+// orig::c::c_func();
+//
+// // Fixed Code:
+orig::c::c::c_func();
 //parserutils::utils::inner_mod::inner_mod();
 }


% rustc orig.rs && rustc -L. test_orig.rs
warning: no debug symbols in executable (-arch x86_64)
warning: no debug symbols in executable (-arch x86_64)

% ./test_orig
 i am in inner_mod
 i am in dummy
 i am in a
 i am in b
 i am in c
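
Putting the whole layout in one place, the fixed crate root reads as follows 
(a sketch; other items such as dummy are omitted, and the comments give the 
file each declaration is loaded from):

```
// orig.rs -- crate root, after the fix shown in the diff above.
pub mod a;              // loaded from a.rs
pub mod b;              // loaded from b.rs
pub mod c {             // an inline module named `c`
    pub mod inner_mod;  // loaded from c/inner_mod.rs
    pub mod c;          // loaded from c/c.rs, so its c_func is reached
                        // as orig::c::c::c_func()
}
```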


On Fri Feb 1 12:29:44 2013, Ranvijay Singh wrote:


Hi,
I am getting a compilation error while implementing the nesting of
modules as per the Rust Language tutorial.
Below is the description of my code structure.

Inside the example directory, I created a file orig.rs and
declared 3 modules a, b and c inside it. Inside module c, I declared
another module inner_mod. I created 3 files a.rs, b.rs
and c.rs and one directory c, all inside
the example directory. Also, I created another file inner_mod.rs
and kept it in the c directory. In c.rs,
I defined a function c_func as below.

pub fn c_func() {
    io::println("I am in c");
}
I called the function c_func in the file test_orig.rs,
which is in the example directory.
As per the tutorial, I can create both the c.rs file and the c
directory, but nothing is mentioned about the location of c.rs
with respect to the c directory, i.e. whether c.rs
should be kept inside the c directory or outside of it at the
same level as the c directory inside the example directory. I tried both and
got the below error in each case when compiled:

test_orig.rs:8:0: 8:15 error: unresolved name: orig::c::c_func
test_orig.rs:8 orig::c::c_func();

Tar file of the code is attached. Please suggest a solution to this.


thanks
Ranvijay



___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev




--
irc: pnkfelix on irc.mozilla.org
email: {fklock, pnkfelix}@mozilla.org

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Rust 0.5 for Windows

2013-02-01 Thread Felix S Klock II

On Fri Feb  1 19:13:03 2013, Graydon Hoare wrote:

(they act as sequence-points for make, since it cannot easily guess
the actual hash-based output filenames of the libraries)


I was idly wondering about those hashy filenames: Is the long term plan 
to continue encoding hashes into the filenames?  Or is there a non-zero 
chance that the infrastructure of Rust will change so that one can easily 
predict the output file name for a library, to accommodate tools like make?


(Sorry if the answer to the above question is already documented 
somewhere obvious; I had not noticed it addressed when I was going 
through the documentation.)


I suppose I might just as well implement the empty-file work-around 
myself; I had not really thought terribly hard about the problem and 
was just doing "make clean; make" whenever I needed to in my little 
Rust experiments so far.


Cheers,
-Felix

--
irc: pnkfelix on irc.mozilla.org
email: {fklock, pnkfelix}@mozilla.org

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev