[rust-dev] Pointer syntax

2014-02-01 Thread Nathan Myers

On 01/31/2014 09:43 PM, Eric Summers wrote:

I think I like the mut syntax in let expressions, but I still like shoving the
pointer next to the type like I would do in C/C++ for something like fn drop(mut
self) {}.

I guess it is somewhat rare to use mutable pointers as function parameters, so 
maybe not a big deal.
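
As a minimal sketch of the two `mut` positions being contrasted here, in
present-day Rust (the `Counter` type and method names are invented purely for
illustration):

```
struct Counter { n: u32 }

impl Counter {
    // `mut` on `self` in the parameter list: the method takes `self` by
    // value and is allowed to mutate its own local copy.
    fn bump_and_finish(mut self) -> u32 {
        self.n += 1;
        self.n
    }
}

fn main() {
    // `mut` in a let binding: the local variable is mutable.
    let mut c = Counter { n: 0 };
    c.n = 41;
    println!("{}", c.bump_and_finish()); // prints 42
}
```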


While we're talking about syntax, hasn't anybody noticed
that prefix pointer-designator and dereference operators
are crazy, especially for otherwise left-to-right declaration
order?  C++ had no choice, but Rust can make the sensible
choice: the only one that Pascal got right.  (That used the
caret, also an eerily apt choice.)
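
For reference, a small sketch of the prefix dereference being criticised, with
Pascal's postfix form shown only in a comment (illustrative only):

```
fn main() {
    let x = 5;
    let p = &x;
    // Rust, like C and C++, dereferences with a prefix operator:
    let y = *p;
    // Pascal instead read left-to-right with a postfix caret: p^
    println!("{}", y);
}
```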

Nathan Myers



Re: [rust-dev] Deprecating rustpkg

2014-02-01 Thread Gaetan
It is not only about API changes. Sometimes, from one minor version to another, a
feature gets silently broken (a silent regression). It might not
impact libA, which depends on it, but it may break libB, which also depends on
it, but against a previous version.
As a result, libA forces installation of this dependency without any problem
(all its features work), but libB gets broken nonetheless.

And that's the real mess to deal with.

That happened this week at my job...

I largely prefer each library to be self-contained, i.e., if libA depends on
libZ version X.X.X, and libB depends on libZZ version Y.Y.Y, just let each
one be installed and used at its own version. That is perfectly
acceptable (and even recommended) for software that is not integrated into the
system (for example, when a company wants to build a piece of software with
minimal system dependencies that would run on any version of Ubuntu, with the
only dependency being libc).
On the other hand, when the software gets integrated into a distribution
(Ubuntu, Red Hat, Homebrew), let the distribution's package manager do its job.



-
Gaetan



2014-02-01 Tony Arcieri basc...@gmail.com:

 On Fri, Jan 31, 2014 at 4:03 PM, Lee Braiden leebr...@gmail.com wrote:

 This would be counterproductive.  If a library cannot be upgraded to 1.9,
 or even 2.2, because some app REQUIRES 1.4, then that causes SERIOUS,
 SECURITY issues.


 Yes, these are exactly the types of problems I want to help solve. Many
 people on this thread are talking about pinning to specific versions of
 libraries. This will prevent upgrades in the event of a security problem.

 Good dependency resolvers work on constraints, not specific versions.

 The ONLY realistic way I can see to solve this, is to have all higher
 version numbers of the same package be backwards compatible, and have
 incompatible packages be DIFFERENT packages, as I mentioned before.

 Really, there is a contract here: an API contract.


 Are you familiar with semantic versioning?

 http://semver.org/

 Semantic Versioning would stipulate that a backwards incompatible change
 in an API would necessitate a MAJOR version bump. This indicates a break in
 the original contract.

 Ideally if people are using multiple major versions of the same package,
 and a security vulnerability is discovered which affects all versions of a
 package, that the package maintainers release a hotfix for all major
 versions.

 --
 Tony Arcieri



Re: [rust-dev] Deprecating rustpkg

2014-02-01 Thread Lee Braiden

On 01/02/14 00:09, Tony Arcieri wrote:
On Fri, Jan 31, 2014 at 4:03 PM, Lee Braiden leebr...@gmail.com wrote:


This would be counterproductive.  If a library cannot be upgraded
to 1.9, or even 2.2, because some app REQUIRES 1.4, then that
causes SERIOUS, SECURITY issues.


Yes, these are exactly the types of problems I want to help solve. 
Many people on this thread are talking about pinning to specific 
versions of libraries. This will prevent upgrades in the event of a 
security problem.


Good dependency resolvers work on constraints, not specific versions.



Agreed.


Are you familiar with semantic versioning?

http://semver.org/

Semantic Versioning would stipulate that a backwards incompatible 
change in an API would necessitate a MAJOR version bump. This 
indicates a break in the original contract.




I'm familiar, in the sense that it's what many libs/apps do, but again, 
I don't believe that library 29.x should be backwards-incompatible with 
28.x.  Major versions of a package, to me, should indicate major new 
features, but not abandonment of old features.  If you want to redesign 
some code base so it's incompatible (i.e., no longer the same thing), 
then it deserves a new name.


Let's compare the mindsets of backwards-compatible library design, 
vs oh, let's call it major-breakage ;) language design:


Let's say you follow a common major-breakage approach, and do this:

1) Create a general-compression-library, version 1.0, which uses the 
LZ algorithm, and exposes some details of that.

2) During the course of development, you get ideas for version 2.0.
3) You publish the 1.x library.
4) Create a general-compression-library, version 2.0.  This, you 
decide, will use the LZMA algorithm, and exposes some details of that.

5) You publish the 2.x library.
6) You receive a patch from someone, adding BZIP support, for 1.x. It 
includes code to make 1.x more general.  However, it's incompatible with 
2.x, and you've moved on, so you drop it, or backport your 2.x stuff.  
Maybe you publish 3.x, but now it's incompatible with 2.x AND 1.x...
7) All the while, people have been using your libraries in products, and 
some depend on 1.x, some on 2.x, some on 3.x.  It's a mess of 
compatibility hell, with no clear direction, security issues due to 
unmaintained code, etc.


Because details are exposed in each, 2.0 breaks compatibility with 1.x.
Under a model where version 2.x can be incompatible with version 1.x,
you say, "OK, fine.  Slightly broken stuff, but new features.  People
can upgrade and use the new stuff, or not.  Up to them."



The problem though, is that the thinking behind all this is wrong-headed 
--- beginning from bad assumptions --- and the acceptance of 
backward-incompatibility encourages that way of thinking.


Let's enforce backwards-compatibility, and see what *might* happen, instead:

1) You create a general-compression-library, version 1.0.  You use the 
LZ algorithm, and expose details of that.

2) During the course of development, you get ideas for 2.0
3) You're about to publish the library, and realise that your 2.0 
changes won't be backwards compatible, because 1.x exposes API details 
in a non-futureproof way.
4) You do a little extra work on 1.x, making it more general -- i.e., 
living up to its name.

5) You publish 1.x
6) You create version 2.x, which ALSO supports LZMA.
7) You publish version 2.x, which now has twice as many features, does 
what it says on the tin by being a general compression library, etc.
8) You receive a patch from someone, adding BZIP support, for 1.x. You 
merge it in, and publish 3.x, which now supports 3 compression formats.
9) All the while, people have been using your libraries in products: 
they all work with general compression library x.x, later versions being 
better, minor OR major.  No security issues, because you just upgrade to 
the latest library version.


Now, instead of one base library and two forks, you have one library 
with three versions, each backwards-compatible, each building features 
over the last.  That's a MUCH better outcome.


Now, that does involve a bit more foresight, but I think it's the kind 
of foresight that enforcing backwards compatibility encourages, and 
rightly so.




I said *might* happen.  Let's explore another turn of events, and 
imagine that you didn't have the foresight in step 3 above: you create 
general-compression-library, never realising that it's not general at 
all, and that 1.x is going to be incompatible with 2.x, until 1.x is 
published, and you come to create 2.x.  Under a backwards-compatibility 
model, that might go like this:


1) You create general-compression-library, version 1.0, with LZ support, 
expose details of that, and publish it.
2) You want to add LZMA support to this library, but can't because it 
breaks backwards compatibility.
3) Instead, you create a new library, universal-compression-library, 
1.0, with plugin support, including 

Re: [rust-dev] Deprecating rustpkg

2014-02-01 Thread Vladimir Lushnikov
There are some great points here (that should probably go into some sort of
'best practices' doc for rust package authors).

However there is a fundamental flaw IMO - we are talking about open-source
code, where the author is not obligated to do *any* of this. Most
open-source licenses explicitly state that there are no implied warranties
of *any* kind. Indeed, I think we would see far less open-source software
published if we started imposing requirements on how to go about it (this
includes the versioning of libraries). (Whether this is a good thing for
open-source in general is open to debate).

In an enterprise-only world, this would obviously work because the
companies that are providing your libraries actually have to give you a
QoS. But with open-source software, you can pick any library you want or
fork any library you want - if you want something better that follows your
requirements, then just fork it and make the changes you want. Which
library fork will end up the most used is essentially a popularity contest.
Of course this excludes the standard library because that approach
definitely does not work there (most notably the D tango vs. dmd
train-wreck).

The 'don't switch things from under people' idea is definitely sound. But
if you care about your application's stability, you test each new library
upgrade for changes with your unit, regression and integration tests. That
is the only way to be sure that nothing is broken and semantic versioning
does not address that, except by making some instances where things are
really likely to be incompatible much clearer. If nothing else, this is why
even if you allow constraints as the units of dependency resolution, at any
given time you are relying upon a single pinned version. You should rebuild
and retest if you want to upgrade *anything*. Whether this can or should be
done dynamically (at runtime) is another question.

I disagree with the 'breaking-changes == new version' idea. The rust
developers have already said that whatever 2.0 will be, it may break
backward compatibility. This is a *good thing* because it's chance to clean
up. One of the reasons C++ is so huge is because it almost never removes
features, and this leads to unnecessary complexity.

Someone mentioned passing two objects with the same 'type' from different
versions of a library and how this would work in terms of memory layout.
But with 'slots', this wouldn't be allowed by the linker, because
effectively it sees the two different versions of the library as different
libraries (even though they have the same name).



On Sat, Feb 1, 2014 at 2:01 PM, Lee Braiden leebr...@gmail.com wrote:

  On 01/02/14 00:09, Tony Arcieri wrote:

  On Fri, Jan 31, 2014 at 4:03 PM, Lee Braiden leebr...@gmail.com wrote:

 This would be counterproductive.  If a library cannot be upgraded to 1.9,
 or even 2.2, because some app REQUIRES 1.4, then that causes SERIOUS,
 SECURITY issues.


  Yes, these are exactly the types of problems I want to help solve. Many
 people on this thread are talking about pinning to specific versions of
 libraries. This will prevent upgrades in the event of a security problem.

  Good dependency resolvers work on constraints, not specific versions.


 Agreed.


   Are you familiar with semantic versioning?

  http://semver.org/

  Semantic Versioning would stipulate that a backwards incompatible change
 in an API would necessitate a MAJOR version bump. This indicates a break in
 the original contract.


 I'm familiar, in the sense that it's what many libs/apps do, but again, I
 don't believe that library 29.x should be backwards-incompatible with
 28.x.  Major versions of a package, to me, should indicate major new
 features, but not abandonment of old features.  If you want to redesign
 some code base so it's incompatible (i.e., no longer the same thing), then
 it deserves a new name.

 Let's compare the mindsets of backwards-compatible library design, vs
 oh, let's call it major-breakage ;) language design:

 Let's say you follow a common major-breakage approach, and do this:

 1) Create a general-compression-library, version 1.0, which uses the LZ
 algorithm, and exposes some details of that.
 2) During the course of development, you get ideas for version 2.0.
 3) You publish the 1.x library.
 4) Create a general-compression-library, version 2.0.  This, you decide,
 will use the LZMA algorithm, and exposes some details of that.
 5) You publish the 2.x library.
 6) You receive a patch from someone, adding BZIP support, for 1.x.  It
 includes code to make 1.x more general.  However, it's incompatible with
 2.x, and you've moved on, so you drop it, or backport your 2.x stuff.
 Maybe you publish 3.x, but now it's incompatible with 2.x AND 1.x...
 7) All the while, people have been using your libraries in products, and
 some depend on 1.x, some on 2.x, some on 3.x.  It's a mess of compatibility
 hell, with no clear direction, security issues due to unmaintained code,
 etc.

 

Re: [rust-dev] Deprecating rustpkg

2014-02-01 Thread Lee Braiden

On 01/02/14 00:12, Tony Arcieri wrote:
On Fri, Jan 31, 2014 at 4:07 PM, Vladimir Lushnikov vladi...@slate-project.org wrote:


Just to be clear, I think what you are saying is that you want
version pinning to be dynamic? I.e. when a new version of a
library dependency becomes available, upgrade the package with
that dependency?


I would like a constraints-based system that is able to calculate the 
latest API-compatible version of a given package based on rules less 
strict than version = X.Y.Z




Agreed, but that's a VERY low bar for requirements; I think we need to be 
more specific.  Apt, Debian's package manager, for example, can have 
package dependency rules like these:


some-package:
Version 4.11_amd64
Depends: X-bin (ver == 2.4, ver > 3.1 && < 3.7) | opengl-dev
Source-Package: some-source
Build-depends: X-devel, (scons-builder (ver >= 3 && ver != 3.3) | basic-make)

nvidia-headers:
Provides: opengl-dev

ati-radeon-hd-devel:
Provides: radeon-dev

GNUMake:
Provides: basic-make

BSDMake:
Provides: basic-make


Which says that:

* You can build some-package 4.11_amd64 from some-source-4.11, any 
version of X-devel, and version 3.x of scons-builder (except for 3.3, which 
is broken somehow), and that anything providing basic make functionality 
is needed, whether it's BSD's make or GNU's.


* However, if you just want to install the binary version, you only 
need one of X-bin, or opengl-dev


You could get around the fact that android-native-devkit is a whole 
bunch of tools and libraries which don't conform to the package system, 
by creating a dummy package requiring android-native-devkit, and saying 
that it provides basic-make and opengl-headers, so that the dependencies 
all work out.  As another example, you could break opengl-dev into API 
versions, saying that android-native-devkit provides opengl-dev, 
opengl2-dev, and opengl3-dev, but that ati-radeon-hd-dev provides only 
opengl-dev, and opengl2-dev.


Then you can say, for example, "get android-native-devkit from here, and 
always use the latest, most unstable version, but give me the most 
stable version of BSDMake, and make sure X-bin is the stable version, 
but with the latest security patches."


One thing you can't do (without chroot/jails/containers) is to say, 
"Install these packages here, and install version 1.x of packageN here, 
with 3.x there, and 2.5 there."  That's pretty important for virtual 
hosting, and development, for example.


In short, Rust's package system should probably support:

* Package names, independent of sources
* Parseable versions
* Dependency EXPRESSIONS, including boolean logic, comparison operators, 
negation (i.e., none of the packages in this sub-expression are 
compatible), etc.

* Virtual packages which include other packages or wrap other packages
* Multiple installation paths, with a list of packages to be installed / 
maintained there
* Some way to use different installation paths based on which project 
you're in
* Some way to specify the local installation path during development, 
the default installation path for public packages, and a way to override 
the default installation path for specific sysadmin purposes.
* Some way to specify dependencies on third-party libraries / tools, 
from other languages / package managers.  I've little idea of what to do 
about that.  Probably just print an error message and quit, to begin with?
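
As a rough illustration of what "dependency EXPRESSIONS" could look like as
data, a hypothetical Rust sketch (every type and name here is invented, not a
proposal for a concrete format):

```
// Hypothetical model of a dependency expression with boolean logic,
// comparison operators, and negation, loosely mirroring the Apt-style
// rules above.
#[allow(dead_code)]
enum VersionCmp {
    Eq(String),
    AtLeast(String),
    Less(String),
}

#[allow(dead_code)]
enum DepExpr {
    // A concrete or virtual package name plus version constraints.
    Package { name: String, constraints: Vec<VersionCmp> },
    All(Vec<DepExpr>),  // every sub-expression must hold
    Any(Vec<DepExpr>),  // at least one alternative must hold (the `|` case)
    Not(Box<DepExpr>),  // none of the packages in this sub-expression
}

fn main() {
    // "X-bin (ver == 2.4) | opengl-dev", written as data:
    let dep = DepExpr::Any(vec![
        DepExpr::Package {
            name: "X-bin".into(),
            constraints: vec![VersionCmp::Eq("2.4".into())],
        },
        DepExpr::Package { name: "opengl-dev".into(), constraints: vec![] },
    ]);
    let _ = dep; // a resolver would evaluate this against installed packages
}
```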



--
Lee



Re: [rust-dev] Deprecating rustpkg

2014-02-01 Thread Vladimir Lushnikov
Portage has a very similar syntax/way of specifying runtime vs. build-time
dependencies: http://devmanual.gentoo.org/general-concepts/dependencies/.

Apt doesn't have support for slots and USE flags (code that is
included/excluded at compile time for optional features).


On Sat, Feb 1, 2014 at 2:38 PM, Lee Braiden leebr...@gmail.com wrote:

Agreed, but that's a VERY low bar for requirements; I think we need to be
 more specific.  Apt, Debian's package manager, for example, can have
 package dependency rules like these:

 some-package:
 Version 4.11_amd64
 Depends: X-bin (ver == 2.4, ver > 3.1 && < 3.7) | opengl-dev
 Source-Package: some-source
 Build-depends: X-devel, (scons-builder (ver >= 3 && ver != 3.3) |
 basic-make)

 nvidia-headers:
 Provides: opengl-dev

 ati-radeon-hd-devel:
 Provides: radeon-dev

 GNUMake:
 Provides: basic-make

 BSDMake:
 Provides: basic-make



Re: [rust-dev] Deprecating rustpkg

2014-02-01 Thread Vladimir Matveev
Is it possible at all to find the latest version of a library which is
still compatible, completely automatically? Incompatibilities can be
present at the logic level, so compilation with an incompatible version
will succeed, but the program will work incorrectly. I don't think
that this can be solved without assumptions about versioning (like
semver) and/or without manual intervention.

Couldn't we just use a looser variant of version pinning within
semantic versioning, with manual user intervention when it is needed?
For example, assuming something like semantic versioning is
adopted, packages specify dependencies on a certain major version, and
the dependency resolver downloads the latest available package within this
major version. If for some reason an automatically selected dependency is
incompatible with our package or with other dependencies of our package,
the user can manually override this selection, maybe even with another
major version. This is, as far as I understand, the system of slots
used by Portage that Vladimir Lushnikov described. Slots correspond to
major versions in semver terms, and other packages depend on a concrete
slot. But the user has ultimate power to select whichever version they
need, overriding the automatic choice.

In short, we allow the dependency resolver to use the latest possible
packages which should be compatible according to semantic versioning,
and if that fails, we provide the user with the ability to override
the dependency resolver's choices.
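
A minimal sketch of the scheme described above, assuming semver-style
(major, minor, patch) triples; all names are invented for illustration:

```
use std::collections::HashMap;

// Pick the newest available version within the requested major version
// ("slot" in Portage terms), unless the user has pinned an explicit
// override for that package.
fn resolve(
    package: &str,
    wanted_major: u32,
    available: &[(u32, u32, u32)], // (major, minor, patch)
    overrides: &HashMap<String, (u32, u32, u32)>,
) -> Option<(u32, u32, u32)> {
    if let Some(v) = overrides.get(package) {
        return Some(*v); // manual intervention wins
    }
    available
        .iter()
        .copied()
        .filter(|(major, _, _)| *major == wanted_major)
        .max() // tuples compare lexicographically: latest minor.patch
}

fn main() {
    let available = vec![(1, 4, 0), (1, 9, 2), (2, 0, 1)];
    let overrides = HashMap::new();
    // With no override, major version 1 resolves to the newest 1.x:
    println!("{:?}", resolve("rust-json", 1, &available, &overrides)); // Some((1, 9, 2))
}
```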

2014-02-01 Tony Arcieri basc...@gmail.com:
 On Fri, Jan 31, 2014 at 3:59 PM, Jack Moffitt j...@metajack.im wrote:

 The algorithm here is rather simple. We try to satisfy rust-logger and
 rust-rest. rust-rest has a version (or could be a tag like 1.x) so we
 go get that. It depends on rust-json 2.0 so we get that. Then we try
 to look for rust-logger, whatever version is latest (in rustpkg this
 would mean current master since no version or tag is given). This
 pulls in rust-json 1.0 since 1.0 != 2.0 and those have specific tags.
 Everything is built and linked as normal. Whether rust-json's
 constraints are exact revisions or they are intervals (< 2.0 and >=
 2.0 for example), makes little difference I think.


 To reiterate, it sounds like you're describing every package pinning its
 dependencies to a specific version, which I'd consider an antipattern.

 What is to prevent a program using this (still extremely handwavey)
 algorithm from depending on rust-json 1.0, 1.1, 1.2, 1.3, 1.4, 2.0, 2.1, and
 2.2 simultaneously?

 What if some of these are buggy, but the fixed versions aren't used due to
 version pinning?

 What if rust-json 1.0 has a security issue?

 --
 Tony Arcieri



Re: [rust-dev] Deprecating rustpkg

2014-02-01 Thread Lee Braiden

On 01/02/14 09:39, Gaetan wrote:
It is not only about API changes. Sometimes, from one minor version to 
another, a feature gets silently broken (a silent regression). 
It might not impact libA, which depends on it, but it may break 
libB, which also depends on it, but against a previous version.


Silent regressions are the exceptional case though, not the norm. As a 
general rule, upgrades are important and necessary, at least for 
security reasons.  It's kind of up to developers, up to distro 
maintainers, and certainly up to mission-critical sysadmins, to choose 
software (libs, and apps which use those libs) which are QA'd well 
enough to avoid this.  Breakage SOMETIMES happens, but, much like 
recovering from a failed write to disk, you just have to weigh the odds, 
try it, then back up if it didn't work out.  In fact, you could think of 
the process of upgrading a library as a simple write followed by a 
verify.  If you do it properly, like a good admin would, it'll all be 
wrapped in a transaction that you can roll back.  BUT, the important 
part is that you'll probably still need to upgrade version x.19 -> x.21, 
even if upgrading x.19 -> x.20 fails for some reason.


Either you have a system which just works, isn't connected to the net, 
has no bugs, and no security risks associated with it, or you upgrade 
sooner or later.  In most cases, if you're acting responsibly, you 
CANNOT just install version x.19, call that a working system, and forget 
about it, installing x.21 only for newer customers / systems.  Not if 
those systems are connected to the internet, at least.


--
Lee



Re: [rust-dev] Deprecating rustpkg

2014-02-01 Thread Lee Braiden

On 01/02/14 14:55, Vladimir Matveev wrote:

Is it possible at all to find the latest version of a library which is
still compatible, completely automatically? Incompatibilities can be
present at the logic level, so compilation with an incompatible version
will succeed, but the program will work incorrectly. I don't think
that this can be solved without assumptions about versioning (like
semver) and/or without manual intervention.


No, it's not.  It's always going to be the library developers' / 
programmer's responsibility, to some extent.


For example, if a library adds three new functions to fit within some 
begin/end wrapper, it may modify the begin/end functions to behave 
differently.  If the library author does that in a way that breaks 
existing logic, then that's a bug, to my mind, or a deliberate 
divergence / API contract breakage.


At that point, what the author has REALLY done is decide that his 
original design for begin()/end(), and for that whole part of the 
library in general, is wrong, and needs a REDESIGN.  What he can then do is:


a) Create different functions, which have extended functionality, and 
support the three new in-wrapper functions.  So, you could call:


begin()
old_funcs...
end()

OR:

extended_begin()
old_funcs()
new_funcs()
extended_end()

b) Create a new library, similar to the old one, but with new 
functionality, new API guarantees, etc.


Ignoring the problem just creates a mess though, which ripples 
throughout the development space (downstream products, library forks, 
etc.), and no package manager will completely solve it after the fact, 
except to acknowledge the mess and install separate packages for every 
program that needs them (but that has security / feature-loss issues).




Couldn't we just use a looser variant of version pinning within
semantic versioning, with manual user intervention when it is needed?
For example, assuming something like semantic versioning is
adopted, packages specify dependencies on a certain major version, and
the dependency resolver downloads the latest available package within this
major version.


You can do that within a major version, except for one case - multiple 
developers creating diverged versions of 2.13, based on 2.12, each with 
their own features.  Really, though, what you're doing is just 
shifting/brushing the compatibility issue under the rug each time: y is 
OK in x.y because x guarantees backwards compatibility.  Fork1 in 
x.y.fork1 is OK, because x.y guarantees backwards compatibility... and 
so on, ad infinitum.  Whatever level you're at, you have two issues:


a) Backwards compatibility between library versions
b) The official, named version of the library, vs. unofficial code.

Assuming you guarantee A in some way (backwards compatibility in 
general, across all versions of the library, or backwards compatibility 
for minor versions), you still have incompatibility if (b) arises, which 
it will in all distributed repository scenarios, UNLESS you can do 
something like git's version tracking per branch, where any version 
number is unique, and also implies every version before.  Then you're 
back to whether you want to do that per major version, or overall.


But doing it per major version recursively raises the question of which 
major version is authorised: what if you have a single library at 19.x, 
and TWO people create 20.0 independently?  Again, you have 
incompatibility.  So, you're back to the question of (a): is it the same 
library, or should an author simply stay within the bounds of a 
library's API, and fork a CONCEPTUALLY DIFFERENT new lib (most 
likely with a new name) when they break that API?




If for some reason an automatically selected dependency is
incompatible with our package or with other dependencies of our package,
the user can manually override this selection


But what does the user know about library APIs?  He needs to dig into 
the logic of the program, and worse, the logic of underlying libraries, 
to figure out that:


somelib::begin() from 
github://somelib/someplace/v23.2/src/module1/submod2/utils.rs, line 24


does not mean the same as:

somelib::begin() from 
github://somelib/otherplace/v23.2/src/module1/submod2/utils.rs, line 35


! ;)



major version. This is, as far as I understand, the system of slots
used by Portage that Vladimir Lushnikov described. Slots correspond to
major versions in semver terms, and other packages depend on a concrete
slot.


This sounds interesting (I'll have to track down Vladimir's original 
post on that), but so far, I'm not sure it solves the problem of a 
forked minor version, any more than other methods solve a forked major 
version.  It seems to me that it always comes back to people choosing to 
break library APIs, and other people trying to clean it up in one way or 
another, which ultimately fails, at some point -- major, minor, fork, 
repository, branch, or 

Re: [rust-dev] Deprecating rustpkg

2014-02-01 Thread Lee Braiden

On 01/02/14 14:49, Vladimir Lushnikov wrote:
Portage has a very similar syntax/way of specifying runtime vs. 
build-time dependencies: 
http://devmanual.gentoo.org/general-concepts/dependencies/.


Apt doesn't have support for slots and USE flags (code that is 
included/excluded at compile time for optional features).




Agreed; use flags are very nice :) I find them a bit clunky / breakable, 
though -- it's very hard to know what the valid range of flags is, and 
how that will affect every package on your system.  If Rust gets 
something similar, the exact circumstances under which they're used, the 
range of valid values, and the effects of each, should be EXTREMELY clear.



--
Lee



Re: [rust-dev] Deprecating rustpkg

2014-02-01 Thread Vladimir Lushnikov
I think USE flags are more appropriate for library features (which is
exactly the way portage uses them). So you have your rust app with
conditional code that depends on a particular cfg (
https://github.com/mozilla/rust/wiki/Doc-attributes) and then you expose a
list of these in your package specification so that others can know to say,
"I use the json library, but with built-in URI support".
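
A small sketch of the kind of cfg-conditional code being described; the
`uri_support` flag name is invented (it would be passed as `--cfg uri_support`,
or today wired up as a Cargo feature):

```
// Compiled only when the crate is built with the hypothetical flag.
#[cfg(uri_support)]
fn parse_uri(s: &str) -> Option<&str> {
    s.strip_prefix("http://")
}

// Fallback when the optional feature is compiled out.
#[cfg(not(uri_support))]
fn parse_uri(_s: &str) -> Option<&str> {
    None
}

fn main() {
    println!("{:?}", parse_uri("http://example.org"));
}
```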


On Sat, Feb 1, 2014 at 3:45 PM, Lee Braiden leebr...@gmail.com wrote:

 On 01/02/14 14:49, Vladimir Lushnikov wrote:

 Portage has a very similar syntax/way of specifying runtime vs.
 build-time dependencies: http://devmanual.gentoo.org/
 general-concepts/dependencies/.

 Apt doesn't have support for slots and USE flags (code that is
 included/excluded at compile time for optional features).


 Agreed; use flags are very nice :) I find them a bit clunky / breakable,
 though -- it's very hard to know what the valid range of flags is, and how
 that will affect every package on your system.  If Rust gets something
  similar, the exact circumstances under which they're used, the range of valid
  values, and the effects of each, should be EXTREMELY clear.


 --
 Lee




Re: [rust-dev] Deprecating rustpkg

2014-02-01 Thread Lee Braiden

On 01/02/14 15:48, Vladimir Lushnikov wrote:
I think USE flags are more appropriate for library features (which is 
exactly the way portage uses them). So you have your rust app with 
conditional code that depends on a particular cfg 
(https://github.com/mozilla/rust/wiki/Doc-attributes) and then you 
expose a list of these in your package specification so that others 
can know to say, "I use the json library, but with built-in URI support".


Interesting.  I was thinking more of compiling for specific CPU 
optimisations, etc.  For the "use this optional library" thing, Debian 
seems to mostly just use optional / recommended dependencies.  The 
package manager informs you that a package is recommended / optional, 
and you can install it if you want.  Then the ./configure script or 
whatever will normally just use it if it's there (by default, if that's 
considered sensible as a default), or you can build it with extra flags 
manually, to make it build in a non-default way.  I like that Debian 
exposes those optional packages at the package manager level, but the 
global / local (iirc) use flags make a lot of sense too.


Some hybrid that had option flags when installing/building, and informed 
you of additional packages needed (much like when you select features 
to install in a GUI installer), folding that back into the package 
management/dependencies etc. might be best, but it would be relatively 
complex to implement.


--
Lee



Re: [rust-dev] Deprecating rustpkg

2014-02-01 Thread Lee Braiden

Ah, this:

On 01/02/14 15:43, Lee Braiden wrote:

extended_begin()
old_funcs()
new_funcs()
extended_end()


should read more like:

begin()
old_funcs()

extended_begin()
new_funcs()
extended_end()
end()
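
A hypothetical Rust sketch of that corrected shape, with all function names
invented: the extended block nests inside the original begin()/end() pair, so
callers that only know begin()/old_func()/end() keep working unchanged:

```
fn begin() { println!("begin"); }
fn end() { println!("end"); }
fn old_func() { println!("  old work"); }

fn extended_begin() { println!("  extended begin"); }
fn extended_end() { println!("  extended end"); }
fn new_func() { println!("    new work"); }

fn main() {
    begin();
    old_func();

    // New callers opt into the extended functionality inside the
    // original wrapper; old callers simply never call these.
    extended_begin();
    new_func();
    extended_end();
    end();
}
```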


--
Lee



Re: [rust-dev] Deprecating rustpkg

2014-02-01 Thread Sean McArthur
On Fri, Jan 31, 2014 at 1:05 PM, Tony Arcieri basc...@gmail.com wrote:

 IMO, a system that respects semantic versioning, allows you to constrain
 the dependency to a particular *major* version without requiring pinning
 to a *specific* version.

 I would call anything that requires pinning to a specific version an
 antipattern. Among other things, pinning to specific versions precludes
 software updates which may be security-critical.


It's perfectly reasonable to require a certain *minor* version, since minor
versions (in semver) can include API additions that you may depend on.

Also, nodejs and npm supposedly support semver, but it's impossible to
enforce library authors actually do this, so you'll get libraries with
breaking changes going from 1.1.2 to 1.1.3 because reasons.


Re: [rust-dev] Deprecating rustpkg

2014-02-01 Thread Gaetan
Why not enforce, in one way or another, an API compatibility test suite for
ensuring at least a certain level of compatibility between two versions? I
think it is something quite doable, and moreover this would kinda force the
package maintainer to write unit tests, which is always a good practice.

-
Gaetan



2014-01-31 Sean McArthur s...@seanmonstar.com:

 On Fri, Jan 31, 2014 at 1:05 PM, Tony Arcieri basc...@gmail.com wrote:

 IMO, a system that respects semantic versioning, allows you to constrain
 the dependency to a particular *major* version without requiring pinning
 to a *specific* version.

 I would call anything that requires pinning to a specific version an
 antipattern. Among other things, pinning to specific versions precludes
 software updates which may be security-critical.


 It's perfectly reasonable to require a certain *minor* version, since
 minor versions (in semver) can include API additions that you may depend on.

 Also, nodejs and npm supposedly support semver, but it's impossible to
 enforce library authors actually do this, so you'll get libraries with
 breaking changes going from 1.1.2 to 1.1.3 because reasons.



Re: [rust-dev] Deprecating rustpkg

2014-02-01 Thread Lee Braiden

On 01/02/14 18:54, Gaetan wrote:
Why not enforce, in one way or another, an API compatibility test suite 
for ensuring at least a certain level of compatibility between two 
versions? I think it is something quite doable, and moreover this would 
kinda force the package maintainer to write unit tests, which is always a 
good practice.


At the moment, we're trying to agree the policy.  After the policy is 
agreed, tools could be created to help ensure that those policies are 
met.  People would then use them if they see fit, or they could be built 
into package creation / version upload tools as standard. The first 
thing is to agree a reliable, sensible policy that improves the quality 
of software / package management, and is WORTH enforcing, though.



--
Lee



Re: [rust-dev] Deprecating rustpkg

2014-02-01 Thread Lee Braiden

On 01/02/14 18:54, Gaetan wrote:
Why not enforce, in one way or another, an API compatibility test suite 
for ensuring at least a certain level of compatibility between two 
versions? I think it is something quite doable, and moreover this would 
kinda force the package maintainer to write unit tests, which is always a 
good practice.




One other thing: I don't believe a certain level of compatibility is a 
useful attribute to track in releases.  Either something is fully 
compatible, or it breaks existing software.  It might be useful to judge 
the suitability of software for release (i.e., software passes one 
release-readiness test when it's fully compatible with a previous 
release), but that's a different thing, imho.


--
Lee



Re: [rust-dev] Deprecating rustpkg

2014-02-01 Thread Vladimir Matveev
 You can do that within a major version, except for one case - multiple 
 developers creating diverged versions of 2.13, based on 2.12, each with 
 their own features.  
 ...
 But doing it per major version recursively raises the question of which 
 major version is authorised: what if you have a single library at 19.x, 
 and TWO people create 20.0 independently?  Again, you have 
 incompatibility.  So, you're back to the question of (a): is it the same 
 library, or should an author simply stay within the bounds of a 
 library's API, and fork a CONCEPTUALLY DIFFERENT new lib (most 
 likely with a new name) when they break that API?

I think that forks should be considered completely different libraries. This
shouldn't be a problem when a certain naming scheme is used, for example,
two-level names like in the Java world. A central repository will certainly help,
because each entry in it will be controlled by a concrete user. These entries
can also be linked to the version control stream which represents the main
development line. No ambiguities here.

It may be desirable then to use a specific fork instead of the mainline project.
This can be a feature of the overriding system, which will be present anyway. If the
user wants to use a fork instead of a library (all its versions or a specific
version), he/she will be able to specify this requirement somehow, and the
dependency resolver will take it into account. Obviously, package authors will
be able to choose the default fork which they want to use.

 But what does the user know about library APIs?  He needs to dig into 
 the logic of the program, and worse, the logic of underlying libraries, 
 to figure out that:
 
  somelib::begin() from 
 github://somelib/someplace/v23.2/src/module1/submod2/utils.rs, line 24
 
 does not mean the same as:
 
  somelib::begin() from 
 github://somelib/otherplace/v23.2/src/module1/submod2/utils.rs, line 35
 
 ! ;)
 

When this API is used directly by the package, then the user *should* know
about it. He's using it, after all. If this API belongs to a transitive
dependency, then I don't think there is an ideal solution. Either the version is
pinned (like in the Java world), or it is chosen by the dependency resolver. In the
former case all transitive dependencies are guaranteed to be intercompatible,
because these pinned versions were deliberately chosen by the libraries' developers.

In the latter case there is always a possibility of compatibility problems,
because it is impossible to guarantee complete compatibility - libraries are
written by people, after all. Then it is the user's responsibility to resolve
these problems; no one else will be able to do this.


Re: [rust-dev] Deprecating rustpkg

2014-02-01 Thread Isaac Dupree

On 02/01/2014 06:27 AM, Matthieu Monrocq wrote:

In short, isn't there a risk of crashes if one accidentally links two
versions of a given library and starts exchanging objects? It seems
impractical to prove that objects created by one version cannot
accidentally end up being passed to the other version:

- unless the types differ at compilation time (seems awkward)


Haskell does this. Types are equal if their {package, package-version, 
module-name, type-name} is the same.  (Or maybe it is even more rigorous 
about type equality.)  Using multiple versions of some packages turns 
out not to be awkward at all, such as libraries for writing tests and 
libraries that don't export important data types.
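
For concreteness, a small Rust model of the identity tuple Isaac describes
(names invented; this sketches the idea, not how either compiler actually
implements type equality):

```
// A type is identified by the whole tuple, so the "same" type name coming
// from two different package versions compares as two distinct types.
#[derive(PartialEq, Eq, Hash, Debug)]
struct TypeIdentity {
    package: &'static str,
    package_version: &'static str,
    module_name: &'static str,
    type_name: &'static str,
}

fn main() {
    let a = TypeIdentity {
        package: "rust-json", package_version: "1.0",
        module_name: "json", type_name: "Value",
    };
    let b = TypeIdentity { package_version: "2.0", ..a };
    println!("{}", a == b); // false: different library versions, different type
}
```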


-Isaac



Re: [rust-dev] Deprecating rustpkg

2014-02-01 Thread Lee Braiden

On 01/02/14 19:32, Vladimir Matveev wrote:
When this API is used directly by the package, then the user *should* 
know about it. He's using it, after all.


There are developers (direct library users), and then distro 
maintainers/admins/users who need to manage libraries installed on their 
system.  The former should know, but the others shouldn't have to think 
about it, yet should (must) be able to override the defaults if they 
need to, at least for shared libraries.  Presumably we want shared 
libraries and static libraries to function similarly, except for whether 
the user chooses static or dynamic linkage.


If this API belongs to a transitive dependency, then I don't think 
there is an ideal solution. Either the version is pinned (like in Java 
world), or it is chosen by the dependency resolver.


If we're talking about pinning to an absolute version (no upgrades), 
then I think that's a security / bugfix issue, unless we're also talking 
about static linkage in that case (which is reasonable because then the 
bug is essentially part of the black box that is the software the user 
is installing, and in that case, the software maintainer is also 
responsible for releasing updates to fix bugs within the statically 
linked code).


In the former case all transitive dependencies are guaranteed to be 
intercompatible


Are they?  What if the statically pinned version of a scanner library 
doesn't support the user's new scanner, there's an update to support his 
scanner, but it's ignored because the software allows only an absolute 
version number?


because these pinned versions were deliberately chosen by libraries 
developers.


Who are not infallible, and do/should not get to choose everything about 
the target system's libraries.  There is also a freedom issue, regarding 
someone's right to implement a new version of the library, say, to port 
it to a new GUI toolkit.


In the latter case there is always a possibility of compatibility 
problems, because it is impossible to guarantee complete compatibility 
- libraries are written by people, after all.


Yes, but we can encourage it, just like we encourage immutability, even 
though we can't force everyone to use it.


Then it is the user's responsibility to resolve these problems, no one 
else will be able to do this. 


But the user can't do this, if new libraries break old programs, or old 
programs won't allow upgrading.



--
Lee






Re: [rust-dev] Deprecating rustpkg

2014-02-01 Thread Lee Braiden

On 01/02/14 19:59, Isaac Dupree wrote:

On 02/01/2014 06:27 AM, Matthieu Monrocq wrote:

In short, isn't there a risk of crashes if one accidentally links two
versions of a given library and starts exchanging objects? It seems
impractical to prove that objects created by one version cannot
accidentally end up being passed to the other version:

- unless the types differ at compilation time (seems awkward)


Haskell does this. Types are equal if their {package, package-version, 
module-name, type-name} is the same.  (Or maybe it is even more 
rigorous about type equality.)  Using multiple versions of some 
packages turns out not to be awkward at all, such as libraries for 
writing tests and libraries that don't export important data types.




This sounds useful, but still seems like it's prone to error, unless you 
can define versions in some reliable way, which works despite 
distributed repositories, branches on those repositories, etc.


Does anyone have a proposal for methods of doing that?  I think it would 
require tracking version + hash of all code --- a bit like the way git 
tracks the head of a branch.  Is that what the hash in rust libraries 
currently includes?
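
As a sketch of the "version + hash of the code" idea (illustration only; this
is not how rustc's crate hash is actually computed):

```
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Derive an identity from (name, version, source text), so two different
// code bodies published under the same version number still get distinct
// identities -- much like a git commit pins exact content.
fn library_id(name: &str, version: &str, source: &str) -> u64 {
    let mut h = DefaultHasher::new();
    name.hash(&mut h);
    version.hash(&mut h);
    source.hash(&mut h);
    h.finish()
}

fn main() {
    let a = library_id("somelib", "23.2", "fn begin() { /* someplace */ }");
    let b = library_id("somelib", "23.2", "fn begin() { /* otherplace */ }");
    println!("{}", a == b); // false: same version number, different code
}
```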



--
Lee



Re: [rust-dev] Deprecating rustpkg

2014-02-01 Thread Vladimir Matveev
To clarify, when I was writing "user" I meant the developer who uses this
package, not the end user of the complete program.

 On 01/02/14 19:32, Vladimir Matveev wrote:
  When this API is used directly by the package, then the user *should* 
  know about it. He's using it, after all.
 
 There are developers (direct library users), and then distro 
 maintainers/admins/users who need to manage libraries installed on their 
 system.  The former should know, but the others shouldn't have to think 
 about it, yet should (must) be able to override the defaults if they 
 need to, at least for shared libraries.  Presumably we want shared 
 libraries and static libraries to function similarly, except for whether 
 the user chooses static or dynamic linkage.

Well, it seems that working for a long time with code targeting virtual
machines is corrupting :) I completely forgot about different models of
compilation. I see your point. But I think that developing and distributing
should be considered separately. A package manager for developers should be
part of the language infrastructure (like rustpkg is now for Rust and, for example,
the go tool for Go or cabal for Haskell). This package manager allows
flexible management of Rust libraries and their dependencies, and it should be
integrated with the build system (or *be* this build system). It is used by
developers to create applications and libraries and by maintainers to prepare
these applications and libraries for integration with the distribution system
for end users.

The package manager for general users (I'll call it the system package manager),
however, depends on the OS, and it is the maintainer's task to determine correct
dependencies for each package. The Rust package manager should not depend in any
way on the system package manager and its packages, because each system has its
own package manager, and it is just impossible to support them all. Rust also
should not force usage of a concrete user-level package manager (like 0install,
for example), because this means additional unrelated software on the user's
installation.

Go and Haskell do not have this problem because they are linked statically, and
their binary packages do not have any library dependencies at all. Rust is a
different story as it supports and encourages dynamic linkage. I think that
maintainers should choose a standard set of Rust libraries which is OK for most
applications, and support and update them and their dependent applications. If
there are conflicts between versions (for example, some application started to
depend on another fork of a library), then maintainers should resolve this in
the standard way of their distribution system (e.g. slots for Portage, name
suffixes in apt and so on).

Essentially there is a large graph of packages in the Rust world, consisting of
packages under the Rust package manager's control (the main graph). Developers are only
working with this graph. Then, for each distribution system, maintainers of this
system pull packages from the main graph and adapt them to their system in a way
this system allows and encourages. I don't think that it is possible to achieve
anything better than this. We cannot and should not force end users to use
something other than their packaging system.

  If this API belongs to a transitive dependency, then I don't think 
  there is an ideal solution. Either the version is pinned (like in Java 
  world), or it is chosen by the dependency resolver.
 
 If we're talking about pinning to an absolute version (no upgrades), 
 then I think that's a security / bugfix issue, unless we're also talking 
 about static linkage in that case (which is reasonable because then the 
 bug is essentially part of the black box that is the software the user 
 is installing, and in that case, the software maintainer is also 
 responsible for releasing updates to fix bugs within the statically 
 linked code).
 
  In the former case all transitive dependencies are guaranteed to be 
  intercompatible
 
 Are they?  What if the statically pinned version of a scanner library 
 doesn't support the user's new scanner, there's an update to support his 
 scanner, but it's ignored because the software allows only an absolute 
 version number?

I don't think your example is related. By guaranteed intercompatibility I meant
something like the following. Suppose your package is called `package`. It
depends on `foo-x`, which in turn depends on `bar-y`. When versions are always
pinned by their developers, `foo`'s author has deliberately chosen the `bar-y`
version, and he knows that the `foo-x` library will work properly with `bar-y`.
This is how the Java ecosystem works now. A new scanner, however, is not an API
feature. Your example seems to support the general point about outdated
dependencies, and I generally agree with it.

  because these pinned versions were deliberately chosen by libraries 
  developers.
 
 Who are not infallible, and do/should not get to choose everything about 
 the target system's libraries.  

[rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax

2014-02-01 Thread Corey Richardson
Hey all,

bjz and I have worked out a nice proposal[0] for a slight syntax
change, reproduced here. It is a breaking change to the syntax, but it
is one that I think brings many benefits.

Summary
===

Change the following syntax:

```
struct Foo<T, U> { ... }
impl<T, U> Trait<T> for Foo<T, U> { ... }
fn foo<T, U>(...) { ... }
```

to:

```
forall<T, U> struct Foo { ... }
forall<T, U> impl Trait<T> for Foo<T, U> { ... }
forall<T, U> fn foo(...) { ... }
```

The Problem
===

The immediate, and most pragmatic, problem is that in today's Rust one cannot
easily search for implementations of a trait. Why? `grep 'impl Clone'` is
itself not sufficient, since many types have parametric polymorphism. Now I
need to come up with some sort of regex that can handle this. An easy
first-attempt is `grep 'impl(<.*?>)? Clone'` but that is quite inconvenient to
type and remember. (Here I ignore the issue of tooling, as I do not find the
argument of "But a tool can do it!" valid in language design.)

A deeper, more pedagogical problem, is the mismatch between how `struct
Foo<...> { ... }` is read and how it is actually treated. The straightforward,
left-to-right reading says "There is a struct Foo which, given the types ...,
has the members ..." This might lead one to believe that `Foo` is a single
type, but it is not. `Foo<int>` (that is, type `Foo` instantiated with type
`int`) is not the same type as `Foo<uint>` (that is, type `Foo` instantiated
with type `uint`). Of course, with a small amount of experience or a very
simple explanation, that becomes obvious.

Something less obvious is the treatment of functions. What does `fn
foo<...>(...) { ... }` say? "There is a function foo which, given types ...
and arguments ..., does the following computation: ..." is not very adequate.
It leads one to believe there is a *single* function `foo`, whereas there is
actually a single `foo` for every substitution of type parameters! This also
holds for implementations (both of traits and of inherent methods).

Another minor problem is that nicely formatting long lists of type parameters
or type parameters with many bounds is difficult.
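
To make the formatting point concrete, here is a sketch of a bound-heavy
signature under today's syntax; the bounds are chosen only to make the
parameter list long, and the `forall` form appears only in a comment since it
is not implemented:

```
use std::fmt::Debug;
use std::hash::Hash;

// Today the type parameters and their bounds sit between the function
// name and the argument list, which gets long quickly:
fn merge_and_report<K: Hash + Eq + Debug + Clone, V: Debug + Default>(
    keys: Vec<K>,
    value: V,
) -> Vec<(K, V)>
where
    V: Clone,
{
    keys.into_iter().map(|k| (k, value.clone())).collect()
}

// Under the proposal this could hypothetically be written as:
// forall<K: Hash + Eq + Debug + Clone, V: Debug + Default + Clone>
// fn merge_and_report(keys: Vec<K>, value: V) -> Vec<(K, V)> { ... }

fn main() {
    let out = merge_and_report(vec!["a", "b"], 0u32);
    println!("{:?}", out); // [("a", 0), ("b", 0)]
}
```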

Proposed Solution
=

Introduce a new keyword, `forall`. This choice of keyword reads very well and
will not conflict with any identifiers in code which follows the [style
guide](https://github.com/mozilla/rust/wiki/Note-style-guide).

Change the following declarations from

```
struct Foo<T, U> { ... }
impl<T, U> Trait<T> for Foo<T, U> { ... }
fn foo<T, U>(...) { ... }
```

to:

```
forall<T, U> struct Foo { ... }
forall<T, U> impl Trait<T> for Foo<T, U> { ... }
forall<T, U> fn foo(...) { ... }
```

These read very well: "for all types T and U, there is a struct Foo ...", "for
all types T and U, there is a function foo ...", etc. These reflect that there
are in fact multiple functions `foo` and structs `Foo` and implementations of
`Trait`, due to monomorphization.


[0]: 
http://cmr.github.io/blog/2014/02/01/polymorphic-declaration-syntax-in-rust/


Re: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax

2014-02-01 Thread Kevin Ballard
On Feb 1, 2014, at 2:39 PM, Corey Richardson co...@octayn.net wrote:

 The immediate, and most pragmatic, problem is that in today's Rust one cannot
 easily search for implementations of a trait. Why? `grep 'impl Clone'` is
 itself not sufficient, since many types have parametric polymorphism. Now I
 need to come up with some sort of regex that can handle this. An easy
 first-attempt is `grep 'impl(<.*?>)? Clone'` but that is quite inconvenient to
 type and remember. (Here I ignore the issue of tooling, as I do not find the
 argument of "But a tool can do it!" valid in language design.)

Putting your other arguments aside, I am not convinced by the grep argument.
With the syntax as it is today, I use `grep 'impl.*Clone'` if I want to find
Clone impls. Yes, it can match more than just Clone impls. But that's true too
even with this change. At the very least, any sort of multiline comment or
string can contain text that matches even the most rigorously specified grep.
The only way to truly guarantee you're only matching real impls is to actually
parse the file with a real parser.

-Kevin


Re: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax

2014-02-01 Thread Scott Lawrence

May as well throw my 2 cents in.

This is a pretty nice idea (I've always found 'impl<T>' to be particularly 
confusing anyway). It does lose a nice property, though. Previously, there 
was a nice parallelism between

struct Foo<T>

and

let foo: Foo<T>

and so the syntax was quite obvious for beginners. The extra complexity of 
forall kills this. Of course, one could write

forall<T:K,L> struct Foo<T> {

but that's just ugly.

On Sat, 1 Feb 2014, Corey Richardson wrote:


Hey all,

bjz and I have worked out a nice proposal[0] for a slight syntax
change, reproduced here. It is a breaking change to the syntax, but it
is one that I think brings many benefits.

Summary
===

Change the following syntax:

```
struct Foo<T, U> { ... }
impl<T, U> Trait<T> for Foo<T, U> { ... }
fn foo<T, U>(...) { ... }
```

to:

```
forall<T, U> struct Foo { ... }
forall<T, U> impl Trait<T> for Foo<T, U> { ... }
forall<T, U> fn foo(...) { ... }
```

The Problem
===

The immediate, and most pragmatic, problem is that in today's Rust one cannot
easily search for implementations of a trait. Why? `grep 'impl Clone'` is
itself not sufficient, since many types have parametric polymorphism. Now I
need to come up with some sort of regex that can handle this. An easy
first-attempt is `grep 'impl(<.*?>)? Clone'` but that is quite inconvenient to
type and remember. (Here I ignore the issue of tooling, as I do not find the
argument of "But a tool can do it!" valid in language design.)

A deeper, more pedagogical problem, is the mismatch between how `struct
Foo<...> { ... }` is read and how it is actually treated. The straightforward,
left-to-right reading says "There is a struct Foo which, given the types ...,
has the members ..." This might lead one to believe that `Foo` is a single
type, but it is not. `Foo<int>` (that is, type `Foo` instantiated with type
`int`) is not the same type as `Foo<uint>` (that is, type `Foo` instantiated
with type `uint`). Of course, with a small amount of experience or a very
simple explanation, that becomes obvious.

Something less obvious is the treatment of functions. What does `fn
foo<...>(...) { ... }` say? "There is a function foo which, given types ...
and arguments ..., does the following computation: ..." is not very adequate.
It leads one to believe there is a *single* function `foo`, whereas there is
actually a single `foo` for every substitution of type parameters! This also
holds for implementations (both of traits and of inherent methods).

Another minor problem is that nicely formatting long lists of type parameters
or type parameters with many bounds is difficult.

Proposed Solution
=

Introduce a new keyword, `forall`. This choice of keyword reads very well and
will not conflict with any identifiers in code which follows the [style
guide](https://github.com/mozilla/rust/wiki/Note-style-guide).

Change the following declarations from

```
struct Foo<T, U> { ... }
impl<T, U> Trait<T> for Foo<T, U> { ... }
fn foo<T, U>(...) { ... }
```

to:

```
forall<T, U> struct Foo { ... }
forall<T, U> impl Trait<T> for Foo<T, U> { ... }
forall<T, U> fn foo(...) { ... }
```

These read very well: "for all types T and U, there is a struct Foo ...", "for
all types T and U, there is a function foo ...", etc. These reflect that there
are in fact multiple functions `foo` and structs `Foo` and implementations of
`Trait`, due to monomorphization.


[0]: 
http://cmr.github.io/blog/2014/02/01/polymorphic-declaration-syntax-in-rust/
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev



--
Scott Lawrence
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax

2014-02-01 Thread Benjamin Striegel
First of all, why a new keyword? Reusing `for` here would be totally
unambiguous. :P And also save us from creating the precedent of multi-word
keywords.

Secondly, currently Rust has a philosophy of use-follows-declaration (i.e.
the syntax for using something mirrors the syntax for declaring it). This
would eliminate that.

Thirdly, I've actually been thinking about something like this for quite a
while. The reason is that our function signatures are LOOONG, and I've
always thought that it would be great to be able to declare the type
parameters above the function, in an attribute or something. But you could
just as easily split after your closing > for the same effect. If people
are fine with ditching use-follows-declaration, then this could be pretty
nice.
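
A sketch of that formatting (the function name and bounds are made up, just to
show the break after the closing >):

```
fn transmogrify<T: Clone + Eq, U: Clone>
    (first: T, second: U) -> (T, U) {
    (first.clone(), second)
}
```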


On Sat, Feb 1, 2014 at 5:39 PM, Corey Richardson co...@octayn.net wrote:

 Hey all,

 bjz and I have worked out a nice proposal[0] for a slight syntax
 change, reproduced here. It is a breaking change to the syntax, but it
 is one that I think brings many benefits.

 Summary
 ===

 Change the following syntax:

 ```
 struct Foo<T, U> { ... }
 impl<T, U> Trait<T> for Foo<T, U> { ... }
 fn foo<T, U>(...) { ... }
 ```

 to:

 ```
 forall<T, U> struct Foo { ... }
 forall<T, U> impl Trait<T> for Foo<T, U> { ... }
 forall<T, U> fn foo(...) { ... }
 ```

 The Problem
 ===

 The immediate, and most pragmatic, problem is that in today's Rust one
 cannot
 easily search for implementations of a trait. Why? `grep 'impl Clone'` is
 itself not sufficient, since many types have parametric polymorphism. Now I
 need to come up with some sort of regex that can handle this. An easy
 first-attempt is `grep 'impl(<.*?>)? Clone'` but that is quite
 inconvenient to
 type and remember. (Here I ignore the issue of tooling, as I do not find
 the
 argument of "But a tool can do it!" valid in language design.)

 A deeper, more pedagogical problem, is the mismatch between how `struct
 Foo<...> { ... }` is read and how it is actually treated. The
 straightforward,
 left-to-right reading says "There is a struct Foo which, given the types
 ...,
 has the members ...." This might lead one to believe that `Foo` is a single
 type, but it is not. `Foo<int>` (that is, type `Foo` instantiated with type
 `int`) is not the same type as `Foo<uint>` (that is, type `Foo`
 instantiated
 with type `uint`). Of course, with a small amount of experience or a very
 simple explanation, that becomes obvious.

 Something less obvious is the treatment of functions. What does `fn
 foo<...>(...) { ... }` say? "There is a function foo which, given types ...
 and arguments ..., does the following computation: ..." is not very
 adequate.
 It leads one to believe there is a *single* function `foo`, whereas there
 is
 actually a single `foo` for every substitution of type parameters! This
 also
 holds for implementations (both of traits and of inherent methods).

 Another minor problem is that nicely formatting long lists of type
 parameters
 or type parameters with many bounds is difficult.

 Proposed Solution
 =

 Introduce a new keyword, `forall`. This choice of keyword reads very well
 and
 will not conflict with any identifiers in code which follows the [style
 guide](https://github.com/mozilla/rust/wiki/Note-style-guide).

 Change the following declarations from

 ```
 struct Foo<T, U> { ... }
 impl<T, U> Trait<T> for Foo<T, U> { ... }
 fn foo<T, U>(...) { ... }
 ```

 to:

 ```
 forall<T, U> struct Foo { ... }
 forall<T, U> impl Trait<T> for Foo<T, U> { ... }
 forall<T, U> fn foo(...) { ... }
 ```

 These read very well. for all types T and U, there is a struct Foo ...,
 for
 all types T and U, there is a function foo ..., etc. These reflect that
 there
 are in fact multiple functions `foo` and structs `Foo` and implementations
 of
 `Trait`, due to monomorphization.


 [0]:
 http://cmr.github.io/blog/2014/02/01/polymorphic-declaration-syntax-in-rust/
 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


[rust-dev] Replacement for #[link_args]

2014-02-01 Thread Cadence Marseille
Hello,

It seems that support for #[link_args] was recently removed (even with
#[feature(link_args)]), so now the -L argument is not being passed to the
linker command:
https://travis-ci.org/cadencemarseille/rust-pcre/builds/18054206

How do you specify a library directory when building a package with rustpkg?

Cadence
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax

2014-02-01 Thread Tim Kuehn
On Sat, Feb 1, 2014 at 3:06 PM, Benjamin Striegel ben.strie...@gmail.comwrote:

 Another point in favor of this plan is that it would eliminate the need to
 put type parameters directly after the `impl`, which to be honest *is*
 pretty weird and inconsistent with the rest of the language. But I'm still
 not sure how I feel about the look of it:

 for<T: Clone+Eq, U> fn foo(t: T, u: U) -> (T, U) {

 If you choose *not* to wrap after the type parameters there, you're really
 obscuring what the heck you're trying to declare.

 Heck, maybe what we're really asking for is for the ability to have
 generic blocks within which type parameters can be declared once:

 for<T: Clone+Eq, U> {
 fn foo(t: T, u: U) -> (T, U) {

 ...but that's even *more* boilerplate!

 It'd mirror how impls work, though, which would be nice. It could be
optional.










 On Sat, Feb 1, 2014 at 5:59 PM, Benjamin Striegel 
 ben.strie...@gmail.comwrote:

  Yes, and I don't have a solution for that.

 Well, it's not like we don't already stumble here a bit, what with
 requiring ::<> instead of just <>. Not sure how much other people value the
 consistency here.


 On Sat, Feb 1, 2014 at 5:58 PM, Corey Richardson co...@octayn.netwrote:

 On Sat, Feb 1, 2014 at 5:55 PM, Benjamin Striegel
 ben.strie...@gmail.com wrote:
  First of all, why a new keyword? Reusing `for` here would be totally
  unambiguous. :P And also save us from creating the precedent of
 multi-word
  keywords.
 

 I'd be equally happy with for instead of forall.

  Secondly, currently Rust has a philosophy of use-follows-declaration
 (i.e.
  the syntax for using something mirrors the syntax for declaring it).
 This
  would eliminate that.
 

 Yes, and I don't have a solution for that.




 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax

2014-02-01 Thread Vladimir Lushnikov
Placing type bounds before the name of the thing you are trying to declare
feels unnatural to me. And the generic block is far too much boilerplate!

How about supporting type aliases as Scala does? So you write:

  type MyT = Clone + Eq
  fn foo<MyT, U>(t: T, u: U) -> …

and the 'type' is just an alias for any type that has Clone + Eq?

Obviously the above only solves repeating yourself for complex type
constraints.

Also, reusing 'for' would be confusing as well, because you expect a loop
there, not a generic type bound. How about 'any':

any<T: Clone + Eq, U> fn foo (t: T, u: U) -> …

?


On Sat, Feb 1, 2014 at 11:06 PM, Benjamin Striegel
ben.strie...@gmail.comwrote:

 Another point in favor of this plan is that it would eliminate the need to
 put type parameters directly after the `impl`, which to be honest *is*
 pretty weird and inconsistent with the rest of the language. But I'm still
 not sure how I feel about the look of it:

 for<T: Clone+Eq, U> fn foo(t: T, u: U) -> (T, U) {

 If you choose *not* to wrap after the type parameters there, you're really
 obscuring what the heck you're trying to declare.

 Heck, maybe what we're really asking for is for the ability to have
 generic blocks within which type parameters can be declared once:

 for<T: Clone+Eq, U> {
 fn foo(t: T, u: U) -> (T, U) {

 ...but that's even *more* boilerplate!




 On Sat, Feb 1, 2014 at 5:59 PM, Benjamin Striegel 
 ben.strie...@gmail.comwrote:

  Yes, and I don't have a solution for that.

 Well, it's not like we don't already stumble here a bit, what with
 requiring ::<> instead of just <>. Not sure how much other people value the
 consistency here.


 On Sat, Feb 1, 2014 at 5:58 PM, Corey Richardson co...@octayn.netwrote:

 On Sat, Feb 1, 2014 at 5:55 PM, Benjamin Striegel
 ben.strie...@gmail.com wrote:
  First of all, why a new keyword? Reusing `for` here would be totally
  unambiguous. :P And also save us from creating the precedent of
 multi-word
  keywords.
 

 I'd be equally happy with for instead of forall.

  Secondly, currently Rust has a philosophy of use-follows-declaration
 (i.e.
  the syntax for using something mirrors the syntax for declaring it).
 This
  would eliminate that.
 

 Yes, and I don't have a solution for that.




 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax

2014-02-01 Thread Benjamin Striegel
 How about supporting type aliases as Scala does?

In theory I think that should be achievable today, using trait inheritance:

trait MyT : Clone, Eq {}

...at least, I *think* we allow multiple trait inheritance. Not sure what
the syntax is!
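
A minimal sketch of that spelling, assuming the supertraits are joined with
`+` and the body is left empty (the blanket impl is added here only so that
existing types pick the alias up; it is not part of the original suggestion):

```
// Any type implementing MyT must also implement Clone and Eq.
trait MyT: Clone + Eq {}

// Blanket impl: every Clone + Eq type is automatically MyT.
impl<T: Clone + Eq> MyT for T {}

// The alias can then stand in for the compound bound.
fn foo<T: MyT, U>(t: T, u: U) -> (T, U) { (t, u) }
```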


On Sat, Feb 1, 2014 at 6:12 PM, Vladimir Lushnikov 
vladi...@slate-project.org wrote:

 Placing type bounds before the name of the thing you are trying to declare
 feels unnatural to me. And the generic block is far too much boilerplate!

 How about supporting type aliases as Scala does? So you write:

   type MyT = Clone + Eq
   fn foo<MyT, U>(t: T, u: U) -> ...

 and the 'type' is just an alias for any type that has Clone + Eq?

 Obviously the above only solves repeating yourself for complex type
 constraints.

 Also, reusing 'for' would be confusing as well, because you expect a loop
 there, not a generic type bound. How about 'any':

 any<T: Clone + Eq, U> fn foo (t: T, u: U) -> ...

 ?


 On Sat, Feb 1, 2014 at 11:06 PM, Benjamin Striegel ben.strie...@gmail.com
  wrote:

 Another point in favor of this plan is that it would eliminate the need
 to put type parameters directly after the `impl`, which to be honest *is*
 pretty weird and inconsistent with the rest of the language. But I'm still
 not sure how I feel about the look of it:

 for<T: Clone+Eq, U> fn foo(t: T, u: U) -> (T, U) {

 If you choose *not* to wrap after the type parameters there, you're
 really obscuring what the heck you're trying to declare.

 Heck, maybe what we're really asking for is for the ability to have
 generic blocks within which type parameters can be declared once:

 for<T: Clone+Eq, U> {
 fn foo(t: T, u: U) -> (T, U) {

 ...but that's even *more* boilerplate!




 On Sat, Feb 1, 2014 at 5:59 PM, Benjamin Striegel ben.strie...@gmail.com
  wrote:

  Yes, and I don't have a solution for that.

 Well, it's not like we don't already stumble here a bit, what with
 requiring ::<> instead of just <>. Not sure how much other people value the
 consistency here.


 On Sat, Feb 1, 2014 at 5:58 PM, Corey Richardson co...@octayn.netwrote:

 On Sat, Feb 1, 2014 at 5:55 PM, Benjamin Striegel
 ben.strie...@gmail.com wrote:
  First of all, why a new keyword? Reusing `for` here would be totally
  unambiguous. :P And also save us from creating the precedent of
 multi-word
  keywords.
 

 I'd be equally happy with for instead of forall.

  Secondly, currently Rust has a philosophy of use-follows-declaration
 (i.e.
  the syntax for using something mirrors the syntax for declaring it).
 This
  would eliminate that.
 

 Yes, and I don't have a solution for that.




 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev



___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax

2014-02-01 Thread Scott Lawrence

It seems to use a '+' instead of ','.

On Sat, 1 Feb 2014, Benjamin Striegel wrote:


How about supporting type aliases as Scala does?


In theory I think that should be achievable today, using trait inheritance:

   trait MyT : Clone, Eq {}

...at least, I *think* we allow multiple trait inheritance. Not sure what
the syntax is!


On Sat, Feb 1, 2014 at 6:12 PM, Vladimir Lushnikov 
vladi...@slate-project.org wrote:


Placing type bounds before the name of the thing you are trying to declare
feels unnatural to me. And the generic block is far too much boilerplate!

How about supporting type aliases as Scala does? So you write:

  type MyT = Clone + Eq
  fn foo<MyT, U>(t: T, u: U) -> ...

and the 'type' is just an alias for any type that has Clone + Eq?

Obviously the above only solves repeating yourself for complex type
constraints.

Also, reusing 'for' would be confusing as well, because you expect a loop
there, not a generic type bound. How about 'any':

any<T: Clone + Eq, U> fn foo (t: T, u: U) -> ...

?


On Sat, Feb 1, 2014 at 11:06 PM, Benjamin Striegel ben.strie...@gmail.com

wrote:



Another point in favor of this plan is that it would eliminate the need
to put type parameters directly after the `impl`, which to be honest *is*
pretty weird and inconsistent with the rest of the language. But I'm still
not sure how I feel about the look of it:

for<T: Clone+Eq, U> fn foo(t: T, u: U) -> (T, U) {

If you choose *not* to wrap after the type parameters there, you're
really obscuring what the heck you're trying to declare.

Heck, maybe what we're really asking for is for the ability to have
generic blocks within which type parameters can be declared once:

for<T: Clone+Eq, U> {
fn foo(t: T, u: U) -> (T, U) {

...but that's even *more* boilerplate!




On Sat, Feb 1, 2014 at 5:59 PM, Benjamin Striegel ben.strie...@gmail.com

wrote:



Yes, and I don't have a solution for that.


Well, it's not like we don't already stumble here a bit, what with
requiring ::<> instead of just <>. Not sure how much other people value the
consistency here.


On Sat, Feb 1, 2014 at 5:58 PM, Corey Richardson co...@octayn.netwrote:


On Sat, Feb 1, 2014 at 5:55 PM, Benjamin Striegel
ben.strie...@gmail.com wrote:

First of all, why a new keyword? Reusing `for` here would be totally
unambiguous. :P And also save us from creating the precedent of

multi-word

keywords.



I'd be equally happy with for instead of forall.


Secondly, currently Rust has a philosophy of use-follows-declaration

(i.e.

the syntax for using something mirrors the syntax for declaring it).

This

would eliminate that.



Yes, and I don't have a solution for that.






___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev








--
Scott Lawrence
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax

2014-02-01 Thread Corey Richardson
On Sat, Feb 1, 2014 at 6:12 PM, Vladimir Lushnikov
vladi...@slate-project.org wrote:
 Also, reusing 'for' would be confusing as well, because you expect a loop
 there, not a generic type bound. How about 'any':


any is a super useful identifier and is already used. I do not want to
reserve it.
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax

2014-02-01 Thread Eric Summers

 ```
 forall<T, U> struct Foo { ... }
 forall<T, U> impl Trait<T> for Foo<T, U> { ... }
 forall<T, U> fn foo(...) { ... }
 ```


I’m new to rust, so maybe this doesn’t make sense, but would it make sense to 
have a variation of this syntax to make implementing related traits and 
functions more DRY?  Essentially allow the for all to be shared.  While I’ve 
been skimming code to learn Rust, I noticed trait restrictions in particular 
seem to be repeated a lot in functions and traits that are related to each 
other.

forall<T: ByteStream, U> {
impl BinaryEncoder<T> for MyStruct<U> { … }
impl BinaryDecoder<T> for MyStruct<U> { … }
}

I also like how it breaks across lines:

forall<T, U>
struct Foo {
...
}

It looks like someone else suggested this while I was typing, but I like the 
aesthetics of it.

-Eric
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax

2014-02-01 Thread Corey Richardson
On Sat, Feb 1, 2014 at 6:24 PM, Eric Reed ecr...@cs.washington.edu wrote:
 Responses inlined.


 Hey all,

 bjz and I have worked out a nice proposal[0] for a slight syntax
 change, reproduced here. It is a breaking change to the syntax, but it
 is one that I think brings many benefits.

 Summary
 ===

 Change the following syntax:

 ```
  struct Foo<T, U> { ... }
  impl<T, U> Trait<T> for Foo<T, U> { ... }
  fn foo<T, U>(...) { ... }
 ```

 to:

 ```
  forall<T, U> struct Foo { ... }
  forall<T, U> impl Trait<T> for Foo<T, U> { ... }
  forall<T, U> fn foo(...) { ... }
 ```

 The Problem
 ===

 The immediate, and most pragmatic, problem is that in today's Rust one
 cannot
 easily search for implementations of a trait. Why? `grep 'impl Clone'` is
 itself not sufficient, since many types have parametric polymorphism. Now
 I
 need to come up with some sort of regex that can handle this. An easy
  first-attempt is `grep 'impl(<.*?>)? Clone'` but that is quite
 inconvenient to
 type and remember. (Here I ignore the issue of tooling, as I do not find
 the
  argument of "But a tool can do it!" valid in language design.)


 I think what I've done in the past was just `grep impl | grep Clone'.


 A deeper, more pedagogical problem, is the mismatch between how `struct
  Foo<...> { ... }` is read and how it is actually treated. The
  straightforward,
  left-to-right reading says "There is a struct Foo which, given the types
  ...,
  has the members ...." This might lead one to believe that `Foo` is a
  single
  type, but it is not. `Foo<int>` (that is, type `Foo` instantiated with
  type
  `int`) is not the same type as `Foo<uint>` (that is, type `Foo`
 instantiated
 with type `uint`). Of course, with a small amount of experience or a very
 simple explanation, that becomes obvious.


 I strongly disagree with this reasoning.
 There IS only one type Foo. It's a type constructor with kind * -> * (where
 * means proper type).
 Foo<int> and Foo<uint> are two different applications of Foo and are proper
 types (i.e. *) because Foo is * -> * and both int and uint are *.
 Regarding people confusing Foo, Foo<int> and Foo<uint>, I think the proposed
 forall<T> struct Foo {...} syntax is actually more confusing.
 With the current syntax, it's never legal to write Foo without type
 parameters, but with the proposed syntax it would be.


I've yet to see a proposal for HKT, but with them that interpretation
would be valid and indeed make this proposal's argument weaker.


 Something less obvious is the treatment of functions. What does `fn
  foo<...>(...) { ... }` say? "There is a function foo which, given types
  ...
  and arguments ..., does the following computation: ..." is not very
 adequate.
 It leads one to believe there is a *single* function `foo`, whereas there
 is
 actually a single `foo` for every substitution of type parameters! This
 also
 holds for implementations (both of traits and of inherent methods).


 Again, I strongly disagree here.
 There IS only one function foo. Some of its arguments are types. foo's
 behavior *does not change* based on the type parameters because of
 parametricity.
 That the compiler monomorphizes generic functions is just an implementation
 detail and doesn't change the semantics of the function.


It can if it uses Any, size_of, etc. eddyb had integers in the
typesystem by using size_of and [u8, ..N]. Anything using the
properties of types or the tydescs *will* change for each
instantiation.
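
For illustration (not from the thread), a small sketch of a generic function
whose observable behaviour really does depend on the instantiated type, via
mem::size_of:

```
use std::mem;

// Behaviour varies per instantiation because the body inspects T's size.
fn describe<T>() -> &'static str {
    if mem::size_of::<T>() == 0 { "zero-sized" } else { "has a payload" }
}

fn main() {
    println!("{}", describe::<()>());  // zero-sized
    println!("{}", describe::<u64>()); // has a payload
}
```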


 Another minor problem is that nicely formatting long lists of type
 parameters
 or type parameters with many bounds is difficult.


 I'm not sure how this proposal would address this problem. All of your
 proposed examples are longer than the current syntax equivalents.


The idea is there is an obvious place to insert a newline (after the
forall), though bjz would have to comment more on that.
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax

2014-02-01 Thread Brian Anderson

On 02/01/2014 02:59 PM, Benjamin Striegel wrote:

 Yes, and I don't have a solution for that.

Well, it's not like we don't already stumble here a bit, what with 
requiring ::<> instead of just <>. Not sure how much other people 
value the consistency here.


Yeah, the existing solution is bad, and also rare. If changing the 
declaration might happen then you might as well make another minor 
change for consistency, possibly for the better.





On Sat, Feb 1, 2014 at 5:58 PM, Corey Richardson co...@octayn.net 
mailto:co...@octayn.net wrote:


On Sat, Feb 1, 2014 at 5:55 PM, Benjamin Striegel
ben.strie...@gmail.com mailto:ben.strie...@gmail.com wrote:
 First of all, why a new keyword? Reusing `for` here would be totally
 unambiguous. :P And also save us from creating the precedent of
multi-word
 keywords.


I'd be equally happy with for instead of forall.

 Secondly, currently Rust has a philosophy of
use-follows-declaration (i.e.
 the syntax for using something mirrors the syntax for declaring
it). This
 would eliminate that.


Yes, and I don't have a solution for that.




___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax

2014-02-01 Thread Corey Richardson
On Sat, Feb 1, 2014 at 6:31 PM, Corey Richardson co...@octayn.net wrote:
 On Sat, Feb 1, 2014 at 6:24 PM, Eric Reed ecr...@cs.washington.edu wrote:
 Again, I strongly disagree here.
 There IS only one function foo. Some of its arguments are types. foo's
 behavior *does not change* based on the type parameters because of
 parametricity.
 That the compiler monomorphizes generic functions is just an implementation
 detail and doesn't change the semantics of the function.


 It can if it uses Any, size_of, etc. eddyb had integers in the
 typesystem by using size_of and [u8, ..N]. Anything using the
 properties of types or the tydescs *will* change for each
 instantiation.


Furthermore, I don't consider monomorphic instantiation to be an
implementation detail. Without it the difference between trait objects
and generics is nonsensical, and iirc there's code that depends on the
addresses of different instantiations being different (though I might
be confusing that with statics). It's also important to understanding
the performance characteristics of Rust, esp binary size and why
metadata is so huge.

It's a vital detail to understanding Rust, and any use of it needs to
consider it. If it is indeed considered an implementation detail, it's
probably the most important implementation detail I've seen in
anything. Given Rust's target market, it'd be irresponsible to ignore
it...
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Syntax for custom type bounds

2014-02-01 Thread Vadim
On Sat, Feb 1, 2014 at 4:57 AM, Niko Matsakis n...@alum.mit.edu wrote:

 Regarding the marker types, they are somewhat awkward, and are not the
 approach I originally favored. But they have some real advantages:

 - Easily extensible as we add new requirements, unlike syntax.
 - Easily documented.
 - These bounds are only used for unsafe code, so it's not something
   ordinary users should have to stumble over.

 What concerns me more is that marker types are opt in -- so if you
 don't know that you need them, and you build a datatype founded on
 unsafe code, you can get incorrect behavior. There may be some steps
 we can take to mitigate that in some cases.

 In any case, the use of marker types are also quite orthogonal to your
 other concerns:


I meant that marker types seem like more of the same approach that was
taken with lifetimes in iterators (i.e. declaring a dummy field).  I don't
have a firm opinion about what syntax I'd prefer for markers in general,
but I do have some ideas about lifetimes, which are probably the most
commonly used type bound in Rust.



  This also makes the intent much more clear.   Currently, one would have
 to
  dig into the definition of MutItems<'a, T> to figure out that the lifetime
  parameter 'a is used to create a dummy borrow field into the vector, so
  that the whole iterator is then treated as a mutable borrow.   This feels
  very convoluted, if you ask me.

 I disagree -- I think treating lifetime and type parameters uniformly
 feels cleaner than permitting lifetime bounds to appear in random
 places. Presumably `&'a Foo<T>` would be syntactic sugar for `Foo<'a, T>`?
 There's an obvious ambiguity here with `&'a T`.


Since &'a Foo<T> currently means the return value is a reference into
something that has lifetime 'a,  &'a Foo<T> feels like a natural extension
for saying the return value is a reference-like thing whose safety depends
on something that has lifetime 'a still being around.
Foo<'a,T>, on the other hand... it is not obvious to me why it would
necessarily mean that.   Is this because the only way to use a lifetime
parameter in a type is to create a reference field into something with that
lifetime?  If so, it feels like one logical deduction too many for the
reader of the code to make.

And what if the lifetime parameter isn't used at all?   After all, I can do
that with regular type parameters (i.e. declare, but not use).  Then
Foo<'a,T> would only appear as having lifetime 'a, without actually being
so?


 On a slightly different note, is there a strong reason for having to name
  lifetime parameters explicitly?   Could we simply re-use actual parameter
  names prefixed with ' as their lifetimes?

 It is plausible we could say that a lifetime parameter name that is
 the same as a local variable binding whose type is a borrowed pointer
 refers to the lifetime of that borrowed pointer. To me, it feels like
 a rather ad hoc rule, though I can see it would sometimes be convenient.

 The current rules are intended to showcase how lifetime parameters work
 precisely like type parameters. In other words, we write:

 fn copy<T>(t: T) -> T;

 we do not write:

 fn copy(t) -> t;

 In the same way, we identify and declare lifetime parameters.

 Note that lifetime parameters do not have a natural one-to-one
 relationship with variables. It's certainly possible (and reasonable)
 to declare a function like:

 fn foo<'a, 'b, 'c>(x: &'a Foo<'b, 'c>)

 In which case, the meaning of `'x` is pretty unclear to me.


I'd like it to mean the lifetime of whatever x points to, i.e. 'x == 'a.
I realize that this is somewhat problematic, because x itself is the
reference, not something it points to, but... you know, because
auto-dereferencing...  :-)


  The above could then be reduced to this:
 
  pub trait MutableVector<T> {
  fn mut_iter(self) -> &'self MutItems<T>;
  ...
  }
 
  This used to be valid syntax, btw, though it worked because 'self
 lifetime
  was special, IIRC.

 Writing `'self` was valid syntax, but it didn't have the meaning you
 are proposing. Which is one of the reasons we removed it.


I've been around Rust for almost a year now,  and certainly since the time
the current lifetime notation has been introduced, and I *still* could not
explain to somebody, why a lifetime parameter appearing among the type
parameters of a trait or a struct refers to the lifetime of that trait or
struct.  It isn't used to declare any reference fields...  (and traits
can't have fields, of course).

I'd understand if the above example had to be written as:

pub trait MutableVector<T> {
 fn mut_iter<'a>(&'a mut self) -> &'a MutItems<T>;
 ...
  }
But the current notation completely evades me.


Regarding 'self:  Ok, say what you want about reusing parameter names for
lifetimes in general, but having syntax sugar for the lifetime of the
current struct was totally worth it, IMHO.
We already have sugar for self, or else we'd be writing trait Foo<T> {
fn 

Re: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax

2014-02-01 Thread comex
On Sat, Feb 1, 2014 at 5:39 PM, Corey Richardson co...@octayn.net wrote:
 A deeper, more pedagogical problem, is the mismatch between how `struct
 Foo<...> { ... }` is read and how it is actually treated. The straightforward,
 left-to-right reading says "There is a struct Foo which, given the types ...,
 has the members ...."

I read struct Foo<...> { ... } the same way as fn foo(...) -> ...
 In the latter case, given some value parameters, I get a return
value; in the former, given some type parameters, I get a struct.  On
the contrary, I would find the idea that forall<T> fn is specified
with fn::<T>, like in C++ (template <typename T>) relatively
confusing.

For bulk generics I would rather have the concept of a generic module
than just copy the generic arguments onto each item.  (For that
matter, I think I'd like to have traits on modules or something like
that, so that you can have something like a list trait which comes
with a type for the list and a type for its iterator, without having
to write two generic parameters on everything.  But that's a different
story.)

Also, I think the syntax for generics is verbose enough as it is; I'd
rather see it shortened than lengthened.
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax

2014-02-01 Thread Eric Summers

 
 forall<T: ByteStream, U> {
   impl BinaryEncoder<T> for MyStruct<U> { … }
   impl BinaryDecoder<T> for MyStruct<U> { … }
 }

comex mentioned the idea of a generic module.  That would be interesting.  I 
like that idea better than this.

 
 I also like how it breaks across lines:
 
 forall<T, U>
 struct Foo {
 …
 }
 

I guess it currently breaks ok for long type params:
impl<T, U>
Trait<T>
for Foo<T,U> {
...
}

I think the grep issue will be solved by libsyntax being integrated in text 
editor plugins.

-Eric
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax

2014-02-01 Thread Armin Ronacher

Hi,

On 01/02/2014 22:58, Corey Richardson wrote:

I'd be equally happy with for instead of forall.
+1 on not using forall, it sounds confusing and is actually quite a bit to type 
considering how frequent these things are.


As an alternative to "for" I would want to throw "be" and "use" into the mix. 
"be" is currently already reserved, but not sure how well it sounds.



Regards,
Armin

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax

2014-02-01 Thread Nick Cameron
I prefer the existing syntax.

- As pointed out above, there are solutions to the non-grep-ability.

- The change adds boilerplate and nomenclature that is likely unfamiliar to
our target audience - 'for all' is well known to functional programmers,
but I believe that is not true for most users of C++ (or Java). Being
closer to the C++/Java syntax for generics is probably more 'intuitive'.

- I do not think generics imply polymorphic implementation - C++
programmers are used to generics having monomorphic implementations.

- Starting an item definition with the most important keyword is nice and
we would lose that - currently it is easy to scan down the start of lines
and see that something is a fn, impl, struct, etc. With the change you just
see that something is generic or not, which is not what you are interested
in when scanning. Put another way, I believe this change prioritises
automatic search (grep, which can be fixed) over visual search (which
cannot).

(I do agree that formatting lists of type params is difficult)

Cheers, Nick


On Sun, Feb 2, 2014 at 11:39 AM, Corey Richardson co...@octayn.net wrote:

 Hey all,

 bjz and I have worked out a nice proposal[0] for a slight syntax
 change, reproduced here. It is a breaking change to the syntax, but it
 is one that I think brings many benefits.

 Summary
 ===

 Change the following syntax:

 ```
 struct FooT, U { ... }
 implT, U TraitT for FooT, U { ... }
 fn fooT, U(...) { ... }
 ```

 to:

 ```
 forallT, U struct Foo { ... }
 forallT, U impl TraitT for FooT, U { ... }
 forallT, U fn foo(...) { ... }
 ```

 The Problem
 ===

 The immediate, and most pragmatic, problem is that in today's Rust one
 cannot
 easily search for implementations of a trait. Why? `grep 'impl Clone'` is
 itself not sufficient, since many types have parametric polymorphism. Now I
 need to come up with some sort of regex that can handle this. An easy
 first-attempt is `grep 'impl(<.*?>)? Clone'` but that is quite
 inconvenient to
 type and remember. (Here I ignore the issue of tooling, as I do not find
 the
 argument of "But a tool can do it!" valid in language design.)

 A deeper, more pedagogical problem, is the mismatch between how `struct
 Foo<...> { ... }` is read and how it is actually treated. The
 straightforward,
 left-to-right reading says "There is a struct Foo which, given the types
 ...,
 has the members ...." This might lead one to believe that `Foo` is a single
 type, but it is not. `Foo<int>` (that is, type `Foo` instantiated with type
 `int`) is not the same type as `Foo<uint>` (that is, type `Foo`
 instantiated
 with type `uint`). Of course, with a small amount of experience or a very
 simple explanation, that becomes obvious.

 Something less obvious is the treatment of functions. What does `fn
 foo<...>(...) { ... }` say? "There is a function foo which, given types ...
 and arguments ..., does the following computation: ..." is not very
 adequate.
 It leads one to believe there is a *single* function `foo`, whereas there
 is
 actually a single `foo` for every substitution of type parameters! This
 also
 holds for implementations (both of traits and of inherent methods).

 Another minor problem is that nicely formatting long lists of type
 parameters
 or type parameters with many bounds is difficult.

 Proposed Solution
 =

 Introduce a new keyword, `forall`. This choice of keyword reads very well
 and
 will not conflict with any identifiers in code which follows the [style
 guide](https://github.com/mozilla/rust/wiki/Note-style-guide).

 Change the following declarations from

 ```
 struct Foo<T, U> { ... }
 impl<T, U> Trait<T> for Foo<T, U> { ... }
 fn foo<T, U>(...) { ... }
 ```

 to:

 ```
 forall<T, U> struct Foo { ... }
 forall<T, U> impl Trait<T> for Foo<T, U> { ... }
 forall<T, U> fn foo(...) { ... }
 ```

 These read very well. for all types T and U, there is a struct Foo ...,
 for
 all types T and U, there is a function foo ..., etc. These reflect that
 there
 are in fact multiple functions `foo` and structs `Foo` and implementations
 of
 `Trait`, due to monomorphization.


 [0]:
 http://cmr.github.io/blog/2014/02/01/polymorphic-declaration-syntax-in-rust/
 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] let mut - let !

2014-02-01 Thread Steve Klabnik
We've been steadily reducing the amount of punctuation in the
language, because people tend not to like it. Plus, in this case,
`mut` being longer than `!` or any other symbol is useful: mutability
should be a teeny bit painful.
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax

2014-02-01 Thread Eric Reed
I'm going to respond to Any and size_of separately because there's a
significant difference IMO.

It's true that Any and trait bounds on type parameters in general can let
function behavior depend on the passed type, but only in the specific
behavior defined by the trait. Everything that's not a trait function is
still independent of the passed type (contrast this with a setup where this
wasn't true. `fn foo<A>() -> int' could return 2i for int and spin up a
tetris game then crash for uint). Any just happens to be powerful enough to
allow complete variance, which is expected since it's just dynamic typing,
but there's an important distinction still: behavior variance because of
Any *is* part of the function because you need to do explicit type tests.

I wasn't aware of mem::size_of before, but I'm rather annoyed to find out
we've started adding bare A -> B functions since it breaks parametricity.
I'd much rather put size_of in a trait, at which point it's just a weaker
version of Any.
Being able to tell how a function's behavior might vary just from the type
signature is a very nice property, and I'd like Rust to keep it.

Now, onto monomorphization.
I agree that distinguishing static and dynamic dispatch is important for
performance characterization, but static dispatch != monomorphization (or
if it currently does, then it probably shouldn't) because not all
statically dispatched code needs to be monomorphizied. Consider a function
like this:

fn foo<A, B>(ox: Option<~A>, f: |~A| -> ~B) -> Option<~B> {
    match ox {
        Some(x) => Some(f(x)),
        None => None,
    }
}

It's quite generic, but AFAIK there's no need to monomorphize it for static
dispatch. It uses a constant amount of stack space (not counting what `f'
uses when called) and could run the exact same code for any types A or B
(check discriminant, potentially call a function pointer, and return). I
would guess most cases require monomorphization, but I consider universal
monomorphization a way of implementing static dispatch (as opposed to
partial monomorphization).
I agree that understanding monomorphization is important for understanding
the performance characteristics of code generated by *rustc*, but rustc !=
Rust.
Unless universal monomorphization for static dispatch makes its way into
the Rust language spec, I'm going to consider it an implementation detail
for rustc.
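
For contrast, a small sketch (era syntax, names invented) of the two dispatch
strategies being distinguished here:

```
trait Speak { fn speak(&self) -> ~str; }

// Statically dispatched: rustc today monomorphizes one copy per T, though
// nothing in the signature itself forces that implementation strategy.
fn greet_static<T: Speak>(s: &T) -> ~str { s.speak() }

// Dynamically dispatched: a single compiled function, called through a
// trait object's vtable.
fn greet_dynamic(s: &Speak) -> ~str { s.speak() }
```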



On Sat, Feb 1, 2014 at 3:31 PM, Corey Richardson co...@octayn.net wrote:

 On Sat, Feb 1, 2014 at 6:24 PM, Eric Reed ecr...@cs.washington.edu
 wrote:
  Responses inlined.
 
 
  Hey all,
 
  bjz and I have worked out a nice proposal[0] for a slight syntax
  change, reproduced here. It is a breaking change to the syntax, but it
  is one that I think brings many benefits.
 
  Summary
  ===
 
  Change the following syntax:
 
  ```
  struct Foo<T, U> { ... }
  impl<T, U> Trait<T> for Foo<T, U> { ... }
  fn foo<T, U>(...) { ... }
  ```
 
  to:
 
  ```
  forall<T, U> struct Foo { ... }
  forall<T, U> impl Trait<T> for Foo<T, U> { ... }
  forall<T, U> fn foo(...) { ... }
  ```
 
  The Problem
  ===
 
  The immediate, and most pragmatic, problem is that in today's Rust one
  cannot
  easily search for implementations of a trait. Why? `grep 'impl Clone'`
 is
  itself not sufficient, since many types have parametric polymorphism.
 Now
  I
  need to come up with some sort of regex that can handle this. An easy
  first-attempt is `grep 'impl(<.*?>)? Clone'` but that is quite
  inconvenient to
  type and remember. (Here I ignore the issue of tooling, as I do not find
  the
  argument of "But a tool can do it!" valid in language design.)
 
 
  I think what I've done in the past was just `grep impl | grep Clone'.
 
 
  A deeper, more pedagogical problem, is the mismatch between how `struct
  Foo<...> { ... }` is read and how it is actually treated. The
  straightforward,
  left-to-right reading says "There is a struct Foo which, given the types
  ...,
  has the members ...." This might lead one to believe that `Foo` is a
  single
  type, but it is not. `Foo<int>` (that is, type `Foo` instantiated with
  type
  `int`) is not the same type as `Foo<uint>` (that is, type `Foo`
  instantiated
  with type `uint`). Of course, with a small amount of experience or a
 very
  simple explanation, that becomes obvious.
 
 
  I strongly disagree with this reasoning.
  There IS only one type Foo. It's a type constructor with kind * -> *
 (where
  * means proper type).
  Foo<int> and Foo<uint> are two different applications of Foo and are
 proper
  types (i.e. *) because Foo is * -> * and both int and uint are *.
  Regarding people confusing Foo, Foo<int> and Foo<uint>, I think the
 proposed
  forall<T> struct Foo {...} syntax is actually more confusing.
  With the current syntax, it's never legal to write Foo without type
  parameters, but with the proposed syntax it would be.
 

 I've yet to see a proposal for HKT, but with them that interpretation
 would be valid and indeed make this proposal's argument weaker.

 
  Something less 

Re: [rust-dev] Replacement for #[link_args]

2014-02-01 Thread Jack Moffitt
This might be a recent regression of rustpkg, but rustpkg should pass
through compiler options you give it.

`rustpkg install foo -L some/path` I think is supposed to work.

If nothing else, rustc will definitely take -L arguments. Also, in
your crate source you want to annotate you extern block with
`#[link(name=...)]`.

jack.

On Sat, Feb 1, 2014 at 4:05 PM, Cadence Marseille
cadencemarsei...@gmail.com wrote:
 Hello,

 It seems that support for #[link_args] was recently removed (even with
 #[feature(link_args)]), so now the -L argument is not being passed to the
 linker command:
 https://travis-ci.org/cadencemarseille/rust-pcre/builds/18054206

 How do you specify a library directory when building a package with rustpkg?

 Cadence

 ___
 Rust-dev mailing list
 Rust-dev@mozilla.org
 https://mail.mozilla.org/listinfo/rust-dev

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax

2014-02-01 Thread Daniel Micay
On Sat, Feb 1, 2014 at 9:27 PM, Eric Reed ecr...@cs.washington.edu wrote:

 I wasn't aware of mem::size_of before, but I'm rather annoyed to find out
 we've started adding bare A -> B functions since it breaks parametricity.
 I'd much rather put size_of in a trait, at which point it's just a weaker
 version of Any.

You do realize how widely used size_of is, right? I don't think it
makes sense to say we've *started* adding this stuff when being able
to get the size/alignment has pretty much always been there.
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax

2014-02-01 Thread Patrick Walton

On 2/1/14 6:43 PM, Daniel Micay wrote:

On Sat, Feb 1, 2014 at 9:27 PM, Eric Reed ecr...@cs.washington.edu wrote:


I wasn't aware of mem::size_of before, but I'm rather annoyed to find out
we've started adding bare A -> B functions since it breaks parametricity.
I'd much rather put size_of in a trait, at which point it's just a weaker
version of Any.


You do realize how widely used size_of is, right? I don't think it
makes sense to say we've *started* adding this stuff when being able
to get the size/alignment has pretty much always been there.


`transmute()` breaks parametricity too, which is annoying to me because 
you can get C++-template-expansion-style errors in translation time 
(transmute called on types of different sizes). I proposed changing it 
to a dynamic runtime failure if the types had different sizes, which 
eliminates ad-hoc templates leaking into our trait system, but that met 
with extremely strong objections from pretty much everyone.


Patrick


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax

2014-02-01 Thread Daniel Micay
On Sat, Feb 1, 2014 at 9:46 PM, Patrick Walton pcwal...@mozilla.com wrote:
 On 2/1/14 6:43 PM, Daniel Micay wrote:

 On Sat, Feb 1, 2014 at 9:27 PM, Eric Reed ecr...@cs.washington.edu
 wrote:


 I wasn't aware of mem::size_of before, but I'm rather annoyed to find out
 we've started adding bare A -> B functions since it breaks parametricity.
 I'd much rather put size_of in a trait, at which point it's just a weaker
 version of Any.


 You do realize how widely used size_of is, right? I don't think it
 makes sense to say we've *started* adding this stuff when being able
 to get the size/alignment has pretty much always been there.


 `transmute()` breaks parametricity too, which is annoying to me because you
 can get C++-template-expansion-style errors in translation time (transmute
 called on types of different sizes). I proposed changing it to a dynamic
 runtime failure if the types had different sizes, which eliminates ad-hoc
 templates leaking into our trait system, but that met with extremely strong
 objections from pretty much everyone.

 Patrick

This could be restricted to `unsafe` code by making reflection
features `unsafe` and mandating that safe functions must compile for
types meeting the bounds.

The `size_of` functionality is absolutely required to have any hope of
writing smart pointers and containers in the library, without using ~T
and ~[T] as the sole allocators.
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax

2014-02-01 Thread Patrick Walton

On 2/1/14 6:50 PM, Daniel Micay wrote:

This could be restricted to `unsafe` code by making reflection
features `unsafe` and mandating that safe functions must compile for
types meeting the bounds.

The `size_of` functionality is absolutely required to have any hope of
writing smart pointers and containers in the library, without using ~T
and ~[T] as the sole allocators.


Oh, don't worry, I'm not proposing removing either transmute or sizeof. 
Just saying it bugs the theorist in me. :)


Patrick

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax

2014-02-01 Thread Eric Reed
Well there's only 260 uses of the string "size_of" in rustc's src/
according to grep and only 3 uses of "size_of" in servo according to
GitHub, so I think you may be overestimating its usage.

Either way, I'm not proposing we get rid of size_of. I just think we should
put it in an automatically derived trait instead of defining a function on
all types.
Literally the only thing that would change would be code like this:

fn foo<T>(t: T) {
    let size = mem::size_of(t);
}

would have to be changed to:

fn foo<T: SizeOf>(t: T) {
    let size = SizeOf::size_of(t); // or t.size_of()
}

Is that really so bad?
Now the function's type signature documents that the function's behavior
depends on the size of the type.
If you see a signature like `fn foo<T>(t: T)', then you know that it
doesn't.
There's no additional performance overhead and it makes size_of like other
intrinsic operators (+, ==, etc.).

I seriously don't see what downside this could possibly have.



On Sat, Feb 1, 2014 at 6:43 PM, Daniel Micay danielmi...@gmail.com wrote:

 On Sat, Feb 1, 2014 at 9:27 PM, Eric Reed ecr...@cs.washington.edu
 wrote:
 
  I wasn't aware of mem::size_of before, but I'm rather annoyed to find out
  we've started adding bare A -> B functions since it breaks parametricity.
  I'd much rather put size_of in a trait, at which point it's just a weaker
  version of Any.

 You do realize how widely used size_of is, right? I don't think it
 makes sense to say we've *started* adding this stuff when being able
 to get the size/alignment has pretty much always been there.

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Deprecating rustpkg

2014-02-01 Thread Daniel Micay
On Sat, Feb 1, 2014 at 4:28 PM, Vladimir Matveev

 Well, it seems that working for a long time with a code targeting virtual
 machines is corrupting :) I completely forgot about different models of
 compilation. I see your point. But I think that developing and distributing
 should be considered separately. Package manager for developers should be a
 part of language infrastructure (like rustpkg is now for Rust and, for 
 example,
 go tool for Go language or cabal for Haskell). This package manager allows
 flexible management of Rust libraries and their dependencies, and it should be
 integrated with the build system (or *be* this build system). It is used by
 developers to create applications and libraries and by maintainers to prepare
 these applications and libraries for integration with the distribution system
 for end users.

How will it handle external dependencies?

 Package manager for general users (I'll call it system package manager),
 however, depends on the OS, and it is maintainer's task to determine correct
 dependencies for each package. Rust package manager should not depend in any
 way on the system package manager and its packages, because each system has 
 its
 own package manager, and it is just impossible to support them all. Rust also
 should not force usage of concrete user-level package manager (like 0install,
 for example), because this means additional unrelated software on the user
 installation.

I don't understand this. A package manager specific to Rust is
additional software, just like 0install. 0install has full support for
installing dependencies via the system package manager on many systems
if desired.

http://0install.net/distribution-integration.html
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax

2014-02-01 Thread Daniel Micay
On Sat, Feb 1, 2014 at 10:12 PM, Eric Reed ecr...@cs.washington.edu wrote:
 Well there's only 260 uses of the string size_of in rustc's src/ according
 to grep and only 3 uses of size_of in servo according to GitHub, so I
 think you may be overestimating its usage.

The number of calls to `size_of` isn't a useful metric. It's the
building block required to allocate memory (vectors, unique pointers)
and in the slice iterators (to perform pointer arithmetic). If it
requires a bound, then so will any code using a slice iterator.

 Either way, I'm not proposing we get rid of size_of. I just think we should
 put it in an automatically derived trait instead of defining a function on
 all types.
 Literally the only thing that would change would be code like this:

 fn foo<T>(t: T) {
 let size = mem::size_of(t);
 }

 would have to be changed to:

 fn foo<T: SizeOf>(t: T) {
 let size = SizeOf::size_of(t); // or t.size_of()
 }

 Is that really so bad?

Yes, it is.

 Now the function's type signature documents that the function's behavior
 depends on the size of the type.
 If you see a signature like `fn foo<T>(t: T)', then you know that it
 doesn't.
 There's no additional performance overhead and it makes size_of like other
 intrinsic operators (+, ==, etc.).

The operators are not implemented for every type as they are for `size_of`.

 I seriously don't see what downside this could possibly have.

Using unique pointers, vectors and even slice iterators will require a
semantically irrelevant `SizeOf` bound. Whether or not you allocate a
unique pointer to store a value internally shouldn't be part of the
function signature.
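
A sketch of that propagation, spelling out the hypothetical `SizeOf` trait
from the earlier message so the example is self-contained (none of this is in
the standard library):

```
// Hypothetical trait, a stand-in for the earlier suggestion.
trait SizeOf { fn size_of_val(&self) -> uint; }

// Whether foo boxes its argument is an internal detail, yet under the
// hypothetical rule the bound must appear in its signature...
fn foo<T: SizeOf>(t: T) -> ~T { ~t }

// ...and every caller has to repeat the bound just to call foo.
fn bar<T: SizeOf>(t: T) -> ~T { foo(t) }
```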
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax

2014-02-01 Thread Huon Wilson

On 02/02/14 14:18, Daniel Micay wrote:

On Sat, Feb 1, 2014 at 10:12 PM, Eric Reed ecr...@cs.washington.edu wrote:

Well there's only 260 uses of the string size_of in rustc's src/ according
to grep and only 3 uses of size_of in servo according to GitHub, so I
think you may be overestimating its usage.

The number of calls to `size_of` isn't a useful metric. It's the
building block required to allocate memory (vectors, unique pointers)
and in the slice iterators (to perform pointer arithmetic). If it
requires a bound, then so will any code using a slice iterator.


Either way, I'm not proposing we get rid of size_of. I just think we should
put it in an automatically derived trait instead of defining a function on
all types.
Literally the only thing that would change would be code like this:

fn foo<T>(t: T) {
 let size = mem::size_of(t);
}

would have to be changed to:

fn foo<T: SizeOf>(t: T) {
 let size = SizeOf::size_of(t); // or t.size_of()
}

Is that really so bad?

Yes, it is.


Now the function's type signature documents that the function's behavior
depends on the size of the type.
If you see a signature like `fn foo<T>(t: T)', then you know that it
doesn't.
There's no additional performance overhead and it makes size_of like other
intrinsic operators (+, ==, etc.).

The operators are not implemented for every type as they are for `size_of`.


I seriously don't see what downside this could possibly have.

Using unique pointers, vectors and even slice iterators will require a
semantically irrelevant `SizeOf` bound. Whether or not you allocate a
unique pointer to store a value internally shouldn't be part of the
function signature.
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


To add to this, a SizeOf bound would be essentially equivalent to the 
Sized bound from DST, and I believe experimentation a while ago decided 
that requiring Sized is the common case (or, at least, so common that it 
would be extremely annoying to require it be explicit).



Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev