On Fri, Feb 28, 2014 at 10:36:11PM +0100, Gábor Lehel wrote:
> I don't see the difference here. Why do you think this should be handled
> differently? 

To be honest I hadn't thought about it before. I agree there is no
fundamental difference, though there may be a practical one. I have to
mull this over.

I think you're on to something in that there is a connection to
public/private fields, but I'm not quite sure what that ought to
mean. I dislike the idea that adding a private field changes a whole
bunch of defaults. (I've at times floated the idea of having a
`struct`, where things are public-by-default and we auto-derive
various traits, and a `class`, where things are private-by-default and
everything is "opt in", but nobody ever likes it except me. It's also
unclear how to treat enums in such a scheme.)

My first thought regarding variance was that we ought to say that
any type which is not freeze is invariant with respect to its parameters,
but that is really quite overly conservative. That means for example
that something like 

    struct MyVec<T> {
        r: Rc<Vec<T>>
    }

is not covariant with respect to `T`, which strikes me as quite
unfortunate! Rc, after all, is not freeze, but only because of the
ref count, not because of the data it points at.
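For what it's worth, here is a sketch of the desired behavior in
today's Rust syntax (where variance is in fact inferred from field
usage, and `Rc<Vec<T>>` does come out covariant in `T`); the
`shorten` function is just for illustration:

```rust
use std::rc::Rc;

struct MyVec<T> {
    r: Rc<Vec<T>>,
}

// Covariance in T: a MyVec of &'static str can be used where a
// MyVec with some shorter lifetime is expected.
fn shorten<'a>(v: MyVec<&'static str>) -> MyVec<&'a str> {
    v
}

fn main() {
    let v = MyVec { r: Rc::new(vec!["hello"]) };
    let shortened: MyVec<&str> = shorten(v);
    assert_eq!(shortened.r[0], "hello");
}
```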

Some reasons I think it might be reasonable to treat variance differently:

- Nobody but type theorists understands it, at least not at an
  intuitive level. It has been my experience that many, many very
  smart people just find it hard to grasp, so declaring variance
  explicitly is probably bad. Unfortunately, making all type
  parameters invariant is problematic when working with a type like
  `Option<&'a T>`.

- It's unclear if the variance of type parameters is likely to evolve
  over time.

- It may not make much practical difference, since we don't have much
  subtyping in the language and it's likely to stay that way. I think
  right now subtyping is *only* induced via lifetimes. The wrinkle
  here is that if we introduced substruct types *and* subtyping rules
  there, subtyping might become more common.

  As an aside, I tried at one point to remove subtyping from the type
  inference altogether. This was working fine for a while but when I
  rebased the branch a while back I got lots of errors. I'm still tempted
  to try again.

- On the other hand, some part of me thinks we ought to just remove
  the variance inference and instead have a simple variance scheme: no
  annotation means covariant; otherwise you write `mut T` or `mut 'a`
  to declare an invariant parameter (the intuition being: a parameter
  must be invariant if it is used within an interior mutable thing
  like a `Cell`). That would rather make this question moot.
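A sketch of both points above in today's Rust syntax, where
`Option<&'a T>` did end up covariant in `'a` and `Cell<T>` invariant
in `T`; the names `at_most` and `overwrite` are just for
illustration:

```rust
use std::cell::Cell;

// Covariance in 'a: an Option<&'static str> coerces to an
// Option<&'a str> for any shorter 'a.
fn at_most<'a>(opt: Option<&'static str>) -> Option<&'a str> {
    opt
}

// Cell<T>, by contrast, is invariant in T. If Cell<&'a str> were
// covariant, a caller could hand us a Cell<&'static str>, we could
// store a short-lived reference into it, and the caller would be
// left holding a dangling &'static str. Invariance rules that out.
fn overwrite<'a>(cell: &Cell<&'a str>, new: &'a str) {
    cell.set(new);
}

fn main() {
    let o: Option<&str> = at_most(Some("hello"));
    assert_eq!(o, Some("hello"));

    let s = String::from("short-lived");
    let cell = Cell::new("initial");
    overwrite(&cell, &s); // 'a is inferred as s's lifetime
    assert_eq!(cell.get(), "short-lived");
}
```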

Lots to think about!


Niko
_______________________________________________
Rust-dev mailing list
[email protected]
https://mail.mozilla.org/listinfo/rust-dev
