On Tue, Sep 01, 2009 at 10:46:55AM -0700, Garrett D'Amore wrote:
> Nicolas Williams wrote:
> >Actually, I wasn't.  The points of my two posts in this sub-thread were:
> >
> > - The ARC should stay out of design issues (in this case the design
> >   issue of using Tcl to configure shell environment modules);
> >  
> 
> Why?  If the design imposes upon the consumers, then ARC should be 

Because design review is not normally in scope for the ARC.

> concerned.  As a specific case here, Tcl imposes two things on consumers:
> 
>    1) a scripting syntax/language heretofore not used for core system bits

If from the user's point of view it's just a few extra curly braces,
then who cares?
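
(A modulefile is just a small Tcl script.  As a minimal sketch -- the
package name and install prefix here are made up -- it amounts to
something like:

    #%Module1.0
    ## Hypothetical modulefile for a "foo" 1.2 install under /opt/foo/1.2
    proc ModulesHelp { } {
        puts stderr "Sets up the environment for foo 1.2"
    }
    module-whatis   "foo 1.2 development environment"

    set             root     /opt/foo/1.2
    prepend-path    PATH     $root/bin
    prepend-path    MANPATH  $root/man
    setenv          FOO_HOME $root

The end user still just runs "module load foo"; the Tcl is largely an
implementation detail.)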

>    2) a dependency on the Tcl package itself.

The dependency on Tcl is something to be documented.  It is not remotely
a problem, particularly given the stability of Tcl.

> Both of those considerations fall, IMO, well within ARC oversight.

I don't agree.

> > - /contrib is not a good place for Sun projects to integrate into --
> >   aim for /dev & /release if you can;
> 
> Why?  I think /contrib is perfectly reasonable for projects that just 
> want to enable something, and don't want to make support guarantees.  We 
> have no other way to deliver 'caveat emptor' bits that I can see.  If 
> /contrib works for the rest of the community, why not also Sun?

Precisely because there's no commitment to keep anything in /contrib
working.  That's fine for (a) random third parties, and (b) bits using
/contrib as a stepping stone to /dev and /release (e.g., for alpha and
beta testing).

> > - What is the architectural status of /contrib?  IMO: none.  /contrib
> >   is for third party contributions that are not yet ready to integrate
> >   into /dev & /release;
> 
> True.  Or things that don't need/want to integrate into /dev or 
> /release.  For example, joe random software package that someone sets 
> up, but doesn't want to commit to sustaining forever.

Only insofar as /contrib may be the only reasonable option if the ARC
rejects a case.  The case that led to this sub-thread should not be
rejected.

> > - What to do about FOSS backwards compatibility breakage?  (This was in
> >   my follow-up.)  My suggestion: documented stability levels can and
> >   should be used to set expectations, but we need to be much more
> >   flexible with respect to backwards compatibility, and for the reasons
> >   that you pointed out (i.e., we're in violent agreement).
> 
> Yes.  I'd like to point out that FOSS gets a special exception when it 
> is delivered strictly for FOSS compatibility.  I think that exception 
> goes away when we start using it as a building block for our own 
> architecture pieces.  I.e., it's bad if project team A delivers a new 
> version of a package that breaks project team B.

I don't think that's true _now_, except, perhaps, as a matter of ARC
discretion.  I'm arguing that the interface and/or the release binding
taxonomies need to explicitly state this.

> >   Specifically, IMO: we should have very frequent Minor releases, or at
> >   least treat releases as "Minor for some things, Micro/Patch for most
> >   things", or otherwise allow for out of cycle breaks, plus suitable
> >   warnings on breaks.
> 
> This topic is not a matter for ARC to decide.  Release timelines are 
> decided by the PAC and business teams.  ARC can however relax some of 
> the breakage rules.  We've already widely done that for a number of 
> items in Nevada/OpenSolaris.

The notion of "Minor for some things, Micro/Patch for most things" is
not in the release binding taxonomy, IIRC.  Suppose the PAC proposes
that.  Then what?  IMO the release binding taxonomy would simply have to
be updated, for it'd be silly for the ARC to deny the PAC on such a
matter.  But the ARC could update that taxonomy now -- we already know
that OpenSolaris will be "Major for some things", so what's the ARC
waiting for?

> >   Alt. description: keep existing ARC interface taxonomy, but make it
> >   easier to make out of cycle breaks by adding to the release binding
> >   taxonomy.
> 
> I'm not sure how that will help.  Release taxonomies need to follow user 
> expectations.  Otherwise they become useless.  (By that I mean, a 
> "minor" release needs to be associated with a real versioned product, so 
> end users know where the boundaries are.)

Expectations are set according to past track record, but track record is
not exactly a failsafe predictor when it comes to third parties.  _We_
can make strong interface commitments, but we can't really do it on
behalf of others.  Yet labeling all FOSS as Volatile is hardly helpful.

> We used to have better ways of expressing this.  Like "Evolving".  But 
> folks decided that the fine grained stability levels were not useful.

I don't agree.

> I'd really like to have something like "External", which would mean that 
> Sun (or the Solaris org) makes no guarantees about interface stability, 
> we just follow the upstream.  Caveat emptor sort of thing.

Yes, I agree with this.  We need an adjective that indicates that
something is of third party origin and that advertised interface
stability is advisory only, not necessarily reliable.

> >The vast majority of the time upstream will manage to stay backwards
> >compatible, so the majority of the time Committed and Uncommitted will
> >be much more useful than Volatile.
> 
> It only takes one nasty breakage to be really annoying.  And it depends 
> on your upstream.  Some are pretty good at understanding binary 
> compatibility.  Others, much less so.  (OpenSSL, I'm looking squarely at 
> you.)

And yet what are we doing to keep OpenSSL compatible yet full-featured?
Nothing.  In OpenSolaris we just update it and _try_ to fix every
consumer in OpenSolaris consolidations.

> >>               ...  The question isn't "should we provide a new
> >>version?", but rather "how can we provide a *set* of versions that can
> >>be used to meet the various needs articulated above?".
> >>    
> >
> >Sometimes you get into DLL hell if you provide multiple versions.  At
> >least with direct binding that's less likely than it used to be.
> >
> >We might even need to deliver more than two versions of some FOSS.
> 
> This way generally lies madness.  There are some specific
> counterexamples where it has worked out (multiple Perl versions, for
> example), but doing this generally for shared libraries will cause
> huge problems.

We will have these choices when the upstream community breaks
compatibility:

a) don't update;
b) warn, update and break compatibility, and oh well;
c) deliver multiple versions, risk DLL hell;
d) fund an effort to restore backwards compatibility and contribute the
   fixes back upstream;
e) fund an effort to restore backwards compatibility and fork.

The ideal choice is (d), but it's not realistic in many cases (we
can't fund everything, and not all upstream communities would welcome
such changes).

(e) is a nightmare.  (c) is great, if you can get away with it.  (a) is
not tolerable for long.
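
For what it's worth, (c) is the case that Environment Modules is
designed to help with at the user level: install each version under its
own prefix and give each one its own modulefile.  A rough sketch (the
prefix and version below are hypothetical, and this only covers user
environments -- system consumers would still have to bind to the
versioned libraries directly):

    #%Module1.0
    ## Hypothetical modulefile "openssl/0.9.8" selecting one of several
    ## coexisting installs under /opt/openssl/<version>
    module-whatis   "OpenSSL 0.9.8 (one of several delivered versions)"
    conflict        openssl

    set             root    /opt/openssl/0.9.8
    prepend-path    PATH             $root/bin
    prepend-path    MANPATH          $root/share/man
    prepend-path    PKG_CONFIG_PATH  $root/lib/pkgconfig

"module load openssl/0.9.8" then selects that version, and the conflict
line keeps two versions of the module from being loaded at once.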

(b) is only practical if our users will put up with it.  To make that
likely we should set expectations that stable-looking FOSS interfaces
might break at release boundaries where Sun-owned code normally
wouldn't, and we should warn loudly when that happens.

> I don't care what crapware we import for end-users to use at their own 
> risk; but we ought to be *architecting* things so that the bits we 
> rely on and use are not subject to constant or unplanned breakage.

Those two statements are in conflict.  Please resolve it :)

> Put another way, IMO a project that is a "leaf project" (no dependents) 
> can do whatever it wants as long as it sets end-user expectations 
> appropriately.

A project that is a leaf today may not be tomorrow.  I can see some app
depending on Environment Modules; can't you?

> >The practically inescapable conclusion is that we should just allow out
> >of cycle backwards incompatible changes (after some teeth gnashing,
> >perhaps), 
> 
> This has already happened for specific cases.  I think it's OK, but it 
> needs to be an exception and such cases carefully reviewed for impact.

See above.

Nico
-- 
