"C. Bergstr?m" writes:
> James Carlson wrote:
> > It sounds like an interesting project, but unless there's some buy-in
> > from the others who'd be affected, I think you might be talking about
> > a fork of the code.
> >
> 
> I'm not sure I would call it a brand new compiler, since open64 shares 
> the same frontend as gcc and cw/aw would function the same, but the 
> code generation on the backend certainly would change.  I've had 
> fairly positive experience with the PathScale backend and its 
> optimization tools, but that's another discussion.
> 
> /we/ are not planning to fork the code, but this will almost certainly 
> be an external gate.

That sounds a little problematic to me on two fronts.

First, I think it's a risky assumption that _no_ code changes will
ever be needed when switching compilers, now or in the future as new
(unrelated) projects go into ON.  Maybe you'll get lucky, but it's
certainly been our experience that every minor change in the compiler
involves at least *some* small change in the existing code --
sometimes it's as small as silencing a new warning that shows up, and
sometimes it's as big as a kernel-corruption showstopper -- and it
often affects future projects as well.

If you do need to make changes, then you'll have at least two choices
to keep current:

  A. Maintain a separate fork and merge and test as you feel
     necessary.

  B. Contribute the changes back upstream to ON, which raises the
     question: "so, how will we know we won't break your fixes in the
     future?"  That gets into the policy issue noted before.

That question aside, there's potentially a second front here.
Assuming that you have no changes at all in the code, how do you
propose testing your separate build to make sure that every integrated
project still works fine?  How do you know that this new super-
optimizing compiler hasn't opened a timing hole somewhere, given that
direct comparison of the binaries won't answer the question?  Or that
the new compiler has exactly zero bugs that are triggered by all of
the code in ON?

My guess is that you wouldn't _really_ be in a position to test it,
which means that you'd run the risk of stability problems.

That's obviously conjecture, but it's based on what I've seen with
compiler transitions in the past.  They're hard ... much harder than I
think you're guessing.  They usually take weeks.

(I'm sure at least one of the folks who works in this area is lurking
right now on the list ... he can probably speak up and give some more
words of wisdom.)

> it will start as a slave gate, and then if/when we get contributions 
> that meet the bar for engineering/licensing (be they patches, bug 
> fixes, or new implementations) we'll merge them.  QA/auto-build and 
> other things are planned, because frankly I trust a centralized 
> approach *in addition* to developer checks to add another layer.  
> imho Sun should at the very least eat its own dog food and support 
> whatever the latest SSX available is as well.  I mean, why release it 
> if it can't even compile onnv-gate?  There's a _SSNEXT variable in 
> Makefile.master just for this -- why isn't it used?  I appreciate the 
> input; if you want to elaborate on the political situation around any 
> potentially soon-to-be-open-sourced bits that could benefit my/others' 
> efforts, I'm all ears.

It's not political, in the sense that "politics" refers to
relationships between people.

It's technical.  Even minor changes to the CBE (common build
environment), such as applying a patch to the compilers, are
potentially quite disruptive and have to be made cautiously.
Historically, they have always involved substantial changes.  These
changes must be made carefully *exactly* because we do in fact "eat
our own dog food" and/or "fly our own airplanes": the kernel that
gets built today will be running on desktops and servers throughout
the company tomorrow morning.

(Yes, I realize that you're referring to our non-use of the latest-
and-greatest from the DevPro chain.  But this is the on-discuss list,
and we're charged with keeping ON sane, not the entire Universe.)

Making random and untested changes in a large and complex system like
Solaris wouldn't just be irresponsible; it'd be completely untenable.
See the QDS: nothing would work again, and (to paraphrase Woody Allen)
nobody would care.

The political aspect, if there is one, is the development process
around all of this, which includes the rules for gatelings: you must
test, you must do so on all affected platforms, and you must do so
thoroughly.  The political aspect of adding a new compiler to the mix
is that you'd be forcing every ON contributor to compile three times
instead of the current two -- just to make sure that your "open64"
support doesn't itself suffer bit-rot in the gate.  That's not
necessarily a bad thing, but it's a non-zero change, and would take
work.
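The extra burden can be sketched as a loop over toolchains.  This is
purely illustrative -- the compiler names and the commented-out build
step below are placeholders, not ON's actual nightly build driven by
Makefile.master:

```shell
#!/bin/sh
# Illustrative only: each compiler added to the gate means one more
# full build-and-test pass for every contributor.  The compiler names
# and the build helper are placeholders, not ON's real interface.
: > build_passes.log
for COMPILER in studio gcc open64; do
    echo "=== full build and test with $COMPILER ===" | tee -a build_passes.log
    # build_on "$COMPILER"    # hypothetical helper for one full per-compiler pass
done
echo "passes required: $(grep -c 'full build' build_passes.log)"
```

The cost scales linearly with the number of supported toolchains, and
every contributor pays it on every putback, not just the person who
added the new compiler.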

-- 
James Carlson, Solaris Networking              <james.d.carlson at sun.com>
Sun Microsystems / 35 Network Drive        71.232W   Vox +1 781 442 2084
MS UBUR02-212 / Burlington MA 01803-2757   42.496N   Fax +1 781 442 1677
