Hi Roman,

I am not absolutely convinced that #1, #2 and #3 are the right way:

There must be a way to try out new versions and see the full mess without
ploughing through the whole big data universe.

Right now I am seeing the mess.

I was seriously running out of time: having an unsupported spark1 version
hanging around for emergency situations seems a lot more worthwhile than
not having spark2 at all. I seriously doubt anyone will support spark1 any
more.

If the majority prefers to stay on the old versions, please revert.

Olaf

> On 30.12.2016 at 06:46, Roman Shaposhnik <[email protected]> wrote:
> 
> Hi!
> 
> As BIGTOP-2282 indicated, it seems that we have a bit
> of a difference in opinion on how major version bumps
> in the stack need to be handled. Spark 1 vs 2 and Hive
> 1 vs 2 are good examples.
> 
> Since JIRA is not always the best medium for a discussion
> I wanted to get this back to the mailing list.
> 
> My biggest question is actually around the goals/assumptions
> that I wanted to validate with y'all.
> 
> So, am I right in assuming that:
>   #1 our implicit bias is to NOT have multiple versions of
>      the same component in a stack?
> 
>   #2 we try to figure out what version is THE version based
>      on how ready the component is to be integrated with the
>      rest of the stack?
> 
>   #3 if somebody wants to do the work to support an extra
>      version -- that's fine, but that version gets the digit
>      as in spark1, and also that person gets to do all the work?
> 
> Thanks,
> Roman.
