> On 25 Nov 2015, at 08:54, Sandy Ryza <sandy.r...@cloudera.com> wrote:
>
> I see. My concern is / was that cluster operators will be reluctant to
> upgrade to 2.0, meaning that developers using those clusters need to stay on
> 1.x and, if they want to move to DataFrames, essentially need to port their
> app twice.
>
> I misunderstood and thought part of the proposal was to drop support for
> 2.10, though. If your broad point is that there aren't changes in 2.0 that
> will make it less palatable to cluster administrators than releases in the
> 1.x line, then yes, 2.0 as the next release sounds fine to me.
>
> -Sandy
Mixing Spark versions in a YARN cluster with compatible Hadoop native libs isn't so hard: users just deploy them separately. But:

- mixing Scala versions is going to be tricky unless the jobs people submit are configured with the different paths
- the history server will need to be the latest Spark version being executed in the cluster
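To illustrate the "deploy them separately" point: something along these lines is a minimal sketch of running two Spark versions side by side on one YARN cluster. The install paths, version numbers, and application jar names are all illustrative assumptions, not a tested layout.

```shell
# Hypothetical side-by-side layout: each Spark version lives in its own
# directory and is selected per job by invoking that version's spark-submit.
export HADOOP_CONF_DIR=/etc/hadoop/conf   # shared cluster config

# Submit against the older install (built for Scala 2.10):
/opt/spark-1.6/bin/spark-submit \
  --master yarn \
  --class com.example.MyApp \
  myapp_2.10-1.0.jar

# Submit against the newer install (built for Scala 2.11):
/opt/spark-2.0/bin/spark-submit \
  --master yarn \
  --class com.example.MyApp \
  myapp_2.11-1.0.jar
```

Note the jar suffixes: because Scala major versions aren't binary compatible, each job's jar has to be built against the Scala version of the Spark install it is submitted with, which is exactly the per-job path configuration mentioned above.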