Hey Michael,
There is a discussion on TIMESTAMP semantics going on in the thread "SQL
TIMESTAMP semantics vs. SPARK-18350", which might impact Spark 2.2.
Should we make a decision there before voting on the next RC for Spark
2.2?
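For anyone catching up, a minimal sketch of the question at stake
(assuming a SparkSession `spark` and the session time zone setting
proposed in that thread):

    // The same stored instant (epoch second 0) displayed under two
    // session time zones. Under instant semantics the rendering shifts
    // with the zone; under timezone-naive semantics it would not.
    spark.conf.set("spark.sql.session.timeZone", "UTC")
    spark.sql("SELECT CAST(0 AS TIMESTAMP) AS ts").show()
    spark.conf.set("spark.sql.session.timeZone", "America/Los_Angeles")
    spark.sql("SELECT CAST(0 AS TIMESTAMP) AS ts").show()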
Thanks,
Kostas
On Tue, May 30, 2017 at 12:09 PM, Michael Armbrust wrote:
From both this thread and the JDK thread, I've noticed that people
(myself included) have different notions of the compatibility guarantees
between major and minor versions.
A simple question I have is: what compatibility can we break between
minor vs. major releases?
It might be worth getting on the same page here.
Also, +1 on dropping jdk7 in Spark 2.0.
Kostas
On Mon, Mar 28, 2016 at 2:01 PM, Marcelo Vanzin wrote:
> Finally got some internal feedback on this, and we're ok with
> requiring people to deploy jdk8 for 2.0, so +1 too.
>
> On Mon, Mar 28, 2016 at 1:15 PM, Luciano Resende wrote:
> > +1, I al...
In addition, with Spark 2.0 we are throwing away binary compatibility
anyway, so user applications will have to be recompiled.
The only argument I can see is for libraries that were built against
Scala 2.10 and are no longer maintained. How big of an issue do we think
that is?
Kostas
If an argument here is the ongoing build/maintenance burden, I think we
should seriously consider dropping Scala 2.10 in Spark 2.0. Supporting
Scala 2.10 is a bigger build/infrastructure burden than supporting jdk7,
since you actually have to build and test different artifacts, whereas
you can target jdk7 from a single build.
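To make that asymmetry concrete, here is a rough build.sbt sketch (the
version numbers are only illustrative):

    // Each Scala binary version is a separate compile/test/publish
    // pass producing its own artifacts...
    crossScalaVersions := Seq("2.10.6", "2.11.8")
    // ...whereas targeting jdk7 is just compiler flags on one build.
    javacOptions ++= Seq("-source", "1.7", "-target", "1.7")
    scalacOptions += "-target:jvm-1.7"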
Hello all,
I'd like to close out the discussion on SPARK-13843 by polling the
community on which components we should seriously consider adding back
to Apache Spark. For reference, here are the modules that were removed
as part of SPARK-13843 and pushed to: https://github.com/spar
I'd also like to make it a requirement that Spark 2.0 have a stable
DataFrame and Dataset API - we should not leave these APIs experimental
in the 2.0 release. We already know of at least one breaking change we
need to make to DataFrames; now is the time to make any other changes we
need to stabilize them.
> ...does the addition of those two features require a 1.7 release
> instead of 1.6.1?
>
> On Fri, Nov 13, 2015 at 11:40 AM, Kostas Sakellis
> wrote:
>
>> We have veered off the topic of Spark 2.0 a little bit here - yes, we
>> can talk about RDD vs. DS/DF more, but let's refocus on Spark 2.0.
...getting the new features/APIs stabilized will be very beneficial.
This might make Spark 1.7 a lighter release, but that is not necessarily
a bad thing.
Any thoughts on this timeline?
Kostas Sakellis
On Thu, Nov 12, 2015 at 8:39 PM, Cheng, Hao wrote:
> Agree, more features/APIs/optimizations need to be added ...
I know we want to keep breaking changes to a minimum, but I'm hoping
that with Spark 2.0 we can also look at better classpath isolation for
user programs. I propose we build on
spark.{driver|executor}.userClassPathFirst, setting it to true by
default, and not allowing any Spark transitive dependencies to leak onto
the user's classpath.
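As a starting point, a minimal sketch using today's configuration keys
(everything beyond these two flags is the open part of the proposal):

    import org.apache.spark.SparkConf

    // Prefer classes from the user's jars over Spark's transitive
    // dependencies, on both the driver and the executors.
    val conf = new SparkConf()
      .set("spark.driver.userClassPathFirst", "true")
      .set("spark.executor.userClassPathFirst", "true")

Making these the defaults is the easy half; deciding which of Spark's
transitive dependencies users may see at all is the harder half.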
+1 on a lightweight 2.0
What is the thinking around the 1.x line after Spark 2.0 is released? If
not terminated, how will we determine what goes into each major version
line? Will 1.x only be for stability fixes?
Thanks,
Kostas
On Tue, Nov 10, 2015 at 3:41 PM, Patrick Wendell wrote:
> I also f...
+1 on RC3
I agree that this should not block the release. Once we have a fix for
it, putting it in a double-dot release sounds like a good plan.
Kostas
On Mon, Mar 9, 2015 at 11:27 AM, Patrick Wendell wrote:
> Hey All,
>
> Today there was a JIRA posted with an observed regression around Spark ...