On 2 Feb 2016, at 18:48, Michael Armbrust <mich...@databricks.com> wrote:

I'm waiting for a few last fixes to be merged.  Hoping to cut an RC in the next 
few days.


I've just added https://issues.apache.org/jira/browse/SPARK-12807 to the list; 
there's a PR urgently in need of review.

Essentially: the Spark 1.6 network shuffle module bundles a version of Jackson 2.x
that is incompatible with Hadoop's, leading to one of two outcomes: (a) shuffle
broken, or (b) node managers not starting.

Ongoing Hadoop work on classpath isolation (and, eventually, forked plugins) will
fix this long term; short term, the fix is to shade the imports.
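
For illustration only, here is a minimal sketch of what "shade the imports" can
look like, written as an sbt-assembly relocation rule; the actual fix lives in
Spark's own (Maven) build, and the relocated package name below is an assumption,
not taken from the PR:

    // build.sbt fragment (sbt-assembly 0.14+): relocate the bundled Jackson
    // classes so they cannot clash with the Jackson version already on the
    // Hadoop NodeManager classpath. Package name is illustrative only.
    assemblyShadeRules in assembly := Seq(
      ShadeRule.rename("com.fasterxml.jackson.**" -> "org.sparkproject.jackson.@1").inAll
    )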

If this doesn't go in, then the release notes should warn that dynamic resource
allocation won't work.


On Tue, Feb 2, 2016 at 10:43 AM, Mingyu Kim <m...@palantir.com> wrote:
Hi all,

Is there an estimated timeline for 1.6.1 release? Just wanted to check how the 
release is coming along. Thanks!

Mingyu

From: Romi Kuntsman <r...@totango.com>
Date: Tuesday, February 2, 2016 at 3:16 AM
To: Michael Armbrust <mich...@databricks.com>
Cc: Hamel Kothari <hamelkoth...@gmail.com>, Ted Yu <yuzhih...@gmail.com>,
"dev@spark.apache.org" <dev@spark.apache.org>
Subject: Re: Spark 1.6.1

Hi Michael,
What about the memory leak bug?
https://issues.apache.org/jira/browse/SPARK-11293
Even after the memory rewrite in 1.6.0, it still happens in some cases.
Will it be fixed for 1.6.1?
Thanks,

Romi Kuntsman, Big Data Engineer
http://www.totango.com

On Mon, Feb 1, 2016 at 9:59 PM, Michael Armbrust <mich...@databricks.com> wrote:
We typically do not allow changes to the classpath in maintenance releases.

On Mon, Feb 1, 2016 at 8:16 AM, Hamel Kothari <hamelkoth...@gmail.com> wrote:
I noticed that the Jackson dependency was bumped to 2.5 in master for something 
spark-streaming related. Is there any reason that this upgrade can't be 
included with 1.6.1?

According to later comments on this thread
(https://issues.apache.org/jira/browse/SPARK-8332) and my personal experience,
using Spark with Jackson 2.5 hasn't caused any issues, and it does have some
useful new features. It should be fully backwards compatible according to the
Jackson folks.
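
As a hedged sketch (not anything from Spark's own build), an application that
wants the Jackson 2.5 features today could pin the Jackson artifacts in its own
build rather than waiting for Spark's classpath to change; the sbt coordinates
and the 2.5.x patch version below are illustrative:

    // build.sbt fragment: force Jackson 2.5.x for this application's own
    // dependency tree, without modifying the Spark distribution itself.
    // Versions shown are illustrative.
    dependencyOverrides ++= Set(
      "com.fasterxml.jackson.core"    % "jackson-databind"     % "2.5.3",
      "com.fasterxml.jackson.core"    % "jackson-core"         % "2.5.3",
      "com.fasterxml.jackson.module" %% "jackson-module-scala" % "2.5.3"
    )

Whether this is safe in practice rests on the backwards-compatibility point above.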

On Mon, Feb 1, 2016 at 10:29 AM Ted Yu <yuzhih...@gmail.com> wrote:
SPARK-12624 has been resolved.
According to Wenchen, SPARK-12783 is fixed in 1.6.0 release.

Are there other blockers for Spark 1.6.1?

Thanks

On Wed, Jan 13, 2016 at 5:39 PM, Michael Armbrust <mich...@databricks.com> wrote:
Hey All,

While I'm not aware of any critical issues with 1.6.0, there are several corner 
cases that users are hitting with the Dataset API that are fixed in branch-1.6. 
 As such I'm considering a 1.6.1 release.

At the moment there are only two critical issues targeted for 1.6.1:
 - SPARK-12624 - When schema is specified, we should treat undeclared fields as 
null (in Python)
 - SPARK-12783 - Dataset map serialization error
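
For readers unfamiliar with the second issue, here is a minimal sketch of the
typed Dataset map pattern it concerns, using the 1.6 SQLContext API; this is
illustrative only, not a reproduction of the bug, and all names are made up:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    // Top-level case class so an encoder can be derived for it.
    case class Record(id: Long, name: String)

    object DatasetMapSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("dataset-map").setMaster("local[*]"))
        val sqlContext = new SQLContext(sc)
        import sqlContext.implicits._

        // Create a typed Dataset and map over it with a plain Scala function.
        val ds = Seq(Record(1, "a"), Record(2, "b")).toDS()
        ds.map(r => r.copy(name = r.name.toUpperCase)).show()

        sc.stop()
      }
    }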

When these are resolved I'll likely begin the release process. If there are any
other issues that we should wait for, please contact me.

Michael




