[ https://issues.apache.org/jira/browse/SPARK-16725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Sean Owen updated SPARK-16725:
--
Flags: (was: Patch)
Target Version/s: (was: 2.0.1, 2.1.0)
Labels: (was: patch)
Priority: Minor (was: Major)
Fix Version/s: (was: 2.0.1)
Issue Type: Improvement (was: Bug)
Posting a diff as a comment is unhelpful, and we don't even use JIRA patches.
Use pull requests. This is also not a Bug and many of the JIRA fields are not
valid. Read
https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark before
going further.
The problem is that this mismatches the Guava version in Hadoop, causing errors
at runtime. I think Hadoop is still on Guava 11, and 14 is the most recent
version that is compatible with it. See the comments in the POM files.
[~vanzin] I've forgotten why we can't just shade Guava at this point and let
Spark internally use what it likes?
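(For reference, "shading" here means relocating Guava's classes into a
Spark-private package at build time, so the copy Spark bundles cannot conflict
with Hadoop's or a user's Guava. A minimal maven-shade-plugin sketch; the
relocated package name is purely illustrative:)

```xml
<!-- Sketch only: relocate Guava under a Spark-private package at package time.
     The shadedPattern below is an illustrative name, not Spark's actual one. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.google.common</pattern>
            <shadedPattern>org.spark_project.guava</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With a relocation like this, Spark's internal Guava calls are rewritten to the
shaded package in the bytecode, so upgrading Spark's Guava no longer breaks
Hadoop or user code that pins a different version.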
> Migrate Guava to 16+
>
>
> Key: SPARK-16725
> URL: https://issues.apache.org/jira/browse/SPARK-16725
> Project: Spark
> Issue Type: Improvement
> Components: Build
> Affects Versions: 2.0.1
> Reporter: Min Wei
>Priority: Minor
> Original Estimate: 12h
> Remaining Estimate: 12h
>
> Currently Spark depends on an old version of Guava, version 14, while the
> Spark Cassandra Connector asserts on Guava version 16 and above.
> It would be great to update the Guava dependency to version 16+.
> diff --git a/core/src/main/scala/org/apache/spark/SecurityManager.scala b/core/src/main/scala/org/apache/spark/SecurityManager.scala
> index f72c7de..abddafe 100644
> --- a/core/src/main/scala/org/apache/spark/SecurityManager.scala
> +++ b/core/src/main/scala/org/apache/spark/SecurityManager.scala
> @@ -23,7 +23,7 @@ import java.security.{KeyStore, SecureRandom}
> import java.security.cert.X509Certificate
> import javax.net.ssl._
>
> -import com.google.common.hash.HashCodes
> +import com.google.common.hash.HashCode
> import com.google.common.io.Files
> import org.apache.hadoop.io.Text
>
> @@ -432,7 +432,7 @@ private[spark] class SecurityManager(sparkConf: SparkConf)
>       val secret = new Array[Byte](length)
>       rnd.nextBytes(secret)
>
> -      val cookie = HashCodes.fromBytes(secret).toString()
> +      val cookie = HashCode.fromBytes(secret).toString()
>       SparkHadoopUtil.get.addSecretKeyToUserCredentials(SECRET_LOOKUP_KEY, cookie)
>       cookie
>     } else {
> diff --git a/core/src/main/scala/org/apache/spark/SparkEnv.scala b/core/src/main/scala/org/apache/spark/SparkEnv.scala
> index af50a6d..02545ae 100644
> --- a/core/src/main/scala/org/apache/spark/SparkEnv.scala
> +++ b/core/src/main/scala/org/apache/spark/SparkEnv.scala
> @@ -72,7 +72,7 @@ class SparkEnv (
>
>   // A general, soft-reference map for metadata needed during HadoopRDD split computation
>   // (e.g., HadoopFileRDD uses this to cache JobConfs and InputFormats).
> -  private[spark] val hadoopJobMetadata = new MapMaker().softValues().makeMap[String, Any]()
> +  private[spark] val hadoopJobMetadata = new MapMaker().weakValues().makeMap[String, Any]()
>
>   private[spark] var driverTmpDir: Option[String] = None
>
> diff --git a/pom.xml b/pom.xml
> index d064cb5..7c3e036 100644
> --- a/pom.xml
> +++ b/pom.xml
> @@ -368,8 +368,7 @@
>       <dependency>
>         <groupId>com.google.guava</groupId>
>         <artifactId>guava</artifactId>
> -      <version>14.0.1</version>
> -      <scope>provided</scope>
> +      <version>19.0</version>
>       </dependency>
>
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org