On 27 Oct 2016, at 23:04, adam kramer <ada...@gmail.com> wrote:

Is the version of Spark built for Hadoop 2.7 and later only for 2.x releases?

Is there any reason why Hadoop 3.0 is a non-starter for use with Spark
2.0? The version of aws-sdk in 3.0 actually works for DynamoDB which
would resolve our driver dependency issues.

what version problems are you having there?


There's a patch to move to AWS SDK 10.10, but that has a Jackson 2.6.6+ 
dependency; that's something I'd like to do in Hadoop branch-2 as well, as 
it is Time to Move On (HADOOP-12705). FWIW, all Jackson 1.9 dependencies have 
been ripped out, leaving only that 2.x version problem.
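For anyone downstream hitting that Jackson 2.x clash today, one common workaround is to force a single Jackson version across the transitive graph in sbt. A minimal sketch, assuming an sbt 0.13-era build; the 2.6.6 number follows the thread and the exact version your build needs may differ:

```scala
// build.sbt fragment: Spark, Hadoop and the AWS SDK can each pull in a
// different Jackson 2.x; dependencyOverrides forces one version for all.
dependencyOverrides ++= Set(
  "com.fasterxml.jackson.core" % "jackson-core"        % "2.6.6",
  "com.fasterxml.jackson.core" % "jackson-databind"    % "2.6.6",
  "com.fasterxml.jackson.core" % "jackson-annotations" % "2.6.6"
)
```

Run `sbt evicted` afterwards to confirm nothing still drags in an older Jackson.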

https://issues.apache.org/jira/browse/HADOOP-13050

The HADOOP-13345 S3Guard work will pull in a (provided) dependency on DynamoDB; 
looks like the HADOOP-13449 patch moves to SDK 1.11.0.

I think we are likely to backport that to branch-2 as well, though it'd help 
the dev & test there if you built and tested your code against trunk early, not 
least to find any changes in that transitive dependency set.
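If you do build against trunk, one way to keep control of which aws-java-sdk ends up on the classpath is to exclude the version hadoop-aws drags in and pin the one your DynamoDB code needs. A hedged sbt sketch; the `3.0.0-alpha1` and `1.11.0` versions and the `aws-java-sdk-dynamodb` artifact split are assumptions based on the thread, so adjust them to what your repository actually publishes:

```scala
// build.sbt fragment: take hadoop-aws without its bundled AWS SDK,
// then add the SDK DynamoDB module at the version your driver needs.
libraryDependencies ++= Seq(
  ("org.apache.hadoop" % "hadoop-aws" % "3.0.0-alpha1")
    .exclude("com.amazonaws", "aws-java-sdk"),
  "com.amazonaws" % "aws-java-sdk-dynamodb" % "1.11.0"
)
```

Checking `sbt dependencyTree` (with the sbt-dependency-graph plugin) is a quick way to spot any remaining duplicate SDK jars.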


Thanks,
Adam

---------------------------------------------------------------------
To unsubscribe e-mail: user-unsubscr...@spark.apache.org


