Re: Scala 2.11 builds broken / Can the PR build also run 2.11?

2015-10-08 Thread Ted Yu
Interesting: https://amplab.cs.berkeley.edu/jenkins/view/Spark-QA-Compile/job/Spark-Master-Scala211-Compile/ shows green builds.

Scala 2.11 builds broken / Can the PR build also run 2.11?

2015-10-08 Thread Iulian Dragoș
Since Oct. 4 the build fails on 2.11 with the dreaded:

[error] /home/ubuntu/workspace/Apache Spark (master) on 2.11/core/src/main/scala/org/apache/spark/rpc/netty/NettyRpcEnv.scala:310: no valid targets for annotation on value conf - it is discarded unused. You may specify targets with meta-annotations, e.g. @(transient @param)
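For context, here is a minimal sketch of the pattern behind that message (not the actual NettyRpcEnv code): @transient on a plain constructor parameter has no field to attach to, so 2.11 warns that the annotation is discarded, and sbt's -Xfatal-warnings turns that warning into the error above. The compiler's suggested fix is a meta-annotation; turning the parameter into a private val also works:

    import scala.annotation.meta.param

    // triggers the 2.11 warning: `conf` is only a constructor parameter,
    // so there is no field for @transient to target
    class Ref(@transient conf: Map[String, String]) extends Serializable

    // fix 1: make it a field, so @transient has a valid target
    class RefAsVal(@transient private val conf: Map[String, String]) extends Serializable

    // fix 2: pin the annotation to the parameter with a meta-annotation,
    // as the compiler message suggests
    class RefMeta(@(transient @param) conf: Map[String, String]) extends Serializable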

Re: Scala 2.11 builds broken / Can the PR build also run 2.11?

2015-10-08 Thread Ted Yu
I tried building with Scala 2.11 on Linux with the latest master branch:

[INFO] Spark Project External MQTT ............ SUCCESS [ 19.188 s]
[INFO] Spark Project External MQTT Assembly ... SUCCESS [  7.081 s]
[INFO] Spark Project External ZeroMQ .......... SUCCESS
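For reference (not stated in the thread), the documented way to build master for 2.11 at the time was to switch the source tree to 2.11 and then pass the scala-2.11 property, roughly:

    ./dev/change-scala-version.sh 2.11
    build/mvn -Dscala-2.11 -DskipTests clean package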

RE: RowNumber in HiveContext returns null, negative numbers or huge numbers

2015-10-08 Thread Saif.A.Ellafi
Hi, I have figured out that this only happens in cluster mode; it works properly in local[32].
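For anyone following along, a minimal sketch of the kind of query under discussion, a row_number() window function evaluated through HiveContext (assuming an existing HiveContext named hiveContext; table and column names are hypothetical):

    // hypothetical table and column names, just to show the construct in question
    val ranked = hiveContext.sql(
      """SELECT id, value,
        |       row_number() OVER (PARTITION BY id ORDER BY value) AS rn
        |FROM some_table""".stripMargin)
    ranked.show()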

Re: Scala 2.11 builds broken / Can the PR build also run 2.11?

2015-10-08 Thread Reynold Xin
The problem only applies to the sbt build because it treats warnings as errors. @Iulian - how about we disable warnings -> errors for 2.11? That would seem better until we switch 2.11 to be the default build.
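A sketch of what that could look like in an sbt build definition (an illustration, not Spark's actual project file): keep -Xfatal-warnings for 2.10 but drop it when the Scala binary version is 2.11:

    // illustration only: conditionally disable fatal warnings for Scala 2.11
    scalacOptions ++= {
      if (scalaBinaryVersion.value == "2.11") Seq.empty[String]
      else Seq("-Xfatal-warnings")
    }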

Re: Understanding code/closure shipment to Spark workers‏

2015-10-08 Thread Xiao Li
Hi, Arijit, The code flow of spark-submit is simple. Enter the main function of SparkSubmit.scala --> case SparkSubmitAction.SUBMIT => submit(appArgs) --> doRunMain() in function submit() in the same file --> runMain(childArgs,...) in the same file --> mainMethod.invoke(null,
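A simplified sketch (not the SparkSubmit source itself) of that last step: runMain loads the user's main class through a class loader and invokes its static main() reflectively, so the driver code runs inside the spark-submit JVM:

    // simplified sketch of the reflective invocation at the end of the chain
    def runMainSketch(childMainClass: String,
                      childArgs: Seq[String],
                      loader: ClassLoader): Unit = {
      val mainClass = Class.forName(childMainClass, true, loader)
      val mainMethod = mainClass.getMethod("main", classOf[Array[String]])
      // main() is static, so the receiver passed to invoke is null
      mainMethod.invoke(null, childArgs.toArray)
    }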

Compiling Spark with a local hadoop profile

2015-10-08 Thread sbiookag
I'm modifying the HDFS module inside Hadoop and would like to see the change reflected while I'm running Spark on top of it, but I still see the native Hadoop behaviour. I've checked and saw that Spark builds a really fat jar file which contains all Hadoop classes (using the hadoop profile defined in Maven),

Re: Compiling Spark with a local hadoop profile

2015-10-08 Thread Ted Yu
In the root pom.xml: <hadoop.version>2.2.0</hadoop.version>

You can override the version of Hadoop with flags similar to: -Phadoop-2.4 -Dhadoop.version=2.7.0

Cheers
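Spelled out as a full command line using those same flags (the Hadoop version here is just an example):

    build/mvn -Phadoop-2.4 -Dhadoop.version=2.7.0 -DskipTests clean package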

spark over drill

2015-10-08 Thread Pranay Tonpay
Hi, is Spark-Drill integration already done? If yes, which Spark version supports it ... it was in the "upcoming list for 2015" is what I had read somewhere.

Re: Compiling Spark with a local hadoop profile

2015-10-08 Thread sbiookag
Thanks, Ted, for the reply, but this is not what I want. That would tell Spark to read the Hadoop dependency from the Maven repository, which is the original version of Hadoop. I myself am modifying the Hadoop code and want to include those changes inside the Spark fat jar. "Spark-Class" would run slaves with the
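One way this is often handled (a suggestion from outside this thread, with a hypothetical SNAPSHOT version): install the modified Hadoop into the local Maven repository under its own version, then point the Spark build at that version. Maven resolves from the local repository first, so the fat jar would then bundle the modified classes:

    # assuming the modified Hadoop is versioned 2.7.0-SNAPSHOT in its pom
    cd /path/to/hadoop && mvn install -DskipTests
    cd /path/to/spark && build/mvn -Phadoop-2.4 -Dhadoop.version=2.7.0-SNAPSHOT -DskipTests clean package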

Re: spark over drill

2015-10-08 Thread Reynold Xin
You probably saw that in a presentation given by the Drill team. You should check with them on that.