Hi Patrick,
thank you for testing. I think I found out what is wrong: I am trying to build 
my own examples, which also depend on another library that in turn depends on 
Hadoop 2.2.
What was happening is that my library brings in Hadoop 2.2, while Spark depends 
on Hadoop 1.0.4, so I think I end up with conflicting versions of the classes.

A couple of things are not clear to me:

1: do the published artifacts support YARN and Hadoop 2.2, or will I need to 
make my own build?
2: if they do, how do I activate the profiles in my Maven config? I tried mvn 
-Pyarn compile, but it does not work (Maven says "[WARNING] The requested 
profile "yarn" could not be activated because it does not exist.")


Essentially, I would like to specify the Spark dependencies as:

<dependencies>
    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-library</artifactId>
        <version>${scala.version}</version>
    </dependency>

    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_${scala.tools.version}</artifactId>
        <version>0.9.0-incubating</version>
    </dependency>
</dependencies>

and tell Maven to use the "yarn" profile for this dependency, but I cannot 
seem to make it work.
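For reference, one workaround I have been considering (untested, and the exact exclusion coordinates are my guess from looking at the dependency tree) is to exclude the Hadoop client that spark-core pulls in transitively and pin hadoop-client 2.2.0 explicitly:

```xml
<!-- Sketch only, not verified: exclude the Hadoop 1.x client that
     spark-core brings in transitively -->
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_${scala.tools.version}</artifactId>
    <version>0.9.0-incubating</version>
    <exclusions>
        <exclusion>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
        </exclusion>
    </exclusions>
</dependency>

<!-- Pin the Hadoop version explicitly so all Hadoop classes
     come from a single version -->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.2.0</version>
</dependency>
```

Running mvn dependency:tree afterwards should show which Hadoop version actually wins, but I am not sure this is the intended way to do it.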
Does anybody have any suggestions?

Alex
