Hi, Azuryy  

Thank you for the reply

So you compiled Spark with mvn?

I’m looking at the pom.xml; I think it does the same work as 
SparkBuild.scala,  

I’m still confused on one point: some Spark classes use Hadoop classes like 
InputFormat, which I assume should come from hadoop-core.jar,

but I didn’t find any line specifying hadoop-core-1.0.4.jar in either pom.xml 
or SparkBuild.scala,  
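
A guess, in case it helps frame the question: hadoop-core may not appear by name because hadoop-client declares it as a transitive dependency, so Maven pulls it in automatically. A sketch of the kind of pom.xml entry involved (the version here is just an example):

```xml
<!-- hadoop-client transitively depends on hadoop-core in Hadoop 1.x,
     so the build never needs to name hadoop-core directly. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>1.0.4</version>
</dependency>
```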

Can you explain this a bit?

Best,  

--  
Nan Zhu
School of Computer Science,
McGill University



On Monday, December 16, 2013 at 3:58 AM, Azuryy Yu wrote:

> Hi Nan,
> I am also using our customized hadoop, so you need to modify the pom.xml, but 
> before this change, you should install your customized hadoop-* jar in the 
> local maven repo.
>  
>  
>  
>  
> On Sun, Dec 15, 2013 at 2:45 AM, Nan Zhu <[email protected] 
> (mailto:[email protected])> wrote:
> > Hi, all  
> >  
> > I’m trying to compile Spark with a customized version of hadoop, where I 
> > modify the implementation of DFSInputStream,  
> >  
> > I would like to modify SparkBuild.scala so that Spark compiles with my 
> > hadoop-core-xxx.jar instead of downloading the original one.  
> >  
> > I only found hadoop-client-xxx.jar and some lines about yarn jars in 
> > SparkBuild.scala,  
> >  
> > Can you tell me which line I should modify to achieve the goal?
> >  
> > Best,  
> >  
> > --  
> > Nan Zhu
> > School of Computer Science,
> > McGill University
> >  
> >  
>  
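
The local-install step described in the reply above can be sketched roughly as follows (the jar file name, version, and coordinates are placeholders for whatever the customized build actually produces):

```shell
# Install a locally built, customized Hadoop jar into the local Maven
# repository (~/.m2) so the Spark build can resolve it by coordinates.
mvn install:install-file \
  -Dfile=hadoop-core-1.0.4-custom.jar \
  -DgroupId=org.apache.hadoop \
  -DartifactId=hadoop-core \
  -Dversion=1.0.4-custom \
  -Dpackaging=jar
```

After that, pointing the Hadoop version in pom.xml (or SparkBuild.scala) at 1.0.4-custom should make the build pick up the local jar instead of downloading one.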
