> On May 5, 2014, 10:59 p.m., Rohini Palaniswamy wrote:
> > core/src/main/java/org/apache/oozie/action/hadoop/JavaActionExecutor.java, lines 443-458
> > <https://reviews.apache.org/r/19929/diff/3/?file=574188#file574188line443>
> >
> >     Can just do:
> >
> >     Path pathToAdd = new Path(uri.normalize());
> >     Services.get().get(HadoopAccessorService.class).addFileToClassPath(user, pathToAdd, conf);
> >
> >     Make the below change to HadoopAccessorService.java:
> >
> >     public void addFileToClassPath(String user, final Path file, final Configuration conf)
> >             throws IOException {
> >         ParamChecker.notEmpty(user, "user");
> >         try {
> >             UserGroupInformation ugi = getUGI(user);
> >             ugi.doAs(new PrivilegedExceptionAction<Void>() {
> >                 public Void run() throws Exception {
> >                     Configuration defaultConf = new Configuration();
> >                     XConfiguration.copy(conf, defaultConf);
> >                     // Doing this no-op add first to have the FS created and cached
> >                     DistributedCache.addFileToClassPath(file, defaultConf);
> >
> >                     // Hadoop 0.20/1.x.
> >                     if (defaultConf.get("mapred.job.classpath.files") != null) {
> >                         // Duplicate Hadoop 1.x code to work around MAPREDUCE-2361 in Hadoop 0.20.
> >                         // Refer to OOZIE-1806.
> >                         String filepath = file.toUri().getPath();
> >                         String classpath = conf.get("mapred.job.classpath.files");
> >                         conf.set("mapred.job.classpath.files", classpath == null
> >                             ? filepath
> >                             : classpath + System.getProperty("path.separator") + filepath);
> >                         URI uri = file.getFileSystem(defaultConf).makeQualified(file).toUri();
> >                         DistributedCache.addCacheFile(uri, conf);
> >                     }
> >                     else { // Hadoop 0.23/2.x
> >                         DistributedCache.addFileToClassPath(file, conf);
> >                     }
> >
> >                     return null;
> >                 }
> >             });
> >         }
> >         catch (InterruptedException ex) {
> >             throw new IOException(ex);
> >         }
> >     }

The code inside the run() method can be replaced with JobUtils.addFileToClasspath(),
which was added by OOZIE-1806.
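
The Hadoop 1.x branch in the snippet above boils down to appending the file's path to the mapred.job.classpath.files property, joined with the platform path separator. A minimal, self-contained sketch of just that string handling (ClasspathAppendSketch and appendToClasspath are hypothetical names used here for illustration; they are not part of Oozie):

```java
// Sketch of the classpath-appending step from the Hadoop 1.x branch above.
public class ClasspathAppendSketch {

    // Mirrors: classpath == null ? filepath : classpath + path.separator + filepath
    static String appendToClasspath(String classpath, String filepath) {
        return classpath == null
            ? filepath
            : classpath + System.getProperty("path.separator") + filepath;
    }

    public static void main(String[] args) {
        // First entry: property is unset, so the path is used as-is.
        String cp = appendToClasspath(null, "/user/oozie/share/lib/one.jar");
        // Second entry: appended with the platform separator (":" on Linux).
        cp = appendToClasspath(cp, "/user/oozie/share/lib/two.jar");
        System.out.println(cp);
    }
}
```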


- Rohini


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/19929/#review42205
-----------------------------------------------------------


On May 4, 2014, 7:05 p.m., Benjamin Zhitomirsky wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/19929/
> -----------------------------------------------------------
> 
> (Updated May 4, 2014, 7:05 p.m.)
> 
> 
> Review request for oozie.
> 
> 
> Repository: oozie-git
> 
> 
> Description
> -------
> 
> When the <name-node> element in an Oozie workflow specifies a name node different
> from the default one (specified in core-site.xml), the following functionality
> doesn't work properly:
> - Location of libraries specified via oozie.service.WorkflowAppService.system.libpath.
> Oozie first (during launcher configuration) tries to locate them using the name
> node specified by the <name-node> element, but later, during job submission, it
> expects this path to be under the default Oozie name node.
> - Processing of the job-xml element if the job XML is specified via an absolute
> path. Oozie tries to locate it under the default Oozie name node instead of the
> name node specified in the action.
> 
> Specifying a non-default name node makes a lot of sense in an Azure environment,
> because it allows submitting the same job to different Hadoop clusters.
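> 
> An illustrative workflow action (cluster hostnames and the mapper class are
> hypothetical examples, not taken from this patch) where <name-node> points at a
> cluster other than the default one from core-site.xml:
> 
>     <action name="mr-on-cluster-b">
>         <map-reduce>
>             <job-tracker>jt.cluster-b.example.com:8032</job-tracker>
>             <name-node>hdfs://nn.cluster-b.example.com:8020</name-node>
>             <configuration>
>                 <property>
>                     <name>mapred.mapper.class</name>
>                     <value>org.example.SampleMapper</value>
>                 </property>
>             </configuration>
>         </map-reduce>
>         <ok to="end"/>
>         <error to="fail"/>
>     </action>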
> 
> 
> Diffs
> -----
> 
>   core/src/main/java/org/apache/oozie/action/hadoop/JavaActionExecutor.java 
> 59ad143 
>   
> core/src/test/java/org/apache/oozie/action/hadoop/ActionExecutorTestCase.java 
> bc2c1b6 
>   
> core/src/test/java/org/apache/oozie/action/hadoop/TestJavaActionExecutor.java 
> 390ad3f 
>   core/src/test/java/org/apache/oozie/test/XFsTestCase.java 18cb742 
>   core/src/test/java/org/apache/oozie/test/XTestCase.java 1536927 
> 
> Diff: https://reviews.apache.org/r/19929/diff/
> 
> 
> Testing
> -------
> 
> On deployed Hadoop cluster. Two tests were added.
> 
> 
> Thanks,
> 
> Benjamin Zhitomirsky
> 
>
