[ https://issues.apache.org/jira/browse/HIVE-5016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13741219#comment-13741219 ]

Abin Shahab commented on HIVE-5016:
-----------------------------------

The root cause of this issue is:
In local mode, the classpath entries for jars point to real files on the local disk. However, the JobSubmitter was stripping the scheme (protocol) portion of each path. By default, DistributedCache assumes that a scheme-less path refers to HDFS, and that is what caused the FileNotFoundException.
The fix is to pass the full URI, scheme included, to the DistributedCache, which allows DistributedCache to locate the jar on the local file system.
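The mechanism can be illustrated with a small sketch using plain `java.net.URI` (no Hadoop dependencies; the jar path is the one from the stack trace below, and the fallback to `fs.defaultFS` is Hadoop's behaviour, only described in the comments here):

```java
import java.net.URI;

public class SchemeDemo {
    public static void main(String[] args) {
        // Scheme-less path, as produced after JobSubmitter strips the protocol.
        // Hadoop resolves such a path against fs.defaultFS (typically hdfs://),
        // so DistributedCache looks for the jar in HDFS and fails.
        URI bare = URI.create("/home/ashahab/dev/hive-0.10.0/lib/hive-builtins-0.10.0.jar");

        // Fully qualified URI with an explicit file:// scheme: this pins the
        // path to the local file system, which is what the fix preserves.
        URI local = URI.create("file:///home/ashahab/dev/hive-0.10.0/lib/hive-builtins-0.10.0.jar");

        System.out.println(bare.getScheme());   // prints: null
        System.out.println(local.getScheme());  // prints: file
    }
}
```

Because `getScheme()` is null for the stripped path, the cache manager has no way to know the file was local, which is why keeping the scheme in the path is sufficient to resolve the issue.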
                
> Local mode FileNotFoundException: File does not exist
> -----------------------------------------------------
>
>                 Key: HIVE-5016
>                 URL: https://issues.apache.org/jira/browse/HIVE-5016
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 0.10.0
>         Environment: Centos 6.3 (final)
> Hadoop 2.0.2-alpha
> Java(TM) SE Runtime Environment (build 1.6.0_31-b04)
> Hive libs:
> ls -1 lib/
> antlr-2.7.7.jar
> antlr-runtime-3.0.1.jar
> avro-1.7.1.jar
> avro-mapred-1.7.1.jar
> commons-cli-1.2.jar
> commons-codec-1.4.jar
> commons-collections-3.2.1.jar
> commons-compress-1.4.1.jar
> commons-configuration-1.6.jar
> commons-dbcp-1.4.jar
> commons-lang-2.4.jar
> commons-logging-1.0.4.jar
> commons-logging-api-1.0.4.jar
> commons-pool-1.5.4.jar
> datanucleus-connectionpool-2.0.3.jar
> datanucleus-core-2.0.3.jar
> datanucleus-enhancer-2.0.3.jar
> datanucleus-rdbms-2.0.3.jar
> derby-10.4.2.0.jar
> guava-r09.jar
> hbase-0.92.0.jar
> hbase-0.92.0-tests.jar
> hive-builtins-0.10.0.jar
> hive-cli-0.10.0.jar
> hive-common-0.10.0.jar
> hive-contrib-0.10.0.jar
> hive-exec-0.10.0.jar
> hive-hbase-handler-0.10.0.jar
> hive-hwi-0.10.0.jar
> hive-hwi-0.10.0.war
> hive-jdbc-0.10.0.jar
> hive-metastore-0.10.0.jar
> hive-pdk-0.10.0.jar
> hive-serde-0.10.0.jar
> hive-service-0.10.0.jar
> hive-shims-0.10.0.jar
> jackson-core-asl-1.8.8.jar
> jackson-jaxrs-1.8.8.jar
> jackson-mapper-asl-1.8.8.jar
> jackson-xc-1.8.8.jar
> JavaEWAH-0.3.2.jar
> javolution-5.5.1.jar
> jdo2-api-2.3-ec.jar
> jetty-6.1.26.jar
> jetty-util-6.1.26.jar
> jline-0.9.94.jar
> json-20090211.jar
> libfb303-0.9.0.jar
> libthrift-0.9.0.jar
> log4j-1.2.16.jar
> php
> py
> servlet-api-2.5-20081211.jar
> slf4j-api-1.6.1.jar
> slf4j-log4j12-1.6.1.jar
> sqlline-1_0_2.jar
> stringtemplate-3.1-b1.jar
> xz-1.0.jar
> zookeeper-3.4.3.jar
>            Reporter: Abin Shahab
>            Priority: Critical
>
> Hive jobs in local mode fail with the error posted below. The jar file that's 
> not being found exists and has the following access:
> > ls -l hive-0.10.0/lib/hive-builtins-0.10.0.jar
> rw-rw-r-- 1 ashahab ashahab 3914 Dec 18  2012 
> hive-0.10.0/lib/hive-builtins-0.10.0.jar
> Steps to reproduce:
> hive> set hive.exec.mode.local.auto=true;
> hive> set hive.exec.mode.local.auto;
> hive.exec.mode.local.auto=true
> hive> select count(*) from abin_test_table;
> Automatically selecting local only mode for query
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>   set mapred.reduce.tasks=<number>
> 13/08/06 21:37:11 WARN conf.Configuration: 
> file:/tmp/ashahab/hive_2013-08-06_21-37-09_046_3263640403676309186/-local-10002/jobconf.xml:an
>  attempt to override final parameter: 
> mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> 13/08/06 21:37:11 WARN conf.Configuration: 
> file:/tmp/ashahab/hive_2013-08-06_21-37-09_046_3263640403676309186/-local-10002/jobconf.xml:an
>  attempt to override final parameter: 
> mapreduce.job.end-notification.max.attempts;  Ignoring.
> WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use 
> org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
> Execution log at: 
> /tmp/ashahab/ashahab_20130806213737_7d26b796-5f55-44ca-a755-8898153d963b.log
> java.io.FileNotFoundException: File does not exist: 
> /home/ashahab/dev/hive-0.10.0/lib/hive-builtins-0.10.0.jar
>       at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:782)
>       at 
> org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:208)
>       at 
> org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:71)
>       at 
> org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:252)
>       at 
> org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:290)
>       at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:361)
>       at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1218)
>       at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1215)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:396)
>       at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1367)
>       at org.apache.hadoop.mapreduce.Job.submit(Job.java:1215)
>       at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:617)
>       at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:612)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:396)
>       at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1367)
>       at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:612)
>       at 
> org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:447)
>       at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:689)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
