Hi Ted,
Thanks for your reply.
We suspect there may be something wrong with our YARN config. Does YARN
need to be running in order for the command
"hbase org.apache.hadoop.hbase.mapreduce.Import"
to work?
Thanks again. Best wishes.
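For context on the question above: Import submits a MapReduce job, so on Hadoop 2.x with mapreduce.framework.name set to yarn, the ResourceManager and NodeManagers do need to be up. A minimal sketch of sanity checks, assuming the paths from the quoted error below (the table name and export directory are placeholders, not taken from this thread):

```shell
# Import runs as a MapReduce job; with the YARN framework configured,
# job submission requires a running ResourceManager. Check for live nodes:
yarn node -list

# The stack trace shows the job submitter resolving the htrace jar via an
# hdfs:// URI. Compare where the jar actually lives:
ls /home/hadoop/hbase-1.1.2/lib/htrace-core-3.1.0-incubating.jar            # local disk
hdfs dfs -ls hdfs://mgfscluster/home/hadoop/hbase-1.1.2/lib/htrace-core-3.1.0-incubating.jar   # HDFS

# Then run the import ("mytable" and /export/mytable are placeholders):
hbase org.apache.hadoop.hbase.mapreduce.Import mytable /export/mytable
```

If the jar exists only on local disk but the submitter is looking on HDFS, the classpath entry is being qualified with the default filesystem scheme, which matches Ted's note that the scheme in the error is hdfs.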
> On Oct 27, 2015, at 11:42 AM, Ted Yu <[email protected]> wrote:
>
> Please note that the scheme was hdfs.
>
> Normally htrace-core-3.1.0-incubating.jar is under lib dir on each node
> where hbase is deployed.
>
> FYI
>
> On Mon, Oct 26, 2015 at 8:38 PM, panghaoyuan <[email protected]> wrote:
>
>> Hi Ted,
>>
>> The cluster is secure.
>> We have set dfs.permissions to false on our cluster.
>>
>>
>>
>>> On Oct 27, 2015, at 11:13 AM, Ted Yu <[email protected]> wrote:
>>>
>>> Can you give us a bit more information:
>>>
>>> Is the cluster secure?
>>> Have you checked permissions for
>>> hdfs://mgfscluster/home/hadoop/hbase-1.1.2/lib/htrace-core-3.1.0-incubating.jar
>>> (accessible by the user running Import)?
>>>
>>> Cheers
>>>
>>> On Mon, Oct 26, 2015 at 7:58 PM, panghaoyuan <[email protected]> wrote:
>>>
>>>> Hi all,
>>>>
>>>> Our HBase is 1.1.2 and our Hadoop is 2.5.2. We want to run a MapReduce
>>>> job, "hbase org.apache.hadoop.hbase.mapreduce.Import", and we get the
>>>> error below:
>>>>
>>>> Exception in thread "main" java.io.FileNotFoundException: File does not exist:
>>>> hdfs://mgfscluster/home/hadoop/hbase-1.1.2/lib/htrace-core-3.1.0-incubating.jar
>>>>     at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1072)
>>>>     at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1064)
>>>>     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>>>>     at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1064)
>>>>     at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
>>>>     at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
>>>>     at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
>>>>     at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
>>>>     at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:265)
>>>>     at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:301)
>>>>     at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:389)
>>>>     at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
>>>>     at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
>>>>     at java.security.AccessController.doPrivileged(Native Method)
>>>>     at javax.security.auth.Subject.doAs(Subject.java:415)
>>>>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
>>>>     at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
>>>>     at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
>>>>     at org.apache.hadoop.hbase.mapreduce.Import.main(Import.java:547)
>>>>
>>>> But we actually do have htrace-core-3.1.0-incubating.jar in that directory.
>>>>
>>>>
>>>> Thanks!
>>>>