Can you show us the command you used? What HBase release are you using?
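For reference, a typical ExportSnapshot invocation (per the HBase snapshot documentation) looks something like the sketch below; the snapshot name, destination NameNode address, and mapper count are placeholders you would replace with your own values:

```shell
# Run on the source cluster (cluster A); copies the snapshot's files
# into the destination cluster's HDFS under its hbase root directory.
# "my-snapshot", "cluster-b-namenode:8020", and "-mappers 16" are placeholders.
hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
    -snapshot my-snapshot \
    -copy-to hdfs://cluster-b-namenode:8020/hbase \
    -mappers 16
```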
Thanks

> On Mar 17, 2015, at 4:38 AM, Akmal Abbasov <[email protected]> wrote:
>
> Hi, I have 2 clusters running HBase, and I want to export a snapshot from
> cluster A to cluster B.
> When I am doing exportSnapshot I am getting java.io.FileNotFoundException,
> because it is searching for a jar file in hdfs, not in my local storage.
> Any ideas how it could be solved?
>
> Here is an output:
> 2015-03-17 11:30:26,310 INFO [main] Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
> 2015-03-17 11:30:26,312 INFO [main] jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
> 2015-03-17 11:30:27,383 INFO [main] mapreduce.JobSubmitter: Cleaning up the staging area file:/tmp/hadoop-hadoop/mapred/staging/hadoop806729561/.staging/job_local806729561_0001
> 2015-03-17 11:30:27,387 ERROR [main] snapshot.ExportSnapshot: Snapshot export failed
> java.io.FileNotFoundException: File does not exist: hdfs://hiveprodeuw1/opt/hadoop/hbase-0.98.7-hadoop2/lib/hadoop-common-2.5.1.jar
>     at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1072)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1064)
>     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1064)
>     at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
>     at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
>     at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
>     at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
>     at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:265)
>     at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:301)
>     at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:389)
>     at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
>     at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
>     at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
>     at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
>     at org.apache.hadoop.hbase.snapshot.ExportSnapshot.runCopyJob(ExportSnapshot.java:768)
>     at org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:925)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>     at org.apache.hadoop.hbase.snapshot.ExportSnapshot.innerMain(ExportSnapshot.java:991)
>     at org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:995)
