You need to give the hbase user write permission to the subdirectory under /tmp.
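
Something along these lines should do it; this assumes hdfs is your HDFS
superuser, and the path is the parent directory from the stack trace below:

  sudo -u hdfs hdfs dfs -chown -R hbase /tmp/campaign_generator
  # or keep the hdfs ownership and just open up the mode:
  sudo -u hdfs hdfs dfs -chmod -R 777 /tmp/campaign_generator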

Cheers

On Jun 19, 2014, at 7:33 PM, Chen Wang <[email protected]> wrote:

> Ted,
> Thanks for the pointer!! After checking the 04 region server log, I see it is
> flooded with permission-denied errors. Does it mean that the hbase user does
> not have permission to write to HDFS?
> 
> Caused by:
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
> Permission denied: user=hbase, access=WRITE,
> inode="/tmp/campaign_generator/2014-06-20-00-52-19/campaign":hdfs:supergroup:drwxr-xr-x
>        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:265)
>        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:251)
>        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:232)
>        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:179)
>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5489)
>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameToInternal(FSNamesystem.java:3196)
>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameToInt(FSNamesystem.java:3166)
>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(FSNamesystem.java:3134)
>        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rename(NameNodeRpcServer.java:680)
>        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rename(ClientNamenodeProtocolServerSideTranslatorPB.java:523)
> 
> HBase should really throw an exception instead of hanging there retrying...
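> 
> Side note: the number of bulk-load attempts seems to be capped by the
> hbase.bulkload.retries.number property (where 0 means never give up), so
> something like the following should at least fail fast instead of looping.
> That property name is just my reading of hbase-default.xml, not something
> I have verified:
> 
> hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles \
>     -Dhbase.bulkload.retries.number=3 \
>     <hdfs://storefileoutput> <tablename>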
> 
> In any case, this is a big step forward for me. Thanks for the debugging
> pointers. I used to work in the .NET world. :-)
> Chen
> 
> 
> On Thu, Jun 19, 2014 at 7:25 PM, Ted Yu <[email protected]> wrote:
> 
>> Was cluster-04 always showing up in the log?
>> 
>> Have you checked the region server log on cluster-04?
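>> 
>> Something like this is usually enough to spot the problem (the log path is
>> a guess for a typical packaged install; yours may differ):
>> 
>> grep -iE "error|denied|exception" /var/log/hbase/*regionserver*.log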
>> 
>> Cheers
>> 
>> 
>> On Thu, Jun 19, 2014 at 6:09 PM, Chen Wang <[email protected]>
>> wrote:
>> 
>>> Last piece of the puzzle!
>>> 
>>> My MapReduce job succeeded in generating the HDFS files. However, the bulk
>>> load with the following code:
>>> 
>>> LoadIncrementalHFiles loader = new LoadIncrementalHFiles(hbaseConf);
>>> 
>>> loader.doBulkLoad(newExecutionOutput, candidateSendTable);
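>>> 
>>> Here hbaseConf, newExecutionOutput and candidateSendTable are built roughly
>>> as follows; the table name and path are placeholders for my real values, and
>>> this assumes the HTable-based 0.96 client API:
>>> 
>>> import org.apache.hadoop.conf.Configuration;
>>> import org.apache.hadoop.fs.Path;
>>> import org.apache.hadoop.hbase.HBaseConfiguration;
>>> import org.apache.hadoop.hbase.client.HTable;
>>> 
>>> Configuration hbaseConf = HBaseConfiguration.create();
>>> Path newExecutionOutput = new Path("hdfs://storefileoutput"); // HFile output dir of the MR job
>>> HTable candidateSendTable = new HTable(hbaseConf, "mytable"); // placeholder name; throws IOException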
>>> 
>>> This just hangs there without any output. I tried running
>>> 
>>> hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
>>> <hdfs://storefileoutput> <tablename>
>>> 
>>> It seems to get into some kind of infinite loop:
>>> 
>>> 2014-06-19 18:06:29,990 DEBUG [LoadIncrementalHFiles-1] client.HConnectionManager$HConnectionImplementation: Removed cluster-04:60020 as a location of [tablename],1403133308612.060ff9282b3b653c59c1e6be82d2521a. for tableName=[tablename] from cache
>>> 
>>> 2014-06-19 18:06:30,004 DEBUG [LoadIncrementalHFiles-1] mapreduce.LoadIncrementalHFiles: Going to connect to server region=[tablename],,1403133308612.060ff9282b3b653c59c1e6be82d2521a., hostname=cluster-04,60020,1403211430209, seqNum=1 for row  with hfile group [{[B@3b5d5e0d,hdfs://mypath}]
>>> 
>>> 2014-06-19 18:06:45,839 DEBUG [LruStats #0] hfile.LruBlockCache: Total=3.17 MB, free=383.53 MB, max=386.70 MB, blocks=0, accesses=0, hits=0, hitRatio=0, cachingAccesses=0, cachingHits=0, cachingHitsRatio=0, evictions=0, evicted=0, evictedPerRun=NaN
>>> 
>>> 
>>> Any guidance on how I can debug this?
>>> 
>>> Thanks much!
>>> 
>>> Chen
>> 
