Of course, in the HBase shell it works OK when creating a sample table
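
For reference, a quick sanity check like the following works in the HBase
shell (a minimal sketch; the table and column family names are just
placeholders, not anything Kylin uses):

$ hbase shell
hbase> create 'kylin_smoke_test', 'cf'            # create a test table with one column family
hbase> put 'kylin_smoke_test', 'row1', 'cf:a', 'v1'  # write one cell
hbase> scan 'kylin_smoke_test'                    # read it back
hbase> disable 'kylin_smoke_test'
hbase> drop 'kylin_smoke_test'                    # clean up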

2015-05-26 16:28 GMT+08:00 dong wang <[email protected]>:

> Is there a good way to verify whether my HBase is working OK or not?
>
> 2015-05-26 16:08 GMT+08:00 dong wang <[email protected]>:
>
>> Hi Shaofeng, I checked the log for a long time; these may be the only
>> hints:
>>
>> 15/05/26 16:03:07 INFO hs.HistoryFileManager: Moving hdfs://
>> abc-master.test.com:8020/tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1432623918955_0018_conf.xml
>> to hdfs://
>> abc-master.test.com:8020/tmp/hadoop-yarn/staging/history/done/2015/05/26/000000/job_1432623918955_0018_conf.xml
>> 15/05/26 16:05:17 INFO hs.JobHistory: Starting scan to move intermediate
>> done files
>> 15/05/26 16:05:28 INFO hs.CompletedJob: Loading job:
>> job_1432623918955_0019 from file: hdfs://
>> abc-master.test.com:8020/tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1432623918955_0019-1432627478584-root-Kylin_HFile_Generator_tbl1-1432627521052-5-0-FAILED-root.root-1432627485193.jhist
>> 15/05/26 16:05:28 INFO hs.CompletedJob: Loading history file: [hdfs://
>> abc-master.test.com:8020/tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1432623918955_0019-1432627478584-root-Kylin_HFile_Generator_tbl1-1432627521052-5-0-FAILED-root.root-1432627485193.jhist
>> ]
>> 15/05/26 16:05:28 INFO jobhistory.JobSummary:
>> jobId=job_1432623918955_0019,submitTime=1432627478584,launchTime=1432627485193,firstMapTaskLaunchTime=1432627487604,firstReduceTaskLaunchTime=1432627495325,finishTime=1432627521052,resourcesPerMap=4096,resourcesPerReduce=8192,numMaps=5,numReduces=1,user=root,queue=default,status=FAILED,mapSlotSeconds=57,reduceSlotSeconds=75,jobName=Kylin_HFile_Generator_tbl1_1_Step
>> 15/05/26 16:05:28 INFO hs.HistoryFileManager: Deleting JobSummary file:
>> [hdfs://
>> abc-master.test.com:8020/tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1432623918955_0019.summary
>> ]
>> 15/05/26 16:05:28 INFO hs.HistoryFileManager: Moving hdfs://
>> abc-master.test.com:8020/tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1432623918955_0019-1432627478584-root-Kylin_HFile_Generator_tbl1-1432627521052-5-0-FAILED-root.root-1432627485193.jhist
>> to hdfs://
>> abc-master.test.com:8020/tmp/hadoop-yarn/staging/history/done/2015/05/26/000000/job_1432623918955_0019-1432627478584-root-Kylin_HFile_Generator_tbl1-1432627521052-5-0-FAILED-root.root-1432627485193.jhist
>> 15/05/26 16:05:28 INFO hs.HistoryFileManager: Moving hdfs://
>> abc-master.test.com:8020/tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1432623918955_0019_conf.xml
>> to hdfs://
>> abc-master.test.com:8020/tmp/hadoop-yarn/staging/history/done/2015/05/26/000000/job_1432623918955_0019_conf.xml
>>
>> 2015-05-26 16:02 GMT+08:00 dong wang <[email protected]>:
>>
>>> does anyone know whether there is anything related to HBase itself in the
>>> "Convert Cuboid Data to HFile" step?
>>>
>>> 2015-05-26 15:58 GMT+08:00 dong wang <[email protected]>:
>>>
>>>> does anyone run into the same problem with CDH 5.4.2 +
>>>> kylin-0.7.1-staging source code?
>>>>
>>>> 2015-05-26 13:54 GMT+08:00 dong wang <[email protected]>:
>>>>
>>>>> sorry, I mis-clicked the log information button; I will check the MR log
>>>>> first
>>>>>
>>>>> 2015-05-26 13:51 GMT+08:00 dong wang <[email protected]>:
>>>>>
>>>>>> today, I updated the environment, and when building the cube the
>>>>>> error looks like the following:
>>>>>> 2015-05-25 22:40:04.388 - State of Hadoop job:
>>>>>> job_1432568508250_0142:ACCEPTED - UNDEFINED
>>>>>> 2015-05-25 22:40:14.405 - State of Hadoop job:
>>>>>> job_1432568508250_0142:RUNNING - UNDEFINED
>>>>>> 2015-05-25 22:40:24.424 - State of Hadoop job:
>>>>>> job_1432568508250_0142:RUNNING - UNDEFINED
>>>>>> 2015-05-25 22:40:34.438 - State of Hadoop job:
>>>>>> job_1432568508250_0142:RUNNING - UNDEFINED
>>>>>> 2015-05-25 22:40:44.451 - State of Hadoop job:
>>>>>> job_1432568508250_0142:RUNNING - UNDEFINED
>>>>>> 2015-05-25 22:40:54.465 - State of Hadoop job:
>>>>>> job_1432568508250_0142:FINISHED - FAILED
>>>>>> no counters for job job_1432568508250_0142
>>>>>>
>>>>>>
>>>>>> and when looking into the MR log, it says:
>>>>>>
>>>>>> Total Vmem allocated for Containers 29.40 GB
>>>>>> Vmem enforcement enabled false
>>>>>> Total Pmem allocated for Container 14 GB
>>>>>> Pmem enforcement enabled true
>>>>>> Total VCores allocated for Containers 8
>>>>>> NodeHealthyStatus true
>>>>>> LastNodeHealthTime Tue May 26 13:20:13 CST 2015
>>>>>> NodeHealthReport
>>>>>> Node Manager Version: 2.6.0-cdh5.4.2 from
>>>>>> 15b703c8725733b7b2813d2325659eb7d57e7a3f by jenkins source checksum
>>>>>> e7a085479aa1989b5cecfabea403549 on 2015-05-20T00:09Z
>>>>>> Hadoop Version: 2.6.0-cdh5.4.2 from
>>>>>> 15b703c8725733b7b2813d2325659eb7d57e7a3f by jenkins source checksum
>>>>>> de74f1adb3744f8ee85d9a5b98f90d on 2015-05-20T00:03Z
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>
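
By the way, for the failed MR job in the quoted log (e.g.
job_1432568508250_0142), the full container logs can usually be pulled with
the YARN CLI instead of the NodeManager info page, assuming log aggregation
is enabled; the application id is the job id with the job_ prefix replaced
by application_ (the output file name here is just a placeholder):

$ yarn logs -applicationId application_1432568508250_0142 > job_0142_containers.log
$ mapred job -status job_1432568508250_0142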
