Yes, I will merge it into 0.7; it should be released in 0.7.2, as Luke has
already upgraded the version in pom.xml. From 0.7.1 to 0.7.2 there are only
bug fixes, so the impact on users should be very minor.

2015-05-27 14:26 GMT+08:00 dong wang <[email protected]>:

> In addition, I still think that merging the patch into kylin-0.7.1 is very
> necessary, since the release will be based on 0.7.1 and new users may use
> hbase-1.0 as well~
>
> 2015-05-27 14:05 GMT+08:00 dong wang <[email protected]>:
>
> > sure, thanks. This time it was caused by yum install, which automatically
> > upgrades to hbase-1.0 for cdh 5.4.2; in the future I will manually install
> > hbase at a specified version from the hbase tarball
> >
> > 2015-05-27 13:39 GMT+08:00 ShaoFeng Shi <[email protected]>:
> >
> >> No, it wasn't proved; it is just a deduction. If downgrading to 0.98 can
> >> solve your problem, that is good. In the future, when you make such an
> >> upgrade, I suggest you confirm with the Kylin team, or do some testing in a
> >> QA environment before making the change in production.
> >>
> >> 2015-05-27 13:33 GMT+08:00 dong wang <[email protected]>:
> >>
> >> > Shaofeng, it is proved that the above environment will hit the error.
> >> > Currently I have switched back to hbase-0.98.6-cdh5.3.2 (yum remove the
> >> > hbase-1.0.0 mentioned above, then manually install the
> >> > hbase-0.98.6-cdh5.3.2 tar.gz file from the cloudera site), and it works OK
> >> > now~
> >> >
> >> > 2015-05-27 11:55 GMT+08:00 dong wang <[email protected]>:
> >> >
> >> > > much appreciated!
> >> > >
> >> > > 2015-05-27 11:47 GMT+08:00 ShaoFeng Shi <[email protected]>:
> >> > >
> >> > >> 0.8.0 is still under development and is unstable; please don't use it
> >> > >> for now, and we will not provide support on any issue of 0.8 at this
> >> > >> moment.
> >> > >>
> >> > >> I'm applying the patch on 0.7.1, but before that I need to merge 0.7.1
> >> > >> into 0.8; there are many conflicts to resolve...
> >> > >>
> >> > >> 2015-05-27 11:13 GMT+08:00 dong wang <[email protected]>:
> >> > >>
> >> > >> > Shaofeng, I intended to build 0.8.0; however, it fails, and there is a
> >> > >> > lot of version mismatch info when building (some modules are 0.7.1, some
> >> > >> > are 0.8.0). It is unstable, right?
> >> > >> >
> >> > >> > 2015-05-27 9:46 GMT+08:00 网站联系人 <[email protected]>:
> >> > >> >
> >> > >> > > yes, very urgent, I want to try the patch! I have re-set up the CDH env
> >> > >> > > twice, and still get the above error
> >> > >> > > ------------------ Original Message ------------------
> >> > >> > > From: "ShaoFeng Shi"<[email protected]>
> >> > >> > > Sent: Wednesday, May 27, 2015, 9:38 AM
> >> > >> > > To: "dev"<[email protected]>;
> >> > >> > > Subject: Re: urgent help, fail when "Convert Cuboid Data to HFile"
> >> > >> > >
> >> > >> > >
> >> > >> > > Dong,  your kylin env might be affected by KYLIN-753
> >> > >> > > <https://issues.apache.org/jira/browse/KYLIN-753>;
> >> > >> > >
> >> > >> > > So far the fix for this issue was only made on the 0.8.0 branch; but if
> >> > >> > > your case is urgent, I think we can apply the patch on the 0.7 branch.
> >> > >> > >
> >> > >> > > 2015-05-27 9:17 GMT+08:00 Luke Han <[email protected]>:
> >> > >> > >
> >> > >> > > > It looks like you are using HBase v1.0? Could you please help to compare
> >> > >> > > > the difference of this API between v1.0 and v0.98?
> >> > >> > > >
> >> > >> > > > I'm guessing there is some tricky stuff.
> >> > >> > > >
> >> > >> > > >
> >> > >> > > > Best Regards!
> >> > >> > > > ---------------------
> >> > >> > > >
> >> > >> > > > Luke Han
> >> > >> > > >
> >> > >> > > > 2015-05-27 2:49 GMT+08:00 dong wang <[email protected]
> >:
> >> > >> > > >
> >> > >> > > > > shaofeng, I can create and scan a table successfully in hbase, but it
> >> > >> > > > > always fails at the above step; and checking the MR log further, I found
> >> > >> > > > > that all 4 attempts of the task fail with the same error below:
> >> > >> > > > >
> >> > >> > > > > "status":"FAILED","error":"Error:
> >> > >> > > > >
> >> > >> > > > >
> >> > >> > > >
> >> > >> > >
> >> > >> >
> >> > >>
> >> >
> >>
> org.apache.hadoop.hbase.io.encoding.HFileBlockDefaultEncodingContext.compressAndEncrypt([B)[B"
> >> > >> > > > >
> >> > >> > > > > the info about hbase and cdh are: hbase.x86_64
> >> > >> > > > >  1.0.0+cdh5.4.2+142-1.cdh5.4.2.p0.4.el6
> >> > >> > > > >
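
A note on reading that error: the descriptor compressAndEncrypt([B)[B means "takes byte[], returns byte[]", and an error consisting only of a method descriptor like this usually indicates a NoSuchMethodError-style linkage problem, i.e. code compiled against one HBase version running against another (which is what KYLIN-753 is about). A minimal reflection probe can show what the HBase jars on the cluster actually expose; only the class and method names below are taken from the error above, the probe class itself is just an illustrative sketch:

// Minimal sketch: check whether the method named in the error exists on the
// HBase version that is actually on the classpath, and if not, list the
// related methods that do exist. Illustrative only.
import java.lang.reflect.Method;

public class CompressAndEncryptProbe {
    public static void main(String[] args) throws Exception {
        Class<?> ctx = Class.forName(
                "org.apache.hadoop.hbase.io.encoding.HFileBlockDefaultEncodingContext");
        try {
            // ([B)[B in the error = takes byte[], returns byte[]
            Method m = ctx.getDeclaredMethod("compressAndEncrypt", byte[].class);
            System.out.println("Found: " + m);
        } catch (NoSuchMethodException e) {
            System.out.println("compressAndEncrypt(byte[]) not found; similar methods:");
            for (Method m : ctx.getDeclaredMethods()) {
                if (m.getName().startsWith("compress")) {
                    System.out.println("  " + m);
                }
            }
        }
    }
}

Compile and run it with the cluster's HBase jars on the classpath (for example, whatever "hbase classpath" prints) to see which variant of the method the installed HBase version actually provides.
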
> >> > >> > > > > 2015-05-26 17:00 GMT+08:00 ShaoFeng Shi <
> >> [email protected]
> >> > >:
> >> > >> > > > >
> >> > >> > > > > > you can use the hbase shell to scan a table to see whether the server as
> >> > >> > > > > > well as the table are working; but this step is an offline MR job, so it
> >> > >> > > > > > doesn't depend on the HBase server.
> >> > >> > > > > >
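
In the shell that smoke test is basically "list" followed by something like scan 'YOUR_TABLE', {LIMIT => 10}. If you prefer to script it, a rough Java equivalent is sketched below; the table name is a placeholder and it uses the 0.98-era client API (HTable), so treat it as an illustration rather than anything from Kylin:

// A minimal smoke test, similar in spirit to scanning a table in the hbase
// shell: it connects using the hbase-site.xml on the classpath and reads a
// few rows. "my_table" is a placeholder table name.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

public class HBaseSmokeTest {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml
        HTable table = new HTable(conf, "my_table");
        try {
            ResultScanner scanner = table.getScanner(new Scan());
            try {
                int rows = 0;
                for (Result r : scanner) {
                    System.out.println(r);
                    if (++rows >= 10) {   // a few rows are enough for a smoke test
                        break;
                    }
                }
                System.out.println("Scan OK, rows seen: " + rows);
            } finally {
                scanner.close();
            }
        } finally {
            table.close();
        }
    }
}

If a scan like this succeeds but the HFile step still fails, that points more at the MR-side classpath than at the HBase server itself.
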
> >> > >> > > > > > 2015-05-26 16:48 GMT+08:00 dong wang <
> >> [email protected]
> >> > >:
> >> > >> > > > > >
> >> > >> > > > > > > is there a good way to test whether HBase works OK or not?
> >> > >> > > > > > >
> >> > >> > > > > > > 2015-05-26 16:46 GMT+08:00 dong wang <
> >> > [email protected]
> >> > >> >:
> >> > >> > > > > > >
> >> > >> > > > > > > > as we checked, the cube is very small~ it's weird
> >> > >> > > > > > > >
> >> > >> > > > > > > > 2015-05-26 16:33 GMT+08:00 ShaoFeng Shi <
> >> > >> [email protected]
> >> > >> > >:
> >> > >> > > > > > > >
> >> > >> > > > > > > >> This job converts Kylin's cuboids (sequence files) to HBase's HFile
> >> > >> > > > > > > >> format; it uses HBase classes in the MR job. This step is stable, as we
> >> > >> > > > > > > >> haven't seen errors from it for some time. The only issue I know of for
> >> > >> > > > > > > >> this step is that if the cube is very large, the conversion may take a
> >> > >> > > > > > > >> long time and make the Kylin job engine time out. Anyway, you can
> >> > >> > > > > > > >> investigate from another direction: was any change made on this env
> >> > >> > > > > > > >> before this error?
> >> > >> > > > > > > >>
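
For context on "it uses HBase classes in the MR": writing HFiles from MapReduce is normally done with HFileOutputFormat2 plus a later bulk load. The sketch below is a generic illustration of that pattern against the 0.98-era API, not Kylin's actual job code; the input path, the table name, and the assumed (ImmutableBytesWritable, KeyValue) record type are placeholders. It shows why an HBase version mismatch surfaces inside the MR tasks even though the HFiles themselves are written offline:

// Generic illustration of an MR job that writes HFiles for bulk load.
// NOT Kylin's actual job code; paths, table name, and the input record type
// are placeholder assumptions for the sketch.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class CuboidToHFileSketch {

    // Pass-through mapper: re-emits the (rowkey, KeyValue) pairs read from the
    // input so HBase's sort/partition machinery can order them into HFiles.
    public static class PassThroughMapper
            extends Mapper<ImmutableBytesWritable, KeyValue, ImmutableBytesWritable, KeyValue> {
        @Override
        protected void map(ImmutableBytesWritable key, KeyValue value, Context context)
                throws IOException, InterruptedException {
            context.write(key, value);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = Job.getInstance(conf, "Convert Cuboid Data to HFile (sketch)");
        job.setJarByClass(CuboidToHFileSketch.class);

        job.setInputFormatClass(SequenceFileInputFormat.class);
        job.setMapperClass(PassThroughMapper.class);
        job.setMapOutputKeyClass(ImmutableBytesWritable.class);
        job.setMapOutputValueClass(KeyValue.class);
        FileInputFormat.addInputPath(job, new Path("/placeholder/cuboid_input"));
        FileOutputFormat.setOutputPath(job, new Path("/placeholder/hfile_output"));

        // This call is what pulls HBase classes into the MR tasks: it plugs in
        // HBase's reducer, partitioner and HFileOutputFormat2 so the generated
        // HFiles line up with the target table's region boundaries. It only
        // reads table metadata; the HFiles themselves are written offline.
        HTable table = new HTable(conf, "placeholder_htable_name");
        HFileOutputFormat2.configureIncrementalLoad(job, table);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

The HFile writer path configured here is presumably how classes like HFileBlockDefaultEncodingContext from the error above end up on the task classpath, which is why the 0.98-vs-1.0 difference matters for this step.
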
> >> > >> > > > > > > >> 2015-05-26 16:08 GMT+08:00 dong wang <
> >> > >> [email protected]
> >> > >> > >:
> >> > >> > > > > > > >>
> >> > >> > > > > > > >> > Hi shaofeng, I checked the log for a long time; these may be the only
> >> > >> > > > > > > >> > hints:
> >> > >> > > > > > > >> >
> >> > >> > > > > > > >> > 15/05/26 16:03:07 INFO hs.HistoryFileManager: Moving hdfs://abc-master.test.com:8020/tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1432623918955_0018_conf.xml to hdfs://abc-master.test.com:8020/tmp/hadoop-yarn/staging/history/done/2015/05/26/000000/job_1432623918955_0018_conf.xml
> >> > >> > > > > > > >> > 15/05/26 16:05:17 INFO hs.JobHistory: Starting scan to move intermediate done files
> >> > >> > > > > > > >> > 15/05/26 16:05:28 INFO hs.CompletedJob: Loading job: job_1432623918955_0019 from file: hdfs://abc-master.test.com:8020/tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1432623918955_0019-1432627478584-root-Kylin_HFile_Generator_tbl1-1432627521052-5-0-FAILED-root.root-1432627485193.jhist
> >> > >> > > > > > > >> > 15/05/26 16:05:28 INFO hs.CompletedJob: Loading history file: [hdfs://abc-master.test.com:8020/tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1432623918955_0019-1432627478584-root-Kylin_HFile_Generator_tbl1-1432627521052-5-0-FAILED-root.root-1432627485193.jhist]
> >> > >> > > > > > > >> > 15/05/26 16:05:28 INFO jobhistory.JobSummary: jobId=job_1432623918955_0019,submitTime=1432627478584,launchTime=1432627485193,firstMapTaskLaunchTime=1432627487604,firstReduceTaskLaunchTime=1432627495325,finishTime=1432627521052,resourcesPerMap=4096,resourcesPerReduce=8192,numMaps=5,numReduces=1,user=root,queue=default,status=FAILED,mapSlotSeconds=57,reduceSlotSeconds=75,jobName=Kylin_HFile_Generator_tbl1_1_Step
> >> > >> > > > > > > >> > 15/05/26 16:05:28 INFO hs.HistoryFileManager: Deleting JobSummary file: [hdfs://abc-master.test.com:8020/tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1432623918955_0019.summary]
> >> > >> > > > > > > >> > 15/05/26 16:05:28 INFO hs.HistoryFileManager: Moving hdfs://abc-master.test.com:8020/tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1432623918955_0019-1432627478584-root-Kylin_HFile_Generator_tbl1-1432627521052-5-0-FAILED-root.root-1432627485193.jhist to hdfs://abc-master.test.com:8020/tmp/hadoop-yarn/staging/history/done/2015/05/26/000000/job_1432623918955_0019-1432627478584-root-Kylin_HFile_Generator_tbl1-1432627521052-5-0-FAILED-root.root-1432627485193.jhist
> >> > >> > > > > > > >> > 15/05/26 16:05:28 INFO hs.HistoryFileManager: Moving hdfs://abc-master.test.com:8020/tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1432623918955_0019_conf.xml to hdfs://abc-master.test.com:8020/tmp/hadoop-yarn/staging/history/done/2015/05/26/000000/job_1432623918955_0019_conf.xml
> >> > >> > > > > > > >> >
> >> > >> > > > > > > >> > 2015-05-26 16:02 GMT+08:00 dong wang <
> >> > >> > [email protected]
> >> > >> > > >:
> >> > >> > > > > > > >> >
> >> > >> > > > > > > >> > > does anyone know whether there is any stuff related to "HBase" itself
> >> > >> > > > > > > >> > > in the "Convert Cuboid Data to HFile" step?
> >> > >> > > > > > > >> > >
> >> > >> > > > > > > >> > > 2015-05-26 15:58 GMT+08:00 dong wang <
> >> > >> > > [email protected]
> >> > >> > > > >:
> >> > >> > > > > > > >> > >
> >> > >> > > > > > > >> > >> does anyone meet the same problem with CDH 5.4.2 +
> >> > >> > > > > > > >> > >> kylin-0.7.1-staging source code?
> >> > >> > > > > > > >> > >>
> >> > >> > > > > > > >> > >> 2015-05-26 13:54 GMT+08:00 dong wang <
> >> > >> > > [email protected]
> >> > >> > > > >:
> >> > >> > > > > > > >> > >>
> >> > >> > > > > > > >> > >>> sorry, I mis-clicked the log information button; I will check the MR
> >> > >> > > > > > > >> > >>> log first
> >> > >> > > > > > > >> > >>>
> >> > >> > > > > > > >> > >>> 2015-05-26 13:51 GMT+08:00 dong wang <
> >> > >> > > > [email protected]
> >> > >> > > > > >:
> >> > >> > > > > > > >> > >>>
> >> > >> > > > > > > >> > >>>> today I updated the environment, and when building the cube the error
> >> > >> > > > > > > >> > >>>> looks like the following:
> >> > >> > > > > > > >> > >>>> 2015-05-25 22:40:04.388 - State of Hadoop job: job_1432568508250_0142:ACCEPTED - UNDEFINED
> >> > >> > > > > > > >> > >>>> 2015-05-25 22:40:14.405 - State of Hadoop job: job_1432568508250_0142:RUNNING - UNDEFINED
> >> > >> > > > > > > >> > >>>> 2015-05-25 22:40:24.424 - State of Hadoop job: job_1432568508250_0142:RUNNING - UNDEFINED
> >> > >> > > > > > > >> > >>>> 2015-05-25 22:40:34.438 - State of Hadoop job: job_1432568508250_0142:RUNNING - UNDEFINED
> >> > >> > > > > > > >> > >>>> 2015-05-25 22:40:44.451 - State of Hadoop job: job_1432568508250_0142:RUNNING - UNDEFINED
> >> > >> > > > > > > >> > >>>> 2015-05-25 22:40:54.465 - State of Hadoop job: job_1432568508250_0142:FINISHED - FAILED
> >> > >> > > > > > > >> > >>>> no counters for job job_1432568508250_0142
> >> > >> > > > > > > >> > >>>>
> >> > >> > > > > > > >> > >>>>
> >> > >> > > > > > > >> > >>>> and when looking into the MR log, it says:
> >> > >> > > > > > > >> > >>>>
> >> > >> > > > > > > >> > >>>> Total Vmem allocated for Containers 29.40 GB
> >> > >> > > > > > > >> > >>>> Vmem enforcement enabled false
> >> > >> > > > > > > >> > >>>> Total Pmem allocated for Container 14 GB
> >> > >> > > > > > > >> > >>>> Pmem enforcement enabled true
> >> > >> > > > > > > >> > >>>> Total VCores allocated for Containers 8
> >> > >> > > > > > > >> > >>>> NodeHealthyStatus true
> >> > >> > > > > > > >> > >>>> LastNodeHealthTime Tue May 26 13:20:13 CST 2015
> >> > >> > > > > > > >> > >>>> NodeHealthReport
> >> > >> > > > > > > >> > >>>> Node Manager Version: 2.6.0-cdh5.4.2 from 15b703c8725733b7b2813d2325659eb7d57e7a3f by jenkins source checksum e7a085479aa1989b5cecfabea403549 on 2015-05-20T00:09Z
> >> > >> > > > > > > >> > >>>> Hadoop Version: 2.6.0-cdh5.4.2 from 15b703c8725733b7b2813d2325659eb7d57e7a3f by jenkins source checksum de74f1adb3744f8ee85d9a5b98f90d on 2015-05-20T00:03Z
> >> > >> > > > > > > >> > >>>>
