Root cause, as I see it: the job was running on one or more nodes of the YARN cluster where MR 2.3 libs were installed, so JobCounter.MB_MILLIS_REDUCES was present in the counters. On the other side, due to the classpath setting, the client was likely running with MR 2.2 libs. After the client retrieved the counters from the MR AM, it tried to construct the Counter object from the received counter name. Unfortunately, that enum constant did not exist on the client's classpath, which is why the "No enum constant" exception was thrown.
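To illustrate the mechanism with a toy sketch (the enum and class below are made up for the demo; they are not Hadoop's actual JobCounter): the client resolves the counter name received from the AM via Enum.valueOf, which throws as soon as the constant is missing from the client-side jars.

```java
// Toy reproduction of the root cause. ClientJobCounter stands in for the
// client's JobCounter enum as compiled from MR 2.2 jars, i.e. built
// before MB_MILLIS_REDUCES existed.
public class EnumMismatchDemo {
    enum ClientJobCounter { TOTAL_LAUNCHED_MAPS, TOTAL_LAUNCHED_REDUCES }

    public static void main(String[] args) {
        // Counter name as sent by an MR 2.3 ApplicationMaster.
        String nameFromCluster = "MB_MILLIS_REDUCES";
        try {
            Enum.valueOf(ClientJobCounter.class, nameFromCluster);
        } catch (IllegalArgumentException e) {
            // Message starts with "No enum constant", matching the kylin.log error.
            System.out.println(e.getMessage());
        }
    }
}
```

Rebuilding with jars that do contain the constant makes the same lookup succeed, which is why aligning the client's MR libs with the cluster's fixes the error.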
JobCounter.MB_MILLIS_REDUCES was introduced into MR2 in Hadoop 2.3. You should find a log line like "Hadoop job classpath is:"; it will tell you the exact MapReduce classpath.

Best Regard
Zhou QianHao

On 4/8/15, 4:13 PM, "[email protected]" <[email protected]> wrote:

>Hi
>
>I searched kylin.log and found the exception below. Does that mean the
>mapreduce or yarn environment is not configured correctly?
>
>[pool-5-thread-2]:[2015-04-08 10:28:02,778][ERROR][org.apache.kylin.job.common.HadoopCmdOutput.updateJobCounter(HadoopCmdOutput.java:100)] - No enum constant org.apache.hadoop.mapreduce.JobCounter.MB_MILLIS_REDUCES
>java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.mapreduce.JobCounter.MB_MILLIS_REDUCES
>  at java.lang.Enum.valueOf(Enum.java:236)
>  at org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup.valueOf(FrameworkCounterGroup.java:148)
>  at org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup.findCounter(FrameworkCounterGroup.java:182)
>  at org.apache.hadoop.mapreduce.counters.AbstractCounters.findCounter(AbstractCounters.java:154)
>  at org.apache.hadoop.mapreduce.TypeConverter.fromYarn(TypeConverter.java:240)
>  at org.apache.hadoop.mapred.ClientServiceDelegate.getJobCounters(ClientServiceDelegate.java:370)
>  at org.apache.hadoop.mapred.YARNRunner.getJobCounters(YARNRunner.java:511)
>  at org.apache.hadoop.mapreduce.Job$7.run(Job.java:756)
>  at org.apache.hadoop.mapreduce.Job$7.run(Job.java:753)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:415)
>  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>  at org.apache.hadoop.mapreduce.Job.getCounters(Job.java:753)
>  at org.apache.kylin.job.common.HadoopCmdOutput.updateJobCounter(HadoopCmdOutput.java:86)
>  at org.apache.kylin.job.common.MapReduceExecutable.doWork(MapReduceExecutable.java:144)
>  at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
>  at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:50)
>  at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
>  at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:132)
>  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  at java.lang.Thread.run(Thread.java:744)
>
>
>[email protected]
>
>From: Zhou, Qianhao
>Date: 2015-04-08 16:04
>To: [email protected]
>Subject: Re: Update Cube Info failed due to illegalstate exception
>Hi,
>  The binary package should be fine. Kylin does NOT have to be in the
>hadoop cluster; the CLI alone is enough.
>  According to what you said, maybe one of the building steps ("Build
>Base Cuboid Data") succeeded while Kylin failed to get the statistical
>info through TaskCounter.MAP_INPUT_RECORDS.
>  Can you please check the output of this step, or kylin.log, to see if
>any error is logged?
>
>Best Regard
>Zhou QianHao
>
>
>On 4/8/15, 3:22 PM, "[email protected]" <[email protected]> wrote:
>
>>Hi Zhou,
>>Thanks for following up.
>>
>>Our environment is CDH 5.3.0 with Hadoop and YARN 2.5.0, HBase
>>0.98.10.1, and Hive 0.13.1.
>>One concern is that we did not recompile the source code and directly
>>used the binary package.
>>We did not change any config or environment settings when installing
>>Kylin.
>>
>>Forgot to mention: does Kylin need HDFS and MapReduce instances on the
>>install node?
>>The node we installed Kylin on has no Hadoop processes running, but
>>via the CDH host we can issue hadoop CLI commands on this node.
>>
>>Thanks for any suggestion.
>>
>>Best,
>>Sun.
>>
>>
>>[email protected]
>>
>>From: 周千昊 (Zhou Qianhao)
>>Date: 2015-04-08 15:07
>>To: dev
>>Subject: Re: Update Cube Info failed due to illegalstate exception
>>Hi,
>>  Kylin uses the pattern "Map input records=(\d+)" to get the total
>>record count.
>>  This situation may happen when the pattern does not work in a
>>particular hadoop environment; can you please tell us some details
>>about your hadoop environment?
>>
>>On Wed, Apr 8, 2015 at 1:46 PM [email protected] <[email protected]> wrote:
>>
>>> Hi,
>>>
>>> We just set up Kylin 0.7.1-SNAPSHOT from here:
>>> http://kylin.incubator.apache.org/download/
>>>
>>> Using the On-Hadoop-CLI installation we can successfully start the
>>> Kylin server and visit the Kylin homepage. Then we ran ./bin/sample.sh
>>> to create the sample project and tried to build kylin_sales_cube.
>>> However, when the build reached the last step, "Update Cube Info", we
>>> saw exceptions like the following. The problem persists when we
>>> resume and rebuild the cube. Can any experts verify this and offer an
>>> explanation?
>>>
>>> Best,
>>>
>>> Sun.
>>>
>>>
>>> java.lang.IllegalStateException: Can't get cube source record count.
>>>   at com.google.common.base.Preconditions.checkState(Preconditions.java:149)
>>>   at org.apache.kylin.job.cube.UpdateCubeInfoAfterBuildStep.doWork(UpdateCubeInfoAfterBuildStep.java:104)
>>>   at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
>>>   at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:50)
>>>   at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
>>>   at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:132)
>>>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>   at java.lang.Thread.run(Thread.java:744)
>>>
>>>
>>> [email protected]
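For reference, the counter scraping Zhou Qianhao mentions in the thread can be sketched as below. Only the "Map input records=(\d+)" pattern is taken from the thread; the class and method names here are illustrative, not Kylin's actual code.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of extracting the source record count from MR job counter
// output. When the pattern does not match the job's output, the count
// cannot be determined, which is what surfaces as the
// "Can't get cube source record count." IllegalStateException.
public class CounterScrape {
    private static final Pattern MAP_INPUT_RECORDS =
            Pattern.compile("Map input records=(\\d+)");

    static long parseMapInputRecords(String jobOutput) {
        Matcher m = MAP_INPUT_RECORDS.matcher(jobOutput);
        if (!m.find()) {
            throw new IllegalStateException("Can't get cube source record count.");
        }
        return Long.parseLong(m.group(1));
    }

    public static void main(String[] args) {
        String output = "\t\tMap input records=10000\n\t\tMap output records=10000";
        System.out.println(parseMapInputRecords(output)); // prints 10000
    }
}
```

This is why a hadoop environment that formats or withholds the counter line differently breaks the step even though the underlying MR job succeeded.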
