Hi Yang,

While installing, I followed the "Off Hadoop CLI Installation" tutorial to set 
up the development environment, so I have a Kylin server on my Windows machine 
pointing to my dev Hadoop cluster.
I cross-checked the versions installed on my Hadoop cluster and made the same 
changes to Kylin/pom.xml. Even though it downloads the same versions of the 
Hadoop, Hive, and HBase jars into my local repo (as mentioned below), I am 
still hitting the issue. I am not sure where it is referencing the old Hadoop 
versions from.
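One way to track down where a stale class is coming from is to ask the JVM which jar (or directory) it actually loaded the class from. Below is a minimal, generic sketch; the class names in it are stand-ins, and on the Kylin host you would pass the suspect class, e.g. `org.apache.hadoop.mapreduce.JobContext`, with the same classpath Kylin uses:

```java
import java.security.CodeSource;

// Prints the jar or directory a class was loaded from, which helps
// pinpoint which copy of a Hadoop/Hive class the JVM actually picked up.
public class WhichJar {
    static String locate(Class<?> c) {
        // Bootstrap classes (e.g. java.lang.String) report no code source.
        CodeSource src = c.getProtectionDomain().getCodeSource();
        return src == null ? "(bootstrap classpath)" : src.getLocation().toString();
    }

    public static void main(String[] args) throws Exception {
        // On the Kylin host, pass the suspect class name, e.g.
        //   java WhichJar org.apache.hadoop.mapreduce.JobContext
        String name = args.length > 0 ? args[0] : "java.lang.String";
        System.out.println(name + " -> " + locate(Class.forName(name)));
    }
}
```

Running this with Kylin's classpath shows which of the competing Hadoop jars wins the classpath race.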

Please note that when I build the Kylin source code on my local machine (which 
has the Kylin server running and pointing to the Hadoop cluster), it downloads 
the Hadoop, Hive, and HBase jars into my local repo.

Thanks,
Mohit 




-----Original Message-----
From: Li Yang [mailto:[email protected]] 
Sent: Thursday, June 04, 2015 12:16 PM
To: [email protected]
Subject: Re: RE: Building cube issue: IncompatibleClassChangeError error

Aiming to be compatible with all Hadoop 2.x versions, Kylin does not ship any 
Hadoop/Hive jars. There must be multiple versions of Hadoop/Hive in your cluster.

From the stack trace, my guess is that the Hive version on the Kylin host and 
on the Hive server might be different... I'm not sure, however.

On Tue, Jun 2, 2015 at 2:29 PM, [email protected] <[email protected]> wrote:

> Hi, there,
>
> I think the root cause is that you have some old Hadoop or YARN version 
> jars in your environment.
>
> Which step do you reach when you issue the cube build job? You may want 
> to check the YARN history server for more log information.
>
> Thanks,
> Sun.
>
>
> [email protected]
>
> From: Mohit Bhatnagar
> Date: 2015-06-02 14:17
> To: [email protected]
> CC: Sakshi Gupta
> Subject: RE: Building cube issue: IncompatibleClassChangeError error
>
> Can anyone help me out with this?
>
> -----Original Message-----
> From: Mohit Bhatnagar [mailto:[email protected]]
> Sent: Monday, June 01, 2015 5:06 PM
> To: [email protected]
> Cc: Sakshi Gupta
> Subject: Building cube issue: IncompatibleClassChangeError error
>
> Hi,
>
> I am hitting the IncompatibleClassChangeError below. We have the 
> following versions installed in our prod cluster, and I have also 
> changed the kylin/pom.xml file to use the same versions.
> Here are the details:
>   <!-- Hadoop versions -->
>         <hadoop2.version>2.5.0</hadoop2.version>
>         <yarn.version>2.5.0</yarn.version>
>         <zookeeper.version>3.4.5</zookeeper.version>
>         <hive.version>0.13.1</hive.version>
>         <hive-hcatalog.version>0.13.1</hive-hcatalog.version>
>         <hbase-hadoop2.version>0.98.6-hadoop2</hbase-hadoop2.version>
>
> From Hadoop 2.0 onwards, JobContext is an interface; I don't know why 
> a class is expected.
> Here is the log:
>
>
> [pool-6-thread-1]:[2015-06-01 16:56:53,644][INFO][org.apache.kylin.job.impl.threadpool.DefaultScheduler$FetcherRunner.run(DefaultScheduler.java:117)] - Job Fetcher: 0 running, 1 actual running, 1 ready, 44 others
> [pool-7-thread-2]:[2015-06-01 16:57:48,749][INFO][org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:441)] - Cleaning up the staging area /user/mobhatna/.staging/job_1432706588203_13062
> [pool-7-thread-2]:[2015-06-01 16:57:49,031][ERROR][org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:109)] - error running Executable
> java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected
>        at org.apache.hive.hcatalog.mapreduce.HCatBaseInputFormat.getSplits(HCatBaseInputFormat.java:102)
>        at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:493)
>        at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:510)
>        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:394)
>        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
>        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
>        at java.security.AccessController.doPrivileged(Native Method)
>        at javax.security.auth.Subject.doAs(Subject.java:415)
>        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
>        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
>        at org.apache.kylin.job.hadoop.AbstractHadoopJob.waitForCompletion(AbstractHadoopJob.java:123)
>        at org.apache.kylin.job.hadoop.cube.FactDistinctColumnsJob.run(FactDistinctColumnsJob.java:80)
>        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>        at org.apache.kylin.job.common.MapReduceExecutable.doWork(MapReduceExecutable.java:112)
>        at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
>        at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:50)
>        at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
>        at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:132)
>        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>        at java.lang.Thread.run(Thread.java:745)
> [pool-7-thread-2]:[2015-06-01 16:57:52,178][DEBUG][org.apache.kylin.common.persistence.ResourceStore.putResource(ResourceStore.java:171)] - Saving resource /execute_output/81a626bd-60a7-4993-9d3e-ca3013e76fb9-01 (Store kylin_metadata@hbase)
> [pool-7-thread-2]:[2015-06-01 16:57:54,016][DEBUG][org.apache.kylin.common.persistence.ResourceStore.putResource(ResourceStore.java:171)] - Saving resource /execute_output/81a626bd-60a7-4993-9d3e-ca3013e76fb9-01 (Store kylin_metadata@hbase)
> [pool-7-thread-2]:[2015-06-01 16:57:54,282][INFO][org.apache.kylin.job.manager.ExecutableManager.updateJobOutput(ExecutableManager.java:222)] - job id:81a626bd-60a7-4993-9d3e-ca3013e76fb9-01 from RUNNING to ERROR
> [pool-7-thread-2]:[2015-06-01 16:57:54,282][ERROR][org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:109)] - error running Executable
> org.apache.kylin.job.exception.ExecuteException: java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected
>        at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:111)
>        at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:50)
>        at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
>        at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:132)
>        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>        at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected
>        at org.apache.hive.hcatalog.mapreduce.HCatBaseInputFormat.getSplits(HCatBaseInputFormat.java:102)
>        at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:493)
>        at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:510)
>        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:394)
>        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
>        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
>        at java.security.AccessController.doPrivileged(Native Method)
>        at javax.security.auth.Subject.doAs(Subject.java:415)
>        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
>        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
>        at org.apache.kylin.job.hadoop.AbstractHadoopJob.waitForCompletion(AbstractHadoopJob.java:123)
>        at org.apache.kylin.job.hadoop.cube.FactDistinctColumnsJob.run(FactDistinctColumnsJob.java:80)
>        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>        at org.apache.kylin.job.common.MapReduceExecutable.doWork(MapReduceExecutable.java:112)
>        at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
>
