java.lang.NoSuchMethodError: org.apache.hadoop.yarn.conf.YarnConfiguration.getServiceAddressConfKeys(Lorg/apache/hadoop/conf/Configuration;)Ljava/util/List;
Hi, I've recently updated the binaries in my Kylin setup from v1.1-incubating to v1.2. The cubes that were building fine until now are throwing the above error. The error occurs in the "Extract Fact Table Distinct Columns" step (step 2). Can you please point out any mistakes in the upgrade procedure, or anything else? Thank you, Sai Kiriti B.
Re: java.lang.NoSuchMethodError: org.apache.hadoop.yarn.conf.YarnConfiguration.getServiceAddressConfKeys(Lorg/apache/hadoop/conf/Configuration;)Ljava/util/List;
Hi Sai! You can see the same topic: http://apache-kylin.74782.x6.nabble.com/NoSuchMethodError-org-apache-hadoop-yarn-conf-YarnConfiguration-getServiceAddressConfKeys-td2937.html#a2943 On 2016-01-05 18:27, Kiriti Sai wrote: Hi, I've recently updated the binaries in my Kylin setup from v1.1-incubating to v1.2. The cubes that were building fine until now are throwing the above error. The error occurs in the "Extract Fact Table Distinct Columns" step (step 2). Can you please point out any mistakes in the upgrade procedure, or anything else? Thank you, Sai Kiriti B.
Re: Re: about the parameter 'acceptPartial'
You don't need to set it in JDBC; it's false by default. On Tue, Jan 5, 2016 at 6:10 PM, wangsh...@sinoaudit.cn < wangsh...@sinoaudit.cn> wrote: > Yes, how can I do that? > > > > wangsh...@sinoaudit.cn > > From: Li Yang > Date: 2016-01-05 17:29 > To: dev > Subject: Re: about the parameter 'acceptPartial' > You want it to be "false" always. > > When "true", Kylin may choose to return an incorrect partial result for > preview purposes. > > On Mon, Jan 4, 2016 at 6:16 PM, wangsh...@sinoaudit.cn < > wangsh...@sinoaudit.cn> wrote: > > > Hi all: > > Can anybody tell me what the query parameter 'acceptPartial' means? And > I > > wonder how I can set this parameter in JDBC. > > > > > > > > wangsh...@sinoaudit.cn > > >
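To make the advice above concrete, here is a minimal sketch of pinning acceptPartial to false in a query request body. The field names follow Kylin's public query REST API (POST /kylin/api/query), but treat them as assumptions and check the docs for your Kylin version:

```python
import json

def build_query_body(sql, project, accept_partial=False):
    """Build a Kylin query request body; acceptPartial=False asks for
    exact results rather than a fast but possibly incorrect preview."""
    return json.dumps({
        "sql": sql,
        "project": project,
        "acceptPartial": accept_partial,
    })

body = build_query_body("select count(*) from kylin_sales", "learn_kylin")
print(body)
```

Via JDBC there is nothing to do: as Li Yang notes, the driver already sends false.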
Re: Re: about the parameter 'acceptPartial'
Yes, how can I do that? wangsh...@sinoaudit.cn From: Li Yang Date: 2016-01-05 17:29 To: dev Subject: Re: about the parameter 'acceptPartial' You want it to be "false" always. When "true", Kylin may choose to return an incorrect partial result for preview purposes. On Mon, Jan 4, 2016 at 6:16 PM, wangsh...@sinoaudit.cn < wangsh...@sinoaudit.cn> wrote: > Hi all: > Can anybody tell me what the query parameter 'acceptPartial' means? And I > wonder how I can set this parameter in JDBC. > > > > wangsh...@sinoaudit.cn >
Re: org.apache.hadoop.hive.ql.metadata.HiveException
That's progress; this time the root cause is "java.lang.ClassNotFoundException: javax.jdo.JDOObjectNotFoundException". It indicates that those 4 jars are not enough: jars that HCatalog needs but that are absent from your MR nodes must be added to the HDFS folder as well, like jdo-api-xx.jar. Just give it a try. 2016-01-05 19:42 GMT+08:00 和风 <363938...@qq.com>: > Thanks, but I added > hive-common-xx.jar, hive-exec-xx.jar, hive-hcatalog-core-xx.jar, hive-metastore-xx.jar > to the HDFS folder, and the run gives a new error: > > > logs: > [pool-7-thread-4]:[2016-01-05 > 19:19:59,443][ERROR][org.apache.kylin.job.common.MapReduceExecutable.doWork(MapReduceExecutable.java:123)] > - error execute > MapReduceExecutable{id=9c9d1e69-678c-45a0-a054-4dac9d07503a-01, > name=Extract Fact Table Distinct Columns, state=RUNNING} > java.io.IOException: > com.google.common.util.concurrent.UncheckedExecutionException: > java.lang.RuntimeException: Unable to instantiate > org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient > at > org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:97) > at > org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:51) > at > org.apache.kylin.job.hadoop.cube.FactDistinctColumnsJob.setupMapper(FactDistinctColumnsJob.java:101) > at > org.apache.kylin.job.hadoop.cube.FactDistinctColumnsJob.run(FactDistinctColumnsJob.java:77) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84) > at > org.apache.kylin.job.common.MapReduceExecutable.doWork(MapReduceExecutable.java:120) > at > org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107) > at > org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:51) > at > org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107) > at > 
org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:130) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:745) > Caused by: com.google.common.util.concurrent.UncheckedExecutionException: > java.lang.RuntimeException: Unable to instantiate > org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient > at > com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2256) > at com.google.common.cache.LocalCache.get(LocalCache.java:3985) > at > com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4788) > at > org.apache.hive.hcatalog.common.HiveClientCache.getOrCreate(HiveClientCache.java:227) > at > org.apache.hive.hcatalog.common.HiveClientCache.get(HiveClientCache.java:202) > at > org.apache.hive.hcatalog.common.HCatUtil.getHiveMetastoreClient(HCatUtil.java:558) > at > org.apache.hive.hcatalog.mapreduce.InitializeInput.getInputJobInfo(InitializeInput.java:104) > at > org.apache.hive.hcatalog.mapreduce.InitializeInput.setInput(InitializeInput.java:86) > at > org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:95) > ... 
13 more > Caused by: java.lang.RuntimeException: Unable to instantiate > org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1523) > at > org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.(RetryingMetaStoreClient.java:86) > at > org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132) > at > org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:118) > at > org.apache.hive.hcatalog.common.HiveClientCache$5.call(HiveClientCache.java:230) > at > org.apache.hive.hcatalog.common.HiveClientCache$5.call(HiveClientCache.java:227) > at > com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4791) > at > com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3584) > at > com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2372) > at > com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2335) > at > com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2250) > ... 21 more > Caused by: java.lang.reflect.InvocationTargetException > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native > Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) > at >
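ShaoFeng's suggestion above can be sketched as follows. The paths, the jar names beyond jdo-api, and the HDFS folder are assumptions; substitute your own Hive lib directory and whatever HDFS folder your Kylin jobs already read the other Hive jars from:

```shell
# Upload the HCatalog dependencies that the MR nodes are missing,
# including the JDO API jar (and, as an assumption, the DataNucleus
# jars that implement JDO), to the same HDFS folder as the Hive jars.
# $HIVE_HOME and /kylin/job_jars/ are placeholders for your setup.
hadoop fs -put "$HIVE_HOME"/lib/jdo-api-*.jar /kylin/job_jars/
hadoop fs -put "$HIVE_HOME"/lib/datanucleus-*.jar /kylin/job_jars/

# Verify everything landed where the job expects it.
hadoop fs -ls /kylin/job_jars/
```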
ODBC Driver Source Location
Hi, I want to make sure that the ODBC driver repo ( https://github.com/KylinOLAP/odbc-driver) has the up-to-date code. There have been no commits since July 2015, but I noticed some improvements to the driver in the Kylin 1.2 release notes. Regards, Shashank
Re: ODBC Driver Source Location
Thanks Dong Li Regards, Shashank On Wed, Jan 6, 2016 at 11:58 AM, Dong Li wrote: > Hello Shashank, > > The latest ODBC code repo is on the 2.x-staging branch: > https://github.com/apache/kylin/tree/2.x-staging/odbc > > Thanks, > Dong Li > > 2016-01-06 14:24 GMT+08:00 Shashank Prabhakara : > > > Hi, > > > > I want to make sure that the ODBC driver repo ( > > https://github.com/KylinOLAP/odbc-driver) has the up-to-date code. There > > have been no commits since July 2015, but I noticed some > > improvements to the driver in the Kylin 1.2 release notes. > > > > Regards, > > Shashank > > > > > > -- > Thanks, > Dong >
Re: org.apache.hadoop.hive.ql.metadata.HiveException
Hi, I copied all the jars from Hive to the HDFS folder; the run gives an error. java.io.IOException: NoSuchObjectException(message:default.kylin_intermediate_learn_kylin_four_2015020100_2015123000_8d26cc4b_e012_4414_a89b_c8d9323ae277 table not found) at org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:97) at org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:51) at org.apache.kylin.job.hadoop.cube.FactDistinctColumnsJob.setupMapper(FactDistinctColumnsJob.java:101) at org.apache.kylin.job.hadoop.cube.FactDistinctColumnsJob.run(FactDistinctColumnsJob.java:77) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84) at org.apache.kylin.job.common.MapReduceExecutable.doWork(MapReduceExecutable.java:120) at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107) at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:51) at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107) at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:130) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Caused by: NoSuchObjectException(message:default.kylin_intermediate_learn_kylin_four_2015020100_2015123000_8d26cc4b_e012_4414_a89b_c8d9323ae277 table not found) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table_core(HiveMetaStore.java:1808) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:1778) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107) at com.sun.proxy.$Proxy47.get_table(Unknown Source) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:1208) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:152) at com.sun.proxy.$Proxy48.getTable(Unknown Source) at org.apache.hive.hcatalog.common.HCatUtil.getTable(HCatUtil.java:180) at org.apache.hive.hcatalog.mapreduce.InitializeInput.getInputJobInfo(InitializeInput.java:105) at org.apache.hive.hcatalog.mapreduce.InitializeInput.setInput(InitializeInput.java:86) at org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:95) ... 13 more ------------------ From: "ShaoFeng Shi" Date: 2016-01-05 (Tue) 9:48 PM To: "dev" Subject: Re: org.apache.hadoop.hive.ql.metadata.HiveException That's progress; this time the root cause is "java.lang.ClassNotFoundException: javax.jdo.JDOObjectNotFoundException". It indicates that those 4 jars are not enough: jars that HCatalog needs but that are absent from your MR nodes must be added to the HDFS folder as well, like jdo-api-xx.jar. Just give it a try. 2016-01-05 19:42 GMT+08:00 <363938...@qq.com>: > Thanks, but I added > hive-common-xx.jar, hive-exec-xx.jar, hive-hcatalog-core-xx.jar, hive-metastore-xx.jar > to the HDFS folder. 
> run have a new error; > > > logs: > [pool-7-thread-4]:[2016-01-05 > 19:19:59,443][ERROR][org.apache.kylin.job.common.MapReduceExecutable.doWork(MapReduceExecutable.java:123)] > - error execute > MapReduceExecutable{id=9c9d1e69-678c-45a0-a054-4dac9d07503a-01, > name=Extract Fact Table Distinct Columns, state=RUNNING} > java.io.IOException: > com.google.common.util.concurrent.UncheckedExecutionException: > java.lang.RuntimeException: Unable to instantiate > org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient > at > org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:97) > at > org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:51) > at > org.apache.kylin.job.hadoop.cube.FactDistinctColumnsJob.setupMapper(FactDistinctColumnsJob.java:101) >
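No resolution appears in this thread. As a diagnostic sketch (the table name is copied from the error message; the assumption is that the hive CLI on the Kylin node talks to the same metastore the MR job does), one can check whether the intermediate table is actually visible to that metastore; if it is missing there, the job is likely connecting to a different metastore than the one Kylin created the table in:

```shell
# Check whether the intermediate table from the NoSuchObjectException
# exists in the Hive metastore that hive-site.xml points at.
hive -e "SHOW TABLES IN default LIKE 'kylin_intermediate_learn_kylin_four*';"
```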
Re: Welcome new Apache Kylin committer: Luwei Chen
Welcome Luwei ! On Mon, Jan 4, 2016 at 3:55 PM, Li Yang wrote: > Welcome, Luwei~~~ > > On Sat, Jan 2, 2016 at 12:08 AM, Julian Hyde wrote: > > > Congratulations and welcome, Luwei! > > > > Julian > > > > > On Dec 30, 2015, at 7:56 PM, wangxianbin1...@gmail.com wrote: > > > > > > hi all! > > > > > > congratulations, Luwei! And now I know how it works! > > > > > > best regards! > > > > > > > > > wangxianbin1...@gmail.com > > > > > > From: Luke Han > > > Date: 2015-12-30 21:47 > > > To: dev > > > Subject: Welcome new Apache Kylin committer: Luwei Chen > > > I am very pleased to announce that the Project Management Committee > > > (PMC) of Apache Kylin has asked Luwei Chen to become an Apache Kylin > > committer, > > > and she has already accepted. > > > > > > Luwei has already made many contributions to the Kylin community: the > > website, > > > documentation, UI, and more. > > > > > > Welcome Luwei, our first female committer :) > > > Please share with us a little about yourself. > > > > > > Luke > > > > > > On behalf of the Apache Kylin PPMC > > >
[jira] [Created] (KYLIN-1290) Add new model: False alarm when clicking previous wizard step at Dimensions or Measures
Lola Liu created KYLIN-1290:
---
Summary: Add new model: False alarm when clicking previous wizard step at Dimensions or Measures
Key: KYLIN-1290
URL: https://issues.apache.org/jira/browse/KYLIN-1290
Project: Kylin
Issue Type: Bug
Components: Web
Affects Versions: v2.0
Reporter: Lola Liu
Assignee: Zhong,Jason
Priority: Minor

STEPS:
1. Login
2. Add new model
3. Add information, go to the dimensions step or measures step
4. Click on previous wizard steps without adding dimensions or measures info

RESULT:
Alert for Measures: Please define your metrics.
Alert for Dimensions: No dimensions defined.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: java.lang.NoSuchMethodError: org.apache.hadoop.yarn.conf.YarnConfiguration.getServiceAddressConfKeys(Lorg/apache/hadoop/conf/Configuration;)Ljava/util/List;
Hi Wang, The version of Hadoop in the cluster is 2.6. I've set up HBase 0.98.15 using binaries, which bundle some jar files from Hadoop 2.2, like hadoop-yarn-client-2.2.jar. As I've mentioned already, this setup worked with the previous version, Kylin 1.1-incubating, but has been throwing this error after updating to v1.2. (Don't know if anything is due to this, but just mentioning it.) So, is there any other way to solve this than building HBase from source against the latest Hadoop libraries? Thank You. On Jan 5, 2016 8:26 PM, "Xiaoyu Wang" wrote: > Hi, > The API YarnConfiguration.getServiceAddressConfKeys requires Hadoop 2.4+. > Which Hadoop version do you use? > You can recompile HBase with a Hadoop 2.4+ version, or with your Hadoop > cluster version. > > > > > On 2016-01-05 18:52, Kiriti Sai wrote: > >> Hi, >> I have looked at the suggested link before posting the question here. I >> didn't understand how to resolve this issue. >> I've tried replacing the 2.2 Hadoop YARN libs present in the HBase lib >> directory, but then it throws FileNotFoundException. >> Can you please explain in a detailed way how to resolve this issue? >> I'm using HBase 0.98.15-hadoop2. >> >> Thank you, >> Sai Kiriti B >> On Jan 5, 2016 7:38 PM, "Xiaoyu Wang" wrote: >> >> Hi Sai! >>> You can see the same topic : >>> >>> >>> http://apache-kylin.74782.x6.nabble.com/NoSuchMethodError-org-apache-hadoop-yarn-conf-YarnConfiguration-getServiceAddressConfKeys-td2937.html#a2943 >>> >>> On 2016-01-05 18:27, Kiriti Sai wrote: >>> >>> Hi, I've recently updated the binaries in my Kylin setup from v1.1-incubating to v1.2. The cubes that were building fine until now are throwing the above error. The error occurs in the extract fact table distinct columns step (step 2). Can you please point out any mistakes in the upgrade procedure, or anything else? Thank you, Sai Kiriti B. >
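The rebuild Wang suggests can be sketched as follows. The profile and property names follow HBase 0.98's Maven build, but the exact version value is an assumption; check the pom.xml of your HBase checkout:

```shell
# From an HBase 0.98.x source checkout: rebuild against the cluster's
# Hadoop (2.6 in this thread) so the bundled YARN jars match the
# running cluster instead of the default Hadoop 2.2.
mvn clean install -DskipTests \
    -Dhadoop.profile=2.0 \
    -Dhadoop-two.version=2.6.0
```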
Re: java.lang.NoSuchMethodError: org.apache.hadoop.yarn.conf.YarnConfiguration.getServiceAddressConfKeys(Lorg/apache/hadoop/conf/Configuration;)Ljava/util/List;
Hi, When Kylin builds a cube it executes MapReduce jobs, and it uses the ResourceManager address to get the MapReduce job status. Kylin first reads the kylin.properties file to get the RM URL; if it is not configured there, Kylin reads the Hadoop config to get the RM address via the HAUtils API in Hadoop 2.4+, because Hadoop supports RM HA starting with 2.4. But HBase 0.98.x is compiled against Hadoop 2.2 by default, which does not support RM HA, so you got the exception! The config property is correct; Kylin will replace ${job_id} with the real job id. You only need to replace YOUR_RM_AND_PORT with your RM address and port, like 192.168.2.2:8088. Thanks! > On Jan 5, 2016, 20:29, Kiriti Sai wrote: > > Hi, > Can you please explain it in a slightly detailed manner? I understand that > the URL you are referring to is the resource manager URL, but it's > particular to a job, right? How can something particular to a job be set as > a property for Kylin? I'm sorry if I'm mistaken. > Or are you intending that ${job_id} will actually get the id of the MR job > running? Sorry for these naive questions. > > Thank you. >> On Jan 5, 2016 8:42 PM, "Xiaoyu Wang" wrote: >> >> Hi, >> You can set the property in the kylin.properties file: >> kylin.job.yarn.app.rest.check.status.url= >> https://YOUR_RM_AND_PORT/ws/v1/cluster/apps/${job_id}?anonymous=true >> >>> On 2016-01-05 19:38, Kiriti Sai wrote: >>> >>> Hi Wang, >>> The version of Hadoop in the cluster is 2.6. I've set up HBase 0.98.15 >>> using >>> binaries, which bundle some jar files from Hadoop 2.2, >>> like >>> hadoop-yarn-client-2.2.jar. >>> As I've mentioned already, this setup worked with the previous version >>> of Kylin, 1.1-incubating, but has been throwing this error after updating >>> to >>> v1.2. (Don't know if there is anything due to this, but just mentioning >>> it.) >>> So, is there any other way to solve this than building HBase from source >>> using the latest Hadoop libraries? >>> >>> Thank You. 
>>> On Jan 5, 2016 8:26 PM, "Xiaoyu Wang" wrote: >>> >>> Hi, The API YarnConfiguration.getServiceAddressConfKeys requires Hadoop 2.4+. Which Hadoop version do you use? You can recompile HBase with a Hadoop 2.4+ version, or with your Hadoop cluster version. On 2016-01-05 18:52, Kiriti Sai wrote: Hi, > I have looked at the suggested link before posting the question here. I > didn't understand how to resolve this issue. > I've tried replacing the 2.2 Hadoop YARN libs present in the HBase lib > directory, but then it throws FileNotFoundException. > Can you please explain in a detailed way how to resolve this issue? > I'm using HBase 0.98.15-hadoop2. > > Thank you, > Sai Kiriti B > On Jan 5, 2016 7:38 PM, "Xiaoyu Wang" wrote: > > Hi Sai! > >> You can see the same topic : >> >> >> >> http://apache-kylin.74782.x6.nabble.com/NoSuchMethodError-org-apache-hadoop-yarn-conf-YarnConfiguration-getServiceAddressConfKeys-td2937.html#a2943 >> >> On 2016-01-05 18:27, Kiriti Sai wrote: >> >> Hi, >> >>> I've recently updated the binaries in my Kylin setup from v1.1 >>> incubating >>> to >>> v1.2. The cubes that were building fine until now are throwing the >>> above >>> error. >>> The error occurs in the extract fact table distinct columns >>> step >>> (step 2). >>> Can you please point out any mistakes in the upgrade procedure or >>> anything else. >>> >>> Thank you, >>> Sai Kiriti B. >>
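The placeholder substitution Wang describes can be sketched as follows. This is illustrative only, not Kylin's actual implementation; the RM host is taken from his example and the application id is made up:

```python
# kylin.job.yarn.app.rest.check.status.url template, with ${job_id}
# standing in for the MR job whose status Kylin polls on the RM REST API.
TEMPLATE = "https://192.168.2.2:8088/ws/v1/cluster/apps/${job_id}?anonymous=true"

def resolve_status_url(template, job_id):
    """Fill the ${job_id} placeholder with a concrete YARN application id."""
    return template.replace("${job_id}", job_id)

url = resolve_status_url(TEMPLATE, "application_1452000000000_0001")
print(url)
```

So the property is not per-job: the template is configured once, and the job id is filled in at check time for each job.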
Re: java.lang.NoSuchMethodError: org.apache.hadoop.yarn.conf.YarnConfiguration.getServiceAddressConfKeys(Lorg/apache/hadoop/conf/Configuration;)Ljava/util/List;
Hi, Can you please explain it in a slightly detailed manner? I understand that the URL you are referring to is the resource manager URL, but it's particular to a job, right? How can something particular to a job be set as a property for Kylin? I'm sorry if I'm mistaken. Or are you intending that ${job_id} will actually get the id of the MR job running? Sorry for these naive questions. Thank you. On Jan 5, 2016 8:42 PM, "Xiaoyu Wang" wrote: > Hi, > You can set the property in the kylin.properties file: > kylin.job.yarn.app.rest.check.status.url= > https://YOUR_RM_AND_PORT/ws/v1/cluster/apps/${job_id}?anonymous=true > > On 2016-01-05 19:38, Kiriti Sai wrote: > >> Hi Wang, >> The version of Hadoop in the cluster is 2.6. I've set up HBase 0.98.15 >> using >> binaries, which bundle some jar files from Hadoop 2.2, >> like >> hadoop-yarn-client-2.2.jar. >> As I've mentioned already, this setup worked with the previous version >> of Kylin, 1.1-incubating, but has been throwing this error after updating >> to >> v1.2. (Don't know if there is anything due to this, but just mentioning >> it.) >> So, is there any other way to solve this than building HBase from source >> using the latest Hadoop libraries? >> >> Thank You. >> On Jan 5, 2016 8:26 PM, "Xiaoyu Wang" wrote: >> >> Hi, >>> The API YarnConfiguration.getServiceAddressConfKeys requires Hadoop 2.4+. >>> Which Hadoop version do you use? >>> You can recompile HBase with a Hadoop 2.4+ version, or with your Hadoop >>> cluster version. >>> >>> >>> >>> >>> On 2016-01-05 18:52, Kiriti Sai wrote: >>> >>> Hi, I have looked at the suggested link before posting the question here. I didn't understand how to resolve this issue. I've tried replacing the 2.2 Hadoop YARN libs present in the HBase lib directory, but then it throws FileNotFoundException. Can you please explain in a detailed way how to resolve this issue? I'm using HBase 0.98.15-hadoop2. Thank you, Sai Kiriti B On Jan 5, 2016 7:38 PM, "Xiaoyu Wang" wrote: Hi Sai! 
> You can see the same topic : > > > > http://apache-kylin.74782.x6.nabble.com/NoSuchMethodError-org-apache-hadoop-yarn-conf-YarnConfiguration-getServiceAddressConfKeys-td2937.html#a2943 > > On 2016-01-05 18:27, Kiriti Sai wrote: > > Hi, > >> I've recently updated the binaries in my Kylin setup from v1.1 >> incubating >> to >> v1.2. The cubes that were building fine until now are throwing the >> above >> error. >> The error occurs in the extract fact table distinct columns >> step >> (step 2). >> Can you please point out any mistakes in the upgrade procedure or >> anything else. >> >> Thank you, >> Sai Kiriti B. >> >> >> >> >