Re: ODBC Driver Source Location

2016-01-05 Thread Shashank Prabhakara
Thanks Dong Li

Regards,
Shashank

On Wed, Jan 6, 2016 at 11:58 AM, Dong Li  wrote:

> Hello Shashank,
>
> The latest ODBC code is on the 2.x-staging branch:
> https://github.com/apache/kylin/tree/2.x-staging/odbc
>
> Thanks,
> Dong Li
>
> 2016-01-06 14:24 GMT+08:00 Shashank Prabhakara :
>
> > Hi,
> >
> > I want to make sure that the ODBC driver repo (
> > https://github.com/KylinOLAP/odbc-driver) has the up-to-date code. There
> > have been no commits in that repo since July 2015, but I noticed some
> > improvements to the driver in the Kylin 1.2 release notes.
> >
> > Regards,
> > Shashank
> >
>
>
>
> --
> Thanks,
> Dong
>


Re: ODBC Driver Source Location

2016-01-05 Thread Dong Li
Hello Shashank,

The latest ODBC code is on the 2.x-staging branch:
https://github.com/apache/kylin/tree/2.x-staging/odbc

Thanks,
Dong Li

2016-01-06 14:24 GMT+08:00 Shashank Prabhakara :

> Hi,
>
> I want to make sure that the ODBC driver repo (
> https://github.com/KylinOLAP/odbc-driver) has the up-to-date code. There
> have been no commits in that repo since July 2015, but I noticed some
> improvements to the driver in the Kylin 1.2 release notes.
>
> Regards,
> Shashank
>



-- 
Thanks,
Dong


ODBC Driver Source Location

2016-01-05 Thread Shashank Prabhakara
Hi,

I want to make sure that the ODBC driver repo (
https://github.com/KylinOLAP/odbc-driver) has the up-to-date code. There
have been no commits in that repo since July 2015, but I noticed some
improvements to the driver in the Kylin 1.2 release notes.

Regards,
Shashank


Re: Re: how to set 'acceptPartial' parameter in jdbc driver?

2016-01-05 Thread ShaoFeng Shi
This is a bug in the Kylin JDBC driver, found by Yerui:
https://issues.apache.org/jira/browse/KYLIN-1274

2015-09-29 14:26 GMT+08:00 Li Yang :

> Sounds like a bug around limit.
>
> Meng, could you share a sample query that is unstable with limit clause?
>
> On Tue, Sep 22, 2015 at 9:40 AM, 13802880...@139.com <13802880...@139.com>
> wrote:
>
> > I use JDBC to query and don't know how to set "acceptPartial=true".
> > Without a limit clause in the SQL it returns the right number of results,
> > but with a limit clause it seems a random number of results is returned.
> > For example, with "limit 10" in different SQL statements it sometimes
> > returns 10 rows and sometimes fewer than 10, even though the full result
> > set of each of these queries has more than 10 rows.
> >
> >
> >
> >
> > China Mobile Guangdong Co., Ltd., Network Management Center, Liang Meng
> > 13802880...@139.com
> >
> > From: Shi, Shaofeng
> > Sent: 2015-09-22 09:29
> > To: d...@kylin.incubator.apache.org
> > Subject: Re: Re: how to set 'acceptPartial' parameter in jdbc driver?
> > Meng, if you don’t set limit but set “acceptPartial=true”, will it return
> > records? ‘acceptPartial’ acts as a threshold when “limit” is not present,
> > so if its behavior is different, there should be a bug. Could you please
> > open a JIRA with your sample SQL? Thank you!
> >
> > On 9/22/15, 9:19 AM, "13802880...@139.com" <13802880...@139.com> wrote:
> >
> > >The 'limit' clause doesn't return the right answer. For example, I add
> > >'limit 10' after the query SQL, but it returns nothing; without 'limit
> > >10' it returns about a million results.
> > >
> > >
> > >
> > >China Mobile Guangdong Co., Ltd., Network Management Center, Liang Meng
> > >13802880...@139.com
> > >
> > >From: Shi, Shaofeng
> > >Sent: 2015-09-22 09:12
> > >To: d...@kylin.incubator.apache.org
> > >Subject: Re: how to set 'acceptPartial' parameter in jdbc driver?
> > >So far there is no such option in the JDBC driver; the default behavior
> > >is to accept all results. If you only need a partial result set, you can
> > >use a "limit" clause in your SQL. Please open a JIRA if you have a
> > >specific scenario or need.
> > >
> > >On 9/21/15, 11:07 PM, "13802880...@139.com" <13802880...@139.com>
> wrote:
> > >
> > >>I am using the Kylin JDBC driver; how do I set 'acceptPartial' to
> > >>false/true in the JDBC driver?
> > >>
> > >>
> > >>
> > >>China Mobile Guangdong Co., Ltd., Network Management Center, Liang Meng
> > >>13802880...@139.com
> > >
> >
> >
>



-- 
Best regards,

Shaofeng Shi
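
For readers following this thread, here is a minimal Java sketch of querying
Kylin through its JDBC driver with a LIMIT clause, the case whose row count was
unstable above. The host, project, credentials, and table below are hypothetical
placeholders, and the kylin-jdbc jar must be on the classpath:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class KylinLimitQuery {
    public static void main(String[] args) throws Exception {
        // Hypothetical Kylin server and project; replace with your deployment.
        Class.forName("org.apache.kylin.jdbc.Driver");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:kylin://localhost:7070/learn_kylin", "ADMIN", "KYLIN");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "select part_dt, count(*) from kylin_sales group by part_dt limit 10")) {
            int rows = 0;
            while (rs.next()) {
                rows++;
            }
            // With the bug tracked in KYLIN-1274, the driver could accept a
            // partial result, so fewer than 10 rows might come back even though
            // the full result set holds more than 10 rows.
            System.out.println("rows returned: " + rows);
        }
    }
}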


[jira] [Created] (KYLIN-1290) Add new model: False alarm when clicking previous wizard step at Dimensions or Measures

2016-01-05 Thread Lola Liu (JIRA)
Lola Liu created KYLIN-1290:
---

 Summary: Add new model: False alarm when clicking previous wizard 
step at Dimensions or Measures
 Key: KYLIN-1290
 URL: https://issues.apache.org/jira/browse/KYLIN-1290
 Project: Kylin
  Issue Type: Bug
  Components: Web 
Affects Versions: v2.0
Reporter: Lola Liu
Assignee: Zhong,Jason
Priority: Minor


STEPS:
1. Login
2. Add new model
3. Add information, go to dimensions step or measures step
4. Click on previous wizard steps without adding dimensions or measures info

RESULT:
Alert for Measures: Please define your metrics.
Alert for Dimensions: No dimensions defined.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: org.apache.hadoop.hive.ql.metadata.HiveException

2016-01-05 Thread 和风
hi,
  I copied all the jars from Hive to the HDFS folder, but the run still hits an error.


java.io.IOException: 
NoSuchObjectException(message:default.kylin_intermediate_learn_kylin_four_2015020100_2015123000_8d26cc4b_e012_4414_a89b_c8d9323ae277
 table not found)
at 
org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:97)
at 
org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:51)
at 
org.apache.kylin.job.hadoop.cube.FactDistinctColumnsJob.setupMapper(FactDistinctColumnsJob.java:101)
at 
org.apache.kylin.job.hadoop.cube.FactDistinctColumnsJob.run(FactDistinctColumnsJob.java:77)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at 
org.apache.kylin.job.common.MapReduceExecutable.doWork(MapReduceExecutable.java:120)
at 
org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
at 
org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:51)
at 
org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
at 
org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:130)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: 
NoSuchObjectException(message:default.kylin_intermediate_learn_kylin_four_2015020100_2015123000_8d26cc4b_e012_4414_a89b_c8d9323ae277
 table not found)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table_core(HiveMetaStore.java:1808)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:1778)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
at com.sun.proxy.$Proxy47.get_table(Unknown Source)
at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:1208)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:152)
at com.sun.proxy.$Proxy48.getTable(Unknown Source)
at org.apache.hive.hcatalog.common.HCatUtil.getTable(HCatUtil.java:180)
at 
org.apache.hive.hcatalog.mapreduce.InitializeInput.getInputJobInfo(InitializeInput.java:105)
at 
org.apache.hive.hcatalog.mapreduce.InitializeInput.setInput(InitializeInput.java:86)
at 
org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:95)
... 13 more





------------------ Original Message ------------------
From: "ShaoFeng Shi";
Sent: 2016-01-05 (Tue) 9:48
To: "dev";

Subject: Re: org.apache.hadoop.hive.ql.metadata.HiveException



It makes progress; this time the root cause is
"java.lang.ClassNotFoundException:
javax.jdo.JDOObjectNotFoundException".

It indicates that those 4 jars are not enough: jars that HCatalog needs but
that are not present on your MR nodes need to be added to the HDFS folder as
well, such as jdo-api-xx.jar. Please give it a try.

2016-01-05 19:42 GMT+08:00  <363938...@qq.com>:

> Thanks, but I added
> hive-common-xx.jar, hive-exec-xx.jar, hive-hcatalog-core-xx.jar and hive-metastore-xx.jar
> to the HDFS folder,
> and the run hits a new error:
>
>
> logs:
> [pool-7-thread-4]:[2016-01-05
> 19:19:59,443][ERROR][org.apache.kylin.job.common.MapReduceExecutable.doWork(MapReduceExecutable.java:123)]
> - error execute
> MapReduceExecutable{id=9c9d1e69-678c-45a0-a054-4dac9d07503a-01,
> name=Extract Fact Table Distinct Columns, state=RUNNING}
> java.io.IOException:
> com.google.common.util.concurrent.UncheckedExecutionException:
> java.lang.RuntimeException: Unable to instantiate
> org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient
> at
> org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:97)
> at
> org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:51)
> at
> org.apache.kylin.job.hadoop.cube.FactDistinctColumnsJob.setupMapper(FactDistinctColumnsJob.java:101)
> at
> org.apache.kylin.job.hadoop.cube.FactDistin
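
The root NoSuchObjectException above says the intermediate table could not be
found in the Hive metastore. A minimal, hypothetical check using the
HiveMetaStoreClient API (assuming hive-site.xml is on the classpath; the table
name is the truncated one from the log and is only illustrative):

import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;

public class IntermediateTableCheck {
    public static void main(String[] args) throws Exception {
        // HiveConf picks up hive-site.xml from the classpath.
        HiveConf conf = new HiveConf();
        HiveMetaStoreClient client = new HiveMetaStoreClient(conf);
        try {
            String db = "default";
            // Illustrative only; use the intermediate table name from your job log.
            String table = "kylin_intermediate_learn_kylin_four_...";
            System.out.println(db + "." + table + " exists: "
                    + client.tableExists(db, table));
        } finally {
            client.close();
        }
    }
}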

[jira] [Created] (KYLIN-1289) Click on subsequent wizard steps doesn't work when editing existing cube or model

2016-01-05 Thread Lola Liu (JIRA)
Lola Liu created KYLIN-1289:
---

 Summary: Click on subsequent wizard steps doesn't work when 
editing existing cube or model
 Key: KYLIN-1289
 URL: https://issues.apache.org/jira/browse/KYLIN-1289
 Project: Kylin
  Issue Type: Bug
  Components: Web 
Affects Versions: v2.0
Reporter: Lola Liu
Assignee: Zhong,Jason


STEPS:
1. Login
2. Edit existing cube or model
3. Click on subsequent wizard steps

RESULT:
doesn't work




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Welcome new Apache Kylin committer: Luwei Chen

2016-01-05 Thread Jian Zhong
Welcome Luwei !

On Mon, Jan 4, 2016 at 3:55 PM, Li Yang  wrote:

> Welcome, Luwei~~~
>
> On Sat, Jan 2, 2016 at 12:08 AM, Julian Hyde  wrote:
>
> > Congratulations and welcome, Luwei!
> >
> > Julian
> >
> > > On Dec 30, 2015, at 7:56 PM, wangxianbin1...@gmail.com wrote:
> > >
> > > hi all!
> > >
> > > congratulations, Luwei! and now I know how it works!
> > >
> > > best regards!
> > >
> > >
> > > wangxianbin1...@gmail.com
> > >
> > > From: Luke Han
> > > Date: 2015-12-30 21:47
> > > To: dev
> > > Subject: Welcome new Apache Kylin committer: Luwei Chen
> > > I am very pleased to announce that the Project Management Committee
> > > (PMC) of Apache Kylin has asked Luwei Chen to become an Apache Kylin
> > > committer, and she has already accepted.
> > >
> > > Luwei has already made many contributions to the Kylin community,
> > > covering the website, documentation, UI and more.
> > >
> > > Welcome Luwei, our first female committer:)
> > > Please share with us a little about yourself,
> > >
> > > Luke
> > >
> > > On behalf of the Apache Kylin PPMC
> >
> >
>


Re: org.apache.hadoop.hive.ql.metadata.HiveException

2016-01-05 Thread ShaoFeng Shi
It makes progress; this time the root cause is
"java.lang.ClassNotFoundException:
javax.jdo.JDOObjectNotFoundException".

It indicates that those 4 jars are not enough: jars that HCatalog needs but
that are not present on your MR nodes need to be added to the HDFS folder as
well, such as jdo-api-xx.jar. Please give it a try.

2016-01-05 19:42 GMT+08:00 和风 <363938...@qq.com>:

> Thanks, but I added
> hive-common-xx.jar, hive-exec-xx.jar, hive-hcatalog-core-xx.jar and hive-metastore-xx.jar
> to the HDFS folder,
> and the run hits a new error:
>
>
> logs:
> [pool-7-thread-4]:[2016-01-05
> 19:19:59,443][ERROR][org.apache.kylin.job.common.MapReduceExecutable.doWork(MapReduceExecutable.java:123)]
> - error execute
> MapReduceExecutable{id=9c9d1e69-678c-45a0-a054-4dac9d07503a-01,
> name=Extract Fact Table Distinct Columns, state=RUNNING}
> java.io.IOException:
> com.google.common.util.concurrent.UncheckedExecutionException:
> java.lang.RuntimeException: Unable to instantiate
> org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient
> at
> org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:97)
> at
> org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:51)
> at
> org.apache.kylin.job.hadoop.cube.FactDistinctColumnsJob.setupMapper(FactDistinctColumnsJob.java:101)
> at
> org.apache.kylin.job.hadoop.cube.FactDistinctColumnsJob.run(FactDistinctColumnsJob.java:77)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at
> org.apache.kylin.job.common.MapReduceExecutable.doWork(MapReduceExecutable.java:120)
> at
> org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
> at
> org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:51)
> at
> org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
> at
> org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:130)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: com.google.common.util.concurrent.UncheckedExecutionException:
> java.lang.RuntimeException: Unable to instantiate
> org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient
> at
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2256)
> at com.google.common.cache.LocalCache.get(LocalCache.java:3985)
> at
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4788)
> at
> org.apache.hive.hcatalog.common.HiveClientCache.getOrCreate(HiveClientCache.java:227)
> at
> org.apache.hive.hcatalog.common.HiveClientCache.get(HiveClientCache.java:202)
> at
> org.apache.hive.hcatalog.common.HCatUtil.getHiveMetastoreClient(HCatUtil.java:558)
> at
> org.apache.hive.hcatalog.mapreduce.InitializeInput.getInputJobInfo(InitializeInput.java:104)
> at
> org.apache.hive.hcatalog.mapreduce.InitializeInput.setInput(InitializeInput.java:86)
> at
> org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:95)
> ... 13 more
> Caused by: java.lang.RuntimeException: Unable to instantiate
> org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient
> at
> org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1523)
> at
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.(RetryingMetaStoreClient.java:86)
> at
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
> at
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:118)
> at
> org.apache.hive.hcatalog.common.HiveClientCache$5.call(HiveClientCache.java:230)
> at
> org.apache.hive.hcatalog.common.HiveClientCache$5.call(HiveClientCache.java:227)
> at
> com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4791)
> at
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3584)
> at
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2372)
> at
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2335)
> at
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2250)
> ... 21 more
> Caused by: java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
> at
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at
> sun.reflect.DelegatingConstructorA

Re: Java.lang.NoSuchMethodError: org.apache.hadoop.yarn.conf.YarnConfiguration.getServiceAddressConfKeys (Lorg/apache/hadoop/conf/Configuration; ) Ljava/util/List;

2016-01-05 Thread 王晓雨
Hi,
When Kylin builds a cube it runs MapReduce jobs, and it uses the ResourceManager
address to get the MapReduce job status. Kylin first reads the kylin.properties
file to get the RM URL; if it is not configured there, Kylin reads the Hadoop
config to get the RM address via the HAUtils API in Hadoop 2.4+, because Hadoop
supports RM HA from 2.4 onward.
But HBase 0.98.x is compiled against Hadoop 2.2 by default, which does not
support RM HA, so you got the exception.
The config property is correct as written: Kylin will replace ${job_id} with the
real job id, so you only need to replace YOUR_RM_AND_PORT with your RM address
and port, e.g. 192.168.2.2:8088.

thanks!
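
As a rough illustration of what that property drives, here is a minimal Java
sketch that substitutes ${job_id} into the REST template and fetches the
application status from the ResourceManager web service. The RM address and
application id below are hypothetical placeholders, and Kylin's internal
implementation may differ:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class YarnAppStatusCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical RM address and application id; substitute your own.
        String template =
            "http://192.168.2.2:8088/ws/v1/cluster/apps/${job_id}?anonymous=true";
        String jobId = "application_1451900000000_0001";
        String url = template.replace("${job_id}", jobId);

        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("GET");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
            // The JSON response includes the application's state, e.g. "FINISHED".
            System.out.println(body);
        } finally {
            conn.disconnect();
        }
    }
}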


> On Jan 5, 2016, at 20:29, Kiriti Sai wrote:
> 
> Hi,
> Can you please explain it in a slightly detailed manner. I understand that
> the url you are referring to is the resource manager url, but it's
> particular to a job right? How can something particular to a job be set as
> a property for Kylin. I'm sorry if I'm mistaken.
> Or are you intending that {job_id} will actually get the id of the MR job
> running? Sorry for these naive questions.
> 
> Thank you.
>> On Jan 5, 2016 8:42 PM, "Xiaoyu Wang"  wrote:
>> 
>> Hi,
>> You can set the property in kylin.properties file
>> kylin.job.yarn.app.rest.check.status.url=
>> https://YOUR_RM_AND_PORT/ws/v1/cluster/apps/${job_id}?anonymous=true
>> 
>>> On 2016-01-05 19:38, Kiriti Sai wrote:
>>> 
>>> Hi Wang,
>>> The version of Hadoop in the cluster is 2.6. I've setup Hbase 0.98.15
>>> using
>>> binaries, in which there are some jar files of the Hadoop version 2.2,
>>> like
>>> hadoop-yarn-client-2.2.jar.
>>> As I've mentioned already, this setup has worked with the previous version
>>> of Kylin 1.1-incubating, but has been throwing this error after updating
>>> to
>>> v1.2. (Dont know if there is anything due to this, but just mentioning
>>> it).
>>> So, is there any other to solve this other than building HBase from source
>>> using the latest Hadoop libraries.
>>> 
>>> Thank You.
>>> On Jan 5, 2016 8:26 PM, "Xiaoyu Wang"  wrote:
>>> 
>>> Hi,
 The api YarnConfiguration.getServiceAddressConfKeys required Hadoop2.4+
 Which version Hadoop do you use ?
 You can recompile the hbase with hadoop 2.4+ version or your hadoop
 cluster version.
 
 
 
 
 On 2016-01-05 18:52, Kiriti Sai wrote:
 
 Hi,
> I have looked at the suggested link before posting the question here. I
> didn't understand how to resolve this issue.
> I've tried replacing the 2.2 hadoop yarn libs present in the HBase lib
> directory but then it throws FileNotFoundException.
> Can you please explain in a detailed way how to resolve this issue.
> I'm using Hbase 0.98.15-hadoop2 version.
> 
> Thank you,
> Sai Kiriti B
> On Jan 5, 2016 7:38 PM, "Xiaoyu Wang"  wrote:
> 
> Hi Sai!
> 
>> You can see the same topic :
>> 
>> 
>> 
>> http://apache-kylin.74782.x6.nabble.com/NoSuchMethodError-org-apache-hadoop-yarn-conf-YarnConfiguration-getServiceAddressConfKeys-td2937.html#a2943
>> 
>> On 2016-01-05 18:27, Kiriti Sai wrote:
>> 
>> Hi,
>> 
>>> I've recently update the binaries in my Kylin setup from v1.1
>>> incubating
>>> to
>>> v1.2. The cubes which were building fine till now are throwing the
>>> above
>>> error.
>>> This error is occuring in the extract fact table distinct columns
>>> step.
>>> (Step 2).
>>> Can you please point out any mistakes with the upgrading procedure or
>>> anything else.
>>> 
>>> Thank you,
>>> Sai Kiriti B.
>> 


Re: Java.lang.NoSuchMethodError: org.apache.hadoop.yarn.conf.YarnConfiguration.getServiceAddressConfKeys (Lorg/apache/hadoop/conf/Configuration; ) Ljava/util/List;

2016-01-05 Thread Kiriti Sai
Hi,
Can you please explain it in a slightly more detailed manner? I understand that
the URL you are referring to is the ResourceManager URL, but it's particular to
a job, right? How can something particular to a job be set as a property for
Kylin? I'm sorry if I'm mistaken.
Or are you intending that {job_id} will actually be replaced with the id of the
running MR job? Sorry for these naive questions.

Thank you.
On Jan 5, 2016 8:42 PM, "Xiaoyu Wang"  wrote:

> Hi,
> You can set the property in kylin.properties file
> kylin.job.yarn.app.rest.check.status.url=
> https://YOUR_RM_AND_PORT/ws/v1/cluster/apps/${job_id}?anonymous=true
>
> On 2016-01-05 19:38, Kiriti Sai wrote:
>
>> Hi Wang,
>> The version of Hadoop in the cluster is 2.6. I've setup Hbase 0.98.15
>> using
>> binaries, in which there are some jar files of the Hadoop version 2.2,
>> like
>> hadoop-yarn-client-2.2.jar.
>> As I've mentioned already, this setup has worked with the previous version
>> of Kylin 1.1-incubating, but has been throwing this error after updating
>> to
>> v1.2. (Dont know if there is anything due to this, but just mentioning
>> it).
>> So, is there any other to solve this other than building HBase from source
>> using the latest Hadoop libraries.
>>
>> Thank You.
>> On Jan 5, 2016 8:26 PM, "Xiaoyu Wang"  wrote:
>>
>> Hi,
>>> The api YarnConfiguration.getServiceAddressConfKeys required Hadoop2.4+
>>> Which version Hadoop do you use ?
>>> You can recompile the hbase with hadoop 2.4+ version or your hadoop
>>> cluster version.
>>>
>>>
>>>
>>>
>>> On 2016-01-05 18:52, Kiriti Sai wrote:
>>>
>>> Hi,
 I have looked at the suggested link before posting the question here. I
 didn't understand how to resolve this issue.
 I've tried replacing the 2.2 hadoop yarn libs present in the HBase lib
 directory but then it throws FileNotFoundException.
 Can you please explain in a detailed way how to resolve this issue.
 I'm using Hbase 0.98.15-hadoop2 version.

 Thank you,
 Sai Kiriti B
 On Jan 5, 2016 7:38 PM, "Xiaoyu Wang"  wrote:

 Hi Sai!

> You can see the same topic :
>
>
>
> http://apache-kylin.74782.x6.nabble.com/NoSuchMethodError-org-apache-hadoop-yarn-conf-YarnConfiguration-getServiceAddressConfKeys-td2937.html#a2943
>
> On 2016-01-05 18:27, Kiriti Sai wrote:
>
> Hi,
>
>> I've recently update the binaries in my Kylin setup from v1.1
>> incubating
>> to
>> v1.2. The cubes which were building fine till now are throwing the
>> above
>> error.
>> This error is occuring in the extract fact table distinct columns
>> step.
>> (Step 2).
>> Can you please point out any mistakes with the upgrading procedure or
>> anything else.
>>
>> Thank you,
>> Sai Kiriti B.
>>
>>
>>
>>
>


Re: Java.lang.NoSuchMethodError: org.apache.hadoop.yarn.conf.YarnConfiguration.getServiceAddressConfKeys (Lorg/apache/hadoop/conf/Configuration; ) Ljava/util/List;

2016-01-05 Thread Xiaoyu Wang

Hi,
You can set the property in the kylin.properties file:
kylin.job.yarn.app.rest.check.status.url=https://YOUR_RM_AND_PORT/ws/v1/cluster/apps/${job_id}?anonymous=true

On 2016-01-05 19:38, Kiriti Sai wrote:

Hi Wang,
The version of Hadoop in the cluster is 2.6. I've setup Hbase 0.98.15 using
binaries, in which there are some jar files of the Hadoop version 2.2, like
hadoop-yarn-client-2.2.jar.
As I've mentioned already, this setup has worked with the previous version
of Kylin 1.1-incubating, but has been throwing this error after updating to
v1.2. (Dont know if there is anything due to this, but just mentioning it).
So, is there any other to solve this other than building HBase from source
using the latest Hadoop libraries.

Thank You.
On Jan 5, 2016 8:26 PM, "Xiaoyu Wang"  wrote:


Hi,
The api YarnConfiguration.getServiceAddressConfKeys required Hadoop2.4+
Which version Hadoop do you use ?
You can recompile the hbase with hadoop 2.4+ version or your hadoop
cluster version.




On 2016-01-05 18:52, Kiriti Sai wrote:


Hi,
I have looked at the suggested link before posting the question here. I
didn't understand how to resolve this issue.
I've tried replacing the 2.2 hadoop yarn libs present in the HBase lib
directory but then it throws FileNotFoundException.
Can you please explain in a detailed way how to resolve this issue.
I'm using Hbase 0.98.15-hadoop2 version.

Thank you,
Sai Kiriti B
On Jan 5, 2016 7:38 PM, "Xiaoyu Wang"  wrote:

Hi Sai!

You can see the same topic :


http://apache-kylin.74782.x6.nabble.com/NoSuchMethodError-org-apache-hadoop-yarn-conf-YarnConfiguration-getServiceAddressConfKeys-td2937.html#a2943

On 2016-01-05 18:27, Kiriti Sai wrote:

Hi,

I've recently update the binaries in my Kylin setup from v1.1 incubating
to
v1.2. The cubes which were building fine till now are throwing the above
error.
This error is occuring in the extract fact table distinct columns step.
(Step 2).
Can you please point out any mistakes with the upgrading procedure or
anything else.

Thank you,
Sai Kiriti B.







Re: org.apache.hadoop.hive.ql.metadata.HiveException

2016-01-05 Thread 和风
Thanks, but I added
hive-common-xx.jar, hive-exec-xx.jar, hive-hcatalog-core-xx.jar and hive-metastore-xx.jar
to the HDFS folder,
and the run hits a new error:


logs:
[pool-7-thread-4]:[2016-01-05 
19:19:59,443][ERROR][org.apache.kylin.job.common.MapReduceExecutable.doWork(MapReduceExecutable.java:123)]
 - error execute 
MapReduceExecutable{id=9c9d1e69-678c-45a0-a054-4dac9d07503a-01, name=Extract 
Fact Table Distinct Columns, state=RUNNING}
java.io.IOException: 
com.google.common.util.concurrent.UncheckedExecutionException: 
java.lang.RuntimeException: Unable to instantiate 
org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient
at 
org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:97)
at 
org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:51)
at 
org.apache.kylin.job.hadoop.cube.FactDistinctColumnsJob.setupMapper(FactDistinctColumnsJob.java:101)
at 
org.apache.kylin.job.hadoop.cube.FactDistinctColumnsJob.run(FactDistinctColumnsJob.java:77)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at 
org.apache.kylin.job.common.MapReduceExecutable.doWork(MapReduceExecutable.java:120)
at 
org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
at 
org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:51)
at 
org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
at 
org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:130)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.google.common.util.concurrent.UncheckedExecutionException: 
java.lang.RuntimeException: Unable to instantiate 
org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2256)
at com.google.common.cache.LocalCache.get(LocalCache.java:3985)
at 
com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4788)
at 
org.apache.hive.hcatalog.common.HiveClientCache.getOrCreate(HiveClientCache.java:227)
at 
org.apache.hive.hcatalog.common.HiveClientCache.get(HiveClientCache.java:202)
at 
org.apache.hive.hcatalog.common.HCatUtil.getHiveMetastoreClient(HCatUtil.java:558)
at 
org.apache.hive.hcatalog.mapreduce.InitializeInput.getInputJobInfo(InitializeInput.java:104)
at 
org.apache.hive.hcatalog.mapreduce.InitializeInput.setInput(InitializeInput.java:86)
at 
org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:95)
... 13 more
Caused by: java.lang.RuntimeException: Unable to instantiate 
org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient
at 
org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1523)
at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.(RetryingMetaStoreClient.java:86)
at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:118)
at 
org.apache.hive.hcatalog.common.HiveClientCache$5.call(HiveClientCache.java:230)
at 
org.apache.hive.hcatalog.common.HiveClientCache$5.call(HiveClientCache.java:227)
at 
com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4791)
at 
com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3584)
at 
com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2372)
at 
com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2335)
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2250)
... 21 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at 
org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1521)
... 31 more
Caused by: java.lang.NoClassDefFoundError: javax/jdo/JDOObjectNotFoundException
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:274)
at 
org.apache.hadoop.hive.metastore.MetaStoreUtils.getCl

Re: Java.lang.NoSuchMethodError: org.apache.hadoop.yarn.conf.YarnConfiguration.getServiceAddressConfKeys (Lorg/apache/hadoop/conf/Configuration; ) Ljava/util/List;

2016-01-05 Thread Kiriti Sai
Hi Wang,
The version of Hadoop in the cluster is 2.6. I've set up HBase 0.98.15 using
binaries, which include some Hadoop 2.2 jar files, like
hadoop-yarn-client-2.2.jar.
As I've mentioned already, this setup worked with the previous version of
Kylin, 1.1-incubating, but has been throwing this error after updating to
v1.2. (I don't know if there is anything due to this, but just mentioning it.)
So, is there any other way to solve this, other than building HBase from source
using the latest Hadoop libraries?

Thank You.
On Jan 5, 2016 8:26 PM, "Xiaoyu Wang"  wrote:

> Hi,
> The api YarnConfiguration.getServiceAddressConfKeys required Hadoop2.4+
> Which version Hadoop do you use ?
> You can recompile the hbase with hadoop 2.4+ version or your hadoop
> cluster version.
>
>
>
>
> On 2016-01-05 18:52, Kiriti Sai wrote:
>
>> Hi,
>> I have looked at the suggested link before posting the question here. I
>> didn't understand how to resolve this issue.
>> I've tried replacing the 2.2 hadoop yarn libs present in the HBase lib
>> directory but then it throws FileNotFoundException.
>> Can you please explain in a detailed way how to resolve this issue.
>> I'm using Hbase 0.98.15-hadoop2 version.
>>
>> Thank you,
>> Sai Kiriti B
>> On Jan 5, 2016 7:38 PM, "Xiaoyu Wang"  wrote:
>>
>> Hi Sai!
>>> You can see the same topic :
>>>
>>>
>>> http://apache-kylin.74782.x6.nabble.com/NoSuchMethodError-org-apache-hadoop-yarn-conf-YarnConfiguration-getServiceAddressConfKeys-td2937.html#a2943
>>>
>>> On 2016-01-05 18:27, Kiriti Sai wrote:
>>>
>>> Hi,
 I've recently update the binaries in my Kylin setup from v1.1 incubating
 to
 v1.2. The cubes which were building fine till now are throwing the above
 error.
 This error is occuring in the extract fact table distinct columns step.
 (Step 2).
 Can you please point out any mistakes with the upgrading procedure or
 anything else.

 Thank you,
 Sai Kiriti B.



>


Re: Java.lang.NoSuchMethodError: org.apache.hadoop.yarn.conf.YarnConfiguration.getServiceAddressConfKeys (Lorg/apache/hadoop/conf/Configuration; ) Ljava/util/List;

2016-01-05 Thread Xiaoyu Wang

Hi,
The API YarnConfiguration.getServiceAddressConfKeys requires Hadoop 2.4+.
Which Hadoop version do you use?
You can recompile HBase against Hadoop 2.4+ or against your Hadoop
cluster's version.





On 2016-01-05 18:52, Kiriti Sai wrote:

Hi,
I have looked at the suggested link before posting the question here. I
didn't understand how to resolve this issue.
I've tried replacing the 2.2 hadoop yarn libs present in the HBase lib
directory but then it throws FileNotFoundException.
Can you please explain in a detailed way how to resolve this issue.
I'm using Hbase 0.98.15-hadoop2 version.

Thank you,
Sai Kiriti B
On Jan 5, 2016 7:38 PM, "Xiaoyu Wang"  wrote:


Hi Sai!
You can see the same topic :

http://apache-kylin.74782.x6.nabble.com/NoSuchMethodError-org-apache-hadoop-yarn-conf-YarnConfiguration-getServiceAddressConfKeys-td2937.html#a2943

On 2016-01-05 18:27, Kiriti Sai wrote:


Hi,
I've recently update the binaries in my Kylin setup from v1.1 incubating
to
v1.2. The cubes which were building fine till now are throwing the above
error.
This error is occuring in the extract fact table distinct columns step.
(Step 2).
Can you please point out any mistakes with the upgrading procedure or
anything else.

Thank you,
Sai Kiriti B.






Re: about the parameter 'acceptPartial'

2016-01-05 Thread Yerui Sun
By design, acceptPartial should be ‘true’ in the JDBC driver by default, but in
fact it was ‘false’.

I’ve fixed this in https://issues.apache.org/jira/browse/KYLIN-1274; you can
refer to that.

> On Jan 5, 2016, at 18:16, Li Yang wrote:
> 
> You don't need to set it up in JDBC, it's false by default.
> 
> On Tue, Jan 5, 2016 at 6:10 PM, wangsh...@sinoaudit.cn <
> wangsh...@sinoaudit.cn> wrote:
> 
>> Yes, how can I do?
>> 
>> 
>> 
>> wangsh...@sinoaudit.cn
>> 
>> From: Li Yang
>> Date: 2016-01-05 17:29
>> To: dev
>> Subject: Re: about the parameter 'acceptPartial'
>> You want it be "false" always.
>> 
>> When "true", Kylin may choose to return incorrect partial result as purpose
>> of preview.
>> 
>> On Mon, Jan 4, 2016 at 6:16 PM, wangsh...@sinoaudit.cn <
>> wangsh...@sinoaudit.cn> wrote:
>> 
>>> Hi all:
>>> Can anybody tell me what the query parameter 'acceptPartial' means? and
>> I
>>> wonder how I can setup this parameter in jdbc.
>>> 
>>> 
>>> 
>>> wangsh...@sinoaudit.cn
>>> 
>> 



Re: Java.lang.NoSuchMethodError: org.apache.hadoop.yarn.conf.YarnConfiguration.getServiceAddressConfKeys (Lorg/apache/hadoop/conf/Configuration; ) Ljava/util/List;

2016-01-05 Thread Kiriti Sai
Hi,
I have looked at the suggested link before posting the question here, but I
didn't understand how to resolve this issue.
I've tried replacing the 2.2 Hadoop YARN libs present in the HBase lib
directory, but then it throws FileNotFoundException.
Can you please explain in more detail how to resolve this issue?
I'm using the HBase 0.98.15-hadoop2 version.

Thank you,
Sai Kiriti B
On Jan 5, 2016 7:38 PM, "Xiaoyu Wang"  wrote:

> Hi Sai!
> You can see the same topic :
>
> http://apache-kylin.74782.x6.nabble.com/NoSuchMethodError-org-apache-hadoop-yarn-conf-YarnConfiguration-getServiceAddressConfKeys-td2937.html#a2943
>
> On 2016-01-05 18:27, Kiriti Sai wrote:
>
>> Hi,
>> I've recently update the binaries in my Kylin setup from v1.1 incubating
>> to
>> v1.2. The cubes which were building fine till now are throwing the above
>> error.
>> This error is occuring in the extract fact table distinct columns step.
>> (Step 2).
>> Can you please point out any mistakes with the upgrading procedure or
>> anything else.
>>
>> Thank you,
>> Sai Kiriti B.
>>
>>


Re: Java.lang.NoSuchMethodError: org.apache.hadoop.yarn.conf.YarnConfiguration.getServiceAddressConfKeys (Lorg/apache/hadoop/conf/Configuration; ) Ljava/util/List;

2016-01-05 Thread Xiaoyu Wang

Hi Sai!
You can see the same topic :
http://apache-kylin.74782.x6.nabble.com/NoSuchMethodError-org-apache-hadoop-yarn-conf-YarnConfiguration-getServiceAddressConfKeys-td2937.html#a2943

On 2016-01-05 18:27, Kiriti Sai wrote:

Hi,
I've recently update the binaries in my Kylin setup from v1.1 incubating to
v1.2. The cubes which were building fine till now are throwing the above
error.
This error is occuring in the extract fact table distinct columns step.
(Step 2).
Can you please point out any mistakes with the upgrading procedure or
anything else.

Thank you,
Sai Kiriti B.



Java.lang.NoSuchMethodError: org.apache.hadoop.yarn.conf.YarnConfiguration.getServiceAddressConfKeys (Lorg/apache/hadoop/conf/Configuration; ) Ljava/util/List;

2016-01-05 Thread Kiriti Sai
Hi,
I've recently updated the binaries in my Kylin setup from v1.1-incubating to
v1.2. The cubes which were building fine till now are now throwing the above
error.
The error occurs in the Extract Fact Table Distinct Columns step (Step 2).
Can you please point out any mistakes with the upgrading procedure, or
anything else?

Thank you,
Sai Kiriti B.


Re: Re: about the parameter 'acceptPartial'

2016-01-05 Thread Li Yang
You don't need to set it up in JDBC, it's false by default.

On Tue, Jan 5, 2016 at 6:10 PM, wangsh...@sinoaudit.cn <
wangsh...@sinoaudit.cn> wrote:

> Yes, how can I do?
>
>
>
> wangsh...@sinoaudit.cn
>
> From: Li Yang
> Date: 2016-01-05 17:29
> To: dev
> Subject: Re: about the parameter 'acceptPartial'
> You want it be "false" always.
>
> When "true", Kylin may choose to return incorrect partial result as purpose
> of preview.
>
> On Mon, Jan 4, 2016 at 6:16 PM, wangsh...@sinoaudit.cn <
> wangsh...@sinoaudit.cn> wrote:
>
> > Hi all:
> >  Can anybody tell me what the query parameter 'acceptPartial' means? and
> I
> > wonder how I can setup this parameter in jdbc.
> >
> >
> >
> > wangsh...@sinoaudit.cn
> >
>


Re: Re: about the parameter 'acceptPartial'

2016-01-05 Thread wangsh...@sinoaudit.cn
Yes, how can I do that? 



wangsh...@sinoaudit.cn
 
From: Li Yang
Date: 2016-01-05 17:29
To: dev
Subject: Re: about the parameter 'acceptPartial'
You want it be "false" always.
 
When "true", Kylin may choose to return incorrect partial result as purpose
of preview.
 
On Mon, Jan 4, 2016 at 6:16 PM, wangsh...@sinoaudit.cn <
wangsh...@sinoaudit.cn> wrote:
 
> Hi all:
>  Can anybody tell me what the query parameter 'acceptPartial' means? and I
> wonder how I can setup this parameter in jdbc.
>
>
>
> wangsh...@sinoaudit.cn
>


[jira] [Created] (KYLIN-1288) Kylin

2016-01-05 Thread felix.hua (JIRA)
felix.hua created KYLIN-1288:


 Summary: Kylin
 Key: KYLIN-1288
 URL: https://issues.apache.org/jira/browse/KYLIN-1288
 Project: Kylin
  Issue Type: Bug
  Components: Web 
Reporter: felix.hua
Assignee: Zhong,Jason






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: cut size for hbase region

2016-01-05 Thread Li Yang
Given the v1.1 code, the only guess I could make is that
"kylin.hbase.region.count.max" in kylin.properties is really set to more than
1000.

To confirm, we need to see the reducer log of the step "Calculate HTable Region
Splits", if it's still there...

On Tue, Jan 5, 2016 at 1:38 PM, Zhang, Zhong  wrote:

> Hongbin,
>
> It's 1.1. Shaofeng pointed that  it's a bug in the configuration in 1.1.
>
> Thanks,
> Zhong
>
> -Original Message-
> From: hongbin ma [mailto:mahong...@apache.org]
> Sent: Monday, January 04, 2016 9:01 PM
> To: dev@kylin.apache.org
> Subject: Re: cut size for hbase region
>
> btw, what is the version you're using? in theory region count of a single
> segment will not exceed 500 by default, we want to check that
>
> On Tue, Jan 5, 2016 at 9:40 AM, hongbin ma  wrote:
>
> > the cutting is based on estimation. due to hbase compression and
> > encoding, the estimation might be not very accurate. one recent ticket
> > on this is
> > https://issues.apache.org/jira/browse/KYLIN-1237
> >
> > On Tue, Jan 5, 2016 at 12:30 AM, Zhang, Zhong 
> > wrote:
> >
> >> Hi All,
> >>
> >> Happy new year!
> >>
> >> Kylin provides three options for cut size. Please see the following:
> >>
> >> # The cut size for hbase region, in GB.
> >> # E.g, for cube whose capacity be marked as "SMALL", split region per
> >> 10GB by default
> >> kylin.hbase.region.cut.small=10
> >> kylin.hbase.region.cut.medium=20
> >> kylin.hbase.region.cut.large=100
> >>
> >> I choose cube size as small to build the cube and the following is
> >> one of the HTable I got.
> >> HTable: KYLIN_O03ZWB4DK9
> >>
> >>   *   Region Count: 979
> >>   *   Size: 5.75 TB
> >>   *   Start Time: 2011-12-31 00:00:00
> >>   *   End Time: 2014-05-01 01:00:00
> >> So the size of the HTable is 5.75TB and there are 979 regions in total?
> >>
> >> Let's do a little bit math. 979*10GB (since split region per 10GB
> >> when cube size is marked as small) definitely does not equal 5.75TB.
> >> Do I understand correctly?
> >>
> >> Best regards
> >> Zhong
> >>
> >>
> >
> >
> > --
> > Regards,
> >
> > *Bin Mahone | 马洪宾*
> > Apache Kylin: http://kylin.io
> > Github: https://github.com/binmahone
> >
>
>
>
> --
> Regards,
>
> *Bin Mahone | 马洪宾*
> Apache Kylin: http://kylin.io
> Github: https://github.com/binmahone
>
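
A quick sanity check on the numbers in this thread: 5.75 TB spread over 979
regions is about 5.75 * 1024 / 979 ≈ 6 GB per region on average, below the
10 GB "small" cut. That is consistent with the point above that the cut is
computed from an estimated size before HBase compression and encoding, so the
final HTable can end up considerably smaller than region count times cut size.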


Re: about the parameter 'acceptPartial'

2016-01-05 Thread Li Yang
You want it to be "false" always.

When "true", Kylin may choose to return an incorrect partial result, for
preview purposes.

On Mon, Jan 4, 2016 at 6:16 PM, wangsh...@sinoaudit.cn <
wangsh...@sinoaudit.cn> wrote:

> Hi all:
>  Can anybody tell me what the query parameter 'acceptPartial' means? and I
> wonder how I can setup this parameter in jdbc.
>
>
>
> wangsh...@sinoaudit.cn
>


Re: kylin sync hive table failed

2016-01-05 Thread Li Yang
I suggest attaching the full kylin.log instead of saying there is no log.

On Mon, Jan 4, 2016 at 10:24 AM, Jian Zhong  wrote:

> any more log?
>
> On Thu, Dec 31, 2015 at 2:33 PM, 和风 <363938...@qq.com> wrote:
>
> > Running find-hive-dependency.sh can find the jars;
> > env: Hadoop 2.7.1, Hive 1.2.1, Kylin 1.2
> >
> >
> > log:
> >
> >
> > Logging initialized using configuration in
> > jar:file:/usr/local/hive/lib/hive-common-1.2.1.jar!/hive-log4j.properties
> > SLF4J: Class path contains multiple SLF4J bindings.
> > SLF4J: Found binding in
> >
> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > SLF4J: Found binding in
> >
> [jar:file:/usr/local/hive/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> > explanation.
> > SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> > HCAT_HOME not found, try to find hcatalog path from hadoop home
> > hive dependency:
> >
> /usr/local/hive/conf:/usr/local/hive/lib/jsr305-1.3.9.jar:/usr/local/hive/lib/jetty-all-7.6.0.v20120127.jar:/usr/local/hive/lib/servlet-api-2.5.jar:/usr/local/hive/lib/jets3t-0.9.0.jar:/usr/local/hive/lib/accumulo-core-1.6.0.jar:/usr/local/hive/lib/libfb303-0.9.2.jar:/usr/local/hive/lib/json-serde-1.3.6-jar-with-dependencies.jar:/usr/local/hive/lib/commons-httpclient-3.0.1.jar:/usr/local/hive/lib/ivy-2.4.0.jar:/usr/local/hive/lib/hbase-examples-0.98.16.1-hadoop2.jar:/usr/local/hive/lib/jersey-client-1.9.jar:/usr/local/hive/lib/jersey-server-1.9.jar:/usr/local/hive/lib/slf4j-log4j12-1.7.5.jar:/usr/local/hive/lib/zookeeper-3.4.6.jar:/usr/local/hive/lib/bonecp-0.8.0.RELEASE.jar:/usr/local/hive/lib/activation-1.1.jar:/usr/local/hive/lib/snappy-java-1.0.5.jar:/usr/local/hive/lib/commons-cli-1.2.jar:/usr/local/hive/lib/ST4-4.0.4.jar:/usr/local/hive/lib/asm-3.1.jar:/usr/local/hive/lib/hive-common-1.2.1.jar:/usr/local/hive/lib/avro-1.7.5.jar:/usr/local/hive/lib/findbugs-annotations-1.3.9-1.jar:/usr/local/hive/lib/accumulo-trace-1.6.0.jar:/usr/local/hive/lib/jcommander-1.32.jar:/usr/local/hive/lib/commons-lang-2.6.jar:/usr/local/hive/lib/
> >
> >
> >
> > --
> > Young and true to taste
> >
> >
> >
> >
> >
> >
> >
> > -- Original Message --
> > From: "nichunen";
> > Sent: 2015-12-31 (Thu) 1:15 PM
> > To: "dev";
> >
> > Subject: Re: kylin sync hive table failed
> >
> >
> >
> > Hi,
> >
> >
> > Maybe you can run find-hive-dependency.sh to check whether the hive jar
> > packages can be found.
> >
> >
> >
> >
> > Best Regards,
> >
> >
> >
> > George/倪春恩
> >
> > Software Engineer/软件工程师
> >
> > Mobile:+86-13501723787| Fax:+8610-56842040
> >
> > Beijing MiningLamp Software Systems Co., Ltd. (www.mininglamp.com)
> >
> >
> > F4,1#,Zhongmei Construction Group Plaza,398# Zhongdong Road,Changping
> > District,Beijing,102218
> >
> >
> >
> 
> >
> >
> >
> >
> >
> > From: 和风
> > Sent: 2015-12-31 11:24
> > To: dev
> > Subject: kylin sync hive table failed
> >
> >
> > hi:
> >
> >
> > Kylin failed to sync the Hive table; the UI shows "failed to take action",
> > but there is no error in the log. (I use database.table.)
> >
> >
> >
> >
> >
> >
> >  log:
> >
> >
> >
> org.apache.kylin.rest.filter.KylinApiFilter.logRequest(KylinApiFilter.java:120)]
> > - REQUEST: REQUESTER=ADMIN;REQ_TIME=GMT-08:00 2015-12-30
> >
> 19:05:21;URI=/kylin/api/tables/froad_data.bi_refund_log/hive_hbase_kylin;METHOD=POST;QUERY_STRING=null;
> >  PAYLOAD=;RESP_STATUS=200;
> >
> >
> >
> >
> >  yehefeng
> >
>