[jira] [Created] (KYLIN-3763) Kylin failed to start with CDH 6.1.0

2019-01-08 Thread Travis Gu (JIRA)
Travis Gu created KYLIN-3763:


 Summary: Kylin failed to start with CDH 6.1.0
 Key: KYLIN-3763
 URL: https://issues.apache.org/jira/browse/KYLIN-3763
 Project: Kylin
  Issue Type: Bug
Affects Versions: v2.5.2
Reporter: Travis Gu


When I try to run Kylin 2.5.2-bin-cdh57 with Cloudera CDH 6.1.0, the 
check-env script does not print any error, but when I run the kylin.sh start 
command, the server fails to start.

This is the error log:

2019-01-09 14:06:11,504 INFO [main-SendThread(iot-mes-host-a.novalocal:2181)] 
zookeeper.ClientCnxn:1012 : Opening socket connection to server 
iot-mes-host-a.novalocal/10.135.99.196:2181. Will not attempt to authenticate 
using SASL (unknown error)
2019-01-09 14:06:11,504 INFO [main-SendThread(iot-mes-host-a.novalocal:2181)] 
zookeeper.ClientCnxn:856 : Socket connection established, initiating session, 
client: /10.135.99.196:45868, server: 
iot-mes-host-a.novalocal/10.135.99.196:2181
2019-01-09 14:06:11,509 INFO [main] imps.CuratorFrameworkImpl:326 : Default 
schema
2019-01-09 14:06:11,511 DEBUG [main] util.ZookeeperDistributedLock:143 : 
5359@iot-mes-host-a.novalocal trying to lock 
/kylin/kylin_metadata/create_htable/kylin_metadata/lock
2019-01-09 14:06:11,545 INFO [main-SendThread(iot-mes-host-a.novalocal:2181)] 
zookeeper.ClientCnxn:1272 : Session establishment complete on server 
iot-mes-host-a.novalocal/10.135.99.196:2181, sessionid = 0x1682be976a90b96, 
negotiated timeout = 6
2019-01-09 14:06:11,550 INFO [main-EventThread] 
state.ConnectionStateManager:237 : State change: CONNECTED
Exception in thread "main" java.lang.IllegalArgumentException: Failed to find 
metadata store by url: kylin_metadata@hbase
 at 
org.apache.kylin.common.persistence.ResourceStore.createResourceStore(ResourceStore.java:98)
 at 
org.apache.kylin.common.persistence.ResourceStore.getStore(ResourceStore.java:110)
 at 
org.apache.kylin.rest.service.AclTableMigrationTool.checkIfNeedMigrate(AclTableMigrationTool.java:99)
 at 
org.apache.kylin.tool.AclTableMigrationCLI.main(AclTableMigrationCLI.java:43)
Caused by: java.lang.reflect.InvocationTargetException
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
 at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
 at 
org.apache.kylin.common.persistence.ResourceStore.createResourceStore(ResourceStore.java:92)
 ... 3 more
Caused by: java.lang.NoSuchMethodError: 
org.apache.curator.framework.api.CreateBuilder.creatingParentsIfNeeded()Lorg/apache/curator/framework/api/ProtectACLCreateModePathAndBytesable;
 at 
org.apache.kylin.storage.hbase.util.ZookeeperDistributedLock.lock(ZookeeperDistributedLock.java:146)
 at 
org.apache.kylin.storage.hbase.util.ZookeeperDistributedLock.lock(ZookeeperDistributedLock.java:167)
 at 
org.apache.kylin.storage.hbase.HBaseConnection.createHTableIfNeeded(HBaseConnection.java:328)
 at 
org.apache.kylin.storage.hbase.HBaseResourceStore.createHTableIfNeeded(HBaseResourceStore.java:112)
 at 
org.apache.kylin.storage.hbase.HBaseResourceStore.(HBaseResourceStore.java:93)
 ... 8 more
2019-01-09 14:06:11,565 INFO [close-hbase-conn] hbase.HBaseConnection:136 : 
Closing HBase connections...

 

It looks like there are multiple Curator versions on the classpath. How can I solve this?
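The `NoSuchMethodError` on `CreateBuilder.creatingParentsIfNeeded` does suggest two different Curator versions visible at runtime (Kylin bundles its own, and CDH 6 ships a newer one). A quick way to check is sketched below; the paths are illustrative only, so adjust `/opt/kylin` and the CDH parcel directory to your installation:

```shell
# List every curator-framework jar that could end up on Kylin's
# classpath; more than one distinct version indicates a conflict.
find /opt/kylin /opt/cloudera/parcels/CDH -name 'curator-framework-*.jar' 2>/dev/null \
  | sed 's#.*/##' | sort -u
```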



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KYLIN-3762) Error when building cube

2019-01-08 Thread JIRA
王振强 created KYLIN-3762:
--

 Summary: Error when building cube
 Key: KYLIN-3762
 URL: https://issues.apache.org/jira/browse/KYLIN-3762
 Project: Kylin
  Issue Type: Bug
Affects Versions: v2.5.2
Reporter: 王振强
 Attachments: mr_error_log.jpg

An error occurred while building the cube. Below is the MR log:

2019-01-09 14:06:43,552 INFO [fetcher#1] 
org.apache.hadoop.mapreduce.task.reduce.Fetcher: fetcher#1 about to shuffle 
output of map attempt_1544413042966_0495_m_02_0 decomp: 59732393 len: 
9306880 to MEMORY
2019-01-09 14:06:43,595 INFO [fetcher#2] 
org.apache.hadoop.mapreduce.task.reduce.Fetcher: fetcher#2 about to shuffle 
output of map attempt_1544413042966_0495_m_03_0 decomp: 61855877 len: 
9709575 to MEMORY
2019-01-09 14:06:43,625 INFO [fetcher#1] 
org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput: Read 59732393 bytes 
from map-output for attempt_1544413042966_0495_m_02_0
2019-01-09 14:06:43,625 INFO [fetcher#1] 
org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl: closeInMemoryFile -> 
map-output of size: 59732393, inMemoryMapOutputs.size() -> 1, commitMemory -> 
0, usedMemory ->121588270
2019-01-09 14:06:43,872 INFO [fetcher#1] 
org.apache.hadoop.mapreduce.task.reduce.Fetcher: fetcher#1 about to shuffle 
output of map attempt_1544413042966_0495_m_01_0 decomp: 60219839 len: 
9341252 to MEMORY
2019-01-09 14:06:43,889 INFO [fetcher#2] 
org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput: Read 61855877 bytes 
from map-output for attempt_1544413042966_0495_m_03_0
2019-01-09 14:06:43,889 INFO [fetcher#2] 
org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl: closeInMemoryFile -> 
map-output of size: 61855877, inMemoryMapOutputs.size() -> 2, commitMemory -> 
59732393, usedMemory ->181808109
2019-01-09 14:06:43,889 INFO [fetcher#2] 
org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl: ap-cdh04:13562 
freed by fetcher#2 in 579ms
2019-01-09 14:06:43,938 INFO [fetcher#1] 
org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput: Read 60219839 bytes 
from map-output for attempt_1544413042966_0495_m_01_0
2019-01-09 14:06:43,938 INFO [fetcher#1] 
org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl: closeInMemoryFile -> 
map-output of size: 60219839, inMemoryMapOutputs.size() -> 3, commitMemory -> 
121588270, usedMemory ->181808109
2019-01-09 14:06:43,949 INFO [fetcher#1] 
org.apache.hadoop.mapreduce.task.reduce.Fetcher: fetcher#1 about to shuffle 
output of map attempt_1544413042966_0495_m_00_0 decomp: 61659651 len: 
9657307 to MEMORY
2019-01-09 14:06:43,994 INFO [fetcher#1] 
org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput: Read 61659651 bytes 
from map-output for attempt_1544413042966_0495_m_00_0
2019-01-09 14:06:43,995 INFO [fetcher#1] 
org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl: closeInMemoryFile -> 
map-output of size: 61659651, inMemoryMapOutputs.size() -> 4, commitMemory -> 
181808109, usedMemory ->243467760
2019-01-09 14:06:43,995 INFO [fetcher#1] 
org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl: ap-cdh03:13562 
freed by fetcher#1 in 685ms
2019-01-09 14:06:44,001 INFO [EventFetcher for fetching Map Completion Events] 
org.apache.hadoop.mapreduce.task.reduce.EventFetcher: EventFetcher is 
interrupted.. Returning
2019-01-09 14:06:44,014 INFO [main] 
org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl: finalMerge called 
with 4 in-memory map-outputs and 0 on-disk map-outputs
2019-01-09 14:06:44,024 INFO [main] org.apache.hadoop.mapred.Merger: Merging 4 
sorted segments
2019-01-09 14:06:44,026 INFO [main] org.apache.hadoop.mapred.Merger: Down to 
the last merge-pass, with 4 segments left of total size: 243467522 bytes
2019-01-09 14:06:47,997 INFO [main] 
org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl: Merged 4 segments, 
243467760 bytes to disk to satisfy reduce memory limit
2019-01-09 14:06:47,998 INFO [main] 
org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl: Merging 1 files, 
35816160 bytes from disk
2019-01-09 14:06:47,998 INFO [main] 
org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl: Merging 0 segments, 0 
bytes from memory into reduce
2019-01-09 14:06:47,999 INFO [main] org.apache.hadoop.mapred.Merger: Merging 1 
sorted segments
2019-01-09 14:06:48,004 INFO [main] org.apache.hadoop.mapred.Merger: Down to 
the last merge-pass, with 1 segments left of total size: 243467668 bytes





Re: Increment Upload in Kylin

2019-01-08 Thread Support DrakosData
Ans1: Multilevel partitioning of a cube is not supported in Apache Kylin; 
it is a feature of Kyligence Enterprise.


Ans2: Is choosing a partition column mandatory in Kylin? Yes, and it must 
be a datetime (or equivalent) column (KYLIN-2032, v1.5.4.1 & KYLIN-2026, 
v1.5.4.1)


Ans2.1: It is a good idea for the Hive data source (DWH) and the Kylin 
cube to be partitioned by the same column, but it isn't mandatory
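For context, the mandatory partition column lives in the cube descriptor's `partition_desc` block. A minimal sketch follows; the table and column names are hypothetical, and the field names are as they appear in Kylin 2.x cube metadata, so verify them against your own cube's JSON:

```json
{
  "partition_desc": {
    "partition_date_column": "DEFAULT.MY_FACT.DT",
    "partition_date_format": "yyyy-MM-dd",
    "partition_type": "APPEND"
  }
}
```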






--


DRAKOSDATA: Apache Kylin Experts
Kyligence Partner Spain (Madrid)


This message and any attachments are confidential and privileged and intended 
for the use of the addressee only.
You may exercise your rights of access, rectification, cancellation 
and opposition by sending a written request to 
admin@a...@drakosdata.com



Re: Increment Upload in Kylin

2019-01-08 Thread somu0...@gmail.com
Yes, is multilevel partitioning supported? I am not able to find that
option; at the end it only asks me to choose the partition column from the
fact table.

If this feature is supported, then once the data comes in we will trigger
the build, it will create a new cube segment for each change, and later on
we will need to merge them. That is my understanding; please correct me.

Also, I have a doubt: is Hive partitioning also mandatory for choosing the
partitioning column in Kylin?

Thanks in advance

--
Sent from: http://apache-kylin.74782.x6.nabble.com/


Re: Increment Upload in Kylin

2019-01-08 Thread JiaTao Tao
This sounds like "incremental build"? Cube data consists of segments;
every build creates a new segment and does not refresh the old segments.
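Incremental builds are triggered per time range: each request appends one new segment without touching the existing ones. The sketch below targets Kylin's REST build endpoint; the host, cube name, and ADMIN credentials are placeholders, and the curl call is left commented since it needs a running server:

```shell
# Build one day's segment covering [startTime, endTime) in epoch millis.
START=$(( $(date -u -d '2019-01-08' +%s) * 1000 ))
END=$((   $(date -u -d '2019-01-09' +%s) * 1000 ))
PAYLOAD="{\"startTime\":$START,\"endTime\":$END,\"buildType\":\"BUILD\"}"
echo "$PAYLOAD"
# curl -X PUT -u ADMIN:KYLIN -H 'Content-Type: application/json' \
#   -d "$PAYLOAD" http://localhost:7070/kylin/api/cubes/my_cube/rebuild
```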

somu0...@gmail.com  于2019年1月7日周一 上午2:16写道:

> Is there any feature in Kylin which will do an incremental update without
> refreshing the complete cube? For example, if one dimension gets new data
> every day, it should calculate only the new data without refreshing the
> complete cube, which will save cube build time. Could you please tell me
> if such a feature is available in Kylin?
>
> --
> Sent from: http://apache-kylin.74782.x6.nabble.com/
>


-- 


Regards!

Aron Tao


Re: Reply: Can multiple machines build a cube at the same time?

2019-01-08 Thread JiaTao Tao
Kylin submits cubing tasks to YARN; if your Hadoop cluster has
multiple nodes, the build can use their capacity.

NoOne <3513797...@qq.com> 于2019年1月8日周二 上午8:04写道:

> Sorry, what I asked is: can multiple machines be set to build the same cube at the same time?
>
> --
> Sent from: http://apache-kylin.74782.x6.nabble.com/
>


-- 


Regards!

Aron Tao


[jira] [Created] (KYLIN-3761) Kylin for CDH6.1.0

2019-01-08 Thread Davide Malagoli (JIRA)
Davide Malagoli created KYLIN-3761:
--

 Summary: Kylin for CDH6.1.0
 Key: KYLIN-3761
 URL: https://issues.apache.org/jira/browse/KYLIN-3761
 Project: Kylin
  Issue Type: Wish
Reporter: Davide Malagoli








Re: Reply: Can multiple machines build a cube at the same time?

2019-01-08 Thread NoOne
Sorry, what I asked is: can multiple machines be set to build the same cube at the same time?

--
Sent from: http://apache-kylin.74782.x6.nabble.com/