[jira] [Closed] (KYLIN-657) JDBC Driver not register into DriverManager

2018-02-01 Thread Dong Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Li closed KYLIN-657.
-
Assignee: Xiaoyu Wang  (was: liyang)

> JDBC Driver not register into DriverManager
> ---
>
> Key: KYLIN-657
> URL: https://issues.apache.org/jira/browse/KYLIN-657
> Project: Kylin
>  Issue Type: Bug
>  Components: Driver - JDBC
>Affects Versions: v0.6.5, v0.7.1
>Reporter: Xiaoyu Wang
>Assignee: Xiaoyu Wang
>Priority: Major
> Fix For: v0.7.1
>
>
> The Driver is not registered into DriverManager.
> When using Spring to manage the datasource pool with 
> org.apache.commons.dbcp.BasicDataSource, the following exception is thrown:
> Exception in thread "main" java.sql.SQLException: No suitable driver found 
> for jdbc:kylin://x.x.x.x/default
> at java.sql.DriverManager.getConnection(DriverManager.java:596)
> at java.sql.DriverManager.getConnection(DriverManager.java:187)
> Pull Request:
> https://github.com/KylinOLAP/Kylin/pull/452
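
For context, the usual fix for this class of problem is for the driver to register an 
instance of itself with java.sql.DriverManager in a static initializer. The sketch below 
only illustrates that pattern; it is not the actual change in the pull request, and the 
class name and method bodies are placeholders.

{code:java}
import java.sql.Connection;
import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.DriverPropertyInfo;
import java.sql.SQLException;
import java.util.Properties;
import java.util.logging.Logger;

// Illustrative sketch, not the actual Kylin patch: a JDBC driver that
// registers itself with DriverManager, so DriverManager.getConnection()
// and pools such as commons-dbcp BasicDataSource can resolve the
// jdbc:kylin:// URL without an explicit Class.forName() in user code.
public class SelfRegisteringDriverSketch implements Driver {

    static {
        try {
            DriverManager.registerDriver(new SelfRegisteringDriverSketch());
        } catch (SQLException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    @Override
    public boolean acceptsURL(String url) {
        return url != null && url.startsWith("jdbc:kylin:");
    }

    @Override
    public Connection connect(String url, Properties info) throws SQLException {
        if (!acceptsURL(url)) {
            return null; // per the JDBC contract, return null for URLs this driver does not handle
        }
        throw new SQLException("sketch only - a real driver opens the connection here");
    }

    @Override
    public DriverPropertyInfo[] getPropertyInfo(String url, Properties info) {
        return new DriverPropertyInfo[0];
    }

    @Override
    public int getMajorVersion() { return 0; }

    @Override
    public int getMinorVersion() { return 1; }

    @Override
    public boolean jdbcCompliant() { return false; }

    @Override
    public Logger getParentLogger() { return Logger.getLogger("kylin-jdbc-sketch"); }
}
{code}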



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (KYLIN-643) JDBC couldn't connect to Kylin: "java.sql.SQLException: Authentication Failed"

2018-02-01 Thread Dong Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Li closed KYLIN-643.
-

> JDBC couldn't connect to Kylin: "java.sql.SQLException: Authentication Failed"
> --
>
> Key: KYLIN-643
> URL: https://issues.apache.org/jira/browse/KYLIN-643
> Project: Kylin
>  Issue Type: Bug
>  Components: Driver - JDBC
>Affects Versions: v0.7.1
>Reporter: Shaofeng SHI
>Assignee: Shaofeng SHI
>Priority: Major
> Fix For: v0.7.1
>
>
> Got an authentication error at the client side (I'm using Eclipse as the client):
> java.sql.SQLException: Authentication Failed.
>   at org.apache.kylin.jdbc.Driver$1.onConnectionInit(Driver.java:116)
>   at 
> net.hydromatic.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:137)
>   at 
> org.eclipse.datatools.connectivity.drivers.jdbc.JDBCConnection.createConnection(JDBCConnection.java:328)
>   at 
> org.eclipse.datatools.connectivity.DriverConnectionBase.internalCreateConnection(DriverConnectionBase.java:105)
>   at 
> org.eclipse.datatools.connectivity.DriverConnectionBase.open(DriverConnectionBase.java:54)
>   at 
> org.eclipse.datatools.connectivity.drivers.jdbc.JDBCConnection.open(JDBCConnection.java:96)
>   at 
> org.eclipse.datatools.connectivity.drivers.jdbc.JDBCConnectionFactory.createConnection(JDBCConnectionFactory.java:53)
>   at 
> org.eclipse.datatools.connectivity.internal.ConnectionFactoryProvider.createConnection(ConnectionFactoryProvider.java:83)
>   at 
> org.eclipse.datatools.connectivity.internal.ConnectionProfile.createConnection(ConnectionProfile.java:359)
>   at 
> org.eclipse.datatools.connectivity.ui.PingJob.createTestConnection(PingJob.java:76)
>   at org.eclipse.datatools.connectivity.ui.PingJob.run(PingJob.java:59)
>   at org.eclipse.core.internal.jobs.Worker.run(Worker.java:54)
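
As a reference point when reproducing this, the snippet below shows the usual way 
credentials are passed to a Kylin JDBC connection. The host, port, project and 
ADMIN/KYLIN values are placeholders rather than the reporter's environment, and the 
sketch assumes the Kylin driver is on the classpath and registered.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

// Minimal sketch: pass user/password to the Kylin JDBC driver via Properties.
// All connection details below are placeholders.
public class KylinJdbcLoginSketch {
    public static void main(String[] args) throws Exception {
        Properties info = new Properties();
        info.put("user", "ADMIN");
        info.put("password", "KYLIN");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:kylin://localhost:7070/learn_kylin", info)) {
            System.out.println("connected: " + !conn.isClosed());
        }
    }
}
{code}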



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (KYLIN-595) Kylin JDBC driver should not assume Kylin server listen on either 80 or 443

2018-02-01 Thread Dong Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Li closed KYLIN-595.
-

> Kylin JDBC driver should not assume Kylin server listen on either 80 or 443
> ---
>
> Key: KYLIN-595
> URL: https://issues.apache.org/jira/browse/KYLIN-595
> Project: Kylin
>  Issue Type: Bug
>  Components: Driver - JDBC
>Affects Versions: v0.6.5
>Reporter: Shaofeng SHI
>Assignee: Shaofeng SHI
>Priority: Major
> Fix For: v0.7.1
>
>
> The Kylin JDBC driver assumes the server is listening on port 80 or 443. When 
> a user creates a JDBC connection, it checks whether the user specified the 
> "ssl" property (default value false); if false, it connects to the server 
> with http on port 80; if ssl=true, it uses https to connect to the server on 
> port 443. Kylin should not make such an assumption.
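
To make the described behaviour concrete, here is a small sketch (not Kylin's actual 
code) of the port selection logic and of the fix, i.e. honouring an explicit port such 
as jdbc:kylin://host:7070/project instead of assuming 80/443:

{code:java}
// Illustrative only: if the JDBC URL carries no explicit port, fall back to
// 80 (http) or 443 (https) depending on the "ssl" property - the assumption
// this issue asks to remove. An explicit port should always win.
public final class PortSelectionSketch {

    static int resolvePort(String hostAndPort, boolean ssl) {
        int colon = hostAndPort.indexOf(':');
        if (colon >= 0) {
            return Integer.parseInt(hostAndPort.substring(colon + 1)); // explicit port wins
        }
        return ssl ? 443 : 80; // legacy default the driver should not rely on
    }

    public static void main(String[] args) {
        System.out.println(resolvePort("kylin.example.com", false));      // 80
        System.out.println(resolvePort("kylin.example.com", true));       // 443
        System.out.println(resolvePort("kylin.example.com:7070", false)); // 7070
    }
}
{code}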



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KYLIN-3140) Auto merge jobs should not block user build jobs

2018-02-01 Thread Shaofeng SHI (JIRA)

[ 
https://issues.apache.org/jira/browse/KYLIN-3140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349817#comment-16349817
 ] 

Shaofeng SHI commented on KYLIN-3140:
-

Hi Gang, I understand your point.

The end user may not care how many merge jobs are running, but the platform 
administrator does. The "max-building-segments" setting was introduced to 
constrain the maximum number of concurrent jobs a Kylin server can run. If 
this threshold is exceeded, the server may run into performance issues and 
report errors such as OOM. That's why it doesn't differentiate between the 
merge/refresh and build jobs.

Usually, if a job fails, the administrator needs to fix the problem immediately 
and then decide whether to resume or discard it. If the root cause isn't 
identified, resuming or discarding doesn't help. Even for a merge job, the 
administrator needs to take care of it; we don't see a strong reason to ignore 
that.

 

> Auto merge jobs should not block user build jobs
> 
>
> Key: KYLIN-3140
> URL: https://issues.apache.org/jira/browse/KYLIN-3140
> Project: Kylin
>  Issue Type: Improvement
>  Components: Job Engine
>Reporter: Wang, Gang
>Assignee: Shaofeng SHI
>Priority: Major
>
> Although in the latest version Kylin supports concurrent jobs, if the 
> concurrency is set to 1 there is some possibility that cube build jobs will 
> deadlock. Say some issue causes a merge job to fail; even when you discard 
> the job, another job will be launched and fail again due to the auto merge 
> policy. And this failed merge job blocks users from building incremental 
> segments.
> Even if the concurrency is set to larger than 1, the auto merge jobs occupy 
> some of the concurrency quota. 
> From the user's perspective, they don't care much about the auto merge jobs, 
> and the auto merge jobs should not block the build/refresh jobs they 
> submit manually.
> A better way may be to separate the auto merge jobs from the job queue, so 
> the max-building-segments parameter only limits jobs submitted by users.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KYLIN-3141) Support offset for segment merge

2018-02-01 Thread Shaofeng SHI (JIRA)

[ 
https://issues.apache.org/jira/browse/KYLIN-3141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349812#comment-16349812
 ] 

Shaofeng SHI commented on KYLIN-3141:
-

Is it the same requirement as "volatile_range"? You can find it in the latest 
Kylin, in CubeDesc.java. Please check KYLIN-1892.
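
For readers who haven't seen it, the idea behind volatile_range is roughly sketched 
below; this is an illustration of the concept only, not the CubeDesc implementation, 
and the helper names are made up.

{code:java}
import java.util.concurrent.TimeUnit;

// Concept sketch only (not Kylin's implementation): segments whose end time
// falls inside the last N "volatile" days are skipped by auto merge, which
// also leaves a window for daily incremental builds and backfills before the
// data is folded into a large merged segment.
public class VolatileRangeSketch {

    static boolean isVolatile(long segmentEndMillis, long nowMillis, long volatileDays) {
        return segmentEndMillis > nowMillis - TimeUnit.DAYS.toMillis(volatileDays);
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        long threeDaysAgo = now - TimeUnit.DAYS.toMillis(3);
        System.out.println(isVolatile(threeDaysAgo, now, 7)); // true  -> not merged yet
        System.out.println(isVolatile(threeDaysAgo, now, 1)); // false -> old enough to merge
    }
}
{code}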

> Support offset for segment merge
> 
>
> Key: KYLIN-3141
> URL: https://issues.apache.org/jira/browse/KYLIN-3141
> Project: Kylin
>  Issue Type: New Feature
>  Components: Job Engine
>Reporter: Wang, Gang
>Assignee: Wang, Gang
>Priority: Minor
>
> This is a request to add an offset to the Kylin segment merge so as to avoid 
> immediate merging of segments after a 7 day / 30 day window.
> Introducing a delay (offset) would help with 2 things:
> a) When auto merge kicks off, I have a new segment and my daily incremental 
> segment build script will fail because it won't find the last segment.
> b) There are lots of use cases where I may need to do data backfills for some 
> of the days of the previous week, but I end up refreshing the whole merged 
> segment instead of a day or two.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KYLIN-3223) Query for the list of hybrid cubes results in NPE

2018-02-01 Thread Billy Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/KYLIN-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349773#comment-16349773
 ] 

Billy Liu commented on KYLIN-3223:
--

Thanks [~seva_ostapenko], I think you have identified the root cause. Do you 
want to submit a PR?
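
For illustration, one reading of the trace is that ProjectManager.getProject receives a 
null project name, and the ConcurrentSkipListMap behind the project cache rejects null 
keys. A guard of roughly the following shape, applied before the cache lookup, would 
turn the NPE into a clear error; this is a hypothetical sketch, not the submitted fix.

{code:java}
// Hypothetical guard, not the actual fix: reject a missing/blank "project"
// request parameter before ProjectManager.getProject(...) is called, since
// the ConcurrentSkipListMap backing the cache throws NPE on null keys.
public class ProjectParamGuardSketch {

    static String requireProject(String project) {
        if (project == null || project.trim().isEmpty()) {
            throw new IllegalArgumentException("'project' parameter is required");
        }
        return project;
    }

    public static void main(String[] args) {
        System.out.println(requireProject("learn_kylin")); // ok
        System.out.println(requireProject(null));          // fails fast with a clear message
    }
}
{code}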

> Query for the list of hybrid cubes results in NPE
> -
>
> Key: KYLIN-3223
> URL: https://issues.apache.org/jira/browse/KYLIN-3223
> Project: Kylin
>  Issue Type: Bug
>  Components: REST Service
>Affects Versions: v2.2.0
> Environment: HDP 2.5.6, Kylin 2.2
>Reporter: Vsevolod Ostapenko
>Assignee: Zhixiong Chen
>Priority: Major
>
> Calling the REST API to get the list of hybrid cubes returns a stack trace 
> with an NPE.
> {quote}curl -u ADMIN:KYLIN -X GET -H 'Content-Type: application/json'  -d {} 
> [http://localhost:7070/kylin/api/hybrids]
> {"code":"999","data":null,"msg":null,"stacktrace":"java.lang.NullPointerException\n\tat
>  
> java.util.concurrent.ConcurrentSkipListMap.doGet(ConcurrentSkipListMap.java:778)\n\tat
>  
> java.util.concurrent.ConcurrentSkipListMap.get(ConcurrentSkipListMap.java:1546)\n\tat
>  
> org.apache.kylin.metadata.cachesync.SingleValueCache.get(SingleValueCache.java:85)\n\tat
>  
> org.apache.kylin.metadata.project.ProjectManager.getProject(ProjectManager.java:172)\n\tat
>  
> org.apache.kylin.rest.util.AclEvaluate.getProjectInstance(AclEvaluate.java:39)\n\tat
>  
> org.apache.kylin.rest.util.AclEvaluate.checkProjectReadPermission(AclEvaluate.java:61)\n\tat
>  
> org.apache.kylin.rest.service.HybridService.listHybrids(HybridService.java:115)\n\tat
>  
> org.apache.kylin.rest.controller.HybridController.list(HybridController.java:76)\n\tat
>  sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\tat 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n\tat
>  
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat
>  java.lang.reflect.Method.invoke(Method.java:497)\n\tat 
> org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205)\n\tat
>  
> org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:133)\n\tat
>  
> org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:97)\n\tat
>  
> org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:827)\n\tat
>  
> org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:738)\n\tat
>  
> org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:85)\n\tat
>  
> org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:967)\n\tat
>  
> org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:901)\n\tat
>  
> org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:970)\n\tat
>  
> org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:861)\n\tat
>  javax.servlet.http.HttpServlet.service(HttpServlet.java:624)\n\tat 
> org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:846)\n\tat
>  javax.servlet.http.HttpServlet.service(HttpServlet.java:731)\n\tat 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)\n\tat
>  
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)\n\tat
>  org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)\n\tat 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)\n\tat
>  
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)\n\tat
>  
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:317)\n\tat
>  
> org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:127)\n\tat
>  
> org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:91)\n\tat
>  
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)\n\tat
>  
> org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:114)\n\tat
>  
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)\n\tat
>  
> org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:137)\n\tat
> 

[jira] [Assigned] (KYLIN-3094) Upgrade zookeeper to 3.4.11

2018-02-01 Thread nichunen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-3094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nichunen reassigned KYLIN-3094:
---

Assignee: nichunen

> Upgrade zookeeper to 3.4.11
> ---
>
> Key: KYLIN-3094
> URL: https://issues.apache.org/jira/browse/KYLIN-3094
> Project: Kylin
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: nichunen
>Priority: Minor
>
> The current zookeeper release is 3.4.11.
> We should upgrade the dependency from 3.4.8 to 3.4.11, which contains 
> important security fixes.
> One such critical fix is ZOOKEEPER-2146, which can be exploited maliciously.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (KYLIN-679) Adding Spark Support to Apache Kylin

2018-02-01 Thread Shaofeng SHI (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaofeng SHI closed KYLIN-679.
--
Resolution: Fixed

> Adding Spark Support to Apache Kylin
> 
>
> Key: KYLIN-679
> URL: https://issues.apache.org/jira/browse/KYLIN-679
> Project: Kylin
>  Issue Type: New Feature
>  Components: Spark Engine
>Reporter: Luke Han
>Priority: Major
> Fix For: v2.0.0
>
>
> Challenges in current architecture:
> High latency when reading data from Hive 
> --Several hours to fetch data when join big tables
> --Route to SQL-on-Hadoop turned off due to performance issue
> Time-to-Market of data latency
> --Huge IO & Network traffic with MR jobs
> Streaming
> --Streaming process and pre-calculate cubes
> Where Spark could bring benefits to Kylin:
> Integrating with Spark SQL: 
> --Option I: Read data from SparkSQL instead of Hive
> --Option II: Route unsupported queries to SparkSQL
> --Option III: Kylin to be OLAP source of SparkSQL
> Spark Cube Build Engine
> --Efficient cube generation engine with Spark
> Spark Streaming
> --Leverage SparkStreaming for StreamingOLAP
> HBase?
> --Any idea?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KYLIN-679) Adding Spark Support to Apache Kylin

2018-02-01 Thread Shaofeng SHI (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaofeng SHI updated KYLIN-679:
---
Component/s: (was: General)
 Spark Engine

> Adding Spark Support to Apache Kylin
> 
>
> Key: KYLIN-679
> URL: https://issues.apache.org/jira/browse/KYLIN-679
> Project: Kylin
>  Issue Type: New Feature
>  Components: Spark Engine
>Reporter: Luke Han
>Priority: Major
> Fix For: v2.0.0
>
>
> Challenges in current architecture:
> High latency when reading data from Hive 
> --Several hours to fetch data when join big tables
> --Route to SQL-on-Hadoop turned off due to performance issue
> Time-to-Market of data latency
> --Huge IO & Network traffic with MR jobs
> Streaming
> --Streaming process and pre-calculate cubes
> Where Spark could bring benefits to Kylin:
> Integrating with Spark SQL: 
> --Option I: Read data from SparkSQL instead of Hive
> --Option II: Route unsupported queries to SparkSQL
> --Option III: Kylin to be OLAP source of SparkSQL
> Spark Cube Build Engine
> --Efficient cube generation engine with Spark
> Spark Streaming
> --Leverage SparkStreaming for StreamingOLAP
> HBase?
> --Any idea?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (KYLIN-679) Adding Spark Support to Apache Kylin

2018-02-01 Thread Shaofeng SHI (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaofeng SHI closed KYLIN-679.
--
   Resolution: Fixed
Fix Version/s: (was: Future)
   v2.0.0

Spark support has been implemented in 2.0; closing this JIRA.

> Adding Spark Support to Apache Kylin
> 
>
> Key: KYLIN-679
> URL: https://issues.apache.org/jira/browse/KYLIN-679
> Project: Kylin
>  Issue Type: New Feature
>  Components: General
>Reporter: Luke Han
>Priority: Major
> Fix For: v2.0.0
>
>
> Challenges in current architecture:
> High latency when reading data from Hive 
> --Several hours to fetch data when join big tables
> --Route to SQL-on-Hadoop turned off due to performance issue
> Time-to-Market of data latency
> --Huge IO & Network traffic with MR jobs
> Streaming
> --Streaming process and pre-calculate cubes
> Where Spark could bring benefits to Kylin:
> Integrating with Spark SQL: 
> --Option I: Read data from SparkSQL instead of Hive
> --Option II: Route unsupported queries to SparkSQL
> --Option III: Kylin to be OLAP source of SparkSQL
> Spark Cube Build Engine
> --Efficient cube generation engine with Spark
> Spark Streaming
> --Leverage SparkStreaming for StreamingOLAP
> HBase?
> --Any idea?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (KYLIN-679) Adding Spark Support to Apache Kylin

2018-02-01 Thread Shaofeng SHI (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaofeng SHI reopened KYLIN-679:


> Adding Spark Support to Apache Kylin
> 
>
> Key: KYLIN-679
> URL: https://issues.apache.org/jira/browse/KYLIN-679
> Project: Kylin
>  Issue Type: New Feature
>  Components: General
>Reporter: Luke Han
>Priority: Major
> Fix For: v2.0.0
>
>
> Challenges in current architecture:
> High latency when reading data from Hive 
> --Several hours to fetch data when join big tables
> --Route to SQL-on-Hadoop turned off due to performance issue
> Time-to-Market of data latency
> --Huge IO & Network traffic with MR jobs
> Streaming
> --Streaming process and pre-calculate cubes
> Where Spark could bring benefits to Kylin:
> Integrating with Spark SQL: 
> --Option I: Read data from SparkSQL instead of Hive
> --Option II: Route unsupported queries to SparkSQL
> --Option III: Kylin to be OLAP source of SparkSQL
> Spark Cube Build Engine
> --Efficient cube generation engine with Spark
> Spark Streaming
> --Leverage SparkStreaming for StreamingOLAP
> HBase?
> --Any idea?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KYLIN-3223) Query for the list of hybrid cubes results in NPE

2018-02-01 Thread Zhixiong Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhixiong Chen reassigned KYLIN-3223:


Assignee: Zhixiong Chen  (was: luguosheng)

> Query for the list of hybrid cubes results in NPE
> -
>
> Key: KYLIN-3223
> URL: https://issues.apache.org/jira/browse/KYLIN-3223
> Project: Kylin
>  Issue Type: Bug
>  Components: REST Service
>Affects Versions: v2.2.0
> Environment: HDP 2.5.6, Kylin 2.2
>Reporter: Vsevolod Ostapenko
>Assignee: Zhixiong Chen
>Priority: Major
>
> Calling the REST API to get the list of hybrid cubes returns a stack trace 
> with an NPE.
> {quote}curl -u ADMIN:KYLIN -X GET -H 'Content-Type: application/json'  -d {} 
> [http://localhost:7070/kylin/api/hybrids]
> {"code":"999","data":null,"msg":null,"stacktrace":"java.lang.NullPointerException\n\tat
>  
> java.util.concurrent.ConcurrentSkipListMap.doGet(ConcurrentSkipListMap.java:778)\n\tat
>  
> java.util.concurrent.ConcurrentSkipListMap.get(ConcurrentSkipListMap.java:1546)\n\tat
>  
> org.apache.kylin.metadata.cachesync.SingleValueCache.get(SingleValueCache.java:85)\n\tat
>  
> org.apache.kylin.metadata.project.ProjectManager.getProject(ProjectManager.java:172)\n\tat
>  
> org.apache.kylin.rest.util.AclEvaluate.getProjectInstance(AclEvaluate.java:39)\n\tat
>  
> org.apache.kylin.rest.util.AclEvaluate.checkProjectReadPermission(AclEvaluate.java:61)\n\tat
>  
> org.apache.kylin.rest.service.HybridService.listHybrids(HybridService.java:115)\n\tat
>  
> org.apache.kylin.rest.controller.HybridController.list(HybridController.java:76)\n\tat
>  sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\tat 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n\tat
>  
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat
>  java.lang.reflect.Method.invoke(Method.java:497)\n\tat 
> org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205)\n\tat
>  
> org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:133)\n\tat
>  
> org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:97)\n\tat
>  
> org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:827)\n\tat
>  
> org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:738)\n\tat
>  
> org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:85)\n\tat
>  
> org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:967)\n\tat
>  
> org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:901)\n\tat
>  
> org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:970)\n\tat
>  
> org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:861)\n\tat
>  javax.servlet.http.HttpServlet.service(HttpServlet.java:624)\n\tat 
> org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:846)\n\tat
>  javax.servlet.http.HttpServlet.service(HttpServlet.java:731)\n\tat 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)\n\tat
>  
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)\n\tat
>  org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)\n\tat 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)\n\tat
>  
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)\n\tat
>  
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:317)\n\tat
>  
> org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:127)\n\tat
>  
> org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:91)\n\tat
>  
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)\n\tat
>  
> org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:114)\n\tat
>  
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)\n\tat
>  
> org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:137)\n\tat
>  
> 

[jira] [Updated] (KYLIN-3191) Remove the deprecated configuration item kylin.security.acl.default-role

2018-02-01 Thread Shaofeng SHI (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-3191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaofeng SHI updated KYLIN-3191:

Component/s: (was: General)
 Security

> Remove the deprecated configuration item kylin.security.acl.default-role
> 
>
> Key: KYLIN-3191
> URL: https://issues.apache.org/jira/browse/KYLIN-3191
> Project: Kylin
>  Issue Type: Task
>  Components: Security
>Affects Versions: v2.3.0
>Reporter: peng.jianhua
>Assignee: peng.jianhua
>Priority: Minor
> Fix For: v2.3.0
>
> Attachments: 0001-KYLIN-3191.patch
>
>
> Since KYLIN-2960, kylin.security.acl.default-role has been deprecated, so 
> remove it from the default kylin.properties.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KYLIN-3224) data can't show when use kylin pushdown model

2018-02-01 Thread peng.jianhua (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-3224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

peng.jianhua updated KYLIN-3224:

Attachment: 0001-KYLIN-3224.patch

> data can't show when use kylin pushdown model 
> --
>
> Key: KYLIN-3224
> URL: https://issues.apache.org/jira/browse/KYLIN-3224
> Project: Kylin
>  Issue Type: Bug
>  Components: Web 
>Affects Versions: v2.2.0, v2.3.0
>Reporter: peng.jianhua
>Assignee: peng.jianhua
>Priority: Major
> Attachments: 0001-KYLIN-3224.patch, 01.PNG
>
>
> select * from kylin_sales
> use pushdown mode, and the result shows as in 01.PNG



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KYLIN-3203) Upgrade Jacoco release

2018-02-01 Thread nichunen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-3203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nichunen reassigned KYLIN-3203:
---

Assignee: nichunen

> Upgrade Jacoco release
> --
>
> Key: KYLIN-3203
> URL: https://issues.apache.org/jira/browse/KYLIN-3203
> Project: Kylin
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: nichunen
>Priority: Minor
>
> Jacoco is actively maintained by the community.
> Here is the latest release:
> https://github.com/jacoco/jacoco/releases/tag/v0.8.0
> We should upgrade to the 0.8.0 release.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KYLIN-3224) data can't show when use kylin pushdown model

2018-02-01 Thread peng.jianhua (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-3224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

peng.jianhua updated KYLIN-3224:

Description: 
select * from kylin_sales

use pushdown mode, and the result shows as in 01.PNG

  was:select * from kylin_sales


> data can't show when use kylin pushdown model 
> --
>
> Key: KYLIN-3224
> URL: https://issues.apache.org/jira/browse/KYLIN-3224
> Project: Kylin
>  Issue Type: Bug
>  Components: Web 
>Affects Versions: v2.2.0, v2.3.0
>Reporter: peng.jianhua
>Assignee: peng.jianhua
>Priority: Major
> Attachments: 01.PNG
>
>
> select * from kylin_sales
> use pushdown mode, and the result shows as in 01.PNG



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KYLIN-3224) data can't show when use kylin pushdown model

2018-02-01 Thread peng.jianhua (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-3224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

peng.jianhua updated KYLIN-3224:

Attachment: 01.PNG

> data can't show when use kylin pushdown model 
> --
>
> Key: KYLIN-3224
> URL: https://issues.apache.org/jira/browse/KYLIN-3224
> Project: Kylin
>  Issue Type: Bug
>  Components: Web 
>Affects Versions: v2.2.0, v2.3.0
>Reporter: peng.jianhua
>Assignee: peng.jianhua
>Priority: Major
> Attachments: 01.PNG
>
>
> select * from kylin_sales



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KYLIN-3224) data can't show when use kylin pushdown model

2018-02-01 Thread peng.jianhua (JIRA)
peng.jianhua created KYLIN-3224:
---

 Summary: data can't show when use kylin pushdown model 
 Key: KYLIN-3224
 URL: https://issues.apache.org/jira/browse/KYLIN-3224
 Project: Kylin
  Issue Type: Bug
  Components: Web 
Affects Versions: v2.2.0, v2.3.0
Reporter: peng.jianhua
Assignee: peng.jianhua


select * from kylin_sales



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KYLIN-3197) When ldap is opened, I use an ignored case user to login, the page does not respond.

2018-02-01 Thread Peng Xing (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-3197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peng Xing updated KYLIN-3197:
-
Component/s: (was: General)

> When ldap is opened, I use an ignored case user to login, the page does not 
> respond.
> 
>
> Key: KYLIN-3197
> URL: https://issues.apache.org/jira/browse/KYLIN-3197
> Project: Kylin
>  Issue Type: Bug
>  Components: Security
>Affects Versions: v2.3.0
>Reporter: Peng Xing
>Assignee: Peng Xing
>Priority: Major
>  Labels: patch
> Attachments: 
> 0001-KYLIN-3197-When-ldap-is-opened-I-use-an-ignored-case.patch, 
> image-2018-01-25-17-22-39-970.png
>
>
> When LDAP is enabled, I configure kylin.properties and give wkhGroup the 
> admin permission.
> {code:java}
> ## Admin roles in LDAP, for ldap and saml
> kylin.security.acl.admin-role=wkhGroup
> {code}
> Then I create a new user named 'wkh' whose group is 'wkhGroup'. When I use 
> '{color:#ff}wkh{color}' to log in, everything is normal.
>  But when I use '{color:#ff}WKH{color}' to log in, the page does not 
> respond.
>  I analyzed the background code and found that the function 
> 'org.apache.kylin.rest.security.LDAPAuthoritiesPopulator.getGroupMembershipRoles(String,
>  String)' has a problem.
>  When userDn is 
> "uid={color:#ff}wkh{color},ou=People,ou=defaultCluster,dc=zdh,dc=com" and 
> username is "{color:#ff}WKH{color}", the authorities returned by the 
> following code will be null:
> {code:java}
> Set authorities = super.getGroupMembershipRoles(userDn, 
> username);
> {code}
> So I have added a 'getAdditionalRoles' function to get the authorities again.
>  I have tested the patch, please review, thanks!
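
A minimal sketch of the approach described above, assuming Spring Security's 
DefaultLdapAuthoritiesPopulator as the base class; the uid-attribute lookup is an 
illustrative detail, not the content of the attached patch.

{code:java}
import java.util.Set;

import org.springframework.ldap.core.ContextSource;
import org.springframework.ldap.core.DirContextOperations;
import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.ldap.userdetails.DefaultLdapAuthoritiesPopulator;

// Sketch only: when the login name ("WKH") differs in case from the uid in the
// directory ("wkh"), the first group lookup can come back empty; the
// getAdditionalRoles hook retries the lookup with the canonical uid taken from
// the resolved entry, so group-based roles such as wkhGroup are still found.
public class CaseInsensitiveLdapPopulatorSketch extends DefaultLdapAuthoritiesPopulator {

    public CaseInsensitiveLdapPopulatorSketch(ContextSource contextSource, String groupSearchBase) {
        super(contextSource, groupSearchBase);
    }

    @Override
    protected Set<GrantedAuthority> getAdditionalRoles(DirContextOperations user, String username) {
        // Assumption for illustration: the login name is stored in the "uid" attribute.
        String canonicalUid = user.getStringAttribute("uid");
        if (canonicalUid != null && !canonicalUid.equals(username)) {
            return getGroupMembershipRoles(user.getNameInNamespace(), canonicalUid);
        }
        return null; // no extra roles needed when the case already matches
    }
}
{code}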



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KYLIN-3199) The login dialog should be closed when ldap user with no permission login correctly

2018-02-01 Thread Peng Xing (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-3199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peng Xing updated KYLIN-3199:
-
Component/s: (was: Web )
 Security

> The login dialog should be closed when ldap user with no permission login 
> correctly
> ---
>
> Key: KYLIN-3199
> URL: https://issues.apache.org/jira/browse/KYLIN-3199
> Project: Kylin
>  Issue Type: Bug
>  Components: Security
>Affects Versions: v2.3.0
>Reporter: Peng Xing
>Assignee: Peng Xing
>Priority: Minor
>  Labels: patch
> Attachments: 
> 0001-KYLIN-3199-The-login-dialog-should-be-closed-when-ld.patch, 
> ldap_user_login.png
>
>
> 1. Enable LDAP authentication, but do not give the admin permission to group 
> 'xpGroup';
> 2. Create an LDAP user 'xp', who belongs to group 'xpGroup', so this user has 
> no permission.
> 3. When user 'xp' logs in, the top bar is shown and enabled, but the login 
> dialog is still displayed.
> 4. Then you can click any button on the top bar.
> Please refer to 'ldap_user_login.png'.
> I think the login dialog should be closed when you log in correctly, 
> redirecting to the 'Model' page even though this user has no permission.
> I have fixed this issue, please review the patch, thanks!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KYLIN-3197) When ldap is opened, I use an ignored case user to login, the page does not respond.

2018-02-01 Thread Peng Xing (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-3197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peng Xing updated KYLIN-3197:
-
Component/s: Security

> When ldap is opened, I use an ignored case user to login, the page does not 
> respond.
> 
>
> Key: KYLIN-3197
> URL: https://issues.apache.org/jira/browse/KYLIN-3197
> Project: Kylin
>  Issue Type: Bug
>  Components: General, Security
>Affects Versions: v2.3.0
>Reporter: Peng Xing
>Assignee: Peng Xing
>Priority: Major
>  Labels: patch
> Attachments: 
> 0001-KYLIN-3197-When-ldap-is-opened-I-use-an-ignored-case.patch, 
> image-2018-01-25-17-22-39-970.png
>
>
> When LDAP is enabled, I configure kylin.properties and give wkhGroup the 
> admin permission.
> {code:java}
> ## Admin roles in LDAP, for ldap and saml
> kylin.security.acl.admin-role=wkhGroup
> {code}
> Then I create a new user named 'wkh' whose group is 'wkhGroup'. When I use 
> '{color:#ff}wkh{color}' to log in, everything is normal.
>  But when I use '{color:#ff}WKH{color}' to log in, the page does not 
> respond.
>  I analyzed the background code and found that the function 
> 'org.apache.kylin.rest.security.LDAPAuthoritiesPopulator.getGroupMembershipRoles(String,
>  String)' has a problem.
>  When userDn is 
> "uid={color:#ff}wkh{color},ou=People,ou=defaultCluster,dc=zdh,dc=com" and 
> username is "{color:#ff}WKH{color}", the authorities returned by the 
> following code will be null:
> {code:java}
> Set authorities = super.getGroupMembershipRoles(userDn, 
> username);
> {code}
> So I have added a 'getAdditionalRoles' function to get the authorities again.
>  I have tested the patch, please review, thanks!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KYLIN-3223) Query for the list of hybrid cubes results in NPE

2018-02-01 Thread Vsevolod Ostapenko (JIRA)
Vsevolod Ostapenko created KYLIN-3223:
-

 Summary: Query for the list of hybrid cubes results in NPE
 Key: KYLIN-3223
 URL: https://issues.apache.org/jira/browse/KYLIN-3223
 Project: Kylin
  Issue Type: Bug
  Components: REST Service
Affects Versions: v2.2.0
 Environment: HDP 2.5.6, Kylin 2.2
Reporter: Vsevolod Ostapenko
Assignee: luguosheng


Calling the REST API to get the list of hybrid cubes returns a stack trace 
with an NPE.
{quote}curl -u ADMIN:KYLIN -X GET -H 'Content-Type: application/json'  -d {} 
[http://localhost:7070/kylin/api/hybrids]

{"code":"999","data":null,"msg":null,"stacktrace":"java.lang.NullPointerException\n\tat
 
java.util.concurrent.ConcurrentSkipListMap.doGet(ConcurrentSkipListMap.java:778)\n\tat
 
java.util.concurrent.ConcurrentSkipListMap.get(ConcurrentSkipListMap.java:1546)\n\tat
 
org.apache.kylin.metadata.cachesync.SingleValueCache.get(SingleValueCache.java:85)\n\tat
 
org.apache.kylin.metadata.project.ProjectManager.getProject(ProjectManager.java:172)\n\tat
 
org.apache.kylin.rest.util.AclEvaluate.getProjectInstance(AclEvaluate.java:39)\n\tat
 
org.apache.kylin.rest.util.AclEvaluate.checkProjectReadPermission(AclEvaluate.java:61)\n\tat
 
org.apache.kylin.rest.service.HybridService.listHybrids(HybridService.java:115)\n\tat
 
org.apache.kylin.rest.controller.HybridController.list(HybridController.java:76)\n\tat
 sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\tat 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n\tat
 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat
 java.lang.reflect.Method.invoke(Method.java:497)\n\tat 
org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205)\n\tat
 
org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:133)\n\tat
 
org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:97)\n\tat
 
org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:827)\n\tat
 
org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:738)\n\tat
 
org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:85)\n\tat
 
org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:967)\n\tat
 
org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:901)\n\tat
 
org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:970)\n\tat
 
org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:861)\n\tat
 javax.servlet.http.HttpServlet.service(HttpServlet.java:624)\n\tat 
org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:846)\n\tat
 javax.servlet.http.HttpServlet.service(HttpServlet.java:731)\n\tat 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)\n\tat
 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)\n\tat
 org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)\n\tat 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)\n\tat
 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)\n\tat
 
org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:317)\n\tat
 
org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:127)\n\tat
 
org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:91)\n\tat
 
org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)\n\tat
 
org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:114)\n\tat
 
org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)\n\tat
 
org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:137)\n\tat
 
org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)\n\tat
 
org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:111)\n\tat
 
org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)\n\tat
 

[jira] [Commented] (KYLIN-3218) KYLIN interface :cannot load models

2018-02-01 Thread Jean-Luc BELLIER (JIRA)

[ 
https://issues.apache.org/jira/browse/KYLIN-3218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348760#comment-16348760
 ] 

Jean-Luc BELLIER commented on KYLIN-3218:
-

Hello again,

 

I created the .keystore file manually and placed it in the right folder so this 
error is fixed now.

However, I still have 3 kinds of errors:
 * *Exception in thread "Thread-14" java.lang.RuntimeException: Error while 
peeking at /kylin/kylin_metadata/job_engine/global_job_engine_lock*

 * *SEVERE: The web application [/kylin] appears to have started a thread named 
[TGT Renewer for 
[bellier...@hades.rte-france.com|mailto:bellier...@hades.rte-france.com]] but 
has failed to stop it. This is very likely to create a memory leak.* This one 
appears several times

 * *SEVERE: The web application [/kylin] created a ThreadLocal with key of type 
[java.lang.ThreadLocal] (value [java.lang.ThreadLocal@10cb8259]) and a value of 
type [org.apache.kylin.rest.msg.Message] (value 
[org.apache.kylin.rest.msg.Message@267ae210]) but failed to remove it when the 
web application was stopped. Threads are going to be renewed over time to try 
and avoid a probable memory leak.* This one appears several times

 

When I look at the folder /kylin/kylin_metadata, I just have 'cardinality'. I 
do not know where the file */job_engine/global_job_engine_lock* should come 
from.

 

Any help would be greatly appreciated.

Have a good day.

 

Best regards,

Jean-Luc.

 

> KYLIN interface :cannot load models 
> 
>
> Key: KYLIN-3218
> URL: https://issues.apache.org/jira/browse/KYLIN-3218
> Project: Kylin
>  Issue Type: Bug
>  Components: Client - CLI
>Affects Versions: v2.2.0
>Reporter: Jean-Luc BELLIER
>Assignee: hongbin ma
>Priority: Major
> Attachments: kylin.log, kylin.log, kylin.out, kylin.out, 
> kylin.out.node2
>
>
> Hello,
>  
> I am trying to use the tutorial example 'sample_cube' on the KYLIN interface. 
> I installed the configuration without error. I launched the script 
> 'sample.sh', then the command 'kylin.sh start' from the bin directory.
> I can see the tables in Hive and the files in HDFS.
> Unfortunately, when I select the project 'learn_kylin' in the list, the 
> models cannot be refreshed.
> I tried to refresh the metadata from the 'System' tab without problem. But 
> 'Server Config' and 'Server Environment' are still blank. When I click on the 
> 'Reload Config' button, I get the message: 'Oops! Failed to take action'.
>  
> There may be something wrong in my config, but I cannot see what. I send in 
> attachment my 'kylin.log' and 'kylin.output' files.
>  
> Thank you in advance for your help. Have a good day.
>  
> Best regards,
> Jean-Luc.[^kylin.log]
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KYLIN-3138) cuboids on-demand build

2018-02-01 Thread Shaofeng SHI (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaofeng SHI updated KYLIN-3138:

Component/s: (was: General)

> cuboids on-demand build
> ---
>
> Key: KYLIN-3138
> URL: https://issues.apache.org/jira/browse/KYLIN-3138
> Project: Kylin
>  Issue Type: New Feature
>  Components: Job Engine, Query Engine, Spark Engine
>Affects Versions: v2.2.0, v2.3.0
>Reporter: Ruslan Dautkhanov
>Assignee: Shaofeng SHI
>Priority: Critical
>
> We just started using Kylin and quite like it so far.
> However, some of the datasets we have are too wide to even consider for OLAP 
> cubing, unless those cuboids can be built on demand.
> I know some commercial non-open-source products do this successfully. 
> The idea is to build a cuboid only when a user actually needs it. 
> So for example, our BI dashboard does a certain rollup, and then a SQL 
> query hits the Kylin backend. Kylin realizes it hasn't built that particular 
> cuboid just yet, so it immediately starts building it. Users have to wait a 
> bit longer the first time they request that combination of dimensions, but 
> all other requests, or requests of other users, will be fast from that point 
> on.
> Kylin (or any other OLAP solution) wouldn't be feasible to use on very wide 
> datasets unless this on-demand functionality is implemented. For example, 
> some datasets we have have 100-200 dimensions, and we don't know up front 
> which rollups users would want to do.
> The suggestion is to have a new dimension build rule, "lazy / on-demand". All 
> previous rules apply. This new rule type would mean a cuboid for a particular 
> set of dimensions wouldn't be built up front if it's marked as 
> "lazy / on-demand".
> Thoughts / ideas?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KYLIN-3221) Some improvements for lookup table

2018-02-01 Thread Shaofeng SHI (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-3221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaofeng SHI updated KYLIN-3221:

Component/s: (was: General)
 Query Engine
 Job Engine

> Some improvements for lookup table 
> ---
>
> Key: KYLIN-3221
> URL: https://issues.apache.org/jira/browse/KYLIN-3221
> Project: Kylin
>  Issue Type: Improvement
>  Components: Job Engine, Metadata, Query Engine
>Reporter: Ma Gang
>Assignee: Ma Gang
>Priority: Major
>
> There are two limitations in the current lookup table design:
>  # lookup table size is limited, because the table snapshot needs to be 
> cached in the Kylin server; a too-large snapshot table will break the server.
>  # lookup table snapshot references are stored in all segments of the cube, 
> so a global snapshot table cannot be supported; a global snapshot table means 
> that when the lookup table is updated, it takes effect for all segments.
> To resolve the above limitations, we decided to make some improvements to the 
> existing lookup table design. Below is the initial document; any comments and 
> suggestions are welcome.
> h2. Metadata
> Will add a new property in CubeDesc to describe how lookup tables will be 
> snapshotted; it can be defined during cube design
> |{{@JsonProperty}}{{(}}{{"snapshot_table_desc_list"}}{{)}}
> {{private}} {{List snapshotTableDescList = 
> Collections.emptyList();}}|
>  SnapshotTableDesc defines how table is stored and whether it is global or 
> not, currently we can support two types of store:
>  # "metaStore",  table snapshot is stored in the metadata store, it is the 
> same as current design, and this is the default option.
>  # "hbaseStore', table snapshot is stored in an additional hbase table.
> |{{@JsonProperty}}{{(}}{{"table_name"}}{{)}}
> {{private}} {{String tableName;}}
>  
> {{@JsonProperty}}{{(}}{{"store_type"}}{{)}}
> {{private}} {{String snapshotStorageType = }}{{"metaStore"}}{{;}}
>  
> {{@JsonProperty}}{{(}}{{"global"}}{{)}}
> {{private}} {{boolean}} {{global = }}{{false}}{{;}}|
>  
> Add 'snapshots' property in CubeInstance, to store snapshots resource path 
> for each table, when the table snapshot is set to global in cube design:
> |{{@JsonProperty}}{{(}}{{"snapshots"}}{{)}}
> {{private}} {{Map snapshots; }}{{// tableName -> 
> tableResoucePath mapping}}|
>  
> Add new meta model ExtTableSnapshot to describe the extended table snapshot 
> information, the information is stored in a new metastore path: 
> /ext_table_snapshot/\{tableName}/\{uuid}.snapshot, the metadata including 
> following info:
> |{{@JsonProperty}}{{(}}{{"tableName"}}{{)}}
> {{private}} {{String tableName;}}
>  
> {{@JsonProperty}}{{(}}{{"signature"}}{{)}}
> {{private}} {{TableSignature signature;}}
>  
> {{@JsonProperty}}{{(}}{{"storage_location_identifier"}}{{)}}
> {{private}} {{String storageLocationIdentifier;}}
>  
> {{@JsonProperty}}{{(}}{{"size"}}{{)}}
> {{private}} {{long}} {{size;}}
>  
> {{@JsonProperty}}{{(}}{{"row_cnt"}}{{)}}
> {{private}} {{long}} {{rowCnt;}}|
>  
> Add a new section in the 'Advanced Setting' tab of cube design, where the 
> user can set table snapshot properties for each table; by default it is 
> segment level and stored in the metadata store
> h2. Build
> If the user specifies the 'hbaseStore' storage type for any lookup table, a 
> MapReduce job will convert the Hive source table to HFiles and then bulk load 
> the HFiles into an HTable. So it adds two job steps to do the lookup table 
> materialization.
> h2. HBase Lookup Table Schema
> all data is stored as raw values
> suppose the lookup table has primary keys: key1,key2
> rowkey will be:
> ||2 bytes||len1 bytes||2 bytes||len2 bytes||
> |key1 value length(len1)|key1 value|key 2 value length(len2)|key2 value|
>  
> 1 column family c, multiple columns which column name is the index of the 
> column in the table definition
> |c|
> |1|2|...|
>  
> h2. Query
> For key lookup queries, directly call the HBase get API to fetch the entire 
> row according to the key.
> For queries that need to fetch keys according to the derived columns, iterate 
> all rows to get the related keys.
> For queries that only hit the lookup table, iterate all rows and let Calcite 
> do the aggregation and filtering.
> h2. Management
> For each lookup table, admin can view how many snapshots it has in Kylin, and 
> can view each snapshot type/size information and which cube/segments the 
> snapshot is referenced, the snapshot tables that have no reference can be 
> deleted.
> h2. Cleanup
> When cleaning up the metadata store, snapshots stored in HBase need to be 
> removed as well. The metadata store also needs to be cleaned up periodically 
> by a cron job.
> h2. Future
>  # Add coprocessor for lookup table, to improve the performance of lookup 
> table query, and queries that filter by derived columns.
>  # Add secondary index support for external snapshot table.

[jira] [Updated] (KYLIN-3221) Some improvements for lookup table

2018-02-01 Thread Shaofeng SHI (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-3221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaofeng SHI updated KYLIN-3221:

Component/s: Metadata

> Some improvements for lookup table 
> ---
>
> Key: KYLIN-3221
> URL: https://issues.apache.org/jira/browse/KYLIN-3221
> Project: Kylin
>  Issue Type: Improvement
>  Components: Job Engine, Metadata, Query Engine
>Reporter: Ma Gang
>Assignee: Ma Gang
>Priority: Major
>
> There are two limitations in the current lookup table design:
>  # lookup table size is limited, because the table snapshot needs to be 
> cached in the Kylin server; a too-large snapshot table will break the server.
>  # lookup table snapshot references are stored in all segments of the cube, 
> so a global snapshot table cannot be supported; a global snapshot table means 
> that when the lookup table is updated, it takes effect for all segments.
> To resolve the above limitations, we decided to make some improvements to the 
> existing lookup table design. Below is the initial document; any comments and 
> suggestions are welcome.
> h2. Metadata
> Will add a new property in CubeDesc to describe how lookup tables will be 
> snapshotted; it can be defined during cube design
> |{{@JsonProperty}}{{(}}{{"snapshot_table_desc_list"}}{{)}}
> {{private}} {{List snapshotTableDescList = 
> Collections.emptyList();}}|
>  SnapshotTableDesc defines how table is stored and whether it is global or 
> not, currently we can support two types of store:
>  # "metaStore",  table snapshot is stored in the metadata store, it is the 
> same as current design, and this is the default option.
>  # "hbaseStore', table snapshot is stored in an additional hbase table.
> |{{@JsonProperty}}{{(}}{{"table_name"}}{{)}}
> {{private}} {{String tableName;}}
>  
> {{@JsonProperty}}{{(}}{{"store_type"}}{{)}}
> {{private}} {{String snapshotStorageType = }}{{"metaStore"}}{{;}}
>  
> {{@JsonProperty}}{{(}}{{"global"}}{{)}}
> {{private}} {{boolean}} {{global = }}{{false}}{{;}}|
>  
> Add 'snapshots' property in CubeInstance, to store snapshots resource path 
> for each table, when the table snapshot is set to global in cube design:
> |{{@JsonProperty}}{{(}}{{"snapshots"}}{{)}}
> {{private}} {{Map snapshots; }}{{// tableName -> 
> tableResoucePath mapping}}|
>  
> Add new meta model ExtTableSnapshot to describe the extended table snapshot 
> information, the information is stored in a new metastore path: 
> /ext_table_snapshot/\{tableName}/\{uuid}.snapshot, the metadata including 
> following info:
> |{{@JsonProperty}}{{(}}{{"tableName"}}{{)}}
> {{private}} {{String tableName;}}
>  
> {{@JsonProperty}}{{(}}{{"signature"}}{{)}}
> {{private}} {{TableSignature signature;}}
>  
> {{@JsonProperty}}{{(}}{{"storage_location_identifier"}}{{)}}
> {{private}} {{String storageLocationIdentifier;}}
>  
> {{@JsonProperty}}{{(}}{{"size"}}{{)}}
> {{private}} {{long}} {{size;}}
>  
> {{@JsonProperty}}{{(}}{{"row_cnt"}}{{)}}
> {{private}} {{long}} {{rowCnt;}}|
>  
> Add a new section in the 'Advanced Setting' tab of cube design, where the 
> user can set table snapshot properties for each table; by default it is 
> segment level and stored in the metadata store
> h2. Build
> If the user specifies the 'hbaseStore' storage type for any lookup table, a 
> MapReduce job will convert the Hive source table to HFiles and then bulk load 
> the HFiles into an HTable. So it adds two job steps to do the lookup table 
> materialization.
> h2. HBase Lookup Table Schema
> all data is stored as raw values
> suppose the lookup table has primary keys: key1,key2
> rowkey will be:
> ||2 bytes||len1 bytes||2 bytes||len2 bytes||
> |key1 value length(len1)|key1 value|key 2 value length(len2)|key2 value|
>  
> 1 column family c, multiple columns which column name is the index of the 
> column in the table definition
> |c|
> |1|2|...|
>  
> h2. Query
> For key lookup queries, directly call the HBase get API to fetch the entire 
> row according to the key.
> For queries that need to fetch keys according to the derived columns, iterate 
> all rows to get the related keys.
> For queries that only hit the lookup table, iterate all rows and let Calcite 
> do the aggregation and filtering.
> h2. Management
> For each lookup table, admin can view how many snapshots it has in Kylin, and 
> can view each snapshot type/size information and which cube/segments the 
> snapshot is referenced, the snapshot tables that have no reference can be 
> deleted.
> h2. Cleanup
> When cleaning up the metadata store, snapshots stored in HBase need to be 
> removed as well. The metadata store also needs to be cleaned up periodically 
> by a cron job.
> h2. Future
>  # Add coprocessor for lookup table, to improve the performance of lookup 
> table query, and queries that filter by derived columns.
>  # Add secondary index support for external snapshot table.
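
The rowkey layout in the table above can be made concrete with a small sketch; this is 
an illustrative encoding only, assuming the key values have already been serialized to 
bytes.

{code:java}
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Illustrative encoding of the proposed rowkey layout: for primary keys
// key1,key2 the rowkey is [2-byte len1][key1 bytes][2-byte len2][key2 bytes].
public class LookupRowKeySketch {

    static byte[] encodeRowKey(byte[]... keyValues) {
        int total = 0;
        for (byte[] v : keyValues) {
            total += 2 + v.length;          // 2-byte length prefix per key value
        }
        ByteBuffer buf = ByteBuffer.allocate(total);
        for (byte[] v : keyValues) {
            buf.putShort((short) v.length); // length, then the raw value
            buf.put(v);
        }
        return buf.array();
    }

    public static void main(String[] args) {
        byte[] rowKey = encodeRowKey(
                "2012-01-01".getBytes(StandardCharsets.UTF_8),
                "US".getBytes(StandardCharsets.UTF_8));
        System.out.println(rowKey.length); // 2 + 10 + 2 + 2 = 16
    }
}
{code}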



--
This message was sent by Atlassian 

[jira] [Updated] (KYLIN-3214) Initialize ExternalAclProvider when starting kylin

2018-02-01 Thread Shaofeng SHI (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-3214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaofeng SHI updated KYLIN-3214:

Component/s: (was: General)
 Security

> Initialize ExternalAclProvider when starting kylin
> --
>
> Key: KYLIN-3214
> URL: https://issues.apache.org/jira/browse/KYLIN-3214
> Project: Kylin
>  Issue Type: Improvement
>  Components: Security
>Affects Versions: v2.2.0
>Reporter: peng.jianhua
>Assignee: peng.jianhua
>Priority: Minor
> Attachments: 0001-KYLIN-3214.patch
>
>
> Currently, ExternalAclProvider is initialized only when an ACL-related API is 
> called.
> When managing ACLs through Ranger, Ranger cannot get the status of the 
> ExternalAclProvider in time, because ExternalAclProvider is not initialized 
> when Kylin starts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KYLIN-2703) Manage ACL through Apache Ranger

2018-02-01 Thread Shaofeng SHI (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-2703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaofeng SHI updated KYLIN-2703:

Component/s: (was: General)
 Security

> Manage ACL through Apache Ranger
> 
>
> Key: KYLIN-2703
> URL: https://issues.apache.org/jira/browse/KYLIN-2703
> Project: Kylin
>  Issue Type: New Feature
>  Components: Security
>Reporter: peng.jianhua
>Assignee: peng.jianhua
>Priority: Major
>  Labels: newbie, patch, scope
> Fix For: v2.2.0
>
> Attachments: 
> 0001-KYLIN-2703-kylin-supports-managing-access-rights-for.patch, 
> KylinAuditLog.jpg, KylinPlugins.jpg, KylinPolicies.jpg, 
> KylinServiceEntry.jpg, NewKylinPolicy.jpg, NewKylinService.jpg, 
> Ranger-PMS-hope.png
>
>
> Ranger is a framework to enable, monitor and manage comprehensive data 
> security across the Hadoop platform. Apache Ranger has the following goals:
> 1. Centralized security administration to manage all security related tasks 
> in a central UI or using REST APIs.
> 2. Fine grained authorization to do a specific action and/or operation with 
> Hadoop component/tool and managed through a central administration tool
> 3. Standardize authorization method across all Hadoop components.
> 4. Enhanced support for different authorization methods - Role based access 
> control, attribute based access control etc.
> 5. Centralize auditing of user access and administrative actions (security 
> related) within all the components of Hadoop.
> Ranger already supports enabling, monitoring and managing the following components:
> 1. HDFS
> 2. HIVE
> 3. HBASE
> 4. KNOX
> 5. YARN
> 6. STORM
> 7. SOLR
> 8. KAFKA
> 9. ATLAS
> In order to improve the flexibility of Kylin privilege control and enhance 
> the value of Kylin in the Apache Hadoop ecosystem, like HDFS, YARN, Hive and 
> HBase, Kylin should also support using Ranger to control access rights for 
> projects and cubes. 
> The specific implementation plan is as follows:
> In the Ranger admin UI, administrators can configure policies to control user 
> access permissions to projects and cubes.
> Kylin provides an abstract class and authorization interfaces for use by the 
> Ranger plugin. Kylin instantiates the Ranger plugin's implementation class 
> when starting (this class extends the abstract class provided by Kylin).
> The Ranger plugin periodically polls Ranger admin, updates the policies 
> locally, and updates project and cube access rights based on the policy 
> information.
> On the Kylin side:
> 1. Kylin provides an abstract class for the Ranger plugin's implementation 
> class to extend.
> 2. Add configuration items: 1) a Ranger authorization switch, 2) the Ranger 
> plugin implementation class's name.
> 3. Instantiate the Ranger plugin implementation class when starting Kylin.
> 4. Kylin provides authorization interfaces for the Ranger plugin to call.
> 5. According to the Ranger authorization configuration item, hide Kylin's 
> authorization management page.
> 6. Using Ranger to manage Kylin's access rights does not affect Kylin's 
> existing permission functions and logic.
> On the Ranger side:
> 1. The Ranger plugin periodically polls Ranger admin and updates the policies 
> locally.
> 2. The Ranger plugin invokes the authorization interfaces provided by Kylin 
> to update the project and cube access rights based on the policy information.
> reference link:https://issues.apache.org/jira/browse/RANGER-1672
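A rough Java sketch of the plan above, only to make the moving parts concrete; every class and method name below is an assumption for illustration and not the actual Kylin or Ranger API.

{code}
// Illustrative only -- the names are assumptions, not the real Kylin/Ranger classes.

// Kylin side: an abstract authorizer that an external plugin can extend.
abstract class ExternalAclAuthorizer {
    // Called once while Kylin starts.
    public void init() {}

    // Called by Kylin to decide whether a user may perform an action on a project or cube.
    public abstract boolean checkPermission(String user, String entity, String action);
}

// Ranger side: a plugin implementation that keeps a locally cached copy of the policies.
class RangerKylinAuthorizer extends ExternalAclAuthorizer {

    private volatile java.util.Map<String, String> cachedPolicies = new java.util.HashMap<>();

    @Override
    public void init() {
        // In the real plugin, a background thread would periodically poll Ranger admin
        // and replace cachedPolicies with the latest policy set.
    }

    @Override
    public boolean checkPermission(String user, String entity, String action) {
        // Decisions are made against the locally cached policies,
        // not by calling Ranger admin on every request.
        return cachedPolicies.containsKey(user + ":" + entity + ":" + action);
    }
}
{code}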



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (KYLIN-2703) Manage ACL through Apache Ranger

2018-02-01 Thread Shaofeng SHI (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-2703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaofeng SHI reopened KYLIN-2703:
-

> Manage ACL through Apache Ranger
> 
>
> Key: KYLIN-2703
> URL: https://issues.apache.org/jira/browse/KYLIN-2703
> Project: Kylin
>  Issue Type: New Feature
>  Components: Security
>Reporter: peng.jianhua
>Assignee: peng.jianhua
>Priority: Major
>  Labels: newbie, patch, scope
> Fix For: v2.2.0
>
> Attachments: 
> 0001-KYLIN-2703-kylin-supports-managing-access-rights-for.patch, 
> KylinAuditLog.jpg, KylinPlugins.jpg, KylinPolicies.jpg, 
> KylinServiceEntry.jpg, NewKylinPolicy.jpg, NewKylinService.jpg, 
> Ranger-PMS-hope.png
>
>
> Ranger is a framework to enable, monitor and manage comprehensive data 
> security across the Hadoop platform. Apache Ranger has the following goals:
> 1. Centralized security administration to manage all security related tasks 
> in a central UI or using REST APIs.
> 2. Fine grained authorization to do a specific action and/or operation with 
> Hadoop component/tool and managed through a central administration tool
> 3. Standardize authorization method across all Hadoop components.
> 4. Enhanced support for different authorization methods - Role based access 
> control, attribute based access control etc.
> 5. Centralize auditing of user access and administrative actions (security 
> related) within all the components of Hadoop.
> Ranger already supports enabling, monitoring and managing the following 
> components:
> 1. HDFS
> 2. HIVE
> 3. HBASE
> 4. KNOX
> 5. YARN
> 6. STORM
> 7. SOLR
> 8. KAFKA
> 9. ATLAS
> To improve the flexibility of Kylin's privilege control and to enhance Kylin's 
> value in the Apache Hadoop ecosystem, Kylin should, like HDFS, YARN, Hive and 
> HBase, also support using Ranger to control access rights for projects and 
> cubes.
> The specific implementation plan is as follows:
> In the Ranger web UI, administrators configure policies that control user 
> access rights to projects and cubes.
> Kylin provides an abstract class and authorization interfaces for the Ranger 
> plugin to use. Kylin instantiates the Ranger plugin's implementation class 
> (which extends the abstract class provided by Kylin) at startup.
> The Ranger plugin periodically polls Ranger admin, caches the policies 
> locally, and updates project and cube access rights based on the policy 
> information.
> On the Kylin side:
> 1. Kylin provides an abstract class for the Ranger plugin's implementation 
> class to extend.
> 2. Add configuration items: 1) a Ranger authorization switch, 2) the name of 
> the Ranger plugin implementation class.
> 3. Instantiate the Ranger plugin implementation class when Kylin starts.
> 4. Kylin provides authorization interfaces for the Ranger plugin to call.
> 5. Depending on the Ranger authorization configuration item, hide Kylin's own 
> authorization management page.
> 6. Managing access rights through Ranger does not affect Kylin's existing 
> permission functions and logic.
> On the Ranger side:
> 1. The Ranger plugin periodically polls Ranger admin and caches the policies 
> locally.
> 2. The Ranger plugin invokes the authorization interfaces provided by Kylin to 
> update project and cube access rights based on the policy information.
> Reference link: https://issues.apache.org/jira/browse/RANGER-1672



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (KYLIN-2703) Manage ACL through Apache Ranger

2018-02-01 Thread Shaofeng SHI (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-2703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaofeng SHI closed KYLIN-2703.
---
Resolution: Fixed

> Manage ACL through Apache Ranger
> 
>
> Key: KYLIN-2703
> URL: https://issues.apache.org/jira/browse/KYLIN-2703
> Project: Kylin
>  Issue Type: New Feature
>  Components: Security
>Reporter: peng.jianhua
>Assignee: peng.jianhua
>Priority: Major
>  Labels: newbie, patch, scope
> Fix For: v2.2.0
>
> Attachments: 
> 0001-KYLIN-2703-kylin-supports-managing-access-rights-for.patch, 
> KylinAuditLog.jpg, KylinPlugins.jpg, KylinPolicies.jpg, 
> KylinServiceEntry.jpg, NewKylinPolicy.jpg, NewKylinService.jpg, 
> Ranger-PMS-hope.png
>
>
> Ranger is a framework to enable, monitor and manage comprehensive data 
> security across the Hadoop platform. Apache Ranger has the following goals:
> 1. Centralized security administration to manage all security related tasks 
> in a central UI or using REST APIs.
> 2. Fine grained authorization to do a specific action and/or operation with 
> Hadoop component/tool and managed through a central administration tool
> 3. Standardize authorization method across all Hadoop components.
> 4. Enhanced support for different authorization methods - Role based access 
> control, attribute based access control etc.
> 5. Centralize auditing of user access and administrative actions (security 
> related) within all the components of Hadoop.
> Ranger already supports enabling, monitoring and managing the following 
> components:
> 1. HDFS
> 2. HIVE
> 3. HBASE
> 4. KNOX
> 5. YARN
> 6. STORM
> 7. SOLR
> 8. KAFKA
> 9. ATLAS
> To improve the flexibility of Kylin's privilege control and to enhance Kylin's 
> value in the Apache Hadoop ecosystem, Kylin should, like HDFS, YARN, Hive and 
> HBase, also support using Ranger to control access rights for projects and 
> cubes.
> The specific implementation plan is as follows:
> In the Ranger web UI, administrators configure policies that control user 
> access rights to projects and cubes.
> Kylin provides an abstract class and authorization interfaces for the Ranger 
> plugin to use. Kylin instantiates the Ranger plugin's implementation class 
> (which extends the abstract class provided by Kylin) at startup.
> The Ranger plugin periodically polls Ranger admin, caches the policies 
> locally, and updates project and cube access rights based on the policy 
> information.
> On the Kylin side:
> 1. Kylin provides an abstract class for the Ranger plugin's implementation 
> class to extend.
> 2. Add configuration items: 1) a Ranger authorization switch, 2) the name of 
> the Ranger plugin implementation class.
> 3. Instantiate the Ranger plugin implementation class when Kylin starts.
> 4. Kylin provides authorization interfaces for the Ranger plugin to call.
> 5. Depending on the Ranger authorization configuration item, hide Kylin's own 
> authorization management page.
> 6. Managing access rights through Ranger does not affect Kylin's existing 
> permission functions and logic.
> On the Ranger side:
> 1. The Ranger plugin periodically polls Ranger admin and caches the policies 
> locally.
> 2. The Ranger plugin invokes the authorization interfaces provided by Kylin to 
> update project and cube access rights based on the policy information.
> Reference link: https://issues.apache.org/jira/browse/RANGER-1672



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KYLIN-2956) building trie dictionary blocked on value of length over 4095

2018-02-01 Thread Shaofeng SHI (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-2956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaofeng SHI updated KYLIN-2956:

Component/s: (was: General)
 Job Engine

> building trie dictionary blocked on value of length over 4095 
> --
>
> Key: KYLIN-2956
> URL: https://issues.apache.org/jira/browse/KYLIN-2956
> Project: Kylin
>  Issue Type: Bug
>  Components: Job Engine
>Reporter: Wang, Gang
>Assignee: Wang, Gang
>Priority: Major
> Fix For: v2.3.0
>
> Attachments: 
> 0001-KYLIN-2956-building-trie-dictionary-blocked-on-value.patch
>
>
> In the new release, Kylin will check the value length when building trie 
> dictionary, in class TrieDictionaryBuilder method buildTrieBytes, through 
> method:
> private void positiveShortPreCheck(int i, String fieldName) {
> if (!BytesUtil.isPositiveShort(i)) {
> throw new IllegalStateException(fieldName + " is not positive short, 
> usually caused by too long dict value.");
> }
> }
> public static boolean isPositiveShort(int i) {
> return (i & 0x7000) == 0;
> }
> And 0x7000 in binary is 0111 0000 0000 0000, so the value length must be less 
> than 0001 0000 0000 0000, i.e. at most 4095 in decimal.
> I wonder why it is 0x7000. Should 0x8000 (1000 0000 0000 0000), supporting a 
> max length of 0111 1111 1111 1111 (32767), be what you want?
> Or, as 32767 may be too large, I would prefer 0xE000 (1110 0000 0000 0000), 
> supporting a max length of 0001 1111 1111 1111 (8191).
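To make the arithmetic concrete, here is a small, self-contained check of the three masks discussed above (an illustration of the bound each mask implies, not the actual Kylin TrieDictionaryBuilder/BytesUtil code):

{code}
// A length passes when (len & mask) == 0, so the contiguous range of accepted
// lengths starting from 0 ends just below the lowest set bit of the mask.
public class MaskDemo {
    static boolean passes(int len, int mask) { return (len & mask) == 0; }

    public static void main(String[] args) {
        System.out.println(passes(4095, 0x7000));  // true  -> 4095 is accepted
        System.out.println(passes(4096, 0x7000));  // false -> lengths over 4095 are rejected
        System.out.println(passes(32767, 0x8000)); // true  -> 0x8000 would allow up to 32767
        System.out.println(passes(8191, 0xE000));  // true  -> 0xE000 would allow up to 8191
        System.out.println(passes(8192, 0xE000));  // false
    }
}
{code}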



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KYLIN-3112) The job 'Pause' operation has logic bug in the kylin server.

2018-02-01 Thread Shaofeng SHI (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-3112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaofeng SHI updated KYLIN-3112:

Component/s: (was: General)
 Job Engine

> The job 'Pause' operation has logic bug in the kylin server.
> 
>
> Key: KYLIN-3112
> URL: https://issues.apache.org/jira/browse/KYLIN-3112
> Project: Kylin
>  Issue Type: Bug
>  Components: Job Engine
>Affects Versions: v2.2.0
>Reporter: peng.jianhua
>Assignee: peng.jianhua
>Priority: Minor
>  Labels: patch
> Fix For: v2.3.0
>
> Attachments: 
> 0001-KYLIN-3112-The-job-Pause-operation-has-logic-bug-in-.patch, 
> job_click_pause.png, job_click_pause_again.png
>
>
> 1. Click the 'Pause' item. Although the operation succeeds, when you expand 
> the action menu you still see the 'Pause' item. Refer to 
> [^job_click_pause.png]
> 2. Click the 'Pause' item again (the second click). After this succeeds and 
> you expand the action menu, the 'Pause' item has changed to 'Resume'. Refer 
> to [^job_click_pause_again.png]
> I checked the job pause logic in the Kylin server and found that the job 
> status is changed to the new status in the metadata storage, but not in the 
> JobInstance that is sent back to the client, so the client gets the old job 
> status.
> When you click the 'Pause' item again, the JobInstance is fetched from the 
> metadata storage, so it carries the new status, and the client gets the new 
> job status.
> So I fixed it; please check the patch, thanks!
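A minimal, self-contained sketch of the behaviour described above (all class and method names are illustrative assumptions, not Kylin's actual job service API): the status must also be refreshed on the object returned to the client, not only in the metadata store.

{code}
import java.util.HashMap;
import java.util.Map;

public class PauseStatusDemo {
    enum Status { RUNNING, STOPPED }

    // Stands in for the metadata storage that keeps the authoritative job status.
    static final Map<String, Status> metadataStore = new HashMap<>();

    // Buggy variant: persists the new status but returns a stale snapshot.
    static Status pauseStale(String jobId, Status staleSnapshot) {
        metadataStore.put(jobId, Status.STOPPED);
        return staleSnapshot;                // client still sees RUNNING -> shows 'Pause' again
    }

    // Fixed variant: re-reads the status before answering the client.
    static Status pauseFixed(String jobId) {
        metadataStore.put(jobId, Status.STOPPED);
        return metadataStore.get(jobId);     // client sees STOPPED -> shows 'Resume'
    }

    public static void main(String[] args) {
        metadataStore.put("job-1", Status.RUNNING);
        System.out.println(pauseStale("job-1", Status.RUNNING)); // RUNNING
        System.out.println(pauseFixed("job-1"));                 // STOPPED
    }
}
{code}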



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KYLIN-3104) When the user log out from "Monitor" page, an alert dialog will pop up warning "Failed to load query."

2018-02-01 Thread Shaofeng SHI (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-3104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaofeng SHI updated KYLIN-3104:

   Priority: Minor  (was: Major)
Component/s: (was: General)

> When the user log out from "Monitor" page, an alert dialog will pop up 
> warning "Failed to load query."
> --
>
> Key: KYLIN-3104
> URL: https://issues.apache.org/jira/browse/KYLIN-3104
> Project: Kylin
>  Issue Type: Bug
>  Components: Web 
>Affects Versions: v2.3.0
>Reporter: peng.jianhua
>Assignee: peng.jianhua
>Priority: Minor
> Fix For: v2.3.0
>
> Attachments: 
> 0001-KYLIN-3104-When-the-user-log-out-from-Monitor-page-a.patch, 
> alert_dialog_will_pop_up_when_log_out_from_Monitor_page.PNG
>
>
> When the user logs out from the "Monitor" page, an alert dialog pops up with 
> the warning "Failed to load query."



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KYLIN-3107) An alert dialog will pop up warning "Failed to load bar chat" when the user enter or log out from the "Dashboard" page.

2018-02-01 Thread Shaofeng SHI (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-3107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaofeng SHI updated KYLIN-3107:

   Priority: Minor  (was: Major)
Component/s: (was: General)

> An alert dialog will pop up warning "Failed to load bar chat" when the user 
> enter or log out from the "Dashboard" page.
> ---
>
> Key: KYLIN-3107
> URL: https://issues.apache.org/jira/browse/KYLIN-3107
> Project: Kylin
>  Issue Type: Bug
>  Components: Web 
>Affects Versions: v2.3.0
>Reporter: peng.jianhua
>Assignee: peng.jianhua
>Priority: Minor
> Attachments: alert_dialog_pop_up_when_enter_Dashboard_page.PNG, 
> alert_dialog_pop_up_when_log_out_from_Dashboard_page.PNG
>
>
> An alert dialog pops up with the warning "Failed to load bar chat" when the 
> user enters or logs out from the "Dashboard" page.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (KYLIN-1867) Upgrade dependency libraries

2018-02-01 Thread Shaofeng SHI (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-1867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaofeng SHI closed KYLIN-1867.
---
Resolution: Fixed

> Upgrade dependency libraries
> 
>
> Key: KYLIN-1867
> URL: https://issues.apache.org/jira/browse/KYLIN-1867
> Project: Kylin
>  Issue Type: Improvement
>  Components: Tools, Build and Test
>Affects Versions: v1.5.2
>Reporter: Billy Liu
>Assignee: Billy Liu
>Priority: Minor
> Fix For: v1.5.4
>
> Attachments: KYLIN-1867.patch
>
>
> Currently, Kylin has 167 unique dependencies, but 109 of them are outdated and 
> some even have security vulnerabilities. The detailed report can be found at 
> https://www.versioneye.com/user/projects/577fcf5a5bb139003969db09?child=summary
> Without changing the APIs or breaking Hadoop ecosystem version compatibility, 
> it would be better if Kylin caught up with the newest dependency releases. 
> I could work on this item, check the compatibility and make the tests pass.
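For reference, one way to produce a similar outdated-dependency report locally (a suggestion only; the issue itself relies on the VersionEye report linked above) is the Maven Versions plugin:

{code}
# Run from the Kylin source root; lists newer releases available for each dependency.
mvn versions:display-dependency-updates

# Same check for the Maven plugins used by the build.
mvn versions:display-plugin-updates
{code}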



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KYLIN-1867) Upgrade dependency libraries

2018-02-01 Thread Shaofeng SHI (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-1867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaofeng SHI updated KYLIN-1867:

Component/s: (was: 3rd Party)

> Upgrade dependency libraries
> 
>
> Key: KYLIN-1867
> URL: https://issues.apache.org/jira/browse/KYLIN-1867
> Project: Kylin
>  Issue Type: Improvement
>  Components: Tools, Build and Test
>Affects Versions: v1.5.2
>Reporter: Billy Liu
>Assignee: Billy Liu
>Priority: Minor
> Fix For: v1.5.4
>
> Attachments: KYLIN-1867.patch
>
>
> Currently, Kylin has 167 unique dependencies, but 109 of them are outdated and 
> some even have security vulnerabilities. The detailed report can be found at 
> https://www.versioneye.com/user/projects/577fcf5a5bb139003969db09?child=summary
> Without changing the APIs or breaking Hadoop ecosystem version compatibility, 
> it would be better if Kylin caught up with the newest dependency releases. 
> I could work on this item, check the compatibility and make the tests pass.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (KYLIN-1867) Upgrade dependency libraries

2018-02-01 Thread Shaofeng SHI (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-1867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaofeng SHI reopened KYLIN-1867:
-

> Upgrade dependency libraries
> 
>
> Key: KYLIN-1867
> URL: https://issues.apache.org/jira/browse/KYLIN-1867
> Project: Kylin
>  Issue Type: Improvement
>  Components: 3rd Party, Tools, Build and Test
>Affects Versions: v1.5.2
>Reporter: Billy Liu
>Assignee: Billy Liu
>Priority: Minor
> Fix For: v1.5.4
>
> Attachments: KYLIN-1867.patch
>
>
> Currently, Kylin has 167 unique dependencies, but 109 of them are outdated and 
> some even have security vulnerabilities. The detailed report can be found at 
> https://www.versioneye.com/user/projects/577fcf5a5bb139003969db09?child=summary
> Without changing the APIs or breaking Hadoop ecosystem version compatibility, 
> it would be better if Kylin caught up with the newest dependency releases. 
> I could work on this item, check the compatibility and make the tests pass.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (KYLIN-1326) Changes to support KMeans with large feature space

2018-02-01 Thread Shaofeng SHI (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-1326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaofeng SHI reopened KYLIN-1326:
-

> Changes to support KMeans with large feature space
> --
>
> Key: KYLIN-1326
> URL: https://issues.apache.org/jira/browse/KYLIN-1326
> Project: Kylin
>  Issue Type: Improvement
>Reporter: Roy Levin
>Priority: Major
>
> The problem:
> -
> In Spark's KMeans code the center vectors are always represented as dense 
> vectors. As a result, when each such center has a large domain space the 
> algorithm quickly runs out of memory. In my example I have a feature space of 
> around 50000 and k ~= 500. This sums up to around 200MB RAM for the center 
> vectors alone while in fact the center vectors are very sparse and require a 
> lot less RAM.
> Since I am running on a system with relatively low resources I keep getting 
> OutOfMemory errors. In my setting it is OK to trade off runtime for using 
> less RAM. This is what I set out to do in my solution while allowing users 
> the flexibility to choose.
> One solution could be to reduce the dimensions of the feature space but this 
> is not always the best approach. For example, when the object space is 
> comprised of users and the feature space of items. In such an example we may 
> want to run kmeans over a feature space which is a function of how many times 
> user i clicked item j. If we reduce the dimensions of the items we will not 
> be able to map the centers vectors back to the items. Moreover in a streaming 
> context detecting the changes WRT previous runs gets more difficult.
> My solution:
> 
> Allow the kmeans algorithm to accept a VectorFactory which decides when 
> vectors used inside the algorithm should be sparse and when they should be 
> dense. For backward compatibility the default behavior is to always make them 
> dense (like the situation is now). But now potentially the user can provide a 
> SmartVectorFactory (or some proprietary VectorFactory) which can decide to 
> make vectors sparse.
> For this I made the following changes:
> (1) Added a method called reassign to SparseVectors allowing to change the 
> indices and values
> (2) Allow axpy to accept SparseVectors
> (3) create a trait called VectorFactory and two implementations for it that 
> are used within KMeans code
> To get the above described solution do the following:
> git clone https://github.com/levin-royl/spark.git -b 
> SupportLargeFeatureDomains
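A rough Java sketch of the VectorFactory idea described above (the actual change is in Spark MLlib's Scala code; the interfaces and class names here are illustrative assumptions only): the factory decides per call whether a center vector is materialized densely or sparsely, so memory can be traded against lookup speed.

{code}
// Illustrative Java translation of the proposed design -- not Spark's API.
interface Vector {
    double get(int i);
    int size();
}

interface VectorFactory {
    // Build a vector of the given size from (index, value) pairs.
    Vector create(int size, int[] indices, double[] values);
}

class DenseVectorFactory implements VectorFactory {
    public Vector create(int size, int[] indices, double[] values) {
        double[] data = new double[size];   // O(size) memory, fast random access
        for (int k = 0; k < indices.length; k++) data[indices[k]] = values[k];
        return new Vector() {
            public double get(int i) { return data[i]; }
            public int size() { return size; }
        };
    }
}

class SparseVectorFactory implements VectorFactory {
    public Vector create(int size, int[] indices, double[] values) {
        // Only the non-zeros are kept: O(nnz) memory, slower lookups.
        return new Vector() {
            public double get(int i) {
                for (int k = 0; k < indices.length; k++) if (indices[k] == i) return values[k];
                return 0.0;
            }
            public int size() { return size; }
        };
    }
}
{code}

With k ~= 500 centers over roughly 50000 features, the dense factory needs about 500 x 50000 x 8 bytes, i.e. about 200MB, just for the centers, which matches the figure in the description, while the sparse factory scales with the number of non-zero entries.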



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KYLIN-1326) Changes to support KMeans with large feature space

2018-02-01 Thread Shaofeng SHI (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-1326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaofeng SHI updated KYLIN-1326:

Component/s: (was: 3rd Party)

> Changes to support KMeans with large feature space
> --
>
> Key: KYLIN-1326
> URL: https://issues.apache.org/jira/browse/KYLIN-1326
> Project: Kylin
>  Issue Type: Improvement
>Reporter: Roy Levin
>Priority: Major
>
> The problem:
> -
> In Spark's KMeans code the center vectors are always represented as dense 
> vectors. As a result, when each such center has a large domain space the 
> algorithm quickly runs out of memory. In my example I have a feature space of 
> around 50000 and k ~= 500. This sums up to around 200MB RAM for the center 
> vectors alone while in fact the center vectors are very sparse and require a 
> lot less RAM.
> Since I am running on a system with relatively low resources I keep getting 
> OutOfMemory errors. In my setting it is OK to trade off runtime for using 
> less RAM. This is what I set out to do in my solution while allowing users 
> the flexibility to choose.
> One solution could be to reduce the dimensions of the feature space but this 
> is not always the best approach. For example, when the object space is 
> comprised of users and the feature space of items. In such an example we may 
> want to run kmeans over a feature space which is a function of how many times 
> user i clicked item j. If we reduce the dimensions of the items we will not 
> be able to map the centers vectors back to the items. Moreover in a streaming 
> context detecting the changes WRT previous runs gets more difficult.
> My solution:
> 
> Allow the kmeans algorithm to accept a VectorFactory which decides when 
> vectors used inside the algorithm should be sparse and when they should be 
> dense. For backward compatibility the default behavior is to always make them 
> dense (like the situation is now). But now potentially the user can provide a 
> SmartVectorFactory (or some proprietary VectorFactory) which can decide to 
> make vectors sparse.
> For this I made the following changes:
> (1) Added a method called reassign to SparseVectors allowing to change the 
> indices and values
> (2) Allow axpy to accept SparseVectors
> (3) create a trait called VectorFactory and two implementations for it that 
> are used within KMeans code
> To get the above described solution do the following:
> git clone https://github.com/levin-royl/spark.git -b 
> SupportLargeFeatureDomains



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (KYLIN-1326) Changes to support KMeans with large feature space

2018-02-01 Thread Shaofeng SHI (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-1326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaofeng SHI closed KYLIN-1326.
---
Resolution: Fixed

> Changes to support KMeans with large feature space
> --
>
> Key: KYLIN-1326
> URL: https://issues.apache.org/jira/browse/KYLIN-1326
> Project: Kylin
>  Issue Type: Improvement
>Reporter: Roy Levin
>Priority: Major
>
> The problem:
> -
> In Spark's KMeans code the center vectors are always represented as dense 
> vectors. As a result, when each such center has a large domain space the 
> algorithm quickly runs out of memory. In my example I have a feature space of 
> around 50000 and k ~= 500. This sums up to around 200MB RAM for the center 
> vectors alone while in fact the center vectors are very sparse and require a 
> lot less RAM.
> Since I am running on a system with relatively low resources I keep getting 
> OutOfMemory errors. In my setting it is OK to trade off runtime for using 
> less RAM. This is what I set out to do in my solution while allowing users 
> the flexibility to choose.
> One solution could be to reduce the dimensions of the feature space but this 
> is not always the best approach. For example, when the object space is 
> comprised of users and the feature space of items. In such an example we may 
> want to run kmeans over a feature space which is a function of how many times 
> user i clicked item j. If we reduce the dimensions of the items we will not 
> be able to map the centers vectors back to the items. Moreover in a streaming 
> context detecting the changes WRT previous runs gets more difficult.
> My solution:
> 
> Allow the kmeans algorithm to accept a VectorFactory which decides when 
> vectors used inside the algorithm should be sparse and when they should be 
> dense. For backward compatibility the default behavior is to always make them 
> dense (like the situation is now). But now potentially the user can provide a 
> SmartVectorFactory (or some proprietary VectorFactory) which can decide to 
> make vectors sparse.
> For this I made the following changes:
> (1) Added a method called reassign to SparseVectors allowing to change the 
> indices and values
> (2) Allow axpy to accept SparseVectors
> (3) create a trait called VectorFactory and two implementations for it that 
> are used within KMeans code
> To get the above described solution do the following:
> git clone https://github.com/levin-royl/spark.git -b 
> SupportLargeFeatureDomains



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KYLIN-2321) Message: Error while executing SQL "select * from root.id_card LIMIT 50000": AppendTrieDictionary can't retrive value from id

2018-02-01 Thread Shaofeng SHI (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-2321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaofeng SHI updated KYLIN-2321:

Component/s: (was: Job Engine)
 (was: 3rd Party)
 Query Engine

> Message: Error while executing SQL "select * from root.id_card LIMIT 50000": 
> AppendTrieDictionary can't retrive value from id
> -
>
> Key: KYLIN-2321
> URL: https://issues.apache.org/jira/browse/KYLIN-2321
> Project: Kylin
>  Issue Type: Bug
>  Components: Query Engine
>Affects Versions: v1.6.0
> Environment: Message: Error while executing SQL "select * from 
> root.id_card LIMIT 50000": AppendTrieDictionary can't retrive value from id
>Reporter: konglei
>Assignee: Dong Li
>Priority: Major
>
> Message: Error while executing SQL "select * from root.id_card LIMIT 50000": 
> AppendTrieDictionary can't retrive value from id
> ==[QUERY]===
> 2016-12-26 16:59:16,553 ERROR [http-bio-7070-exec-10] 
> controller.BasicController:44 : 
> org.apache.kylin.rest.exception.InternalErrorException: Error while executing 
> SQL "select * from root.id_card LIMIT 5": AppendTrieDictionary can't 
> retrive value from id
> at 
> org.apache.kylin.rest.service.QueryService.doQueryWithCache(QueryService.java:389)
> at 
> org.apache.kylin.rest.controller.QueryController.query(QueryController.java:69)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:221)
> at 
> org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:136)
> at 
> org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:104)
> at 
> org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandleMethod(RequestMappingHandlerAdapter.java:743)
> at 
> org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:672)
> at 
> org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:82)
> at 
> org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:933)
> at 
> org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:867)
> at 
> org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:951)
> at 
> org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:853)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:650)
> at 
> org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:827)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:731)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
> at 
> org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
> at 
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330)
> at 
> org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:118)
> at 
> org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:84)
> at 
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
> at 
> org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:113)
> at 
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
> at 
> org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:103)
> at 
> 

[jira] [Closed] (KYLIN-2321) Message: Error while executing SQL "select * from root.id_card LIMIT 50000": AppendTrieDictionary can't retrive value from id

2018-02-01 Thread Shaofeng SHI (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-2321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaofeng SHI closed KYLIN-2321.
---
Resolution: Fixed

> Message: Error while executing SQL "select * from root.id_card LIMIT 50000": 
> AppendTrieDictionary can't retrive value from id
> -
>
> Key: KYLIN-2321
> URL: https://issues.apache.org/jira/browse/KYLIN-2321
> Project: Kylin
>  Issue Type: Bug
>  Components: Query Engine
>Affects Versions: v1.6.0
> Environment: Message: Error while executing SQL "select * from 
> root.id_card LIMIT 50000": AppendTrieDictionary can't retrive value from id
>Reporter: konglei
>Assignee: Dong Li
>Priority: Major
>
> Message: Error while executing SQL "select * from root.id_card LIMIT 50000": 
> AppendTrieDictionary can't retrive value from id
> ==[QUERY]===
> 2016-12-26 16:59:16,553 ERROR [http-bio-7070-exec-10] 
> controller.BasicController:44 : 
> org.apache.kylin.rest.exception.InternalErrorException: Error while executing 
> SQL "select * from root.id_card LIMIT 5": AppendTrieDictionary can't 
> retrive value from id
> at 
> org.apache.kylin.rest.service.QueryService.doQueryWithCache(QueryService.java:389)
> at 
> org.apache.kylin.rest.controller.QueryController.query(QueryController.java:69)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:221)
> at 
> org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:136)
> at 
> org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:104)
> at 
> org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandleMethod(RequestMappingHandlerAdapter.java:743)
> at 
> org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:672)
> at 
> org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:82)
> at 
> org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:933)
> at 
> org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:867)
> at 
> org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:951)
> at 
> org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:853)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:650)
> at 
> org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:827)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:731)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
> at 
> org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
> at 
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330)
> at 
> org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:118)
> at 
> org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:84)
> at 
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
> at 
> org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:113)
> at 
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
> at 
> org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:103)
> at 
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
>   

[jira] [Reopened] (KYLIN-2321) Message: Error while executing SQL "select * from root.id_card LIMIT 50000": AppendTrieDictionary can't retrive value from id

2018-02-01 Thread Shaofeng SHI (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-2321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaofeng SHI reopened KYLIN-2321:
-

> Message: Error while executing SQL "select * from root.id_card LIMIT 50000": 
> AppendTrieDictionary can't retrive value from id
> -
>
> Key: KYLIN-2321
> URL: https://issues.apache.org/jira/browse/KYLIN-2321
> Project: Kylin
>  Issue Type: Bug
>  Components: Query Engine
>Affects Versions: v1.6.0
> Environment: Message: Error while executing SQL "select * from 
> root.id_card LIMIT 50000": AppendTrieDictionary can't retrive value from id
>Reporter: konglei
>Assignee: Dong Li
>Priority: Major
>
> Message: Error while executing SQL "select * from root.id_card LIMIT 50000": 
> AppendTrieDictionary can't retrive value from id
> ==[QUERY]===
> 2016-12-26 16:59:16,553 ERROR [http-bio-7070-exec-10] 
> controller.BasicController:44 : 
> org.apache.kylin.rest.exception.InternalErrorException: Error while executing 
> SQL "select * from root.id_card LIMIT 5": AppendTrieDictionary can't 
> retrive value from id
> at 
> org.apache.kylin.rest.service.QueryService.doQueryWithCache(QueryService.java:389)
> at 
> org.apache.kylin.rest.controller.QueryController.query(QueryController.java:69)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:221)
> at 
> org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:136)
> at 
> org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:104)
> at 
> org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandleMethod(RequestMappingHandlerAdapter.java:743)
> at 
> org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:672)
> at 
> org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:82)
> at 
> org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:933)
> at 
> org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:867)
> at 
> org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:951)
> at 
> org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:853)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:650)
> at 
> org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:827)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:731)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
> at 
> org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
> at 
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330)
> at 
> org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:118)
> at 
> org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:84)
> at 
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
> at 
> org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:113)
> at 
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
> at 
> org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:103)
> at 
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
> at 
> 

[jira] [Closed] (KYLIN-2321) Message: Error while executing SQL "select * from root.id_card LIMIT 50000": AppendTrieDictionary can't retrive value from id

2018-02-01 Thread Shaofeng SHI (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-2321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaofeng SHI closed KYLIN-2321.
---
Resolution: Won't Fix

I think the reason is that you misused the global dictionary on a dimension 
column. The global dictionary doesn't support decoding, so this error is 
thrown. Please use a normal dictionary for the dimension, since you need to 
decode it.

> Message: Error while executing SQL "select * from root.id_card LIMIT 50000": 
> AppendTrieDictionary can't retrive value from id
> -
>
> Key: KYLIN-2321
> URL: https://issues.apache.org/jira/browse/KYLIN-2321
> Project: Kylin
>  Issue Type: Bug
>  Components: Query Engine
>Affects Versions: v1.6.0
> Environment: Message: Error while executing SQL "select * from 
> root.id_card LIMIT 50000": AppendTrieDictionary can't retrive value from id
>Reporter: konglei
>Assignee: Dong Li
>Priority: Major
>
> Message: Error while executing SQL "select * from root.id_card LIMIT 50000": 
> AppendTrieDictionary can't retrive value from id
> ==[QUERY]===
> 2016-12-26 16:59:16,553 ERROR [http-bio-7070-exec-10] 
> controller.BasicController:44 : 
> org.apache.kylin.rest.exception.InternalErrorException: Error while executing 
> SQL "select * from root.id_card LIMIT 5": AppendTrieDictionary can't 
> retrive value from id
> at 
> org.apache.kylin.rest.service.QueryService.doQueryWithCache(QueryService.java:389)
> at 
> org.apache.kylin.rest.controller.QueryController.query(QueryController.java:69)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:221)
> at 
> org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:136)
> at 
> org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:104)
> at 
> org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandleMethod(RequestMappingHandlerAdapter.java:743)
> at 
> org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:672)
> at 
> org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:82)
> at 
> org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:933)
> at 
> org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:867)
> at 
> org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:951)
> at 
> org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:853)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:650)
> at 
> org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:827)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:731)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
> at 
> org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
> at 
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330)
> at 
> org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:118)
> at 
> org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:84)
> at 
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
> at 
> org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:113)
> at 
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
> at 
> 

[jira] [Updated] (KYLIN-1850) Show Kylin Version on GUI

2018-02-01 Thread Shaofeng SHI (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-1850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaofeng SHI updated KYLIN-1850:

Component/s: (was: 3rd Party)
 Web 

> Show Kylin Version on GUI
> -
>
> Key: KYLIN-1850
> URL: https://issues.apache.org/jira/browse/KYLIN-1850
> Project: Kylin
>  Issue Type: Improvement
>  Components: Web 
>Affects Versions: v1.5.2
>Reporter: qianqiaoneng
>Assignee: Zhong,Jason
>Priority: Trivial
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KYLIN-2403) tableau extract month in where

2018-02-01 Thread Shaofeng SHI (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-2403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaofeng SHI updated KYLIN-2403:

Component/s: (was: 3rd Party)
 BI Integration

> tableau extract month in where
> --
>
> Key: KYLIN-2403
> URL: https://issues.apache.org/jira/browse/KYLIN-2403
> Project: Kylin
>  Issue Type: Bug
>  Components: BI Integration
>Affects Versions: v1.5.4.1
>Reporter: Pavel Tarasov
>Priority: Major
>
> I have a problem with the Tableau & Kylin connection. When creating a filter 
> on month in Tableau, it generates a query with the filter
> WHERE (({fn EXTRACT(MONTH  FROM "TEST_ORDERFACT"."DADD")} - 1) / 3 + 1 = 2).
> Detailed query example from tableau:
> SELECT "AMOCRM_MANAGERS"."NAME" AS "NAME__AMOCRM_MANAGERS_",
>  {fn EXTRACT(MONTH FROM "TEST_ORDERFACT"."DADD")} AS "mn_DADD_ok",
>  ({fn EXTRACT(MONTH FROM "TEST_ORDERFACT"."DADD")} - 1) / 3 + 1 AS 
> "qr_DADD_ok",
>  SUM("TEST_ORDERFACT"."AMOUNT") AS "sum_AMOUNT_ok",
>  {fn EXTRACT(YEAR FROM "TEST_ORDERFACT"."DADD")} AS "yr_DADD_ok"
> FROM "PTARASOV"."TEST_ORDERFACT" "TEST_ORDERFACT"
>  INNER JOIN "REALTYANALYTICS"."CLIENTCATEGORIES" "CLIENTCATEGORIES" ON 
> ("TEST_ORDERFACT"."CLIENTCATEGORY" = "CLIENTCATEGORIES"."ID")
>  INNER JOIN "REALTYANALYTICS"."AMOCRM_MANAGERS" "AMOCRM_MANAGERS" ON 
> ("TEST_ORDERFACT"."MANAGER" = "AMOCRM_MANAGERS"."ID")
>  LEFT JOIN "REALTYANALYTICS"."LOCATIONS" "LOCATIONS" ON 
> ("TEST_ORDERFACT"."REGION" = "LOCATIONS"."ID")
>  LEFT JOIN "REALTYANALYTICS"."ORDERFACTSERVICEPACKAGESOURCETYPES" 
> "ORDERFACTSERVICEPACKAGESOURCETYPES" ON 
> ("TEST_ORDERFACT"."ORDERFACTSERVICEPACKAGESOURCETYPEID" = 
> "ORDERFACTSERVICEPACKAGESOURCETYPES"."ID")
>  INNER JOIN "REALTYANALYTICS"."PRODUCTCATEGORIES" "PRODUCTCATEGORIES" ON 
> ("TEST_ORDERFACT"."TARIF" = "PRODUCTCATEGORIES"."ID")
>  INNER JOIN "REALTYANALYTICS"."PRODUCTS" "PRODUCTS" ON 
> ("TEST_ORDERFACT"."PRODUCT" = "PRODUCTS"."ID")
> WHERE (("AMOCRM_MANAGERS"."NAME" = 'Саркис Ирицян') AND 
> ("TEST_ORDERFACT"."USERSITEID" = 3032446) AND (({fn EXTRACT(MONTH FROM 
> "TEST_ORDERFACT"."DADD")} - 1) / 3 + 1 = 3))
> GROUP BY "AMOCRM_MANAGERS"."NAME",
>  {fn EXTRACT(MONTH FROM "TEST_ORDERFACT"."DADD")},
>  ({fn EXTRACT(MONTH FROM "TEST_ORDERFACT"."DADD")} - 1) / 3 + 1,
>  {fn EXTRACT(YEAR FROM "TEST_ORDERFACT"."DADD")}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (KYLIN-591) Leverage Zeppelin to interactive with Kylin

2018-02-01 Thread Shaofeng SHI (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaofeng SHI closed KYLIN-591.
--
Resolution: Fixed

> Leverage Zeppelin to interactive with Kylin
> ---
>
> Key: KYLIN-591
> URL: https://issues.apache.org/jira/browse/KYLIN-591
> Project: Kylin
>  Issue Type: New Feature
>  Components: BI Integration
>Reporter: Luke Han
>Assignee: Luke Han
>Priority: Major
>  Labels: gsoc2015
> Fix For: v1.0
>
>
> Detail to add...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (KYLIN-591) Leverage Zeppelin to interactive with Kylin

2018-02-01 Thread Shaofeng SHI (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaofeng SHI reopened KYLIN-591:


> Leverage Zeppelin to interactive with Kylin
> ---
>
> Key: KYLIN-591
> URL: https://issues.apache.org/jira/browse/KYLIN-591
> Project: Kylin
>  Issue Type: New Feature
>  Components: BI Integration
>Reporter: Luke Han
>Assignee: Luke Han
>Priority: Major
>  Labels: gsoc2015
> Fix For: v1.0
>
>
> Detail to add...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KYLIN-591) Leverage Zeppelin to interactive with Kylin

2018-02-01 Thread Shaofeng SHI (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaofeng SHI updated KYLIN-591:
---
Component/s: (was: 3rd Party)
 BI Integration

> Leverage Zeppelin to interactive with Kylin
> ---
>
> Key: KYLIN-591
> URL: https://issues.apache.org/jira/browse/KYLIN-591
> Project: Kylin
>  Issue Type: New Feature
>  Components: BI Integration
>Reporter: Luke Han
>Assignee: Luke Han
>Priority: Major
>  Labels: gsoc2015
> Fix For: v1.0
>
>
> Detail to add...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KYLIN-3222) The function of editing 'Advanced Dictionaries' in cube is unavailable.

2018-02-01 Thread Peng Xing (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-3222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peng Xing updated KYLIN-3222:
-
Description: 
There is a problem with editing 'Advanced Dictionaries' in a cube; refer to 
'modify_advanced_dictionary.png' and 'modify_advanced_dictionary_no_effect.png'.
Please review the patch, thanks!

  was:There is a problem about editing 'Advanced Dictionaries' in cube, refer 
to 'modify_advanced_dictionary.png' and 
'modify_advanced_dictionary_no_effect.png'


> The function of editing 'Advanced Dictionaries' in cube is unavailable.
> ---
>
> Key: KYLIN-3222
> URL: https://issues.apache.org/jira/browse/KYLIN-3222
> Project: Kylin
>  Issue Type: Bug
>  Components: Web 
>Affects Versions: v2.3.0
>Reporter: Peng Xing
>Assignee: Peng Xing
>Priority: Major
>  Labels: patch
> Attachments: 
> 0001-KYLIN-3222-The-function-of-editing-Advanced-Dictiona.patch, 
> modify_advanced_dictionary.png, modify_advanced_dictionary_no_effect.png
>
>
> There is a problem with editing 'Advanced Dictionaries' in a cube; refer to 
> 'modify_advanced_dictionary.png' and 
> 'modify_advanced_dictionary_no_effect.png'.
> Please review the patch, thanks!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KYLIN-208) Measure do not work when use Tableau Data Source

2018-02-01 Thread Shaofeng SHI (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaofeng SHI updated KYLIN-208:
---
Component/s: (was: 3rd Party)
 BI Integration

> Measure do not work when use Tableau Data Source
> 
>
> Key: KYLIN-208
> URL: https://issues.apache.org/jira/browse/KYLIN-208
> Project: Kylin
>  Issue Type: Bug
>  Components: BI Integration
>Reporter: Luke Han
>Assignee: hongbin ma
>Priority: Trivial
>  Labels: github-import
> Fix For: v1.0
>
>
> How to reproduce:
> 1. Create a connection in Tableau Desktop and publish the "Data Source" to 
> Tableau Server
> 2. Log in to Tableau Server and navigate to this data source
> 3. Click "New Workbook" with this data source
> 4. Drag and drop some dimensions and measures
> Dimension values show correctly, but measures cannot be displayed.
>  Imported from GitHub 
> Url: https://github.com/KylinOLAP/Kylin/issues/297
> Created by: [lukehan|https://github.com/lukehan]
> Labels: 
> Milestone: Backlog
> Created at: Fri Dec 26 14:36:56 CST 2014
> State: open



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KYLIN-3222) The function of editing 'Advanced Dictionaries' in cube is unavailable.

2018-02-01 Thread Peng Xing (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-3222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peng Xing updated KYLIN-3222:
-
Attachment: 0001-KYLIN-3222-The-function-of-editing-Advanced-Dictiona.patch

> The function of editing 'Advanced Dictionaries' in cube is unavailable.
> ---
>
> Key: KYLIN-3222
> URL: https://issues.apache.org/jira/browse/KYLIN-3222
> Project: Kylin
>  Issue Type: Bug
>  Components: Web 
>Affects Versions: v2.3.0
>Reporter: Peng Xing
>Assignee: Peng Xing
>Priority: Major
>  Labels: patch
> Attachments: 
> 0001-KYLIN-3222-The-function-of-editing-Advanced-Dictiona.patch, 
> modify_advanced_dictionary.png, modify_advanced_dictionary_no_effect.png
>
>
> There is a problem with editing 'Advanced Dictionaries' in a cube; refer to 
> 'modify_advanced_dictionary.png' and 
> 'modify_advanced_dictionary_no_effect.png'



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KYLIN-3222) The function of editing 'Advanced Dictionaries' in cube is unavailable.

2018-02-01 Thread Peng Xing (JIRA)
Peng Xing created KYLIN-3222:


 Summary: The function of editing 'Advanced Dictionaries' in cube 
is unavailable.
 Key: KYLIN-3222
 URL: https://issues.apache.org/jira/browse/KYLIN-3222
 Project: Kylin
  Issue Type: Bug
  Components: Web 
Affects Versions: v2.3.0
Reporter: Peng Xing
Assignee: Peng Xing
 Attachments: modify_advanced_dictionary.png, 
modify_advanced_dictionary_no_effect.png

There is a problem with editing 'Advanced Dictionaries' in a cube; refer to 
'modify_advanced_dictionary.png' and 'modify_advanced_dictionary_no_effect.png'



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KYLIN-3218) KYLIN interface :cannot load models

2018-02-01 Thread Jean-Luc BELLIER (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-3218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Luc BELLIER updated KYLIN-3218:

Attachment: kylin.out.node2

Hello again,

 

I come back with new investigations and results. I noticed that the node on 
which kylin was installed did not have much memory available, so I installed it 
on another node.

By looking at the kylin.out file, I noticed failures indicating that some files 
were missing, especially the tomcat/con/.keystore file.

 

I attached the corresponding file in the message. [^kylin.out.node2]

 

Thank you in advance for your help. Have a good day.

 

Best regards,

jean-Luc

> KYLIN interface :cannot load models 
> 
>
> Key: KYLIN-3218
> URL: https://issues.apache.org/jira/browse/KYLIN-3218
> Project: Kylin
>  Issue Type: Bug
>  Components: Client - CLI
>Affects Versions: v2.2.0
>Reporter: Jean-Luc BELLIER
>Assignee: hongbin ma
>Priority: Major
> Attachments: kylin.log, kylin.log, kylin.out, kylin.out, 
> kylin.out.node2
>
>
> Hello,
>  
> I am trying to use the tutorial example 'sample_cube' on KYLIN interface. I 
> installed the configuration without error. I launched the script sample.sh' 
> then the command 'kylin.sh start' from the bin directory.
> I can see the tables in Hive and the files in HDFS.
> Unfortunately, when I select the project 'learn_kylin' in the list, the 
> models cannot be refreshed.
> I tried to refresh the metadata from the 'System' tab without problem. But 
> the 'server config' and 'Sever environment are still blank. When I click on 
> 'reload config' button, I get the message : 'Oops ! Failed to take action'.
>  
> There may be something wrong in my config, but I cannot see what. I send in 
> attachment my 'kylin.log' and 'kylin.output' files.
>  
> Thank you in advance for your help. Have a good day.
>  
> Best regards,
> Jean-Luc.[^kylin.log]
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KYLIN-2899) Enable segment level query cache

2018-02-01 Thread Shaofeng SHI (JIRA)

[ 
https://issues.apache.org/jira/browse/KYLIN-2899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348361#comment-16348361
 ] 

Shaofeng SHI commented on KYLIN-2899:
-

Good idea! 

> Enable segment level query cache
> 
>
> Key: KYLIN-2899
> URL: https://issues.apache.org/jira/browse/KYLIN-2899
> Project: Kylin
>  Issue Type: Sub-task
>  Components: Query Engine
>Affects Versions: v2.1.0
>Reporter: Zhong Yanghong
>Assignee: Ma Gang
>Priority: Major
> Fix For: v2.3.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KYLIN-3028) Build cube error when set S3 as working-dir

2018-02-01 Thread Shaofeng SHI (JIRA)

[ 
https://issues.apache.org/jira/browse/KYLIN-3028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348360#comment-16348360
 ] 

Shaofeng SHI edited comment on KYLIN-3028 at 2/1/18 10:38 AM:
--

It is a bug in Kylin: if hdfs-working-dir is configured to a non-default file 
system and hbase.cluster-fs is not configured, this error happens.

 

Now the behavior is consistent: if 'hbase.cluster-fs' is not configured, the FS 
of 'hdfs-working-dir' is used as the FS for all intermediate data.


was (Author: shaofengshi):
It is a bug of kylin: if hdfs-working-dir is configured as a non-default file 
system, and not configure hbase.cluster-fs, this error will happen.
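For illustration, the two settings involved would look roughly like this in kylin.properties (the exact key name for the HBase cluster FS is an assumption here and may differ between Kylin versions; the bucket is a placeholder):

{code}
# Working directory for intermediate cube-build data (non-default FS, e.g. S3)
kylin.env.hdfs-working-dir=s3://mybucket/kylin

# File system where the HBase cluster stores its data; when this is not set,
# Kylin now falls back to the working-dir FS for all intermediate data.
kylin.storage.hbase.cluster-fs=s3://mybucket
{code}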

> Build cube error when set S3 as working-dir
> ---
>
> Key: KYLIN-3028
> URL: https://issues.apache.org/jira/browse/KYLIN-3028
> Project: Kylin
>  Issue Type: Bug
>  Components: Job Engine
>Affects Versions: v2.2.0
> Environment: AWS EMR 5.7, Apache Kylin 2.2 for HBase 1.x
>Reporter: Shaofeng SHI
>Assignee: Shaofeng SHI
>Priority: Minor
> Fix For: v2.3.0
>
>
> 1. Start an AWS EMR cluster, with HBase selected (data stored on S3);
> 2. Download and expand apache-kylin-2.2 for hbase 1.x binary package on EMR 
> 5.7's master node. Copy the "hbase.zookeeper.quorum" property from 
> /etc/hbase/conf/hbase-site.xml to $KYLIN_HOME/conf/kylin_job_conf.xml;  In 
> kylin.properties, set: "kylin.env.hdfs-working-dir=s3://mybucket/kylin"
> 3. Build the sample cube, in the job failed at "Create HTable" step, error is:
> {code}
> 2017-11-10 08:21:35,011 DEBUG [http-bio-7070-exec-2] 
> cachesync.Broadcaster:290 : Done broadcastingUPDATE, cube, kylin_sales_cube
> 2017-11-10 08:21:35,013 ERROR [Scheduler 1778356018 Job 
> 5a2893c9-3a76-458c-a03e-5cd97839fca5-393] common.HadoopShellExecutable:64 : 
> error execute 
> HadoopShellExecutable{id=5a2893c9-3a76-458c-a03e-5cd97839fca5-05, name=Create 
> HTable, state=RUNNING}
> org.apache.hadoop.security.AccessControlException: Permission denied: 
> user=root, access=WRITE, 
> inode="/kylin/kylin_metadata/kylin-5a2893c9-3a76-458c-a03e-5cd97839fca5/kylin_sales_cube/rowkey_stats":hdfs:hadoop:drwxr-xr-x
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:320)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1728)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1712)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1695)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3896)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2045)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> 

[jira] [Commented] (KYLIN-3028) Build cube error when set S3 as working-dir

2018-02-01 Thread Shaofeng SHI (JIRA)

[ 
https://issues.apache.org/jira/browse/KYLIN-3028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348360#comment-16348360
 ] 

Shaofeng SHI commented on KYLIN-3028:
-

It is a bug in Kylin: if hdfs-working-dir is configured to a non-default file 
system and hbase.cluster-fs is not configured, this error will happen.

> Build cube error when set S3 as working-dir
> ---
>
> Key: KYLIN-3028
> URL: https://issues.apache.org/jira/browse/KYLIN-3028
> Project: Kylin
>  Issue Type: Bug
>  Components: Job Engine
>Affects Versions: v2.2.0
> Environment: AWS EMR 5.7, Apache Kylin 2.2 for HBase 1.x
>Reporter: Shaofeng SHI
>Assignee: Shaofeng SHI
>Priority: Minor
> Fix For: v2.3.0
>
>
> 1. Start an AWS EMR cluster, with HBase selected (data stored on S3);
> 2. Download and expand the apache-kylin-2.2 for HBase 1.x binary package on EMR 
> 5.7's master node. Copy the "hbase.zookeeper.quorum" property from 
> /etc/hbase/conf/hbase-site.xml to $KYLIN_HOME/conf/kylin_job_conf.xml; in 
> kylin.properties, set: "kylin.env.hdfs-working-dir=s3://mybucket/kylin"
> 3. Build the sample cube; the job failed at the "Create HTable" step, and the error is:
> {code}
> 2017-11-10 08:21:35,011 DEBUG [http-bio-7070-exec-2] 
> cachesync.Broadcaster:290 : Done broadcastingUPDATE, cube, kylin_sales_cube
> 2017-11-10 08:21:35,013 ERROR [Scheduler 1778356018 Job 
> 5a2893c9-3a76-458c-a03e-5cd97839fca5-393] common.HadoopShellExecutable:64 : 
> error execute 
> HadoopShellExecutable{id=5a2893c9-3a76-458c-a03e-5cd97839fca5-05, name=Create 
> HTable, state=RUNNING}
> org.apache.hadoop.security.AccessControlException: Permission denied: 
> user=root, access=WRITE, 
> inode="/kylin/kylin_metadata/kylin-5a2893c9-3a76-458c-a03e-5cd97839fca5/kylin_sales_cube/rowkey_stats":hdfs:hadoop:drwxr-xr-x
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:320)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1728)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1712)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1695)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3896)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2045)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
> at 
> org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3002)
> at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
>  

[jira] [Commented] (KYLIN-2932) Simplify the thread model for in-memory cubing

2018-02-01 Thread Shaofeng SHI (JIRA)

[ 
https://issues.apache.org/jira/browse/KYLIN-2932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348324#comment-16348324
 ] 

Shaofeng SHI commented on KYLIN-2932:
-

Hi Yanghong, 

A couple of questions for this patch:

1. Under what conditions should this new builder be used? Can it totally replace 
the old version?

2. Are there any numbers to support this improvement, such as performance 
gains or resource consumption?

3. If this can be covered by an integration test, that would be great!

 

Thanks!

 

> Simplify the thread model for in-memory cubing
> --
>
> Key: KYLIN-2932
> URL: https://issues.apache.org/jira/browse/KYLIN-2932
> Project: Kylin
>  Issue Type: Improvement
>  Components: Job Engine
>Reporter: Wang Ken
>Assignee: Wang Ken
>Priority: Major
> Attachments: APACHE-KYLIN-2932.patch
>
>
> The current implementation uses split threads, task threads and the main thread 
> to do the cube building, with complex join and error-handling logic.
> The new implementation leverages the ForkJoinPool from the JDK: the event split 
> logic is handled in the main thread, cuboid tasks and sub-tasks are handled in 
> the fork-join pool, and cube results are collected asynchronously and can be 
> written to the output earlier.
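
For illustration, a minimal, hypothetical Java sketch of this fork/join 
decomposition (class and method names are invented here and are not the actual 
Kylin patch; the real builder aggregates records rather than strings):

{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

public class CuboidForkJoinSketch {

    // Results are collected asynchronously as tasks finish.
    static final ConcurrentLinkedQueue<String> results = new ConcurrentLinkedQueue<String>();

    // One task per cuboid; each task forks sub-tasks for its child cuboids.
    static class CuboidTask extends RecursiveAction {
        final String cuboid;
        final List<String> children;

        CuboidTask(String cuboid, List<String> children) {
            this.cuboid = cuboid;
            this.children = children;
        }

        @Override
        protected void compute() {
            // Placeholder for the real aggregation work of this cuboid.
            results.add("built " + cuboid);
            // Fork one sub-task per child cuboid; invokeAll waits for all of them.
            List<CuboidTask> subTasks = new ArrayList<CuboidTask>();
            for (String child : children) {
                subTasks.add(new CuboidTask(child, Collections.<String>emptyList()));
            }
            invokeAll(subTasks);
        }
    }

    public static void main(String[] args) {
        // The main thread would handle input splitting, then submit the root cuboid.
        ForkJoinPool pool = new ForkJoinPool();
        pool.invoke(new CuboidTask("root", Arrays.asList("childA", "childB")));
        // Results can be drained once the pool finishes (or concurrently via the queue).
        for (String r : results) {
            System.out.println(r);
        }
    }
}
{code}

The point of the sketch is only the thread model: the pool's work-stealing 
schedules the cuboid sub-tasks, while the shared queue lets results be consumed 
without joining every task first.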



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)