[ https://issues.apache.org/jira/browse/KYLIN-4522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17119204#comment-17119204 ]

Carlos Ignacio Molina López commented on KYLIN-4522:
----------------------------------------------------

Thank you very much for your response, dear Xiaoxiang Yu.

I tried both versions 2.6.3 and 2.5.2, and I am getting the error below when I 
try to import from a JDBC data source (I was able to do this successfully in 
version 2.6.6). For context, my JDBC source configuration is sketched below.
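
The relevant kylin.properties entries look roughly like this (a sketch based on 
the Kylin JDBC data source documentation as I understand it; the connection 
values here are placeholders, not my real ones):

# kylin.properties - hypothetical placeholder values
kylin.source.default=8
kylin.source.jdbc.connection-url=jdbc:mysql://<host>:3306/<database>
kylin.source.jdbc.driver=com.mysql.jdbc.Driver
kylin.source.jdbc.dialect=mysql
kylin.source.jdbc.user=<user>
kylin.source.jdbc.pass=<password>
kylin.source.jdbc.sqoop-home=/usr/lib/sqoop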

This is the content of kylin.log:

2020-05-29 01:54:51,032 INFO [http-bio-7070-exec-8] conf.HiveConf:181 : Found configuration file file:/etc/hive/conf.dist/hive-site.xml
2020-05-29 01:54:51,367 INFO [http-bio-7070-exec-8] hive.metastore:407 : Trying to connect to metastore with URI thrift://ip-XXX-XXX-XXX-XXX.us-west-2.compute.internal:9083
2020-05-29 01:54:51,393 INFO [http-bio-7070-exec-8] hive.metastore:481 : Opened a connection to metastore, current connections: 1
2020-05-29 01:54:51,407 INFO [http-bio-7070-exec-8] hive.metastore:534 : Connected to metastore.
2020-05-29 01:55:08,901 INFO [BadQueryDetector] service.BadQueryDetector:147 : Detect bad query.
2020-05-29 01:55:12,140 INFO [FetcherRunner 1956644476-47] threadpool.DefaultFetcherRunner:96 : Job Fetcher: 0 should running, 0 actual running, 0 stopped, 0 ready, 0 already succeed, 0 error, 0 discarded, 0 others
2020-05-29 01:55:42,141 INFO [FetcherRunner 1956644476-47] threadpool.DefaultFetcherRunner:96 : Job Fetcher: 0 should running, 0 actual running, 0 stopped, 0 ready, 0 already succeed, 0 error, 0 discarded, 0 others
2020-05-29 01:55:48,342 INFO [http-bio-7070-exec-4] hive.metastore:407 : Trying to connect to metastore with URI thrift://ip-XXX-XXX-XXX-XXX.us-west-2.compute.internal:9083
2020-05-29 01:55:48,362 INFO [http-bio-7070-exec-4] hive.metastore:481 : Opened a connection to metastore, current connections: 2
2020-05-29 01:55:48,363 INFO [http-bio-7070-exec-4] hive.metastore:534 : Connected to metastore.
2020-05-29 01:55:48,474 ERROR [http-bio-7070-exec-4] controller.TableController:129 : Failed to load Hive Table
java.lang.RuntimeException: cannot get HiveTableMeta
    at org.apache.kylin.source.hive.HiveMetadataExplorer.loadTableMetadata(HiveMetadataExplorer.java:66)
    at org.apache.kylin.rest.service.TableService.extractHiveTableMeta(TableService.java:204)
    at org.apache.kylin.rest.service.TableService.loadHiveTablesToProject(TableService.java:130)
    at org.apache.kylin.rest.controller.TableController.loadHiveTables(TableController.java:113)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205)
    at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:133)
    at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:97)
    at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:827)
    at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:738)
    at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:85)
    at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:967)
    at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:901)
    at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:970)
    at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:872)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:650)
    at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:846)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:731)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
    at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:317)
    at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:127)
    at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:91)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
    at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:114)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
    at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:137)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
    at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:111)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
    at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:170)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
    at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63)
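
The metastore connection itself succeeds right before the failure, so as a 
sanity check outside Kylin, something like the following should confirm that 
Hive can serve table metadata at all (a minimal sketch; the sample table name 
is an assumption on my part):

# Hypothetical sanity check: ask Hive directly for the metadata that
# HiveMetadataExplorer fails to fetch through Kylin.
hive -e "SHOW DATABASES;"
hive -e "DESCRIBE FORMATTED default.kylin_sales;"   # sample table name assumed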

 

This is the content of kylin.out:

 

OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512M; support was removed in 8.0
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/hive/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/spark/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
usage: java org.apache.catalina.startup.Catalina [ -config {pathname} ] [ -nonaming ] { -help | start | stop }
May 29, 2020 1:48:59 AM org.apache.catalina.core.AprLifecycleListener lifecycleEvent
INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: :/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native
May 29, 2020 1:48:59 AM org.apache.coyote.AbstractProtocol init
INFO: Initializing ProtocolHandler ["http-bio-7070"]
May 29, 2020 1:48:59 AM org.apache.coyote.AbstractProtocol init
INFO: Initializing ProtocolHandler ["http-bio-7443"]
May 29, 2020 1:49:00 AM org.apache.tomcat.util.net.jsse.JSSESocketFactory getStore
SEVERE: Failed to load keystore type JKS with path conf/.keystore due to /usr/local/kylin/apache-kylin-2.5.2-bin-hbase1x/tomcat/conf/.keystore (No such file or directory)
java.io.FileNotFoundException: /usr/local/kylin/apache-kylin-2.5.2-bin-hbase1x/tomcat/conf/.keystore (No such file or directory)
    at java.io.FileInputStream.open0(Native Method)
    at java.io.FileInputStream.open(FileInputStream.java:195)
    at java.io.FileInputStream.<init>(FileInputStream.java:138)
    at java.io.FileInputStream.<init>(FileInputStream.java:93)
    at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90)
    at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188)
    at java.net.URL.openStream(URL.java:1068)
    at org.apache.tomcat.util.file.ConfigFileLoader.getInputStream(ConfigFileLoader.java:100)
    at org.apache.tomcat.util.net.jsse.JSSESocketFactory.getStore(JSSESocketFactory.java:472)
    at org.apache.tomcat.util.net.jsse.JSSESocketFactory.getKeystore(JSSESocketFactory.java:381)
    at org.apache.tomcat.util.net.jsse.JSSESocketFactory.getKeyManagers(JSSESocketFactory.java:654)
    at org.apache.tomcat.util.net.jsse.JSSESocketFactory.getKeyManagers(JSSESocketFactory.java:594)
    at org.apache.tomcat.util.net.jsse.JSSESocketFactory.init(JSSESocketFactory.java:539)
    at org.apache.tomcat.util.net.jsse.JSSESocketFactory.createSocket(JSSESocketFactory.java:255)
    at org.apache.tomcat.util.net.JIoEndpoint.bind(JIoEndpoint.java:400)
    at org.apache.tomcat.util.net.AbstractEndpoint.init(AbstractEndpoint.java:728)
    at org.apache.coyote.AbstractProtocol.init(AbstractProtocol.java:452)
    at org.apache.coyote.http11.AbstractHttp11JsseProtocol.init(AbstractHttp11JsseProtocol.java:119)
    at org.apache.catalina.connector.Connector.initInternal(Connector.java:978)
    at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:102)
    at org.apache.catalina.core.StandardService.initInternal(StandardService.java:560)
    at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:102)
    at org.apache.catalina.core.StandardServer.initInternal(StandardServer.java:840)
    at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:102)
    at org.apache.catalina.startup.Catalina.load(Catalina.java:654)
    at org.apache.catalina.startup.Catalina.start(Catalina.java:693)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:294)
    at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:428)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
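
(As an aside, the keystore error above seems to be the HTTPS connector on port 
7443 complaining about a missing conf/.keystore. If it needs to be silenced, I 
believe something like this would create the file Tomcat expects; the alias and 
passwords below are placeholders:)

# Hypothetical: generate the self-signed keystore Tomcat's HTTPS connector
# looks for under tomcat/conf.
cd /usr/local/kylin/apache-kylin-2.5.2-bin-hbase1x/tomcat/conf
keytool -genkeypair -alias kylin -keyalg RSA -keysize 2048 \
        -keystore .keystore -storepass changeit -keypass changeit \
        -dname "CN=localhost"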

 

Could you please advise?

Thank you and kind regards,

Carlos Molina

 

> Could not initialize class org.apache.hadoop.hbase.io.hfile.HFile Kylin 2.6.6 EMR 5.19
> ---------------------------------------------------------------------------------------
>
>                 Key: KYLIN-4522
>                 URL: https://issues.apache.org/jira/browse/KYLIN-4522
>             Project: Kylin
>          Issue Type: Bug
>          Components: Environment, Job Engine, Others
>    Affects Versions: v2.6.6
>         Environment: Release label: emr-5.19.0
> Hadoop distribution: Amazon 2.8.5
> Applications: Hive 2.3.3, HBase 1.4.7, Spark 2.3.2, Livy 0.5.0, ZooKeeper 
> 3.4.13, Sqoop 1.4.7, Oozie 5.0.0, Pig 0.17.0, HCatalog 2.3.3
>            Reporter: Carlos Ignacio Molina López
>            Priority: Major
>         Attachments: base_2020_05_25_14_29_52.zip
>
>
> Hi,
> I've tried to build the sample kylin_sales_cube with Spark on an Amazon EMR 
> cluster. I saw issue KYLIN-3931, where the suggestion is to use the 2.6.6 
> engine for Hadoop 3. On EMR, Hadoop 3 is only available from EMR 6.0, which is 
> very recent; I tried to set up versions 2.6.6 and 3.0.2 for Hadoop 3, but in 
> both cases the Kylin site doesn't show up (Error 404 - Not Found). So I tried 
> to run on EMR 5.19, which has the same Spark version (2.3.2) that Kylin 2.6.6 
> uses.
> I am getting the "java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.hadoop.hbase.io.hfile.HFile" error message.
> I had already copied the following jars to the Spark jars folder, as per the 
> documentation and what I've read in the kylin-issues mailing list archives 
> (the copy commands are sketched after the list):
> /usr/lib/hbase/hbase-hadoop-compat-1.4.7.jar
> /usr/lib/hbase/hbase-hadoop2-compat-1.4.7.jar
> /usr/lib/hbase/lib/hbase-common-1.4.7-tests.jar
> /usr/lib/hbase/lib/hbase-common-1.4.7.jar
> /usr/lib/hbase/hbase-client.jar
> /usr/lib/hbase/hbase-client-1.4.7.jar
> /usr/lib/hbase/hbase-server-1.4.7.jar
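> Roughly the commands used for the copies above (a sketch; /usr/lib/spark/jars 
> is assumed to be the Spark jars folder on this EMR image):
> sudo cp /usr/lib/hbase/hbase-hadoop-compat-1.4.7.jar /usr/lib/spark/jars/
> sudo cp /usr/lib/hbase/hbase-hadoop2-compat-1.4.7.jar /usr/lib/spark/jars/
> sudo cp /usr/lib/hbase/lib/hbase-common-1.4.7-tests.jar /usr/lib/spark/jars/
> sudo cp /usr/lib/hbase/lib/hbase-common-1.4.7.jar /usr/lib/spark/jars/
> sudo cp /usr/lib/hbase/hbase-client.jar /usr/lib/spark/jars/
> sudo cp /usr/lib/hbase/hbase-client-1.4.7.jar /usr/lib/spark/jars/
> sudo cp /usr/lib/hbase/hbase-server-1.4.7.jar /usr/lib/spark/jars/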
>  
> This is the output shown for the EMR step:
> {{org.apache.kylin.engine.spark.exception.SparkException: OS command error exit with return code: 1, error message:
> 20/05/25 14:03:46 WARN SparkConf: The configuration key 'spark.yarn.executor.memoryOverhead' has been deprecated as of Spark 2.3 and may be removed in the future. Please use the new key 'spark.executor.memoryOverhead' instead.
> 20/05/25 14:03:47 INFO RMProxy: Connecting to ResourceManager at ip-XXX-XXX-XXX-XXX.us-west-2.compute.internal/XXX.XXX.XXX.XXX:8032
> 20/05/25 14:03:49 INFO Client: Requesting a new application from cluster with 4 NodeManagers
> 20/05/25 14:03:49 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (6144 MB per container)
> 20/05/25 14:03:49 INFO Client: Will allocate AM container, with 5632 MB memory including 512 MB overhead
> 20/05/25 14:03:49 INFO Client: Setting up container launch context for our AM
> 20/05/25 14:03:49 INFO Client: Setting up the launch environment for our AM container
> 20/05/25 14:03:49 INFO Client: Preparing resources for our AM container
> 20/05/25 14:03:51 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
> 20/05/25 14:03:54 INFO Client: Uploading resource file:/mnt/tmp/spark-d26c4f1f-1b8a-4cf8-a05b-842294ce017d/__spark_libs__4034657074333893156.zip -> hdfs://ip-XXX-XXX-XXX-XXX.us-west-2.compute.internal:8020/user/hadoop/.sparkStaging/application_1590337422418_0043/__spark_libs__4034657074333893156.zip
> 20/05/25 14:03:54 INFO Client: Uploading resource file:/usr/local/kylin/apache-kylin-2.6.6-bin-hbase1x/lib/kylin-job-2.6.6.jar -> hdfs://ip-XXX-XXX-XXX-XXX.us-west-2.compute.internal:8020/user/hadoop/.sparkStaging/application_1590337422418_0043/kylin-job-2.6.6.jar
> 20/05/25 14:03:55 INFO Client: Uploading resource file:/usr/lib/hbase/lib/hbase-common-1.4.7.jar -> hdfs://ip-XXX-XXX-XXX-XXX.us-west-2.compute.internal:8020/user/hadoop/.sparkStaging/application_1590337422418_0043/hbase-common-1.4.7.jar
> 20/05/25 14:03:55 INFO Client: Uploading resource file:/usr/lib/hbase/lib/hbase-server-1.4.7.jar -> hdfs://ip-XXX-XXX-XXX-XXX.us-west-2.compute.internal:8020/user/hadoop/.sparkStaging/application_1590337422418_0043/hbase-server-1.4.7.jar
> 20/05/25 14:03:55 INFO Client: Uploading resource file:/usr/lib/hbase/lib/hbase-client-1.4.7.jar -> hdfs://ip-XXX-XXX-XXX-XXX.us-west-2.compute.internal:8020/user/hadoop/.sparkStaging/application_1590337422418_0043/hbase-client-1.4.7.jar
> 20/05/25 14:03:55 INFO Client: Uploading resource file:/usr/lib/hbase/lib/hbase-protocol-1.4.7.jar -> hdfs://ip-XXX-XXX-XXX-XXX.us-west-2.compute.internal:8020/user/hadoop/.sparkStaging/application_1590337422418_0043/hbase-protocol-1.4.7.jar
> 20/05/25 14:03:55 INFO Client: Uploading resource file:/usr/lib/hbase/lib/hbase-hadoop-compat-1.4.7.jar -> hdfs://ip-XXX-XXX-XXX-XXX.us-west-2.compute.internal:8020/user/hadoop/.sparkStaging/application_1590337422418_0043/hbase-hadoop-compat-1.4.7.jar
> 20/05/25 14:03:56 INFO Client: Uploading resource file:/usr/lib/hbase/lib/htrace-core-3.1.0-incubating.jar -> hdfs://ip-XXX-XXX-XXX-XXX.us-west-2.compute.internal:8020/user/hadoop/.sparkStaging/application_1590337422418_0043/htrace-core-3.1.0-incubating.jar
> 20/05/25 14:03:56 INFO Client: Uploading resource file:/usr/lib/hbase/lib/metrics-core-2.2.0.jar -> hdfs://ip-XXX-XXX-XXX-XXX.us-west-2.compute.internal:8020/user/hadoop/.sparkStaging/application_1590337422418_0043/metrics-core-2.2.0.jar
> 20/05/25 14:03:56 WARN Client: Same path resource file:///usr/lib/hbase/lib/hbase-hadoop-compat-1.4.7.jar added multiple times to distributed cache.
> 20/05/25 14:03:56 INFO Client: Uploading resource file:/usr/lib/hbase/lib/hbase-hadoop2-compat-1.4.7.jar -> hdfs://ip-XXX-XXX-XXX-XXX.us-west-2.compute.internal:8020/user/hadoop/.sparkStaging/application_1590337422418_0043/hbase-hadoop2-compat-1.4.7.jar
> 20/05/25 14:03:56 INFO Client: Uploading resource file:/etc/spark/conf/hive-site.xml -> hdfs://ip-XXX-XXX-XXX-XXX.us-west-2.compute.internal:8020/user/hadoop/.sparkStaging/application_1590337422418_0043/hive-site.xml
> 20/05/25 14:03:56 INFO Client: Uploading resource file:/mnt/tmp/spark-d26c4f1f-1b8a-4cf8-a05b-842294ce017d/__spark_conf__1997289269037988671.zip -> hdfs://ip-XXX-XXX-XXX-XXX.us-west-2.compute.internal:8020/user/hadoop/.sparkStaging/application_1590337422418_0043/__spark_conf__.zip
> 20/05/25 14:03:56 INFO SecurityManager: Changing view acls to: hadoop
> 20/05/25 14:03:56 INFO SecurityManager: Changing modify acls to: hadoop
> 20/05/25 14:03:56 INFO SecurityManager: Changing view acls groups to:
> 20/05/25 14:03:56 INFO SecurityManager: Changing modify acls groups to:
> 20/05/25 14:03:56 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(hadoop); groups with view permissions: Set(); users  with modify permissions: Set(hadoop); groups with modify permissions: Set()
> 20/05/25 14:03:56 INFO Client: Submitting application application_1590337422418_0043 to ResourceManager
> 20/05/25 14:03:56 INFO YarnClientImpl: Submitted application application_1590337422418_0043
> 20/05/25 14:03:57 INFO Client: Application report for application_1590337422418_0043 (state: ACCEPTED)
> 20/05/25 14:03:57 INFO Client:
>      client token: N/A
>      diagnostics: AM container is launched, waiting for AM container to Register with RM
>      ApplicationMaster host: N/A
>      ApplicationMaster RPC port: -1
>      queue: default
>      start time: 1590415436952
>      final status: UNDEFINED
>      tracking URL: http://ip-XXX-XXX-XXX-XXX.us-west-2.compute.internal:20888/proxy/application_1590337422418_0043/
>      user: hadoop
> 20/05/25 14:03:58 INFO Client: Application report for application_1590337422418_0043 (state: ACCEPTED)
> 20/05/25 14:03:59 INFO Client: Application report for application_1590337422418_0043 (state: ACCEPTED)
> 20/05/25 14:04:00 INFO Client: Application report for application_1590337422418_0043 (state: ACCEPTED)
> 20/05/25 14:04:01 INFO Client: Application report for application_1590337422418_0043 (state: ACCEPTED)
> 20/05/25 14:04:02 INFO Client: Application report for application_1590337422418_0043 (state: RUNNING)
> 20/05/25 14:04:02 INFO Client:
>      client token: N/A
>      diagnostics: N/A
>      ApplicationMaster host: XXX.XXX.XXX.XXX
>      ApplicationMaster RPC port: 0
>      queue: default
>      start time: 1590415436952
>      final status: UNDEFINED
>      tracking URL: http://ip-XXX-XXX-XXX-XXX.us-west-2.compute.internal:20888/proxy/application_1590337422418_0043/
>      user: hadoop
> [... repeated "Application report for application_1590337422418_0043 (state: RUNNING)" lines, 20/05/25 14:04:03 through 14:04:43, omitted ...]
> 20/05/25 14:04:44 INFO Client: Application report for application_1590337422418_0043 (state: ACCEPTED)
> 20/05/25 14:04:44 INFO Client:
>      client token: N/A
>      diagnostics: AM container is launched, waiting for AM container to Register with RM
>      ApplicationMaster host: N/A
>      ApplicationMaster RPC port: -1
>      queue: default
>      start time: 1590415436952
>      final status: UNDEFINED
>      tracking URL: http://ip-XXX-XXX-XXX-XXX.us-west-2.compute.internal:20888/proxy/application_1590337422418_0043/
>      user: hadoop
> 20/05/25 14:04:45 INFO Client: Application report for application_1590337422418_0043 (state: ACCEPTED)
> 20/05/25 14:04:46 INFO Client: Application report for application_1590337422418_0043 (state: ACCEPTED)
> 20/05/25 14:04:47 INFO Client: Application report for application_1590337422418_0043 (state: ACCEPTED)
> 20/05/25 14:04:48 INFO Client: Application report for application_1590337422418_0043 (state: ACCEPTED)
> 20/05/25 14:04:49 INFO Client: Application report for application_1590337422418_0043 (state: RUNNING)
> 20/05/25 14:04:49 INFO Client:
>      client token: N/A
>      diagnostics: N/A
>      ApplicationMaster host: XXX.XXX.XXX.XXX
>      ApplicationMaster RPC port: 0
>      queue: default
>      start time: 1590415436952
>      final status: UNDEFINED
>      tracking URL: http://ip-XXX-XXX-XXX-XXX.us-west-2.compute.internal:20888/proxy/application_1590337422418_0043/
>      user: hadoop
> [... repeated "Application report for application_1590337422418_0043 (state: RUNNING)" lines, 20/05/25 14:04:50 through 14:05:26, omitted ...]
> 20/05/25 14:05:27 INFO Client: Application report for application_1590337422418_0043 (state: FINISHED)
> 20/05/25 14:05:27 INFO Client:
>      client token: N/A
>      diagnostics: User class threw exception: java.lang.RuntimeException: error execute org.apache.kylin.storage.hbase.steps.SparkCubeHFile. Root cause: Job aborted.
>     at org.apache.kylin.common.util.AbstractApplication.execute(AbstractApplication.java:42)
>     at org.apache.kylin.common.util.SparkEntry.main(SparkEntry.java:44)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$4.run(ApplicationMaster.scala:721)
> Caused by: org.apache.spark.SparkException: Job aborted.
>     at org.apache.spark.internal.io.SparkHadoopWriter$.write(SparkHadoopWriter.scala:100)
>     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1083)
>     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1081)
>     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1081)
>     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
>     at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopDataset(PairRDDFunctions.scala:1081)
>     at org.apache.spark.api.java.JavaPairRDD.saveAsNewAPIHadoopDataset(JavaPairRDD.scala:831)
>     at org.apache.kylin.storage.hbase.steps.SparkCubeHFile.execute(SparkCubeHFile.java:238)
>     at org.apache.kylin.common.util.AbstractApplication.execute(AbstractApplication.java:37)
>     ... 6 more
> Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 1.0 failed 4 times, most recent failure: Lost task 1.3 in stage 1.0 (TID 15, ip-XXX-XXX-XXX-XXX.us-west-2.compute.internal, executor 3): org.apache.spark.SparkException: Task failed while writing rows
>     at org.apache.spark.internal.io.SparkHadoopWriter$.org$apache$spark$internal$io$SparkHadoopWriter$$executeTask(SparkHadoopWriter.scala:155)
>     at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:83)
>     at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:78)
>     at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
>     at org.apache.spark.scheduler.Task.run(Task.scala:109)
>     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hbase.io.hfile.HFile
>     at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.<init>(StoreFile.java:880)
>     at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.<init>(StoreFile.java:805)
>     at org.apache.hadoop.hbase.regionserver.StoreFile$WriterBuilder.build(StoreFile.java:739)
>     at org.apache.kylin.storage.hbase.steps.HFileOutputFormat3$1.getNewWriter(HFileOutputFormat3.java:224)
>     at org.apache.kylin.storage.hbase.steps.HFileOutputFormat3$1.write(HFileOutputFormat3.java:181)
>     at org.apache.kylin.storage.hbase.steps.HFileOutputFormat3$1.write(HFileOutputFormat3.java:153)
>     at org.apache.spark.internal.io.HadoopMapReduceWriteConfigUtil.write(SparkHadoopWriter.scala:356)
>     at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$4.apply(SparkHadoopWriter.scala:130)
>     at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$4.apply(SparkHadoopWriter.scala:127)
>     at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1415)
>     at org.apache.spark.internal.io.SparkHadoopWriter$.org$apache$spark$internal$io$SparkHadoopWriter$$executeTask(SparkHadoopWriter.scala:139)
>     ... 8 more}}
> {{Driver stacktrace:
>     at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1803)
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1791)
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1790)
>     at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>     at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1790)
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:871)
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:871)
>     at scala.Option.foreach(Option.scala:257)
>     at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:871)
>     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2024)
>     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1973)
>     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1962)
>     at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
>     at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:682)
>     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2034)
>     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2055)
>     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2087)
>     at org.apache.spark.internal.io.SparkHadoopWriter$.write(SparkHadoopWriter.scala:78)
>     ... 16 more
> Caused by: org.apache.spark.SparkException: Task failed while writing rows
>     at org.apache.spark.internal.io.SparkHadoopWriter$.org$apache$spark$internal$io$SparkHadoopWriter$$executeTask(SparkHadoopWriter.scala:155)
>     at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:83)
>     at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:78)
>     at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
>     at org.apache.spark.scheduler.Task.run(Task.scala:109)
>     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hbase.io.hfile.HFile
>     at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.<init>(StoreFile.java:880)
>     at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.<init>(StoreFile.java:805)
>     at org.apache.hadoop.hbase.regionserver.StoreFile$WriterBuilder.build(StoreFile.java:739)
>     at org.apache.kylin.storage.hbase.steps.HFileOutputFormat3$1.getNewWriter(HFileOutputFormat3.java:224)
>     at org.apache.kylin.storage.hbase.steps.HFileOutputFormat3$1.write(HFileOutputFormat3.java:181)
>     at org.apache.kylin.storage.hbase.steps.HFileOutputFormat3$1.write(HFileOutputFormat3.java:153)
>     at org.apache.spark.internal.io.HadoopMapReduceWriteConfigUtil.write(SparkHadoopWriter.scala:356)
>     at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$4.apply(SparkHadoopWriter.scala:130)
>     at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$4.apply(SparkHadoopWriter.scala:127)
>     at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1415)
>     at org.apache.spark.internal.io.SparkHadoopWriter$.org$apache$spark$internal$io$SparkHadoopWriter$$executeTask(SparkHadoopWriter.scala:139)
>     ... 8 more}}
> {{     ApplicationMaster host: XXX.XXX.XXX.XXX
>      ApplicationMaster RPC port: 0
>      queue: default
>      start time: 1590415436952
>      final status: FAILED
>      tracking URL: http://ip-XXX-XXX-XXX-XXX.us-west-2.compute.internal:20888/proxy/application_1590337422418_0043/
>      user: hadoop
> Exception in thread "main" org.apache.spark.SparkException: Application application_1590337422418_0043 finished with failed status
>     at org.apache.spark.deploy.yarn.Client.run(Client.scala:1165)
>     at org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1520)
>     at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:894)
>     at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
>     at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
>     at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
>     at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> 20/05/25 14:05:27 INFO ShutdownHookManager: Shutdown hook called
> 20/05/25 14:05:27 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-04e9eed4-d16e-406c-9fb0-972cf355db09
> 20/05/25 14:05:27 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-d26c4f1f-1b8a-4cf8-a05b-842294ce017d
> The command is: export HADOOP_CONF_DIR=/etc/hadoop/conf && /usr/lib/spark/bin/spark-submit --class org.apache.kylin.common.util.SparkEntry --conf spark.executor.instances=40 --conf spark.yarn.queue=default --conf spark.history.fs.logDirectory=hdfs:///kylin/spark-history --conf spark.master=yarn --conf spark.hadoop.yarn.timeline-service.enabled=false --conf spark.executor.memory=5G --conf spark.eventLog.enabled=true --conf spark.eventLog.dir=hdfs:///kylin/spark-history --conf spark.yarn.executor.memoryOverhead=1024 --conf spark.driver.memory=5G --conf spark.submit.deployMode=cluster --conf spark.shuffle.service.enabled=true --jars /usr/lib/hbase/lib/hbase-common-1.4.7.jar,/usr/lib/hbase/lib/hbase-server-1.4.7.jar,/usr/lib/hbase/lib/hbase-client-1.4.7.jar,/usr/lib/hbase/lib/hbase-protocol-1.4.7.jar,/usr/lib/hbase/lib/hbase-hadoop-compat-1.4.7.jar,/usr/lib/hbase/lib/htrace-core-3.1.0-incubating.jar,/usr/lib/hbase/lib/metrics-core-2.2.0.jar,/usr/lib/hbase/lib/hbase-hadoop-compat-1.4.7.jar,/usr/lib/hbase/lib/hbase-hadoop2-compat-1.4.7.jar, /usr/local/kylin/apache-kylin-2.6.6-bin-hbase1x/lib/kylin-job-2.6.6.jar -className org.apache.kylin.storage.hbase.steps.SparkCubeHFile -partitions hdfs://ip-XXX-XXX-XXX-XXX.us-west-2.compute.internal:8020/kylin/kylin_metadata/kylin-b75c7f69-2ebf-c5c3-4a6e-b01f177d911f/kylin_sales_cube/rowkey_stats/part-r-00000_hfile -counterOutput hdfs://ip-XXX-XXX-XXX-XXX.us-west-2.compute.internal:8020/kylin/kylin_metadata/kylin-b75c7f69-2ebf-c5c3-4a6e-b01f177d911f/kylin_sales_cube/counter -cubename kylin_sales_cube -output hdfs://ip-XXX-XXX-XXX-XXX.us-west-2.compute.internal:8020/kylin/kylin_metadata/kylin-b75c7f69-2ebf-c5c3-4a6e-b01f177d911f/kylin_sales_cube/hfile -input hdfs://ip-XXX-XXX-XXX-XXX.us-west-2.compute.internal:8020/kylin/kylin_metadata/kylin-b75c7f69-2ebf-c5c3-4a6e-b01f177d911f/kylin_sales_cube/cuboid/ -segmentId 0d22a9ac-5256-02cd-a5b9-44de5247871f -metaUrl kylin_metadata@hdfs,path=hdfs://ip-XXX-XXX-XXX-XXX.us-west-2.compute.internal:8020/kylin/kylin_metadata/kylin-b75c7f69-2ebf-c5c3-4a6e-b01f177d911f/kylin_sales_cube/metadata -hbaseConfPath hdfs://ip-XXX-XXX-XXX-XXX.us-west-2.compute.internal:8020/kylin/kylin_metadata/kylin-b75c7f69-2ebf-c5c3-4a6e-b01f177d911f/hbase-conf.xml
>     at org.apache.kylin.engine.spark.SparkExecutable.doWork(SparkExecutable.java:347)
>     at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:167)
>     at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:71)
>     at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:167)
>     at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:114)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     at java.lang.Thread.run(Thread.java:748)}}
>  
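> In case it helps narrow things down: "Could not initialize class" usually 
> means HFile's static initializer already failed once (for HBase 1.x it 
> typically touches the metrics classes at class-load time), so one thing worth 
> double-checking is whether those classes really reach the executors. A sketch 
> of what I mean (the kylin.properties key is per the Kylin docs, if I read 
> them right):
> # Hypothetical check: confirm the metrics classes HFile needs at class-load
> # time are present in the jar shipped to executors:
> unzip -l /usr/lib/hbase/lib/metrics-core-2.2.0.jar | grep -i MetricsRegistry
> # Extra jars can also be appended to the spark-submit --jars list via
> # kylin.properties, e.g.:
> # kylin.engine.spark.additional-jars=/usr/lib/hbase/lib/metrics-core-2.2.0.jar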
> Please suggest how to troubleshoot this issue.
> Thank you and kind regards,
> Carlos Molina



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
