[ https://issues.apache.org/jira/browse/KYLIN-5132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17469737#comment-17469737 ]

Yuming Cao commented on KYLIN-5132:
-----------------------------------

Hi [~zhangyaqian], I recently found time to continue working on this issue. I 
noticed that Kylin 4.0.1 was released yesterday, so I redeployed it today.

I completed the preparation described in the documentation, but when I synced 
the table in the data source, Kylin reported an error: cannot get 
HiveTableMeta. After checking kylin.log, I suspect the cause is incompatible 
versions. My component versions are CDH 6.3.2, Hadoop 3.0.0, Hive 2.1.1, and 
Spark 2.4.0, while the Spark downloaded by "download-spark.sh" is 
spark-2.4.7-bin-hadoop2.7.
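The version suspicion above is plausible: the stock spark-2.4.7-bin-hadoop2.7 distribution bundles Hive 1.2.1 client jars (e.g. hive-exec-1.2.1.spark2.jar in its jars/ directory), while this cluster runs Hive 2.1.1, and HIVE_STATS_JDBC_TIMEOUT was removed from HiveConf in Hive 2.x, which matches the NoSuchFieldError below. A minimal shell sketch of that comparison (the jar name is assumed from the stock tarball layout, not taken from the attached logs):

```shell
# Extract the bundled Hive client version from a jar name found under
# $KYLIN_HOME/spark/jars in a stock spark-2.4.7-bin-hadoop2.7 install
# (assumption: the tarball ships hive-exec-1.2.1.spark2.jar).
jar="hive-exec-1.2.1.spark2.jar"
bundled="${jar#hive-exec-}"          # -> 1.2.1.spark2.jar
bundled="${bundled%%.spark2.jar}"    # -> 1.2.1
cluster="2.1.1"                      # Hive version on the CDH 6.3.2 cluster

# Compare major versions: a Hive 1.x client against a Hive 2.x metastore
# is the classic trigger for NoSuchFieldError: HIVE_STATS_JDBC_TIMEOUT.
if [ "${bundled%%.*}" -lt "${cluster%%.*}" ]; then
  echo "bundled Hive client $bundled is older than cluster Hive $cluster"
fi
```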

I also tried to build a cube, but it failed.

Attached are kylin.log, the Spark log from the cube build, and the log 
downloaded from the monitor page. [^05799056-b559-4f8c-8d7c-5db030229392-00.log]

> controller.TableController:199 : HIVE_STATS_JDBC_TIMEOUT 
> java.lang.NoSuchFieldError: HIVE_STATS_JDBC_TIMEOUT
> ------------------------------------------------------------------------------------------------------------
>
>                 Key: KYLIN-5132
>                 URL: https://issues.apache.org/jira/browse/KYLIN-5132
>             Project: Kylin
>          Issue Type: Bug
>          Components: Spark Engine
>    Affects Versions: v4.0.1
>         Environment: CDH 6.3.2;Hadoop 3.0.0;Hive 2.1.1;Spark 2.4.0
>            Reporter: Yuming Cao
>            Priority: Major
>         Attachments: 05799056-b559-4f8c-8d7c-5db030229392-00.log, 
> 20220106142727.png, 20220106142758.png, 
> PM01_05799056b5594f8c8d7c5db03022939200.log, QQ截图20211123180315.png, kylin 
> (2)-1.log, kylin.log
>
>
> I tried it twice in total:
> On the first attempt, I added SPARK_HOME to /etc/profile, and when I 
> started Kylin it prompted: "Skip spark which not owned by kylin. SPARK_HOME 
> is /opt/cloudera/parcels/CDH/lib/spark and KYLIN_HOME is /opt/kylin.
> Please download the correct version of Apache Spark, unzip it, rename it 
> to 'spark' and put it in /opt/kylin directory.
> Do not use the spark that comes with your hadoop environment."
> So I deleted SPARK_HOME from the environment variables, then used 
> "$KYLIN_HOME/bin/download-spark.sh" to download 
> "spark-2.4.7-bin-hadoop2.7.tgz", which was unpacked automatically under 
> $KYLIN_HOME/spark. When I started Kylin again and used Load Table From 
> Tree, it prompted "Failed to take action", and the log contains the 
> following error:
> "ERROR [http-bio-7070-exec-6] controller.TableController:199: 
> HIVE_STATS_JDBC_TIMEOUT
> java.lang.NoSuchFieldError: HIVE_STATS_JDBC_TIMEOUT"
> What should I do now? Does the Spark under $KYLIN_HOME need configuration 
> changes, or is something else required?
> The error portion of the log is attached.
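The "Skip spark which not owned by kylin" message quoted above comes from a startup check that rejects any SPARK_HOME outside of KYLIN_HOME. A hedged sketch of that check, paraphrased from the message text (the actual script logic in kylin.sh may differ):

```shell
# Sketch of Kylin 4.x's startup behavior: it only accepts a Spark that
# lives inside KYLIN_HOME (assumption based on the quoted error message).
check_spark_home() {
  spark_home=$1
  kylin_home=$2
  case "$spark_home" in
    "$kylin_home"/*) echo "using $spark_home" ;;
    *) echo "Skip spark which not owned by kylin" ;;
  esac
}

# The CDH-bundled Spark is rejected; the one under $KYLIN_HOME/spark,
# fetched by download-spark.sh, is accepted.
check_spark_home /opt/cloudera/parcels/CDH/lib/spark /opt/kylin
check_spark_home /opt/kylin/spark /opt/kylin
```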



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
