Hello,
In the /etc/hadoop/conf folder, I can see under the fs.defaultFS property the
following value: hdfs://:8020. The file indicates that
the content was automatically generated by Cloudera Manager. Are there other
points to check in this file?
According to the trace of
Kylin has located the Hadoop config folder as "/etc/hadoop/conf"; is this
the right location? Please check whether the core-site.xml under this
folder has the proper configuration, for example, 'fs.defaultFS' points to
the HDFS.
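That check can be sketched as a small self-contained script; the sample core-site.xml and the hostname "namenode.example.com" below are illustrative stand-ins, not values from this thread:

```shell
# Write a sample core-site.xml to a temp dir so the snippet runs anywhere,
# then extract the fs.defaultFS value the way you would on a real cluster
# (pointing at /etc/hadoop/conf/core-site.xml instead).
conf_dir=$(mktemp -d)
cat > "$conf_dir/core-site.xml" <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode.example.com:8020</value>
  </property>
</configuration>
EOF
# Pull out the <value> element that follows the fs.defaultFS <name> element
fs_default=$(grep -A1 '<name>fs.defaultFS</name>' "$conf_dir/core-site.xml" \
  | sed -n 's/.*<value>\(.*\)<\/value>.*/\1/p')
echo "fs.defaultFS = $fs_default"
```

The value should be a full hdfs:// URL with a hostname; an empty host (as in hdfs://:8020) suggests the file was not generated correctly.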
2018-02-06 16:58 GMT+08:00 BELLIER Jean-luc
All looks good; can't believe it cannot create the folder on HDFS.
Please check further.
2018-02-06 18:58 GMT+08:00 BELLIER Jean-luc :
> Hello,
>
>
>
> In the /etc/hadoop/conf folder, I can see under the fs.defaultFS property
> the following value :
Hello,
1. I did the following (in this order):
· Stop the kylin process (kylin.sh stop)
· Rename the kylin.log and kylin.out files in $KYLIN_HOME/logs
· Checked the environment (check-env.sh)
· Start the kylin process (kylin.sh start)
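The log-rotation part of those steps can be sketched against a temporary stand-in for $KYLIN_HOME, so the sketch runs without a Kylin install (on a real node you would run kylin.sh stop/start and check-env.sh around it):

```shell
# Stand-in for $KYLIN_HOME/logs: create the two log files a running Kylin
# instance would have left behind, then rename them with a date suffix so
# the next start writes fresh kylin.log and kylin.out files.
kylin_home=$(mktemp -d)
mkdir -p "$kylin_home/logs"
touch "$kylin_home/logs/kylin.log" "$kylin_home/logs/kylin.out"
for f in kylin.log kylin.out; do
  mv "$kylin_home/logs/$f" "$kylin_home/logs/$f.$(date +%Y%m%d)"
done
ls "$kylin_home/logs"
```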
So I get new
Hi,
I have to generate a big cube, about 400 M rows of historical data (and
many dimensions) on a small-to-mid-size cluster. To avoid a very big cube
building process, I divided this process into month periods (about
30-40 M rows per month). When this process finishes, an hourly load
process will
Maybe the fact that I cannot load Hive tables is due to the models loading
indefinitely. Is there a workaround to see the tables without
loading the models?
Thank you again for your help.
Best regards,
Jean-Luc.
From: BELLIER Jean-luc
Sent: Tuesday, 6 February 2018 16:34
To:
Since you used a vendor's Kylin package, please consider asking on their
mailing list.
Original message
From: 谢巍盛
Date: 2/6/18 5:36 PM (GMT-08:00)
To: user@kylin.apache.org
Subject: Re: Unable to build cube with Spark
Hi Shaofeng,
The
Okay, if that's the case, I assume setting up Kylin on Hadoop with the
Apache version instead of CDH/HDP won't have this issue.
Thanks.
On 2/7/18 9:44 AM, Ted Yu wrote:
Since you used a vendor's Kylin package, please consider asking on
their mailing list.
Original message
Hi Weisheng,
Two questions:
What's your Kylin version? And what's the data source of this cube, Hive or
Kafka?
2018-02-06 21:10 GMT+08:00 谢巍盛 :
> Hi,
>
> we followed the steps in
> http://kylin.apache.org/docs21/tutorial/cube_spark.html to try to build a
> cube with Spark,
The default configuration for Spark is very small; you need to tweak some
parameters (like below) or enable Spark dynamic resource allocation:
kylin.engine.spark-conf.spark.executor.memory=1G
kylin.engine.spark-conf.spark.executor.cores=2
kylin.engine.spark-conf.spark.executor.instances=1
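If dynamic resource allocation is preferred over fixed executor counts, the equivalent kylin.properties entries would look roughly like this (the min/max values are illustrative; dynamic allocation also requires the external shuffle service to be enabled on the YARN NodeManagers):

```
kylin.engine.spark-conf.spark.dynamicAllocation.enabled=true
kylin.engine.spark-conf.spark.shuffle.service.enabled=true
kylin.engine.spark-conf.spark.dynamicAllocation.minExecutors=1
kylin.engine.spark-conf.spark.dynamicAllocation.maxExecutors=10
```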
If you
Hi Shaofeng,
The Kylin used on our end is apache-kylin-2.2.0-bin-cdh57; the
data source is Hive.
We managed to fix this issue by adding
spark-network-common_2.10-1.6.0-cdh5.9.2.jar under $SPARK_HOME/lib/
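The fix amounts to copying the vendor jar into Spark's lib directory; a stand-in sketch using temp directories in place of $SPARK_HOME and the real CDH jar location, so it runs anywhere (the jar here is an empty placeholder):

```shell
# Temp dirs replace $SPARK_HOME and the CDH jar directory; on a real
# cluster the source would be the CDH Spark installation's jar directory.
spark_home=$(mktemp -d)
cdh_jars=$(mktemp -d)
mkdir -p "$spark_home/lib"
jar=spark-network-common_2.10-1.6.0-cdh5.9.2.jar
touch "$cdh_jars/$jar"                 # placeholder for the real jar
cp "$cdh_jars/$jar" "$spark_home/lib/" # the actual fix: put it on Spark's classpath
ls "$spark_home/lib"
```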
Still, it seems weird to us; we didn't set up Spark for Kylin, somehow
Kylin
Yes, I did that. I will come back with more logs.
Regards,
Manoj
From: ShaoFeng Shi [mailto:shaofeng...@apache.org]
Sent: Wednesday, February 07, 2018 6:46 AM
To: user
Subject: Re: apache kylin 2.1 - Spark Cube Building
The default configuration for Spark is very small;
CDH is also supported. But the vendors may have customizations on Spark, so
sometimes you may need to use their Spark release or copy additional
jars into the Spark that Kylin ships.
The information you provided is limited; in the first post you said there
is no Spark option in the cube wizard,
1. We were unable to build the cube; the issue was later traced to a missing
Spark library, which is weird, because we hadn't set up Kylin with Spark
by that time; we just wanted to try it on MR first.
2. When we tried to set up Kylin with Spark following the tutorial on
Kylin's official site, we cannot
I have these configs for Spark; I guess this should be enough to process a huge
volume of data. Another question: how can we have the by-segment cubing build
process for Spark instead of the by-layer one? Please advise.
kylin.engine.spark-conf.spark.yarn.queue=RCMO_Pool
Hello,
I have cleaned all the environments (HBase, Hive) and restarted the sample.sh
script again. In the output messages, I see the following several times:
2018-02-06 09:43:21,034 INFO [main] util.ZookeeperDistributedLock:154 :
4...@node2.hades.rte-france.com acquired lock at
Hello,
I thank you for your feedback and the information about the kylin.properties
file, which gives lots of information.
To answer your question, here is the result of the
./bin/find-hadoop-conf-dir.sh -v command:
bellierjlu@node2:~$ $KYLIN_HOME/bin/find-hadoop-conf-dir.sh -v
Turn on verbose
Hi,
we followed the steps in
http://kylin.apache.org/docs21/tutorial/cube_spark.html to try to build a
cube with Spark, but couldn't get it working. The cube engine on the web UI
didn't offer any option other than MR; it didn't show Spark (beta) as
the tutorial shows. Does anyone know what the problem