Hello Team,
I am new to Kylin and have just created the sample cube. However, when I
create a model with my own data (a very small data set, only 10 rows in the
fact table), the build fails with the error message below. I am using the
CDH quickstart VM.
Maybe this is a YARN
Zhixiong Chen created KYLIN-2544:
Summary: lookup table show wrong join type
Key: KYLIN-2544
URL: https://issues.apache.org/jira/browse/KYLIN-2544
Project: Kylin
Issue Type: Bug
Hi Xing, the picture wasn't displayed.
What's the value of "Source records"?
On Apr 11, 2017, at 10:24 AM, chenx...@mi-ya.com.cn wrote:
> hello:
> my question is:
> the cube build succeeded and its status is READY, but there is no result in
> the tables. See the picture below:
hello:
my question is:
the cube build succeeded and its status is READY, but there is no result in
the tables. See the picture below:
Hangzhou Miya Information Technology Co., Ltd. (杭州米雅信息科技有限公司)
Big Data Developer: Chen Xing
Email: chenx...@mi-ya.com.cn
Phone: 13161708828
Address: 11F, Tower B, Building 1, 459 Jianghong Road, Binjiang District, Hangzhou
I also encountered the same problem; I would like to know how to solve it.
--
View this message in context:
http://apache-kylin.74782.x6.nabble.com/jira-Created-KYLIN-2511-org-apache-kylin-job-exception-ExecuteException-org-apache-kylin-job-exceptio-tp7450p7623.html
Sent from the Apache Kylin mailing list archive at Nabble.com.
I am using Kylin 2.0, Spark 1.6, and Hadoop 2.6.1. Building a Kylin cube
with Spark (beta) fails; the error log is below.
How can I fix it?
OS command error exit with 1 -- export
HADOOP_CONF_DIR=/opt/kylin/hadoop-conf && /opt/spark/bin/spark-submit
--class org.apache.kylin.common.util.SparkEntry --conf
My understanding is that your two business tables belong to two different cubes, and after the cubes are built you want to run a join query across them?
If so, a join across the two tables will not return results; tables that need to be joined in a query should instead be built into the same cube.
On 2017-04-13 21:33:10, "lizh...@mobiexchanger.com [via Apache Kylin]"
wrote:
Hello, I have a question:
I am testing the 2.0 beta with the sample data set, using Tableau 10.2 and a
SQL Server 2016 linked server.
--ISSUE1: In Tableau 10.2, with custom SQL, the preview works, but the full
query returns only 1 row with nulls for the following sample query:
select part_dt, sum(price) as total_selled,
My Hive fact table is partitioned on an ingestion-date column, but I need to
build the cube and query it on the actual event-date column. Events can
arrive days or even weeks late. I want to build the cube incrementally
each day by specifying the ingestion date range. Does 2.0 support this
scenario?
> 1. Can cubes only be created within a project?
Right, a project is the namespace that separates cubes.
> 2. Can cubes be created dynamically via API?
Technically yes, but for the record that create-cube API is not stable. It
may evolve over time; use it at your own risk.
> 3. Can the HBase data in Kylin be loaded directly?
No. Cube data is not meant to be accessed via HBase directly.
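For what it's worth, cube operations can be driven over Kylin's REST API; a minimal sketch of triggering an incremental build (endpoint and payload per the Kylin REST API docs — PUT /kylin/api/cubes/{cube}/build in 1.x, some 2.x versions use /rebuild; the host, cube name, and time range below are placeholders, and ADMIN/KYLIN is only the sandbox default credential):

```python
import base64
import json
import urllib.request

KYLIN_HOST = "http://localhost:7070"  # placeholder host
CUBE_NAME = "my_cube"                 # placeholder cube name
AUTH = base64.b64encode(b"ADMIN:KYLIN").decode()  # sandbox default credentials

def build_cube(start_ms: int, end_ms: int) -> urllib.request.Request:
    """Prepare a PUT /kylin/api/cubes/{cube}/build request for one segment."""
    payload = json.dumps({
        "startTime": start_ms,   # segment range in epoch milliseconds
        "endTime": end_ms,
        "buildType": "BUILD",    # or "MERGE" / "REFRESH"
    }).encode()
    return urllib.request.Request(
        f"{KYLIN_HOST}/kylin/api/cubes/{CUBE_NAME}/build",
        data=payload,
        method="PUT",
        headers={
            "Authorization": f"Basic {AUTH}",
            "Content-Type": "application/json",
        },
    )

# Example: build the segment for 2017-04-13 (UTC)
req = build_cube(1492041600000, 1492128000000)
# urllib.request.urlopen(req)  # uncomment against a live Kylin instance
```

Creating cubes this way (POST with a cube descriptor) exists too, but as noted above that part of the API is unstable across versions.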
Try setting 'kylin.cube.algorithm=layer' in kylin.properties if it was 'auto'
previously. Layer cubing is more stable.
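For reference, the switch is a one-line change in conf/kylin.properties (value names as in the Kylin 1.6/2.0 config; 'auto' lets Kylin pick the algorithm per build, 'layer' forces the layered MR cubing):

```properties
# Force the layered MR cubing algorithm instead of auto-selection
kylin.cube.algorithm=layer
```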
On Thu, Apr 13, 2017 at 1:47 PM, rahulsingh
wrote:
> Hello All,
>
> I am facing this exception at step 18 build cube.
> I have increased yarn
Of course multiple cubes can co-exist in one project. Are these 2 cubes
sharing the same model?
On Apr 13, 2017, at 4:52 PM, lizh...@mobiexchanger.com wrote:
> Hello, I have a question:
> In Kylin I created one project, one model, and two cubes. My idea is to use the two cubes to compute tables for two business dimensions separately.
> But in my tests, after building, only one of the two cubes was cached. Can't both be used at the same time?
Hi, nice to meet you. We have recently been using your team's product and have run into some issues:
1. Can cubes only be created within a project?
2. Can cubes be created dynamically via API?
3. Can the HBase data in Kylin be loaded directly?
Please reply to the questions above, thanks.
MEX-Longer, Engineering Department
Shaofeng SHI created KYLIN-2543:
---
Summary: Still build dictionary for TopN group by column even
using non-dict encoding
Key: KYLIN-2543
URL: https://issues.apache.org/jira/browse/KYLIN-2543
Project:
It seems to be a Hadoop environment configuration issue.
One possible reason is that the "mapreduce.task.io.sort.mb" setting for the
failed job is too large. For detailed information, please refer to
https://issues.apache.org/jira/browse/MAPREDUCE-6194.
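If that is the cause, the map-side sort buffer can be capped for Kylin's MR jobs; a sketch assuming the override goes in conf/kylin_job_conf.xml (the per-job Hadoop config Kylin ships; the property name is standard Hadoop, and 100 MB is only an illustrative value that should stay well below the map task heap):

```xml
<!-- conf/kylin_job_conf.xml: cap the map-side sort buffer (in MB) -->
<property>
  <name>mapreduce.task.io.sort.mb</name>
  <value>100</value>
  <description>Keep well below the map task heap to avoid
  "Unable to initialize any output collector" (see MAPREDUCE-6194).</description>
</property>
```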
Bubble up final exception in failures
------------------ Original Message ------------------
From: "roger shi"
Date: Thu, Apr 13, 2017, 2:51
To: "dev"
Subject: Re: Error: java.io.IOException: Unable to initialize any output
Could you please attach the complete stack trace of the error?
From: 35925138 <35925...@qq.com>
Date: Apr 13, 2017, 13:45:09
To: dev
Subject: Error: java.io.IOException: Unable to initialize any output collector
Kylin version: 1.6.0