Querying without a filter on such a high-cardinality column is not a good
use of Kylin.
This query pulls all the data back, which looks more like an ETL job than
an OLAP-style query.
Kylin is designed for low-latency aggregation queries, which means users
should narrow down their queries with filters and group-bys.
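As an illustration of "narrowing down" (a sketch only: the column and table names are taken from the query later in this thread, but the date range is invented), an aggregation query with a filter might look like:

```sql
-- Hypothetical example: aggregate over low-cardinality dimensions
-- and filter on a date range, instead of pulling raw rows back.
SELECT F.BRAN_CODE,
       F.SET_DATE,
       SUM(F.TRANS_AMT) AS TOTAL_AMT
FROM NY.TRANS_FACT F
WHERE F.SET_DATE BETWEEN '2016-09-01' AND '2016-09-15'  -- made-up range
GROUP BY F.BRAN_CODE, F.SET_DATE
```

A query of this shape lets Kylin answer from a small pre-aggregated cuboid instead of scanning detail rows.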
I haven't applied filters on the high-cardinality column; account no. has a
cardinality of about 2.4 million (240w).
What can I do when I need to retain the high-cardinality column (account no.)?
Are the filters applied when creating the model?
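One way to keep account no. in the result while still letting Kylin aggregate (a sketch under assumptions, not a verified tuning recipe; the branch code and date values are invented) is to group by it but filter first, so only a small slice of accounts is scanned:

```sql
-- Sketch: ACCT_NO stays in the result, but a filter keeps the scan small.
-- The BRAN_CODE and SET_DATE values here are for illustration only.
SELECT F.ACCT_NO,
       SUM(F.TRANS_AMT) AS TOTAL_AMT
FROM NY.TRANS_FACT F
WHERE F.BRAN_CODE = '0001'
  AND F.SET_DATE = '2016-09-15'
GROUP BY F.ACCT_NO
```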
2016-09-18 9:54 GMT+08:00 hongbin ma :
Hi Mars
the query is using cuboid 15 (0xF, binary 1111), which means your query
involves the last four dimensions in the row key. From "Total scanned row:
100245548" we can see the cuboid is hardly pre-aggregated at all. Can you
check the last four dimensions of the row key? Do they have very high
cardinality?
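To answer the cardinality question, the distinct counts can be checked directly against the source tables (a sketch: the table and column names are taken from the query in this thread, and the queries assume Hive-style SQL access to those tables):

```sql
-- Check the cardinality of the join and dimension columns at the source.
SELECT COUNT(DISTINCT ACCT_NO)   FROM NY.ACCOUNT_DIM;
SELECT COUNT(DISTINCT BRAN_CODE) FROM NY.TRANS_FACT;
```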
OK, the Kylin version is 1.5.2.1 and the fact table has 200,000,000
records. My cube is very simple: a fact table of transactions, a dimension
table of accounts, and a dimension table of branches. The account table's
cardinality is about 2.4 million (240w) records. They are left-joined on an
acct_no column.
2016-09-16 23:56 GMT+08:00 Luke :
Hi Mars,
You are trying to query data without a GROUP BY; Kylin may not perform
very well without tuning your cube.
Also, we can't help you with just "log as below..."; please provide more
detail about your Kylin version, source data, metadata, and so on.
Thanks.
Best Regards,
Luke
Hello all,
My query SQL 'SELECT
A.ACCT_NO, F.BRAN_CODE, F.SET_DATE, F.ACCT_NO, F.DC_FLAG, F.TRANS_AMT
FROM NY.TRANS_FACT F LEFT JOIN NY.ACCOUNT_DIM A ON F.ACCT_NO = A.ACCT_NO
LIMIT 100' against a cube (size: 3.6 GB, fact table with 200,000,000
records) failed.
The Kylin log is as follows: