Hi all,

I installed the Kylin binary package with the HBase namespace patch (KYLIN-2846) and have been using it for nearly a month. My work always needs several dimensions (some with large cardinality, such as ID numbers) and a dozen RAW measures.

I have some questions about building cubes. There are many reported success stories of cubes built over tens of billions of rows with sub-second query latency, but in my actual use, builds over tens of millions of rows sometimes fail, and my queries are slow with a WHERE filter and slower still with LIKE (10 million rows costs about 40 seconds).
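For reference, this is roughly how I run the slow query, through Kylin's JDBC driver; the host, project, table, and column names below are placeholders, not my real schema:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class KylinLikeQuery {
    public static void main(String[] args) throws Exception {
        // Kylin ships a JDBC driver; the URL form is jdbc:kylin://<host>:<port>/<project>
        Class.forName("org.apache.kylin.jdbc.Driver");
        Properties props = new Properties();
        props.put("user", "ADMIN");
        props.put("password", "KYLIN");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:kylin://kylin-host:7070/my_project", props);
             Statement st = conn.createStatement()) {
            // A plain equality WHERE on this column is already slow for me,
            // and the LIKE version below is slower still.
            long start = System.currentTimeMillis();
            try (ResultSet rs = st.executeQuery(
                    "SELECT id_num, price FROM fact_table "
                    + "WHERE id_num LIKE '%1234%'")) {
                int rows = 0;
                while (rs.next()) rows++;
                System.out.printf("%d rows in %d ms%n",
                        rows, System.currentTimeMillis() - start);
            }
        }
    }
}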


And here is a strange phenomenon: I have a cube over 200 million rows, with three dimensions and no lookup table, and it builds fine. But when I add a lookup table with 1,400 rows and four RAW measures (two of them Chinese strings), the build fails at step 3, and the output is 'Job Counters \n failed reduce tasks=4'. I found that some key values in the fact table have no match in the inner-joined lookup table (a check that surfaces them is sketched below); does that cause the error? Are there any specific constraints when building a cube? For example, I have noticed that each dimension should pick out a unique lookup row, or an error occurs.
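To show the mismatch concretely, a Hive-side check along these lines (with placeholder host, database, and table names) counts the fact rows that have no match in the lookup table:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class FindUnmatchedKeys {
    public static void main(String[] args) throws Exception {
        // Standard Hive JDBC driver and HiveServer2 URL; adjust to your cluster.
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://hive-host:10000/default", "hive", "");
             Statement st = conn.createStatement()) {
            // Count fact rows whose join key has no row in the lookup table.
            // With an inner-join model these rows drop out of the join; a
            // non-zero count here is the mismatch I am asking about.
            try (ResultSet rs = st.executeQuery(
                    "SELECT COUNT(*) FROM fact_table f "
                    + "LEFT JOIN lookup_table l ON f.join_key = l.join_key "
                    + "WHERE l.join_key IS NULL")) {
                rs.next();
                System.out.println("fact rows with no lookup match: " + rs.getLong(1));
            }
        }
    }
}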


Turning to streaming cubes, I have met three problems.
Firstly, when I add a streaming table, the Advanced Setting section in the web UI only shows Timeout; Buffer Size and Margin are missing.
Secondly, when I save my table and then browse the table schema, the Streaming Cluster config (which had been set before) is blank, and I can't edit it: clicking Save throws the error message 'Failed to deal with the request: StreamingConfig Illegal'.
Thirdly, after creating the model and cube successfully and starting a build, 'Oops... Could not find Kafka dependency' happened. My Kafka is clearly ready, because I can consume the topic from Java (a stripped-down version of that consumer follows).
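The consumer I use for that sanity check looks roughly like this (broker address, topic, and group id are placeholders):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class KafkaSmokeTest {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka-host:9092");
        props.put("group.id", "kylin-smoke-test");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("auto.offset.reset", "earliest");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my_topic"));
            // poll(Duration) is the Kafka 2.x signature; older clients use poll(long).
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> r : records) {
                System.out.printf("offset=%d value=%s%n", r.offset(), r.value());
            }
        }
    }
}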


A long email; thanks for reading, and I hope for your reply.




Sincerely,
Wei Li
