I'm not sure I understand your question clearly. Could you give more
information about it, ideally with a good sample? You can also forward
the code you found and describe what you think happened.
--
Sent from: http://apache-kylin.74782.x6.nabble.com/
It seems your Java runtime environment was not clean. Please check the
JAVA_HOME and PATH system variables, and use the echo command to see what
they output.
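For example, a quick check like the following (assuming a POSIX shell) shows which JDK the shell actually resolves:

```shell
# Print the Java-related environment variables; both should point at
# the same, intended JDK installation.
echo "JAVA_HOME=$JAVA_HOME"
echo "PATH=$PATH"
# Show which java binary is actually resolved from PATH; it may differ
# from JAVA_HOME if another JDK appears earlier on PATH.
command -v java || echo "no java found on PATH"
```

If `command -v java` prints a path outside JAVA_HOME, that is usually the "unclean environment" causing the error.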
By the way, Kylin can also run on Hadoop clusters that use JDK 1.7, with
just a simple modification. The steps are like this:
1. modify the
This looks more like a Kerberos authentication failure caused by time
synchronization problems. I recommend checking whether time
synchronization is working. Then you can run the beeline connection
command in a terminal on the machine where you run Kylin, to see if it
works properly.
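As a concrete sketch (the HiveServer2 host, port, and Kerberos realm below are placeholders, not values from your cluster):

```shell
# Check the clock: Kerberos rejects authentication when the clock skew
# between client and KDC exceeds its limit (5 minutes by default).
date -u
# On systemd-based machines, this reports whether NTP sync is active:
command -v timedatectl >/dev/null && timedatectl status || echo "timedatectl not available"
# Then, from the machine running Kylin, try a beeline connection.
# Replace host, port, and principal with your own HiveServer2 settings.
command -v beeline >/dev/null \
  && beeline -u "jdbc:hive2://your-hs2-host:10000/default;principal=hive/_HOST@YOUR.REALM" \
  || echo "beeline not on PATH"
```

If beeline connects fine from that machine, the Kerberos setup itself is probably healthy and the problem is elsewhere.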
Good luck!
This seems to be a version mismatch error. Please check the guava.jar
version in your Hadoop environment. Try the command below:
eg. CDH Env
find /opt/cloudera -name "guava*.jar"
Then list the jars in the hadoop lib folder and the hbase lib folder, and
check whether their guava.jar versions are the same. If not sure, please
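For instance, the comparison can be done in one pass (the /usr/lib paths below are typical defaults and may differ on your cluster):

```shell
# Search the common CDH, Hadoop and HBase locations for guava jars and
# print the distinct file names; more than one version in the output
# usually means a classpath conflict.
for dir in /opt/cloudera /usr/lib/hadoop/lib /usr/lib/hbase/lib; do
  if [ -d "$dir" ]; then
    find "$dir" -name "guava*.jar" -exec basename {} \;
  fi
done | sort -u
```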
Hi vishalchm,
There are many possibilities, and I think the information you gave is not
enough! Please tell us which Kylin version you are using and describe the
data, e.g. the column cardinality. It would also help if you could upload
some Kylin log info, which is a better way to figure out the root cause.
Hi vishalchm,
Thanks for your message. I looked at some info from the logs, but they are
incomplete, so I could not find any helpful messages. Also, your data
column characteristics look fine. What kind of aggregation group did you
design? Please show it. I can see your job is still running; if it was the
Hi,
I think you can upload Kylin's diagnosis package, which includes the logs,
and then we can help you find the root cause. Maybe your problem was caused
by resource limits or something else.
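A minimal sketch of generating it, assuming $KYLIN_HOME points at your Kylin installation (the diag.sh script ships in Kylin's bin directory):

```shell
# Run Kylin's built-in diagnosis script; it bundles logs, configuration
# and metadata into a single package and prints where it was written.
if [ -x "$KYLIN_HOME/bin/diag.sh" ]; then
  "$KYLIN_HOME/bin/diag.sh"
else
  echo "diag.sh not found; check that KYLIN_HOME is set correctly"
fi
```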
Hi, vishalchm:
From your reply, I would judge that your resources may have been busy the
first time you ran this job. Please check it in detail the next time you
meet the same problem. By the way, here is a tip: you can use the yarn
command to pull all logs from the MapReduce job. Like this:
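The command itself was cut off in the original message; the standard YARN CLI for fetching a job's aggregated logs looks like this (the application ID below is a made-up example; use the one from the Kylin step output or the ResourceManager UI):

```shell
# Fetch all container logs of a finished MapReduce job from YARN and
# save them to a file for inspection.
APP_ID=application_1500000000000_0001   # hypothetical ID; replace with yours
if command -v yarn >/dev/null; then
  yarn logs -applicationId "$APP_ID" > "${APP_ID}.log"
else
  echo "yarn CLI not on PATH"
fi
```

Note that `yarn logs` only works after the application has finished and log aggregation is enabled on the cluster.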