It is recommended to run the scripts in the $KYLIN_HOME/bin directory to 
check whether the Kylin environment and its dependencies are properly 
deployed, including:
check-env.sh
check-hive-usability.sh
check-port-availability.sh
find-hadoop-conf-dir.sh
find-hbase-dependency.sh
find-hive-dependency.sh
find-kafka-dependency.sh
find-spark-dependency.sh

> On Nov 6, 2019, at 11:31, 王文辉 <[email protected]> wrote:
> 
> Hello, Sir:
> I have set up Kylin in both a demo environment and a production 
> environment, and kylin.properties is not configured in either of them. The 
> production Kylin does not report an error when running the job, but the 
> demo environment shows me the error.
> 
> 
> ------------------ Original Message ------------------
> From: "Yaqian Zhang"<[email protected]>;
> Sent: Wednesday, November 6, 2019, 10:14 AM
> To: "user"<[email protected]>;
> Subject: Re: cube problem
> 
> Hi sir:
> 
> This article on the official website mentions the problem you have 
> encountered.
> 
> http://kylin.apache.org/blog/2016/06/10/standalone-hbase-cluster/
> 
> You can try again in the following way.
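> 
> A hedged sketch of the kind of change that article suggests (property 
> names assumed from Kylin 2.x; older releases use kylin.hdfs.working.dir 
> and kylin.hbase.cluster.fs — verify against your version):
> 
> ```properties
> # kylin.properties — use fully qualified HDFS URLs so FileSystem path
> # checks resolve against the intended cluster (nameservice assumed from
> # the stack trace in this thread).
> kylin.env.hdfs-working-dir=hdfs://nameservice1/kylin_metadata
> kylin.storage.hbase.cluster-fs=hdfs://nameservice1
> ```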
> 
> <[email protected]>
> 
>> On Nov 6, 2019, at 09:22, 王文辉 <[email protected]> wrote:
>> 
>> All hdfs commands execute without problems. The cube job log is as follows:
>> 
>> java.lang.IllegalArgumentException: Wrong FS: 
>> hdfs://nameservice1/kylin_metadata/kylin-..., expected: hdfs://node111a11:8020
>> at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:662)
>> at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:22
>> at org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:113)
>> at org.apache.hadoop.hdfs.DistributedFileSystem$20.doCall(DistributedFileSystem.java:1265)
>> at org.apache.hadoop.hdfs.DistributedFileSystem$20.doCall(DistributedFileSystem.java:1261)
>> at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81
>> at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1261)
>> at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1418)
>> at org.apache.kylin.common.util.HadoopUtil.getFilterOnlyPath(HadoopUtil.java:146)
>> at org.apache.kylin.engine.mr.steps.CreateDictionaryJob$2.getDictionary(CreateDictionaryJob.java:78)
>> at org.apache.kylin.cube.cli.DictionaryGeneratorCLI.processSegment(DictionaryGeneratorCLI.java:62)
>> at org.apache.kylin.cube.cli.DictionaryGeneratorCLI.processSegment(DictionaryGeneratorCLI.java:49)
>> at org.apache.kylin.engine.mr.steps.CreateDictionaryJob.run(CreateDictionaryJob.java:66)
>> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>> at org.apache.kylin.engine.mr.common.HadoopShellExecutable.doWork(HadoopShellExecutable.java:62)
>> at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:64)
>> at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:125)
>> at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:144)
>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149
>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624
>> at java.lang.Thread.run(Thread.java:748)
>> result code: 2
>> 
> 
