Thanks. I have created one now: https://issues.apache.org/jira/browse/KYLIN-4427

________________________________
From: Liukaige <[email protected]>
Sent: Thursday, March 12, 2020 4:24 PM
To: [email protected] <[email protected]>
Subject: Re: Wrong FileSystem error when trying to enable system cubes and Dashboard in Kylin 2.6.4
Looks like a bug. It would be nice if you could raise a JIRA to track this. Thanks.

Preeti Vipin <[email protected]> wrote on Thursday, March 12, 2020, 7:19 PM:

Hi,

I am trying to enable system cubes for the Dashboard using Kylin version 2.6.4. The tables are created correctly and the cube builds successfully, but there is no query or job data on the dashboard; it shows 0. We use Azure storage for Hive (the wasb:// file system), and I can see that no data is being written to the hive_metrics tables in Azure.

In the Kylin logs I see the error below:

2020-03-12 20:02:41,790 ERROR [metrics-blocking-reservoir-scheduler-0] hive.HiveReservoirReporter:119 : Wrong FS: wasb://*****.blob.core.windows.net/hive/warehouse/kylin.db/hive_metrics_query_cube_qa/kday_date=2020-03-12, expected: hdfs://*****-prod-bn01
java.lang.IllegalArgumentException: Wrong FS: wasb://*****.blob.core.windows.net/hive/warehouse/kylin.db/hive_metrics_query_cube_qa/kday_date=2020-03-12, expected: hdfs://*****-prod-bn01
    at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:666)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:214)
    at org.apache.hadoop.hdfs.DistributedFileSystem$26.doCall(DistributedFileSystem.java:1442)
    at org.apache.hadoop.hdfs.DistributedFileSystem$26.doCall(DistributedFileSystem.java:1438)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1454)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1448)
    at org.apache.kylin.metrics.lib.impl.hive.HiveProducer.write(HiveProducer.java:137)
    at org.apache.kylin.metrics.lib.impl.hive.HiveProducer.send(HiveProducer.java:122)
    at org.apache.kylin.metrics.lib.impl.hive.HiveReservoirReporter$HiveReservoirListener.onRecordUpdate(HiveReservoirReporter.java:117)
    at org.apache.kylin.metrics.lib.impl.BlockingReservoir.notifyListenerOfUpdatedRecord(BlockingReservoir.java:105)

I checked the Hive configs and the warehouse metastore dir correctly points to Azure.

I found another thread with a similar problem where they are trying to use S3 instead of HDFS:
http://apache-kylin.74782.x6.nabble.com/jira-Created-KYLIN-4385-KYLIN-system-cube-failing-to-update-table-when-run-on-EMR-with-S3-as-storageS-td14234.html

I also followed the recommendations here and enabled all the necessary config values:
https://www.mail-archive.com/[email protected]/msg04347.html

Is this a bug in Kylin or a configuration issue on my cluster? Any help or guidance is appreciated.

Thanks,
Preeti

--
Best regards,
Kaige Liu(刘凯歌)
"Do small things with great love."
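[Editor's sketch] For readers hitting the same error, a minimal illustration of what the stack trace points at: Hadoop's FileSystem.checkPath rejects any path whose scheme does not match the FileSystem instance it is called on, so asking the cluster's default (HDFS) FileSystem whether a wasb:// partition path exists throws the "Wrong FS" IllegalArgumentException, while resolving the FileSystem from the path itself does not. This is not Kylin's actual HiveProducer code; the class name, helper methods, and the container/account in the example path are hypothetical.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WrongFsSketch {

    // What the stack trace suggests is happening: the default (HDFS) FileSystem is asked
    // about a wasb:// path, and FileSystem.checkPath() rejects the mismatched scheme.
    static boolean partitionExistsViaDefaultFs(Configuration conf, Path partition) throws Exception {
        FileSystem defaultFs = FileSystem.get(conf);   // hdfs://*****-prod-bn01 on this cluster
        return defaultFs.exists(partition);            // IllegalArgumentException: Wrong FS
    }

    // Path-aware variant: let the path choose its own FileSystem (wasb, s3a, hdfs, ...).
    // Actually running this against wasb needs the hadoop-azure client and storage credentials.
    static boolean partitionExistsViaPathFs(Configuration conf, Path partition) throws Exception {
        FileSystem pathFs = partition.getFileSystem(conf);
        return pathFs.exists(partition);
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();      // would normally load the cluster's core-site.xml
        Path partition = new Path("wasb://[email protected]/hive/warehouse/"
                + "kylin.db/hive_metrics_query_cube_qa/kday_date=2020-03-12");
        System.out.println("partition scheme: " + partition.toUri().getScheme());
        System.out.println("default FS:       " + FileSystem.get(conf).getUri());
    }
}

Whether Kylin resolves this by deriving the FileSystem from the partition path is what KYLIN-4427 (and the similar S3 report in KYLIN-4385) would track; the sketch only shows why the exception message names two different file systems.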
