[ 
https://issues.apache.org/jira/browse/KYLIN-2869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaofeng SHI closed KYLIN-2869.
-------------------------------
    Resolution: Invalid

I realized this is not an issue: the metadata should live on the HBase cluster's FS, 
not in the working HDFS.

I raised the question because, in this environment, I saw an error saying a 
dictionary file couldn't be loaded from the default DFS path, while it actually 
exists in the working HDFS dir. Later I realized it was a problem from migrating 
kylin 2.0 to kylin 2.1: in 2.0, HBaseResourceStore puts large files on the 
working FS, while in 2.1 it puts them on HBase's FS. In this environment, since I 
didn't specify an FS for HBase, it uses the default FS, but the legacy data 
wasn't migrated.
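The behavior described above can be sketched as follows. This is a minimal, hypothetical illustration (not Kylin's actual code): a plain Map stands in for a Hadoop Configuration, and the method mirrors what HBaseConnection.newHBaseConfiguration() is described as doing in 2.1, overriding the default FS only when "kylin.storage.hbase.cluster-fs" is set.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the override logic discussed in this issue.
// A Map stands in for org.apache.hadoop.conf.Configuration; names are
// illustrative, not Kylin's real implementation.
public class HBaseFsOverride {

    // If kylin.storage.hbase.cluster-fs is set, point fs.defaultFS at it so
    // large metadata files land on HBase's FS; otherwise keep the default FS.
    // The working dir (kylin.env.hdfs-working-dir) is intentionally not
    // consulted: per the resolution above, metadata belongs on the HBase
    // cluster's FS, not the working HDFS.
    static Map<String, String> newHBaseConfiguration(Map<String, String> kylinProps,
                                                     String defaultFs) {
        Map<String, String> conf = new HashMap<>();
        conf.put("fs.defaultFS", defaultFs);
        String clusterFs = kylinProps.get("kylin.storage.hbase.cluster-fs");
        if (clusterFs != null && !clusterFs.isEmpty()) {
            conf.put("fs.defaultFS", clusterFs); // override with HBase's FS
        }
        return conf;
    }

    public static void main(String[] args) {
        // Without cluster-fs set (as in the reporter's env), the default FS
        // (e.g. WASB on HDInsight) is kept, so legacy files written elsewhere
        // by 2.0 would not be found.
        Map<String, String> props = new HashMap<>();
        System.out.println(
            newHBaseConfiguration(props, "wasb://container@account").get("fs.defaultFS"));

        // With cluster-fs set, the default FS is overridden.
        props.put("kylin.storage.hbase.cluster-fs", "hdfs://namenode:8020");
        System.out.println(
            newHBaseConfiguration(props, "wasb://container@account").get("fs.defaultFS"));
    }
}
```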

> When using non-default FS as the working-dir, Kylin puts large metadata file 
> to default FS
> ------------------------------------------------------------------------------------------
>
>                 Key: KYLIN-2869
>                 URL: https://issues.apache.org/jira/browse/KYLIN-2869
>             Project: Kylin
>          Issue Type: Bug
>          Components: Metadata
>    Affects Versions: v2.1.0
>            Reporter: Shaofeng SHI
>            Assignee: Shaofeng SHI
>             Fix For: v2.2.0
>
>
> Azure HDInsight uses WASB as the default FS. I set a fully qualified HDFS 
> name as "kylin.env.hdfs-working-dir"; when building a cube, a large dict 
> file was put to the default FS. This appears to be a bug in 
> HBaseConnection.newHBaseConfiguration(): it overwrites the default FS setting 
> only when the user sets "kylin.storage.hbase.cluster-fs". When the user sets 
> a non-default FS for the working dir, it does nothing.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)