[ 
https://issues.apache.org/jira/browse/KYLIN-2788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaofeng SHI updated KYLIN-2788:
--------------------------------
    Attachment: ConvertoHFileOnS3.xlsx

Attaching the configuration parameters of this step. We can see that it uses 
"org.apache.hadoop.mapred.DirectFileOutputCommitter" as the committer class.

But HBase's HFileOutputFormat2.java is built on the org.apache.hadoop.mapreduce 
API, so the job may need 
"org.apache.hadoop.mapreduce.lib.output.DirectFileOutputCommitter" as the 
committer instead.
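
For what it's worth, here is a minimal standalone check (a sketch, not Kylin 
code; the class name, local output path and task attempt id are made up for 
illustration) of which committer HFileOutputFormat2 actually resolves. It 
inherits getOutputCommitter() from the new-API FileOutputFormat, which never 
reads the old "mapred.output.committer.class" key, so on a stock Hadoop client 
this should print the ordinary FileOutputCommitter even with the direct 
committer configured:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.OutputCommitter;
    import org.apache.hadoop.mapreduce.TaskAttemptContext;
    import org.apache.hadoop.mapreduce.TaskAttemptID;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl;

    public class CommitterCheck {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The value the attached job configuration carries (old mapred API);
        // new-API output formats never look at this key.
        conf.set("mapred.output.committer.class",
            "org.apache.hadoop.mapred.DirectFileOutputCommitter");
        Job job = Job.getInstance(conf, "committer-check");
        // Local path just for the check; the real job writes to the S3 hfile dir.
        FileOutputFormat.setOutputPath(job, new Path("/tmp/committer-check"));
        TaskAttemptContext ctx = new TaskAttemptContextImpl(job.getConfiguration(),
            TaskAttemptID.forName("attempt_200707121733_0003_r_000001_0"));
        OutputCommitter committer = new HFileOutputFormat2().getOutputCommitter(ctx);
        // Expected: org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter,
        // i.e. the _temporary/rename committer, not the direct one.
        System.out.println(committer.getClass().getName());
      }
    }

If EMR's Hadoop build also patches the new-API FileOutputFormat, the output may 
differ; worth confirming on the cluster.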

> HFile is not written to S3
> --------------------------
>
>                 Key: KYLIN-2788
>                 URL: https://issues.apache.org/jira/browse/KYLIN-2788
>             Project: Kylin
>          Issue Type: Bug
>    Affects Versions: v2.0.0
>            Reporter: Alexander Sterligov
>         Attachments: ConvertoHFileOnS3.xlsx
>
>
> I set kylin.hbase.cluster.fs to the S3 bucket where HBase lives.
> The "Convert Cuboid Data to HFile" step finished without errors, and the 
> statistics at the end of the job said it had written lots of data to S3.
> But there are no HFiles in the kylin_metadata folder 
> (kylin_metadata/kylin-1e436685-7102-4621-a4cb-6472b866126d/<table name>/hfile), 
> only a _temporary folder and a _SUCCESS file.
> _temporary contains HFiles inside the attempt folders; it looks like they 
> were not copied from _temporary to the result directory. Yet there are no 
> errors in either the Kylin log or the reducers' logs.
> Loading the empty HFiles then produces empty segments.
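
For context, the setting referred to above looks like this in kylin.properties 
(the bucket placeholder is illustrative, not taken from the report):

    kylin.hbase.cluster.fs=s3://<hbase-bucket>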



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
