Hi Dong

For metadata, export and import alone is enough; there is nothing else. For
cube tables, you will want to double-check that the coprocessor is migrated
too.
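
For example, after importing a cube table on the target cluster, you can
re-attach the coprocessor from the hbase shell. This is only a sketch: the
jar path and observer class below are placeholders (the class name varies
by Kylin version), so take the exact values from the describe output of the
source table:

  describe 'KYLIN_CUBE_TABLE'
  disable 'KYLIN_CUBE_TABLE'
  alter 'KYLIN_CUBE_TABLE', METHOD => 'table_att', 'coprocessor' => 'hdfs:///kylin/coprocessor/kylin-coprocessor.jar|org.apache.kylin.storage.hbase.coprocessor.observer.AggregateRegionObserver|1001|'
  enable 'KYLIN_CUBE_TABLE'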

About the reading/writing separation, my thinking is that it is not needed,
because all cube tables are created through bulk load and then remain
read-only. There is no online HBase write in Kylin.
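
For context, "bulk load" here means the cube build writes HFiles with
MapReduce and then hands them to HBase in one shot; the standard HBase tool
for that final step looks like the following (the path and table name are
placeholders):

  hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /path/to/hfiles KYLIN_CUBE_TABLE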

That said, there are other benefits to having two clusters, such as
disaster recovery.

Cheers

On Fri, Apr 3, 2015 at 3:27 PM, dong wang <[email protected]> wrote:

> Hi, today I tried to use just the following 2 commands to back up and
> restore Kylin's 3 metadata tables (kylin_metadata, kylin_metadata_acl,
> kylin_metadata_user) instead of the official commands. It seems to work
> OK; however, is there anything else that I haven't taken into
> consideration with this approach?
>
> hbase org.apache.hadoop.hbase.mapreduce.Export
> hbase org.apache.hadoop.hbase.mapreduce.Import
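>
> (For reference, the full invocations look like the following; the HDFS
> path /backup/kylin_metadata is a hypothetical example, and each of the 3
> tables gets its own export and import run:)
>
> hbase org.apache.hadoop.hbase.mapreduce.Export kylin_metadata /backup/kylin_metadata
> hbase org.apache.hadoop.hbase.mapreduce.Import kylin_metadata /backup/kylin_metadata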
>
> 2015-04-02 22:48 GMT+08:00 dong wang <[email protected]>:
>
> > Thanks shaofeng, qianhao. My idea is to build all the cubes in one
> > cluster and then provide the cube access service from another cluster,
> > which means "writing hbase and reading hbase in 2 different clusters".
> > That way, the writing hbase cluster will not affect the reading hbase
> > cluster, and the reading hbase cluster can offer better performance to
> > the users. If we can easily back up the metadata and cube data in
> > cluster A and restore them in cluster B, it also means we can back up
> > all the metadata and cube data without depending on the robustness of
> > hbase itself.
> >
> > 2015-04-02 22:12 GMT+08:00 周千昊 <[email protected]>:
> >
> >> Hi dong
> >>      Can you tell us why you really need to back up the htable?
> >>      If that is a common use case, maybe we can make a tool for it;
> >> otherwise, maybe we can help you from kylin's perspective rather than
> >> hbase's.
> >>      Backing up only the htable cannot work properly, since the
> >> coprocessor also needs to be backed up.
> >>      PS: if you are looking for the code where we create the htable for
> >> segments, you can refer to CreateHTableJob.
> >>
> >> On Thu, Apr 2, 2015 at 6:53 PM dong wang <[email protected]> wrote:
> >>
> >> > When I try to back up and restore the cube's hbase table with the
> >> > following commands:
> >> >
> >> > 1, sudo -uhbase hbase org.apache.hadoop.hbase.mapreduce.Export tmember
> >> > /backup/projects/channel/tmember
> >> >
> >> > 2, hadoop fs -copyToLocal -crc /backup/projects/channel/tmember
> >> > /home/olap/scripts/cluster_mgr/test/tmember
> >> >
> >> > 3, sudo -uhbase hadoop fs -rm -r /backup/projects/channel/tmember
> >> >
> >> > 4, in hbase shell, disable and drop table tmember
> >> >
> >> > 5, sudo -uhbase hadoop fs -copyFromLocal
> >> > /home/olap/scripts/cluster_mgr/test/tmember /backup/projects/channel/tmember
> >> >
> >> > 6, sudo -uhbase hbase org.apache.hadoop.hbase.mapreduce.Import tmember
> >> > /backup/projects/channel/tmember
> >> >
> >> > The first 5 steps work OK, but step 6 always throws an error like the
> >> > following, with no more information in the hbase log:
> >> >
> >> > 2015-04-02 17:57:52,358 INFO  [main] mapreduce.Job: Task Id :
> >> > attempt_1427547972456_0190_m_000000_0, Status : FAILED
> >> > Error: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException:
> >> > Failed 2 actions: tmember: 2 times,
> >> >         at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:192)
> >> >         at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:176)
> >> >         at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:913)
> >> >         at org.apache.hadoop.hbase.client.HTable.backgroundFlushCommits(HTable.java:985)
> >> >         at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1281)
> >> >         at org.apache.hadoop.hbase.client.HTable.close(HTable.java:1318)
> >> >         at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.close(TableOutputFormat.java:112)
> >> >         at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.close(MapTask.java:667)
> >> >         at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:790)
> >> >         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> >> >         at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
> >> >         at java.security.AccessController.doPrivileged(Native Method)
> >> >         at javax.security.auth.Subject.doAs(Subject.java:415)
> >> >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
> >> >         at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
> >> >
> >> >
> >> >
> >> > Finally, I found that if I create an empty hbase table named tmember
> >> > before step 6, then step 6 works successfully. However, how can we
> >> > create such an empty table, or how can we get the "create table"
> >> > statement, just like "show create table" in mysql?
> >> >
> >>
> >
> >
>
