You can take incremental HBase snapshots of the required tables and store
them in the DR cluster. Restoring from a snapshot doesn't take much time in
this case.
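
A rough sketch of that flow is below (the table, snapshot and NameNode names
are just placeholders, not anything from your setup):

    // Sketch: snapshot a table on the source cluster, then export the
    // snapshot files to the DR cluster's HDFS with ExportSnapshot.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.snapshot.ExportSnapshot;
    import org.apache.hadoop.util.ToolRunner;

    public class SnapshotToDr {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // A snapshot only records file references, so taking it is fast.
          admin.snapshot("orders_snap_20151224", TableName.valueOf("ORDERS"));
        }
        // Ship the snapshot files to the DR cluster (runs a MapReduce job).
        int rc = ToolRunner.run(conf, new ExportSnapshot(), new String[] {
            "-snapshot", "orders_snap_20151224",
            "-copy-to", "hdfs://dr-namenode:8020/hbase",
            "-mappers", "16" });
        System.exit(rc);
      }
    }

On the DR side you can clone the exported snapshot (clone_snapshot in the
shell or Admin.cloneSnapshot) into a table; the clone keeps the original
region boundaries, so the salt-bucket splits of a Phoenix salted table are
preserved.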

Thanks
Sandeep Nemuri

On Thu, Dec 24, 2015 at 11:49 AM, Vasudevan, Ramkrishna S <
ramkrishna.s.vasude...@intel.com> wrote:

> I am not very sure whether Phoenix directly has any replication support
> right now. In your case, since you are bulk loading the tables, WAL-based
> replication does not pick that data up, but that problem is addressed in
> HBase as part of
> https://issues.apache.org/jira/browse/HBASE-13153
> where bulk-loaded files can be replicated directly to the remote cluster,
> much like how WAL edits get replicated.
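>
> As I recall, once you are on a release that includes that change, the
> feature is switched on with hbase-site.xml properties on the source
> cluster (the property names below are from memory, so please double-check
> them against the docs for your HBase release):
>
>   <property>
>     <name>hbase.replication.bulkload.enabled</name>
>     <value>true</value>
>   </property>
>   <property>
>     <!-- a unique id for the source cluster, needed for bulk load replication -->
>     <name>hbase.replication.cluster.id</name>
>     <value>source-cluster-1</value>
>   </property>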
>
> Regards
> Ram
>
> -----Original Message-----
> From: Krishnasamy Rajan [mailto:yume.kris...@gmail.com]
> Sent: Tuesday, December 22, 2015 8:04 AM
> To: user@phoenix.apache.org
> Subject: Backup and Recovery for disaster recovery
>
> Hi,
>
> We’re using HBase under Phoenix and need to set up a DR site with ongoing
> replication.
> Our Phoenix tables are salted. In this scenario, what is the best method
> for copying data to the remote cluster?
> People have given us different opinions. Replication will not work for us
> because we’re using bulk loading.
>
> Can you advise what our options are for copying data to the remote cluster
> and keeping it up to date?
> Thanks for your inputs.
>
> -Regards
> Krishna
>



-- 
  Regards
  Sandeep Nemuri
