Hello,

I'm investigating DR options for HAWQ and was curious about the existing
master catalog synchronization process. My question is mainly about what
this process does at a high level and where I might look in the codebase
or management tools to see about extending it to additional standby
masters (e.g., one in a geographically distant data center and/or a
different logical HAWQ cluster). The assumption is that the HDFS blocks
would be replicated by something like distcp, scheduled via Falcon.
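
To make the HDFS side of that assumption concrete, I was picturing something
along the lines of a periodic distcp between the two clusters, driven by a
Falcon process. This is only a rough sketch; the NameNode hosts and the
/hawq_default path below are placeholders for wherever the HAWQ filespace
actually lives:

    hadoop distcp -update \
        hdfs://primary-nn.example.com:8020/hawq_default \
        hdfs://dr-nn.example.com:8020/hawq_default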

I believe there are obvious things to address, like the DFS / NameNode URI
parameters, FQDNs, and certainly the failure scenarios / edge cases, but I'm
mainly trying to get a dialog started to see what input, ideas, and
considerations others have. One thing I'm specifically interested in is
whether / how the WAL can be used here (@Keaton).
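
On the parameter front, as an example of what I mean: a standby master in a
remote cluster would presumably need its own values for properties along
these lines in hawq-site.xml (property names are from my install and may
vary by version; the hostnames are just placeholders):

    <property>
        <name>hawq_dfs_url</name>
        <value>dr-nn.example.com:8020/hawq_default</value>
    </property>
    <property>
        <name>hawq_standby_address_host</name>
        <value>dr-master.example.com</value>
    </property>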


Thanks,
Kyle
-- 
Kyle Dunn | Data Engineering | Pivotal
Direct: 303.905.3171 | Email: [email protected]
