Re: [ovirt-users] oVirt DR: ansible with 4.1, only a subset of storage domain replicated

2018-02-08 Thread Maor Lipchuk
On Thu, Feb 8, 2018 at 10:34 AM, Luca 'remix_tj' Lorenzetto <
lorenzetto.l...@gmail.com> wrote:

> On Tue, Feb 6, 2018 at 9:33 PM, Maor Lipchuk  wrote:
> [cut]
> >> What I need is that the information about the VMs is replicated to the
> >> remote site along with the disks.
> >> In an older test I had the issue that the disks were replicated to the
> >> remote site, but the VM configuration was not!
> >> I found the disks in the "Disk" tab of the storage domain, but nothing
> >> under VM Import.
> >
> >
> >
> > Can you reproduce it and attach the logs of the setup before the disaster
> > and after the recovery?
> > That could happen in the case of newly created VMs and Templates which were
> > not yet updated in the OVF_STORE disk, since the OVF_STORE update process
> > had not yet run before the disaster.
> > Since the time of a disaster can't be anticipated, gaps like this might
> > happen.
> >
>
> I haven't tried the recovery using ansible yet. It was an experiment
> with a possible procedure to be performed manually, and it was on 4.0.
> I asked about this unexpected behavior and Yaniv told me it was due to
> the OVF_STORE not being updated, and that in 4.1 there is an API call that
> updates the OVF_STORE on demand.
>
> I'm creating a new setup today and I'll test again to check whether I
> still hit the issue. Anyway, if the problem persists, I think that for DR
> purposes the engine should update the OVF_STORE as soon as possible
> whenever a new VM is created or has disks added.
>


If the engine updated the OVF_STORE on every VM change it could affect oVirt
performance, since it is a heavy operation.
That said, we do have some ideas to change that design so that every VM change
would only rewrite that VM's OVF instead of the whole OVF_STORE disk.
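
For reference, the on-demand OVF update that 4.1 exposes can be triggered per
storage domain right before a planned storage replication point. A minimal
sketch, assuming ovirt-engine-sdk-python 4 and that your SDK version exposes
the storage domain's update_ovf_store action (engine URL, credentials and the
domain name are placeholders):

import ovirtsdk4 as sdk

# Placeholder connection details for the primary engine.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

sds_service = connection.system_service().storage_domains_service()
# Look up the replicated data domain by name (the name reuses the
# mapping-file example further down; adjust to your setup).
sd = sds_service.list(search='name=data_number')[0]
# Ask the engine to flush the current VM/Template OVFs to the OVF_STORE
# disks of this domain now, instead of waiting for the periodic update.
sds_service.storage_domain_service(sd.id).update_ovf_store()

connection.close()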


> [cut]
> >>
> >> OK, but if I keep the master storage domain on a non-replicated volume,
> >> do I require this function?
> >
> >
> > Basically it should also fail on VM/Template registration in oVirt 4.1,
> > since there are also other functionalities, like the mapping of OVF
> > attributes, which were added to VM/Template registration.
> >
>
> What do you mean? That I could fail to import any VM/Template? In what
> case?
>


If you use the fail-over in ovirt-ansible-disaster-recovery, the VM/Template
registration process is done automatically through the ovirt-ansible
tasks, and it is based on the oVirt 4.2 API.
The task which registers the VMs and the Templates does so there
without indicating the target cluster id, since in oVirt 4.2 we already
added the cluster name to the VM's/Template's OVF.
If your engine is oVirt 4.1 the registration will fail, since in oVirt 4.1
the cluster id is mandatory.
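
For reference, on a 4.1 engine you would have to register each unregistered
VM yourself and pass the target cluster explicitly, which is the step the
role skips on 4.2. A rough sketch with ovirt-engine-sdk-python 4 (connection
details, domain and cluster names are placeholders, and the exact register()
parameters may vary between SDK versions):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details for the secondary engine.
connection = sdk.Connection(
    url='https://engine-secondary.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

sds_service = connection.system_service().storage_domains_service()
sd = sds_service.list(search='name=data_number')[0]
vms_service = sds_service.storage_domain_service(sd.id).vms_service()

# VMs present on the attached domain but not yet known to this engine.
for unregistered in vms_service.list(unregistered=True):
    vm_service = vms_service.vm_service(unregistered.id)
    # On 4.1 the target cluster must be given explicitly; on 4.2 the
    # cluster name stored in the OVF can be used instead.
    vm_service.register(
        cluster=types.Cluster(name='cluster_A'),
        allow_partial_import=True,
    )

connection.close()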


>
> Another question:
>
> we have 2 DCs in the main site; do we also need 2 DCs in the
> recovery site, or can we import all the storage domains into a single DC
> on the recovery site? Could there be UUID collisions or similar?
>


I think it could work, although I suggest that the clusters be
compatible with those configured in the primary setup,
otherwise you might encounter problems when you try to fail back (and
also to avoid any collisions of affinity groups/labels or networks).
For example, if in your primary site you had DC1 with cluster1 and DC2 with
cluster2, then your secondary setup should be DC_Secondary with cluster_A
and cluster_B:
cluster1 will be mapped to cluster_A and cluster2 will be mapped to
cluster_B.
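
In the mapping var file that pairing becomes a cluster mapping section; a
sketch, assuming the dr_cluster_mappings layout generated by
ovirt-ansible-disaster-recovery (key names may differ between versions):

dr_cluster_mappings:
  # cluster1/cluster2 live in the primary site, cluster_A/cluster_B in the
  # secondary site, as in the example above.
  - primary_name: cluster1
    secondary_name: cluster_A
  - primary_name: cluster2
    secondary_name: cluster_B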

Another thing that might be problematic is the master domain attribute
in the mapping var file.
That attribute indicates whether a storage domain is the master or not.
Here is an example of how it is configured in the mapping file:
- dr_domain_type: nfs
  dr_primary_dc_name: Prod
  dr_primary_name: data_number
  dr_master_domain: True
  dr_secondary_dc_name: Recovery
  dr_secondary_address:
...


In your primary site you have two master storage domains, while in your
secondary site what will probably happen is that, on import of the storage
domains, only one of those two will become the master.
Now that I think of it, it might be better to configure the master
attribute separately for each of the setups, like so:
  dr_primary_master_domain: True
  dr_secondary_master_domain: False
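
Put together for the two-DC primary site discussed here, the relevant
dr_import_storages entries might look like the sketch below. It reuses only
the keys from the example above (today's single dr_master_domain flag rather
than the per-site attributes proposed here); the domain names are
placeholders, and FC domains carry their own connection keys instead of an
NFS address:

dr_import_storages:
  - dr_domain_type: nfs
    dr_primary_dc_name: KVMPDCA
    dr_primary_name: data_kvmpdca    # placeholder name
    dr_master_domain: True           # the one domain imported as master
    dr_secondary_dc_name: Recovery
    dr_secondary_address: ...
  - dr_domain_type: nfs
    dr_primary_dc_name: KVMPD
    dr_primary_name: data_kvmpd      # placeholder name
    dr_master_domain: False          # attached as a regular domain
    dr_secondary_dc_name: Recovery
    dr_secondary_address: ...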


>
> Thank you so much for your replies,
>
> Luca
> --
> "E' assurdo impiegare gli uomini di intelligenza eccellente per fare
> calcoli che potrebbero essere affidati a chiunque se si usassero delle
> macchine"
> Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)
>
> "Internet è la più grande biblioteca del mondo.
> Ma il problema è che i libri sono tutti sparsi sul pavimento"
> John Allen Paulos, Matematico (1945-vivente)
>
> Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <
> lorenzetto.l...@gmail.com>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt DR: ansible with 4.1, only a subset of storage domain replicated

2018-02-08 Thread Luca 'remix_tj' Lorenzetto
On Tue, Feb 6, 2018 at 9:33 PM, Maor Lipchuk  wrote:
[cut]
>> What I need is that the information about the VMs is replicated to the
>> remote site along with the disks.
>> In an older test I had the issue that the disks were replicated to the
>> remote site, but the VM configuration was not!
>> I found the disks in the "Disk" tab of the storage domain, but nothing
>> under VM Import.
>
>
>
> Can you reproduce it and attach the logs of the setup before the disaster
> and after the recovery?
> That could happen in the case of newly created VMs and Templates which were
> not yet updated in the OVF_STORE disk, since the OVF_STORE update process
> had not yet run before the disaster.
> Since the time of a disaster can't be anticipated, gaps like this might
> happen.
>

I haven't tried the recovery using ansible yet. It was an experiment
with a possible procedure to be performed manually, and it was on 4.0.
I asked about this unexpected behavior and Yaniv told me it was due to
the OVF_STORE not being updated, and that in 4.1 there is an API call that
updates the OVF_STORE on demand.

I'm creating a new setup today and I'll test again to check whether I
still hit the issue. Anyway, if the problem persists, I think that for DR
purposes the engine should update the OVF_STORE as soon as possible
whenever a new VM is created or has disks added.

[cut]
>>
>> OK, but if I keep the master storage domain on a non-replicated volume,
>> do I require this function?
>
>
> Basically it should also fail on VM/Template registration in oVirt 4.1,
> since there are also other functionalities, like the mapping of OVF
> attributes, which were added to VM/Template registration.
>

What do you mean? That I could fail to import any VM/Template? In what case?


Another question:

we have 2 DCs in the main site; do we also need 2 DCs in the
recovery site, or can we import all the storage domains into a single DC
on the recovery site? Could there be UUID collisions or similar?

Thank you so much for your replies,

Luca
-- 
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare
calcoli che potrebbero essere affidati a chiunque se si usassero delle
macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)

"Internet è la più grande biblioteca del mondo.
Ma il problema è che i libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico (1945-vivente)

Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt DR: ansible with 4.1, only a subset of storage domain replicated

2018-02-06 Thread Maor Lipchuk
On Tue, Feb 6, 2018 at 11:32 AM, Luca 'remix_tj' Lorenzetto <
lorenzetto.l...@gmail.com> wrote:

> On Mon, Feb 5, 2018 at 7:20 PM, Maor Lipchuk  wrote:
> > Hi Luca,
> >
> > Thank you for your interest in the Disaster Recovery ansible solution,
> > it is great to see users get familiar with it.
> > Please see my comments inline
> >
> > Regards,
> > Maor
> >
> > On Mon, Feb 5, 2018 at 7:54 PM, Yaniv Kaul  wrote:
> >>
> >>
> >>
> >> On Feb 5, 2018 5:00 PM, "Luca 'remix_tj' Lorenzetto"
> >>  wrote:
> >>
> >> Hello,
> >>
> >> I'm starting the implementation of our disaster recovery site with RHV
> >> 4.1.latest for our production environment.
> >>
> >> Our production setup is very simple, with a self-hosted engine on DC
> >> KVMPDCA, and virtual machines in both the KVMPDCA and KVMPD DCs. All our
> >> setup has an FC storage backend, which is EMC VPLEX/VMAX in KVMPDCA
> >> and EMC VNX8000. Both storage arrays support replication via their
> >> own replication protocols (SRDF, MirrorView), so we'd like to delegate
> >> to them the replication of data to the remote site, which is located
> >> in another remote datacenter.
> >>
> >> In the KVMPD DC we have some storage domains that contain non-critical
> >> VMs, which we don't want to replicate to the remote site (in case of
> >> failure they have a low priority and will be restored from a backup).
> >> In our setup we won't replicate them, so they will not be available for
> >> attachment on the remote site. Can this be an issue? Do we need to
> >> replicate everything?
> >
> >
> > No, it is not required to replicate everything.
> > If there are no disks on those storage domains that are attached to your
> > critical VMs/Templates, you don't have to include them in your mapping
> > var file.
> >
>
> Excellent.
>
> >>
> >> What about the master domain? Do I need the master storage domain to
> >> stay on a replicated volume, or can it be any of the available ones?
> >
> >
> >
> > You can choose which storage domains you want to recover.
> > Basically, if a storage domain is indicated as "master" in the mapping
> > var file then it should be attached first to the Data Center.
> > If your secondary setup already contains a master storage domain which
> > you don't care to replicate and recover, then you can configure your
> > mapping var file to only attach regular storage domains: simply indicate
> > "dr_master_domain: False" in the dr_import_storages for all the storage
> > domains. (You can contact me on IRC if you need some guidance with it)
> >
>
> Good,
>
> that's my case. I don't need a new master domain on the remote side,
> because it is an already up-and-running setup where I want to attach the
> replicated storage and run the critical VMs.
>
>
>
> >>
> >>
> >> I've seen that since 4.1 there's an API for updating OVF_STORE disks.
> >> Do we need to invoke it with a frequency that is compatible
> >> with the replication frequency on the storage side?
> >
> >
> >
> > No, you don't have to use the OVF_STORE update for replication.
> > The OVF_STORE disk is updated every 60 minutes (the default
> > configuration value).
> >
>
> What I need is that the information about the VMs is replicated to the
> remote site along with the disks.
> In an older test I had the issue that the disks were replicated to the
> remote site, but the VM configuration was not!
> I found the disks in the "Disk" tab of the storage domain, but nothing
> under VM Import.
>


Can you reproduce it and attach the logs of the setup before the disaster
and after the recovery?
That could happen in the case of newly created VMs and Templates which were
not yet updated in the OVF_STORE disk, since the OVF_STORE update process
had not yet run before the disaster.
Since the time of a disaster can't be anticipated, gaps like this might
happen.


>
> >>
> >> We set, at the moment, the
> >> RPO to 1hr (even if the planned RPO requires 2hrs). Does the OVF_STORE get
> >> updated with the required frequency?
> >
> >
> >
> > The OVF_STORE disk is updated every 60 minutes, but keep in mind that
> > the OVF_STORE is updated internally in the engine, so it might not be
> > synced with the RPO which you configured.
> > If I understood correctly, then you are right in indicating that the data
> > of the storage domain will be synced within approximately 2 hours = RPO of
> > 1hr + OVF_STORE update of 1hr.
> >
>
> We need to be able to recover VMs with a state that is at most 2 hours
> old. In the worst case, from what you say, I think we'll be able to.
>
> [cut]
> >
> > Indeed,
> > we also introduced several functionalities, like detach of the master
> > storage domain and attach of a "dirty" master storage domain, which are
> > dependent on the failover process, so unfortunately to support a full
> > recovery process you will need an oVirt 4.2 env.
> >
>
> OK, but if I keep the master storage domain on a non-replicated volume,
> do I require this function?
>


Basically it should also fail on VM/Template registration in oVirt 4.1,
since there are also other functionalities, like the mapping of OVF
attributes, which were added to VM/Template registration.

Re: [ovirt-users] oVirt DR: ansible with 4.1, only a subset of storage domain replicated

2018-02-06 Thread Luca 'remix_tj' Lorenzetto
On Mon, Feb 5, 2018 at 7:20 PM, Maor Lipchuk  wrote:
> Hi Luca,
>
> Thank you for your interest in the Disaster Recovery ansible solution, it is
> great to see users get familiar with it.
> Please see my comments inline
>
> Regards,
> Maor
>
> On Mon, Feb 5, 2018 at 7:54 PM, Yaniv Kaul  wrote:
>>
>>
>>
>> On Feb 5, 2018 5:00 PM, "Luca 'remix_tj' Lorenzetto"
>>  wrote:
>>
>> Hello,
>>
>> I'm starting the implementation of our disaster recovery site with RHV
>> 4.1.latest for our production environment.
>>
>> Our production setup is very simple, with a self-hosted engine on DC
>> KVMPDCA, and virtual machines in both the KVMPDCA and KVMPD DCs. All our
>> setup has an FC storage backend, which is EMC VPLEX/VMAX in KVMPDCA
>> and EMC VNX8000. Both storage arrays support replication via their
>> own replication protocols (SRDF, MirrorView), so we'd like to delegate
>> to them the replication of data to the remote site, which is located
>> in another remote datacenter.
>>
>> In the KVMPD DC we have some storage domains that contain non-critical
>> VMs, which we don't want to replicate to the remote site (in case of
>> failure they have a low priority and will be restored from a backup).
>> In our setup we won't replicate them, so they will not be available for
>> attachment on the remote site. Can this be an issue? Do we need to
>> replicate everything?
>
>
> No, it is not required to replicate everything.
> If there are no disks on those storage domains that are attached to your
> critical VMs/Templates, you don't have to include them in your mapping
> var file.
>

Excellent.

>>
>> What about the master domain? Do I need the master storage domain to
>> stay on a replicated volume, or can it be any of the available ones?
>
>
>
> You can choose which storage domains you want to recover.
> Basically, if a storage domain is indicated as "master" in the mapping var
> file then it should be attached first to the Data Center.
> If your secondary setup already contains a master storage domain which you
> don't care to replicate and recover, then you can configure your mapping var
> file to only attach regular storage domains: simply indicate
> "dr_master_domain: False" in the dr_import_storages for all the storage
> domains. (You can contact me on IRC if you need some guidance with it)
>

Good,

that's my case. I don't need a new master domain on the remote side,
because it is an already up-and-running setup where I want to attach the
replicated storage and run the critical VMs.



>>
>>
>> I've seen that since 4.1 there's an API for updating OVF_STORE disks.
>> Do we need to invoke it with a frequency that is compatible
>> with the replication frequency on the storage side?
>
>
>
> No, you don't have to use the OVF_STORE update for replication.
> The OVF_STORE disk is updated every 60 minutes (the default
> configuration value).
>

What I need is that the information about the VMs is replicated to the
remote site along with the disks.
In an older test I had the issue that the disks were replicated to the
remote site, but the VM configuration was not!
I found the disks in the "Disk" tab of the storage domain, but nothing under VM Import.

>>
>> We set, at the moment, the
>> RPO to 1hr (even if the planned RPO requires 2hrs). Does the OVF_STORE get
>> updated with the required frequency?
>
>
>
> The OVF_STORE disk is updated every 60 minutes, but keep in mind that the
> OVF_STORE is updated internally in the engine, so it might not be
> synced with the RPO which you configured.
> If I understood correctly, then you are right in indicating that the data of
> the storage domain will be synced within approximately 2 hours = RPO of 1hr +
> OVF_STORE update of 1hr.
>

We need to be able to recover VMs with a state that is at most 2 hours
old. In the worst case, from what you say, I think we'll be able to.

[cut]
>
> Indeed,
> we also introduced several functionalities, like detach of the master
> storage domain and attach of a "dirty" master storage domain, which are
> dependent on the failover process, so unfortunately to support a full
> recovery process you will need an oVirt 4.2 env.
>

OK, but if I keep the master storage domain on a non-replicated volume,
do I require this function?

I have to admit that, for subscription and support requirements, I need
to use RHV rather than oVirt. I've seen 4.2 is coming on that side as
well, and we'll upgrade for sure when it is available.


[cut]
>
>
> Please feel free to share your comments and questions, I would very much
> appreciate knowing about your user experience.

Sure, I will! And I'll bother you on IRC if I need some guidance :-)

Thank you so much,

Luca


-- 
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare
calcoli che potrebbero essere affidati a chiunque se si usassero delle
macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)

"Internet è la più grande biblioteca del mondo.
Ma il problema è che i libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico 

Re: [ovirt-users] oVirt DR: ansible with 4.1, only a subset of storage domain replicated

2018-02-05 Thread Maor Lipchuk
Hi Luca,

Thank you for your interest in the Disaster Recovery ansible solution, it is
great to see users get familiar with it.
Please see my comments inline

Regards,
Maor

On Mon, Feb 5, 2018 at 7:54 PM, Yaniv Kaul  wrote:

>
>
> On Feb 5, 2018 5:00 PM, "Luca 'remix_tj' Lorenzetto" <
> lorenzetto.l...@gmail.com> wrote:
>
> Hello,
>
> I'm starting the implementation of our disaster recovery site with RHV
> 4.1.latest for our production environment.
>
> Our production setup is very simple, with a self-hosted engine on DC
> KVMPDCA, and virtual machines in both the KVMPDCA and KVMPD DCs. All our
> setup has an FC storage backend, which is EMC VPLEX/VMAX in KVMPDCA
> and EMC VNX8000. Both storage arrays support replication via their
> own replication protocols (SRDF, MirrorView), so we'd like to delegate
> to them the replication of data to the remote site, which is located
> in another remote datacenter.
>
> In the KVMPD DC we have some storage domains that contain non-critical
> VMs, which we don't want to replicate to the remote site (in case of
> failure they have a low priority and will be restored from a backup).
> In our setup we won't replicate them, so they will not be available for
> attachment on the remote site. Can this be an issue? Do we need to
> replicate everything?
>
>
No, it is not required to replicate everything.
If there are no disks on those storage domains that are attached to your
critical VMs/Templates, you don't have to include them in your mapping
var file.


> What about the master domain? Do I need the master storage domain to
> stay on a replicated volume, or can it be any of the available ones?
>
>

You can choose which storage domains you want to recover.
Basically, if a storage domain is indicated as "master" in the mapping var
file then it should be attached first to the Data Center.
If your secondary setup already contains a master storage domain which you
don't care to replicate and recover, then you can configure your mapping var
file to only attach regular storage domains: simply indicate
"dr_master_domain: False" in the dr_import_storages for all the storage
domains. (You can contact me on IRC if you need some guidance with it)
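
Concretely, each replicated (non-master) domain would carry an entry like the
sketch below in the mapping var file; the key names follow the
dr_import_storages entries generated by the role, and the domain name and
address are placeholders:

dr_import_storages:
  - dr_domain_type: nfs
    dr_primary_dc_name: KVMPD
    dr_primary_name: replicated_data   # placeholder domain name
    dr_master_domain: False            # attach as a regular domain; the
                                       # secondary DC keeps its own master
    dr_secondary_dc_name: Recovery
    dr_secondary_address: ...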


>
> I've seen that since 4.1 there's an API for updating OVF_STORE disks.
> Do we need to invoke it with a frequency that is compatible
> with the replication frequency on the storage side?
>
>

No, you don't have to use the OVF_STORE update for replication.
The OVF_STORE disk is updated every 60 minutes (the default
configuration value).


> We set, at the moment, the
> RPO to 1hr (even if the planned RPO requires 2hrs). Does the OVF_STORE get
> updated with the required frequency?
>
>

The OVF_STORE disk is updated every 60 minutes, but keep in mind that the
OVF_STORE is updated internally in the engine, so it might not be
synced with the RPO which you configured.
If I understood correctly, then you are right in indicating that the data
of the storage domain will be synced within approximately 2 hours = RPO of
1hr + OVF_STORE update of 1hr.
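
For completeness, the 60-minute default is an engine configuration value; as
far as I know it is the OvfUpdateIntervalInMinutes key, which can be checked
and changed with engine-config (verify the key name with engine-config -l on
your engine first):

# Show the current interval (assumed key name), then shorten it and restart
# the engine so the new value takes effect.
engine-config -g OvfUpdateIntervalInMinutes
engine-config -s OvfUpdateIntervalInMinutes=30
systemctl restart ovirt-engine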


>
> I've seen a recent presentation by Maor Lipchuk that shows the
> "automagic" ansible role for disaster recovery:
>
> https://www.slideshare.net/maorlipchuk/ovirt-dr-site-tosite-using-ansible
>
> It is also related to some youtube presentations demonstrating a real
> DR plan execution.
>
> But what I've seen is that Maor is explicitly talking about the 4.2
> release. Does that role work only with 4.2+ releases, or can it be used
> on earlier (4.1) versions as well?
>
>
> Releases before 4.2 do not store complete information in the OVF store to
> perform such a comprehensive failover. I warmly suggest 4.2!
> Y.
>

Indeed,
we also introduced several functionalities, like detach of the master
storage domain and attach of a "dirty" master storage domain, which are
dependent on the failover process, so unfortunately to support a full
recovery process you will need an oVirt 4.2 env.
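
For anyone following along, the failover itself is driven by a small playbook
around the role. A rough sketch, assuming the example layout shipped with
ovirt-ansible-disaster-recovery (the role name, variable names and the
fail_over tag are taken from its examples and may differ in your version):

---
# dr_failover.yml - the var files are the ones generated/edited for your sites
- name: Fail over to the secondary oVirt site
  hosts: localhost
  connection: local
  vars:
    dr_target_host: secondary        # site to recover on
    dr_source_map: primary           # site that went down
  vars_files:
    - disaster_recovery_vars.yml     # the mapping var file discussed here
    - passwords.yml                  # engine passwords (keep these vaulted)
  roles:
    - oVirt.disaster-recovery

Run it with something like: ansible-playbook dr_failover.yml -t fail_over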


>
> I've tested a manual flow of replication + recovery through Import SD
> followed by Import VM and it worked like a charm. Using a prebuilt
> ansible role will reduce my effort in creating new automation for
> doing this.
>
> Does anyone have experiences like mine?
>
> Thank you for the help you may provide; I'd like to contribute back to
> you with all my findings and with a usable tool (also integrated with
> the storage arrays if possible).
>
>
Please feel free to share your comments and questions, I would very much
appreciate knowing about your user experience.


>
> Luca
>
> (Sorry for duplicate email, ctrl-enter happened before mail completion)
>
>
> --
> "E' assurdo impiegare gli uomini di intelligenza eccellente per fare
> calcoli che potrebbero essere affidati a chiunque se si usassero delle
> macchine"
> Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)
>
> "Internet è la più grande biblioteca del mondo.
> Ma il problema è che i libri sono tutti sparsi sul pavimento"
> John Allen Paulos, Matematico (1945-vivente)
>
> Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <lorenzetto.l...@gmail.com>

Re: [ovirt-users] oVirt DR: ansible with 4.1, only a subset of storage domain replicated

2018-02-05 Thread Yaniv Kaul
On Feb 5, 2018 5:00 PM, "Luca 'remix_tj' Lorenzetto" <
lorenzetto.l...@gmail.com> wrote:

Hello,

I'm starting the implementation of our disaster recovery site with RHV
4.1.latest for our production environment.

Our production setup is very simple, with a self-hosted engine on DC
KVMPDCA, and virtual machines in both the KVMPDCA and KVMPD DCs. All our
setup has an FC storage backend, which is EMC VPLEX/VMAX in KVMPDCA
and EMC VNX8000. Both storage arrays support replication via their
own replication protocols (SRDF, MirrorView), so we'd like to delegate
to them the replication of data to the remote site, which is located
in another remote datacenter.

In the KVMPD DC we have some storage domains that contain non-critical
VMs, which we don't want to replicate to the remote site (in case of
failure they have a low priority and will be restored from a backup).
In our setup we won't replicate them, so they will not be available for
attachment on the remote site. Can this be an issue? Do we need to
replicate everything?
What about the master domain? Do I need the master storage domain to
stay on a replicated volume, or can it be any of the available ones?

I've seen that since 4.1 there's an API for updating OVF_STORE disks.
Do we need to invoke it with a frequency that is compatible
with the replication frequency on the storage side? We set, at the moment,
the RPO to 1hr (even if the planned RPO requires 2hrs). Does the OVF_STORE
get updated with the required frequency?

I've seen a recent presentation by Maor Lipchuk that shows the
"automagic" ansible role for disaster recovery:

https://www.slideshare.net/maorlipchuk/ovirt-dr-site-tosite-using-ansible

It is also related to some youtube presentations demonstrating a real
DR plan execution.

But what I've seen is that Maor is explicitly talking about the 4.2
release. Does that role work only with 4.2+ releases, or can it be used
on earlier (4.1) versions as well?


Releases before 4.2 do not store complete information in the OVF store to
perform such a comprehensive failover. I warmly suggest 4.2!
Y.


I've tested a manual flow of replication + recovery through Import SD
followed by Import VM and it worked like a charm. Using a prebuilt
ansible role will reduce my effort in creating new automation for
doing this.

Does anyone have experiences like mine?

Thank you for the help you may provide; I'd like to contribute back to
you with all my findings and with a usable tool (also integrated with
the storage arrays if possible).

Luca

(Sorry for duplicate email, ctrl-enter happened before mail completion)


--
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare
calcoli che potrebbero essere affidati a chiunque se si usassero delle
macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)

"Internet è la più grande biblioteca del mondo.
Ma il problema è che i libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico (1945-vivente)

Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <
lorenzetto.l...@gmail.com>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt DR: ansible with 4.1, only a subset of storage domain replicated

2018-02-05 Thread Luca 'remix_tj' Lorenzetto
Hello,

I'm starting the implementation of our disaster recovery site with RHV
4.1.latest for our production environment.

Our production setup is very simple, with a self-hosted engine on DC
KVMPDCA, and virtual machines in both the KVMPDCA and KVMPD DCs. All our
setup has an FC storage backend, which is EMC VPLEX/VMAX in KVMPDCA
and EMC VNX8000. Both storage arrays support replication via their
own replication protocols (SRDF, MirrorView), so we'd like to delegate
to them the replication of data to the remote site, which is located
in another remote datacenter.

In the KVMPD DC we have some storage domains that contain non-critical
VMs, which we don't want to replicate to the remote site (in case of
failure they have a low priority and will be restored from a backup).
In our setup we won't replicate them, so they will not be available for
attachment on the remote site. Can this be an issue? Do we need to
replicate everything?
What about the master domain? Do I need the master storage domain to
stay on a replicated volume, or can it be any of the available ones?

I've seen that since 4.1 there's an API for updating OVF_STORE disks.
Do we need to invoke it with a frequency that is compatible
with the replication frequency on the storage side? We set, at the moment,
the RPO to 1hr (even if the planned RPO requires 2hrs). Does the OVF_STORE
get updated with the required frequency?

I've seen a recent presentation by Maor Lipchuk that shows the
"automagic" ansible role for disaster recovery:

https://www.slideshare.net/maorlipchuk/ovirt-dr-site-tosite-using-ansible

It is also related to some youtube presentations demonstrating a real
DR plan execution.

But what I've seen is that Maor is explicitly talking about the 4.2
release. Does that role work only with 4.2+ releases, or can it be used
on earlier (4.1) versions as well?

I've tested a manual flow of replication + recovery through Import SD
followed by Import VM and it worked like a charm. Using a prebuilt
ansible role will reduce my effort in creating new automation for
doing this.

Does anyone have experiences like mine?

Thank you for the help you may provide; I'd like to contribute back to
you with all my findings and with a usable tool (also integrated with
the storage arrays if possible).

Luca

(Sorry for duplicate email, ctrl-enter happened before mail completion)


-- 
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare
calcoli che potrebbero essere affidati a chiunque se si usassero delle
macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)

"Internet è la più grande biblioteca del mondo.
Ma il problema è che i libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico (1945-vivente)

Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt DR: ansible with 4.1, only a subset of storage domain replicated

2018-02-05 Thread Luca 'remix_tj' Lorenzetto
Hello,

I'm starting the implementation of our disaster recovery site with RHV
4.1.latest for our production environment.

Our production setup is very simple, with a self-hosted engine on DC
KVMPDCA, and virtual machines in both the KVMPDCA and KVMPD DCs. All our
setup has an FC storage backend, which is EMC VPLEX/VMAX in KVMPDCA
and EMC VNX8000. Both storage arrays support replication via their
own replication protocols (SRDF, MirrorView), so we'd like to delegate
to them the replication of data to the remote site, which is located
in another remote datacenter.

In the KVMPD DC we have some storage domains that contain non-critical
VMs, which we don't want to replicate to the remote site (in case of
failure they have a low priority and will be restored from a backup).
In our setup we won't replicate them, so they will not be available for
attachment on the remote site. Can this be an issue? Do we need to
replicate everything?
What about the master domain? Do I need the master storage domain to
stay on a replicated volume, or can it be any of the available ones?

I've seen that since 4.1 there's an API for updating OVF_STORE disks.
Do we need to invoke it with a frequency that is compatible
with the replication frequency on the storage side? We set, at the moment,
the RPO to 1hr (even if the planned RPO requires 2hrs). Does the OVF_STORE
get updated with the required frequency?

I've seen a recent presentation by Maor Lipchuk that shows the
automagic ansible role for disaster recovery:

-- 
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare
calcoli che potrebbero essere affidati a chiunque se si usassero delle
macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)

"Internet è la più grande biblioteca del mondo.
Ma il problema è che i libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico (1945-vivente)

Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users