Re: [ovirt-users] Switch from Fedora 20 to CentOS 7.1

2015-05-22 Thread Soeren Malchow
Dear Nir,

Thanks for the answer.

The problem is not related to oVirt, VDSM or libvirt; it was in Gluster. The 
secondary oVirt cluster actually had the Gluster volume mounted correctly and 
saw everything, but it could not see the files in “dom_md”. We updated all 
Gluster packages to 3.7.0 and all was good.

If someone else runs into this, check for this first.

The switch from Fedora 20 to CentOS 7.1 works just fine if all Gluster hosts 
are on 3.7.0 and oVirt is on 3.5.2.1.
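
For a quick check, something along these lines should do (a rough sketch; the 
glusterSD mount path depends on your storage server and volume names):

# all gluster packages should report 3.7.0
rpm -qa 'glusterfs*'
# on a gluster node: are all bricks online?
gluster volume status
# a healthy file storage domain shows: ids  inbox  leases  metadata  outbox
ls /rhev/data-center/mnt/glusterSD/<server>:_<volume>/<sd-uuid>/dom_md/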

Cheers
Soeren 






On 22/05/15 20:59, "Nir Soffer"  wrote:

>- Original Message -
>> From: "Soeren Malchow" 
>> To: "Jurriën Bloemen" , users@ovirt.org
>> Sent: Thursday, May 21, 2015 7:35:02 PM
>> Subject: Re: [ovirt-users] Switch from Fedora 20 to CentOS 7.1
>> 
>> <— snip —>

Re: [ovirt-users] Switch from Fedora 20 to CentOS 7.1

2015-05-22 Thread Nir Soffer
- Original Message -
> From: "Soeren Malchow" 
> To: "Jurriën Bloemen" , users@ovirt.org
> Sent: Thursday, May 21, 2015 7:35:02 PM
> Subject: Re: [ovirt-users] Switch from Fedora 20 to CentOS 7.1
> 
> Hi,
> 
> We have now created the new cluster based on CentOS 7.1, which went fine; then
> we migrated 2 machines – no problem, we have Live Migration (back), Live Merge
> and so on, all good.
> 
> But some additional machines have problems starting on the new cluster, and
> this is what happens:
> 
> 
> Grep for the Thread in vdsm.log
> <— snip —>
> 
> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
> 18:27:21,999::vm::2264::vm.Vm::(_startUnderlyingVm)
> vmId=`24bd5074-64fc-4aa0-87cd-5de3dd7b50d1`::Start
> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
> 18:27:22,003::vm::2268::vm.Vm::(_startUnderlyingVm)
> vmId=`24bd5074-64fc-4aa0-87cd-5de3dd7b50d1`::_ongoingCreations acquired
> vdsm/vdsm.log:Thread-5475::INFO::2015-05-21
> 18:27:22,008::vm::3261::vm.Vm::(_run)
> vmId=`24bd5074-64fc-4aa0-87cd-5de3dd7b50d1`::VM wrapper has started
> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
> 18:27:22,021::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`2bc7fe9c-204a-4ab7-a116-f7fbba32bd34`::moving from state init -> state
> preparing
> vdsm/vdsm.log:Thread-5475::INFO::2015-05-21
> 18:27:22,028::logUtils::44::dispatcher::(wrapper) Run and protect:
> getVolumeSize(sdUUID=u'276e9ba7-e19a-49c5-8ad7-26711934d5e4',
> spUUID=u'0f954891-b1cd-4f09-99ae-75d404d95f9d',
> imgUUID=u'eae65249-e5e8-49e7-90a0-c7385e80e6ca',
> volUUID=u'8791f6ec-a6ef-484d-bd5a-730b22b19250', options=None)
> vdsm/vdsm.log:Thread-5475::INFO::2015-05-21
> 18:27:22,069::logUtils::47::dispatcher::(wrapper) Run and protect:
> getVolumeSize, Return response: {'truesize': '2696552448', 'apparentsize':
> '2696609792'}
> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
> 18:27:22,069::task::1191::Storage.TaskManager.Task::(prepare)
> Task=`2bc7fe9c-204a-4ab7-a116-f7fbba32bd34`::finished: {'truesize':
> '2696552448', 'apparentsize': '2696609792'}
> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
> 18:27:22,069::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`2bc7fe9c-204a-4ab7-a116-f7fbba32bd34`::moving from state preparing ->
> state finished
> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
> 18:27:22,070::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
> 18:27:22,070::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
> 18:27:22,070::task::993::Storage.TaskManager.Task::(_decref)
> Task=`2bc7fe9c-204a-4ab7-a116-f7fbba32bd34`::ref 0 aborting False
> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
> 18:27:22,071::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`c508cf8f-9f02-43a6-a45d-2b3f1d7e66be`::moving from state init -> state
> preparing
> vdsm/vdsm.log:Thread-5475::INFO::2015-05-21
> 18:27:22,071::logUtils::44::dispatcher::(wrapper) Run and protect:
> getVolumeSize(sdUUID=u'276e9ba7-e19a-49c5-8ad7-26711934d5e4',
> spUUID=u'0f954891-b1cd-4f09-99ae-75d404d95f9d',
> imgUUID=u'967d966c-3653-4ff6-9299-2fb5b4197c37',
> volUUID=u'99b085e6-6662-43ef-8ab4-40bc00e82460', options=None)
> vdsm/vdsm.log:Thread-5475::INFO::2015-05-21
> 18:27:22,086::logUtils::47::dispatcher::(wrapper) Run and protect:
> getVolumeSize, Return response: {'truesize': '1110773760', 'apparentsize':
> '1110835200'}
> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
> 18:27:22,087::task::1191::Storage.TaskManager.Task::(prepare)
> Task=`c508cf8f-9f02-43a6-a45d-2b3f1d7e66be`::finished: {'truesize':
> '1110773760', 'apparentsize': '1110835200'}
> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
> 18:27:22,087::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`c508cf8f-9f02-43a6-a45d-2b3f1d7e66be`::moving from state preparing ->
> state finished
> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
> 18:27:22,087::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
> 18:27:22,088::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
> 18:27:22,088::task::993::Storage.TaskManager.Task::(_decref)
> Task=`c508cf8f-9f02-43a6-a45d-2b3f1d7e66be`::ref 0 aborting False
> <— snip —>

Re: [ovirt-users] Switch from Fedora 20 to CentOS 7.1

2015-05-21 Thread Soeren Malchow
<— snip —>
vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21 
18:27:24,112::resourceManager::641::Storage.ResourceManager::(releaseResource) 
Resource 'Storage.276e9ba7-e19a-49c5-8ad7-26711934d5e4' is free, finding out if 
anyone is waiting for it.
vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21 
18:27:24,113::resourceManager::649::Storage.ResourceManager::(releaseResource) 
No one is waiting for resource 'Storage.276e9ba7-e19a-49c5-8ad7-26711934d5e4', 
Clearing records.
vdsm/vdsm.log:Thread-5475::ERROR::2015-05-21 
18:27:24,113::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': 
{'message': "Volume does not exist: 
(u'8791f6ec-a6ef-484d-bd5a-730b22b19250',)", 'code': 201}}
vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21 
18:27:24,114::vm::2294::vm.Vm::(_startUnderlyingVm) 
vmId=`24bd5074-64fc-4aa0-87cd-5de3dd7b50d1`::_ongoingCreations released
vdsm/vdsm.log:Thread-5475::ERROR::2015-05-21 
18:27:24,114::vm::2331::vm.Vm::(_startUnderlyingVm) 
vmId=`24bd5074-64fc-4aa0-87cd-5de3dd7b50d1`::The vm start process failed
vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21 
18:27:24,117::vm::2786::vm.Vm::(setDownStatus) 
vmId=`24bd5074-64fc-4aa0-87cd-5de3dd7b50d1`::Changed state to Down: Bad volume 
specification {u'index': 0, u'iface': u'virtio', u'type': u'disk', u'format': 
u'cow', u'bootOrder': u'1', u'address': {u'slot': u'0x06', u'bus': u'0x00', 
u'domain': u'0x', u'type': u'pci', u'function': u'0x0'}, u'volumeID': 
u'8791f6ec-a6ef-484d-bd5a-730b22b19250', 'apparentsize': '2696609792', 
u'imageID': u'eae65249-e5e8-49e7-90a0-c7385e80e6ca', u'specParams': {}, 
u'readonly': u'false', u'domainID': u'276e9ba7-e19a-49c5-8ad7-26711934d5e4', 
'reqsize': '0', u'deviceId': u'eae65249-e5e8-49e7-90a0-c7385e80e6ca', 
'truesize': '2696552448', u'poolID': u'0f954891-b1cd-4f09-99ae-75d404d95f9d', 
u'device': u'disk', u'shared': u'false', u'propagateErrors': u'off', 
u'optional': u'false'} (code=1)

<— snip —>
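
For reference, the excerpt above was pulled by grepping for the thread ID 
across the vdsm logs, roughly:

grep 'Thread-5475' vdsm/vdsm.log*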


Additionally I can find this:

—
Thread-5475::ERROR::2015-05-21 
18:27:24,107::task::866::Storage.TaskManager.Task::(_setError) 
Task=`7ca2f743-09b1-4499-b9e4-5f640002a2bc`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 873, in _run
return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 3235, in prepareImage
raise se.VolumeDoesNotExist(leafUUID)
VolumeDoesNotExist: Volume does not exist: 
(u'8791f6ec-a6ef-484d-bd5a-730b22b19250',)

—
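
(In hindsight, per the 2015-05-22 follow-up at the top of this thread, the 
volume files were simply not visible on the Gluster mount. A rough way to 
check whether the volume VDSM is asked to prepare is visible at all, using the 
UUIDs from the log; the glusterSD mount path below is illustrative:)

sd=276e9ba7-e19a-49c5-8ad7-26711934d5e4
img=eae65249-e5e8-49e7-90a0-c7385e80e6ca
vol=8791f6ec-a6ef-484d-bd5a-730b22b19250
# volume, lease and meta files live under <mount>/<sd-uuid>/images/<img-uuid>/
ls -l /rhev/data-center/mnt/glusterSD/*/$sd/images/$img/$vol*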


—
Thread-5475::ERROR::2015-05-21 
18:27:24,114::vm::2331::vm.Vm::(_startUnderlyingVm) 
vmId=`24bd5074-64fc-4aa0-87cd-5de3dd7b50d1`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 2271, in _startUnderlyingVm
self._run()
  File "/usr/share/vdsm/virt/vm.py", line 3266, in _run
self.preparePaths(devices[DISK_DEVICES])
  File "/usr/share/vdsm/virt/vm.py", line 2353, in preparePaths
drive['path'] = self.cif.prepareVolumePath(drive, self.id)
  File "/usr/share/vdsm/clientIF.py", line 277, in prepareVolumePath
raise vm.VolumeError(drive)
VolumeError: Bad volume specification {u'index': 0, u'iface': u'virtio', 
u'type': u'disk', u'format': u'cow', u'bootOrder': u'1', u'address': {u'slot': 
u'0x06', u'bus': u'0x00', u'domain': u'0x', u'type': u'pci', u'function': 
u'0x0'}, u'volumeID': u'8791f6ec-a6ef-484d-bd5a-730b22b19250', 'apparentsize': 
'2696609792', u'imageID': u'eae65249-e5e8-49e7-90a0-c7385e80e6ca', 
u'specParams': {}, u'readonly': u'false', u'domainID': 
u'276e9ba7-e19a-49c5-8ad7-26711934d5e4', 'reqsize': '0', u'deviceId': 
u'eae65249-e5e8-49e7-90a0-c7385e80e6ca', 'truesize': '2696552448', u'poolID': 
u'0f954891-b1cd-4f09-99ae-75d404d95f9d', u'device': u'disk', u'shared': 
u'false', u'propagateErrors': u'off', u'optional': u'false'}
—

Re: [ovirt-users] Switch from Fedora 20 to CentOS 7.1

2015-05-20 Thread Soeren Malchow
Great, thanks, that is the plan then.

From: <users-boun...@ovirt.org> on behalf of "Bloemen, Jurriën"
Date: Wednesday 20 May 2015 15:27
To: "users@ovirt.org"
Subject: Re: [ovirt-users] Switch from Fedora 20 to CentOS 7.1

<— snip —>


Re: [ovirt-users] Switch from Fedora 20 to CentOS 7.1

2015-05-20 Thread Bloemen, Jurriën
Hi Soeren,

Yes! That works perfectly. Did it myself several times.

Regards,

Jurriën

On 20-05-15 14:19, Soeren Malchow wrote:

<— snip —>




Re: [ovirt-users] Switch from Fedora 20 to CentOS 7.1

2015-05-20 Thread Soeren Malchow
Hi Vered,

Thanks for the quick answer, ok, understood

Then I could create a new cluster in the same datacenter with newly installed 
hosts and then migrate the machines by shutting them down in the old cluster 
and starting them in the new cluster. The only thing I lose is live 
migration.

Regards
Soeren 



On 20/05/15 14:04, "Vered Volansky"  wrote:

><— snip —>


Re: [ovirt-users] Switch from Fedora 20 to CentOS 7.1

2015-05-20 Thread Gianluca Cecchi
A possible workflow, with little downtime and provided the source DC is
already at version 3.5, could be to follow:

http://www.ovirt.org/Features/ImportStorageDomain

so that you can set up the new hosts and DC and then attach the old
Storage Domain.
I have not tried it myself yet, though.

Gianluca


Re: [ovirt-users] Switch from Fedora 20 to CentOS 7.1

2015-05-20 Thread Vered Volansky
Hi Soeren,

oVirt clusters support one host distribution (all hosts must be of the same 
distribution).
If the cluster is empty at some point, you can add a host of a different 
distribution than the one the cluster held before.
But there can't be two types of distribution at the same time in one cluster.

Regards,
Vered

- Original Message -
> From: "Soeren Malchow" 
> To: users@ovirt.org
> Sent: Wednesday, May 20, 2015 2:58:11 PM
> Subject: [ovirt-users] Switch from Fedora 20 to CentOS 7.1
> 
> <— snip —>


[ovirt-users] Switch from Fedora 20 to CentOS 7.1

2015-05-20 Thread Soeren Malchow
Dear all,

Would it be possible to switch from Fedora 20 to CentOS 7.1 (as far as I 
understood, it now has live merge support) within one cluster, meaning

  *   Take out one compute host
  *   Reinstall that compute host with CentOS 7.1
  *   Do a hosted-engine --deploy (rough sketch below)
  *   Migrate VMs to the CentOS 7.1 host
  *   Take the next Fedora host and reinstall
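
For reference, the reinstall and deploy steps would look roughly like this (a 
rough sketch assuming the oVirt 3.5 release package; URL and package names to 
be verified against your setup):

# on the freshly installed CentOS 7.1 host
yum install -y http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm
yum install -y ovirt-hosted-engine-setup
hosted-engine --deploy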

Any experiences, recommendations or remarks on that?

Regards
Soeren