[ovirt-users] Re: New Host Migration Failure & Console Failure

2020-02-24 Thread Jonathan Mathews
ame': 'p4p2', 'tx': '0', 'txDropped': '0', 'duplex': 'unknown',
'sampleTime': 1582539974.218536, 'rx': '0', 'state': 'down'}, 'bond0':
{'rxErrors': '0', 'txErrors': '0', 'speed': '1000', 'rxDropped': '0',
'name': 'bond0', 'tx': '2686605379', 'txDropped': '0', 'duplex': 'full',
'sampleTime': 1582539974.218536, 'rx': '54169240727', 'state': 'up'},
'ovirtmgmt': {'rxErrors': '0', 'txErrors': '0', 'speed': '1000',
'rxDropped': '845', 'name': 'ovirtmgmt', 'tx': '2575785594', 'txDropped':
'0', 'duplex': 'unknown', 'sampleTime': 1582539974.218536, 'rx':
'52484350965', 'state': 'up'}, 'lo': {'rxErrors': '0', 'txErrors': '0',
'speed': '1000', 'rxDropped': '0', 'name': 'lo', 'tx': '73036971',
'txDropped': '0', 'duplex': 'unknown', 'sampleTime': 1582539974.218536,
'rx': '73036971', 'state': 'up'}, 'ovs-system': {'rxErrors': '0',
'txErrors': '0', 'speed': '1000', 'rxDropped': '0', 'name': 'ovs-system',
'tx': '0', 'txDropped': '0', 'duplex': 'unknown', 'sampleTime':
1582539974.218536, 'rx': '0', 'state': 'down'}, ';vdsmdummy;': {'rxErrors':
'0', 'txErrors': '0', 'speed': '1000', 'rxDropped': '0', 'name':
';vdsmdummy;', 'tx': '0', 'txDropped': '0', 'duplex': 'unknown',
'sampleTime': 1582539974.218536, 'rx': '0', 'state': 'down'}, 'br-int':
{'rxErrors': '0', 'txErrors': '0', 'speed': '1000', 'rxDropped': '0',
'name': 'br-int', 'tx': '0', 'txDropped': '0', 'duplex': 'unknown',
'sampleTime': 1582539974.218536, 'rx': '0', 'state': 'down'}, 'em1':
{'rxErrors': '0', 'txErrors': '0', 'speed': '1000', 'rxDropped': '0',
'name': 'em1', 'tx': '2696308698', 'txDropped': '0', 'duplex': 'full',
'sampleTime': 1582539974.218536, 'rx': '54026312941', 'state': 'up'},
'genev_sys_6081': {'rxErrors': '0', 'txErrors': '11', 'speed': '1000',
'rxDropped': '0', 'name': 'genev_sys_6081', 'tx': '0', 'txDropped': '0',
'duplex': 'unknown', 'sampleTime': 1582539974.218536, 'rx': '0', 'state':
'up'}, 'em2': {'rxErrors': '0', 'txErrors': '0', 'speed': '1000',
'rxDropped': '0', 'name': 'em2', 'tx': '384', 'txDropped': '0', 'duplex':
'full', 'sampleTime': 1582539974.218536, 'rx': '331164676', 'state':
'up'}}, 'txDropped': '845', 'anonHugePages': '212', 'ksmPages': 100,
'elapsedTime': '434870.25', 'cpuLoad': '0.01', 'cpuSys': '0.04',
'diskStats': {'/var/log': {'free': '7434'}, '/var/run/vdsm/': {'free':
'48186'}, '/tmp': {'free': '905'}}, 'cpuUserVdsmd': '0.47',
'netConfigDirty': 'False', 'memCommitted': 0, 'ksmState': False,
'vmMigrating': 0, 'ksmCpu': 0, 'memAvailable': 95451, 'bootTime':
'1582103448', 'haStats': {'active': False, 'configured': False, 'score': 0,
'localMaintenance': False, 'globalMaintenance': False}, 'momStatus':
'active', 'multipathHealth': {}, 'rxDropped': '0', 'outgoingVmMigrations':
0, 'swapTotal': 4095, 'swapFree': 4095, 'hugepages': defaultdict(, {1048576: {'resv_hugepages': 0, 'free_hugepages': 0,
'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, 'vm.free_hugepages':
0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}, 2048:
{'resv_hugepages': 0, 'free_hugepages': 0, 'nr_overcommit_hugepages': 0,
'surplus_hugepages': 0, 'vm.free_hugepages': 0, 'nr_hugepages': 0,
'nr_hugepages_mempolicy': 0}}), 'dateTime': '2020-02-24T10:26:14 GMT',
'cpuUser': '0.05', 'memFree': 95195, 'cpuIdle': '99.91', 'vmActive': 0,
'v2vJobs': {}, 'cpuSysVdsmd': '0.33'}} from=:::172.10.10.50,47246
(api:54)
2020-02-24 10:26:14,228+ INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC
call Host.getStats succeeded in 0.03 seconds (__init__:312)

I would really appreciate any assistance.

Sincerely
Jonathan Mathews

On Wed, Feb 19, 2020 at 12:19 PM Jonathan Mathews 
wrote:

> Good Day
>
> I have installed new oVirt platform with hosted engine, but when I add a
> new host, the notification keeps repeating "Finished Activating Host" and
> it does not stop until I select do not disturb for 1 day. (Screenshot
> attached)
>
> Also once the new host has been added, I am unable to migrate a VM to it,
> if I start a VM up on the new host, I am unable to migrate the VM away from
> it or launch a console.
>
> Please see the following logs from when I tried to migrate a VM to the new
> host.
>
> 2020-02-19 09:48:27,034Z INFO
>  [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-55)
> [0de8c750-2955-40f3-ab75-a33d29894199] Running command:
> MigrateVmToServerCommand internal: false. Entities affected :  ID:
> 7b0b6e6d-d099-43e0-933f-3c335b54a3a1 Type: VMAction group MIGRATE_VM with
> role type USER
> 2020-02-19 09:48:27,113Z INFO
>  [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-55)
> [0de8c750-2955-40f3-ab75-a33d29894199] START, MigrateVDSCommand(
> MigrateVDSCommandParameters:{hostId='df715653-daf4-457e-839d-95683ab21234',
> vmId='7b0b6e6d-d099-43e0-933f-3c335b54a3a1', srcHost='
> host03.timefreight.co.za',
> dstVdsId='896b7f02-00e9-405c-b166-ec103a7f9ee8', dstHost='
> host01.timefreight.co.za:54321', migrationMethod='ONLINE',
> tunnelMigrat

[ovirt-users] New Host Migration Failure & Console Failure

2020-02-19 Thread Jonathan Mathews
27,242Z ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engine-Thread-92544) [] EVENT_ID:
VM_MIGRATION_TO_SERVER_FAILED(120), Migration failed  (VM: accpac, Source:
host03.timefreight.co.za, Destination: host01.timefreight.co.za).
2020-02-19 09:48:27,254Z INFO
 [org.ovirt.engine.core.bll.MigrateVmToServerCommand]
(EE-ManagedThreadFactory-engine-Thread-92544) [] Lock freed to object
'EngineLock:{exclusiveLocks='[7b0b6e6d-d099-43e0-933f-3c335b54a3a1=VM]',
sharedLocks=''}'

The first host is running oVirt 4.3.7, the Hosted-Engine is running oVirt
4.3.8 and the new host is running oVirt 4.3.8

I would really appreciate some assistance.

Sincerely
Jonathan Mathews
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KLEZPTC3V45MVKFOPI5BUKUSIY5IAZ2U/


[ovirt-users] Re: oVirt change IP's & add new ISO share

2019-11-13 Thread Jonathan Mathews
Hi Pavel

Thank you for the advise, it helped a lot.

Do you have recommendation on changing the IP addresses of the oVirt
Engine, hosts and storage?

My customer did a SD-WAN setup and there are conflicting IP's, so I need to
move the oVirt environment to a new IP range.

Thanks

On Tue, Nov 12, 2019 at 2:15 PM Pavel Bar  wrote:

> Hi Jonathan,
> Probably the easiest way is to remove the domain.
>
> Can you try the following:
> 1) Go to "Storage" menu.
> 2) Go to "Domain" menu.
> 3) Click on the domain with the red status.
> 4) Destroy it (in your case the "Destroy" option should be enabled):
> [image: image.png]
>
> Hope it helps.
>
> Pavel
>
>
> On Wed, Nov 6, 2019 at 10:21 AM Jonathan Mathews 
> wrote:
>
>> Good Day
>>
>> Is it possible to get some assistance?
>>
>> I am really stuck with this.
>>
>> On Wed, Oct 30, 2019 at 12:33 PM Jonathan Mathews 
>> wrote:
>>
>>> Good Day
>>>
>>> I have to change the IP addresses of the oVirt Engine, hosts and storage
>>> to a new IP range. Please, can you advise the best way to do this and if
>>> there is anything I would need to change in the database?
>>>
>>> I have also run into an issue where someone has removed the ISO
>>> share/data on the storage, so I am unable to remove, activate, detach or
>>> even add a new ISO share.
>>> Please, can you advise the best wat to resolve this?
>>>
>>> Please see the below engine logs:
>>>
>>> 2019-10-30 11:39:13,918 ERROR
>>> [org.ovirt.engine.core.vdsbroker.irsbroker.DetachStorageDomainVDSCommand]
>>> (org.ovirt.thread.pool-8-thread-27) [58f6cfb8] Failed in
>>> 'DetachStorageDomainVDS' method
>>> 2019-10-30 11:39:13,942 ERROR
>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>> (org.ovirt.thread.pool-8-thread-27) [58f6cfb8] Correlation ID: null, Call
>>> Stack: null, Custom Event ID: -1, Message: VDSM command failed: Storage
>>> domain does not exist: (u'42b7d819-ce3a-4a18-a683-f4817c4bdb06',)
>>> 2019-10-30 11:39:13,943 ERROR
>>> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
>>> (org.ovirt.thread.pool-8-thread-27) [58f6cfb8]
>>> IrsBroker::Failed::DetachStorageDomainVDS: IRSGenericException:
>>> IRSErrorException: Failed to DetachStorageDomainVDS, error = Storage domain
>>> does not exist: (u'42b7d819-ce3a-4a18-a683-f4817c4bdb06',), code = 358
>>> 2019-10-30 11:39:13,951 INFO
>>>  [org.ovirt.engine.core.vdsbroker.irsbroker.DetachStorageDomainVDSCommand]
>>> (org.ovirt.thread.pool-8-thread-27) [58f6cfb8] FINISH,
>>> DetachStorageDomainVDSCommand, log id: 5547e2df
>>> 2019-10-30 11:39:13,951 INFO
>>>  [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand]
>>> (org.ovirt.thread.pool-8-thread-38) [58f6cfb8] -- executeIrsBrokerCommand:
>>> Attempting on storage pool '5849b030-626e-47cb-ad90-3ce782d831b3'
>>> 2019-10-30 11:39:13,951 ERROR
>>> [org.ovirt.engine.core.bll.storage.DetachStorageDomainFromPoolCommand]
>>> (org.ovirt.thread.pool-8-thread-27) [58f6cfb8] Command
>>> 'org.ovirt.engine.core.bll.storage.DetachStorageDomainFromPoolCommand'
>>> failed: EngineException:
>>> org.ovirt.engine.core.vdsbroker.irsbroker.IRSErrorException:
>>> IRSGenericException: IRSErrorException: Failed to DetachStorageDomainVDS,
>>> error = Storage domain does not exist:
>>> (u'42b7d819-ce3a-4a18-a683-f4817c4bdb06',), code = 358 (Failed with error
>>> StorageDomainDoesNotExist and code 358)
>>> 2019-10-30 11:39:13,952 INFO
>>>  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand]
>>> (org.ovirt.thread.pool-8-thread-38) [58f6cfb8] START,
>>> HSMGetAllTasksInfoVDSCommand(HostName = host01,
>>> VdsIdVDSCommandParametersBase:{runAsync='true',
>>> hostId='291a3a19-7467-4783-a6f7-2b2dd0de9ad3'}), log id: 6cc238fb
>>> 2019-10-30 11:39:13,952 INFO
>>>  [org.ovirt.engine.core.bll.storage.DetachStorageDomainFromPoolCommand]
>>> (org.ovirt.thread.pool-8-thread-27) [58f6cfb8] Command
>>> [id=cec030b7-4a62-43a2-9ae8-de56a5d71ef8]: Compensating CHANGED_STATUS_ONLY
>>> of org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap;
>>> snapshot:
>>> EntityStatusSnapshot:{id='StoragePoolIsoMapId:{storagePoolId='5849b030-626e-47cb-ad90-3ce782d831b3',
>>> storageId='42b7d819-ce3a-4a18-a683-f4817c4bdb06'}', status='Inactive'}.
>>> 2019-10-30 11:39:13,975 ERROR
>>> [org.ovirt.engine.core.dal.dbbrok

[ovirt-users] Re: oVirt change IP's & add new ISO share

2019-11-06 Thread Jonathan Mathews
Good Day

Is it possible to get some assistance?

I am really stuck with this.

On Wed, Oct 30, 2019 at 12:33 PM Jonathan Mathews 
wrote:

> Good Day
>
> I have to change the IP addresses of the oVirt Engine, hosts and storage
> to a new IP range. Please, can you advise the best way to do this and if
> there is anything I would need to change in the database?
>
> I have also run into an issue where someone has removed the ISO share/data
> on the storage, so I am unable to remove, activate, detach or even add a
> new ISO share.
> Please, can you advise the best wat to resolve this?
>
> Please see the below engine logs:
>
> 2019-10-30 11:39:13,918 ERROR
> [org.ovirt.engine.core.vdsbroker.irsbroker.DetachStorageDomainVDSCommand]
> (org.ovirt.thread.pool-8-thread-27) [58f6cfb8] Failed in
> 'DetachStorageDomainVDS' method
> 2019-10-30 11:39:13,942 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (org.ovirt.thread.pool-8-thread-27) [58f6cfb8] Correlation ID: null, Call
> Stack: null, Custom Event ID: -1, Message: VDSM command failed: Storage
> domain does not exist: (u'42b7d819-ce3a-4a18-a683-f4817c4bdb06',)
> 2019-10-30 11:39:13,943 ERROR
> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
> (org.ovirt.thread.pool-8-thread-27) [58f6cfb8]
> IrsBroker::Failed::DetachStorageDomainVDS: IRSGenericException:
> IRSErrorException: Failed to DetachStorageDomainVDS, error = Storage domain
> does not exist: (u'42b7d819-ce3a-4a18-a683-f4817c4bdb06',), code = 358
> 2019-10-30 11:39:13,951 INFO
>  [org.ovirt.engine.core.vdsbroker.irsbroker.DetachStorageDomainVDSCommand]
> (org.ovirt.thread.pool-8-thread-27) [58f6cfb8] FINISH,
> DetachStorageDomainVDSCommand, log id: 5547e2df
> 2019-10-30 11:39:13,951 INFO
>  [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand]
> (org.ovirt.thread.pool-8-thread-38) [58f6cfb8] -- executeIrsBrokerCommand:
> Attempting on storage pool '5849b030-626e-47cb-ad90-3ce782d831b3'
> 2019-10-30 11:39:13,951 ERROR
> [org.ovirt.engine.core.bll.storage.DetachStorageDomainFromPoolCommand]
> (org.ovirt.thread.pool-8-thread-27) [58f6cfb8] Command
> 'org.ovirt.engine.core.bll.storage.DetachStorageDomainFromPoolCommand'
> failed: EngineException:
> org.ovirt.engine.core.vdsbroker.irsbroker.IRSErrorException:
> IRSGenericException: IRSErrorException: Failed to DetachStorageDomainVDS,
> error = Storage domain does not exist:
> (u'42b7d819-ce3a-4a18-a683-f4817c4bdb06',), code = 358 (Failed with error
> StorageDomainDoesNotExist and code 358)
> 2019-10-30 11:39:13,952 INFO
>  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand]
> (org.ovirt.thread.pool-8-thread-38) [58f6cfb8] START,
> HSMGetAllTasksInfoVDSCommand(HostName = host01,
> VdsIdVDSCommandParametersBase:{runAsync='true',
> hostId='291a3a19-7467-4783-a6f7-2b2dd0de9ad3'}), log id: 6cc238fb
> 2019-10-30 11:39:13,952 INFO
>  [org.ovirt.engine.core.bll.storage.DetachStorageDomainFromPoolCommand]
> (org.ovirt.thread.pool-8-thread-27) [58f6cfb8] Command
> [id=cec030b7-4a62-43a2-9ae8-de56a5d71ef8]: Compensating CHANGED_STATUS_ONLY
> of org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap;
> snapshot:
> EntityStatusSnapshot:{id='StoragePoolIsoMapId:{storagePoolId='5849b030-626e-47cb-ad90-3ce782d831b3',
> storageId='42b7d819-ce3a-4a18-a683-f4817c4bdb06'}', status='Inactive'}.
> 2019-10-30 11:39:13,975 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (org.ovirt.thread.pool-8-thread-27) [58f6cfb8] Correlation ID: 28ac658, Job
> ID: b31e0f44-2d82-47bf-90d9-f69e399d994f, Call Stack: null, Custom Event
> ID: -1, Message: Failed to detach Storage Domain iso to Data Center
> Default. (User: admin@internal)
>
>
>
>
>
> 2019-10-30 11:42:46,711 INFO
>  [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand]
> (org.ovirt.thread.pool-8-thread-25) [31e89bba] START,
> SPMGetAllTasksInfoVDSCommand( IrsBaseVDSCommandParameters:{runAsync='true',
> storagePoolId='5849b030-626e-47cb-ad90-3ce782d831b3',
> ignoreFailoverLimit='false'}), log id: 59192768
> 2019-10-30 11:42:48,825 ERROR
> [org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand]
> (org.ovirt.thread.pool-8-thread-34) [31e89bba] Failed in
> 'ActivateStorageDomainVDS' method
> 2019-10-30 11:42:48,855 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (org.ovirt.thread.pool-8-thread-34) [31e89bba] Correlation ID: null, Call
> Stack: null, Custom Event ID: -1, Message: VDSM command failed: Storage
> domain does not exist: (u'42b7d819-ce3a-4a18-a683-f4817c4bdb06',)
> 2019-10-30 11:42:48,856 ERROR
> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
> (org.ovirt.thre

[ovirt-users] oVirt change IP's & add new ISO share

2019-10-30 Thread Jonathan Mathews
Good Day

I have to change the IP addresses of the oVirt Engine, hosts and storage to
a new IP range. Please, can you advise the best way to do this and if there
is anything I would need to change in the database?

I have also run into an issue where someone has removed the ISO share/data
on the storage, so I am unable to remove, activate, detach or even add a
new ISO share.
Please, can you advise the best wat to resolve this?

Please see the below engine logs:

2019-10-30 11:39:13,918 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.DetachStorageDomainVDSCommand]
(org.ovirt.thread.pool-8-thread-27) [58f6cfb8] Failed in
'DetachStorageDomainVDS' method
2019-10-30 11:39:13,942 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-27) [58f6cfb8] Correlation ID: null, Call
Stack: null, Custom Event ID: -1, Message: VDSM command failed: Storage
domain does not exist: (u'42b7d819-ce3a-4a18-a683-f4817c4bdb06',)
2019-10-30 11:39:13,943 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(org.ovirt.thread.pool-8-thread-27) [58f6cfb8]
IrsBroker::Failed::DetachStorageDomainVDS: IRSGenericException:
IRSErrorException: Failed to DetachStorageDomainVDS, error = Storage domain
does not exist: (u'42b7d819-ce3a-4a18-a683-f4817c4bdb06',), code = 358
2019-10-30 11:39:13,951 INFO
 [org.ovirt.engine.core.vdsbroker.irsbroker.DetachStorageDomainVDSCommand]
(org.ovirt.thread.pool-8-thread-27) [58f6cfb8] FINISH,
DetachStorageDomainVDSCommand, log id: 5547e2df
2019-10-30 11:39:13,951 INFO
 [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand]
(org.ovirt.thread.pool-8-thread-38) [58f6cfb8] -- executeIrsBrokerCommand:
Attempting on storage pool '5849b030-626e-47cb-ad90-3ce782d831b3'
2019-10-30 11:39:13,951 ERROR
[org.ovirt.engine.core.bll.storage.DetachStorageDomainFromPoolCommand]
(org.ovirt.thread.pool-8-thread-27) [58f6cfb8] Command
'org.ovirt.engine.core.bll.storage.DetachStorageDomainFromPoolCommand'
failed: EngineException:
org.ovirt.engine.core.vdsbroker.irsbroker.IRSErrorException:
IRSGenericException: IRSErrorException: Failed to DetachStorageDomainVDS,
error = Storage domain does not exist:
(u'42b7d819-ce3a-4a18-a683-f4817c4bdb06',), code = 358 (Failed with error
StorageDomainDoesNotExist and code 358)
2019-10-30 11:39:13,952 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand]
(org.ovirt.thread.pool-8-thread-38) [58f6cfb8] START,
HSMGetAllTasksInfoVDSCommand(HostName = host01,
VdsIdVDSCommandParametersBase:{runAsync='true',
hostId='291a3a19-7467-4783-a6f7-2b2dd0de9ad3'}), log id: 6cc238fb
2019-10-30 11:39:13,952 INFO
 [org.ovirt.engine.core.bll.storage.DetachStorageDomainFromPoolCommand]
(org.ovirt.thread.pool-8-thread-27) [58f6cfb8] Command
[id=cec030b7-4a62-43a2-9ae8-de56a5d71ef8]: Compensating CHANGED_STATUS_ONLY
of org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap;
snapshot:
EntityStatusSnapshot:{id='StoragePoolIsoMapId:{storagePoolId='5849b030-626e-47cb-ad90-3ce782d831b3',
storageId='42b7d819-ce3a-4a18-a683-f4817c4bdb06'}', status='Inactive'}.
2019-10-30 11:39:13,975 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-27) [58f6cfb8] Correlation ID: 28ac658, Job
ID: b31e0f44-2d82-47bf-90d9-f69e399d994f, Call Stack: null, Custom Event
ID: -1, Message: Failed to detach Storage Domain iso to Data Center
Default. (User: admin@internal)





2019-10-30 11:42:46,711 INFO
 [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand]
(org.ovirt.thread.pool-8-thread-25) [31e89bba] START,
SPMGetAllTasksInfoVDSCommand( IrsBaseVDSCommandParameters:{runAsync='true',
storagePoolId='5849b030-626e-47cb-ad90-3ce782d831b3',
ignoreFailoverLimit='false'}), log id: 59192768
2019-10-30 11:42:48,825 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand]
(org.ovirt.thread.pool-8-thread-34) [31e89bba] Failed in
'ActivateStorageDomainVDS' method
2019-10-30 11:42:48,855 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-34) [31e89bba] Correlation ID: null, Call
Stack: null, Custom Event ID: -1, Message: VDSM command failed: Storage
domain does not exist: (u'42b7d819-ce3a-4a18-a683-f4817c4bdb06',)
2019-10-30 11:42:48,856 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(org.ovirt.thread.pool-8-thread-34) [31e89bba]
IrsBroker::Failed::ActivateStorageDomainVDS: IRSGenericException:
IRSErrorException: Failed to ActivateStorageDomainVDS, error = Storage
domain does not exist: (u'42b7d819-ce3a-4a18-a683-f4817c4bdb06',), code =
358
2019-10-30 11:42:48,864 INFO
 [org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand]
(org.ovirt.thread.pool-8-thread-34) [31e89bba] FINISH,
ActivateStorageDomainVDSCommand, log id: 518fdcf
2019-10-30 11:42:48,865 ERROR
[org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand]

[ovirt-users] Re: Unable to change cluster and data center compatibility version

2019-02-25 Thread Jonathan Mathews
Hi Paul

Thank you for bringing this to my attention.

It does look like I will need to upgrade all my hosts to CentOS 7.

But would the prevent me from changing my cluster compatibility version to
3.6?

Thanks

On Wed, Feb 20, 2019 at 11:59 AM Staniforth, Paul <
p.stanifo...@leedsbeckett.ac.uk> wrote:

> Does oVirt 4.0 support Centos6 hosts?
>
>
> Regards,
>
>  Paul S.
> ------
> *From:* Jonathan Mathews 
> *Sent:* 20 February 2019 06:25
> *To:* Shani Leviim
> *Cc:* Ovirt Users
> *Subject:* [ovirt-users] Re: Unable to change cluster and data center
> compatibility version
>
> Hi Shani
>
> Yes, I did try and change the cluster compatibility first, and that's when
> I got the error: Ovirt: Some of the hosts still use legacy protocol which
> is not supported by cluster 3.6 or higher. In order to change it a host
> needs to be put to maintenance and edited in advanced options section.
>
> I did go through the article you suggested, however, I did not see
> anything that would help.
>
> Thanks
>
>
> On Tue, Feb 19, 2019 at 3:46 PM Shani Leviim  wrote:
>
>> Hi Jonathan,
>>
>> Did you try first to change the compatibility of all clusters and then
>> change the data center's compatibility?
>> This once seems related:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1375567
>>
>>
>> *Regards, *
>>
>> *Shani Leviim *
>>
>>
>> On Tue, Feb 19, 2019 at 11:01 AM Jonathan Mathews 
>> wrote:
>>
>>> Good Day
>>>
>>> I have been trying to upgrade a clients oVirt from 3.6 to 4.0 but have
>>> run into an issue where I am unable to change the cluster and data center
>>> compatibility version.
>>>
>>> I get the following error in the GUI:
>>>
>>> Ovirt: Some of the hosts still use legacy protocol which is not
>>> supported by cluster 3.6 or higher. In order to change it a host needs to
>>> be put to maintenance and edited in advanced options section.
>>>
>>> This error was received with all VM's off and all hosts in maintenance.
>>>
>>> The environment has the following currently installed:
>>>
>>> Engine - CentOS 7.4 - Ovirt Engine 3.6.7.5
>>> Host1 - CentOS 6.9 - VDSM 4.16.30
>>> Host2 - CentOS 6.9 - VDSM 4.16.30
>>> Host3 - CentOS 6.9 - VDSM 4.16.30
>>>
>>> I also have the following from engine.log
>>>
>>> [root@ovengine ~]# tail -f /var/log/ovirt-engine/engine.log
>>> 2018-09-22 07:11:33,920 INFO
>>> [org.ovirt.engine.core.vdsbroker.VmsStatisticsFetcher]
>>> (DefaultQuartzScheduler_Worker-93) [7533985f] Fetched 0 VMs from VDS
>>> 'd82a026c-31b4-4efc-8567-c4a6bdcaa826'
>>> 2018-09-22 07:11:34,685 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStoragePoolVDSCommand]
>>> (DefaultQuartzScheduler_Worker-99) [4b7e3710] FINISH,
>>> DisconnectStoragePoolVDSCommand, log id: 1ae6f0a9
>>> 2018-09-22 07:11:34,687 INFO
>>> [org.ovirt.engine.core.bll.storage.DisconnectHostFromStoragePoolServersCommand]
>>> (DefaultQuartzScheduler_Worker-99) [2a6aa6f6] Running command:
>>> DisconnectHostFromStoragePoolServersCommand internal: true. Entities
>>> affected :  ID: 5849b030-626e-47cb-ad90-3ce782d831b3 Type: StoragePool
>>> 2018-09-22 07:11:34,706 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
>>> (DefaultQuartzScheduler_Worker-99) [2a6aa6f6] START,
>>> DisconnectStorageServerVDSCommand(HostName = ovhost3,
>>> StorageServerConnectionManagementVDSParameters:{runAsync='true',
>>> hostId='d82a026c-31b4-4efc-8567-c4a6bdcaa826',
>>> storagePoolId='5849b030-626e-47cb-ad90-3ce782d831b3', storageType='NFS',
>>> connectionList='[StorageServerConnections:{id='3fdffb4c-250b-4a4e-b914-e0da1243550e',
>>> connection='172.16.0.10:/raid0/data/_NAS_NFS_Exports_/STORAGE1',
>>> iqn='null', vfsType='null', mountOptions='null', nfsVersion='null',
>>> nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'},
>>> StorageServerConnections:{id='4d95c8ca-435a-4e44-86a5-bc7f3a0cd606',
>>> connection='172.16.0.20:/data/ov-export', iqn='null', vfsType='null',
>>> mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null',
>>> iface='null', netIfaceName='null'},
>>> StorageServerConnections:{id='82ecbc89-bdf3-4597-9a93-b16f3a6ac117',
>>> connection='172.16.0.11:/raid1/data/_NAS_NFS_Exports_/4TB', iqn='null',
>>> vfsType='null', mountOptions='null', nfsVersion='null', nfs

[ovirt-users] Re: Unable to change cluster and data center compatibility version

2019-02-19 Thread Jonathan Mathews
Hi Shani

Yes, I did try and change the cluster compatibility first, and that's when
I got the error: Ovirt: Some of the hosts still use legacy protocol which
is not supported by cluster 3.6 or higher. In order to change it a host
needs to be put to maintenance and edited in advanced options section.

I did go through the article you suggested, however, I did not see anything
that would help.

Thanks


On Tue, Feb 19, 2019 at 3:46 PM Shani Leviim  wrote:

> Hi Jonathan,
>
> Did you try first to change the compatibility of all clusters and then
> change the data center's compatibility?
> This once seems related:
> https://bugzilla.redhat.com/show_bug.cgi?id=1375567
>
>
> *Regards,*
>
> *Shani Leviim*
>
>
> On Tue, Feb 19, 2019 at 11:01 AM Jonathan Mathews 
> wrote:
>
>> Good Day
>>
>> I have been trying to upgrade a clients oVirt from 3.6 to 4.0 but have
>> run into an issue where I am unable to change the cluster and data center
>> compatibility version.
>>
>> I get the following error in the GUI:
>>
>> Ovirt: Some of the hosts still use legacy protocol which is not supported
>> by cluster 3.6 or higher. In order to change it a host needs to be put to
>> maintenance and edited in advanced options section.
>>
>> This error was received with all VM's off and all hosts in maintenance.
>>
>> The environment has the following currently installed:
>>
>> Engine - CentOS 7.4 - Ovirt Engine 3.6.7.5
>> Host1 - CentOS 6.9 - VDSM 4.16.30
>> Host2 - CentOS 6.9 - VDSM 4.16.30
>> Host3 - CentOS 6.9 - VDSM 4.16.30
>>
>> I also have the following from engine.log
>>
>> [root@ovengine ~]# tail -f /var/log/ovirt-engine/engine.log
>> 2018-09-22 07:11:33,920 INFO
>> [org.ovirt.engine.core.vdsbroker.VmsStatisticsFetcher]
>> (DefaultQuartzScheduler_Worker-93) [7533985f] Fetched 0 VMs from VDS
>> 'd82a026c-31b4-4efc-8567-c4a6bdcaa826'
>> 2018-09-22 07:11:34,685 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStoragePoolVDSCommand]
>> (DefaultQuartzScheduler_Worker-99) [4b7e3710] FINISH,
>> DisconnectStoragePoolVDSCommand, log id: 1ae6f0a9
>> 2018-09-22 07:11:34,687 INFO
>> [org.ovirt.engine.core.bll.storage.DisconnectHostFromStoragePoolServersCommand]
>> (DefaultQuartzScheduler_Worker-99) [2a6aa6f6] Running command:
>> DisconnectHostFromStoragePoolServersCommand internal: true. Entities
>> affected :  ID: 5849b030-626e-47cb-ad90-3ce782d831b3 Type: StoragePool
>> 2018-09-22 07:11:34,706 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
>> (DefaultQuartzScheduler_Worker-99) [2a6aa6f6] START,
>> DisconnectStorageServerVDSCommand(HostName = ovhost3,
>> StorageServerConnectionManagementVDSParameters:{runAsync='true',
>> hostId='d82a026c-31b4-4efc-8567-c4a6bdcaa826',
>> storagePoolId='5849b030-626e-47cb-ad90-3ce782d831b3', storageType='NFS',
>> connectionList='[StorageServerConnections:{id='3fdffb4c-250b-4a4e-b914-e0da1243550e',
>> connection='172.16.0.10:/raid0/data/_NAS_NFS_Exports_/STORAGE1',
>> iqn='null', vfsType='null', mountOptions='null', nfsVersion='null',
>> nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'},
>> StorageServerConnections:{id='4d95c8ca-435a-4e44-86a5-bc7f3a0cd606',
>> connection='172.16.0.20:/data/ov-export', iqn='null', vfsType='null',
>> mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null',
>> iface='null', netIfaceName='null'},
>> StorageServerConnections:{id='82ecbc89-bdf3-4597-9a93-b16f3a6ac117',
>> connection='172.16.0.11:/raid1/data/_NAS_NFS_Exports_/4TB', iqn='null',
>> vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null',
>> nfsTimeo='null', iface='null', netIfaceName='null'},
>> StorageServerConnections:{id='29bb3394-fb61-41c0-bb5a-1fa693ec2fe2',
>> connection='172.16.0.11:/raid1/data/_NAS_NFS_Exports_/iso', iqn='null',
>> vfsType='null', mountOptions='null', nfsVersion='V3', nfsRetrans='null',
>> nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log id: 48c5ffd6
>> 2018-09-22 07:11:34,991 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
>> (DefaultQuartzScheduler_Worker-99) [2a6aa6f6] FINISH,
>> DisconnectStorageServerVDSCommand, return:
>> {3fdffb4c-250b-4a4e-b914-e0da1243550e=0,
>> 29bb3394-fb61-41c0-bb5a-1fa693ec2fe2=0,
>> 82ecbc89-bdf3-4597-9a93-b16f3a6ac117=0,
>> 4d95c8ca-435a-4e44-86a5-bc7f3a0cd606=0}, log id: 48c5ffd6
>> 2018-09-22 07:11:56,367 WARN
>> [org.ovirt.engine.core.bll.UpdateVdsGroupCommand] (default task-29)
>> [

[ovirt-users] Unable to change cluster and data center compatibility version

2019-02-19 Thread Jonathan Mathews
Good Day

I have been trying to upgrade a clients oVirt from 3.6 to 4.0 but have run
into an issue where I am unable to change the cluster and data center
compatibility version.

I get the following error in the GUI:

Ovirt: Some of the hosts still use legacy protocol which is not supported
by cluster 3.6 or higher. In order to change it a host needs to be put to
maintenance and edited in advanced options section.

This error was received with all VM's off and all hosts in maintenance.

The environment has the following currently installed:

Engine - CentOS 7.4 - Ovirt Engine 3.6.7.5
Host1 - CentOS 6.9 - VDSM 4.16.30
Host2 - CentOS 6.9 - VDSM 4.16.30
Host3 - CentOS 6.9 - VDSM 4.16.30

I also have the following from engine.log

[root@ovengine ~]# tail -f /var/log/ovirt-engine/engine.log
2018-09-22 07:11:33,920 INFO
[org.ovirt.engine.core.vdsbroker.VmsStatisticsFetcher]
(DefaultQuartzScheduler_Worker-93) [7533985f] Fetched 0 VMs from VDS
'd82a026c-31b4-4efc-8567-c4a6bdcaa826'
2018-09-22 07:11:34,685 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStoragePoolVDSCommand]
(DefaultQuartzScheduler_Worker-99) [4b7e3710] FINISH,
DisconnectStoragePoolVDSCommand, log id: 1ae6f0a9
2018-09-22 07:11:34,687 INFO
[org.ovirt.engine.core.bll.storage.DisconnectHostFromStoragePoolServersCommand]
(DefaultQuartzScheduler_Worker-99) [2a6aa6f6] Running command:
DisconnectHostFromStoragePoolServersCommand internal: true. Entities
affected :  ID: 5849b030-626e-47cb-ad90-3ce782d831b3 Type: StoragePool
2018-09-22 07:11:34,706 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
(DefaultQuartzScheduler_Worker-99) [2a6aa6f6] START,
DisconnectStorageServerVDSCommand(HostName = ovhost3,
StorageServerConnectionManagementVDSParameters:{runAsync='true',
hostId='d82a026c-31b4-4efc-8567-c4a6bdcaa826',
storagePoolId='5849b030-626e-47cb-ad90-3ce782d831b3', storageType='NFS',
connectionList='[StorageServerConnections:{id='3fdffb4c-250b-4a4e-b914-e0da1243550e',
connection='172.16.0.10:/raid0/data/_NAS_NFS_Exports_/STORAGE1',
iqn='null', vfsType='null', mountOptions='null', nfsVersion='null',
nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'},
StorageServerConnections:{id='4d95c8ca-435a-4e44-86a5-bc7f3a0cd606',
connection='172.16.0.20:/data/ov-export', iqn='null', vfsType='null',
mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null',
iface='null', netIfaceName='null'},
StorageServerConnections:{id='82ecbc89-bdf3-4597-9a93-b16f3a6ac117',
connection='172.16.0.11:/raid1/data/_NAS_NFS_Exports_/4TB', iqn='null',
vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null',
nfsTimeo='null', iface='null', netIfaceName='null'},
StorageServerConnections:{id='29bb3394-fb61-41c0-bb5a-1fa693ec2fe2',
connection='172.16.0.11:/raid1/data/_NAS_NFS_Exports_/iso', iqn='null',
vfsType='null', mountOptions='null', nfsVersion='V3', nfsRetrans='null',
nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log id: 48c5ffd6
2018-09-22 07:11:34,991 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
(DefaultQuartzScheduler_Worker-99) [2a6aa6f6] FINISH,
DisconnectStorageServerVDSCommand, return:
{3fdffb4c-250b-4a4e-b914-e0da1243550e=0,
29bb3394-fb61-41c0-bb5a-1fa693ec2fe2=0,
82ecbc89-bdf3-4597-9a93-b16f3a6ac117=0,
4d95c8ca-435a-4e44-86a5-bc7f3a0cd606=0}, log id: 48c5ffd6
2018-09-22 07:11:56,367 WARN
[org.ovirt.engine.core.bll.UpdateVdsGroupCommand] (default task-29)
[1a31cc53] CanDoAction of action 'UpdateVdsGroup' failed for user
admin@internal. Reasons:
VAR__TYPE__CLUSTER,VAR__ACTION__UPDATE,ACTION_TYPE_FAILED_WRONG_PROTOCOL_FOR_CLUSTER_VERSION
2018-09-22 07:12:41,017 WARN
[org.ovirt.engine.core.bll.storage.UpdateStoragePoolCommand] (default
task-29) [efd285b] CanDoAction of action 'UpdateStoragePool' failed for
user admin@internal. Reasons:
VAR__TYPE__STORAGE__POOL,VAR__ACTION__UPDATE,$ClustersList
Default,ERROR_CANNOT_UPDATE_STORAGE_POOL_COMPATIBILITY_VERSION_BIGGER_THAN_CLUSTERS
2018-09-22 07:13:15,717 WARN
[org.ovirt.engine.core.bll.UpdateVdsGroupCommand] (default task-6)
[4c9f3ee8] CanDoAction of action 'UpdateVdsGroup' failed for user
admin@internal. Reasons:
VAR__TYPE__CLUSTER,VAR__ACTION__UPDATE,ACTION_TYPE_FAILED_WRONG_PROTOCOL_FOR_CLUSTER_VERSION
2018-09-22 07:15:21,460 WARN
[org.ovirt.engine.core.bll.UpdateVdsGroupCommand] (default task-28)
[649bae65] CanDoAction of action 'UpdateVdsGroup' failed for user
admin@internal. Reasons:
VAR__TYPE__CLUSTER,VAR__ACTION__UPDATE,ACTION_TYPE_FAILED_WRONG_PROTOCOL_FOR_CLUSTER_VERSION
2018-09-22 07:18:44,633 WARN
[org.ovirt.engine.core.bll.UpdateVdsGroupCommand] (default task-8)
[23167cfd] CanDoAction of action 'UpdateVdsGroup' failed for user
admin@internal. Reasons:
VAR__TYPE__CLUSTER,VAR__ACTION__UPDATE,ACTION_TYPE_FAILED_WRONG_PROTOCOL_FOR_CLUSTER_VERSION
2018-09-22 07:24:20,372 WARN
[org.ovirt.engine.core.bll.UpdateVdsGroupCommand] (default task-15)
[5d2ce633] CanDoAction of action 

Re: [ovirt-users] Failure to upgrade Cluster Compatibility Version

2018-03-12 Thread Jonathan Mathews
Hi

I do apologise, somehow all these emails seem to be going directly to my
trash, so I thought there was no reply.

It appears that I need to shutdown all VM's in that cluster, in order to
change the Cluster Compatibility Version.



On Thu, Mar 8, 2018 at 11:48 AM, Yaniv Kaul <yk...@redhat.com> wrote:

>
>
> On Thu, Mar 8, 2018 at 10:55 AM, Jonathan Mathews <jm3185...@gmail.com>
> wrote:
>
>> Hi , this has now become really urgent.
>>
>
> It's not clear to me why it's urgent.
> Please look at past replies and provide more information so we can assist
> you.
> Y.
>
>
>
>>
>> Everything I try, I am unable to get the Cluster Compatibility Version
>> to change.
>>
>> The entire platform is running the latest 3.6 release.
>>
>> On Tue, Mar 6, 2018 at 4:20 PM, Jonathan Mathews <jm3185...@gmail.com>
>> wrote:
>>
>>> Any chance of getting feedback on this?
>>>
>>> It is becoming urgent.
>>>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failure to upgrade Cluster Compatibility Version

2018-03-12 Thread Jonathan Mathews
Hi Everyone

Is it possible to get some feedback on this?

On Thu, Mar 8, 2018 at 10:55 AM, Jonathan Mathews <jm3185...@gmail.com>
wrote:

> Hi , this has now become really urgent.
>
> Everything I try, I am unable to get the Cluster Compatibility Version to
> change.
>
> The entire platform is running the latest 3.6 release.
>
> On Tue, Mar 6, 2018 at 4:20 PM, Jonathan Mathews <jm3185...@gmail.com>
> wrote:
>
>> Any chance of getting feedback on this?
>>
>> It is becoming urgent.
>>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failure to upgrade Cluster Compatibility Version

2018-03-08 Thread Jonathan Mathews
Hi , this has now become really urgent.

Everything I try, I am unable to get the Cluster Compatibility Version to
change.

The entire platform is running the latest 3.6 release.

On Tue, Mar 6, 2018 at 4:20 PM, Jonathan Mathews <jm3185...@gmail.com>
wrote:

> Any chance of getting feedback on this?
>
> It is becoming urgent.
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failure to upgrade Cluster Compatibility Version

2018-03-06 Thread Jonathan Mathews
Any chance of getting feedback on this?

It is becoming urgent.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failure to upgrade Cluster Compatibility Version

2018-03-04 Thread Jonathan Mathews
Good Day

I do apologize for the for the duplication, but is anyone able to advise on
this?


On Wed, Feb 28, 2018 at 12:21 PM, Jonathan Mathews <jm3185...@gmail.com>
wrote:

> I have been upgrading my oVirt platform from 3.4 and I am trying to get to
> 4.2.
>
> I have managed to get the platform to 3.6, but need to upgrade the Cluster
> Compatibility Version.
>
> When I select 3.6 in the Cluster Compatibility Version and select OK, it
> highlights Compatibility Version in red, (image attached).
>
> There are no errors been displayed on screen, or in
> the /var/log/ovirt-engine/engine.log file.
>
> Please let me know if I am missing something and how I can resolve this?
>
> Thanks
> Jonathan
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Failure to upgrade Cluster Compatibility Version

2018-02-28 Thread Jonathan Mathews
I have been upgrading my oVirt platform from 3.4 and I am trying to get to
4.2.

I have managed to get the platform to 3.6, but need to upgrade the Cluster
Compatibility Version.

When I select 3.6 in the Cluster Compatibility Version and select OK, it
highlights Compatibility Version in red, (image attached).

There are no errors been displayed on screen, or in
the /var/log/ovirt-engine/engine.log file.

Please let me know if I am missing something and how I can resolve this?

Thanks
Jonathan
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Ovirt GusterFS assistance

2015-03-23 Thread Jonathan Mathews
Hi I am trying to setup an Ovirt, Glusterfs, VM servers. I have followed
examples on setting up Ovirt and they have helped me so far, but not the
end point that I am looking for.
The web sites are:
http://community.redhat.com/blog/2014/10/up-and-running-with-ovirt-3-5/
http://community.redhat.com/blog/2014/05/ovirt-3-4-glusterized/
http://www.linuxplumbersconf.org/2012/wp-content/uploads/2012/09/2012-lpc-virt-storage-virt-kvm-rao.pdf

I am running 3 HP micro servers and 2 HP DL360 G5
The 3 micro servers are my glusterfs storage and have been provisioned for
virt storage.
The 2 DL360 are my processing machines.

Now my 3 gluster hosts are in one cluster, the volume is in up status and
has been provisioned for Virt Storage. But the problem is that my mount
point is directed to one server, so when that server goes down, the volume
storage domain goes down. I am not sure whether there is a way of mounting
it by a volume identity, so when a server goes down the storage domain
stays up.

With my 2 processing hosts, I have them in one cluster, but I have not
gotten any where with this, as I want the Virtual machines to use the
gluster volume as storage but use the processing hosts hardware for
processing power.

I would appreciate any assistance.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users