[ovirt-users] 4.2.3 -- Snapshot in GUI Issue

2018-05-12 Thread Zack Gould
Is there no way to restore a snapshot via the GUI on 4.2 anymore?

I can take a snapshot, but there's no restore option. Since the new GUI
design, it appears to be missing?


[ovirt-users] 4.2.3 -- No Snapshot Restore on GUI?

2018-05-12 Thread mrnervus
Struggling to identify where in the GUI I can navigate in order to perform a
snapshot restore.

Is this functionality not available anymore?


[ovirt-users] Re: Is PostgreSQL 9.5 required now as of oVirt 4.2?

2018-05-12 Thread Lacey Powers
On 05/02/2018 11:59 PM, Yedidyah Bar David wrote:
> On Thu, May 3, 2018 at 2:58 AM, Lacey Powers  wrote:
>> Hi All,
>>
>> I have a setup of oVirt on 4.1 that has been working brilliantly for
>> over a year, serving the production workloads at my dayjob without
>> complaint.
>>
>> Originally, I had set it up with a custom PostgreSQL version, 9.6, from
>> the PGDG repositories, since the suggested 9.2 was already quite old,
>> and it allowed me to keep consistent versions of PostgreSQL across all
>> the infrastructure I have.
>>
>> Now that I am trying to upgrade to oVirt 4.2, when I run engine-setup
>> per the directions in the release notes documentation, engine-setup
>> insists on PostgreSQL 9.5 from Software Collections, comparing the
>> postgresql versions and then aborting.
>>
>> I don't see a way to tell it that I have a different running version of
>> PostgreSQL that's greater than 9.5 already.
>>
>> Does this mean that no other versions than 9.5 are supported, and I need
>> to downgrade and use the Software Collections version exclusively?
>>
>> Or is there a custom setting that I am missing that will enable me to
>> continue using the 9.6 install I already have?
>>
>> Thank you for your time.
> It's not supported out-of-the-box, but a simple workaround exists:
>
> http://lists.ovirt.org/pipermail/users/2018-March/087573.html
>
> We should probably document this somewhere more approachable...
>
> Best regards,

Hello Didi,

Just wanted to say those directions worked perfectly.
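
For anyone else hitting this, a quick sanity check before and after applying
the override from the linked post - these are just generic psql/systemctl
commands, not taken from that thread, and they assume a local engine database
named 'engine' (the default):

    # Show the server version of the cluster that holds the engine schema
    sudo -u postgres psql -d engine -c 'SHOW server_version;'
    # List which PostgreSQL services are actually running on this host
    systemctl list-units --type=service 'postgresql*'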

Thank you for the help. =)

Best,

Lacey


[ovirt-users] Re: strange issue: vm lost info on disk

2018-05-12 Thread Nir Soffer
On Sat, 12 May 2018, 11:32 Benny Zlotnik,  wrote:

> Using the auto-generated snapshot is generally a bad idea as it's
> inconsistent,
>

What do you mean by inconsistent?


> you should remove it before moving further
> On Fri, May 11, 2018 at 7:25 PM, Juan Pablo 
> wrote:
>
>> I rebooted it with no luck, then I used the auto-gen snapshot, same luck.
>> attaching the logs in gdrive
>>
>> thanks in advance
>>
>> 2018-05-11 12:50 GMT-03:00 Benny Zlotnik :
>>
>>> I see here a failed attempt:
>>> 2018-05-09 16:00:20,129-03 ERROR
>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>> (EE-ManagedThreadFactory-engineScheduled-Thread-67)
>>> [bd8eeb1d-f49a-4f91-a521-e0f31b4a7cbd] EVENT_ID:
>>> USER_MOVED_DISK_FINISHED_FAILURE(2,011), User admin@internal-authz have
>>> failed to move disk mail02-int_Disk1 to domain 2penLA.
>>>
>>> Then another:
>>> 2018-05-09 16:15:06,998-03 ERROR
>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>> (EE-ManagedThreadFactory-engineScheduled-Thread-34) [] EVENT_ID:
>>> USER_MOVED_DISK_FINISHED_FAILURE(2,011), User admin@internal-authz have
>>> failed to move disk mail02-int_Disk1 to domain 2penLA.
>>>
>>> Here I see a successful attempt:
>>> 2018-05-09 21:58:42,628-03 INFO
>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>> (default task-50) [940b051c-8c63-4711-baf9-f3520bb2b825] EVENT_ID:
>>> USER_MOVED_DISK(2,008), User admin@internal-authz moving disk
>>> mail02-int_Disk1 to domain 2penLA.
>>>
>>>
>>> Then, in the last attempt I see the attempt was successful but live
>>> merge failed:
>>> 2018-05-11 03:37:59,509-03 ERROR
>>> [org.ovirt.engine.core.bll.MergeStatusCommand]
>>> (EE-ManagedThreadFactory-commandCoordinator-Thread-2)
>>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Failed to live merge, still in
>>> volume chain: [5d9d2958-96bc-49fa-9100-2f33a3ba737f,
>>> 52532d05-970e-4643-9774-96c31796062c]
>>> 2018-05-11 03:38:01,495-03 INFO
>>> [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
>>> (EE-ManagedThreadFactory-engineScheduled-Thread-51)
>>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Command 'LiveMigrateDisk' (id:
>>> '115fc375-6018-4d59-b9f2-51ee05ca49f8') waiting on child command id:
>>> '26bc52a4-4509-4577-b342-44a679bc628f' type:'RemoveSnapshot' to complete
>>> 2018-05-11 03:38:01,501-03 ERROR
>>> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand]
>>> (EE-ManagedThreadFactory-engineScheduled-Thread-51)
>>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Command id:
>>> '4936d196-a891-4484-9cf5-fceaafbf3364 failed child command status for step
>>> 'MERGE_STATUS'
>>> 2018-05-11 03:38:01,501-03 INFO
>>> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback]
>>> (EE-ManagedThreadFactory-engineScheduled-Thread-51)
>>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Command
>>> 'RemoveSnapshotSingleDiskLive' id: '4936d196-a891-4484-9cf5-fceaafbf3364'
>>> child commands '[8da5f261-7edd-4930-8d9d-d34f232d84b3,
>>> 1c320f4b-7296-43c4-a3e6-8a868e23fc35,
>>> a0e9e70c-cd65-4dfb-bd00-076c4e99556a]' executions were completed, status
>>> 'FAILED'
>>> 2018-05-11 03:38:02,513-03 ERROR
>>> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand]
>>> (EE-ManagedThreadFactory-engineScheduled-Thread-2)
>>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Merging of snapshot
>>> '319e8bbb-9efe-4de4-a9a6-862e3deb891f' images
>>> '52532d05-970e-4643-9774-96c31796062c'..'5d9d2958-96bc-49fa-9100-2f33a3ba737f'
>>> failed. Images have been marked illegal and can no longer be previewed or
>>> reverted to. Please retry Live Merge on the snapshot to complete the
>>> operation.
>>> 2018-05-11 03:38:02,519-03 ERROR
>>> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand]
>>> (EE-ManagedThreadFactory-engineScheduled-Thread-2)
>>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Ending command
>>> 'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand'
>>> with failure.
>>> 2018-05-11 03:38:03,530-03 INFO
>>> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
>>> (EE-ManagedThreadFactory-engineScheduled-Thread-37)
>>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Command 'RemoveSnapshot' id:
>>> '26bc52a4-4509-4577-b342-44a679bc628f' child commands
>>> '[4936d196-a891-4484-9cf5-fceaafbf3364]' executions were completed, status
>>> 'FAILED'
>>> 2018-05-11 03:38:04,548-03 ERROR
>>> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand]
>>> (EE-ManagedThreadFactory-engineScheduled-Thread-66)
>>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Ending command
>>> 'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand' with failure.
>>> 2018-05-11 03:38:04,557-03 INFO
>>> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand]
>>> (EE-ManagedThreadFactory-engineScheduled-Thread-66)
>>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Lock freed to object
>>> 

[ovirt-users] Re: Hosted Engine dependency issue

2018-05-12 Thread Simone Tiraboschi
On Fri, May 11, 2018 at 5:03 PM, Peter Harman 
wrote:

> Hello,
>
>
>
> I am new to oVirt (and Linux). I successfully installed a working hosted
> engine twice but am now running into a dependency issue. I keep getting the
> below message:
>
>
>
> Error: Package: cockpit-storaged-160-1.el7.centos.noarch (extras)
>
>Requires: storaged-iscsi >= 2.1.1
>
>Available: storaged-iscsi-2.5.2-2.el7.x86_64
> (ovirt-4.2-centos-ovirt42)
>
>storaged-iscsi = 2.5.2-2.el7
>
>Available: storaged-iscsi-2.5.2-4.el7.x86_64 (extras)
>
>storaged-iscsi = 2.5.2-4.el7
>
> Error: Package: cockpit-storaged-160-1.el7.centos.noarch (extras)
>
>Requires: storaged >= 2.1.1
>
>Available: storaged-2.5.2-2.el7.x86_64
> (ovirt-4.2-centos-ovirt42)
>
>storaged = 2.5.2-2.el7
>
>Available: storaged-2.5.2-4.el7.x86_64 (extras)
>
>storaged = 2.5.2-4.el7
>
> Error: Package: cockpit-storaged-160-1.el7.centos.noarch (extras)
>
>Requires: storaged-lvm2 >= 2.1.1
>
>Available: storaged-lvm2-2.5.2-2.el7.x86_64
> (ovirt-4.2-centos-ovirt42)
>
>storaged-lvm2 = 2.5.2-2.el7
>
>Available: storaged-lvm2-2.5.2-4.el7.x86_64 (extras)
>
>storaged-lvm2 = 2.5.2-4.el7
>
>
>
> I wiped the machine and tried again with no luck. Does anybody know what I
> am doing wrong?
>

Please check the thread "[ovirt-users] CentOS 7.5.1804 is now officially
available"; as a quick workaround you could simply temporarily add the CentOS
Virt SIG test repo: https://buildlogs.centos.org/centos/7/virt/x86_64/ovirt-4.2/
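
If it helps, this is roughly what such a temporary repo file could look like -
an untested sketch; the file name and the enabled/gpgcheck settings are my own
assumptions, and it should be removed again once the CentOS 7.5 content reaches
the regular mirrors:

    cat > /etc/yum.repos.d/centos-virt-ovirt42-buildlogs.repo <<'EOF'
    [centos-virt-ovirt42-buildlogs]
    name=CentOS 7 Virt SIG oVirt 4.2 (buildlogs.centos.org)
    baseurl=https://buildlogs.centos.org/centos/7/virt/x86_64/ovirt-4.2/
    enabled=1
    gpgcheck=0
    EOF
    yum clean metadata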



>
>
> *Peter Harman – Systems and Safety Coordinator | Homeyer Precision
> Manufacturing*
>
>
>
>
>
>
> 16051 State Hwy 47, Marthasville, MO 63357| *E *phar...@homeyertool.com |
> *P* 636.433.2244 | *F* 636.433.5257
>
>
>


[ovirt-users] Re: ONN v4.2.3 Hosted Engine Deployment - Auto addition of Gluster Hosts

2018-05-12 Thread Simone Tiraboschi
On Sat, May 12, 2018 at 9:33 AM,  wrote:

> The new oVirt Node installation script in v4.2.3 automatically adds the
> gluster hosts after initial hosted engine setup.  This caused major
> problems as I had used different FQDNs on a different VLAN to initially
> set up my gluster nodes [to isolate storage traffic] - but then put them
> onto the "ovirtmgmt" network.  Because they were gluster nodes - I could
> not remove them from the ovirtmgmt network to re-install on the proper
> management VLAN.
>
> I recommend removing the auto-installation of the other gluster nodes - or
> at least providing an option to decline the automatic addition of the other
> nodes.
>

It should let you specify one address on the gluster subnet and one on the
management network for each host.
Please see https://bugzilla.redhat.com/show_bug.cgi?id=1466132#c3



> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


[ovirt-users] Re: Gluster quorum

2018-05-12 Thread Doug Ingham
The two key errors I'd investigate are these...

2018-05-10 03:24:21,048+02 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
> (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:
> /gluster/brick/brick1' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27'
> with correct network as no gluster network found in cluster
> '59c10db3-0324-0320-0120-0339'
>
> 2018-05-10 03:24:20,749+02 ERROR 
> [org.ovirt.engine.core.bll.gluster.GlusterSyncJob]
> (DefaultQuartzScheduler7) [43f4eaec] Error while refreshing brick statuses
> for volume 'volume1' of cluster 'C6220': null
>
> 2018-05-10 11:59:26,051+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.
> GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4)
> [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName =
> n4.itsmart.cloud, VdsIdVDSCommandParametersBase:
> {hostId='3ddef95f-158d-407c-a7d8-49641e012755'})' execution failed: null
>

I'd start with that first one. Is the network/interface group of your
storage layer actually defined as a Gluster & Migration network within
oVirt? See the read-only checks below.
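
These are generic gluster/ip commands, nothing oVirt-specific, to compare the
addresses the bricks were registered under with the addresses that actually sit
on the network you marked as the gluster network:

    # Which addresses/hostnames do the peers and bricks use?
    gluster peer status
    gluster volume info volume1 | grep -i brick
    # On each node: which interface carries the 10.104.0.x storage addresses?
    ip addr show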


On 12 May 2018 at 03:44, Demeter Tibor  wrote:

> Hi,
>
> Could someone help me, please? I can't finish my upgrade process.
>
> Thanks
> R
> Tibor
>
>
>
> - On May 10, 2018, at 12:51, Demeter Tibor  wrote:
>
> Hi,
>
> I've attached the vdsm and supervdsm logs. But I don't have engine.log
> here, because that is on the hosted engine VM. Should I send that?
>
> Thank you
>
> Regards,
>
> Tibor
> - On May 10, 2018, at 12:30, Sahina Bose  wrote:
>
There's a bug here. Can you log one, attaching this engine.log and also the
vdsm.log & supervdsm.log from n3.itsmart.cloud?
>
> On Thu, May 10, 2018 at 3:35 PM, Demeter Tibor 
> wrote:
>
>> Hi,
>>
>> I found this:
>>
>>
>> 2018-05-10 03:24:19,096+02 INFO  [org.ovirt.engine.core.
>> vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] FINISH, 
>> GetGlusterVolumeAdvancedDetailsVDSCommand,
>> return: org.ovirt.engine.core.common.businessentities.gluster.
>> GlusterVolumeAdvancedDetails@ca97448e, log id: 347435ae
>> 2018-05-10 03:24:19,097+02 ERROR 
>> [org.ovirt.engine.core.bll.gluster.GlusterSyncJob]
>> (DefaultQuartzScheduler7) [43f4eaec] Error while refreshing brick statuses
>> for volume 'volume2' of cluster 'C6220': null
>> 2018-05-10 03:24:19,097+02 INFO  
>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
>> (DefaultQuartzScheduler8) [7715ceda] Failed to acquire lock and wait lock
>> 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-0339=GLUSTER]',
>> sharedLocks=''}'
>> 2018-05-10 03:24:19,104+02 INFO  [org.ovirt.engine.core.
>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] START, 
>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName
>> = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:
>> {hostId='3ddef95f-158d-407c-a7d8-49641e012755'}), log id: 6908121d
>> 2018-05-10 03:24:19,106+02 ERROR [org.ovirt.engine.core.
>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] Command '
>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud,
>> VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})'
>> execution failed: null
>> 2018-05-10 03:24:19,106+02 INFO  [org.ovirt.engine.core.
>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] FINISH, 
>> GetGlusterLocalLogicalVolumeListVDSCommand,
>> log id: 6908121d
>> 2018-05-10 03:24:19,107+02 INFO  [org.ovirt.engine.core.
>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] START, 
>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName
>> = n1.itsmart.cloud, VdsIdVDSCommandParametersBase:
>> {hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'}), log id: 735c6a5f
>> 2018-05-10 03:24:19,109+02 ERROR [org.ovirt.engine.core.
>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] Command '
>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud,
>> VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'})'
>> execution failed: null
>> 2018-05-10 03:24:19,109+02 INFO  [org.ovirt.engine.core.
>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] FINISH, 
>> GetGlusterLocalLogicalVolumeListVDSCommand,
>> log id: 735c6a5f
>> 2018-05-10 03:24:19,110+02 INFO  [org.ovirt.engine.core.
>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] START, 
>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName
>> = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:
>> {hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 6f9e9f58
>> 2018-05-10 03:24:19,112+02 ERROR 

[ovirt-users] Re: strange issue: vm lost info on disk

2018-05-12 Thread Benny Zlotnik
Using the auto-generated snapshot is generally a bad idea as it's
inconsistent; you should remove it before moving further.
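
In case it is easier than clicking through the UI, a hedged sketch of removing
it over the REST API - the engine FQDN, password and IDs are placeholders, and
the same thing can be done from the VM's Snapshots tab:

    # List the VM's snapshots to find the auto-generated one
    curl -k -u 'admin@internal:PASSWORD' \
        "https://engine.example.com/ovirt-engine/api/vms/VM_ID/snapshots"
    # Remove it by id
    curl -k -u 'admin@internal:PASSWORD' -X DELETE \
        "https://engine.example.com/ovirt-engine/api/vms/VM_ID/snapshots/SNAPSHOT_ID"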

On Fri, May 11, 2018 at 7:25 PM, Juan Pablo 
wrote:

> I rebooted it with no luck, then I used the auto-gen snapshot, same luck.
> attaching the logs in gdrive
>
> thanks in advance
>
> 2018-05-11 12:50 GMT-03:00 Benny Zlotnik :
>
>> I see here a failed attempt:
>> 2018-05-09 16:00:20,129-03 ERROR [org.ovirt.engine.core.dal.dbb
>> roker.auditloghandling.AuditLogDirector] 
>> (EE-ManagedThreadFactory-engineScheduled-Thread-67)
>> [bd8eeb1d-f49a-4f91-a521-e0f31b4a7cbd] EVENT_ID:
>> USER_MOVED_DISK_FINISHED_FAILURE(2,011), User admin@internal-authz have
>> failed to move disk mail02-int_Disk1 to domain 2penLA.
>>
>> Then another:
>> 2018-05-09 16:15:06,998-03 ERROR [org.ovirt.engine.core.dal.dbb
>> roker.auditloghandling.AuditLogDirector] 
>> (EE-ManagedThreadFactory-engineScheduled-Thread-34)
>> [] EVENT_ID: USER_MOVED_DISK_FINISHED_FAILURE(2,011), User
>> admin@internal-authz have failed to move disk mail02-int_Disk1 to domain
>> 2penLA.
>>
>> Here I see a successful attempt:
>> 2018-05-09 21:58:42,628-03 INFO  [org.ovirt.engine.core.dal.dbb
>> roker.auditloghandling.AuditLogDirector] (default task-50)
>> [940b051c-8c63-4711-baf9-f3520bb2b825] EVENT_ID: USER_MOVED_DISK(2,008),
>> User admin@internal-authz moving disk mail02-int_Disk1 to domain 2penLA.
>>
>>
>> Then, in the last attempt I see the attempt was successful but live merge
>> failed:
>> 2018-05-11 03:37:59,509-03 ERROR 
>> [org.ovirt.engine.core.bll.MergeStatusCommand]
>> (EE-ManagedThreadFactory-commandCoordinator-Thread-2)
>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Failed to live merge, still in
>> volume chain: [5d9d2958-96bc-49fa-9100-2f33a3ba737f,
>> 52532d05-970e-4643-9774-96c31796062c]
>> 2018-05-11 03:38:01,495-03 INFO  [org.ovirt.engine.core.bll.Ser
>> ialChildCommandsExecutionCallback] 
>> (EE-ManagedThreadFactory-engineScheduled-Thread-51)
>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Command 'LiveMigrateDisk' (id:
>> '115fc375-6018-4d59-b9f2-51ee05ca49f8') waiting on child command id:
>> '26bc52a4-4509-4577-b342-44a679bc628f' type:'RemoveSnapshot' to complete
>> 2018-05-11 03:38:01,501-03 ERROR [org.ovirt.engine.core.bll.sna
>> pshots.RemoveSnapshotSingleDiskLiveCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-51)
>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Command id:
>> '4936d196-a891-4484-9cf5-fceaafbf3364 failed child command status for
>> step 'MERGE_STATUS'
>> 2018-05-11 03:38:01,501-03 INFO  [org.ovirt.engine.core.bll.sna
>> pshots.RemoveSnapshotSingleDiskLiveCommandCallback]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-51)
>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Command
>> 'RemoveSnapshotSingleDiskLive' id: '4936d196-a891-4484-9cf5-fceaafbf3364'
>> child commands '[8da5f261-7edd-4930-8d9d-d34f232d84b3,
>> 1c320f4b-7296-43c4-a3e6-8a868e23fc35, a0e9e70c-cd65-4dfb-bd00-076c4e99556a]'
>> executions were completed, status 'FAILED'
>> 2018-05-11 03:38:02,513-03 ERROR [org.ovirt.engine.core.bll.sna
>> pshots.RemoveSnapshotSingleDiskLiveCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-2)
>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Merging of snapshot
>> '319e8bbb-9efe-4de4-a9a6-862e3deb891f' images
>> '52532d05-970e-4643-9774-96c31796062c'..'5d9d2958-96bc-49fa-9100-2f33a3ba737f'
>> failed. Images have been marked illegal and can no longer be previewed or
>> reverted to. Please retry Live Merge on the snapshot to complete the
>> operation.
>> 2018-05-11 03:38:02,519-03 ERROR [org.ovirt.engine.core.bll.sna
>> pshots.RemoveSnapshotSingleDiskLiveCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-2)
>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Ending command
>> 'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand'
>> with failure.
>> 2018-05-11 03:38:03,530-03 INFO  [org.ovirt.engine.core.bll.Con
>> currentChildCommandsExecutionCallback] 
>> (EE-ManagedThreadFactory-engineScheduled-Thread-37)
>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Command 'RemoveSnapshot' id:
>> '26bc52a4-4509-4577-b342-44a679bc628f' child commands
>> '[4936d196-a891-4484-9cf5-fceaafbf3364]' executions were completed,
>> status 'FAILED'
>> 2018-05-11 03:38:04,548-03 ERROR 
>> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-66)
>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Ending command
>> 'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand' with failure.
>> 2018-05-11 03:38:04,557-03 INFO  
>> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-66)
>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Lock freed to object
>> 'EngineLock:{exclusiveLocks='[4808bb70-c9cc-4286-aa39-16b579
>> 8213ac=LIVE_STORAGE_MIGRATION]', sharedLocks=''}'
>>
>> I do not see the merge attempt in the vdsm.log, so please send vdsm logs
>> for 

[ovirt-users] Re: Gluster quorum

2018-05-12 Thread Demeter Tibor
Hi, 

Could someone help me, please? I can't finish my upgrade process.

Thanks 
R 
Tibor 

- On May 10, 2018, at 12:51, Demeter Tibor  wrote:

> Hi,

> I've attached the vdsm and supervdsm logs. But I don't have engine.log here,
> because that is on the hosted engine VM. Should I send that?

> Thank you

> Regards,

> Tibor
> - On May 10, 2018, at 12:30, Sahina Bose  wrote:

>> There's a bug here. Can you log one, attaching this engine.log and also the
>> vdsm.log & supervdsm.log from n3.itsmart.cloud?

>> On Thu, May 10, 2018 at 3:35 PM, Demeter Tibor  wrote:

>>> Hi,

>>> I found this:

>>> 2018-05-10 03:24:19,096+02 INFO
>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] FINISH,
>>> GetGlusterVolumeAdvancedDetailsVDSCommand, return:
>>> org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@ca97448e,
>>> log id: 347435ae
>>> 2018-05-10 03:24:19,097+02 ERROR
>>> [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler7)
>>> [43f4eaec] Error while refreshing brick statuses for volume 'volume2' of
>>> cluster 'C6220': null
>>> 2018-05-10 03:24:19,097+02 INFO
>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager] 
>>> (DefaultQuartzScheduler8)
>>> [7715ceda] Failed to acquire lock and wait lock
>>> 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-0339=GLUSTER]',
>>> sharedLocks=''}'
>>> 2018-05-10 03:24:19,104+02 INFO
>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] START,
>>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud,
>>> VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'}),
>>> log id: 6908121d
>>> 2018-05-10 03:24:19,106+02 ERROR
>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] Command
>>> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud,
>>> VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})'
>>> execution failed: null
>>> 2018-05-10 03:24:19,106+02 INFO
>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] FINISH,
>>> GetGlusterLocalLogicalVolumeListVDSCommand, log id: 6908121d
>>> 2018-05-10 03:24:19,107+02 INFO
>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] START,
>>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud,
>>> VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'}),
>>> log id: 735c6a5f
>>> 2018-05-10 03:24:19,109+02 ERROR
>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] Command
>>> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud,
>>> VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'})'
>>> execution failed: null
>>> 2018-05-10 03:24:19,109+02 INFO
>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] FINISH,
>>> GetGlusterLocalLogicalVolumeListVDSCommand, log id: 735c6a5f
>>> 2018-05-10 03:24:19,110+02 INFO
>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] START,
>>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud,
>>> VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}),
>>> log id: 6f9e9f58
>>> 2018-05-10 03:24:19,112+02 ERROR
>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] Command
>>> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud,
>>> VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'})'
>>> execution failed: null
>>> 2018-05-10 03:24:19,112+02 INFO
>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] FINISH,
>>> GetGlusterLocalLogicalVolumeListVDSCommand, log id: 6f9e9f58
>>> 2018-05-10 03:24:19,113+02 INFO
>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] START,
>>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud,
>>> VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'}),
>>> log id: 2ee46967
>>> 2018-05-10 03:24:19,115+02 ERROR
>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] Command
>>> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = 

[ovirt-users] ONN v4.2.3 Hosted Engine Deployment - Auto addition of Gluster Hosts

2018-05-12 Thread glenn . farmer
The new oVirt Node installation script in v4.2.3 automatically adds the gluster
hosts after initial hosted engine setup.  This caused major problems as I had
used different FQDNs on a different VLAN to initially set up my gluster nodes
[to isolate storage traffic] - but then put them onto the "ovirtmgmt" network.
Because they were gluster nodes - I could not remove them from the ovirtmgmt 
network to re-install on the proper management VLAN.

I recommend removing the auto-installation of the other gluster nodes - or at 
least providing an option to decline the automatic addition of the other nodes.