[ovirt-users] Re: Metrics Store installation - ansible playbook "deploy_cluster" - docker_image_availability

2018-11-09 Thread Marcelo Leandro
Hi
Can someone help me to resolve this issue?

Marcelo Leandro

On Wed, Oct 31, 2018 at 09:27, Shirly Radco 
wrote:

> Adding OpenShift mailing list.
> Please help with this OpenShift installation.
> --
> SHIRLY RADCO
> BI SENIOR SOFTWARE ENGINEER
> Red Hat Israel 
> 
> TRIED. TESTED. TRUSTED. 
>
>>
>>
>> On Fri, Oct 26, 2018 at 9:28 PM Marcelo Leandro 
>> wrote:
>>
>>> Hello, I am having the same problem.
>>>
>>> ● origin-master-controllers.service - Atomic OpenShift Master Controllers
>>>
>>>Loaded: loaded
>>> (/usr/lib/systemd/system/origin-master-controllers.service; enabled; vendor
>>> preset: disabled)
>>>
>>>Active: inactive (dead) (Result: exit-code) since Fri 2018-10-26
>>> 15:27:19 -03; 1s ago
>>>
>>>  Docs: https://github.com/openshift/origin
>>>
>>>   Process: 26872 ExecStart=/usr/bin/openshift start master controllers
>>> --config=${CONFIG_FILE} $OPTIONS (code=exited, status=255)
>>>
>>>  Main PID: 26872 (code=exited, status=255)
>>>
>>>
>>> Oct 26 15:27:14 es.hybriddc.com.br systemd[1]: origin-master-controllers.service:
>>> main process exited, code=exited, status=255/n/a
>>>
>>> Oct 26 15:27:14 es.hybriddc.com.br systemd[1]: Failed to start Atomic
>>> OpenShift Master Controllers.
>>>
>>> Oct 26 15:27:14 es.hybriddc.com.br systemd[1]: Unit
>>> origin-master-controllers.service entered failed state.
>>>
>>> Oct 26 15:27:14 es.hybriddc.com.br systemd[1]: origin-master-controllers.service
>>> failed.
>>>
>>> Oct 26 15:27:19 es.hybriddc.com.br systemd[1]:
>>> origin-master-controllers.service holdoff time over, scheduling restart.
>>>
>>>
>>>
>>>
>>>
>>> Oct 26 15:27:30 es.hybriddc.com.br atomic-openshift-master-api[26916]:
>>> I1026 15:27:30.057760   26916 start_api.go:104] Using a listen address
>>> override "0.0.0.0:8443"
>>>
>>> Oct 26 15:27:30 es.hybriddc.com.br atomic-openshift-master-api[26916]:
>>> I1026 15:27:30.058002   26916 plugins.go:83] Registered admission plugin
>>> "NamespaceLifecycle"
>>>
>>> Oct 26 15:27:30 es.hybriddc.com.br atomic-openshift-master-api[26916]:
>>> I1026 15:27:30.058017   26916 plugins.go:83] Registered admission plugin
>>> "Initializers"
>>>
>>> Oct 26 15:27:30 es.hybriddc.com.br atomic-openshift-master-api[26916]:
>>> I1026 15:27:30.058028   26916 plugins.go:83] Registered admission plugin
>>> "ValidatingAdmissionWebhook"
>>>
>>> Oct 26 15:27:30 es.hybriddc.com.br atomic-openshift-master-api[26916]:
>>> I1026 15:27:30.058040   26916 plugins.go:83] Registered admission plugin
>>> "MutatingAdmissionWebhook"
>>>
>>> Oct 26 15:27:30 es.hybriddc.com.br atomic-openshift-master-api[26916]:
>>> I1026 15:27:30.058120   26916 plugins.go:83] Registered admission plugin
>>> "AlwaysAdmit"
>>>
>>> Oct 26 15:27:30 es.hybriddc.com.br atomic-openshift-master-api[26916]:
>>> I1026 15:27:30.058131   26916 plugins.go:83] Registered admission plugin
>>> "AlwaysPullImages"
>>>
>>> Oct 26 15:27:30 es.hybriddc.com.br atomic-openshift-master-api[26916]:
>>> I1026 15:27:30.058142   26916 plugins.go:83] Registered admission plugin
>>> "LimitPodHardAntiAffinityTopology"
>>>
>>> Oct 26 15:27:30 es.hybriddc.com.br atomic-openshift-master-api[26916]:
>>> I1026 15:27:30.058155   26916 plugins.go:83] Registered admission plugin
>>> "DefaultTolerationSeconds"
>>>
>>> Oct 26 15:27:30 es.hybriddc.com.br atomic-openshift-master-api[26916]:
>>> I1026 15:27:30.058164   26916 plugins.go:83] Registered admission plugin
>>> "AlwaysDeny"
>>>
>>> Oct 26 15:27:30 es.hybriddc.com.br atomic-openshift-master-api[26916]:
>>> I1026 15:27:30.058178   26916 plugins.go:83] Registered admission plugin
>>> "EventRateLimit"
>>>
>>> Oct 26 15:27:30 es.hybriddc.com.br atomic-openshift-master-api[26916]:
>>> I1026 15:27:30.058190   26916 plugins.go:83] Registered admission plugin
>>> "DenyEscalatingExec"
>>>
>>> Oct 26 15:27:30 es.hybriddc.com.br atomic-openshift-master-api[26916]:
>>> I1026 15:27:30.058199   26916 plugins.go:83] Registered admission plugin
>>> "DenyExecOnPrivileged"
>>>
>>> Oct 26 15:27:30 es.hybriddc.com.br atomic-openshift-master-api[26916]:
>>> I1026 15:27:30.058210   26916 plugins.go:83] Registered admission plugin
>>> "ExtendedResourceToleration"
>>>
>>> Oct 26 15:27:30 es.hybriddc.com.br atomic-openshift-master-api[26916]:
>>> I1026 15:27:30.058222   26916 plugins.go:83] Registered admission plugin
>>> "OwnerReferencesPermissionEnforcement"
>>>
>>> Oct 26 15:27:30 es.hybriddc.com.br atomic-openshift-master-api[26916]:
>>> I1026 15:27:30.058237   26916 plugins.go:83] Registered admission plugin
>>> "ImagePolicyWebhook"
>>>
>>> Oct 26 15:27:30 es.hybriddc.com.br atomic-openshift-master-api[26916]:
>>> I1026 15:27:30.058249   26916 plugins.go:83] Registered admission plugin
>>> "InitialResources"
>>>
>>> Oct 26 15:27:30 es.hybriddc.com.br atomic-openshift-master-api[26916]:
>>> I1026 15:27:30.058261   26916 plugins.go:83] Registered admission plugin
>>> "LimitRanger"
>>>
>>> Oct 26 
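For whoever picks this up: the actual reason behind the status=255 exit is usually
visible in the unit's own journal. A generic sketch (the master-config.yaml path below
is an assumption based on the default Origin layout):

    systemctl status origin-master-controllers.service -l
    journalctl -u origin-master-controllers.service --no-pager -n 200
    # running the unit's command by hand often prints the config/parse error directly
    /usr/bin/openshift start master controllers --config=/etc/origin/master/master-config.yaml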

[ovirt-users] Re: oVirt Node on CentOS 7.5 and AMD EPYC Support

2018-11-09 Thread Mikael Öhman
This is, from what I can see, the last update on Skylake-Server in oVirt. Am I 
correct to understand that this was never backported to 4.2?
I'm at 4.2.7 and would like to use Skylake-Server, but it seems to still be 
unavailable.

As you mention backporting, I assume it is/will be in 4.3?

And as the 4.3 release isn't coming anytime soon, is it recommended to apply Tobias' 
"hack", or should I attempt to use some type of CPU passthrough for now 
(though I don't see a trivial way to enable this either)?
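A quick way to compare what the host exposes with what the engine currently knows
about, as a sketch (treating ServerCPUList as the relevant engine-config key is an
assumption; engine-config --list shows the real name):

    # on the host: does libvirt report Skylake-Server as a usable model?
    virsh -r domcapabilities | grep -i skylake
    # on the engine: the CPU table the 4.2 cluster levels are built from
    engine-config -g ServerCPUList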

Best regards, Mikael


[ovirt-users] Re: ovirt 4.2.7 nested not importing she domain

2018-11-09 Thread Simone Tiraboschi
On Fri, Nov 9, 2018 at 2:19 PM Martin Sivak  wrote:

> Hi,
>
> > It completed without errors and the hosted_engine storage domain and the
> > HostedEngine inside it were already visible, without the former
> dependency
> > to create a data domain
>
> Glad it works for you.
>
> This is indeed one of the few small improvements in the new deployment
> procedure :) We do not recommend using the old procedure anymore
> unless there is something special that does not work there. In other
> words, try ansible first from now on.
>

Exactly: the "vintage" procedure is deprecated and we are also going to
remove it in 4.3.
Please note that now, in addition to the interactive hosted-engine-setup
(from the CLI and from the cockpit GUI), we also have a pure ansible role that
can be executed by itself or combined with other ansible roles for automated
deployments, or to create more complex and richer environments with a single
ansible playbook.
The project is here:
https://github.com/oVirt/ovirt-ansible-hosted-engine-setup
while its artifacts are distributed as RPMs or via Ansible Galaxy.
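For reference, a minimal way to consume it (a sketch: the package name matches the
repository, while the Galaxy role name and the playbook/inventory below are
assumptions, so please check the role's README):

    # install the role, either as an rpm or from Ansible Galaxy
    yum install -y ovirt-ansible-hosted-engine-setup
    # or:
    ansible-galaxy install oVirt.hosted-engine-setup
    # then run your own playbook that includes the role against the first host
    ansible-playbook -i first-host.example.com, my-he-deploy.yml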


>
> Best regards
>
> --
> Martin Sivak
> HE ex-maintainer :)
>
> > On Fri, Nov 9, 2018 at 1:56 PM, Gianluca Cecchi wrote:
> >
> > On Fri, Nov 9, 2018 at 11:28 AM Simone Tiraboschi 
> > wrote:
> >>
> >>
> >>
> >> On Fri, Nov 9, 2018 at 12:45 AM Gianluca Cecchi
> >>  wrote:
> >>>
> >>> Hello,
> >>> I'm configuring a nested self hosted engine environment with 4.2.7 and
> >>> CentOS 7.5.
> >>> Domain type is NFS.
> >>> I deployed with
> >>>
> >>> hosted-engine --deploy --noansible
> >>>
> >>> All apparently went well, but after creating the master storage domain I
> >>> see that the hosted engine domain is not automatically imported.
> >>> At the moment I have only one host.
> >>>
> >>> ovirt-ha-agent status gives every 10 seconds:
> >>> Nov 09 00:36:30 ovirtdemo01.localdomain.local ovirt-ha-agent[18407]:
> >>> ovirt-ha-agent
> >>> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR
> >>> Unable to identify the OVF_STORE volume, falling back to initial
> vm.conf.
> >>> Please ensure you already added your first data domain for regular VMs
> >>>
> >>> In engine.log I see every 15 seconds a dumpxml output and the message:
> >>>
> >>> 2018-11-09 00:31:52,822+01 WARN
> >>> [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerObjectsBuilder]
> >>> (EE-ManagedThreadFactory-engineScheduled-Thread-52) [7fcce3cb] null
> >>> architecture type, replacing with x86_64, VM [HostedEngine]
> >>>
> >>> see full below.
> >>>
> >>> Any hint?
> >>>
> >>
> >> Hi Gianluca,
> >> unfortunately it's a known regression: it's currently tracked here
> >> https://bugzilla.redhat.com/1639604
> >>
> >> In the meantime I'd suggest using the new ansible flow, which is not
> >> affected by this issue, or deploying with an engine-appliance shipped before
> >> 4.2.5, completing the upgrade on the engine side only when everything is
> >> there as expected.
> >
> >
> > Thanks Simone,
> > I scratched and reinstalled using the 4.2 appliance and the default
> > option (with ansible), executing the command:
> >
> >  hosted-engine --deploy
> >
>
> > Gianluca
> >
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> > https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> >
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/N6XCZ3TZPCYVC4B4554AKQCJ25BE764I/
> >
>


[ovirt-users] Re: ovirt 4.2.7 nested not importing she domain

2018-11-09 Thread Martin Sivak
Hi,

> It completed without errors and the hosted_engine storage domain and the
> HostedEngine inside it were already visible, without the former dependency
> to create a data domain

Glad it works for you.

This is indeed one of the few small improvements in the new deployment
procedure :) We do not recommend using the old procedure anymore
unless there is something special that does not work there. In other
words, try ansible first from now on.

Best regards

--
Martin Sivak
HE ex-maintainer :)

On Fri, Nov 9, 2018 at 1:56 PM, Gianluca Cecchi
 wrote:
>
> On Fri, Nov 9, 2018 at 11:28 AM Simone Tiraboschi 
> wrote:
>>
>>
>>
>> On Fri, Nov 9, 2018 at 12:45 AM Gianluca Cecchi
>>  wrote:
>>>
>>> Hello,
>>> I'm configuring a nested self hosted engine environment with 4.2.7 and
>>> CentOS 7.5.
>>> Domain type is NFS.
>>> I deployed with
>>>
>>> hosted-engine --deploy --noansible
>>>
>>> All apparently went well, but after creating the master storage domain I
>>> see that the hosted engine domain is not automatically imported.
>>> At the moment I have only one host.
>>>
>>> ovirt-ha-agent status gives every 10 seconds:
>>> Nov 09 00:36:30 ovirtdemo01.localdomain.local ovirt-ha-agent[18407]:
>>> ovirt-ha-agent
>>> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR
>>> Unable to identify the OVF_STORE volume, falling back to initial vm.conf.
>>> Please ensure you already added your first data domain for regular VMs
>>>
>>> In engine.log I see every 15 seconds a dumpxml output and the message:
>>>
>>> 2018-11-09 00:31:52,822+01 WARN
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerObjectsBuilder]
>>> (EE-ManagedThreadFactory-engineScheduled-Thread-52) [7fcce3cb] null
>>> architecture type, replacing with x86_64, VM [HostedEngine]
>>>
>>> see full below.
>>>
>>> Any hint?
>>>
>>
>> Hi Gianluca,
>> unfortunately it's a known regression: it's currently tracked here
>> https://bugzilla.redhat.com/1639604
>>
>> In the meantime I'd suggest using the new ansible flow, which is not
>> affected by this issue, or deploying with an engine-appliance shipped before
>> 4.2.5, completing the upgrade on the engine side only when everything is there as
>> expected.
>
>
> Thanks Simone,
> I scratched and reinstalled using the 4.2 appliance and the default option
> (with ansible), executing the command:
>
>  hosted-engine --deploy
>

> Gianluca
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/N6XCZ3TZPCYVC4B4554AKQCJ25BE764I/
>


[ovirt-users] Re: ovirt 4.2.7 nested not importing she domain

2018-11-09 Thread Gianluca Cecchi
On Fri, Nov 9, 2018 at 11:28 AM Simone Tiraboschi 
wrote:

>
>
> On Fri, Nov 9, 2018 at 12:45 AM Gianluca Cecchi 
> wrote:
>
>> Hello,
>> I'm configuring a nested self hosted engine environment with 4.2.7 and
>> CentOS 7.5.
>> Domain type is NFS.
>> I deployed with
>>
>> hosted-engine --deploy --noansible
>>
>> All apparently went well, but after creating the master storage domain I
>> see that the hosted engine domain is not automatically imported.
>> At the moment I have only one host.
>>
>> ovirt-ha-agent status gives every 10 seconds:
>> Nov 09 00:36:30 ovirtdemo01.localdomain.local ovirt-ha-agent[18407]:
>> ovirt-ha-agent
>> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR
>> Unable to identify the OVF_STORE volume, falling back to initial vm.conf.
>> Please ensure you already added your first data domain for regular VMs
>>
>> In engine.log I see every 15 seconds a dumpxml output and the message:
>>
>> 2018-11-09 00:31:52,822+01 WARN
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerObjectsBuilder]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-52) [7fcce3cb] null
>> architecture type, replacing with x86_64, VM [HostedEngine]
>>
>> see full below.
>>
>> Any hint?
>>
>>
> Hi Gianluca,
> unfortunately it's a known regression: it's currently tracked here
> https://bugzilla.redhat.com/1639604
>
> In the meantime I'd suggest using the new ansible flow, which is not
> affected by this issue, or deploying with an engine-appliance shipped before
> 4.2.5, completing the upgrade on the engine side only when everything is there
> as expected.
>

Thanks Simone,
I scratched and reinstalled using the 4.2 appliance and the default option
(with ansible), executing the command:

 hosted-engine --deploy

It completed without errors, and the hosted_engine storage domain and the
HostedEngine inside it were already visible, without the former dependency
on creating a data domain first.
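For anyone comparing results, a couple of read-only checks on the host side that show
the same thing (a sketch using stock hosted-engine tooling):

    hosted-engine --vm-status          # engine VM health and which host is running it
    hosted-engine --check-deployed     # confirms the hosted-engine deployment is in place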
Gianluca


[ovirt-users] Re: libgfapi support are "false" by default in ovirt 4.2 ?

2018-11-09 Thread Nir Soffer
On Fri, 9 Nov 2018, 9:17 Peter Krempa wrote:
> On Fri, Nov 09, 2018 at 08:57:26 +0200, Nir Soffer wrote:
> > On Fri, 9 Nov 2018, 7:22 Mike Lykov wrote:
> > > On 08.11.2018 18:50, Sahina Bose wrote:
> > > > On Thu, Nov 8, 2018 at 8:13 PM Simone Tiraboschi <stira...@redhat.com> wrote:
> > > >>
> > > >> Hi,
> > > >> adding also Sahina here.
> > > >> AFAIK it should be enabled by default in hyper-converged deployments.
> > > >>
> > > >> Can you please grep your deployment logs for ENABLE_LIBGFAPI?
> > > >
> > > > No, libgfapi access is disabled by default due to lack of HA
> > > > (https://bugzilla.redhat.com/show_bug.cgi?id=1484227)
> > >
> > > At this moment, for the version mentioned below, is this bug still current?
> > >
> >
> > Yes, libvirt does not support multiple hosts for gluster disk yet.
>
> Libvirt does support multiple hosts for gluster when they are defined in the
> disk XML, e.g. when starting a VM.
>

Great! Is this available in 7.6?



> The only case we do not support is snapshots, as the legacy
> snapshot API in qemu accepts only paths and URIs, and the multi-host
> definition cannot be expressed as a URI. It will become possible once
> we properly integrate with -blockdev.
>

When is blockdev support expected?
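For anyone else following the thread, two quick checks on an existing 4.2 setup (a
sketch; the exact engine config key name is an assumption, engine-config --list shows
the real one):

    # on the deployment host: what the hyper-converged installer decided
    grep -r ENABLE_LIBGFAPI /var/log/ 2>/dev/null
    # on the engine: whether libgfapi access is currently enabled
    engine-config -g LibgfApiSupported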


[ovirt-users] Re: ovirt 4.2.7 nested not importing she domain

2018-11-09 Thread Simone Tiraboschi
On Fri, Nov 9, 2018 at 12:45 AM Gianluca Cecchi 
wrote:

> Hello,
> I'm configuring a nested self hosted engine environment with 4.2.7 and
> CentOS 7.5.
> Domain type is NFS.
> I deployed with
>
> hosted-engine --deploy --noansible
>
> All apparently went well, but after creating the master storage domain I
> see that the hosted engine domain is not automatically imported.
> At the moment I have only one host.
>
> ovirt-ha-agent status gives every 10 seconds:
> Nov 09 00:36:30 ovirtdemo01.localdomain.local ovirt-ha-agent[18407]:
> ovirt-ha-agent
> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR
> Unable to identify the OVF_STORE volume, falling back to initial vm.conf.
> Please ensure you already added your first data domain for regular VMs
>
> In engine.log I see every 15 seconds a dumpxml output and the message:
>
> 2018-11-09 00:31:52,822+01 WARN
> [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerObjectsBuilder]
> (EE-ManagedThreadFactory-engineScheduled-Thread-52) [7fcce3cb] null
> architecture type, replacing with x86_64, VM [HostedEngine]
>
> see full below.
>
> Any hint?
>
>
Hi Gianluca,
unfortunately it's a known regression: it's currently tracked here
https://bugzilla.redhat.com/1639604

In the meantime I'd suggest using the new ansible flow, which is not
affected by this issue, or deploying with an engine-appliance shipped before
4.2.5, completing the upgrade on the engine side only when everything is there
as expected.
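In case it helps, a sketch of the second option, i.e. explicitly picking an older
appliance (the exact NVR depends on the enabled repos, so the version below is only a
placeholder):

    yum --showduplicates list ovirt-engine-appliance
    yum install ovirt-engine-appliance-4.2-<build-older-than-4.2.5>
    hosted-engine --deploy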


> Thanks
> Gianluca
>
> 2018-11-09 00:31:52,714+01 INFO
> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
> (EE-ManagedThreadFactory-engineScheduled-Thread-52) [] VM
> '21c5fe9f-cd46-49fd-a6f3-009b4d450894' was discovered as 'Up' on VDS
> '4de40432-c1f7-4f20-b231-347095015fbd'(ovirtdemo01.localdomain.local)
> 2018-11-09 00:31:52,764+01 INFO
> [org.ovirt.engine.core.bll.AddUnmanagedVmsCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-52) [7fcce3cb] Running
> command: AddUnmanagedVmsCommand internal: true.
> 2018-11-09 00:31:52,766+01 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.DumpXmlsVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-52) [7fcce3cb] START,
> DumpXmlsVDSCommand(HostName = ovirtdemo01.localdomain.local,
> Params:{hostId='4de40432-c1f7-4f20-b231-347095015fbd',
> vmIds='[21c5fe9f-cd46-49fd-a6f3-009b4d450894]'}), log id: 5d5a0a63
> 2018-11-09 00:31:52,775+01 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.DumpXmlsVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-52) [7fcce3cb] FINISH,
> DumpXmlsVDSCommand, return: {21c5fe9f-cd46-49fd-a6f3-009b4d450894= type='kvm' id='2'>
>   HostedEngine
>   21c5fe9f-cd46-49fd-a6f3-009b4d450894
>   http://ovirt.org/vm/tune/1.0;
> xmlns:ovirt-vm="http://ovirt.org/vm/1.0;>
> 
> http://ovirt.org/vm/1.0;>
>  type="bool">False
> 0
> 1541719799.6
> 
>
> 2e8944d3-7ac4-4597-8883-c0b2937fb23b
> 
> 
> 
> 
>
> 0f1d0ce3-8843-4418-b882-0d84ca481717
> ovirtmgmt
> 
> 
> 
> 
>
> 6d167005-b547-4190-938f-ce1b82eae7af
> false
> 
> 
> 
> 
>
> 8eb98007-1d9a-4689-bbab-b3c7060efef8
>
> fbcfb922-0103-43fb-a2b6-2bf0c9e356ea
> /dev/vda
>
> 8eb98007-1d9a-4689-bbab-b3c7060efef8
>
> ----
> exclusive
>
> 64bdb7cd-60a1-4420-b3a6-607b20e2cd5a
> 
> 
> 
> 
>
> fbcfb922-0103-43fb-a2b6-2bf0c9e356ea
>
> 8eb98007-1d9a-4689-bbab-b3c7060efef8
> 0
>
> /rhev/data-center/mnt/ovirtdemo01.localdomain.local:_SHE__DOMAIN/fbcfb922-0103-43fb-a2b6-2bf0c9e356ea/images/8eb98007-1d9a-4689-bbab-b3c7060efef8/64bdb7cd-60a1-4420-b3a6-607b20e2cd5a.lease
>
> /rhev/data-center/mnt/ovirtdemo01.localdomain.local:_SHE__DOMAIN/fbcfb922-0103-43fb-a2b6-2bf0c9e356ea/images/8eb98007-1d9a-4689-bbab-b3c7060efef8/64bdb7cd-60a1-4420-b3a6-607b20e2cd5a
>
> 64bdb7cd-60a1-4420-b3a6-607b20e2cd5a
> 
> 
> 
> 
>   
>   6270976
>   6270976
>   2
>   
> 1020
>   
>   
> /machine
>   
>   
> 
>   oVirt
>   oVirt Node
>   7-5.1804.5.el7.centos
>   2820BD92-2B2B-42C5-912B-76FB65E93FBF
>   21c5fe9f-cd46-49fd-a6f3-009b4d450894
> 
>   
>   
> hvm
> 
>   
>   
> 
>   
>   
> Skylake-Client
> 
>   
>   
> 
> 
> 
>   
>   destroy
>   destroy
>   destroy
>   
> /usr/libexec/qemu-kvm
> 
>   
>   
>   
>   
>   
>   
> 
> 
>io='threads'/>
>file='/var/run/vdsm/storage/fbcfb922-0103-43fb-a2b6-2bf0c9e356ea/8eb98007-1d9a-4689-bbab-b3c7060efef8/64bdb7cd-60a1-4420-b3a6-607b20e2cd5a'/>
>   
>   
>   8eb98007-1d9a-4689-bbab-b3c7060efef8
>   
>   
>function='0x0'/>
> 
> 
>   
>function='0x0'/>
> 
> 
>   
>function='0x2'/>
> 
> 
>   
> 
> 
>   
>function='0x1'/>
> 
> 
>   
>function='0x0'/>
> 
>   

[ovirt-users] Re: storage healing question

2018-11-09 Thread Dev Ops
Just a quick note: the volume in question is actually called bgl-vms-gfs. The 
original message is still valid; the current heal info output follows below.
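In case it helps the next person, a couple of read-only ways to see whether self-heal
is actually making progress on this volume (a sketch using the volume name from this
thread):

    gluster volume heal bgl-vms-gfs info summary           # per-brick pending counts (needs a reasonably recent gluster)
    gluster volume heal bgl-vms-gfs statistics heal-count  # the same numbers from the self-heal daemon
    gluster volume heal bgl-vms-gfs                        # trigger an index heal if nothing is moving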

[root@bgl-vms-gfs03 bricks]# gluster volume heal bgl-vms-gfs info
Brick 10.8.255.1:/gluster/bgl-vms-gfs01/brick
/.shard/bd0bf192-e0e1-4b72-85cb-fa3497c555be.989
/.shard/bd0bf192-e0e1-4b72-85cb-fa3497c555be.988
/.shard/bd0bf192-e0e1-4b72-85cb-fa3497c555be.423
/.shard/cca2d4d0-7254-49c5-9db0-c9aaeb34c479.612
/.shard/cca2d4d0-7254-49c5-9db0-c9aaeb34c479.614
/.shard/cca2d4d0-7254-49c5-9db0-c9aaeb34c479.611
/.shard/bd0bf192-e0e1-4b72-85cb-fa3497c555be.236
/.shard/5bb5bc8b-abfb-4ab8-9f12-cbc020b3d50f.48
/.shard/5bb5bc8b-abfb-4ab8-9f12-cbc020b3d50f.52
/.shard/5bb5bc8b-abfb-4ab8-9f12-cbc020b3d50f.423
/.shard/5bb5bc8b-abfb-4ab8-9f12-cbc020b3d50f.424
/.shard/5bb5bc8b-abfb-4ab8-9f12-cbc020b3d50f.611
/.shard/5bb5bc8b-abfb-4ab8-9f12-cbc020b3d50f.612
/.shard/5bb5bc8b-abfb-4ab8-9f12-cbc020b3d50f.799
/.shard/18954415-3210-4d93-8591-0b3e1e5b3a16.498
/.shard/5bb5bc8b-abfb-4ab8-9f12-cbc020b3d50f.1175
/.shard/5bb5bc8b-abfb-4ab8-9f12-cbc020b3d50f.1551
/c71bb8b0-c669-4bf6-8348-14aafd4a805f/images/9dc54d22-7cb3-4e07-adbb-70f0ec5b7e6b/5f8515f7-3fae-4af6-adc4-d38426a9aa72
/.shard/6792d5d0-1bd2-41cf-a48e-dbe015d3e9fd.611
/.shard/dfe31381-6b91-4eb1-9050-0332182e424a.424
/.shard/5bb5bc8b-abfb-4ab8-9f12-cbc020b3d50f.50
/.shard/5bb5bc8b-abfb-4ab8-9f12-cbc020b3d50f.51
/.shard/cca2d4d0-7254-49c5-9db0-c9aaeb34c479.424
/.shard/5bb5bc8b-abfb-4ab8-9f12-cbc020b3d50f.236
/.shard/5bb5bc8b-abfb-4ab8-9f12-cbc020b3d50f.425
/.shard/5bb5bc8b-abfb-4ab8-9f12-cbc020b3d50f.428
/.shard/5bb5bc8b-abfb-4ab8-9f12-cbc020b3d50f.427
/.shard/5bb5bc8b-abfb-4ab8-9f12-cbc020b3d50f.614
/.shard/5bb5bc8b-abfb-4ab8-9f12-cbc020b3d50f.1363
/.shard/6792d5d0-1bd2-41cf-a48e-dbe015d3e9fd.238
/.shard/6792d5d0-1bd2-41cf-a48e-dbe015d3e9fd.428
/.shard/6792d5d0-1bd2-41cf-a48e-dbe015d3e9fd.612
/.shard/6792d5d0-1bd2-41cf-a48e-dbe015d3e9fd.423
/.shard/6792d5d0-1bd2-41cf-a48e-dbe015d3e9fd.614
/.shard/5bb5bc8b-abfb-4ab8-9f12-cbc020b3d50f.987
/.shard/5bb5bc8b-abfb-4ab8-9f12-cbc020b3d50f.429
/.shard/dfe31381-6b91-4eb1-9050-0332182e424a.429
/.shard/dfe31381-6b91-4eb1-9050-0332182e424a.241
/c71bb8b0-c669-4bf6-8348-14aafd4a805f/dom_md/ids
/.shard/cca2d4d0-7254-49c5-9db0-c9aaeb34c479.987
/.shard/bd0bf192-e0e1-4b72-85cb-fa3497c555be.987
/.shard/6792d5d0-1bd2-41cf-a48e-dbe015d3e9fd.241
/.shard/6792d5d0-1bd2-41cf-a48e-dbe015d3e9fd.429
/.shard/bd0bf192-e0e1-4b72-85cb-fa3497c555be.424
/.shard/dfe31381-6b91-4eb1-9050-0332182e424a.987
/.shard/6792d5d0-1bd2-41cf-a48e-dbe015d3e9fd.987
/.shard/dfe31381-6b91-4eb1-9050-0332182e424a.238
/.shard/dfe31381-6b91-4eb1-9050-0332182e424a.428
/.shard/5bb5bc8b-abfb-4ab8-9f12-cbc020b3d50f.238
/.shard/bd0bf192-e0e1-4b72-85cb-fa3497c555be.611
/.shard/18954415-3210-4d93-8591-0b3e1e5b3a16.504
/.shard/bd0bf192-e0e1-4b72-85cb-fa3497c555be.238
/.shard/bd0bf192-e0e1-4b72-85cb-fa3497c555be.428
/.shard/bd0bf192-e0e1-4b72-85cb-fa3497c555be.612
/.shard/bd0bf192-e0e1-4b72-85cb-fa3497c555be.614
/.shard/cca2d4d0-7254-49c5-9db0-c9aaeb34c479.241
/.shard/cca2d4d0-7254-49c5-9db0-c9aaeb34c479.429
/.shard/6792d5d0-1bd2-41cf-a48e-dbe015d3e9fd.236
/.shard/cca2d4d0-7254-49c5-9db0-c9aaeb34c479.989
/.shard/18954415-3210-4d93-8591-0b3e1e5b3a16.909
/__DIRECT_IO_TEST__
/.shard/bd0bf192-e0e1-4b72-85cb-fa3497c555be.240
/.shard/bd0bf192-e0e1-4b72-85cb-fa3497c555be.429
/.shard/18954415-3210-4d93-8591-0b3e1e5b3a16.497
/.shard/cca2d4d0-7254-49c5-9db0-c9aaeb34c479.238
/.shard/cca2d4d0-7254-49c5-9db0-c9aaeb34c479.428
/.shard/5bb5bc8b-abfb-4ab8-9f12-cbc020b3d50f.241
/.shard/bd0bf192-e0e1-4b72-85cb-fa3497c555be.991
/.shard/cca2d4d0-7254-49c5-9db0-c9aaeb34c479.236
/.shard/cca2d4d0-7254-49c5-9db0-c9aaeb34c479.990
/.shard/6792d5d0-1bd2-41cf-a48e-dbe015d3e9fd.48
/.shard/6792d5d0-1bd2-41cf-a48e-dbe015d3e9fd.52
/.shard/6792d5d0-1bd2-41cf-a48e-dbe015d3e9fd.424
/.shard/6792d5d0-1bd2-41cf-a48e-dbe015d3e9fd.425
/.shard/6792d5d0-1bd2-41cf-a48e-dbe015d3e9fd.799
/.shard/6792d5d0-1bd2-41cf-a48e-dbe015d3e9fd.1175
/.shard/6792d5d0-1bd2-41cf-a48e-dbe015d3e9fd.1551
/.shard/6792d5d0-1bd2-41cf-a48e-dbe015d3e9fd.1363
/.shard/dfe31381-6b91-4eb1-9050-0332182e424a.612
/.shard/dfe31381-6b91-4eb1-9050-0332182e424a.614
/.shard/dfe31381-6b91-4eb1-9050-0332182e424a.236
/.shard/dfe31381-6b91-4eb1-9050-0332182e424a.611
/.shard/dfe31381-6b91-4eb1-9050-0332182e424a.989
/.shard/dfe31381-6b91-4eb1-9050-0332182e424a.988
/.shard/dfe31381-6b91-4eb1-9050-0332182e424a.990
/.shard/dfe31381-6b91-4eb1-9050-0332182e424a.991
Status: Connected
Number of entries: 86

Brick 10.8.255.2:/gluster/bgl-vms-gfs02/brick
Status: Connected
Number of entries: 0

Brick 10.8.255.3:/gluster/bgl-vms-gfs03/brick
/.shard/bd0bf192-e0e1-4b72-85cb-fa3497c555be.236
/.shard/5bb5bc8b-abfb-4ab8-9f12-cbc020b3d50f.48
/.shard/5bb5bc8b-abfb-4ab8-9f12-cbc020b3d50f.52
/.shard/5bb5bc8b-abfb-4ab8-9f12-cbc020b3d50f.423
/.shard/5bb5bc8b-abfb-4ab8-9f12-cbc020b3d50f.424
/.shard/5bb5bc8b-abfb-4ab8-9f12-cbc020b3d50f.611
/.shard/5bb5bc8b-abfb-4ab8-9f12-cbc020b3d50f.612