[ovirt-users] error while setup

2019-04-24 Thread W3SERVICES
PLAY [gluster_servers]
*

TASK [Run a shell script]
**
changed: [localhost.localdomain] =>
(item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
localhost.localdomain)

PLAY RECAP
*
localhost.localdomain  : ok=1    changed=1    unreachable=0    failed=0


PLAY [gluster_servers]
*

TASK [Enable or disable services]
**
ok: [localhost.localdomain] => (item=chronyd)

PLAY RECAP
*
localhost.localdomain  : ok=1    changed=0    unreachable=0    failed=0


PLAY [gluster_servers]
*

TASK [start/stop/restart/reload services]
**
changed: [localhost.localdomain] => (item=chronyd)

PLAY RECAP
*
localhost.localdomain  : ok=1    changed=1    unreachable=0    failed=0


PLAY [gluster_servers]
*

TASK [Run a command in the shell]
**
changed: [localhost.localdomain] => (item=vdsm-tool configure --force)

PLAY RECAP
*
localhost.localdomain  : ok=1    changed=1    unreachable=0    failed=0


PLAY [gluster_servers]
*

TASK [Run a shell script]
**
changed: [localhost.localdomain] =>
(item=/usr/share/gdeploy/scripts/blacklist_all_disks.sh)

PLAY RECAP
*
localhost.localdomain  : ok=1    changed=1    unreachable=0    failed=0


PLAY [gluster_servers]
*

TASK [Clean up filesystem signature]
***
skipping: [localhost.localdomain] => (item=/dev/sdb)

TASK [Create Physical Volume]
**
failed: [localhost.localdomain] (item=/dev/sdb) => {"changed": false,
"failed_when_result": true, "item": "/dev/sdb", "msg": "  Device /dev/sdb
not found.\n", "rc": 5}
to retry, use: --limit @/tmp/tmpQtig6r/pvcreate.retry

PLAY RECAP
*
localhost.localdomain  : ok=0    changed=0    unreachable=0    failed=1
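
(Not part of the original output — a few hedged diagnostic commands for the
"Device /dev/sdb not found" failure; they assume sdb really is the intended
local data disk:)

lsblk -o NAME,SIZE,TYPE,MOUNTPOINT /dev/sdb   # does the kernel see the disk at all?
multipath -ll                                 # is sdb hidden behind a multipath map?
pvcreate --test /dev/sdb                      # dry run; reports LVM filter/visibility problems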



From


*Sunil Kumar | 0996767 | Tech Head and Cloud Solution Architect*
*---*

*Website:*
*https://w3services.net *
https://w3services.com (Tech Consultancy)
Office Tel. +91 09300670068 | Write A Google Review (Help Us)

*---*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DV2KSLGT6SMQNWAST7MKLTFQFN275W4T/


[ovirt-users] Re: Template Disk Corruption

2019-04-24 Thread Alex McWhirter
It happens with every template, whenever you make a desktop VM out of it and then
delete that VM. If you make a server VM, there are no issues.



On 2019-04-24 09:30, Benny Zlotnik wrote:

Does it happen all the time? For every template you create?
Or is it for a specific template?

On Wed, Apr 24, 2019 at 12:59 PM Alex McWhirter  
wrote:


oVirt is 4.2.7.5
VDSM is 4.20.43

Not sure which logs are applicable, i don't see any obvious errors in
vdsm.log or engine.log. After you delete the desktop VM, and create
another based on the template the new VM still boots, it just reports
disk read errors and fails boot.

On 2019-04-24 05:01, Benny Zlotnik wrote:
> can you provide more info (logs, versions)?
>
> On Wed, Apr 24, 2019 at 11:04 AM Alex McWhirter 
> wrote:
>>
>> 1. Create server template from server VM (so it's a full copy of the
>> disk)
>>
>> 2. From template create a VM, override server to desktop, so that it
>> become a qcow2 overlay to the template raw disk.
>>
>> 3. Boot VM
>>
>> 4. Shutdown VM
>>
>> 5. Delete VM
>>
>>
>>
>> Template disk is now corrupt, any new machines made from it will not
>> boot.
>>
>>
>> I can't see why this happens as the desktop optimized VM should have
>> just been an overlay qcow file...
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/F4DGO7N3LL4DPE6PCMZSIJLXPAA6UTBU/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3NRRYHYBKRUGIVPOKDUKDD4F523WEF6J/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VFI6JVQZB53PVGVGHILAICSEBXXTYMZF/


[ovirt-users] Re: Upgrade from 4.3.2 to 4.3.3 fails on database schema update

2019-04-24 Thread eshwayri
Thank you; that worked.  Upgrade completed successfully.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TC5BDIM65UJJBKPQ47VOABIFVUOPKLVT/


[ovirt-users] Adding network to VM - What stupid thing have I missed?

2019-04-24 Thread eshwayri
When creating a new VM, it looks like I connect its NIC(s) under the
"Instantiate VM network interfaces by picking a vNIC profile." setting.  The
problem I am seeing is that the drop-down only has "Empty" and "br-kvm-prod"
(my production bridge).  I should have two more.  Under the Networks and vNIC
Profiles tabs I also have br-kvm-stor and kvm_heart-22 defined, but they don't
appear in that list.  Am I missing something I need to do for these additional
profiles to also appear in the drop-down?  Some of my VMs will need access to
the other physical networks.
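
(An aside added for illustration, not part of the original question — ENGINE_FQDN,
the password and CLUSTER_ID are placeholders; comparing these two listings against
the drop-down can show whether a network is missing from the cluster or has no
vNIC profile:)

# All vNIC profiles the engine knows about
curl -k -u 'admin@internal:PASSWORD' -H 'Accept: application/xml' \
     https://ENGINE_FQDN/ovirt-engine/api/vnicprofiles
# Networks assigned to the cluster the VM is being created in
curl -k -u 'admin@internal:PASSWORD' -H 'Accept: application/xml' \
     https://ENGINE_FQDN/ovirt-engine/api/clusters/CLUSTER_ID/networks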
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MWY3L2KKLNSXKRMMZSMEPE2RK6DR6FPL/


[ovirt-users] New disk creation very slow after upgrade to 4.3.3

2019-04-24 Thread Steffen Luitz
This is on a 3-node hyperconverged environment with glusterfs. 

After upgrading to oVirt 4.3.3 (from 4.3.2) creating a new disk takes very long 
(hours for a 100GByte disk, making it essentially impossible to create a new 
disk image). 

In the UI the default is "preallocated", but changing it to thin provision does
not make any difference. Regardless of this setting, a fallocate process gets
started on the SDM host:

/usr/bin/python2 /usr/libexec/vdsm/fallocate 107374182400 
/rhev/data-center/mnt/glusterSD/s-vmhost2-ovir.t...

Using fallocate to create a file directly on the underlying file system is 
fast.  Using it to create a file through the glusterfs fuse mount is very slow. 
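
(A hedged reproduction sketch, not from the original mail — it uses the plain
fallocate(1) utility rather than VDSM's helper; SERVER, VOLUME and the brick
filesystem path are placeholders:)

# Directly on the filesystem that backs the brick (reported as fast).
# Write the test file outside the live brick directory; gluster must not see it.
time fallocate -l 10G /gluster_bricks/VOLUME/fallocate-test.img
# Through the glusterfs FUSE mount that oVirt uses (reported as very slow)
time fallocate -l 10G /rhev/data-center/mnt/glusterSD/SERVER:_VOLUME/fallocate-test.img
# Remove the test files afterwards
rm -f /gluster_bricks/VOLUME/fallocate-test.img \
      /rhev/data-center/mnt/glusterSD/SERVER:_VOLUME/fallocate-test.img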

Thanks for any insights. 

Steffen

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AMDKSAFEUCUC7AZBFM2TEDRYMREAB6NI/


[ovirt-users] Arbiter brick disk performance

2019-04-24 Thread Leo David
Hello Everyone,
I need to look into adding some enterprise-grade SAS disks (both SSD and
spinning), and since the prices are not too low, I would like to benefit from
replica 3 arbitrated volumes.
Therefore, I intend to buy some smaller disks to use as the arbiter bricks.
My question is: what performance (regarding IOPS, throughput) do the arbiter
disks need? Should they be at least the same as the real data disks?
Knowing that they only keep metadata, I am thinking there will not be so much
pressure on the arbiters.
Any thoughts?

Thank you !


-- 
Best regards, Leo David
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/E52TUH2HJY6DRA625643WVDEHAHZ7HOH/


[ovirt-users] [ANN] oVirt 4.3.3 second async update is now available

2019-04-24 Thread Sandro Bonazzola
The oVirt Team has just released a new version of the following packages:
- ovirt-engine-4.3.3.6
The async release addresses the following bugs:
- Bug 1701205 - Creating a new VM over the not defaulted cluster fails with
  "CPU Profile doesn't match provided Cluster" error.
- Bug 1700759 - engine failed schema refresh v4.3.2 -> v4.3.3

Thanks,

-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5CLOMKX7VRQFWYQXV4BUHUU52QD5ZDGR/


[ovirt-users] Re: Arbiter brick disk performance

2019-04-24 Thread Strahil
I think 2 small SSDs (RAID 1 mdadm) can do the job better, as SSDs have lower
latencies. You can use them both for the OS (minimum needed is 60 GB) and the rest
will be plenty for an arbiter.
By the way, if you plan on using gluster snapshots, use thin LVM for the brick.
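
(A minimal sketch of a thin-LVM-backed arbiter brick, added for illustration;
/dev/sdX, the VG/LV names and the sizes are placeholders, not from this thread:)

pvcreate /dev/sdX
vgcreate gluster_vg_arbiter /dev/sdX
# Thin pool plus a thin LV, so gluster snapshots remain possible on this brick.
lvcreate -L 100G --thinpool arbiter_pool gluster_vg_arbiter
lvcreate -V 90G --thin -n arbiter_lv gluster_vg_arbiter/arbiter_pool
# XFS with 512-byte inodes is the usual recommendation for gluster bricks.
mkfs.xfs -i size=512 /dev/gluster_vg_arbiter/arbiter_lv
mkdir -p /gluster_bricks/vmstore
mount /dev/gluster_vg_arbiter/arbiter_lv /gluster_bricks/vmstore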

Best Regards,
Strahil Nikolov

On Apr 24, 2019 16:20, Leo David  wrote:
>
> Hello Everyone,
> I need to look into adding some enterprise grade sas disks ( both ssd and 
> spinning  ),  and since the prices are not too low,  I would like to benefit 
> of replica 3 arbitrated.
> Therefore,  I intend to buy some smaller disks for use them as arbiter brick.
> My question is, what performance ( regarding iops,  througput ) the arbiter 
> disks need to be. Should they be at least the same as the real data disks ?
> Knowing that they only keep metadata, I am thinking that will not be so much 
> pressure on the arbiters.
> Any thoughts?
>
> Thank you !
>
>
> -- 
> Best regards, Leo David___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/G6EBER33PILCCLLRADURVBBBCXNFNYRY/


[ovirt-users] Re: Upgrading from 4.2.8 to 4.3.3 broke Node NG GlusterFS

2019-04-24 Thread Strahil
Fix the disconnected node and run find against a node that has successfully
mounted the volume.

Best Regards,
Strahil Nikolov

On Apr 24, 2019 15:31, Andreas Elvers  wrote:
>
> The file handle is stale so find will display: 
>
> "find: 
> '/rhev/data-center/mnt/glusterSD/node01.infra.solutions.work:_vmstore': 
> Transport endpoint is not connected" 
>
> "stat /rhev/data-center/mnt/glusterSD/node01.infra.solutions.work:_vmstore" 
> will output 
> stat: cannot stat 
> '/rhev/data-center/mnt/glusterSD/node01.infra.solutions.work:_vmstore': 
> Transport endpoint is not connected 
>
> All Nodes are peering with the other nodes: 
> - 
> Saiph:~ andreas$ ssh node01 gluster peer status 
> Number of Peers: 2 
>
> Hostname: node02.infra.solutions.work 
> Uuid: 87fab40a-2395-41ce-857d-0b846e078cdb 
> State: Peer in Cluster (Connected) 
>
> Hostname: node03.infra.solutions.work 
> Uuid: 49025f81-e7c1-4760-be03-f36e0f403d26 
> State: Peer in Cluster (Connected) 
>  
> Saiph:~ andreas$ ssh node02 gluster peer status 
> Number of Peers: 2 
>
> Hostname: node03.infra.solutions.work 
> Uuid: 49025f81-e7c1-4760-be03-f36e0f403d26 
> State: Peer in Cluster (Disconnected) 
>
> Hostname: node01.infra.solutions.work 
> Uuid: f25e6bff-e5e2-465f-a33e-9148bef94633 
> State: Peer in Cluster (Connected) 
>  
> ssh node03 gluster peer status 
> Number of Peers: 2 
>
> Hostname: node02.infra.solutions.work 
> Uuid: 87fab40a-2395-41ce-857d-0b846e078cdb 
> State: Peer in Cluster (Connected) 
>
> Hostname: node01.infra.solutions.work 
> Uuid: f25e6bff-e5e2-465f-a33e-9148bef94633 
> State: Peer in Cluster (Connected)
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/DI6AWTLIQIPWNK2M7PBABQ4TAPB4J3S3/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GE2WD7UOHGBSZDF7DRNEL7HHHUZZJQOP/


[ovirt-users] Re: Template Disk Corruption

2019-04-24 Thread Benny Zlotnik
Does it happen all the time? For every template you create?
Or is it for a specific template?

On Wed, Apr 24, 2019 at 12:59 PM Alex McWhirter  wrote:
>
> oVirt is 4.2.7.5
> VDSM is 4.20.43
>
> Not sure which logs are applicable, i don't see any obvious errors in
> vdsm.log or engine.log. After you delete the desktop VM, and create
> another based on the template the new VM still boots, it just reports
> disk read errors and fails boot.
>
> On 2019-04-24 05:01, Benny Zlotnik wrote:
> > can you provide more info (logs, versions)?
> >
> > On Wed, Apr 24, 2019 at 11:04 AM Alex McWhirter 
> > wrote:
> >>
> >> 1. Create server template from server VM (so it's a full copy of the
> >> disk)
> >>
> >> 2. From template create a VM, override server to desktop, so that it
> >> become a qcow2 overlay to the template raw disk.
> >>
> >> 3. Boot VM
> >>
> >> 4. Shutdown VM
> >>
> >> 5. Delete VM
> >>
> >>
> >>
> >> Template disk is now corrupt, any new machines made from it will not
> >> boot.
> >>
> >>
> >> I can't see why this happens as the desktop optimized VM should have
> >> just been an overlay qcow file...
> >> ___
> >> Users mailing list -- users@ovirt.org
> >> To unsubscribe send an email to users-le...@ovirt.org
> >> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> >> oVirt Code of Conduct:
> >> https://www.ovirt.org/community/about/community-guidelines/
> >> List Archives:
> >> https://lists.ovirt.org/archives/list/users@ovirt.org/message/F4DGO7N3LL4DPE6PCMZSIJLXPAA6UTBU/
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> > https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/3NRRYHYBKRUGIVPOKDUKDD4F523WEF6J/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2LJMZXLO7UU7OXI6KHZSOYUIVTC6KA6R/


[ovirt-users] Re: Upgrading from 4.2.8 to 4.3.3 broke Node NG GlusterFS

2019-04-24 Thread Andreas Elvers
After rebooting the node that was not able to mount the gluster volume, things
improved eventually. The SPM went away and restarted for the datacenter, and
suddenly node03 was able to mount the gluster volume. In between I was down to
1/3 active bricks, which results in a read-only glusterfs. I was lucky to have
the engine still on NFS. But anyway...

Thanks for your thoughts.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5NKHRCJWEZGXSBKRMR447RCX6GWAAZV6/


[ovirt-users] Re: Arbiter brick disk performance

2019-04-24 Thread Leo David
Thank you very much Strahil, very helpful, as always. So I would equip the
3rd server and allocate one small (120 - 240 GB) consumer-grade SSD for each
of the gluster volumes, and at volume creation specify the small SSDs as
the 3rd brick.
Does that make sense?
Thank you !
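
(For illustration only — hostnames and brick paths below are placeholders; the
small SSD mounted on the third server is given as the arbiter brick at volume
creation time:)

gluster volume create VOLNAME replica 3 arbiter 1 \
    server1:/gluster_bricks/VOLNAME/VOLNAME \
    server2:/gluster_bricks/VOLNAME/VOLNAME \
    server3:/gluster_bricks/arbiter/VOLNAME
gluster volume start VOLNAME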

On Wed, Apr 24, 2019, 18:10 Strahil  wrote:

> I think 2 small ssds (raid 1 mdadm) can do the job better as ssds have
> lower latencies .You can use them both for OS (minimum needed is 60 GB) and
> the rest will be plenty for an arbiter.
> By the way, if you plan using gluster snapshots - use thin LVM for the
> brick.
>
> Best Regards,
> Strahil Nikolov
> On Apr 24, 2019 16:20, Leo David  wrote:
>
> Hello Everyone,
> I need to look into adding some enterprise grade sas disks ( both ssd
> and spinning  ),  and since the prices are not too low,  I would like to
> benefit of replica 3 arbitrated.
> Therefore,  I intend to buy some smaller disks for use them as arbiter
> brick.
> My question is, what performance ( regarding iops,  througput ) the
> arbiter disks need to be. Should they be at least the same as the real data
> disks ?
> Knowing that they only keep metadata, I am thinking that will not be so
> much pressure on the arbiters.
> Any thoughts?
>
> Thank you !
>
>
> --
> Best regards, Leo David
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4MZ4UUCJM7CONEJHSIMASBO54RS2GXTJ/


[ovirt-users] Upgrading from 4.2.8 to 4.3.3 broke Node NG GlusterFS

2019-04-24 Thread Andreas Elvers
Hi,

I am currently upgrading my oVirt setup from 4.2.8 to 4.3.3.1.

The setup consists of:

Datacenter/Cluster Default: [fully upgraded to 4.3.3.1]
   2 nodes (node04,node05)- NFS storage domain with self hosted engine 

Datacenter Luise:
   Cluster1: 3 nodes (node01,node02,node03) - Node NG with GlusterFS - Ceph 
Cinder storage domain
  [Node1 and Node3 are upgraded to 4.3.3.1, Node2 is on 4.2.8]
   Cluster2: 1 node (node06)  - only Ceph Cinder storage domain [fully upgraded 
to 4.3.3.1]


Problems started when upgrading Luise/Cluster1 with GlusterFS:
(I always waited for GlusterFS to be fully synced before proceeding to the next 
step)

- Upgrade node01 to 4.3.3 -> OK
- Upgrade node03 to 4.3.3.1 -> OK
- Upgrade node01 to 4.3.3.1 -> GlusterFS became unstable.


I now get the error message:

VDSM node03.infra.solutions.work command ConnectStoragePoolVDS failed: Cannot 
find master domain: u'spUUID=f3218bf7-6158-4b2b-b272-51cdc3280376, 
msdUUID=02a32017-cbe6-4407-b825-4e558b784157'

And on node03 there is a problem with Gluster:

node03#: ls -l 
/rhev/data-center/mnt/glusterSD/node01.infra.solutions.work:_vmstore
ls: cannot access 
/rhev/data-center/mnt/glusterSD/node01.infra.solutions.work:_vmstore: Transport 
endpoint is not connected

The directory is available on node01 and node02.

The engine is reporting the brick on node03 as down. Node03 and Node06 are 
shown as NonOperational, because they are not able to access the gluster 
storage domain. 

A “gluster peer status” on node1, node2, and node3 shows all peers connected.

“gluster volume heal vmstore info” shows for all nodes:


gluster volume heal vmstore info
Brick node01.infra.solutions.work:/gluster_bricks/vmstore/vmstore
Status: Transport endpoint is not connected
Number of entries: -

Brick node02.infra.solutions.work:/gluster_bricks/vmstore/vmstore



/02a32017-cbe6-4407-b825-4e558b784157/dom_md/ids
/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.66

/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.60
/02a32017-cbe6-4407-b825-4e558b784157/images/a3a10398-9698-4b73-84d9-9735448e3534/6161e310-4ad6-42d9-8117-5a89c5b2b4b6


/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.96


/.shard/d66880de-3fa1-4362-8c43-574a173c5f7d.133


/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.38
/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.67
/__DIRECT_IO_TEST__


/02a32017-cbe6-4407-b825-4e558b784157/images/493188b2-c137-4440-99ee-43a753842a7d/9aa2d139-e3bd-406b-8fe0-b189123eaa73

/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.64
/.shard/d66880de-3fa1-4362-8c43-574a173c5f7d.132



/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.44
/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.9
/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.69

/02a32017-cbe6-4407-b825-4e558b784157/images/12e647fb-20aa-4957-b659-05fa75a9215e/f7e4b2a3-ab84-4eb5-a4e7-7208ddad8156





/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.35
/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.32


/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.39


/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.34
/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.68
Status: Connected
Number of entries: 47

Brick node03.infra.solutions.work:/gluster_bricks/vmstore/vmstore
/02a32017-cbe6-4407-b825-4e558b784157/images/12e647fb-20aa-4957-b659-05fa75a9215e/f7e4b2a3-ab84-4eb5-a4e7-7208ddad8156











/.shard/d66880de-3fa1-4362-8c43-574a173c5f7d.133






/02a32017-cbe6-4407-b825-4e558b784157/images/493188b2-c137-4440-99ee-43a753842a7d/9aa2d139-e3bd-406b-8fe0-b189123eaa73






/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.44







/02a32017-cbe6-4407-b825-4e558b784157/dom_md/ids



/02a32017-cbe6-4407-b825-4e558b784157/images/a3a10398-9698-4b73-84d9-9735448e3534/6161e310-4ad6-42d9-8117-5a89c5b2b4b6



/.shard/d66880de-3fa1-4362-8c43-574a173c5f7d.132



/__DIRECT_IO_TEST__
Status: Connected
Number of entries: 47

On node03 there are several self-heal processes that seem to be doing nothing.
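
(Illustrative checks, not part of the original message, using the volume name
from above:)

gluster volume status vmstore             # are all brick processes online (port/PID)?
gluster volume heal vmstore info summary  # pending-heal counts per brick
gluster volume heal vmstore full          # trigger a full heal once all bricks are up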

Oh well.. What now?

Best regards,
- Andreas
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/R5GS6AQXTEQRMUQNMEBDC72YG3A5JFF6/


[ovirt-users] Template Disk Corruption

2019-04-24 Thread Alex McWhirter
1. Create server template from server VM (so it's a full copy of the 
disk)


2. From the template create a VM, override server to desktop, so that it
becomes a qcow2 overlay on top of the template's raw disk.


3. Boot VM

4. Shutdown VM

5. Delete VM



The template disk is now corrupt; any new machines made from it will not
boot.



I can't see why this happens, as the desktop-optimized VM should have
just been an overlay qcow2 file...
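
(A hedged diagnostic sketch, not from the original post; VM_IMG stands for the
desktop VM's qcow2 overlay under the storage domain's images/ directory:)

# Show the overlay and the raw template image it points to
qemu-img info --backing-chain VM_IMG
# Consistency check of the qcow2 overlay itself
qemu-img check VM_IMG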

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/F4DGO7N3LL4DPE6PCMZSIJLXPAA6UTBU/


[ovirt-users] Re: Template Disk Corruption

2019-04-24 Thread Benny Zlotnik
can you provide more info (logs, versions)?

On Wed, Apr 24, 2019 at 11:04 AM Alex McWhirter  wrote:
>
> 1. Create server template from server VM (so it's a full copy of the
> disk)
>
> 2. From template create a VM, override server to desktop, so that it
> become a qcow2 overlay to the template raw disk.
>
> 3. Boot VM
>
> 4. Shutdown VM
>
> 5. Delete VM
>
>
>
> Template disk is now corrupt, any new machines made from it will not
> boot.
>
>
> I can't see why this happens as the desktop optimized VM should have
> just been an overlay qcow file...
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/F4DGO7N3LL4DPE6PCMZSIJLXPAA6UTBU/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3NRRYHYBKRUGIVPOKDUKDD4F523WEF6J/


[ovirt-users] Re: Upgrading from 4.2.8 to 4.3.3 broke Node NG GlusterFS

2019-04-24 Thread Andreas Elvers
Restarting improved things a little. The bricks on node03 are still shown as
down, but "gluster volume status" is looking better.

Saiph:~ andreas$ ssh node01 gluster volume status vmstore
Status of volume: vmstore
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick node01.infra.solutions.work:/gluster_
bricks/vmstore/vmstore  49157 0  Y   24543
Brick node02.infra.solutions.work:/gluster_
bricks/vmstore/vmstore  49154 0  Y   23795
Brick node03.infra.solutions.work:/gluster_
bricks/vmstore/vmstore  49157 0  Y   1617
Self-heal Daemon on localhost   N/A   N/AY   32121
Self-heal Daemon on node03.infra.solutions.
workN/A   N/AY   25798
Self-heal Daemon on node02.infra.solutions.
workN/A   N/AY   30879

Task Status of Volume vmstore
--
There are no active volume tasks

Saiph:~ andreas$ ssh node02 gluster volume status vmstore
Status of volume: vmstore
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick node01.infra.solutions.work:/gluster_
bricks/vmstore/vmstore  49157 0  Y   24543
Brick node02.infra.solutions.work:/gluster_
bricks/vmstore/vmstore  49154 0  Y   23795
Brick node03.infra.solutions.work:/gluster_
bricks/vmstore/vmstore  49157 0  Y   1617
Self-heal Daemon on localhost   N/A   N/AY   30879
Self-heal Daemon on node03.infra.solutions.
workN/A   N/AY   25798
Self-heal Daemon on node01.infra.solutions.
workN/A   N/AY   32121

Task Status of Volume vmstore
--
There are no active volume tasks

Saiph:~ andreas$ ssh node03 gluster volume status vmstore
Status of volume: vmstore
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick node01.infra.solutions.work:/gluster_
bricks/vmstore/vmstore  49157 0  Y   24543
Brick node02.infra.solutions.work:/gluster_
bricks/vmstore/vmstore  49154 0  Y   23795
Brick node03.infra.solutions.work:/gluster_
bricks/vmstore/vmstore  49157 0  Y   1617
Self-heal Daemon on localhost   N/A   N/AY   25798
Self-heal Daemon on node01.infra.solutions.
workN/A   N/AY   32121
Self-heal Daemon on node02.infra.solutions.
workN/A   N/AY   30879

Task Status of Volume vmstore
--
There are no active volume tasks
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AAGJ42GF267NOFEQXNJRUQJD7C5UCOM5/


[ovirt-users] Re: Template Disk Corruption

2019-04-24 Thread Alex McWhirter

oVirt is 4.2.7.5
VDSM is 4.20.43

Not sure which logs are applicable; I don't see any obvious errors in
vdsm.log or engine.log. After you delete the desktop VM and create
another based on the template, the new VM still starts, but it reports
disk read errors and fails to boot.


On 2019-04-24 05:01, Benny Zlotnik wrote:

can you provide more info (logs, versions)?

On Wed, Apr 24, 2019 at 11:04 AM Alex McWhirter  
wrote:


1. Create server template from server VM (so it's a full copy of the
disk)

2. From template create a VM, override server to desktop, so that it
become a qcow2 overlay to the template raw disk.

3. Boot VM

4. Shutdown VM

5. Delete VM



Template disk is now corrupt, any new machines made from it will not
boot.


I can't see why this happens as the desktop optimized VM should have
just been an overlay qcow file...
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/F4DGO7N3LL4DPE6PCMZSIJLXPAA6UTBU/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3NRRYHYBKRUGIVPOKDUKDD4F523WEF6J/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TXD5OOS3NMRZCIWW7Q2CGKMIGIBUJNAR/


[ovirt-users] Re: Upgrading from 4.2.8 to 4.3.3 broke Node NG GlusterFS

2019-04-24 Thread Andreas Elvers
"systemctl restart glusterd" on node03 did not help. Still getting:

node03#: ls -l 
/rhev/data-center/mnt/glusterSD/node01.infra.solutions.work:_vmstore 
ls: cannot access 
/rhev/data-center/mnt/glusterSD/node01.infra.solutions.work:_vmstore: Transport 
endpoint is not connected

Engine still shows bricks on node03 as down.
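
(Not from the original message — one common next step for a stale "Transport
endpoint is not connected" FUSE mount, sketched with this thread's paths:)

# On node03: lazily unmount the stale glusterfs FUSE mount, then remount to test.
umount -l /rhev/data-center/mnt/glusterSD/node01.infra.solutions.work:_vmstore
mount -t glusterfs node01.infra.solutions.work:/vmstore \
      /rhev/data-center/mnt/glusterSD/node01.infra.solutions.work:_vmstore
# On an oVirt node, VDSM normally re-mounts the storage domain itself once the
# host is re-activated, so the manual mount is only a connectivity test.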
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2JMM4UBZ54TNHCFUFYDX2OOVCKEMXFBX/


[ovirt-users] Re: Prevent 2 different VMs from running on the same host

2019-04-24 Thread Jorick Astrego
Hi,

Yes, use affinity groups for this

https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3-beta/html/virtual_machine_management_guide/sect-affinity_groups

*The VM Affinity Rule*

When you create an affinity group, you select the virtual machines
that belong to the group. To define /where these virtual machines
can run in relation to each other/, you enable a *VM Affinity Rule*:
A positive rule tries to run the virtual machines together on a
single host; a negative affinity rule tries to run the virtual
machines apart on separate hosts. If the rule cannot be fulfilled,
the outcome depends on whether the weight or filter module is enabled.
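
(As a rough sketch, not from the original reply: the same thing can be done over
the REST API; ENGINE_FQDN, CLUSTER_ID and the password are placeholders:)

# Create an enforcing, negative VM affinity group in a cluster.
curl -k -u 'admin@internal:PASSWORD' \
     -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
     -X POST \
     -d '<affinity_group><name>keep-apart</name><positive>false</positive><enforcing>true</enforcing></affinity_group>' \
     https://ENGINE_FQDN/ovirt-engine/api/clusters/CLUSTER_ID/affinitygroups
# The two VMs are then added to the group in the UI (or via the group's vms
# sub-collection).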

On 4/24/19 12:45 PM, Paulo Silva wrote:
> Hi,
>
> I have a cluster of 6 hosts using ovirt 4.3 and I want to make sure
> that 2 VMs are always started on different hosts.
> Is it possible to prevent 2 different VMs from running on the same
> physical host without specifying manually a different set of hosts
> where each VM can start running?
>
> Thanks
> -- 
> Paulo Silva mailto:paulo...@gmail.com>>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/KQ4LVNAZMIL6RPZVFS5Z4URUJ2RHCHIH/




Met vriendelijke groet, With kind regards,

Jorick Astrego

Netbulae Virtualization Experts

Tel: 053 20 30 270 | i...@netbulae.eu | Staalsteden 4-3A | KvK 08198180
Fax: 053 20 30 271 | www.netbulae.eu  | 7547 TA Enschede | BTW NL821234584B01



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DMSMGYVO5YW3HFSWH3FPBTBMCHJ32GKW/


[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-04-24 Thread wodel youchi
Hi,

I am not sure if I understood your question, but here is a statement from
the install guide of RHHI (Deploying RHHI) :

"You cannot create a volume that spans more than 3 nodes, or expand an
existing volume so that it spans
across more than 3 nodes at a time."

Page 11 , 2.7 Scaling.
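
(For illustration only, assuming an existing replica 3 volume named VOLNAME —
hostnames and brick paths are placeholders; expansion is done in sets of three
bricks at a time:)

gluster volume add-brick VOLNAME replica 3 \
    newhost1:/gluster_bricks/VOLNAME/VOLNAME \
    newhost2:/gluster_bricks/VOLNAME/VOLNAME \
    newhost3:/gluster_bricks/VOLNAME/VOLNAME
gluster volume rebalance VOLNAME start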

Regards.



On Tue, Apr 23, 2019 at 06:56,  wrote:

> Use the created multipath devices
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/3UQWBS3W23I3LTJQCZI7OI2467AW4JRO/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LQBYLEHE2GQTHJDA3WF3LEN6D6Z57HWH/


[ovirt-users] Prevent 2 different VMs from running on the same host

2019-04-24 Thread Paulo Silva
Hi,

I have a cluster of 6 hosts using ovirt 4.3 and I want to make sure that 2
VMs are always started on different hosts.
Is it possible to prevent 2 different VMs from running on the same physical
host without specifying manually a different set of hosts where each VM can
start running?

Thanks
-- 
Paulo Silva 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KQ4LVNAZMIL6RPZVFS5Z4URUJ2RHCHIH/


[ovirt-users] Re: Upgrading from 4.2.8 to 4.3.3 broke Node NG GlusterFS

2019-04-24 Thread Strahil Nikolov
Try to run a find from a working server (for example node02):

find /rhev/data-center/mnt/glusterSD/node01.infra.solutions.work:_vmstore -exec 
stat {} \;


Also, check if all peers see each other.

Best Regards,
Strahil Nikolov

On Wednesday, 24 April 2019 at 3:27:41 GMT-4, Andreas Elvers  wrote:

 Hi,

I am currently upgrading my oVirt setup from 4.2.8 to 4.3.3.1.

The setup consists of:

Datacenter/Cluster Default: [fully upgraded to 4.3.3.1]
  2 nodes (node04,node05)- NFS storage domain with self hosted engine 

Datacenter Luise:
  Cluster1: 3 nodes (node01,node02,node03) - Node NG with GlusterFS - Ceph 
Cinder storage domain
                  [Node1 and Node3 are upgraded to 4.3.3.1, Node2 is on 4.2.8]
  Cluster2: 1 node (node06)  - only Ceph Cinder storage domain [fully upgraded 
to 4.3.3.1]


Problems started when upgrading Luise/Cluster1 with GlusterFS:
(I always waited for GlusterFS to be fully synced before proceeding to the next 
step)

- Upgrade node01 to 4.3.3 -> OK
- Upgrade node03 to 4.3.3.1 -> OK
- Upgrade node01 to 4.3.3.1 -> GlusterFS became unstable.


I now get the error message:

VDSM node03.infra.solutions.work command ConnectStoragePoolVDS failed: Cannot 
find master domain: u'spUUID=f3218bf7-6158-4b2b-b272-51cdc3280376, 
msdUUID=02a32017-cbe6-4407-b825-4e558b784157'

And on node03 there is a problem with Gluster:

node03#: ls -l 
/rhev/data-center/mnt/glusterSD/node01.infra.solutions.work:_vmstore
ls: cannot access 
/rhev/data-center/mnt/glusterSD/node01.infra.solutions.work:_vmstore: Transport 
endpoint is not connected

The directory is available on node01 and node02.

The engine is reporting the brick on node03 as down. Node03 and Node06 are 
shown as NonOperational, because they are not able to access the gluster 
storage domain. 

A “gluster peer status” on node1, node2, and node3 shows all peers connected.

“gluster volume heal vmstore info” shows for all nodes:


gluster volume heal vmstore info
Brick node01.infra.solutions.work:/gluster_bricks/vmstore/vmstore
Status: Transport endpoint is not connected
Number of entries: -

Brick node02.infra.solutions.work:/gluster_bricks/vmstore/vmstore



/02a32017-cbe6-4407-b825-4e558b784157/dom_md/ids
/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.66

/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.60
/02a32017-cbe6-4407-b825-4e558b784157/images/a3a10398-9698-4b73-84d9-9735448e3534/6161e310-4ad6-42d9-8117-5a89c5b2b4b6


/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.96


/.shard/d66880de-3fa1-4362-8c43-574a173c5f7d.133


/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.38
/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.67
/__DIRECT_IO_TEST__


/02a32017-cbe6-4407-b825-4e558b784157/images/493188b2-c137-4440-99ee-43a753842a7d/9aa2d139-e3bd-406b-8fe0-b189123eaa73

/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.64
/.shard/d66880de-3fa1-4362-8c43-574a173c5f7d.132



/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.44
/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.9
/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.69

/02a32017-cbe6-4407-b825-4e558b784157/images/12e647fb-20aa-4957-b659-05fa75a9215e/f7e4b2a3-ab84-4eb5-a4e7-7208ddad8156





/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.35
/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.32


/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.39


/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.34
/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.68
Status: Connected
Number of entries: 47

Brick node03.infra.solutions.work:/gluster_bricks/vmstore/vmstore
/02a32017-cbe6-4407-b825-4e558b784157/images/12e647fb-20aa-4957-b659-05fa75a9215e/f7e4b2a3-ab84-4eb5-a4e7-7208ddad8156











/.shard/d66880de-3fa1-4362-8c43-574a173c5f7d.133






/02a32017-cbe6-4407-b825-4e558b784157/images/493188b2-c137-4440-99ee-43a753842a7d/9aa2d139-e3bd-406b-8fe0-b189123eaa73






/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.44







/02a32017-cbe6-4407-b825-4e558b784157/dom_md/ids



/02a32017-cbe6-4407-b825-4e558b784157/images/a3a10398-9698-4b73-84d9-9735448e3534/6161e310-4ad6-42d9-8117-5a89c5b2b4b6



/.shard/d66880de-3fa1-4362-8c43-574a173c5f7d.132



/__DIRECT_IO_TEST__
Status: Connected
Number of entries: 47

On Node03 there are several self healing processes, that seem to be doing 
nothing.

Oh well.. What now?

Best regards,
- Andreas
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/R5GS6AQXTEQRMUQNMEBDC72YG3A5JFF6/
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 

[ovirt-users] Unable to use MAC address starting with reserved value 0xFE

2019-04-24 Thread Ricardo Alonso
Is there a way to use a MAC address starting with FE? The machine has a license
requirement attached to the MAC address, and when I try to start it, it fails
with the message:

VM is down with error. Exit message: unsupported configuration: Unable to use 
MAC address starting with reserved value 0xFE - 'fe:XX:XX:XX:XX:XX'
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/N2TRGRQLDSCIZNSLAZ62DIRH33FXXWIS/


[ovirt-users] Re: Upgrading from 4.2.8 to 4.3.3 broke Node NG GlusterFS

2019-04-24 Thread Andreas Elvers
The file handle is stale so find will display:

"find: '/rhev/data-center/mnt/glusterSD/node01.infra.solutions.work:_vmstore': 
Transport endpoint is not connected"

"stat /rhev/data-center/mnt/glusterSD/node01.infra.solutions.work:_vmstore" 
will output
stat: cannot stat 
'/rhev/data-center/mnt/glusterSD/node01.infra.solutions.work:_vmstore': 
Transport endpoint is not connected

All Nodes are peering with the other nodes:
- 
Saiph:~ andreas$ ssh node01 gluster peer status
Number of Peers: 2

Hostname: node02.infra.solutions.work
Uuid: 87fab40a-2395-41ce-857d-0b846e078cdb
State: Peer in Cluster (Connected)

Hostname: node03.infra.solutions.work
Uuid: 49025f81-e7c1-4760-be03-f36e0f403d26
State: Peer in Cluster (Connected)

Saiph:~ andreas$ ssh node02 gluster peer status
Number of Peers: 2

Hostname: node03.infra.solutions.work
Uuid: 49025f81-e7c1-4760-be03-f36e0f403d26
State: Peer in Cluster (Disconnected)

Hostname: node01.infra.solutions.work
Uuid: f25e6bff-e5e2-465f-a33e-9148bef94633
State: Peer in Cluster (Connected)

ssh node03 gluster peer status
Number of Peers: 2

Hostname: node02.infra.solutions.work
Uuid: 87fab40a-2395-41ce-857d-0b846e078cdb
State: Peer in Cluster (Connected)

Hostname: node01.infra.solutions.work
Uuid: f25e6bff-e5e2-465f-a33e-9148bef94633
State: Peer in Cluster (Connected)
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DI6AWTLIQIPWNK2M7PBABQ4TAPB4J3S3/


[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-04-24 Thread Adrian Quintero
Strahil,
this is the issue I am seeing now

[image: image.png]

This is through the UI when I try to create a new brick.

So my concern is: if I modify the filters on the OS, what impact will that
have after the server reboots?

thanks,
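
(Added for illustration, not from the original mail — the sort of blacklist
Strahil describes below; device names are placeholders and the exact private
marker should be taken from the comments at the top of your /etc/multipath.conf:)

# /etc/multipath.conf (sketch)
# VDSM PRIVATE
# The "VDSM PRIVATE" marker keeps VDSM from overwriting this file on upgrades.
blacklist {
    devnode "^sd[b-d]$"   # keep local data disks sdb/sdc/sdd out of multipath
}
# Apply without rebooting:  multipath -r   (or: systemctl reload multipathd)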



On Mon, Apr 22, 2019 at 11:39 PM Strahil  wrote:

> I have edited my multipath.conf to exclude local disks , but you need to
> set '#VDSM private' as per the comments in the header of the file.
> Otherwise, use the /dev/mapper/multipath-device notation - as you would do
> with any linux.
>
> Best Regards,
> Strahil Nikolov
> On Apr 23, 2019 01:07, adrianquint...@gmail.com wrote:
> >
> > Thanks Alex, that makes more sense now  while trying to follow the
> instructions provided I see that all my disks /dev/sdb, /dev/sdc, /dev/sdd
> are locked and inidicating " multpath_member" hence not letting me create
> new bricks. And on the logs I see
> >
> > Device /dev/sdb excluded by a filter.\n", "item": {"pvname": "/dev/sdb",
> "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb'
> failed", "rc": 5}
> > Same thing for sdc, sdd
> >
> > Should I manually edit the filters inside the OS, what will be the
> impact?
> >
> > thanks again.
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FW3IR3NMQTYZLXBT2VLOCLBKOYJS3MYF/
>


-- 
Adrian Quintero
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EW7NKT76JR3TLPP63M7DTDF2TLSMX556/