[ovirt-users] Single Hyperconverged Node Gluster Config

2018-09-16 Thread Jeremy Tourville
Hello,
I am trying to set up a single hyperconverged node.  The disks that I will be
using for the engine and all VMs are on a RAID 5 volume.  This volume uses
hardware RAID with an LSI controller (LSI 9361-4i).  I am unsure of the
appropriate values to use for the Gluster config.  Here is the info about
my environment.

[root@vmh ~]# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs 
(rw,nosuid,seclabel,size=65713924k,nr_inodes=16428481,mode=755)
securityfs on /sys/kernel/security type securityfs 
(rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel)
devpts on /dev/pts type devpts 
(rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,seclabel,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup 
(rw,nosuid,nodev,noexec,relatime,seclabel,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/freezer type cgroup 
(rw,nosuid,nodev,noexec,relatime,seclabel,freezer)
cgroup on /sys/fs/cgroup/pids type cgroup 
(rw,nosuid,nodev,noexec,relatime,seclabel,pids)
cgroup on /sys/fs/cgroup/cpuset type cgroup 
(rw,nosuid,nodev,noexec,relatime,seclabel,cpuset)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup 
(rw,nosuid,nodev,noexec,relatime,seclabel,cpuacct,cpu)
cgroup on /sys/fs/cgroup/blkio type cgroup 
(rw,nosuid,nodev,noexec,relatime,seclabel,blkio)
cgroup on /sys/fs/cgroup/hugetlb type cgroup 
(rw,nosuid,nodev,noexec,relatime,seclabel,hugetlb)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup 
(rw,nosuid,nodev,noexec,relatime,seclabel,net_prio,net_cls)
cgroup on /sys/fs/cgroup/devices type cgroup 
(rw,nosuid,nodev,noexec,relatime,seclabel,devices)
cgroup on /sys/fs/cgroup/perf_event type cgroup 
(rw,nosuid,nodev,noexec,relatime,seclabel,perf_event)
cgroup on /sys/fs/cgroup/memory type cgroup 
(rw,nosuid,nodev,noexec,relatime,seclabel,memory)
configfs on /sys/kernel/config type configfs (rw,relatime)
/dev/mapper/onn_vmh-ovirt--node--ng--4.2.5.1--0.20180821.0+1 on / type ext4 
(rw,relatime,seclabel,discard,stripe=16,data=ordered)
selinuxfs on /sys/fs/selinux type selinuxfs (rw,relatime)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs 
(rw,relatime,fd=22,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=43386)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime,seclabel)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel)
hugetlbfs on /dev/hugepages1G type hugetlbfs (rw,relatime,seclabel,pagesize=1G)
/dev/mapper/onn_vmh-tmp on /tmp type ext4 
(rw,relatime,seclabel,discard,stripe=16,data=ordered)
/dev/mapper/onn_vmh-home on /home type ext4 
(rw,relatime,seclabel,discard,stripe=16,data=ordered)
/dev/sdb1 on /boot type ext4 (rw,relatime,seclabel,data=ordered)
/dev/mapper/onn_vmh-var on /var type ext4 
(rw,relatime,seclabel,discard,stripe=16,data=ordered)
/dev/mapper/onn_vmh-var_log on /var/log type ext4 
(rw,relatime,seclabel,discard,stripe=16,data=ordered)
/dev/mapper/onn_vmh-var_crash on /var/crash type ext4 
(rw,relatime,seclabel,discard,stripe=16,data=ordered)
/dev/mapper/onn_vmh-var_log_audit on /var/log/audit type ext4 
(rw,relatime,seclabel,discard,stripe=16,data=ordered)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
/dev/mapper/3600605b00a2faca222fb4da81ac9bdb1p1 on /data type ext4 
(rw,relatime,seclabel,discard,stripe=64,data=ordered)
tmpfs on /run/user/0 type tmpfs 
(rw,nosuid,nodev,relatime,seclabel,size=13149252k,mode=700)

[root@vmh ~]# blkid
/dev/sda1: UUID="ce130131-c457-46a0-b6de-e50cc89a6da3" TYPE="ext4" 
PARTUUID="5fff73ae-70d4-4697-a307-5f68a4c00f4c"
/dev/sdb1: UUID="47422043-e5d0-4541-86ff-193f61a779b0" TYPE="ext4"
/dev/sdb2: UUID="2f2cf71b-b68e-985a-6433-ed1889595df0" 
UUID_SUB="a3e32abe-109c-b436-7712-6c9d9f1d57c9" 
LABEL="vmh.cyber-range.lan:pv00" TYPE="linux_raid_member"
/dev/sdc1: UUID="2f2cf71b-b68e-985a-6433-ed1889595df0" 
UUID_SUB="7132acc1-e210-f645-5df2-e1a1eff7f836" 
LABEL="vmh.cyber-range.lan:pv00" TYPE="linux_raid_member"
/dev/md127: UUID="utB9xu-zva6-j5Ci-3E49-uDya-g2AJ-MQu5Rd" TYPE="LVM2_member"
/dev/mapper/onn_vmh-ovirt--node--ng--4.2.5.1--0.20180821.0+1: 
UUID="8656ff1e-d217-4088-b353-6d2b9f602ce3" TYPE="ext4"
/dev/mapper/onn_vmh-swap: UUID="57a165d1-116e-4e64-a694-2618ffa3a79e" 
TYPE="swap"
/dev/mapper/3600605b00a2faca222fb4da81ac9bdb1p1: 
UUID="ce130131-c457-46a0-b6de-e50cc89a6da3" TYPE="ext4" 
PARTUUID="dac9e1fc-b0d7-43da-b52c-66bb059d8137"
/dev/mapper/onn_vmh-root: UUID="7cc65568-d408-43ab-a793-b6c110d7ba98" 
TYPE="ext4"
/dev/mapper/onn_vmh-home: UUID="c7e344d0-f401-4504-b52e-9b5c6023c10e" 
TYPE="ext4"
/dev/mapper/onn_vmh-tmp: UUID="3323e010-0dab-4166-b15d-d739a09b4c03" TYPE="ext4"
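
For reference, the hyperconverged setup wizard's brick screen typically asks
for the RAID type, the controller stripe size and the number of data disks,
and uses them to align the Gluster brick. A minimal sketch of the arithmetic,
assuming a 4-disk RAID 5 with a 256 KB stripe - both values are placeholders
and should be read from the LSI controller, e.g. with storcli:

# Assumed values - replace with the controller's real stripe size and disk
# count; for RAID 5 the number of data disks is total disks minus one.
STRIPE_KB=256
DATA_DISKS=3

# Brick PV alignment = stripe size * number of data disks
pvcreate --dataalignment "$((STRIPE_KB * DATA_DISKS))K" /dev/sdX    # /dev/sdX is a placeholder

# XFS on the brick LV, aligned the same way (su = stripe unit, sw = stripe width)
mkfs.xfs -i size=512 -n size=8192 \
    -d su=${STRIPE_KB}k,sw=${DATA_DISKS} /dev/gluster_vg/gluster_lv    # placeholder LV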

[ovirt-users] Re: Slow vm transfer speed from vmware esxi 5

2018-09-16 Thread Nir Soffer
On Fri, Sep 14, 2018 at 7:21 PM Bernhard Dick  wrote:

> Hi,
>
> it took some time to answer due to some other stuff, but now I had the
> time to look into it.
>
> > On 21.08.2018 at 17:02, Michal Skrivanek wrote:
> > [...]
> >> Hi Bernhard,
> >>
> >> With the latest version of the ovirt-imageio and the v2v we are
> >> performing quite nicely, and without specifying
> >
> > the difference is that with the integrated v2v you don't use any of
> > that. It's going through the vCenter server, which is the major slowdown.
> > With 10 MB/s I do not expect the bottleneck to be on our side in any way.
> > After all, the integrated v2v writes locally, directly to the prepared
> > target volume, so it's probably even faster than imageio.
> >
> > The "new" virt-v2v -o rhv-upload method is not integrated in the GUI, but
> > it supports VDDK and SSH methods of access, which both should be faster.
> > You could try that, but you'd need to use it on the command line.
> I first tried the SSH way, which already improved the speed. Afterwards I
> did some more experiments and ended up using vmfs-tools to mount the
> VMware datastore directly, and I now see transfer speeds of ~50-60 MB/sec when
> transferring to an oVirt export domain. This seems to be the maximum
> the system can handle when using the vmfs-fuse approach. That would be
> fast enough in my case (and is a huge improvement).
>
> However, I cannot use the rhv-upload method, because my storage domain is
> iSCSI and I get the error that sparse file types are not allowed (as
> described at https://bugzilla.redhat.com/show_bug.cgi?id=1600547
> ). The solution from the bug does not help either, because then I instantly
> get the error message that I'd need to use -oa sparse when using
> rhv-upload. This happens with the development version 1.39.9 of
> libguestfs and with the git master branch. Do you have any advice on how
> to fix this / which version to use?
>

I used to disable the limit enforcing "sparse" in the libguestfs upstream
source, but lately the simple check at the python plugin level was moved to
the OCaml code, and I have not had time to understand it yet.

If you want to remove the limit, try to look here:
https://github.com/libguestfs/libguestfs/blob/51a9c874d3f0a9c4780f2cd3ee7072180446e685/v2v/output_rhv_upload.ml#L163

On RHEL, there is no such limit, and you can import VMs to any kind of
storage.

Richard, can we remove the limit on the sparse format? I don't see how this
limit helps anyone.

oVirt supports several combinations:

file:
- raw sparse
- raw preallocated
- qcow2 sparse (unsupported in v2v)

block:
- raw preallocated
- qcow2 sparse (unsupported in v2v)

It seems that the oVirt SDK does not have a good way to select the format
yet, so virt-v2v cannot select the format for the user. This means the user
needs to select the format.
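
A minimal sketch of what that explicit selection could look like on the
command line, assuming the VMware datastore is already mounted via vmfs-fuse
as described above and the target is a block (iSCSI) domain; the guest path,
engine URL, password file, CA file and storage domain name are placeholders,
and the -oo rhv-* options may or may not be needed depending on the virt-v2v
version. While the sparse check discussed above is still in place, virt-v2v
may refuse this combination and insist on -oa sparse, which is exactly the
limit in question:

virt-v2v -i vmx /mnt/vmfs/guest/guest.vmx \
    -o rhv-upload \
    -oc https://engine.example.com/ovirt-engine/api \
    -op /tmp/ovirt-admin-password \
    -oo rhv-cafile=/path/to/ca.pem -oo rhv-cluster=Default \
    -os iscsi-data-domain -on guest \
    -of raw -oa preallocated    # block storage: raw + preallocated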

Nir

>    Regards
>  Bernhard
>
> > https://github.com/oVirt/ovirt-ansible-v2v-conversion-host/ might help
> > to use it a bit more nicely
> >
> > Thanks,
> > michal
> >
> >> number I can tell you that the weakest link is the read rate from the
> >> VMware data store. In our lab
> >> I can say that we roughly peak at ~40 MiB/sec reading a single VM, and the
> >> rest of our components (after the read from the VMware data store)
> >> have no problem dealing with that - i.e. buffering -> converting ->
> >> writing to imageio -> writing to storage.
> >>
> >> So, in short, examine the read rate from the VM datastore, let us know,
> >> and please specify the versions you are using.
> >>
>
>
> --
> Dipl.-Inf. Bernhard Dick
> Auf dem Anger 24
> DE-46485 Wesel
> www.BernhardDick.de
>
> jabber: bernh...@jabber.bdick.de
>
> Tel : +49.2812068620
> Mobil : +49.1747607927
> FAX : +49.2812068621
> USt-IdNr.: DE274728845

[ovirt-users] ansible ovirt-host-deploy error on master

2018-09-16 Thread Eitan Raviv
Hi,

When trying to connect a host to the engine, the install fails with ansible
ovirt-host-deploy reporting "Could not find or access
/etc/pki/vdsm/libvirt-spice/ca-cert.pem" (log snippet attached), although
said directory holds all the files with what look like correct permissions
(and no manual intervention has been done there).

-rw-r--r--. 1 root kvm 1368 Sep 13 15:09 ca-cert.pem
-rw-r--r--. 1 root kvm 5074 Sep 16 13:37 server-cert.pem
-r--r-. 1 vdsm kvm 1704 Sep 16 13:37 server-key.pem


setup:
python2-ovirt-host-deploy-1.8.0-0.0.master.20180624095234.git827d6d1.el7.noarch
vdsm-4.30.0-585.gitd4d043e.el7.x86_64
ovirt-engine: master latest (dev build from today)

Any help much appreciated.
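
A quick sanity check (just a sketch, reusing the host address from the log
below) is to stat the files on the host itself from the controller side,
since the failure below complains about the Ansible Controller rather than
the host:

# Ad-hoc stat of the cert on the remote host (not the controller)
ansible -i '192.168.122.236,' all -m stat \
    -a 'path=/etc/pki/vdsm/libvirt-spice/ca-cert.pem'

The copy module looks for src on the controller unless remote_src is set, so
if the task is meant to copy these files from one location on the host to
another, it would need remote_src: yes.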
2018-09-16 13:38:00,178 p=7732 u= |  TASK [ovirt-host-deploy-vnc-certificates : Setup VNC PKI] **
2018-09-16 13:38:00,354 p=7732 u= |  An exception occurred during task execution. To see the full traceback, use -vvv. The error was: If you are using a module and expect the file to exist on the remote, see the remote_src option
2018-09-16 13:38:00,355 p=7732 u= |  failed: [192.168.122.236] (item=ca-cert.pem) => {
"changed": false, 
"item": "ca-cert.pem"
}

MSG:

Could not find or access '/etc/pki/vdsm/libvirt-spice/ca-cert.pem' on the Ansible Controller.
If you are using a module and expect the file to exist on the remote, see the remote_src option

2018-09-16 13:38:00,425 p=7732 u= |  An exception occurred during task execution. To see the full traceback, use -vvv. The error was: If you are using a module and expect the file to exist on the remote, see the remote_src option
2018-09-16 13:38:00,426 p=7732 u= |  failed: [192.168.122.236] (item=server-cert.pem) => {
"changed": false, 
"item": "server-cert.pem"
}

MSG:

Could not find or access '/etc/pki/vdsm/libvirt-spice/server-cert.pem' on the Ansible Controller.
If you are using a module and expect the file to exist on the remote, see the remote_src option

2018-09-16 13:38:00,499 p=7732 u= |  An exception occurred during task execution. To see the full traceback, use -vvv. The error was: If you are using a module and expect the file to exist on the remote, see the remote_src option
2018-09-16 13:38:00,500 p=7732 u= |  failed: [192.168.122.236] (item=server-key.pem) => {
"changed": false, 
"item": "server-key.pem"
}

MSG:

Could not find or access '/etc/pki/vdsm/libvirt-spice/server-key.pem' on the Ansible Controller.
If you are using a module and expect the file to exist on the remote, see the remote_src option

2018-09-16 13:38:00,502 p=7732 u= |  PLAY RECAP *
2018-09-16 13:38:00,502 p=7732 u= |  192.168.122.236: ok=22   changed=0   unreachable=0   failed=1
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZBC4BLZLGULSVOP5LU4YE2OCDMGFPCEQ/


[ovirt-users] Re: LVM Mirroring

2018-09-16 Thread Maor Lipchuk
On Thu, Sep 13, 2018 at 9:59 PM, René Koch  wrote:

> Hi list,
>
> Is it possible to use LVM mirroring for FC LUNs in oVirt 4.2?

> I've 2 FC storages which do not have replication functionality. So the
> idea is to create an LVM mirror using 1 LUN of storage 1 and 1 LUN of
> storage 2, replicating data between the storages on the host level (full
> RHEL 7.5, not oVirt Node). With e.g. pacemaker and plain KVM this works
> fine. Is it possible to have the same setup with oVirt 4.2?
>
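
For clarity, a minimal sketch of the host-level mirror being described,
assuming the two FC LUNs are already visible as multipath devices (device
names and sizes are placeholders):

# One LUN from each array as PVs in a single VG, mirrored LV on top
vgcreate mirror_vg /dev/mapper/mpatha /dev/mapper/mpathb
lvcreate --type raid1 --mirrors 1 -L 500G -n mirror_lv \
    mirror_vg /dev/mapper/mpatha /dev/mapper/mpathb

Whether oVirt 4.2 can sit on top of such a mirrored LV is the open question
here.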

(Adding Nir)
Are you asking in relation to disaster recovery support for your oVirt
setup?
I'm not sure if it will help you with what you are looking for, but have
you tried looking into Gluster geo-replication:
  https://docs.gluster.org/en/v3/Administrator%20Guide/Geo%20Replication/



> Thanks a lot.
>
>
> Regards,
> René
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2QXNDD2VVAX2WWQSJN3ZX6SKKM6TJZWK/


[ovirt-users] Re: moving disks around gluster domain is failing

2018-09-16 Thread Maor Lipchuk
On Fri, Sep 14, 2018 at 3:27 PM,  wrote:

> Moving a disk from one gluster domain to another fails, either with the VM
> running or down.
> It strikes me that it says : File 
> "/usr/lib64/python2.7/site-packages/libvirt.py",
> line 718, in blockCopy
> if ret == -1: raise libvirtError ('virDomainBlockCopy() failed',
> dom=self)
> I'm sending the relevant piece of the log.
>
> But it should be a file copy since it's Gluster, am I right?
> Gluster volumes are LVM thick and have different shard sizes...
>
> 2018-09-14 15:05:53,325+0300 ERROR (jsonrpc/2) [virt.vm]
> (vmId='f90f6533-9d71-4102-9cd6-2d9960a4e585') Unable to start replication
> for sda to {u'domainID': u'd07231ca-89b8-490a
> -819d-8542e1eaee19', 'volumeInfo': {'path': u'vol3/d07231ca-89b8-490a-
> 819d-8542e1eaee19/images/3d95e237-441c-4b41-b823-
> 48d446b3eba6/5716acc8-7ee7-4235-aad6-345e565f3073', 'type
> ': 'network', 'hosts': [{'port': '0', 'transport': 'tcp', 'name':
> '10.252.166.129'}, {'port': '0', 'transport': 'tcp', 'name':
> '10.252.166.130'}, {'port': '0', 'transport': 'tc
> p', 'name': '10.252.166.128'}], 'protocol': 'gluster'}, 'format': 'cow',
> u'poolID': u'90946184-a7bd-11e8-950b-00163e11b631', u'device': 'disk',
> 'protocol': 'gluster', 'propagat
> eErrors': 'off', u'diskType': u'network', 'cache': 'none', u'volumeID':
> u'5716acc8-7ee7-4235-aad6-345e565f3073', u'imageID':
> u'3d95e237-441c-4b41-b823-48d446b3eba6', 'hosts': [
> {'port': '0', 'transport': 'tcp', 'name': '10.252.166.129'}], 'path':
> u'vol3/d07231ca-89b8-490a-819d-8542e1eaee19/images/
> 3d95e237-441c-4b41-b823-48d446b3eba6/5716acc8-7ee7-4235
> -aad6-345e565f3073', 'volumeChain': [{'domainID':
> u'd07231ca-89b8-490a-819d-8542e1eaee19', 'leaseOffset': 0, 'path':
> u'vol3/d07231ca-89b8-490a-819d-8542e1eaee19/images/3d95e237
> -441c-4b41-b823-48d446b3eba6/26214e9d-1126-42a0-85e3-c21f182b582f',
> 'volumeID': u'26214e9d-1126-42a0-85e3-c21f182b582f', 'leasePath':
> u'/rhev/data-center/mnt/glusterSD/10.252.1
> 66.129:_vol3/d07231ca-89b8-490a-819d-8542e1eaee19/images/
> 3d95e237-441c-4b41-b823-48d446b3eba6/26214e9d-1126-42a0-85e3-c21f182b582f.lease',
> 'imageID': u'3d95e237-441c-4b41-b823-
> 48d446b3eba6'}, {'domainID': u'd07231ca-89b8-490a-819d-8542e1eaee19',
> 'leaseOffset': 0, 'path': u'vol3/d07231ca-89b8-490a-
> 819d-8542e1eaee19/images/3d95e237-441c-4b41-b823-48d44
> 6b3eba6/2c6c8b45-7d70-4eff-9ea5-f2a377f0ef64', 'volumeID':
> u'2c6c8b45-7d70-4eff-9ea5-f2a377f0ef64', 'leasePath':
> u'/rhev/data-center/mnt/glusterSD/10.252.166.129:_vol3/d07231ca
> -89b8-490a-819d-8542e1eaee19/images/3d95e237-441c-4b41-
> b823-48d446b3eba6/2c6c8b45-7d70-4eff-9ea5-f2a377f0ef64.lease', 'imageID':
> u'3d95e237-441c-4b41-b823-48d446b3eba6'}, {'dom
> ainID': u'd07231ca-89b8-490a-819d-8542e1eaee19', 'leaseOffset': 0,
> 'path': u'vol3/d07231ca-89b8-490a-819d-8542e1eaee19/images/
> 3d95e237-441c-4b41-b823-48d446b3eba6/5716acc8-7ee7
> -4235-aad6-345e565f3073', 'volumeID': u'5716acc8-7ee7-4235-aad6-345e565f3073',
> 'leasePath': u'/rhev/data-center/mnt/glusterSD/10.252.166.129:_
> vol3/d07231ca-89b8-490a-819d-8542e
> 1eaee19/images/3d95e237-441c-4b41-b823-48d446b3eba6/
> 5716acc8-7ee7-4235-aad6-345e565f3073.lease', 'imageID':
> u'3d95e237-441c-4b41-b823-48d446b3eba6'}, {'domainID': u'd07231ca-89
> b8-490a-819d-8542e1eaee19', 'leaseOffset': 0, 'path':
> u'vol3/d07231ca-89b8-490a-819d-8542e1eaee19/images/
> 3d95e237-441c-4b41-b823-48d446b3eba6/579e0033-4b94-4675-af78-d017ed2698
> e9', 'volumeID': u'579e0033-4b94-4675-af78-d017ed2698e9', 'leasePath':
> u'/rhev/data-center/mnt/glusterSD/10.252.166.129:_
> vol3/d07231ca-89b8-490a-819d-8542e1eaee19/images/3d95e237-
> 441c-4b41-b823-48d446b3eba6/579e0033-4b94-4675-af78-d017ed2698e9.lease',
> 'imageID': u'3d95e237-441c-4b41-b823-48d446b3eba6'}]} (vm:4710)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 4704, in
> diskReplicateStart
> self._startDriveReplication(drive)
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 4843, in
> _startDriveReplication
> self._dom.blockCopy(drive.name, destxml, flags=flags)
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line
> 98, in f
> ret = attr(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py",
> line 130, in wrapper
> ret = f(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line
> 92, in wrapper
> return func(inst, *args, **kwargs)
>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 718, in
> blockCopy
> if ret == -1: raise libvirtError ('virDomainBlockCopy() failed',
> dom=self)
> libvirtError: argument unsupported: non-file destination not supported yet
>

Hi,

I think it could be related to https://bugzilla.redhat.com/1306562 (based
on https://bugzilla.redhat.com/1481688#c38).
Denis, what do you think?

Regards,
Maor



> 2018-09-14 15:05:53,328+0300 INFO  (jsonrpc/2) [api.virt] FINISH
> diskReplicateStart 

[ovirt-users] Re: NFS Multipathing

2018-09-16 Thread Maor Lipchuk
On Fri, Sep 14, 2018 at 2:41 PM,  wrote:

> Hi,
> It should be possible, as oVirt is able to support NFS 4.1.
> I have a Synology NAS which is also able to support this version of the
> protocol, but I never found time to set this up and test it until now.
> Regards
>
>
> On 30-Aug-2018 12:16:32 +0200, xrs...@xrs444.net wrote:
>
> Hello all,
>
> I've been looking around but I've not found anything definitive on
> whether oVirt can do NFS multipathing, and if so, how?
>
> Does anyone have any good how tos or configuration guides?
>
>
I know that you asked about NFS, but in case it helps, oVirt does support
multipathing for iSCSI storage domains:

https://ovirt.org/develop/release-management/features/storage/iscsi-multipath/
Hope it helps.

Regards,
Maor


>
> Thanks,
>
> Thomas
>
>
> --
> FreeMail powered by mail.fr
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HGJPFEYTGX444R34TPP6CVANGJ2KZAUH/


[ovirt-users] Re: NFS Multipathing

2018-09-16 Thread spfma
Hi,
It should be possible, as oVirt is able to support NFS 4.1. I have a
Synology NAS which is also able to support this version of the protocol, but
I never found time to set this up and test it until now.
Regards

On 30-Aug-2018 12:16:32 +0200, xrs...@xrs444.net wrote:

Hello all,

I've been looking around but I've not found anything definitive on whether
oVirt can do NFS multipathing, and if so, how?

Does anyone have any good how-tos or configuration guides?

Thanks,
Thomas

-
FreeMail powered by mail.fr
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TT2SDAQSZJQ4TWI6Q5AAYCJCZSTGH3HH/


[ovirt-users] Re: [ANN] oVirt Engine 4.2.6 async update is now available

2018-09-16 Thread Yuval Turgeman
The failure was caused by a /var/log LV that was not mounted for
some reason (was that done manually, by any chance?).  onn/var_crash was created
during the update and removed successfully because of the var_log issue.
The only question is why it didn't clean up
onn/ovirt-node-ng-4.2.6; it failed to clean that up for some reason.  Was
the abrt process holding on to this LV?

Thanks,
Yuval
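
For anyone hitting the same busy-LV problem during cleanup, a minimal sketch
of how to see what is holding a mount before retrying the removal (the paths
follow the report quoted below):

# Show which processes keep the filesystem on the LV busy
fuser -vm /var/log

# If it is abrt, stop the service and retry the removal
systemctl stop abrtd
lvremove /dev/onn/var_log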

On Fri, Sep 14, 2018 at 11:53 AM,  wrote:

> I've managed to upgrade them now by removing logical volumes. Usually it's
> just /dev/onn/home, but on one host I had to keep reinstalling to see where
> it failed, so I had to
>
> lvremove /dev/onn/ovirt-node-ng-4.2.6.1-0.20180913.0+1
> lvremove /dev/onn/var_crash
> lvremove /dev/onn/var_log
> lvremove /dev/onn/var_log_audit
>
> I had trouble removing one because it failed as it was in use; there
> was an abrt process holding onto the mount.
>
> Thanks,
>  Paul S.
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IHIT2XQWUSA5F2R6B7AGGCBIDVZCC5QY/